It's a race between the people who want to fix vulnerabilities and the people who want to exploit them. Finding vulnerabilities is difficult: writing correct code is a lot harder than writing almost-correct code, and a vulnerability is precisely a spot where the difference between the two matters.
Suppose (numbers pulled out of thin air) that 99% of security defects in released products are found and fixed by the authors themselves, or reported to the authors privately. That leaves 1% that are first disclosed through an exploit in the wild (a zero-day). Since quietly fixed defects make no news, that 1% is all you ever hear about. The asymmetry is what makes this a losing race: the authors and analysts have to fix every single vulnerability to make a completely secure product, while the "hackers" only need to find one that was missed.
But it's actually worse than that: even when a defect falls into that 99% and gets fixed, not everyone upgrades immediately, so there's another race between exploitation and patching. Once the fixed version is available, crackers can diff it against the previous release to pinpoint exactly what was patched, then build exploits for the older versions that people are still running. That leaves a lot of machines exposed.
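To make that concrete, here is a toy sketch of patch diffing at the source level. The file paths (foo-1.0/parse.c, foo-1.1/parse.c) are made up for illustration; against closed-source software, attackers do the same thing on the compiled binaries with tools such as BinDiff.

```python
#!/usr/bin/env python3
"""Toy patch diffing: compare the same file across two releases.
The paths below are hypothetical; the point is that the added
lines usually point straight at the check that was missing."""

import difflib
from pathlib import Path

old = Path("foo-1.0/parse.c").read_text().splitlines(keepends=True)
new = Path("foo-1.1/parse.c").read_text().splitlines(keepends=True)

# The "+" lines in the unified diff are the fix; an exploit for the
# old version only has to trigger the condition those lines now guard.
for line in difflib.unified_diff(old, new,
                                 fromfile="foo-1.0/parse.c",
                                 tofile="foo-1.1/parse.c"):
    print(line, end="")
```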
There are a number of techniques for finding vulnerabilities, used by both sides alike. Fuzzing is one of them: feed random or malformed data to a program and see if it reacts in an interesting way. For example, if the program crashes (which in itself is often a denial of service), study the way in which it crashes carefully; crafting the input data around that crash often lets you escalate to executing arbitrary code. Another approach is static analysis: run automated analyses on source code or compiled code, either to look for suspicious patterns (e.g. unchecked array accesses) or to verify the absence of certain kinds of bad behavior (e.g. prove that every array access is within bounds). Static analysis can be complemented by manual review, to catch aspects that automated tools aren't good enough to find (e.g. checking that all temporary files are created securely). Experience helps, of course; sometimes someone will think "hey, this is a difficult problem, I wonder if the developers got that case right".
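To illustrate the fuzzing half of that, here is a minimal black-box fuzzer sketch in Python. The target path ("./parser"), the fact that it reads from stdin, and the iteration count are all assumptions for illustration; real fuzzers such as AFL are far more sophisticated (coverage-guided mutation rather than pure random data), but the core loop is the same.

```python
#!/usr/bin/env python3
"""Minimal black-box fuzzer sketch: throw random bytes at a target
program and save any input that makes it crash. The target path
"./parser" and its reading from stdin are assumptions."""

import random
import subprocess

TARGET = "./parser"  # hypothetical program under test

def random_input(max_len=4096):
    """Build a random byte string to use as a candidate input."""
    length = random.randrange(1, max_len)
    return bytes(random.randrange(256) for _ in range(length))

for i in range(10_000):
    data = random_input()
    try:
        proc = subprocess.run([TARGET], input=data, timeout=5,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
    except subprocess.TimeoutExpired:
        continue  # hangs are interesting too, but treat them separately
    # On POSIX, a negative return code means the process died from a
    # signal (-11 is SIGSEGV): keep the input for closer study.
    if proc.returncode < 0:
        with open(f"crash-{i:05d}.bin", "wb") as f:
            f.write(data)
        print(f"iteration {i}: target killed by signal {-proc.returncode}")
```

A crashing input is only the starting point; the escalation to arbitrary code execution comes from studying why it crashes (in a debugger, or with a sanitizer) and then shaping the input deliberately.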
I recommend reading the presentation of OpenBSD's audit process and looking further at the OpenBSD project. It's an open source project with open discussions, and they make security a priority. That presentation is the most comprehensive description of a security auditing process that I know of; beyond it, you can find out more by scouring their mailing lists.
More generally, look at security announcements for open source software (e.g. for a Linux distribution). When a security patch is announced, read the description of the vulnerability, the original source, and the patch. Try to get a feeling for which patterns are problematic, how the vulnerabilities were found (the announcement often links to a more detailed write-up), and how they were fixed.