A Standard Thought Process For Testing
Testing is, in some sense, a mess. Part of the problem is similar to the dynamic/static schism: nobody seems to agree on what those terms mean. Case in point: what's "integration testing"? Here's the definition from the excellent Code Complete 2:
Integration testing is the combined execution of two or more classes, packages, components, or subsystems that have been created by multiple programmers or programming teams. This kind of testing typically starts as soon as there are two classes to test and continues until the entire system is complete.
Read that carefully. Look for a flaw.
So, let's say you're working for a company and you're the only programmer. By the aforementioned definition, you're incapable of doing integration testing because there's no second programmer or team involved. In fact, the book's definition of "component testing" has the same flaw: take Code Complete at its word and lone programmers can't do that either.
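To make the lone-programmer point concrete, here's a minimal sketch of what I'd call an integration test: two classes (a hypothetical Cart and PriceCalculator, made up for illustration) exercised together through their actual collaboration. Nothing about the activity requires a second programmer or team.

```python
import unittest


class PriceCalculator:
    """Computes a total with a flat tax rate (hypothetical example class)."""
    TAX_RATE = 0.10

    def total(self, amounts):
        subtotal = sum(amounts)
        return round(subtotal * (1 + self.TAX_RATE), 2)


class Cart:
    """Holds item prices and delegates the maths to a PriceCalculator."""
    def __init__(self, calculator):
        self.calculator = calculator
        self.items = []

    def add(self, price):
        self.items.append(price)

    def checkout(self):
        return self.calculator.total(self.items)


class CartIntegrationTest(unittest.TestCase):
    """The combined execution of two classes -- an integration test by the
    book's own mechanics, written and run by exactly one programmer."""

    def test_checkout_applies_tax_across_all_items(self):
        cart = Cart(PriceCalculator())
        cart.add(10.00)
        cart.add(5.00)
        # 15.00 subtotal plus 10% tax
        self.assertEqual(cart.checkout(), 16.50)


if __name__ == "__main__":
    unittest.main()
```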
Mind you, I don't want to use this as an excuse to rip into an otherwise excellent book. There are many areas of testing which are, um, not terribly well defined. BDD (Behavior Driven Development), for example, is often described in such curious terms that some people admit they just don't understand it. I've never done it, but I've proposed similar things before and I think I understand it (and I think it would be a nice way to kill off FIT testing).
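For what it's worth, my understanding of the core idea is simply tests written as executable specifications of behaviour. Here's a rough, framework-free sketch of that style (the Account class is made up for illustration); real BDD tools dress this up with "given/when/then" language, but the skeleton looks something like this:

```python
import unittest


class Account:
    """Hypothetical class under specification."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class AccountBehaviour(unittest.TestCase):
    """BDD-ish: each test name reads as a statement of behaviour
    rather than as an entry in a method checklist."""

    def test_given_sufficient_funds_withdrawing_reduces_the_balance(self):
        account = Account(balance=100)          # given
        account.withdraw(30)                    # when
        self.assertEqual(account.balance, 70)   # then

    def test_given_insufficient_funds_withdrawing_is_refused(self):
        account = Account(balance=10)
        with self.assertRaises(ValueError):
            account.withdraw(30)


if __name__ == "__main__":
    unittest.main()
```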
I would love to see a standardised description of the different testing techniques, along with plenty of hard data (I've never seen any for FIT or BDD) showing which ones give the most bang for the buck. For example, are there areas where QA is less useful than in others? Having a single repository of this information, complete with references, would be great.