I first started doing unit tests about 10 years before I heard about the idea of automated tests. It was understood that unit tests were the programmer testing small sections of code in isolation. Who better to verify all the logic works than the programmer who did the work? You wrote a test program to give each function the input you cared about, and inspected whether the state of the system was what you expected. It worked great; I remember finding many subtle bugs in code that I thought was right until I noticed the internal state of my system wasn't right. Once you were done with each test you modified your fixture to test the next input, without saving your test program. When you finished you deleted the test program.
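A throwaway test program of that era might have looked something like the sketch below. Everything here is hypothetical (the `Parser` class, its `last_error` field); the point is the workflow: feed one input in, print the internal state, eyeball it, edit the input for the next case, and delete the file when done.

```python
class Parser:
    """Hypothetical stand-in for the real code under test."""
    def __init__(self):
        self.last_error = None   # internal state the programmer peeks at
        self.records = []

    def parse_record(self, line):
        if "," not in line:
            self.last_error = "missing delimiter"
            return
        self.records.append(tuple(line.split(",")))

# The throwaway "fixture": edit the input below for each case, rerun,
# inspect the printed internal state by hand, then discard the file.
p = Parser()
p.parse_record("alice,42")
print("records:", p.records)
print("last_error:", p.last_error)
```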
The above was often called white box testing, as opposed to black box testing. In white box testing the person doing the testing knows how the system is implemented and takes advantage of that fact to write tests. By contrast, in black box testing the person doing the testing doesn't know the implementation; they are supposed to verify that the system works as specified. Having both types of manual tests was considered best practice.
If you are switching from the above workflow to automated tests, it might seem easy and obvious that you should take the tests you wrote previously and save them. It turns out that this is actually wrong. Throwing away those tests was the right thing to do. Even though they found many bugs, they are not the types of tests you want long term. As I wrote previously, the reason we automate tests is to support future changes to the code. Unit tests don't support the future of the code; they verify the current state. This subtle distinction means that those old unit tests are not useful.
The future of your code is change: new features, bug fixes, performance improvements and refactoring. Each will change the internal state of your program and thus break your unit tests. If those changes break your automated tests, the only thing to do is throw the tests out and write new ones. Even if you start with the old tests and make minor tweaks, you should still consider them brand new tests. This is not what you want. When you fix a bug you should add new tests but otherwise not change any tests. When you refactor or make a performance improvement, there should be zero tests that change. When you add a feature you might find a couple of tests that conflict with the new feature and must be deleted, but overall you should be adding tests. Unit tests do none of the above: because unit tests know the internal state of your system, they are the wrong answer for the future.
This does not mean unit tests are useless. I often insert printfs or debugger breakpoints into my code while trying to understand a bug. Many languages have some form of ASSERT() which will abort the program if some condition does not hold. Those are all forms of unit tests. However, the fact that I know some details as they are now doesn't mean I should test those details automatically.
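In Python the same idea is the built-in `assert` statement, which raises and aborts when its condition fails. The `average` function below is a made-up example; the check documents what the programmer knows about the code right now, inside the code itself, rather than in a saved automated test:

```python
def average(values):
    # Internal sanity check: this belongs in the code, not in a
    # permanent automated test of internal state.
    assert len(values) > 0, "average() requires a non-empty list"
    return sum(values) / len(values)

print(average([2, 4, 6]))        # normal use
try:
    average([])                  # violates the assertion
except AssertionError as e:
    print("caught:", e)
```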
When writing automated tests you need to take advantage of your knowledge of the system to ensure that all of the internal states are tested. However, you must not actually check the internal state of the system to ensure it is correct. Instead you need to figure out what the external state of the system should be based on that internal state. Often this means you combine what would have been several different unit tests into one test.
Is there an exception to this rule? Code to prevent a cross-thread race condition cannot be tested by external behavior, but it still needs to work. You can try running the test a million times, but even that isn't guaranteed to hit the race condition once. I hesitate to allow testing internal state even in cases like this: the locks required to make the code thread safe should not be held longer than required. If the locks are held long enough for your test to check that you are holding them, they are held too long for good performance.
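A sketch of the "run it many times" approach: two threads increment a shared counter behind a lock, and the test repeats the experiment hoping to catch a lost update. With the lock in place it always passes; without the lock a wrong total is possible but never guaranteed, which is exactly why repetition proves so little. (The counter and iteration counts are arbitrary choices for the example.)

```python
import threading

def increment_many(counter, lock, n):
    for _ in range(n):
        with lock:            # the code under test: hold only briefly
            counter[0] += 1   # unlocked, this read-modify-write can race

def run_once(n=10000):
    counter = [0]
    lock = threading.Lock()
    threads = [threading.Thread(target=increment_many,
                                args=(counter, lock, n))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]

# Repeat the whole experiment; even many passes without the lock would
# not prove the race is absent -- absence of failure proves little.
for _ in range(20):
    assert run_once() == 20000
```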