So I started this big post with lots of code talking about design and how it impacts redundancy. It got out of hand and needs some significant editing before I can publish it. I'm going to try to summarize the point without the code here and then follow up with some examples.
When I think about redundancy in a test suite, I imagine poorly organized code that forces me to repeat tests as setup for other tests. That tells me the code is too complicated. If you look around, people will tell you things like a cyclomatic complexity greater than five is too big, it's too complicated. Well, a function with that much complexity is going to need quite a few tests, and in many cases those tests get repetitive. That repetition is redundancy born of bad design.
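To make that concrete, here's a rough sketch (the apply_discount function and its tests are hypothetical, invented purely for illustration) of a single function whose branches pile up, and tests that keep repeating the same kind of setup just to reach each branch:

def apply_discount(order_total, customer_type, has_coupon, is_holiday):
    # Cyclomatic complexity climbs with every branch below.
    if order_total <= 0:
        raise ValueError("order_total must be positive")
    discount = 0.0
    if customer_type == "member":
        discount += 0.10
    if has_coupon:
        discount += 0.05
    if is_holiday and customer_type == "member":
        discount += 0.05
    return round(order_total * (1 - discount), 2)


import unittest

class ApplyDiscountTests(unittest.TestCase):
    # Every test re-builds the same "valid order" scenario just to reach one branch.
    def test_member_gets_ten_percent(self):
        self.assertEqual(apply_discount(100.0, "member", False, False), 90.0)

    def test_coupon_adds_five_percent(self):
        self.assertEqual(apply_discount(100.0, "guest", True, False), 95.0)

    def test_holiday_bonus_only_for_members(self):
        self.assertEqual(apply_discount(100.0, "member", True, True), 80.0)
        self.assertEqual(apply_discount(100.0, "guest", False, True), 100.0)

if __name__ == "__main__":
    unittest.main()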
Proposal: break things down. Let's aim for a cyclomatic complexity of two per method. Test each of those methods independently, then test their aggregates. We end up with many more tests but less redundancy. I haven't run the math, so I can't give you an exact number, but what I experienced was two layers of tests and almost 50% more tests, with absolute clarity about what each function does and how the functions are grouped together. I was also able to be very expressive in naming, and I feel the code (despite its poor contextual description) was easy to understand.
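Here's the same hypothetical example broken down the way I'm proposing: small helpers with a cyclomatic complexity of about two, each tested on its own, plus a thin aggregate test over the whole thing (again, the names are mine, just for illustration):

def member_discount(customer_type):
    return 0.10 if customer_type == "member" else 0.0

def coupon_discount(has_coupon):
    return 0.05 if has_coupon else 0.0

def holiday_discount(customer_type, is_holiday):
    return 0.05 if (is_holiday and customer_type == "member") else 0.0

def apply_discount(order_total, customer_type, has_coupon, is_holiday):
    if order_total <= 0:
        raise ValueError("order_total must be positive")
    discount = (member_discount(customer_type)
                + coupon_discount(has_coupon)
                + holiday_discount(customer_type, is_holiday))
    return round(order_total * (1 - discount), 2)


import unittest

# First layer: each helper gets its own small, clearly named tests.
class MemberDiscountTests(unittest.TestCase):
    def test_members_get_ten_percent(self):
        self.assertEqual(member_discount("member"), 0.10)

    def test_guests_get_nothing(self):
        self.assertEqual(member_discount("guest"), 0.0)

class HolidayDiscountTests(unittest.TestCase):
    def test_holiday_bonus_requires_membership(self):
        self.assertEqual(holiday_discount("member", True), 0.05)
        self.assertEqual(holiday_discount("guest", True), 0.0)

# Second layer: confirm the pieces are wired together,
# without re-proving every branch of every helper.
class ApplyDiscountAggregateTests(unittest.TestCase):
    def test_member_with_coupon_on_a_holiday(self):
        self.assertEqual(apply_discount(100.0, "member", True, True), 80.0)

if __name__ == "__main__":
    unittest.main()

More tests overall, but each one states a single fact, and the aggregate layer stops repeating the branch-by-branch setup.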
So, despite my efforts to treat redundancy in tests as a "here is where you should cut things out" problem, I can't find an answer. Every time I try, I find a new set of tests to add instead.
After I work out my code-based example of this (in the next post or two, I hope), I'm going to address the claim that Integration Tests and Acceptance Tests are redundant with Unit Tests. I'll argue that what you see as redundancy is not redundancy at all but overlapping concerns; further justification, I suspect, not to delete anything.