05 April 2017

In retrospect, I never should have taped my ankles

When I was in high school I played varsity soccer for four years. Sometime in my sophomore year I started taping my ankles as protection against injury, after I saw another player break his ankle during a game. It seemed like a really good idea at the time.

It wasn't until years later that I realized something: once I started taping my ankles, my effectiveness decreased. We were a small school and didn't have trainers or anything like that, so I was doing my own tape job, as were my teammates. I'd guess that a trainer would have told me I was doing it wrong. What I think happened was that I lost some degree of flexibility in my ankles, and that took some of the punch out of my kicks. I went from a 60-yard line drive to a 40-yard lob, and I could never quite recover.

In the grand scheme of things I think I still played OK, though around my senior year I got so big and slow that the coaches used me more like a battering ram than a choice fullback. Then Saturday night, for whatever reason, it occurred to me that the issue was the tape. I made a fundamental change in how I played the game, and the consequence was decreased performance. What is really tragic is that I didn't realize it until thirty years later. (A problem I'll address elsewhere: feedback loops.)

So, how does this relate to software development? Well, I put on tape to protect myself from injury just like we have smoke tests and acceptance tests to protect us from failed delivery. But sometimes those things create a blind spot. For example, if you have a fully automated build and deploy mechanism with a full suite of tests to ensure that you have not delivered a dud, you can make changes and grow your software with impunity. BUT, what if, due to a lack of discipline or pure happenstance, a feature gets added to the system with insufficient coverage? That is, the core of the feature exists, but its test coverage isn't complete or well maintained. Now you have a suite of tests that prove that most things work, but not all of them. As an outsider to the process, you might just blithely assume that since the CI job is green, all is well.
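
To make that blind spot concrete, here's a toy sketch in Python; the discount function and its lone test are inventions of mine, not from any real project. The suite passes, the CI job turns green, and the untested branch rides along unnoticed:

    # A contrived feature with partial coverage. Running pytest on this
    # file reports one passing test -- and says nothing about the rest.

    def apply_discount(price, code):
        if code == "SAVE10":
            return price * 0.90
        if code == "SAVE50":
            # Core of the feature exists, but nothing exercises this branch.
            return price * 0.50
        return price

    def test_save10_discount():
        assert apply_discount(100.0, "SAVE10") == 90.0

    # Green build, real gap: the SAVE50 branch and the no-code fallthrough
    # could both be broken and this suite would never notice.

A coverage gate (for example, pytest-cov's --cov-fail-under flag) can at least turn this kind of gap into a red build, though it can't tell you whether the tests that do exist assert anything meaningful.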

I see a lot of projects that are overly reliant on their test suites to tell them that things are OK. In the end that has led them into deep trouble. At least one thing that happens is that, after several months of decay, when a problem is detected, the team cannot find the missing test. They are top-heavy on their testing pyramid, and when they dig in to find the gaps they become hopelessly entangled in the minutiae, never to see a resolution without massive rewrites of the test suite.

Of course, the inverse is also true. I've seen projects that rely on unit tests so heavily that they have no acceptance tests at all. They can at least rely on those unit tests to tell them when things are broken, but they have no vision of the integrated system functioning correctly. A topic for another day, maybe.
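
For anyone fuzzy on the distinction, here's a rough sketch with made-up names: the unit test pins down one function in isolation, while the acceptance test stands in for a check of the assembled system. In a real suite the acceptance test would drive the deployed application over HTTP or a browser; here it's stubbed just to keep the sketch self-contained:

    import unittest

    def add_line_items(prices):
        # Unit under test: one small piece of logic in isolation.
        return round(sum(prices), 2)

    class UnitTests(unittest.TestCase):
        # Fast and narrow: when this fails, you know exactly which
        # piece broke.
        def test_add_line_items(self):
            self.assertEqual(add_line_items([1.10, 2.25]), 3.35)

    class AcceptanceTests(unittest.TestCase):
        # Broad and slow: meant to prove the assembled system does the
        # job. The "response" here is a stand-in for a real end-to-end
        # call against a running instance.
        def test_order_total_end_to_end(self):
            response = {"status": "ok", "total": add_line_items([1.10, 2.25])}
            self.assertEqual(response["status"], "ok")
            self.assertEqual(response["total"], 3.35)

    if __name__ == "__main__":
        unittest.main()

A team with only the second class has the top-heavy projects from the previous paragraph; a team with only the first has the projects from this one.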

One answer I have found to this problem is mandatory exploratory testing. Have every developer take 15 minutes a day and go play with the application. Go in, click around, test out some features. Try intentionally stupid things to see how the system behaves. As you find issues, write them down (or put them on a Trello board), and then go back to what you were doing. The project leadership team can then take all the output from those sessions and process it. Triage genuine defects, toss the cards deemed unworthy, and cycle the work through to the team in upcoming iterations.
