10 September 2020

Abject Failure

So I keep trying to write and failing. I'm crazy busy these days, it seems. I will try to do better; there is something rattling around in my head about testing, TDD, etc.

21 July 2020

It's been a while...

So my life has been a bit more than crazy the past few years and I've been away too long. I'm going to start posting again. I'm keeping the bar lowish at 1 post per week and we'll see how it goes. This is not that post.

08 September 2018

TDD, Declarative Code, Deleting Tests, an Important Nuance

I've had something stuck in my craw for about a year now. Some debates and discussions of TDD put it there, and I was stuck for an articulation of it. I think I've got it now and I want to share it.

The Core of TDD

TDD is about driving out a solution from tests. That is to say, rule 1: tests are the cause of code in production. The solution we drive out should be minimalist, it should be clean, and it should only do what the tests say it should. This is well understood.

Deleting Tests

One topic that seems to come up a lot is deleting tests. When do you do it? Why do you do it? 

As I've said before, I generally don't delete tests unless they are wrong. That isn't to say that I never do, but I don't spend much time worrying about it. I've built plenty of large systems with thousands of tests and never had a problem caused by too many tests. It just isn't a thing.

What I am likely to do is replace a test. That is, if I see a particularly bad example of a test, I might kill it off and replace it with one or more cleaner, more concise tests.

Communicating Intent Through Tests

Tests have a secondary function: communication. When I want to understand the author's intent, I can look at the tests and understand what the author was driving at; in a well-tested system, at least. I don't always have to find the author, or track down the story card, or have a protracted and speculative conversation about what the author intended. I can read the tests and understand what is supposed to be happening. Admittedly I might lack the Why, but the What is half the battle.

So, when we talk about deleting tests, I feel like that is an affront to a secondary characteristic of TDD. It offends me. We are deleting the expression of What the code under test should do; how it behaves.

Declarative Code and Tests

Anyway, this leads me to the bit that has been bugging me: deleting tests of declarative code, or tests for 'obvious code'. The argument that 'the code explains itself and the tests don't add value' does not resonate with me. Follow along here and see if this makes sense to you.

If the tests are the What
And the code only satisfies that What
And I delete the code, I can recover the What by looking at the tests.

If the tests are the What
And I delete the tests
I don't know What is supposed to be happening.

It is hubris to presume that we are smart enough to know or infer what the author's intent was. Further, it is wasteful -- why speculate when we can know?

An Example Case

The common occurrence of this debate topic lately seems to be around declarative code. Take this example:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class RuleApplier {

  private static final Map<Class<?>, List<Rule>> ruleMap = new HashMap<>();

  static {
    /* code that loads up all the various possible rules goes here */
    ruleMap.put(BlueThingy.class, Arrays.asList(new BlueThingRule1(), new GenericThingRule()));
  }

  public static void applyRules(Thingy thingy) {
    // Apply every rule registered for this thingy's key, in order.
    for (Rule rule : ruleMap.get(thingy.getKey())) {
      thingy.apply(rule);
    }
  }
}

Should there be a test that indicates that ruleMap should contain a rule for BlueThingy? Should it be specific about what rules are applied? 

I'd say yes!

You howl: Why? That's silly! That's testing structure and implementation!

I submit the following defense.

In Defense of Not Deleting

If I had test driven the solution, I would have started with a test that says something like:

import org.junit.Test;
import org.mockito.InOrder;

import static org.mockito.ArgumentMatchers.isA;
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class RuleApplierTest {

  @Test
  public void blueThingyGetsBlueThingRuleAndGenericThingRuleApplied() {
    BlueThingy thingy = mock(BlueThingy.class);
    when(thingy.getKey()).thenCallRealMethod();
    InOrder ruleApplications = inOrder(thingy);

    RuleApplier.applyRules(thingy);

    ruleApplications.verify(thingy).apply(isA(BlueThingRule1.class));
    ruleApplications.verify(thingy).apply(isA(GenericThingRule.class));
  }

}

There is nothing about that test that says RuleApplier has a declarative block of code, or even that it needs one. It only says RuleApplier will apply those two rules when given a BlueThingy.

The Nuance

If I delete this test 'because it is testing declarative code' I'm removing the What. So later, when I wonder about the Why, I won't have the benefit of a What to clue me in. In fact, I can now easily break the system. And surely you agree, relying on an integrated test is a bad choice; integrated tests are a sham.

There, I finally got that out of my system.

24 June 2018

Flow Control with Exceptions

Recently the topic of using exceptions for flow control reared its ugly head. This topic seems to show up in my life every few years so I thought I'd share some things I've learned over the past 25 years of dealing with exceptions.

Don't Do It!

OK, first, just don't. Don't use Exceptions explicitly for flow control. In fact, don't use Exceptions if you can help it. Exceptions should be the result of something essentially beyond your control. The name says it all: Exceptions are exceptional -- your handling of an exception should be to deal with the unexpected, despite how cynical you might be.
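To make that concrete, here is a small Ruby-flavored sketch (order and coupon are names I made up for illustration). The first version drives a perfectly ordinary branch through raise and rescue; the second expresses the same decision as plain flow control.

def discounted_total(order)
  # The "no coupon" case is normal, but we force callers to rescue their way through it.
  raise "no coupon" unless order.coupon
  order.total * 0.9
end

begin
  price = discounted_total(order)
rescue RuntimeError
  price = order.total
end

# The same decision as ordinary flow control:
price = order.coupon ? order.total * 0.9 : order.total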

General Handling of...

So you really should try to avoid handling exceptions. That is, you should only handle an exception you can do something about. A typical good pattern for any piece of software is to have one exception handler at the top (closest to the invocation point) and handle everything there, usually with a polite message indicating that a system error has occurred.
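Roughly this shape, as a sketch (run_application here just stands in for whatever your real entry point calls):

# One handler at the top, closest to the invocation point.
# Anything that nothing below knew how to handle ends up here.
def main
  run_application
rescue StandardError => e
  warn "A system error has occurred. (#{e.class}: #{e.message})"
  exit 1
end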

I Can Handle It

There are some exceptions you can handle. File Not Found Exception is a pretty common one that you can generally handle. Now, by handle, what do I mean? Well, in some cases it might mean printing a helpful error message for the user. In other cases it might mean creating or downloading the missing file, or using a default configuration.
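For example, something like this Ruby-flavored sketch (load_settings and the default hash are made-up names): a missing configuration file is unexpected but possible, so we fall back to defaults right where we can do something about it.

require 'yaml'

DEFAULT_SETTINGS = { 'retries' => 3, 'log_level' => 'info' }.freeze

def load_settings(path)
  YAML.safe_load(File.read(path))
rescue Errno::ENOENT
  # The file isn't there: handle it right here, concisely, and resume normal flow.
  DEFAULT_SETTINGS
end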

When you are doing this, you are not using an Exception for flow control. You are using an exception to identify and handle an unexpected (but possible) condition in your application. 

Some other pointers for handling exceptions: handle them immediately and concisely. That is, don't try to over-generalize the handling of possible exceptions (other than the aforementioned top-level handler). When exceptions are handled, get to the point, handle them quickly and without too many gyrations, then resume normal flow.

Where Does It Get Messy?

Things usually get messy in highly modularized code bases. For example, you might have 20 libraries as dependencies of your application, but you wrote all of those libraries and the application, so all of it is your code. This can make it hard to tell when you are using an exception for flow control and when you are dealing with things outside of your control.

An easy way to work through this is to ask: if the library throwing the exception were somebody else's open source library, what would you do then? Would you still throw the exception? If you wouldn't do this to a stranger on the internet, don't do it to yourself.

Similarly, if the library throwing the exception was some OSS library you'd pulled off Maven Central, how would you handle the exception? The same rule applies to the library you wrote.

Don't Over Complicate

As with most things, it is best if we don't overcomplicate the matter at hand. Exceptions are part of our languages. There are penalties to using them, but there are also advantages. When considering how to use an exception, think about the developer who comes after you. What will make sense to them? That is what you should do. When in doubt, ask someone how they would expect things to work. 

19 June 2018

Automate Everything

Stop me if you have heard this one before. No, don't. Read this again.

Back in 1988-89 I had a job as an assistant systems operator working for a really cool guy named Jason. Mostly I ran backups and did other really simple SysOp work and I probably spent more time learning csh and making patch cables for the machine room than doing much else. But I still learned a lot in this job. 

The most important lesson I learned was, automate everything.

It came up one day that there seemed to be a lot of idle time in the life of a SysOp. Roughly 80% of the time was available for projects like 'make patch cables' or 'clean the attic'. So I asked Jason,

"How is it that we have so much spare time? When are we doing to do some SysOp-ing.?"

He said, "We are! Everything is automated. When I come into the office in the morning I check my email. I review the reports generated by the automated scripts, and if nothing is wrong I have to make stuff up for us to do all day." 

At the time it was sort of a "Ha Ha" moment and I didn't think about it too much. Years later I realized, Jason and the other Real SysOps™ had automated every single task they had to perform on a regular basis. They needed guys like me to change the tapes in the Exabyte drive, but not much else. And as long as things went well, there wasn't much to do.

That left lots of free time for other pursuits. Like thinking about how to make things better, more automated. They were basically working to eliminate their own jobs. As a consequence they could work on more interesting things (homework, pet projects, etc.). I wish I'd had a clue back then, but I have one now. 

By automating away all the mundane things, we can create more space to think through tough problems, innovate, or just generally sleep better.

So I have been applying this sort of thinking since back in the day, generally with good success. I admit, sometimes it takes me a long time to figure out how to automate things. I certainly have grown to despise things that are hard to manipulate with scripts and macros. What I've gotten in the end is a fairly simple life.

One example is a side project I'm working on. I've automated nearly everything. I did it in the Unix Way (small, atomic/acidic scripts that only do one thing). I can use all that automation to my advantage. When my partners in mischief call with an issue I can usually bang out two or three simple commands to 'fix things'. Or send instructions like "Run script X. Delete thing Y. Then restart with command such-and-such". Honestly, if I could anticipate the contortions in advance, I could get most of this down to one script.

What this has given me is the opportunity to think about the Hard Parts™ of the system and then arrive at clever solutions. Rather than spend days trying to build a DAL for the application, I spent a day deriving a generic library that works across all of the domain objects and tables. How'd I do that? Well, I didn't spend all day manually coding up a bunch of one-off objects; I automated the construction, testing, and deployment of those things. The test cycle is about 6 seconds. I was able to iterate over my clever solution so fast that it was almost (but not quite) painless to create.

Automation is your friend. It may not be sexy and glorious, but it will enable you to do great things. So go out there and automate everything.

18 June 2018

Clever is the Enemy of Good, Part send(f(time.now)+hostname)

So in a recent coding adventure I came across some really super things. One of my favorites worked as follows. </snark>

* Get a reference to a production domain class that contains a list of event types
* Get the names of the event types as strings
* Split the strings on '.'
* Use the last element of the returned list to create a snake case string (from the camel case value)
* Use send to find a method on the current object with the same name as the string
* Assemble the results into a list (a sketch of this follows)
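
In code, that recipe comes out looking roughly like this (a reconstruction with made-up names, not the project's actual code):

let(:events) do
  # Reach into a production class for its registered event type names...
  EventRegistry::EVENT_TYPES.map do |type_name|
    # ..."billing.InvoicePaid" becomes "invoice_paid"...
    snake = type_name.to_s.split('.').last
                     .gsub(/([a-z\d])([A-Z])/, '\1_\2')
                     .downcase
    # ...and send hopes a matching let(:invoice_paid) is defined somewhere above.
    send(snake)
  end
end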

Now I'm down for some good old fashioned reflection/introspection and general meta-programming. There are plenty of times where it's the right thing and it makes sense.

Your test setup code is not this place. 

This example I've laid out took about 20 lines of setup code and resulted in roughly this:

let(:event1) { Event1.new }
let(:event2) { Event2.new }
let(:event3) { Event3.new }
let(:events) { [ event1, event2, event3 ] }

Why would you put all this complicated junk in your test? 

I have only one guess: Future Proofing. The only genuine motivation I can see for using a complicated setup for such a simple thing is a presumption that one day there will be more events and we will want to test them all.

This is wrong thinking. First, don't future proof your test code. It will be necessarily vague and won't result in anything very helpful or useful in a future that might never come. Second, you've now made a simple thing very complicated, to the detriment of readability.

Our first goal in TDD is to understand our system; to determine what code must be created by explaining it in terms of test code. Something like this is clearly not the development of understanding. I'm pretty confident that it's an example of test-after development, although I didn't check.

One of the secondary effects of TDD is that we leave behind an explanation of how the system works. Not of how it was implemented necessarily, but of what we expect it to do. Having a let() that is 20 lines long and uses reflection to assemble a list of 3 items is not clear, concise, or helpful. 

So in both cases a test like this misses the mark for good TDD. 

15 June 2018

TDD Preconditions, Moar Design Pressure

As test drivers we need to listen to that design pressure and simplify.

I recently spent several days dissecting a single RSpec file that was 1300+ lines long. My pair partner and I extracted a single context of 250 lines into a new file and hauled 105 lines of setup code along for the ride. There were 103 let statements and two subjects. That's not to mention the event machine testing mix-in and the various event mothers.

In the end we got it working, but it took far longer than it should have. Plenty of time was spent questioning our understanding of the system and how it should actually behave. "Had we extracted the correct setup?" and "Did this test work before we did the extraction?" became our repeated refrain. So we were constantly flipping back and forth to another branch and running the test suite to ensure that we weren't screwing things up.

We got the job done, but here are some things we learned.

1) Tests with preconditions aren't really helpful in explaining anything to the reader. It seems like they should be, but they just kept confusing us. In fact, once we became familiar with the test configuration (getting the file trimmed down to < 300 lines) they were redundant. This is clear evidence that, if the test module is properly formed, the preconditions aren't necessary; hence they are a smell.

Have you ever seen a test that looks like this:

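Something along these lines (a made-up but representative example; the real one was buried in far more setup):

it 'removes the subscriber when it disconnects' do
  # Precondition: assert that the setup put the system where we think it is.
  expect(registry.subscribers).to include(subscriber)

  subscriber.disconnect

  expect(registry.subscribers).not_to include(subscriber)
end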

I don't like this test. The precondition (the assertion before the execution) is telling me something is wrong. Mostly what it is telling me is that the system is complicated enough that I need to establish the current state before I can even start executing. 

That's a design smell if there ever was one.

What that precondition is telling me is that our test has become so complicated that we are unsure of how the setup works, and therefore our test code needs a test. That's bad.

2) (off topic but important) Reasonable defaults to you aren't necessarily reasonable to anyone else. When you are dealing with 1000 lines of test code and numerous external factories and fixtures you can get lost and confused very quickly. It doesn't help if an external testing library sets up conditions that aren't explicit but have significant consequences. Clever is the enemy of good. Don't use an unusual setting or configuration just for fun in your defaults, and if you do, make it super obvious that you are doing so or the developer who comes after you might spend a day chasing their tail.

3) Most importantly: listen to the design pressure your tests provide. If you feel compelled to make an assertion about the state of the system before you execute the code under test, your code is telling you 'Hey, I'm complicated!'. Part of our goal is to not have complicated things. So do something about it.