02 March 2023

Fixing It, and other uses for Duct Tape

 "If you can't fix it with duct tape, you didn't use enough duct tape"

-- unknown

"The problem is not the problem. The problem is your attitude about the problem"

-- Capt. Jack Sparrow

I've been thinking about fixing things lately. Not a specific list of things that need fixing (though I have one of those too), but the general process of fixing things. I thought I could share some of my thoughts on fixing things here.

Don't Fix Symptoms

A lot of times I see people trying to fix complicated problems by attacking the symptoms of the problem. Rather than pausing a moment and understanding the context of a problem they just tape over whatever it is and move on. This leads to layers of tape. Each layer of tape has its own consequences. And eventually, entropy catches up with you and you can't fix anything. You are too busy fixing your fixes. You should be fixing core problems.

Speculation Is Risk

Many times the reason we're patching over symptoms is due to an incomplete understanding of the problem we see. That is, we think we know the cause of our problem and we move forward. Being prone to action and probably a little overconfident we fix what we think are core problems but we're just covering up another symptom. Speculating about the cause of a problem is risky. It's better to know the cause of the problem.

Core Problems

How do you know you are fixing a core problem? 

You can't find another cause. That is, given a problem, you are not able to identify any other cause. This is hard. There are a trillion moving parts in software systems, and identifying the broken one is challenging. All of us have spent hours searching the internet, peeling back layers, and reading source code in search of the causes of our pain. Experience and technique are the salves for this wound.

Lacking Experience

If you lack the experience, that's ok. Just take a deep breath and approach the problem methodically. Try small experiments. Work quickly but in a disciplined manner. With each experiment, write down what you tried and what the result was. Keep trying until you are exhausted. Then, if you still don't have it, seek expert advice. Maybe the problem is intractable, and that's ok too. Just back up and look for another way to solve the bigger problem.

Leverage Comes from the Fulcrum

If you do solve a core problem you won't just have fixed one thing, you will have fixed many things. You've fixed that core problem, but you have also fixed all the symptoms of the core problem. If the core problem was really deep in the system the impact will be huge! 

Whack A Mole

So what happens if we don't fix the core problems? 

You will spend a lot of time and money playing Whack A Mole. Every time you fix a problem that isn't the core problem, you are likely to create or reveal another problem. If that doesn't happen, the core problem will find a way to reveal itself again. Sometimes you will fix an issue, and a few weeks later fix another related issue, and not even notice. That is because you didn't do the analysis to figure out what the core problem was.

Pyrrhic Victories

Let's go back to that entropy thing. Look at what you are doing and ask yourself: is my investment in the current solution so great that fixing it is really as effective as replacing it? Because sometimes fixing it creates so much damage in the system that you'd have been better off leaving it alone.


While you are playing Whack A Mole you are wasting the resources of the organization. You are creating a barrier between you and value delivery. Remember that our job as software engineers is to create and deliver value. 

Sunk Cost

Often we fall into the Sunk Cost Fallacy, and that further prevents value delivery. Keep in the back of your mind that software is supposed to be soft. If the best-right thing is to rip out the broken thing and replace it, then do that. But do it for a justifiable reason, not an emotional one.

Don't keep banging on the problem, core or otherwise, without considering that eliminating the cause is the right solution. And think big scope here, not what does this cost in my sprint, but what does it cost my organization?


We don't spend enough time talking about these subjects. We don't spend enough time educating everyone about the causes of strife and pain in the organization or the systems we are building. We certainly don't do a good job of teaching diagnostic techniques. 

Taking the time to find the root cause of an issue and fixing that, rather than the myriad of symptoms caused by that problem is well worth the effort. Being open to radical solutions is also necessary for creating effective solutions. Save your duct tape for more interesting projects.

24 February 2023

Locking It In

So in an odd set of coincidences over the past few days, I've engaged in several conversations about working with legacy code. One particular thing that migrated to the top of the conversation was locking in current behaviors. Somehow those conversations all degraded into hairsplitting vocabulary debates. So I thought I'd capture my thoughts and share them.


One of my great frustrations within my professional world is the degree to which we are pedantic. Collectively we vacillate between being very precise and totally vague with our language. The convenience of vagueness is that we don't have to establish a common vernacular. The problem is that it can lead to misunderstandings and miscommunication. On the other hand, pedantic arguments often lead to frustrating if not infuriating conversations about the meanings of words. I submit that pedantic-ness is wasteful. Don't be a pedant.

My Bias

I consider myself expedient. That is, I'm not much for wasting too much time. So to me, all the pedantic hairsplitting is nonsense. That isn't really fair, and I understand that sometimes you gotta step back and explain yourself, especially when addressing topics across cultural boundaries. We could all probably do better at the listening thing and the thoughtful consideration thing. But judgments based on unswerving or blind opinions are a nuisance to be avoided.

The Hair

So the hair that got split this week was Pin-down Test. We were discussing Characterization Testing as a component of Legacy Rescue, and techniques for ensuring that we don't break existing functionality when adding a new feature to a system. I'll spare you the details as much as possible here, but the term 'pin down' was used to describe why we would characterize a system. Something along the lines of 'We are adding these tests to pin down the behavior of the system'.

Gnashing of Teeth

Once the term pin down came out, the conversation devolved into a violent agreement: pin-down testing is a thing, a good thing, and distinct from characterization testing, BUT pin-down should not be used to describe characterization testing.


So there was a protracted discussion of these things. Mostly it was in agreement: both terms have the same intent, to describe the current behavior of a system without a value judgment on rightness or wrongness. The conversation took a considerable amount of time, and I categorize it as waste because it was more about egos and the need to be formally correct (pedantic-ness) than it was about developing clarity.


So the result of our collective dance was this. 
Pin-downs are about dissecting code; they enable refactoring and allow us to break down legacy code, and they may include a bunch of scaffolding and other edifices that are (or should be) disposable. Pin-downs are often used to identify a defect in a system and then act as a sentinel to confirm its resolution.
Characterization tests on the other hand are intended to be permanent additions to a test suite to ensure that there are no changes in behavior. They assume that the current behavior is the correct behavior.
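To make the distinction concrete, here is a minimal characterization test in Python. The `legacy_price` function and its odd discount are hypothetical stand-ins for real legacy behavior:

```python
# Hypothetical legacy function whose exact behavior nobody remembers writing.
def legacy_price(quantity, unit_price):
    total = quantity * unit_price
    # Surprising, but it's what production does today: a flat $5 off over 100.
    if total > 100:
        total = total - 5
    return total

# Characterization tests: assert the CURRENT behavior, right or wrong,
# so any future change that alters it fails loudly. These are meant to
# stay in the suite permanently.
def test_characterize_price_over_100():
    assert legacy_price(11, 10) == 105

def test_characterize_price_at_or_under_100():
    assert legacy_price(10, 10) == 100
```

A pin-down test might look mechanically identical, but it would exist to corner a defect during a refactoring and would be thrown away along with its scaffolding once the work is done.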


So the specific issue, characterization v. pin-down, resolved into a subtle clarification. In essence, they both lock in behavior so we can tell if we're breaking things. The temporality of the tests and their attendant scaffolding is one distinguishing characteristic. Furthermore, the purposes for creating such tests are distinct variations of each other. If ever a hair was split so finely.

My point, though, is that much of the discussion we had at the time was waste. Debating the use of 'pin down' in the context of characterization is just flexing. As I've read and discussed this topic over the past few days, it has forced me to focus on the meanings of these terms and then be exasperated by the English language.

Personal Note:

Thanks to Tim Ottinger @tottinge and Stephen Cavaliere @SteveCavaliere, among others, for the discussion and thoughts on this particular topic.

20 February 2023

Where is Rich?

 Hey all, sorry for the delay in my posts. I was on vacation and I forgot to post that I was going away for 2 weeks. I'll be back with a post this coming Friday.

03 February 2023

A foolish consistency...

 "A foolish consistency is the hobgoblin of little minds" -- R.W. Emerson

I used to hate that quote. I hated it because people would misquote Emerson all the time by leaving off the 'A Foolish' at the beginning. Consistency has its purposes.

I previously wrote on code being consistent in its structure in order to reduce cognitive load. That same thinking can be applied elsewhere.


Recently I've been working with my team to develop standards and processes for our organization. We're focused at the moment on identifying known standards and capturing those in one place. But in some cases, the known standards aren't written yet. This is a big problem in the sense that there are dozens of standards and keeping them consistent and well-organized is proving difficult. 

We are doing this because we are trying to create consistency across our organization, consistency that we believe will enable us to easily automate processes. Our mission is to reduce the friction of putting code into production.

My Personal Dichotomy

Among the dozens of standards we have identified are choices about tools, technology stacks, and language standards. I personally am working on the standard for Python development.

Anyone who's worked with me before knows that I'm personally indifferent to style guides in programming. That is, I don't care what the standard is; I care that the standard exists. You'll also know I'm not a big fan of PEP-8 or the defaults in pylint. So maybe it's ironic that my standard for Python says 'start with PEP-8 and use pylint'.

Consistency Can Increase Value

I won't drag you through my philosophical arguments against the defaults. I want to talk about value delivery and speed and how any standard enables that. 

Having a programming standard that is enforceable (and hopefully auto-fixable) gives me an advantage when I code. Taking 30 minutes to set up pylint and my IDE for auto-formatting lets me stop thinking about formatting and style. They become one less thing I need to handle before creating a pull request. Minimally, I don't have a colleague checking the rules for me. Ideally, my IDE corrects the issues as I create them. That lets me work on problems that really matter.
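To make that concrete, here is a minimal sketch of a pylint configuration a team might start from. The specific rule tweaks shown are illustrative, not our actual standard:

```ini
# .pylintrc -- start from the PEP-8/pylint defaults, then tweak deliberately.
[FORMAT]
# Example tweak: a longer line limit than the PEP-8 default of 79.
max-line-length=120

[MESSAGES CONTROL]
# Example tweaks only; disable rules as a team decision, not ad hoc.
disable=missing-module-docstring,
        missing-function-docstring
```

The point isn't which rules you pick; it's that the file is checked in, every IDE points at it, and nobody litigates formatting in code review again.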

Not Foolish, Pragmatic

The consistency of the standard isn't foolish. We're making this choice to remove a whole realm of discussion from the production of software. We are working to reduce friction and speed up development so we can deliver more value more often. Part of that is the standard itself and how it impacts our reasoning about code, and part of that is leveraging or enabling automation. Remember, doing things consistently is a choice; the standard itself is mostly an opinion on correctness. Don't blindly follow a standard; understand why you have it and what you get from having it. My advice: steal a standard and tweak it rather than going without one, and find a way to automatically enact and enforce it.

26 January 2023

Auth0 - Testing APIs with Live Users

So I've been working on an API for a few weeks now. The big issue with security in my testing was that I was cheating, testing behind the authentication/authorization tier. That's fine for some tests, but we need to prove that the system is secure. In particular, we're using Row Level Security and we need an undeniable way to ensure that the holder of the Bearer token is allowed to see the data they are requesting. I'll spare you the gory details of how our RLS works and the nuances of getting a user ID associated with particular data sets (though that's interesting too). The issue we ran into was: how do I get the Bearer token from a Python script using a username and password?

This seems like it should be a trivial thing, but it was not. In fact, there were many hoops jumped, misunderstandings, and retrospectively trivial discoveries that needed to be made. So in that light, I'm memorializing that journey in this post. Again, I will spare you the details and myself the indignity of describing all the missteps, in the hope that this post can be used as a direct guide for others solving the same problem (and a cheat sheet for me in 6 months when I've forgotten how to do this).

Setup Auth0:

So obviously you need an Auth0 account set up, with an application or API configured and at least one user configured and authorized appropriately.

Setup Python:

In our case we're working in Python 3.7 using PyTest and the Auth0-python library. The first thing we need is a few fixtures to sign us in and get a bearer token.

Environmental Considerations

In the code examples below, all the UPPER_CASE names are either constants or environment variables. I used dotenv to configure the environment and excluded the .env file from git. I won't expose those details here for obvious reasons.
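The wiring looks roughly like this. The variable names and the placeholder default are examples, not our real configuration:

```python
# Sketch of the environment wiring; variable names here are examples only.
from os import getenv

try:
    from dotenv import load_dotenv  # python-dotenv; .env is excluded from git
    load_dotenv()  # pulls .env values into the process environment, if present
except ImportError:
    pass  # fall back to whatever the process environment already has

# 'example.auth0.com' is a placeholder default, not a real tenant.
AUTH0_DOMAIN = getenv('AUTH0_DOMAIN', 'example.auth0.com')
MGMT_API_CLIENT_ID = getenv('MGMT_API_CLIENT_ID')
MGMT_API_CLIENT_SECRET = getenv('MGMT_API_CLIENT_SECRET')
WEBAPP_CLIENT_ID = getenv('WEBAPP_CLIENT_ID')
```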

Create some PyTest Fixtures

Since we only want to sign in once, we'll use a few session-level fixtures.

First we need to initialize the API client used to get a token; this is pretty trivial:

from auth0.v3.authentication import GetToken

@pytest.fixture(scope='session')
def auth0_get_token():
    return GetToken(AUTH0_DOMAIN)

Second we need to get an active Auth0 client connected to our management API

@pytest.fixture(scope='session')
def admin_access_token(auth0_get_token):
    mgmt_api_url = 'https://{}/api/v2/'.format(AUTH0_DOMAIN)
    token = auth0_get_token.client_credentials(MGMT_API_CLIENT_ID,
                                               MGMT_API_CLIENT_SECRET,
                                               mgmt_api_url)
    return token['access_token']

And lastly we need to get the bearer token itself

@pytest.fixture(scope='session')
def bearer_token(auth0_get_token):
    # WEBAPP_CLIENT_SECRET, TEST_USER, TEST_PASSWORD, AUTH0_REALM, and
    # API_AUDIENCE are assumed environment values; the exact login
    # arguments depend on your tenant configuration.
    token = auth0_get_token.login(WEBAPP_CLIENT_ID, WEBAPP_CLIENT_SECRET,
                                  TEST_USER, TEST_PASSWORD,
                                  scope='openid profile email',
                                  realm=AUTH0_REALM,
                                  audience=API_AUDIENCE)
    return token['access_token']

Use the Bearer Token in a Test

In the end, that was all pretty trivial. So, now we can build a test using that bearer_token fixture like this;
def test_using_the_wrong_customer_id_fails(bearer_token):
    # 'event' is a prebuilt API Gateway event template (details omitted here)
    customer_id = 999
    test_user = getenv('TEST_USER')
    event['headers'][X_CUSTOMER_ID_HEADER] = customer_id
    event['headers']['Authorization'] = 'Bearer {}'.format(bearer_token)
    expected_response = 'cannot find user {} at customer {}'.format(test_user, customer_id)

    resp = lambda_handler(event, None)

    assert resp == {
        'body': json.dumps({'error': expected_response}),
        'statusCode': 400,
    }

20 January 2023

Special Flowers

Wow, it's been a really long time. I got distracted. Writing, for me, is a challenge. Talking on the other hand :-D So, I'm back, sorta. I'm going to try to get back into the blog cycle.

Special Flowers

(or Speaking of Design Consistency)

So what do I mean by consistency? The shape, feel, and structure of the code is more or less the same throughout. Some people call this coherent design. Others talk about micro-patterns. I kinda like the term uniform design. Whatever word you want to use, the point is that what you see in one part of the code is more or less the same as anywhere else in the code base.

Here is an example. 

You may have made a choice to throw exceptions in order to communicate issues with validation. So you might have a function in your application that reviews the content of an HTTP request looking to see that the necessary parts are present. If something is missing you throw an exception. 

My expectation after seeing that approach is that it will be used everywhere in the code base. Every time we validate something we would expect an exception to be thrown if the validation fails. That would be 'uniform'.

OK, so what happens in your head when the next validation code you examine returns true if the content is valid and false if it isn't? A fair enough approach, but not the same as the first one. Not uniform.
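A tiny illustration of the two styles living in one code base (all of these helpers are hypothetical):

```python
# Style 1: validation communicates failure by raising an exception.
def validate_has_user(request: dict) -> None:
    if 'user' not in request:
        raise ValueError('missing user')

# Style 2, elsewhere in the same code base: validation returns True/False.
def is_valid_order(request: dict) -> bool:
    return 'order_id' in request

# A reader now has to know which convention each call site follows:
def handle(request: dict) -> str:
    validate_has_user(request)       # failure arrives as an exception...
    if not is_valid_order(request):  # ...or as a falsy return value.
        return 'bad request'
    return 'ok'
```

Neither style is wrong on its own; the cost comes from having both, because every call site forces the reader to stop and check which convention applies.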

Why does it make a difference?

I can give you at least two other ways that validation could be implemented. But two is one too many already. This inconsistency causes me to stop and think. What is the author doing? Is there a special reason that we're using true/false here and exceptions elsewhere? Now I have to research that and understand both. If I'm really diligent, I'll hunt down all the validation patterns and determine how many variations there are, try to figure out which is the predominant approach, and at least make any new validator consistent with that pattern. Very time-consuming. 

What's Our Job Again?

Our job as Software Engineers is to deliver value to the business. We measure that in terms of the features delivered. So thinking about how we can deliver features faster is pretty essential to our jobs. There are 1000 ways to deliver faster, and we can get into some of those in another post. For this topic, I want to point out that making yourself work harder is not an effective way to increase the rate of value delivery.

Flowers: I only like tulips

So there should be no special flowers in the code. Ideally no variance in design and approach. If we can achieve that goal we reduce the total cognitive load necessary to understand the code. This is akin to pattern recognition (Sparrow's Deck anyone?). We should feel like, with respect to a particular code base, I've seen one validation block, I've seen them all. At least in terms of the mechanics. 


By establishing a uniform design we are reducing the amount of work we need to do to gain comprehension of the code. That means we can learn the whole code base faster. If we can learn the code base faster we can make effective changes sooner. If we make changes sooner we can deliver value to the business quickly. That is our job* 

Related, and probably fodder for other posts, by reducing the difficulty of comprehending the code we increase the available time for carefully considering innovations. Imagine not feeling rushed to 'clean up' the code because you know the pattern and so you built it 'to spec' the first time; or at least close enough that it is not an arduous task to align with the uniform design.

What's Next

Looking back on my post history I've really taken plenty of time off. I'm going to do better in 2023. It's not a resolution, it's just what I will do. I'm open to writing based on prompts so if you have suggestions hit me up on Twitter @net_zer0 I have in mind a lot of posts around code quality, still more on TDD, and an update on my thoughts around Architecture and the role of the Architect in an organization.

* There is a lot to our job. Features are (potentially) value. Value delivery is how we get measured. I'll write more about the other aspect of our jobs soon.