An observation. When I look around I see people who get stopped. I need groceries, it's raining, I don't have a raincoat. I'm stopped.
Don't be stopped. Innovate, adapt, overcome, but don't be stopped.
This is a mini-blog. I'm working to find a compromise between a tweet and a lengthy essay. I find it difficult to complete longer documents because of an obsession with perfection. So this little experiment is to see if I can create a blog of mini articles. Herein I will talk about many technical things generally related to software development and Agile practices.
26 April 2017
It's OK to ask dumb questions
So the other night I was sitting at a track meet chatting with a friend. We got into a conversation about questions and how they relate to asking for help. We agreed that some people don't ask questions, frequently for fear of looking dumb. There is a lot to that. Many people are hyper-conscious of other people's perception of them, and they are deeply concerned about not looking dumb.
It's OK to ask dumb questions. The only way anyone learns anything is to ask questions, and you have to start someplace. So if the question you ask seems naive to others but is meaningful to you, that's OK. I'm not talking about asking intentionally dumb questions; that is wasteful. However, if you have a gap in your knowledge and you don't know something, just ask the question.
If it makes you feel better, you can preface the question by saying something like 'This might be a dumb question, but...' Or you can just blurt it out, and if it turns out to be a dumb question, have a good laugh with everyone. The important thing is to make sure you get an answer.
24 April 2017
Ask for Help
So today I endeavored to resurrect an old piece of hardware. Last week I did the same with some software. Both of these projects seemed within my skill set and I started them with no hesitation. As it turns out, though, I had issues with both projects and I had to ask for help.
Often I see people struggle mightily to get a project or task done while seeming to refuse help. They want to do it on their own. Last year a woman on my development team just wanted to 'figure it out for herself'. Today, I just wanted to work it out on my own, so I can relate to the condition.
Here is the thing. We can get a lot more productive work done if we ask for help. We can still learn, we can still accomplish, but we don't have to play the 'I don't need your help' game to do so. Ultimately, results are what matter. Getting the machine to boot, or the software to compile, or whatever it is, is what counts. Saying that we did it on our own is only the icing on the cake. While I get how satisfying that is, it isn't typically what we're getting paid for. What the customer wants is a working solution, in a timely manner, for a reasonable price.
So, if you are stuck, don't hesitate to ask for help. It shows maturity and wisdom to know when you need the help, and in the end it results in a happy customer.
21 April 2017
Problem Solving 101
Lately I've been getting into a lot of interesting conversations with friends about the various problems we see around us. Everything from processes that crash mysteriously to universal health care. As a result I've been doing a lot of reading, trying to learn what I can about these topics and to draw conclusions from that learning. It's been a lot of fun.
Something I've noticed in my tour of the articles and videos, though: it seems that many people are chasing solutions through symptoms. That is, they seek to solve problems by finding remedies for the symptoms of the problem rather than solving the actual problem. As a professional problem solver, this irks me.
When we fix a symptom of a problem we are only masking a secondary effect. When we fix that symptom, another one will eventually spring up to take its place. If we fix that one, it will surely be followed by another. There aren't enough band-aids in the world to repair a ruptured artery. The correct response is to fix the artery.
When we encounter an issue we must pause and think about it. Is the issue a symptom of a bigger problem, or is it actually the problem? Does a RabbitMQ server fall out of its cluster because it is bad software, or because it isn't configured correctly? Does a process terminate badly because the input data is bad, or because we did not properly prepare for corrupted inputs? These are more obvious examples because they are simple problems, but the same approach applies to much more complicated issues.
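To make that last example concrete, here is a minimal sketch in TypeScript (the Order shape and parseOrder helper are invented for illustration). The symptom-level fix is to catch the crash and restart the process; the root-cause fix is to reject corrupt input at the boundary:

// A hypothetical message handler. Fixing the cause means validating input
// where it enters the system, not patching the crash it triggers downstream.
interface Order {
  id: string;
  quantity: number;
}

function parseOrder(raw: string): Order {
  const data = JSON.parse(raw); // corrupt payloads surface here, at the boundary
  if (typeof data.id !== "string" || !Number.isInteger(data.quantity) || data.quantity <= 0) {
    // Reject bad data explicitly instead of letting it detonate deep in the pipeline.
    throw new Error(`Rejected corrupt order record: ${raw.slice(0, 80)}`);
  }
  return { id: data.id, quantity: data.quantity };
}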
We need to stop repairing symptoms and start looking for root causes to fix. Only then can we truly progress.
19 April 2017
More on Volunteering
Elsewhere I've mentioned Volunteering or Volunteerism and I've had a few questions. Specifically, what if I don't have control over my situation and I'm forced to work on specific projects or inside of specific teams? What if I can't opt out?
Well, this is going to require some amount of fortitude. First off, I suggest that everyone regularly evaluate their relationship with their employer and their job. I think it is important to honestly confront how you feel about what you do for a living, who you do it for, and who you do it with.
The first thing I'd do is look for a pattern. My wife says my pattern is cyclical and runs about a year long. So roughly every winter I start struggling with what I do and who I do it for and with. Apparently, every year, I say something like 'I'm going to move on...' But then I usually don't. I have a completion complex; I like to finish what I've started if people will let me. That said, I get dissatisfied with things, and the cold dark of winter brings out the contemplation. In the end, though, this is good for me.
Anyway, if you are seeing a pattern in your dissatisfaction, look very deeply into its cause. Consider everything going on. Are you unhappy because of work, or because of something outside of work? Is this thing keeping you from committing to your job and giving 110% to what you are doing? Then go and fix it. Rinse and repeat.
Fixing it might mean finding a new pasture to call home. That's OK, especially in today's environment. However, it doesn't have to. You might need a break, a good vacation, or even a sabbatical. Or you could consider a real career change, like a different role in the IT/development organization or, more radically, going to work for the business.
No matter what, everyone should understand that their relationship to work is purely voluntary. You don't have to work for your current employer (or anyone, for that matter) and you don't have to stay on the team you are on. You don't even need to finish the project that you're doing. What you do need to do is take care of yourself.
17 April 2017
More thoughts on VEC and the Scientific Method
So I just made up that acronym, VEC, short for Voluntary Egalitarian Collectivism. But I'm tired and I don't want to type it all out today.
I started out my post on VEC talking about how a team behaves; maybe it would be more accurate to say how a team is composed, that is, what the mindset of the team members is. Anyway, I was listening to Agile for Humans with GeePaw Hill and it got me thinking more about how I'd describe Agile, being agile, etc. It's a great podcast, and the ideas-per-hour rate was very high.
I should get to the point. Consider this thought: there is no definition of Agile. That is, there is no fixed condition called agility that can be easily identified. You cannot walk into a dev shop, look around, and say 'Yes, these folks are agile.' GeePaw, Amitai, and Ryan all seem to agree with this thinking. That got me thinking: if that is true, then what is agile? Can I simplify it down into some concept?
I don't know, but I'm going to try.
So what is agile? Or what does it mean to be agile? It seems to me that agile could be summarized by saying that we apply the scientific method, repeatedly. Or maybe, as a fine variation, the Shewhart Cycle (Plan-Do-Check-Act). I guess it depends upon how you perceive a problem.
So with respect to VEC and how an agile team behaves, I guess some component of what they are doing must be applying some form of the scientific method. It seems like there must be more to it than that, but certainly this is an important part of the game.
14 April 2017
The Master Equation
Elsewhere I've talked about metrics and delivery and mindset. There is a lot of talk about metrics being evil, wrong, and misguided, but the reality is that we need to consider how to make software delivery as effective as possible. That is, how do we get the highest quality, targeted, well-factored systems for the lowest cost, as fast as possible? Call it balancing the Iron Triangle if you will.
I don't know where this originally came from, but years ago in a water park/hotel conference room in Ohio I was presented with this equation:

Throughput = Work - (Rework + Waste)
I call this The Master Equation. From it we can derive everything else when we talk about effective software development. When we look at what we do for a living and how we are compensated, we have to consider that the only really valuable thing we can do is produce value, and, more or less, the faster and more efficiently we create that value, the better we are compensated. So throughput is really a thing.
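To make that concrete with invented numbers: suppose a team puts in 80 hours in a week, spends 12 of them redoing a misunderstood feature (rework), and burns 8 more producing documents nobody reads (waste). Then,

Throughput = 80 - (12 + 8) = 60 hours

Only 60 of the 80 hours actually moved the product forward.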
Let's set aside understanding what is valuable and what is not for a moment and focus on optimization. If we use The Master Equation, we can observe the product and processes we use to deliver software and identify what is Rework and what is Waste. With some care and precision we can even identify their sources. Given that all of these things are possible, we can optimize a system for throughput.
How is this possible, you ask? Well, let's consider a few things.
One wasteful thing that we see on a regular basis is bad design. That is, something technically correct that is embarrassingly slow, unscalable, or unsuitable for the solution space. We can mitigate those issues in a number of ways.
One, we can do our homework. A little bit of research into problem spaces and published solutions goes a long way to ensure that we don't run into issues with a proposed design. We can learn from the mistakes of others.
Spiking. We can do a series of small experiments to ensure that something is feasible. When I do this, I create a decision tree of spikes. I then start at the top and work my way through the tree. If at any point the proposal becomes unsuitable, I discard it and start again. I generally do some research first to make sure I'm not traveling the well-worn road to failure.
Small steps make big strides in determining viability without blowing out your budget. If you can see a way to slice a solution into long end-to-end strips and deliver just those strips you can get a good sense of the effectiveness of the solution without building the whole thing.
Parallel development is another option. If we have two unproven approaches and no other way to verify which is more correct, build both. Set up two teams, give each one the guidance it needs on its solution, and then compare the results. You need to establish good measures before you start, and a GE (good enough) threshold too. If you understand what is good enough before you start and one team reaches that goal, you can cut off the other approach and move on.
Another thing we see frequently is defects and cruft clogging up the development pipeline. You can solve these problems pretty effectively with good tooling and good communication. For one, use static analysis tools. If you can find them, use tools that automatically correct the little things like formatting, spelling, punctuation, etc. Then set a zero-tolerance policy for violations and keep the team on it. I like to cook the static analysis into the automatic build process and reject PRs that can't pass these tests.
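As a sketch of what that can look like in a Node shop (assuming ESLint as the analyzer; any tool with an exit code or an API works the same way), here is a small TypeScript script a CI job can run and fail on:

// ci-lint.ts -- a minimal sketch, assuming the ESLint Node API.
import { ESLint } from "eslint";

async function main(): Promise<void> {
  const eslint = new ESLint({ fix: true }); // auto-correct the little things
  const results = await eslint.lintFiles(["src/**/*.ts"]);
  await ESLint.outputFixes(results); // write the automatic fixes back to disk
  const errorCount = results.reduce((n, r) => n + r.errorCount, 0);
  if (errorCount > 0) {
    const formatter = await eslint.loadFormatter("stylish");
    console.error(await formatter.format(results));
    process.exit(1); // zero tolerance: the build, and therefore the PR, fails
  }
}

main().catch((err) => { console.error(err); process.exit(1); });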
Additionally you can apply code reviews and campfires to improve quality. Code reviews are a good way to spot bad design. If commits are small (as they should be) you can usually crank out a review in fifteen minutes, and if you get good at it you can usually see the design taking shape and stop a bad one before it gets out of control.
Campfires are a great way to communicate with the team before trouble starts. Typically the tech-lead or product-owner leads a discussion on a topic with the rest of the team. Ideally you have a whiteboard present so you can draw pictures. Take 30 minutes and discuss what needs to be done. Talk through the options for how it can be done, and consider all the disagreements that might arise. This is both a great way to stay on track around design/development and for more junior players to learn from others in a group.
Defects, the bane of everyone's existence. I could probably write a book on this topic. I'll start with: don't have any defects. That's a lot to ask, but it's a good objective to have. I'll also say that the best way to avoid defects is lots of communication. Start with asking plenty of questions and validating answers, then follow up with clear explanation and demonstration. You really need to be able to show that the code you have built does what you understood it was supposed to do. If nothing else, at this point you can be told it is wrong and have a chance to fix it before it makes it into the wild.
Sometimes you don't know that you have created a defect. Oftentimes the Product Owner doesn't know it either. You just have to roll with those. You also have to accept that sometimes the business doesn't know the right answer either; they just know when they don't get what they want or expect. This is just part of the human experience and we have to tolerate it.
Lastly, there are defects caused by inexperience. These are the gaps in the code that sneak up on the unwary and destroy hope. No, just kidding. When we are learning and doing something new, we don't know what we don't know, so we can create defects because we didn't realize something could happen. QA guys make a living off of thinking of those things, and they can be very creative. Code reviews and campfires can help a lot here. People with different experiences will think of things that others won't, and throwing those things into the mix can help to mitigate unintended defects like these. That said, people can't think of everything. So when these occur, write it down, try to remember it, and learn from the mistake.
If we can strive toward having little or no rework and waste in our projects, we can deliver more high-quality software. I hope The Master Equation can provide a framework for thinking about software development overall and how we can make it better.
12 April 2017
Forests and Trees, Mental Exercises for the Practicing Mind
Despite appearances, I'm a pretty focused person. I tend to get a hold of a problem and not let go. In fact, I do this so well sometimes I forget the bigger picture.
Once upon a time, far, far away, I was commuting to Florida weekly for a job. The hotel thing got tired real fast, and it was also very expensive. So I got an apartment. I stocked the place up with rented furniture and a nice TV, but I didn't feel like buying new computer equipment, so I shipped my printer and a few odds and ends from home. It seemed like a sensible plan, but I wouldn't do it again. The printer showed up in four pieces and I could not get it working again. The shipping agency had simply poured packing peanuts into the box, taped it shut, and sent it. (Last time I'll pay them to pack and ship.) I had bought insurance on the package, so when I found the remains of my printer I snapped a few pictures, called them up, and tried to execute my claim. No dice: the package was not properly constructed, so they wouldn't pay, despite the fact that they had packed the box, not me. Their label was even on the box. Still no go.
I was livid. Anyone who knows me is aware that inconsistencies like this can send me into a hyperactive ranting state about the lack of justice and proper behavior in the world, usually followed by a tirade on free markets.
In this case I really got lost. I had a solid contracting gig working 40+ hours a week at a very high rate. Rather than going to work and making some money, I spent the first three hours of my work day on the phone with the shipping company trying to get my insurance claim accepted. That's right, I spent three (potentially billable) hours talking to customer support. I went so far as to document everything, get the address of the CEO, and write, print, and mail a letter of complaint. I was going to get my consumer justice. (I never heard back from them, but I refuse to ship with that organization to this day.)
This was pretty foolish. I lost out on a few hundred dollars of billable time over a printer that, when new, cost me about $150. It wasn't new when I shipped it; I could just have bought another one. Heck, I could have gone to the mall at lunch, picked up a comparable model AND eaten a burger, without skipping a billable beat. But instead I got lost, staring at the trees, and forgot about the forest. I stopped my lunacy at lunch time, and that is when a buddy pointed out what I had just done. I felt pretty dumb.
In the end it was a good learning experience. What I learned that day was that I'd been looking at the trees and not the forest. It was just a printer, a good one, but with no sentimental value at all.
In reflecting on this event I found that I have, once or twice, done the same thing with development work. I get so focused on what I'm doing that I forget to step back and look at the forest. For example, I might write an elegant-looking piece of code and decide that that pattern needs to be applied throughout a system. I then happily run off and track down every place where I can make my glorious improvement to the system. I'm focused on that tree. The forest, however, says the system works and doesn't necessarily need fixing. In fact, my fabulous, wonderful, hyper-elegant piece of code should get reviewed by at least one more person before I run around 'fixing' the system with it. For that matter, even if upon review it is deemed worthy, I should consider the iteration goals before I allocate time to propagating my genius creation. Bigger still is the consideration of the release goal, the timeline, my social life, and my need for sleep. It might be better to declare my discovery a good lesson learned, stash it in a gist someplace for later reference, and move on.
We often get obsessed with things like perfection and correctness and all the various clean-code rules that we love and revere so much, but many of these things, in a business sense, are the trees. The CIO doesn't care deeply that your code is DRY, or that you've properly applied the Visitor Pattern. The CIO cares about serving the business and/or customers with software that works. The business as a whole is more or less oblivious to the details of our craft. Those details, compelling as they are, really matter to us, the people who live in the code and have to deal with it daily. And the reality is, beauty is in the eye of the beholder. So my precious nugget of super-compact, well-named, DRY/SOLID/Clean code is indeed a spectacle to behold, but it has no genuine value outside the realm of developer-dom.
If we, as software development professionals, really want to get awesome results for the business, we need to think about the big picture along with the details. We need to step back and consider the impact we are having on the whole organization with whatever we do. I used an example of a 'special block of code' here, but it could be anything from a naming convention to an application architecture. We need to consider how its development and enforcement will impact the organization as a whole before we trot it out as 'the answer' to any problem.
Here is a little trick I now use to help myself decide if I should proceed with some task. Before I begin, I set out very specific objectives for the task and then I ask myself: how will this help us achieve goal X, where X is any one of the organizational goals? I often have to ask this question a dozen or more times for a dozen different Xs. When I do this, I know I've considered the impact of my actions (present and future) on the organization as a whole and I can proceed with confidence. If I cannot answer a question, I know I have to do more work. It might be as simple as saying to my pair partner, 'What do you think with respect to X?' Or I might need to talk to a product owner or the Director of Engineering.
If you keep asking yourself these questions periodically, you will develop an intuition for the correct behavior within the context of a project or job. Here is another tip: consider when your context changes. The answers to your questions might change depending on the context of what you are doing and for whom. For example, if I'm developing course work or a tutorial, I might spend hours agonizing over the correctness of a solution, but if I'm developing a spike I might only take a moment to make sure it works. Why? Because the answer to the question 'will I need to reuse this?' is yes in one case and no in the other.
Last tip, if you don't know what questions to ask, ask a meta question like 'What is important to the organization with respect to this thing?' Try your answers out with others to validate that you've asked good questions, and then solicit questions from them to help you build a better understanding of what questions you should be asking.
-----
You might be asking what the Xs are. Well, it depends somewhat on the specific task, but here are some examples...
Does this further the objective of the current task?
Is this aesthetics?
If this is aesthetics, is it worthwhile? Does it lead to understanding or is it just pretty?
Does this further the iteration objective?
What are the long-term consequences of doing this? Did I just create cruft/tech debt?
Can others understand what I've done and why?
How does this impact the current release? Will this cause a delay?
I think you get the point.
11 April 2017
Random Strangeness - Node Module with a Python Dep
So I'm not saying this is wrong, but it is surprising. The following snippet comes from the logs of a Docker container I'm trying to build. Why does this node module want Python?
npm info lifecycle modern-syslog@1.1.2~install: modern-syslog@1.1.2
> modern-syslog@1.1.2 install /statsd-master/node_modules/modern-syslog
> node-gyp rebuild
gyp info it worked if it ends with ok
gyp info using node-gyp@3.5.0
gyp info using node@7.8.0 | linux | x64
gyp ERR! configure error
gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
gyp ERR! stack at PythonFinder.failNoPython (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:454:19)
gyp ERR! stack at PythonFinder.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:368:16)
gyp ERR! stack at F (/usr/local/lib/node_modules/npm/node_modules/which/which.js:68:16)
gyp ERR! stack at E (/usr/local/lib/node_modules/npm/node_modules/which/which.js:80:29)
gyp ERR! stack at /usr/local/lib/node_modules/npm/node_modules/which/which.js:89:16
gyp ERR! stack at /usr/local/lib/node_modules/npm/node_modules/which/node_modules/isexe/index.js:44:5
gyp ERR! stack at /usr/local/lib/node_modules/npm/node_modules/which/node_modules/isexe/access.js:8:5
gyp ERR! stack at FSReqWrap.oncomplete (fs.js:114:15)
gyp ERR! System Linux 4.9.13-moby
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /statsd-master/node_modules/modern-syslog
gyp ERR! node -v v7.8.0
gyp ERR! node-gyp -v v3.5.0
gyp ERR! not ok
npm info lifecycle modern-syslog@1.1.2~install: Failed to exec install script
(The culprit, as far as I can tell: modern-syslog is a native addon, and node-gyp, the tool that compiles native addons, is built on GYP, which is written in Python. So the build image needs a Python interpreter on its PATH.) So I guess we're not quite to a world where everything is JavaScript.
10 April 2017
Voluntary Egalitarian Collectivism
How does an ideal agile team behave?
I have a theory. It starts with Voluntary. If there are people on a team who aren't personally invested in being there, they aren't volunteers. They are there for some other reason that isn't directly part of 'winning'. I define winning as delivering rock-solid code, features, and applications on time and within budget. (Just like everyone else?) People who haven't volunteered to take up the project and drive it home aren't really invested and therefore don't bring their best game.
I've talked a little elsewhere about your commitment to a team or project as a professional consultant, and that comes with some baggage of its own. I'll admit that commitment can be faked, but if the team isn't engaged for the win, you won't get optimal performance out of them.
Second, I think it is important to treat each person with equal respect. Everyone on the team should be assumed to be doing their very best and acting in the best interest of the team and project. Of course you must handle situations where that isn't true, by removing those players, but you have to at least start with the premise that everyone involved wants to succeed.
Lastly is collectivism. Now, I'm not typically a collectivist, but if you will apply your suspension of disbelief for a moment: everyone on the team must subordinate themselves to the objective of the project. I'm not talking about some sort of blind enslavement (see Voluntary), but everyone must work to deliver the project at the possible expense of other things. At least for 8 hours a day.
I lost track of this mantra for a little while, and let me tell you things didn't go as planned. However, when I've applied this thinking to what I'm working on, things have always gone well. So, I encourage you to change your thinking about teams, projects, and companies for that matter and consider this mindset.
07 April 2017
Team Metrics, A New Hope?
I just read Schmonz's post on Metrics. In it he asks if there is another way and proposes a very cool idea: metrics with expiration dates. I'd like to further that thinking a bit.
A New Hope?
One of the more insidious aspects of metrics in the team room is the coercive nature of measurement. It all ties into the incentivization and the general vibe of 'If this number isn't the right number, you've been doing it wrong'. I think we all feel it. Certainly in terms of Velocity or Cycle Time we must all feel some amount of pressure. I know I do.
Every human being responds differently to pressure. Every human being is motivated by different things, and every collection of human beings struggles to agree on what will collectively motivate them. Let's face it: metrics either motivate or demotivate a team, and that is a problem no matter which way you look at it.
So what if we tried this: instead of the Scrummaster or Delivery Lead or CIO telling us what metrics they will measure us with, we tell them what we will allow to be measured?
Crazy, right? Well, try this on: if we can agree that as humans we have gathered to achieve a goal, _and_ we can agree that in order to get there we need to monitor our progress toward that goal, then why can we not agree on how to measure it? If we change our mindset to one of good intention, that is, we all intend to reach the goal, then selecting metrics to measure our progress should be easier, if not painless.
I think the real hook, though, has to be the two agreements. If we cannot agree that we are here to achieve a goal, and the same goal, this all falls apart. The second agreement seems intuitive to me, but maybe it isn't so for others. But given the first agreement, I think the second one is just a negotiation.
05 April 2017
In retrospect, I never should have taped my ankles
When I was in high school I played varsity soccer for four years. Sometime in my sophomore year I started taping my ankles as protection against injury, because I saw another player break his ankle during a game. It seemed like a really good idea at the time.
It wasn't until years later that I realized something. Once I started taping my ankles my effectiveness decreased. We were a small school and didn't have trainers or anything like that, so I was doing my own tape job, as were my teammates. I'd guess that a trainer would have told me I was doing it wrong. What I think happened was that I lost some degree of flexibility in my ankles and that took some of the punch out of my kicks. I went from a 60 yard line-drive to a 40 yard lob and I could never quite recover.
In the grand scheme of things I think I still played OK. Though around my senior year I got so big and slow that the coaches used me more like a battering ram than a choice fullback. So Saturday night, for whatever reason, it occurred to me that the issue was the tape. I made a fundamental change in how I played the game and the consequence was decreased performance. What is really tragic is that I didn't realize it until thirty years later. (A problem I'll address elsewhere: feedback loops.)
So, how does this relate to software development? Well, I put on tape to protect myself from injury, just as we have smoke tests and acceptance tests to protect us from failed delivery. But sometimes those things create a blind spot. For example, if you have a fully automated build and deploy mechanism with a full suite of tests to ensure that you have not delivered a dud, you can make changes and grow your software with impunity. BUT, what if, due to a lack of discipline or pure happenstance, a feature gets added to the system with insufficient coverage? That is, the core of the feature exists but the coverage isn't complete or well maintained. Now you have a suite of tests that prove that most things work, but not all of them. As an outsider to the process you might just blithely assume that since the CI job is green, all is well.
I see a lot of projects overly reliant on their test suites to tell them that things are OK. In the end that has led them into deep trouble. At least one thing that happens is, after several months of decay, when a problem is detected, the team cannot find the missing test. They are top-heavy on their testing pyramid, and when they dig in to find the gaps they become hopelessly entangled in the minutiae, never to see a resolution without massive rewrites of the test suite.
Of course, the inverse is also true. I've seen projects that rely on unit tests so heavily that they have no acceptance tests at all. They at least can rely on those unit tests to tell them when things are broken, but they have no vision of the integrated system functioning correctly. A topic for another day, maybe.
One answer I have found to this problem is mandatory exploratory testing. Have every developer take 15 minutes a day and go play with the application. Go in, click around, test out some features. Try intentionally stupid things to see how the system behaves. As you find issues, write them down (or put them on a Trello board), and then go back to what you were doing. The project leadership team can then take all the output from those sessions and process it: triage genuine defects, toss the cards deemed unworthy, and cycle the work through to the team in upcoming iterations.
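If the findings live on a Trello board, capturing one doesn't have to interrupt the session for long. Here is a minimal sketch against Trello's REST card-creation endpoint (the environment variable names, the list, and the sample finding are placeholders, and it assumes a Node runtime with a global fetch):

// log-finding.ts -- a minimal sketch for filing an exploratory-testing finding.
async function logFinding(title: string, notes: string): Promise<void> {
  const params = new URLSearchParams({
    key: process.env.TRELLO_KEY ?? "",
    token: process.env.TRELLO_TOKEN ?? "",
    idList: process.env.TRELLO_LIST_ID ?? "", // the team's exploratory-findings list
    name: title,
    desc: notes,
  });
  const res = await fetch(`https://api.trello.com/1/cards?${params}`, { method: "POST" });
  if (!res.ok) throw new Error(`Trello rejected the card: ${res.status}`);
}

logFinding(
  "Search chokes on trailing whitespace",
  "Found while clicking around the search page; repro: search for 'foo '."
).catch(console.error);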
03 April 2017
Impatience
Larry Wall once stated that a good programmer has three chief virtues: Laziness, Impatience, and Hubris.
I struggled with that quote for quite a while after I first heard it in 1994. Eventually I got his meaning, and now I can see how it relates to some of the things we do in the Agile/Lean community. Specifically, impatience.
We talk a lot about doing experiments, making things big and visible, and measuring things. Some of us even execute on those things with great regularity. Many more of us do not. It seems we spend a lot of time giving lip service to metrics and retrospectives and vastly less time actually acting on them.
As Captain Jack Sparrow would say, 'The problem is not the problem; the problem is your attitude about the problem.'
When we see problems we make up measures and experiments to try to solve them. It's a great idea, but I think we often wait too long to see if our solutions work. That is, we are too patient. What we should do is identify one or more solutions, pick the one that is most easily implemented, and then go do it. However, we need to do this with some amount of control; otherwise we're just flailing around like a fish on a hook.
First, we need to establish how we'll know we've resolved the problem. For example, if the problem is people not following coding standards, we know it's fixed when the number of formatting issues drops to zero.
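That measure should be cheap to take. As a sketch, assuming a Python codebase checked with flake8 (any linter that prints one violation per line works the same way), the measure is just a line count:

    # count_violations.py -- the baseline measure for the experiment
    import subprocess

    def violation_count(path="."):
        # flake8 prints one violation per line; empty output means zero
        result = subprocess.run(["flake8", path], capture_output=True, text=True)
        return len(result.stdout.splitlines())

    if __name__ == "__main__":
        print(violation_count())

Record the number today; that baseline is what the whole experiment gets judged against.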
Then we need a time box: by when will this problem be solved? Let's give it a week. We'll tackle this issue and have a solution in place by next Monday.
OK, we have the only two things we really need to define our solution. Now we can spitball a bunch of ideas about how to solve this problem in a week. Let's say we come up with this list:
1) Set up a static analysis tool that alerts us to violations when we build the software; CI should fail the build if there are violations (a minimal sketch of this appears after the list)
2) Have code reviews for every single commit/merge that examine the format of the code
3) Abandon the code formatting rules
4) Configure all the IDEs to format the code when saved
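For option one, the CI gate can be as small as this sketch, which reuses the counting script above (my invention, not a standard tool); the only trick is exiting nonzero so the build fails:

    # ci_format_gate.py -- option 1: fail the build if any violations exist
    import sys
    from count_violations import violation_count

    count = violation_count()
    if count > 0:
        print(f"Formatting gate: {count} violation(s) found; failing the build.")
        sys.exit(1)   # a nonzero exit code fails most CI jobs
    print("Formatting gate: clean.")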
So, given those four options, which one can we implement in a week? Probably all of them, though number one might be hard to do that quickly depending on your language of choice and tooling.
Which of the remaining options seems most likely to succeed? Clearly number three is a non-starter; if code formatting weren't an issue we wouldn't be having this conversation.
So we have two real choices, code reviews or automation. Can our IDE format code for us? Probably, so let's try that one first.
Now, I picked a pretty simple (but all too common) problem to illustrate how you'd get to a resolution, and I doubt any of this was a revelation to anyone. But we're not done yet. Once we have put this change in place, we need to check back and see if it worked. So, next Monday we look at the count of formatting issues. If the number is lower than when we started, we're on track. If it's already zero, we're done. If by some chance the number went up, we need to stop and try again. But if we don't check the result, the experiment was for naught.
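That Monday check can be equally dumb. A sketch, assuming we saved the baseline count when we started (the number here is hypothetical):

    # check_experiment.py -- did the fix work? compare against the baseline
    from count_violations import violation_count

    BASELINE = 37   # hypothetical count recorded last Monday

    current = violation_count()
    if current == 0:
        print("Done: zero violations.")
    elif current < BASELINE:
        print(f"On track: {BASELINE} -> {current} violations.")
    else:
        print(f"Regressed: {BASELINE} -> {current}; stop and try again.")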
One of the things I see many teams do is respond to the conditions of their environment but never follow through on their attempts at resolving them. This may be OK in the small world of a project. Using the above example: violations of the code formatting rules reach zero, everyone implicitly accepts this, and they all keep working. The consequences are probably zero. But what if the issue were more complicated? Defects escaping into production, failures to respond to operations requests, production outages, or the cost of operations climbing due to the proliferation of empty 250GB databases? To resolve any such issue we actually have to remember to conclude the process by going back and looking at the results of our experiments, and the sooner the better. Thus, impatience, or at least one aspect of it.
Getting feedback quickly, and responding to it, is critical to success.