In this week's episode of Semaphore Uncut, I had the honor of speaking with author, consultant, and continuous delivery thought leader Dave Farley.
Dave, who has been in the industry for more than 30 years, was kind enough to share his experience as a strategic software development consultant, industry patterns (and anti-patterns) he has observed, best practices for setting up successful testing strategies, and more.
Highlights from this Episode
Darko Fabijan (00:02): Hello and welcome to Semaphore Uncut Podcast, where we talk about continuous delivery, continuous integration, testing, and developing software in general. Today with us we have Dave Farley.
Dave, thank you so much for joining us. Please feel free to go ahead and introduce yourself.
Dave Farley: It's a pleasure, thank you. My name is Dave Farley. I'm one of the authors of the Continuous Delivery book, which, I think, was the first to describe continuous delivery in the way people think about it these days.
These days I make a living as a consultant advising organizations on how to improve their software engineering practice in general, but specifically in the context of continuous delivery. So if you're a big organization with legacy systems, or a complicated build system, or anything else, I help people get over those sorts of problems.
A day in the life of a strategic software development consultant
Darko: Can you guide us through some examples of how day-to-day life looks as a software consultant helping companies on their journey?
Dave: Sure. One of my clients described what I do for a living as strategic consultancy. So I'm no longer the sort of consultant that goes in and kind of writes code for people. I used to do that, but that's not really what I do anymore. Mostly what I do these days is to advise organizations on broader topics.
And so mostly my consultancy kind of falls into three different groups of activities. I do quite a lot of public speaking, so I speak at conferences and things like that. And sometimes I get engaged by organizations to go in and talk to them and try and get them interested or enthusiastic about ideas around continuous delivery. That's a small part of what I do.
Occasionally, I do things like run training courses for people, but the bulk of my work is really about consultancy. So what I tend to do is go into an organization and try and analyze the way in which they practice software development, from soup to nuts.
We try and do some kind of value stream analysis and understand how their development process works, and then usually I kind of critique it. I'll give them advice about different parts of that, and that usually boils down to a bunch of different kinds of activities that they might carry out.
Patterns (and anti-patterns) in software development companies
Darko (03:00): Maybe it would be interesting to hear, since some of us may recognize ourselves in those categories: what are some of the anti-patterns you're seeing, and what are people struggling with the most?
Miscategorizing software development
Dave: Quite a lot of things. Let me philosophize for a moment to try and put that into context. I think that the biggest anti-pattern, the trillion-dollar mistake that our industry has made, is miscategorizing what software development is. Nearly all organizations (nearly all of my clients' organizations, anyway) try to treat software development as a production problem: a problem of production in the sense of being able to scale it up in order to produce things more reliably, for example.
And software development isn't that kind of problem. I think of waterfall development as the equivalent of a production-line approach, and software simply isn't that kind of problem.
Software development, in my eyes, is always an exercise in learning and discovery. So I think that, first and foremost, we should be optimizing our work to be really, really good at learning, discovery, experimentation, exploration, and those sorts of things. So I think that's the biggest anti-pattern. One of the common facets of that that I see in my clients' organizations: I think it's fair to say that agile thinking has, at some level, won the argument over the last 20 years for how to approach software development.
Misinterpreting agile software development
And so what I tend to see is lots of teams, technical teams in bigger organizations, practicing what they think of as agile software development. Usually what that means is that they're practicing some form of scrum. And usually what that means is that they're having stand-up meetings and they're working in things called sprints, but they're not delivering software at the end of the sprint.
The stand-up meetings are usually kind of status meetings, and there's very little or no automated testing. There's no kind of continuous planning, there's no kind of customer involvement. So it's not really scrum, let alone really agile or anything else.
Disconnecting business and development teams
The last one that I'll call out that is commonly broken is the interface between the business and the development teams. The story or requirements process is often inadequate.
I most commonly see these requirements expressed as technical instructions: do this thing, add this column to a database, refactor that component, those sorts of things. That's not really an effective way of organizing a development process, for lots of reasons. Some of them are obvious and some of them are subtle, but that's a poor interaction. It's sort of like trying to write code by remote control, and that's not an effective strategy.
So there's a whole bunch of things that people commonly get wrong, but I think the big one it all boils down to is this misapprehension: trying to treat our problem as though it's a production line when it is not. It's nothing like a production line. It's a creative, exploratory, and intensive problem.
The last part of that issue in big organizations is the other really hard problem in software development: not only is this a problem of exploration and learning, it's also a problem of managing and keeping complexity in check. Ultimately, in all of our work, we have to operate within the constraints of what fits inside a human being's head.
Therefore, we've got to treat very, very seriously ideas like modularity and coupling and separation of concerns. And that's true at an organizational level, at least as much as it's true at the technical level of the software that we build. And that's another area that traditional organizations comprehensively tend to get wrong.
Continuous delivery should be a standard engineering practice
Dave (09:31): I saw something on Twitter this week. I was involved in a conversation and somebody posted a challenge: could you name a single practice across our industry that we could consider to be standard? I thought hard about this and I couldn't, and then somebody cropped up and said version control.
I came across an organization last year that didn't use version control. I think using version control for some kinds of software is actually quite unusual. So if people are configuring production systems, often they don't use version control. Even something that fundamental, or what I would consider fundamental to doing a decent job, is not used across the board.
You couldn't say that about other professions. All surgeons wash their hands; there are no surgeons that don't wash their hands. I think we're an odd industry in that respect. And in part I think that boils down to not looking in the right places for how to solve these problems.
One of the reasons why I value continuous delivery and the thinking around it so highly is not because of my personal involvement. I don't think this is down to just having a personal connection with the idea; I genuinely don't believe that's true. What I do have a personal connection with, from my point of view, is the idea of the application of the scientific method. I think the scientific method is humanity's best problem-solving technique. And I believe, genuinely, that continuous delivery is an application of the scientific method to solving problems in software.
That means continuous delivery has a decent case to be considered a genuine engineering discipline for software. And if that were the case, the implication would be that people who practice continuous delivery do a better job than people who don't, because that's what engineering does. Engineering amplifies the effectiveness of craft, creativity, and understanding, and makes those things higher quality and more reliable. That's what happens in other disciplines.
So we ought to be able to observe that in software development, and the evidence is that that's what we see. If you read the Accelerate book, which looks at the State of DevOps reports, that's what the numbers tell us. They tell us that organizations that practice continuous delivery produce higher-quality software more quickly, the people working on it enjoy it more, and the organizations that practice it make more money. So those are pretty good measures, on the whole.
Setting up your testing strategy for a faster feedback loop
Darko (16:54): In the conversation we had previously, you mentioned testing strategy. What do you see as a successful testing strategy for getting to a fast feedback loop?
Dave: Yeah, by all means. So the mental model I have when I think of this is based on a real-life project. I was fortunate, while in the middle of writing the Continuous Delivery book, to be employed building one of the world's highest-performance financial exchanges. I was already immersed in the ideas of continuous delivery, so we built it on continuous delivery from scratch.
What we're trying to achieve is the fast, high-quality feedback that you're talking about. And what that means is that we try to get to the point where we can make a change and release it into production with a degree of confidence.
And there are a number of things that go into that. The first is the ability to weed out the bad changes as far as we can. The spine of the Continuous Delivery book is organized around an idea called a deployment pipeline, and the aim of a deployment pipeline is to organize the evaluation of any change that's destined for production and try to eliminate the bad changes.
This is another one of those things that we learned from science. We're trying to treat it as a falsification mechanism: if any test fails, we're going to reject the change.
Then we're going to commit that code. What we're looking for is feedback very quickly, so that we can get a sense of whether this was a good change or a bad change, with a fairly high level of confidence that if all of those tests pass, everything else will be okay.
Generally, I advise my clients that the target they should be aiming for is feedback in under five minutes with roughly an 80% level of confidence. That says a huge amount about the nature of the tests you can afford to run in that amount of time. It means they can't afford to be starting up another process, talking to a database, talking to a file system, talking to a message queue, or any of those things.
Really, we're talking about small, focused unit tests that are, on the whole, the output of test-first, test-driven development. That gives you lots of beautiful properties in terms of the impact on the design of your system, and it's a very powerful step in its own right.
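To make that concrete, here is a minimal sketch (not taken from the episode; the names are hypothetical) of the kind of small, isolated unit test Dave is describing. It never starts another process or touches a database, file system, or message queue, because the one external dependency is replaced by an in-memory fake:

```python
import unittest

# Hypothetical domain code: an order-total calculator that depends on a
# price source only through a narrow interface, so tests never touch I/O.
class InMemoryPrices:
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, sku):
        return self._prices[sku]

def order_total(price_source, items):
    """Sum the price of each (sku, quantity) pair."""
    return sum(price_source.price_of(sku) * qty for sku, qty in items)

class OrderTotalTest(unittest.TestCase):
    def test_totals_multiple_items(self):
        prices = InMemoryPrices({"apple": 30, "pear": 45})
        self.assertEqual(order_total(prices, [("apple", 2), ("pear", 1)]), 105)

    def test_empty_order_costs_nothing(self):
        self.assertEqual(order_total(InMemoryPrices({}), []), 0)

if __name__ == "__main__":
    unittest.main()
```

Tests in this style run in milliseconds, so thousands of them fit comfortably inside the five-minute feedback window Dave mentions.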
The limitations of test-driven development
Dave (19:54): Academic research suggests that just that kind of testing will eliminate something around 70-odd percent of production defects. So you're talking about a 10x improvement; this is one step in that direction. Eliminating seventy-odd percent of production defects means that you spend much less time chasing and fixing bugs in production. That's brilliant, and essentially all I've described so far is continuous integration.
The limitation of test-driven development is that it asks, "Does the software do what I, as a software developer, intended it to do?" And it just verifies that. It's a bit like double-entry bookkeeping, but for software. That's brilliant, but it's not enough. You also need user-centered tests: you need to evaluate the change from the perspective of the user of the system.
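One common way to express that user perspective is an acceptance test written in the language of the user rather than the implementation. The sketch below is only an illustration under assumed names (ExchangeApp, place_order, and open_orders are hypothetical, not from Dave's exchange): the test reads as a small user scenario driven through the application's public entry point.

```python
import unittest

# Hypothetical application code driven through the same entry point a user
# (or a thin UI/API layer) would use, so the test reads as a user scenario.
class ExchangeApp:
    def __init__(self):
        self._orders = []

    def place_order(self, account, side, quantity, price):
        order_id = len(self._orders) + 1
        self._orders.append((order_id, account, side, quantity, price))
        return order_id

    def open_orders(self, account):
        return [o for o in self._orders if o[1] == account]

class PlaceOrderAcceptanceTest(unittest.TestCase):
    def test_trader_sees_their_order_after_placing_it(self):
        app = ExchangeApp()
        order_id = app.place_order("alice", "BUY", quantity=10, price=101.5)
        open_orders = app.open_orders("alice")
        self.assertEqual(len(open_orders), 1)
        self.assertEqual(open_orders[0][0], order_id)

if __name__ == "__main__":
    unittest.main()
```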
What people get wrong about microservices
Darko (26:19): There has been quite a lot of talk over the last couple of years about microservices. What are your thoughts on microservices in general? Is it something you think has high potential value?
Dave: I like the microservices approach, but actually I think that it's an expression of some deeper principles. I talked earlier about software being essentially an exercise in learning and managing complexity. And I think that what microservices does is it gives you a little help on both of those fronts, but there are other strategies that you can also use to keep those things in check. I think if you understand that's what's going on, it makes you better understand why microservices are important.
The reason that I say this is because I think that lots of people get microservices wrong. I think microservices are important. First, as you point out, each service is simple and therefore it's relatively easy to reason about. That's a big step forward in terms of being able to fit things into people's heads.
The other attribute of microservices, though, that's really important is the degree to which they decouple organizationally. The reason why Amazon first took the step to microservices wasn't so much about fitting stuff into people's heads as it was about liberating different parts of the organization to work more independently. And I think that's a crucially, vitally important strategy, and you want to be able to do that. What that means is you don't get to test all of the microservices together before you release them.
We talked about anti-patterns earlier on. One of the anti-patterns that I see very commonly in large organizations is that everybody has read about microservices, and it's kind of an obviously good idea when you read about it; it makes an awful lot of sense. But as with lots of ideas, what we tend to do is adopt the idea but miss out the bits that sound hard or difficult to think about, and just go for the easy bits.
I see lots of organizations that write these tightly coupled components that they call microservices, but to my mind they aren't, because they're not independently deployable; they can't even imagine being able to deploy them without testing them together. That independent deployability is a key value, and a key complexity, of microservices.
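One technique teams often use to stay independently deployable without a joint test of everything is contract testing. It isn't something discussed in the episode, so treat the following as a hedged sketch under assumed names (ORDER_STATUS_CONTRACT and get_order_status are hypothetical): the consumer team records the response shape it relies on, and the provider team verifies it honors that shape in its own pipeline, so neither side needs a combined end-to-end test before release.

```python
import unittest

# Hypothetical contract shared between a consumer team and a provider team:
# the consumer records the shape of the response it relies on, and the
# provider verifies it can satisfy that shape - no joint end-to-end test.
ORDER_STATUS_CONTRACT = {
    "request": {"path": "/orders/{id}/status"},
    "response_fields": {"order_id": int, "status": str},
}

# Provider-side stand-in for the real service handler (illustrative only).
def get_order_status(order_id):
    return {"order_id": order_id, "status": "FILLED"}

class ProviderHonoursContractTest(unittest.TestCase):
    def test_response_contains_fields_the_consumer_relies_on(self):
        response = get_order_status(42)
        for field, field_type in ORDER_STATUS_CONTRACT["response_fields"].items():
            self.assertIn(field, response)
            self.assertIsInstance(response[field], field_type)

if __name__ == "__main__":
    unittest.main()
```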
PS. Be on the lookout for Dave's upcoming book!
Darko (33:37): You mentioned working on a new book, and what software development will look like in a hundred years, which is quite intriguing. Can you share more on when your book will be available and what it's about?
Dave: Sure. I'm probably talking about this a bit early in the life of the book. I've written about half of it so far. I'm just preparing to maybe release it on Leanpub to get some feedback, so if people want to track where I'm up to, perhaps the best thing they can do is follow me on Twitter and I'll announce it there. My handle on Twitter is @DaveFarley77.
The working title, and I'm probably going to change this, is Modern Software Engineering, which is a bit of a grandiose title. And, yes, at some level the idea of what software development will look like in a hundred years' time is related to that. It's one of the spinoffs I'm thinking about in the book.
Darko: Again, thank you very much for this conversation, and good luck with your book. We'll be sure to share the link to your Twitter account so people can discover the book as soon as you decide to publish something and can give you feedback.
Dave: Thank you very much!