DevOps Is The New Agile

In The Structure of Scientific Revolutions, Thomas Kuhn argues that science is not a steady accumulation of facts and theories, but rather a sequence of stable periods interrupted by revolutions.

During such revolutions, the dominant paradigm breaks down under the accumulated weight of anomalies it can’t explain, until a new paradigm emerges that can.

We’ve seen similar paradigm shifts in the field of information technology. For hardware, we’re now at the Third Platform.

For software, we’ve had several generations of programming languages and we’ve seen different programming paradigms, with reactive programming gaining popularity lately.

The Rise of Agile

We’ve seen a revolution in software development methodology as well, where the old Waterfall paradigm was replaced by Agile. The anomalies in this case were summarized as the software crisis, as documented by the Chaos Report.

The Agile Manifesto was clearly a revolutionary pamphlet:

We are uncovering better ways of developing software by doing it and helping others do it.

It was written in 2001 and originally signed by 17 people. It’s interesting to see what software development methods they were involved with:

  1. Adaptive Software Development: Jim Highsmith
  2. Crystal: Alistair Cockburn
  3. Dynamic Systems Development Method: Arie van Bennekum
  4. eXtreme Programming: Kent Beck, Ward Cunningham, Martin Fowler, James Grenning, Ron Jeffries, Robert Martin
  5. Feature-Driven Development: Jon Kern
  6. Object-Oriented Analysis: Stephen Mellor
  7. Scrum: Mike Beedle, Ken Schwaber, Jeff Sutherland
  8. Andrew Hunt, Brian Marick, and Dave Thomas were not associated with a specific method

Only two of the seven methods were represented by more than one person: eXtreme Programming (XP) and Scrum. Coincidentally, these are the only ones we still hear about today.

Agile Becomes Synonymous with Scrum

Scrum is the clear winner in terms of market share, to the point where many people don’t know the difference between Agile and Scrum.

I think there are at least two reasons for that: naming and ease of adoption.

Decision makers in environments where nobody ever gets fired for buying IBM are usually not looking for something that is “extreme”. And “programming” is for, well, other people. On the other hand, Scrum is a term borrowed from sports, and we all know how executives love using sport metaphors.

[BTW, the term “extreme” in XP comes from the idea of turning the dials of some useful practices up to 10. For example, if code reviews are good, let’s do it all the time (pair programming). But Continuous Integration is not nearly as extreme as Continuous Delivery and iterations (time-boxed pushes) are mild compared to pull systems like Kanban. XP isn’t so extreme after all.]

Scrum is easy to get started with: you can certifiably master it in two days. Part of this is that Scrum has fewer mandated practices than XP.

That’s also a danger: Scrum doesn’t prescribe any technical practices, even though technical practices are important. The technical practices support the management practices and are the foundation for a culture of technical excellence.

The software craftsmanship movement can be seen as a reaction to this lack of attention to the technical side. For me, paying attention to obviously important technical practices is simply being a good software professional.

The (Water)Fall Of Scrum

The jury is still out on whether management-only Scrum is going to win completely, or whether the software craftsmanship movement can bring technical excellence back into the picture. This may be more important than it seems at first.

Since Scrum focuses only on management issues, developers may largely keep doing what they were doing in their Waterfall days. This ScrumFall seems to have become the norm in enterprises.

No wonder that many Scrum projects don’t produce the expected benefits. The late majority and laggards may take that opportunity to completely revert back to the old ways and the Agile Revolution may fail.

In fact, several people have already proclaimed that Agile is dead and are talking about a post-Agile world.

Some think that software craftsmanship should be the new paradigm, but I’m not buying that.

Software craftsmanship is all about the code and too many people simply don’t care enough about code. Beautiful code that never gets deployed, for example, is worthless.

Beyond Agile with DevOps

Speaking of deploying software, the DevOps movement may be a more likely candidate to take over the baton from Agile. It’s based on Lean principles, just like Agile was. Actually, DevOps is a continuation of Agile outside the development team. I’ve even seen the term agile DevOps.

So what makes me think DevOps won’t share the same fate as Agile?

First, DevOps looks at the whole software delivery value stream, whereas Agile confined itself to software development. This means DevOps can’t remain in the developer’s corner; for DevOps to work, it has to have support from way higher up the corporate food chain. And executive support is a prerequisite for real, lasting change.

Second, the DevOps movement from the beginning has placed a great deal of emphasis on culture, which is where I think Agile failed most. We’ll have to see whether DevOps can really do better, but at least the topic is on the agenda.

Third, DevOps puts a lot of emphasis on metrics, which makes it easier to track its success and helps to break down silos.

Fourth, the Third Platform virtually requires DevOps, because Systems of Engagement call for much more rapid software delivery than Systems of Record.

Fifth, with the number of security breaches spiraling out of control, the ability to quickly deploy fixes becomes the number one security measure. The combination of DevOps and Security is referred to as Rugged DevOps or DevOpsSec.

What do you think? Will DevOps succeed where Agile failed? Please leave a comment.

The Nature of Software Development

I’ve been developing software for a while now. Over fifteen years professionally, and quite some time before that too. One would think that when you’ve been doing something for a long time, you would develop a good understanding of what it is that you do. Well, maybe. But every once in a while I come across something that deepens my insight, that takes it to the next level.

I recently came across an article (from 1985!) that does just that, by providing a theory of what software development really is. If you have anything to do with developing software, please read the article. It’s a bit dry and terse at times, but please persevere.

The article promotes the view that, in essence, software development is a theory building activity. This means that it is first and foremost about building a mental model in the developer’s mind about how the world works and how the software being developed handles and supports that world.

This contrasts with the more dominant manufacturing view that sees software development as an activity where some artifacts are to be produced, like code, tests, and documentation. That’s not to say that these artifacts are not produced, since obviously they are, but rather that that’s not the real issue. If we want to be able to develop software that works well, and is maintainable, we’re better off focusing on building the right theories. The right artifacts will then follow.

This view has huge implications for how we organize software development. Below, I will discuss some things that we need to do differently from what you might expect based on the manufacturing view. The funny thing is that we’ve known that for some time now, but for different reasons.

We need strong customer collaboration to build good theories
Mental models can never be externalized completely; some subtleties always get lost when we try. That’s why requirements as documents don’t work. It’s not just that they will change (Agile), or that hand-offs are a waste (Lean); no, they fundamentally cannot work! So we need the customer around to clarify the subtle details, and we need to go see how the end users work in their own environment to build the best possible theories.

Since this is expensive, and since a customer would not like having to explain the same concept over and over again to different developers, it makes sense to try and capture some of the domain knowledge, even though we know we can never capture every subtle detail. This is where automated acceptance tests, for example as produced in Behavior Driven Development (BDD), can come in handy.
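
As a sketch of what such an executable specification might look like, here is a pytest-style example in Python; the discount rule and all names are invented for illustration, not taken from any real customer:

```python
# A hypothetical acceptance test capturing one piece of domain knowledge:
# "orders of 100 euro or more get a 10% discount" (an example rule, not
# from any real project). The Given/When/Then comments mirror the BDD style.

def apply_discount(order_total):
    """The discount rule as the team currently understands it."""
    if order_total >= 100:
        return order_total * 0.9
    return order_total


def test_large_orders_get_ten_percent_discount():
    # Given an order of exactly 100 euro
    order_total = 100
    # When the discount rule is applied
    discounted = apply_discount(order_total)
    # Then the customer pays 90 euro
    assert discounted == 90


def test_small_orders_pay_full_price():
    # Given an order below the threshold, nothing changes
    assert apply_discount(99) == 99
```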

Software developers should share the same theory
It’s of paramount importance that all the developers on the team share the same theory, or else they will develop things that make no sense to their team members and that will not integrate well with their team members’ work. Having the same theory is helped by using the same terminology. Developers should be careful not to let the meaning of the terms drift away from how the end users use them.

The need for a shared theory means that strong code ownership is a bad idea. Weak ownership could work, but we’d better take measures to prevent silos from forming. Code reviews are one candidate for this. The best approach, however, is collective code ownership, especially when combined with pair programming.

Note that eXtreme Programming (XP) makes the shared theory explicit through its system metaphor concept. This is the XP practice that I’ve always struggled most with, but now it’s finally starting to make sense to me.

Theory building is an essential skill for software developers
If software development is first and foremost about building theories, then software developers had better be good at it. So it makes sense to teach them this. I’m not yet aware of how to best do this, but one obvious place to start is the scientific method, since building theories is precisely what scientists do (even though some dispute that the scientific method is actually how scientists build theories). This reminds me of the relationship between the scientific method and Test-Driven Development (TDD).
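
As a rough sketch of that parallel, a single TDD cycle can be read as hypothesis, experiment, and revised theory; the leap-year example below is purely illustrative:

```python
# One TDD cycle, read as the scientific method:
# 1. Hypothesis: "a year divisible by 4 is a leap year" -> write a test.
# 2. Experiment: run the tests; the year 1900 falsifies the hypothesis.
# 3. Revised theory: century years are only leap years if divisible by 400.

def is_leap_year(year):
    """The current 'theory' of leap years, refined by the failing test below."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0


def test_ordinary_leap_years():
    assert is_leap_year(2024)


def test_century_years_are_usually_not_leap_years():
    # This is the 'anomaly' that forced the theory to be refined.
    assert not is_leap_year(1900)
    assert is_leap_year(2000)
```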

Another promising approach is Domain Driven Design, since that places the domain model at the center of software development. Please let me know if you have more ideas on this subject.
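
As a minimal illustration of what placing the domain model at the center can look like in code, here is a sketch using made-up insurance terms (Policy, Premium); the renewal rule is invented for the example:

```python
# 'Ubiquitous language' in code: the terms from conversations with the
# customer become the types the team shares, so the code and the theory
# use the same words. All names and rules here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Premium:
    """A yearly premium in euros (illustrative value object)."""
    amount: float


@dataclass
class Policy:
    """A policy, named exactly as the domain experts name it."""
    holder: str
    premium: Premium

    def renew(self, raise_percentage: float) -> Premium:
        """Renewal raises the premium; the rule itself is made up."""
        return Premium(self.premium.amount * (1 + raise_percentage / 100))


policy = Policy(holder="Alice", premium=Premium(500.0))
assert round(policy.renew(4).amount, 2) == 520.0
```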

Software developers are not interchangeable resources
If the most crucial part of software development is the building of a theory, then you can’t simply replace one developer with another, since the new girl needs to start building her theory from scratch, and that takes time. This is another reason why you can’t add people to a late project without making it later still, as (I wish!) we all know.

Also, developers with a better initial theory are better suited to work on a new project, since they require less theory building. That’s why it makes sense to use developers with pre-existing domain knowledge, if possible. Conversely, that’s also why it makes sense for a developer to specialize in a given domain.

We should expect designs to undergo significant changes
Taking a hint from science, we see that theories sometimes go through radical changes. Now, if the software design is a manifestation of the theory in the developers’ mind, then we can expect the design to undergo radical changes as well. This argues against too much design up front, and for incremental design.

Maintenance should be done by the original developers
Some organizations hand off software to a maintenance team once its initial development is done. From the perspective of software development as theory building, that doesn’t make a whole lot of sense. The maintenance team doesn’t have the theories that the original developers built, and will likely make modifications that don’t fit that theory and are therefore not optimal.

If you do insist on separate maintenance teams, then the maintainers should be phased into the team before the initial stage ends, so they have access to people who already have well-formed theories and can learn from them.

Software developers should not be shared between teams
For productivity reasons, software developers shouldn’t divide their attention between multiple projects. But the theory building view of software development gives another perspective. It’s hard enough to build one theory at a time, especially if one is also learning other stuff, like a new technology. Developers really shouldn’t need to learn too much in parallel, or they may feel like their head is going to explode.

The Art of Pair Programming

Pair programming (PP) is one of the eXtreme Programming (XP) practices. When doing pair programming, two programmers share a mouse and keyboard while they write code. One of the two, the driver, types, while the other, the navigator, reviews the code and thinks ahead. The two switch roles often. This is another example of XP ‘turning the knobs to 10’: if reviewing code is good, then we should do it all the time.

There is a good post on PP by Rod Hilton. He starts out with

When I first heard I’d be pairing at the new job, I was a bit apprehensive and skeptical.

I can relate to that. At a former company I worked for, I introduced XP. I had just read XP Explained: Embrace Change and thought it could improve our software development process. But pair programming was the one practice I was apprehensive about. The thought of someone watching over your shoulder all day made me feel uneasy. Yet the book made it clear that PP is a foundational practice, so I tried to keep an open mind.

I shouldn’t have worried. Rod explains that PP has nothing to do with “looking over a shoulder”:

The way pair programming works in practice is quite a bit different than I imagined it. […] When doing Test-Driven Development, one of the things we do is called “ping-pong pairing”. So the other developer will write a test, then make it compile but fail. Then he passes the keyboard to me. I implement the feature just enough to make the test pass, then I write another failing test and pass it back.

I didn’t know about ping-pong pairing when we started out doing XP. In fact, Rod’s post introduced it to me. It seems like a really good technique, and I’d love to try it out.
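
To get a feel for the rhythm, here is a rough sketch of what a ping-pong session might produce; the FizzBuzz kata and the A/B attributions are made up for illustration, not taken from Rod’s post:

```python
# Illustrative reconstruction of a ping-pong session on the FizzBuzz kata.
# Developer A writes a failing test, developer B makes it pass and writes
# the next failing test, then the keyboard goes back to A, and so on.

def fizzbuzz(n):
    # Each branch was added only when a failing test demanded it.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


def test_plain_numbers_come_back_as_strings():   # written by A
    assert fizzbuzz(1) == "1"


def test_multiples_of_three_are_fizz():          # written by B
    assert fizzbuzz(3) == "Fizz"


def test_multiples_of_five_are_buzz():           # written by A
    assert fizzbuzz(5) == "Buzz"


def test_multiples_of_fifteen_are_fizzbuzz():    # written by B
    assert fizzbuzz(15) == "FizzBuzz"
```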

Also, the “all day” part is a misconception:

In reality, you don’t sit down at a desk with another person and work all day with them. You pair up for tasks. […] When the task is done and code is checked in, the pair breaks up. Generally tasks only have 2 hour estimates (sometimes less) so you’re only pairing for the two hours. Then you pair up with someone else to work on another task. […] We take breaks often because getting away from the code for a few minutes helps keep a fresh perspective when you come back to the task.

When I did XP, we paired up for about three hours at a time. We all came in at different times, so PP time generally started at about 9:30 AM and lasted till lunch. Then we’d switch pairs. Those of us who started early also left early (Sustainable Pace), so there was another session of about three hours in the afternoon.

Pairing for about six hours a day is really enough. PP is pretty intense! There is no chance to let your mind wander off a bit, to look out of the window for more than a couple of seconds or to read e-mail. You have to stay focused all the time.

Advantages

So why bother at all with PP? Rod explains:

I see pairing work so well every day that I consider my career prior to my current job to have consisted mostly of wasting time. […] When you have a second person working with you, you find that you try harder to code well. You’re far, far less likely to be willing to apply duct tape to a problem, because someone else is working with you and he or she is more likely to object to the duct tape.

We all know the feeling, I guess. You know that you’re taking a shortcut, that there is a better way. But that would take more effort. When you’re pairing, your partner “keeps you honest”. You don’t want to look like a slouch, so you don’t even consider the shortcut.

This makes a big difference. In fact, while I consider myself a pretty good programmer, I must admit that the code I wrote when pairing was of considerably higher quality than the code I wrote solo. And that is true even when pairing with someone who I knew for sure was a lesser programmer than me!

The “keeping you honest” thingie is one reason for this. “Two brains think of more than one” is another. The interplay between the driver and navigator is a subtle one. When you, as the driver, see the navigator’s eyes go blank, you know your code is not as clear as it could be. And you fix it. You either stop to explain, at which point the navigator will likely suggest an improvement (like a better name for a class to better express its purpose), or you switch roles and let the navigator write better code.

Disadvantages

So why isn’t everybody pairing? What’s the catch?

Well, there are downsides. Rod mentions one:

Hygiene can be a serious problem. If one person smells, it’s rough to sit with them. I find myself going back to my desk often and the code suffers for it. If you’re pairing, take a shower, and hold your farts for your next bathroom trip. Just do it, you filthy pig.

I haven’t dealt with this problem much. One guy I paired with was a smoker, so whenever he came back from a smoke outside, I’d smell the smoke. I didn’t like it, but so what. We all have our faults. Pair programming is a partnership, and like in any partnership, you need to give and take a bit 😉 PP takes some time to get good at, but the basic skills should have been acquired in kindergarten.

Rod also mentions productivity:

there’s no denying that you’re producing fewer lines of code per day since only half your team is coding at any one point. I don’t consider that a bad thing, though (if you have two solutions that equally solve a problem, the solution with fewer lines of code is the superior one) because the QUALITY of the code that IS produced is so, so, so much higher.

This is an important point to stress. Pair programming seems wasteful, but it’s really not. The limiting factor when programming is not the speed with which you type, but the number of bugs you don’t introduce. There hasn’t been much scientific research into this, but there is one nice experiment by Alistair Cockburn and Laurie Williams that found that PP takes 15% more time but produces 15% fewer bugs, while yielding a significantly simpler design. My experiences confirm this.

The long-term benefits of these advantages cannot be overestimated. Each bug takes away future coding time, since it needs to be fixed. And more complex designs slow you down as well, because they take more time to understand and increase the chances of introducing bugs through misunderstandings.

Of course, there will always be nay-sayers. Most of them haven’t actually tried PP, because “that could never work”. Well, if you don’t want to learn, then don’t, but don’t bother me. If you did give PP a try and failed to get good results, I must press you to check whether you did PP properly. If you think you did and still didn’t like PP, then please leave a comment explaining why.