Software Engineering in 2016

I write computer programs for a living. Many people therefore refer to me as a software engineer, but that term has always made me uncomfortable.

Mary Shaw of the Software Engineering Institute explains where that unease comes from: our industry doesn’t meet the standards for engineers set by real engineering fields like structural engineering.

Shaw defines engineering as “Creating cost-effective solutions to practical problems by applying codified knowledge, building things in the service of mankind.” (my italics) While we certainly do this some of the time in some places, I believe we are still quite a ways away from doing it all of the time everywhere.

Examples abound. Code is still being written without automated tests, without peer review, and without continuously integrating it with the code of fellow team members. Basic stuff, really.
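To make the first of those basics concrete, here’s what a minimal automated test can look like, using Python’s built-in unittest module; the add function is just a hypothetical stand-in for real production code.

```python
import unittest

def add(a: int, b: int) -> int:
    """Hypothetical stand-in for real production code."""
    return a + b

class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(5, add(2, 3))

if __name__ == "__main__":
    unittest.main()
```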

The State of DevOps 2015 report goes into some more advanced practices. It presents compelling evidence that these practices lead to better business results: high-performing companies in the study deploy code 30 times more often, have 60 times fewer failed changes, and recover from failure 168 times faster (Mean Time To Recovery, or MTTR) than low-performing companies.

Yes, you read that right: both a lower risk of failure and quicker recovery when things do go wrong!

So why aren’t we all applying this codified knowledge?

DevOps practices, as well as more advanced Agile practices like test-driven development and pair programming, require more discipline and that seems to be lacking in our field. Uncle Bob explains this lack of discipline by pointing to the population demographics in our field.

There still seems to be a strong belief that “quick and dirty” really is quicker. But in the long term it almost always ends up just being dirty, and often actually slower.

Yes, going into a production database and manually updating rows to compensate for a bug in the code can be a quick fix, but it also increases the risk of unanticipated things going wrong. Do it often enough, and something will go wrong, requiring fixes for the fixes, etc.
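As a sketch of the alternative, such a compensating fix can be captured in a small, reviewable script that runs in a single transaction and is safe to re-run. The orders table, the rounding bug it compensates for, and the SQLite database below are all hypothetical.

```python
import sqlite3

def apply_fix(conn: sqlite3.Connection) -> int:
    """Recompute totals corrupted by a (hypothetical) rounding bug."""
    with conn:  # one transaction: all rows are fixed, or none are
        cursor = conn.execute(
            "UPDATE orders SET total = subtotal + tax "
            "WHERE total <> subtotal + tax"
        )
        return cursor.rowcount  # idempotent: re-running changes 0 rows

if __name__ == "__main__":
    connection = sqlite3.connect("orders.db")  # hypothetical database
    print(f"fixed {apply_fix(connection)} rows")
```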

Following a defined process may seem slower than a quick hack, but it also reduces risk. If you can make the defined process itself quicker, then at some point there is little to gain from deviating from the process.

The high-performing DevOps companies from the aforementioned study prove that this is not a pipe dream. With a good Continuous Delivery process you can make fixing urgent critical bugs a non-event. This is good, because people make poorer decisions under stress.

This finding shouldn’t be news to us. It’s the same lesson that we learned with Continuous Integration. Following a process and doing it more often leads to an optimization of the process and an overall increase in productivity.

Until the majority in our field have learned to apply this and other parts of our codified knowledge, we should probably stop calling ourselves engineers.

 


DevOps Is The New Agile

In The Structure of Scientific Revolutions, Thomas Kuhn argues that science is not a steady accumulation of facts and theories, but rather a sequence of stable periods interrupted by revolutions.

During such revolutions, the dominant paradigm breaks down under the accumulated weight of anomalies it can’t explain, until a new paradigm emerges that can.

We’ve seen similar paradigm shifts in the field of information technology. For hardware, we’re now at the Third Platform.

For software, we’ve had several generations of programming languages and we’ve seen different programming paradigms, with reactive programming gaining popularity lately.

The Rise of Agile

We’ve seen a revolution in software development methodology as well, where the old Waterfall paradigm was replaced by Agile. The anomalies in this case were summarized as the software crisis, as documented by the Chaos Report.

The Agile Manifesto was clearly a revolutionary pamphlet:

We are uncovering better ways of developing software by doing it and helping others do it.

It was written in 2001 and originally signed by 17 people. It’s interesting to see what software development methods they were involved with:

  1. Adaptive Software Development: Jim Highsmith
  2. Crystal: Alistair Cockburn
  3. Dynamic Systems Development Method: Arie van Bennekum
  4. eXtreme Programming: Kent Beck, Ward Cunningham, Martin Fowler, James Grenning, Ron Jeffries, Robert Martin
  5. Feature-Driven Development: Jon Kern
  6. Object-Oriented Analysis: Stephen Mellor
  7. Scrum: Mike Beedle, Ken Schwaber, Jeff Sutherland

Andrew Hunt, Brian Marick, and Dave Thomas were not associated with a specific method.

Only two of the seven methods were represented by more than one person: eXtreme Programming (XP) and Scrum. Coincidentally, these are the only ones we still hear about today.

Agile Becomes Synonymous with Scrum

Scrum is the clear winner in terms of market share, to the point where many people don’t know the difference between Agile and Scrum.

I think there are at least two reasons for that: naming and ease of adoption.

Decision makers in environments where nobody ever gets fired for buying IBM are usually not looking for something that is “extreme”. And “programming” is for, well, other people. Scrum, on the other hand, is a term borrowed from sports, and we all know how executives love using sports metaphors.

[BTW, the term “extreme” in XP comes from the idea of turning the dials of some useful practices up to 10. For example, if code reviews are good, let’s do it all the time (pair programming). But Continuous Integration is not nearly as extreme as Continuous Delivery and iterations (time-boxed pushes) are mild compared to pull systems like Kanban. XP isn’t so extreme after all.]

Scrum is easy to get started with: you can certifiably master it in two days. Part of this is that Scrum has fewer mandated practices than XP.

That’s also a danger: Scrum doesn’t prescribe any technical practices, even though they are important. The technical practices support the management practices and form the foundation for a culture of technical excellence.

The software craftsmanship movement can be seen as a reaction to this lack of attention to the technical side. For me, paying attention to obviously important technical practices is simply being a good software professional.

The (Water)Fall Of Scrum

The jury is still out on whether management-only Scrum is going to win completely, or whether the software craftsmanship movement can bring technical excellence back into the picture. This may be more important than it seems at first.

Since Scrum focuses only on management issues, developers may largely keep doing what they were doing in their Waterfall days. This ScrumFall seems to have become the norm in enterprises.

No wonder that many Scrum projects don’t produce the expected benefits. The late majority and laggards may take that opportunity to revert completely to the old ways, and the Agile Revolution may fail.

In fact, several people have already proclaimed that Agile is dead and are talking about a post-Agile world.

Some think that software craftsmanship should be the new paradigm, but I’m not buying that.

Software craftsmanship is all about the code and too many people simply don’t care enough about code. Beautiful code that never gets deployed, for example, is worthless.

Beyond Agile with DevOps

Speaking of deploying software, the DevOps movement may be a more likely candidate to take over the baton from Agile. It’s based on Lean principles, just like Agile was. Actually, DevOps is a continuation of Agile outside the development team. I’ve even seen the term agile DevOps.

So what makes me think DevOps won’t share the same fate as Agile?

First, DevOps looks at the whole software delivery value stream, whereas Agile confined itself to software development. This means DevOps can’t remain in the developer’s corner; for DevOps to work, it has to have support from way higher up the corporate food chain. And executive support is a prerequisite for real, lasting change.

Second, the DevOps movement has placed a great deal of emphasis on culture from the beginning, which is where I think Agile failed most. We’ll have to see whether DevOps can really do better, but at least the topic is on the agenda.

Third, DevOps puts a lot of emphasis on metrics, which makes it easier to track its success and helps to break down silos.

Fourth, the Third Platform virtually requires DevOps, because Systems of Engagement call for much more rapid software delivery than Systems of Record.

Fifth, with the number of security breaches spiraling out of control, the ability to quickly deploy fixes becomes the number one security measure. The combination of DevOps and Security is referred to as Rugged DevOps or DevOpsSec.

What do you think? Will DevOps succeed where Agile failed? Please leave a comment.

Three Ways To Become a Better Software Professional

The other day InfoQ posted an article on software craftsmanship.

In my view, software craftsmanship is no more or less than being a good professional. Here are three main ways to become one.

1. See the Big Picture

Let’s start with why. Software rules the world and thus we rule the world. And we all know that with great power comes great responsibility.

Now, what is responsible behavior in this context?

It’s many things: delivering software that solves real needs, that works reliably, that is secure, that is a pleasure to use, and so on.

There is one constant in all these aspects: they change. Business needs evolve. New security threats emerge. New usability patterns come into fashion. New technology is introduced at breakneck speed.

The number one thing a software professional must do is to form an attitude of embracing change. We cope with change by writing programs that are easy to change.

Adaptability is not something we’ll explicitly see in the requirements; it’s silently assumed. We must nevertheless take our responsibility to bake it in.

Unfortunately, adaptability doesn’t find its way into our programs by accident. Writing programs that are easy to change is not easy itself; it requires a considerable amount of effort and skill. The skill of a craftsman.
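As a minimal sketch of what baking adaptability in can look like, the example below isolates the part most likely to change, a pricing rule, behind a small seam, so that new behavior means adding a function rather than editing existing ones; all names are hypothetical.

```python
from typing import Callable

# The pricing rule is the part we expect to change, so it gets a seam.
PricingRule = Callable[[float], float]

def standard_price(amount: float) -> float:
    return amount

def discounted_price(amount: float) -> float:
    return amount * 0.9  # hypothetical 10% promotion

def invoice_total(amounts: list[float], rule: PricingRule) -> float:
    # New pricing behavior is a new function, not an edit to this one.
    return sum(rule(amount) for amount in amounts)

print(invoice_total([10.0, 20.0], standard_price))    # 30.0
print(invoice_total([10.0, 20.0], discounted_price))  # 27.0
```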

2. Hone Your Skills

How do we acquire the required skills to keep our programs adaptable?

We need to learn. And the more we learn, the more we’ll find that there’s always more to learn. That should make us humble.

How do we learn?

By reading/watching/listening, by practicing, and by doing. We need to read a lot and go to conferences to infuse our minds with fresh ideas. We need to practice to put such new ideas to the test in a safe environment. Finally, we need to incorporate those ideas into our daily practices to actually profit from them.

BTW, I don’t agree with the statement in the article that

Programmers cannot improve their skills by doing the same exercise repeatedly.

One part of mastering a skill is building muscle memory, and that’s what katas like Roman Numerals are for. Athletes and musicians understand that all too well.
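For readers who haven’t met it, here is one possible Python take on the Roman Numerals kata; the value of the exercise lies in repeating it, often test-first, rather than in this particular implementation.

```python
# Greedy conversion: subtract the largest symbol value that still fits.
SYMBOLS = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(number: int) -> str:
    result = []
    for value, symbol in SYMBOLS:
        while number >= value:
            result.append(symbol)
            number -= value
    return "".join(result)

assert to_roman(1984) == "MCMLXXXIV"
assert to_roman(2016) == "MMXVI"
```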

But we must go even further. There is so much to learn that we’ll have to continuously improve our ability to do so to keep up. Learning to learn is a big part of software craftsmanship.

3. Work Well With Others

Nowadays software development is mostly a team sport, because we’ve pushed our programs to the point where they’re too big to build alone. We are part of a larger community, and the craftsmanship model emphasizes that.

There are both pros and cons to being part of a community. On the bright side, there are many people around us who share our interests and are willing to help us out, for instance in code retreats. The flip side is that we need to learn soft skills, like how to influence others or how to work in a team.

Being effective in a community also means our individually honed skills must work well with those of others. Test-Driven Development (TDD), for example, can’t successfully be practiced in isolation. An important aspect of a community is its culture, as the DevOps movement clearly shows.

To make matters even more interesting, we’re actually simultaneously part of multiple communities: our immediate team, our industry (e.g. healthcare), and our community of interest (e.g. software security or REST), to name a few. We should participate in each, understanding that every one of those communities has its own culture.

It’s All About the Journey

Software craftsmanship is not about becoming a master and then resting on your laurels.

While we should aspire to master all aspects of software development, we can’t hope to actually achieve it. It’s more about the journey than the destination. And about the fun we can have along the way.

How Friction Slows Us Down

I once joined a project where running the “unit” tests took three and a half hours.

As you may have guessed, the developers didn’t run the tests before they checked in code, resulting in a frequently red build.

Running the tests simply created too much friction for the developers.

I define friction as anything that resists the developer while she is producing software.

Since then, I’ve spotted friction in numerous places while developing software.

Friction in Software Development

Since friction impacts productivity negatively, it’s important that we understand it. Here are some of my observations:

  • Friction can come from different sources.
    It can result from your tool set, like when you have to wait for Perforce to check out a file over the network before you can edit it.
    Friction can also result from your development process, for example when you have to wait for the QA department to test your code before it can be released.
  • Friction can operate on different time scales.
    Some sources of friction slow you down a lot, while others are much more benign. For instance, waiting for the next set of requirements might keep you from writing valuable software for weeks.
    On the other hand, waiting for someone to review your code changes may take only a couple of minutes.
  • Friction can be more than simple delays.
    It also rears its ugly head when things are more difficult than they ought to be.
    In the vi editor, for example, you must switch between command and insert modes. Seasoned vi users are just as fast as users of editors that don’t have that separation, yet they have to keep track of which mode they are in, which imposes a higher cognitive load.

Lubricating Software Development

There has been a trend to decrease friction in software development.

Tools like Integrated Development Environments have eliminated many sources of friction.

For instance, Eclipse will automatically compile your code when you save it.

Automated refactorings decrease both the time and the cognitive load required to make certain code changes.
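For instance, an automated Extract Method refactoring turns the first shape below into the second in a few keystrokes, with the tool guaranteeing that behavior is preserved; the example itself is hypothetical.

```python
# Before: the validation rule is buried inside register().
def register(name: str, email: str) -> None:
    if "@" not in email or not name.strip():
        raise ValueError("invalid registration")
    save(name, email)

# After an automated Extract Method refactoring: same behavior,
# but the rule now has a name and a single home.
def validate(name: str, email: str) -> None:
    if "@" not in email or not name.strip():
        raise ValueError("invalid registration")

def register(name: str, email: str) -> None:
    validate(name, email)
    save(name, email)

def save(name: str, email: str) -> None:
    """Hypothetical persistence stub."""
    print(f"saved {name} <{email}>")
```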

On the process side, things like Agile development methodologies and the DevOps movement have eliminated or reduced friction. For instance, continuous deployment automates the release of software into production.

These lubricants have given us a fighting chance in a world of increasing complexity.

Frictionless Software Development

It’s fun to think about how far we could take these improvements, and what the ultimate Frictionless Development Environment (FDE) might look like.

My guess is that it would call for the combination of some of the same trends we already see in consumer and enterprise software products. Cloud computing will play a big role, as will simplification of the user interaction, and access from anywhere.

What do you think?

What friction have you encountered? Do you think friction is the same as waste in Lean?

What have you done to lubricate the friction away? What would your perfect FDE look like?

Please let me know your thoughts in the comments.

Book review – Software Security: Building Security In

Dr. Gary McGraw is an authority on software security who has written many security books. This book, Software Security: Building Security In, is the third in a series.

While Exploiting Software: How to Break Code focuses on the black hat side of security, and Building Secure Software: How to Avoid Security Problems the Right Way focuses on the white hat side, this book brings the two perspectives together.

Chapter 1, Defining a Discipline, explains the security problem that we have with software. It also introduces the three pillars of software security:

  1. Applied Risk Management
  2. Software Security Touchpoints
  3. Knowledge

Chapter 2, A Risk Management Framework, explains that security is all about risks and mitigating them. McGraw argues the need to embed this in an overall risk management framework to systematically identify, rank, track, understand, and mitigate security risks over time. The chapter also explains that security risks must always be placed into the larger business context. The chapter ends with a worked out example.

Chapter 3, Introduction to Software Security Touchpoints, starts the second part of the book, about the touchpoints. McGraw uses this term to denote software security best practices. The chapter presents a high level overview of the touchpoints and explains how both black and white hats must be involved to build secure software.

Chapter 4, Code Review with a Tool, introduces static code analysis with a specialized tool, like Fortify. A history of such tools is given, followed by a bit more detail about Fortify.

Chapter 5, Architectural Risk Analysis, shifts the focus from implementation bugs to design flaws, which account for about half of all security problems. Since design flaws can’t be found by a static code analysis tool, we need to perform a risk analysis based on architecture and design documents. McGraw argues that Microsoft mistakenly calls this threat modeling, but he seems to have lost that battle.

Chapter 6, Software Penetration Testing, explains that functional testing focuses on what software is supposed to do, and how that is not enough to guarantee security. We also need to focus on what should not happen. McGraw argues that this “negative” testing should be informed by the architectural risk analysis (threat modeling) to be effective. The results of penetration testing should be fed back to the developers, so they can learn from their mistakes.
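A minimal sketch of that distinction in Python: the functional test asserts what should happen, while the “negative” tests assert what must not. The authenticate function is a hypothetical stand-in, not an example from the book.

```python
import unittest

def authenticate(username: str, password: str) -> bool:
    """Hypothetical stand-in for the system under test."""
    return username == "alice" and password == "correct horse"

class LoginTests(unittest.TestCase):
    # Functional testing: does a valid login succeed?
    def test_valid_login_succeeds(self):
        self.assertTrue(authenticate("alice", "correct horse"))

    # Negative testing: do things that must never work really not work?
    def test_injection_style_username_is_rejected(self):
        self.assertFalse(authenticate("' OR '1'='1' --", "anything"))

    def test_empty_password_is_rejected(self):
        self.assertFalse(authenticate("alice", ""))

if __name__ == "__main__":
    unittest.main()
```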

Chapter 7, Risk-Based Security Testing, explains that while black box penetration testing is helpful, we also need white box testing. Again, this testing should be driven by the architectural risk analysis (threat modeling). McGraw also scorns eXtreme Programming (XP). Personally, I feel that this is based on some misunderstandings about XP.

Chapter 8, Abuse Cases, explains that requirements should not only specify what should happen, in use cases, but what should not, in abuse cases. Abuse cases look at the software from the point of view of an attacker. Unfortunately, this chapter is a bit thin on how to go about writing them.

Chapter 9, Software Security Meets Security Operations, explains that developers and operations people should work closely together to improve security. We of course already knew this from the DevOps movement, but security adds some specific focal points. Some people have recently started talking about DevOpsSec. This chapter is a bit more modest, though, and talks mainly about the parts of software development that operations people can contribute to.

Chapter 10, An Enterprise Software Security Program, starts the final part of the book. It explains how to introduce security into the software development lifecycle to create a Security Development Lifecycle (SDL).

Chapter 11, Knowledge for Software Security, explains the need for catalogs of security knowledge. Existing collections of security information, like CVE and CERT, focus on vulnerabilities and exploits, but we also need collections for principles, guidelines, rules, attack patterns, and historical risks.

Chapter 12, A Taxonomy of Coding Errors, introduces the 7 pernicious kingdoms of coding errors that can lead to vulnerabilities:

  1. Input Validation and Representation
  2. API Abuse
  3. Security Features
  4. Time and State
  5. Error Handling
  6. Code Quality
  7. Encapsulation

It also mentions the kingdom Environment, but doesn’t give it number 8, since it focuses on things outside the code. For each kingdom, the chapter lists a collection of so-called phyla with a more narrowly defined scope. For instance, the kingdom Time and State contains the phylum File Access Race Condition. The chapter concludes with a comparison to other collections of coding errors, like the 19 Deadly Sins of Software Security and the OWASP Top Ten.
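To illustrate that phylum, here is the classic time-of-check-to-time-of-use (TOCTOU) shape of a file access race condition, sketched in Python; the path and the attack scenario in the comments are hypothetical.

```python
import os

path = "/tmp/report.txt"  # hypothetical path

# Vulnerable: the file can be swapped (e.g. for a symlink to a
# sensitive file) between the access() check and the open() call.
if os.access(path, os.R_OK):
    with open(path) as f:  # time of use != time of check
        data = f.read()

# Less racy: drop the separate check and handle failure on open,
# so there is no window between checking and using the file.
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None
```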

Chapter 13, Annotated Bibliography and References, provides a list of must-read security books and other recommended reading.

The book ends with four appendices: Fortify Source Code Analysis Suite Tutorial; ITS4 Rules; An Exercise in Risk Analysis: Smurfware; and Glossary. A trial version of Fortify is included with the book.

All in all, this book provides a very good overview for software developers who want to learn about security. It doesn’t always go into enough detail for you to apply the described practices, but it teaches enough to know what to learn more about, and it ties everything together very nicely.