Book review – Software Security: Building Security In

Dr. Gary McGraw is an authority on software security who has written many security books. This book, Software Security: Building Security In, is the third in a series.

While Exploiting Software: How to Break Code focuses on the black hat side of security, and Building Secure Software: How to Avoid Security Problems the Right Way focuses on the white hat side, this book brings the two perspectives together.

Chapter 1, Defining a Discipline, explains the security problem that we have with software. It also introduces the three pillars of software security:

  1. Applied Risk Management
  2. Software Security Touchpoints
  3. Knowledge

Chapter 2, A Risk Management Framework, explains that security is all about risks and mitigating them. McGraw argues the need to embed this in an overall risk management framework to systematically identify, rank, track, understand, and mitigate security risks over time. The chapter also explains that security risks must always be placed in the larger business context. The chapter ends with a worked-out example.

Chapter 3, Introduction to Software Security Touchpoints, starts the second part of the book, which covers the touchpoints. McGraw uses this term to denote software security best practices. The chapter presents a high-level overview of the touchpoints and explains how both black and white hats must be involved to build secure software.

Chapter 4, Code Review with a Tool, introduces static code analysis with a specialized tool, like Fortify. A history of such tools is given, followed by a bit more detail about Fortify.

Chapter 5, Architectural Risk Analysis, shifts the focus from implementation bugs to design flaws, which account for about half of all security problems. Since design flaws can’t be found by a static code analysis tool, we need to perform a risk analysis based on architecture and design documents. McGraw argues that Microsoft mistakenly calls this threat modeling, but he seems to have lost that battle.

Chapter 6, Software Penetration Testing, explains that functional testing focuses on what software is supposed to do, and how that is not enough to guarantee security. We also need to focus on what should not happen. McGraw argues that this “negative” testing should be informed by the architectural risk analysis (threat modeling) to be effective. The results of penetration testing should be fed back to the developers, so they can learn from their mistakes.

Chapter 7, Risk-Based Security Testing, explains that while black box penetration testing is helpful, we also need white box testing. Again, this testing should be driven by the architectural risk analysis (threat modeling). McGraw also scorns eXtreme Programming (XP). Personally, I feel that this is based on some misunderstandings about XP.

Chapter 8, Abuse Cases, explains that requirements should not only specify what should happen, in use cases, but what should not, in abuse cases. Abuse cases look at the software from the point of view of an attacker. Unfortunately, this chapter is a bit thin on how to go about writing them.

Chapter 9, Software Security Meets Security Operations, explains that developers and operations people should work closely together to improve security. We of course already knew this from the DevOps movement, but security adds some specific focal points. Some people have recently started talking about DevOpsSec. This chapter is more modest, though, and mainly discusses the parts of software development that operations people can contribute to.

Chapter 10, An Enterprise Software Security Program, starts the final part of the book. It explains how to introduce security into the software development lifecycle to create a Security Development Lifecycle (SDL).

Chapter 11, Knowledge for Software Security, explains the need for catalogs of security knowledge. Existing collections of security information, like CVE and CERT, focus on vulnerabilities and exploits, but we also need collections for principles, guidelines, rules, attack patterns, and historical risks.

Chapter 12, A Taxonomy of Coding Errors, introduces the 7 pernicious kingdoms of coding errors that can lead to vulnerabilities:

  1. Input Validation and Representation
  2. API Abuse
  3. Security Features
  4. Time and State
  5. Error Handling
  6. Code Quality
  7. Encapsulation

It also mentions the kingdom Environment, but doesn’t give it number 8, since it focuses on things outside the code. For each kingdom, the chapter lists a collection of so-called phyla, each with a more narrowly defined scope. For instance, the kingdom Time and State contains the phylum File Access Race Condition. This chapter concludes with a comparison with other collections of coding errors, like the 19 Deadly Sins of Software Security and the OWASP Top Ten.
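
To make the kingdom/phylum distinction concrete, here is a minimal Java sketch of the File Access Race Condition phylum mentioned above. It is not from the book; the class and method names are invented for illustration.

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class SpoolFile {
    // File Access Race Condition: the existence check and the write are
    // separate steps, so an attacker can swap in a symlink between them
    // (a classic time-of-check/time-of-use, or TOCTOU, flaw).
    void writeUnsafe(Path path, String data) throws IOException {
        if (!Files.exists(path)) {          // time of check
            Files.writeString(path, data);  // time of use
        }
    }

    // Fixed: CREATE_NEW creates the file atomically and fails if it
    // already exists, closing the race window.
    void writeSafe(Path path, String data) throws IOException {
        try (Writer out = Files.newBufferedWriter(
                path, StandardOpenOption.CREATE_NEW)) {
            out.write(data);
        }
    }
}
```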

Chapter 13, Annotated Bibliography and References, provides a list of must-read security books and other recommended reading.

The book ends with four appendices: Fortify Source Code Analysis Suite Tutorial, ITS4 Rules, An Exercise in Risk Analysis: Smurfware, and Glossary. A trial version of Fortify is included with the book.

All in all, this book provides a very good overview for software developers who want to learn about security. In some cases it doesn’t go into enough detail for you to apply the described practices, but it teaches you enough to know where to dig deeper, and it ties everything together very nicely.

Software Development and Security

It seems that not many software developers are interested in security. One reason may be that security is a negative feature: when it works, nothing visible happens. Another could be that developers don’t see how security relates to their daily activities. Let’s look at a detailed example that sheds some light on this relationship.

Example: Crashing Tetris

My employer, EMC, takes security seriously. Besides the annual security awareness training that every employee has to take, software developers are required to take additional security courses, so that they understand the Security Development Lifecycle. In one of those courses, security guru Hugh Thompson tells the following story.

While on an airplane, he found a Tetris game in the on-board entertainment system. The game showed the next blocks to drop in a preview pane. The game’s settings had up and down buttons to increase or decrease the number of preview blocks.

Using the up button, the number could only be increased to four. However, using the telephone keypad, Thompson could enter 5 and get it accepted.

No higher digits were accepted from the telephone, but now that the number was five, the up button on the screen happily increased the number further.

He increased the number all the way up to 127. The next time he pressed the up button, the screen went black. And so did the screen next to him. And everywhere else in the plane. Zero availability.

Exploits Use Vulnerabilities, Which Come From Bugs

How did this happen? The answer is simple: there were some bugs in the application that were abused in a systematic manner. In the security world, such a bug is referred to as a vulnerability, and a technique that abuses one to undermine security is known as an exploit.

There is nothing inherently “security related” about vulnerabilities. In the example, the first mistake was that the two interfaces each had their own logic for manipulating the model, a clear violation of DRY. The second was the off-by-one error in the telephone interface. Next, the logic for the up button checked only for the exact boundary value four, rather than for four or anything larger. The final mistake was a missing check for integer overflow. These four more or less innocent bugs combined to form a vulnerability that Thompson exploited.
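
To see how the four bugs chain together, here is a minimal sketch in Java. It is a hypothetical reconstruction; the actual in-flight entertainment code was never published, and every name and detail below is invented for illustration.

```java
class PreviewSettings {
    static final int MAX_PREVIEW = 4;

    // Hypothetical: a signed 8-bit counter that can overflow past 127
    // (mistake #4 lurks here).
    byte previewBlocks = 1;

    // On-screen up button: guards only against the exact boundary
    // value, not against anything larger (mistake #3).
    void onUpButton() {
        if (previewBlocks != MAX_PREVIEW) {  // should be >= MAX_PREVIEW
            previewBlocks++;                 // 127 + 1 overflows to -128
        }
    }

    // Telephone keypad: a second, duplicated validation path
    // (mistake #1, the DRY violation) with an off-by-one in the range
    // check (mistake #2), so 5 is accepted.
    void onKeypadDigit(int digit) {
        if (digit <= MAX_PREVIEW + 1) {      // should be <= MAX_PREVIEW
            previewBlocks = (byte) digit;
        }
    }

    // Once previewBlocks is 5, onUpButton() happily increments it all
    // the way to 127; one more press overflows to -128, and code like
    // this brings down the screen.
    int[] buildPreviewPane() {
        return new int[previewBlocks];       // NegativeArraySizeException
    }
}
```

Note how each of these lines, taken on its own, looks like an ordinary quality bug; only the combination opens the door to the exploit.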

Certain bugs are more likely to lead to vulnerabilities than others. Two notorious examples are Buffer Overflow and SQL Injection. Luckily, many such bugs are easily prevented: good tools and a little awareness on the developer’s side go a long way.
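
For example, SQL Injection disappears once queries are parameterized. Here is a small JDBC sketch contrasting the two approaches; the table and column names are made up for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class UserLookup {
    // Vulnerable: concatenating user input into the SQL string lets an
    // attacker rewrite the query, e.g. name = "' OR '1'='1" returns
    // every row in the table.
    ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        String sql = "SELECT id, name FROM users WHERE name = '" + name + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safe: a parameterized query keeps the input strictly in the data
    // channel, so it can never alter the structure of the statement.
    ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt =
            conn.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```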

Conclusion: Fewer Bugs Mean More Security

If vulnerabilities come from bugs, then we need a relentless focus on preventing and eliminating bugs in order to make our applications more secure.

With that insight, we’re firmly back in the land of software development. Security isn’t the big scary monster we developers sometimes think it is.