Building Both Security and Quality In

An important practice in a Security Development Lifecycle (SDL) is feeding information about vulnerabilities back to the developers.

This post relates that practice to the Agile practice of No Bugs.

The Security Incident Response

Even though we work hard to ship our software without security vulnerabilities, we never succeed 100%.

When an incident is reported (hopefully responsibly), we execute our security response plan. We must be careful to fix the issue without introducing new problems.

Next, we should also look for issues similar to the one reported. Chances are that other parts of the application contain the same kind of weakness. We should find and fix those as part of the same security update.

Finally, we should do a root cause analysis to determine why this weakness slipped through the cracks in the first place. Armed with that knowledge, we can adapt our process to make sure that similar issues will not occur in the future.

From Security To Quality

The process outlined above works well for making our software ever more secure.

But security weaknesses are essentially just bugs. Security issues may have more severe consequences than regular bugs, but regular bugs are also expensive to fix once the software is deployed.

So it actually makes sense to treat all bugs, security or otherwise, the same way.

As the saying goes, an ounce of prevention is worth a pound of cure. Just as we need to build security in, we also need to build quality in.

Building Quality In Using Agile Methods

This has been known in the Agile and Lean communities for a long time. For instance, James Shore wrote about it in his excellent book The Art Of Agile Development, and Elisabeth Hendrickson thinks there should be so few bugs that they don't need triaging.

Some people object to the Zero Defects mentality, claiming that it’s unrealistic.

There is, however, clear evidence of much lower defect rates for Agile development teams. Many Lean implementations also report successes in their quest for Zero Defects.

So there is at least anecdotal evidence that a very significant reduction of defects is possible.

This will require change, of course. Testers need to change and so do developers. And then everybody on the team needs to speak the same language and work together as a single team instead of in silos.

If we do this well, we’ll become bug exterminators that delight our customers with software that actually works.

Book Review: The Security Development Lifecycle (SDL)

In The Security Development Lifecycle (SDL), A Process for Developing Demonstrably More Secure Software, authors Michael Howard and Steven Lipner explain how to build secure software through a repeatable process.

The methodology they describe was developed at Microsoft and has led to a measurable decrease in vulnerabilities. That’s why it’s now also used elsewhere, like at EMC (my employer).

Chapter 1, Enough is Enough: The Threats have Changed, explains how the SDL was born out of the Trustworthy Computing initiative that started with Bill Gates’ famous email in early 2002. Most operating systems have since become relatively secure, so hackers have shifted their focus to applications and the burden is now on us developers to crank up our security game. Many security issues are also privacy problems, so if we don’t, we are bound to pay the price.

Chapter 2, Current Software Development Methods Fail to Produce Secure Software, reviews current software development methods with regard to how (in)secure the resulting applications are. It shows that the adage "given enough eyeballs, all bugs are shallow" is wrong when it comes to security. The conclusion is that we need to explicitly include security in our development efforts.

Chapter 3, A Short History of the SDL at Microsoft, describes how security improvement efforts at Microsoft evolved into a consistent process that is now called the SDL.

Chapter 4, SDL for Management, explains that the SDL requires time, money, and commitment from senior management to prioritize security over time to market. We're talking real commitment, like delaying the release of an insecure application.

Chapter 5, Stage 0: Education and Awareness, starts the second part of the book, that describes the stages of the SDL. It all starts with educating developers about security. Without this, there’s no real chance of delivering secure software.

Chapter 6, Stage 1: Project Inception, sets the security context for the development effort. This includes assigning someone to guide the team through the SDL, building security leaders within the team, and setting up security expectations and tools.

Chapter 7, Stage 2: Define and Follow Best Practices, lists common secure design principles and describes attack surface analysis and attack surface reduction. The latter is about reducing the amount of code accessible to untrusted users, for example by disabling certain features by default.
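As a concrete illustration of the off-by-default principle, here is a minimal sketch in C. The service names are hypothetical, not from the book: the point is that the code behind a disabled feature is unreachable by untrusted users until someone deliberately opts in.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical service entry points; the names are illustrative only. */
static void start_core_service(void)          { puts("core service up"); }
static void start_remote_admin_listener(void) { puts("admin listener up"); }

struct app_config {
    bool remote_admin_enabled;   /* defaults to off */
};

void start_services(const struct app_config *cfg) {
    start_core_service();

    /* Off by default: the network-facing admin interface only runs when
       explicitly enabled, so its code is unreachable by untrusted users. */
    if (cfg->remote_admin_enabled)
        start_remote_admin_listener();
}

int main(void) {
    struct app_config cfg = { .remote_admin_enabled = false };
    start_services(&cfg);
    return 0;
}
```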

Chapter 8, Stage 3: Product Risk Assessment, shows how to determine the application’s level of vulnerability to attack and its privacy impact. This helps to determine what level of security investment is appropriate for what parts of the application.

Chapter 9, Stage 4: Risk Analysis, explains threat modeling. The authors think that this is the practice with the most significant contribution to an application’s security. The idea is to understand the potential threats to the application, the risks those threats pose, and the mitigations that can reduce those risks. Threat models also help with code reviews and penetration tests. The chapter uses a pet shop website as an example.

[Note that there is now a tool that helps you with threat modeling. In this tool, you draw data flow diagrams, after which the tool uses the STRIDE approach to automatically find threats. The tool requires Visio 2007+.]
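For readers unfamiliar with the acronym, STRIDE names six threat categories; a simple C enum captures them for reference:

```c
/* The six STRIDE threat categories. */
enum stride_threat {
    SPOOFING,                 /* pretending to be another user or component */
    TAMPERING,                /* unauthorized modification of data or code */
    REPUDIATION,              /* denying having performed an action */
    INFORMATION_DISCLOSURE,   /* exposing data to parties not meant to see it */
    DENIAL_OF_SERVICE,        /* degrading or blocking legitimate use */
    ELEVATION_OF_PRIVILEGE    /* gaining rights that were never granted */
};
```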

Chapter 10, Stage 5: Creating Security Documents, Tools, and Best Practices for Customers, describes the collateral that helps customers install, maintain, and use your application securely.

Chapter 11, Stage 6: Secure Coding Policies, explains the need for prescribing security-specific coding practices, educating developers about them, and verifying that they are adhered to. This is a high-level chapter, with details following in later chapters.

Chapter 12, Stage 7: Secure Testing Policies, describes the various forms of security testing, like fuzz testing, penetration testing, and run-time verification.
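To give a feel for fuzz testing, here is a minimal harness in the style of LLVM's libFuzzer (a tool that postdates the book, but the idea is the same). parse_record is a hypothetical function under test; the harness is compiled with clang -fsanitize=fuzzer, and the fuzzer then bombards it with generated inputs.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical parser under test; in a real project this would be
   whatever code handles untrusted input. */
extern int parse_record(const uint8_t *data, size_t len);

/* libFuzzer calls this entry point with generated inputs and watches
   for crashes, hangs, and sanitizer violations. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;   /* non-zero return values are reserved */
}
```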

Chapter 13, Stage 8: The Security Push, explains that the goal of a security push is to hunt for security bugs and triage them. Fixes should follow the push. A security push doesn't really fit into the SDL, since the SDL aims to prevent vulnerabilities in the first place. It can, however, be useful for legacy (i.e. pre-SDL) code.

Chapter 14, Stage 9: The Final Security Review, describes how to assess (from a security perspective) whether the application is ready to ship. A questionnaire is filled out to show compliance with the SDL, the threat models are reviewed, and unfixed security bugs are reviewed to make sure none are critical.

Chapter 15, Stage 10: Security Response Planning, explains that you need to be prepared to respond to the discovery of vulnerabilities in your deployed application, so that you can prevent panic and follow a solid process. You should have a Security Response Center outside your development team that interfaces with security researchers and others who discover vulnerabilities and guides the development team through the process of releasing a fix. It’s also important to feed back lessons learned into the development process.

Chapter 16, Stage 11: Product Release, explains that the actual release is a non-event, since all the hard work was done in the Final Security Review.

Chapter 17, Stage 12: Security Response Execution, describes the real-world challenges associated with responding to reported vulnerabilities, including when and how to deviate from the plan outlined in Security Response Planning. Above all, you must take the time to fix the root problem properly and to make sure you’re not introducing new bugs.

Chapter 18, Integrating SDL with Agile Methods, starts the final part of the book. It shows how to incorporate agile practices into the SDL, or the other way around.

Chapter 19, SDL Banned Function Calls, explains that some functions are so bad from a security perspective, that they never should be used. This chapter is heavily focused on C.
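strcpy is a well-known example of a banned call. Here is a minimal sketch of the kind of replacement the chapter has in mind; I use snprintf for the bounded copy, though the exact substitutes the book recommends vary by function.

```c
#include <stdio.h>
#include <string.h>

/* Banned: strcpy writes past dst whenever src is longer than the buffer. */
void copy_name_unsafe(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Safer: a bounded copy that always NUL-terminates and truncates
   instead of overflowing. */
void copy_name_safer(char *dst, size_t dstlen, const char *src) {
    snprintf(dst, dstlen, "%s", src);
}
```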

Chapter 20, SDL Minimum Cryptographic Standards, gives guidance on the use of cryptography, like never roll your own, make the use of crypto algorithms configurable, and what key sizes to use for what algorithms.
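A minimal sketch of the "configurable" idea: algorithm choices and minimum key sizes live in a policy table rather than being hard-coded at call sites, so they can be tightened later without touching the code that uses them. The specific numbers below are common present-day guidance, not necessarily the book's exact minimums, and the actual cryptography is delegated to a vetted library.

```c
/* Hypothetical crypto policy table; encryption itself is performed
   by a vetted library, never implemented in-house. */
struct crypto_policy {
    const char *algorithm;
    int min_key_bits;
};

static const struct crypto_policy policy[] = {
    { "AES",     128 },   /* symmetric encryption */
    { "RSA",    2048 },   /* signatures and key transport */
    { "SHA-256", 256 },   /* hashing; prefer over MD5 and SHA-1 */
};
```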

Chapter 21, SDL-Required Tools and Compiler Options, describes security tools you should use during development. This chapter is heavily focused on Microsoft technologies.

Chapter 22, Threat Tree Patterns, shows a number of threat trees that reflect common attack patterns. It follows the STRIDE approach again.

The appendix has information about the authors.

I think this book is a must-read for every developer who is serious about building secure software.

Book review – Software Security: Building Security In

Dr. Gary McGraw is an authority on software security who has written many security books. This book, Software Security: Building Security In, is the third in a series.

While Exploiting Software: How to Break Code focuses on the black hat side of security, and Building Secure Software: How to Avoid Security Problems the Right Way focuses on the white hat side, this book brings the two perspectives together.

Chapter 1, Defining a Discipline, explains the security problem that we have with software. It also introduces the three pillars of software security:

  1. Applied Risk Management
  2. Software Security Touchpoints
  3. Knowledge

Chapter 2, A Risk Management Framework, explains that security is all about risks and mitigating them. McGraw argues the need to embed this in an overall risk management framework to systematically identify, rank, track, understand, and mitigate security risks over time. The chapter also explains that security risks must always be placed into the larger business context. The chapter ends with a worked out example.

Chapter 3, Introduction to Software Security Touchpoints, starts the second part of the book, about the touchpoints. McGraw uses this term to denote software security best practices. The chapter presents a high level overview of the touchpoints and explains how both black and white hats must be involved to build secure software.

Chapter 4, Code Review with a Tool, introduces static code analysis with a specialized tool, like Fortify. A history of such tools is given, followed by a bit more detail about Fortify.

Chapter 5, Architectural Risk Analysis, shifts the focus from implementation bugs to design flaws, which account for about half of all security problems. Since design flaws can't be found by a static code analysis tool, we need to perform a risk analysis based on architecture and design documents. McGraw argues that Microsoft mistakenly calls this threat modeling, but he seems to have lost that battle.

Chapter 6, Software Penetration Testing, explains that functional testing focuses on what software is supposed to do, and how that is not enough to guarantee security. We also need to focus on what should not happen. McGraw argues that this “negative” testing should be informed by the architectural risk analysis (threat modeling) to be effective. The results of penetration testing should be fed back to the developers, so they can learn from their mistakes.

Chapter 7, Risk-Based Security Testing, explains that while black box penetration testing is helpful, we also need white box testing. Again, this testing should be driven by the architectural risk analysis (threat modeling). McGraw also scorns eXtreme Programming (XP). Personally, I feel that this is based on some misunderstandings about XP.

Chapter 8, Abuse Cases, explains that requirements should not only specify what should happen, in use cases, but what should not, in abuse cases. Abuse cases look at the software from the point of view of an attacker. Unfortunately, this chapter is a bit thin on how to go about writing them.

Chapter 9, Software Security Meets Security Operations, explains that developers and operations people should work closely together to improve security. We of course already knew this from the DevOps movement, but security adds some specific focal points. Some people have recently started talking about DevOpsSec. This chapter is a bit more modest, though, and talks mainly about how operations people can contribute to the various parts of software development.

Chapter 10, An Enterprise Software Security Program, starts the final part of the book. It explains how to introduce security into the software development lifecycle to create a Security Development Lifecycle (SDL).

Chapter 11, Knowledge for Software Security, explains the need for catalogs of security knowledge. Existing collections of security information, like CVE and CERT, focus on vulnerabilities and exploits, but we also need collections for principles, guidelines, rules, attack patterns, and historical risks.

Chapter 12, A Taxonomy of Coding Errors, introduces the 7 pernicious kingdoms of coding errors that can lead to vulnerabilities:

  1. Input Validation and Representation
  2. API Abuse
  3. Security Features
  4. Time and State
  5. Error Handling
  6. Code Quality
  7. Encapsulation

It also mentions the kingdom Environment, but doesn't give it number 8, since it focuses on things outside the code. For each kingdom, the chapter lists a collection of so-called phyla, each with a more narrowly defined scope. For instance, the kingdom Time and State contains the phylum File Access Race Condition. The chapter concludes with a comparison with other collections of coding errors, like the 19 Deadly Sins of Software Security and the OWASP Top Ten.
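To make one phylum concrete, here is a classic File Access Race Condition (time-of-check to time-of-use) in C, with a safer variant; the function names are illustrative:

```c
#include <fcntl.h>
#include <unistd.h>

/* Racy: the file can be swapped (e.g. for a symlink to a sensitive
   file) between the access() check and the open() call. */
int open_log_racy(const char *path) {
    if (access(path, W_OK) != 0)
        return -1;
    return open(path, O_WRONLY | O_APPEND);
}

/* Safer: skip the separate check, refuse symlinks, and let the kernel
   perform the permission check atomically as part of open() itself. */
int open_log_safer(const char *path) {
    return open(path, O_WRONLY | O_APPEND | O_NOFOLLOW);
}
```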

Chapter 13, Annotated Bibliography and References, provides a list of must-read security books and other recommended reading.

The book ends with four appendices: Fortify Source Code Analysis Suite Tutorial, ITS4 Rules, An Exercise in Risk Analysis: Smurfware, and Glossary. A trial version of Fortify is included with the book.

All in all, this book provides a very good overview for software developers who want to learn about security. It doesn’t go into enough detail in some cases for you to be able to apply the described practices, but it does teach enough to know what to learn more about and it does tie everything together very nicely.

Software Development and Security

It seems that not many software developers are interested in security. One reason may be that security is a negative feature. Another could be that developers don’t see how security relates to their daily activities. Let’s look at a detailed example that sheds some light on this relation.

Example: Crashing Tetris

My employer, EMC, takes security seriously. Besides the annual security awareness training that every employee has to take, software developers are required to take additional security courses, so that they understand the Security Development Lifecycle. In one of those courses, security guru Hugh Thompson tells the following story.

While on an airplane, he found a Tetris game in the on-board entertainment system. The game showed the next blocks to drop in a preview pane. The game’s settings had up and down buttons to increase or decrease the number of preview blocks.

Using the up button, the number could only be increased to four. However, using the telephone keypad, Thompson could enter 5 and get it accepted.

No higher digits were accepted from the telephone, but now that the number was five, the up button on the screen happily increased the number further.

He increased the number all the way up to 127. The next time he pressed the up button, the screen went black. And so did the screen next to him. And everywhere else in the plane. Zero availability.

Exploits Use Vulnerabilities, Which Come From Bugs

How did this happen? The answer is simple: the application contained some bugs that were abused in a systematic manner. In the security world, such a bug is referred to as a vulnerability, and the abuse of one to compromise security is known as an exploit.

There is nothing inherently “security related” about vulnerabilities. In the example, the first mistake was that the two interfaces each had their own logic for manipulating the model, a clear violation of DRY. The second was the off-by-one error in the telephone interface. Next, the logic for the up button only checked for the specific boundary value four, instead of for four and anything larger. The final mistake was a missing check for integer overflow. These four more or less innocent bugs combined to form a vulnerability that Thompson exploited.
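A speculative reconstruction in C shows how the four bugs compose. Nothing here is from the actual in-flight system; I'm assuming a signed 8-bit counter, which would explain the blackout right after 127.

```c
#include <stdint.h>

static int8_t preview_count = 1;   /* signed 8-bit: maximum value 127 */

/* On-screen up button: guards only the exact value 4, not ">= 4",
   and has no overflow check. */
void up_button_pressed(void) {
    if (preview_count != 4)        /* should be: preview_count < 4 */
        preview_count++;           /* 127 + 1 wraps around to -128 */
}

/* Telephone keypad: duplicated logic (the DRY violation) with an
   off-by-one in the bound. */
void keypad_digit_pressed(int digit) {
    if (digit <= 5)                /* should be: digit <= 4 */
        preview_count = (int8_t)digit;
}
```

Once the keypad sets the count to 5, the `!= 4` guard never fires again, so the up button can push the counter all the way to 127 and over the edge.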

Certain bugs are more likely to lead to vulnerabilities than others. Two notorious examples are Buffer Overflow and SQL Injection. Luckily, many such bugs are easily prevented. Good tools and a little awareness on the side of the developer go a long way.
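For example, SQL Injection is prevented by binding user input as a parameter instead of splicing it into the query string. Here is a minimal sketch using SQLite's C API; the table and column names are made up for illustration.

```c
#include <sqlite3.h>

/* The user-supplied name is bound as data, so it cannot alter the
   structure of the SQL statement. */
int find_user_id(sqlite3 *db, const char *name) {
    sqlite3_stmt *stmt;
    int id = -1;

    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);

    sqlite3_finalize(stmt);
    return id;
}
```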

Conclusion: Fewer Bugs Mean More Security

If vulnerabilities come from bugs, then we need a relentless focus on preventing and eliminating bugs in order to make our applications more secure.

With that insight, we’re firmly back in the land of software development. Security isn’t the big scary monster we developers sometimes think it is.