Abuse Cases

Gary McGraw describes several best practices for building secure software. One is the use of so-called abuse cases. Since his chapter on abuse cases left me hungry for more information, this post examines additional literature on the subject and how to fit abuse cases into a Security Development Lifecycle (SDL).

Modeling Functional Requirements With Use Cases

Abuse cases are an adaptation of use cases, abstract episodes of interaction between a system and its environment.

A use case consists of a number of related scenarios. A scenario is a description of a specific interaction between the system and particular actors. Each use case has a main success scenario and some additional scenarios to cover variations and exceptional cases.

Actors are external agents, and can be either human or non-human.

For better understanding, each use case should state the goal that the primary actor is working towards.

Use cases are represented in UML diagrams as ovals connected to stick figures, which represent the actors. Use case diagrams are accompanied by textual use case descriptions that explain how the actors and the system interact.
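To make this structure concrete, here is a minimal Python sketch of the concepts above. The class layout and the ATM example are my own illustration, not a standard notation:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str            # a role, not an actual user, e.g. "Customer"
    human: bool = True   # actors can also be non-human external agents

@dataclass
class Scenario:
    description: str     # one specific interaction between system and actors

@dataclass
class UseCase:
    name: str
    primary_actor: Actor
    goal: str                        # what the primary actor is working towards
    main_success_scenario: Scenario
    extensions: list = field(default_factory=list)  # variations and exceptional cases

withdraw_cash = UseCase(
    name="Withdraw cash",
    primary_actor=Actor("Customer"),
    goal="Obtain cash from own account",
    main_success_scenario=Scenario(
        "Customer inserts card, enters correct PIN, selects amount; ATM dispenses cash"
    ),
    extensions=[Scenario("Customer enters wrong PIN three times; ATM retains the card")],
)
```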

Modeling Security Requirements With Abuse Cases

An abuse case is a use case where the results of the interaction are harmful to the system, one of the actors, or one of the stakeholders in the system. An interaction is harmful if it decreases the security (confidentiality, integrity, or availability) of the system.

Abuse cases are also referred to as misuse cases, although some people maintain they’re different. I think the two concepts are too similar to treat differently, so whenever I write “abuse case”, it refers to “misuse case” as well.

Some actors in regular use cases may also act as an attacker in an abuse case (e.g. in the case of an insider threat). We should then introduce a new actor to avoid confusion (potentially using inheritance). This is consistent with the best practice of having actors represent roles rather than actual users.

Attackers are described in more detail than regular actors, to make it easier to look at the system from their point of view. Their description should include the resources at their disposal, their skills, and their objectives.

Note that objectives are longer term than the (ab)use case’s goal. For instance, the attacker’s goal for an abuse case may be to gain root privileges on a certain server, while her objective may be industrial espionage.
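As a sketch of how an attacker could be modeled as a specialized actor, with the extra detail described above (the concrete values are made up):

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str  # a role, e.g. "Employee"

@dataclass
class Attacker(Actor):
    # Attackers get a richer description than regular actors.
    resources: list = field(default_factory=list)
    skills: list = field(default_factory=list)
    objective: str = ""  # long-term aim, e.g. "industrial espionage"

# The objective belongs to the attacker; the goal belongs to a single abuse case.
spy = Attacker(
    name="Industrial spy",
    resources=["funding", "rented botnet"],
    skills=["social engineering", "privilege escalation"],
    objective="industrial espionage",
)
abuse_case_goal = "Gain root privileges on the build server"
```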

Abuse cases differ from use cases in one important respect: while we know how the actor in a use case achieves her goal, we don’t know precisely how an attacker will break the system’s security. If we did, we would fix the vulnerability! Therefore, abuse case scenarios describe interactions less precisely than regular use case scenarios.

Modeling Other Non-Functional Requirements

Note that since actors in use cases needn’t be human, we can employ an approach similar to abuse cases, with actors like “network failure”, to model non-functional requirements beyond security, such as reliability, portability, and maintainability.

For this to work, one must be able to express the non-functional requirement as an interactive scenario. I won’t go into this topic any further in this post.

Creating Abuse Cases

Abuse case models are best created at the same time as use cases: during requirements gathering. It’s easiest to define the abuse cases after the regular use cases have been identified (or even fully defined).

Abuse case modeling requires one to wear a black hat. Therefore, it makes sense to invite people with black hat capabilities, like testers and network operators or administrators, to the table.

The first step in developing abuse cases is to find the actors. As stated before, every actor in a regular use case can potentially be turned into a malicious actor in an abuse case.

We should next add actors for different kinds of intruders. These are distinguished based on their resources and skills.

When we have the actors, we can identify the abuse cases by determining how these actors might interact with the system. We might identify such malicious interactions by combining the regular use cases with attack patterns.
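A naive but useful way to enumerate candidates is to cross every use case with every attack pattern and then judge each combination on plausibility. A small sketch, with made-up inputs:

```python
from itertools import product

# Illustrative inputs; a real effort would draw attack patterns from a catalog.
use_cases = ["Log in", "Withdraw cash", "Transfer money"]
attack_patterns = ["SQL injection", "session hijacking", "brute-force guessing"]

# Every (use case, attack pattern) pair is a candidate abuse case;
# a human still has to judge which combinations are plausible threats.
candidates = [
    f"Attacker applies {pattern} against '{use_case}'"
    for use_case, pattern in product(use_cases, attack_patterns)
]

for candidate in candidates:
    print(candidate)
```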

We can find more abuse cases by systematically and recursively combining the abuse cases found so far with the regular use cases.

Combining Use Cases and Abuse Cases

Some people keep use cases and abuse cases separate to avoid confusion. Others combine them, but display abuse cases as inverted use cases (i.e. black ovals with white text, and actors with black heads).

The latter approach makes it possible to relate abuse cases to use cases using UML associations. For instance, an abuse case may threaten a use case, while a use case might mitigate an abuse case. The latter use case is also referred to as a security use case. Security use cases usually deal with security features.

Security use cases can be threatened by new abuse cases, for which we can find new security use cases to mitigate, etc. etc. In this way, a “game” of play and counterplay unfolds that fits well in a defense in depth strategy.
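As an illustration of such a chain of play and counterplay, here is a sketch with hypothetical cases:

```python
# The play/counterplay chain as (source, relation, target) triples.
# All case names are made up for illustration.
relations = [
    ("Steal password in transit", "threatens", "Log in"),
    ("Encrypt traffic", "mitigates", "Steal password in transit"),
    ("Strip TLS via rogue proxy", "threatens", "Encrypt traffic"),
    ("Enforce HSTS", "mitigates", "Strip TLS via rogue proxy"),
]

for source, relation, target in relations:
    print(f"{source} --{relation}--> {target}")
```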

We should not expect to win this “game”. Instead, we should make a good trade-off between security requirements and other aspects of the system, like usability and development cost. Ideally, these trade-offs are made clearly visible to stakeholders by using a good risk management framework.

Reusing Abuse Cases

Use cases can be abstracted into essential use cases to make them more reusable. There is no reason we couldn’t do the same with abuse cases and security use cases.

It seems to me that this is not just possible, but already done. Microsoft’s STRIDE model contains generalized threats, and its SDL Threat Modeling tool automatically identifies which of those are applicable to your situation.
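For reference, STRIDE’s generalized threats map onto the security properties they violate. A small sketch of that mapping:

```python
# STRIDE's generalized threats, each paired with the security property it violates.
STRIDE = {
    "Spoofing": "authentication",
    "Tampering": "integrity",
    "Repudiation": "non-repudiation",
    "Information disclosure": "confidentiality",
    "Denial of service": "availability",
    "Elevation of privilege": "authorization",
}

for threat, violated in STRIDE.items():
    print(f"{threat} -> violates {violated}")
```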

Conclusion

Although abuse cases are a bit different from regular use cases, their main value is that they present information about security risks in a format that may already be familiar to the stakeholders of the software development process. Developers in particular are likely to know them.

This should make it easier for people with little or no security background to start thinking about securing their systems and how to trade off security and functionality.

However, it seems that threat modeling gives the same advantages as abuse cases. Since threat modeling is supported by tools, it’s little wonder that people prefer that over abuse cases for inclusion in their Security Development Lifecycle.

Book review – Software Security: Building Security In

Dr. Gary McGraw is an authority on software security who has written many security books. This book, Software Security: Building Security In, is the third in a series.

While Exploiting Software: How to Break Code focuses on the black hat side of security, and Building Secure Software: How to Avoid Security Problems the Right Way focuses on the white hat side, this book brings the two perspectives together.

Chapter 1, Defining a Discipline, explains the security problem that we have with software. It also introduces the three pillars of software security:

  1. Applied Risk Management
  2. Software Security Touchpoints
  3. Knowledge

Chapter 2, A Risk Management Framework, explains that security is all about risks and mitigating them. McGraw argues the need to embed this in an overall risk management framework to systematically identify, rank, track, understand, and mitigate security risks over time. The chapter also explains that security risks must always be placed in the larger business context. The chapter ends with a worked-out example.

Chapter 3, Introduction to Software Security Touchpoints, starts the second part of the book, about the touchpoints. McGraw uses this term to denote software security best practices. The chapter presents a high level overview of the touchpoints and explains how both black and white hats must be involved to build secure software.

Chapter 4, Code Review with a Tool, introduces static code analysis with a specialized tool, like Fortify. A history of such tools is given, followed by a bit more detail about Fortify.

Chapter 5, Architectural Risk Analysis, shifts the focus from implementation bugs to design flaws, which account for about half of all security problems. Since design flaws can’t be found by a static code analysis tool, we need to perform a risk analysis based on architecture and design documents. McGraw argues that Microsoft mistakenly calls this threat modeling, but he seems to have lost that battle.

Chapter 6, Software Penetration Testing, explains that functional testing focuses on what software is supposed to do, and how that is not enough to guarantee security. We also need to focus on what should not happen. McGraw argues that this “negative” testing should be informed by the architectural risk analysis (threat modeling) to be effective. The results of penetration testing should be fed back to the developers, so they can learn from their mistakes.

Chapter 7, Risk-Based Security Testing, explains that while black box penetration testing is helpful, we also need white box testing. Again, this testing should be driven by the architectural risk analysis (threat modeling). McGraw also scorns eXtreme Programming (XP). Personally, I feel that this is based on some misunderstandings about XP.

Chapter 8, Abuse Cases, explains that requirements should not only specify what should happen, in use cases, but what should not, in abuse cases. Abuse cases look at the software from the point of view of an attacker. Unfortunately, this chapter is a bit thin on how to go about writing them.

Chapter 9, Software Security Meets Security Operations, explains that developers and operations people should work closely together to improve security. We of course already knew this from the DevOps movement, but security adds some specific focal points. Some people have recently started talking about DevOpsSec. This chapter is a bit more modest, though, and mainly discusses which parts of software development operations people can contribute to.

Chapter 10, An Enterprise Software Security Program, starts the final part of the book. It explains how to introduce security into the software development lifecycle to create a Security Development Lifecycle (SDL).

Chapter 11, Knowledge for Software Security, explains the need for catalogs of security knowledge. Existing collections of security information, like CVE and CERT, focus on vulnerabilities and exploits, but we also need collections for principles, guidelines, rules, attack patterns, and historical risks.

Chapter 12, A Taxonomy of Coding Errors, introduces the 7 pernicious kingdoms of coding errors that can lead to vulnerabilities:

  1. Input Validation and Representation
  2. API Abuse
  3. Security Features
  4. Time and State
  5. Error Handling
  6. Code Quality
  7. Encapsulation

It also mentions the kingdom Environment, but doesn’t give it number 8, since it focuses on things outside the code. For each kingdom, the chapter lists a collection of so-called phyla, with more narrowly defined scope. For instance, the kingdom Time and State contains the phylum File Access Race Condition. This chapter concludes with a comparison with other collections of coding errors, like the 19 Deadly Sins of Software Security and the OWASP Top Ten.

Chapter 13, Annotated Bibliography and References, provides a list of must-read security books and other recommended reading.

The book ends with four appendices: Fortify Source Code Analysis Suite Tutorial; ITS4 Rules; An Exercise in Risk Analysis: Smurfware; and Glossary. A trial version of Fortify is included with the book.

All in all, this book provides a very good overview for software developers who want to learn about security. It doesn’t go into enough detail in some cases for you to be able to apply the described practices, but it does teach enough to know what to learn more about and it does tie everything together very nicely.