How To Start With Software Security

The software security field sometimes feels a bit negative.

The focus is on things that went wrong and people are constantly told what not to do.

Build Security In

One often-heard piece of advice is that you cannot bolt security on as an afterthought; it has to be built in.

But how do we do that? I’ve written earlier about two approaches: Cigital’s TouchPoints and Microsoft’s Security Development Lifecycle (SDL).

The Touchpoints are good, but rather high-level and not so actionable for developers starting out with security. The SDL is also good, but rather heavyweight and difficult to adopt for smaller organizations.

The Software Assurance Maturity Model (SAMM)

We need a framework that we can ease into in an iterative manner. It should also provide concrete guidance for developers who don't necessarily have much of a background in the security field.

Enter OWASP's SAMM:

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization.

SAMM assumes four business functions in developing software and assigns three security practices to each of those:

[Figure: the OpenSAMM security practices, three for each of the four business functions]

For each practice, three maturity levels are defined, in addition to an implicit Level 0 where the practice isn’t performed at all. Each level has an objective and several activities to meet the objective.

To get a baseline of the current security status, you perform an assessment, which consists of answering questions about each of the practices. The result of an assessment is a scorecard. Comparing scorecards over time gives insight into evolving security capabilities.

With these building blocks in place, you can build a roadmap for improving your capabilities.

A roadmap consists of phases in which certain practices are improved so that they reach a higher level. SAMM even provides roadmap templates for specific types of organizations to get you started quickly.

What Do You Think?

Do you think the SAMM is actionable? Would it help your organization build out a roadmap for improving its security capabilities? Please leave a comment.

The Lazy Developer’s Way to an Up-To-Date Libraries List

Last time I shared some tips on how to use libraries well. I now want to delve deeper into one of those: Know What Libraries You Use.

Last week I set out to create such a list of embedded components for our product. This is a requirement for our Security Development Lifecycle (SDL).

However, it’s not a fun task. As a developer, I want to write code, not update documents! So I turned to my friends Gradle and Groovy, with a little help from Jenkins and Confluence.

Gradle Dependencies

We use Gradle to build our product, and Gradle maintains the dependencies we have on third-party components.

Our build defines copyBundleConfigurations, a list of the names of the configurations that hold our embedded components, which we use to copy those components to the distribution directory. From there, I get to the external dependencies using Groovy's collection methods:

def externalDependencies() {
  copyBundleConfigurations.collectMany { 
    configurations[it].allDependencies 
  }.findAll {
    !(it instanceof ProjectDependency) && it.group &&
        !it.group.startsWith('com.emc')
  }
}

Adding Required Information

However, Gradle dependencies don’t contain all the required information.

For instance, we need the license under which the library is distributed, so that we can ask the Legal department for permission to use it.

So I added a simple XML file to hold the additional info. Combining that information with the dependencies that Gradle maintains is easy using Groovy’s XML support:

ext.embeddedComponentsInfo = 'embeddedComponents.xml'

def externalDependencyInfos() {
  def result = new TreeMap()
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().each { dependency ->
    def info = componentInfo.component.find { 
      it.id == "$dependency.group:$dependency.name" &&
          it.friendlyName?.text() 
    }
    if (!info.isEmpty()) {
      def component = [
        'id': info.id,
        'friendlyName': info.friendlyName.text(),
        'version': dependency.version,
        'latestVersion': info.latestVersion.text(),
        'license': info.license.text(),
        'licenseUrl': info.licenseUrl.text(),
        'approved': info.approved.text(),
        'comment': info.comment.text()
      ]
      result.put component.friendlyName, component
    }
  }
  result.values()
}
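
For reference, the embeddedComponents.xml file that this code parses looks something like this (the element names follow from the code above; the values here are made up):

<components>
  <component>
    <id>org.slf4j:slf4j-api</id>
    <friendlyName>SLF4J API</friendlyName>
    <latestVersion>1.7.5</latestVersion>
    <license>MIT License</license>
    <licenseUrl>http://www.slf4j.org/license.html</licenseUrl>
    <approved>yes</approved>
    <comment>Logging facade</comment>
  </component>
</components>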

I then created a Gradle task to write the information to an HTML file. Our Jenkins build executes this task, so that we always have an up-to-date list. I used Confluence’s html-include macro to include the HTML file in our Wiki.
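
The task itself is little more than a loop over externalDependencyInfos(); here is a rough sketch (the task and file names are invented) using Groovy's MarkupBuilder:

// Sketch of the report task; adjust the output location to your build layout.
task generateEmbeddedComponentsReport {
  def reportFile = file("$buildDir/reports/embeddedComponents.html")
  outputs.file reportFile
  doLast {
    reportFile.parentFile.mkdirs()
    reportFile.withWriter { writer ->
      def html = new groovy.xml.MarkupBuilder(writer)
      html.html {
        body {
          table {
            tr {
              th('Component'); th('Version'); th('Latest version'); th('License')
            }
            externalDependencyInfos().each { component ->
              tr {
                td(component.friendlyName)
                td(component.version)
                td(component.latestVersion)
                td {
                  a(href: component.licenseUrl, component.license)
                }
              }
            }
          }
        }
      }
    }
  }
}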

Now our Wiki is always up-to-date.

Automatically Looking Up Missing Information

The next problem was to populate the XML file with additional information.

Had we had this file from the start, adding that information manually would not have been a big deal. In our case, we already had over a hundred dependencies, so automation was in order.

First I identified the components that are missing the required information:

def missingExternalDependencies() {
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().findAll { dependency ->
    componentInfo.component.find { 
      it.id == "$dependency.group:$dependency.name" && 
          it.friendlyName?.text() 
    }.isEmpty()
  }.collect {
    "$it.group:$it.name"
  }.sort()
}

Next, I wanted to automatically look up the missing information and add it to the XML file (using Groovy’s MarkupBuilder). In case the required information can’t be found, the build should fail:

project.afterEvaluate {
  def missingComponents = missingExternalDependencies()
  if (!missingComponents.isEmpty()) {
    def manualComponents = []
    def writer = new StringWriter() 
    def xml = new groovy.xml.MarkupBuilder(writer)
    xml.expandEmptyElements = true
    println 'Looking up information on new dependencies:'
    xml.components {
      externalDependencyInfos().each { existingComponent ->
        component { 
          id(existingComponent.id)
          friendlyName(existingComponent.friendlyName)
          latestVersion(existingComponent.latestVersion)
          license(existingComponent.license)
          licenseUrl(existingComponent.licenseUrl)
          approved(existingComponent.approved)
          comment(existingComponent.comment)
        }
      }
      missingComponents.each { missingComponent ->
        def lookedUpComponent = collectInfo(missingComponent)
        component {
          id(missingComponent)
          friendlyName(lookedUpComponent.friendlyName)
          latestVersion(lookedUpComponent.latestVersion)
          license(lookedUpComponent.license)
          licenseUrl(lookedUpComponent.licenseUrl)
          approved('?')
          comment(lookedUpComponent.comment)
        }
        if (!lookedUpComponent.friendlyName || 
            !lookedUpComponent.latestVersion || 
            !lookedUpComponent.license) {
          manualComponents.add lookedUpComponent.id
          println '    => Please enter information manually'
        }
      }
    }
    writer.close()
    def embeddedComponentsFile = 
        project.file(embeddedComponentsInfo)
    embeddedComponentsFile.text = writer.toString()
    if (!manualComponents.isEmpty()) {
      throw new GradleException('Missing library information')
    }
  }
}

Anyone who adds a dependency in the future is now forced to add the required information.

So all that is left to implement is the collectInfo() method.

There are two primary sources that I used to look up the required information: the SpringSource Enterprise Bundle Repository holds OSGi bundle versions of common libraries, while Maven Central holds regular jars.

Extracting information from those sources is a matter of downloading and parsing XML and HTML files. This is easy enough with Groovy’s String.toURL() and URL.eachLine() methods and support for regular expressions.
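
To give an idea of what that looks like, here is a stripped-down sketch of collectInfo() that only consults Maven Central; the URLs follow Maven's standard repository layout, but treat this as an illustration rather than the exact code we run:

// Sketch only: look up the latest version and license of a 'group:name'
// coordinate in Maven Central. Error handling is reduced to a bare minimum.
def collectInfo(String coordinate) {
  def (group, name) = coordinate.tokenize(':')
  def result = ['id': coordinate, 'friendlyName': '', 'latestVersion': '',
      'license': '', 'licenseUrl': '', 'comment': '']
  def base = "https://repo1.maven.org/maven2/${group.replace('.', '/')}/$name"
  println "  $coordinate"
  try {
    def metadata = new XmlSlurper().parse("$base/maven-metadata.xml")
    def latest = metadata.versioning.latest.text() ?:
        metadata.versioning.release.text()
    result.latestVersion = latest
    def pom = new XmlSlurper().parse("$base/$latest/$name-${latest}.pom")
    result.friendlyName = pom.'name'.text()
    def license = pom.licenses.license[0]
    result.license = license.'name'.text()
    result.licenseUrl = license.url.text()
  } catch (ignored) {
    // leave the fields empty; the build will then ask for the information manually
  }
  result
}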

Conclusion

All of this took me a couple of days to build, but I feel that the investment is well worth it, since I no longer have to worry about the list of used libraries being out of date.

How do you maintain a list of used libraries? Please let me know in the comments.

Book review: Secure Programming with Static Analysis

One thing that should be part of every Security Development Lifecycle (SDL) is static code analysis.

This topic is explained in great detail in Secure Programming with Static Analysis.

Chapter 1, The Software Security Problem, explains why security is easy to get wrong and why typical methods for catching bugs aren’t effective for finding security vulnerabilities.

Chapter 2, Introduction to Static Analysis, explains that static analysis involves a software program checking the source code of another software program to find structural, quality, and security problems.

Chapter 3, Static Analysis as Part of Code Review, explains how static code analysis can be integrated into a security review process.

Chapter 4, Static Analysis Internals, describes how static analysis tools work internally and what trade-offs are made when building them.

This concludes the first part of the book, which describes the big picture. Part two deals with pervasive security problems.

Chapter 5, Handling Input, describes how programs should deal with untrustworthy input.

Chapter 6, Buffer Overflow, and Chapter 7, Bride of Buffer Overflow, deal with buffer overflows. These chapters are not as interesting for developers working with modern languages like Java or C#.

Chapter 8, Errors and Exceptions, talks about unexpected conditions and the link with security issues. It also covers logging and debugging.

Chapter 9, Web Applications, starts the third part of the book about common types of programs. This chapter looks at security problems specific to the Web and HTTP.

Chapter 10, XML and Web Services, discusses the security challenges associated with XML and with building up applications from distributed components.

Chapter 11, Privacy and Secrets, switches the focus from AppSec to InfoSec with an explanation of how to protect private information.

Chapter 12, Privileged Programs, continues with a discussion on how to write programs that operate with different permissions than the user.

The final part of the book is about gaining experience with static analysis tools.

Chapter 13, Source Code Analysis Exercises for Java, is a tutorial on how to use Fortify (a trial version of which is included with the book) on some sample Java projects.

Chapter 14, Source Code Analysis Exercises for C does the same for C programs.

Rating: five out of five.

This book is very useful for anybody working with static analysis tools. Its description of the internals of such tools helps with understanding how to apply the tools best.

I like that the book is filled with numerous examples that show how the tools can detect a particular type of problem.

Finally, the book makes clear that any static analysis tool will give both false positives and false negatives. You should really understand security issues yourself to make good decisions. When you know how to do that, a static analysis tool can be a great help.

Building Both Security and Quality In

One of the important things in a Security Development Lifecycle (SDL) is to feed back information about vulnerabilities to developers.

This post relates that practice to the Agile practice of No Bugs.

The Security Incident Response

Even though we work hard to ship our software without security vulnerabilities, we never succeed 100%.

When an incident is reported (hopefully responsibly), we execute our security response plan. We must be careful to fix the issue without introducing new problems.

Next, we should look for issues similar to the one reported. It's quite likely that other parts of the application contain similar weaknesses, and we should find and fix those as part of the same security update.

Finally, we should do a root cause analysis to determine why this weakness slipped through the cracks in the first place. Armed with that knowledge, we can adapt our process to make sure that similar issues will not occur in the future.

From Security To Quality

The process outlined above works well for making our software ever more secure.

But security weaknesses are essentially just bugs. Security issues may have more severe consequences than regular bugs, but regular bugs are also expensive to fix once the software is deployed.

So it actually makes sense to treat all bugs, security or otherwise, the same way.

As the saying goes, an ounce of prevention is worth a pound of cure. Just as we need to build security in, we also need to build in quality in general.

Building Quality In Using Agile Methods

This has been known in the Agile and Lean communities for a long time. For instance, James Shore wrote about it in his excellent book The Art Of Agile Development, and Elisabeth Hendrickson thinks that there should be so few bugs that they don't need triaging.

Some people object to the Zero Defects mentality, claiming that it’s unrealistic.

There is, however, clear evidence of much lower defect rates for Agile development teams. Many Lean implementations also report successes in their quest for Zero Defects.

So there is at least anecdotal evidence that a very significant reduction of defects is possible.

This will require change, of course. Testers need to change and so do developers. And then everybody on the team needs to speak the same language and work together as a single team instead of in silos.

If we do this well, we’ll become bug exterminators that delight our customers with software that actually works.

Signing Java Code

In a previous post, we discussed how to secure mobile code.

One of the measures mentioned was signing code. This post explores how that works for Java programs.

Digital Signatures

The basis for digital signatures is cryptography, specifically, public key cryptography. We use a set of cryptographic keys: a private and a public key.

The private key is used to sign a file and must remain a secret. The public key is used to verify the signature that was generated with the private key. This is possible because of the special mathematical relationship between the keys.

Both the signature and the public key need to be transferred to the recipient.
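
In Java, this is available through the java.security.Signature class. A minimal sketch, using a freshly generated key pair and a made-up file name (in practice the private key would come from a keystore, see below):

import java.security.*

// Generate a key pair for the example; normally the private key lives in a keystore.
def generator = KeyPairGenerator.getInstance('RSA')
generator.initialize(2048)
def keyPair = generator.generateKeyPair()

byte[] data = new File('library.jar').bytes        // the file to sign

// Sign with the private key, which must remain secret.
def signer = Signature.getInstance('SHA256withRSA')
signer.initSign(keyPair.getPrivate())
signer.update(data)
byte[] signatureBytes = signer.sign()

// Verify with the public key, which is shipped to the recipient.
def verifier = Signature.getInstance('SHA256withRSA')
verifier.initVerify(keyPair.getPublic())
verifier.update(data)
assert verifier.verify(signatureBytes)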

Certificates

In order to trust a file, one needs to verify the signature on that file. For this, one needs the public key that corresponds to the private key that was used to sign the file. So how can we trust the public key?

This is where certificates come in. A certificate contains a public key and the distinguished name that identifies the owner of that key.

The trust comes from the fact that the certificate is itself signed. So the certificate also contains a signature and the distinguished name of the signer.

When we control both ends of the communication, we can just provide both with the certificate and be done with it. This works well for mobile apps you write that connect to a server you control, for instance.

If we don't control both ends, we need an alternative. The distinguished name of the signer can be used to look up the signer's certificate. With the public key from that certificate, the signature in the original certificate can be verified.

We can continue in this manner, creating a certificate chain, until we reach a signer that we explicitly trust. This is usually a well-established Certificate Authority (CA), like VeriSign or Thawte.
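
To make the chain concept a bit more tangible, here is a sketch of how Java code can validate a certificate against a trusted root; the certificate file names are made up, and revocation checking is switched off to keep the example short:

import java.security.cert.*

def factory = CertificateFactory.getInstance('X.509')

// Load the signer's certificate and the CA certificate that we explicitly trust.
def signerCert = new File('signer.cer').withInputStream { factory.generateCertificate(it) }
def caCert = new File('trusted-ca.cer').withInputStream { factory.generateCertificate(it) }

// Build the chain and validate it against the trust anchor.
def chain = factory.generateCertPath([signerCert])
def params = new PKIXParameters([new TrustAnchor((X509Certificate) caCert, null)] as Set)
params.revocationEnabled = false                      // no CRL/OCSP checks in this sketch
CertPathValidator.getInstance('PKIX').validate(chain, params)   // throws if not trusted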

Keystores

In Java, private keys and certificates are stored in a password-protected database called a keystore.

Each key/certificate combination is identified by a string known as the alias.
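
For illustration, here is how a keystore can be inspected from code; the file name and password are placeholders:

import java.security.KeyStore

def keyStore = KeyStore.getInstance(KeyStore.defaultType)       // usually JKS
new File('codesigning.jks').withInputStream { input ->
  keyStore.load(input, 'changeit'.toCharArray())                // keystore password
}
keyStore.aliases().each { alias ->
  println "$alias -> ${keyStore.getCertificate(alias)?.type}"
}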

Code Signing Tools

Java comes with two tools for code signing: keytool and jarsigner.

Use the jarsigner program to sign jar files using certificates stored in a keystore.

Use the keytool program to create private keys and the corresponding public key certificates, to store them in and retrieve them from a keystore, and to manage the keystore itself.

The keytool program is not capable of creating a certificate signed by someone else. It can create a Certificate Signing Request, however, that you can send to a CA. It can also import the CA’s response into the keystore.

The alternative is to use tools like OpenSSL or BSAFE, which support such CA capabilities.

Code Signing Environment

Code signing should happen in a secure environment, since private keys are involved and those need to remain secret. If a private key falls into the wrong hands, a third party could sign their code with your key, tricking your customers into trusting that code.

This means that you probably don't want to maintain the keystore on the build machine, since that machine is likely available to many people. A more secure approach is to introduce a dedicated signing server.

You should also use different signing certificates for development and production.

Timestamping

Certificates are valid for a limited time period only. Files signed with a private key whose public key certificate has expired should no longer be trusted, since they may have been signed after the certificate expired.

We can alleviate this problem by timestamping the file. By adding a trusted timestamp to the file, we can trust it even after the signing certificate expires.

But then how do we trust the timestamp? Well, by having it signed by a Time Stamping Authority, of course! The OpenSSL program can help you with that as well.
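
With jarsigner, timestamping boils down to passing the -tsa option pointing at a Time Stamping Authority. A sketch of a Gradle task that signs and timestamps a jar (keystore, alias, TSA URL, and jar name are placeholders):

task signAndTimestamp(type: Exec) {
  commandLine 'jarsigner',
      '-keystore', 'codesigning.jks',                 // placeholder keystore
      '-tsa', 'http://tsa.example.com',               // placeholder Time Stamping Authority
      "$buildDir/libs/product.jar",                   // the jar to sign
      'release-key'                                   // alias of the signing key
}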

Beyond Code Signing

When you sign your code, you only prove that the code came from you. For a customer to be able to trust your code, it needs to be trustworthy. You probably want to set up a full-blown Security Development Lifecycle (SDL) to make sure that it is as trustworthy as possible.

Another thing to consider in this area is third-party code. Most software packages embed commercial and/or open source libraries. Ideally, those libraries are signed by their authors. But no matter what, you need to take ownership, since customers don’t care whether a vulnerability is found in code you wrote yourself or in a library you used.

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted systems that is executed on your local system. Mobile code (code on demand) is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It's theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof could be tied to the code; the combination is then called proof-carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof carrying code where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, and other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.
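
A sketch of such a filter in code (the collection of trusted certificates and the file handling are simplified):

import java.util.jar.*

// Accept a downloaded jar only if every code entry is signed with a certificate we trust.
boolean isAcceptable(File downloadedJar, Collection trustedCertificates) {
  def jar = new JarFile(downloadedJar, true)          // true = verify signatures while reading
  try {
    return jar.entries().toList().every { JarEntry entry ->
      if (entry.directory || entry.name.startsWith('META-INF/')) {
        return true                                   // directories and metadata aren't signed
      }
      jar.getInputStream(entry).bytes                 // an entry must be read fully first
      def certificates = entry.certificates           // now available (null if unsigned)
      certificates && certificates.any { it in trustedCertificates }
    }
  } catch (SecurityException tampered) {
    return false                                      // a signature doesn't match the content
  } finally {
    jar.close()
  }
}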

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
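
To give a flavor of it already, here is a minimal sketch that runs a piece of code with an empty permission set; it assumes the application itself is granted the permissions it needs by the security policy:

import java.security.*

// Run the given code with no permissions at all, regardless of the global policy.
def runSandboxed(Closure untrustedCode) {
  def noPermissions = new ProtectionDomain(
      new CodeSource(null, (java.security.cert.Certificate[]) null),
      new Permissions())                              // empty collection: everything is denied
  def restrictedContext =
      new AccessControlContext([noPermissions] as ProtectionDomain[])
  AccessController.doPrivileged({ untrustedCode.call() } as PrivilegedAction, restrictedContext)
}

System.setSecurityManager(new SecurityManager())      // switch on permission checks

runSandboxed {
  new File('/etc/passwd').text                        // throws AccessControlException
}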

Book Review: The Security Development Lifecycle (SDL)

In The Security Development Lifecycle (SDL), A Process for Developing Demonstrably More Secure Software, authors Michael Howard and Steven Lipner explain how to build secure software through a repeatable process.

The methodology they describe was developed at Microsoft and has led to a measurable decrease in vulnerabilities. That’s why it’s now also used elsewhere, like at EMC (my employer).

Chapter 1, Enough is Enough: The Threats have Changed, explains how the SDL was born out of the Trustworthy Computing initiative that started with Bill Gates’ famous email in early 2002. Most operating systems have since become relatively secure, so hackers have shifted their focus to applications and the burden is now on us developers to crank up our security game. Many security issues are also privacy problems, so if we don’t, we are bound to pay the price.

Chapter 2, Current Software Development Methods Fail to Produce Secure Software, reviews current software development methods with regard to how (in)secure the resulting applications are. It shows that the adage "given enough eyeballs, all bugs are shallow" is wrong when it comes to security. The conclusion is that we need to explicitly include security in our development efforts.

Chapter 3, A Short History of the SDL at Microsoft, describes how security improvement efforts at Microsoft evolved into a consistent process that is now called the SDL.

Chapter 4, SDL for Management, explains that the SDL requires time, money, and commitment from senior management to prioritize security over time to market. We're talking real commitment, like delaying the release of an insecure application.

Chapter 5, Stage 0: Education and Awareness, starts the second part of the book, which describes the stages of the SDL. It all starts with educating developers about security. Without this, there's no real chance of delivering secure software.

Chapter 6, Stage 1: Project Inception, sets the security context for the development effort. This includes assigning someone to guide the team through the SDL, building security leaders within the team, and setting up security expectations and tools.

Chapter 7, Stage 2: Define and Follow Best Practices, lists common secure design principles and describes attack surface analysis and attack surface reduction. The latter is about reducing the amount of code accessible to untrusted users, for example by disabling certain features by default.

Chapter 8, Stage 3: Product Risk Assessment, shows how to determine the application’s level of vulnerability to attack and its privacy impact. This helps to determine what level of security investment is appropriate for what parts of the application.

Chapter 9, Stage 4: Risk Analysis, explains threat modeling. The authors think that this is the practice with the most significant contribution to an application’s security. The idea is to understand the potential threats to the application, the risks those threats pose, and the mitigations that can reduce those risks. Threat models also help with code reviews and penetration tests. The chapter uses a pet shop website as an example.

[Note that there is now a tool that helps you with threat modeling. In this tool, you draw data flow diagrams, after which the tool uses the STRIDE approach to automatically find threats. The tool requires Visio 2007+.]

Chapter 10, Stage 5: Creating Security Documents, Tools, and Best Practices for Customers, describes the collateral that helps customers install, maintain, and use your application securely.

Chapter 11, Stage 6: Secure Coding Policies, explains the need for prescribing security-specific coding practices, educating developers about them, and verifying that they are adhered to. This is a high-level chapter, with details following in later chapters.

Chapter 12, Stage 7: Secure Testing Policies, describes the various forms of security testing, like fuzz testing, penetration testing, and run-time verification.

Chapter 13, Stage 8: The Security Push, explains that the goal of a security push is to hunt for security bugs and triage them. Fixes should follow the push. A security push doesn't really fit into the SDL, since the SDL aims to prevent vulnerabilities in the first place. It can, however, be useful for legacy (i.e. pre-SDL) code.

Chapter 14, Stage 9: The Final Security Review, describes how to assess (from a security perspective) whether the application is ready to ship. A questionnaire is filled out to show compliance with the SDL, the threat models are reviewed, and unfixed security bugs are reviewed to make sure none are critical.

Chapter 15, Stage 10: Security Response Planning, explains that you need to be prepared to respond to the discovery of vulnerabilities in your deployed application, so that you can prevent panic and follow a solid process. You should have a Security Response Center outside your development team that interfaces with security researchers and others who discover vulnerabilities and guides the development team through the process of releasing a fix. It’s also important to feed back lessons learned into the development process.

Chapter 16, Stage 11: Product Release, explains that the actual release is a non-event, since all the hard work was done in the Final Security Review.

Chapter 17, Stage 12: Security Response Execution, describes the real-world challenges associated with responding to reported vulnerabilities, including when and how to deviate from the plan outlined in Security Response Planning. Above all, you must take the time to fix the root problem properly and to make sure you’re not introducing new bugs.

Chapter 18, Integrating SDL with Agile Methods, starts the final part of the book. It shows how to incorporate agile practices into the SDL, or the other way around.

Chapter 19, SDL Banned Function Calls, explains that some functions are so bad from a security perspective, that they never should be used. This chapter is heavily focused on C.

Chapter 20, SDL Minimum Cryptographic Standards, gives guidance on the use of cryptography, like never rolling your own, making the use of crypto algorithms configurable, and which key sizes to use for which algorithms.

Chapter 21, SDL-Required Tools and Compiler Options, describes security tools you should use during development. This chapter is heavily focused on Microsoft technologies.

Chapter 22, Threat Tree Patterns, shows a number of threat trees that reflect common attack patterns. It follows the STRIDE approach again.

The appendix has information about the authors.

I think this book is a must-read for every developer who is serious about building secure software.

Abuse Cases

Gary McGraw describes several best practices for building secure software. One is the use of so-called abuse cases. Since his chapter on abuse cases left me hungry for more information, this post examines additional literature on the subject and how to fit abuse cases into a Security Development Lifecycle (SDL).

Modeling Functional Requirements With Use Cases

Abuse cases are an adaptation of use cases, abstract episodes of interaction between a system and its environment.

A use case consists of a number of related scenarios. A scenario is a description of a specific interaction between the system and particular actors. Each use case has a main success scenario and some additional scenarios to cover variations and exceptional cases.

Actors are external agents, and can be either human or non-human.

For better understanding, each use case should state the goal that the primary actor is working towards.

Use cases are represented in UML diagrams as ovals that are connected to stick figures, which represent the actors. Use case diagrams are accompanied by textual use case descriptions that explain how the actors and the system interact.

Modeling Security Requirements With Abuse Cases

An abuse case is a use case where the results of the interaction are harmful to the system, one of the actors, or one of the stakeholders in the system. An interaction is harmful if it decreases the security (confidentiality, integrity, or availability) of the system.

Abuse cases are also referred to as misuse cases, although some people maintain they’re different. I think the two concepts are too similar to treat differently, so whenever I write “abuse case”, it refers to “misuse case” as well.

Some actors in regular use cases may also act as an attacker in an abuse case (e.g. in the case of an insider threat). We should then introduce a new actor to avoid confusion (potentially using inheritance). This is consistent with the best practice of having actors represent roles rather than actual users.

Attackers are described in more detail than regular actors, to make it easier to look at the system from their point of view. Their description should include the resources at their disposal, their skills, and their objectives.

Note that objectives are longer term than the (ab)use case’s goal. For instance, the attacker’s goal for an abuse case may be to gain root privileges on a certain server, while her objective may be industrial espionage.

Abuse cases are very different from use cases in one respect: while we know how the actor in a use case achieves her goal, we don't know precisely how an attacker will break the system's security. If we did, we would fix the vulnerability! Therefore, abuse case scenarios describe interactions less precisely than regular use case scenarios.

Modeling Other Non-Functional Requirements

Note that since actors in use cases needn't be human, we can employ an approach similar to abuse cases, with actors like "network failure", to model non-functional requirements beyond security, such as reliability, portability, and maintainability.

For this to work, one must be able to express the non-functional requirement as an interactive scenario. I won’t go into this topic any further in this post.

Creating Abuse Cases

Abuse case models are best created at the same time as use cases: during requirements gathering. It's easiest to define the abuse cases after the regular use cases are identified (or even defined).

Abuse case modeling requires one to wear a black hat. Therefore, it makes sense to invite people with black hat capabilities, like testers and network operators or administrators, to the table.

The first step in developing abuse cases is to find the actors. As stated before, every actor in a regular use case can potentially be turned into a malicious actor in an abuse case.

We should next add actors for different kinds of intruders. These are distinguished based on their resources and skills.

When we have the actors, we can identify the abuse cases by determining how they might interact with the system. We might identify such malicious interactions by combining the regular use cases with attack patterns.

We can find more abuse cases by combining them systematically and recursively with regular use cases.

Combining Use Cases and Abuse Cases

Some people keep use cases and abuse cases separate to avoid confusion. Others combine them, but display abuse cases as inverted use cases (i.e. black ovals with white text, and actors with black heads).

The latter approach makes it possible to relate abuse cases to use cases using UML associations. For instance, an abuse case may threaten a use case, while a use case might mitigate an abuse case. The latter use case is also referred to as a security use case. Security use cases usually deal with security features.

Security use cases can be threatened by new abuse cases, for which we can find new security use cases to mitigate them, and so on. In this way, a "game" of play and counterplay unfolds that fits well with a defense in depth strategy.

We should not expect to win this “game”. Instead, we should make a good trade-off between security requirements and other aspects of the system, like usability and development cost. Ideally, these trade-offs are made clearly visible to stakeholders by using a good risk management framework.

Reusing Abuse Cases

Use cases can be abstracted into essential use cases to make them more reusable. There is no reason we couldn’t do the same with abuse cases and security use cases.

It seems to me that this is not just possible, but already done. Microsoft's STRIDE model contains generalized threats, and its SDL Threat Modeling tool automatically identifies which of those are applicable to your situation.

Conclusion

Although abuse cases are a bit different from regular use cases, their main value is that they present information about security risks in a format that may already be familiar to the stakeholders of the software development process. Developers in particular are likely to know them.

This should make it easier for people with little or no security background to start thinking about securing their systems and how to trade off security and functionality.

However, it seems that threat modeling gives the same advantages as abuse cases. Since threat modeling is supported by tools, it’s little wonder that people prefer that over abuse cases for inclusion in their Security Development Lifecycle.