Book review: Secure Programming with Static Analysis

One thing that should be part of every Security Development Lifecycle (SDL) is static code analysis.

This topic is explained in great detail in Secure Programming with Static Analysis.

Chapter 1, The Software Security Problem, explains why security is easy to get wrong and why typical methods for catching bugs aren’t effective for finding security vulnerabilities.

Chapter 2, Introduction to Static Analysis, explains that static analysis involves a software program checking the source code of another software program to find structural, quality, and security problems.
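
To make that concrete, here is a minimal Java sketch of the kind of code such a tool typically flags; the class and method names are my own invention, not taken from the book:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class UserLookup {

    // Typical static analysis finding: untrusted input flows straight
    // into a SQL query string, a potential SQL injection.
    public ResultSet findUser(Connection connection, String userName) throws Exception {
        Statement statement = connection.createStatement();
        return statement.executeQuery(
            "SELECT * FROM users WHERE name = '" + userName + "'");
    }
}
```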

Chapter 3, Static Analysis as Part of Code Review, explains how static code analysis can be integrated into a security review process.

Chapter 4, Static Analysis Internals, describes how static analysis tools work internally and what trade-offs are made when building them.

This concludes the first part of the book that describes the big picture. Part two deals with pervasive security problems.

Chapter 5, Handling Input, describes how programs should deal with untrustworthy input.
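
A recurring recommendation in this area is to validate input against a whitelist of known-good values rather than trying to enumerate bad ones. A minimal Java sketch (the pattern and names are purely illustrative):

```java
import java.util.regex.Pattern;

public class InputValidator {

    // Whitelist validation: accept only input that matches a known-good
    // pattern (here: two uppercase letters followed by six digits).
    private static final Pattern ACCOUNT_ID = Pattern.compile("[A-Z]{2}\\d{6}");

    public static String requireAccountId(String rawInput) {
        if (rawInput == null || !ACCOUNT_ID.matcher(rawInput).matches()) {
            throw new IllegalArgumentException("Invalid account id");
        }
        return rawInput;
    }
}
```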

Chapter 6, Buffer Overflow, and chapter 7, Bride of Buffer Overflow, deal with buffer overflows. These chapters are less relevant for developers working with modern languages like Java or C#.

Chapter 8, Errors and Exceptions, talks about unexpected conditions and their link with security issues. It also covers logging and debugging.

Chapter 9, Web Applications, starts the third part of the book about common types of programs. This chapter looks at security problems specific to the Web and HTTP.
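
One example of the kind of problem covered here is cross-site scripting. As an illustration (my own sketch, not the book's code), user-supplied text should be encoded before it is written into an HTML page; in practice you would use a vetted encoding library rather than rolling your own like this:

```java
public class HtmlEscaper {

    // Encode user-supplied text before inserting it into an HTML page,
    // so characters like < and " cannot break out into markup (XSS).
    public static String escapeHtml(String text) {
        StringBuilder out = new StringBuilder(text.length());
        for (char c : text.toCharArray()) {
            switch (c) {
                case '<': out.append("&lt;"); break;
                case '>': out.append("&gt;"); break;
                case '&': out.append("&amp;"); break;
                case '"': out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default: out.append(c);
            }
        }
        return out.toString();
    }
}
```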

Chapter 10, XML and Web Services, discusses the security challenges associated with XML and with building up applications from distributed components.

Chapter 11, Privacy and Secrets, switches the focus from AppSec to InfoSec with an explanation of how to protect private information.

Chapter 12, Privileged Programs, continues with a discussion on how to write programs that operate with different permissions than the user.

The final part of the book is about gaining experience with static analysis tools.

Chapter 13, Source Code Analysis Exercises for Java, is a tutorial on how to use Fortify (a trial version of which is included with the book) on some sample Java projects.

Chapter 14, Source Code Analysis Exercises for C, does the same for C programs.

Rating: five out of five.

This book is very useful for anybody working with static analysis tools. Its description of the internals of such tools helps with understanding how to apply the tools best.

I like that the book is filled with numerous examples that show how the tools can detect a particular type of problem.

Finally, the book makes clear that any static analysis tool will give both false positives and false negatives. You should really understand security issues yourself to make good decisions. When you know how to do that, a static analysis tool can be a great help.

Book Review: The Security Development Lifecycle (SDL)

In The Security Development Lifecycle (SDL), A Process for Developing Demonstrably More Secure Software, authors Michael Howard and Steven Lipner explain how to build secure software through a repeatable process.

The methodology they describe was developed at Microsoft and has led to a measurable decrease in vulnerabilities. That’s why it’s now also used elsewhere, like at EMC (my employer).

Chapter 1, Enough is Enough: The Threats have Changed, explains how the SDL was born out of the Trustworthy Computing initiative that started with Bill Gates’ famous email in early 2002. Most operating systems have since become relatively secure, so hackers have shifted their focus to applications, and the burden is now on us developers to crank up our security game; if we don’t, we are bound to pay the price, especially since many security issues are also privacy problems.

Chapter 2, Current Software Development Methods Fail to Produce Secure Software, reviews current software development methods with regard to how (in)secure the resulting applications are. It shows that the adage “given enough eyeballs, all bugs are shallow” is wrong when it comes to security. The conclusion is that we need to explicitly include security in our development efforts.

Chapter 3, A Short History of the SDL at Microsoft, describes how security improvement efforts at Microsoft evolved into a consistent process that is now called the SDL.

Chapter 4, SDL for Management, explains that the SDL requires time, money, and commitment from senior management to prioritize security over time to market. We’re talking real commitment, like delaying the release of an insecure application.

Chapter 5, Stage 0: Education and Awareness, starts the second part of the book, which describes the stages of the SDL. It all starts with educating developers about security. Without this, there’s no real chance of delivering secure software.

Chapter 6, Stage 1: Project Inception, sets the security context for the development effort. This includes assigning someone to guide the team through the SDL, building security leaders within the team, and setting up security expectations and tools.

Chapter 7, Stage 2: Define and Follow Best Practices, lists common secure design principles and describes attack surface analysis and attack surface reduction. The latter is about reducing the amount of code accessible to untrusted users, for example by disabling certain features by default.

Chapter 8, Stage 3: Product Risk Assessment, shows how to determine the application’s level of vulnerability to attack and its privacy impact. This helps to determine what level of security investment is appropriate for what parts of the application.

Chapter 9, Stage 4: Risk Analysis, explains threat modeling. The authors think that this is the practice with the most significant contribution to an application’s security. The idea is to understand the potential threats to the application, the risks those threats pose, and the mitigations that can reduce those risks. Threat models also help with code reviews and penetration tests. The chapter uses a pet shop website as an example.

[Note that there is now a tool that helps you with threat modeling. In this tool, you draw data flow diagrams, after which the tool uses the STRIDE approach to automatically find threats. The tool requires Visio 2007+.]

Chapter 10, Stage 5: Creating Security Documents, Tools, and Best Practices for Customers, describes the collateral that helps customers install, maintain, and use your application securely.

Chapter 11, Stage 6: Secure Coding Policies, explains the need for prescribing security-specific coding practices, educating developers about them, and verifying that they are adhered to. This is a high-level chapter, with details following in later chapters.

Chapter 12, Stage 7: Secure Testing Policies, describes the various forms of security testing, like fuzz testing, penetration testing, and run-time verification.

Chapter 13, Stage 8: The Security Push, explains that the goal of a security push is to hunt for security bugs and triage them. Fixes should follow the push. A security push doesn’t really fit into the SDL, since the SDL aims to prevent vulnerabilities in the first place. It can, however, be useful for legacy (i.e. pre-SDL) code.

Chapter 14, Stage 9: The Final Security Review, describes how to assess (from a security perspective) whether the application is ready to ship. A questionnaire is filled out to show compliance with the SDL, the threat models are reviewed, and unfixed security bugs are reviewed to make sure none are critical.

Chapter 15, Stage 10: Security Response Planning, explains that you need to be prepared to respond to the discovery of vulnerabilities in your deployed application, so that you can prevent panic and follow a solid process. You should have a Security Response Center outside your development team that interfaces with security researchers and others who discover vulnerabilities and guides the development team through the process of releasing a fix. It’s also important to feed back lessons learned into the development process.

Chapter 16, Stage 11: Product Release, explains that the actual release is a non-event, since all the hard work was done in the Final Security Review.

Chapter 17, Stage 12: Security Response Execution, describes the real-world challenges associated with responding to reported vulnerabilities, including when and how to deviate from the plan outlined in Security Response Planning. Above all, you must take the time to fix the root problem properly and to make sure you’re not introducing new bugs.

Chapter 18, Integrating SDL with Agile Methods, starts the final part of the book. It shows how to incorporate agile practices into the SDL, or the other way around.

Chapter 19, SDL Banned Function Calls, explains that some functions are so bad from a security perspective that they should never be used. This chapter is heavily focused on C.

Chapter 20, SDL Minimum Cryptographic Standards, gives guidance on the use of cryptography: never roll your own, make the choice of crypto algorithms configurable, and use appropriate key sizes for each algorithm.
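
As an illustration of that advice, here is a sketch of my own using the standard Java crypto APIs (not the book’s exact recommended settings): the algorithm and key size are read from configuration, so they can be upgraded without touching the code.

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class ConfigurableEncryption {

    // Algorithm and key size come from configuration rather than being
    // hard-coded, so they can be changed without changing the code.
    private final String transformation; // e.g. "AES/GCM/NoPadding"
    private final int keyBits;           // e.g. 256

    public ConfigurableEncryption(String transformation, int keyBits) {
        this.transformation = transformation;
        this.keyBits = keyBits;
    }

    // In a real application the key would come from a key store.
    public SecretKey newKey() throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance(transformation.split("/")[0]);
        keyGen.init(keyBits);
        return keyGen.generateKey();
    }

    public byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];          // fresh random nonce per message
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance(transformation);
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // The IV must be stored or sent along with the ciphertext.
        return ciphertext;
    }
}
```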

Chapter 21, SDL-Required Tools and Compiler Options, describes security tools you should use during development. This chapter is heavily focused on Microsoft technologies.

Chapter 22, Threat Tree Patterns, shows a number of threat trees that reflect common attack patterns. It follows the STRIDE approach again.

The appendix has information about the authors.

I think this book is a must-read for every developer who is serious about building secure software.

Book review – Software Security: Building Security In

Dr. Gary McGraw is an authority on software security who has written many security books. This book, Software Security: Building Security In, is the third in a series.

While Exploiting Software: How to Break Code focuses on the black hat side of security, and Building Secure Software: How to Avoid Security Problems the Right Way focuses on the white hat side, this book brings the two perspectives together.

Chapter 1, Defining a Discipline, explains the security problem that we have with software. It also introduces the three pillars of software security:

  1. Applied Risk Management
  2. Software Security Touchpoints
  3. Knowledge

Chapter 2, A Risk Management Framework, explains that security is all about risks and mitigating them. McGraw argues the need to embed this in an overall risk management framework to systematically identify, rank, track, understand, and mitigate security risks over time. The chapter also explains that security risks must always be placed in the larger business context. The chapter ends with a worked-out example.

Chapter 3, Introduction to Software Security Touchpoints, starts the second part of the book, about the touchpoints. McGraw uses this term to denote software security best practices. The chapter presents a high level overview of the touchpoints and explains how both black and white hats must be involved to build secure software.

Chapter 4, Code Review with a Tool, introduces static code analysis with a specialized tool, like Fortify. A history of such tools is given, followed by a bit more detail about Fortify.

Chapter 5, Architectural Risk Analysis, shifts the focus from implementation bugs to design flaws, which account for about half of all security problems. Since design flaws can’t be found by a static code analysis tool, we need to perform a risk analysis based on architecture and design documents. McGraw argues that Microsoft mistakenly calls this threat modeling, but he seems to have lost that battle.

Chapter 6, Software Penetration Testing, explains that functional testing focuses on what software is supposed to do, and how that is not enough to guarantee security. We also need to focus on what should not happen. McGraw argues that this “negative” testing should be informed by the architectural risk analysis (threat modeling) to be effective. The results of penetration testing should be fed back to the developers, so they can learn from their mistakes.

Chapter 7, Risk-Based Security Testing, explains that while black box penetration testing is helpful, we also need white box testing. Again, this testing should be driven by the architectural risk analysis (threat modeling). McGraw also scorns eXtreme Programming (XP). Personally, I feel that this is based on some misunderstandings about XP.

Chapter 8, Abuse Cases, explains that requirements should not only specify what should happen, in use cases, but what should not, in abuse cases. Abuse cases look at the software from the point of view of an attacker. Unfortunately, this chapter is a bit thin on how to go about writing them.

Chapter 9, Software Security Meets Security Operations, explains that developers and operations people should work closely together to improve security. We of course already knew this from the DevOps movement, but security adds some specific focal points. Some people have recently started talking about DevOpsSec. This chapter is a bit more modest, though, and talks mainly about how operations people can contribute to the various parts of software development.

Chapter 10, An Enterprise Software Security Program, starts the final part of the book. It explains how to introduce security into the software development lifecycle to create a Security Development Lifecycle (SDL).

Chapter 11, Knowledge for Software Security, explains the need for catalogs of security knowledge. Existing collections of security information, like CVE and CERT, focus on vulnerabilities and exploits, but we also need collections for principles, guidelines, rules, attack patterns, and historical risks.

Chapter 12, A Taxonomy of Coding Errors, introduces the 7 pernicious kingdoms of coding errors that can lead to vulnerabilities:

  1. Input Validation and Representation
  2. API Abuse
  3. Security Features
  4. Time and State
  5. Error Handling
  6. Code Quality
  7. Encapsulation

It also mentions the kingdom Environment, but doesn’t give it number 8, since that kingdom covers things outside the code. For each kingdom, the chapter lists a collection of so-called phyla, with a more narrowly defined scope. For instance, the kingdom Time and State contains the phylum File Access Race Condition. The chapter concludes with a comparison with other collections of coding errors, like the 19 Deadly Sins of Software Security and the OWASP Top Ten.
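
To give a feel for that phylum, here is a small Java sketch of my own (not from the book) of a time-of-check to time-of-use race condition: the file can be replaced, for example by a symlink, between the check and the write, so the check gives a false sense of safety.

```java
import java.io.File;
import java.io.FileWriter;

public class RaceConditionExample {

    // TOCTOU: the file's state can change between the exists()/canWrite()
    // check and the actual write, so an attacker who controls the path
    // can swap in a different file after the check passes.
    public void appendAudit(String path, String entry) throws Exception {
        File file = new File(path);
        if (file.exists() && file.canWrite()) {                // time of check
            try (FileWriter writer = new FileWriter(file, true)) { // time of use
                writer.write(entry);
            }
        }
    }
}
```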

Chapter 13, Annotated Bibliography and References, provides a list of must-read security books and other recommended reading.

The book ends with four appendices: Fortify Source Code Analysis Suite Tutorial, ITS4 Rules, An Exercise in Risk Analysis: Smurfware, and Glossary. There is a trial version of Fortify included with the book.

All in all, this book provides a very good overview for software developers who want to learn about security. It doesn’t go into enough detail in some cases for you to be able to apply the described practices, but it does teach enough to know what to learn more about and it does tie everything together very nicely.

Book: Leading Change

Leading Change is about how to implement significant changes in organizations. It discusses the Eight-Stage Process of Creating Major Change:

  1. Establishing a sense of urgency
  2. Creating the guiding coalition
  3. Developing a vision and strategy
  4. Communicating the change vision
  5. Empowering broad-based action
  6. Generating short-term wins
  7. Consolidating gains and producing more change
  8. Anchoring new approaches in the culture

These actions require leadership more than management, to define what the future should look like, align people with this vision, and inspire them to make it happen despite obstacles.

Establishing a sense of urgency is needed to overcome complacency, which can be caused by

  1. The absence of a major crisis
  2. Too many visible resources
  3. Low overall performance standards
  4. Organizational structures that focus employees on narrow functional goals
  5. Internal measurement systems that focus on the wrong performance indexes
  6. A lack of sufficient performance feedback from external sources
  7. A kill-the-messenger-of-bad-news, low-candor, low-confrontation culture
  8. Human nature, with its capacity for denial, especially if people are already busy or stressed
  9. Too much happy talk from senior management

Creating urgency can be done by attacking each of these, but these forces are not to be underestimated.

A guiding coalition is a powerful coalition that can act as a team. It is needed for introducing change, since no one individual has the information needed to make all major decisions, or the time and credibility needed to convince lots of people to implement the decisions. The following characteristics are essential for individuals in a guiding coalition:

  1. Position power to prevent others from blocking progress
  2. Expertise to make informed, intelligent decisions
  3. Credibility to be taken seriously by others
  4. Leadership to drive the process

Make sure to avoid individuals with egos that fill up the room. Also avoid so-called snakes, people who create enough mistrust to kill teamwork.

To make the guiding coalition into a team, you have to create trust (through lots of joint activities) and develop a common goal (that is sensible to the head and appealing to the heart).

Developing a vision simplifies the detailed decisions, motivates people to take action in the right direction, and coordinates the actions of different people. An effective vision is:

  1. Imaginable: conveys a picture of what the future will look like
  2. Desirable: appeals to the long-term interests of employees, customers, stockholders and other stakeholders
  3. Feasible: comprises realistic, attainable goals
  4. Focused: is clear enough to provide guidance in decision making
  5. Flexible: is general enough to allow individual initiative and alternate responses in light of changing conditions
  6. Communicable: is easy to communicate; can be successfully explained in 5 minutes

The most effective transformational visions:

  1. Are ambitious enough to force people out of their comfort zones
  2. Aim in a general way at becoming better and better at lower and lower costs
  3. Take advantage of fundamental trends, like globalization and new technology
  4. Exploit nobody and therefore have a certain moral power

Communicating the change vision requires:

  1. Simplicity: all jargon must be eliminated
  2. Metaphor, analogy, and example: a verbal picture is worth a thousand words
  3. Multiple forums: big and small meetings, memos and newspapers, formal and informal interaction
  4. Repetition: ideas sink in deeply only after they have been heard many times
  5. Leadership by example: behavior from important people that is inconsistent with the vision overwhelms other forms of communication
  6. Explanation of seeming inconsistencies: unaddressed inconsistencies undermine the credibility of all communication
  7. Give-and-take: two-way communication is always more powerful than one-way communication

Empowering employees for broad-based action faces these barriers:

  1. Formal structures make it difficult to act
  2. A lack of needed skills undermines action
  3. Personnel and information systems make it difficult to act
  4. Bosses discourage actions aimed at implementing the new vision

Generating short-term wins is also essential for major change. A win has to be:

  1. Visible: so people know it’s not just hype
  2. Unambiguous
  3. Clearly related to the goal

If you get them right, short-term wins:

  1. Provide evidence that sacrifices are worth it
  2. Reward change agents with a pat on the back
  3. Help fine-tune vision and strategies
  4. Undermine cynics and self-serving resisters
  5. Keep bosses on board
  6. Build momentum

You need to plan for these short-term wins, since they don’t just happen. Sometimes people don’t plan short-term wins, because:

  1. They are overwhelmed by the change process
  2. They don’t believe one can produce major change and achieve excellent short-term results
  3. They lack sufficient management skills

Consolidating gains and producing more change is needed because resistance is always waiting to re-assert itself. This resistance can come from interdependency, where a change in one part requires changes in many other parts. Attacking resistance results in:

  1. More change, not less: The guiding coalition uses the credibility afforded by short-term wins to tackle additional and bigger change projects
  2. More help: Additional people are brought in, promoted, and developed to help with all the changes
  3. Leadership from senior management: Senior people focus on maintaining clarity of shared purpose for the overall effort and keeping urgency levels up
  4. Project management and leadership from below: Lower ranks in the hierarchy both provide leadership for specific projects and manage those projects
  5. Reduction of unnecessary interdependencies: To make change easier in both the short and long term, managers identify unnecessary interdependencies and eliminate them

Anchoring new approaches in the culture is the final step in the change process. Culture refers to norms of behavior and shared values among a group of people. Norms of behavior are common or pervasive ways of acting that are found in a group and that persist because group members tend to behave in ways that teach these practices to new members, rewarding those who fit in and sanctioning those who do not. Shared values are important concerns and goals shared by most of the people in a group that tend to shape group behavior and that often persist over time even when group membership changes.

Culture is a powerful thing, because:

  1. Individuals are selected and indoctrinated so well
  2. Culture exerts itself through the actions of hundreds or thousands of people
  3. All of this happens without much conscious intent and thus is difficult to challenge or even discuss

Anchoring change in a culture:

  1. Comes last, not first
  2. Depends on results
  3. Requires a lot of talk
  4. May involve turnover
  5. Makes decisions on succession crucial

Book review: The Goal

The Goal, A Process of Ongoing Improvement, by Eli Goldratt is a business novel that introduced the Theory of Constraints.

Being a novel means that this theory is presented almost casually as a story. We follow Alex Rogo, a plant manager. His plant is in trouble. Big trouble. Alex finds out that it will be closed in three months if things don’t improve. Alex runs into his old physics teacher, Jonah, who starts helping him. Not by telling Alex what to do, but by asking questions. Questions that can only be answered by challenging fundamental assumptions about running the plant. By letting go of these false assumptions, Alex finds ways to dramatically improve his plant’s operations. Not only is the plant not closed down, Alex gets to run the entire division.

I don’t read much fiction. I have too little time as it is to read all the non-fiction that I want. So why did I read this book? Well, the story I outlined above isn’t what this book is about at all. This book tries to teach us to not lose track of the real goal, i.e. making money in the case of an enterprise. We make more money by

  1. Increasing throughput
  2. Decreasing inventory
  3. Decreasing operational expenses

Preferably all of those at the same time.

The primary tool to help us with this is the Theory of Constraints. This is a scientific, i.e. evidence-based, approach to process improvement, based on the assumption that any process is limited by some constraints. To improve the process, you need to follow the 5 focusing steps:

  1. Identify the constraint
  2. Decide how to exploit the constraint
  3. Subordinate all other processes to the above decision
  4. Elevate the constraint
  5. If, as a result of these steps, the constraint has moved, return to Step 1.

It all starts with identifying the constraint. Visualizing the process helps here, e.g. using a Kanban board. We may need to do some root cause analysis to find the real constraint underlying the symptoms that our visual tool shows.

Exploiting the constraint means making sure that the constraint’s capacity isn’t wasted. For instance, if QA is the constraint in a software development process, then we must make sure they’re not sitting idle. One way of doing that would be to decrease iteration size, which will give QA a more even supply of work. Linking back to the 3 elements that help achieve the goal, we see that this is a way of decreasing inventory (of coded, but untested features).

Exploiting the constraint often requires the use of buffers before the constraint, to make sure it doesn’t run out of things to do. For instance, Scrum assumes the development team is the constraint in the organization, and uses a buffer called a backlog to keep it busy.

Subordinating all processes to the decision to exploit the constraint is not easy. It may mean you need to take counter-intuitive actions, such as keeping non-constraint resources idle. For instance, coding more features is not going to help us make more money if QA can’t keep up with testing. So if QA is using up all its capacity, then no more features should be coded. If the coders have more capacity, they should do something else than coding new features. I’m sure that raises some eyebrows somewhere, but it makes perfect sense when you think about it.

Elevating the constraint means improving the capacity of the constraint. For instance, we could hire more people for QA. Better yet, we could have coders use their idle time to write automated tests, which will prevent defects reaching QA, which means less work for QA per feature, which means more features per unit of time can be tested. We know this is a better solution when we compare the 2 proposals against the 3 ways of reaching our goal: the first proposal increases operational expenses, while the other doesn’t.

The final step embodies the “ongoing” part of process improvement. Once QA finds fewer bugs because of automated tests, they may stop being the constraint. That is good news, but no reason to sit back and relax. By assumption, something else is now the constraint, so we repeat the whole process for the new constraint.

The 5-step process is quite general. The Goal uses it in the context of a manufacturing plant, but the examples I gave talk about software development. This generality makes it a very useful tool, so if you haven’t read the book, I suggest you do. It’s a decent read as a novel as well.