Major Tech Projects


Last time we saw how a target architecture can guide you in your day to day technical decision making. But what do you do when your target architecture is considerably different from your current architecture?

Ways of working differ considerably between organizations, but most tech companies nowadays use some form of agile process (or at least claim to), where a Product Owner prioritizes a backlog of smallish items for the team to work on. That’s the context we’ll assume for this post.

The question then becomes how you convince a business-oriented Product Owner to prioritize technical improvement work over value-added work, such as new features. This question becomes even more pressing when the amount of tech work is not small, but rather an entire project in itself. This type of work doesn’t fit all that well into the agile model of continuous delivery of value through small increments. So what to do?

First of all, it’s always good if you can split the work into smaller pieces. Those are easier to squeeze in and, more importantly, carry less risk. But even if you manage to do that for a big tech overhaul, you’ll end up with a long list of tech items that somehow need to find their way to the top of the backlog.

You could try to persuade the Product Owner in a separate discussion about each and every one of these tech items. However, since Product Owners are judged by how much value they deliver sooner rather than later, you’re going to be fighting an uphill battle. That doesn’t mean it can’t work in your context, of course, but in general the chances aren’t great when the number of tech items grows.

Alternatively, you could treat this as technical debt, assuming you have a well-functioning process for handling that. That’s quite an assumption, by the way. Some teams use their slack time to dig themselves out of holes, but most teams are not fortunate enough to have slack time. (As an industry, we’re slow learners. Mythical man-month, anyone?) Many teams I’ve seen that keep track of tech debt just keep growing it year over year and I hate to think about the teams that don’t even keep track of it.


Part of the problem with tech debt is that the metaphor is broken. Financial people will tell you that some amount of debt is good and that you shouldn’t get rid of all of it. That’s just not how developers see tech debt. I’ve also yet to see a team that regularly and consistently pays off tech debt, like one would make monthly payments to pay off, say, a mortgage.

Idea Flow argues that we should abandon the technical debt concept in favor of risk, and that the team should be guided by both a Product Owner, for business value, and a Technical Risk Manager, for keeping risk within an acceptable range. It’s an interesting concept that sounds like it may work, but the hard part will be estimating the risk. I haven’t seen this done in practice yet, so if you have experience with this, please leave a comment below.

A somewhat similar, but less refined, approach is to allocate two budgets for the team: one for doing value-added work and one for doing technical work. You could assign a 3:1 ratio to those budgets, for instance, which means 75% of developer time would be spent on adding business value and 25% on technical work items. Then if you have a year where you need to do some major technical work, you could ask for a temporary change in this ratio.
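The budget arithmetic is simple enough to sketch. The numbers below (a 3:1 default ratio, a temporary 1:1 ratio, a 40-day sprint) are illustrative, not a recommendation:

```python
# Sketch: splitting a sprint's capacity between value-added work and
# technical work under a ratio agreed with the Product Owner.

def split_capacity(total_days: float, value_parts: int, tech_parts: int) -> tuple:
    """Divide sprint capacity according to a value:tech ratio."""
    whole = value_parts + tech_parts
    value_days = total_days * value_parts / whole
    return value_days, total_days - value_days

# Normal year: 3:1 ratio over a 40-day sprint (e.g. 5 devs x 8 days).
value, tech = split_capacity(40, 3, 1)  # 30.0 value days, 10.0 tech days

# Year with a major technical project: temporarily shift to 1:1.
value, tech = split_capacity(40, 1, 1)  # 20.0 days each
```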

Yet another approach is the dreaded rewrite: you stop all value-added work for some time and do only technical work. This allows the team to make quick progress in the technical area, but is generally not liked all that much by the business. You can get away with this only every so often, as it tends to take a huge bite out of your political capital. But at some point it will become your only option if you don’t find a better way first.

What do you think? What ways to schedule time for major technical projects have worked for you? Please leave a comment below.

Data Flow Diagrams and Threat Models

Last time we looked at some generic diagrams from the C4 model, which are useful for most teams. This time we’re going to explore a more specific type of diagram that can be a tremendous help with security.

Data Flow Diagrams

A Data Flow Diagram (DFD), as the name indicates, shows the flow of data through the system. It depicts external entities, processes, data stores, and data flows. Larger systems usually have composite processes, which expand into their own DFD.

Here’s a simple example of a data flow diagram:

Create a Data Flow Diagram from a Container Diagram

If you already have a container diagram of your system, then it’s easy to create a DFD from it:

  1. Convert containers into processes by drawing them as circles. You may or may not want to number them. Containers with many processes running in them may be modeled as composite processes that expand into their own sub-DFDs.
  2. Convert external systems into external entities by drawing them as rectangles.
  3. Convert databases and queues into data stores by drawing them as horizontal parallel bars.
  4. Modify the directions of all the arrows as needed. In a container diagram, the direction of an arrow usually indicates who initiates a request. In a data flow diagram, the direction of an arrow indicates the direction in which data flows. This could be both ways.
    Some people use arrows to indicate data flows in container diagrams as well. In that case, you don’t have anything to do in this step.
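The conversion steps above can be sketched as a small data transformation. The representation here (the `Element` class, the element kinds) is a made-up minimal model for illustration, not part of the C4 or DFD standards:

```python
# Sketch: converting container-diagram elements into DFD elements.

from dataclasses import dataclass

# DFD type and shape per container-diagram element kind (steps 1-3 above).
KIND_TO_DFD = {
    "container": ("process", "circle"),
    "external_system": ("external entity", "rectangle"),
    "database": ("data store", "parallel bars"),
    "queue": ("data store", "parallel bars"),
}

@dataclass
class Element:
    name: str
    kind: str  # "container", "external_system", "database", or "queue"

def to_dfd_element(element: Element) -> dict:
    dfd_type, shape = KIND_TO_DFD[element.kind]
    return {"name": element.name, "type": dfd_type, "shape": shape}

elements = [
    Element("Web App", "container"),
    Element("Orders DB", "database"),
    Element("Payment Provider", "external_system"),
]
dfd = [to_dfd_element(e) for e in elements]
# Step 4 (arrow direction) still needs a human: a request arrow may carry
# data both ways, so each flow must be reviewed by hand.
```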

Now, why would you go to the trouble of converting a container diagram into a DFD? There are several uses for DFDs, but in this post I want to focus on using them for threat modeling.

Threat Models

In threat modeling, we look for design flaws with security implications. These are different from implementation bugs, which makes them hard to detect using code-based techniques such as Static Application Security Testing (SAST) or security code reviews. But the flip side of that is that you can do threat modeling even before any code is written. And you should!

Threat modeling falls under Threat Assessment in the Software Assurance Maturity Model.

There are many ways to build threat models, but the one I’ve found easiest to understand for developers with limited security knowledge (which is the vast majority), is to use STRIDE, an acronym for the types of security threats that exist:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

Using DFDs to Build Threat Models

Not all of the STRIDE threats are applicable to all elements of our system, and this is where the DFD comes in handy, since each DFD element maps nicely onto a subset of the STRIDE threats.
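That mapping can be written down directly. The table below follows the commonly used STRIDE-per-element mapping popularized by Microsoft’s threat modeling practice; adjust it for your own context:

```python
# STRIDE threats applicable to each DFD element type.

STRIDE_PER_ELEMENT = {
    "external entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"},
    "data flow": {"Tampering", "Information disclosure",
                  "Denial of service"},
    # Data stores that act as audit logs are also subject to Repudiation.
    "data store": {"Tampering", "Information disclosure",
                   "Denial of service"},
}

def applicable_threats(dfd_element_type: str) -> set:
    return STRIDE_PER_ELEMENT[dfd_element_type]
```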

So now you have a structured process for reviewing your architecture from a security perspective:

  1. Create a DFD from your container diagram (or from scratch if you don’t have one)
  2. Identify the threats using STRIDE
  3. Score the threats using the Common Vulnerability Scoring System (CVSS)
  4. Manage the risks posed by the identified threats (avoid, mitigate, transfer, or accept each one)

I’ve found that using CVSS scores for threats makes it easier to decide how to manage them. You can set a security risk appetite for your system, for example accept threats in the None through Medium levels and work to avoid or mitigate High or Critical levels.
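The severity levels come straight from the CVSS v3.x qualitative rating scale, so checking threats against a risk appetite is mechanical. The threat names and scores below are made up for illustration:

```python
# Sketch: mapping CVSS v3.x base scores to severity levels and filtering
# out the threats that fall outside the agreed risk appetite.

def severity(score: float) -> str:
    """CVSS v3.x qualitative severity rating scale."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

RISK_APPETITE = {"None", "Low", "Medium"}  # accept these, act on the rest

threats = {"Spoofing of admin user": 8.1, "DoS on public API": 5.3}
to_manage = {name: severity(score)
             for name, score in threats.items()
             if severity(score) not in RISK_APPETITE}
# Only the High-severity spoofing threat needs avoidance or mitigation.
```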

CVSS scores also go a long way towards making the threat assessment objective, which makes it easier to convince people that work should be done to reduce the security risk.

Costs and Benefits of Threat Models

If all of this seems like a lot of work, especially for a big system, that’s because it is. Sorry.

It’s not quite as bad as it may seem, however, because DFD elements can usually be grouped since they all behave the same from a security perspective. So then you only have to score and manage the group as a whole rather than all the elements in it individually. But still, threat modeling is a considerable investment.

When I’ve done this activity with developers, it has always provided a lot of value. We usually find one or more security threats that we really need to manage better. And most of the time participants come away with a much better understanding of their system, which helps them in their non-security work as well.

What do you think? Is threat modeling using data flow diagrams and STRIDE something you’re willing to give a shot, or do you prefer a method that requires less work (but offers less protection), like abuse cases? Please leave a comment.

Architecture Diagrams

Hi, my name is Ray, and I’m a software architect.

Example container diagram

According to my old boss Jeroen van Rotterdam, this means that I draw boxes and lines. In practice, it’s only a small part of what I do, but I do think it’s an important part.

Some people may wonder why, in the age of working software over comprehensive documentation, one would still spend time on creating pretty pictures. Didn’t we leave all this up front, non-code fluff behind? Well, I do value working software more than comprehensive documentation, but that doesn’t mean there is no value in documentation.

I work with several “central” teams that provide shared services to “local” teams. In discussions with those teams, it’s quite often useful to be able to point to a high-level architecture of the system. Something like the context diagram of the C4 model. Such context diagrams were also of great value during the due diligence phase of the recently announced sale of our division. Drawing a context diagram takes very little time, provides clear benefits, and requires little maintenance, so it’s really a no-brainer.

What about other types of diagrams?

The C4 model has several other diagrams. Which ones you need depends on your context. If all your code goes into a single war that you deploy to an application server, for instance, there is little point in creating a container diagram, but a component diagram may be useful. If, on the other hand, your system consists of quite a few microservices, then a container diagram that shows how they are connected may be very valuable, but a component diagram would probably be overkill.

If you work in a larger organization, with lots of teams building lots of systems, then having a diagram standard in place like the C4 model is a great way to reduce the time needed to explain a system to others and to prevent misunderstandings.

You will probably want to “personalize” that standard to make it more expressive for your particular context. For instance, we use color coding of elements on container diagrams to indicate their status: red for things we are ready to remove, orange for things we want to migrate away from, yellow for things we want to review, green for things we’re happy with, and blue for things we’re planning to build in the future.

Such an adaptation of the C4 standard doesn’t reduce the value of the standard if most or all of your communication stays within your organization, where everyone uses the same adaptation. If, however, you routinely use diagrams in communication with outsiders, you may want to minimize your adaptations, or at the very least provide a legend with your diagrams.

Also use your diagram standard on whiteboards.

The standard should be used everywhere you draw visual representations of your systems, like in whiteboard sessions and in Architecture Decision Records. Consistent use of the standard turns it into a ubiquitous language for visually representing your systems, eliminating ambiguities.

If you have a standard that all your diagrams must adhere to, then you can build tooling around that. For instance, you could write a tool that generates a diagram from a simple YAML file that describes your system. This way, you bring the diagram closer to the code, improve consistency, and reduce the maintenance burden. I’ll come back to tooling in a later post.
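As a sketch of such a tool, the snippet below turns a simple description of a system into Graphviz DOT text. A real version might read the description from a YAML file (e.g. with PyYAML); an inline dict keeps the sketch self-contained, and the system described is hypothetical:

```python
# Sketch: generating a diagram definition from a system description.

system = {
    "containers": ["Web App", "API", "Orders DB"],
    "relations": [("Web App", "API"), ("API", "Orders DB")],
}

def to_dot(model: dict) -> str:
    """Emit a Graphviz DOT digraph for the described system."""
    lines = ["digraph system {"]
    for name in model["containers"]:
        lines.append(f'  "{name}" [shape=box];')
    for source, target in model["relations"]:
        lines.append(f'  "{source}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(system))  # feed this to `dot -Tpng` to render the diagram
```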

There are other types of diagrams besides those in the C4 model that are useful in certain situations. For instance, sequence diagrams are great for showing complex interactions between systems. I’ll talk about some other diagram types in future posts.

Do you use C4 model diagrams? Are they worth the investment? Please leave a comment.

Update: C4 Model author Simon Brown got some answers to the above questions on Twitter.

Target Architecture

In the last two posts, we looked at generic architecture diagrams and security-specific diagrams. These diagrams reflect the current architecture of a system. This time we will look at using diagrams to depict a desired future architecture, or target architecture.

The point of a target architecture is to paint a picture of the desired state that will act as the North Star (or Southern Cross, depending on your hemisphere), in the sense that it guides all the little decisions we make every day when working on our system. This doesn’t mean we’ll always be moving straight towards the target, but we do want to make sure we don’t stray too far off the path.

However, a target architecture is not a static thing.

Assuming you don’t have a crystal ball, it will be hard to predict how circumstances will develop over the years. So think of a target architecture as a moving target rather than something set in stone. We need to keep an open mind at all times and react appropriately to change; one such reaction could be to update the target architecture as we learn more about our system, the customers it supports, the team that builds it, the evolution of the technologies it uses, etc.

Moving towards a target architecture usually doesn’t happen in a straight line.

So how do we come up with a target architecture in the first place?

One starting point is to look at the diagram that reflects your current state and identify things that give you trouble. Few systems are optimal, so you’re likely to find some things that you would do differently if you had a chance to start all over.

Another source of ideas comes from future requirements, i.e. requests on your backlog. The fact that the current architecture supports the current requirements doesn’t necessarily mean that it will be able to support all future requirements. Sometimes we need to create some architectural runway.

Once you’ve identified an area for improvement, you need to make a decision about whether it’s actually worth fixing. Some things are very costly or risky to fix compared to their benefit.

Next, write up your decision in the form of an Architecture Decision Record (ADR). The structured format of an ADR, which lists the positive and negative consequences of the decision, will help with getting everyone on the same page. It will also give you a nice historical record, which is especially important for any newcomers as the team evolves over time.
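A minimal ADR, following Michael Nygard’s widely used template, might look like the following. The decision and its details are hypothetical:

```markdown
# 12. Use an event queue between ordering and billing

## Status
Accepted

## Context
Synchronous calls from ordering to billing couple the two services;
billing outages currently block order intake.

## Decision
We will put orders on a durable queue and have billing consume them
asynchronously.

## Consequences
Positive: order intake keeps working during billing outages.
Negative: billing becomes eventually consistent with ordering, and we
take on the operational cost of running a queue.
```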

Every ADR records a decision. Each such decision constrains the solution space of the system in its own way. As the number of decisions grows, the resulting viable part of the solution space shrinks. The target architecture is a picture of a single point in this viable part of the solution space. Note that there could be other points that are viable as well, i.e. alternative target architectures.

Obviously, the solution space is multi-dimensional. But if we project it in two dimensions, we can visualize the relationship between target architecture and ADRs as follows:

Architecturally significant decisions constrain the viable solution space in which a target architecture lives.

If you find your team arguing about two different options in the viable solution space and you want to resolve this ambiguity, then hopefully the picture above will suggest the solution: you’re missing an ADR that shrinks the viable solution space to exclude one of the options.

What do you think? Do you find a target architecture a useful tool to guide your team’s day to day technical decisions? Do you use ADRs to document why the target architecture looks the way it does? Please leave a comment below.