Conference report: Explore DDD 2025

Explore DDD is a two-day conference about Domain-Driven Design (DDD) in Denver, Colorado, that my employer VelocityEHS was kind enough to let me go to. Below, I give an impression of the sessions that I attended.

Wednesday

Opening keynote: Diana Montalion

Diana spoke about how systems thinking can inform architecture. In complex adaptive systems, the relationships between the parts that make up the system are actually more important than the parts themselves, leading her to claim that “architecture is relationship therapy for software systems.”

Because of non-linear dynamics, there are often leverage points in systems, where a small shift produces significant, lasting change. However, people won’t believe you if you find one, so it makes sense to let them discover patterns for themselves by building an easily accessible knowledge graph.

DDD & LLM brainstorming: Eric Evans

Whiteboard discussion about DDD linter

Eric led a two-hour workshop where we explored how LLMs can help with DDD. We crowdsourced topics and discussed them in groups.

My group worked on the concept of a DDD linter, which would work off a glossary to help team members use the ubiquitous language properly.

The group set up a Discord server to continue after the conference, hopefully producing an actual tool at some point.

Strategies for Fearlessly Making Change Happen: Mary Lynn Manns

In this workshop, we got to play the Fearless Journey game. We started by defining the as-is and desired to-be states. Then we defined obstacles that may prevent us from reaching that goal. Each obstacle went into a bag and we took them out one by one in random order. We used one or more of the 60+ fearless change pattern cards to attack the obstacle.

This session connected several dots for me, which I may write about later.

The EventStorming Process Modelling Rat Race: Alberto Brandolini & Paul Rayner

In this workshop I got to practice the process modeling variant of event storming on the problem of selecting talks for a conference. It was set up as a competitive game, where teams worked on the problem in 3 rounds of 20 minutes, with scoring and small retrospectives.

I got to ask Alberto himself some in-depth questions about how he uses event storming in practice.

Thursday

Systems Theory and Practice Applied to System Design: Ruth Malan

In this workshop, Ruth helped us apply some systems thinking tools to a toy problem. Examples are bubble diagrams and concept cards. One of the main lessons learned is to always design a system in its next larger context, e.g. design a chair in a room, or a room in a house, since the system’s properties interact with the system’s context.

Another lesson is to record decisions and review them later, both when an issue arises and periodically. This helps you see how decisions play out over time and fosters learning.

Teaching DDD – Facilitating Mindset Shifts: Tobias Goeschel

In this workshop, Tobias led us through an interesting approach to solving the problem of introducing a new technique to people who may not care about the technique per se.

We first identified job roles of people we may want to influence and placed them on a diagram with two axes: engine room → c-suite and technical → non-technical. We identified the 3 top things these people care about and brainstormed facts about the technique that impact these things.

We then identified the tools that come with the technique, like ubiquitous language or context mapping for DDD, and placed these on a similar diagram. The combination of the two diagrams shows what tools are likely to be of use for a given role.

Learning from Incidents as Continuous Design: Eric Dobbs

In this workshop, Eric taught us about the Learning From Incidents (LFI) community. It’s easy to spot errors in hindsight, but it’s much more difficult to encourage insights. We like to attribute the “root cause” to human error, so we have someone to blame. It’s much harder to see how the system enabled the human to make that error, because we can’t see the system directly, but only through representations.

After the theory, we applied some DDD techniques to the incident response process. I always love it when people combine different fields.

Closing keynote: Rebecca Wirfs-Brock

Name with accent replaced by weird symbols, as sent by InnoQ.

Rebecca talked about the various degrees of rigor a DDD model can have and how much rigor to apply in what situation. She gave examples of not enough rigor (Boeing 737 Max) and too much (limiting people’s names, like rejecting non-letter characters or characters with accents). Since my legal name is Rémon, I certainly could relate to the latter.

Conclusion

I absolutely loved this conference! The hands-on sessions gave me something that’s hard to get anywhere else: practical application of ideas under the guidance of experts in the field. As many people can attest, reading about e.g. event storming is very different from actually doing it.

Eric Evans and me

I also loved the networking opportunities. At lunch on Wednesday I sat next to the InfoQ editor for Architecture & Design, and on Thursday to µservices master Chris Richardson. (And the food was good too.)

The breaks in between sessions were 30 minutes, which gave me enough time to strike up some good conversations.

And to top it all off, Diana Montalion gave me a free, signed copy of her book!

Many thanks to Paul Rayner and team for organizing! Hopefully they can manage to put together another conference next year 🤞

Better Names for Hexagonal Architecture: Inbound/Outbound

I tend to remind myself (and others) that there is no silver bullet, and claims about universal applicability of certain patterns must be approached with a default attitude of skepticism.

One of the few exceptions is the hexagonal architecture, aka the Ports & Adapters pattern, discovered by Dr. Alistair Cockburn. He claims that most teams face a risk of technology change and that the pattern protects against this risk at very low cost.

I couldn’t agree more. This pattern has made it easy for me to switch technologies in the past. One event-driven application I worked on went from on-premise using Kafka to GCP with Pub/Sub, and finally to AWS with SNS & SQS, all in the span of 6 years. These migrations were relatively painless because we used the Ports & Adapters pattern.

Although I love this pattern and would recommend it for most code bases, I do have one gripe with the terminology used.

We like to say that naming things is one of the two hard problems in software development, but there is in fact solid guidance on what makes good names. Naming Things states that a good name has both high comprehension (it can be understood quickly) and high recall (it can be remembered easily).

One of the criteria is distinguishability: a name shouldn’t be easily confused with another name. For instance, names that differ in only one or two characters are difficult to distinguish.

This is where the Ports and Adapters pattern description fails: driver and driven look too much alike.

My proposal is to use the following two terms instead: inbound and outbound. These terms are much easier to distinguish, and they capture the essence of the pattern: there is a separation between the application and the external world, and communication between the two falls into two categories: world → application (inbound) and application → world (outbound).
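To make this concrete, here’s a minimal sketch in Java of what inbound and outbound ports could look like; the domain (greeting users) and all names are hypothetical:

```java
import java.util.Optional;

// Inbound port: world → application. Defined and implemented by the
// application core, called by inbound adapters such as REST controllers.
interface GreetUser {
    String greet(String userName);
}

// Outbound port: application → world. Defined by the application core,
// implemented by outbound adapters such as a database-backed repository.
interface UserRepository {
    Optional<String> findDisplayName(String userName);
}

// The application core implements the inbound port and uses the outbound port.
class Greeter implements GreetUser {
    private final UserRepository users;

    Greeter(UserRepository users) {
        this.users = users;
    }

    @Override
    public String greet(String userName) {
        return "Hello, " + users.findDisplayName(userName).orElse(userName) + "!";
    }
}
```

With this naming, a quick glance at a port tells you in which direction the communication flows.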

Has confusing terminology ever been a barrier to your team adopting useful architectural patterns? I’d love to hear your experiences in the comments.

Should we put the names of deciders in ADRs?

Architecture Decision Records (ADRs) are a wonderful tool. They force you to think through the options before you make a decision. They also provide institutional memory that’s independent of people, which is great for new joiners with their inevitable Why? questions.

Michael Nygard’s template shows what sections an ADR should have. There are several other templates out there as well.

Nygard’s template has a Status section, which only contains the status of the ADR. On most teams at Adevinta, we add the names of the people involved in making the decision to the Status section, something like Accepted by ray.sinnema on 2023-09-04. Most of the templates I’ve seen don’t have this, although some do, e.g. Joel Parker Henderson’s and Planguage.

Recently, someone challenged me on this practice, arguing that individuals don’t matter, since the team as a whole makes the decisions. I love such challenges, since they force me to think through and articulate my reasons better. So let’s look at why adding names to ADRs adds value.

Personal branding

Like it or not, it’s important for your name to pop up if you want a promotion. Now, inside your team people will know your strengths. But not all promotions are decided inside a team, i.e. by your manager.

The more you climb the ranks, the more important it becomes for people outside your team to see the value that you add.

Having your name in ADRs, especially the more impactful ones, helps with your personal brand. You can point to those ADRs as proof that you can make difficult decisions well and are ready for the next step in your career.

External stakeholders

Sometimes a decision can’t be made by the team alone. The decision may have budget impact, for instance, and need approval from a budget owner or from the Procurement department.

Or the decision may affect how you store personal data and the Privacy department needs to give their blessing.

Having the names of those outside collaborators in the ADR shows that you’ve consulted the right people and may prevent challenges to the decision from outside the team (CYA).

Contact person

Most companies have more than one team. It may be interesting for other teams to look at your team’s decisions.

For instance, you may have an ADR on a public cloud migration and the other team is about to embark on a similar migration. They could learn from your research into the options and their pros and cons.

Having your name in the ADR gives the other team a person to contact. If it’s not there, they need to contact the team’s manager and ask them for a person to reach out to. That may sound like a small step, but every extra roadblock hampers the transfer of knowledge.

Writing an ADR isn’t a daily activity. Many teams may go long periods of time without needing to write one. This means the opportunities for team members to learn how to make good decisions and write them up in ADRs are relatively scarce. Looking at other teams’ ADRs may help, especially if you can have a follow-up conversation with the decision makers.

Counterargument

Some might argue that you can see the ADR contributors from the Git history (you are versioning your ADRs, aren’t you?). However, that’s not universally true, because

  • Not everyone who contributes ideas to an ADR will create a commit or PR comment. This is especially true when you mob-write your ADRs.
  • Some people may not have access to Git, like PMs, or VPs.
  • If you make your ADRs available via some other system, like Backstage, it may not be clear to the reader where the ADR is stored.

Conclusion

I think there are good reasons for adding the names of the deciders to ADRs, especially in larger organizations. Whether this is in the Status section or in some other section doesn’t matter much.

Acknowledgments

Thanks to Francesco for raising this topic, to Alexey for helping me crystallize my thoughts, and to Gabe for reminding me to write about the counterargument.

The Anti-Corruption Microservice Pattern

Implement an anti-corruption microservice (ACM) to talk to an external system. Other microservices only communicate with the external system via the ACM. The ACM translates requests from the microservices system to the external system and back.

Use this pattern when you employ a microservices architecture and want to ensure the microservices’ designs are not limited by an external system. This pattern is an adaptation of the Anti-Corruption Layer (ACL) pattern for microservices architectures.

Context and problem

The context of the ACL pattern applies, but additionally, your application is made up of microservices and multiple of those communicate with the external system. If you’d only apply the ACL pattern, then you’d end up with multiple microservices that each have an ACL for the same external system.

Solution

Isolate the microservices from the external system by placing an Anti-Corruption Microservice (ACM) between them. This service translates communications between the application’s microservices on the one hand and the external system on the other.

The diagram above shows an application consisting of 4 microservices, 3 of which communicate with the same external system via an Anti-Corruption Microservice. Communication between the microservices and the ACM always uses the data model and architecture of the application. Calls from the ACM to the external system conform to the external system’s data model and methods. The ACM contains all the logic necessary to translate between the two systems.
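To illustrate, here’s a minimal sketch of the translation logic inside an ACM; the internal and external models are made up for the example:

```java
// Hypothetical application model (shared by the microservices).
record Customer(String id, String fullName) {}

// Hypothetical external model (dictated by the external system's vendor).
record VendorAccount(String accountNo, String firstName, String lastName) {}

// The ACM translates between the two models, so no other microservice
// ever needs to know the vendor's data model.
class CustomerTranslator {

    Customer toInternal(VendorAccount account) {
        return new Customer(account.accountNo(),
                account.firstName() + " " + account.lastName());
    }

    VendorAccount toExternal(Customer customer) {
        String[] parts = customer.fullName().split(" ", 2);
        return new VendorAccount(customer.id(), parts[0],
                parts.length > 1 ? parts[1] : "");
    }
}
```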

Issues and considerations

  • The ACM may add latency to calls between the two systems. Conversely, by caching data from calls by one microservice, the ACM might speed up calls by a different microservice.
  • The ACM is another microservice to be managed, maintained, and scaled.
  • The ACM doesn’t need to support all the features of the external system, just the ones used by the application’s microservices.

When to use this pattern

Use this pattern when:

  • Two or more systems have different semantics, but still need to communicate.
  • The external system is provided by a vendor and you want to minimize vendor lock-in. This elevates the idea of hexagonal architecture to the microservices world, where the ACM is a port and the external system an adapter.

Acknowledgements

Thanks to Dev for pushing me to write up this pattern.

Hexagonal Architecture helps keep tech debt low

Some people remain skeptical of the idea that tech debt can be kept low over the long haul.

I’ve noticed that when people say something can’t be done, it usually means that they don’t know how to do it. To help with that, this post explores an approach to keep one major source of tech debt under control.

Tech debt can grow without the application changing

Martin Fowler defines technical debt as “deficiencies in internal quality that make it harder than it would ideally be to modify and extend the system further”. Modifications and extensions are changes and there exist many types of those.

In this post, I’d like to focus on changes in perception. That might seem odd, but bear with me. Let’s look at some examples:

  • We are running our software on GCP and our company gets acquired by another that wants to standardize on AWS.
  • A new LTS version of Java comes out.
  • We use a logging library that all of a sudden doesn’t look so great anymore.
  • The framework that we build our system around is introducing breaking changes.
  • We learn about a new design pattern that we realize would make our code much simpler, or more resilient, or better in some other way.

These examples have in common that the change is in the way we look at our code rather than in the code itself. Although the code didn’t change, our definition of “internal quality” did and therefore so did our amount of technical debt.

Responding to changed perceptions

When our perception of the code changes, we think of the code as having tech debt and we want to change it. How do we best make this type of change? And if our code looks fine today but maybe not tomorrow, then what can we possibly do to prevent that?

The answers to such questions depend on the type of technology that is affected. Programming languages and frameworks are fundamentally different from infrastructure, libraries, and our own code.

Language changes come in different sizes. If you’ve picked a language that values stability, like Java, then you’re rarely if ever forced to make changes when adopting a new version. If you picked a more volatile language, well, that was a trade-off you made. (Hopefully with open eyes and documented in an ADR for bonus points.)

Even when you’re not forced to change, you may still want to, to benefit from new language constructs (like records or sealed classes for Java). You can define a project to update the entire code base in one fell swoop, but you’d probably need to formally schedule that work. It’s easier to only improve code that you touch in the course of your normal work, just like any other refactoring. Remember that you don’t need permission from anyone to keep your code clean, as this is the only way to keep development sustainable.

Frameworks are harder to deal with, since a framework is in control of the application and directs our code. It defines the rules and we have no choice but to modify our code if those rules change. That’s the trade-off we accepted when we opted to use the framework. Upgrading Spring Boot across major (or even minor) versions has historically been a pain, for example, but we accept that because the framework saves us a lot of time on a daily basis. There isn’t a silver bullet here, so be careful what you’re getting yourself into. Making a good trade-off analysis and recording it in an ADR is about the best we can do.

Libraries are a bit simpler because they impact the application less than frameworks. Still, there is a lot of variation in their impact. A library for a cross-cutting concern like logging may show up in many places, while one for generating passwords sees more limited use.

Much has been written about keeping application code easy to change. Follow the SOLID (or IDEALS) principles and employ the correct design patterns. If you do, then basically every piece of code treats every other piece of code as a library with a well-defined API.

Infrastructure can also be impactful. Luckily, work like upgrading databases, queues, and Kubernetes clusters can often economically be outsourced to cloud vendors. From the perspective of the application, that reduces infrastructure to a library as well. Obviously there is more to switching cloud vendors than switching libraries, like updating infrastructure as code, but from an application code perspective the difference is minimal.

This analysis shows that if we can figure out how to deal with changes in libraries, we would be able to effectively handle most changes that increase tech debt.

Hexagonal Architecture to the rescue

Luckily, the solution to this problem is fairly straightforward: localize dependencies. If only a small part of your code depends on a library, then any changes in that library can only affect that part of the code. A structured way of doing that is using a Hexagonal Architecture.

Hexagonal Architecture (aka Ports & Adapters) is an approach to localize dependencies. Variations are Onion Architecture and Clean Architecture. The easiest way to explain Hexagonal Architecture is to compare it to a more traditional three layer architecture:

Three layer vs hexagonal architecture

A three layer architecture separates code into layers that may only communicate “downward”. In particular, the business logic depends on the data access layer. Hexagonal Architecture replaces the notion of downward dependencies with inward ones. The essence of the application, the business logic, sits nicely in the middle of the visualization. Data access code depends on the business logic, instead of the other way around.

The inversion of dependencies between business logic and data access is implemented using ports (interfaces) and adapters (implementations of those interfaces). For example, accounting business logic may define an output port AccountRepository for storing and loading Account objects. If you’re using MySQL to store those Accounts, then a MySqlAccountRepository adapter implements the AccountRepository port.
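In code, that could look something like the following sketch (the Account fields and table schema are made up for the example):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;
import javax.sql.DataSource;

// Simplified domain type.
record Account(String id, long balanceInCents) {}

// Output port, defined by and belonging to the business logic.
interface AccountRepository {
    Optional<Account> findById(String id);
    void save(Account account);
}

// Adapter: implements the port using MySQL via plain JDBC.
class MySqlAccountRepository implements AccountRepository {
    private final DataSource dataSource;

    MySqlAccountRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Optional<Account> findById(String id) {
        String sql = "SELECT id, balance_in_cents FROM accounts WHERE id = ?";
        try (Connection connection = dataSource.getConnection();
                PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, id);
            try (ResultSet results = statement.executeQuery()) {
                return results.next()
                        ? Optional.of(new Account(results.getString("id"),
                                results.getLong("balance_in_cents")))
                        : Optional.empty();
            }
        } catch (SQLException e) {
            throw new IllegalStateException("Failed to load account " + id, e);
        }
    }

    @Override
    public void save(Account account) {
        String sql = "REPLACE INTO accounts (id, balance_in_cents) VALUES (?, ?)";
        try (Connection connection = dataSource.getConnection();
                PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, account.id());
            statement.setLong(2, account.balanceInCents());
            statement.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("Failed to save account " + account.id(), e);
        }
    }
}
```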

When you need to upgrade MySQL, changes are limited to the adapter. If you ever wanted to replace MySQL with some other data access technology, you’d simply add a new adapter and decommission the old one. You can even have both adapters live side by side for a while and activate one or the other depending on the environment the application is running in. This makes testing the migration easier.

You can use ports and adapters for more than data access, however. Need to use logging? Define a Logging port and an adapter for Log4J or whatever your preferred logging library is. Same for building PDFs, generating passwords, or really anything you’d want to use a library for.

This approach has other benefits as well.

Your code no longer has to suffer from poor APIs offered by the library, since you can design the port such that it makes sense to you. For example, you can use names that match your context and put method parameters in the order you prefer. You can reduce cognitive load by only exposing the subset of the functionality of the library that you need for your application. If you document the port, team members no longer need to look at the library’s documentation. (Unless they’re studying the adapter’s implementation, of course.)

Testing also becomes easier. Since a port is an interface, mocking the use of the library becomes trivial. You can write an abstract test against the port’s interface and derive concrete tests for the adapters that do nothing more than instantiate the adapter under test. Such contract tests ensure a smooth transition from one adapter of the port to the next, since the tests prove that both adapters work the same way.
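A sketch of such a contract test, using JUnit 5 and the hypothetical port and adapter from above:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Optional;
import javax.sql.DataSource;
import org.junit.jupiter.api.Test;

// Abstract contract test: behavior every AccountRepository adapter must exhibit.
abstract class AccountRepositoryContract {

    // Each concrete test supplies the adapter under test.
    protected abstract AccountRepository newRepository();

    @Test
    void findsWhatWasSaved() {
        AccountRepository repository = newRepository();
        Account account = new Account("a-1", 1_000);

        repository.save(account);

        assertEquals(Optional.of(account), repository.findById("a-1"));
    }
}

// Concrete test: does little more than instantiate the adapter under test.
class MySqlAccountRepositoryTest extends AccountRepositoryContract {
    @Override
    protected AccountRepository newRepository() {
        return new MySqlAccountRepository(testDataSource());
    }

    private DataSource testDataSource() {
        // Wire this to a test database, e.g. one started by Testcontainers.
        throw new UnsupportedOperationException("Provide a test DataSource");
    }
}
```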

Adopting Hexagonal Architecture

By now the benefits of Hexagonal Architecture should be clear. Some developers, however, are put off by the need to create separate ports, especially for trivial things. Many would balk at designing their own logging interface, for example. Luckily, this is not an all-or-nothing decision. You can make a trade-off analysis per library.

With Hexagonal Architecture, code constantly needs to be passed the ports it uses. A dependency injection framework can automate that.
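With Spring, for instance, constructor injection takes care of this; a sketch, where TransferService is hypothetical and reuses the AccountRepository port from above:

```java
import org.springframework.stereotype.Service;

// Spring finds the single AccountRepository bean (e.g. the MySQL adapter,
// registered as a @Repository) and passes it in; no manual wiring needed.
@Service
class TransferService {
    private final AccountRepository accounts;

    TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }
}
```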

It also helps to have a naming convention for the modules of your code, like packages in Java.

Here’s what we used on a recent Spring Boot application (*) I was involved with:

  • application.services
    The @SpringBootApplication class and related plumbing (like Spring Security configuration) to wire up and start the application.
  • domain.model
    Types that represent fundamental concepts in the domain with associated behavior.
  • domain.services
    Functionality that crosses domain model boundaries. It implements input ports and uses output ports.
  • ports.in
    Input ports that offer abstractions of domain services to input mechanisms like @Scheduled jobs and @Controllers.
  • ports.out
    Output ports that offer abstractions of technical services to the domain services.
  • infra
    Infrastructure that implements output ports, in other words, adapters. Packages in here represent either technology directly, like infra.pubsub, or indirectly for some functionality, like infra.store.gcs. The latter form allows competing implementations to live next to each other.
  • ui
    Interface into the application, like its user and programmatic interfaces, and any scheduled jobs.

(*) All the examples in this post are from the same application. This doesn’t mean that Hexagonal Architecture is only suitable for Java, or even Spring, applications. It can be applied anywhere.

Note that this package structure is a form of package by layer. It therefore works best for microservices, where you’ve implicitly already done packaging by feature on the service level. If you have a monolith, it makes sense for top-level packages to be about the domains and sub-packages to be split out like above.

You only realize the benefits of Hexagonal Architecture if you have the discipline to adhere to it. You can use an architectural fitness function to ensure that it’s followed. A tool like ArchUnit can automate such a fitness function, especially if you have a naming convention.
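For instance, with ArchUnit and the package convention above, a rule like the following sketch could enforce that domain code never depends on adapters (the base package name is an assumption):

```java
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;

class HexagonalArchitectureTest {

    @Test
    void domainDoesNotDependOnAdapters() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        noClasses()
                .that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..infra..")
                .check(classes);
    }
}
```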

What do you think? Does it sound like Hexagonal Architecture could improve your ability to keep tech debt low? Have you used it and not liked it? Please leave a comment below.

No need to manage technical debt

There are a lot of articles and presentations out there that discuss how to manage technical debt. In my opinion, most of these approaches are workarounds to fix a broken system. As usual, it’s much better to treat the disease than the symptoms.

Most of the discussions around technical debt take for granted that technical debt is unavoidable and will increase over time until it grinds development down to a halt. Unless we figure out a way to manage it.

This rests on two debatable assumptions.

The first assumption is that there has to be a battle of some kind between development and “product” or “the business” where “product” always wins, leading to technical debt. Consider an excerpt from this article:

The product manager describes the next feature they want to be added to the product. Developers give a high estimate for the time it takes to implement, which is seen as too long. The developers talk about having to deal with the implications of making changes to lots of hard to understand code or working around bugs in old libraries or frameworks. Then the developers ask for time to address these problems, and the product manager declines, referring to the big backlog of desired features that need to be implemented first.

The assumption is that a product manager or “the business” decides how software developers spend their time. While this might seem logical, since “the business” pays the developers’ salaries, it’s a very poor setup.

First of all, let’s consider this from a roles and responsibilities angle. Who will get yelled at (hopefully figuratively) when technical debt increases to the point that delays become a problem? If you think the product manager, then think again. If the developers are accountable for maintaining a sustainable pace of delivery, then they should have the responsibility to address technical debt as well. Otherwise we’re just setting developers up to fail.

Secondly, let’s look at this from a skills and knowledge perspective. Technical debt is just a fancy term for poor maintainability. Maintainability is one of several quality dimensions; the ISO 25010 standard, for example, breaks quality down into eight categories, of which maintainability is only one.

Product managers are great at functionality and (hopefully) usability, but they aren’t qualified to make tradeoffs between all these quality attributes. That’s what we have architects for. (Whether a team employs dedicated architects or has developers do architecture is beside the point for this discussion.)

If we take a more balanced approach to software development instead of always prioritizing new features, then technical debt will not grow uncontrollably.

The assumption that product managers should make all the decisions is wrong. We’ve long ago uncovered better ways of developing software. Let the product manager collaborate with the development team instead of dictating their every move. Then most of this self-inflicted pain that we call technical debt will simply never materialize and doesn’t need to be managed.

The second assumption underlying most of the discussions around technical debt is even more fundamental.

In the text above I mentioned a tradeoff between quality attributes and a collaboration to resolve priorities. But a sizable portion of what people call technical debt isn’t even about that. It’s about cutting corners to save a couple of minutes or hours of development time to make a deadline.

Let’s not go into how many (most?) deadlines are fairly arbitrary. Let’s accept them and look at how to deal with them.

Many developers overestimate the ratio of development time to total time for a feature to become available to end users (lead time). The savings of cutting corners in development really don’t add up to as much as one would think. Re-work negates many of the initial savings.

But the costs down the road are significant, so we shouldn’t be cutting corners as often as we do. Even under time pressure, the only way to go fast is to go well.

Developers have a responsibility to the organization that pays them to deliver software at a sustainable pace. They shouldn’t let anyone “collaborate” them out of that responsibility, much less let anyone dictate that. What would a doctor do when their manager told them just before an emergency operation not to wash their hands to save time? Maybe first do no harm wouldn’t be such a bad principle for software development either.

Technical debt will most likely not go away completely. But we shouldn’t let it get to the point that managing it is a big deal worthy of endless discussions, articles, and presentations.

Architecture Artifacts Cross-Checker

Last time we looked at architecture metrics. We stated then that the data required for calculating these metrics could come from a variety of sources. However, we all know that information about architectures is often not kept up-to-date…

So how do you keep your metrics reliable by keeping their inputs fresh?

In order to answer a question like that, it’s always good to understand the reasons diagrams and other artifacts go stale. It’s not that people deliberately let them rot; it’s more that they have many things to do and either simply forget to update artifacts or prioritize some other activity.

To solve the first problem, a simple reminder could be all it takes. But here we run the risk of crying wolf. So we need to make sure there is something to update before we send out a reminder.

Enter the architecture artifacts cross-checker. This is a tool that compares different inputs to verify they are consistent. Let’s look at some examples of inputs such a tool could verify.

External systems in a context diagram should also appear in the corresponding container diagram. If you have a tech radar, it should list at least the technologies that are in the container diagram. Containers in a container diagram should have a corresponding process in a data flow diagram. Threat models should calculate the risk of security threats against each of those. Etc. Etc.
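As a minimal sketch of the idea, assuming you’ve already parsed the artifacts into sets of names (e.g. from C4 diagram sources and a tech radar export):

```java
import java.util.HashSet;
import java.util.Set;

class ArchitectureArtifactsCrossChecker {

    // External systems in the context diagram must also appear in the container diagram.
    Set<String> missingFromContainerDiagram(Set<String> contextExternalSystems,
                                            Set<String> containerExternalSystems) {
        Set<String> missing = new HashSet<>(contextExternalSystems);
        missing.removeAll(containerExternalSystems);
        return missing;
    }

    // Technologies in the container diagram must be listed on the tech radar.
    Set<String> missingFromTechRadar(Set<String> containerTechnologies,
                                     Set<String> techRadarEntries) {
        Set<String> missing = new HashSet<>(containerTechnologies);
        missing.removeAll(techRadarEntries);
        return missing;
    }
}
```

Any non-empty result would trigger a reminder.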

We may even be able to tie some of the architecture artifacts to source code.

For instance, in a microservices architecture, each service may have its own source code repository or its own directory in a monorepo. And if you have coding standards for cross-service calls, you may be able to derive at compile time which containers call which at runtime. Alternatively, this information could come from runtime call patterns collected by a service mesh. Finally, running a tool like cloc gives you the technologies used in the service, which should be listed in the container diagram and in the tech radar.

By combining these diverse sources of information about your system, you can detect inconsistencies automatically and send out reminders for updates. Or even fail the build, if you want to go that far.

What do you think? Would a little bit of coding effort to write an architecture artifacts cross-checker be worth it in your situation? Please leave a comment below.

Architecture Metrics

Last time we saw how major tech projects continue to be difficult to schedule. One thing that can keep momentum going for a long-running initiative is the appropriate use of metrics. Improving scores let you visualize progress and maintain the motivation to keep going.

Let’s look at some metrics for software architectures.

Architecture is the art of making trade-offs between the quality attributes that matter most in a given context. A quality attribute is a measurable or testable property of a system that is used to indicate how well the system satisfies the needs of its stakeholders. An architecture metric should therefore be a combination of measurements of quality attributes deemed important for the architecture in question.

There are many potential quality attributes to choose from. The ISO/IEC 25010 standard describes 8 categories with a total of 31 quality attributes:

ISO 25010 Product Quality Model

Nothing is stopping you from including additional quality attributes, of course, if they make sense in your context.

However, you should not measure all possible quality attributes. Some of them won’t make much sense in your situation. Others would be prohibitively expensive to measure. Having too many will also dilute your message. When in doubt, err on the side of leaving some out; you can always add them later. Start small and iterate as you learn.

Quality Storming is one way to determine which quality attributes are important enough to be included in your architecture metric.

Once you’ve decided what quality attributes to measure, you need to define how you will measure them. Good metrics are comparative and understandable, and they are usually ratios.

How you measure a certain quality attribute also depends on your context.

For example, McCabe’s cyclomatic complexity or Uncle Bob Martin’s distance from the main sequence are valid candidate metrics for a monolith. But those make little sense for a Micro Services Architecture (MSA), since the complexity in an MSA moves from inside a service to the dependencies between the services. So if you’re living in an MSA world, maybe you should look instead at things like how much of the services’ data is exposed to other services, or what percentage of service calls cross domain boundaries.

When defining how to measure a quality attribute, think about how you’re going to collect the required data. If at all possible, automate the data collection. This will allow you to update the metrics more often, and it will probably also be less error-prone.

Some data could be collected by scanning the source code, e.g. for cyclomatic complexity. Other data could come from architecture diagrams, in particular container diagrams. Having a standard for those allows you to parse the diagrams for automated data collection. A threat model is a good source of data for security-related metrics.

Once you’ve defined how to measure all the quality attributes, the last step is to merge all those numbers into a single number that describes the overall quality of the architecture. You will most likely want to calculate a weighted average. To perform this calculation, you’ll need to define weights that indicate the relative importance of the quality attributes. The outcomes of Quality Storming can help with setting those weights.
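A sketch of that final calculation (the attribute names, scores, and weights are made up):

```java
import java.util.Map;

class ArchitectureScore {

    // Weighted average: sum(weight × score) / sum(weights).
    static double overall(Map<String, Double> scores, Map<String, Double> weights) {
        double weightedSum = 0.0;
        double totalWeight = 0.0;
        for (var entry : scores.entrySet()) {
            double weight = weights.getOrDefault(entry.getKey(), 0.0);
            weightedSum += weight * entry.getValue();
            totalWeight += weight;
        }
        return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
    }

    public static void main(String[] args) {
        var scores = Map.of("maintainability", 0.8, "security", 0.6, "performance", 0.9);
        var weights = Map.of("maintainability", 1.0, "security", 2.0, "performance", 1.0);
        System.out.println(overall(scores, weights)); // (0.8 + 1.2 + 0.9) / 4 = 0.725
    }
}
```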

What do you think? Do you define metrics for your architectures? If so, how? And in what way have they helped or hindered you? Please leave a comment below.

Major Tech Projects

Last time we saw how a target architecture can guide you in your day to day technical decision making. But what do you do when your target architecture is considerably different from your current architecture?

Ways of working differ considerably between organizations, but most tech companies nowadays use some form of agile process (or at least claim to), where a Product Owner prioritizes a backlog of smallish items for the team to work on. That’s the context we’ll assume for this post.

The question then becomes how you convince a business-oriented Product Owner to prioritize technical improvement work over value-added work, such as new features. This question becomes even more pressing when the amount of tech work is not small, but rather an entire project in itself. This type of work doesn’t fit all that well into the agile model of continuous delivery of value through small increments. So what to do?

First of all, it’s always good if you can split the work into smaller pieces. Those are easier to squeeze in and, more importantly, carry less risk. But even if you manage to do that for a big tech overhaul, you’ll end up with a long list of tech items that somehow need to find their way to the top of the backlog.

You could try to persuade the Product Owner in a separate discussion about each and every one of these tech items. However, since Product Owners are judged by how much value they deliver sooner rather than later, you’re going to be fighting an uphill battle. That doesn’t mean it can’t work in your context, of course, but in general the chances aren’t great when the number of tech items grows.

Alternatively, you could treat this as technical debt, assuming you have a well-functioning process for handling that. That’s quite an assumption, by the way. Some teams use their slack time to dig themselves out of holes, but most teams are not fortunate enough to have slack time. (As an industry, we’re slow learners. Mythical man-month, anyone?) Many teams I’ve seen that keep track of tech debt just keep growing it year over year and I hate to think about the teams that don’t even keep track of it.

Part of the problem with tech debt is that the metaphor is broken. Financial people will tell you that some amount of debt is good and that you shouldn’t get rid of all of it. That’s just not how developers see tech debt. I’ve also yet to see a team that regularly and consistently pays off tech debt, like one would make monthly payments to pay off, say, a mortgage.

Idea Flow argues that we should abandon the technical debt concept in favor of risk, and that we should let the team be guided by both a Product Owner, for business value, and a Technical Risk Manager, for keeping risk within an acceptable range. It’s an interesting concept that sounds like it may work, but the problem will be in estimating the risk. I haven’t seen this done in practice yet, so if you have experience with this, please leave a comment below.

A somewhat similar, but less refined, approach is to allocate two budgets for the team: one for doing value-added work and one for doing technical work. You could assign a 3:1 ratio to those budgets, for instance, which means 75% of developer time would be spent on adding business value and 25% on technical work items. Then if you have a year where you need to do some major technical work, you could ask for a temporary change in this ratio.

Yet another approach is the dreaded rewrite: you stop all value-added work for some time and do only technical work. This allows the team to make quick progress in the technical area, but is generally not liked all that much by the business. You can get away with this only every so often, as it tends to take a huge bite out of your political capital. But at some point it will become your only option if you don’t find a better way first.

What do you think? What ways to schedule time for major technical projects have worked for you? Please leave a comment below.

Data Flow Diagrams and Threat Models

Last time we looked at some generic diagrams from the C4 model, which are useful for most teams. This time we’re going to explore a more specific type of diagram that can be a tremendous help with security.

Data Flow Diagrams

A Data Flow Diagram (DFD), as the name indicates, shows the flow of data through the system. It depicts external entities, processes, data stores, and data flows. Larger systems usually have composite processes, which expand into their own DFD.

A simple data flow diagram might show, for example, a customer (external entity) placing an order with a web shop (process), which stores it in an orders database (data store).

Create a Data Flow Diagram from a Container Diagram

If you already have a container diagram of your system, then it’s easy to create a DFD from it:

  1. Convert containers into processes by drawing them as circles. You may or may not want to number them. Complicated containers with many processes running in them may be modeled as composite processes that expand into their own sub-DFDs.
  2. Convert external systems into external entities by drawing them as rectangles.
  3. Convert databases and queues into data stores by drawing them as horizontal parallel bars.
  4. Modify the directions of all the arrows as needed. In a container diagram, the direction of an arrow usually indicates who initiates a request. In a data flow diagram, the direction of an arrow indicates the direction in which data flows. This could be both ways.
    Some people use arrows to indicate data flows in container diagrams as well. In that case, you don’t have anything to do in this step.

Now, why would you go through the trouble to convert a container diagram into a DFD? There are several uses for DFDs, but in this post I want to focus on using them for threat modeling.

Threat Models

In threat modeling, we look for design flaws with security implications. These are different from implementation bugs, which makes them hard to detect using code-based techniques such as Static Application Security Testing (SAST) or security code reviews. But the flip side of that is that you can do threat modeling even before any code is written. And you should!

Threat modeling falls under Threat Assessment in the Software Assurance Maturity Model.

There are many ways to build threat models, but the one I’ve found easiest to understand for developers with limited security knowledge (which is the vast majority) is to use STRIDE, an acronym for the types of security threats that exist:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

Using DFDs to Build Threat Models

Not all of the STRIDE threats are applicable to all elements of our system, and this is where the DFD comes in handy, since each DFD element maps nicely onto a subset of the STRIDE threats:

  • External entities: Spoofing, Repudiation
  • Processes: all six STRIDE threats
  • Data flows: Tampering, Information disclosure, Denial of service
  • Data stores: Tampering, Repudiation, Information disclosure, Denial of service

So now you have a structured process for reviewing your architecture from a security perspective:

  1. Create a DFD from your container diagram (or from scratch if you don’t have one)
  2. Identify the threats using STRIDE
  3. Score the threats using the Common Vulnerability Scoring System (CVSS)
  4. Manage the risks posed by the identified threats, e.g. by avoiding, mitigating, transferring, or accepting them

I’ve found that using CVSS scores for threats makes it easier to decide how to manage them. You can set a security risk appetite for your system, for example accept threats in the None through Medium levels and work to avoid or mitigate High or Critical levels.
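For reference, CVSS v3 maps scores to severity levels (None 0.0, Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0). Here’s a sketch of applying a risk appetite to scored threats; the threats and the threshold are made up:

```java
import java.util.List;

class RiskAppetite {

    // CVSS v3.x qualitative severity ratings.
    static String severity(double score) {
        if (score == 0.0) return "None";
        if (score <= 3.9) return "Low";
        if (score <= 6.9) return "Medium";
        if (score <= 8.9) return "High";
        return "Critical";
    }

    record Threat(String name, double cvssScore) {}

    public static void main(String[] args) {
        double appetite = 6.9; // accept anything up to Medium
        List<Threat> threats = List.of(
                new Threat("Spoofed API client", 8.1),
                new Threat("Verbose error messages", 4.3));

        threats.stream()
                .filter(threat -> threat.cvssScore() > appetite) // needs avoidance or mitigation
                .forEach(threat -> System.out.println(
                        threat.name() + ": " + severity(threat.cvssScore())));
    }
}
```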

CVSS scores also go a long way towards making the threat assessment objective, which makes it easier to convince people that work should be done to reduce the security risk.

Costs and Benefits of Threat Models

If all of this seems like a lot of work, especially for a big system, that’s because it is. Sorry.

It’s not quite as bad as it may seem, however, because DFD elements can usually be grouped since they all behave the same from a security perspective. So then you only have to score and manage the group as a whole rather than all the elements in it individually. But still, threat modeling is a considerable investment.

When I’ve done this activity with developers, it has always provided a lot of value. We usually find one or more security threats that we really need to manage better. And most of the time participants get a much better understanding of their system, which helps them in their non-security work as well.

What do you think? Is threat modeling using data flow diagrams and STRIDE something you’re willing to give a shot, or do you prefer a method that requires less work (but offers less protection), like abuse cases? Please leave a comment.