Event storming icons

Most engineering disciplines specialize around a domain. Engineers trained in that field speak the same language as the people requesting them to build a system. In contrast, software developers need to learn the language of the domain. This makes it harder to elicit requirements.

Subject-matter experts (SMEs), by definition, are experts. They’ve accumulated a lot of knowledge over a long period of time. It’s hard for them to think back to when they didn’t have all that knowledge. This makes it hard for them to know what to explain, what to skip, and even what to mention at all. And since the business analyst is new to the domain, they don’t know what questions to ask. The result is an iterative process that takes a lot of time to get things right.

Worse, it’s uncommon for SMEs to be experts in the entire domain. More often, multiple SMEs each have a clear picture of one part of the process, and nobody has a picture of the whole. This results in conflicting points of view, which need resolution before building software. However, it takes a while before the analyst knows enough to ask the hard questions and bring those conflicts into the open.

Event storming is a technique that solves these issues. It’s a workshop where the key stakeholders work together to build up a consistent picture of the entire process under consideration.

In event storming, the SMEs integrate the various perspectives rather than the business analyst. Because they use a standard notation, non-experts can follow what they’re doing, force them to be precise, ask the hard questions, and bring conflicts out for resolution. Everybody’s learning is compressed, and the domain model emerges as a natural byproduct.

Event storming uses the following concepts:

  • A domain event is anything that happens that’s of interest to an SME. (orange)
  • A command triggers an event. (blue)
  • An aggregate accepts commands and emits events. (yellow)
  • A policy contains the decision on how to react to an event. (purple)
  • A read model holds the information necessary to make a decision. (green)
  • A person is a human being responsible for a given decision. (yellow)
  • An external system is another system that interacts with the system under consideration. (pink)

In an event storming workshop, sticky notes of a particular color represent each of these concepts. Workshop participants place the stickies on a wall in timeline order to visualize the business process.

A specific grammar governs event storming concepts, in the sense that certain items always come before or after others. It’s this grammar that allows people who aren’t domain experts to ask intelligent questions, like what command leads to this event, and who issues it?

I like event storming, because it employs useful concepts, like domain events, that are non-technical and yet naturally map to technical concepts, like messages published to a message broker. As such, the output of the workshop is useful long after the workshop is over.
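For example, a domain event identified in the workshop can later become a message type in code. Here’s a hypothetical sketch; the event name and fields are made up for illustration:

import java.time.Instant;

// Hypothetical domain event, captured on an orange sticky during the workshop
// and later published as a message to a broker.
record OrderPlaced(String orderId, Instant placedAt) {}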

That raises the question of how best to use that output. During the workshop, using stickies makes a lot of sense to facilitate interaction. Afterwards, however, a bit more formal notation would be nice.

I created a set of icons that represent the event storming concepts. These icons maintain the colors of the stickies, but add symbols to visually represent the concepts. Here’s the event storming grammar visualized using these icons:

I’ve used these icons to document processes in a way that both techies and non-techies can follow. The icons are released under the Creative Commons Attribution 4.0 International license, so feel free to use them in your own work.

Performance and TDD

TDD works wonders for developing code that meets functional requirements. But what about non-functional requirements? Let’s take a look at one that most users care about: performance.

Most TDD examples are necessarily small, so that the author can see the process through to completion. That leaves little room for intricacies like performance. But it’s not impossible. Let’s try this with the WordWrap kata, where we have to ensure lines aren’t too long by inserting newline characters in appropriate places.

As usual, we start with input validation:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import org.junit.jupiter.api.Test;

class WhenWrappingWords {

    @Test
    void shouldHandleEmptyText() {
        assertWrap(null, "");
    }

    private void assertWrap(String text, String expected) {
        assertThat(text, Wrapper.wrap(text, 5), is(expected));
    }

}

Which is easy enough:

public class Wrapper {

    public static String wrap(String text, int length) {
         return "";
    }

}

Next, the degenerate case, where the text doesn’t require newlines:

    @Test
    void shouldNotWrapShortText() {
        assertWrap("short", "short");
    }
    public static String wrap(String text, int length) {
        if (text == null) {
             return "";
        }
        return text;
    }

Now we get to the meat: longer texts require wrapping:

    @Test
    void shouldWrapLongText() {
        assertWrap("toolong", "toolo\nng");
    }
    private static final char NL = '\n';

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        if (text.length() <= length) {
            return text;
        }
        return text.substring(0, length) + NL 
            + text.substring(length);
    }

But, if possible, we should wrap at word boundaries rather than in the middle of a word:

    @Test
    void shouldPreferToWrapAtWhitespace() {
        assertWrap("too long", "too\nlong");
    }
    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        if (text.length() <= length) {
            return text;
        }
        var index = text.lastIndexOf(' ', length);
        if (index < 0) {
            return text.substring(0, length) + NL
                + text.substring(length);
        }
        return text.substring(0, index) + NL
            + text.substring(index + 1);
    }

And finally, we should wrap into multiple lines if needed:

    @Test
    void shouldWrapVeryLongTextMultipleTimes() {
        assertWrap("toolongtext", "toolo\nngtex\nt");
        assertWrap("too long text", "too\nlong\ntext");
    }
    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        if (text.length() <= length) {
            return text;
        }
        var index = text.lastIndexOf(' ', length);
        if (index < 0) {
            return text.substring(0, length) + NL 
                + wrap(text.substring(length), length);
        }
        return text.substring(0, index) + NL 
            + wrap(text.substring(index + 1), length);
    }

Which we can clean up a bit:

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        if (text.length() <= length) {
            return text;
        }
        var index = text.lastIndexOf(' ', length);
        var skip = 1;
        if (index < 0) {
            index = length;
            skip = 0;
        }
        return text.substring(0, index) + NL 
            + wrap(text.substring(index + skip), length);
    }

Now let’s consider the performance of this code. Can we use it to format a book? A novel has around 100,000 words and an English word consists of 5.1 letters on average. Let’s say we want to wrap lines at 80 characters:

    private static final int NUM_WORDS_IN_BOOK = 100_000;
    private static final float AVG_NUM_CHARS_PER_WORD = 5.1f;
    private static final int MAX_BOOK_LINE_LENGTH = 80;
    private static final int NUM_TRIES = 10;
    private static final float MAX_WRAPPING_MS = 1000;

    private final Random random = new SecureRandom();

    @Test
    void shouldWrapBook() {
        var time = 0;
        for (var i = 0; i < NUM_TRIES; i++) {
            var text = randomStringOfBookLength();
            var start = System.currentTimeMillis();
            Wrapper.wrap(text, MAX_BOOK_LINE_LENGTH);
            var stop = System.currentTimeMillis();
            time += stop - start;
        }
        assertThat(1.0f * time / NUM_TRIES, 
            lessThanOrEqualTo(MAX_WRAPPING_MS));
    }

    private String randomStringOfBookLength() {
        var numCharsInBook = (int) (NUM_WORDS_IN_BOOK * (1 + AVG_NUM_CHARS_PER_WORD));
        var result = new StringBuilder(numCharsInBook);
        for (var i = 0; i < numCharsInBook; i++) {
            result.append(randomChar());
        }
        return result.toString();
    }

    private char randomChar() {
        if (random.nextFloat() < 1.0 / (1 + AVG_NUM_CHARS_PER_WORD)) {
            return ' ';
        }
        return (char) (random.nextInt(26) + 'a');
    }

Normally, you’d use the Java Microbenchmark Harness to investigate the performance of an algorithm like this. I don’t want to introduce new tech for this already long post, however, so this test will have to do. Note that we have to run multiple tries, since we’re using randomness.

Running this test gives a stack overflow, so clearly we need to do something about that.

In this case, it would be easy to replace the recursion with a while loop, so we could just go do that and see the test pass. In the real world, however, things usually aren’t that simple.

This is where the Strategy pattern can come in handy. With multiple implementations of the strategy interface, we can run our tests against all of them. We can develop alternative implementations from scratch, using TDD, or copy some code into a new implementation and start modifying it. Once we’re satisfied with the results, we can keep the best implementation and remove the others.
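As a rough sketch of what that could look like for this kata (the interface and test class below are mine, not part of the original code), we could introduce a strategy interface and run the same assertions against every implementation:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

// Sketch only: a strategy interface for wrapping text.
interface WrapStrategy {

    String wrap(String text, int length);

}

class WhenWrappingWordsWithAnyStrategy {

    // Every candidate implementation gets listed here; a method reference
    // stands in for a full-blown class.
    static Stream<WrapStrategy> strategies() {
        return Stream.of(Wrapper::wrap);
    }

    @ParameterizedTest
    @MethodSource("strategies")
    void shouldWrapLongText(WrapStrategy strategy) {
        assertThat(strategy.wrap("toolong", 5), is("toolo\nng"));
    }

}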

But hang on, we used TDD to get to this implementation, so how is doing that again going to give us a different result?

Well, when we did it the first time, we weren’t focused on performance. We shouldn’t have been, since premature optimization is the root of all evil. Now that we have proof that our performance isn’t good enough, things are different. Let’s see how that plays out.

The implementation of the first two tests can remain the same:

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        return text;
    }

To make shouldWrapLongText() pass, we need to pay more attention to performance this time. We don’t want to use substring() and add two Strings together, since that involves copying characters. So let’s use a StringBuilder instead:

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        var result = new StringBuilder(text);
        if (result.length() > length) {
            result.insert(length, NL);
        }
        return result.toString();
    }

This still means we have to copy some arrays around to make room for the newline. We can avoid that by allocating enough capacity from the start:

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        var capacity = text.length() + text.length() / length;
        var result = new StringBuilder(capacity);
        result.append(text);
        if (result.length() > length) {
            result.insert(length, NL);
        }
        return result.toString();
    }

This would normally be looking ahead a bit too much for my taste, but since we already implemented the algorithm once, we know for sure we’re going to need this, so I’m cool with it.

Next let’s make shouldPreferToWrapAtWhitespace() pass:

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        var result = new StringBuilder(text.length() + text.length() / length);
        result.append(text);
        if (result.length() > length) {
            var spaceIndex = text.lastIndexOf(' ', length);
            if (spaceIndex < 0) {
                result.insert(length, NL);
            } else {
                result.setCharAt(spaceIndex, NL);
            }
        }
        return result.toString();
    }

Finally, we can generalize the if to a while to make the last test pass:

    public static String wrap(String text, int length) {
        if (text == null) {
            return "";
        }
        var capacity = text.length() + text.length() / length;
        var result = new StringBuilder(capacity);
        result.append(text);
        var columnEnd = length;
        while (columnEnd < result.length()) {
            var spaceIndex = result.lastIndexOf(" ", columnEnd);
            if (spaceIndex < columnEnd - length) {
                result.insert(columnEnd, NL);
                columnEnd += length + 1;
            } else {
                result.setCharAt(spaceIndex, NL);
                columnEnd = spaceIndex + 1 + length;
            }
        }
        return result.toString();
    }

This passes all our tests, including the one about performance.

The above do-over may seem like a wasteful approach: why wouldn’t we do it “right” from the start? Like I said earlier, we didn’t because we didn’t know that our implementation wasn’t going to perform well. But what if we did know from the start that performance was important?

We could’ve written our tests in a different order, tackling the test for performance earlier in the process. That would’ve prevented us from getting to green with recursion in this example, saving us a bit of time. In a real-world scenario, it might have saved a lot more time. Yet again, we see that the order of tests is important.

I would argue, however, that not much time was lost with the initial approach. I still believe that the proper order is make it pass, make it right, make it fast. One of the reasons TDD works so well is the explicit distinction between making the test green and then refactoring. Doing one thing at a time is solid advice when it comes to addressing performance as well.

I’ll accept a little bit of rework, knowing that I’ll win back that time and more in all the cases where the “right” solution is also fast enough and I don’t waste time on premature optimization.

Sprint considered harmful

At this time of the year, many people like to slow down and reflect. It seems as good a time as any, then, to take offense at the word “sprint” in the context of software development.

I firmly believe that words have meaning, semantic diffusion be damned. Merriam-Webster defines sprint as “to run or go at top speed especially for a short distance”. Most software development doesn’t go just “a short distance”. And as far as I know, nobody ever won a marathon by running 422 consecutive 100m sprints. So the sprint analogy breaks down pretty badly.

The marathon analogy isn’t any better, however. Runners often hit a wall between 30 and 35 kilometers, also known as the man with the hammer. This is due to depletion of glycogen stored in the muscles, which forces the body to transition to alternative energy sources, like breaking down fat. Since this is much less efficient, the runner’s body struggles to maintain the same level of performance.

The equivalent in software development is known as a death march. The team is pushed to its limits at the expense of work-life balance, the well-being of its members, and the quality of the work it produces.

This isn’t a good model for what we want to happen. We want a sustainable pace that developers can keep up for as long as it takes to complete the project. We need them sharp to do their best work. Sleep-deprived or stressed-out people don’t perform well, and software development is hard enough as it is.

So let’s not talk about sprints anymore, shall we?

What then, is a good alternative?

Well, the word “sprint” is used in software development in the context of Agile methods, in particular Scrum. Agile methods are iterative in nature: they split up delivery into small pieces. The word iterative comes from iteration, which is exactly the word that eXtreme Programming and other Agile methods use instead of “sprint”. Turning to Merriam-Webster again, we find that iteration is “a procedure in which repetition of a sequence of operations yields results successively closer to a desired result.” That sounds about right.

Exercise for the reader: What’s wrong with the phrase “best practices?” (And why do we always seem to need more than one?) Hint: look up the Cynefin framework.

Canon TDD example: Roman numerals

People continue to misunderstand Test-Driven Development (TDD). Kent Beck, who wrote the book on it, recently published an article called Canon TDD, which aims to remove some of those misunderstandings. This post shows an example.

Suppose you need Roman numerals in your application. Maybe you want to show years that way, or the hours on a clock, or the titles of Super Bowl games. The problem we’ll look at is to calculate a string with the Roman-numeral representation of a number. We’ll solve that problem using Canon TDD.

The first step is to write a list of test scenarios we want to cover. We use Wikipedia as a reference to compile this list:

  1. Input validation: Roman numerals can’t express numbers smaller than 1 or bigger than 3999.
  2. Numbers are written using a subset of the Latin alphabet (I, V, X, L, C, D, and M), which each have a fixed integer value (1, 5, 10, 50, 100, 500, 1000).
  3. Roman numerals are constructed by appending partial solutions, e.g. 1005 = MV.
  4. We should always start with the highest valued symbol, e.g. 5 = V rather than IIIII.
  5. Subtractive notation shortens some numerals, e.g. 40 = 50 – 10 = XL rather than XXXX.

Step two is to pick one test from this list and express it in code. The order of tests matters. Some tests may force you to make a large change in your code. Always pick the next test such that you make it pass in an obvious way. It’s usually good to start with input validation, so that your programs are secure from the beginning. An additional benefit is that input validation is usually simple.

Here’s the first test:

class WhenConvertingNumbersToRomanNumerals {

    @Test
    void shouldRejectInexpressibleNumbers() {
        assertThrows(RomanNumeral.InexpressibleException.class, () -> 
                RomanNumeral.from(0));
    }

}

This is where we design our interface: we want a RomanNumeral class with a static method from() that accepts a number, returns a String, and throws an InexpressibleException on invalid input. To make the test compile, we have to add this code:

public class RomanNumeral {

    public static String from(int value) {
        return null;
    }


    public static class InexpressibleException 
            extends IllegalArgumentException {
    }

}

This fails because there is no exception thrown.

The third step is to make the test pass. The easiest way is to unconditionally throw the expected exception:

    public static String from(int value) {
        throw new InexpressibleException();
    }

The fourth step is to optionally refactor. There isn’t enough code yet for that to make sense.

Now we continue with the next cycle at step two. For the next test, we could add another check for invalid input, but that wouldn’t change the code, so that’s not a smart choice. Tests for items 3-5 on the list all depend on the symbols in 2, so that’s the obvious next choice:

    @Test
    void shouldConvertBaseSymbols() {
        assertThat(RomanNumeral.from(1), is("I"));
    }

This fails, as expected, with an exception. We need to throw the exception in some cases, but not others. In other words, we need an if statement:

    public static String from(int value) {
        if (value == 0) {
            throw new InexpressibleException();
        }
        return "I";
    }

Note how the code deals exclusively with the two tests that we wrote, and nothing else. Not only is the test coverage 100%; any test other than these two would fail. Such tests will force us to generalize the code.

Let’s do the easy part first and complete the input validation, starting by generalizing from 0 to all non-positive numbers:

    @ParameterizedTest
    @ValueSource(ints = {-1, 0})
    void shouldRejectInexpressibleNumbers(int value) {
        assertThrows(RomanNumeral.InexpressibleException.class, () ->
                RomanNumeral.from(value));
    }
    public static String from(int value) {
        if (value <= 0) {
            throw new InexpressibleException();
        }
        return "I";
    }

With the lower bound in place, let’s add the upper bound:

    @ParameterizedTest
    @ValueSource(ints = {-1, 0, 4000, 4001})
    void shouldRejectInexpressibleNumbers(int value) {
        assertThrows(RomanNumeral.InexpressibleException.class, () ->
                RomanNumeral.from(value));
    }
    public static String from(int value) {
        if (value <= 0 || value >= 4000) {
            throw new InexpressibleException();
        }
        return "I";
    }

This works, but doesn’t look great, so let’s do some refactoring to make it more expressive:

    private static final int MIN_EXPRESSIBLE = 1;
    private static final int MAX_EXPRESSIBLE = 3999;

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        return "I";
    }

    private static boolean isExpressible(int value) {
        return MIN_EXPRESSIBLE <= value && value <= MAX_EXPRESSIBLE;
    }

Now we can cross item 1 off our list of tests. For our next test, we can pick from either 2 (more symbols) or 3 (additive form). The former would introduce an if statement that then generalizes into a switch with a bunch of random facts, while the latter would force us to develop a bit of an algorithm. That algorithm we can then apply to the other symbols. This sounds like an easier hill to climb than the reverse, where we would have to develop an algorithm that can deal with all the symbols. OK, here goes:

    @ParameterizedTest
    @CsvSource({"1,I", "2,II"})
    void shouldPerformAddition(int value, String expected) {
        assertThat(RomanNumeral.from(value), is(expected));
    }

Note that we renamed the test to better reflect what it is that we’re testing. The simplest way to make this test pass is to generalize the constant into a variable and to add to that variable inside an if:

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        var result = "I";
        if (value > 1) {
            result += "I";
        }
        return result;
    }

This looks messy, but bear with me. Things will become clearer after we add the next test:

    @ParameterizedTest
    @CsvSource({"1,I", "2,II", "3,III"})
    void shouldPerformAddition(int value, String expected) {
        assertThat(RomanNumeral.from(value), is(expected));
    }

To make this pass, we have to generalize the if to a while:

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        var result = "I";
        while (value > 1) {
            result += "I";
            value--;
        }
        return result;
    }

If we clean up a bit, we’re starting to see an algorithm form:

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        var result = "";
        while (value >= 1) {
            result += "I";
            value -= 1;
        }
        return result;
    }

The constants in this piece of code are related: "I" is the symbol for 1, so the algorithm adds "I" for as long as it needs. Let’s make this relationship clearer:

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        var result = "";
        var numeral = new Numeral("I", 1);
        while (value >= numeral.value()) {
            result += numeral.text();
            value -= numeral.value();
        }
        return result;
    }

    private record Numeral(String text, int value) {
    }

We can improve this code further. The from() method deals with a nice abstraction of inexpressible numbers, but also with a whole bunch of details about adding texts and subtracting numbers. So let’s extract all those details into a method of their own:

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        return convert(value);
    }

    private static String convert(int value) {
        var result = "";
        var numeral = new Numeral("I", 1);
        while (value >= numeral.value()) {
            result += numeral.text();
            value -= numeral.value();
        }
        return result;
    }

Another code smell here is that we change the value of the parameter. We can solve that in two ways. The first is to assign the parameter to a local variable and then use that variable everywhere we now use the parameter. The second is to introduce a local variable and compare it to the parameter. This turns out to be more instructive:

    private static String convert(int value) {
        var result = "";
        var progress = 0;
        var numeral = new Numeral("I", 1);
        while (value - progress >= numeral.value()) {
            result += numeral.text();
            progress += numeral.value();
        }
        return result;
    }

Now we can see something interesting: the combination of result and progress is very much like a Numeral. But in order to express that, we need to be able to add two Numerals:

    private static String convert(int value) {
        var result = new Numeral("", 0);
        var numeral = new Numeral("I", 1);
        while (value - result.value() >= numeral.value()) {
            result = result.add(numeral);
        }
        return result.text();
    }

    private record Numeral(String text, int value) {

        Numeral add(Numeral addens) {
            return new Numeral(text + addens.text,
                    value + addens.value);
        }

    }

Now the contours of the algorithm are starting to become more apparent: we will build up the result by processing our numeral. Presumably, we’ll add processing of other numerals later. In order to prepare for that, let’s tidy the code a bit more by extracting the part that handles the numeral variable. If we did that on the current code, however, the extracted method would need to take result and value as parameters, in addition to numeral. That’s because this is a static method, and thus not able to use fields. Let’s fix that. First we make convert() an instance method:

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        return new RomanNumeral().convert(value);
    }

    private String convert(int value) {
        var result = new Numeral("", 0);
        var numeral = new Numeral("I", 1);
        while (value - result.value() >= numeral.value()) {
            result = result.add(numeral);
        }
        return result.text();
    }

Then we can turn result and value into fields. We also rename value to target and convert() to complete() to better express their meaning:

    private final int target;
    private Numeral current = new Numeral("", 0);

    private RomanNumeral(int target) {
        this.target = target;
    }

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        return new RomanNumeral(value).complete();
    }

    private String complete() {
        var numeral = new Numeral("I", 1);
        while (target - current.value() >= numeral.value()) {
            current = current.add(numeral);
        }
        return current.text();
    }

Now we can finally extract the handling of one numeral into its own method:

    private String complete() {
        var numeral = new Numeral("I", 1);
        add(numeral);
        return current.text();
    }

    private void add(Numeral numeral) {
        while (target - current.value() >= numeral.value()) {
            current = current.add(numeral);
        }
    }

We may even want to extract another method to express the algorithm better:

    private void add(Numeral numeral) {
        while (remainder() >= numeral.value()) {
            current = current.add(numeral);
        }
    }

    private int remainder() {
        return target - current.value();
    }

That was a lot of refactoring, but look at what it did to the design.

We have now finished item 3 on our list. Let’s continue with item 2:

    @ParameterizedTest
    @CsvSource({"1,I", "5,V"})
    void shouldConvertBaseSymbols(int value, String expected) {
        assertThat(RomanNumeral.from(value), is(expected));
    }

We can make this pass by generalizing our single numeral into a list:

    private String complete() {
        var numerals = List.of(new Numeral("V", 5), new Numeral("I", 1));
        numerals.forEach(this::add);
        return current.text();
    }

Note that our algorithm only works if we process the numerals from high to low (item 4 on our test list). It’s now easy to add the other base symbols:

    @ParameterizedTest
    @CsvSource({"1,I", "5,V", "10,X", "50,L", "100,C", "500,D", "1000,M"})
    void shouldConvertBaseSymbols(int value, String expected) {
        assertThat(RomanNumeral.from(value), is(expected));
    }
    private String complete() {
        var numerals = List.of(
                new Numeral("M", 1000),
                new Numeral("D", 500),
                new Numeral("C", 100),
                new Numeral("L", 50),
                new Numeral("X", 10),
                new Numeral("V", 5),
                new Numeral("I", 1));
        numerals.forEach(this::add);
        return current.text();
    }

This concludes items 2 and 4 of our test list. The only thing left to do is item 5:

    @ParameterizedTest
    @CsvSource({"4,IV"})
    void shouldShortenWithSubtractiveNotation(int value, String expected) {
        assertThat(RomanNumeral.from(value), is(expected));
    }

We can make this pass simply by adding that numeral explicitly:

    private String complete() {
        var numerals = List.of(
                new Numeral("M", 1000),
                new Numeral("D", 500),
                new Numeral("C", 100),
                new Numeral("L", 50),
                new Numeral("X", 10),
                new Numeral("V", 5),
                new Numeral("IV", 4),
                new Numeral("I", 1));
        numerals.forEach(this::add);
        return current.text();
    }

While that works, it isn’t pretty. There is duplication in the values, since IV is really V minus I, or 5 - 1 = 4. Let’s express that better:

    private String complete() {
        var v = new Numeral("V", 5);
        var i = new Numeral("I", 1);
        var numerals = List.of(
                new Numeral("M", 1000),
                new Numeral("D", 500),
                new Numeral("C", 100),
                new Numeral("L", 50),
                new Numeral("X", 10),
                v,
                v.subtract(i),
                i);
        numerals.forEach(this::add);
        return current.text();
    }

    private record Numeral(String text, int value) {

        public Numeral subtract(Numeral subtrahens) {
            return new Numeral(subtrahens.text + text,
                    value - subtrahens.value);
        }

    }

The others are similar:

    @ParameterizedTest
    @CsvSource({"4,IV", "9,IX", "40,XL", "90,XC", "400,CD", "900,CM"})
    void shouldShortenWithSubtractiveNotation(int value, String expected) {
        assertThat(RomanNumeral.from(value), is(expected));
    }
    private String complete() {
        var i = new Numeral("I", 1);
        var v = new Numeral("V", 5);
        var x = new Numeral("X", 10);
        var l = new Numeral("L", 50);
        var c = new Numeral("C", 100);
        var d = new Numeral("D", 500);
        var m = new Numeral("M", 1000);
        var numerals = List.of(
                m,
                m.subtract(c),
                d,
                d.subtract(c),
                c,
                c.subtract(x),
                l,
                l.subtract(x),
                x,
                x.subtract(i),
                v,
                v.subtract(i),
                i);
        numerals.forEach(this::add);
        return current.text();
    }

Now we see another form of duplication in the way the list of numerals is constructed. There is a pattern that repeats itself for every power of 10. We should extract that pattern:

    private String complete() {
        var i = new Numeral("I", 1);
        var v = new Numeral("V", 5);
        var x = new Numeral("X", 10);
        var l = new Numeral("L", 50);
        var c = new Numeral("C", 100);
        var d = new Numeral("D", 500);
        var m = new Numeral("M", 1000);
        var numerals = new TreeSet<>(comparing(Numeral::value).reversed());
        numerals.addAll(includeSubtractives(i, v, x));
        numerals.addAll(includeSubtractives(x, l, c));
        numerals.addAll(includeSubtractives(c, d, m));
        numerals.forEach(this::add);
        return current.text();
    }

    private Collection<Numeral> includeSubtractives(Numeral one,
            Numeral five, Numeral ten) {
        return List.of(
                one,
                five.subtract(one),
                five,
                ten.subtract(one),
                ten);
    }

Note that we have to use a set to remove duplicate numerals and we need to sort that set in the correct order for the algorithm to work.

We’re still not done, though. The way we group the base numerals into subtractives isn’t random; there’s a pattern to that too:

    private String complete() {
        var i = new Numeral("I", 1);
        var v = new Numeral("V", 5);
        var x = new Numeral("X", 10);
        var l = new Numeral("L", 50);
        var c = new Numeral("C", 100);
        var d = new Numeral("D", 500);
        var m = new Numeral("M", 1000);
        var baseNumerals = List.of(i, v, x, l, c, d, m);
        var numerals = new TreeSet<>(comparing(Numeral::value).reversed());
        for (var index = 0; index < baseNumerals.size() - 1; index += 2) {
            numerals.addAll(includeSubtractives(
                    baseNumerals.get(index),
                    baseNumerals.get(index + 1),
                    baseNumerals.get(index + 2)));
        }
        numerals.forEach(this::add);
        return current.text();
    }

We can clean that up further by extracting the base numerals into a constant:

    private static final List<Numeral> BASE_NUMERALS = List.of(
            new Numeral("I", 1), 
            new Numeral("V", 5), 
            new Numeral("X", 10), 
            new Numeral("L", 50), 
            new Numeral("C", 100), 
            new Numeral("D", 500), 
            new Numeral("M", 1000));

    private String complete() {
        var numerals = new TreeSet<>(comparing(Numeral::value).reversed());
        for (var index = 0; index < BASE_NUMERALS.size() - 1; index += 2) {
            numerals.addAll(includeSubtractives(
                    BASE_NUMERALS.get(index),
                    BASE_NUMERALS.get(index + 1),
                    BASE_NUMERALS.get(index + 2)));
        }
        numerals.forEach(this::add);
        return current.text();
    }

And extracting the creation of numerals into its own method:

    private String complete() {
        numerals().forEach(this::add);
        return current.text();
    }

    private Collection<Numeral> numerals() {
        var result = new TreeSet<>(comparing(Numeral::value).reversed());
        for (var index = 0; index < BASE_NUMERALS.size() - 1; index += 2) {
            result.addAll(includeSubtractives(
                    BASE_NUMERALS.get(index),
                    BASE_NUMERALS.get(index + 1),
                    BASE_NUMERALS.get(index + 2)));
        }
        return result;
    }

The code now expresses all concepts well and our test list is empty, so we’re done.

Here’s the complete solution:

import java.util.*;

import static java.util.Comparator.comparing;


public class RomanNumeral {

    private static final int MIN_EXPRESSIBLE = 1;
    private static final int MAX_EXPRESSIBLE = 3999;
    private static final List<Numeral> BASE_NUMERALS = List.of(
            new Numeral("I", 1),
            new Numeral("V", 5),
            new Numeral("X", 10),
            new Numeral("L", 50),
            new Numeral("C", 100),
            new Numeral("D", 500),
            new Numeral("M", 1000));

    private final int target;
    private Numeral current = new Numeral("", 0);

    private RomanNumeral(int target) {
        this.target = target;
    }

    public static String from(int value) {
        if (!isExpressible(value)) {
            throw new InexpressibleException();
        }
        return new RomanNumeral(value).complete();
    }

    private static boolean isExpressible(int value) {
        return MIN_EXPRESSIBLE <= value && value <= MAX_EXPRESSIBLE;
    }

    private String complete() {
        numerals().forEach(this::add);
        return current.text();
    }

    private Collection<Numeral> numerals() {
        var result = new TreeSet<>(comparing(Numeral::value).reversed());
        for (var index = 0; index < BASE_NUMERALS.size() - 1; index += 2) {
            result.addAll(includeSubtractives(
                    BASE_NUMERALS.get(index),
                    BASE_NUMERALS.get(index + 1),
                    BASE_NUMERALS.get(index + 2)));
        }
        return result;
    }

    private Collection<Numeral> includeSubtractives(Numeral one,
            Numeral five, Numeral ten) {
        return List.of(
                one,
                five.subtract(one),
                five,
                ten.subtract(one),
                ten);
    }

    private void add(Numeral numeral) {
        while (remainder() >= numeral.value()) {
            current = current.add(numeral);
        }
    }

    private int remainder() {
        return target - current.value();
    }


    public static class InexpressibleException
            extends IllegalArgumentException {
    }


    private record Numeral(String text, int value) {

        Numeral add(Numeral addens) {
            return new Numeral(text + addens.text,
                    value + addens.value);
        }

        public Numeral subtract(Numeral subtrahens) {
            return new Numeral(subtrahens.text + text,
                    value - subtrahens.value);
        }

    }

}

Should we put the names of deciders in ADRs?

Architecture Decision Records (ADRs) are a wonderful tool. They force you to think through the options before you make a decision. They also provide institutional memory that’s independent of people, which is great for new joiners with their inevitable Why? questions.

Michael Nygard’s template shows what sections an ADR should have. There are several other templates out there as well.

Nygard’s template has a Status section, which only contains the status of the ADR. In most teams at Adevinta, we add the names of the people involved in making the decision to the Status section, something like Accepted by ray.sinnema on 2023-09-04. Most of the templates I’ve seen don’t have this, although some do, e.g. Joel Parker Henderson’s and Planguage.

Recently, someone challenged me on this practice. They argued that individuals don’t matter, since the team as a whole makes the decisions. I love such challenges, since they force me to think through and articulate my reasons better. So let’s look at why adding names to ADRs adds value.

Personal branding

Like it or not, it’s important for your name to pop up if you want a promotion. Now, inside your team people will know your strengths. But not all promotions are decided inside a team, i.e. by your manager.

The more you climb the ranks, the more important it becomes for people outside your team to see the value that you add.

Having your name in ADRs, especially the more impactful ones, helps with your personal brand. You can point to those ADRs as proof that you can make difficult decisions well and are ready for the next step in your career.

External stakeholders

Sometimes a decision can’t be made by the team alone. The decision may have budget impact, for instance, and need approval from a budget owner or from the Procurement department.

Or the decision may affect how you store personal data and the Privacy department needs to give their blessing.

Having the names of those outside collaborators in the ADR shows that you’ve consulted the right people and may prevent challenges to the decision from outside the team (CYA).

Contact person

Most companies have more than one team. It may be interesting for other teams to look at your team’s decisions.

For instance, you may have an ADR on a public cloud migration and the other team is about to embark on a similar migration. They could learn from your research into the options and their pros and cons.

Having your name in the ADR gives the other team a person to contact. If it’s not there, they need to contact the team’s manager and ask them for a person to reach out to. That may sound like a small step, but every extra roadblock hampers the transfer of knowledge.

Writing an ADR isn’t a daily activity. Many teams go long periods of time without needing to write one. This means the opportunities for team members to learn how to make good decisions and write them up in ADRs are relatively scarce. Looking at other teams’ ADRs may help, especially if you can have a follow-up conversation with the decision makers.

Counterargument

Some might argue that you can see the ADR contributors from the Git history (you are versioning your ADRs, aren’t you?). However, that’s not universally true, because

  • Not everyone who contributes ideas to an ADR will create a commit or PR comment. This is especially true when you mob-write your ADRs.
  • Some people may not have access to Git, like PMs, or VPs.
  • If you make your ADRs available via some other system, like Backstage, it may not be clear to the reader where the ADR is stored.

Conclusion

I think there are good reasons for adding the names of the deciders to ADRs, especially in larger organizations. Whether this happens in the Status section or in some other section doesn’t matter much.

Acknowledgments

Thanks to Francesco for raising this topic, to Alexey for helping me crystallize my thoughts, and to Gabe for reminding me to write about the counterargument.

The Anti-Corruption Microservice Pattern

Implement an anti-corruption microservice (ACM) to talk to an external system. Other microservices only communicate with the external system via the ACM. The ACM translates requests from the microservices system to the external system and back.

Use this pattern when you employ a microservices architecture and you want to ensure that the microservices’ designs are not limited by an external system. This pattern is an adaptation of the Anti-Corruption Layer (ACL) pattern for microservices architectures.

Context and problem

The context of the ACL pattern applies but, additionally, your application is made up of microservices, and several of them communicate with the external system. If you only applied the ACL pattern, you’d end up with multiple microservices that each have an ACL for the same external system.

Solution

Isolate the microservices from the external system by placing an Anti-Corruption Microservice (ACM) between them. This service translates communications between the application’s microservices on the one hand and the external system on the other.

The diagram above shows an application consisting of 4 microservices, 3 of which communicate with the same external system via an Anti-Corruption Microservice. Communication between the microservices and the ACM always uses the data model and architecture of the application. Calls from the ACM to the external system conform to the external system’s data model and methods. The ACM contains all the logic necessary to translate between the two systems.
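To make the translation concrete, here’s a minimal sketch in Java; the external CRM, its contact model, and all names are hypothetical:

// Application-side model, shared by our microservices.
record Customer(String id, String displayName) {}

// Model of a hypothetical external CRM system.
record CrmContact(String contactId, String firstName, String lastName) {}

// Translation logic that lives inside the ACM and nowhere else.
class CustomerTranslator {

    Customer toCustomer(CrmContact contact) {
        return new Customer(contact.contactId(),
                contact.firstName() + " " + contact.lastName());
    }

    CrmContact toCrmContact(Customer customer) {
        var parts = customer.displayName().split(" ", 2);
        return new CrmContact(customer.id(), parts[0],
                parts.length > 1 ? parts[1] : "");
    }

}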

Issues and considerations

  • The ACM may add latency to calls between the two systems. Conversely, by caching data from calls by one microservice, the ACM might speed up calls by a different microservice.
  • The ACM is another microservice to be managed, maintained, and scaled.
  • The ACM doesn’t need to support all the features of the external system, just the ones used by the application’s microservices.

When to use this pattern

Use this pattern when:

  • Two or more systems have different semantics, but still need to communicate.
  • The external system is provided by a vendor and you want to minimize vendor lock-in. This elevates the idea of hexagonal architecture to the microservices world, where the ACM is a port and the external system an adapter.

Related resources

Acknowledgements

Thanks to Dev for pushing me to write up this pattern.

Hexagonal Architecture helps keep tech debt low

Some people remain skeptical of the idea that tech debt can be kept low over the long haul.

I’ve noticed that when people say something can’t be done, it usually means that they don’t know how to do it. To help with that, this post explores an approach to keep one major source of tech debt under control.

Tech debt can grow without the application changing

Martin Fowler defines technical debt as “deficiencies in internal quality that make it harder than it would ideally be to modify and extend the system further”. Modifications and extensions are changes and there exist many types of those.

In this post, I’d like to focus on changes in perception. That might seem odd, but bear with me. Let’s look at some examples:

  • We are running our software on GCP and our company gets acquired by another that wants to standardize on AWS.
  • A new LTS version of Java comes out.
  • We use a logging library that all of a sudden doesn’t look so great anymore.
  • The framework that we build our system around is introducing breaking changes.
  • We learn about a new design pattern that we realize would make our code much simpler, or more resilient, or better in some other way.

These examples have in common that the change is in the way we look at our code rather than in the code itself. Although the code didn’t change, our definition of “internal quality” did and therefore so did our amount of technical debt.

Responding to changed perceptions

When our perception of the code changes, we think of the code as having tech debt and we want to change it. How do we best make this type of change? And if our code looks fine today but maybe not tomorrow, then what can we possibly do to prevent that?

The answers to such questions depend on the type of technology that is affected. Programming languages and frameworks are fundamentally different from infrastructure, libraries, and our own code.

Language changes come in different sizes. If you’ve picked a language that values stability, like Java, then you’re rarely if ever forced to make changes when adopting a new version. If you picked a more volatile language, well, that was a trade-off you made. (Hopefully with open eyes and documented in an ADR for bonus points.)

Even when you’re not forced to change, you may still want to, to benefit from new language constructs (like records or sealed classes for Java). You can define a project to update the entire code base in one fell swoop, but you’d probably need to formally schedule that work. It’s easier to only improve code that you touch in the course of your normal work, just like any other refactoring. Remember that you don’t need permission from anyone to keep your code clean, as this is the only way to keep development sustainable.

Frameworks are harder to deal with, since a framework is in control of the application and directs our code. It defines the rules and we have no choice but to modify our code if those rules change. That’s the trade-off we accepted when we opted to use the framework. Upgrading Spring Boot across major (or even minor) versions has historically been a pain, for example, but we accept that because the framework saves us a lot of time on a daily basis. There isn’t a silver bullet here, so be careful what you’re getting yourself into. Making a good trade-off analysis and recording it in an ADR is about the best we can do.

Libraries are a bit simpler because they impact the application less than frameworks. Still, there is a lot of variation in their impact. A library for a cross-cutting concern like logging may show up in many places, while one for generating passwords sees more limited use.

Much has been written about keeping application code easy to change. Follow the SOLID (or IDEALS) principles and employ the correct design patterns. If you do, then basically every piece of code treats every other piece of code as a library with a well-defined API.

Infrastructure can also be impactful. Luckily, work like upgrading databases, queues, and Kubernetes clusters can often economically be outsourced to cloud vendors. From the perspective of the application, that reduces infrastructure to a library as well. Obviously there is more to switching cloud vendors than switching libraries, like updating infrastructure as code, but from an application code perspective the difference is minimal.

This analysis shows that if we can figure out how to deal with changes in libraries, we would be able to effectively handle most changes that increase tech debt.

Hexagonal Architecture to the rescue

Luckily, the solution to this problem is fairly straightforward: localize dependencies. If only a small part of your code depends on a library, then any changes in that library can only affect that part of the code. A structured way of doing that is using a Hexagonal Architecture.

Hexagonal Architecture (aka Ports & Adapters) is an approach to localize dependencies. Variations are Onion Architecture and Clean Architecture. The easiest way to explain Hexagonal Architecture is to compare it to a more traditional three layer architecture:

Three layer vs hexagonal architecture

A three layer architecture separates code into layers that may only communicate “downward”. In particular, the business logic depends on the data access layer. Hexagonal Architecture replaces the notion of downward dependencies with inward ones. The essence of the application, the business logic, sits nicely in the middle of the visualization. Data access code depends on the business logic, instead of the other way around.

The inversion of dependencies between business logic and data access is implemented using ports (interfaces) and adapters (implementations of those interfaces). For example, accounting business logic may define an output port AccountRepository for storing and loading Account objects. If you’re using MySQL to store those Accounts, then a MySqlAccountRepository adapter implements the AccountRepository port.
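Here’s a minimal sketch of that idea. The Account type is a placeholder, and the in-memory adapter merely stands in for a real MySqlAccountRepository built on JDBC or JPA:

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Placeholder domain type.
record Account(String id, long balanceInCents) {}

// Output port, defined by and owned by the business logic.
interface AccountRepository {

    Optional<Account> findById(String id);

    void save(Account account);

}

// Adapter: one possible implementation of the port. A MySqlAccountRepository
// would implement the same interface using JDBC or JPA instead of a map.
class InMemoryAccountRepository implements AccountRepository {

    private final Map<String, Account> accounts = new HashMap<>();

    @Override
    public Optional<Account> findById(String id) {
        return Optional.ofNullable(accounts.get(id));
    }

    @Override
    public void save(Account account) {
        accounts.put(account.id(), account);
    }

}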

When you need to upgrade MySQL, changes are limited to the adapter. If you ever wanted to replace MySQL with some other data access technology, you’d simply add a new adapter and decommission the old one. You can even have both adapters live side by side for a while and activate one or the other depending on the environment the application is running in. This makes testing the migration easier.

You can use ports and adapters for more than data access, however. Need to use logging? Define a Logging port and an adapter for Log4J or whatever your preferred logging library is. Same for building PDFs, generating passwords, or really anything you’d want to use a library for.

This approach has other benefits as well.

Your code no longer has to suffer from poor APIs offered by the library, since you can design the port such that it makes sense to you. For example, you can use names that match your context and put method parameters in the order you prefer. You can reduce cognitive load by only exposing the subset of the functionality of the library that you need for your application. If you document the port, team members no longer need to look at the library’s documentation. (Unless they’re studying the adapter’s implementation, of course.)

Testing also becomes easier. Since a port is an interface, mocking the use of the library becomes trivial. You can write an abstract test against the port’s interface and derive concrete tests for the adapters that do nothing more than instantiate the adapter under test. Such contract tests ensure a smooth transition from one adapter of the port to the next, since the tests prove that both adapters work the same way.
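A sketch of such a contract test, reusing the hypothetical AccountRepository port from the earlier sketch:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import java.util.Optional;

import org.junit.jupiter.api.Test;

// Abstract contract test against the port; every adapter gets a trivial subclass.
abstract class AccountRepositoryContractTest {

    // Each concrete test class supplies the adapter it wants to verify.
    protected abstract AccountRepository newRepository();

    @Test
    void shouldFindSavedAccount() {
        var repository = newRepository();
        var account = new Account("acc-1", 1_000);

        repository.save(account);

        assertThat(repository.findById("acc-1"), is(Optional.of(account)));
    }

}

class InMemoryAccountRepositoryTest extends AccountRepositoryContractTest {

    @Override
    protected AccountRepository newRepository() {
        return new InMemoryAccountRepository();
    }

}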

Adopting Hexagonal Architecture

By now the benefits of Hexagonal Architecture should be clear. Some developers, however, are put off by the need to create separate ports, especially for trivial things. Many would balk at designing their own logging interface, for example. Luckily, this is not an all-or-nothing decision. You can make a trade-off analysis per library.

With Hexagonal Architecture, code constantly needs to be passed the ports it uses. A dependency injection framework can automate that.
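With Spring, for example, that could look like this sketch, again using the hypothetical AccountRepository port:

import org.springframework.stereotype.Service;

// The domain service asks for the port; Spring injects whichever adapter is
// configured, so the business logic never references a concrete implementation.
@Service
class AccountService {

    private final AccountRepository accounts;

    AccountService(AccountRepository accounts) {
        this.accounts = accounts;
    }

}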

It also helps to have a naming convention for the modules of your code, like packages in Java.

Here’s what we used on a recent Spring Boot application (*) I was involved with:

  • application.services
    The @SpringBootApplication class and related plumbing (like Spring Security configuration) to wire up and start the application.
  • domain.model
    Types that represent fundamental concepts in the domain with associated behavior.
  • domain.services
    Functionality that crosses domain model boundaries. It implements input ports and uses output ports.
  • ports.in
    Input ports that offer abstractions of domain services to input mechanisms like @Scheduled jobs and @Controllers.
  • ports.out
    Output ports that offer abstractions of technical services to the domain services.
  • infra
    Infrastructure that implements output ports, in other words, adapters. Packages in here represent either technology directly, like infra.pubsub, or indirectly for some functionality, like infra.store.gcs. The latter form allows competing implementations to live next to each other.
  • ui
    Interface into the application, like its user and programmatic interfaces, and any scheduled jobs.

(*) All the examples in this post are from the same application. This doesn’t mean that Hexagonal Architecture is only suitable for Java, or even Spring, applications. It can be applied anywhere.

Note that this package structure is a form of package by layer. It therefore works best for microservices, where you’ve implicitly already done packaging by feature on the service level. If you have a monolith, it makes sense for top-level packages to be about the domains and sub-packages to be split out like above.

You only realize the benefits of Hexagonal Architecture if you have the discipline to adhere to it. You can use an architectural fitness function to ensure that it’s followed. A tool like ArchUnit can automate such a fitness function, especially if you have a naming convention.
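For example, an ArchUnit rule based on the package convention above might look something like this (the root package is an assumption):

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

@AnalyzeClasses(packages = "com.example.app")
class HexagonalArchitectureTest {

    // The domain and its ports must not depend on adapters or UI code.
    @ArchTest
    static final ArchRule domainIsIsolated = noClasses()
            .that().resideInAnyPackage("..domain..", "..ports..")
            .should().dependOnClassesThat().resideInAnyPackage("..infra..", "..ui..");

}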

What do you think? Does it sound like Hexagonal Architecture could improve your ability to keep tech debt low? Have you used it and not liked it? Please leave a comment below.

No need to manage technical debt

There are a lot of articles and presentations out there that discuss how to manage technical debt. In my opinion, most of these approaches are workarounds to fix a broken system. As usual, it’s much better to treat the disease than the symptoms.

Most of the discussions around technical debt take for granted that technical debt is unavoidable and will increase over time until it grinds development down to a halt. Unless we figure out a way to manage it.

This rests on two debatable assumptions.

The first assumption is that there has to be a battle of some kind between development and “product” or “the business” where “product” always wins, leading to technical debt. Consider an excerpt from this article:

The product manager describes the next feature they want to be added to the product. Developers give a high estimate for the time it takes to implement, which is seen as too long. The developers talk about having to deal with the implications of making changes to lots of hard to understand code or working around bugs in old libraries or frameworks. Then the developers ask for time to address these problems, and the product manager declines, referring to the big backlog of desired features that need to be implemented first.

The assumption is that a product manager or “the business” decides how software developers spend their time. While this might seem logical, since “the business” pays the developers’ salaries, it’s a very poor setup.

First of all, let’s consider this from a roles and responsibilities angle. Who will get yelled at (hopefully figuratively) when technical debt increases to the point that delays become a problem? If you think the product manager, then think again. If the developers are accountable for maintaining a sustainable pace of delivery, then they should have the responsibility to address technical debt as well. Otherwise we’re just setting developers up to fail.

Secondly, let’s look at this from a skills and knowledge perspective. Technical debt is just a fancy term for poor maintainability, and maintainability is only one of the quality dimensions that the ISO 25010 standard defines for product quality.

Product managers are great at functionality and (hopefully) usability, but they aren’t qualified to make tradeoffs between all these quality attributes. That’s what we have architects for. (Whether a team employs dedicated architects or has developers do architecture is beside the point for this discussion.)

If we take a more balanced approach to software development instead of always prioritizing new features, then technical debt will not grow uncontrollably.

The assumption that product managers should make all the decisions is wrong. We’ve long ago uncovered better ways of developing software. Let the product manager collaborate with the development team instead of dictating their every move. Then most of this self-inflicted pain that we call technical debt will simply never materialize and doesn’t need to be managed.

The second assumption underlying most of the discussions around technical debt is even more fundamental.

In the text above I mentioned a tradeoff between quality attributes and a collaboration to resolve priorities. But a sizable portion of what people call technical debt isn’t even about that. It’s about cutting corners to save a couple of minutes or hours of development time to make a deadline.

Let’s not go into how many (most?) deadlines are fairly arbitrary. Let’s accept them and look at how to deal with them.

Many developers overestimate the ratio of development time to the total time it takes for a feature to become available to end users (the lead time). The savings from cutting corners in development really don’t add up to as much as one would think, and re-work negates many of the initial savings.

But the costs down the road are significant, so we shouldn’t be cutting corners as often as we do. Even under time pressure, the only way to go fast is to go well.

Developers have a responsibility to the organization that pays them to deliver software at a sustainable pace. They shouldn’t let anyone “collaborate” them out of that responsibility, much less let anyone dictate that. What would a doctor do if their manager told them, just before an emergency operation, not to wash their hands to save time? Maybe first, do no harm wouldn’t be such a bad principle for software development either.

Technical debt will most likely not go away completely. But we shouldn’t let it get to the point that managing it is a big deal worthy of endless discussions, articles, and presentations.

Architecture Artifacts Cross-Checker

Last time we looked at architecture metrics. We stated then that the data required for calculating these metrics could come from a variety of sources. However, we all know that information about architectures is often not kept up-to-date…

So how do you keep your metrics reliable by keeping their inputs fresh?

In order to answer a question like that, it’s always good to understand why diagrams and other artifacts go stale. It’s not that people deliberately let them rot; rather, they have many things to do and either simply forget to update artifacts or prioritize some other activity.

To solve the first problem, forgetting, a simple reminder may be all it takes. But here we run the risk of crying wolf, so we need to make sure there is something to update before we send out a reminder.

Enter the architecture artifacts cross-checker. This is a tool that compares different inputs to verify they are consistent. Let’s look at some examples of inputs such a tool could verify.

External systems in a context diagram should also appear in the corresponding container diagram. If you have a tech radar, it should list at least the technologies that appear in the container diagram. Containers in a container diagram should have a corresponding process in a data flow diagram. Threat models should assess the risk of security threats against each of those processes. And so on.
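
As a minimal sketch of one such check, the following assumes the context and container diagrams are C4-PlantUML files with hypothetical names context.puml and containers.puml, and verifies that every external system in the context diagram also appears in the container diagram:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// One cross-check: every external system in the C4 context diagram must also
// appear in the container diagram. Assumes both diagrams are PlantUML files
// using the C4 macro System_Ext(alias, "label", ...).
public class ArtifactsCrossChecker {

    private static final Pattern EXTERNAL_SYSTEM =
            Pattern.compile("System_Ext\\((\\w+)");

    public static void main(String[] args) throws Exception {
        Set<String> inContext = externalSystems(Path.of("context.puml"));
        Set<String> inContainers = externalSystems(Path.of("containers.puml"));

        inContext.removeAll(inContainers);
        if (!inContext.isEmpty()) {
            System.err.println("Missing from container diagram: " + inContext);
            System.exit(1); // fail the build, or just send a reminder instead
        }
    }

    private static Set<String> externalSystems(Path diagram) throws Exception {
        Set<String> result = new HashSet<>();
        Matcher matcher = EXTERNAL_SYSTEM.matcher(Files.readString(diagram));
        while (matcher.find()) {
            result.add(matcher.group(1));
        }
        return result;
    }
}
```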

We may even be able to tie some of the architecture artifacts to source code.

For instance, in a microservices architecture, each service may have its own source code repository or its own directory in a monorepo. And if you have coding standards for cross-service calls, you may be able to derive at compile time which containers call which at runtime. Alternatively, this information could come from runtime call patterns collected by a service mesh. Finally, running a tool like cloc reveals the technologies used in a service, which should be listed in the container diagram and in the tech radar.

By combining these diverse sources of information about your system, you can detect inconsistencies automatically and send out reminders for updates. Or even fail the build, if you want to go that far.

What do you think? Would a little bit of coding effort to write an architecture artifacts cross-checker be worth it in your situation? Please leave a comment below.

Architecture Metrics

Last time we saw how major tech projects continue to be difficult to schedule. One thing that can keep momentum going for a long-running initiative is the appropriate use of metrics. Improving scores allow you to visualize progress and maintain motivation to keep going.

Let’s look at some metrics for software architectures.

Architecture is the art of making trade-offs between the quality attributes that matter most in a given context. A quality attribute is a measurable or testable property of a system that is used to indicate how well the system satisfies the needs of its stakeholders. An architecture metric should therefore be a combination of measurements of quality attributes deemed important for the architecture in question.

There are many potential quality attributes to choose from. The ISO/IEC 25010 standard describes 8 categories with a total of 31 quality attributes:

(Figure: ISO 25010 Product Quality Model)

Nothing is stopping you from including additional quality attributes, of course, if they make sense in your context.

However, you should not measure all possible quality attributes. Some of them won’t make much sense in your situation. Others would be prohibitively expensive to measure. Having too many will also dilute your message. When in doubt, err on the side of leaving some out; you can always add them later. Start small and iterate as you learn.

Quality Storming is one way to determine which quality attributes are important enough to be included in your architecture metric.

Once you’ve decided what quality attributes to measure, you need to define how you will measure them. Good metrics are comparative and understandable, and they are usually ratios.

How you measure a certain quality attribute also depends on your context.

For example, McCabe’s cyclomatic complexity or Uncle Bob Martin’s distance from the main sequence are valid candidate metrics for a monolith. But those make little sense for a Micro Services Architecture (MSA), since the complexity in an MSA moves from inside a service to the dependencies between the services. So if you’re living in an MSA world, maybe you should look instead at things like how much of a service’s data is exposed to other services, or what percentage of service calls cross domain boundaries.

When defining how to measure a quality attribute, think about how you’re going to collect the required data. If at all possible, automate the data collection. This will allow you to update the metrics more often, and it will probably also be less error-prone.

Some data could be collected by scanning the source code, e.g. for cyclomatic complexity. Other data could come from architecture diagrams, in particular container diagrams. Having a standard for those allows you to parse the diagrams for automated data collection. A threat model is a good source of data for security-related metrics.

Once you’ve defined how to measure all the quality attributes, the last step is to merge all those numbers into a single number that describes the overall quality of the architecture. You will most likely want to calculate a weighted average. To perform this calculation, you’ll need to define weights that indicate the relative importance of the quality attributes. The outcomes of Quality Storming can help with setting those weights.
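
For example, here's a small sketch of how the individual scores might be combined into one number; the attribute names, weights, and scores are made up for illustration.

```java
import java.util.Map;

// Combine normalized quality-attribute scores (0..1) into a single
// architecture metric using a weighted average.
public class ArchitectureScore {

    public static double weightedAverage(Map<String, Double> scores,
                                         Map<String, Double> weights) {
        double weightedSum = 0.0;
        double totalWeight = 0.0;
        for (var entry : scores.entrySet()) {
            double weight = weights.getOrDefault(entry.getKey(), 0.0);
            weightedSum += weight * entry.getValue();
            totalWeight += weight;
        }
        return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
    }

    public static void main(String[] args) {
        // Example weights, e.g. from a Quality Storming session.
        var weights = Map.of("maintainability", 0.5, "security", 0.3, "performance", 0.2);
        // Example measurements, normalized to 0..1.
        var scores  = Map.of("maintainability", 0.8, "security", 0.6, "performance", 0.9);
        System.out.printf("Architecture score: %.2f%n", weightedAverage(scores, weights));
    }
}
```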

What do you think? Do you define metrics for your architectures? If so, how? And in what way have they helped or hindered you? Please leave a comment below.