Hexagonal Architecture helps keep tech debt low

Some people remain skeptical of the idea that tech debt can be kept low over the long haul.

I’ve noticed that when people say something can’t be done, it usually means that they don’t know how to do it. To help with that, this post explores an approach to keep one major source of tech debt under control.

Tech debt can grow without the application changing

Martin Fowler defines technical debt as “deficiencies in internal quality that make it harder than it would ideally be to modify and extend the system further”. Modifications and extensions are changes, and changes come in many forms.

In this post, I’d like to focus on changes in perception. That might seem odd, but bear with me. Let’s look at some examples:

  • We are running our software on GCP and our company gets acquired by another that wants to standardize on AWS.
  • A new LTS version of Java comes out.
  • We use a logging library that all of a sudden doesn’t look so great anymore.
  • The framework that we build our system around is introducing breaking changes.
  • We learn about a new design pattern that we realize would make our code much simpler, or more resilient, or better in some other way.

These examples have in common that the change is in the way we look at our code rather than in the code itself. Although the code didn’t change, our definition of “internal quality” did and therefore so did our amount of technical debt.

Responding to changed perceptions

When our perception of the code changes, we think of the code as having tech debt and we want to change it. How do we best make this type of change? And if our code looks fine today but maybe not tomorrow, then what can we possibly do to prevent that?

The answers to such questions depend on the type of technology that is affected. Programming languages and frameworks are fundamentally different from infrastructure, libraries, and our own code.

Language changes come in different sizes. If you’ve picked a language that values stability, like Java, then you’re rarely if ever forced to make changes when adopting a new version. If you picked a more volatile language, well, that was a trade-off you made. (Hopefully with open eyes and documented in an ADR for bonus points.)

Even when you’re not forced to change, you may still want to, to benefit from new language constructs (like records or sealed classes for Java). You can define a project to update the entire code base in one fell swoop, but you’d probably need to formally schedule that work. It’s easier to only improve code that you touch in the course of your normal work, just like any other refactoring. Remember that you don’t need permission from anyone to keep your code clean, as this is the only way to keep development sustainable.
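
For example, a hand-written data carrier can be collapsed into a record the next time you touch it anyway. A minimal sketch (the Money type is made up for illustration, not taken from any real code base):

public record Money(String currency, long amountInCents) {
  // Compact constructor: keep the validation the old hand-written class had.
  public Money {
    if (amountInCents < 0) {
      throw new IllegalArgumentException("Amount must not be negative");
    }
  }
}

The record gives you the constructor, accessors, equals, hashCode, and toString for free, which is exactly the kind of small win you can pick up during normal refactoring.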

Frameworks are harder to deal with, since a framework is in control of the application and directs our code. It defines the rules and we have no choice but to modify our code if those rules change. That’s the trade-off we accepted when we opted to use the framework. Upgrading Spring Boot across major (or even minor) versions has historically been a pain, for example, but we accept that because the framework saves us a lot of time on a daily basis. There isn’t a silver bullet here, so be careful what you’re getting yourself into. Making a good trade-off analysis and recording it in an ADR is about the best we can do.

Libraries are a bit simpler because they impact the application less than frameworks. Still, there is a lot of variation in their impact. A library for a cross-cutting concern like logging may show up in many places, while one for generating passwords sees more limited use.

Much has been written about keeping application code easy to change. Follow the SOLID (or IDEALS) principles and employ the correct design patterns. If you do, then basically every piece of code treats every other piece of code as a library with a well-defined API.

Infrastructure can also be impactful. Luckily, work like upgrading databases, queues, and Kubernetes clusters can often economically be outsourced to cloud vendors. From the perspective of the application, that reduces infrastructure to a library as well. Obviously there is more to switching cloud vendors than switching libraries, like updating infrastructure as code, but from an application code perspective the difference is minimal.

This analysis shows that if we can figure out how to deal with changes in libraries, we can effectively handle most of the changes that increase tech debt.

Hexagonal Architecture to the rescue

Luckily, the solution to this problem is fairly straightforward: localize dependencies. If only a small part of your code depends on a library, then any changes in that library can only affect that part of the code. A structured way of doing that is using a Hexagonal Architecture.

Hexagonal Architecture (aka Ports & Adapters) is an approach to localize dependencies. Variations are Onion Architecture and Clean Architecture. The easiest way to explain Hexagonal Architecture is to compare it to a more traditional three layer architecture:

Three layer vs hexagonal architecture

A three layer architecture separates code into layers that may only communicate “downward”. In particular, the business logic depends on the data access layer. Hexagonal Architecture replaces the notion of downward dependencies with inward ones. The essence of the application, the business logic, sits nicely in the middle of the visualization. Data access code depends on the business logic, instead of the other way around.

The inversion of dependencies between business logic and data access is implemented using ports (interfaces) and adapters (implementations of those interfaces). For example, accounting business logic may define an output port AccountRepository for storing and loading Account objects. If you’re using MySQL to store those Accounts, then a MySqlAccountRepository adapter implements the AccountRepository port.
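
Here’s a minimal sketch of what that could look like. The port and adapter names come from the example above; the Account fields and the JDBC details are illustrative assumptions, and each type would live in its own file and package in a real code base:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;
import javax.sql.DataSource;

// Illustrative domain type.
public record Account(String id, long balanceInCents) {}

// Output port, defined and owned by the business logic.
public interface AccountRepository {
  Optional<Account> findById(String accountId);
  void save(Account account);
}

// Adapter: the only place that knows about the database technology.
public class MySqlAccountRepository implements AccountRepository {
  private final DataSource dataSource;

  public MySqlAccountRepository(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  @Override
  public Optional<Account> findById(String accountId) {
    String sql = "SELECT id, balance_in_cents FROM accounts WHERE id = ?";
    try (Connection connection = dataSource.getConnection();
        PreparedStatement statement = connection.prepareStatement(sql)) {
      statement.setString(1, accountId);
      try (ResultSet resultSet = statement.executeQuery()) {
        return resultSet.next()
            ? Optional.of(new Account(resultSet.getString("id"), resultSet.getLong("balance_in_cents")))
            : Optional.empty();
      }
    } catch (SQLException e) {
      throw new IllegalStateException("Failed to load account " + accountId, e);
    }
  }

  @Override
  public void save(Account account) {
    // Insert/update code along the same lines, omitted for brevity.
  }
}

The business logic only ever sees AccountRepository; the MySQL specifics stay behind the adapter.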

When you need to upgrade MySQL, changes are limited to the adapter. If you ever wanted to replace MySQL with some other data access technology, you’d simply add a new adapter and decommission the old one. You can even have both adapters live side by side for a while and activate one or the other depending on the environment the application is running in. This makes testing the migration easier.

You can use ports and adapters for more than data access, however. Need to use logging? Define a Logging port and an adapter for Log4J or whatever your preferred logging library is. Same for building PDFs, generating passwords, or really anything you’d want to use a library for.
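
As a sketch, a logging port can be tiny, with an adapter that delegates to Log4j 2. The interface shape below is an assumption; the whole point is that you design it to fit your application, not the library:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Port: the only logging API the rest of the application sees.
public interface ApplicationLogger {
  void info(String message);
  void error(String message, Throwable cause);
}

// Adapter: confines the Log4j dependency to a single class.
public class Log4jApplicationLogger implements ApplicationLogger {
  private final Logger delegate;

  public Log4jApplicationLogger(Class<?> owner) {
    this.delegate = LogManager.getLogger(owner);
  }

  @Override
  public void info(String message) {
    delegate.info(message);
  }

  @Override
  public void error(String message, Throwable cause) {
    delegate.error(message, cause);
  }
}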

This approach has other benefits as well.

Your code no longer has to suffer from poor APIs offered by the library, since you can design the port such that it makes sense to you. For example, you can use names that match your context and put method parameters in the order you prefer. You can reduce cognitive load by only exposing the subset of the functionality of the library that you need for your application. If you document the port, team members no longer need to look at the library’s documentation. (Unless they’re studying the adapter’s implementation, of course.)

Testing also becomes easier. Since a port is an interface, mocking the use of the library becomes trivial. You can write an abstract test against the port’s interface and derive concrete tests for the adapters that do nothing more than instantiate the adapter under test. Such contract tests ensure a smooth transition from one adapter of the port to the next, since the tests prove that both adapters work the same way.
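
A sketch of such a contract test, continuing the earlier AccountRepository example (JUnit 5 assumed; InMemoryAccountRepository is a hypothetical second adapter):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Optional;
import org.junit.jupiter.api.Test;

// Abstract contract test: defines the behavior every adapter must honor.
public abstract class AccountRepositoryContractTest {

  // Each concrete test supplies the adapter under test.
  protected abstract AccountRepository newRepository();

  @Test
  void savedAccountCanBeLoadedAgain() {
    AccountRepository repository = newRepository();
    Account account = new Account("acc-1", 100);

    repository.save(account);

    assertEquals(Optional.of(account), repository.findById("acc-1"));
  }
}

// Concrete test: nothing more than instantiating the adapter.
class InMemoryAccountRepositoryTest extends AccountRepositoryContractTest {
  @Override
  protected AccountRepository newRepository() {
    return new InMemoryAccountRepository();
  }
}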

Adopting Hexagonal Architecture

By now the benefits of Hexagonal Architecture should be clear. Some developers, however, are put off by the need to create separate ports, especially for trivial things. Many would balk at designing their own logging interface, for example. Luckily, this is not an all-or-nothing decision. You can make a trade-off analysis per library.

With Hexagonal Architecture, code constantly needs to be passed the ports it uses. A dependency injection framework can automate that.
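
With Spring, for example, a domain service simply declares the ports it needs in its constructor and the framework supplies whichever adapter is configured. TransferService is a made-up example name:

import org.springframework.stereotype.Service;

// Domain service depends only on the port, never on a concrete adapter.
@Service
public class TransferService {

  private final AccountRepository accounts;

  // Spring injects whichever AccountRepository adapter is registered as a bean,
  // e.g. MySqlAccountRepository in production or an in-memory one in tests.
  public TransferService(AccountRepository accounts) {
    this.accounts = accounts;
  }
}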

It also helps to have a naming convention for the modules of your code, like packages in Java.

Here’s what we used on a recent Spring Boot application (*) I was involved with:

  • application.services
    The @SpringBootApplication class and related plumbing (like Spring Security configuration) to wire up and start the application.
  • domain.model
    Types that represent fundamental concepts in the domain with associated behavior.
  • domain.services
    Functionality that crosses domain model boundaries. It implements input ports and uses output ports.
  • ports.in
    Input ports that offer abstractions of domain services to input mechanisms like @Scheduled jobs and @Controllers.
  • ports.out
    Output ports that offer abstractions of technical services to the domain services.
  • infra
    Infrastructure that implements output ports, in other words adapters. Packages in here represent either technology directly, like infra.pubsub, or indirectly for some functionality, like infra.store.gcs. The latter form allows competing implementations to live next to each other.
  • ui
    Interface into the application, like its user and programmatic interfaces, and any scheduled jobs.

(*) All the examples in this post are from the same application. This doesn’t mean that Hexagonal Architecture is only suitable for Java, or even Spring, applications. It can be applied anywhere.

Note that this package structure is a form of package by layer. It therefore works best for microservices, where you’ve implicitly already done packaging by feature on the service level. If you have a monolith, it makes sense for top-level packages to be about the domains and sub-packages to be split out like above.

You only realize the benefits of Hexagonal Architecture if you have the discipline to adhere to it. You can use an architectural fitness function to ensure that it’s followed. A tool like ArchUnit can automate such a fitness function, especially if you have a naming convention.
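
For example, a minimal ArchUnit rule for the package convention above could look like this (the com.example.app root package is a placeholder):

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

// Fitness function: domain code and ports must not depend on adapters or the UI.
@AnalyzeClasses(packages = "com.example.app")
class HexagonalArchitectureTest {

  @ArchTest
  static final ArchRule domainDoesNotDependOnInfrastructure =
      noClasses().that().resideInAnyPackage("..domain..", "..ports..")
          .should().dependOnClassesThat().resideInAnyPackage("..infra..", "..ui..");
}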

What do you think? Does it sound like Hexagonal Architecture could improve your ability to keep tech debt low? Have you used it and not liked it? Please leave a comment below.

No need to manage technical debt

There are a lot of articles and presentations out there that discuss how to manage technical debt. In my opinion, most of these approaches are workarounds to fix a broken system. As usual, it’s much better to treat the disease than the symptoms.

Most of the discussions around technical debt take for granted that technical debt is unavoidable and will increase over time until it grinds development down to a halt. Unless we figure out a way to manage it.

This rests on two debatable assumptions.

The first assumption is that there has to be a battle of some kind between development and “product” or “the business” where “product” always wins, leading to technical debt. Consider an excerpt from this article:

The product manager describes the next feature they want to be added to the product. Developers give a high estimate for the time it takes to implement, which is seen as too long. The developers talk about having to deal with the implications of making changes to lots of hard to understand code or working around bugs in old libraries or frameworks. Then the developers ask for time to address these problems, and the product manager declines, referring to the big backlog of desired features that need to be implemented first.

The assumption is that a product manager or “the business” decides how software developers spend their time. While this might seem logical, since “the business” pays the developers’ salaries, it’s a very poor setup.

First of all, let’s consider this from a roles and responsibilities angle. Who will get yelled at (hopefully figuratively) when technical debt increases to the point that delays become a problem? If you think the product manager, then think again. If the developers are accountable for maintaining a sustainable pace of delivery, then they should have the responsibility to address technical debt as well. Otherwise we’re just setting developers up to fail.

Secondly, let’s look at this from a skills and knowledge perspective. Technical debt is just a fancy term for poor maintainability, and maintainability is only one of the quality characteristics that ISO 25010 defines, alongside attributes like functional suitability, usability, reliability, security, and performance efficiency.

Product managers are great at functionality and (hopefully) usability, but they aren’t qualified to make trade-offs between all these quality attributes. That’s what we have architects for. (Whether a team employs dedicated architects or has developers do architecture is beside the point for this discussion.)

If we take a more balanced approach to software development instead of always prioritizing new features, then technical debt will not grow uncontrollably.

The assumption that product managers should make all the decisions is wrong. We’ve long ago uncovered better ways of developing software. Let the product manager collaborate with the development team instead of dictating their every move. Then most of this self-inflicted pain that we call technical debt will simply never materialize and doesn’t need to be managed.

The second assumption underlying most of the discussions around technical debt is even more fundamental.

In the text above I mentioned a tradeoff between quality attributes and a collaboration to resolve priorities. But a sizable portion of what people call technical debt isn’t even about that. It’s about cutting corners to save a couple of minutes or hours of development time to make a deadline.

Let’s not go into how many (most?) deadlines are fairly arbitrary. Let’s accept them and look at how to deal with them.

Many developers overestimate the ratio of development time to total time for a feature to become available to end users (lead time). The savings of cutting corners in development really don’t add up to as much as one would think. Re-work negates many of the initial savings.

But the costs down the road are significant, so we shouldn’t be cutting corners as often as we do. Even under time pressure, the only way to go fast is to go well.

Developers have a responsibility to the organization that pays them to deliver software at a sustainable pace. They shouldn’t let anyone “collaborate” them out of that responsibility, much less let anyone dictate that. What would a doctor do when their manager told them just before an emergency operation not to wash their hands to save time? Maybe first do no harm wouldn’t be such a bad principle for software development either.

Technical debt will most likely not go away completely. But we shouldn’t let it get to the point that managing it is a big deal worthy of endless discussions, articles, and presentations.

Architecture Artifacts Cross-Checker

Last time we looked at architecture metrics. We stated then that the data required for calculating these metrics could come from a variety of sources. However, we all know that information about architectures is often not kept up-to-date…

So how do you keep your metrics reliable by keeping their inputs fresh?

In order to answer a question like that, it’s always good to understand why diagrams and other artifacts go stale. It’s not that people deliberately let them rot; rather, they have many things to do and either simply forget to update artifacts or prioritize some other activity.

The first problem, forgetting, could be solved with a simple reminder. But here we run the risk of crying wolf, so we need to make sure there actually is something to update before we send out a reminder.

Enter the architecture artifacts cross-checker. This is a tool that compares different inputs to verify they are consistent. Let’s look at some examples of inputs such a tool could verify.

External systems in a context diagram should also appear in the corresponding container diagram. If you have a tech radar, it should list at least the technologies that are in the container diagram. Containers in a container diagram should have a corresponding process in a data flow diagram. Threat models should calculate the risk of security threats against each of those. Etc. Etc.
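
A sketch of one such check, assuming you can extract the names of external systems from both diagrams (how you parse them depends entirely on the format you use):

import java.util.HashSet;
import java.util.Set;

// Sketch of one cross-check: every external system in the context diagram
// must also appear in the container diagram.
public class ArchitectureArtifactsCrossChecker {

  public Set<String> missingFromContainerDiagram(
      Set<String> contextDiagramSystems, Set<String> containerDiagramSystems) {
    Set<String> missing = new HashSet<>(contextDiagramSystems);
    missing.removeAll(containerDiagramSystems);
    return missing;
  }

  public void check(Set<String> contextDiagramSystems, Set<String> containerDiagramSystems) {
    Set<String> missing = missingFromContainerDiagram(contextDiagramSystems, containerDiagramSystems);
    if (!missing.isEmpty()) {
      // Send a reminder, or fail the build if you want to go that far.
      throw new IllegalStateException(
          "External systems missing from container diagram: " + missing);
    }
  }
}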

We may even be able to tie some of the architecture artifacts to source code.

For instance, in a microservices architecture, each service may have its own source code repository or its own directory in a monorepo. And if you have coding standards for cross-service calls, you may be able to derive at compile time which containers call which at runtime. Alternatively, this information could come from runtime call patterns collected by a service mesh. Finally, running a tool like cloc gives you the technologies used in the service, which should be listed in the container diagram and in the tech radar.

By combining these diverse sources of information about your system, you can detect inconsistencies automatically and send out reminders for updates. Or even fail the build, if you want to go that far.

What do you think? Would a little bit of coding effort to write an architecture artifacts cross-checker be worth it in your situation? Please leave a comment below.

Architecture Metrics

Last time we saw how major tech projects continue to be difficult to schedule. One thing that can keep momentum going for a long-running initiative is the appropriate use of metrics. Improving scores allow you to visualize progress and maintain motivation to keep going.

Let’s look at some metrics for software architectures.

Architecture is the art of making trade-offs between the quality attributes that matter most in a given context. A quality attribute is a measurable or testable property of a system that is used to indicate how well the system satisfies the needs of its stakeholders. An architecture metric should therefore be a combination of measurements of quality attributes deemed important for the architecture in question.

There are many potential quality attributes to choose from. The ISO/IEC 25010 standard describes 8 categories with a total of 31 quality attributes:

ISO 25010 Product Quality Model

Nothing is stopping you from including additional quality attributes, of course, if they make sense in your context.

However, you should not measure all possible quality attributes. Some of them won’t make much sense in your situation. Others would be prohibitively expensive to measure. Having too many will also dilute your message. When in doubt, err on the side of leaving some out; you can always add them later. Start small and iterate as you learn.

Quality Storming is one way to determine which quality attributes are important enough to be included in your architecture metric.

Once you’ve decided what quality attributes to measure, you need to define how you will measure them. Good metrics are comparative and understandable, and they are usually ratios.

How you measure a certain quality attribute also depends on your context.

For example, McCabe’s cyclomatic complexity or Uncle Bob Martin’s distance from the main sequence are valid candidate metrics for a monolith. But those make little sense for a Micro Services Architecture (MSA), since the complexity in an MSA moves from inside a service to the dependencies between the services. So if you’re living in an MSA world, maybe you should look instead at things like how much of the services’ data is exposed to other services, or what percentage of service calls cross domain boundaries.

When defining how to measure a quality attribute, think about how you’re going to collect the required data. If at all possible, automate the data collection. This will allow you to update the metrics more often, and it will probably also be less error-prone.

Some data could be collected by scanning the source code, e.g. for cyclomatic complexity. Other data could come from architecture diagrams, in particular container diagrams. Having a standard for those allows you to parse the diagrams for automated data collection. A threat model is a good source of data for security-related metrics.

Once you’ve defined how to measure all the quality attributes, the last step is to merge all those numbers into a single number that describes the overall quality of the architecture. You will most likely want to calculate a weighted average. To perform this calculation, you’ll need to define weights that indicate the relative importance of the quality attributes. The outcomes of Quality Storming can help with setting those weights.
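
The calculation itself is straightforward; here’s a sketch with made-up attribute names and weights, assuming each score has been normalized to a 0..1 ratio:

import java.util.List;

// Sketch: combine quality attribute scores into a single architecture metric
// using a weighted average.
public class ArchitectureMetric {

  public record AttributeScore(String name, double score, double weight) {}

  public double weightedAverage(List<AttributeScore> scores) {
    double weightedSum = scores.stream().mapToDouble(s -> s.score() * s.weight()).sum();
    double totalWeight = scores.stream().mapToDouble(AttributeScore::weight).sum();
    return weightedSum / totalWeight;
  }
}

With made-up numbers: maintainability scoring 0.8 at weight 3 and security scoring 0.6 at weight 2 give (0.8 × 3 + 0.6 × 2) / 5 = 0.72.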

What do you think? Do you define metrics for your architectures? If so, how? And in what way have they helped or hindered you? Please leave a comment below.

Major Tech Projects

Last time we saw how a target architecture can guide you in your day to day technical decision making. But what do you do when your target architecture is considerably different from your current architecture?

Ways of working differ considerably between organizations, but most tech companies nowadays use some form of agile process (or at least claim to), where a Product Owner prioritizes a backlog of smallish items for the team to work on. That’s the context we’ll assume for this post.

The question then becomes how you convince a business-oriented Product Owner to prioritize technical improvement work over value-added work, such as new features. This question becomes even more pressing when the amount of tech work is not small, but rather an entire project in itself. This type of work doesn’t fit all that well into the agile model of continuous delivery of value through small increments. So what to do?

First of all, it’s always good if you can split the work into smaller pieces. Those are easier to squeeze in and, more importantly, carry less risk. But even if you manage to do that for a big tech overhaul, you’ll end up with a long list of tech items that somehow need to find their way to the top of the backlog.

You could try to persuade the Product Owner in a separate discussion about each and every one of these tech items. However, since Product Owners are judged by how much value they deliver sooner rather than later, you’re going to be fighting an uphill battle. That doesn’t mean it can’t work in your context, of course, but in general the chances aren’t great when the number of tech items grows.

Alternatively, you could treat this as technical debt, assuming you have a well-functioning process for handling that. That’s quite an assumption, by the way. Some teams use their slack time to dig themselves out of holes, but most teams are not fortunate enough to have slack time. (As an industry, we’re slow learners. Mythical man-month, anyone?) Many teams I’ve seen that keep track of tech debt just keep growing it year over year and I hate to think about the teams that don’t even keep track of it.

Part of the problem with tech debt is that the metaphor is broken. Financial people will tell you that some amount of debt is good and that you shouldn’t get rid of all of it. That’s just not how developers see tech debt. I’ve also yet to see a team that regularly and consistently pays off tech debt, like one would make monthly payments to pay off, say, a mortgage.

Idea Flow argues that we should abandon the technical debt concept in favor of risk and that we should let the team be guided by both a Product Owner, for business value, and Technical Risk Manager, for keeping risk within an acceptable range. It’s an interesting concept that sounds like it may work, but the problem will be in estimating the risk. I haven’t seen this done in practice yet, so if you have experience with this, please leave a comment below.

A somewhat similar, but less refined, approach is to allocate two budgets for the team: one for doing value-added work and one for doing technical work. You could assign a 3:1 ratio to those budgets, for instance, which means 75% of developer time would be spent on adding business value and 25% on technical work items. Then if you have a year where you need to do some major technical work, you could ask for a temporary change in this ratio.

Yet another approach is the dreaded rewrite: you stop all value-added work for some time and do only technical work. This allows the team to make quick progress in the technical area, but is generally not liked all that much by the business. You can get away with this only every so often, as it tends to take a huge bite out of your political capital. But at some point it will become your only option if you don’t find a better way first.

What do you think? What ways to schedule time for major technical projects have worked for you? Please leave a comment below.

Data Flow Diagrams and Threat Models

Last time we looked at some generic diagrams from the C4 model, which are useful for most teams. This time we’re going to explore a more specific type of diagram that can be a tremendous help with security.

Data Flow Diagrams

A Data Flow Diagram (DFD), as the name indicates, shows the flow of data through the system. It depicts external entities, processes, data stores, and data flows. Larger systems usually have composite processes, which expand into their own DFD.

Here’s a simple example of a data flow diagram:

Create a Data Flow Diagram from a Container Diagram

If you already have a container diagram of your system, then it’s easy to create a DFD from it:

  1. Convert containers into processes by drawing them as circles. You may or may not want to number them. Complicated containers with many processes running in them may be modeled as composite processes that expand into their own sub-DFDs.
  2. Convert external systems into external entities by drawing them as rectangles.
  3. Convert databases and queues into data stores by drawing them as horizontal parallel bars.
  4. Modify the directions of all the arrows as needed. In a container diagram, the direction of an arrow usually indicates who initiates a request. In a data flow diagram, the direction of an arrow indicates the direction in which data flows. This could be both ways.
    Some people use arrows to indicate data flows in container diagrams as well. In that case, you don’t have anything to do in this step.

Now, why would you go through the trouble to convert a container diagram into a DFD? There are several uses for DFDs, but in this post I want to focus on using them for threat modeling.

Threat Models

In threat modeling, we look for design flaws with security implications. These are different from implementation bugs, which makes them hard to detect using code-based techniques such as Static Application Security Testing (SAST) or security code reviews. But the flip side of that is that you can do threat modeling even before any code is written. And you should!

Threat modeling falls under Threat Assessment in the Software Assurance Maturity Model.

There are many ways to build threat models, but the one I’ve found easiest to understand for developers with limited security knowledge (which is the vast majority) is to use STRIDE, an acronym for the types of security threats that exist:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

Using DFDs to Build Threat Models

Not all of the STRIDE threats are applicable to all elements of our system, and this is where the DFD comes in handy, since each DFD element maps nicely onto a subset of the STRIDE threats.

So now you have a structured process for reviewing your architecture from a security perspective:

  1. Create a DFD from your container diagram (or from scratch if you don’t have one)
  2. Identify the threats using STRIDE
  3. Score the threats using the Common Vulnerability Scoring System (CVSS)
  4. Manage the risks posed by the identified threats: avoid, mitigate, transfer, or accept them

I’ve found that using CVSS scores for threats makes it easier to decide how to manage them. You can set a security risk appetite for your system, for example accept threats in the None through Medium levels and work to avoid or mitigate High or Critical levels.
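
A hedged sketch of such a risk appetite check, using the standard CVSS v3 severity bands:

// Maps a CVSS v3 base score to its standard severity rating and decides
// whether the threat falls within a risk appetite of "Medium or below".
public class RiskAppetite {

  public enum Severity { NONE, LOW, MEDIUM, HIGH, CRITICAL }

  public Severity severityOf(double cvssBaseScore) {
    if (cvssBaseScore == 0.0) return Severity.NONE;
    if (cvssBaseScore < 4.0) return Severity.LOW;      // 0.1 - 3.9
    if (cvssBaseScore < 7.0) return Severity.MEDIUM;   // 4.0 - 6.9
    if (cvssBaseScore < 9.0) return Severity.HIGH;     // 7.0 - 8.9
    return Severity.CRITICAL;                          // 9.0 - 10.0
  }

  public boolean isAcceptable(double cvssBaseScore) {
    return severityOf(cvssBaseScore).compareTo(Severity.MEDIUM) <= 0;
  }
}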

CVSS scores also go a long way towards making the threat assessment objective, which makes it easier to convince people that work should be done to reduce the security risk.

Costs and Benefits of Threat Models

If all of this seems like a lot of work, especially for a big system, that’s because it is. Sorry.

It’s not quite as bad as it may seem, however, because DFD elements can usually be grouped since they all behave the same from a security perspective. So then you only have to score and manage the group as a whole rather than all the elements in it individually. But still, threat modeling is a considerable investment.

Whenever I’ve done this activity with developers, it has provided a lot of value. We usually find one or more security threats that we really need to manage better. And most of the time participants get a much better understanding of their system, which helps them in their non-security work as well.

What do you think? Is threat modeling using data flow diagrams and STRIDE something you’re willing to give a shot, or do you prefer a method that requires less work (but offers less protection), like abuse cases? Please leave a comment.

Target Architecture

In the last two posts, we looked at generic architecture diagrams and security-specific diagrams. These diagrams reflect the current architecture of a system. This time we will look at using diagrams to depict a desired future architecture, or target architecture.

The point of a target architecture is to paint a picture of the desired state that will act as the North Star (or Southern Cross, depending on your hemisphere), in the sense that it guides all the little decisions we make every day when working on our system. This doesn’t mean we’ll always be moving straight towards the target, but we do want to make sure we don’t stray too far off the path.

However, a target architecture is not a static thing.

Assuming you don’t have a crystal ball, it will be hard to predict how circumstances will develop over the years. So think of a target architecture as a moving target rather than something set in stone. We need to keep an open mind at all times and react appropriately to changes; one such reaction could be to update the target architecture as we learn more about our system, the customers it supports, the team that builds it, the evolution of the technologies it uses, and so on.

Moving towards a target architecture usually doesn’t happen in a straight line.

So how do we come up with a target architecture in the first place?

One starting point is to look at the diagram that reflects your current state and identify things that give you trouble. Few systems are optimal, so you’re likely to find some things that you would do differently if you had a chance to start all over.

Another source of ideas comes from future requirements, i.e. requests on your backlog. The fact that the current architecture supports the current requirements doesn’t necessarily mean that it will be able to support all future requirements. Sometimes we need to create some architectural runway.

Once you’ve identified an area for improvement, you need to make a decision about whether it’s actually worth fixing. Some things are very costly or risky to fix compared to their benefit.

Next, write up your decision in the form of an Architecture Decision Record (ADR). The structured format of an ADR, which lists the positive and negative consequences of the decision, will help with getting everyone on the same page. It will also give you a nice historical record, which is especially important for any newcomers as the team evolves over time.

Every ADR records a decision. Each such decision constrains the solution space of the system in its own way. As the number of decisions grows, the resulting viable part of the solution space shrinks. The target architecture is a picture of a single point in this viable part of the solution space. Note that there could be other points that are viable as well, i.e. alternative target architectures.

Obviously, the solution space is multi-dimensional. But if we project it in two dimensions, we can visualize the relationship between target architecture and ADRs as follows:

Architecturally significant decisions constrain the viable solution space in which a target architecture lives.

If you find your team arguing about two different options in the viable solution space and you want to resolve this ambiguity, then hopefully the picture above will suggest the solution: you’re missing an ADR that shrinks the viable solution space to exclude one of the options.

What do you think? Do you find a target architecture a useful tool to guide your team’s day to day technical decisions? Do you use ADRs to document why the target architecture looks the way it does? Please leave a comment below.

Summary

  • Viable solution space: the subset of the space of all possible architectures that meets the constraints documented in the system’s ADRs
  • Target architecture: a point in the viable solution space, currently selected as the North Star (a moving target) that guides the everyday decisions we make when changing the system

Architecture Diagrams

Hi, my name is Ray, and I’m a software architect.

Example container diagram

According to my old boss Jeroen van Rotterdam, this means that I draw boxes and lines. In practice, it’s only a small part of what I do, but I do think it’s an important part.

Some people may wonder why, in the age of working software over comprehensive documentation, one would still spend time on creating pretty pictures. Didn’t we leave all this up front, non-code fluff behind? Well, I do value working software more than comprehensive documentation, but that doesn’t mean there is no value in documentation.

I work with several “central” teams that provide shared services to “local” teams. In discussions with those teams, it’s quite often useful to be able to point to a high-level architecture of the system, something like the context diagram of the C4 model. Such context diagrams were also of great value during the due diligence phase of the recently announced sale of our division. Drawing a context diagram takes very little time, provides clear benefits, and requires little maintenance, so it’s really a no-brainer.

What about other types of diagrams?

The C4 model has several other diagrams. Which ones you need depends on your context. If all your code goes into a single war that you deploy to an application server, for instance, there is little point in creating a container diagram, but a component diagram may be useful. If, on the other hand, your system consists of quite a few microservices, then a container diagram that shows how they are connected may be very valuable, but a component diagram would probably be overkill.

If you work in a larger organization, with lots of teams building lots of systems, then having a diagram standard in place like the C4 model is a great way to reduce the time needed to explain a system to others and to prevent misunderstandings.

You will probably want to “personalize” that standard to make it more expressive for your particular context. For instance, we use color coding of elements on container diagrams to indicate their status: red for things we are ready to remove, orange for things we want to migrate away from, yellow for things we want to review, green for things we’re happy with, and blue for things we’re planning to build in the future.

Such an adaptation of the C4 standard doesn’t reduce the value of the standard if most or all of your communication stays within your organization, where everyone uses the same adaptation. If, however, you routinely use diagrams in communication with outsiders, you may want to minimize your adaptations, or at the very least provide a legend with your diagrams.

Also use your diagram standard on whiteboards.

The standard should be used everywhere you draw visual representations of your systems, like in whiteboard sessions and in Architecture Decision Records. Consistent use of the standard turns it into a ubiquitous language for visually representing your systems, eliminating ambiguities.

If you have a standard that all your diagrams must adhere to, then you can build tooling around that. For instance, you could write a tool that generates a diagram from a simple YAML file that describes your system. This way, you bring the diagram closer to the code, improve consistency, and reduce the maintenance burden. I’ll come back to tooling in a later post.
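
As a rough sketch, assuming SnakeYAML for parsing and a home-grown YAML structure (a list of containers, each with a name and a uses list; the format is invented for illustration), such a tool could emit PlantUML text:

import java.io.InputStream;
import java.util.List;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

// Sketch: read a home-grown YAML description of a system and emit PlantUML.
public class DiagramGenerator {

  @SuppressWarnings("unchecked")
  public String generate(InputStream yamlInput) {
    Map<String, Object> system = new Yaml().load(yamlInput);
    List<Map<String, Object>> containers = (List<Map<String, Object>>) system.get("containers");

    StringBuilder plantUml = new StringBuilder("@startuml\n");
    for (Map<String, Object> container : containers) {
      String name = (String) container.get("name");
      plantUml.append("rectangle \"").append(name).append("\"\n");
      List<String> uses = (List<String>) container.getOrDefault("uses", List.of());
      for (String target : uses) {
        plantUml.append("\"").append(name).append("\" --> \"").append(target).append("\"\n");
      }
    }
    return plantUml.append("@enduml\n").toString();
  }
}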

There are other types of diagrams besides those in the C4 model that are useful in certain situations. For instance, sequence diagrams are great for showing complex interactions between systems. I’ll talk about some other diagram types in future posts.

Do you use C4 model diagrams? Are they worth the investment? Please leave a comment.

Update: C4 Model author Simon Brown got some answers to the above questions on Twitter.

TDD is like working out

We all know that exercise is good for us, members of the species Homo Sapiens Sitonourasses. And yet most of us don’t do enough of it. The same is true for Test-Driven Development (TDD). But the similarity doesn’t end there.

With regular exercise, you can grow your muscles. The way this works is not linear, however. It follows an upward saw-tooth pattern: first you damage your muscles (training), then they heal (recovery) and eventually grow stronger than before (supercompensation), then you damage them again, and so on.

TDD follows the Damage-Heal-Grow cycle as well.

First we damage the system by writing a test that fails. Where before we could gloat in knowing all was well because our test suite said so, now we have to admit that there is still something wrong with our code. For some, this realization may hurt as much as their sore muscles after working out.

Luckily, we heal the system quickly by writing only the minimal amount of code required to make the test pass. With everything back to green in minutes or even seconds, we have every right to feel good again. TDD is perfect for short attention spans.

Finally we grow the system by improving the design. The system can now handle everything we’ve ever thrown at it and more, because we generalized concepts and gave them a proper place in the code.

Damaging and healing happen on two levels in TDD.

First there is the syntactic level. You write a test that calls code that doesn’t exist yet, so the code doesn’t even compile. The healing that follows is to make the code compile, even though it doesn’t yet pass the test.

Only after this syntactic healing do you change the code to pass the test. The latter is more of a semantic type of healing.

The distinction between syntactic and semantic healing has implications for how we work.

There are only so many ways that a program can be syntactically broken, and in many cases, a sophisticated enough IDE can help heal it. For example, when you write a test that refers to a class that doesn’t yet exist, Eclipse offers a Quick Fix to create the class for you.

Semantic healing, on the other hand, is more difficult. The transformations of the Transformation Priority Premise can be seen as standard building blocks, and at least some of them can be automated. But that’s still a long way from the IDE generating the code that will make the failing test pass.

I haven’t seen many TDD practitioners do the equivalent of walking around the beach showing off their rock-hard abs, and that’s probably a good thing.

But just as we appreciate how a strong, muscular friend can easily handle any piece of furniture when he helps us move, so do product owners like it that we can always deliver any feature in a short amount of time.

Unfortunately, it just doesn’t score us any dates; for that we really do need to hit the gym.

Celebrate Learning in Software Development

Every event is either a cause for celebration or an opportunity to learn.

I don’t remember where I came across this quote, but it has stuck with me. I like how it turns every experience into something positive.

Sometimes I need to remind myself of it, however, especially when there are a lot of, well, learning opportunities in a row.

One recent case was when I had started a long running performance test overnight. The next morning when I came back to it, there was no useful information at all. None whatsoever.

What had happened?

Our system is a fast data solution built on Spring Cloud Data Flow (SCDF). SCDF allows you to compose stream processing solutions out of data microservices built with Spring Boot.

The performance test spun up a local cluster, ingested a lot of data, and spun the cluster down, all the while capturing performance metrics.

(This is early stages performance testing, so it doesn’t necessarily need to run on a production-like remote cluster.)

Part of the shutdown procedure was to destroy the SCDF stream. The stream destroy command to the SCDF shell is supposed to terminate the apps that make up the stream. It did in our functional tests.

But somehow it hadn’t this time. After the performance test ran, the supporting services were terminated, but the stream apps kept running. And that was the problem. These apps continued to try to connect to the supporting services, failed to do that, and wrote those failures to the log files. The log files had overflowed and the old ones had been removed, in an effort to save disk space.

All that was left were log files filled with nothing but connection failures. All the useful information was gone. While I was grateful that I still had space on my disk left, it was definitely not a cause for celebration.

So then what could we learn from this event?

Obviously we need to fix the stream shutdown procedure.

Come to think of it, we had already learned that lesson. The code to shut down our Kubernetes cluster doesn’t use stream destroy, but simply deletes all the replication controllers and pods that SCDF creates.

We did it that way, because the alternative proved unreliable. And yet we had failed to update the equivalent code for a local cluster. In other words, we had previously missed an opportunity to learn!

Determined not to make that mistake again, we tried to look beyond fixing the local cluster shutdown code.

One option is to not delete old logs, so we wouldn’t have lost the useful information. However, that almost certainly would have led to a full disk and a world of hurt. So maybe, just maybe, we shouldn’t go there.

Another idea is to not log the connection failures that filled up the log files. Silently ignoring problems isn’t exactly a brilliant strategy either, however. If we don’t log problems, we have nothing to monitor and alert on.

A better idea is to reduce the number of connection attempts in the face of repeated failures. Actually, resiliency features like circuit breakers were already in the backlog, since the need for them was firmly drilled into us by the likes of Nygard.

We just hadn’t worked on that story yet, because we didn’t have much experience in this area and needed to do some homework.

So why not spend a little bit of time to do that research now? It’s not like we could work on analyzing the performance test results.

It turns out that this kind of stuff is very easy to accomplish with the Failsafe library:

// Open the circuit after 3 failures out of the last 10 calls, close it again
// after 3 consecutive successful trial calls, and wait 1 second before
// letting a trial call through an open circuit.
private final CircuitBreaker circuitBreaker = new CircuitBreaker()
    .withFailureThreshold(3, 10)
    .withSuccessThreshold(3)
    .withDelay(1, TimeUnit.SECONDS);
// Guard calls with the circuit breaker and fall back to a default value
// when a call fails or the circuit is open.
private final SyncFailsafe<Object> safeService = Failsafe
    .with(circuitBreaker)
    .withFallback(() -> DEFAULT_VALUE);

@PostConstruct
public void init() {
  // Log state changes so we can monitor and alert on them.
  circuitBreaker.onOpen(() -> LOG.warn("Circuit breaker opened"));
  circuitBreaker.onClose(() -> LOG.warn("Circuit breaker closed"));
}

private Object getValue() {
  // All calls to the remote service go through the circuit breaker.
  return safeService.get(() -> remoteService.getValue());
}

I always feel better after learning something new. Taking every opportunity to learn keeps my job interesting and makes it easier to deal with the inevitable problems that come my way.

Instead of being overwhelmed with negativity, the positive experience of improving my skills keeps me motivated to keep going.

What else could we have learned from this incident? What have you learned recently? Please leave a comment below.