Architecture Artifacts Cross-Checker

Last time we looked at architecture metrics. We stated then that the data required for calculating these metrics could come from a variety of sources. However, we all know that information about architectures is often not kept up-to-date…

So how do you keep your metrics reliable by keeping their inputs fresh?

To answer a question like that, it helps to understand why diagrams and other artifacts go stale in the first place. It’s not that people deliberately let them rot; it’s more that they have many things to do and either simply forget to update the artifacts or prioritize some other activity.

To solve the first problem, a simple reminder could be all it takes. But here we run the risk of crying wolf. So we need to make sure there is something to update before we send out a reminder.

Enter the architecture artifacts cross-checker. This is a tool that compares different inputs to verify they are consistent. Let’s look at some examples of inputs such a tool could verify.

External systems in a context diagram should also appear in the corresponding container diagram. If you have a tech radar, it should list at least the technologies that appear in the container diagram. Containers in a container diagram should have a corresponding process in a data flow diagram, and threat models should assess the risk of security threats against each of those processes. Etc., etc.
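
As a sketch of what such a check could look like, here’s a minimal Python example. It assumes the diagrams live in version control as C4-PlantUML text files; the file names are made up, and the regex is only a rough approximation of a real parser (a real tool would work on a proper model, e.g. one exported from Structurizr).

```python
import re
import sys
from pathlib import Path

# Rough approximation: find the aliases of all System_Ext(...) elements.
EXTERNAL_SYSTEM = re.compile(r'System_Ext\(\s*(\w+)')

def external_systems(diagram: Path) -> set[str]:
    """Collect the identifiers of external systems in a C4-PlantUML file."""
    return set(EXTERNAL_SYSTEM.findall(diagram.read_text()))

def main() -> None:
    # Hypothetical file names for the context and container diagrams.
    missing = (external_systems(Path('context.puml'))
               - external_systems(Path('containers.puml')))
    for system in sorted(missing):
        print(f'{system} appears in the context diagram '
              'but not in the container diagram')
    sys.exit(1 if missing else 0)  # non-zero exit code can fail a build

if __name__ == '__main__':
    main()
```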

We may even be able to tie some of the architecture artifacts to source code.

For instance, in a microservices architecture, each service may have its own source code repository or its own directory in a monorepo. And if you have coding standards for cross-service calls, you may be able to derive at compile time which containers call which others at runtime. Alternatively, this information could come from runtime call patterns collected by a service mesh. Finally, running a tool like cloc gives you the technologies used in a service, which should be listed in the container diagram and in the tech radar.
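
Here’s a rough sketch of that last idea in Python. It shells out to cloc (which must be installed) and compares the languages it reports against a tech radar; the radar’s JSON export format and all file names are invented for the example.

```python
import json
import subprocess
from pathlib import Path

def languages_in(repo: Path) -> set[str]:
    """Ask cloc which languages are used in a service's code base."""
    result = subprocess.run(['cloc', '--json', str(repo)],
                            capture_output=True, text=True, check=True)
    counts = json.loads(result.stdout)
    # cloc's JSON output mixes languages with bookkeeping entries.
    return {lang for lang in counts if lang not in ('header', 'SUM')}

# Hypothetical tech radar export: a JSON list of technology names.
radar = set(json.loads(Path('tech-radar.json').read_text()))

for language in sorted(languages_in(Path('services/orders')) - radar):
    print(f'{language} is used but not listed on the tech radar')
```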

By combining these diverse sources of information about your system, you can detect inconsistencies automatically and send out reminders for updates. Or even fail the build, if you want to go that far.

What do you think? Would a little bit of coding effort to write an architecture artifacts cross-checker be worth it in your situation? Please leave a comment below.

Architecture Metrics

Last time we saw how major tech projects continue to be difficult to schedule. One thing that can keep momentum going for a long-running initiative is the appropriate use of metrics. Improving scores make progress visible and help maintain the motivation to keep going.

Let’s look at some metrics for software architectures.

Architecture is the art of making trade-offs between the quality attributes that matter most in a given context. A quality attribute is a measurable or testable property of a system that is used to indicate how well the system satisfies the needs of its stakeholders. An architecture metric should therefore be a combination of measurements of quality attributes deemed important for the architecture in question.

There are many potential quality attributes to choose from. The ISO/IEC 25010 standard describes 8 categories with a total of 31 quality attributes:

ISO 25010 Product Quality Model

Nothing is stopping you from including additional quality attributes, of course, if they make sense in your context.

However, you should not measure all possible quality attributes. Some of them won’t make much sense in your situation. Others would be prohibitively expensive to measure. Having too many will also dilute your message. When in doubt, err on the side of leaving some out; you can always add them later. Start small and iterate as you learn.

Quality Storming is one way to determine which quality attributes are important enough to be included in your architecture metric.

Once you’ve decided which quality attributes to measure, you need to define how you will measure them. Good metrics are comparative and understandable, and they are usually ratios. Test coverage, for example, is a ratio (lines executed by tests over total lines) that is easy to understand and to compare over time.

How you measure a certain quality attribute also depends on your context.

For example, McCabe’s cyclomatic complexity or Uncle Bob Martin’s distance from the main sequence are valid candidate metrics for a monolith. But those make little sense for a microservices architecture (MSA), since the complexity in an MSA moves from the inside of a service to the dependencies between the services. So if you’re living in an MSA world, maybe you should look instead at things like how much of a service’s data is exposed to other services, or what percentage of service calls cross domain boundaries.
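
As an illustration of that last metric, here’s a minimal Python sketch. It assumes you already have a mapping of services to domains and a list of observed service-to-service calls (from a service mesh, say); both the names and the data format are invented.

```python
# Hypothetical inputs: the domain each service belongs to, and the
# service-to-service calls observed at runtime (e.g. by a service mesh).
domains = {'orders': 'sales', 'invoicing': 'finance', 'payments': 'finance'}
calls = [('orders', 'invoicing'), ('invoicing', 'payments'),
         ('orders', 'payments')]

# The metric is a ratio: calls crossing a domain boundary over all calls.
crossing = sum(1 for caller, callee in calls
               if domains[caller] != domains[callee])
print(f'{crossing / len(calls):.0%} of service calls cross domain boundaries')
```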

When defining how to measure a quality attribute, think about how you’re going to collect the required data. If at all possible, automate the data collection. This will allow you to update the metrics more often, and it will probably also be less error-prone.

Some data could be collected by scanning the source code, e.g. for cyclomatic complexity. Other data could come from architecture diagrams, in particular container diagrams. Having a standard for those allows you to parse the diagrams for automated data collection. A threat model is a good source of data for security-related metrics.
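
For instance, in a Python code base you could use a library like radon to scan the sources for cyclomatic complexity. A minimal sketch, where averaging over all functions is just one arbitrary way to aggregate:

```python
from pathlib import Path

from radon.complexity import cc_visit  # pip install radon

def average_complexity(source_dir: Path) -> float:
    """Average cyclomatic complexity over all functions in a source tree."""
    complexities = [block.complexity
                    for path in source_dir.rglob('*.py')
                    for block in cc_visit(path.read_text())]
    return sum(complexities) / len(complexities) if complexities else 0.0

print(f'Average cyclomatic complexity: {average_complexity(Path("src")):.1f}')
```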

Once you’ve defined how to measure all the quality attributes, the last step is to merge all those numbers into a single number that describes the overall quality of the architecture. You will most likely want to calculate a weighted average. To perform this calculation, you’ll need to define weights that indicate the relative importance of the quality attributes. The outcomes of Quality Storming can help with setting those weights.
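
The calculation itself is simple; the hard part is agreeing on the weights. A sketch with made-up scores (normalized to 0..1, higher is better) and weights:

```python
# Hypothetical per-attribute scores and agreed-upon relative weights.
scores = {'security': 0.9, 'performance': 0.7, 'maintainability': 0.6}
weights = {'security': 3, 'performance': 1, 'maintainability': 2}

overall = (sum(scores[attr] * weights[attr] for attr in scores)
           / sum(weights.values()))
print(f'Overall architecture score: {overall:.2f}')  # 0.77
```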

What do you think? Do you define metrics for your architectures? If so, how? And in what way have they helped or hindered you? Please leave a comment below.

DevOps Is The New Agile

In The Structure of Scientific Revolutions, Thomas Kuhn argues that science is not a steady accumulation of facts and theories, but rather a sequence of stable periods interrupted by revolutions.

During such revolutions, the dominant paradigm breaks down under the accumulated weight of anomalies it can’t explain, until a new paradigm emerges that can.

We’ve seen similar paradigm shifts in the field of information technology. For hardware, we’re now at the Third Platform.

For software, we’ve had several generations of programming languages and we’ve seen different programming paradigms, with reactive programming gaining popularity lately.

The Rise of Agile

We’ve seen a revolution in software development methodology as well, where the old Waterfall paradigm was replaced by Agile. The anomalies in this case were summarized as the software crisis, as documented by the Chaos Report.

The Agile Manifesto was clearly a revolutionary pamphlet:

We are uncovering better ways of developing software by doing it and helping others do it.

It was written in 2001 and originally signed by 17 people. It’s interesting to see what software development methods they were involved with:

  1. Adaptive Software Development: Jim Highsmith
  2. Crystal: Alistair Cockburn
  3. Dynamic Systems Development Method: Arie van Bennekum
  4. eXtreme Programming: Kent Beck, Ward Cunningham, Martin Fowler, James Grenning, Ron Jeffries, Robert Martin
  5. Feature-Driven Development: Jon Kern
  6. Object-Oriented Analysis: Stephen Mellor
  7. Scrum: Mike Beedle, Ken Schwaber, Jeff Sutherland

Andrew Hunt, Brian Marick, and Dave Thomas were not associated with a specific method.

Only two of the seven methods were represented by more than one person: eXtreme Programming (XP) and Scrum. Coincidentally, these are the only ones we still hear about today.

Agile Becomes Synonymous with Scrum

Scrum is the clear winner in terms of market share, to the point where many people don’t know the difference between Agile and Scrum.

I think there are at least two reasons for that: naming and ease of adoption.

Decision makers in environments where nobody ever gets fired for buying IBM are usually not looking for something that is “extreme”. And “programming” is for, well, other people. Scrum, on the other hand, is a term borrowed from sports, and we all know how executives love sports metaphors.

[BTW, the term “extreme” in XP comes from the idea of turning the dials on some useful practices up to 10. For example, if code reviews are good, let’s do them all the time (pair programming). But Continuous Integration is not nearly as extreme as Continuous Delivery, and iterations (time-boxed pushes) are mild compared to pull systems like Kanban. XP isn’t so extreme after all.]

Scrum is easy to get started with: you can certifiably master it in two days. Part of that ease is that Scrum mandates fewer practices than XP.

That’s also a danger: Scrum doesn’t prescribe any technical practices, even though technical practices are important. The technical practices support the management practices and are the foundation for a culture of technical excellence.

The software craftsmanship movement can be seen as a reaction to this lack of attention to the technical side. For me, paying attention to obviously important technical practices is simply part of being a good software professional.

The (Water)Fall Of Scrum

The jury is still out on whether management-only Scrum is going to win completely, or whether the software craftsmanship movement can bring technical excellence back into the picture. This may be more important than it seems at first.

Since Scrum focuses only on management issues, developers may largely keep doing what they were doing in their Waterfall days. This ScrumFall seems to have become the norm in enterprises.

No wonder many Scrum projects don’t produce the expected benefits. The late majority and laggards may take that opportunity to revert completely to the old ways, and the Agile Revolution may fail.

In fact, several people have already proclaimed that Agile is dead and are talking about a post-Agile world.

Some think that software craftsmanship should be the new paradigm, but I’m not buying that.

Software craftsmanship is all about the code, and too many people simply don’t care enough about code. Beautiful code that never gets deployed, for example, is worthless.

Beyond Agile with DevOps

Speaking of deploying software, the DevOps movement may be a more likely candidate to take over the baton from Agile. It’s based on Lean principles, just like Agile was. Actually, DevOps is a continuation of Agile outside the development team. I’ve even seen the term agile DevOps.

So what makes me think DevOps won’t share the same fate as Agile?

First, DevOps looks at the whole software delivery value stream, whereas Agile confined itself to software development. This means DevOps can’t remain in the developer’s corner; for DevOps to work, it has to have support from way higher up the corporate food chain. And executive support is a prerequisite for real, lasting change.

Second, the DevOps movement from the beginning has placed a great deal of emphasis on culture, which is where I think Agile failed most. We’ll have to see whether DevOps can really do better, but at least the topic is on the agenda.

Third, DevOps puts a lot of emphasis on metrics, which makes it easier to track its success and helps to break down silos.

Fourth, the Third Platform virtually requires DevOps, because Systems of Engagement call for much more rapid software delivery than Systems of Record.

Fifth, with the number of security breaches spiraling out of control, the ability to quickly deploy fixes becomes the number one security measure. The combination of DevOps and Security is referred to as Rugged DevOps or DevOpsSec.

What do you think? Will DevOps succeed where Agile failed? Please leave a comment.

Communicate Through Stories Rather Than Tasks

Last time I talked about interfaces between pieces of code.

Today I want to discuss the interface between groups of people involved in developing software.

There are two basic groups: those who develop the software, and those who coordinate that development.

In Agile terms, those groups are the Development Team on the one hand, and the Product Owner and other Stakeholders on the other.

Speaking the Same Language

The two groups need to communicate, so they do best when everybody speaks the same language.

This begins with speaking the same “natural” language, e.g. English. For most teams that will be a given, but teams that are distributed over multiple locations in different countries need to be a bit careful.

Once the language is determined, the team should look at the jargon they will be using.

Since the Development Team needs to understand what they must build, they need to know the business terms.

The Product Owner and Stakeholders don’t necessarily need to understand the technical terms, however.

Therefore, it makes sense that the Ubiquitous Language is the language of the business.

Speaking About Work: Stories and Tasks

But the two groups need to talk about more than the business problem to be solved. For any non-trivial amount of work, they also need to talk about how to organize that work.

In most Agile methods, work is organized into Sprints or Iterations. These time-boxed periods of development are an explicit interface between Product Owner and Development Team.

The Product Owner is the one steering the Development Team: she decides which User Stories will be built in a given Iteration.

The Development Team implements the requested Stories during the Iteration. They do this by breaking the Stories down into Tasks, having people sign up for the Tasks, and implementing them.

Tasks describe how development time is organized, whereas Stories describe functionality. So Tasks may refer to technical terms like relational databases, while Stories should only talk about functionality, like data persistence.

Stories Are the Interface

Since we value working software, we talk about Stories most of the time. Tasks only exist to make implementing Stories easier. They are internal to the Development Team, not part of the interface the Development Team shares with the Product Owner.

Many Development Teams do, in fact, expose Tasks to their Product Owners and other Stakeholders.

Sometimes they do this to explain why an Estimate for a Story is higher than the Product Owner expected.

Or they let the Product Owner attend Standup Meetings where Tasks are discussed.

This is fine, as long as both sides understand that Tasks are owned by the Development Team, just as Stories are owned by the Product Owner.

The Development Team may propose Stories, but the Product Owner decides what gets added to the Backlog and what gets scheduled in the Iteration.

Similarly, the Product Owner may propose, question, or inquire about Tasks, but the Development Team decides which Tasks make up a Story, in which order they are implemented, and by whom.

Always Honor the Interface

This well-defined interface between Product Owner and Development Team allows both sides to do their job well.

It’s important to understand that this has implications for how the software development process is organized.

For instance, the metrics we report up should be defined in terms of Stories, not Tasks.

Outside the Development Team, people shouldn’t care about how development time was divided, only about what the result was.

If we stick to the interface, both sides become decoupled and therefore free to innovate and optimize their own processes without jeopardizing the whole.

This is the primary benefit of any well-defined interface and the basis for a successful divide-and-conquer strategy.

What Do You Think?

What problems have you seen in the communication between the two groups?

Are you consciously restricting the communication to stories, or are you letting tasks slip in?

Please leave a comment.