DevOps Is The New Agile

In The Structure of Scientific Revolutions, Thomas Kuhn argues that science is not a steady accumulation of facts and theories, but rather a sequence of stable periods interrupted by revolutions.

During such revolutions, the dominant paradigm breaks down under the accumulated weight of anomalies it can’t explain, until a new paradigm emerges that can.

We’ve seen similar paradigm shifts in the field of information technology. For hardware, we’re now at the Third Platform.

For software, we’ve had several generations of programming languages and we’ve seen different programming paradigms, with reactive programming gaining popularity lately.

The Rise of Agile

We’ve seen a revolution in software development methodology as well, where the old Waterfall paradigm was replaced by Agile. The anomalies in this case were summarized as the software crisis, as documented by the Chaos Report.

The Agile Manifesto was clearly a revolutionary pamphlet:

We are uncovering better ways of developing software by doing it and helping others do it.

It was written in 2001 and originally signed by 17 people. It’s interesting to see what software development methods they were involved with:

  1. Adaptive Software Development: Jim Highsmith
  2. Crystal: Alistair Cockburn
  3. Dynamic Systems Development Method: Arie van Bennekum
  4. eXtreme Programming: Kent Beck, Ward Cunningham, Martin Fowler, James Grenning, Ron Jeffries, Robert Martin
  5. Feature-Driven Development: Jon Kern
  6. Object-Oriented Analysis: Stephen Mellor
  7. Scrum: Mike Beedle, Ken Schwaber, Jeff Sutherland
  Andrew Hunt, Brian Marick, and Dave Thomas were not associated with a specific method.

Only two of the seven methods were represented by more than one person: eXtreme Programming (XP) and Scrum. Coincidentally, these are the only ones we still hear about today.

Agile Becomes Synonymous with Scrum

Scrum is the clear winner in terms of market share, to the point where many people don’t know the difference between Agile and Scrum.

I think there are at least two reasons for that: naming and ease of adoption.

Decision makers in environments where nobody ever gets fired for buying IBM are usually not looking for something that is “extreme”. And “programming” is for, well, other people. On the other hand, Scrum is a term borrowed from sports, and we all know how executives love using sports metaphors.

[BTW, the term “extreme” in XP comes from the idea of turning the dials of some useful practices up to 10. For example, if code reviews are good, let’s do it all the time (pair programming). But Continuous Integration is not nearly as extreme as Continuous Delivery and iterations (time-boxed pushes) are mild compared to pull systems like Kanban. XP isn’t so extreme after all.]

Scrum is easy to get started with: you can certifiably master it in two days. Part of this is that Scrum has fewer mandated practices than XP.

That’s also a danger: Scrum doesn’t prescribe any technical practices, even though they are important. The technical practices support the management practices and form the foundation for a culture of technical excellence.

The software craftsmanship movement can be seen as a reaction to this lack of attention to the technical side. For me, paying attention to obviously important technical practices is simply being a good software professional.

The (Water)Fall Of Scrum

The jury is still out on whether management-only Scrum is going to win completely, or whether the software craftsmanship movement can bring technical excellence back into the picture. This may be more important than it seems at first.

Since Scrum focuses only on management issues, developers may largely keep doing what they were doing in their Waterfall days. This “ScrumFall” seems to have become the norm in enterprises.

No wonder that many Scrum projects don’t produce the expected benefits. The late majority and laggards may take that opportunity to revert completely to the old ways, and the Agile Revolution may fail.

In fact, several people have already proclaimed that Agile is dead and are talking about a post-Agile world.

Some think that software craftsmanship should be the new paradigm, but I’m not buying that.

Software craftsmanship is all about the code and too many people simply don’t care enough about code. Beautiful code that never gets deployed, for example, is worthless.

Beyond Agile with DevOps

Speaking of deploying software, the DevOps movement may be a more likely candidate to take over the baton from Agile. It’s based on Lean principles, just like Agile was. Actually, DevOps is a continuation of Agile outside the development team. I’ve even seen the term agile DevOps.

So what makes me think DevOps won’t share the same fate as Agile?

First, DevOps looks at the whole software delivery value stream, whereas Agile confined itself to software development. This means DevOps can’t remain in the developer’s corner; for DevOps to work, it has to have support from way higher up the corporate food chain. And executive support is a prerequisite for real, lasting change.

Second, from the beginning the DevOps movement has placed a great deal of emphasis on culture, which is where I think Agile failed most. We’ll have to see whether DevOps can really do better, but at least the topic is on the agenda.

Third, DevOps puts a lot of emphasis on metrics, which makes it easier to track its success and helps to break down silos.

Fourth, the Third Platform virtually requires DevOps, because Systems of Engagement call for much more rapid software delivery than Systems of Record.

Fifth, with the number of security breaches spiraling out of control, the ability to quickly deploy fixes becomes the number one security measure. The combination of DevOps and Security is referred to as Rugged DevOps or DevOpsSec.

What do you think? Will DevOps succeed where Agile failed? Please leave a comment.

Communicate Through Stories Rather Than Tasks

Last time I talked about interfaces between pieces of code.

Today I want to discuss the interface between groups of people involved in developing software.

There are two basic groups: those who develop the software, and those who coordinate that development.

In Agile terms, those groups are the Development Team on the one hand, and the Product Owner and other Stakeholders on the other.

Speaking the Same Language

The two groups need to communicate, so they do best when everybody speaks the same language.

This begins with speaking the same “natural” language, e.g. English. For most teams that will be a given, but teams that are distributed over multiple locations in different countries need to be a bit careful.

Once the language is determined, the team should look at the jargon they will be using.

Since the Development Team needs to understand what they must build, they need to know the business terms.

The Product Owner and Stakeholders don’t necessarily need to understand the technical terms, however.

Therefore, it makes sense that the Ubiquitous Language is the language of the business.
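
To make this concrete, here’s a small, hypothetical Java sketch (all names are invented for illustration): code that follows the Ubiquitous Language names its types and methods after business terms, so a Product Owner can recognize them without knowing any of the technical internals.

```java
// Hypothetical domain code named in the business's own terms. An insurance
// Product Owner can read "renew a policy for a coverage period" straight
// from the signature; no technical jargon leaks out of the interface.
public class UbiquitousLanguageSketch {

  static class Policy { }
  static class PolicyNumber { }
  static class CoveragePeriod { }

  interface PolicyRepository {
    Policy findByPolicyNumber(PolicyNumber number);
    void renew(Policy policy, CoveragePeriod period);
  }
}
```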

Speaking About Work: Stories and Tasks

But the two groups need to talk about more than the business problem to be solved. For any non-trivial amount of work, they also need to talk about how to organize that work.

In most Agile methods, work is organized into Sprints or Iterations. These time-boxed periods of development are an explicit interface between Product Owner and Development Team.

The Product Owner is the one steering the Development Team: she decides which User Stories will be built in a given Iteration.

The Development Team implements the requested Stories during the Iteration. They do this by breaking the Stories down into Tasks, having people sign up for the Tasks, and implementing them.

Tasks describe how development time is organized, whereas Stories describe functionality. So Tasks may refer to technical terms like relational databases, while Stories should only talk about functionality, like data persistence.

Stories Are the Interface

Since we value working software, we talk about Stories most of the time. Tasks only exist to make implementing Stories easier. They are internal to the Development Team, not part of the interface the Development Team shares with the Product Owner.

Many Development Teams do, in fact, expose Tasks to their Product Owners and other Stakeholders.

Sometimes they do this to explain why an Estimate for a Story is higher than the Product Owner expected.

Or they let the Product Owner attend Standup Meetings where Tasks are discussed.

This is fine, as long as both sides understand that Tasks are owned by the Development Team, just as Stories are owned by the Product Owner.

The Development Team may propose Stories, but the Product Owner decides what gets added to the Backlog and what gets scheduled in the Iteration.

Similarly, the Product Owner may propose, question, or inquire about Tasks, but the Development Team decides which Tasks make up a Story, in which order they are implemented, and by whom.

Always Honor the Interface

This well-defined interface between Product Owner and Development Team allows both sides to do their job well.

It’s important to understand that this has implications for how the software development process is organized.

For instance, the metrics we report up should be defined in terms of Stories, not Tasks.

Outside the Development Team, people shouldn’t care about how development time was divided, only about what the result was.

If we stick to the interface, both sides become decoupled and therefore free to innovate and optimize their own processes without jeopardizing the whole.

This is the primary benefit of any well-defined interface and the basis for a successful divide-and-conquer strategy.

What Do You Think?

What problems have you seen in the communication between the two groups?

Are you consciously restricting the communication to stories, or are you letting tasks slip in?

Please leave a comment.

Bridging the Client-Server Divide

Most software these days is delivered in the form of web applications, and the move towards cloud computing will only emphasize this trend.

Web apps consist of client and server parts, where the client part has been getting bigger lately to deliver a richer user experience.

This split has implications for developers, because the technologies used on the client and server parts are often different.

The client is ruled by HTML, CSS, and JavaScript, while the server is most often developed using JVM- or .NET-based languages like Java and C#.

Disadvantages of Different Client and Server Technologies

Developers of web applications risk becoming either specialists confined to a single part of the stack or polyglot programmers.

Polyglot programming is the practice of knowing and using many programming languages. There are both advantages and disadvantages associated with polyglot programming. I believe the overriding disadvantage is the context switching involved, which degrades productivity and opens the door to extra bugs.

Being a specialist has advantages and disadvantages as well. A big disadvantage I see is the “us versus them”, or “not my problem” culture that can arise. In general, Agile teams prefer generalists.

Bringing Server Technologies to the Client

Many attempts have been made at bridging the gap between client and server. Most of these attempts were about bringing server-side technologies to the client.

Java on the client has failed to reach widespread adoption, and now that many people advise disabling Java applets altogether for security reasons, it seems increasingly unlikely that it ever will.

Bringing .NET to the client has likewise failed as Silverlight adoption continues to drop.

Another idea is to translate from server to client technologies. Many languages can now be compiled to JavaScript. The most mature effort is Google Web Toolkit (GWT), which translates from Java. The main problem with GWT is that it supports only a small subset of Java.
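
For a feel of what that looks like, here is a minimal sketch of a GWT entry point, assuming GWT’s standard widget API; the module class name HelloModule is made up:

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// Plain Java that the GWT compiler translates to JavaScript for the browser.
public class HelloModule implements EntryPoint {
  @Override
  public void onModuleLoad() {
    Button button = new Button("Say hello");
    button.addClickHandler(new ClickHandler() {
      @Override
      public void onClick(ClickEvent event) {
        Window.alert("Hello from Java compiled to JavaScript!");
      }
    });
    RootPanel.get().add(button);
  }
}
```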

All in all, I don’t feel there is currently a satisfactory way of using server technologies on the client.

Bringing Client Technologies to the Server

So what about the reverse? There is really only one client-side technology worth looking at today: JavaScript. The only other rival, Flash, is losing out quickly due to lack of support from Apple and the rise of HTML5.

JavaScript on the server is starting to make inroads, thanks to the Node.js platform.

It is used by the Cloud9 IDE, for example, and supported by Platform-as-a-Service providers like CloudFoundry and Heroku.

What do you think?

If I had to put my money on any unification approach, it would be Node.js.

Do you agree? What needs to happen to make this a common way of developing web apps? Please let me know your thoughts in the comments.

How Friction Slows Us Down

I once joined a project where running the “unit” tests took three and a half hours.

As you may have guessed, the developers didn’t run the tests before they checked in code, resulting in a frequently red build.

Running the tests simply created too much friction for the developers.

I define friction as anything that resists the developer while she is producing software.

Since then, I’ve spotted friction in numerous places while developing software.

Friction in Software Development

Since friction impacts productivity negatively, it’s important that we understand it. Here are some of my observations:

  • Friction can come from different sources.
    It can result from your tool set, like when you have to wait for Perforce to check out a file over the network before you can edit it.
    Friction can also result from your development process, for example when you have to wait for the QA department to test your code before it can be released.
  • Friction can operate on different time scales.
    Some friction slows you down a lot, while other friction is much more benign. For instance, waiting for the next set of requirements might keep you from writing valuable software for weeks.
    On the other hand, waiting for someone to review your code changes may take only a couple of minutes.
  • Friction can be more than simple delays.
    It also rears its ugly head when things are more difficult than they ought to be.
    In the vi editor, for example, you must switch between command and insert modes. Seasoned vi users are just as fast as users of editors that don’t have that separation, yet they have to keep track of which mode they are in, which gives them a higher cognitive load.

Lubricating Software Development

There has been a trend to decrease friction in software development.

Tools like Integrated Development Environments have eliminated many sources of friction.

For instance, Eclipse will automatically compile your code when you save it.

Automated refactorings decrease both the time and the cognitive load required to make certain code changes.

On the process side, things like Agile development methodologies and the DevOps movement have eliminated or reduced friction. For instance, continuous deployment automates the release of software into production.

These lubricants have given us a fighting chance in a world of increasing complexity.

Frictionless Software Development

It’s fun to think about how far we could take these improvements, and what the ultimate Frictionless Development Environment (FDE) might look like.

My guess is that it would combine some of the same trends we already see in consumer and enterprise software products. Cloud computing will play a big role, as will simplified user interaction and access from anywhere.

What do you think?

What friction have you encountered? Do you think friction is the same as waste in Lean?

What have you done to lubricate the friction away? What would your perfect FDE look like?

Please let me know your thoughts in the comments.

Building Both Security and Quality In

One of the important things in a Security Development Lifecycle (SDL) is to feed back information about vulnerabilities to developers.

This post relates that practice to the Agile practice of No Bugs.

The Security Incident Response

Even though we work hard to ship our software without security vulnerabilities, we never succeed 100%.

When an incident is reported (hopefully responsibly), we execute our security response plan. We must be careful to fix the issue without introducing new problems.

Next, we should also look for similar issues to the one reported. It’s not unlikely that there are issues in other parts of the application that are similar to the reported one. We should find and fix those as part of the same security update.

Finally, we should do a root cause analysis to determine why this weakness slipped through the cracks in the first place. Armed with that knowledge, we can adapt our process to make sure that similar issues will not occur in the future.

From Security To Quality

The process outlined above works well for making our software ever more secure.

But security weaknesses are essentially just bugs. Security issues may have more severe consequences than regular bugs, but regular bugs, too, are expensive to fix once the software is deployed.

So it actually makes sense to treat all bugs, security or otherwise, the same way.

As the saying goes, an ounce of prevention is worth a pound of cure. Just as we need to build security in, we also need to build quality in.

Building Quality In Using Agile Methods

This has been known in the Agile and Lean communities for a long time. For instance, James Shore wrote about it in his excellent book The Art Of Agile Development, and Elisabeth Hendrickson thinks that there should be so few bugs that they don’t need triaging.

Some people object to the Zero Defects mentality, claiming that it’s unrealistic.

There is, however, clear evidence of much lower defect rates for Agile development teams. Many Lean implementations also report successes in their quest for Zero Defects.

So there is at least anecdotal evidence that a very significant reduction of defects is possible.

This will require change, of course. Testers need to change and so do developers. And then everybody on the team needs to speak the same language and work together as a single team instead of in silos.

If we do this well, we’ll become bug exterminators that delight our customers with software that actually works.

Book Review: The Security Development Lifecycle (SDL)

In The Security Development Lifecycle (SDL), A Process for Developing Demonstrably More Secure Software, authors Michael Howard and Steven Lipner explain how to build secure software through a repeatable process.

The methodology they describe was developed at Microsoft and has led to a measurable decrease in vulnerabilities. That’s why it’s now also used elsewhere, like at EMC (my employer).

Chapter 1, Enough is Enough: The Threats have Changed, explains how the SDL was born out of the Trustworthy Computing initiative that started with Bill Gates’ famous email in early 2002. Most operating systems have since become relatively secure, so hackers have shifted their focus to applications and the burden is now on us developers to crank up our security game. Many security issues are also privacy problems, so if we don’t, we are bound to pay the price.

Chapter 2, Current Software Development Methods Fail to Produce Secure Software, reviews current software development methods with regard to how (in)secure the resulting applications are. It shows that the adage given enough eyeballs, all bugs are shallow is wrong when it comes to security. The conclusion is that we need to explicitly include security into our development efforts.

Chapter 3, A Short History of the SDL at Microsoft, describes how security improvement efforts at Microsoft evolved into a consistent process that is now called the SDL.

Chapter 4, SDL for Management, explains that the SDL requires time, money, and commitment from senior management to prioritize security over time to market. We’re talking real commitment, like delaying the release of an insecure application.

Chapter 5, Stage 0: Education and Awareness, starts the second part of the book, which describes the stages of the SDL. It all starts with educating developers about security. Without that education, there’s no real chance of delivering secure software.

Chapter 6, Stage 1: Project Inception, sets the security context for the development effort. This includes assigning someone to guide the team through the SDL, building security leaders within the team, and setting up security expectations and tools.

Chapter 7, Stage 2: Define and Follow Best Practices, lists common secure design principles and describes attack surface analysis and attack surface reduction. The latter is about reducing the amount of code accessible to untrusted users, for example by disabling certain features by default.

Chapter 8, Stage 3: Product Risk Assessment, shows how to determine the application’s level of vulnerability to attack and its privacy impact. This helps to determine what level of security investment is appropriate for what parts of the application.

Chapter 9, Stage 4: Risk Analysis, explains threat modeling. The authors think that this is the practice with the most significant contribution to an application’s security. The idea is to understand the potential threats to the application, the risks those threats pose, and the mitigations that can reduce those risks. Threat models also help with code reviews and penetration tests. The chapter uses a pet shop website as an example.

[Note that there is now a tool that helps you with threat modeling. In this tool, you draw data flow diagrams, after which the tool uses the STRIDE approach to automatically find threats. The tool requires Visio 2007+.]

Chapter 10, Stage 5: Creating Security Documents, Tools, and Best Practices for Customers, describes the collateral that helps customers install, maintain, and use your application securely.

Chapter 11, Stage 6: Secure Coding Policies, explains the need for prescribing security-specific coding practices, educating developers about them, and verifying that they are adhered to. This is a high-level chapter, with details following in later chapters.

Chapter 12, Stage 7: Secure Testing Policies, describes the various forms of security testing, like fuzz testing, penetration testing, and run-time verification.

Chapter 13, Stage 8: The Security Push, explains that the goal of a security push is to hunt for security bugs and triage them. Fixes should follow the push. A security push doesn’t really fit into the SDL, since the goal is to prevent vulnerabilities. It can, however, be useful for legacy (i.e. pre-SDL) code.

Chapter 14, Stage 9: The Final Security Review, describes how to assess (from a security perspective) whether the application is ready to ship. A questionnaire is filled out to show compliance with the SDL, the threat models are reviewed, and unfixed security bugs are reviewed to make sure none are critical.

Chapter 15, Stage 10: Security Response Planning, explains that you need to be prepared to respond to the discovery of vulnerabilities in your deployed application, so that you can prevent panic and follow a solid process. You should have a Security Response Center outside your development team that interfaces with security researchers and others who discover vulnerabilities and guides the development team through the process of releasing a fix. It’s also important to feed back lessons learned into the development process.

Chapter 16, Stage 11: Product Release, explains that the actual release is a non-event, since all the hard work was done in the Final Security Review.

Chapter 17, Stage 12: Security Response Execution, describes the real-world challenges associated with responding to reported vulnerabilities, including when and how to deviate from the plan outlined in Security Response Planning. Above all, you must take the time to fix the root problem properly and to make sure you’re not introducing new bugs.

Chapter 18, Integrating SDL with Agile Methods, starts the final part of the book. It shows how to incorporate agile practices into the SDL, or the other way around.

Chapter 19, SDL Banned Function Calls, explains that some functions are so bad from a security perspective, that they never should be used. This chapter is heavily focused on C.

Chapter 20, SDL Minimum Cryptographic Standards, gives guidance on the use of cryptography, like never roll your own, make the use of crypto algorithms configurable, and what key sizes to use for what algorithms.
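
As a small illustration of the configurability advice, here’s a hedged Java sketch using the standard JCA KeyGenerator API; the property names and defaults are my own invention, not from the book:

```java
import java.security.NoSuchAlgorithmException;
import java.util.Properties;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: the algorithm and key size come from configuration rather than
// hard-coded constants, so they can be strengthened without a code change.
public class ConfigurableCrypto {

  public static SecretKey newKey(Properties config) throws NoSuchAlgorithmException {
    // Property names below are invented for this example.
    String algorithm = config.getProperty("crypto.algorithm", "AES");
    int keySize = Integer.parseInt(config.getProperty("crypto.keysize", "256"));

    KeyGenerator generator = KeyGenerator.getInstance(algorithm);
    generator.init(keySize);
    return generator.generateKey();
  }
}
```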

Chapter 21, SDL-Required Tools and Compiler Options, describes security tools you should use during development. This chapter is heavily focused on Microsoft technologies.

Chapter 22, Threat Tree Patterns, shows a number of threat trees that reflect common attack patterns. It follows the STRIDE approach again.

The appendix has information about the authors.

I think this book is a must-read for every developer who is serious about building secure software.

Visualizing Code Coverage in Eclipse with EclEmma

Last time, we saw how Behavior-Driven Development (BDD) allows us to work towards a concrete goal in a very focused way.

In this post, we’ll look at how the big BDD and the smaller TDD feedback loops eliminate waste and how you can visualize that waste using code coverage tools like EclEmma to see whether you execute your process well.

The Relation Between BDD and TDD

Depending on your situation, running BDD scenarios may take a lot of time. For instance, you may need to first create a Web Application Archive (WAR), then start a web server, deploy your WAR, and finally run your automated acceptance tests using Selenium.

This is not a convenient feedback cycle to run for every single line of code you write.

So chances are that you’ll write bigger chunks of code. That increases the risk of introducing mistakes, however. Baby steps can mitigate that risk. In this case, that means moving to Test-First programming, preferably Test-Driven Development (TDD).

The link between a BDD scenario and a bunch of unit tests is the top-down test. The top-down test is a translation of the BDD scenario into test code. From there, you descend further down into unit tests using regular TDD.
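
To make that concrete, here’s a minimal JUnit sketch of a top-down test for a made-up scenario about saving and retrieving an order. All the domain names are hypothetical, and the tiny in-memory service is inlined only to keep the example self-contained:

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.junit.Test;

// Top-down translation of a hypothetical BDD scenario:
//   Given a new order for 2 books
//   When the order is saved
//   Then it can be retrieved again
public class OrderScenarioTest {

  // Hypothetical domain classes, inlined to keep the sketch self-contained.
  static class Order {
    final String item;
    final int quantity;
    Order(String item, int quantity) { this.item = item; this.quantity = quantity; }
  }

  static class OrderService {
    private final Map<String, Order> store = new HashMap<String, Order>();
    String save(Order order) {
      String id = UUID.randomUUID().toString();
      store.put(id, order);
      return id;
    }
    Order findById(String id) { return store.get(id); }
  }

  @Test
  public void savedOrderCanBeRetrieved() {
    OrderService service = new OrderService();  // Given
    Order order = new Order("books", 2);
    String id = service.save(order);            // When
    assertEquals(order, service.findById(id));  // Then
  }
}
```

From a test like this, you would descend into finer-grained unit tests for the service and its collaborators using regular TDD.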

This translation of BDD scenarios into top-down tests may seem wasteful, but it’s not.

Top-down tests only serve to give the developer a shorter feedback cycle. You should never have to leave your IDE to determine whether you’re done. The waste of the translation is more than made up for by the gains of not having to constantly switch to the larger BDD feedback cycle. By doing a little bit more work, you end up going faster!

If you’re worried about your build time increasing because of these top-down tests, you may even consider removing them after you’ve made them pass, since their risk-reducing job is then done.

Both BDD and TDD Eliminate Waste Using JIT Programming

Both BDD and TDD operate on the idea of Just-In-Time (JIT) coding. JIT is a Lean principle for eliminating waste; in this case, the waste is writing unnecessary code.

There are many reasons why you’d want to eliminate unnecessary code:

  • Since it takes time to write code, writing less code means you’ll be more productive (finish more stories per iteration)
  • More code means more bugs
  • In particular, more code means more opportunities for security vulnerabilities
  • More code means more things a future maintainer must understand, and thus a higher risk of bugs introduced during maintenance due to misunderstandings

Code Coverage Can Visualize Waste

With BDD and TDD in your software development process, you expect less waste. That’s the theory, at least. How do we prove this in practice?

Well, let’s look at the process:

  1. BDD scenarios define the acceptance criteria for the user stories
  2. Those BDD scenarios are translated into top-down tests
  3. Those top-down tests lead to unit tests
  4. Finally, those unit tests lead to production code

The last step is easiest to verify: no code should have been written that wasn’t necessary for making some unit test pass. We can prove that by measuring code coverage while we execute the unit tests. Any code that is not covered is by definition waste.

EclEmma Shows Code Coverage in Eclipse

We use Cobertura in our Continuous Integration build to measure code coverage. But that’s a long feedback cycle again.

Therefore, I like to use EclEmma to measure code coverage while I’m in the zone in Eclipse.

EclEmma turns covered lines green, uncovered lines red, and partially covered lines yellow.

You can change these colors using Window|Preferences|Java|Code coverage. For instance, you could change Full Coverage to white, so that the normal case doesn’t introduce visual clutter and only the exceptions stand out.

The great thing about EclEmma is that it lets you measure code coverage without making you change the way you work.

The only difference is that instead of choosing Run As|JUnit Test (or Alt+Shift+X, T), you now choose Coverage As|JUnit test (or Alt+Shift+E, T). To re-run the last coverage, use Ctrl+Shift+F11 (instead of Ctrl+F11 to re-run the last launch).

If your fingers are conditioned to use Alt+Shift+X, T and/or Ctrl+F11, you can always change the key bindings using Window|Preferences|General|Keys.

In my experience, the performance overhead of EclEmma is low enough that you can use it all the time.

EclEmma Helps You Monitor Your Agile Process

The feedback from EclEmma allows you to immediately see any waste in the form of unnecessary code. Since there shouldn’t be any such waste if you do BDD and TDD well, the feedback from EclEmma is really feedback on how well you execute your BDD/TDD process. You can use this feedback to hone your skills and become the best developer you can be.

Waterfall vs. Agile

I’ve been a fan of Agile methodologies for quite some time now. As an Agilist, I would scoff at the Waterfall process I was taught during my studies. I did read a couple of times that the original paper introducing the Waterfall model wasn’t really supportive of it at all, but I’d never read that paper myself. Until now.

Here’s what its author, Dr. Winston Royce, has to say.

Attitude
The heart of software development is analysis and coding, “since both steps involve genuinely creative work which directly contributes to the usefulness of the final product.” But for larger software systems, they are not enough. “Additional development steps are required […] The prime function of management is to sell these concepts to both groups and then enforce compliance on the part of development personnel.” The groups mentioned are customers and developers. Wow, not really people over processes, huh?

There are big problems with Waterfall
Royce then goes on to introduce the other steps, ending up with what we now call Waterfall. Right after that he adds feedback loops between each step and its predecessor. The caption to this figure says “Hopefully, the iterative interaction between the various phases is confined to successive steps.” Immediately following that, he points out a problem with this process: “Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps”.

But there is a much worse problem. “The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. […] If these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. […] In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.” Yes. Been there, done that.

But these can be fixed
Stunningly, though, Royce goes on to claim “However, I believe the illustrated approach to be fundamentally sound.” We just need to tweak it a bit more: add a preliminary design phase before analysis, document the design, do it twice (he means simulate first, but others refer to this as “plan one to throw away”), look closely at testing, and involve the customer.

And these fixes look a lot like Agile
The first trick, doing some design before analysis, is also common in Agile methodologies. However, we usually don’t single out analysis and design, but apply the trick to all the phases. That’s how we end up with Behavior-Driven Development.

Royce turns out to be a big fan of documentation: “In order to procure a 5 million dollar hardware device, I would expect that a 30 page specification would provide adequate detail to control the procurement. In order to procure 5 million dollars of software I would estimate a 1000 page specification is about right in order to achieve comparable control”. Why is documentation so important? One of the reasons is that “during the early phase of software development the documentation is the specification and is the design.” Agilists would rather argue that automated tests are both the documentation and the specification, and drive the design. Royce could never have thought of that, since testing in his mind occurred at the end and was to be performed manually.

The do it twice trick is also used a lot in Agile. We call it a spike.

For testing, Royce notices that a lot of errors can be caught before the test phase: “every bit of an analysis and every bit of code should be subjected to a simple visual scan by a second party who did not do the original analysis or code”. Agilists would agree that pair programming is very useful. Also, Royce advises us to “test every logic path in the computer program at least once”. He understands this is difficult, but believes it should be done anyway. I agree that we should have (nearly) 100% test coverage, and TDD gives us just that.
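
As a tiny illustration of testing every logic path, here’s a hedged JUnit sketch with one test per branch of a made-up discount rule; with TDD, each branch only exists because a test demanded it:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// One test per logic path of a hypothetical discount rule.
public class DiscountTest {

  // Made-up production code under test.
  static int discountPercentage(int orderTotal) {
    if (orderTotal >= 100) {
      return 10;  // large orders get a discount
    }
    return 0;     // small orders don't
  }

  @Test
  public void largeOrdersGetTenPercent() {
    assertEquals(10, discountPercentage(150));  // covers the >= 100 path
  }

  @Test
  public void smallOrdersGetNoDiscount() {
    assertEquals(0, discountPercentage(50));    // covers the other path
  }
}
```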

For customer involvement, Royce notes that “for some reason what a software design is going to do is subject to wide interpretation even after previous agreement. […] To give the contractor free rein between requirement definition and operation is inviting trouble.” I don’t see how he can maintain this and still be a fan of written documentation. But I am with him in seeing the value of close collaboration with the customer.

So there is no dichotomy
In summary, the author of the Waterfall process clearly saw some problems with that approach. He even identified some solutions that look remarkably like what we do in Agile methodologies today. So why don’t we end this Waterfall vs. Agile false dichotomy and from now on talk just about software development best practices? Make progress, not war.

By the way, what I find amazing is that somehow people managed to get the Waterfall process out of this paper, but not the problems and solutions Royce presented. And it’s almost criminal that the Waterfall process is still taught in universities as a good way to do software development. Without the above fixes, it’s clearly not.

Kanban

Lately, I’ve seen a lot of discussions on Kanban. For those of you who, like me, want to know what all that fuss is about, I collected a couple of links that I will try to merge into a coherent whole below.

So what exactly is Kanban? Literally, it means “visual card”, but that’s not very helpful. This introduction explains that Kanban revolves around a board that visualizes the software development flow.

In fact, flow is a very important concept here. Kanban is a pull system, in which Minimal Marketable Features (MMFs) flow through the development stages when there is capacity available. This contrasts with most Agile methods that push work items into iterations. Also, note that for most Agile methods, those work items (e.g. User Stories) would be smaller than MMFs.

The other big point is that Kanban limits Work In Progress (i.e. the number of MMFs per development stage). This naturally exposes the bottleneck(s) in the flow.
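
To see what a WIP limit does, here’s a toy Java model, purely illustrative, with invented stage names and limits: each stage is a bounded queue, and when it’s full, new work can’t be pushed in, which makes the bottleneck visible.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of a Kanban WIP limit: a development stage is a bounded queue.
public class WipLimitDemo {

  public static void main(String[] args) {
    BlockingQueue<String> testing = new ArrayBlockingQueue<String>(3); // WIP limit = 3

    for (int i = 1; i <= 5; i++) {
      String mmf = "MMF-" + i;
      if (testing.offer(mmf)) {
        System.out.println(mmf + " entered testing");
      } else {
        // The stage is at its WIP limit: a signal to swarm on testing
        // rather than start yet more work upstream.
        System.out.println(mmf + " blocked: testing is the bottleneck");
      }
    }
  }
}
```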

This leads us nicely to the main reason to use Kanban: to improve your software development process. Other Agile methods deal with process improvement as well, but Kanban is different from e.g. Scrum.

So, if all this sounds cool and you want to give Kanban a shot, then apparently this is how you should get started. If you do, then you may see these effects. Also, make sure to get into a Kanban state of mind.

Update: here is a great compilation of Kanban resources.