Functional FizzBuzz Kata in Java

A while ago I solved the FizzBuzz kata using Java 8 streams and lambdas. While the end result was functional, the intermediate steps were not. Surely I can do better.

As always, let’s start with a failing test:

+ package remonsinnema.blog.fizzbuzz;
+
+ import static org.junit.Assert.assertEquals;
+
+ import org.junit.Test;
+
+
+ public class WhenFunctionallyFuzzingAndBuzzing {
+
+   private final FizzBuzzer fizzBuzzer = new FizzBuzzer();
+
+   @Test
+   public void shouldReplaceMultiplesOfThreeWithFizzAndMultiplesOfFiveWithBuzz() {
+     assertEquals("1", "1", fizzBuzzer.apply(1));
+   }
+
+ }
+ package remonsinnema.blog.fizzbuzz;
+
+ import java.util.function.Function;
+
+
+ public class FizzBuzzer implements Function<Integer, String> {
+
+   @Override
+   public String apply(Integer n) {
+     return null;
+   }
+
+ }

Note that I start off on a functional course right away, using Java’s Function.

I fake the implementation to make the test pass:

  public class FizzBuzzer implements Function<Integer, String> {
    @Override
    public String apply(Integer n) {
–     return null;
+     return "1";
    }
  }

And refactor the test to remove duplication:

  public class WhenFunctionallyFuzzingAndBuzzing {
    @Test
    public void shouldReplaceMultiplesOfThreeWithFizzAndMultiplesOfFiveWithBuzz() {
–     assertEquals("1", "1", fizzBuzzer.apply(1));
+     assertFizzBuzz("1", 1);
+   }
+
+   private void assertFizzBuzz(String expected, int value) {
+     assertEquals(Integer.toString(value), expected, fizzBuzzer.apply(value));
    }
  }

Then I add another test to generalize the implementation:

  public class WhenFunctionallyFuzzingAndBuzzing {
    @Test
    public void shouldReplaceMultiplesOfThreeWithFizzAndMultiplesOfFiveWithBuzz() {
      assertFizzBuzz("1", 1);
+     assertFizzBuzz("2", 2);
    }
    private void assertFizzBuzz(String expected, int value) {
  public class FizzBuzzer implements Function<Integer, String> {
    @Override
    public String apply(Integer n) {
–     return "1";
+     return Integer.toString(n);
    }
  }

OK, pretty standard stuff so far. Next I need to replace 3 with “Fizz”:

  public class WhenFunctionallyFuzzingAndBuzzing {
    public void shouldReplaceMultiplesOfThreeWithFizzAndMultiplesOfFiveWithBuzz() {
      assertFizzBuzz("1", 1);
      assertFizzBuzz("2", 2);
+     assertFizzBuzz("Fizz", 3);
    }
    private void assertFizzBuzz(String expected, int value) {
  public class FizzBuzzer implements Function<Integer, String> {
    @Override
    public String apply(Integer n) {
–     return Integer.toString(n);
+     return numberReplacerFor(n).apply(n);
+   }
+
+   private Function<Integer, String> numberReplacerFor(Integer n) {
+     return n == 3
+         ? i -> "Fizz"
+         : i -> Integer.toString(i);
    }
  }

Here I recognize that I need to apply one of two functions, depending on the input. This code works, but needs some cleaning up. First, as a stepping stone, I extract the lambdas into fields:

  import java.util.function.Function;
  public class FizzBuzzer implements Function<Integer, String> {
+   private final Function<Integer, String> replaceNumberWithStringRepresentation
+       = n -> Integer.toString(n);
+   private final Function<Integer, String> replaceNumberWithFizz
+       = n -> "Fizz";
+
    @Override
    public String apply(Integer n) {
      return numberReplacerFor(n).apply(n);
    private Function<Integer, String> numberReplacerFor(Integer n) {
      return n == 3
–         ? i -> "Fizz"
–         : i -> Integer.toString(i);
+         ? replaceNumberWithFizz
+         : replaceNumberWithStringRepresentation;
    }
  }

Next I emphasize that “3” and “Fizz” go together by extracting a class:

  public class FizzBuzzer implements Function<Integer, String> {
    private final Function<Integer, String> replaceNumberWithStringRepresentation
        = n -> Integer.toString(n);
–   private final Function<Integer, String> replaceNumberWithFizz
–       = n -> "Fizz";
+   private final Fizzer replaceNumberWithFizz = new Fizzer();
    @Override
    public String apply(Integer n) {
    }
    private Function<Integer, String> numberReplacerFor(Integer n) {
–     return n == 3
+     return replaceNumberWithFizz.test(n)
          ? replaceNumberWithFizz
          : replaceNumberWithStringRepresentation;
    }
+ package remonsinnema.blog.fizzbuzz;
+
+ import java.util.function.Function;
+ import java.util.function.Predicate;
+
+
+ public class Fizzer implements Function<Integer, String>, Predicate<Integer> {
+
+   @Override
+   public boolean test(Integer n) {
+     return n == 3;
+   }
+
+   @Override
+   public String apply(Integer n) {
+     return "Fizz";
+   }
+
+ }

Here I’m using the standard Java Predicate functional interface.

To add “Buzz”, I need to generalize the code from a single if (hidden in the ternary operator) to a loop:

  public class WhenFunctionallyFuzzingAndBuzzing {
      assertFizzBuzz("1", 1);
      assertFizzBuzz("2", 2);
      assertFizzBuzz("Fizz", 3);
+     assertFizzBuzz("4", 4);
+     assertFizzBuzz("Buzz", 5);
    }
    private void assertFizzBuzz(String expected, int value) {
  package remonsinnema.blog.fizzbuzz;
+ import java.util.Arrays;
+ import java.util.Collection;
  import java.util.function.Function;
    private final Function<Integer, String> replaceNumberWithStringRepresentation
        = n -> Integer.toString(n);
–   private final Fizzer replaceNumberWithFizz = new Fizzer();
+   private final Collection<ReplaceNumberWithFixedText> replacers = Arrays.asList(
+       new ReplaceNumberWithFixedText(3, "Fizz"),
+       new ReplaceNumberWithFixedText(5, "Buzz")
+   );
    @Override
    public String apply(Integer n) {
    }
    private Function<Integer, String> numberReplacerFor(Integer n) {
–     return replaceNumberWithFizz.test(n)
–         ? replaceNumberWithFizz
–         : replaceNumberWithStringRepresentation;
+     for (ReplaceNumberWithFixedText replacer : replacers) {
+       if (replacer.test(n)) {
+         return replacer;
+       }
+     }
+     return replaceNumberWithStringRepresentation;
    }
  }
– package remonsinnema.blog.fizzbuzz;
– import java.util.function.Function;
– import java.util.function.Predicate;
– public class Fizzer implements Function<Integer, String>, Predicate<Integer> {
–   @Override
–   public boolean test(Integer n) {
–     return n == 3;
–   }
–   @Override
–   public String apply(Integer n) {
–     return "Fizz";
–   }
– }
+ package remonsinnema.blog.fizzbuzz;
+
+ import java.util.function.Function;
+ import java.util.function.Predicate;
+
+
+ public class ReplaceNumberWithFixedText implements Function<Integer, String>,
+     Predicate<Integer> {
+
+   private final int target;
+   private final String replacement;
+
+   public ReplaceNumberWithFixedText(int target, String replacement) {
+     this.target = target;
+     this.replacement = replacement;
+   }
+
+   @Override
+   public boolean test(Integer n) {
+     return n == target;
+   }
+
+   @Override
+   public String apply(Integer n) {
+     return replacement;
+   }
+
+ }

Oops, old habits… That should be a stream rather than a loop:

  import java.util.function.Function;
  public class FizzBuzzer implements Function<Integer, String> {
–   private final Function<Integer, String> replaceNumberWithStringRepresentation
+   private final Function<Integer, String> defaultReplacer
        = n -> Integer.toString(n);
    private final Collection<ReplaceNumberWithFixedText> replacers = Arrays.asList(
        new ReplaceNumberWithFixedText(3, "Fizz"),
    }
    private Function<Integer, String> numberReplacerFor(Integer n) {
–     for (ReplaceNumberWithFixedText replacer : replacers) {
–       if (replacer.test(n)) {
–         return replacer;
–       }
–     }
–     return replaceNumberWithStringRepresentation;
+     return replacers.stream()
+         .filter(replacer -> replacer.test(n))
+         .map(replacer -> (Function<Integer, String>) replacer)
+         .findFirst()
+         .orElse(defaultReplacer);
    }
  }

Much better. The next test is for multiples:

  public class WhenFunctionallyFuzzingAndBuzzing {
      assertFizzBuzz("Fizz", 3);
      assertFizzBuzz("4", 4);
      assertFizzBuzz("Buzz", 5);
+     assertFizzBuzz("Fizz", 6);
    }
    private void assertFizzBuzz(String expected, int value) {
  public class FizzBuzzer implements Function<Integer, String> {
    private final Function<Integer, String> defaultReplacer
        = n -> Integer.toString(n);
–   private final Collection<ReplaceNumberWithFixedText> replacers = Arrays.asList(
–       new ReplaceNumberWithFixedText(3, "Fizz"),
–       new ReplaceNumberWithFixedText(5, "Buzz")
+   private final Collection<ReplaceMultipleWithFixedText> replacers = Arrays.asList(
+       new ReplaceMultipleWithFixedText(3, "Fizz"),
+       new ReplaceMultipleWithFixedText(5, "Buzz")
    );
    @Override
+ package remonsinnema.blog.fizzbuzz;
+
+ import java.util.function.Function;
+ import java.util.function.Predicate;
+
+
+ public class ReplaceMultipleWithFixedText implements Function<Integer, String>,
+     Predicate<Integer> {
+
+   private final int target;
+   private final String replacement;
+
+   public ReplaceMultipleWithFixedText(int target, String replacement) {
+     this.target = target;
+     this.replacement = replacement;
+   }
+
+   @Override
+   public boolean test(Integer n) {
+     return n % target == 0;
+   }
+
+   @Override
+   public String apply(Integer n) {
+     return replacement;
+   }
+
+ }
– package remonsinnema.blog.fizzbuzz;
– import java.util.function.Function;
– import java.util.function.Predicate;
– public class ReplaceNumberWithFixedText implements Function<Integer, String>, Predicate<Integer> {
–   private final int target;
–   private final String replacement;
–   public ReplaceNumberWithFixedText(int target, String replacement) {
–     this.target = target;
–     this.replacement = replacement;
–   }
–   @Override
–   public boolean test(Integer n) {
–     return n == target;
–   }
–   @Override
–   public String apply(Integer n) {
–     return replacement;
–   }
– }

The last test is to combine Fizz and Buzz:

  public class WhenFunctionallyFuzzingAndBuzzing {
      assertFizzBuzz("4", 4);
      assertFizzBuzz("Buzz", 5);
      assertFizzBuzz("Fizz", 6);
+     assertFizzBuzz("7", 7);
+     assertFizzBuzz("8", 8);
+     assertFizzBuzz("Fizz", 9);
+     assertFizzBuzz("Buzz", 10);
+     assertFizzBuzz("11", 11);
+     assertFizzBuzz("Fizz", 12);
+     assertFizzBuzz("13", 13);
+     assertFizzBuzz("14", 14);
+     assertFizzBuzz("FizzBuzz", 15);
    }
    private void assertFizzBuzz(String expected, int value) {
  package remonsinnema.blog.fizzbuzz;
  import java.util.Arrays;
  import java.util.Collection;
  import java.util.function.Function;
+ import java.util.stream.Collectors;
+ import java.util.stream.Stream;
  public class FizzBuzzer implements Function<Integer, String> {
    @Override
    public String apply(Integer n) {
–     return numberReplacerFor(n).apply(n);
+     return numberReplacersFor(n)
+         .map(function -> function.apply(n))
+         .collect(Collectors.joining());
    }
–   private Function<Integer, String> numberReplacerFor(Integer n) {
–     return replacers.stream()
+   private Stream<Function<Integer, String>> numberReplacersFor(Integer n) {
+     return Stream.of(replacers.stream()
          .filter(replacer -> replacer.test(n))
          .map(replacer -> (Function<Integer, String>) replacer)
          .findFirst()
–         .orElse(defaultReplacer);
+         .orElse(defaultReplacer));
    }
  }

I generalized the single Function into a Stream of Functions, to which I apply the Map-Reduce pattern. I could have spelled out the Reduce part using something like .reduce("", (a, b) -> a + b), but I think Collectors.joining() is more expressive.
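
As an aside, here is a minimal sketch of the two reductions side by side, assuming a stream that already holds the replacement strings (the variable names are mine, not part of the kata):

String joined = Stream.of("Fizz", "Buzz")
    .collect(Collectors.joining());   // "FizzBuzz"
String reduced = Stream.of("Fizz", "Buzz")
    .reduce("", (a, b) -> a + b);     // also "FizzBuzz"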

This doesn’t pass the test yet, since I return a stream of a single function. The fix is a little bit tricky, because I need to know whether any applicable replacer functions were found, and you can’t do that without terminating the stream. So I need to create a new stream using StreamSupport:

  package remonsinnema.blog.fizzbuzz;
  import java.util.Arrays;
  import java.util.Collection;
+ import java.util.Iterator;
+ import java.util.Spliterators;
  import java.util.function.Function;
  import java.util.stream.Collectors;
  import java.util.stream.Stream;
+ import java.util.stream.StreamSupport;
  public class FizzBuzzer implements Function<Integer, String> {
    }
    private Stream<Function<Integer, String>> numberReplacersFor(Integer n) {
–     return Stream.of(replacers.stream()
+     Iterator<Function<Integer, String>> result = replacers.stream()
          .filter(replacer -> replacer.test(n))
          .map(replacer -> (Function<Integer, String>) replacer)
–         .findFirst()
–         .orElse(defaultReplacer));
+         .iterator();
+     return result.hasNext()
+         ? StreamSupport.stream(Spliterators.spliteratorUnknownSize(result, 0), false)
+         : Stream.of(defaultReplacer);
    }
  }

And that’s it. The full code is on GitHub.

I learned two lessons from this little exercise:

  1. Java comes with a whole bunch of functional interfaces, like Function and Predicate, that are easily combined with streams to solve a variety of problems.
  2. The standard if → while transformation becomes if → stream in the functional world.

 


How to manage dependencies in a Gradle multi-project build

I’ve been a fan of the Gradle build tool from quite early on. Its potential was clear even before version 1.0, when breaking changes were still a regular occurrence. Today, upgrades rarely cause surprises. The tool has become mature and performs well.

Gradle includes a powerful dependency management system that can work with Maven and Ivy repositories as well as local file system dependencies.

During my work with Gradle I’ve come to rely on a pattern for managing dependencies in a multi-project build that I want to share. This pattern consists of two key practices:

  1. Centralize dependency declarations in build.gradle
  2. Centralize dependency version declarations in gradle.properties

Both practices are examples of applying software development best practices like DRY to the code that makes up the Gradle build. Let’s look at them in some more detail.

Centralize dependency declarations

In the root project’s build.gradle file, declare a new configuration for each dependency used in the entire project. In each sub-project that uses the dependency, declare that the compile (or testCompile, etc) configuration extends the configuration for the dependency:

// root project's build.gradle
subprojects {
  configurations {
    commonsIo
  }

  dependencies {
    commonsIo 'commons-io:commons-io:2.5'
  }
}

// sub-project's build.gradle
configurations {
  compile.extendsFrom commonsIo
}

By putting all dependency declarations in a single place, we know where to look and we prevent multiple sub-projects from declaring the same dependency with different versions.

Furthermore, the sub-projects are now more declarative, specifying only what logical components they depend on, rather than all the details of how a component is built up from individual jar files. When there is a one-to-one correspondence, as in the commons IO example, that’s not such a big deal, but the difference is pronounced when working with components that are made up of multiple jars, like the Spring framework or Jetty.

Centralize dependency version declarations

The next step is to replace all the version numbers from the root project’s build.gradle file by properties defined in the root project’s gradle.properties:

// root project's build.gradle
dependencies {
  commonsIo "commons-io:commons-io:$commonsIoVersion"
}

// gradle.properties
commonsIoVersion=2.5

This practice allows you to reuse the version numbers for related dependencies. For instance, if you’re using the Spring framework, you may want to declare dependencies on spring-mvc and spring-jdbc with the same version number.
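
For example, a sketch of what that could look like, with hypothetical springMvc and springJdbc configurations declared following the same pattern as commonsIo above (the version number is only an illustration):

// root project's build.gradle
dependencies {
  springMvc "org.springframework:spring-webmvc:$springVersion"
  springJdbc "org.springframework:spring-jdbc:$springVersion"
}

// gradle.properties
springVersion=4.2.5.RELEASE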

There is an additional advantage of this approach. Upgrading a dependency means updating gradle.properties, while adding a new dependency means updating build.gradle. This makes it easy to gauge from a commit feed what types of changes could have been made and thus to determine whether a closer inspection is warranted.

You can take this a step further and put the configurations and dependencies blocks in a separate file, e.g. dependencies.gradle.
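
A minimal sketch of that split, assuming dependencies.gradle sits next to the root build.gradle:

// root project's build.gradle
apply from: 'dependencies.gradle'

// dependencies.gradle then holds the configurations and dependencies blocks shown above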

And beyond…

Having all the dependencies declared in a single location is a stepping stone to more advanced supply chain management practices.

The centrally declared configurations give a good overview of all the components that you use in your product, the so-called Bill of Materials (BOM). You can use the above technique, or use the Gradle BOM plugin.

The BOM makes it easier to use a tool like OWASP DependencyCheck to check for publicly disclosed vulnerabilities in the dependencies that you use. At EMC, about 80% of vulnerabilities reported against our products are caused by issues in 3rd party components, so it makes sense to keep a security eye on dependencies.
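
As an illustration, wiring in the DependencyCheck Gradle plugin takes only a few lines; treat the coordinates and version below as assumptions to verify against the plugin’s documentation:

// root project's build.gradle
buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    // coordinates and version are assumptions; check the plugin's documentation
    classpath 'org.owasp:dependency-check-gradle:1.4.0'
  }
}
apply plugin: 'org.owasp.dependencycheck'

The plugin adds an analysis task that reports dependencies with publicly known vulnerabilities.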

A solid BOM also makes it easier to review licenses and their compliance requirements. If you can’t afford a tool like BlackDuck Protex, you can write something less advanced yourself with modest effort.

FizzBuzz Kata With Java Streams

After only a couple of weeks of Judo practice, my son got bored. He complained that he wasn’t learning anything, because he kept doing the same thing over and over.

It’s not just young children that confuse learning and doing new things. For instance, how many software developers go through the trouble of deliberate practice by performing katas or attending dojos?

It may seem silly to repeat exercises that you’ve already done many times, but it’s not. It’s the only way to become a black belt in your field. And remember that mastery is one of the three intrinsic motivators (the others being autonomy and purpose).

Practicing means slowing down and moving focus from outcome to process. It’s best to use simple exercises that you can complete in a limited amount of time, so you can do the same exercise multiple times.

I’ve found that I virtually always learn something new when I practice. That’s not because I’ve forgotten how to solve the problem since last time, but because I’ve learned new things since then and thus see the world through new eyes.

For example, since Java 8 came out I’ve been trying to use the new stream classes to help move to a more functional style of programming. This has changed the way I look at old problems, like FizzBuzz.

Let’s see this in action. Of course, I start by adding a test:

+ package remonsinnema.blog.fizzbuzz;
+
+ import static org.junit.Assert.assertEquals;
+
+ import org.junit.Test;
+
+
+ public class WhenFizzingAndBuzzing {
+
+   private final FizzBuzz fizzbuzz = new FizzBuzz();
+
+   @Test
+   public void shouldReplaceWithFizzAndBuzz() {
+     assertEquals("1", "1", fizzbuzz.get(1));
+   }
+
+ }

This test uses the When…Should form of unit testing that helps focus on behavior rather than implementation details. I let Eclipse generate the code required to make this compile:

+ package remonsinnema.blog.fizzbuzz;
+
+
+ public class FizzBuzz {
+
+   public String get(int i) {
+     return null;
+   }
+
+ }

The simplest code that makes the test pass is to fake it:

  package remonsinnema.blog.fizzbuzz;
  public class FizzBuzz {
    public String get(int i) {
–     return null;
+     return "1";
    }
  }

Now that the test passes, it’s time for refactoring. I remove duplication from the test:

  public class WhenFizzingAndBuzzing {
    @Test
    public void shouldReplaceWithFizzAndBuzz() {
–     assertEquals("1", "1", fizzbuzz.get(1));
+     assertFizzBuzz("1", 1);
+   }
+
+   private void assertFizzBuzz(String expected, int n) {
+     assertEquals(Integer.toString(n), expected, fizzbuzz.get(n));
    }
  }

Next I add a test to force the real implementation:

  public class WhenFizzingAndBuzzing {
    @Test
    public void shouldReplaceWithFizzAndBuzz() {
      assertFizzBuzz("1", 1);
+     assertFizzBuzz("2", 2);
    }
    private void assertFizzBuzz(String expected, int n) {
  package remonsinnema.blog.fizzbuzz;
  public class FizzBuzz {
–   public String get(int i) {
–     return "1";
+   public String get(int n) {
+     return Integer.toString(n);
    }
  }

OK, now let’s get real with a test for Fizz:

  public class WhenFizzingAndBuzzing {
    public void shouldReplaceWithFizzAndBuzz() {
      assertFizzBuzz("1", 1);
      assertFizzBuzz("2", 2);
+     assertFizzBuzz("Fizz", 3);
    }
    private void assertFizzBuzz(String expected, int n) {
  package remonsinnema.blog.fizzbuzz;
  public class FizzBuzz {
    public String get(int n) {
+     if (n == 3) {
+       return "Fizz";
+     }
      return Integer.toString(n);
    }

Similar for Buzz:

  public class WhenFizzingAndBuzzing {
      assertFizzBuzz("Fizz", 3);
+     assertFizzBuzz("4", 4);
+     assertFizzBuzz("Buzz", 5);
    }
    private void assertFizzBuzz(String expected, int n) {
  public class FizzBuzz {
      if (n == 3) {
        return "Fizz";
      }
+     if (n == 5) {
+       return "Buzz";
+     }
      return Integer.toString(n);
    }

Here I just copied and pasted the if statement to get it working quickly. We shouldn’t stop there, of course, but get rid of the dirty stuff. In this case, that’s duplication.

First, let’s update the code to make the duplication more apparent:

  package remonsinnema.blog.fizzbuzz;
  public class FizzBuzz {
    public String get(int n) {
–     if (n == 3) {
–       return "Fizz";
+     MultipleReplacer replacer = new MultipleReplacer(3, "Fizz");
+     if (n == replacer.getValue()) {
+       return replacer.getText();
      }
–     if (n == 5) {
–       return "Buzz";
+     replacer = new MultipleReplacer(5, "Buzz");
+     if (n == replacer.getValue()) {
+       return replacer.getText();
      }
      return Integer.toString(n);
    }
+ package remonsinnema.blog.fizzbuzz;
+
+
+ public class MultipleReplacer {
+
+   private final int value;
+   private final String text;
+
+   public MultipleReplacer(int value, String text) {
+     this.value = value;
+     this.text = text;
+   }
+
+   public int getValue() {
+     return value;
+   }
+
+   public String getText() {
+     return text;
+   }
+
+ }

I just created a new value object to hold the two values that I had to change after the copy/paste.

Now that the duplication is clearer, it’s easy to remove:

  package remonsinnema.blog.fizzbuzz;
+ import java.util.Arrays;
+ import java.util.Collection;
+
  public class FizzBuzz {
+   private final Collection<MultipleReplacer> replacers = Arrays.asList(
+       new MultipleReplacer(3, "Fizz"), new MultipleReplacer(5, "Buzz"));
+
    public String get(int n) {
–     MultipleReplacer replacer = new MultipleReplacer(3, "Fizz");
–     if (n == replacer.getValue()) {
–       return replacer.getText();
–     }
–     replacer = new MultipleReplacer(5, "Buzz");
–     if (n == replacer.getValue()) {
–       return replacer.getText();
+     for (MultipleReplacer replacer : replacers) {
+       if (n == replacer.getValue()) {
+         return replacer.getText();
+       }
      }
      return Integer.toString(n);
    }

I’m not done cleaning up, however. The current code suffers from feature envy, which I resolve by moving behavior into the value object:

  package remonsinnema.blog.fizzbuzz;
  import java.util.Arrays;
  import java.util.Collection;
+ import java.util.Optional;
  public class FizzBuzz {
    public String get(int n) {
      for (MultipleReplacer replacer : replacers) {
–       if (n == replacer.getValue()) {
–         return replacer.getText();
+       Optional<String> result = replacer.textFor(n);
+       if (result.isPresent()) {
+         return result.get();
        }
      }
      return Integer.toString(n);
  package remonsinnema.blog.fizzbuzz;
+ import java.util.Optional;
+
  public class MultipleReplacer {
      this.text = text;
    }
–   public int getValue() {
–     return value;
–   }
–   public String getText() {
–     return text;
+   public Optional<String> textFor(int n) {
+     if (n == value) {
+       return Optional.of(text);
+     }
+     return Optional.empty();
    }
  }

Now that I’m done refactoring, I can continue with multiples:

  public class WhenFizzingAndBuzzing {
      assertFizzBuzz("Fizz", 3);
      assertFizzBuzz("4", 4);
      assertFizzBuzz("Buzz", 5);
+     assertFizzBuzz("Fizz", 6);
    }
    private void assertFizzBuzz(String expected, int n) {
  public class MultipleReplacer {
    }
    public Optional<String> textFor(int n) {
–     if (n == value) {
+     if (n % value == 0) {
        return Optional.of(text);
      }
      return Optional.empty();

The final test is for simultaneous “Fizz” and “Buzz”:

  public class WhenFizzingAndBuzzing {
      assertFizzBuzz("4", 4);
      assertFizzBuzz("Buzz", 5);
      assertFizzBuzz("Fizz", 6);
+     assertFizzBuzz("7", 7);
+     assertFizzBuzz("8", 8);
+     assertFizzBuzz("Fizz", 9);
+     assertFizzBuzz("Buzz", 10);
+     assertFizzBuzz("11", 11);
+     assertFizzBuzz("Fizz", 12);
+     assertFizzBuzz("13", 13);
+     assertFizzBuzz("14", 14);
+     assertFizzBuzz("FizzBuzz", 15);
    }
    private void assertFizzBuzz(String expected, int n) {
  public class FizzBuzz {
        new MultipleReplacer(3, "Fizz"), new MultipleReplacer(5, "Buzz"));
    public String get(int n) {
+     StringBuilder result = new StringBuilder();
      for (MultipleReplacer replacer : replacers) {
–       Optional<String> result = replacer.textFor(n);
–       if (result.isPresent()) {
–         return result.get();
+       Optional<String> replacement = replacer.textFor(n);
+       if (replacement.isPresent()) {
+         result.append(replacement.get());
        }
      }
+     if (result.length() > 0) {
+       return result.toString();
+     }
      return Integer.toString(n);
    }

This code is rather complex, but this is where streams come to the rescue:

  public class FizzBuzz {
        new MultipleReplacer(3, "Fizz"), new MultipleReplacer(5, "Buzz"));
    public String get(int n) {
–     StringBuilder result = new StringBuilder();
–     for (MultipleReplacer replacer : replacers) {
–       Optional<String> replacement = replacer.textFor(n);
–       if (replacement.isPresent()) {
–         result.append(replacement.get());
–       }
–     }
–     if (result.length() > 0) {
–       return result.toString();
–     }
–     return Integer.toString(n);
+     return replacers.stream()
+         .map(replacer -> replacer.textFor(n))
+         .filter(Optional::isPresent)
+         .map(optional -> optional.get())
+         .reduce((a, b) -> a + b)
+         .orElse(Integer.toString(n));
    }
  }

Note how the for and if statements disappear. Rather than spelling out how something needs to be done, we say what we want to achieve.

We can apply the same trick to get rid of the remaining if statement in our code base:

  public class MultipleReplacer {
    }
    public Optional<String> textFor(int n) {
–     if (n % value == 0) {
–       return Optional.of(text);
–     }
–     return Optional.empty();
+     return Optional.of(text)
+         .filter(ignored -> n % value == 0);
    }
  }

The code is on GitHub.

Software Engineering in 2016

I write computer programs for a living. Many people therefore refer to me as a software engineer, but that term has always made me uncomfortable.

Mary Shaw from the Software Engineering Institute explains where that unease comes from: our industry doesn’t meet the standards for engineers as set by real engineering fields like structural engineering.

Shaw defines engineering as “Creating cost-effective solutions to practical problems by applying codified knowledge, building things in the service of mankind.” (my italics) While we certainly do this some of the time in some places, I believe we are still quite a ways away from doing it all of the time everywhere.

Examples abound. Code is still being written without automated tests, without peer review, and without continuously integrating it with code of fellow team members. Basic stuff, really.

The State of DevOps 2015 report goes into some more advanced practices. It presents compelling evidence that these practices lead to better business results. High-performing companies in the study deploy code 30x more often and have a 60x higher success rate and 168x lower Mean Time To Recovery (MTTR) than low-performing companies.

Yes, you’ve read that right: lower risk of failure and quicker recovery when things do go wrong!

So why aren’t we all applying this codified knowledge?

DevOps practices, as well as more advanced Agile practices like test-driven development and pair programming, require more discipline and that seems to be lacking in our field. Uncle Bob explains this lack of discipline by pointing to the population demographics in our field.

There still seems to be a strong belief that “quick and dirty” really is quicker. But in the long term, it almost always ends up just being dirty, and many times actually slower.

Yes, going into a production database and manually updating rows to compensate for a bug in the code can be a quick fix, but it also increases the risk of unanticipated things going wrong. Do it often enough, and something will go wrong, requiring fixes for the fixes, etc.

Following a defined process may seem slower than a quick hack, but it also reduces risk. If you can make the defined process itself quicker, then at some point there is little to gain from deviating from the process.

The high-performing DevOps companies from the aforementioned study prove that this is not a pipe dream. With a good Continuous Delivery process you can make fixing urgent critical bugs a non-event. This is good, because people make poorer decisions under stress.

This finding shouldn’t be news to us. It’s the same lesson that we learned with Continuous Integration. Following a process and doing it more often leads to an optimization of the process and an overall increase in productivity.

Until the majority in our field have learned to apply this and other parts of our codified knowledge, we should probably stop calling ourselves engineers.

 

The Poetry of Microservices

This post is dedicated to she-who-wears-my-code-poet-shirt, my muse, my Valentine, my Angel.


My favorite forms of poetry are the sonnet and the haiku. I’ve found their constraints to be helpful in guiding creativity rather than restricting it.

Programming and writing poetry are very similar, which is proven beyond reasonable doubt by the fact that the first programmer was the child of a famous poet.

It therefore makes sense to consider that constraints on programming would actually help us write better programs as well. Uncle Bob Martin argues in The Last Programming Language that indeed it does.

The four major programming paradigms have all taken away some of our freedoms as programmers, and that has gotten us better results.

Modular programming limits the size of the parts that make up a program. Structured programming limits the flow of execution to a couple of well-established patterns. Object-oriented programming limits data exposure across units. Finally, functional programming limits side-effects.

Since the constraints on size, execution patterns, data exposure, and side-effects have served us so well when writing programs, we may wonder if we can apply them elsewhere in our field.

One example would be in how we deploy applications. Applying the four constraints results in applications that are made up of small components that communicate using a couple of well-established patterns, hide their data, and limit their side-effects. In other words, we would end up with microservices.

After watching The Last Programming Language, I wondered where aspect-oriented programming fits in. Wikipedia does indeed list it as a programming paradigm, but Uncle Bob probably left it out because it isn’t as widely used as the other four.

That doesn’t mean that aspects can’t be extremely useful in some specific situations. Applying the concept to application deployment gives us the API gateway pattern, for instance.

Have you found yourself in situations where following constraints actually improved the end solution? Please leave a comment below.


Bug in my software
Disappears when in testing
Curse you, Heisenberg
–Andrew from Ottawa, Canada


See also this TED talk on computers writing poetry.


First Steps Into the World of Go

Since developers should learn a new programming language every year, I felt it was about time for me to dive into something new and I decided on Go.

The good news is that Go has awesome documentation to get you started. More good news is that Go has a mature ecosystem of tools, including support for getting dependencies, formatting, and testing.

There is also an Eclipse plugin that supports Go. Although it isn’t as complete as support for Java (e.g. very few quick fixes or refactorings), it’s a lot better than for some other languages I’ve tried.

The best way to learn anything is to learn by doing, so I started with a simple kata: develop a set type. The standard library for Go doesn’t offer sets, but that’s beside the point; I just want to learn the language by building something familiar.

So let’s get started with the first test. The convention in Go is to create a file with the name foo_test.go if you want to test foo.go.

package set

import (
  "testing"
)

func TestEmpty(t *testing.T) {
  empty := NewSet()
  if !empty.IsEmpty() {
    t.Errorf("Set without elements should be empty")
  }
}

There are several things to note about this piece of code:

  • Go supports packages using the package statement
  • Statements are terminated by semi-colons (;), but you can omit them at the end of the line, much like in Groovy
  • You import a package using the import statement. The testing package is part of the standard library
  • Anything that starts with a lower case letter is private to the package, anything that starts with an upper case letter is public
  • Code in Go goes inside a function, as indicated by the func keyword
  • Variable names are written before the type
  • The := syntax is a shorthand for declaring and initializing a variable; Go will figure out the correct type
  • Go doesn’t have constructors, but uses factory functions to achieve the same
  • if statements don’t require parentheses around the condition, but do require braces
  • The testing package is quite small and lacks assertions. While there are packages that provide those, I’ve decided to stay close to the default here

So let’s make the test pass:

package set

type set struct {
}

func NewSet() *set {
  return new(set)
}

func (s *set) IsEmpty() bool {
  return true
}

The cool thing about the Eclipse plugin is that it automatically runs the tests whenever you save a file, much like InfiniTest for Java. This is really nice when you’re doing Test-Driven Development.

Now this isn’t much of a test, of course, since it only tests one side of the IsEmpty() coin. Which is what allows us to fake the implementation. So let’s fix the test:

func TestEmpty(t *testing.T) {
  empty := NewSet()
  one := NewSet()
  one.Add("A")
  if !empty.IsEmpty() {
    t.Errorf("Set without elements should be empty")
  }
  if one.IsEmpty() {
    t.Errorf("Set with one element should not be empty")
  }
}

Which we can easily make pass:

type set struct {
  empty bool
}

func NewSet() *set {
  s := new(set)
  s.empty = true
  return s
}

func (s *set) IsEmpty() bool {
  return s.empty
}

func (s *set) Add(item string) {
  s.empty = false
}

Note that I’ve used the string type as the argument to Add(). We’d obviously want something more generic, but there is no Object in Go as there is in Java. I’ll revisit this decision later.

The next test verifies the number of items in the set:

func TestSize(t *testing.T) {
  empty := NewSet()
  one := NewSet()
  one.Add("A")
  if empty.Size() != 0 {
    t.Errorf("Set without elements should have size 0")
  }
  if one.Size() != 1 {
    t.Errorf("Set with one element should have size 1")
  }
}

Which we make pass by generalizing empty to size:

type set struct {
  size int
}

func NewSet() *set {
  s := new(set)
  s.size = 0
  return s
}

func (s *set) IsEmpty() bool {
  return s.Size() == 0
}

func (s *set) Add(item string) {
  s.size++
}

func (s *set) Size() int {
  return s.size
}

Now that the tests pass, we need to clean them up a bit:

var empty *set
var one *set

func setUp() {
  empty = NewSet()
  one = NewSet()
  one.Add("A")
}

func TestEmpty(t *testing.T) {
  setUp()
  if !empty.IsEmpty() {
    t.Errorf("Set without elements should be empty")
  }
  if one.IsEmpty() {
    t.Errorf("Set with one element should not be empty")
  }
}

func TestSize(t *testing.T) {
  setUp()
  if empty.Size() != 0 {
    t.Errorf("Set without elements should have size 0")
  }
  if one.Size() != 1 {
    t.Errorf("Set with one element should have size 1")
  }
}

Note again the lack of test infrastructure support compared to, say, JUnit. We have to manually call the setUp() function.

With the code in better shape, let’s add the next test:

func TestContains(t *testing.T) {
  setUp()
  if empty.Contains("A") {
    t.Errorf("Empty set should not contain element")
  }
  if !one.Contains("A") {
    t.Errorf("Set should contain added element")
  }
}

To make this pass, we have to actually store the items in the set, which we do using arrays and slices:

type set struct {
  items []string
}

func NewSet() *set {
  s := new(set)
  s.items = make([]string, 0, 10)
  return s
}

func (s *set) Add(item string) {
  s.items = append(s.items, item)
}

func (s *set) Size() int {
  return len(s.items)
}

func (s *set) Contains(item string) bool {
  for _, value := range s.items {
    if value == item {
      return true
    }
  }
  return false
}

A slice is a convenient array-like data structure that is backed by a real array. Arrays can’t change size, but they can be bigger than the slices that they back. This keeps appending items to a slice efficient.

The for loop is the only looping construct in Go, but it’s quite a bit more powerful than the for of most other languages. It gives both the index and the value, the first of which we ignore using the underscore (_). It loops over all the items in the slice using the range keyword.

So now we have a collection of sorts, but not quite yet a set:

func TestIgnoresDuplicates(t *testing.T) {
  setUp()
  one.Add("A")
  if one.Size() != 1 {
    t.Errorf("Set should ignore adding an existing element")
  }
}

 

func (s *set) Add(item string) {
  if !s.Contains(item) {
    s.items = append(s.items, item)
  }
}

All we have left to make this a fully functional set is to allow removal of items:

func TestRemove(t *testing.T) {
  setUp()
  one.Remove("A")

  if one.Contains("A") {
    t.Errorf("Set still contains element after removing it")
  }
}

 

func (s *set) Remove(item string) {
  for index, value := range s.items {
    if value == item {
      s.items[index] = s.items[s.Size() - 1]
      s.items = s.items[0:s.Size() - 1]
    }
  }
}

Here we see the full form of the for loop, with both the index and the value. This loop is very similar to the one in Contains(), so we can extract a method to get rid of the duplication:

func (s *set) Contains(item string) bool {
  return s.indexOf(item) >= 0
}

func (s *set) indexOf(item string) int {
  for index, value := range s.items {
    if value == item {
      return index
    }
  }
  return -1
}

func (s *set) Remove(item string) {
  index := s.indexOf(item)
  if index >= 0 {
    s.items[index] = s.items[s.Size()-1]
    s.items = s.items[0 : s.Size()-1]
  }
}

Note the lower case starting letter on indexOf() that makes it a private method. Since our set is unordered, it wouldn’t make sense to expose this functionality.

Finally, we need to generalize the set so that it can contain any type of items:

func TestNonStrings(t *testing.T) {
  set := NewSet()

  set.Add(1)
  if !set.Contains(1) {
    t.Errorf("Set does not contain added integer")
  }

  set.Remove(1)
  if set.Contains(1) {
    t.Errorf("Set still contains removed integer")
  }
}

Some digging reveals that we can mimic Java’s Object in Go with an empty interface:

type set struct {
  items []interface{}
}

func NewSet() *set {
  s := new(set)
  s.items = make([]interface{}, 0, 10)
  return s
}

func (s *set) IsEmpty() bool {
  return s.Size() == 0
}

func (s *set) Add(item interface{}) {
  if !s.Contains(item) {
    s.items = append(s.items, item)
  }
}

func (s *set) Size() int {
  return len(s.items)
}

func (s *set) Contains(item interface{}) bool {
  return s.indexOf(item) >= 0
}

func (s *set) indexOf(item interface{}) int {
  for index, value := range s.items {
    if value == item {
      return index
    }
  }
  return -1
}

func (s *set) Remove(item interface{}) {
  index := s.indexOf(item)
  if index >= 0 {
    s.items[index] = s.items[s.Size()-1]
    s.items = s.items[0 : s.Size()-1]
  }
}

All in all I found working in Go quite pleasurable. The language is simple yet powerful. The go fmt tool kills discussions about code layout, as does the compiler’s insistence on braces with if. Where Go really shines is in concurrent programming, but that’s something for another time.

What do you think? Do you like this opinionated little language? Do you use it at all? Please leave a word in the comments.

How To Develop Software Using Only SaaS

The world is fast moving to Software-as-a-Service (SaaS) and we developers are busy learning how to build SaaS applications.

We can now finally do that using nothing but SaaS applications ourselves.

The Developer’s Toolbox

As developers, we don’t ask for much.

An Integrated Development Environment (IDE) lets us do our main task: writing code. A Source Code Management (SCM) system stores our Heartbreaking Work of Staggering Genius. A Continuous Integration (CI) server pulls our code through hoops that prove it is ready for use. And finally a Platform-as-a-Service (PaaS) or other deployment environment runs our applications.

We are used to running all of these on premises. IDEs like Eclipse or IntelliJ run on our local machines. SCMs like Git or Subversion run on some company server, as does our Jenkins/Hudson or TeamCity CI server. Finally, we deploy to a PaaS like CloudFoundry, or to a custom server.

Most of those tools already run in the cloud. For those that don’t, we can easily find good alternatives. Let’s take a look at some of the candidates.

Integrated Development Environments

I’ve written about Cloud9 before. It’s mainly focused on web languages like JavaScript. For Java, Codenvy seems a better choice. For both, you can run the hosted offering, or deploy it in your own data center.

Neither can match a local IDE experience yet, but the gap is closing. On the other hand, they offer some functionality you won’t easily find in locally installed IDEs, like remote pair programming.

Source Code Management

Git has taken over the world, and the SaaS version of it, GitHub, is following suit.

Some people even think that your GitHub profile is your resume.

Again, you can use the hosted version (with public or private repositories), or install GitHub in your data center.

Both Cloud9 and Codenvy work seamlessly with GitHub repositories.

Continuous Integration

Jenkins/Hudson is the leader in this space, and CloudBees offers a SaaS version. Other products include Bamboo, Travis CI and CodeShip. Some of these are free for open source projects. Again, there are hosted and on premises versions.

The CI tools support GitHub through public SSH keys for access and commit hooks for starting jobs.

Platform-as-a-Service

After GitHub, these are probably the most familiar to you: Pivotal CloudFoundry, Heroku, Google App Engine, and Azure. CloudFoundry is backed by many big organizations (including the company I work for, EMC) and seems to be emerging as the leader.

Some cloud IDEs let you push to a PaaS directly, but I don’t think that’s the right way to do it.

You should commit to your SCM and let CI pick up your changes.

Your CI jobs should be responsible for pushing to the PaaS. Your CI may have a custom integration to your PaaS, or you may have to use something like the CloudFoundry command-line interface to push your changes.

Conclusion

It seems that our entire tool chain is now available as a service, although the IDEs still leave us wanting a bit. Most of these tools are available as open source and can be deployed in your own data center.

Looks like we’re making some progress towards a Frictionless Development Environment!

What SaaS applications are you using for software development? Please leave a comment below.

How To Process Java Annotations

One of the cool new features of Java 8 is the support for lambda expressions. Lambda expressions lean heavily on functional interfaces, which may be marked with the FunctionalInterface annotation.

In this post, we’ll look at annotations and how to process them so you can implement your own cool features.

Annotations

Annotations were added in Java 5. The Java language comes with some predefined annotations, but you can also define custom annotations.

Many frameworks and libraries make good use of custom annotations. JAX-RS, for instance, uses them to turn POJOs into REST resources.

Annotations can be processed at compile time or at runtime (or even both).

At runtime, you can use the reflection API. Each element of the Java language that can be annotated, like class or method, implements the AnnotatedElement interface. Note that an annotation is only available at runtime if it has the RUNTIME RetentionPolicy.
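
For instance, a minimal runtime sketch (the Audited annotation and OrderService class are made up for illustration):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Audited {
  String value() default "";
}

@Audited("orders")
class OrderService { }

class AnnotationDemo {
  public static void main(String[] args) {
    // Class implements AnnotatedElement, so we can query it for annotations
    Audited audited = OrderService.class.getAnnotation(Audited.class);
    System.out.println(audited.value());  // prints: orders
  }
}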

Compile-Time Annotation Processing

Java 5 came with the separate apt tool to process annotations, but since Java 6 this functionality is integrated into the compiler.

You can either call the compiler directly, e.g. from the command line, or indirectly, from your program.

In the former case, you specify the -processor option to javac, or you use the ServiceLoader framework by adding the file META-INF/services/javax.annotation.processing.Processor to your jar. The contents of this file should be a single line containing the fully qualified name of your processor class.

The ServiceLoader approach is especially convenient in an automated build, since all you have to do is put the annotation processor on the classpath during compilation, which build tools like Maven or Gradle will do for you.
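
Concretely, the registration file is a one-liner (the processor class name is hypothetical):

# contents of META-INF/services/javax.annotation.processing.Processor
com.example.MyProcessor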

Compile-Time Annotation Processing From Within Your Application

You can also use the compile-time tools to process annotations from within your running application.

Rather than calling javac directly, use the more convenient JavaCompiler interface. Either way, you’ll need to run your application with a JDK rather than just a JRE.

The JavaCompiler interface gives you programmatic access to the Java compiler. You can obtain an implementation of this interface using ToolProvider.getSystemJavaCompiler(). This method is sensitive to the JAVA_HOME environment variable.

The getTask() method of JavaCompiler allows you to add your annotation processor instances. This is the only way to control the construction of annotation processors; all other methods of invoking annotation processors require the processor to have a public no-arg constructor.
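
A sketch of that approach; the MyProcessor class, its SharedDependency constructor argument, and the Foo.java source file are hypothetical:

import java.io.File;
import java.util.Arrays;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class CompileWithProcessor {
  public static void main(String[] args) {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();  // null on a plain JRE
    StandardJavaFileManager fileManager
        = compiler.getStandardFileManager(null, null, null);
    JavaCompiler.CompilationTask task = compiler.getTask(null, fileManager,
        null, null, null, fileManager.getJavaFileObjects(new File("Foo.java")));
    // the only way to pass a constructed processor instance to the compiler
    task.setProcessors(Arrays.asList(new MyProcessor(new SharedDependency())));
    task.call();
  }
}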

Annotation Processors

A processor must implement the Processor interface. Usually you will want to extend the AbstractProcessor base class rather than implement the interface from scratch.

Each annotation processor must indicate the types of annotations it is interested in through the getSupportedAnnotationTypes() method. You may return * to process all annotations.

The other important thing is to indicate which Java language version you support. Override the getSupportedSourceVersion() method and return one of the RELEASE_x constants.

With these methods implemented, your annotation processor is ready to get to work. The meat of the processor is in the process() method.

When process() returns true, the annotations processed are claimed by this processor, and will not be offered to other processors. Normally, you should play nice with other processors and return false.
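
Putting these pieces together, a minimal processor could look like this sketch (the com.example.Audited annotation name is hypothetical):

import java.util.Collections;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

public class MyProcessor extends AbstractProcessor {

  @Override
  public Set<String> getSupportedAnnotationTypes() {
    return Collections.singleton("com.example.Audited");
  }

  @Override
  public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.RELEASE_8;
  }

  @Override
  public boolean process(Set<? extends TypeElement> annotations,
      RoundEnvironment roundEnv) {
    for (TypeElement annotation : annotations) {
      for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
        // inspect the element or generate code here
      }
    }
    return false;  // play nice with other processors
  }

}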

Elements and TypeMirrors

The annotations and the Java elements they are present on are provided to your process() method as Element objects. You may want to process them using the Visitor pattern.

The most interesting types of elements are TypeElement for classes and interfaces (including annotations), ExecutableElement for methods, and VariableElement for fields.

Each Element points to a TypeMirror, which represents a type in the Java programming language. You can use the TypeMirror to walk the class relationships of the annotated code you’re processing, much like you would using reflection on the code running in the JVM.

Processing Rounds

Annotation processing happens in separate stages, called rounds. During each round, a processor gets a chance to process the annotations it is interested in.

The annotations to process and the elements they are present on are available via the RoundEnvironment parameter passed into the process() method.

If annotation processors generate new source or class files during a round, then the compiler will make those available for processing in the next round. This continues until no more new files are generated.

The last round contains no input, and is thus a good opportunity to release any resources the processor may have acquired.
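
A sketch of a process() method in a processor like the one above that takes rounds into account (releaseResources() is a hypothetical helper):

@Override
public boolean process(Set<? extends TypeElement> annotations,
    RoundEnvironment roundEnv) {
  if (roundEnv.processingOver()) {
    releaseResources();  // last round: no input, so clean up
    return false;
  }
  for (TypeElement annotation : annotations) {
    for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
      // process the annotated element
    }
  }
  return false;
}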

Initializing and Configuring Processors

Annotation processors are initialized with a ProcessingEnvironment. This processing environment allows you to create new source or class files.

It also provides access to configuration in the form of options. Options are key-value pairs that you can supply on the command line to javac using the -A option. For this to work, you must return the options’ keys in the processor’s getSupportedOptions() method.

Finally, the processing environment provides some support routines (e.g. to get the JavaDoc for an element, or to get the direct super types of a type) that come in handy during processing.

Classpath Issues

To get the most accurate information during annotation processing, you must make sure that all imported classes are on the classpath, because classes that refer to types that are not available may have incomplete or altogether missing information.

When processing large numbers of annotated classes, this may lead to a problem on Windows systems where the command line becomes too large (> 8K). Even when you use the JavaCompiler interface, it still calls javac behind the scenes.

The Java compiler has a nice solution to this problem: you can use argument files that contain the arguments to javac. The name of the argument file is then supplied on the command line, preceded by @.
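
For example, with the arguments in a file called options.txt (the file name and contents are hypothetical):

-classpath lib/first.jar:lib/second.jar
src/com/example/Foo.java

the compiler is then invoked as:

javac @options.txt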

Unfortunately, the JavaCompiler.getTask() method doesn’t support argument files, so you’ll have to use the underlying run() method.

Remember that the getTask() approach is the only one that allows you to construct your annotation processors. If you must use argument files, then you have to use a public no-arg constructor.

If you’re in that situation, and you have multiple annotation processors that need to share a single instance of a class, you can’t pass that instance into the constructor, so you’ll be forced to use something like the Singleton pattern.

Conclusion

Annotations are an exciting technology that have lots of interesting applications. For example, I used them to extract the resources from a REST API into a resource model for further processing, like generating documentation.

I’m very interested to learn what you have used them for. Please leave a comment below.

REST Messages And Data Transfer Objects

In Patterns of Enterprise Application Architecture, Martin Fowler defines a Data Transfer Object (DTO) as

An object that carries data between processes in order to reduce the number of method calls.

Note that a Data Transfer Object is not the same as a Data Access Object (DAO), although they have some similarities. A Data Access Object is used to hide details from the underlying persistence layer.

REST Messages Are Serialized DTOs

In a RESTful architecture, the messages sent across the wire are serializations of DTOs.

This means all the best practices around DTOs are important to follow when building RESTful systems.

For instance, Fowler writes

…encapsulate the serialization mechanism for transferring data over the wire. By encapsulating the serialization like this, the DTOs keep this logic out of the rest of the code and also provide a clear point to change serialization should you wish.

In other words, you should follow the DRY principle and have exactly one place where you convert your internal DTO to a message that is sent over the wire.

In JAX-RS, that one place should be in an entity provider. In Spring, the mechanism to use is the message converter. Note that both frameworks have support for several often-used serialization formats.
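
To make this concrete, here is a minimal sketch of such a JAX-RS entity provider for a hypothetical PersonDto; the hand-rolled toJson() stands in for a call to a real serialization library:

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class PersonDtoWriter implements MessageBodyWriter<PersonDto> {

  @Override
  public boolean isWriteable(Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
    return PersonDto.class.isAssignableFrom(type);
  }

  @Override
  public long getSize(PersonDto dto, Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
    return -1;  // deprecated since JAX-RS 2.0; the value is ignored
  }

  @Override
  public void writeTo(PersonDto dto, Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType,
      MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream)
      throws IOException {
    // the single place where this DTO is serialized onto the wire
    entityStream.write(toJson(dto).getBytes("UTF-8"));
  }

  private String toJson(PersonDto dto) {
    // stand-in for a real JSON library call
    return "{\"name\":\"" + dto.name + "\"}";
  }

}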

Following this advice not only makes it easier to change media types (e.g. from plain JSON or HAL to a more mature media type like Siren, Mason, or UBER). It also makes it easy to support multiple media types.

This in turn enables you to switch media types without breaking clients.

You can continue to serve old clients with the old media type, while new clients can take advantage of the new media type.

Introducing new media types is one way to evolve your REST API when you must make backwards incompatible changes.

DTOs Are Not Domain Objects

Domain objects implement the ubiquitous language used by subject matter experts and thus are discovered. DTOs, on the other hand, are designed to meet certain non-functional characteristics, like performance, and are subject to trade-offs.

This means the two have very different reasons to change and, following the Single Responsibility Principle, should be separate objects. Blindly serializing domain objects should thus be considered an anti-pattern.

That doesn’t mean you must blindly add DTOs, either. It’s perfectly fine to start with exposing domain objects, e.g. using Spring Data REST, and introducing DTOs as needed. As always, premature optimization is the root of all evil, and you should decide based on measurements.

The point is to keep the difference in mind. Don’t change your domain objects to get better performance, but rather introduce DTOs.

DTOs Have No Behavior

A DTO should not have any behavior; its purpose in life is to transfer data between remote systems.

This is very different from domain objects.

There are two basic approaches for dealing with the data in a DTO.

The first is to make them immutable objects, where all the input is provided in the constructor and the data can only be read.

This doesn’t work well for large objects, and doesn’t play nice with serialization frameworks.

The better approach is to make all the properties writable. Since a DTO must not have logic, this is one of the few occasions where you can safely make the fields public and omit the getters and setters.

Of course, that means some other part of the code is responsible for filling the DTO with combinations of properties that together make sense.

Conversely, you should validate DTOs that come back in from the client.
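
A sketch of such a DTO with writable properties (the class and field names are illustrative):

public class PersonDto {

  // no behavior, so public fields are safe here
  public String name;
  public String email;

}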

DevOps Is The New Agile

In The Structure of Scientific Revolutions, Thomas Kuhn argues that science is not a steady accumulation of facts and theories, but rather a sequence of stable periods, interrupted by revolutions.

During such revolutions, the dominant paradigm breaks down under the accumulated weight of anomalies it can’t explain until a new paradigm emerges that can.

We’ve seen similar paradigm shifts in the field of information technology. For hardware, we’re now at the Third Platform.

For software, we’ve had several generations of programming languages and we’ve seen different programming paradigms, with reactive programming gaining popularity lately.

The Rise of Agile

We’ve seen a revolution in software development methodology as well, where the old Waterfall paradigm was replaced by Agile. The anomalies in this case were summarized as the software crisis, as documented by the Chaos Report.

The Agile Manifesto was clearly a revolutionary pamphlet:

We are uncovering better ways of developing software by doing it and helping others do it.

It was written in 2001 and originally signed by 17 people. It’s interesting to see what software development methods they were involved with:

  1. Adaptive Software Development: Jim Highsmith
  2. Crystal: Alistair Cockburn
  3. Dynamic Systems Development Method: Arie van Bennekum
  4. eXtreme Programming: Kent Beck, Ward Cunningham, Martin Fowler, James Grenning, Ron Jeffries, Robert Martin
  5. Feature-Driven Development: Jon Kern
  6. Object-Oriented Analysis: Stephen Mellor
  7. Scrum: Mike Beedle, Ken Schwaber, Jeff Sutherland
  8. Andrew Hunt, Brian Marick, and Dave Thomas were not associated with a specific method

Only two of the seven methods were represented by more than one person: eXtreme Programming (XP) and Scrum. Coincidentally, these are the only ones we still hear about today.

Agile Becomes Synonymous with Scrum

Scrum is the clear winner in terms of market share, to the point where many people don’t know the difference between Agile and Scrum.

I think there are at least two reasons for that: naming and ease of adoption.

Decision makers in environments where nobody ever gets fired for buying IBM are usually not looking for something that is “extreme”. And “programming” is for, well, other people. On the other hand, Scrum is a term borrowed from sports, and we all know how executives love using sport metaphors.

[BTW, the term “extreme” in XP comes from the idea of turning the dials of some useful practices up to 10. For example, if code reviews are good, let’s do it all the time (pair programming). But Continuous Integration is not nearly as extreme as Continuous Delivery and iterations (time-boxed pushes) are mild compared to pull systems like Kanban. XP isn’t so extreme after all.]

Scrum is easy to get started with: you can certifiably master it in two days. Part of this is that Scrum has fewer mandated practices than XP.

That’s also a danger: Scrum doesn’t prescribe any technical practices, even though technical practices are important. The technical practices support the management practices and are the foundation for a culture of technical excellence.

The software craftsmanship movement can be seen as a reaction to the lack of attention for the technical side. For me, paying attention to obviously important technical practices is simply being a good software professional.

The (Water)Fall Of Scrum

The jury is still out on whether management-only Scrum is going to win completely, or whether the software craftsmanship movement can bring technical excellence back into the picture. This may be more important than it seems at first.

Since Scrum focuses only on management issues, developers may largely keep doing what they were doing in their Waterfall days. This ScrumFall seems to have become the norm in enterprises.

No wonder that many Scrum projects don’t produce the expected benefits. The late majority and laggards may take that opportunity to completely revert back to the old ways and the Agile Revolution may fail.

In fact, several people have already proclaimed that Agile is dead and are talking about a post-Agile world.

Some think that software craftsmanship should be the new paradigm, but I’m not buying that.

Software craftsmanship is all about the code and too many people simply don’t care enough about code. Beautiful code that never gets deployed, for example, is worthless.

Beyond Agile with DevOps

Speaking of deploying software, the DevOps movement may be a more likely candidate to take over the baton from Agile. It’s based on Lean principles, just like Agile was. Actually, DevOps is a continuation of Agile outside the development team. I’ve even seen the term agile DevOps.

So what makes me think DevOps won’t share the same fate as Agile?

First, DevOps looks at the whole software delivery value stream, whereas Agile confined itself to software development. This means DevOps can’t remain in the developer’s corner; for DevOps to work, it has to have support from way higher up the corporate food chain. And executive support is a prerequisite for real, lasting change.

Second, the DevOps movement from the beginning has placed a great deal of emphasis on culture, which is where I think Agile failed most. We’ll have to see whether DevOps can really do better, but at least the topic is on the agenda.

Third, DevOps puts a lot of emphasis on metrics, which makes it easier to track its success and helps to break down silos.

Fourth, the Third Platform virtually requires DevOps, because Systems of Engagement call for much more rapid software delivery than Systems of Record.

Fifth, with the number of security breaches spiraling out of control, the ability to quickly deploy fixes becomes the number one security measure. The combination of DevOps and Security is referred to as Rugged DevOps or DevOpsSec.

What do you think? Will DevOps succeed where Agile failed? Please leave a comment.