Likely Candidates for Frictionless Development Environments

Last time I reviewed the book on Consumption Economics, which explains how technology companies and their products will have to change to survive the brave new world that we’re entering.

So what would we find if we took the lessons from the book and applied them to our own software development environment? I think the answer would be surprisingly close to what I’ve previously called a Frictionless Development Environment (FDE).

To be honest, I’ve only started thinking more systematically about FDEs after reading Consumption Economics. In Five Essential Components of a Frictionless Development Environment, I’ve laid out the major building blocks of an FDE: cloud computing, big data analytics, recommendation engines, plug-in architecture, and open source.

It may be too soon to expect existing solutions to have all of those, but let’s see where we stand. There are already some cloud development environments. Most of these are geared towards web developers and support only a limited set of languages (mostly JavaScript). Some, however, support enough languages to be interesting to a wider range of developers.

Big data analytics and recommendation engines are major features that are probably not there yet, but they could always be added later. What’s more important is to look for a plug-in architecture and, in particular, for open source. These are fundamental architectural and business decisions.

Using open source as a criterion reduces our list to Cloud9 and Orion. Both have a plug-in architecture. Orion is an Eclipse project, but Cloud9 seems more mature. Be sure to follow both.

So what do you think? Would any of these cloud IDEs work for you? What other open source cloud IDEs are out there?


Five Essential Components of a Frictionless Development Environment

One of the challenges of maintaining a consistent programming style in a team is getting everyone to use the same workspace settings, especially in the area of compiler warnings.

Every time a new member joins the team, an existing member sets up a new environment, or a new version of the compiler comes along, you have to synchronize settings.

My team recently started using Workspace Mechanic, an Eclipse plug-in that allows you to save those settings in an XML file that you put under source control.

The plug-in periodically compares the workspace settings with the contents of that file. It notifies you in case of differences, and allows you to update your environment with a couple of clicks.

Towards a Frictionless Development Environment

Workspace Mechanic is a good example of a lubricant, a tool that lubricates the development process to reduce friction.

My ideal is to take this to the extreme with a Frictionless Development Environment (FDE), in which all software development activities go very smoothly.

Let’s see what we would likely need to make such an FDE a reality.

In this post, I will look at a very small example that uncovers some of the basic components of an FDE.

Example: Creating the Class Under Test

In Test-Driven Development, we start out with a test and there is no class under test yet. Eclipse has a Quick Fix to create the class, but we still have to manually invoke it and select a source folder to store it in (assuming you have different source folders for main and test code).
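
For instance (a purely illustrative example, using JUnit 4), the following test refers to a Calculator class that does not exist yet:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

  // Calculator does not exist yet; in TDD we write this test first and
  // then let the IDE create the class skeleton in the main source folder.
  @Test
  public void addsTwoNumbers() {
    assertEquals(5, new Calculator().add(2, 3));
  }

}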

It would be nicer if the IDE would understand what you’re trying to do and automatically create the skeleton for the class under test for you and save it in the right place.

The crux is for the tool to understand what you are doing, or else it could easily draw the wrong conclusion and create all kinds of artifacts that you don’t want.

This kind of knowledge is highly user and potentially even project specific. It is therefore imperative that the tool collects usage data and uses that to optimize its assistance. We’re likely talking about big data here.

Given the fact that it’s expensive in terms of storage and computing power to collect and analyze these statistics, it makes sense to do this in a cloud environment.

That will also allow for quicker learning of usage patterns when working on different machines, like in the office and at home. More importantly, it allows building on usage patterns of other people.

What this example also shows is that we’ll need many small, very focused lubricants. This makes it unlikely that one organization could provide all the lubricants for an FDE that suits everybody, even for a specific language.

The only practical way of assembling an FDE is through a plug-in architecture for lubricants.

Building an FDE will be a huge effort. To realize it in the short term, we’ll probably need an open source model. No single company could put in the resources required to pull this off in even a couple of years.

The Essential Components of a Frictionless Development Environment

This small example uncovered the following building blocks for a Frictionless Development Environment:

  1. Cloud Computing will provide economies of scale and access from anywhere
  2. Big Data Analytics will discern usage patterns
  3. Recommendation Engines will convert usage patterns into context-aware lubricants
  4. A Plug-in architecture will allow different parties to contribute lubricants and usage analysis tools
  5. An Open Source model will allow many organizations and individuals to collaborate

What do you think?

Do you agree with the proposed components of an FDE? Did I miss something?

Please share your thoughts in the comments.

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted, systems and executed on your local system. Mobile code (code-on-demand) is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof can be tied to the code; the combination is then called proof carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes this verification would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof carrying code where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, and other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.
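
As a rough sketch of the verification side (using only the standard java.util.jar API; checking the signers against your own trust store is a further step that is not shown here), reading every entry of the jar with verification enabled surfaces tampered or unsigned content:

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public final class JarVerifier {

  // Throws SecurityException if an entry's signature is invalid, or if an
  // entry outside META-INF carries no code signers at all (i.e. is unsigned).
  public static void verifySigned(File jar) throws IOException {
    try (JarFile jarFile = new JarFile(jar, true)) { // true = verify signatures
      byte[] buffer = new byte[8192];
      Enumeration<JarEntry> entries = jarFile.entries();
      while (entries.hasMoreElements()) {
        JarEntry entry = entries.nextElement();
        try (InputStream in = jarFile.getInputStream(entry)) {
          while (in.read(buffer) != -1) {
            // Reading the full entry triggers the signature check.
          }
        }
        if (!entry.isDirectory() && !entry.getName().startsWith("META-INF/")
            && entry.getCodeSigners() == null) {
          throw new SecurityException("Unsigned entry: " + entry.getName());
        }
      }
    }
  }

}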

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.
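
Such a filter can be as simple as the following sketch (the media type check assumes the code arrives over a protocol like HTTP that reports one):

public final class MobileCodeFilter {

  // A minimal sketch: accept only jar files, by media type and file extension.
  // application/java-archive is the registered media type for jar files.
  public static boolean isAcceptable(String contentType, String fileName) {
    return "application/java-archive".equals(contentType)
        && fileName != null
        && fileName.endsWith(".jar");
  }

}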

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
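
As a first taste, here is a minimal sketch (not production code): a SecurityManager that lets most operations through but blocks a few things we never want mobile code to do. A real cage would instead grant permissions per code source through a policy, so that trusted code is unaffected.

import java.security.Permission;

public class MobileCodeSecurityManager extends SecurityManager {

  // Not calling super.checkPermission() means everything is allowed,
  // except what we explicitly deny below.
  @Override
  public void checkPermission(Permission permission) {
    String name = permission.getName();
    if (name != null && name.startsWith("exitVM")) {
      throw new SecurityException("Mobile code may not terminate the JVM");
    }
  }

  // Deny all outbound connections while this manager is installed.
  @Override
  public void checkConnect(String host, int port) {
    throw new SecurityException(
        "Mobile code may not connect to " + host + ":" + port);
  }

}

Installing it with System.setSecurityManager(new MobileCodeSecurityManager()) cages all code, trusted or not, which is why a policy-based approach per code source is usually preferable.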

Behavior-Driven Development (BDD) with JBehave, Gradle, and Jenkins

Behavior-Driven Development (BDD) is a collaborative process where the Product Owner, developers, and testers cooperate to deliver software that brings value to the business.

BDD is the logical next step up from Test-Driven Development (TDD).

Behavior-Driven Development

In essence, BDD is a way to deliver requirements. But not just any requirements, executable ones! With BDD, you write scenarios in a format that can be run against the software to ascertain whether the software behaves as desired.

Scenarios

Scenarios are written in Given, When, Then format, also known as Gherkin:

Given the ATM has $250
And my balance is $200
When I withdraw $150
Then the ATM has $100
And my balance is $50

Given indicates the initial context, When indicates the occurrence of an interesting event, and Then asserts an expected outcome. And may be used in place of a repeated keyword, to make the scenario more readable.

Given/When/Then is a very powerful idiom that allows virtually any requirement to be described. Scenarios in this format are also easily parsed, so that we can automatically run them.

BDD scenarios are great for developers, since they provide quick and unequivocal feedback about whether the story is done. Not only the main success scenario, but also alternate and exception scenarios can be provided, as can abuse cases. The latter requires that the Product Owner not only collaborates with testers and developers, but also with security specialists. The payoff is that it becomes easier to manage security requirements.

Even though BDD is really about the collaborative process and not about tools, I’m going to focus on tools for the remainder of this post. Please keep in mind that tools can never save you, while communication and collaboration can. With that caveat out of the way, let’s get started on implementing BDD with some open source tools.

JBehave

JBehave is a BDD tool for Java. It parses the scenarios from story files, maps them to Java code, runs them via JUnit tests, and generates reports.
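
A story file is just a plain-text collection of scenarios. The ATM example from above could live in a file like wanda_withdraws_cash.story (a made-up name; our naming convention is explained below):

Scenario: Wanda withdraws cash from an ATM

Given the ATM has $250
And my balance is $200
When I withdraw $150
Then the ATM has $100
And my balance is $50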

Eclipse

JBehave has a plug-in for Eclipse that makes writing stories easier with features such as syntax highlighting/checking, step completion, and navigation to the step implementation.

JUnit

Here’s how we run our stories using JUnit:

@RunWith(AnnotatedEmbedderRunner.class)
@UsingEmbedder(embedder = Embedder.class, generateViewAfterStories = true,
    ignoreFailureInStories = true, ignoreFailureInView = false, 
    verboseFailures = true)
@UsingSteps(instances = { NgisRestSteps.class })
public class StoriesTest extends JUnitStories {

  @Override
  protected List<String> storyPaths() {
    // The bdd.stories system property holds either a wildcard filter
    // or explicit story file paths separated by the path separator.
    String storyPaths = System.getProperty("bdd.stories");
    if (storyPaths != null && storyPaths.contains(File.pathSeparator)) {
      return specifiedStoryPaths(storyPaths);
    }
    return new StoryFinder().findPaths(
        CodeLocations.codeLocationFromClass(getClass()).getFile(),
        Arrays.asList(getStoryFilter(storyPaths)), null);
  }

  private String getStoryFilter(String storyPaths) {
    if (storyPaths == null) {
      return "*.story";
    }
    if (storyPaths.endsWith(".story")) {
      return storyPaths;
    }
    return storyPaths + ".story";
  }

  private List<String> specifiedStoryPaths(String storyPaths) {
    List<String> result = new ArrayList<String>();
    URI cwd = new File("src/test/resources").toURI();
    for (String storyPath : storyPaths.split(File.pathSeparator)) {
      File storyFile = new File(storyPath);
      if (!storyFile.exists()) {
        throw new IllegalArgumentException("Story file not found: " 
          + storyPath);
      }
      result.add(cwd.relativize(storyFile.toURI()).toString());
    }
    return result;
  }

  @Override
  public Configuration configuration() {
    return super.configuration()
        .useStoryReporterBuilder(new StoryReporterBuilder()
            .withFormats(Format.XML, Format.STATS, Format.CONSOLE)
            .withRelativeDirectory("../build/jbehave")
        )
        .usePendingStepStrategy(new FailingUponPendingStep())
        .useFailureStrategy(new SilentlyAbsorbingFailure());
  }

}

This uses JUnit 4’s @RunWith annotation to indicate the class that will run the test. The AnnotatedEmbedderRunner is a JUnit Runner that JBehave provides. It looks for the @UsingEmbedder annotation to determine how to run the stories:

  • generateViewAfterStories instructs JBehave to create a test report after running the stories
  • ignoreFailureInStories prevents JBehave from throwing an exception when a story fails. This is essential for the integration with Jenkins, as we’ll see below

The @UsingSteps annotation links the steps in the scenarios to Java code. More on that below. You can list more than one class.

Our test class re-uses the JUnitStories class from JBehave that makes it easy to run multiple stories. We only have to implement two methods: storyPaths() and configuration().

The storyPaths() method tells JBehave where to find the stories to run. Our version is a little bit complicated because we want to be able to run tests from both our IDE and from the command line and because we want to be able to run either all stories or a specific sub-set.

We use the system property bdd.stories to indicate which stories to run. This includes support for wildcards. Our naming convention requires that the story file names start with the persona, so we can easily run all stories for a single persona using something like -Dbdd.stories=wanda_*.

The configuration() method tells JBehave how to run stories and report on them. We need output in XML for further processing in Jenkins, as we’ll see below.

One thing of interest is the location of the reports. JBehave supports Maven, which is fine, but it assumes that everybody follows Maven conventions, which is really not the case. The output goes into a directory called target by default, but we can override that by specifying a path relative to the target directory. We use Gradle instead of Maven, and Gradle’s temporary files go into the build directory, not target. More on Gradle below.

Steps

Now we can run our stories, but they will fail. We need to tell JBehave how to map the Given/When/Then steps in the scenarios to Java code. The Steps classes determine the vocabulary that can be used in the scenarios. As such, they define a Domain Specific Language (DSL) for acceptance testing our application.
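
To make that concrete, a Steps class for the ATM scenario shown earlier could look like the sketch below (AtmSteps is purely illustrative; our real Steps class, NgisRestSteps, drives the application’s REST interface instead):

import static org.junit.Assert.assertEquals;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class AtmSteps {

  private int atmBalance;
  private int accountBalance;

  @Given("the ATM has $amount")
  public void givenTheAtmHas(String amount) {
    atmBalance = dollars(amount);
  }

  @Given("my balance is $amount")
  public void givenMyBalanceIs(String amount) {
    accountBalance = dollars(amount);
  }

  @When("I withdraw $amount")
  public void whenIWithdraw(String amount) {
    atmBalance -= dollars(amount);
    accountBalance -= dollars(amount);
  }

  @Then("the ATM has $amount")
  public void thenTheAtmHas(String amount) {
    assertEquals(dollars(amount), atmBalance);
  }

  @Then("my balance is $amount")
  public void thenMyBalanceIs(String amount) {
    assertEquals(dollars(amount), accountBalance);
  }

  // The scenarios use literal amounts like "$150", so strip the dollar sign.
  private int dollars(String amount) {
    return Integer.parseInt(amount.replace("$", ""));
  }

}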

Our application has a RESTful interface, so we wrote a generic REST DSL. However, due to the HATEOAS constraint in REST, a client needs a lot of calls to discover the URIs that it should use. Writing scenarios gets pretty boring and repetitive that way, so we added an application-specific DSL on top of the REST DSL. This allows us to write scenarios in terms the Product Owner understands.

Layering the application-specific steps on top of generic REST steps has some advantages:

  • It’s easy to implement new application-specific steps, since they only need to call the REST-specific DSL
  • The REST-specific DSL can be shared with other projects

Gradle

With the Steps in place, we can run our stories from our favorite IDE. That works great for developers, but can’t be used for Continuous Integration (CI).

Our CI server runs a headless build, so we need to be able to run the BDD scenarios from the command line. We automate our build with Gradle, which can already run JUnit tests. However, our build is a multi-project build. We don’t want to run our BDD scenarios until all projects are built, a distribution is created, and the application is started.

So first off, we disable running tests on the project that contains the BDD stories:

test {
  onlyIf { false } // We need a running server
}

Next, we create another task that can be run after we start our application:

task acceptStories(type: Test) {
  ignoreFailures = true
  doFirst {
    // Need 'target' directory on *nix systems to get any output
    file('target').mkdirs()

    def filter = System.getProperty('bdd.stories') 
    if (filter == null) {
      filter = '*'
    }
    def stories = sourceSets.test.resources.matching { 
      it.include filter
    }.asPath
    systemProperty('bdd.stories', stories)
  }
}

Here we see the power of Gradle. We define a new task of type Test, so it can already run JUnit tests. Next, we configure that task using a little Groovy script.

First, we must make sure the target directory exists. We don’t need or even want it, but without it, JBehave doesn’t work properly on *nix systems. I guess that’s a little Maven-ism 😦

Next, we add support for running a sub-set of the stories, again using the bdd.stories system property. Our story files are located in src/test/resources, so that we can easily get access to them using the standard Gradle test source set. We then set the system property bdd.stories for the JVM that runs the tests.
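
For example (a hypothetical invocation; the quotes prevent the shell from expanding the wildcard), running only Wanda’s stories after the application has been started could look like this:

gradle acceptStories -Dbdd.stories='wanda_*'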

Jenkins

So now we can run our BDD scenarios from both our IDE and the command line. The next step is to integrate them into our CI build.

We could just archive the JBehave reports as artifacts, but, to be honest, the reports that JBehave generates aren’t all that great. Fortunately, the JBehave team also maintains a plug-in for the Jenkins CI server. This plug-in requires prior installation of the xUnit plug-in.

After installation of the xUnit and JBehave plug-ins into Jenkins, we can configure our Jenkins job to use the JBehave plug-in. First, add an xUnit post-build action. Then, select the JBehave test report.

With this configuration, the output from running JBehave on our BDD stories looks just like that for regular unit tests:

Note that the yellow part in the graph indicates pending steps. Those are used in the BDD scenarios, but have no counterpart in the Java Steps classes. Pending steps are shown in the Skip column in the test results:

Notice how the JBehave Jenkins plug-in translates stories to tests and scenarios to test methods. This makes it easy to spot which scenarios require more work.

Although the JBehave plug-in works quite well, there are two things that could be improved:

  • The output from the tests is not shown. This makes it hard to figure out why a scenario failed. We therefore also archive the JUnit test report
  • If you configure ignoreFailureInStories to be false, JBehave throws an exception on a failure, which truncates the XML output. The JBehave Jenkins plug-in can then no longer parse the XML (since it’s not well formed), and fails entirely, leaving you without test results

All in all, these are minor inconveniences, and we’re very happy with our automated BDD scenarios.

Running a JavaFX script from an OSGi bundle

Two technologies that have been on my radar for a while are JavaFX, Sun’s entry in the RIA race, and OSGi, the Dynamic Module System for Java. In this tutorial, I will combine the two by running a JavaFX script from an OSGi bundle. You will need Eclipse and Java 6. [That’s right: a JavaFX tutorial that doesn’t want you to use NetBeans!]

Although JavaFX is still in the beta stage (despite being announced at JavaOne last year), and is very much in flux, it makes for an interesting UI technology. I wouldn’t want to write production ready applications in it yet, but maybe that will change in the not-too-distant future.

OSGi, on the other hand, is very much a mature technology. It is widely used, for instance for the plug-ins that make up Eclipse. In fact, Eclipse Equinox is now the reference implementation for OSGi. BTW, the OSGi term for plug-in is bundle, and I will use the two interchangeably.

Wrapping the JavaFX libraries in an OSGI bundle

Everything in OSGi happens inside a bundle, so the first step to running a JavaFX script from an OSGi bundle, is to create a bundle that holds the JavaFX jars. You will need to download the latest JavaFX distribution and unzip it somewhere.

Then, in Eclipse, select File|New|Project..., so that the New Project wizard appears. In the first step, select from the Plug-in Development category the Plug-in from existing JAR archives, and click Next. In the second step, click the Add External button and add all the jars from the JavaFX distribution’s lib directory. In the third step, enter JavaFX for the Project name, and click an OSGi framework under Target platform. Select standard for the OSGi framework, and click Finish. After a while, the plug-in project appears.


Next, open META-INF/MANIFEST.MF, select the MANIFEST.MF tab, and paste the following line into it:

Bundle-RequiredExecutionEnvironment: JavaSE-1.6

Now right-click on JRE System Library in the Package Explorer, and select Properties. For System library, select Execution environment, and from the combo box next to it, select JavaSE-1.6. Click OK.


Creating an OSGi bundle that will run a JavaFX script

You will now use the bundle you just created from a new bundle, the one that runs the JavaFX script. Again, select File|New|Project..., but this time, select Plug-in Project from the Plug-in Development category. In the next step, enter Run JavaFX for the Project Name, select the standard OSGi framework as the Target platform (same as above), and click Next. In the next step, just click Finish to create the project.


In the Manifest Editor, select the Dependencies tab. Click the Add button under Required Plug-ins, select JavaFX (1.0.0) from the list, and click the Save button in the toolbar.


Creating the JavaFX script

In the Run JavaFX project, create a new directory named script, and create a new file named HelloWorld.fx in it. Paste the following code into it:

import javafx.application.Frame;
import javafx.application.Stage;
import javafx.scene.paint.Color;
import javafx.scene.text.*;

Frame {
  title: "Hello World!"
  width: 550
  height: 200
  visible: true
  stage: Stage {
    content: Text {
      font: Font {
        name: "Sans Serif"
        style: FontStyle.BOLD
        size: 24
      }
      x: 20
      y: 40
      stroke: Color.BLUE
      fill: Color.BLUE
      content: "Hello from JavaFX Compiled Script"
    }
  }
}


Next, right-click the script directory, and select Build path|Use as Source Folder.

Running the script

To actually run the JavaFX script, we will use JSR-223: Scripting for the Java Platform. [That’s why I made you select JavaSE-1.6 as the bundle execution environment.]

In the Run JavaFX project, open the src directory, the run_javafx package, and then the Activator class. [A bundle’s activator allows you to do stuff when your bundle starts or stops.] In the start() method, add the following code:

final ScriptEngine engine = new ScriptEngineManager()
    .getEngineByExtension("fx");
final String scriptName = "HelloWorld.fx";
final InputStream stream = Thread.currentThread()
    .getContextClassLoader()
    .getResourceAsStream(scriptName);
engine.eval(new InputStreamReader(stream));

Use Eclipse’s Quick Fix to add the required import statements for ScriptEngine, ScriptEngineManager, InputStream, and InputStreamReader.

To run the bundle from Eclipse, select the MANIFEST.MF file, select the Overview tab, and click the Run button at the top.

In the Console, you will see the osgi> prompt, followed by a lot of error output. If you scroll through the output, you will see

org.osgi.framework.BundleException: Exception in
    run_javafx.Activator.start() of bundle Run_JavaFX.
[...]
Caused by: java.lang.NullPointerException
	at run_javafx.Activator.start(Activator.java:24)
[...]

Type exit + Enter in the Console to shut down the plug-in.

Finding the script engine

The NullPointerException occurs because the ScriptEngine for JavaFX could not be found. We can fix this as follows: create a new folder named services under META-INF, and copy the javax.script.ScriptEngineFactory file from the JavaFX project into it. See the Service Provider section in the Jar File Specification for an explanation.
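
That file is a standard service provider configuration file: it contains nothing but the fully qualified name of the script engine factory class. In the JavaFX distribution used here that is presumably something like the line below, but the exact name comes from the file you copy, so there is no need to type it in:

com.sun.tools.javafx.script.JavaFXScriptEngineFactory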


Now when you run the plug-in, you will get

org.osgi.framework.BundleException: Exception in
    run_javafx.Activator.start() of bundle Run_JavaFX.
[...]
Caused by: javax.script.ScriptException: compilation failed
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.parse(JavaFXScriptEngineImpl.java:254)
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.eval(JavaFXScriptEngineImpl.java:144)
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.eval(JavaFXScriptEngineImpl.java:135)
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.eval(JavaFXScriptEngineImpl.java:140)
	at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:232)
	at run_javafx.Activator.start(Activator.java:24)
[...]

There is nothing wrong with the JavaFX script, though. You can test this yourself by running the command line tools (javafxc and javafx) against it.
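
For example (assuming the JavaFX SDK’s bin directory is on your path), compiling and then running the script directly works fine:

javafxc HelloWorld.fx
javafx HelloWorld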

Fixing the class path

The problem is in the JavaFXScriptEngineImpl class. It passes the class path to the JavaFXScriptCompiler. But this class path is not the actual class path used by the class loader; it is taken from the java.class.path system property. You can override this by either setting the com.sun.tools.javafx.script.classpath system property, or by setting the classpath attribute on the ScriptContext.

I will take that last approach. Just before the engine.eval(...) line, add the following:

engine.getContext().setAttribute("classpath", 
    getJavafxClassPath(), ScriptContext.ENGINE_SCOPE);

Implementing the getJavafxClassPath() method is a bit tricky:

private String getJavafxClassPath() {
  // Locate the jar that contains the FX class, e.g. file:/path/to/javafxrt.jar
  String result = FX.class.getProtectionDomain()
      .getCodeSource().getLocation().toExternalForm();
  // Strip the URL scheme (everything up to and including the colon)
  // to turn the URL into a plain file system path
  final int index = result.indexOf(":/");
  if (index >= 0) {
    result = result.substring(index + 1);
  }
  return result;
}

Now finally, when you run the plug-in, it will show the Hello world window: