REST 101 For Developers


Local Code Execution

Functions in high-level languages like C are compiled into procedures in assembly. They add a level of indirection that frees us from having to think about memory addresses.

Methods and polymorphism in object-oriented languages like Java add another level of indirection that frees us from having to think about the specific variant of a set of similar functions.

Despite these indirections, methods are basically still procedure calls, telling the computer to switch execution flow from one memory location to another. All of this happens in the same process running on the same computer.

Remote Code Execution

This is fundamentally different from switching execution to another process or another computer. The latter in particular is very different, since the other computer may not even run the same operating system through which programs access memory.

It is therefore no surprise that mechanisms of remote code execution that try to hide this difference as much as possible, like RMI or SOAP, have largely failed. Such technologies employ what is known as Remote Procedure Calls (RPCs).

One reason we must distinguish between local and remote procedure calls is that RPCs are a lot slower.

For most practical applications, this changes the nature of the calls you make: you’ll want to make fewer remote calls that are more coarse-grained.
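To make that concrete, here is a hypothetical sketch (CustomerDetails is a made-up DTO, not from any real API):

public interface ChattyCustomerService {
  // Fine-grained: three remote round trips just to display one customer.
  String getName(long customerId);
  String getEmail(long customerId);
  String getPhone(long customerId);
}

public interface CoarseGrainedCustomerService {
  // Coarse-grained: one remote round trip returning everything the client
  // needs, bundled in a (hypothetical) CustomerDetails data transfer object.
  CustomerDetails getCustomerDetails(long customerId);
}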

Another reason is more organizational than technical in nature.

When the code you’re calling lives in another process on another computer, chances are that the other process is written and deployed by someone else. For the two pieces of code to cooperate well, some form of coordination is required. That’s the price we pay for coupling.

Coordinating Change With Interfaces

We can also see this problem in a single process, for instance when code is deployed in different jar files. If you upgrade a third party jar file that your code depends on, you may need to change your code to keep everything working.

Such coordination is annoying. It would be much nicer if we could simply deploy the latest security patch of that jar without having to worry about breaking our code. Fortunately, we can if we’re careful.

Interfaces in languages like Java separate the public and private parts of code.

The public part is what clients depend on, so you must evolve interfaces in careful ways to avoid breaking clients.

The private part, in contrast, can be changed at will.
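As a hypothetical sketch of such careful evolution: adding a method to a published interface would break every existing implementation, so one option is to introduce the new method in a sub-interface and let clients test for it at runtime. The names below are made up for illustration:

// Published part of the API; changing it would break existing implementations.
public interface ReportGenerator {
  String generate(String reportId);
}

// New functionality goes into a sub-interface, so version 1 implementations
// keep working unchanged.
public interface PagedReportGenerator extends ReportGenerator {
  String generate(String reportId, int page, int pageSize);
}

// Clients that want the new capability check for it at runtime.
public class ReportClient {
  public String firstPage(ReportGenerator generator, String reportId) {
    if (generator instanceof PagedReportGenerator) {
      return ((PagedReportGenerator) generator).generate(reportId, 0, 20);
    }
    return generator.generate(reportId);
  }
}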

From Interfaces to Services

In OSGi, interfaces are the basis for what are called micro-services. By publishing services in a registry, we remove the need for clients to know which object implements a given interface. In other words, clients no longer care about the identity of the object that provides the service. The service registry becomes our entry point for accessing functionality.

There is a reason these interfaces are referred to as micro-services: they are miniature versions of the services that make up a Service Oriented Architecture (SOA).

A straightforward extrapolation of micro-services to “SOA services” leads to RPC-style implementations, for instance with SOAP. However, we’ve established earlier that RPCs are not the best way to invoke remote code.

Enter REST.

RESTful Services

Representational State Transfer (REST) is an architectural style that brings the advantages of the Web to the world of programs.

There is no denying the scalability of the Web, so this is an interesting angle.

Instead of explaining REST as it’s usually done by exploring its architectural constraints, let’s compare it to micro-services.

A well-designed RESTful service has a single entry point, like the micro-services registry. This entry point may take the form of a home resource.

We access the home resource like any other resource: through a representation. A representation is a series of bytes that we need to interpret. The rules for this interpretation are given by the media type.

Most RESTful services these days serve representations based on JSON or XML. The media type of a resource compares to the interface of an object.

Some interfaces contain methods that give us access to other interfaces. Similarly, a representation of a resource may contain hyperlinks to other resources.

Code-Based vs Data-Based Services

The difference between REST and SOAP is now becoming apparent.

In SOAP, like in micro-services, the interface is made up of methods. In other words, it’s code based.

In REST, on the other hand, the interface is made up of code and data. We’ve already seen the data: the representation described by the media type. The code is the uniform interface, which means that it’s the same (uniform) for all resources.

In practice, the uniform interface consists of the HTTP methods GET, POST, PUT, and DELETE.

Since the uniform interface is fixed for all resources, the real juice in any RESTful service is not in the code, but in the data: the media type.

Just as there are rules for evolving a Java interface, there are rules for evolving a media type, for example for XML-based media types. (It follows that you can’t use strict XML Schema validation for XML-based media types: a schema that rejects unknown elements would also reject representations produced by newer, backward-compatible versions.)

Uniform Resource Identifiers

So far I haven’t mentioned Uniform Resource Identifiers (URIs). The documentation of many so-called RESTful services may give you the impression that they are important.

However, since URIs identify resources, their equivalent in micro-services are the identities of the objects implementing the interfaces.

Hopefully this shows that clients shouldn’t care about URIs. Only the URI of the home resource is important.

The representation of the home resource contains links to other resources. The meaning of those links is indicated by link relations.

Through its understanding of link relations, a client can decide which links it wants to follow and discover their URIs from the representation.

Versions of Services

As much as possible, we should follow the rules for evolving media types and not introduce any breaking changes.

However, sometimes that might be unavoidable. We should then create a new version of the service.

Since URIs are not part of the public interface of a RESTful API, they are not the right vehicle for relaying version information. The correct way to indicate major (i.e. non-compatible) versions of an API can be derived by comparison with micro-services.

Whenever a service introduces a breaking change, it should change its interface. In a RESTful API, this means changing the media type. The client can then use content negotiation to request a media type it understands.
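As a rough sketch of what that looks like from the client side (the vendor media type name below is made up for illustration):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class VersionAwareClient {

  // Hypothetical vendor media type; a real service defines its own.
  private static final String MEDIA_TYPE_V1 =
      "application/vnd.example.invoice+json; version=1";

  public HttpURLConnection getInvoice(URL invoiceUrl) throws IOException {
    HttpURLConnection connection =
        (HttpURLConnection) invoiceUrl.openConnection();
    // Content negotiation: only ask for the representation we understand.
    connection.setRequestProperty("Accept", MEDIA_TYPE_V1);
    if (connection.getResponseCode() == HttpURLConnection.HTTP_NOT_ACCEPTABLE) {
      // 406 Not Acceptable: the service no longer serves this version.
      throw new IOException("Server no longer supports " + MEDIA_TYPE_V1);
    }
    return connection;
  }
}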

What Do You Think?

Literature explaining how to design and document code-based interfaces is readily available.

This is not the case for data-based interfaces like media types.

With RESTful services becoming ever more popular, that is a gap that needs filling. I’ll get back to this topic in the future.

How do you design your services? How do you document them? Please share your ideas in the comments.


The Lazy Developer’s Way to an Up-To-Date Libraries List

Last time I shared some tips on how to use libraries well. I now want to delve deeper into one of those: Know What Libraries You Use.

Last week I set out to create such a list of embedded components for our product. This is a requirement for our Security Development Lifecycle (SDL).

However, it’s not a fun task. As a developer, I want to write code, not update documents! So I turned to my friends Gradle and Groovy, with a little help from Jenkins and Confluence.

Gradle Dependencies

We use Gradle to build our product, and Gradle maintains the dependencies we have on third-party components.

Our build defines copyBundleConfigurations, a list of configuration names for embedded components, which is used to copy those components to the distribution directory. From there, I get to the external dependencies using Groovy’s collection methods:

def externalDependencies() {
  copyBundleConfigurations.collectMany { 
    configurations[it].allDependencies 
  }.findAll {
    !(it instanceof ProjectDependency) && it.group &&
        !it.group.startsWith('com.emc')
  }
}

Adding Required Information

However, Gradle dependencies don’t contain all the required information.

For instance, we need the license under which the library is distributed, so that we can ask the Legal department for permission to use it.

So I added a simple XML file to hold the additional info. Combining that information with the dependencies that Gradle maintains is easy using Groovy’s XML support:

ext.embeddedComponentsInfo = 'embeddedComponents.xml'

def externalDependencyInfos() {
  def result = new TreeMap()
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().each { dependency ->
    def info = componentInfo.component.find { 
      it.id == "$dependency.group:$dependency.name" &&
          it.friendlyName?.text() 
    }
    if (!info.isEmpty()) {
      def component = [
        'id': info.id,
        'friendlyName': info.friendlyName.text(),
        'version': dependency.version,
        'latestVersion': info.latestVersion.text(),
        'license': info.license.text(),
        'licenseUrl': info.licenseUrl.text(),
        'comment': info.comment.text()
      ]
      result.put component.friendlyName, component
    }
  }
  result.values()
}

I then created a Gradle task to write the information to an HTML file. Our Jenkins build executes this task, so that we always have an up-to-date list. I used Confluence’s html-include macro to include the HTML file in our Wiki.

Now our Wiki is always up-to-date.

Automatically Looking Up Missing Information

The next problem was to populate the XML file with additional information.

Had we had this file from the start, adding that information manually would not have been a big deal. In our case, we already had over a hundred dependencies, so automation was in order.

First I identified the components that are missing the required information:

def missingExternalDependencies() {
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().findAll { dependency ->
    componentInfo.component.find { 
      it.id == "$dependency.group:$dependency.name" && 
          it.friendlyName?.text() 
    }.isEmpty()
  }.collect {
    "$it.group:$it.name"
  }.sort()
}

Next, I wanted to automatically look up the missing information and add it to the XML file (using Groovy’s MarkupBuilder). In case the required information can’t be found, the build should fail:

project.afterEvaluate {
  def missingComponents = missingExternalDependencies()
  if (!missingComponents.isEmpty()) {
    def manualComponents = []
    def writer = new StringWriter() 
    def xml = new MarkupBuilder(writer)
    xml.expandEmptyElements = true
    println 'Looking up information on new dependencies:'
    xml.components {
      externalDependencyInfos().each { existingComponent ->
        component { 
          id(existingComponent.id)
          friendlyName(existingComponent.friendlyName)
          latestVersion(existingComponent.latestVersion)
          license(existingComponent.license)
          licenseUrl(existingComponent.licenseUrl)
          approved(existingComponent.approved)
          comment(existingComponent.comment)
        }
      }
      missingComponents.each { missingComponent ->
        def lookedUpComponent = collectInfo(missingComponent)
        component {
          id(missingComponent)
          friendlyName(lookedUpComponent.friendlyName)
          latestVersion(lookedUpComponent.latestVersion)
          license(lookedUpComponent.license)
          licenseUrl(lookedUpComponent.licenseUrl)
          approved('?')
          comment(lookedUpComponent.comment)
        }
        if (!lookedUpComponent.friendlyName || 
            !lookedUpComponent.latestVersion || 
            !lookedUpComponent.license) {
          manualComponents.add lookedUpComponent.id
          println '    => Please enter information manually'
        }
      }
    }
    writer.close()
    def embeddedComponentsFile = 
        project.file(embeddedComponentsInfo)
    embeddedComponentsFile.text = writer.toString()
    if (!manualComponents.isEmpty()) {
      throw new GradleException('Missing library information')
    }
  }
}

Anyone who adds a dependency in the future is now forced to add the required information.

So all that is left to implement is the collectInfo() method.

There are two primary sources that I used to look up the required information: the SpringSource Enterprise Bundle Repository holds OSGi bundle versions of common libraries, while Maven Central holds regular jars.

Extracting information from those sources is a matter of downloading and parsing XML and HTML files. This is easy enough with Groovy’s String.toURL() and URL.eachLine() methods and support for regular expressions.

Conclusion

All of this took me a couple of days to build, but I feel that the investment is well worth it, since I no longer have to worry about the list of used libraries being out of date.

How do you maintain a list of used libraries? Please let me know in the comments.

How to Create Extensible Java Applications

Many applications benefit from being open to extension. This post describes two ways to implement such extensibility in Java.

Extensible Applications

Extensible applications are applications whose functionality can be extended without having to recompile them and sometimes even without having to restart them. This may happen by simply adding a jar to the classpath, or by a more involved installation procedure.

One example of an extensible application is the Eclipse IDE. It allows extensions, called plug-ins, to be installed so that new functionality becomes available. For instance, you could install a Source Code Management (SCM) plug-in to work with your favorite SCM.

As another example, imagine an implementation of the XACML specification for authorization. The “X” in XACML stands for “eXtensible” and the specification defines a number of extension points, like attribute and category IDs, combining algorithms, functions, and Policy Information Points. A good XACML implementation will allow you to extend the product by providing a module that implements the extension point.

Service Provider Interface

Oracle’s solution for creating extensible applications is the Service Provider Interface (SPI).

In this approach, an extension point is defined by an interface:

package com.company.application;

public interface MyService {
  // ...
}

You can find all extensions for such an extension point by using the ServiceLoader class:

public class Client {

  public void useService() {
    Iterator<MyService> services = ServiceLoader.load(
        MyService.class).iterator();
    while (services.hasNext()) {
      MyService service = services.next();
      // ... use service ...
    }
  }

}

An extension for this extension point can be any class that implements that interface:

package com.company.application.impl;

public class MyServiceImpl implements MyService {
  // ...
}

The implementation class must be publicly available and have a public no-arg constructor. However, that’s not enough for the ServiceLoader class to find it.

You must also create a file named after the fully qualified name of the extension point interface in META-INF/services. In our example, that would be:

META-INF/services/com.company.application.MyService

This file must be UTF-8 encoded, or ServiceLoader will not be able to read it. Each line of this file should contain the fully qualified name of one extension implementing the extension point, for instance:

com.company.application.impl.MyServiceImpl 

OSGi Services

The SPI approach described above only works when the extension point files are on the classpath.

In an OSGi environment, this is not the case. Luckily, OSGi has its own solution to the extensibility problem: OSGi services.

With Declarative Services, OSGi services are easy to implement, especially when using the annotations of Apache Felix Service Component Runtime (SCR):

@Service
@Component
public class MyServiceImpl implements MyService {
  // ...
}

With OSGi and SCR, it is also very easy to use a service:

@Component
public class Client {

  @Reference
  private MyService myService;

  protected void bindMyService(MyService bound) {
    myService = bound;
  }

  protected void unbindMyService(MyService bound) {
    if (myService == bound) {
      myService = null;
    }
  }

  public void useService() {
    // ... use myService ...
  }

}

Best of Both Worlds

So which of the two options should you choose? It depends on your situation, of course. When you’re in an OSGi environment, the choice should obviously be OSGi services. If you’re not in an OSGi environment, you can’t use those, so you’re left with SPI.

But what if you’re writing a framework or library and you don’t know whether your code will be used in an OSGi or classpath based environment?

You will want to serve as many users of your library as possible, so it’s best to support both models. This can be done if you’re careful.

Note that adding a Declarative Services service component file like OSGI-INF/myServiceComponent.xml to your jar (which is what the SCR annotations end up doing when they are processed) will only work in an OSGi environment, but is harmless outside OSGi.

Likewise, the SPI service file will work in a traditional classpath environment, but is harmless in OSGi.

So the two approaches don’t interfere with each other: in any given environment, only one of them will find anything. Therefore, you can write code that uses both approaches. It’s a bit of duplication, but it allows your code to work in both types of environments, so you can have your cake and eat it too.
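A rough sketch of what such dual-mode lookup could look like, assuming an OSGi 4.3+ framework and reusing the MyService interface from above (names and structure are illustrative, not a definitive recipe):

import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;

public class MyServiceLocator {

  public List<MyService> findServices() {
    // FrameworkUtil.getBundle() returns null when we're not loaded by OSGi.
    Bundle bundle = FrameworkUtil.getBundle(MyServiceLocator.class);
    if (bundle != null && bundle.getBundleContext() != null) {
      // Running inside OSGi: ask the service registry.
      return findOsgiServices(bundle.getBundleContext());
    }
    // Plain classpath: fall back to the SPI mechanism.
    List<MyService> result = new ArrayList<MyService>();
    for (MyService service : ServiceLoader.load(MyService.class)) {
      result.add(service);
    }
    return result;
  }

  private List<MyService> findOsgiServices(BundleContext context) {
    List<MyService> result = new ArrayList<MyService>();
    try {
      for (ServiceReference<MyService> reference
          : context.getServiceReferences(MyService.class, null)) {
        result.add(context.getService(reference));
      }
    } catch (org.osgi.framework.InvalidSyntaxException e) {
      // Cannot happen with a null filter.
    }
    return result;
  }
}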

Using Cryptography in Java Applications

This post describes how to use the Java Cryptography Architecture (JCA), which gives your applications access to cryptographic services.

Java Cryptography Architecture Services

The JCA provides a number of cryptographic services, like message digests and signatures. These services are accessible through service specific APIs, like MessageDigest and Signature. Cryptographic services abstract different algorithms. For digests, for instance, you could use MD5 or SHA1. You specify the algorithm as a parameter to the getInstance() method of the cryptographic service class:

MessageDigest digest = MessageDigest.getInstance("MD5");

You find the value of the parameter for your algorithm in the JCA Standard Algorithm Name Documentation. Some algorithms have parameters. For instance, an algorithm to generate a private/public key pair will take the key size as a parameter. You specify the parameter(s) using the initialize() method:

KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
generator.initialize(1024);

If you don’t call the initialize() method, some default value will be used, which may or may not be what you want. Unfortunately, the API for initialization is not 100% consistent across services. For instance, the Cipher class uses init() with an argument indicating encryption or decryption, while the Signature class uses initSign() for signing and initVerify() for verification.
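A minimal sketch that shows the difference, using standard algorithm names:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class InitExamples {

  public static void main(String[] args) throws Exception {
    // Cipher uses init() with a mode flag for encryption or decryption.
    SecretKey key = KeyGenerator.getInstance("AES").generateKey();
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.ENCRYPT_MODE, key);

    // Signature uses initSign() for signing and initVerify() for verification.
    KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
    generator.initialize(1024);
    KeyPair keyPair = generator.generateKeyPair();
    Signature signature = Signature.getInstance("SHA1withDSA");
    signature.initSign(keyPair.getPrivate());
  }
}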

Java Cryptography Architecture Providers

The JCA keeps your code independent from a particular cryptographic algorithm’s implementation through the provider system. Providers are ranked according to a preference order, which is configurable (see below). The best preference is 1, the next best is 2, etc. The preference order allows the JCA to select the best available provider that implements a given algorithm. Alternatively, you can specify a specific provider in the second argument to getInstance():

Signature signature = Signature.getInstance("SHA1withDSA", "SUN");

The JRE comes with a bunch of providers from Oracle by default. However, due to historical export restrictions, these are not the most secure implementations. To get access to better algorithms and larger key sizes, install the Java Cryptography Extension Unlimited Strength Jurisdiction Policy Files. Update: Note that the above statement is true for the Oracle JRE. OpenJDK doesn’t have the same limitation.

Make Your Use of Cryptography Configurable

You should always make sure that the cryptographic services your application uses are configurable. If you do that, you can change the cryptographic algorithm and/or implementation without issuing a patch. This is particularly valuable when a new attack on an (implementation of an) algorithm becomes available.

The JCA makes it easy to configure the use of cryptography. The getInstance() method accepts both the name of the algorithm and the name of the provider implementing that algorithm. You should read both, as well as any values for the algorithm’s parameters, from some sort of configuration file. Also make sure you keep your code DRY and instantiate cryptographic services in a single place.

Check that the requested algorithm and/or provider are actually available. The getInstance() method throws NoSuchAlgorithmException when a given algorithm or provider is not available, so you should catch that. The safest option then is to fail and have someone make sure the system is configured properly. If you continue despite a configuration error, you may end up with a system that is less secure than required.

Note that Oracle recommends not specifying the provider. The reasons they give are that not all providers may be available on all platforms, and that specifying a provider may mean that you miss out on optimizations. You should weigh those disadvantages against the risk of being vulnerable. Deploying specific providers with known characteristics with your application may neutralize the disadvantages that Oracle mentions.
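Here’s a minimal sketch of that idea, assuming a simple properties file with made-up keys (digest.algorithm and digest.provider); adapt it to whatever configuration mechanism you use:

import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.util.Properties;

public class DigestFactory {

  private final String algorithm;
  private final String provider;

  public DigestFactory(String configFile) throws IOException {
    Properties config = new Properties();
    FileInputStream in = new FileInputStream(configFile);
    try {
      config.load(in);
    } finally {
      in.close();
    }
    // Hypothetical property names; pick whatever fits your configuration scheme.
    algorithm = config.getProperty("digest.algorithm", "SHA-256");
    provider = config.getProperty("digest.provider");
  }

  // The single place in the code base that instantiates the digest service.
  public MessageDigest newDigest() {
    try {
      return provider == null
          ? MessageDigest.getInstance(algorithm)
          : MessageDigest.getInstance(algorithm, provider);
    } catch (NoSuchAlgorithmException e) {
      // Fail fast: running with weaker-than-required crypto is worse than failing.
      throw new IllegalStateException("Digest not available: " + algorithm, e);
    } catch (NoSuchProviderException e) {
      throw new IllegalStateException("Provider not available: " + provider, e);
    }
  }
}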

Adding Cryptographic Service Providers

The provider system is extensible, so you can add providers. For example, you could use the open source Bouncy Castle or the commercial RSA BSAFE providers.

In order to add a provider, you must make sure that its jar is available to the application. You can put it on the classpath for this purpose. Alternatively, you can make it an installed extension by placing it in the $JAVA_HOME/lib/ext directory, where $JAVA_HOME is the location of your JDK/JRE distribution. The major difference between the two approaches is that installed extensions are granted all permissions by default, whereas code on the classpath is not. This is significant when (part of) your code runs in a sandbox. Some services, like Cipher, require the provider jar to be signed.

The next step is to register the provider with the JCA provider system. The simplest way is to use Security.addProvider():

Security.addProvider(new BouncyCastleProvider());

You can also set the provider’s preference order by using the Security.insertProviderAt() method:

Security.insertProviderAt(new JsafeJCE(), 1);

One downside of this approach is that it couples your code to the provider, since you have to import the provider class. This may not be an important issue in a modular system like OSGi. Another thing to look out for is that code requires SecurityPermission to add a provider programmatically.

The provider can also be configured as part of your environment via static registration, by adding an entry to the java.security properties file (found in $JAVA_HOME/jre/lib/security/java.security):

security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=sun.security.provider.Sun

The property names in this file start with security.provider. and end with the provider’s preference. The property value is the fully qualified name of the class implementing Provider.

Implementing Your Own Cryptographic Service Provider

Don’t do it. You will get it wrong and be vulnerable to attacks.

Using Cryptographic Service Providers

The documentation for the provider should tell you what provider name to use as the second argument to getInstance(). For instance, Bouncy Castle uses BC, while RSA BSAFE uses JsafeJCE. Most providers have custom APIs as well as JCA conformant APIs. Do not use the custom APIs, since that will make it impossible to configure the algorithms and providers used.

Not All Algorithms and Implementations Are Created Equal

It’s important to note that different algorithms and implementations have different characteristics and that those may make them more or less suitable for your situation. For instance, some organizations will only allow algorithms and implementations that are FIPS 140-2 certified or are on the list of NSA Suite B cryptographic algorithms. Always make sure you understand your customer’s cryptographic needs and requirements.

Using JCA in an OSGi environment

The getInstance() method is a factory method that uses the Service Provider Interface (SPI). That is problematic in an OSGi world, since OSGi violates the SPI framework’s assumption that there is a single classpath. Another potential issue is that JCA requires some jars to be signed. If those jars are not valid OSGi bundles, you can’t run them through bnd to make them so, since that would make the signature invalid.

Fortunately, you can kill both birds with one stone. Put your provider jars on the classpath of your main program, that is, the program that starts the OSGi framework. Then export the provider package from the OSGi system bundle using the org.osgi.framework.system.packages.extra system property. This will make the system bundle export that package. Now you can simply use Import-Package on the provider package in your bundles.

There are other options for resolving these problems if you can’t use the above solution.
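A hedged sketch of the launcher side of that approach (Bouncy Castle’s provider package is used as an example; any provider package on the launcher’s classpath works the same way):

import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class Launcher {

  public static void main(String[] args) throws Exception {
    Map<String, String> config = new HashMap<String, String>();
    // Export the provider package from the system bundle, so regular
    // bundles can simply Import-Package it.
    config.put("org.osgi.framework.system.packages.extra",
        "org.bouncycastle.jce.provider");

    // Discover whatever OSGi framework implementation is on the classpath.
    FrameworkFactory factory =
        ServiceLoader.load(FrameworkFactory.class).iterator().next();
    Framework framework = factory.newFramework(config);
    framework.start();
    // ... install and start your bundles here ...
    framework.waitForStop(0);
  }
}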

Permissions in OSGi

In a previous post, we looked at implementing a sandbox for Java applications in which we can securely run mobile code.

This post looks at how to do the same in an OSGi environment.

OSGi

The OSGi specification defines a dynamic module system for Java. As such, it’s a perfect candidate for implementing the kind of plugin system that would enable your application to dynamically add mobile code.

Security in OSGi builds on the Java 2 security architecture that we discussed earlier, so you can re-use your knowledge about code signing, etc.

OSGi goes a couple of steps further, however.

Revoking Permissions

One of the weaknesses in the Java permissions model is that you can only explicitly grant permissions, not revoke them. There are many cases where you want to allow everything except a particular special case.

There is no way to do that with standard Java permissions, but, luckily, OSGi introduces a solution.

The downside is that OSGi introduces its own syntax for specifying policies.

The following example shows how to deny PackagePermission for subpackages of com.acme.secret:

DENY {
  ( ..PackagePermission "com.acme.secret.*" "import,exportonly" )
} "denyExample"

(In this and following examples, I give the simple name of permission classes instead of the fully qualified name. I hint at that by prefixing the simple name with ..)

PackagePermission is a permission defined by OSGi for authorization of package imports and exports. Your application could use a policy like this to make sure that mobile code can’t call the classes in a given package, for instance to limit direct access to the database.

Extensible Conditions on Permissions

The second improvement that OSGi brings is that the conditions under which a permission is granted can be evaluated dynamically at runtime.

The following example shows how to conditionally grant ServicePermission:

ALLOW {
  [ ..BundleSignerCondition "* ; o=ACME" ]
  ( ..ServicePermission "..ManagedService" "register" )
} "conditionalExample"

ServicePermission is an OSGi defined permission that restricts access to OSGi services.

The condition is the part between square brackets. OSGi defines two conditions, which correspond to the signedBy and codeBase constructs in regular Java policies.

You can also define your own conditions. The specification gives detailed instructions on implementing conditions, especially with regard to performance.
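As a rough sketch, a custom condition is a class that implements the Condition interface from the org.osgi.service.condpermadmin package and, by convention, exposes a static getCondition() factory method. The office-hours logic below is made up purely for illustration:

import java.util.Calendar;
import java.util.Dictionary;

import org.osgi.framework.Bundle;
import org.osgi.service.condpermadmin.Condition;
import org.osgi.service.condpermadmin.ConditionInfo;

// Made-up condition that is only satisfied during office hours.
public class OfficeHoursCondition implements Condition {

  // By convention, the framework looks for this static factory method.
  public static Condition getCondition(Bundle bundle, ConditionInfo info) {
    return new OfficeHoursCondition();
  }

  public boolean isSatisfied() {
    int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
    return hour >= 9 && hour < 17;
  }

  public boolean isMutable() {
    return true; // The answer changes over time.
  }

  public boolean isPostponed() {
    return false; // Cheap enough to evaluate immediately.
  }

  public boolean isSatisfied(Condition[] conditions, Dictionary context) {
    // Only called for postponed conditions; evaluate them all here.
    for (Condition condition : conditions) {
      if (!condition.isSatisfied()) {
        return false;
      }
    }
    return true;
  }
}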

Different Types of Permissions

The final innovation that OSGi brings to the Java permissions model, is that there are different types of permissions.

Bundles can specify their own permissions. This doesn’t mean that bundles can grant themselves permissions, but rather that they can specify the maximum privileges that they need to function. These permissions are called local permissions.

The OSGi framework ensures that the bundle will never have more permissions than the local permissions, thus implementing the principle of least privilege.

Actually, that statement is not entirely accurate. Every bundle will have certain permissions that it needs to function in an OSGi environment, like being able to read the org.osgi.framework.* system properties.

These permissions are called implicit permissions, since every bundle will have them, whether the permissions are explicitly granted to the bundle or not.

The final type of permissions are the system permissions. These are the permissions that are granted to the bundle.

The effective permissions are the set of permissions that are checked at runtime:

effective = (local ∩ system) ∪ implicit
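Purely to illustrate the formula (this is not how the framework actually computes it), in terms of plain set operations:

import java.util.HashSet;
import java.util.Set;

public class EffectivePermissions {

  // Illustrative only: the framework performs this calculation internally.
  static Set<String> effective(Set<String> local, Set<String> system,
      Set<String> implicit) {
    Set<String> result = new HashSet<String>(local);
    result.retainAll(system);   // local ∩ system
    result.addAll(implicit);    // ∪ implicit
    return result;
  }
}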

Local permissions enable auditing. Before installing a bundle into your OSGi environment, you can inspect the Bundle Permission Resource in OSGI-INF/permissions.perm to see what permissions the bundle requires.

If you are not comfortable with granting the bundle these permissions, you can decide to not install the bundle. The point is that you can know all of this without running the bundle and without having access to its source code.

Integration into the Java Permissions Model

The OSGi framework integrates its extended permissions model into the standard Java permissions model by subclassing ProtectionDomain.

Each bundle gets a BundleProtectionDomainImpl for this purpose.

This approach allows OSGi to tap into the standard Java permissions model that you have come to know, so you can re-use most of your skills in this area. The only thing you’ll have to re-learn is how to write policies.

Comparison of Permission Models

To put the OSGi permission model into perspective, consider the following comparison table, which uses terminology from the XACML specification:

Permission Models      Standard Java        OSGi
Effects                permit               permit, deny
Target, Condition      codeBase, signedBy   codeBase, signedBy, custom conditions
Combining Algorithms   first-applicable     first-applicable, local/system/implicit

From this table you can see that the OSGi model is quite a bit more expressive than the standard Java permission model, although not as expressive as XACML.

Root Cause Analysis

I’ve moved on to a new project recently. It’s quite different from the previous one. Before, I worked on a monolithic web application; now we’re using OSGi. As a result, our project consists of a lot of sub-projects (OSGi bundles), which makes it very inconvenient to use Ant. So we’ve switched to Gradle. Our company has also standardized on Perforce, where we used Subversion before.

To summarize, a lot has changed and I’m not really up to speed yet. This is evidenced by the fact that I broke our build 4 times in 5 days. As an aspiring software craftsman, I feel really bad about that. So why did this happen so often? And can I do anything to prevent it from happening again? Enter Root Cause Analysis.

Root cause analysis is performed to not just treat the symptoms, but cure the disease:

The key to effective problem solving is to first make sure you understand the problem that you are trying to solve – why it needs to be solved, how you will know when you’ve solved it, and what the root cause is. Often symptoms show up in one place while the actual cause of the problem is somewhere completely different. If you just “solve” the symptom without digging deeper it is highly likely that the problem will just reappear later in a different shape.

Henrik Kniberg has written about one way of doing root cause analysis: using Cause-Effect diagrams. Using this method, I ended up with the following:

Build Failure Cause Effect Diagram

I started out with the problem: Build Failure, in the orange rectangle. I then repeatedly asked myself why this is a bad thing and added the effects in the red rectangles. Just repeat this until you find something that conflicts with your goal. Finally, I repeatedly added causes in blue rectangles. You can stop when you’ve found something you can fix, but in general it’s good to keep asking a bit deeper than feels comfortable. This is where the Five Whys technique comes in handy.

As you can see, my main problem is impatience. For those who know me, that won’t come as a surprise 😉 However, in this case this personal flaw of mine gets in the way of my goal of making customers happy.

With the causes identified, it’s time to think up some countermeasures. Pick some causes that you can fix, and think of a way to treat them. The ones I’m going to work on are marked with a star in the figure above:

  • Use a checklist when submitting code to the source code repository. This will prevent me from making silly mistakes, such as forgetting to add a new file
  • Take the time to learn the tools better, in particular Gradle and Groovy
  • In general, try to be more patient

The last one is the hard one, of course. Wish me luck!

Update 2010-08-01: I have created a checklist with things to consider before submitting code to our Perforce repository and used this checklist all week. I broke the build twice this week, both times because of special circumstances that were not accounted for on the checklist. So I think I’m improving, but I’m not there yet.

OSGi & Maven & Eclipse

If you’re involved in a large software development effort in Java, then OSGi seems like a natural fit to keep things modular and thus maintainable. But every advantage can also be seen as a disadvantage: using OSGi you will end up with lots of small projects. Handling these and their interrelationships can be challenging.

Enter Maven. This build tool makes it a lot easier to build all these little (or not so little) projects. Which is a necessity, since a command line driven build tool is essential for doing Continuous Integration. And we all practice that, right?

However, as a developer it’s a pain to keep switching between your favorite IDE and the command line. Not to worry, Eclipse has plug-ins that handle just about any situation. Using M2Eclipse, you can maintain your POM from within the IDE.

But an Eclipse Maven project is not an Eclipse OSGi project. For handling OSGi bundles, one would want to use the Eclipse Plug-in Development Environment (PDE) with all the goodies that brings to OSGi development. There is, however, a way to get the best of both worlds, although it still isn’t perfect, as we will see shortly.

The trick is to start with a PDE project.

Make sure to follow the Maven convention for sources and classes and to use plain OSGi (so you’re not tied to Eclipse/Equinox).

Once you’ve created the project, you can add Maven support.

Make sure to use the same identification for Maven as for PDE.

Now you have an Eclipse project that plays nice with both PDE (and thus OSGi) and Maven. The only downside to this solution is that some information, like the bundle ID, is duplicated.

Pre-OSGi modularity with Macker

OSGi is gaining a lot of traction lately. But what if you have a very large application? Migration can be a lot of work.

I would like to point to a simple tool we use that might help out a bit here. It’s called Macker and

it’s meant to model the architectural ideals programmers always dream up for their projects, and then break — it helps keep code clean and consistent.

Macker is open source (GPL). Its current version is 0.4 and has been for a long time. That doesn’t mean it’s immature or abandoned, however. Its author had a lot more features planned, hence the 0.4. But what’s already available is enough to give it a serious look.

So, what does Macker do, exactly? It enforces rules about your architecture. For example, suppose you have a product with a public API. You could create a rule file with an <access-rule> that the API must be self-contained:

<access-rule>
  <message>The API should be self-contained</message>
  <deny>
    <from pattern="api" />
  </deny>
  <allow>
    <from pattern="api" />
    <to pattern="api" />
  </allow>
  <allow>
    <from pattern="api" />
    <to pattern="jre" />
  </allow>
</access-rule>

These rules can be very explicit about what is and what isn’t allowed. There are several ways to specify them, but I’ve found it easiest to use patterns, like in the example above, since they can have symbolic names. Here’s an example:

<pattern name="api">
  <include class="com.acme.api.**"/>
</pattern>

Where the ** denotes every class in the com.acme.api package, or any of its sub-packages. See the Macker user guide for more information about supported regular expressions.

Macker comes with an Ant task, so you can enforce your architecture from your build. Maybe not as good as OSGi, but it sure helps with keeping your code the way you intended it.

Running a JavaFX script from an OSGi bundle

Two technologies that have been on my radar for a while are JavaFX, Sun’s entry in the RIA race, and OSGi, the Dynamic Module System for Java. In this tutorial, I will combine the two by running a JavaFX script from an OSGi bundle. You will need Eclipse and Java 6. [That’s right: a JavaFX tutorial that doesn’t want you to use NetBeans!]

Although JavaFX is still in the beta stage (despite being announced at JavaOne last year), and is very much in flux, it makes for an interesting UI technology. I wouldn’t want to write production ready applications in it yet, but maybe that will change in the not-too-distant future.

OSGi, on the other hand, is very much a mature technology. It is widely used, for instance for the plug-ins that make up Eclipse. In fact, Eclipse Equinox is now the reference implementation for OSGi. BTW, the OSGi term for plug-in is bundle, and I will use the two interchangeably.

Wrapping the JavaFX libraries in an OSGI bundle

Everything in OSGi happens inside a bundle, so the first step to running a JavaFX script from an OSGi bundle, is to create a bundle that holds the JavaFX jars. You will need to download the latest JavaFX distribution and unzip it somewhere.

Then, in Eclipse, select File|New|Project..., so that the New Project wizard appears. In the first step, select from the Plug-in Development category the Plug-in from existing JAR archives, and click Next. In the second step, click the Add External button and add all the jars from the JavaFX distribution’s lib directory. In the third step, enter JavaFX for the Project name, and click an OSGi framework under Target platform. Select standard for the OSGi framework, and click Finish. After a while, the plug-in project appears.

     

Next, open META-INF/MANIFEST.MF, select the MANIFEST.MF tab, and paste the following line into it:

Bundle-RequiredExecutionEnvironment: JavaSE-1.6

Now right-click on JRE System Library in the Package Explorer, and select Properties. For System library, select Execution environment, and from the combo box next to it, select JavaSE-1.6. Click OK.

 

Creating an OSGi bundle that will run a JavaFX script

You will now use the bundle you just created from a new bundle, the one that runs the JavaFX script. Again, select File|New|Project..., but this time, select Plug-in Project from the Plug-in Development category. In the next step, enter Run JavaFX for the Project Name, select the standard OSGi framework as the Target platform (same as above), and click Next. In the next step, just click Finish to create the project.

 

In the Manifest Editor, select the Dependencies tab. Click the Add button under Required Plug-ins, select JavaFX (1.0.0) from the list, and click the Save button in the toolbar.

   

Creating the JavaFX script

Create a new directory named script, and create a new file named HelloWorld.fx in it. Paste the following code into it:

import javafx.application.Frame;
import javafx.application.Stage;
import javafx.scene.paint.Color;
import javafx.scene.text.*;

Frame {
  title: "Hello World!"
  width: 550
  height: 200
  visible: true
  stage: Stage {
    content: Text {
      font: Font {
        name: "Sans Serif"
        style: FontStyle.BOLD
        size: 24
      }
      x: 20
      y: 40
      stroke: Color.BLUE
      fill: Color.BLUE
      content: "Hello from JavaFX Compiled Script"
    }
  }
}

   

Next, right-click the script directory, and select Build path|Use as Source Folder.

Running the script

To actually run the JavaFX script, we will use JSR-223: Scripting for the Java Platform. [That’s why I made you select JavaSE-1.6 as the bundle execution environment.]

In the Run JavaFX project, open the src directory, the run_javafx package, and then the Activator class. [A bundle’s activator allows you to do stuff when your bundle starts or stops.] In the start() method, add the following code:

final ScriptEngine engine = new ScriptEngineManager()
    .getEngineByExtension("fx");
final String scriptName = "HelloWorld.fx";
final InputStream stream = Thread.currentThread()
    .getContextClassLoader()
    .getResourceAsStream(scriptName);
engine.eval(new InputStreamReader(stream));

Use Eclipse’s Quick Fix to add the required import statements for ScriptEngine, ScriptEngineManager, InputStream, and InputStreamReader.

To run the bundle from Eclipse, select the MANIFEST.MF file, select the Overview tab, and click the Run button at the top.

In the Console, you will see the osgi> prompt, followed by a lot of error output. If you scroll through the output, you will see

org.osgi.framework.BundleException: Exception in
    run_javafx.Activator.start() of bundle Run_JavaFX.
[...]
Caused by: java.lang.NullPointerException
	at run_javafx.Activator.start(Activator.java:24)
[...]

Type exit + Enter in the Console to shut down the plug-in.

Finding the script engine

The NullPointerException occurs because the ScriptEngine for JavaFX could not be found. We can fix this as follows: create a new folder named services under META-INF, and copy the javax.script.ScriptEngineFactory file from the JavaFX project into it. See the Service Provider section in the Jar File Specification for an explanation.

 

Now when you run the plug-in, you will get

org.osgi.framework.BundleException: Exception in
    run_javafx.Activator.start() of bundle Run_JavaFX.
[...]
Caused by: javax.script.ScriptException: compilation failed
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.parse(JavaFXScriptEngineImpl.java:254)
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.eval(JavaFXScriptEngineImpl.java:144)
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.eval(JavaFXScriptEngineImpl.java:135)
	at com.sun.tools.javafx.script.JavaFXScriptEngineImpl.eval(JavaFXScriptEngineImpl.java:140)
	at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:232)
	at run_javafx.Activator.start(Activator.java:24)
[...]

There is nothing wrong with the JavaFX script, though. You can test this yourself by running the command line tools (javafxc and javafx) against it.

Fixing the class path

The problem is in the JavaFXScriptEngineImpl class. It passes the class path to the JavaFXScriptCompiler. But this class path is not the actual class path used by the class loader; it is taken from the java.class.path system property. You can override this by either setting the com.sun.tools.javafx.script.classpath system property, or by setting the classpath attribute on the ScriptContext.

I will take that last approach. Just before the engine.eval(...) line, add the following:

engine.getContext().setAttribute("classpath", 
    getJavafxClassPath(), ScriptContext.ENGINE_SCOPE);

Implementing the getJavafxClassPath() method is a bit tricky:

private String getJavafxClassPath() {
  String result = FX.class.getProtectionDomain()
      .getCodeSource().getLocation().toExternalForm();
  final int index = result.indexOf(":/");
  if (index >= 0) {
    result = result.substring(index + 1);
  }
  return result;
}

Finally, when you run the plug-in, it shows the Hello World window.