How To Process Java Annotations

One of the cool new features of Java 8 is its support for lambda expressions. Lambda expressions lean heavily on the @FunctionalInterface annotation.

In this post, we’ll look at annotations and how to process them so you can implement your own cool features.

Annotations

Annotations were added in Java 5. The Java language comes with some predefined annotations, but you can also define custom annotations.

Many frameworks and libraries make good use of custom annotations. JAX-RS, for instance, uses them to turn POJOs into REST resources.

Annotations can be processed at compile time or at runtime (or even both).

At runtime, you can use the reflection API. Each element of the Java language that can be annotated, such as a class or method, implements the AnnotatedElement interface. Note that an annotation is only available at runtime if it has RetentionPolicy.RUNTIME.
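As a minimal sketch of both halves, here is a made-up custom annotation with runtime retention, followed by the reflection code to read it (the annotation and class names are just for illustration):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical custom annotation; RUNTIME retention makes it visible to reflection
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface ResourceName {
  String value();
}

And reading it at runtime:

// Class implements AnnotatedElement, so we can ask it for its annotations
ResourceName resourceName = SomeAnnotatedClass.class.getAnnotation(ResourceName.class);
if (resourceName != null) {
  System.out.println("Resource: " + resourceName.value());
}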

Compile-Time Annotation Processing

Java 5 came with the separate apt tool to process annotations, but since Java 6 this functionality has been integrated into the compiler.

You can either call the compiler directly, e.g. from the command line, or indirectly, from your program.

In the former case, you specify the -processor option to javac, or you use the ServiceLoader framework by adding the file META-INF/services/javax.annotation.processing.Processor to your jar. The contents of this file should be a single line containing the fully qualified name of your processor class.
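For example, with a hypothetical processor class, the command-line option looks like this:

javac -processor com.company.processor.MyProcessor src/com/company/*.java

With the ServiceLoader approach, the file META-INF/services/javax.annotation.processing.Processor inside your jar would contain the single line:

com.company.processor.MyProcessor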

The ServiceLoader approach is especially convenient in an automated build, since all you have to do is put the annotation processor on the classpath during compilation, which build tools like Maven or Gradle will do for you.

Compile-Time Annotation Processing From Within Your Application

You can also use the compile-time tools to process annotations from within your running application.

Rather than calling javac directly, use the more convenient JavaCompiler interface. Either way, you’ll need to run your application with a JDK rather than just a JRE.

The JavaCompiler interface gives you programmatic access to the Java compiler. You can obtain an implementation of this interface using ToolProvider.getSystemJavaCompiler(). Which compiler you get, if any, depends on the JDK your application runs on, for example the one selected through JAVA_HOME in your launch script.

The getTask() method of JavaCompiler allows you to add your annotation processor instances. This is the only way to control the construction of annotation processors; all other methods of invoking annotation processors require the processor to have a public no-arg constructor.
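Here's a minimal sketch of that approach; MyProcessor and the source file are hypothetical stand-ins for your own processor and code:

import java.io.File;
import java.util.Arrays;
import javax.tools.JavaCompiler;
import javax.tools.JavaCompiler.CompilationTask;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class CompileWithProcessor {

  public static void main(String[] args) throws Exception {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    StandardJavaFileManager fileManager =
        compiler.getStandardFileManager(null, null, null);
    Iterable<? extends JavaFileObject> units =
        fileManager.getJavaFileObjects(new File("src/com/company/Annotated.java"));
    CompilationTask task =
        compiler.getTask(null, fileManager, null, null, null, units);
    // This is where you control construction: pass pre-built processor instances,
    // created with whatever constructor arguments you like
    task.setProcessors(Arrays.asList(new MyProcessor()));
    task.call();
    fileManager.close();
  }
}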

Annotation Processors

A processor must implement the Processor interface. Usually you will want to extend the AbstractProcessor base class rather than implement the interface from scratch.

Each annotation processor must indicate the types of annotations it is interested in through the getSupportedAnnotationTypes() method. You may return "*" to process all annotations.

The other important thing is to indicate which Java language version you support. Override the getSupportedSourceVersion() method and return one of the RELEASE_x constants of the SourceVersion enum.

With these methods implemented, your annotation processor is ready to get to work. The meat of the processor is in the process() method.

When process() returns true, the annotations processed are claimed by this processor, and will not be offered to other processors. Normally, you should play nice with other processors and return false.
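Putting these pieces together, a bare-bones processor could look like this sketch (the annotation name is made up):

import java.util.Collections;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

public class MyProcessor extends AbstractProcessor {

  @Override
  public Set<String> getSupportedAnnotationTypes() {
    // The annotation types this processor is interested in (or "*" for all)
    return Collections.singleton("com.company.ResourceName");
  }

  @Override
  public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.RELEASE_8;
  }

  @Override
  public boolean process(Set<? extends TypeElement> annotations,
      RoundEnvironment roundEnv) {
    for (TypeElement annotation : annotations) {
      for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
        // ... inspect the element, validate it, or generate code for it ...
      }
    }
    // Play nice: don't claim the annotations, so other processors still see them
    return false;
  }
}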

Elements and TypeMirrors

The annotations and the Java elements they are present on are provided to your process() method as Element objects. You may want to process them using the Visitor pattern.

The most interesting types of elements are TypeElement for classes and interfaces (including annotations), ExecutableElement for methods, and VariableElement for fields.

Each Element points to a TypeMirror, which represents a type in the Java programming language. You can use the TypeMirror to walk the class relationships of the annotated code you’re processing, much like you would using reflection on the code running in the JVM.
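For instance, here's a sketch of a visitor based on SimpleElementVisitor8 from javax.lang.model.util (what it does with each element is made up):

import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.element.VariableElement;
import javax.lang.model.util.SimpleElementVisitor8;

public class ElementPrinter extends SimpleElementVisitor8<Void, Void> {

  @Override
  public Void visitType(TypeElement type, Void unused) {
    // A class, interface, enum, or annotation type
    System.out.println("Type " + type.getQualifiedName()
        + " extends " + type.getSuperclass());
    return null;
  }

  @Override
  public Void visitExecutable(ExecutableElement method, Void unused) {
    System.out.println("Method " + method.getSimpleName()
        + " returns " + method.getReturnType());
    return null;
  }

  @Override
  public Void visitVariable(VariableElement field, Void unused) {
    System.out.println("Field " + field.getSimpleName()
        + " of type " + field.asType());
    return null;
  }
}

You would invoke it from process() with element.accept(new ElementPrinter(), null).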

Processing Rounds

Annotation processing happens in separate stages, called rounds. During each round, a processor gets a chance to process the annotations it is interested in.

The annotations to process and the elements they are present on are available via the RoundEnvironment parameter passed into the process() method.

If annotation processors generate new source or class files during a round, then the compiler will make those available for processing in the next round. This continues until no more new files are generated.

The last round contains no input, and is thus a good opportunity to release any resources the processor may have acquired.
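You can detect the last round by checking RoundEnvironment.processingOver(). Inside the process() method shown earlier, that could look roughly like this (the cleanup method is a placeholder):

@Override
public boolean process(Set<? extends TypeElement> annotations,
    RoundEnvironment roundEnv) {
  if (roundEnv.processingOver()) {
    // Last round: no input elements; release whatever we acquired earlier
    releaseResources();
    return false;
  }
  // ... normal processing of the annotations in roundEnv ...
  return false;
}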

Initializing and Configuring Processors

Annotation processors are initialized with a ProcessingEnvironment. This processing environment allows you to create new source or class files.

It also provides access to configuration in the form of options. Options are key-value pairs that you can supply on the command line to javac using the -A option. For this to work, you must return the options’ keys in the processor’s getSupportedOptions() method.

Finally, the processing environment provides some support routines (e.g. to get the JavaDoc for an element, or to get the direct super types of a type) that come in handy during processing.
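Inside a processor, these facilities are available through the processingEnv field inherited from AbstractProcessor. A sketch of a few of them, where element is one of the annotated elements, and the option key and generated class name are made up:

// Read an option passed as -Acom.company.verbose=true
// (remember to return "com.company.verbose" from getSupportedOptions())
String verbose = processingEnv.getOptions().get("com.company.verbose");

// Get the JavaDoc comment for an element
String doc = processingEnv.getElementUtils().getDocComment(element);

// Get the direct supertypes of the element's type
List<? extends TypeMirror> supertypes =
    processingEnv.getTypeUtils().directSupertypes(element.asType());

// Generate a new source file
JavaFileObject generated =
    processingEnv.getFiler().createSourceFile("com.company.Generated");
try (Writer writer = generated.openWriter()) {
  writer.write("package com.company;\n\npublic class Generated {\n}\n");
}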

Classpath Issues

To get the most accurate information during annotation processing, you must make sure that all imported classes are on the classpath, because classes that refer to types that are not available may have incomplete or altogether missing information.

When processing large numbers of annotated classes, this may lead to a problem on Windows systems where the command line becomes too large (> 8K). Even when you use the JavaCompiler interface, it still calls javac behind the scenes.

The Java compiler has a nice solution to this problem: you can use argument files that contain the arguments to javac. The name of the argument file is then supplied on the command line, preceded by @.

Unfortunately, the JavaCompiler.getTask() method doesn’t support argument files, so you’ll have to use the underlying run() method.
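A sketch of that approach; the argument file name and its contents are just an example:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileWithArgumentFile {

  public static void main(String[] args) throws Exception {
    // Write the (potentially very long) javac arguments to a file
    List<String> arguments = Arrays.asList(
        "-classpath", "lib/first.jar;lib/second.jar",
        "src/com/company/Annotated.java");
    Path argumentFile = Paths.get("javac.args");
    Files.write(argumentFile, arguments, StandardCharsets.UTF_8);

    // Pass the argument file to the compiler with the @ prefix
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    int result = compiler.run(null, null, null, "@" + argumentFile);
    System.out.println(result == 0 ? "Compiled" : "Failed");
  }
}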

Remember that the getTask() approach is the only one that allows you to construct your annotation processors yourself. If you must use argument files, then your processors must have a public no-arg constructor.

If you’re in that situation, and you have multiple annotation processors that need to share a single instance of a class, you can’t pass that instance into the constructor, so you’ll be forced to use something like the Singleton pattern.

Conclusion

Annotations are an exciting technology with lots of interesting applications. For example, I used them to extract the resources from a REST API into a resource model for further processing, like generating documentation.

I’m very interested to learn what you have used them for. Please leave a comment below.

How to Create Extensible Java Applications

Many applications benefit from being open to extension. This post describes two ways to implement such extensibility in Java.

Extensible Applications

Extensible applications are applications whose functionality can be extended without having to recompile them and sometimes even without having to restart them. This may happen by simply adding a jar to the classpath, or by a more involved installation procedure.

One example of an extensible application is the Eclipse IDE. It allows extensions, called plug-ins, to be installed so that new functionality becomes available. For instance, you could install a Source Code Management (SCM) plug-in to work with your favorite SCM.

As another example, imagine an implementation of the XACML specification for authorization. The “X” in XACML stands for “eXtensible” and the specification defines a number of extension points, like attribute and category IDs, combining algorithms, functions, and Policy Information Points. A good XACML implementation will allow you to extend the product by providing a module that implements the extension point.

Service Provider Interface

Oracle’s solution for creating extensible applications is the Service Provider Interface (SPI).

In this approach, an extension point is defined by an interface:

package com.company.application;

public interface MyService {
  // ...
}

You can find all extensions for such an extension point by using the ServiceLoader class:

import java.util.Iterator;
import java.util.ServiceLoader;

public class Client {

  public void useService() {
    Iterator<MyService> services = ServiceLoader.load(
        MyService.class).iterator();
    while (services.hasNext()) {
      MyService service = services.next();
      // ... use service ...
    }
  }

}

An extension for this extension point can be any class that implements that interface:

package com.company.application.impl;

public class MyServiceImpl implements MyService {
  // ...
}

The implementation class must be publicly available and have a public no-arg constructor. However, that’s not enough for the ServiceLoader class to find it.

You must also create a file named after the fully qualified name of the extension point interface in META-INF/services. In our example, that would be:

META-INF/services/com.company.application.MyService

This file must be UTF-8 encoded, or ServiceLoader will not be able to read it. Each line of this file should contain the fully qualified name of one extension implementing the extension point, for instance:

com.company.application.impl.MyServiceImpl 

OSGi Services

The SPI approach described above only works when the extension point files are on the classpath.

In an OSGi environment, this is not the case. Luckily, OSGi has its own solution to the extensibility problem: OSGi services.

With Declarative Services, OSGi services are easy to implement, especially when using the annotations of Apache Felix Service Component Runtime (SCR):

import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Service;

@Service
@Component
public class MyServiceImpl implements MyService {
  // ...
}

With OSGi and SCR, it is also very easy to use a service:

import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Reference;

@Component
public class Client {

  @Reference
  private MyService myService;

  protected void bindMyService(MyService bound) {
    myService = bound;
  }

  protected void unbindMyService(MyService bound) {
    if (myService == bound) {
      myService = null;
    }
  }

  public void useService() {
    // ... use myService ...
  }

}

Best of Both Worlds

So which of the two options should you choose? It depends on your situation, of course. When you're in an OSGi environment, the choice should obviously be OSGi services. If you're not in an OSGi environment, you can't use those, so you're left with SPI.

But what if you’re writing a framework or library and you don’t know whether your code will be used in an OSGi or classpath based environment?

You will want to serve as many users of your library as possible, so it's best to support both models. This can be done if you're careful.

Note that adding a Declarative Services service component file like OSGI-INF/myServiceComponent.xml to your jar (which is what the SCR annotations end up doing when they are processed) will only work in an OSGi environment, but is harmless outside OSGi.

Likewise, the SPI service file will work in a traditional classpath environment, but is harmless in OSGi.

So the two approaches don't get in each other's way: in any given environment, only one of them will find anything. Therefore, you can write code that uses both approaches. It's a bit of duplication, but it allows your code to work in both types of environments, so you can have your cake and eat it too.
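As a sketch of what the consuming side could look like if you prefer one programmatic lookup over Declarative Services injection (the class is mine, and it assumes the OSGi core API is on the compile-time classpath):

import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;

public class MyServices {

  public static List<MyService> load() throws Exception {
    List<MyService> result = new ArrayList<>();
    Bundle bundle = FrameworkUtil.getBundle(MyServices.class);
    if (bundle != null) {
      // Running inside OSGi: ask the service registry
      BundleContext context = bundle.getBundleContext();
      for (ServiceReference<MyService> reference
          : context.getServiceReferences(MyService.class, null)) {
        result.add(context.getService(reference));
      }
    } else {
      // Plain classpath: fall back to the Service Provider Interface
      for (MyService service : ServiceLoader.load(MyService.class)) {
        result.add(service);
      }
    }
    return result;
  }
}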