HyperRosetta

The Rosetta Stone is a slab inscribed with the same text in three different scripts. It allowed scholars to decipher Egyptian hieroglyphs.

In this post I’ll introduce a similar “stone” for hypermedia formats, using the RESTBucks example.

I think that seeing concrete hypermedia messages in different formats will make the similarities and differences clearly visible, which will hopefully make it easier to choose the right format for your API.

The text that I’ll use for HyperRosetta, the hypermedia Rosetta stone, is Example 5.6 of the REST in Practice book. For those hypermedia formats that can include templates, I’ll use the payment link with the representation in Example 5.7 of the book.

These are the media types available on HyperRosetta:

  • application/vnd.restbucks+xml
  • application/atom+xml
  • application/xml
  • application/json
  • application/hal+json
  • application/collection+json
  • application/vnd.mason+json
  • application/vnd.siren+json
  • application/vnd.uber+json

If you’d like to see another media type added to this list, please add a comment to this post.

application/vnd.restbucks+xml

This is the representation used in the book:

<order xmlns="http://schemas.restbucks.com/" 
    xmlns:dap="http://schemas.restbucks.com/dap">
  <dap:link mediaType="application/vnd.restbucks+xml"
      uri="http://restbucks.com/order/1234"
      rel="http://relations.restbucks.com/cancel"/>
  <dap:link mediaType="application/vnd.restbucks+xml"
      uri="http://restbucks.com/payment/1234"
      rel="http://relations.restbucks.com/payment"/>
  <dap:link mediaType="application/vnd.restbucks+xml"
      uri="http://restbucks.com/order/1234"
      rel="http://relations.restbucks.com/update"/>
  <dap:link mediaType="application/vnd.restbucks+xml"
      uri="http://restbucks.com/order/1234"
      rel="self"/>
  <item>
    <milk>semi</milk>
    <size>large</size>
    <drink>cappuccino</drink>
  </item>
  <location>takeAway</location>
  <cost>2.0</cost>
  <status>unpaid</status>
</order>

This is an example of a domain-specific media type, so I will not be able to re-use existing client libraries to parse it. Since this format is based on XML, I can use an existing XML parser, but it won’t be able to recognize links in this representation.

This media type doesn’t tell me in the message what HTTP methods to use to dereference the link URIs. So the client has to be programmed with the knowledge that it has to use PUT on the update link, for instance. If the service should ever change that to PATCH, the client will break.
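To make this concrete, here is a minimal sketch of such a client routine. The class and method names are mine; only the namespace and attribute names come from the representation above. Note how both the link-finding code and the map of HTTP methods are RESTBucks-specific knowledge baked into the client:

import java.io.File;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class RestbucksLinks {
  // Knowledge the message doesn't carry: which HTTP method
  // goes with which link relation.
  static final Map<String, String> METHOD_BY_REL = Map.of(
      "http://relations.restbucks.com/cancel", "DELETE",
      "http://relations.restbucks.com/payment", "PUT",
      "http://relations.restbucks.com/update", "PUT");

  static String uriFor(Document order, String rel) {
    NodeList links = order.getElementsByTagNameNS(
        "http://schemas.restbucks.com/dap", "link");
    for (int i = 0; i < links.getLength(); i++) {
      Element link = (Element) links.item(i);
      if (rel.equals(link.getAttribute("rel"))) {
        return link.getAttribute("uri");
      }
    }
    return null;
  }

  public static void main(String[] args) throws Exception {
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setNamespaceAware(true); // required for namespace lookups
    Document order = factory.newDocumentBuilder().parse(new File(args[0]));
    String rel = "http://relations.restbucks.com/update";
    System.out.println(METHOD_BY_REL.get(rel) + " " + uriFor(order, rel));
  }
}

If the service ever changes a method or a namespace, this code breaks, which is exactly the coupling described above.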

The message does tell me the media type to expect in the response, making it easier for the client to decide whether it should follow a link.

The RESTBucks media type is a fiat standard that isn’t registered with IANA.

application/atom+xml

<entry xmlns="http://www.w3.org/2005/Atom">
  <author>
    <name>RESTBucks</name>
  </author>
  <content type="application/vnd.restbucks+xml">
    <order xmlns="http://schemas.restbucks.com/">
      <item>
        <milk>semi</milk>
        <size>large</size>
        <drink>cappuccino</drink>
      </item>
      <location>takeAway</location>
      <cost>2.0</cost>
      <status>unpaid</status>
    </order>
  </content>
  <link type="application/vnd.restbucks+xml"
      href="http://restbucks.com/payment/1234"
      rel="http://relations.restbucks.com/payment"/>
  <link type="application/vnd.restbucks+xml"
      href="http://restbucks.com/order/1234"
      rel="edit"/>
  <link type="application/vnd.restbucks+xml"
      href="http://restbucks.com/order/1234"
      rel="self"/>
</entry>

Atom is a domain-general media type that implements the collection pattern. Since it is not specific to a single service, I can use an Atom library to parse the message and it will be able to find links. It will also know that the edit link will allow me to update and delete the order (using PUT and DELETE respectively).

The Atom library won’t help me with parsing the content, but I can still use an XML library for that. As with the domain-specific media type, I will need to embed knowledge of the HTTP method used for paying into my client.
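For comparison, here is a sketch of the equivalent link lookup against the Atom namespace (again assuming a namespace-aware parser, as in the previous sketch). Unlike the RESTBucks version, this routine would work for any Atom entry, not just orders:

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class AtomLinks {
  // Atom standardizes links, so nothing here is RESTBucks-specific.
  static String hrefFor(Document entry, String rel) {
    NodeList links = entry.getElementsByTagNameNS(
        "http://www.w3.org/2005/Atom", "link");
    for (int i = 0; i < links.getLength(); i++) {
      Element link = (Element) links.item(i);
      if (rel.equals(link.getAttribute("rel"))) {
        return link.getAttribute("href");
      }
    }
    return null;
  }
}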

A disadvantage of this particular domain-general media type is the overhead of things like author that don’t particularly make sense for RESTBucks.

application/xml

<order xmlns="http://schemas.restbucks.com/">
  <link href="http://restbucks.com/order/1234"
      rel="http://relations.restbucks.com/cancel"/>
  <link href="http://restbucks.com/payment/1234"
      rel="http://relations.restbucks.com/payment"/>
  <link href="http://restbucks.com/order/1234"
      rel="http://relations.restbucks.com/update"/>
  <link href="http://restbucks.com/order/1234"
      rel="self"/>
  <item>
    <milk>semi</milk>
    <size>large</size>
    <drink>cappuccino</drink>
  </item>
  <location>takeAway</location>
  <cost>2.0</cost>
  <status>unpaid</status>
</order>

XML is a domain-agnostic format, which means it can be used for anything. The downside is that you won’t know anything about the content until you look at it.

Formally, this means I can’t dispatch on the media type alone anymore, but need to look at the namespace inside the message. In practice people dispatch on the media type anyway. This can lead to parsing problems when the expectation isn’t met, e.g. when an error document is sent instead of an order.

Worse, XML isn’t even a hypermedia type, since it doesn’t describe links. There is the XLink standard for that, but I haven’t seen it used in APIs.

application/json

{ "order": {
  "_links": {
    "http://relations.restbucks.com/cancel": { 
      "href": "http://restbucks.com/order/1234"
    },
    "http://relations.restbucks.com/payment": {
      "href": "http://restbucks.com/payment/1234"
    }
    "http://relations.restbucks.com/update": { 
      "href": "http://restbucks.com/order/1234"
    },
    "self": { 
      "href": "http://restbucks.com/order/1234"
    },
    "item": {
      "milk": "semi",
      "size": "large",
      "drink": "cappuccino"
    }
    "location": "takeAway",
    "cost": 2.0,
    "status": "unpaid"
  }
}

JSON is another domain-agnostic format without linking capabilities, so the same caveats apply as for XML. Despite these significant drawbacks, this is what high-profile projects like Spring use.

application/hal+json

{ "_links": {
    "self": { 
      "href": "http://restbucks.com/order/1234"
    },
    "curies": [{
      "name": "relations",
      "href": "http://relations.restbucks.com/"
    }],
    "relations:cancel": { 
      "href": "http://restbucks.com/order/1234"
    },
    "relations:payment": {
      "href": "http://restbucks.com/payment/1234"
    }
    "relations:update": { 
      "href": "http://restbucks.com/order/1234"
    }
  },
  "_embedded": {
    "item": [{
      "milk": "semi",
      "size": "large",
      "drink": "cappuccino"
    }]
  },
  "location": "takeAway",
  "cost": 2.0,
  "status": "unpaid"
}

HAL is another domain-agnostic format, so the same caveats apply as for JSON.

However, HAL is a real hypermedia type, since it standardizes how to find links (using the _links property). In addition, it also defines how to find embedded objects (using the _embedded property).

HAL uses curies to make the representation a bit more compact. This is nice for humans, but another thing to program into your clients.
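As a sketch of what that programming looks like, here is one way to expand a curied relation back to its full URI, assuming the curies array has already been read into a map from prefix to URI template:

import java.util.Map;

public class Curies {
  static String expand(String rel, Map<String, String> curies) {
    int colon = rel.indexOf(':');
    if (colon < 0) {
      return rel; // not curied, e.g. "self"
    }
    String template = curies.get(rel.substring(0, colon));
    return template == null
        ? rel
        : template.replace("{rel}", rel.substring(colon + 1));
  }
}

Calling Curies.expand("relations:payment", curies) with the curie from the message above then yields http://relations.restbucks.com/payment.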

HAL is currently an Internet-Draft. There is also an XML version of HAL registered.

application/collection+json

{  "collection": {
    "version" : "1.0",
    "href" : "http://restbucks.com/orders",
    "items" : [{
      "href": "http://restbucks.com/order/1234",
      "data": [
        { "name": "item1.milk", "value": "semi" }, 
        { "name": "item1.size", "value": "large" }, 
        { "name": "item1.drink", "value": "cappuccino" }, 
        { "name": "location", "value": "takeAway" }, 
        { "name": "cost", "value": 2.0 }, 
        { "name": "status", "value": "unpaid" }
      ],
      "links" : [{ 
        "rel" : "http://relations.restbucks.com/cancel", 
        "href" : "http://restbucks.com/order/1234"
      }, { 
        "rel" : "http://relations.restbucks.com/payment", 
        "href" : "http://restbucks.com/payment/1234"
      }, { 
        "rel" : "http://relations.restbucks.com/update", 
        "href" : "http://restbucks.com/order/1234"
      }, { 
        "rel" : "self", 
        "href" : "http://restbucks.com/order/1234"
      }]
    }]
  }
}

Collection+JSON (Cj) is the translation of Atom into JSON. Like Atom, it’s a domain-general format for collections.

In Cj, properties are single values only, so I had to cheat and use the item1. prefix to keep the item data in the message. A better solution would probably be to extract the order items into their own collection, but I didn’t want to change the granularity of the messages.

Cj provides a template property that specifies how to add an item to the collection or update an existing one. Unfortunately, that mechanism is specific to collection actions, so we can’t use it to specify how to add a payment, for example. I don’t show the template in this example because it suffers from the same problem as the data property in that it can’t embed objects.

application/vnd.mason+json

{ "@namespaces": {
    "relations": {
      "name": "http://relations.restbucks.com/"
    }
  },
  "item": [{
    "milk": "semi",
    "size": "large",
    "drink": "cappuccino"
  }],
  "location": "takeAway",
  "cost": 2.0,
  "status": "unpaid",
  "@links": {
    "self": { 
      "href": "http://restbucks.com/order/1234"
    }
  },
  "@actions": {
    "relations:cancel": { 
      "href": "http://restbucks.com/order/1234",
      "type": "void",
      "method": "DELETE"
    },
    "relations:payment": {
      "href": "http://restbucks.com/payment/1234",
      "title": "Pay the order",
      "type": "any",
      "method": "PUT",
      "template": {
        "payment": {
          "amount": 2.0,
          "cardholderName": "",
          "cardNumber": "",
          "expiryMonth": "",
          "expiryYear": ""
        }
      }
    },
    "relations:update": { 
      "href": "http://restbucks.com/order/1234",
      "type": "any",
      "method": "PUT",
      "template": {
        "item": [{
          "milk": "semi",
          "size": "large",
          "drink": "cappuccino"
        }],
        "location": "takeAway",
        "cost": 2.0,
        "status": "unpaid"
      }
    }
  }
}

Mason is a superset of HAL. It adds actions and errors (not shown in this example), which turns it into a full hypermedia type (level 3b). Mason also supports curies.

A peculiarity of the actions is the type property, which can be void, json, json-files, or any. I’d expected a media type there.

application/vnd.siren+json

{ "class": [ "order" ],
  "properties": [{
    "location": "takeAway",
    "cost": 2.0,
    "status": "unpaid"
  },
  "entities": [{
    "class": [ "item" ],
    "rel": [ "http://relations.restbucks.com/item" ],
    "properties": {
      "milk": "semi",
      "size": "large",
      "drink": "cappuccino"
    }
  }],
  "actions": [{
    "name": "http://relations.restbucks.com/cancel",
    "title": "Cancel the order",
    "method": "DELETE",
    "href": "http://restbucks.com/order/1234"
  }, {
    "name": "http://relations.restbucks.com/payment",
    "title": "Pay the order",
    "href": "http://restbucks.com/payment/1234",
    "type": "application/vnd.siren+json",
    "method": "PUT",
    "fields": [{
      "name": "amount",
      "type": "number",
      "value": 2.0
    }, {
      "name": "cardholderName",
      "type": "text"
    }, {
      "name": "cardNumber",
      "type": "text"
    }, {
      "name": "expiryMonth",
      "type": "number"
    }, {
      "name": "expiryYear",
      "type": "number"
    }]
  }, {
    "name": "http://relations.restbucks.com/update",
    "href": "http://restbucks.com/order/1234",
    "method": "PUT",
    "type": "application/vnd.siren.json"
  }],
  "links": [{ 
    "rel": "self",
    "href": "http://restbucks.com/order/1234"
  }]
}

Siren is a full hypermedia type that includes the HTTP methods and media types to use. It also allows specifying the application semantics using the class, rel, and name properties. This makes it suitable for level 4 APIs.

Siren specifies the type for fields, so that clients can render appropriate UIs.

application/vnd.uber+json

{ "uber": {
  "version": "1.0",
  "data": [{
    "rel": [ "self" ],
    "url": "http://restbucks.com/order/1234"
  }, {
    "id": "order",
    "data": [{
      "name": "item",
      "data": [{
        "name": "milk",
        "value": "semi"
      }, {
        "name": "size", 
        "value": "large",
      }, {
        "name": "drink",
        "value": "cappuccino"
      }]
    }, {
      "name": "location",
      "value": "takeAway"
    }, {
      "name": "cost",
      "value": 2.0
    }, {
      "name": "status",
      "value": "unpaid"
    }]
  }, {
    "rel": "http://relations.restbucks.com/cancel",
    "url": "http://restbucks.com/payment/1234",
    "action": "remove"
  }, {
    "rel": "http://relations.restbucks.com/payment",
    "url": "http://restbucks.com/payment/1234",
    "sending": "application/vnd.uber+json",
    "action": "replace",
    "data": [{
      "name": "amount",
      "value": 2.0
    }, {
      "name": "cardholderName",
    }, {
      "name": "cardNumber",
    }, {
      "name": "expiryMonth",
    }, {
      "name": "expiryYear",
    }],
  }, {
    "rel": "http://relations.restbucks.com/update",
    "url": "http://restbucks.com/order/1234",
    "action": "replace",
    "sending": "application/vnd.uber.json"
  }]
  }
}

UBER is a full hypermedia format with a lean message structure. It combines into the single data property what other media types separate into links, actions, and properties, which makes it a bit hard for humans to read.

UBER allows specifying application semantics using the name and rel properties, making it a level 4 capable media type.

There is also an XML variant of UBER.


REST 101 For Developers


Local Code Execution

Functions in high-level languages like C are compiled into procedures in assembly. They add a level of indirection that frees us from having to think about memory addresses.

Methods and polymorphism in object-oriented languages like Java add another level of indirection that frees us from having to think about the specific variant of a set of similar functions.

Despite these indirections, methods are basically still procedure calls, telling the computer to switch execution flow from one memory location to another. All of this happens in the same process running on the same computer.

Remote Code Execution

This is fundamentally different from switching execution to another process or another computer. Especially the latter is very different, as the other computer may not even have the same operating system through which programs access memory.

It is therefore no surprise that mechanisms of remote code execution that try to hide this difference as much as possible, like RMI or SOAP, have largely failed. Such technologies employ what is known as Remote Procedure Calls (RPCs).

One reason we must distinguish between local and remote procedure calls is that RPCs are a lot slower.

For most practical applications, this changes the nature of the calls you make: you’ll want to make fewer remote calls that are more coarse-grained.

Another reason is more organizational than technical in nature.

When the code you’re calling lives in another process on another computer, chances are that the other process is written and deployed by someone else. For the two pieces of code to cooperate well, some form of coordination is required. That’s the price we pay for coupling.

Coordinating Change With Interfaces

We can also see this problem in a single process, for instance when code is deployed in different jar files. If you upgrade a third party jar file that your code depends on, you may need to change your code to keep everything working.

Such coordination is annoying. It would be much nicer if we could simply deploy the latest security patch of that jar without having to worry about breaking our code. Fortunately, we can if we’re careful.

Interfaces in languages like Java separate the public and private parts of code.

The public part is what clients depend on, so you must evolve interfaces in careful ways to avoid breaking clients.

The private part, in contrast, can be changed at will.

From Interfaces to Services

In OSGi, interfaces are the basis for what are called micro-services. By publishing services in a registry, we can remove the need for clients to know what object implements a given interface. In other words, clients can discover the identity of the object that provides the service. The service registry becomes our entry point for accessing functionality.

There is a reason these interfaces are referred to as micro-services: they are miniature versions of the services that make up a Service Oriented Architecture (SOA).

A straightforward extrapolation of micro-services to “SOA services” leads to RPC-style implementations, for instance with SOAP. However, we’ve established earlier that RPCs are not the best way to invoke remote code.

Enter REST.

RESTful Services

Representational State Transfer (REST) is an architectural style that brings the advantages of the Web to the world of programs.

There is no denying the scalability of the Web, so this is an interesting angle.

Instead of explaining REST as it’s usually done by exploring its architectural constraints, let’s compare it to micro-services.

A well-designed RESTful service has a single entry point, like the micro-services registry. This entry point may take the form of a home resource.

We access the home resource like any other resource: through a representation. A representation is a series of bytes that we need to interpret. The rules for this interpretation are given by the media type.

Most RESTful services these days serve representations based on JSON or XML. The media type of a resource compares to the interface of an object.

Some interfaces contain methods that give us access to other interfaces. Similarly, a representation of a resource may contain hyperlinks to other resources.

Code-Based vs Data-Based Services

The difference between REST and SOAP is now becoming apparent.

In SOAP, like in micro-services, the interface is made up of methods. In other words, it’s code based.

In REST, on the other hand, the interface is made up of code and data. We’ve already seen the data: the representation described by the media type. The code is the uniform interface, which means that it’s the same (uniform) for all resources.

In practice, the uniform interface consists of the HTTP methods GET, POST, PUT, and DELETE.

Since the uniform interface is fixed for all resources, the real juice in any RESTful service is not in the code, but in the data: the media type.

Just as there are rules for evolving a Java interface, there are rules for evolving a media type, for example for XML-based media types. (From this it follows that you can’t use XML Schema validation for XML-based media types.)

Uniform Resource Identifiers

So far I haven’t mentioned Uniform Resource Identifiers (URIs). The documentation of many so-called RESTful services may give you the impression that they are important.

However, since URIs identify resources, their equivalent in micro-services are the identities of the objects implementing the interfaces.

Hopefully this shows that clients shouldn’t care about URIs. Only the URI of the home resource is important.

The representation of the home resource contains links to other resources. The meaning of those links is indicated by link relations.

Through its understanding of link relations, a client can decide which links it wants to follow and discover their URIs from the representation.
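As an illustration, here is a hedged sketch of that discovery step, assuming a HAL-style representation (one of several hypermedia formats) and the Jakarta JSON Processing API as a dependency. Only the home resource URI is configured in the client; every other URI is discovered through link relations:

import java.io.StringReader;
import jakarta.json.Json;
import jakarta.json.JsonObject;

public class LinkFollower {
  // Look up the URI for a link relation instead of hard-coding it.
  static String uriFor(String representation, String rel) {
    JsonObject root = Json.createReader(new StringReader(representation))
        .readObject();
    JsonObject link = root.getJsonObject("_links").getJsonObject(rel);
    return link == null ? null : link.getString("href");
  }
}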

Versions of Services

As much as possible, we should follow the rules for evolving media types and not introduce any breaking changes.

However, sometimes that might be unavoidable. We should then create a new version of the service.

Since URIs are not part of the public interface of a RESTful API, they are not the right vehicle for relaying version information. The correct way to indicate major (i.e. non-compatible) versions of an API can be derived by comparison with micro-services.

Whenever a service introduces a breaking change, it should change its interface. In a RESTful API, this means changing the media type. The client can then use content negotiation to request a media type it understands.

What Do You Think?

Literature explaining how to design and document code-based interfaces is readily available.

This is not the case for data-based interfaces like media types.

With RESTful services becoming ever more popular, that is a gap that needs filling. I’ll get back to this topic in the future.

How do you design your services? How do you document them? Please share your ideas in the comments.

How To Start With Software Security – Part 2

Last time, I wrote about how an organization can get started with software security.

Today I will look at how to do that as an individual.

From Development To Secure Development

As a developer, I wasn’t always aware of the security implications of my actions.

Now that I’m the Engineering Security Champion for my project, I have to be.

It wasn’t an easy transition. The security field is vast and I keep learning something new almost every day. I read a number of books on security, some of which I reviewed on this site.

As an aspiring software craftsman, I realize that personal efforts are only half the story. The other half is the community of professionals.

Secure Development Communities

I’m lucky to work in a big organization, where such a community already exists.

EMC’s Product Security Office (PSO) provides me with a personal security adviser, maintains a security-related wiki, and operates a space on our internal collaboration environment.

If your organization doesn’t have something like our PSO, you can look elsewhere. (And if it does, you should look outside too!)

OWASP is a great place to start.

They actually have three sub-communities, one of which is for Builders.

But it’s also good to look at the other sub-communities, since they’re all related. Looking at things from the perspective of the others can be quite enlightening.

That’s also why it’s a good idea to attend a security conference, if you can. OWASP holds annual AppSec conferences in three geos. The RSA Conference is another good place to meet your peers.

If you can’t afford to attend a conference, you can always follow the security section of Stack Exchange or watch SecurityTube.

Contributing To The Community

So far I’ve talked about taking in information, but you shouldn’t forget to share your personal experiences as well.

You may think you know very little yet, but even then it’s valuable to share.

It helps to organize your thoughts, which is crucial when learning, and you may find you gain insights from the comments that readers leave as well.

More to the point, there are many others out there that are getting started and who would benefit from seeing they are not alone.

Apart from posting to this blog, I also contribute to the EMC Developer Network, where I’m currently writing a series on XML and Security.

There are other ways to contribute as well. You could join or start an OWASP chapter, for instance.

What Do You Think?

How did you get started with software security? How do you keep up with the field? What communities are you part of? Please leave a comment.

How To Implement Input Validation For REST resources

The SaaS platform I’m working on has a RESTful interface that accepts XML payloads.

Implementing REST Resources

For a Java shop like us, it makes sense to use JAX-B to generate JavaBean classes from an XML Schema.

Working with XML (and JSON) payloads using JAX-B is very easy in a JAX-RS environment like Jersey:

@Path("orders")
public class OrdersResource {
  @POST
  @Consumes({ "application/xml", "application/json" })
  public void place(Order order) {
    // Jersey unmarshals the XML payload into the Order
    // JavaBean, allowing us to write type-safe code
    // using Order's getters and setters.
    int quantity = order.getQuantity();
    // ...
  }
}

(Note that you shouldn’t use these generic media types, but that’s a discussion for another day.)

The remainder of this post assumes JAX-B, but its main point is valid for other technologies as well. Whatever you do, please don’t use XMLDecoder, since that is open to a host of vulnerabilities.

Securing REST Resources

Let’s suppose the order’s quantity is used for billing, and we want to prevent people from stealing our money by entering a negative amount.

We can do that with input validation, one of the most important tools in the AppSec toolkit. Let’s look at some ways to implement it.

Input Validation With XML Schema

We could rely on XML Schema for validation, but XML Schema can only validate so much.

Validating individual properties will probably work fine, but things get hairy when we want to validate relations between properties. For maximum flexibility, we’d like to use Java to express constraints.

More importantly, schema validation is generally not a good idea in a REST service.

A major goal of REST is to decouple client and server so that they can evolve separately.

If we validate against a schema, then a new client that sends a new property would break against an old server that doesn’t understand the new property. It’s usually better to silently ignore properties you don’t understand.

JAX-B does this right, and also the other way around: properties that are not sent by an old client end up as null. Consequently, the new server must be careful to handle null values properly.
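For example, the server-side handling could look like this (Order.getDiscount() is a hypothetical property introduced after the old clients were written):

import java.math.BigDecimal;

public class OrderDefaults {
  // An old client doesn't send the (hypothetical) discount property,
  // so JAX-B leaves it null and we default it instead of failing.
  static BigDecimal discountOf(Order order) {
    return order.getDiscount() == null
        ? BigDecimal.ZERO
        : order.getDiscount();
  }
}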

Input Validation With Bean Validation

If we can’t use schema validation, then what about using JSR 303 Bean Validation?

Jersey supports Bean Validation by adding the jersey-bean-validation jar to your classpath.

There is an unofficial Maven plugin to add Bean Validation annotations to the JAX-B generated classes, but I’d rather use something better supported and that works with Gradle.

So let’s turn things around. We’ll handcraft our JavaBean and generate the XML Schema from the bean for documentation:

@XmlRootElement(name = "order")
public class Order {
  @XmlElement
  @Min(1)
  public int quantity;
}
@Path("orders")
public class OrdersResource {
  @POST
  @Consumes({ "application/xml", "application/json" })
  public void place(@Valid Order order) {
    // Jersey recognizes the @Valid annotation and
    // returns 400 when the JavaBean is not valid
  }
}

Any attempt to POST an order with a non-positive quantity will now give a 400 Bad Request status.

Now suppose we want to allow clients to change their pending orders. We’d use PATCH or PUT to update individual order properties, like quantity:

@Path("orders")
public class OrdersResource {
  @Path("{id}")
  @PUT
  @Consumes("application/x-www-form-urlencoded")
  public Order update(@PathParam("id") String id, 
      @Min(1) @FormParam("quantity") int quantity) {
    // ...
  }
}

We need to add the @Min annotation here too, which is duplication. To make this DRY, we can turn quantity into a class that is responsible for validation:

@Path("orders")
public class OrdersResource {
  @Path("{id}")
  @PUT
  @Consumes("application/x-www-form-urlencoded")
  public Order update(@PathParam("id") String id, 
      @FormParam("quantity")
      Quantity quantity) {
    // ...
  }
}
@XmlRootElement(name = "order")
public class Order {
  @XmlElement
  public Quantity quantity;
}
public class Quantity {
  private int value;

  public Quantity() { }

  public Quantity(String value) {
    try {
      setValue(Integer.parseInt(value));
    } catch (ValidationException e) {
      throw new IllegalArgumentException(e);
    }
  }

  public int getValue() {
    return value;
  }

  @XmlValue
  public void setValue(int value) 
      throws ValidationException {
    if (value < 1) {
      throw new ValidationException(
          "Quantity value must be positive, but is: " 
          + value);
    }
    this.value = value;
  }
}

We need a public no-arg constructor for JAX-B to be able to unmarshall the payload into a JavaBean and another constructor that takes a String for the @FormParam to work.

setValue() throws javax.xml.bind.ValidationException so that JAX-B will stop unmarshalling. However, Jersey returns a 500 Internal Server Error when it sees an exception.

We can fix that by mapping validation exceptions onto 400 status codes using an exception mapper. While we’re at it, let’s do the same for IllegalArgumentException:

@Provider
public class DefaultExceptionMapper 
    implements ExceptionMapper<Throwable> {

  @Override
  public Response toResponse(Throwable exception) {
    Throwable badRequestException 
        = getBadRequestException(exception);
    if (badRequestException != null) {
      return Response.status(Status.BAD_REQUEST)
          .entity(badRequestException.getMessage())
          .build();
    }
    if (exception instanceof WebApplicationException) {
      return ((WebApplicationException)exception)
          .getResponse();
    }
    return Response.serverError()
        .entity(exception.getMessage())
        .build();
  }

  private Throwable getBadRequestException(
      Throwable exception) {
    if (exception instanceof ValidationException) {
      return exception;
    }
    Throwable cause = exception.getCause();
    if (cause != null && cause != exception) {
      Throwable result = getBadRequestException(cause);
      if (result != null) {
        return result;
      }
    }
    if (exception instanceof IllegalArgumentException) {
      return exception;
    }
    if (exception instanceof BadRequestException) {
      return exception;
    }
    return null;
  }

}

Input Validation By Domain Objects

Even though the approach outlined above will work quite well for many applications, it is fundamentally flawed.

At first sight, proponents of Domain-Driven Design (DDD) might like the idea of creating the Quantity class.

But the Order and Quantity classes do not model domain concepts; they model REST representations. This distinction may be subtle, but it is important.

DDD deals with domain concepts, while REST deals with representations of those concepts. Domain concepts are discovered, but representations are designed and are subject to all kinds of trade-offs.

For instance, a collection REST resource may use paging to prevent sending too much data over the wire. Another REST resource may combine several domain concepts to make the client-server protocol less chatty.

A REST resource may even have no corresponding domain concept at all. For example, a POST may return 202 Accepted and point to a REST resource that represents the progress of an asynchronous transaction.

Domain objects need to capture the ubiquitous language as closely as possible, and must be free from such trade-offs.

When designing REST resources, on the other hand, one needs to make trade-offs to meet non-functional requirements like performance, scalability, and evolvability.

That’s why I don’t think an approach like RESTful Objects will work. (For similar reasons, I don’t believe in Naked Objects for the UI.)

Adding validation to the JavaBeans that are our resource representations means that those beans now have two reasons to change, which is a clear violation of the Single Responsibility Principle.

We get a much cleaner architecture when we use JAX-B JavaBeans only for our REST representations and create separate domain objects that handle validation.

Putting validation in domain objects is what Dan Bergh Johnsson refers to as Domain-Driven Security.

In this approach, primitive types are replaced with value objects. (Some people even argue against using any Strings at all.)

At first it may seem overkill to create a whole new class to hold a single integer, but I urge you to give it a try. You may find that getting rid of primitive obsession provides value even beyond validation.

What do you think?

How do you handle input validation in your RESTful services? What do you think of Domain-Driven Security? Please leave a comment.

The Lazy Developer’s Way to an Up-To-Date Libraries List

Last time I shared some tips on how to use libraries well. I now want to delve deeper into one of those: Know What Libraries You Use.

Last week I set out to create such a list of embedded components for our product. This is a requirement for our Security Development Lifecycle (SDL).

However, it’s not a fun task. As a developer, I want to write code, not update documents! So I turned to my friends Gradle and Groovy, with a little help from Jenkins and Confluence.

Gradle Dependencies

We use Gradle to build our product, and Gradle maintains the dependencies we have on third-party components.

Our build defines a list of names of configurations for embedded components, copyBundleConfigurations, for copying those to the distribution directory. From there, I get to the external dependencies using Groovy’s collection methods:

def externalDependencies() {
  copyBundleConfigurations.collectMany { 
    configurations[it].allDependencies 
  }.findAll {
    !(it instanceof ProjectDependency) && it.group &&
        !it.group.startsWith('com.emc')
  }
}

Adding Required Information

However, Gradle dependencies don’t contain all the required information.

For instance, we need the license under which the library is distributed, so that we can ask the Legal department permission for using it.

So I added a simple XML file to hold the additional info. Combining that information with the dependencies that Gradle maintains is easy using Groovy’s XML support:

ext.embeddedComponentsInfo = 'embeddedComponents.xml'

def externalDependencyInfos() {
  def result = new TreeMap()
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().each { dependency ->
    def info = componentInfo.component.find { 
      it.id == "$dependency.group:$dependency.name" &&
          it.friendlyName?.text() 
    }
    if (!info.isEmpty()) {
      def component = [
        'id': info.id,
        'friendlyName': info.friendlyName.text(),
        'version': dependency.version,
        'latestVersion': info.latestVersion.text(),
        'license': info.license.text(),
        'licenseUrl': info.licenseUrl.text(),
        'comment': info.comment.text()
      ]
      result.put component.friendlyName, component
    }
  }
  result.values()
}

I then created a Gradle task to write the information to an HTML file. Our Jenkins build executes this task, so that we always have an up-to-date list. I used Confluence’s html-include macro to include the HTML file in our Wiki.

Now our Wiki is always up-to-date.

Automatically Looking Up Missing Information

The next problem was to populate the XML file with additional information.

Had we had this file from the start, adding that information manually would not have been a big deal. In our case, we already had over a hundred dependencies, so automation was in order.

First I identified the components that miss the required information:

def missingExternalDependencies() {
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().findAll { dependency ->
    componentInfo.component.find { 
      it.id == "$dependency.group:$dependency.name" && 
          it.friendlyName?.text() 
    }.isEmpty()
  }.collect {
    "$it.group:$it.name"
  }.sort()
}

Next, I wanted to automatically look up the missing information and add it to the XML file (using Groovy’s MarkupBuilder). In case the required information can’t be found, the build should fail:

project.afterEvaluate {
  def missingComponents = missingExternalDependencies()
  if (!missingComponents.isEmpty()) {
    def manualComponents = []
    def writer = new StringWriter() 
    def xml = new MarkupBuilder(writer)
    xml.expandEmptyElements = true
    println 'Looking up information on new dependencies:'
    xml.components {
      externalDependencyInfos().each { existingComponent ->
        component { 
          id(existingComponent.id)
          friendlyName(existingComponent.friendlyName)
          latestVersion(existingComponent.latestVersion)
          license(existingComponent.license)
          licenseUrl(existingComponent.licenseUrl)
          approved(existingComponent.approved)
          comment(existingComponent.comment)
        }
      }
      missingComponents.each { missingComponent ->
        def lookedUpComponent = collectInfo(missingComponent)
        component {
          id(missingComponent)
          friendlyName(lookedUpComponent.friendlyName)
          latestVersion(lookedUpComponent.latestVersion)
          license(lookedUpComponent.license)
          licenseUrl(lookedUpComponent.licenseUrl)
          approved('?')
          comment(lookedUpComponent.comment)
        }
        if (!lookedUpComponent.friendlyName || 
            !lookedUpComponent.latestVersion || 
            !lookedUpComponent.license) {
          manualComponents.add lookedUpComponent.id
          println '    => Please enter information manually'
        }
      }
    }
    writer.close()
    def embeddedComponentsFile = 
        project.file(embeddedComponentsInfo)
    embeddedComponentsFile.text = writer.toString()
    if (!manualComponents.isEmpty()) {
      throw new GradleException('Missing library information')
    }
  }
}

Anyone who adds a dependency in the future is now forced to add the required information.

So all that is left to implement is the collectInfo() method.

There are two primary sources that I used to look up the required information: the SpringSource Enterprise Bundle Repository holds OSGi bundle versions of common libraries, while Maven Central holds regular jars.

Extracting information from those sources is a matter of downloading and parsing XML and HTML files. This is easy enough with Groovy’s String.toURL() and URL.eachLine() methods and support for regular expressions.

Conclusion

All of this took me a couple of days to build, but I feel that the investment is well worth it, since I no longer have to worry about the list of used libraries being out of date.

How do you maintain a list of used libraries? Please let me know in the comments.

Book review: Secure Programming with Static Analysis

One thing that should be part of every Security Development Lifecycle (SDL) is static code analysis.

This topic is explained in great detail in Secure Programming with Static Analysis.

Chapter 1, The Software Security Problem, explains why security is easy to get wrong and why typical methods for catching bugs aren’t effective for finding security vulnerabilities.

Chapter 2, Introduction to Static Analysis, explains that static analysis involves a software program checking the source code of another software program to find structural, quality, and security problems.

Chapter 3, Static Analysis as Part of Code Review, explains how static code analysis can be integrated into a security review process.

Chapter 4, Static Analysis Internals, describes how static analysis tools work internally and what trade-offs are made when building them.

This concludes the first part of the book that describes the big picture. Part two deals with pervasive security problems.

Chapter 5, Handling Input, describes how programs should deal with untrustworthy input.

Chapter 6, Buffer Overflow, and chapter 7, Bride of Buffer Overflow, deal with buffer overflows. These chapters are not as interesting for developers working with modern languages like Java or C#.

Chapter 8, Errors and Exceptions, talks about unexpected conditions and the link with security issues. It also handles logging and debugging.

Chapter 9, Web Applications, starts the third part of the book about common types of programs. This chapter looks at security problems specific to the Web and HTTP.

Chapter 10, XML and Web Services, discusses the security challenges associated with XML and with building up applications from distributed components.

Chapter 11, Privacy and Secrets, switches the focus from AppSec to InfoSec with an explanation of how to protect private information.

Chapter 12, Privileged Programs, continues with a discussion on how to write programs that operate with different permissions than the user.

The final part of the book is about gaining experience with static analysis tools.

Chapter 13, Source Code Analysis Exercises for Java, is a tutorial on how to use Fortify (a trial version of which is included with the book) on some sample Java projects.

Chapter 14, Source Code Analysis Exercises for C, does the same for C programs.

Rating: five out of five stars.

This book is very useful for anybody working with static analysis tools. Its description of the internals of such tools helps with understanding how to apply the tools best.

I like that the book is filled with numerous examples that show how the tools can detect a particular type of problem.

Finally, the book makes clear that any static analysis tool will give both false positives and false negatives. You should really understand security issues yourself to make good decisions. When you know how to do that, a static analysis tool can be a great help.

Supporting Multiple XACML Representations

We’re in the process of registering an XML media type for the eXtensible Access Control Markup Language (XACML). Simultaneously, the XACML Technical Committee is working on a JSON format.

Both media types are useful in the context of another committee effort, the REST profile. This post explains what benefit these profiles will bring once approved, and how to support them in clients and servers.

Media Types Support Content Negotiation

With the REST profile, any application can communicate with a Policy Decision Point (PDP) in a RESTful manner. The media types make it possible to communicate with such a PDP in a manner that is most convenient for the client, using a process called content negotiation.

For instance, a web application that is mainly implemented in JavaScript may prefer to use JSON for communication with the PDP, to avoid having to bring in infrastructure to deal with XML.

Content negotiation is not just a convenience feature, however. It also facilitates evolution.

A server with many clients that understand XACML 2.0 may start serving 3.0 as well, for instance. The older clients stay functional using 2.0, whereas newer clients can communicate in 3.0 syntax with the same server.

This avoids having to upgrade all the clients at the same time as the server.

So how does a server that supports multiple versions and/or formats know which one to serve to a particular client? The answer is the Accept HTTP header. For instance, a client can send Accept: application/xacml+xml; version=2.0 to get an XACML 2.0 XML format, or Accept: application/xacml+json; version=3.0 to get an XACML 3.0 JSON answer.

The value for the Accept header is a list of media types that are acceptable to the client, in decreasing order of precedence. For instance, a new client could prefer 3.0, but still work with older servers that only support 2.0 by sending Accept: application/xacml+xml; version=3.0, application/xacml+xml; version=2.0.
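In code, such a request could look like this. This is a hedged sketch using Java 11’s java.net.http.HttpClient; the PDP endpoint URL is made up, and the profile details (such as the exact resource to POST to) are assumptions:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PdpClient {
  static String evaluate(String xacmlRequest)
      throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(
            URI.create("https://pdp.example.com/authorization"))
        .header("Content-Type", "application/xacml+xml; version=3.0")
        // List acceptable media types in decreasing order of precedence.
        .header("Accept",
            "application/xacml+xml; version=3.0, "
                + "application/xacml+xml; version=2.0")
        .POST(HttpRequest.BodyPublishers.ofString(xacmlRequest))
        .build();
    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());
    // Dispatch on the media type the server actually returned.
    String contentType =
        response.headers().firstValue("Content-Type").orElse("");
    System.out.println("Got " + contentType);
    return response.body();
  }
}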

Supporting Multiple Versions and Formats

So there is value for both servers and clients to support multiple versions and/or formats. Now how does one go about implementing this? The short answer is: using indirection.

The longer answer is to make an abstraction for the version/format combination. We’ll dub this abstraction a representation.

For instance, an XACML request is really not much more than a collection of categorized attributes, while a response is basically a collection of results.

Instead of working with, say, the XACML 3.0 XML form of a request, the client or server code should work with the abstract representation. For each version/format combination, you then add a parser and a builder.

The parser reads the concrete syntax and creates the abstract representation from it. Conversely, the builder takes the abstract representation and converts it to the desired concrete syntax.
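Sketched in Java, the abstraction could look like this (all type names are illustrative; XacmlRequest stands for the abstract representation):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public interface RequestRepresentation {
  // The version/format combination this implementation handles,
  // e.g. "application/xacml+xml; version=3.0".
  String mediaType();

  // Parser: concrete syntax -> abstract representation.
  XacmlRequest parse(InputStream in) throws IOException;

  // Builder: abstract representation -> concrete syntax.
  void build(XacmlRequest request, OutputStream out) throws IOException;
}

A registry keyed on media type then selects the right implementation, so supporting a new version or format is a matter of registering one more object.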

In many cases, you can re-use parts of the parsers and builders between representations. For instance, all the XML formats of XACML have in common that they require XML parsing/serialization.

In a design like this, no code ever needs to be modified when a new version of the specification or a new serialization format comes out. All you have to do is add a parser and a builder, and all the other code can stay the way it is.

The only exception is when a new version introduces new capabilities and your code wants to use those. In that case, you probably must also change the abstract representation to accommodate the new functionality.

Behavior-Driven Development (BDD) with JBehave, Gradle, and Jenkins

Behavior-Driven Development (BDD) is a collaborative process where the Product Owner, developers, and testers cooperate to deliver software that brings value to the business.

BDD is the logical next step up from Test-Driven Development (TDD).

Behavior-Driven Development

In essence, BDD is a way to deliver requirements. But not just any requirements, executable ones! With BDD, you write scenarios in a format that can be run against the software to ascertain whether the software behaves as desired.

Scenarios

Scenarios are written in Given, When, Then format, also known as Gherkin:

Given the ATM has $250
And my balance is $200
When I withdraw $150
Then the ATM has $100
And my balance is $50

Given indicates the initial context, When indicates the occurrence of an interesting event, and Then asserts an expected outcome. And may be used in place of a repeated keyword, to make the scenario more readable.

Given/When/Then is a very powerful idiom that allows virtually any requirement to be described. Scenarios in this format are also easily parsed, so that we can automatically run them.

BDD scenarios are great for developers, since they provide quick and unequivocal feedback about whether the story is done. Not only the main success scenario, but also alternate and exception scenarios can be provided, as can abuse cases. The latter requires that the Product Owner not only collaborates with testers and developers, but also with security specialists. The payoff is that it becomes easier to manage security requirements.

Even though BDD is really about the collaborative process and not about tools, I’m going to focus on tools for the remainder of this post. Please keep in mind that tools can never save you, while communication and collaboration can. With that caveat out of the way, let’s get started on implementing BDD with some open source tools.

JBehave

JBehave is a BDD tool for Java. It parses the scenarios from story files, maps them to Java code, runs them via JUnit tests, and generates reports.

Eclipse

JBehave has a plug-in for Eclipse that makes writing stories easier with features such as syntax highlighting/checking, step completion, and navigation to the step implementation.

JUnit

Here’s how we run our stories using JUnit:

@RunWith(AnnotatedEmbedderRunner.class)
@UsingEmbedder(embedder = Embedder.class, generateViewAfterStories = true,
    ignoreFailureInStories = true, ignoreFailureInView = false, 
    verboseFailures = true)
@UsingSteps(instances = { NgisRestSteps.class })
public class StoriesTest extends JUnitStories {

  @Override
  protected List<String> storyPaths() {
    // Either explicit story files (separated by the path separator)
    // or a filter pattern from the bdd.stories system property.
    String storyPaths = System.getProperty("bdd.stories");
    if (storyPaths != null && storyPaths.contains(File.pathSeparator)) {
      return specifiedStoryPaths(storyPaths);
    }
    return new StoryFinder().findPaths(
        CodeLocations.codeLocationFromClass(getClass()).getFile(),
        Arrays.asList(getStoryFilter(storyPaths)), null);
  }

  private String getStoryFilter(String storyPaths) {
    if (storyPaths == null) {
      return "*.story";
    }
    if (storyPaths.endsWith(".story")) {
      return storyPaths;
    }
    return storyPaths + ".story";
  }

  private List<String> specifiedStoryPaths(String storyPaths) {
    List<String> result = new ArrayList<String>();
    URI cwd = new File("src/test/resources").toURI();
    for (String storyPath : storyPaths.split(File.pathSeparator)) {
      File storyFile = new File(storyPath);
      if (!storyFile.exists()) {
        throw new IllegalArgumentException("Story file not found: " 
          + storyPath);
      }
      result.add(cwd.relativize(storyFile.toURI()).toString());
    }
    return result;
  }

  @Override
  public Configuration configuration() {
    return super.configuration()
        .useStoryReporterBuilder(new StoryReporterBuilder()
            .withFormats(Format.XML, Format.STATS, Format.CONSOLE)
            .withRelativeDirectory("../build/jbehave")
        )
        .usePendingStepStrategy(new FailingUponPendingStep())
        .useFailureStrategy(new SilentlyAbsorbingFailure());
  }

}

This uses JUnit 4’s @RunWith annotation to indicate the class that will run the test. The AnnotatedEmbedderRunner is a JUnit Runner that JBehave provides. It looks for the @UsingEmbedder annotation to determine how to run the stories:

  • generateViewAfterStories instructs JBehave to create a test report after running the stories
  • ignoreFailureInStories prevents JBehave from throwing an exception when a story fails. This is essential for the integration with Jenkins, as we’ll see below

The @UsingSteps annotation links the steps in the scenarios to Java code. More on that below. You can list more than one class.

Our test class re-uses the JUnitStories class from JBehave that makes it easy to run multiple stories. We only have to implement two methods: storyPaths() and configuration().

The storyPaths() method tells JBehave where to find the stories to run. Our version is a little bit complicated because we want to be able to run tests from both our IDE and from the command line and because we want to be able to run either all stories or a specific sub-set.

We use the system property bdd.stories to indicate which stories to run. This includes support for wildcards. Our naming convention requires that the story file names start with the persona, so we can easily run all stories for a single persona using something like -Dbdd.stories=wanda_*.

The configuration() method tells JBehave how to run stories and report on them. We need output in XML for further processing in Jenkins, as we’ll see below.

One thing of interest is the location of the reports. JBehave supports Maven, which is fine, but it assumes that everybody follows Maven conventions, which is really not the case. The output goes into a directory called target by default, but we can override that by specifying a path relative to the target directory. We use Gradle instead of Maven, and Gradle’s temporary files go into the build directory, not target. More on Gradle below.

Steps

Now we can run our stories, but they will fail. We need to tell JBehave how to map the Given/When/Then steps in the scenarios to Java code. The Steps classes determine what the vocabulary is that can be used in the scenarios. As such, they define a Domain Specific Language (DSL) for acceptance testing our application.
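As an illustration, here is a hedged sketch of a Steps class for the ATM scenario shown earlier. JBehave uses $ to mark parameters in step text by default, so this sketch assumes the amounts in the scenario are written without the currency symbol; in a real project the steps would drive the application rather than two fields:

import static org.junit.Assert.assertEquals;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class AtmSteps {
  private int atmContents;
  private int balance;

  @Given("the ATM has $amount")
  public void givenTheAtmHas(int amount) {
    atmContents = amount;
  }

  @Given("my balance is $amount")
  public void givenMyBalanceIs(int amount) {
    balance = amount;
  }

  @When("I withdraw $amount")
  public void whenIWithdraw(int amount) {
    atmContents -= amount;
    balance -= amount;
  }

  @Then("the ATM has $amount")
  public void thenTheAtmHas(int amount) {
    assertEquals(amount, atmContents);
  }

  @Then("my balance is $amount")
  public void thenMyBalanceIs(int amount) {
    assertEquals(amount, balance);
  }
}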

Our application has a RESTful interface, so we wrote a generic REST DSL. However, due to the HATEOAS constraint in REST, a client needs a lot of calls to discover the URIs that it should use. Writing scenarios gets pretty boring and repetitive that way, so we added an application-specific DSL on top of the REST DSL. This allows us to write scenarios in terms the Product Owner understands.

Layering the application-specific steps on top of generic REST steps has some advantages:

  • It’s easy to implement new application-specific DSL, since they only need to call the REST-specific DSL
  • The REST-specific DSL can be shared with other projects

Gradle

With the Steps in place, we can run our stories from our favorite IDE. That works great for developers, but can’t be used for Continuous Integration (CI).

Our CI server runs a headless build, so we need to be able to run the BDD scenarios from the command line. We automate our build with Gradle and Gradle can already run JUnit tests. However, our build is a multi-project build. We don’t want to run our BDD scenarios until all projects are built, a distribution is created, and the application is started.

So first off, we disable running tests on the project that contains the BDD stories:

test {
  onlyIf { false } // We need a running server
}

Next, we create another task that can be run after we start our application:

task acceptStories(type: Test) {
  ignoreFailures = true
  doFirst {
    // Need 'target' directory on *nix systems to get any output
    file('target').mkdirs()

    def filter = System.getProperty('bdd.stories') 
    if (filter == null) {
      filter = '*'
    }
    def stories = sourceSets.test.resources.matching { 
      it.include filter
    }.asPath
    systemProperty('bdd.stories', stories)
  }
}

Here we see the power of Gradle. We define a new task of type Test, so that it already can run JUnit tests. Next, we configure that task using a little Groovy script.

First, we must make sure the target directory exists. We don’t need or even want it, but without it, JBehave doesn’t work properly on *nix systems. I guess that’s a little Maven-ism 😦

Next, we add support for running a sub-set of the stories, again using the bdd.stories system property. Our story files are located in src/test/resources, so that we can easily get access to them using the standard Gradle test source set. We then set the system property bdd.stories for the JVM that runs the tests.

Jenkins

So now we can run our BDD scenarios from both our IDE and the command line. The next step is to integrate them into our CI build.

We could just archive the JBehave reports as artifacts, but, to be honest, the reports that JBehave generates aren’t all that great. Fortunately, the JBehave team also maintains a plug-in for the Jenkins CI server. This plug-in requires prior installation of the xUnit plug-in.

After installation of the xUnit and JBehave plug-ins into Jenkins, we can configure our Jenkins job to use the JBehave plug-in. First, add an xUnit post-build action. Then, select the JBehave test report.

With this configuration, the output from running JBehave on our BDD stories looks just like that for regular unit tests.

Note that the yellow part in the graph indicates pending steps. Those are used in the BDD scenarios, but have no counterpart in the Java Steps classes. Pending steps are shown in the Skip column in the test results.

Notice how the JBehave Jenkins plug-in translates stories to tests and scenarios to test methods. This makes it easy to spot which scenarios require more work.

Although the JBehave plug-in works quite well, there are two things that could be improved:

  • The output from the tests is not shown. This makes it hard to figure out why a scenario failed. We therefore also archive the JUnit test report
  • If you configure ignoreFailureInStories to be false, JBehave throws an exception on a failure, which truncates the XML output. The JBehave Jenkins plug-in can then no longer parse the XML (since it’s not well formed), and fails entirely, leaving you without test results

All in all these are minor inconveniences, and we’re very happy with our automated BDD scenarios.

XACML at XML Amsterdam 2011

Less than three weeks till XML Amsterdam 2011 where I’ll be speaking on XACML.

XML Amsterdam is a small, focused conference dedicated to XML. As part of X-Hive, EMC’s XML R&D center, I live and breathe XML, so I’m glad to have an XML conference in the Netherlands. The conference is small, so we won’t get lost in the crowd and the chances of good networking opportunities are high. And it is completely focused on XML, unlike Momentum, for instance. Finally, with speakers from all major native XML databases (xDB, MarkLogic, and eXist), we can expect some lively discussions 😉 . Especially with EMC IIG Chief Architect Jeroen van Rotterdam opening the conference with a keynote titled Is XML dead?

My presentation will introduce eXtensible Access Control Markup Language (XACML), an XML language for access control. I will place XACML in the context of access control models, and show that XACML is a future-proof technology that organizations can start using now, and then stay with as their access control needs evolve. I will continue to explain the architecture, request/response protocol, and policy language that make up XACML.

Here are my slides.

Performance tuning a GWT application

With Google Web Toolkit (GWT), you write your AJAX front-end in the Java programming language which GWT then cross-compiles into optimized JavaScript that automatically works across all major browsers.

…claims Google. And I must say, I’m pretty impressed by the ease of development GWT offers. I’ve used it at work on a project that I probably couldn’t have done without it, given my poor JavaScript skills. Especially the fact that you don’t have to worry about whether your code works in all browsers is great.

There are a couple of caveats, however:

  • The set of supported Java classes is limited, which sometimes causes confusion. For instance, there is a Character class, but the isWhitespace() method is not supported. And neither is List.subList().
  • Serialization works differently from the Java standard.
  • You can’t work with regular W3C DOM objects on the client. Instead, GWT provides its own DOM hierarchy.
  • Even though Google claims that GWT “allows developers to quickly build and maintain complex yet highly performant JavaScript front-end applications in the Java programming language”, performance can be a problem.

The purpose of this post is to elaborate on that last point.

Java isn’t JavaScript

Most developers evolve a sense of intuition about what type of code can be a performance problem, and what not. The problem with GWT development, however, is that even though you write Java code, the browser executes JavaScript code. So any intuitions about Java performance are misleading.

Therefore, your usual rules of thumb won’t work in GWT development. Here’s a couple that may.

Serialization is slow

Our application created a domain model on the server, serialized that to the client, and had the client render it. The problem was that the model could become quite large, consisting of thousands of objects. This is not something that GWT currently handles well. Serialization performance appears to be proportional to the number of objects, not just to their total size. Therefore, we translated the model to XML, and serialized that as a single string. This was way faster.

However, this meant that we needed to parse the XML on the client side to reconstruct our model. GWT provides an XMLParser class to handle just that. This class is very efficient in parsing XML, but it turned out that traversing the resulting DOM document was still too slow.

So I wrote a dedicated XML parser, one that can only parse the subset of XML documents that represent our domain model. This parser builds the domain model directly, without intermediate representation. This proved to be faster than the generic approach, but only after being very careful with handling strings.

Some string handling is slow, particularly in Internet Explorer

Java developers know that string handling can be a performance bottleneck. This is also true for GWT development, but not in quite the same way. For instance, using StringBuffer or StringBuilder is usually sufficient for improving String handling performance in Java code. But not so in JavaScript. StringBuffer.append() can be very slow on Internet Explorer, for instance.

GWT 1.6 will contain its own version of StringBuffer that will alleviate some of these problems. Since 1.6 isn’t released yet, we just copied this class into our project.

But even with the new StringBuffer class, you still need to be careful when dealing with strings. Some of the StringBuffer methods are implemented by calling toString() and doing something on the result. That can be a real performance killer. So anything you can do to stay away from substring(), charAt(), etc. will help you. This can mean that it’s sometimes better to work with plain Strings instead of StringBuffers!

Tables

For displaying data in a table-like format, you can use the Grid widget. This is not terribly fast, however, so you may want to consider the bulk table renderers.

The downside of these is that they translate any widgets you insert into the table to HTML, and you lose all the functionality attached to them, like click handling. Instead, you can add a TableListener, which has an onCellClicked() method.

Varargs are slow

Variable argument lists are sometimes quite handy. However, we’ve found that they come at a severe performance penalty, because a JavaScript array needs to be created to wrap the arguments. So if at all possible, use fixed argument lists, even though this may be less convenient from a code writing/maintaining perspective.

Don’t trust rules of thumb, use a profiler

The problem with rules of thumb is that they are just that: principles with broad application that are not intended to be strictly accurate or reliable for every situation. You shouldn’t put all your trust in them, but measure where the pain really is. This means using a profiler.

You could, of course, run your favorite Java profiler against hosted mode to get a sense of performance of your code. But that would be besides the point. Your Java code is compiled to JavaScript and it is the JavaScript code that gets executed by the browser. So you should use a JavaScript profiler.

We used Firebug to profile the compiled JavaScript code and this helped us enormously. As usual with profiling, we found performance bottlenecks that we didn’t anticipate. As a result, we were able to make our application load over 60 times faster! (on Internet Explorer 7)

Performance in different browsers

The only problem with Firebug is that it’s a FireFox plugin, and therefore not available for Internet Explorer. [Firebug Lite doesn’t contain a profiler.] Not that I personally would want to use IE, but our customers do, unfortunately.

The irony is that you need a JavaScript profiler in Internet Explorer the most: FireFox is unbelievably much faster than Internet Explorer when it comes to processing JavaScript, especially with string handling. For instance, for one data set, FireFox 3 loaded the page in 51 seconds, while Internet Explorer 7 took 12 minutes and 4 seconds!

Internet Explorer 8 will supposedly be better:

We have made huge improvements to widely-used JScript functionality including faster string, array, and lookup operations.

We’ll have to see how that works out when IE8 is released…

BTW, if you’re interested in browser performance, check out this comparison.