Securing HTTP-based APIs With Signatures

I work at EMC on a platform on top of which SaaS solutions can be built.

This platform has a RESTful HTTP-based API, just like a growing number of other applications.

With development frameworks like JAX-RS, it’s relatively easy to build such APIs.

It is not, however, easy to build them right.

Issues With Building HTTP-based APIs

The problem isn’t so much in getting the functionality out there. We know how to develop software and the available REST/HTTP frameworks and libraries make it easy to expose the functionality.

That’s only half the story, however. There are many more -ilities to consider.

The REST architectural style addresses some of those, like scalability and evolvability.

Many HTTP-based APIs today claim to be RESTful, but in fact are not. This means that they are not reaping all of the benefits that REST can bring.

I’ll be talking more about how to help developers meet all the constraints of the REST architectural style in future posts.

Today I want to focus on another non-functional aspect of APIs: security.

Security of HTTP-based APIs

In security, we care about the CIA triad: Confidentiality, Integrity, and Availability.

Availability of web services is not dramatically different from that of web applications, which is relatively well understood. We have our clusters, load balancers, and what not, and usually we are in good shape.

Confidentiality and integrity, on the other hand, both require proper authentication, and here matters get more interesting.

Authentication of HTTP-based APIs

For authentication in an HTTP world, it makes sense to look at HTTP Authentication.

This RFC (RFC 2617) describes Basic and Digest authentication. Both have their weaknesses, which is why you see many APIs use alternatives.

Luckily, these alternatives can use the same basic machinery defined in the RFC. This machinery includes status code 401 Unauthorized, and the WWW-Authenticate, Authentication-Info, and Authorization headers. Note that the Authorization header is unfortunately misnamed, since it’s used for authentication, not authorization.
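To make this concrete, here is a minimal sketch of a server-side filter that challenges unauthenticated requests using this machinery. It assumes the JAX-RS 2.0 filter API; the MyScheme scheme name and realm are placeholders for whatever custom scheme you define:

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class AuthenticationFilter implements ContainerRequestFilter {

  @Override
  public void filter(ContainerRequestContext request) {
    String authorization = request.getHeaderString(HttpHeaders.AUTHORIZATION);
    if (authorization == null || !authorization.startsWith("MyScheme ")) {
      // Challenge the client with 401 Unauthorized and WWW-Authenticate
      request.abortWith(Response.status(Response.Status.UNAUTHORIZED)
          .header(HttpHeaders.WWW_AUTHENTICATE, "MyScheme realm=\"api\"")
          .build());
      return;
    }
    // Otherwise, verify the credentials carried in the Authorization header
  }
}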

The final piece of the puzzle is the custom authentication scheme. For example, Amazon S3 authentication uses the AWS custom scheme.

Authentication of HTTP-based APIs Using Signatures

The AWS scheme relies on signatures. Other services, like EMC Atmos, use the same approach.
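To give you a flavor, the signature in such a scheme is typically an HMAC over a canonical representation of the request. Here is a minimal sketch; the exact string-to-sign is prescribed by each scheme, and secretKeyBytes and the request parts shown are assumptions:

// The scheme prescribes exactly which request parts are covered
String stringToSign = "GET\n"
    + "Thu, 17 Nov 2005 18:49:58 GMT\n"
    + "/bucket/object";

Mac mac = Mac.getInstance("HmacSHA1");
mac.init(new SecretKeySpec(secretKeyBytes, "HmacSHA1"));
byte[] signature = mac.doFinal(stringToSign.getBytes("UTF-8"));
// The client then sends something like:
// Authorization: AWS <accessKeyId>:<Base64 of signature>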

It is therefore good to see that a new IETF draft has been proposed to standardize the use of signatures in HTTP-based APIs.

Standardization enables the construction of frameworks and libraries, which will drive down the cost of implementing authentication and will make it easier to build more secure APIs.

What do you think?

If you’re in the HTTP API building and/or consuming business (and who isn’t these days?), then please go ahead and read the draft and provide feedback.

I’m also interested in your experiences with building or consuming secure HTTP APIs. Please leave a comment on this post.

Using Cryptography in Java Applications

This post describes how to use the Java Cryptography Architecture (JCA), which lets you use cryptographic services in your applications.

Java Cryptography Architecture Services

The JCA provides a number of cryptographic services, like message digests and signatures. These services are accessible through service specific APIs, like MessageDigest and Signature. Cryptographic services abstract different algorithms. For digests, for instance, you could use MD5 or SHA1. You specify the algorithm as a parameter to the getInstance() method of the cryptographic service class:

MessageDigest digest = MessageDigest.getInstance("MD5");
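You then feed the service its input to get a result. For example, to compute the actual digest:

byte[] hash = digest.digest("Hello, world!".getBytes("UTF-8"));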

You find the value of the parameter for your algorithm in the JCA Standard Algorithm Name Documentation. Some algorithms have parameters. For instance, an algorithm to generate a private/public key pair will take the key size as a parameter. You specify the parameter(s) using the initialize() method:

KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
generator.initialize(1024);
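After initialization, you use the service. For a key pair generator, that means generating the key pair:

KeyPair keyPair = generator.generateKeyPair();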

If you don’t call the initialize() method, some default value will be used, which may or may not be what you want. Unfortunately, the API for initialization is not 100% consistent across services. For instance, the Cipher class uses init() with an argument indicating encryption or decryption, while the Signature class uses initSign() for signing and initVerify() for verification.
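A quick sketch of that inconsistency, assuming you already have a secretKey, a privateKey, and a publicKey:

Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, secretKey);

Signature signature = Signature.getInstance("SHA1withDSA");
signature.initSign(privateKey);    // for signing
// signature.initVerify(publicKey); // for verification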

Java Cryptography Architecture Providers

The JCA keeps your code independent from a particular cryptographic algorithm’s implementation through the provider system. Providers are ranked according to a preference order, which is configurable (see below). The best preference is 1, the next best is 2, etc. The preference order allows the JCA to select the best available provider that implements a given algorithm. Alternatively, you can specify a specific provider in the second argument to getInstance():

Signature signature = Signature.getInstance("SHA1withDSA", "SUN");

The JRE comes with a bunch of providers from Oracle by default. However, due to historical export restrictions, these are not the most secure implementations. To get access to better algorithms and larger key sizes, install the Java Cryptography Extension Unlimited Strength Jurisdiction Policy Files. Update: Note that the above statement is true for the Oracle JRE. OpenJDK doesn’t have the same limitation.

Make Your Use of Cryptography Configurable

You should always make sure that the cryptographic services your application uses are configurable. If you do that, you can change the cryptographic algorithm and/or implementation without issuing a patch. This is particularly valuable when a new attack on an (implementation of an) algorithm becomes available.

The JCA makes it easy to configure the use of cryptography. The getInstance() method accepts both the name of the algorithm and the name of the provider implementing that algorithm. You should read both, and any values for the algorithm’s parameters, from some sort of configuration file. Also make sure you keep your code DRY and instantiate cryptographic services in a single place.

Check that the requested algorithm and/or provider are actually available. The getInstance() method throws NoSuchAlgorithmException when a given algorithm or provider is not available, so you should catch that. The safest option then is to fail and have someone make sure the system is configured properly. If you continue despite a configuration error, you may end up with a system that is less secure than required.

Note that Oracle recommends not specifying the provider. The reasons they give are that not all providers may be available on all platforms, and that specifying a provider may mean that you miss out on optimizations. You should weigh those disadvantages against the risk of being vulnerable. Deploying specific providers with known characteristics with your application may neutralize the disadvantages that Oracle mentions.
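Putting the above together, here is a minimal sketch of configurable instantiation. The crypto.properties file and its property names are assumptions; specifying a provider is optional:

Properties config = new Properties();
config.load(new FileInputStream("crypto.properties"));
String algorithm = config.getProperty("signature.algorithm");
String provider = config.getProperty("signature.provider");
try {
  Signature signature = provider == null
      ? Signature.getInstance(algorithm)
      : Signature.getInstance(algorithm, provider);
} catch (NoSuchAlgorithmException e) {
  // Fail fast rather than fall back to a potentially weaker default
  throw new IllegalStateException("Cryptography misconfigured", e);
} catch (NoSuchProviderException e) {
  throw new IllegalStateException("Cryptography misconfigured", e);
}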

Adding Cryptographic Service Providers

The provider system is extensible, so you can add providers. For example, you could use the open source Bouncy Castle or the commercial RSA BSAFE providers.

In order to add a provider, you must make sure that its jar is available to the application. You can put it on the classpath for this purpose. Alternatively, you can make it an installed extension by placing it in the $JAVA_HOME/lib/ext directory, where $JAVA_HOME is the location of your JDK/JRE distribution. The major difference between the two approaches is that installed extensions are granted all permissions by default, whereas code on the classpath is not. This is significant when (part of) your code runs in a sandbox. Some services, like Cipher, require the provider jar to be signed.

The next step is to register the provider with the JCA provider system. The simplest way is to use Security.addProvider():

Security.addProvider(new BouncyCastleProvider());

You can also set the provider’s preference order by using the Security.insertProviderAt() method:

Security.insertProviderAt(new JsafeJCE(), 1);

One downside of this approach is that it couples your code to the provider, since you have to import the provider class. This may not be an important issue in a modular system like OSGi. Another thing to look out for is that code requires SecurityPermission to add a provider programmatically.

The provider can also be configured as part of your environment via static registration, by adding an entry to the java.security properties file (found in $JAVA_HOME/jre/lib/security/java.security):

security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=sun.security.provider.Sun

The property names in this file start with security.provider. and end with the provider’s preference. The property value is the fully qualified name of the class implementing Provider.

Implementing Your Own Cryptographic Service Provider

Don’t do it. You will get it wrong and be vulnerable to attacks.

Using Cryptographic Service Providers

The documentation for the provider should tell you what provider name to use as the second argument to getInstance(). For instance, Bouncy Castle uses BC, while RSA BSAFE uses JsafeJCE. Most providers have custom APIs as well as JCA conformant APIs. Do not use the custom APIs, since that will make it impossible to configure the algorithms and providers used.

Not All Algorithms and Implementations Are Created Equal

It’s important to note that different algorithms and implementations have different characteristics and that those may make them more or less suitable for your situation. For instance, some organizations will only allow algorithms and implementations that are FIPS 140-2 certified or are on the list of NSA Suite B cryptographic algorithms. Always make sure you understand your customer’s cryptographic needs and requirements.

Using JCA in an OSGi environment

The getInstance() method is a factory method that uses the Service Provider Interface (SPI). That is problematic in an OSGi world, since OSGi violates the SPI framework’s assumption that there is a single classpath. Another potential issue is that JCA requires some jars to be signed. If those jars are not valid OSGi bundles, you can’t run them through bnd to make them so, since that would make the signature invalid.

Fortunately, you can kill both birds with one stone. Put your provider jars on the classpath of your main program, that is, the program that starts the OSGi framework. Then export the provider package from the OSGi system bundle using the org.osgi.framework.system.packages.extra system property. This will make the system bundle export that package. Now you can simply use Import-Package on the provider package in your bundles.

There are other options for resolving these problems if you can’t use the above solution.
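To make the above concrete, exporting Bouncy Castle’s provider package from the system bundle might look like this (a sketch; adjust the package to the provider you use):

org.osgi.framework.system.packages.extra=org.bouncycastle.jce.provider

Your bundles can then import it as usual:

Import-Package: org.bouncycastle.jce.provider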

Signing Java Code

In a previous post, we discussed how to secure mobile code.

One of the measures mentioned was signing code. This post explores how that works for Java programs.

Digital Signatures

The basis for digital signatures is cryptography, specifically public key cryptography. We use a pair of cryptographic keys: a private key and a public key.

The private key is used to sign a file and must remain a secret. The public key is used to verify the signature that was generated with the private key. This is possible because of the special mathematical relationship between the keys.

Both the signature and the public key need to be transferred to the recipient.
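In Java, signing and verifying look like this (a minimal sketch, assuming privateKey, publicKey, and the data bytes are available):

Signature signer = Signature.getInstance("SHA1withDSA");
signer.initSign(privateKey);
signer.update(data);
byte[] signatureBytes = signer.sign();

Signature verifier = Signature.getInstance("SHA1withDSA");
verifier.initVerify(publicKey);
verifier.update(data);
boolean valid = verifier.verify(signatureBytes);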

Certificates

In order to trust a file, one needs to verify the signature on that file. For this, one needs the public key that corresponds to the private key that was used to sign the file. So how can we trust the public key?

This is where certificates come in. A certificate contains a public key and the distinguished name that identifies the owner of that key.

The trust comes from the fact that the certificate is itself signed. So the certificate also contains a signature and the distinguished name of the signer.

When we control both ends of the communication, we can just provide both with the certificate and be done with it. This works well for mobile apps you write that connect to a server you control, for instance.

If we don’t control both ends, we need an alternative. The distinguished name of the signer can be used to look up the signer’s certificate. With the public key from that certificate, the signature in the original certificate can be verified.

We can continue in this manner, creating a certificate chain, until we reach a signer that we explicitly trust. This is usually a well-established Certificate Authority (CA), like VeriSign or Thawte.
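Each link in such a chain can be checked with the issuer’s public key. In Java, verifying a single link looks like this (a sketch, assuming cert and issuerCert are X509Certificate instances; the CertPathValidator API can validate complete chains):

// Throws an exception if issuerCert's key did not sign cert
cert.verify(issuerCert.getPublicKey());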

Keystores

In Java, private keys and certificates are stored in a password-protected database called a keystore.

Each key/certificate combination is identified by a string known as the alias.
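Loading a key and certificate by alias is straightforward (a sketch; the file name, alias, and passwords are examples):

KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
keyStore.load(new FileInputStream("my.keystore"), storePassword);
PrivateKey privateKey = (PrivateKey) keyStore.getKey("mykey", keyPassword);
Certificate certificate = keyStore.getCertificate("mykey");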

Code Signing Tools

Java comes with two tools for code signing: keytool and jarsigner.

Use the jarsigner program to sign jar files using certificates stored in a keystore.

Use the keytool program to create private keys and the corresponding public key certificates, to retrieve/store those from/to a keystore, and to manage the keystore.

The keytool program is not capable of creating a certificate signed by someone else. It can create a Certificate Signing Request, however, that you can send to a CA. It can also import the CA’s response into the keystore.
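A typical keytool session might look like this (aliases and file names are examples):

# Generate a key pair under the alias "mykey"
keytool -genkeypair -alias mykey -keyalg RSA -keystore my.keystore

# Create a Certificate Signing Request to send to a CA
keytool -certreq -alias mykey -keystore my.keystore -file mykey.csr

# Import the CA's response into the keystore
keytool -importcert -alias mykey -keystore my.keystore -file mykey.cer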

The alternative is to use tools like OpenSSL or BSAFE, which support such CA capabilities.

Code Signing Environment

Code signing should happen in a secure environment, since private keys are involved and those need to remain secret. If a private key falls into the wrong hands, a third party could sign their code with your key, tricking your customers into trusting that code.

This means that you probably don’t want to maintain the keystore on the build machine, since that machine is likely available to many people. A more secure approach is to introduce a dedicated signing server.

You should also use different signing certificates for development and production.

Timestamping

Certificates are valid for a limited time period only. Any file signed with a private key for which the public key certificate has expired should no longer be trusted, since it may have been signed after the certificate expired.

We can alleviate this problem by timestamping the file. By adding a trusted timestamp to the file, we can trust it even after the signing certificate expires.

But then how do we trust the timestamp? Well, by signing it using a Time Stamping Authority, of course! The OpenSSL program can help you with that as well.
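With jarsigner, timestamping is a matter of pointing at a TSA when signing; the URL below is a placeholder for the TSA of your choice:

jarsigner -tsa http://tsa.example.com -keystore my.keystore my.jar mykey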

Beyond Code Signing

When you sign your code, you only prove that the code came from you. For a customer to be able to trust your code, it needs to be trustworthy. You probably want to set up a full-blown Security Development Lifecycle (SDL) to make sure that it is as trustworthy as possible.

Another thing to consider in this area is third-party code. Most software packages embed commercial and/or open source libraries. Ideally, those libraries are signed by their authors. But no matter what, you need to take ownership, since customers don’t care whether a vulnerability is found in code you wrote yourself or in a library you used.

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted systems that is executed on your local system. Mobile code (code-on-demand) is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof could be tied to the code; the combination is then known as proof-carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof-carrying code, where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, so other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.
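With the JDK tools, that looks like this (the alias and file names are examples):

# Sign the jar with the key stored under alias "mykey"
jarsigner -keystore my.keystore my.jar mykey

# Verify the signature and show the signer's certificates
jarsigner -verify -certs my.jar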

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.
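Here is a sketch of how such a filter might check a downloaded jar programmatically; isTrusted() is a hypothetical helper that checks the signing certificates against your trust store:

JarFile jar = new JarFile(file, true); // true = verify signatures while reading
byte[] buffer = new byte[8192];
Enumeration<JarEntry> entries = jar.entries();
while (entries.hasMoreElements()) {
  JarEntry entry = entries.nextElement();
  // Signature-related files in META-INF are themselves never signed
  if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
    continue;
  }
  InputStream in = jar.getInputStream(entry);
  while (in.read(buffer) != -1) {
    // An entry must be read completely before its certificates become
    // available; reading also verifies the entry's digest
  }
  in.close();
  Certificate[] certificates = entry.getCertificates();
  if (certificates == null || !isTrusted(certificates)) {
    throw new SecurityException("Untrusted entry: " + entry.getName());
  }
}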

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
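As a first taste, the heart of it is installing a security manager, after which the permissions granted by the policy in effect constrain what all loaded code, including mobile code, may do:

System.setSecurityManager(new SecurityManager());

Permissions are then granted selectively through a policy file, for instance one passed via -Djava.security.policy=my.policy (a sketch; the code base URL and permission are examples):

grant codeBase "file:/path/to/plugins/-" {
  permission java.util.PropertyPermission "user.dir", "read";
};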