Using Cryptography in Java Applications

This post describes how to use the Java Cryptography Architecture (JCA), which allows you to use cryptographic services in your applications.

Java Cryptography Architecture Services

The JCA provides a number of cryptographic services, like message digests and signatures. These services are accessible through service-specific APIs, like MessageDigest and Signature. Cryptographic services abstract over different algorithms. For digests, for instance, you could use MD5 or SHA-1 (both are considered weak by now, so prefer something like SHA-256 in new code). You specify the algorithm as a parameter to the getInstance() method of the cryptographic service class:

MessageDigest digest = MessageDigest.getInstance("MD5");

You find the value of the parameter for your algorithm in the JCA Standard Algorithm Name Documentation. Some algorithms have parameters. For instance, an algorithm to generate a private/public key pair will take the key size as a parameter. You specify the parameter(s) using the initialize() method:

KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
generator.initialize(1024);

If you don’t call the initialize() method, some default value will be used, which may or may not be what you want. Unfortunately, the API for initialization is not 100% consistent across services. For instance, the Cipher class uses init() with an argument indicating encryption or decryption, while the Signature class uses initSign() for signing and initVerify() for verification.
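
To illustrate the difference, here is a minimal sketch (like the other snippets in this post, it omits exception handling):

// Cipher is initialized with init() and a mode constant...
SecretKey key = KeyGenerator.getInstance("AES").generateKey();
Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, key);

// ...while Signature uses initSign() for signing and initVerify()
// for verification.
KeyPair pair = KeyPairGenerator.getInstance("DSA").generateKeyPair();  // no initialize(): default key size applies
Signature signature = Signature.getInstance("SHA1withDSA");
signature.initSign(pair.getPrivate());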

Java Cryptography Architecture Providers

The JCA keeps your code independent from a particular cryptographic algorithm’s implementation through the provider system. Providers are ranked according to a preference order, which is configurable (see below). The best preference is 1, the next best is 2, etc. The preference order allows the JCA to select the best available provider that implements a given algorithm. Alternatively, you can request a specific provider in the second argument to getInstance():

Signature signature = Signature.getInstance("SHA1withDSA", "SUN");
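
You can also enumerate the registered providers in preference order, which is handy when debugging configuration issues (a minimal sketch):

// Print each registered provider, from most to least preferred.
for (Provider provider : Security.getProviders()) {
    System.out.println(provider.getName() + ": " + provider.getInfo());
}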

The JRE comes with a bunch of providers from Oracle by default. However, due to historical export restrictions, these are limited in the key sizes they support out of the box. To get access to better algorithms and larger key sizes, install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files. Update: note that the above is true for the Oracle JRE; OpenJDK doesn’t have the same limitation.

Make Your Use of Cryptography Configurable

You should always make sure that the cryptographic services your application uses are configurable. That way, you can change the cryptographic algorithm and/or implementation without issuing a patch, which is particularly valuable when a new attack on an algorithm (or on an implementation of it) becomes available.

The JCA makes it easy to configure the use of cryptography: getInstance() accepts both the name of the algorithm and the name of the provider implementing it. You should read both, as well as any values for the algorithm’s parameters, from some sort of configuration file. Also make sure you keep your code DRY and instantiate cryptographic services in a single place.

Check that the requested algorithm and provider are actually available. The getInstance() method throws NoSuchAlgorithmException when a given algorithm or provider is not available, so you should catch that. The safest option then is to fail and have someone make sure the system is configured properly. If you continue despite a configuration error, you may end up with a system that is less secure than required.

Note that Oracle recommends not specifying the provider. The reasons they give are that not all providers may be available on all platforms, and that specifying a provider may mean that you miss out on optimizations. You should weigh those disadvantages against the risk of being vulnerable. Deploying specific providers with known characteristics with your application may neutralize the disadvantages that Oracle mentions.
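
As a minimal sketch of this advice, assuming a properties file with the hypothetical keys signature.algorithm and signature.provider:

// A single factory method keeps crypto instantiation in one place (DRY).
// The property names are hypothetical; adjust them to your own scheme.
static Signature createSignature(Properties config) {
    String algorithm = config.getProperty("signature.algorithm", "SHA1withDSA");
    String provider = config.getProperty("signature.provider");
    try {
        return provider == null
                ? Signature.getInstance(algorithm)
                : Signature.getInstance(algorithm, provider);
    } catch (NoSuchAlgorithmException | NoSuchProviderException e) {
        // Fail fast: continuing despite a configuration error may leave
        // the system less secure than required.
        throw new IllegalStateException("Cryptographic misconfiguration", e);
    }
}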

Adding Cryptographic Service Providers

The provider system is extensible, so you can add providers. For example, you could use the open source Bouncy Castle or the commercial RSA BSAFE providers.

In order to add a provider, you must make sure that its jar is available to the application. You can put it on the classpath for this purpose. Alternatively, you can make it an installed extension by placing it in the $JAVA_HOME/lib/ext directory, where $JAVA_HOME is the location of your JDK/JRE distribution. The major difference between the two approaches is that installed extensions are granted all permissions by default, whereas code on the classpath is not. This is significant when (part of) your code runs in a sandbox. Note also that some services, like Cipher, require the provider jar to be signed.

The next step is to register the provider with the JCA provider system. The simplest way is to use Security.addProvider():

Security.addProvider(new BouncyCastleProvider());

You can also set the provider’s preference order by using the Security.insertProviderAt() method:

Security.insertProviderAt(new JsafeJCE(), 1);

One downside of this approach is that it couples your code to the provider, since you have to import the provider class. This may not be an important issue in a modular system like OSGi. Another thing to look out for is that your code requires SecurityPermission to add a provider programmatically (see the policy sketch below).

The provider can also be configured as part of your environment via static registration, by adding an entry to the java.security properties file (found in $JAVA_HOME/jre/lib/security/java.security):

security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=sun.security.provider.Sun

The property names in this file start with security.provider. and end with the provider’s preference. The property value is the fully qualified name of the class implementing Provider.
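
As for the SecurityPermission mentioned above: when your application runs under a security manager, you need to grant it the permission to add providers. A policy file entry might look like this (the code base path is hypothetical; "BC" is Bouncy Castle's provider name):

grant codeBase "file:/path/to/your/app.jar" {
    // Allows registering the provider named "BC" at run time.
    permission java.security.SecurityPermission "insertProvider.BC";
};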

Implementing Your Own Cryptographic Service Provider

Don’t do it. You will get it wrong and be vulnerable to attacks.

Using Cryptographic Service Providers

The documentation for the provider should tell you what provider name to use as the second argument to getInstance(). For instance, Bouncy Castle uses BC, while RSA BSAFE uses JsafeJCE. Most providers have custom APIs as well as JCA conformant APIs. Do not use the custom APIs, since that will make it impossible to configure the algorithms and providers used.
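
For example, to request Bouncy Castle’s AES implementation through the standard JCA API (a sketch; the transformation string is just an example):

// "BC" selects the Bouncy Castle provider explicitly.
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding", "BC");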

Not All Algorithms and Implementations Are Created Equal

It’s important to note that different algorithms and implementations have different characteristics and that those may make them more or less suitable for your situation. For instance, some organizations will only allow algorithms and implementations that are FIPS 140-2 certified or are on the list of NSA Suite B cryptographic algorithms. Always make sure you understand your customer’s cryptographic needs and requirements.

Using JCA in an OSGi environment

The getInstance() method is a factory method that uses the Service Provider Interface (SPI). That is problematic in an OSGi world, since OSGi violates the SPI framework’s assumption that there is a single classpath. Another potential issue is that JCA requires some jars to be signed. If those jars are not valid OSGi bundles, you can’t run them through bnd to make them so, since that would invalidate the signature.

Fortunately, you can kill two birds with one stone (see the sketch below). Put your provider jars on the classpath of your main program, that is, the program that starts the OSGi framework. Then export the provider package from the OSGi system bundle using the org.osgi.framework.system.packages.extra system property. This will make the system bundle export that package. Now you can simply use Import-Package on the provider package in your bundles. There are other options for resolving these problems if you can’t use this solution.
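
A minimal sketch of such a launcher, assuming the Bouncy Castle provider (whose classes live in the org.bouncycastle.jce.provider package) is on the main program’s classpath:

// Export the provider package from the system bundle...
Map<String, String> config = new HashMap<>();
config.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA,
        "org.bouncycastle.jce.provider");

// ...and start the OSGi framework with that configuration.
FrameworkFactory factory =
        ServiceLoader.load(FrameworkFactory.class).iterator().next();
Framework framework = factory.newFramework(config);
framework.start();

Bundles that need the provider can then simply declare Import-Package: org.bouncycastle.jce.provider.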

Signing Java Code

In a previous post, we discussed how to secure mobile code.

One of the measures mentioned was signing code. This post explores how that works for Java programs.

Digital Signatures

The basis for digital signatures is cryptography, specifically, public key cryptography. We use a set of cryptographic keys: a private and a public key.

The private key is used to sign a file and must remain a secret. The public key is used to verify the signature that was generated with the private key. This is possible because of the special mathematical relationship between the keys.

Both the signature and the public key need to be transferred to the recipient.
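
In Java, signing and verifying look roughly like this (a minimal sketch using the JCA Signature class; the data variable stands for the bytes of the file being signed):

// Sender: sign the data with the private key.
KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
generator.initialize(1024);
KeyPair pair = generator.generateKeyPair();
Signature signer = Signature.getInstance("SHA1withDSA");
signer.initSign(pair.getPrivate());
signer.update(data);                      // data: the bytes to sign
byte[] signatureBytes = signer.sign();

// Recipient: verify the signature with the public key.
Signature verifier = Signature.getInstance("SHA1withDSA");
verifier.initVerify(pair.getPublic());
verifier.update(data);
boolean valid = verifier.verify(signatureBytes);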

Certificates

In order to trust a file, one needs to verify the signature on that file. For this, one needs the public key that corresponds to the private key that was used to sign the file. So how can we trust the public key?

This is where certificates come in. A certificate contains a public key and the distinguished name that identifies the owner of that key.

The trust comes from the fact that the certificate is itself signed. So the certificate also contains a signature and the distinguished name of the signer.

When we control both ends of the communication, we can just provide both with the certificate and be done with it. This works well for mobile apps you write that connect to a server you control, for instance.

If we don’t control both ends, we need an alternative. The distinguished name of the signer can be used to look up the signer’s certificate. With the public key from that certificate, the signature in the original certificate can be verified.

We can continue in this manner, creating a certificate chain, until we reach a signer that we explicitly trust. This is usually a well-established Certificate Authority (CA), like VeriSign or Thawte.

Keystores

In Java, private keys and certificates are stored in a password-protected database called a keystore.

Each key/certificate combination is identified by a string known as the alias.
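
In code, you work with keystores through the java.security.KeyStore class (a minimal sketch; the keystore file name, alias, and passwords are made-up examples):

// Load the keystore from disk.
KeyStore keyStore = KeyStore.getInstance("JKS");
try (InputStream in = new FileInputStream("my.keystore")) {
    keyStore.load(in, "storepass".toCharArray());
}

// Look up a private key and its certificate by alias.
PrivateKey privateKey =
        (PrivateKey) keyStore.getKey("mykey", "keypass".toCharArray());
Certificate certificate = keyStore.getCertificate("mykey");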

Code Signing Tools

Java comes with two tools for code signing: keytool and jarsigner.

Use the jarsigner program to sign jar files using certificates stored in a keystore.

Use the keytool program to create private keys and the corresponding public key certificates, to store them in and retrieve them from a keystore, and to manage the keystore.

The keytool program is not capable of creating a certificate signed by someone else. It can create a Certificate Signing Request, however, that you can send to a CA. It can also import the CA’s response into the keystore.
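
Putting the two tools together, a typical flow might look like this (alias and file names are made-up examples; the exact interaction depends on your CA):

# Create a key pair and a self-signed certificate under alias "mykey"
keytool -genkeypair -alias mykey -keyalg RSA -keystore my.keystore

# Create a Certificate Signing Request to send to a CA
keytool -certreq -alias mykey -file mykey.csr -keystore my.keystore

# Import the CA's response into the keystore
keytool -importcert -alias mykey -file ca-reply.cer -keystore my.keystore

# Sign a jar using the certificate stored under the alias
jarsigner -keystore my.keystore myapp.jar mykey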

The alternative is to use tools like OpenSSL or BSAFE, which support such CA capabilities.

Code Signing Environment

Code signing should happen in a secure environment, since private keys are involved and those need to remain secret. If a private key falls into the wrong hands, a third party could sign their code with your key, tricking your customers into trusting that code.

This means that you probably don’t want to maintain the keystore on the build machine, since that machine is likely available to many people. A more secure approach is to introduce a dedicated signing server.

You should also use different signing certificates for development and production.

Timestamping

Certificates are valid for a limited time period only. Files signed with a private key whose public key certificate has expired should no longer be trusted, since they may have been signed after the certificate expired.

We can alleviate this problem by timestamping the file. By adding a trusted timestamp to the file, we can trust it even after the signing certificate expires.

But then how do we trust the timestamp? Well, by having it signed by a Time Stamping Authority, of course! The OpenSSL program can help you with that as well.
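
For jar files, jarsigner supports this directly through its -tsa option (the TSA URL below is a placeholder; use the one your CA or TSA provides):

# Sign the jar and have a Time Stamping Authority add a trusted timestamp
jarsigner -keystore my.keystore -tsa http://tsa.example.com myapp.jar mykey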

Beyond Code Signing

When you sign your code, you only prove that the code came from you. For a customer to be able to trust your code, it needs to be trustworthy. You probably want to set up a full-blown Security Development Lifecycle (SDL) to make it as trustworthy as possible.

Another thing to consider in this area is third-party code. Most software packages embed commercial and/or open source libraries. Ideally, those libraries are signed by their authors. But no matter what, you need to take ownership, since customers don’t care whether a vulnerability is found in code you wrote yourself or in a library you used.

Software Development and Lifelong Learning

The main constraint in software development is learning. This means that learning is a core skill for developers and we should not think we’re done learning after graduation. This post explores some different ways in which to learn.

Go To Conferences

Conferences are a great place to learn new things, but also to meet new people. New people can provide new ways of looking at things, which helps with learning as well.

You can either go to big and broad conferences, like JavaOne or the RSA Conference, or you can attend a smaller, more focused event. Some of these smaller events may not be as well-known, but there are some real gems nonetheless.

Take XML Amsterdam, for example, a small conference here in the Netherlands with excellent international speakers and attendees (even some famous ones).

Attend Workshops

Learning is as much about doing as it is about hearing and watching. Some conferences may have hands-on sessions or labs, but they’re in the minority. So just going to conferences isn’t good enough.

A more practical variant is the workshop. Workshops are mostly organized by specific communities, like Java User Groups.

One particularly useful form for developers is the code retreat. Workshops are much more focused than conferences and still provide some of the same networking opportunities.

Get Formal Training

Lots of courses are being offered, many of them conveniently online. One great (and free) example is Cryptography from Coursera.

Some of these courses lead to certifications. The world is sharply divided into those who think certifications are a must and those who feel they are evil. I’ll keep my opinion on this subject to myself for once 😉 but whatever you do, focus on the learning, not on the piece of paper.

Learn On The Job

There is a lot to be learned during regular work activities as well.

You can organize that a bit better by doing something like job rotation. Good forms of job rotation for developers are collective code ownership and swarming.

Pair programming is an excellent way to learn all kinds of things, from IDE shortcuts to design patterns.

Practice in Private

Work has many distractions, though, like Getting a Story Done.

Open source is an alternative, in the sense that it takes things like deadlines away, which can help with learning.

However, that still doesn’t provide the systematic exploration that is required for the best learning. So practicing on “toy problems” is much more effective.

There are many katas that do just that, like the Roman Numerals Kata. They usually target a specific skill, like Test-Driven Development (TDD).

Book Review: The Security Development Lifecycle (SDL)

In The Security Development Lifecycle (SDL), A Process for Developing Demonstrably More Secure Software, authors Michael Howard and Steven Lipner explain how to build secure software through a repeatable process.

The methodology they describe was developed at Microsoft and has led to a measurable decrease in vulnerabilities. That’s why it’s now also used elsewhere, like at EMC (my employer).

Chapter 1, Enough is Enough: The Threats have Changed, explains how the SDL was born out of the Trustworthy Computing initiative that started with Bill Gates’ famous email in early 2002. Most operating systems have since become relatively secure, so hackers have shifted their focus to applications and the burden is now on us developers to crank up our security game. Many security issues are also privacy problems, so if we don’t, we are bound to pay the price.

Chapter 2, Current Software Development Methods Fail to Produce Secure Software, reviews current software development methods with regard to how (in)secure the resulting applications are. It shows that the adage “given enough eyeballs, all bugs are shallow” is wrong when it comes to security. The conclusion is that we need to explicitly include security in our development efforts.

Chapter 3, A Short History of the SDL at Microsoft, describes how security improvement efforts at Microsoft evolved into a consistent process that is now called the SDL.

Chapter 4, SDL for Management, explains that the SDL requires time, money, and commitment from senior management to prioritize over time to market. We’re talking real commitment, like delaying the release of an insecure application.

Chapter 5, Stage 0: Education and Awareness, starts the second part of the book, that describes the stages of the SDL. It all starts with educating developers about security. Without this, there’s no real chance of delivering secure software.

Chapter 6, Stage 1: Project Inception, sets the security context for the development effort. This includes assigning someone to guide the team through the SDL, building security leaders within the team, and setting up security expectations and tools.

Chapter 7, Stage 2: Define and Follow Best Practices, lists common secure design principles and describes attack surface analysis and attack surface reduction. The latter is about reducing the amount of code accessible to untrusted users, for example by disabling certain features by default.

Chapter 8, Stage 3: Product Risk Assessment, shows how to determine the application’s level of vulnerability to attack and its privacy impact. This helps to determine what level of security investment is appropriate for what parts of the application.

Chapter 9, Stage 4: Risk Analysis, explains threat modeling. The authors think that this is the practice with the most significant contribution to an application’s security. The idea is to understand the potential threats to the application, the risks those threats pose, and the mitigations that can reduce those risks. Threat models also help with code reviews and penetration tests. The chapter uses a pet shop website as an example.

[Note that there is now a tool that helps you with threat modeling. In this tool, you draw data flow diagrams, after which the tool uses the STRIDE approach to automatically find threats. The tool requires Visio 2007+.]

Chapter 10, Stage 5: Creating Security Documents, Tools, and Best Practices for Customers, describes the collateral that helps customers install, maintain, and use your application securely.

Chapter 11, Stage 6: Secure Coding Policies, explains the need for prescribing security-specific coding practices, educating developers about them, and verifying that they are adhered to. This is a high-level chapter, with details following in later chapters.

Chapter 12, Stage 7: Secure Testing Policies, describes the various forms of security testing, like fuzz testing, penetration testing, and run-time verification.

Chapter 13, Stage 8: The Security Push, explains that the goal of a security push is to hunt for security bugs and triage them. Fixes should follow the push. A security push doesn’t really fit into the SDL, since the SDL aims to prevent vulnerabilities from being introduced in the first place. It can, however, be useful for legacy (i.e. pre-SDL) code.

Chapter 14, Stage 9: The Final Security Review, describes how to assess (from a security perspective) whether the application is ready to ship. A questionnaire is filled out to show compliance with the SDL, the threat models are reviewed, and unfixed security bugs are reviewed to make sure none are critical.

Chapter 15, Stage 10: Security Response Planning, explains that you need to be prepared to respond to the discovery of vulnerabilities in your deployed application, so that you can prevent panic and follow a solid process. You should have a Security Response Center outside your development team that interfaces with security researchers and others who discover vulnerabilities and guides the development team through the process of releasing a fix. It’s also important to feed back lessons learned into the development process.

Chapter 16, Stage 11: Product Release, explains that the actual release is a non-event, since all the hard work was done in the Final Security Review.

Chapter 17, Stage 12: Security Response Execution, describes the real-world challenges associated with responding to reported vulnerabilities, including when and how to deviate from the plan outlined in Security Response Planning. Above all, you must take the time to fix the root problem properly and to make sure you’re not introducing new bugs.

Chapter 18, Integrating SDL with Agile Methods, starts the final part of the book. It shows how to incorporate agile practices into the SDL, or the other way around.

Chapter 19, SDL Banned Function Calls, explains that some functions are so bad from a security perspective, that they never should be used. This chapter is heavily focused on C.

Chapter 20, SDL Minimum Cryptographic Standards, gives guidance on the use of cryptography, like never roll your own, make the use of crypto algorithms configurable, and what key sizes to use for what algorithms.

Chapter 21, SDL-Required Tools and Compiler Options, describes security tools you should use during development. This chapter is heavily focused on Microsoft technologies.

Chapter 22, Threat Tree Patterns, shows a number of threat trees that reflect common attack patterns. It follows the STRIDE approach again.

The appendix has information about the authors.

I think this book is a must-read for every developer who is serious about building secure software.