TDD is like working out

We all know that exercise is good for us, members of the species Homo Sapiens Sitonourasses. And yet most of us don’t do enough of it. The same is true for Test-Driven Development (TDD). But the similarity doesn’t end there.

With regular exercise, you can grow your muscles. The way this works is not linear, however. It follows an upward saw-tooth pattern: first you damage your muscles (training), then they heal (recovery) and eventually grow stronger than before (supercompensation), then you damage them again, etc:

[Figure: building muscle through the damage-heal-grow cycle]

TDD follows the Damage-Heal-Grow cycle as well.

First we damage the system by writing a test that fails. Where before we could gloat in knowing all was well because our test suite said so, now we have to admit that there is still something wrong with our code. For some, this realization may hurt as much as their sore muscles after working out.

Luckily, we heal the system quickly by writing only the minimal amount of code required to make the test pass. With everything back to green in minutes or even seconds, we have every right to feel good again. TDD is perfect for short attention spans.

Finally we grow the system by improving the design. The system can now handle everything we’ve ever thrown at it and more, because we generalized concepts and gave them a proper place in the code.

Damaging and healing happen on two levels in TDD.

First there is the syntactic level. You write a test that calls code that doesn’t exist yet, so the code doesn’t even compile. The healing that follows is to make the code compile, even though it doesn’t yet pass the test.

Only after this syntactic healing do you change the code to pass the test. The latter is more of a semantic type of healing.

The distinction between syntactic and semantic healing has implications for how we work.

There are only so many ways that a program can be syntactically broken, and in many cases, a sophisticated enough IDE can help heal it. For example, when you write a test that refers to a class that doesn’t yet exist, Eclipse offers a Quick Fix to create the class for you.

Semantic healing, on the other hand, is more difficult. The transformations of the Transformation Priority Premise can be seen as standard building blocks, and at least some of them can be automated. But that’s still a long way from the IDE generating the code that will make the failing test pass.

I haven’t seen many TDD practitioners do the equivalent of walking around the beach showing off their rock-hard abs, and that’s probably a good thing.

But just as we appreciate how a strong, muscular friend can easily handle any piece of furniture when he helps us move, product owners appreciate that we can always deliver any feature in a short amount of time.

Unfortunately, it just doesn’t score us any dates; for that we really do need to hit the gym.

How To Control Access To REST APIs

Exposing your data or application through a REST API is a wonderful way to reach a wide audience.

The downside of a wide audience, however, is that it’s not just the good guys who come looking.

Securing REST APIs

Security consists of three factors:

  1. Confidentiality
  2. Integrity
  3. Availability

In terms of Microsoft’s STRIDE approach, the security compromises we want to avoid with each of these are Information Disclosure, Tampering, and Denial of Service. The remainder of this post will only focus on Confidentiality and Integrity.

In the context of an HTTP-based API, Information Disclosure is applicable for GET methods and any other methods that return information. Tampering is applicable for PUT, POST, and DELETE.

Threat Modeling REST APIs

A good way to think about security is by looking at all the data flows. That’s why threat modeling usually starts with a Data Flow Diagram (DFD). In the context of a REST API, a close approximation to the DFD is the state diagram. For proper access control, we need to secure all the transitions.

The traditional way to do that is to specify restrictions at the level of URI and HTTP method. For instance, this is the approach that Spring Security takes. The problem with this approach, however, is that both the method and the URI are implementation choices.
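
For illustration, here is a sketch of what such restrictions can look like in Spring Security’s Java configuration (the paths and roles are made up):

import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class ApiSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
        // Note how everything is specified in terms of URIs and HTTP methods
        .antMatchers(HttpMethod.GET, "/orders/**").hasRole("USER")
        .antMatchers(HttpMethod.POST, "/orders/**").hasRole("CLERK")
        .anyRequest().authenticated();
  }

}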

URIs shouldn’t be known to anybody but the API designer/developer; the client will discover them through link relations.

Even the HTTP methods can be hidden until runtime with mature media types like Mason or Siren. This is great for decoupling the client and server, but now we have to specify our security constraints in terms of implementation details! This means only the developers can specify the access control policy.

That, of course, flies in the face of best security practices, where the access control policy is externalized from the code (so it can be reused across applications) and specified by a security officer rather than a developer. So how do we satisfy both requirements?

Authorizing REST APIs

I think the answer lies in the state diagram underlying the REST API. Remember, we want to authorize all transitions. Yes, a transition in an HTTP-based API is implemented using an HTTP method on a URI. But in REST, we shield the URI using a link relation. The link relation is very closely related to the type of action you want to perform.

The same link relation can be used from different states, so the link relation can’t be the whole answer. We also need the state, which is based on the representation returned by the REST server. This representation usually contains a set of properties and a set of links. We’ve got the links covered with the link relations, but we also need the properties.

In XACML terms, the link relation indicates the action to be performed, while the properties correspond to resource attributes.

Add to that the subject attributes obtained through the authentication process, and you have all the ingredients for making an XACML request!
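
Concretely, such a request could look something like the following sketch, using the XACML 3.0 XML format (the subject, action, and resource values are made up):

<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
    CombinedDecision="false" ReturnPolicyIdList="false">
  <!-- Subject attributes obtained through the authentication process -->
  <Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
        IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">alice</AttributeValue>
    </Attribute>
  </Attributes>
  <!-- The link relation indicates the action -->
  <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
        IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">edit</AttributeValue>
    </Attribute>
  </Attributes>
  <!-- The representation's properties become resource attributes -->
  <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
        IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">/orders/42</AttributeValue>
    </Attribute>
  </Attributes>
</Request>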

There are two places where such access control checks come into play. The first is obviously when receiving a request.

You should also check permissions on any links you want to put in the response. The links that the requester is not allowed to follow should be omitted from the response, so that the client can faithfully present the next choices to the user.
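
In code, that might look something like the following sketch (the Pdp, RequestContext, Link, and Representation types, and the buildRequest() and isPermit() methods, are all hypothetical):

public List<Link> permittedLinks(Representation representation, Pdp pdp) {
  List<Link> result = new ArrayList<>();
  for (Link link : representation.getLinks()) {
    // The link relation provides the action; the representation's
    // properties provide the resource attributes
    RequestContext request = buildRequest(link.getRel(), representation);
    if (pdp.decide(request).isPermit()) {
      result.add(link);
    }
  }
  return result;
}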

Using XACML For Authorizing REST APIs

I think the above shows that REST and XACML are a natural fit.

All the more reason to check out XACML if you haven’t already, especially XACML’s REST Profile and the forthcoming JSON Profile.

The Decorator Pattern

One design pattern that I don’t see being used very often is Decorator.

I’m not sure why this pattern isn’t more popular, as it’s quite handy.

The Decorator pattern allows one to add functionality to an object in a controlled manner. This works at runtime, even with statically typed languages!

The decorator pattern is an alternative to subclassing. Subclassing adds behavior at compile time, and the change affects all instances of the original class; decorating can provide new behavior at run-time for individual objects.

The Decorator pattern is a good tool for adhering to the open/closed principle.

Some examples may show the value of this pattern.

Example 1: HTTP Authentication

Imagine an HTTP client, for example one that talks to a RESTful service.

Some parts of the service are publicly accessible, but some require the user to log in. The RESTful service responds with a 401 Unauthorized status code when the client tries to access a protected resource.

Changing the client to handle the 401 leads to duplication, since every call could potentially require authentication. So we should extract the authentication code into one place. Where would that place be, though?

Here’s where the Decorator pattern comes in:

public class AuthenticatingHttpClient
    implements HttpClient {

  private final HttpClient wrapped;

  public AuthenticatingHttpClient(HttpClient wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public Response execute(Request request) {
    Response response = wrapped.execute(request);
    if (response.getStatusCode() == 401) {
      // The server demands authentication: log in and retry once
      authenticate();
      response = wrapped.execute(request);
    }
    return response;
  }

  protected void authenticate() {
    // ...
  }

}

A REST client now never has to worry about authentication, since the AuthenticatingHttpClient handles that.
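
Using the decorator is just a matter of wrapping whatever client you already have (the concrete DefaultHttpClient class here is hypothetical):

HttpClient client = new AuthenticatingHttpClient(new DefaultHttpClient());
Response response = client.execute(request);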

Example 2: Caching Authorization Decisions

OK, so the user has logged in, and the REST server knows her identity. It may decide to allow access to a certain resource to one person, but not to another.

In other words, it may implement authorization, perhaps using XACML. In that case, a Policy Decision Point (PDP) is responsible for deciding on access requests.

Checking permissions is often expensive, especially when the permissions become more fine-grained and the access policies more complex. Since access policies usually don’t change very often, this is a perfect candidate for caching.

This is another instance where the Decorator pattern may come in handy:

public class CachingPdp implements Pdp {

  private final Pdp wrapped;

  public CachingPdp(Pdp wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public ResponseContext decide(
      RequestContext request) {
    ResponseContext response = getCached(request);
    if (response == null) {
      // Cache miss: ask the wrapped PDP and remember its answer
      response = wrapped.decide(request);
      cache(request, response);
    }
    return response;
  }

  protected ResponseContext getCached(
      RequestContext request) {
    // ...
  }

  protected void cache(RequestContext request, 
      ResponseContext response) {
    // ...
  }

}

As you can see, the code is very similar to the first example, which is why we call this a pattern.

As you may have guessed from these two examples, the Decorator pattern is really useful for implementing cross-cutting concerns, like the security features of authentication, authorization, and auditing, but that’s certainly not the only place where it shines.

If you look carefully, I’m sure you’ll be able to spot many more opportunities for putting this pattern to work.

How To Start With Software Security – Part 2

Last time, I wrote about how an organization can get started with software security.

Today I will look at how to do that as an individual.

From Development To Secure Development

As a developer, I wasn’t always aware of the security implications of my actions.

Now that I’m the Engineering Security Champion for my project, I have to be.

It wasn’t an easy transition. The security field is vast and I keep learning something new almost every day. I read a number of books on security, some of which I reviewed on this site.

As an aspiring software craftsman, I realize that personal efforts are only half the story. The other half is the community of professionals.

Secure Development Communities

I’m lucky to work in a big organization, where such a community already exists.

EMC’s Product Security Office (PSO) provides me with a personal security adviser, maintains a security-related wiki, and operates a space on our internal collaboration environment.

If your organization doesn’t have something like our PSO, you can look elsewhere. (And if it does, you should look outside too!)

OWASP is a great place to start.

They actually have three sub-communities, one of which is for Builders.

But it’s also good to look at the other sub-communities, since they’re all related. Looking at things from the perspective of the others can be quite enlightening.

That’s also why it’s a good idea to attend a security conference, if you can. OWASP holds annual AppSec conferences in three regions. The RSA Conference is another good place to meet your peers.

If you can’t afford to attend a conference, you can always follow the security section of Stack Exchange or watch SecurityTube.

Contributing To The Community

So far I’ve talked about taking in information, but you shouldn’t forget to share your personal experiences as well.

You may think you know very little yet, but even then it’s valuable to share.

It helps to organize your thoughts, which is crucial when learning, and you may find that you gain insights from the comments that readers leave as well.

More to the point, there are many others out there who are getting started and who would benefit from seeing they are not alone.

Apart from posting to this blog, I also contribute to the EMC Developer Network, where I’m currently writing a series on XML and Security.

There are other ways to contribute as well. You could join or start an OWASP chapter, for instance.

What Do You Think?

How did you get started with software security? How do you keep up with the field? What communities are you part of? Please leave a comment.

How To Start With Software Security

The software security field sometimes feels a bit negative.

The focus is on things that went wrong and people are constantly told what not to do.

Build Security In

One often-heard piece of advice is that you cannot bolt security on as an afterthought: it has to be built in.

But how do we do that? I’ve written earlier about two approaches: Cigital’s Touchpoints and Microsoft’s Security Development Lifecycle (SDL).

The Touchpoints are good, but rather high-level and not so actionable for developers starting out with security. The SDL is also good, but rather heavyweight and difficult to adopt for smaller organizations.

The Software Assurance Maturity Model (SAMM)

We need a framework that we can ease into in an iterative manner. It should also provide concrete guidance for developers that don’t necessarily have a lot of background in the security field.

Enter OWASP’s SAMM:

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization.

SAMM assumes four business functions in developing software and assigns three security practices to each of those:

[Figure: SAMM’s twelve security practices, three for each of the four business functions]

For each practice, three maturity levels are defined, in addition to an implicit Level 0 where the practice isn’t performed at all. Each level has an objective and several activities to meet the objective.

To get a baseline of the current security status, you perform an assessment, which consists of answering questions about each of the practices. The result of an assessment is a scorecard. Comparing scorecards over time gives insight into evolving security capabilities.

With these building blocks in place, you can build a roadmap for improving your capabilities.

A roadmap consists of phases in which certain practices are improved so that they reach a higher level. SAMM even provides roadmap templates for specific types of organizations to get you started quickly.

What Do You Think?

Do you think the SAMM is actionable? Would it help your organization build out a roadmap for improving its security capabilities? Please leave a comment.

Securing HTTP-based APIs With Signatures

I work at EMC on a platform on top of which SaaS solutions can be built.

This platform has a RESTful HTTP-based API, just like a growing number of other applications.

With development frameworks like JAX-RS, it’s relatively easy to build such APIs.

It is not, however, easy to build them right.

Issues With Building HTTP-based APIs

The problem isn’t so much in getting the functionality out there. We know how to develop software and the available REST/HTTP frameworks and libraries make it easy to expose the functionality.

That’s only half the story, however. There are many more -ilities to consider.

The REST architectural style addresses some of those, like scalability and evolvability.

Many HTTP-based APIs today claim to be RESTful, but in fact are not. This means that they are not reaping all of the benefits that REST can bring.

I’ll be talking more about how to help developers meet all the constraints of the REST architectural style in future posts.

Today I want to focus on another non-functional aspect of APIs: security.

Security of HTTP-based APIs

In security, we care about the CIA triad: Confidentiality, Integrity, and Availability.

Availability of web services is not dramatically different from that of web applications, which is relatively well understood. We have our clusters, load balancers, and what not, and usually we are in good shape.

Confidentiality and integrity, on the other hand, both require proper authentication, and here matters get more interesting.

Authentication of HTTP-based APIs

For authentication in an HTTP world, it makes sense to look at HTTP Authentication.

This RFC (2617) describes Basic and Digest authentication. Both have their weaknesses, which is why you see many APIs use alternatives.

Luckily, these alternatives can use the same basic machinery defined in the RFC. This machinery includes status code 401 Unauthorized, and the WWW-Authenticate, Authentication-Info, and Authorization headers. Note that the Authorization header is unfortunately misnamed, since it’s used for authentication, not authorization.

The final piece of the puzzle is the custom authentication scheme. For example, Amazon S3 authentication uses the AWS custom scheme.
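
The scheme shows up in the Authorization header. For S3, a signed request carries a header along these lines (example values as given in Amazon’s documentation):

Authorization: AWS AKIAIOSFODNN7EXAMPLE:frJIUN8DYpKDtOLCwo//yllqDzg=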

Authentication of HTTP-based APIs Using Signatures

The AWS scheme relies on signatures. Other services, like EMC Atmos, use the same approach.

It is therefore good to see that a new IETF draft has been proposed to standardize the use of signatures in HTTP-based APIs.

Standardization enables the construction of frameworks and libraries, which will drive down the cost of implementing authentication and will make it easier to build more secure APIs.
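
The details differ per scheme, but the general shape is always the same: compute a MAC over a canonical form of the request using a shared secret, and send the result along with the request. Here is a minimal sketch in Java (real schemes prescribe the canonical string exactly; this one is simplified):

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {

  private final SecretKeySpec key;

  public RequestSigner(byte[] sharedSecret) {
    this.key = new SecretKeySpec(sharedSecret, "HmacSHA256");
  }

  // Sign a canonical form of the request; the server recomputes
  // the same value and compares it with the one sent by the client
  public String sign(String method, String uri, String date) throws Exception {
    String canonical = method + "\n" + uri + "\n" + date;
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(key);
    byte[] signature = mac.doFinal(canonical.getBytes(StandardCharsets.UTF_8));
    return Base64.getEncoder().encodeToString(signature);
  }

}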

What do you think?

If you’re in the HTTP API building and/or consuming business (and who isn’t these days), then please go ahead and read the draft and provide feedback.

I’m also interested in your experiences with building or consuming secure HTTP APIs. Please leave a comment on this post.

Is XACML Dead?

XACML is dead. Or so writes Forrester’s Andras Cser.

Before I take a critical look at the reasons underlying this claim, let me disclose that I’m a member of the OASIS committee that defines the XACML specification. So I may be a little biased.

Lack of broad adoption

The first reason for claiming XACML dead is the lack of adoption. Being a techie, I don’t see a lot of customers, so I have to assume Forrester knows better than me.

At last year’s XACML Seminar in the Netherlands, there were indeed not many people who actually used XACML, but the room was filled with people who were at least interested enough to pay to hear about practical experiences with XACML.

I also know that XACML is in use at large enterprises like Bank of America, Bell Helicopter, and Boeing, to name just some Bs. And the supplier side is certainly not the problem.

So there is some adoption, but I grant that it’s not broad.

Inability to serve the federated, extended enterprise

XACML was designed to meet the authorization needs of the monolithic enterprise where all users are managed centrally in AD.

I don’t understand this statement at all, as there is nothing in the XACML spec that depends on centrally managed users.

Especially in combination with SAML, XACML can handle federated scenarios perfectly fine.

In my current project, we’re using XACML in a multi-tenant environment where each tenant uses their own identity provider. No problem.

PDP does a lot of complex things that it does not inform the PEP about

The PDP is apparently supposed to tell the PEP why access is denied. I don’t get that: I’ve never seen an application that greyed out a button and included the text “You need the admin role to perform this operation”.

Maybe this is about testing access control policies. Or maybe I just don’t understand the problem. I’d love to learn more about this.

Not suitable for cloud and distributed deployment

I guess what they mean is that fine-grained access control doesn’t work well in high latency environments. If so, sure.

XACML doesn’t prescribe how fine-grained your policies have to be, however, so I can’t see how this could be XACML’s fault. That’s like blaming my keyboard for allowing me to type more characters than fit in a tweet.

Actually, I’d say that XACML works very well in the cloud. And with the recently approved REST profile and the upcoming JSON profile, XACML will be even better suited for cloud solutions.

Commercial support is non-existent

This is lack of adoption again.

BTW, absolute claims like “there is no software library with PEP support” turn you into an easy target. All it takes is one counter example to prove you wrong.

Refactoring and rebuilding existing in-house applications is not an option

This, I think, is the main reason for slow adoption: legacy applications create inertia. We see the same thing with SSO. Even today, there are EMC internal applications that require me to maintain separate credentials.

The problem is worse for authorization. Authentication is a one-time thing at the start of a session, but authorization happens all the time. There are simply more places in an application that require modification.

There may be some light at the end of the tunnel, however.

History shows that inertia can be overcome by a large enough force.

That force might be the changing threat landscape. We’ll see.

OAuth supports the mobile application endpoint in a lightweight manner

OAuth does well in the mobile space. One reason is that mobile apps usually provide focused functionality that doesn’t require fine-grained access control decisions. It remains to be seen whether that continues to be true as mobile apps get more advanced.

Of course, if all your access control needs can be implemented with one yes/no question, then using XACML is overkill. That doesn’t, however, mean there is no place for XACML in the many, many places where life is not that simple.

What do you think?

All in all, I’m certainly not convinced by Forrester’s claim that XACML is dead. Are you? If XACML were buried, what would you use instead?

Update: Others have joined in the discussion and confirmed that XACML is not dead:

  • Gary from XACML vendor Axiomatics
  • Danny from XACML vendor Dell
  • Anil from open source XACML implementation JBoss PicketBox
  • Ian from analyst Gartner

Update 2: More people joined the discussion. One is confused, one is confusing, and Forrester’s Eve Maler (of XML and UMA fame) backs her colleague.

Update 3: Another analyst joins the discussion: KuppingerCole doesn’t think XACML is dead either.

Update 4: CA keeps supporting XACML in their SiteMinder product.

How To Secure an Organization That Is Under Constant Attack

There have been many recent security incidents at well-respected organizations like the Federal Reserve, the US Energy Department, the New York Times, and the Wall Street Journal.

If these large organizations are incapable of keeping unwanted people off their systems, then who is?

The answer unfortunately is: not many. So we must assume our systems are compromised. Compromised is the new normal.

This has implications for our security efforts:

  1. We need to increase our detection capabilities
  2. We need to be able to respond quickly, preferably in an automated fashion, when we detect an intrusion

Increasing Intrusion Detection Capabilities with Security Analytics

There are usually many small signs that something fishy is going on when an intruder has compromised your network.

For instance, our log files might show that someone is logging in from an IP address in China instead of San Francisco. While that may be normal for our CEO, it’s very unlikely for her secretary.

Another example is when someone tries to access a system they normally don’t. This may be an indication of an intruder trying to escalate his privileges.

Most of us are currently unable to collect such small indicators into firm suspicions, but that is about to change with the introduction of Big Data Analytics technology.

RSA recently released a report that predicts that big data will play a big role in Security Information and Event Management (SIEM), network monitoring, Identity and Access Management (IAM), fraud detection, and Governance, Risk, and Compliance (GRC) systems.

RSA is investing heavily in Security Analytics to prevent and predict attacks, and so is IBM.

Quick, Automated Responses to Intrusion Detection with Risk-Adaptive Access Control

The information we extract from our big security data can be used to drive decisions. The next step is to automate those decisions and actions based on them.

Large organizations, with hundreds or even thousands of applications, have a large attack surface. They are also interesting targets and therefore must assume they are under attack multiple times a day.

Anything that is not automated is not going to scale.

One decision that can be automated is whether we grant someone access to a particular system or piece of data.

This dynamic access control based on risk information is what NIST calls Risk-Adaptive Access Control (RAdAC).

As I’ve shown before, RAdAC can be implemented using eXtensible Access Control Markup Language (XACML).

What do you think?

Is your organization ready to look at security analytics? What do you see as the major road blocks for implementing RAdAC?

Book review: Secure Programming with Static Analysis

One thing that should be part of every Security Development Lifecycle (SDL) is static code analysis.

This topic is explained in great detail in Secure Programming with Static Analysis.

Chapter 1, The Software Security Problem, explains why security is easy to get wrong and why typical methods for catching bugs aren’t effective for finding security vulnerabilities.

Chapter 2, Introduction to Static Analysis, explains that static analysis involves a software program checking the source code of another software program to find structural, quality, and security problems.

Chapter 3, Static Analysis as Part of Code Review, explains how static code analysis can be integrated into a security review process.

Chapter 4, Static Analysis Internals, describes how static analysis tools work internally and what trade-offs are made when building them.

This concludes the first part of the book that describes the big picture. Part two deals with pervasive security problems.

Chapter 5, Handling Input, describes how programs should deal with untrustworthy input.

Chapter 6, Buffer Overflow, and chapter 7, Bride of Buffer Overflow, deal with buffer overflows. These chapters are not as interesting for developers working with modern languages like Java or C#.

Chapter 8, Errors and Exceptions, talks about unexpected conditions and the link with security issues. It also handles logging and debugging.

Chapter 9, Web Applications, starts the third part of the book about common types of programs. This chapter looks at security problems specific to the Web and HTTP.

Chapter 10, XML and Web Services, discusses the security challenges associated with XML and with building up applications from distributed components.

Chapter 11, Privacy and Secrets, switches the focus from AppSec to InfoSec with an explanation of how to protect private information.

Chapter 12, Privileged Programs, continues with a discussion on how to write programs that operate with different permissions than the user.

The final part of the book is about gaining experience with static analysis tools.

Chapter 13, Source Code Analysis Exercises for Java, is a tutorial on how to use Fortify (a trial version of which is included with the book) on some sample Java projects.

Chapter 14, Source Code Analysis Exercises for C, does the same for C programs.

Rating: 5 out of 5

This book is very useful for anybody working with static analysis tools. Its description of the internals of such tools helps with understanding how to apply the tools best.

I like that the book is filled with numerous examples that show how the tools can detect a particular type of problem.

Finally, the book makes clear that any static analysis tool will give both false positives and false negatives. You should really understand security issues yourself to make good decisions. When you know how to do that, a static analysis tool can be a great help.

Using Cryptography in Java Applications

This post describes how to use the Java Cryptography Architecture (JCA) that allows you to use cryptographic services in your applications.

Java Cryptography Architecture Services

The JCA provides a number of cryptographic services, like message digests and signatures. These services are accessible through service-specific APIs, like MessageDigest and Signature. Cryptographic services abstract different algorithms. For digests, for instance, you could use MD5 or SHA1. You specify the algorithm as a parameter to the getInstance() method of the cryptographic service class:

MessageDigest digest = MessageDigest.getInstance("MD5");
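
You then feed the returned service object the data to work on. For a digest, for instance:

byte[] hash = digest.digest(input); // input is a byte[]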

You find the value of the parameter for your algorithm in the JCA Standard Algorithm Name Documentation. Some algorithms have parameters. For instance, an algorithm to generate a private/public key pair will take the key size as a parameter. You specify the parameter(s) using the initialize() method:

KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
generator.initialize(1024);

If you don’t call the initialize() method, some default value will be used, which may or may not be what you want. Unfortunately, the API for initialization is not 100% consistent across services. For instance, the Cipher class uses init() with an argument indicating encryption or decryption, while the Signature class uses initSign() for signing and initVerify() for verification.
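
For example (assuming key objects obtained elsewhere):

// Cipher uses init() with a mode parameter...
Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, secretKey);

// ...while Signature uses initSign() for signing
Signature signature = Signature.getInstance("SHA1withDSA");
signature.initSign(privateKey);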

Java Cryptography Architecture Providers

The JCA keeps your code independent from a particular cryptographic algorithm’s implementation through the provider system. Providers are ranked according to a preference order, which is configurable (see below). The best preference is 1, the next best is 2, etc. The preference order allows the JCA to select the best available provider that implements a given algorithm. Alternatively, you can request a specific provider in the second argument to getInstance():

Signature signature = Signature.getInstance("SHA1withDSA", "SUN");

The JRE comes with a bunch of providers from Oracle by default. However, due to historical export restrictions, these are not the most secure implementations. To get access to better algorithms and larger key sizes, install the Java Cryptography Extension Unlimited Strength Jurisdiction Policy Files. Update: Note that the above statement is true for the Oracle JRE. OpenJDK doesn’t have the same limitation.

Make Your Use of Cryptography Configurable

You should always make sure that the cryptographic services your application uses are configurable. If you do that, you can change the cryptographic algorithm and/or implementation without issuing a patch. This is particularly valuable when a new attack on an (implementation of an) algorithm becomes available.

The JCA makes it easy to configure the use of cryptography. The getInstance() method accepts both the name of the algorithm and the name of the provider implementing that algorithm. You should read both, and any values for the algorithm’s parameters, from some sort of configuration file. Also make sure you keep your code DRY and instantiate cryptographic services in a single place.

Check that the requested algorithm and/or provider are actually available. The getInstance() method throws NoSuchAlgorithmException when a given algorithm or provider is not available, so you should catch that. The safest option then is to fail and have someone make sure the system is configured properly. If you continue despite a configuration error, you may end up with a system that is less secure than required.

Note that Oracle recommends not specifying the provider. The reasons they provide are that not all providers may be available on all platforms, and that specifying a provider may mean that you miss out on optimizations. You should weigh those disadvantages against the risk of being vulnerable. Deploying specific providers with known characteristics with your application may neutralize the disadvantages that Oracle mentions.
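
A minimal sketch of what that single place could look like (the property names are made up):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.util.Properties;

public class DigestFactory {

  private final Properties config;

  public DigestFactory(Properties config) {
    this.config = config;
  }

  // The single place in the code base that instantiates digests
  public MessageDigest newDigest() {
    String algorithm = config.getProperty("digest.algorithm", "SHA-256");
    String provider = config.getProperty("digest.provider");
    try {
      return provider == null
          ? MessageDigest.getInstance(algorithm)
          : MessageDigest.getInstance(algorithm, provider);
    } catch (NoSuchAlgorithmException | NoSuchProviderException e) {
      // Fail fast rather than fall back to a potentially weaker default
      throw new IllegalStateException(
          "Cryptographic misconfiguration: " + algorithm, e);
    }
  }

}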

Adding Cryptographic Service Providers

The provider system is extensible, so you can add providers. For example, you could use the open source Bouncy Castle or the commercial RSA BSAFE providers.

In order to add a provider, you must make sure that its jar is available to the application. You can put it on the classpath for this purpose. Alternatively, you can make it an installed extension by placing it in the $JAVA_HOME/lib/ext directory, where $JAVA_HOME is the location of your JDK/JRE distribution. The major difference between the two approaches is that installed extensions are granted all permissions by default, whereas code on the classpath is not. This is significant when (part of) your code runs in a sandbox. Some services, like Cipher, require the provider jar to be signed.

The next step is to register the provider with the JCA provider system. The simplest way is to use Security.addProvider():

Security.addProvider(new BouncyCastleProvider());

You can also set the provider’s preference order by using the Security.insertProviderAt() method:

Security.insertProviderAt(new JsafeJCE(), 1);

One downside of this approach is that it couples your code to the provider, since you have to import the provider class. This may not be an important issue in a modular system like OSGi. Another thing to look out for is that code requires SecurityPermission to add a provider programmatically.

The provider can also be configured as part of your environment via static registration, by adding an entry to the java.security properties file (found in $JAVA_HOME/jre/lib/security/java.security):

security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=sun.security.provider.Sun

The property names in this file start with security.provider. and end with the provider’s preference. The property value is the fully qualified name of the class implementing Provider.

Implementing Your Own Cryptographic Service Provider

Don’t do it. You will get it wrong and be vulnerable to attacks.

Using Cryptographic Service Providers

The documentation for the provider should tell you what provider name to use as the second argument to getInstance(). For instance, Bouncy Castle uses BC, while RSA BSAFE uses JsafeJCE. Most providers have custom APIs as well as JCA conformant APIs. Do not use the custom APIs, since that will make it impossible to configure the algorithms and providers used.

Not All Algorithms and Implementations Are Created Equal

It’s important to note that different algorithms and implementations have different characteristics and that those may make them more or less suitable for your situation. For instance, some organizations will only allow algorithms and implementations that are FIPS 140-2 certified or are on the list of NSA Suite B cryptographic algorithms. Always make sure you understand your customer’s cryptographic needs and requirements.

Using JCA in an OSGi environment

The getInstance() method is a factory method that uses the Service Provider Interface (SPI). That is problematic in an OSGi world, since OSGi violates the SPI framework’s assumption that there is a single classpath. Another potential issue is that JCA requires some jars to be signed. If those jars are not valid OSGi bundles, you can’t run them through bnd to make them so, since that would make the signature invalid.

Fortunately, you can kill both birds with one stone. Put your provider jars on the classpath of your main program, that is, the program that starts the OSGi framework. Then export the provider package from the OSGi system bundle using the org.osgi.framework.system.packages.extra system property. This will make the system bundle export that package. Now you can simply use Import-Package on the provider package in your bundles.

There are other options for resolving these problems if you can’t use the above solution.
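
For example, to expose Bouncy Castle from the system bundle, the framework launch configuration could contain something like this (the exact package list depends on what your bundles need):

org.osgi.framework.system.packages.extra=org.bouncycastle.jce.provider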