How To Control Access To REST APIs

Exposing your data or application through a REST API is a wonderful way to reach a wide audience.

The downside of a wide audience, however, is that it’s not just the good guys who come looking.

Securing REST APIs

Security consists of three factors:

  1. Confidentiality
  2. Integrity
  3. Availability

In terms of Microsoft’s STRIDE approach, the security compromises we want to avoid with each of these are Information Disclosure, Tampering, and Denial of Service. The remainder of this post will only focus on Confidentiality and Integrity.

In the context of an HTTP-based API, Information Disclosure is applicable for GET methods and any other methods that return information. Tampering is applicable for PUT, POST, and DELETE.

Threat Modeling REST APIs

A good way to think about security is by looking at all the data flows. That’s why threat modeling usually starts with a Data Flow Diagram (DFD). In the context of a REST API, a close approximation to the DFD is the state diagram. For proper access control, we need to secure all the transitions.

The traditional way to do that is to specify restrictions at the level of URI and HTTP method. For instance, this is the approach that Spring Security takes. The problem with this approach, however, is that both the method and the URI are implementation choices.

URIs shouldn’t be known to anybody but the API designer/developer; the client will discover them through link relations.

Even the HTTP methods can be hidden until runtime with mature media types like Mason or Siren. This is great for decoupling the client and server, but now we have to specify our security constraints in terms of implementation details! This means only the developers can specify the access control policy.

That, of course, flies in the face of best security practices, where the access control policy is externalized from the code (so it can be reused across applications) and specified by a security officer rather than a developer. So how do we satisfy both requirements?

Authorizing REST APIs

I think the answer lies in the state diagram underlying the REST API. Remember, we want to authorize all transitions. Yes, a transition in an HTTP-based API is implemented using an HTTP method on a URI. But in REST, we shield the URI using a link relation. The link relation is very closely related to the type of action you want to perform.

The same link relation can be used from different states, so the link relation can’t be the whole answer. We also need the state, which is based on the representation returned by the REST server. This representation usually contains a set of properties and a set of links. We’ve got the links covered with the link relations, but we also need the properties.

In XACML terms, the link relation indicates the action to be performed, while the properties correspond to resource attributes.

Add to that the subject attributes obtained through the authentication process, and you have all the ingredients for making an XACML request!

There are two places where such access control checks come into play. The first is obviously when receiving a request.

You should also check permissions on any links you want to put in the response. The links that the requester is not allowed to follow should be omitted from the response, so that the client can faithfully present the next choices to the user.
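
To make that second check concrete, here is a minimal sketch. The Pdp, RequestContext, Link, Subject, and Resource types are illustrative assumptions, not an actual XACML library API; the point is the mapping from link relation to action and from representation properties to resource attributes.

import java.util.ArrayList;
import java.util.List;

public class LinkFilter {

  private final Pdp pdp;

  public LinkFilter(Pdp pdp) {
    this.pdp = pdp;
  }

  public List<Link> allowedLinks(Subject subject, Resource resource,
      List<Link> candidates) {
    List<Link> result = new ArrayList<Link>();
    for (Link link : candidates) {
      // The link relation is the action, the resource's properties are the
      // resource attributes, and the subject attributes come from authentication.
      RequestContext request = new RequestContext(link.getRelation(),
          resource.getProperties(), subject.getAttributes());
      if (pdp.decide(request).isPermit()) {
        result.add(link);
      }
    }
    return result;
  }

}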

Using XACML For Authorizing REST APIs

I think the above shows that REST and XACML are a natural fit.

All the more reason to check out XACML if you haven’t already, especially XACML’s REST Profile and the forthcoming JSON Profile.

The Decorator Pattern

One design pattern that I don’t see being used very often is Decorator.

I’m not sure why this pattern isn’t more popular, as it’s quite handy.

The Decorator pattern allows one to add functionality to an object in a controlled manner. This works at runtime, even with statically typed languages!

The decorator pattern is an alternative to subclassing. Subclassing adds behavior at compile time, and the change affects all instances of the original class; decorating can provide new behavior at run-time for individual objects.

The Decorator pattern is a good tool for adhering to the open/closed principle.

Some examples may show the value of this pattern.

Example 1: HTTP Authentication

Imagine an HTTP client, for example one that talks to a RESTful service.

Some parts of the service are publicly accessible, but some require the user to log in. The RESTful service responds with a 401 Unauthorized status code when the client tries to access a protected resource.

Changing the client to handle the 401 leads to duplication, since every call could potentially require authentication. So we should extract the authentication code into one place. Where would that place be, though?

Here’s where the Decorator pattern comes in:

public class AuthenticatingHttpClient
    implements HttpClient {

  private final HttpClient wrapped;

  public AuthenticatingHttpClient(HttpClient wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public Response execute(Request request) {
    Response response = wrapped.execute(request);
    if (response.getStatusCode() == 401) {
      authenticate();
      response = wrapped.execute(request);
    }
    return response;
  }

  protected void authenticate() {
    // ...
  }

}

A REST client now never has to worry about authentication, since the AuthenticatingHttpClient handles that.
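
For completeness, here is how a client would use the decorator; SimpleHttpClient is a made-up name standing in for whatever concrete HttpClient implementation performs the actual HTTP calls:

HttpClient client = new AuthenticatingHttpClient(new SimpleHttpClient());
Response response = client.execute(request);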

Example 2: Caching Authorization Decisions

OK, so the user has logged in, and the REST server knows her identity. It may decide to allow access to a certain resource to one person, but not to another.

In other words, it may implement authorization, perhaps using XACML. In that case, a Policy Decision Point (PDP) is responsible for deciding on access requests.

Checking permissions is often expensive, especially when the permissions become more fine-grained and the access policies more complex. Since access policies usually don’t change very often, this is a perfect candidate for caching.

This is another instance where the Decorator pattern may come in handy:

public class CachingPdp implements Pdp {

  private final Pdp wrapped;

  public CachingPdp(Pdp wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public ResponseContext decide(
      RequestContext request) {
    ResponseContext response = getCached(request);
    if (response == null) {
      response = wrapped.decide(request);
      cache(request, response);
    }
    return response;
  }

  protected ResponseContext getCached(
      RequestContext request) {
    // ... look up a previous response for this request; return null on a cache miss ...
    return null;
  }

  protected void cache(RequestContext request, 
      ResponseContext response) {
    // ... store the response for later re-use ...
  }

}

As you can see, the code is very similar to the first example, which is why we call this a pattern.

As you may have guessed from these two examples, the Decorator pattern is really useful for implementing cross-cutting concerns, like the security features of authentication, authorization, and auditing, but that’s certainly not the only place where it shines.

If you look carefully, I’m sure you’ll be able to spot many more opportunities for putting this pattern to work.

Is XACML Dead?

XACML is dead. Or so writes Forrester’s Andras Cser.

Before I take a critical look at the reasons underlying this claim, let me disclose that I’m a member of the OASIS committee that defines the XACML specification. So I may be a little biased.

Lack of broad adoption

The first reason for claiming XACML dead is the lack of adoption. Being a techie, I don’t see a lot of customers, so I have to assume Forrester knows better than me.

At last year’s XACML Seminar in the Netherlands, there were indeed not many people who actually used XACML, but the room was filled with people who were at least interested enough to pay to hear about practical experiences with XACML.

I also know that XACML is in use at large enterprises like Bank of America, Bell Helicopter, and Boeing, to name just some Bs. And the supplier side is certainly not the problem.

So there is some adoption, but I grant that it’s not broad.

Inability to serve the federated, extended enterprise

XACML was designed to meet the authorization needs of the monolithic enterprise where all users are managed centrally in AD.

I don’t understand this statement at all, as there is nothing in the XACML spec that depends on centrally managed users.

Especially in combination with SAML, XACML can handle federated scenarios perfectly fine.

In my current project, we’re using XACML in a multi-tenant environment where each tenant uses their own identity provider. No problem.

PDP does a lot of complex things that it does not inform the PEP about

The PDP is apparently supposed to tell the PEP why access is denied. I don’t get that: I’ve never seen an application that greyed out a button and included the text “You need the admin role to perform this operation”.

Maybe this is about testing access control policies. Or maybe I just don’t understand the problem. I’d love to learn more about this.

Not suitable for cloud and distributed deployment

I guess what they mean is that fine-grained access control doesn’t work well in high latency environments. If so, sure.

XACML doesn’t prescribe how fine-grained your policies have to be, however, so I can’t see how this could be XACML’s fault. That’s like blaming my keyboard for allowing me to type more characters than fit in a tweet.

Actually, I’d say that XACML works very well in the cloud. And with the recently approved REST profile and the upcoming JSON profile, XACML will be even better suited for cloud solutions.

Commercial support is non-existent

This is lack of adoption again.

BTW, absolute claims like “there is no software library with PEP support” turn you into an easy target. All it takes is one counter example to prove you wrong.

Refactoring and rebuilding existing in-house applications is not an option

This, I think, is the main reason for slow adoption: legacy applications create inertia. We see the same thing with SSO. Even today, there are EMC internal applications that require me to maintain separate credentials.

The problem is worse for authorization. Authentication is a one-time thing at the start of a session, but authorization happens all the time. There are simply more places in an application that require modification.

There may be some light at the end of the tunnel, however.

History shows that inertia can be overcome by a large enough force.

That force might be the changing threat landscape. We’ll see.

OAuth supports the mobile application endpoint in a lightweight manner

OAuth does well in the mobile space. One reason is that mobile apps usually provide focused functionality that doesn’t require fine-grained access control decisions. It remains to be seen whether that continues to be true as mobile apps get more advanced.

Of course, if all your access control needs can be implemented with one yes/no question, then using XACML is overkill. That doesn’t, however, mean there is no place for XACML in the many, many places where life is not that simple.

What do you think?

All in all, I’m certainly not convinced by Forrester’s claim that XACML is dead. Are you? If XACML were buried, what would you use instead?

Update: Others have joined in the discussion and confirmed that XACML is not dead:

  • Gary from XACML vendor Axiomatics
  • Danny from XACML vendor Dell
  • Anil from open source XACML implementation JBoss PicketBox
  • Ian from analyst Gartner

Update 2: More people joined the discussion. One is confused, one is confusing, and Forrester’s Eva Mahler (of SGML and UMA fame) backs her colleague.

Update 3: Another analyst joins the discussion: KuppingerCole doesn’t think XACML is dead either.

Update 4: CA keeps supporting XACML in their SiteMinder product.

How to Create Extensible Java Applications

Many applications benefit from being open to extension. This post describes two ways to implement such extensibility in Java.

Extensible Applications

Extensible applications are applications whose functionality can be extended without having to recompile them and sometimes even without having to restart them. This may happen by simply adding a jar to the classpath, or by a more involved installation procedure.

One example of an extensible application is the Eclipse IDE. It allows extensions, called plug-ins, to be installed so that new functionality becomes available. For instance, you could install a Source Code Management (SCM) plug-in to work with your favorite SCM.

As another example, imagine an implementation of the XACML specification for authorization. The “X” in XACML stands for “eXtensible” and the specification defines a number of extension points, like attribute and category IDs, combining algorithms, functions, and Policy Information Points. A good XACML implementation will allow you to extend the product by providing a module that implements the extension point.

Service Provider Interface

Oracle’s solution for creating extensible applications is the Service Provider Interface (SPI).

In this approach, an extension point is defined by an interface:

package com.company.application;

public interface MyService {
  // ...
}

You can find all extensions for such an extension point by using the ServiceLoader class:

public class Client {

  public void useService() {
    Iterator<MyService> services = ServiceLoader.load(
        MyService.class).iterator();
    while (services.hasNext()) {
      MyService service = services.next();
      // ... use service ...
    }
  }

}

An extension for this extension point can be any class that implements that interface:

package com.company.application.impl;

public class MyServiceImpl implements MyService {
  // ...
}

The implementation class must be publicly available and have a public no-arg constructor. However, that’s not enough for the ServiceLoader class to find it.

You must also create a file named after the fully qualified name of the extension point interface in META-INF/services. In our example, that would be:

META-INF/services/com.company.application.MyService

This file must be UTF-8 encoded, or ServiceLoader will not be able to read it. Each line of this file should contain the fully qualified name of one extension implementing the extension point, for instance:

com.company.application.impl.MyServiceImpl 

OSGi Services

The SPI approach described above only works when the extension point files are on the classpath.

In an OSGi environment, this is not the case. Luckily, OSGi has its own solution to the extensibility problem: OSGi services.

With Declarative Services, OSGi services are easy to implement, especially when using the annotations of Apache Felix Service Component Runtime (SCR):

@Service
@Component
public class MyServiceImpl implements MyService {
  // ...
}

With OSGi and SCR, it is also very easy to use a service:

@Component
public class Client {

  @Reference
  private MyService myService;

  protected void bindMyService(MyService bound) {
    myService = bound;
  }

  protected void unbindMyService(MyService bound) {
    if (myService == bound) {
      myService = null;
    }
  }

  public void useService() {
    // ... use myService ...
  }

}

Best of Both Worlds

So which of the two options should you choose? It depends on your situation, of course. When you’re in an OSGi environment, the choice should obviously be OSGi services. If you’re not in an OSGi environment, you can’t use those, so you’re left with SPI.

But what if you’re writing a framework or library and you don’t know whether your code will be used in an OSGi or classpath based environment?

You will want to serve as many uses of your library as possible, so the best would be to support both models. This can be done if you’re careful.

Note that adding a Declarative Services service component file like OSGI-INF/myServiceComponent.xml to your jar (which is what the SCR annotations end up doing when they are processed) will only work in an OSGi environment, but is harmless outside OSGi.

Likewise, the SPI service file will work in a traditional classpath environment, but is harmless in OSGi.

So the two approaches are actually mutually exclusive and in any given environment, only one of the two approaches will find anything. Therefore, you can write code that uses both approaches. It’s a bit of duplication, but it allows your code to work in both types of environments, so you can have your cake and eat it too.
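
A sketch of what such a dual-mode consumer could look like, building on the Client examples above (whether the SCR annotations are processed or ignored depends on the environment):

import java.util.Iterator;
import java.util.ServiceLoader;

@Component
public class ResilientClient {

  @Reference
  private MyService myService;

  protected void bindMyService(MyService bound) {
    myService = bound;
  }

  protected void unbindMyService(MyService bound) {
    if (myService == bound) {
      myService = null;
    }
  }

  public void useService() {
    MyService service = myService;
    if (service == null) {
      // Outside OSGi, Declarative Services never injects anything,
      // so fall back to the SPI lookup via ServiceLoader.
      Iterator<MyService> services = ServiceLoader.load(
          MyService.class).iterator();
      if (services.hasNext()) {
        service = services.next();
      }
    }
    if (service != null) {
      // ... use service ...
    }
  }

}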

XACML Vendor: NextLabs

This is the third in a series of posts where I interview XACML vendors. This time we talk to NextLabs.

Why does the world need XACML? What benefits do your customers realize?

Over the last 20 years IT has focused on building walls around their networks and applications. Now with cross-organizational collaboration, cloud and mobile we are finding that those walls are no longer relevant for protecting critical information.

The world needs XACML to protect critical information in today’s collaborative business and IT environment.

At NextLabs we focus on applying Extensible Access Control Markup Language (XACML) to information protection to enable our customers to accelerate global collaboration while simultaneously protecting their most sensitive intellectual property.

Using Attribute-Based Access Control (ABAC) and externalized authorization we can protect data based on its sensitivity, defined by attributes, across applications and systems. Traditional access control models such as Role-Based Access Control (RBAC) and Access Control Lists (ACLs) simply do not scale to address the information protection problem.

What products do you have in the XACML space?

NextLabs has taken an industry-solution approach to the market. We provide several industry-solutions for regulatory compliance, secure partner collaboration, and intellectual property protection.

Each solution is comprised of pre-built policy libraries that implement industry best-practices, pre-built policy-enforcement-points (PEPs) for critical enterprise applications, our Control Center Information Control Platform based on XACML, and pre-built reporting.

Control Center is our Information Control Platform. It has several components:

  • Control Center Server – the Control Center server includes our Policy Administration Point (PAP) and additional services necessary for information control use cases. These include:
    • Information Classification Services – a comprehensive set of services that automate information classification such as content-analysis, data tagging, and user-driven classification
    • Policy Development and Lifecycle Management Services – Services to govern and simplify the development and management of policy such as delegated administration, approval workflow, testing and validation, audit trail, versioning, and dictionary services. On top of this we provide Policy Studio, a graphical policy integrated development environment (IDE)
    • Policy Deployment and PDP Management Services – services that allow us to reliably deploy policies to distributed PDPs, even over the public internet
    • Audit and Reporting Services – role-based dashboards, analytics, and reporting to provide insights into information activity and policy compliance
  • Control Center Policy Controller – the Policy Controller is our policy-decision-point (PDP). We provide three different editions of the Policy Controller:
    • Endpoint Policy Controller – designed to run on laptops and desktops, even when disconnected
    • Server Policy Controller – designed to run co-located with server-based applications. It can run as a service/daemon or be embedded into an application
    • Policy Controller Service – designed to run as a stand-alone PDP service in a J2EE application server

NextLabs provides over a dozen pre-built Policy Enforcement Points (PEPs) for common applications and systems. These are separated into three product lines:

  • Entitlement Management – pre-built PEPs for server applications, including:
    • Document Management (Microsoft SharePoint, SAP Document Management)
    • SAP Enterprise Resource Planning
    • Product Lifecycle Management (SAP PLM, Dassault Enovia)
    • Collaboration (CIFS and NFS File Servers)
  • Collaborative Rights Management – Collaborative Rights Management (cRM) applies XACML to protect unstructured data (files)
  • Data Protection – Data Protection is a suite of endpoint PEPs for removable devices, networking, email applications, web meeting applications and unified communication applications

What versions of the spec do you support? What optional parts? What profiles?

We support the core 2.0 and 3.0 specifications as well as the SAML, EC-US and IPC profiles.

What sets your product apart from the competition?

At NextLabs we differentiate ourselves through comprehensive industry solutions and our focus on information protection.

XACML is a generic authorization standard and can be applied to many things. Making it useful to the business buyer requires significant work beyond the standard – resources need attributes (i.e. information needs to be classified), PEPs need to be built, obligations/advice need to be implemented, and policies need to be designed, developed, and tested.

We have addressed this solution gap to make XACML useful for protecting critical information, and that’s what sets us apart.

What customers use your product? What is your biggest deployment?

NextLabs works with leading companies in the Manufacturing, High-Tech, Aerospace and Defense, Chemical, Energy, and Industrial Equipment industries. These companies typically have very high-value or sensitive intellectual property, global operations and are subject to strict global regulations.

We have multiple deployments above 50,000 users and have a project that will soon reach 100,000 users.

We have a few webinars where you can hear how some of our customers like GE and Tyco benefitted from our solutions. Recently one of our customers, BAE Systems, was recognized by CIO magazine for their use of our product.

What programming languages do you support? Will you support the REST profile? And JSON?

We support Java, C#, C++, SOAP, and SAP ABAP. We plan to support the REST and JSON profiles in a future release.

Do you support OpenAz? Spring-Security? Other open source efforts?

NextLabs contributed the C++ implementation of OpenAz and also supports OpenAz in Java.

We are committed to open APIs for authorization since this is critical to the growth of the XACML market and will support any effort that moves the industry forward in this regard.

How easy is it to write a PEP for your product? And a PIP? How long does an implementation of your product usually take?

NextLabs provides over a dozen PEP products and pre-built PIP integrations, which eliminate the need to build PEPs or PIPs for many common commercial applications.

For custom PEPs or PIPs, the time required depends on the nature of the application and the use case you are trying to support. The time can vary from hours to weeks.

Installing the product only takes hours, but the time required to implement a solution to production will vary depending on the number and type of applications and the policy use cases.

Can your product be embedded (i.e. run in-process)?

Yes, our Policy Controller can be embedded into another application.

What optimizations have you made? Can you share performance numbers?

Latency introduced by external queries to Policy Information Points (PIPs) and by evaluating large numbers of policies is a concern for all customers.

We designed our architecture with the principle of a PDP that can run completely off-line – with the ability to make complex decisions without any network calls. This was a critical requirement for our endpoint products and has the benefit of eliminating latency associated with network roundtrips or external queries to PIPs.

To enable our off-line PDP we developed a patented policy deployment technology, called ICENet, which pre-evaluates multiple dimensions of policy when it is deployed to distributed PDPs.

99% of our policy queries are under 5 milliseconds, with most of those under 1 millisecond.

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted, systems and executed on your local system. Mobile code is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof could be tied to the code, and the combination is then called proof carrying code.

After download, the code could be checked against the proof by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof carrying code where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, and other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.
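
A minimal sketch of the verification side, using only the standard java.util.jar and java.security APIs; the trusted-signer check is left as a placeholder, and a real implementation needs more care (for instance, around which entries may legitimately be unsigned):

import java.io.InputStream;
import java.security.CodeSigner;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarSignatureChecker {

  public boolean isSignedByTrustedSource(String jarPath) throws Exception {
    JarFile jar = new JarFile(jarPath, true); // true = verify signatures
    try {
      byte[] buffer = new byte[8192];
      Enumeration<JarEntry> entries = jar.entries();
      while (entries.hasMoreElements()) {
        JarEntry entry = entries.nextElement();
        if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
          continue; // the signature files themselves are not signed
        }
        // Reading an entry to the end triggers its signature verification;
        // a SecurityException is thrown if the entry was tampered with.
        InputStream in = jar.getInputStream(entry);
        try {
          while (in.read(buffer) != -1) {
            // just consume the entry
          }
        } finally {
          in.close();
        }
        CodeSigner[] signers = entry.getCodeSigners();
        if (signers == null || !isTrusted(signers)) {
          return false; // unsigned entry or untrusted signer
        }
      }
      return true;
    } finally {
      jar.close();
    }
  }

  protected boolean isTrusted(CodeSigner[] signers) {
    // ... compare the signers' certificates against a set of trusted certificates ...
    return false;
  }

}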

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
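
To give a flavor of the idea, here is a minimal sketch using the standard java.security APIs. It runs a piece of code with an empty permission set; the restrictions only take effect when a SecurityManager is installed, and a real sandbox needs considerably more than this:

import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.CodeSource;
import java.security.Permissions;
import java.security.PrivilegedAction;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

public class Sandbox {

  public void runUntrusted(final Runnable untrustedCode) {
    // A protection domain with an empty permission collection:
    // the sandboxed code may not read files, open sockets, etc.
    Permissions noPermissions = new Permissions();
    ProtectionDomain domain = new ProtectionDomain(
        new CodeSource(null, (Certificate[]) null), noPermissions);
    AccessControlContext sandbox =
        new AccessControlContext(new ProtectionDomain[] { domain });

    // The restrictions are only enforced when a SecurityManager is installed,
    // e.g. via System.setSecurityManager(new SecurityManager()).
    AccessController.doPrivileged(new PrivilegedAction<Void>() {
      @Override
      public Void run() {
        untrustedCode.run();
        return null;
      }
    }, sandbox);
  }

}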

Using a Layered XACML Architecture to Implement Retention

A previous post showed how the security principle of segmentation led to a small adaptation of the XACML architecture for use in the cloud.

This post shows how a similar adaptation may be required on-premise.

Segmentation of Retention and Regular Access Control Policies

Even when we don’t live in a cloud world, there may be reasons for segmentation. Take records management, for instance.

Any piece of data that is marked as a record may not be deleted until after the end of the retention period (at which point it must be deleted).

This is an access control policy that clearly takes precedence over the regular policies.

A similar situation exists with legal holds.

While it’s certainly possible to achieve that with various policy sets and clever policy combining, the principle of segmentation encourages us to take a different approach. We would like to physically separate the policies into different layers, so that they can never interfere with each other.

Segmenting XACML Policies Using Layered Policy Decision Points

We can create a layered Policy Decision Point (PDP) that wraps smaller PDPs that each deal with a single type of access control policy.

The PDP with retention policies is asked for a decision first. When the decision is NotApplicable, the resource being accessed is not under retention, and the request is forwarded to the next PDP, which uses regular access control policies.

The retention policies will probably require a PIP to look up resource attributes, like is-under-retention.
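
Reusing the Pdp interface from the Decorator post above (the isNotApplicable() check on the response is an assumption about its API), a layered PDP could look like this:

public class LayeredPdp implements Pdp {

  private final Pdp retentionPdp;
  private final Pdp regularPdp;

  public LayeredPdp(Pdp retentionPdp, Pdp regularPdp) {
    this.retentionPdp = retentionPdp;
    this.regularPdp = regularPdp;
  }

  @Override
  public ResponseContext decide(RequestContext request) {
    // Retention and legal hold policies take precedence.
    ResponseContext response = retentionPdp.decide(request);
    if (response.isNotApplicable()) {
      // The resource is not under retention, so the regular
      // access control policies get to decide.
      response = regularPdp.decide(request);
    }
    return response;
  }

}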

Segmentation Implementation Patterns

While the multi-tenant XACML architecture was an example of a dispatching mechanism, the layered architecture is an example of the Chain of Responsibility pattern.

Supporting Multiple XACML Representations

We’re in the process of registering an XML media type for the eXtensible Access Control Markup Language (XACML). Simultaneously, the XACML Technical Committee is working on a JSON format.

Both media types are useful in the context of another committee effort, the REST profile. This post explains what benefit these profiles will bring once approved, and how to support them in clients and servers.

Media Types Support Content Negotiation

With the REST profile, any application can communicate with a Policy Decision Point (PDP) in a RESTful manner. The media types make it possible to communicate with such a PDP in a manner that is most convenient for the client, using a process called content negotiation.

For instance, a web application that is mainly implemented in JavaScript may prefer to use JSON for communication with the PDP, to avoid having to bring in infrastructure to deal with XML.

Content negotiation is not just a convenience feature, however. It also facilitates evolution.

A server with many clients that understand 2.0 may start also serving 3.0, for instance. The older clients stay functional using 2.0, whereas newer clients can communicate in 3.0 syntax with the same server.

This avoids having to upgrade all the clients at the same time as the server.

So how does a server that supports multiple versions and/or formats know which one to serve to a particular client? The answer is the Accept HTTP header. For instance, a client can send Accept: application/xacml+xml; version=2.0 to get an XACML 2.0 XML format, or Accept: application/xacml+json; version=3.0 to get an XACML 3.0 JSON answer.

The value for the Accept header is a list of media types that are acceptable to the client, in decreasing order of precedence. For instance, a new client could prefer 3.0, but still work with older servers that only support 2.0 by sending Accept: application/xacml+xml; version=3.0, application/xacml+xml; version=2.0.
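
As a small illustration, this is all it takes on the client side with plain HttpURLConnection; the PDP endpoint URL and the request body are assumptions, while the media types are the ones discussed above:

import java.net.HttpURLConnection;
import java.net.URL;

public class PdpClient {

  public HttpURLConnection openDecisionRequest(URL pdpEndpoint) throws Exception {
    HttpURLConnection connection = (HttpURLConnection) pdpEndpoint.openConnection();
    connection.setRequestMethod("POST");
    // Prefer XACML 3.0 XML, but fall back to 2.0 for older servers.
    connection.setRequestProperty("Accept",
        "application/xacml+xml; version=3.0, application/xacml+xml; version=2.0");
    connection.setRequestProperty("Content-Type",
        "application/xacml+xml; version=3.0");
    connection.setDoOutput(true);
    // ... write the XACML request to connection.getOutputStream() ...
    return connection;
  }

}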

Supporting Multiple Versions and Formats

So there is value for both servers and clients to support multiple versions and/or formats. Now how does one go about implementing this? The short answer is: using indirection.

The longer answer is to make an abstraction for the version/format combination. We’ll dub this abstraction a representation.

For instance, an XACML request is really not much more than a collection of categorized attributes, while a response is basically a collection of results.

Instead of working with, say, the XACML 3.0 XML form of a request, the client or server code should work with the abstract representation. For each version/format combination, you then add a parser and a builder.

The parser reads the concrete syntax and creates the abstract representation from it. Conversely, the builder takes the abstract representation and converts it to the desired concrete syntax.
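
A sketch of what this abstraction could look like; RequestRepresentation stands for the abstract request described above, and all names are illustrative rather than an existing XACML API:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

interface RequestParser {
  RequestRepresentation parse(InputStream source) throws IOException;
}

interface RequestBuilder {
  void build(RequestRepresentation representation, OutputStream sink)
      throws IOException;
}

class Representations {

  // The key is the media type, including the version parameter,
  // e.g. "application/xacml+xml; version=3.0".
  private final Map<String, RequestParser> parsers =
      new HashMap<String, RequestParser>();
  private final Map<String, RequestBuilder> builders =
      new HashMap<String, RequestBuilder>();

  void register(String mediaType, RequestParser parser, RequestBuilder builder) {
    parsers.put(mediaType, parser);
    builders.put(mediaType, builder);
  }

  RequestParser parserFor(String contentType) {
    return parsers.get(contentType);
  }

  RequestBuilder builderFor(String acceptedType) {
    return builders.get(acceptedType);
  }

}

Supporting a new version or format then amounts to registering one more parser/builder pair; nothing else changes.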

In many cases, you can re-use parts of the parsers and builders between representations. For instance, all the XML formats of XACML have in common that they require XML parsing/serialization.

In a design like this, no code ever needs to be modified when a new version of the specification or a new serialization format comes out. All you have to do is add a parser and a builder, and all the other code can stay the way it is.

The only exception is when a new version introduces new capabilities and your code wants to use those. In that case, you probably must also change the abstract representation to accommodate the new functionality.

XACML In The Cloud

The eXtensible Access Control Markup Language (XACML) is the de facto standard for authorization.

The specification defines an architecture that relates the different components that make up an XACML-based system.

This post explores a variation on the standard architecture that is better suited for use in the cloud.

Authorization in the Cloud

In cloud computing, multiple tenants share the same resources that they reach over a network. The entry point into the cloud must, of course, be protected using a Policy Enforcement Point (PEP).

Since XACML implements Attribute-Based Access Control (ABAC), we can use an attribute to indicate the tenant, and use that attribute in our policies.

We could, for instance, use the following standard attribute, which is defined in the core XACML specification: urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier.

This identifier indicates the security domain of the subject. It identifies the administrator and policy that manages the name-space in which the subject id is administered.

Using this attribute, we can target policies to the right tenant.

Keeping Policies For Different Tenants Separate

We don’t want to mix policies for different tenants.

First of all, we don’t want a change in policy for one tenant to ever be able to affect a different tenant. Keeping those policies separate is one way to ensure that can never happen.

We can achieve the same goal by keeping all policies together and carefully writing top-level policy sets. But we are better off employing the security best practice of segmentation and keeping policies for different tenants separate, in case there is ever a problem with those top-level policies or with the Policy Decision Point (PDP) evaluating them (defense in depth).

Multi-tenant XACML Architecture

We can use the composite pattern to implement a PDP that our cloud PEP can call.

This composite PDP will extract the tenant attribute from the request, and forward the request to a tenant-specific Context Handler/PDP/PIP/PAP system based on the value of the tenant attribute.

We call this composite PDP the Multi-tenant PDP. It uses a component called Tenant-PDP Provider that is responsible for looking up the correct PDP based on the tenant attribute.
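
Using the Pdp interface from the earlier sketches (the attribute lookup method is an assumption), the Multi-tenant PDP boils down to a simple dispatcher:

public class MultiTenantPdp implements Pdp {

  private static final String TENANT_ATTRIBUTE =
      "urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier";

  private final TenantPdpProvider tenantPdpProvider;

  public MultiTenantPdp(TenantPdpProvider tenantPdpProvider) {
    this.tenantPdpProvider = tenantPdpProvider;
  }

  @Override
  public ResponseContext decide(RequestContext request) {
    // Extract the tenant from the request and forward the request
    // to that tenant's own PDP.
    String tenant = request.getSubjectAttribute(TENANT_ATTRIBUTE);
    return tenantPdpProvider.pdpFor(tenant).decide(request);
  }

}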

XACML Vendor: Axiomatics

This is the second in a series of posts where I interview XACML vendors. This time it’s Axiomatics’ turn. Their CTO Erik Rissanen is editor of the XACML 3.0 specification.

Why does the world need XACML? What benefits do your customers realize?

The world needs a standardized way to externalize authorization processing from the rest of the application logic – this is where the XACML standard comes in. Customers have different requirements for implementing externalized authorization and, therefore, can derive different benefits.

Here are some of the key benefits we have seen for customers:

  • The ability to share sensitive data with customers, partners and supply chain members
  • Implement fine grained authorization at every level of the application – presentation, application, middleware and data tiers
  • Deploy applications with clearly audit-able access control
  • Build and deploy applications and services faster than the competition
  • Move workloads more easily to the most efficient compute, storage or data capacity
  • Protect access to applications and resources regardless of where they are hosted
  • Implement access control consistently across all layers of an application as well as across application environments deployed on different platforms
  • Exploit dynamic access controls that are much more flexible than roles

What products do you have in the XACML space?

Axiomatics has three core products today:

  • The Axiomatics Policy Server which is a modular XACML-driven authorization server. It fully implements XACML 2.0 and XACML 3.0 and respects the XACML architecture.
  • The Axiomatics Policy Auditor which is a web-based product that administrators and business users alike can use to analyze XACML policies to identify security gaps or create a list of entitlements. Generally, the auditor helps answer the question “How can an access be granted?”
  • The Axiomatics Reverse Query takes on a novel approach to authorization. Where one typically creates binary requests (Can Alice do this?) and the Axiomatics Policy Server would reply with a Yes or No, the Axiomatics Reverse Query helps invert the process to tackle the list question. We have noticed that our customers sometimes want to know the list of users that have access to an application or the list of resources a given user can access. This is what we call the list question or reverse querying.
    The Axiomatics Reverse Query is an SDK that requires integration with a given application. With this in mind, Axiomatics engineering have developed extra glue / integration layers to plug into target environments and products. For instance, Axiomatics will release shortly the Axiomatics Reverse Query for Oracle Virtual Private Database. Axiomatics also uses the SDK to drive authorization inside Windows Server 2012. And there are many more integration options we have yet to explore.

In addition, Axiomatics has now released a free tool and a new language called ALFA, the Axiomatics Language for Authorization. ALFA is a lightweight version of XACML with shorthand notations. It borrows much of its syntax from programming languages developers are most familiar with e.g. Java and C#. The tool is a free plugin for the Eclipse IDE which lets developers author ALFA using the usual Eclipse features such as syntax checking and auto-complete. The plugin eventually generates XACML 3.0 conformant policies on the fly from the ALFA the developers write. Axiomatics published a video on its YouTube channel showing how to use the tool.

What versions of the spec do you support? What optional parts? What profiles?

Axiomatics fully supports XACML 2.0 and XACML 3.0 including all optional profiles as specified in our attestation email.

What sets your product(s) apart from the competition?

Axiomatics has historically been what we could call a pure play XACML vendor. This reflects our dedication to the standard and the fact that Axiomatics implements the XACML core and all profiles – no other vendor has adopted such a comprehensive strategy. Furthermore, Axiomatics only uses the XACML policy language, rather than attempting to convert between XACML and one or more proprietary, legacy policy language formats. The comprehensiveness of the XACML policy language gives customers the most flexibility – as well as interoperability – across a multitude of applications and usage scenarios.

This also made Axiomatics a very generic solution for all things fine-grained authorization. This means the Axiomatics solution can be applied to any type of application, in particular .NET or J2SE/J2EE applications but also increasingly COTS such as SharePoint and databases such as Oracle VPD.

Axiomatics also leverages the key benefits of the XACML architecture to provide a very modular set of products. This means our core engine can be plugged into a various set of frameworks extremely easily: the authorization engine can be embedded or exposed as a web service (SOAP, REST, Thrift…). It also means our products scale extremely well and allow for a single point of management with literally hundreds of decision points and as many enforcement points. This makes our product the fastest, most elegant approach to enterprise authorization.

Axiomatics’ auditing capabilities are quite unique too: with the Policy Auditor, it is possible to know what could possibly happen, rather than simply auditing what did actually happen. This means it is easier than ever to produce reports that will satisfy auditors that the enterprise is correctly protected.

Lastly, Axiomatics has over six years of experience in the area and is always listening to its customers. As a result, new products have been designed to better address customer needs. One such example is our Axiomatics Reverse Query, which reverses the authorization process to be able to tackle a new series of authorization requirements our customers in the financial sector had. Instead of getting yes/no answers, these customers wanted a list of resources a user can access (e.g. a list of bank accounts) or a list of employees who can view a given piece of information. By actively listening to our customers we are able to deliver new innovative products that best match their needs.

What customers use your product(s)? What is your biggest deployment?

Axiomatics has several Fortune 50 customers. Some of the world’s largest banks and enterprises are Axiomatics customers. Axiomatics customers are based in the US and Europe mainly. One famous customer where Axiomatics is used intensively is PayPal. It is probably Axiomatics’ current biggest deployment in terms of transactions.

A US-based bank has also deployed Axiomatics products across three continents in order to protect trading applications.

What programming languages do you support? Will you support the upcoming REST and JSON profiles?

Axiomatics supports Java and C#. Axiomatics has been used in customer deployments with other languages such as Python.

Axiomatics is active in defining the new REST profile of the XACML TC and will try to align with it as much as possible. Axiomatics is also leading the design of a JSON-based PEP-PDP interaction. JSON as well as Thrift are likely to be the next communication protocols supported.

Do you support OpenAz? Spring Security? Other open source efforts?

Axiomatics does not currently support OpenAZ but has been watching the specification in order to eventually take part. Axiomatics already supports Spring Security. In addition, there is a new open source initiative aimed at defining a standard PEP API which Axiomatics and other vendors are taking part in.

How easy is it to write a PEP for your product(s)? And a PIP? How long does an implementation of your product(s) usually take?

Should customers decide to write a custom PEP rather than use an off-the-shelf PEP, they can use a Java or C# SDK to quickly write PEPs. Axiomatics has published a video explaining how to write a PEP in 5 minutes and 20 lines of code.

An implementation of our product can take from 1 week to 3 months or more depending on the customer requirements, the complexity of the desired architecture, and the number of integration points.

Can your product(s) be embedded (i.e. run in-process)?

The Axiomatics PDP can be embedded. Customers sometimes choose this approach to achieve even greater levels of performance.

What optimizations have you made? Can you share performance numbers?

There are many factors, such as the number of policies, the complexity of policies, and the number of PIP look-ups, that affect performance. One of our customers shared the result of their internal product evaluation, in which they reached 30,000 requests per second.

The Axiomatics PDP is also used to secure transactions for several hundred million users and protect the medical records of all 9 million Swedish citizens.