How To Control Access To REST APIs

Exposing your data or application through a REST API is a wonderful way to reach a wide audience.

The downside of a wide audience, however, is that it’s not just the good guys who come looking.

Securing REST APIs

Security consists of three factors:

  1. Confidentiality
  2. Integrity
  3. Availability

In terms of Microsoft’s STRIDE approach, the security compromises we want to avoid with each of these are Information Disclosure, Tampering, and Denial of Service. The remainder of this post will only focus on Confidentiality and Integrity.

In the context of an HTTP-based API, Information Disclosure is applicable for GET methods and any other methods that return information. Tampering is applicable for PUT, POST, and DELETE.

Threat Modeling REST APIs

A good way to think about security is by looking at all the data flows. That’s why threat modeling usually starts with a Data Flow Diagram (DFD). In the context of a REST API, a close approximation to the DFD is the state diagram. For proper access control, we need to secure all the transitions.

The traditional way to do that is to specify restrictions at the level of URI and HTTP method. For instance, this is the approach that Spring Security takes. The problem with this approach, however, is that both the method and the URI are implementation choices.

URIs shouldn’t be known to anybody but the API designer/developer; the client will discover them through link relations.

Even the HTTP methods can be hidden until runtime with mature media types like Mason or Siren. This is great for decoupling the client and server, but now we have to specify our security constraints in terms of implementation details! This means only the developers can specify the access control policy.

That, of course, flies in the face of best security practices, where the access control policy is externalized from the code (so it can be reused across applications) and specified by a security officer rather than a developer. So how do we satisfy both requirements?

Authorizing REST APIs

I think the answer lies in the state diagram underlying the REST API. Remember, we want to authorize all transitions. Yes, a transition in an HTTP-based API is implemented using an HTTP method on a URI. But in REST, we shield the URI using a link relation. The link relation is very closely related to the type of action you want to perform.

The same link relation can be used from different states, so the link relation can’t be the whole answer. We also need the state, which is based on the representation returned by the REST server. This representation usually contains a set of properties and a set of links. We’ve got the links covered with the link relations, but we also need the properties.

In XACML terms, the link relation indicates the action to be performed, while the properties correspond to resource attributes.

Add to that the subject attributes obtained through the authentication process, and you have all the ingredients for making an XACML request!

There are two places where such access control checks come into play. The first is obviously when receiving a request.

You should also check permissions on any links you want to put in the response. Links that the requester is not allowed to follow should be omitted from the response, so that the client can faithfully present the next choices to the user.
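To make this concrete, here is a minimal sketch of link filtering. The `Link` and `Pdp` types are hypothetical stand-ins for a real media type and a real XACML PDP; note how the link relation doubles as the action in the access request:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class LinkFilterSketch {

  // Hypothetical hypermedia link: a relation plus a target URI
  record Link(String rel, String uri) { }

  // Hypothetical PDP: decides whether a subject may perform an action
  interface Pdp {
    boolean permit(String subject, String action);
  }

  // Keep only the links the requester is allowed to follow;
  // the link relation is used as the XACML action
  static List<Link> allowedLinks(Pdp pdp, String subject, List<Link> links) {
    List<Link> result = new ArrayList<>();
    for (Link link : links) {
      if (pdp.permit(subject, link.rel())) {
        result.add(link);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // Toy policy: admins may do anything; everybody may view and search
    Pdp pdp = (subject, action) ->
        "admin".equals(subject) || Set.of("self", "search").contains(action);
    List<Link> links = List.of(
        new Link("self", "/orders/42"),
        new Link("cancel", "/orders/42/cancel"));
    // alice is not an admin, so the "cancel" link is omitted
    List<Link> visible = allowedLinks(pdp, "alice", links);
    System.out.println(visible.get(0).rel());
  }
}
```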

Using XACML For Authorizing REST APIs

I think the above shows that REST and XACML are a natural fit.

All the more reason to check out XACML if you haven’t already, especially XACML’s REST Profile and the forthcoming JSON Profile.

The Decorator Pattern

One design pattern that I don’t see being used very often is Decorator.

I’m not sure why this pattern isn’t more popular, as it’s quite handy.

The Decorator pattern allows one to add functionality to an object in a controlled manner. This works at runtime, even with statically typed languages!

The decorator pattern is an alternative to subclassing. Subclassing adds behavior at compile time, and the change affects all instances of the original class; decorating can provide new behavior at run-time for individual objects.

The Decorator pattern is a good tool for adhering to the open/closed principle.

Some examples may show the value of this pattern.

Example 1: HTTP Authentication

Imagine an HTTP client, for example one that talks to a RESTful service.

Some parts of the service are publicly accessible, but some require the user to log in. The RESTful service responds with a 401 Unauthorized status code when the client tries to access a protected resource.

Changing the client to handle the 401 leads to duplication, since every call could potentially require authentication. So we should extract the authentication code into one place. Where would that place be, though?

Here’s where the Decorator pattern comes in:

public class AuthenticatingHttpClient
    implements HttpClient {

  private final HttpClient wrapped;

  public AuthenticatingHttpClient(HttpClient wrapped) {
    this.wrapped = wrapped;
  }

  public Response execute(Request request) {
    Response response = wrapped.execute(request);
    if (response.getStatusCode() == 401) {
      authenticate();
      response = wrapped.execute(request);
    }
    return response;
  }

  protected void authenticate() {
    // ...
  }

}

A REST client now never has to worry about authentication, since the AuthenticatingHttpClient handles that.

Example 2: Caching Authorization Decisions

OK, so the user has logged in, and the REST server knows her identity. It may decide to allow access to a certain resource to one person, but not to another.

In other words, it may implement authorization, perhaps using XACML. In that case, a Policy Decision Point (PDP) is responsible for deciding on access requests.

Checking permissions is often expensive, especially when the permissions become more fine-grained and the access policies more complex. Since access policies usually don’t change very often, this is a perfect candidate for caching.

This is another instance where the Decorator pattern may come in handy:

public class CachingPdp implements Pdp {

  private final Pdp wrapped;

  public CachingPdp(Pdp wrapped) {
    this.wrapped = wrapped;
  }

  public ResponseContext decide(
      RequestContext request) {
    ResponseContext response = getCached(request);
    if (response == null) {
      response = wrapped.decide(request);
      cache(request, response);
    }
    return response;
  }

  protected ResponseContext getCached(
      RequestContext request) {
    // ...
  }

  protected void cache(RequestContext request, 
      ResponseContext response) {
    // ...
  }

}

As you can see, the code is very similar to the first example, which is why we call this a pattern.

As you may have guessed from these two examples, the Decorator pattern is really useful for implementing cross-cutting concerns, like the security features of authentication, authorization, and auditing, but that’s certainly not the only place where it shines.

If you look carefully, I’m sure you’ll be able to spot many more opportunities for putting this pattern to work.

Is XACML Dead?

XACML is dead. Or so writes Forrester’s Andras Cser.

Before I take a critical look at the reasons underlying this claim, let me disclose that I’m a member of the OASIS committee that defines the XACML specification. So I may be a little biased.

Lack of broad adoption

The first reason for claiming XACML dead is the lack of adoption. Being a techie, I don’t see a lot of customers, so I have to assume Forrester knows better than me.

At last year’s XACML Seminar in the Netherlands, there were indeed not many people who actually used XACML, but the room was filled with people who were at least interested enough to pay to hear about practical experiences with XACML.

I also know that XACML is in use at large enterprises like Bank of America, Bell Helicopter, and Boeing, to name just some Bs. And the supplier side is certainly not the problem.

So there is some adoption, but I grant that it’s not broad.

Inability to serve the federated, extended enterprise

XACML was designed to meet the authorization needs of the monolithic enterprise where all users are managed centrally in AD.

I don’t understand this statement at all, as there is nothing in the XACML spec that depends on centrally managed users.

Especially in combination with SAML, XACML can handle federated scenarios perfectly fine.

In my current project, we’re using XACML in a multi-tenant environment where each tenant uses their own identity provider. No problem.

PDP does a lot of complex things that it does not inform the PEP about

The PDP is apparently supposed to tell the PEP why access is denied. I don’t get that: I’ve never seen an application that greyed out a button and included the text “You need the admin role to perform this operation”.

Maybe this is about testing access control policies. Or maybe I just don’t understand the problem. I’d love to learn more about this.

Not suitable for cloud and distributed deployment

I guess what they mean is that fine-grained access control doesn’t work well in high-latency environments. If so, sure.

XACML doesn’t prescribe how fine-grained your policies have to be, however, so I can’t see how this could be XACML’s fault. That’s like blaming my keyboard for allowing me to type more characters than fit in a tweet.

Actually, I’d say that XACML works very well in the cloud. And with the recently approved REST profile and the upcoming JSON profile, XACML will be even better suited for cloud solutions.

Commercial support is non-existent

This is lack of adoption again.

BTW, absolute claims like “there is no software library with PEP support” turn you into an easy target. All it takes is one counter example to prove you wrong.

Refactoring and rebuilding existing in-house applications is not an option

This, I think, is the main reason for slow adoption: legacy applications create inertia. We see the same thing with SSO. Even today, there are EMC internal applications that require me to maintain separate credentials.

The problem is worse for authorization. Authentication is a one-time thing at the start of a session, but authorization happens all the time. There are simply more places in an application that require modification.

There may be some light at the end of the tunnel, however.

History shows that inertia can be overcome by a large enough force.

That force might be the changing threat landscape. We’ll see.

OAuth supports the mobile application endpoint in a lightweight manner

OAuth does well in the mobile space. One reason is that mobile apps usually provide focused functionality that doesn’t require fine-grained access control decisions. It remains to be seen whether that continues to be true as mobile apps get more advanced.

Of course, if all your access control needs can be implemented with one yes/no question, then using XACML is overkill. That doesn’t, however, mean there is no place for XACML in the many, many places where life is not that simple.

What do you think?

All in all, I’m certainly not convinced by Forrester’s claim that XACML is dead. Are you? If XACML were buried, what would you use instead?

Update: Others have joined in the discussion and confirmed that XACML is not dead:

  • Gary from XACML vendor Axiomatics
  • Danny from XACML vendor Dell
  • Anil from open source XACML implementation JBoss PicketBox
  • Ian from analyst Gartner

Update 2: More people joined the discussion. One is confused, one is confusing, and Forrester’s Eva Mahler (of SGML and UMA fame) backs her colleague.

Update 3: Another analyst joins the discussion: KuppingerCole doesn’t think XACML is dead either.

Update 4: CA keeps supporting XACML in their SiteMinder product.

How To Secure an Organization That Is Under Constant Attack

There have been many recent security incidents at well-respected organizations like the Federal Reserve, the US Energy Department, the New York Times, and the Wall Street Journal.

If these large organizations are incapable of keeping unwanted people off their systems, then who is?

The answer unfortunately is: not many. So we must assume our systems are compromised. Compromised is the new normal.

This has implications for our security efforts:

  1. We need to increase our detection capabilities
  2. We need to be able to respond quickly, preferably in an automated fashion, when we detect an intrusion

Increasing Intrusion Detection Capabilities with Security Analytics

There are usually many small signs that something fishy is going on when an intruder has compromised your network.

For instance, our log files might show that someone is logging in from an IP address in China instead of San Francisco. While that may be normal for our CEO, it’s very unlikely for her secretary.

Another example is when someone tries to access a system they normally don’t. This may be an indication of an intruder trying to escalate their privileges.

Most of us are currently unable to combine such small indicators into firm suspicions, but that is about to change with the introduction of Big Data Analytics technology.

RSA recently released a report that predicts that big data will play a big role in Security Information and Event Management (SIEM), network monitoring, Identity and Access Management (IAM), fraud detection, and Governance, Risk, and Compliance (GRC) systems.

RSA is investing heavily in Security Analytics to prevent and predict attacks, and so is IBM.

Quick, Automated, Responses to Intrusion Detection with Risk-Adaptive Access Control

The information we extract from our big security data can be used to drive decisions. The next step is to automate those decisions and actions based on them.

Large organizations, with hundreds or even thousands of applications, have a large attack surface. They are also interesting targets and therefore must assume they are under attack multiple times a day.

Anything that is not automated is not going to scale.

One decision that can be automated is whether we grant someone access to a particular system or piece of data.

This dynamic access control based on risk information is what NIST calls Risk-Adaptive Access Control (RAdAC).

As I’ve shown before, RAdAC can be implemented using eXtensible Access Control Markup Language (XACML).
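As a sketch of the idea (not a real XACML policy), a risk score computed by security analytics can be fed into the access decision as an extra attribute. The names and threshold below are made up for illustration:

```java
public class RadacSketch {

  // Hypothetical risk-adaptive check: the static policy decision is
  // combined with a risk score supplied by security analytics.
  // In XACML terms, the risk score would be an environment attribute.
  static String decide(boolean policyPermits, double riskScore, double threshold) {
    if (!policyPermits) {
      return "Deny";
    }
    // Even a normally permitted request is denied when risk is elevated
    return riskScore <= threshold ? "Permit" : "Deny";
  }

  public static void main(String[] args) {
    System.out.println(decide(true, 0.2, 0.5)); // low risk: access granted
    System.out.println(decide(true, 0.9, 0.5)); // elevated risk: access denied
  }
}
```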

What do you think?

Is your organization ready to look at security analytics? What do you see as the major road blocks for implementing RAdAC?

XACML Vendor: NextLabs

This is the third in a series of posts where I interview XACML vendors. This time we talk to NextLabs.

Why does the world need XACML? What benefits do your customers realize?

Over the last 20 years, IT has focused on building walls around their networks and applications. Now, with cross-organizational collaboration, cloud, and mobile, we are finding that those walls are no longer adequate for protecting critical information.

The world needs XACML to protect critical information in today’s collaborative business and IT environment.

At NextLabs we focus on applying Extensible Access Control Markup Language (XACML) to information protection to enable our customers to accelerate global collaboration while simultaneously protecting their most sensitive intellectual property.

Using Attribute-Based Access Control (ABAC) and externalized authorization we can protect data based on its sensitivity, defined by attributes, across applications and systems. Traditional access control models such as Role-Based Access Control (RBAC) and Access Control Lists (ACLs) simply do not scale to address the information protection problem.

What products do you have in the XACML space?

NextLabs has taken an industry-solution approach to the market. We provide several industry-solutions for regulatory compliance, secure partner collaboration, and intellectual property protection.

Each solution is comprised of pre-built policy libraries that implement industry best-practices, pre-built policy-enforcement-points (PEPs) for critical enterprise applications, our Control Center Information Control Platform based on XACML, and pre-built reporting.

Control Center is our Information Control Platform. It has several components:

  • Control Center Server – the Control Center server includes our Policy Administration Point (PAP) and additional services necessary for information control use cases. These include:
    • Information Classification Services – a comprehensive set of services that automate information classification, such as content analysis, data tagging, and user-driven classification
    • Policy Development and Lifecycle Management Services – Services to govern and simplify the development and management of policy such as delegated administration, approval workflow, testing and validation, audit trail, versioning, and dictionary services. On top of this we provide Policy Studio, a graphical policy integrated development environment (IDE)
    • Policy Deployment and PDP Management Services – services that allow us to reliably deploy policies to distributed PDPs, even over the public internet
    • Audit and Reporting Services – role-based dashboards, analytics, and reporting to provide insights into information activity and policy compliance
  • Control Center Policy Controller – the Policy Controller is our policy-decision-point (PDP). We provide three different editions of the Policy Controller:
    • Endpoint Policy Controller – designed to run on laptops and desktops, even when disconnected
    • Server Policy Controller – designed to run co-located with a server-based application; can be run as a service/daemon or embedded into an application
    • Policy Controller Service – designed to run as a stand-alone PDP service in J2EE Application Server

NextLabs provides over a dozen pre-built Policy Enforcement Points (PEPs) for common applications and systems. These are separated into three product lines:

  • Entitlement Management – pre-built PEPs for server applications, including:
    • Document Management (Microsoft SharePoint, SAP Document Management)
    • SAP Enterprise Resource Planning
    • Product Lifecycle Management (SAP PLM, Dassault Enovia)
    • Collaboration (CIFS and NFS File Servers)
  • Collaborative Rights Management – Collaborative Rights Management (cRM) applies XACML to protect unstructured data (files)
  • Data Protection – Data Protection is a suite of endpoint PEPs for removable devices, networking, email applications, web meeting applications and unified communication applications

What versions of the spec do you support? What optional parts? What profiles?

We support the core 2.0 and 3.0 specifications as well as the SAML, EC-US and IPC profiles.

What sets your product apart from the competition?

At NextLabs we differentiate ourselves through comprehensive industry solutions and our focus on information protection.

XACML is a generic authorization standard and can be applied to many things. Making it useful to the business buyer requires significant work beyond the standard – resources need attributes (i.e. information needs to be classified), PEPs need to be built, obligations/advice need to be implemented, and policies need to be designed, developed, and tested.

We have addressed this solution gap to make XACML useful for protecting critical information, and that’s what sets us apart.

What customers use your product? What is your biggest deployment?

NextLabs works with leading companies in the Manufacturing, High-Tech, Aerospace and Defense, Chemical, Energy, and Industrial Equipment industries. These companies typically have very high-value or sensitive intellectual property, global operations and are subject to strict global regulations.

We have multiple deployments above 50,000 users and have a project that will soon reach 100,000 users.

We have a few webinars where you can hear how some of our customers like GE and Tyco benefitted from our solutions. Recently one of our customers, BAE Systems, was recognized by CIO magazine for their use of our product.

What programming languages do you support? Will you support the REST profile? And JSON?

We support Java, C#, C++, SOAP, and SAP ABAP. We plan to support the REST and JSON profiles in a future release.

Do you support OpenAz? Spring-Security? Other open source efforts?

NextLabs contributed the C++ implementation of OpenAz and also supports OpenAz in Java.

We are committed to open APIs for authorization since this is critical to the growth of the XACML market and will support any effort that moves the industry forward in this regard.

How easy is it to write a PEP for your product? And a PIP? How long does an implementation of your product usually take?

NextLabs provides over a dozen PEP products and pre-built PIP integrations, which eliminate the need to build PEPs or PIPs for many common commercial applications.

For a custom PEP or PIP, the time required depends on the nature of the application and the use case you are trying to support; it can vary from hours to weeks.

Installing the product only takes hours, but the time required to implement a solution to production will vary depending on the number and type of applications and the policy use cases.

Can your product be embedded (i.e. run in-process)?

Yes, our Policy Controller can be embedded into another application.

What optimizations have you made? Can you share performance numbers?

Latency introduced by external queries to Policy Information Points (PIPs) and by evaluating large numbers of policies is a concern for all customers.

We designed our architecture with the principle of a PDP that can run completely off-line – with the ability to make complex decisions without any network calls. This was a critical requirement for our endpoint products and has the benefit of eliminating latency associated with network roundtrips or external queries to PIPs.

To enable our off-line PDP we developed a patented policy deployment technology, called ICENet, which pre-evaluates multiple dimensions of policy when it is deployed to distributed PDPs.

99% of our policy queries are under 5 milliseconds, with most of those under 1 millisecond.

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted, systems and executed on your local system. Mobile code is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.


Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. Such a proof can be tied to the code; the combination is then called proof-carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof-carrying code where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, so other approaches remain necessary.


Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.


Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.
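As an illustration of such a filter, the sketch below uses the standard java.util.jar API to reject jars that contain unsigned entries. A real filter would additionally check the signers’ certificate chains against a trusted keystore:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarSignatureCheck {

  // Returns true only if every content entry in the jar is signed.
  // Checking the signers against a trusted keystore is left out here.
  static boolean allEntriesSigned(File jar) throws Exception {
    try (JarFile jarFile = new JarFile(jar, true)) { // true = verify signatures
      Enumeration<JarEntry> entries = jarFile.entries();
      while (entries.hasMoreElements()) {
        JarEntry entry = entries.nextElement();
        if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
          continue;
        }
        // Signers are only available after the entry has been fully read
        try (InputStream in = jarFile.getInputStream(entry)) {
          in.readAllBytes();
        }
        if (entry.getCodeSigners() == null) {
          return false;
        }
      }
      return true;
    }
  }

  public static void main(String[] args) throws Exception {
    // Demo: an unsigned jar created on the fly fails the check
    File tmp = File.createTempFile("demo", ".jar");
    tmp.deleteOnExit();
    try (JarOutputStream out = new JarOutputStream(new FileOutputStream(tmp))) {
      out.putNextEntry(new JarEntry("Hello.class"));
      out.write(new byte[] {(byte) 0xCA, (byte) 0xFE});
      out.closeEntry();
    }
    System.out.println(allEntriesSigned(tmp));
  }
}
```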

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.


Cages (Sandboxes)

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
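As a taste of what that looks like, here is a minimal sketch of a Security Manager that denies file writes. For illustration we call checkPermission directly rather than installing the manager with System.setSecurityManager; note also that the Security Manager has been deprecated in recent Java versions:

```java
import java.io.FilePermission;
import java.security.Permission;

public class CageSketch {

  // A minimal sandbox policy: deny all file writes, allow everything else
  static class NoWriteSecurityManager extends SecurityManager {
    @Override
    public void checkPermission(Permission perm) {
      if (perm instanceof FilePermission
          && perm.getActions().contains("write")) {
        throw new SecurityException("file writes are not allowed");
      }
      // all other permissions are granted
    }
  }

  public static void main(String[] args) {
    SecurityManager cage = new NoWriteSecurityManager();
    try {
      cage.checkPermission(new FilePermission("/tmp/x", "write"));
      System.out.println("write allowed");
    } catch (SecurityException e) {
      System.out.println("write blocked");
    }
  }
}
```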

Using a Layered XACML Architecture to Implement Retention

A previous post showed how the security principle of segmentation led to a small adaptation of the XACML architecture for use in the cloud.

This post shows how a similar adaptation may be required on-premise.

Segmentation of Retention and Regular Access Control Policies

Even when we don’t live in a cloud world, there may be reasons for segmentation. Take records management, for instance.

Any piece of data that is marked as a record, may not be deleted until after the end of the retention period (at which point it must be deleted).

This is an access control policy that clearly takes precedence over the regular policies.

A similar situation exists with legal holds.

While it’s certainly possible to achieve that with various policy sets and clever policy combining, the principle of segmentation encourages us to take a different approach. We would like to physically separate the policies into different layers, so that they can never interfere with each other.

Segmenting XACML Policies Using Layered Policy Decision Points

We can create a layered Policy Decision Point (PDP) that wraps smaller PDPs, each of which deals with a single type of access control policy.

The PDP with retention policies is asked for a decision first. When that decision is NotApplicable, the resource being accessed is not under retention, and the request is forwarded to the next PDP, which evaluates the regular access control policies.

The retention policies will probably require a PIP to look up resource attributes, like is-under-retention.
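The layered PDP itself is straightforward to sketch. The types below are hypothetical simplifications of the XACML request and response contexts:

```java
public class LayeredPdpSketch {

  enum Decision { PERMIT, DENY, NOT_APPLICABLE }

  interface Pdp {
    Decision decide(String resource, String action);
  }

  // Chain of Responsibility: ask the first PDP; fall through to the
  // next PDP only when the first one's policies don't apply
  static class LayeredPdp implements Pdp {
    private final Pdp first;
    private final Pdp next;

    LayeredPdp(Pdp first, Pdp next) {
      this.first = first;
      this.next = next;
    }

    public Decision decide(String resource, String action) {
      Decision decision = first.decide(resource, action);
      return decision == Decision.NOT_APPLICABLE
          ? next.decide(resource, action)
          : decision;
    }
  }

  public static void main(String[] args) {
    // Hypothetical retention layer: records may never be deleted
    Pdp retention = (resource, action) ->
        resource.startsWith("record:") && action.equals("delete")
            ? Decision.DENY
            : Decision.NOT_APPLICABLE;
    // Regular access control layer: permits everything in this sketch
    Pdp regular = (resource, action) -> Decision.PERMIT;

    Pdp pdp = new LayeredPdp(retention, regular);
    System.out.println(pdp.decide("record:42", "delete")); // retention wins
    System.out.println(pdp.decide("doc:7", "delete"));     // falls through
  }
}
```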

Segmentation Implementation Patterns

While the multi-tenant XACML architecture was an example of a dispatching mechanism, the layered architecture is an example of the Chain of Responsibility pattern.