How To Control Access To REST APIs

Exposing your data or application through a REST API is a wonderful way to reach a wide audience.

The downside of a wide audience, however, is that it’s not just the good guys who come looking.

Securing REST APIs

Security consists of three factors:

  1. Confidentiality
  2. Integrity
  3. Availability

In terms of Microsoft’s STRIDE approach, the security compromises we want to avoid are, respectively, Information Disclosure, Tampering, and Denial of Service. The remainder of this post focuses only on Confidentiality and Integrity.

In the context of an HTTP-based API, Information Disclosure is applicable for GET methods and any other methods that return information. Tampering is applicable for PUT, POST, and DELETE.

Threat Modeling REST APIs

A good way to think about security is by looking at all the data flows. That’s why threat modeling usually starts with a Data Flow Diagram (DFD). In the context of a REST API, a close approximation to the DFD is the state diagram. For proper access control, we need to secure all the transitions.

The traditional way to do that is to specify restrictions at the level of URI and HTTP method. For instance, this is the approach that Spring Security takes. The problem with this approach, however, is that both the method and the URI are implementation choices.

URIs shouldn’t be known to anybody but the API designer/developer; the client will discover them through link relations.

Even the HTTP methods can be hidden until runtime with mature media types like Mason or Siren. This is great for decoupling the client and server, but now we have to specify our security constraints in terms of implementation details! This means only the developers can specify the access control policy.

That, of course, flies in the face of best security practices, where the access control policy is externalized from the code (so it can be reused across applications) and specified by a security officer rather than a developer. So how do we satisfy both requirements?

Authorizing REST APIs

I think the answer lies in the state diagram underlying the REST API. Remember, we want to authorize all transitions. Yes, a transition in an HTTP-based API is implemented using an HTTP method on a URI. But in REST, we shield the URI using a link relation. The link relation is very closely related to the type of action you want to perform.

The same link relation can be used from different states, so the link relation can’t be the whole answer. We also need the state, which is based on the representation returned by the REST server. This representation usually contains a set of properties and a set of links. We’ve got the links covered with the link relations, but we also need the properties.

In XACML terms, the link relation indicates the action to be performed, while the properties correspond to resource attributes.

Add to that the subject attributes obtained through the authentication process, and you have all the ingredients for making an XACML request!

There are two places where such access control checks come into play. The first is obviously when receiving a request.

You should also check permissions on any links you want to put in the response. The links that the requester is not allowed to follow should be omitted from the response, so that the client can faithfully present the next choices to the user.

Using XACML For Authorizing REST APIs

I think the above shows that REST and XACML are a natural fit.

All the more reason to check out XACML if you haven’t already, especially XACML’s REST Profile and the forthcoming JSON Profile.

Securing HTTP-based APIs With Signatures

I work at EMC on a platform on top of which SaaS solutions can be built.

This platform has a RESTful HTTP-based API, just like a growing number of other applications.

With development frameworks like JAX-RS, it’s relatively easy to build such APIs.

It is not, however, easy to build them right.

Issues With Building HTTP-based APIs

The problem isn’t so much in getting the functionality out there. We know how to develop software and the available REST/HTTP frameworks and libraries make it easy to expose the functionality.

That’s only half the story, however. There are many more -ilities to consider.

The REST architectural style addresses some of those, like scalability and evolvability.

Many HTTP-based APIs today claim to be RESTful, but in fact are not. This means that they are not reaping all of the benefits that REST can bring.

I’ll be talking more about how to help developers meet all the constraints of the REST architectural style in future posts.

Today I want to focus on another non-functional aspect of APIs: security.

Security of HTTP-based APIs

In security, we care about the CIA triad: Confidentiality, Integrity, and Availability.

Availability of web services is not dramatically different from that of web applications, which is relatively well understood. We have our clusters, load balancers, and what not, and usually we are in good shape.

Confidentiality and integrity, on the other hand, both require proper authentication, and here matters get more interesting.

Authentication of HTTP-based APIs

For authentication in an HTTP world, it makes sense to look at HTTP Authentication.

This RFC (RFC 2617) describes Basic and Digest authentication. Both have their weaknesses, which is why you see many APIs use alternatives.

Luckily, these alternatives can use the same basic machinery defined in the RFC. This machinery includes status code 401 Unauthorized, and the WWW-Authenticate, Authentication-Info, and Authorization headers. Note that the Authorization header is unfortunately misnamed, since it’s used for authentication, not authorization.

The final piece of the puzzle is the custom authentication scheme. For example, Amazon S3 authentication uses the AWS custom scheme.

Authentication of HTTP-based APIs Using Signatures

The AWS scheme relies on signatures. Other services, like EMC Atmos, use the same approach.

It is therefore good to see that a new IETF draft has been proposed to standardize the use of signatures in HTTP-based APIs.

Standardization enables the construction of frameworks and libraries, which will drive down the cost of implementing authentication and will make it easier to build more secure APIs.

What do you think?

If you’re in the business of building and/or consuming HTTP APIs (and who isn’t these days?), then please go ahead and read the draft and provide feedback.

I’m also interested in your experiences with building or consuming secure HTTP APIs. Please leave a comment on this post.

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted, systems and executed on your local system. Mobile code (code-on-demand) is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof could be tied to the code; the combination is then known as proof-carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof-carrying code, in which the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, so other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.

Software Development and Security

It seems that not many software developers are interested in security. One reason may be that security is a negative feature. Another could be that developers don’t see how security relates to their daily activities. Let’s look at a detailed example that sheds some light on this relation.

Example: Crashing Tetris

My employer, EMC, takes security seriously. Besides the annual security awareness training that every employee has to take, software developers are required to take additional security courses, so that they understand the Security Development Lifecycle. In one of those courses, security guru Hugh Thompson tells the following story.

While on an airplane, he found a Tetris game in the on-board entertainment system. The game showed the next blocks to drop in a preview pane. The game’s settings had up and down buttons to increase or decrease the number of preview blocks.

Using the up button, the number could only be increased to four. However, using the telephone keypad, Thompson could enter 5 and get it accepted.

No higher digits were accepted from the telephone, but now that the number was five, the up button on the screen happily increased the number further.

He increased the number all the way up to 127. The next time he pressed the up button, the screen went black. And so did the screen next to him. And everywhere else in the plane. Zero availability.

Exploits Use Vulnerabilities, Which Come From Bugs

How did this happen? The answer is simple: there were some bugs in the application that were abused in a systematic manner. In the security world, such a bug is referred to as a vulnerability, and abusing one to compromise security is known as an exploit.

There is nothing inherently “security related” about vulnerabilities. In the example, the first mistake was that the two interfaces each had their own logic for manipulating the model, a clear violation of DRY. The second was the off-by-one error in the telephone interface. Next, the logic for the up button only checked for the specific boundary value four, instead of for four and anything larger. The final mistake was a missing check for integer overflow. These four more or less innocent bugs combined to form a vulnerability that Thompson exploited.

Certain bugs are more likely to lead to vulnerabilities than others. Two notorious examples are Buffer Overflow and SQL Injection. Luckily, many such bugs are easily prevented. Good tools and a little awareness on the part of the developer go a long way.

Conclusion: Fewer Bugs Mean More Security

If vulnerabilities come from bugs, then we need a relentless focus on preventing and eliminating bugs in order to make our applications more secure.

With that insight, we’re firmly back in the land of software development. Security isn’t the big scary monster we developers sometimes think it is.