Sandboxing Java Code

In a previous post, we looked at securing mobile Java code. One of the options for doing so is to run the code in a cage or sandbox.

This post explores how to set up such a sandbox for Java applications.

Security Manager

The security facility in Java that supports sandboxing is the java.lang.SecurityManager.

By default, Java runs without a SecurityManager, so you should add code to your application to enable one:

System.setSecurityManager(new SecurityManager());

You can use the standard SecurityManager, or a descendant.
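
For example, a descendant could log every check before delegating to the standard behavior. A minimal sketch (the class name is made up for illustration):

public class LoggingSecurityManager extends SecurityManager {
  @Override
  public void checkPermission(java.security.Permission permission) {
    // Log the request, then let the standard implementation decide
    System.err.println("Permission requested: " + permission);
    super.checkPermission(permission);
  }
}

You would install it with System.setSecurityManager(new LoggingSecurityManager()).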

The SecurityManager has a bunch of checkXXX() methods that all forward to checkPermission(permission). This method calls upon the AccessController to do the actual work (see below).

[The checkXXX() methods are a relic from Java 1.1.]

If a requested access is allowed, checkPermission() returns quietly. If denied, a java.lang.SecurityException is thrown.

Code that implements the sandbox should call checkPermission() (or one of the checkXXX() convenience methods) before performing a sensitive operation:

SecurityManager securityManager = System.getSecurityManager();
if (securityManager != null) {
  Permission permission = ...;
  securityManager.checkPermission(permission);
}

The JRE contains code just like that in many places.

Permissions

A permission represents access to a system resource.

In order for such access to be allowed, the corresponding permission must be explicitly granted (see below) to the code attempting the access.

Permissions derive from java.security.Permission. They have a name and an optional list of actions (in the form of comma separated string values).

Java ships with a bunch of predefined permissions, like FilePermission. You can also add your own permissions.

The following is a permission to read the file /home/remon/thesis.pdf:

Permission readPermission = new java.io.FilePermission(
    "/home/remon/thesis.pdf", "read");

You can grant a piece of code permission to do anything and everything by granting it AllPermission. This has the same effect as running it without a SecurityManager.

Policies

Permissions are granted using policies. A Policy is responsible for determining whether code has permission to perform a security-sensitive operation.

The AccessController consults the Policy to see whether a Permission is granted.

There can only be one Policy object in use at any given time. Application code can subclass Policy to provide a custom implementation.
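
A minimal sketch of such a subclass, assuming Java 6 or later (where Policy's other methods have default implementations); the class name is made up, and a real policy would inspect the ProtectionDomain instead of granting the same permission to all code:

public class ThesisPolicy extends java.security.Policy {
  @Override
  public boolean implies(java.security.ProtectionDomain domain,
      java.security.Permission permission) {
    // Grant read access to the thesis, and nothing else
    return new java.io.FilePermission("/home/remon/thesis.pdf", "read")
        .implies(permission);
  }
}

You would install it with Policy.setPolicy(new ThesisPolicy()) before installing the SecurityManager. Note that this replaces the default, file-based implementation entirely.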

The default implementation of Policy uses configuration files to load grants. There is a single system-wide policy file, and a single (optional) user policy file.

You can create additional policy configuration files using the PolicyTool program. Each configuration file must be encoded in UTF-8.

By default, code is granted no permissions at all. Every grant statement adds some permissions. Permissions that are granted cannot be revoked.

The following policy fragment grants code that originates from the /home/remon/code/ directory read permission to the file /home/remon/thesis.pdf:

grant codeBase "file:/home/remon/code/-" {
    permission java.io.FilePermission "/home/remon/thesis.pdf",
        "read";
};

Note that the part following codeBase is a URL, so you should always use forward slashes, even on a Windows system.

The trailing part of the codeBase URL determines what it matches:

  • A trailing / matches all class files (not JAR files) in the specified directory.
  • A trailing /* matches all files (both class and JAR files) contained in that directory.
  • A trailing /- matches all files (both class and JAR files) in the directory and, recursively, all files in subdirectories contained in that directory.

For paths in file permissions on Windows systems, you need to use double backslashes (\\), since the \ is an escape character:

grant codeBase "file:/C:/Users/remon/code/-" {
    permission java.io.FilePermission
        "C:\\Users\\remon\\thesis.pdf", "read";
};

For more flexibility, you can write grants with variable parts. We already saw the codeBase wildcards. You can also substitute system properties:

grant codeBase "file:/${user.home}/code/-" {
    permission java.io.FilePermission
        "${user.home}${/}thesis.pdf", "read";
};

Note that ${/} is replaced with the file separator for your system. There is no need to use it in codeBase, since that’s a URL.

Signed Code

Of course, we should make sure that the code we use is signed, so that we know it actually comes from the source we expect.

We can test for signatures in our policies using the signedBy clause:

keystore "my.keystore";
grant signedBy "signer.alias", codeBase ... {
  ...
};

This policy fragment uses the keystore my.keystore to look up the public key certificate with alias signer.alias.

It then verifies that the executing code was signed by the private key corresponding to the public key in the found certificate.

There can be only one keystore entry per policy file.

The combination of codeBase and signedBy clauses specifies a ProtectionDomain. All classes in the same ProtectionDomain have the same permissions.
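
You can inspect the ProtectionDomain of a loaded class at run time; a small sketch (MobileCode stands in for whatever class was loaded):

java.security.ProtectionDomain domain = MobileCode.class.getProtectionDomain();
System.out.println("Code source: " + domain.getCodeSource());
System.out.println("Static permissions: " + domain.getPermissions());

The code source combines the codeBase location and any signer certificates.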

Privileged Code

Whenever a resource access is attempted, all code on the stack must have permission for that resource access, unless some code on the stack has been marked as privileged.

Marking code as privileged enables a piece of trusted code to temporarily enable access to more resources than are available directly to the code that called it. In other words, the security system will treat all callers as if they originated from the ProtectionDomain of the class that issues the privileged call, but only for the duration of the privileged call.

You make code privileged by running it inside an AccessController.doPrivileged() call:

AccessController.doPrivileged(new PrivilegedAction<Void>() {
  public Void run() {
    // ...privileged code goes here...
    return null;
  }
});
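
If the privileged code throws a checked exception, use PrivilegedExceptionAction instead; the exception comes back wrapped in a PrivilegedActionException. A sketch that reads a file on behalf of a less trusted caller (the path is just an example, and the java.nio.file calls assume Java 7 or later):

try {
  byte[] thesis = AccessController.doPrivileged(
      new PrivilegedExceptionAction<byte[]>() {
        public byte[] run() throws IOException {
          return Files.readAllBytes(Paths.get("/home/remon/thesis.pdf"));
        }
      });
} catch (PrivilegedActionException e) {
  // run() can only throw IOException, so this cast is safe
  throw (IOException) e.getException();
}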

Assembling the Sandbox

Now we have all the pieces we need to assemble our sandbox:

  1. Install a SecurityManager
  2. Sign the application jars
  3. Grant all code signed by us AllPermission
  4. Add permission checks in places that mobile code may call
  5. Run the code after the permission checks in a doPrivileged() block

I’ve created a simple example on GitHub.
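
As a rough sketch of steps 4 and 5, a host API that mobile code may call could look like this (the ExtensionPointPermission and the operation are hypothetical; the policy would have to grant that permission to the mobile code):

public String getUserHome() {
  // Step 4: check that everyone on the call stack holds our custom permission
  SecurityManager securityManager = System.getSecurityManager();
  if (securityManager != null) {
    securityManager.checkPermission(new ExtensionPointPermission("userHome"));
  }
  // Step 5: the sensitive call runs with the host's own permissions,
  // so the mobile code doesn't need a PropertyPermission itself
  return AccessController.doPrivileged(new PrivilegedAction<String>() {
    public String run() {
      return System.getProperty("user.home");
    }
  });
}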

Securing Mobile Java Code

Mobile code is code sourced from remote, possibly untrusted, systems that is executed on your local system. Mobile code, known as code on demand, is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof could be tied to the code and the combination is then proof carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof carrying code where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, and other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.
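
Signing is typically done with the JDK's jarsigner tool. Verification can also be done programmatically when the jar arrives; a sketch, assuming the downloaded file is called plugin.jar (classes from java.io, java.util, and java.util.jar; reading each entry fully is what triggers signature verification, and checking the returned signers against trusted certificates is a further step):

try (JarFile jar = new JarFile(new File("plugin.jar"), true)) {
  byte[] buffer = new byte[8192];
  Enumeration<JarEntry> entries = jar.entries();
  while (entries.hasMoreElements()) {
    JarEntry entry = entries.nextElement();
    if (entry.isDirectory()) {
      continue;
    }
    try (InputStream in = jar.getInputStream(entry)) {
      while (in.read(buffer) != -1) {
        // reading the whole entry verifies its digest;
        // tampering raises a SecurityException here
      }
    }
    if (!entry.getName().startsWith("META-INF/")
        && entry.getCodeSigners() == null) {
      throw new SecurityException("Unsigned entry: " + entry.getName());
    }
  }
}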

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.