Mobile code is code sourced from remote, possibly untrusted, systems and executed on your local system. Mobile code is an optional constraint in the REST architectural style.
This post investigates our options for securely running mobile code in general, and for Java in particular.
Mobile Code
Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.
Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.
Types of Mobile Code
Mobile code comes in different forms. Some mobile code is source code, like JavaScript.
Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.
Mobile code can also take the form of executable code.
This can be either intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player plug-in.
Active Content Delivers Mobile Code
A concept that is related to mobile code is active content, which is defined by NIST as
Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.
Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.
Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.
Security Issues With Mobile Code
There are two classes of security problems associated with mobile code.
The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.
From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.
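For example, fetching a plug-in jar over TLS can be sketched with Java’s built-in HttpClient (Java 11+). The URL below is hypothetical; the point is that certificate validation happens automatically during the TLS handshake, so a transfer from an untrusted endpoint fails before any code arrives:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class JarDownloader {
    // Hypothetical endpoint; any HTTPS URL serving the plug-in jar will do.
    static final URI PLUGIN_URI = URI.create("https://plugins.example.com/plugin.jar");

    public static Path download(Path target) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                // NORMAL follows redirects, but never from HTTPS to plain HTTP
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();
        HttpRequest request = HttpRequest.newBuilder(PLUGIN_URI).GET().build();
        // The server certificate is validated against the default trust
        // store; a failed handshake surfaces as an IOException.
        HttpResponse<Path> response =
                client.send(request, HttpResponse.BodyHandlers.ofFile(target));
        if (response.statusCode() != 200) {
            throw new IOException("unexpected status " + response.statusCode());
        }
        return response.body();
    }
}
```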
It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.
However, it’s not just malicious code we want to protect ourselves from.
A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.
The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code as well as embedded libraries and components.
That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).
Safely Executing Mobile Code
In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:
- Proofs
- Signatures
- Filters
- Cages (sandboxes)
We will look at each of those in the context of mobile Java code.
Proofs
It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof can be tied to the code, and the combination is then called proof-carrying code.
After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.
Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof-carrying code: it lets the class file verifier check the type safety of the Java bytecode efficiently. However, this is certainly not enough to guarantee that the code is secure, so other approaches remain necessary.
Signatures
One such approach is to verify that the mobile code comes from a trusted source and has not been tampered with.
For Java code, this means wrapping the code in a jar file and signing and verifying the jar.
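A sketch of the verification side using java.util.jar.JarFile: opening the jar with verify=true makes reading each entry check its digest against the signature (a tampered entry throws a SecurityException), and getCodeSigners() tells us whether the entry was signed at all. Checking the signer’s certificate chain against our own trust store is left out of this sketch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.CodeSigner;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarVerifier {
    /**
     * Returns true if every content entry in the jar is signed.
     * Reading an entry fully triggers its digest check; a modified
     * entry makes that read throw a SecurityException.
     */
    public static boolean isFullySigned(String jarPath) throws IOException {
        try (JarFile jar = new JarFile(jarPath, true)) {  // true = verify
            Enumeration<JarEntry> entries = jar.entries();
            byte[] buffer = new byte[8192];
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                // Directories and the signature files themselves are not signed.
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                    continue;
                }
                // Drain the entry so its digest gets verified.
                try (InputStream in = jar.getInputStream(entry)) {
                    while (in.read(buffer) != -1) { /* drain */ }
                }
                CodeSigner[] signers = entry.getCodeSigners();
                if (signers == null || signers.length == 0) {
                    return false;  // unsigned entry
                }
                // A real check would also validate each signer's certificate
                // chain against a trusted CA here.
            }
            return true;
        }
    }
}
```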
Filters
We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.
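Such a filter can be as simple as checking the file’s magic bytes: jar files are zip archives, which start with the bytes PK\x03\x04. A minimal sketch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class MediaTypeFilter {
    // Zip archives (and therefore jar files) start with "PK\x03\x04".
    private static final byte[] ZIP_MAGIC = {0x50, 0x4B, 0x03, 0x04};

    /** Rejects anything that is not, at least superficially, a jar file. */
    public static boolean looksLikeJar(Path file) throws IOException {
        try (InputStream in = Files.newInputStream(file)) {
            byte[] header = in.readNBytes(4);
            // A file shorter than 4 bytes yields a shorter array,
            // which compares unequal and is rejected.
            return Arrays.equals(header, ZIP_MAGIC);
        }
    }
}
```

Note that this only filters out obviously wrong content; it is no substitute for the signature check that follows.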
Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.
We can also use anti-virus software to scan the verified jars for known malware.
Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.
Cages/Sandboxes
After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.
We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.
In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
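As a small preview, here is a minimal sketch of a cage that blocks a single action, file deletion. A real cage would consult a security policy in checkPermission instead of allowing everything else. Note that the SecurityManager API is deprecated in recent Java versions, so newer JVMs may refuse to install one at run-time:

```java
import java.security.Permission;

public class Sandbox {
    // Illustrative cage: denies file deletion, allows everything else.
    static class CageManager extends SecurityManager {
        @Override
        public void checkDelete(String file) {
            throw new SecurityException("mobile code may not delete " + file);
        }

        @Override
        public void checkPermission(Permission perm) {
            // Allow everything else in this sketch; a real cage would
            // check perm against a policy here.
        }
    }

    public static void main(String[] args) {
        try {
            System.setSecurityManager(new CageManager());
            new java.io.File("/tmp/forbidden.txt").delete();
            System.out.println("delete was allowed");
        } catch (SecurityException e) {
            System.out.println("blocked: " + e.getMessage());
        } catch (UnsupportedOperationException e) {
            // Java 18+ disallows installing a SecurityManager by default.
            System.out.println("SecurityManager not available on this JVM");
        } finally {
            try {
                System.setSecurityManager(null);  // restore the default
            } catch (UnsupportedOperationException ignored) {
            }
        }
    }
}
```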
Comments
Bas: I would say that Java class files with StackMapTables are a practical form of proof-carrying code. The StackMapTable is a proof that the Java bytecode is type safe, which is an important safety property. This proof is verified by the JVM class file verifier.
Author: Thanks for your addition, Bas. I updated the post.