Understanding the keys to Java security -- the sandbox and authentication

A detailed look at the latest security features in Java -- and the recently discovered code-signing hole

You may have heard about the latest flaw in the security of JDK 1.1 and HotJava 1.0 that was recently discovered by the Secure Internet Programming team at Princeton University (led by one of the authors). If you want the whole story, read on. But there's more to Java security than the specifics about this latest security hole. Let's get some perspective.

Java security and public perception

Everybody knows that security is a big deal for Java. Whenever a security hole is discovered, the story blasts into the computer news (and sometimes the business news) very quickly. You may not be surprised to learn that the popular press monitors comp.risks and other security-related newsgroups. They pick security stories to highlight seemingly at random, though since Java is so hot these days they almost always print Java security stories.

The problem is that most news stories don't explain the holes well at all. This could lead to a classic "cry wolf" problem where people become habituated to seeing "this week's security story" and don't educate themselves about the very real risks of executable content. Moreover, vendors tend to downplay their security problems, thus further confusing the key issues.

The good news is that the JavaSoft security team is serious about making Java secure. The bad news is that a majority of Java developers and users may believe the hype emanating from events like JavaOne where security problems are not given much airplay. As we said in our book, Java Security: Hostile Applets, Holes, & Antidotes, Sun Microsystems has a lot to gain if it makes you believe Java is completely secure. It is true that the vendors have gone to great lengths to make their Java implementations as secure as possible, but developers don't want effort; they want results.

Since a Java-enabled Web browser allows Java code to be embedded in a Web page, downloaded across the net, and run on a local machine, security is a critical concern. Users can download Java applets with exceptional ease -- sometimes without even knowing it. This exposes Java users to a significant amount of risk.

Java's designers are well aware of the many risks associated with executable content. To combat these risks, they designed Java specifically with security concerns in mind. The main goal was to address the security issue head-on so that naive users (say, a majority of the millions of Web surfers) would not have to become security experts just to safely peruse the Web. This is an admirable goal.

The three parts of the Java sandbox

Java is a very powerful development language. Untrusted applets should not be allowed to access all of this power. The Java sandbox restricts applets from performing many activities. The best technical paper on applet restrictions is "Low Level Security in Java" by Frank Yellin.

Java security relies on three prongs of defense: the Byte Code Verifier, the Class Loader, and the Security Manager. Together, these three prongs perform load- and run-time checks to restrict file-system and network access, as well as access to browser internals. Each of these prongs depends in some way on the others. For the security model to function properly, each part must do its job properly.

The byte code verifier:

The Byte Code Verifier is the first prong of the Java security model. When a Java source program is compiled, it compiles down to platform-independent Java byte code. Java byte code is "verified" before it can run. This verification scheme is meant to ensure that the byte code, which may or may not have been created by a Java compiler, plays by the rules. After all, byte code could well have been created by a "hostile compiler" that assembled byte code designed to crash the Java virtual machine. Verifying an applet's byte code is one way in which Java automatically checks untrusted outside code before it is allowed to run.

The Verifier checks byte code at a number of different levels. The simplest test makes sure that the format of a byte-code fragment is correct. On a less basic level, a built-in theorem prover is applied to each code fragment. The theorem prover helps to make sure that byte code does not forge pointers, violate access restrictions, or access objects using incorrect type information. The verification process, in concert with the security features built into the language through the compiler, helps to establish a base set of security guarantees.
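The simplest of these checks, the format test, can be seen in action by handing the JVM a deliberately corrupted class file. The sketch below (class and method names are ours) flips a byte of the class file's magic number and lets defineClass reject the result; the deeper theorem-proving checks are not exercised here.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class VerifierDemo {
    // Toy loader that defines a class directly from raw bytes.
    static class Loader extends ClassLoader {
        Class<?> define(byte[] b) {
            return defineClass(null, b, 0, b.length);
        }
    }

    static class Payload {}  // the class whose byte code we corrupt

    // Read Payload's compiled byte code off the class path.
    static byte[] payloadBytes() throws Exception {
        InputStream in = VerifierDemo.class
                .getResourceAsStream("VerifierDemo$Payload.class");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int c;
        while ((c = in.read()) != -1) out.write(c);
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] good = payloadBytes();
        byte[] bad = good.clone();
        bad[0] ^= 0xFF;  // smash the 0xCAFEBABE magic number

        System.out.println(new Loader().define(good).getName()); // loads fine
        try {
            new Loader().define(bad);
        } catch (ClassFormatError e) {
            System.out.println("rejected as malformed byte code");
        }
    }
}
```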

The applet class loader:

The second prong of security defense is the Java Applet Class Loader. All Java objects belong to classes. The Applet Class Loader determines when and how an applet can add classes to a running Java environment. Part of its job is to make sure that important parts of the Java run-time environment are not replaced by code that an applet tries to install. In general, a running Java environment can have many Class Loaders active, each defining its own "name space." Name spaces allow Java classes to be separated into distinct "kinds" according to where they originate. The Applet Class Loader, which is typically supplied by the browser vendor, loads all applets and the classes they reference. When an applet loads across the network, the Applet Class Loader receives the binary data and instantiates it as a new class.
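The name-space idea can be made concrete with a small sketch. The class and method names below are ours, not the browser's Applet Class Loader: two loaders are handed identical byte code, and because each loader defines its own name space, the JVM treats the results as distinct classes.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class NameSpaceDemo {
    // Toy loader: defines a class from raw bytes without delegating,
    // so each instance owns its own name space.
    static class ToyLoader extends ClassLoader {
        Class<?> define(String name, byte[] b) {
            return defineClass(name, b, 0, b.length);
        }
    }

    static class Payload {}  // the class whose bytes we reload

    // Define a fresh copy of Payload in a brand-new loader.
    static Class<?> freshPayload() throws Exception {
        InputStream in = NameSpaceDemo.class
                .getResourceAsStream("NameSpaceDemo$Payload.class");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int c;
        while ((c = in.read()) != -1) out.write(c);
        return new ToyLoader().define("NameSpaceDemo$Payload",
                                      out.toByteArray());
    }

    public static void main(String[] args) throws Exception {
        Class<?> a = freshPayload();
        Class<?> b = freshPayload();
        // Same name, same bytes -- but distinct classes to the JVM.
        System.out.println(a.getName().equals(b.getName())); // true
        System.out.println(a == b);                          // false
    }
}
```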

The security manager:

The third prong of the Java security model is the Java Security Manager. This part of the security model restricts the ways in which an applet can use visible interfaces. Thus the Security Manager implements a good portion of the entire security model. The Security Manager is a single module that can perform run-time checks on "dangerous" methods. Code in the Java library consults the Security Manager whenever a dangerous operation is about to be attempted. The Security Manager is given a chance to veto the operation by generating a Security Exception (the bane of Java developers everywhere). Decisions made by the Security Manager take into account which Class Loader loaded the requesting class. Built-in classes are given more privilege than classes that have been loaded over the net.
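The consult-and-veto pattern can be sketched with toy classes. The names here are ours, not the real API (which is java.lang.SecurityManager and its check* methods): library code asks a central policy object before a dangerous operation, and the policy vetoes by throwing.

```java
// Stand-in for SecurityException.
class ToySecurityException extends RuntimeException {
    ToySecurityException(String msg) { super(msg); }
}

// Stand-in for the Security Manager: one module of run-time checks.
class ToySecurityManager {
    // Veto any file access outside a pretend sandbox directory.
    void checkRead(String path) {
        if (!path.startsWith("/sandbox/")) {
            throw new ToySecurityException("read denied: " + path);
        }
    }
}

// Stand-in for library code that guards a dangerous operation.
class ToyFileLibrary {
    static final ToySecurityManager SM = new ToySecurityManager();

    static String open(String path) {
        SM.checkRead(path);       // may throw, vetoing the operation
        return "opened " + path;  // (real code would open the file here)
    }
}

public class VetoDemo {
    public static void main(String[] args) {
        System.out.println(ToyFileLibrary.open("/sandbox/data.txt"));
        try {
            ToyFileLibrary.open("/etc/passwd");
        } catch (ToySecurityException e) {
            System.out.println(e.getMessage()); // read denied: /etc/passwd
        }
    }
}
```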

Untrusted and banished to the sandbox

Together, the three parts of the Java security model make up the sandbox. The idea is to restrict what an applet can do and make sure it plays by the rules. The sandbox idea is appealing because it is meant to allow you to run untrusted code on your machine without worrying about it. That way you can surf the Web with impunity, running every Java applet you ever come across with no security problems. Well, as long as the Java sandbox has no security holes.

An alternative to the sandbox: Authentication through code-signing

ActiveX is another high-profile form of executable content. Promoted by Microsoft, ActiveX has been criticized by computer security professionals who view its approach to security as lacking. Unlike a Java applet, which is limited by software controls in the sorts of things it can do, an ActiveX control has no limitations on its behavior once it is invoked. The upshot is that users of ActiveX must be very careful to run only completely trusted code. Java users, on the other hand, have the luxury of running untrusted code fairly safely.

The ActiveX approach relies on digital signatures, a kind of encryption technology in which arbitrary binary files can be "signed" by a developer or distributor. Because a digital signature has special mathematical properties, it is irrevocable and unforgeable. That means a program like your browser can verify a signature, allowing you to be certain who vouched for the code. (At least, that's the theory. Things are a bit more ambiguous in real life.) Better yet, you can instruct your browser always to accept code signed by some party that you trust, or always to reject code signed by some party that you don't trust.
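Java's own standard library exposes this machinery through the java.security.Signature API. The sketch below (key size and message are ours) signs some pretend code with a DSA private key, then verifies it twice: once intact, and once after a single byte has been tampered with.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    // Sign some code, then verify it intact and tampered.
    // Returns { intactVerifies, tamperedVerifies }.
    static boolean[] demo() throws Exception {
        byte[] code = "pretend this is applet byte code".getBytes("UTF-8");

        // The developer generates a key pair and signs the code.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withDSA");
        signer.initSign(pair.getPrivate());
        signer.update(code);
        byte[] sig = signer.sign();

        // The browser verifies with the signer's public key.
        Signature verifier = Signature.getInstance("SHA256withDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(code);
        boolean intact = verifier.verify(sig);

        // One flipped byte and the signature no longer matches.
        code[0] ^= 1;
        verifier.initVerify(pair.getPublic());
        verifier.update(code);
        boolean tampered = verifier.verify(sig);

        return new boolean[] { intact, tampered };
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = demo();
        System.out.println("intact code verifies:   " + r[0]); // true
        System.out.println("tampered code verifies: " + r[1]); // false
    }
}
```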

A digital signature holds lots of information. For example, it can tell you that even though some code is being redistributed by a site you don't trust, it was originally written by someone you do trust. Or it can tell you that although the code was written and distributed by somebody you don't know, your friend has signed the code, attesting that it is safe. Or it may simply tell you which of the thousands of users at aol.com wrote the code.

(See sidebar for more details on digital signatures, including five key properties.)

The future of executable content: Leaving the sandbox

Do digital signatures make ActiveX more attractive security-wise than Java? We believe not, especially in light of the fact that digital signature capability is now available in Java's JDK 1.1.1 (along with other security enhancements). That means in Java, you get everything that ActiveX is doing for security plus the ability to run untrusted code fairly safely. Java security will be enhanced even further in the future by flexible, fine-grained access control, which, according to Li Gong, JavaSoft's Java security architect, is planned for release in JDK 1.2. Better access control also will make its way into the next round of browsers, including Netscape Communicator and Microsoft Internet Explorer 4.0.

In concert with access control, code signing will allow applets to step outside the security sandbox gradually. For example, an applet designed for use in an intranet setting could be allowed to read and write to a particular company database as long as it was signed by the system administrator. Such a relaxation of the security model is important for developers who are chomping at the bit for their applets to do more; writing code that works within the tight restrictions of the original, very restrictive sandbox is a pain.

Eventually, applets will be allowed different levels of trust. Since this requires access control, shades of trust currently are not available even though code signing is. As it currently stands in JDK 1.1.1, Java applets are either completely trusted or completely untrusted. A signed applet marked as trusted is allowed to escape the sandbox completely. Such an applet can do anything at all and has no security restrictions.

The main problem with Java's approach to security is that it is complicated. Complicated systems tend to have more flaws than simple systems. Security researchers, most notably Princeton's Secure Internet Programming team, have found several serious security flaws in early versions of the sandbox. Many of these flaws were implementation errors, but some were specification errors. Fortunately, JavaSoft, Netscape, and Microsoft have been very quick to fix such problems when they are discovered. (Clear and complete explanations of Java's security holes can be found in Chapter 3 of our book.)

Just recently, Sun marketeers (sometimes called evangelists) were quick to point out that no new flaws had been discovered in quite some time. They took this as evidence that Java would never again suffer from security problems. They jumped the gun.

The code-signing hole: Java skins its knee

Code signing is complicated. As in the original sandbox model, there is plenty of room for error in designing and implementing a code-signing system. The recent hole was a fairly straightforward problem in the implementation of Java's Class class, as explained on both the Princeton site and JavaSoft's security site. Specifically, the method Class.getSigners() returns a mutable array of all signers known to the system. It is possible for an applet to misuse this information. The fix was as simple as returning only a copy of the array, and not the array itself.
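The flaw and its fix are easy to sketch. The class below is a hypothetical stand-in for the bookkeeping inside Class, not the real JDK code; the point is only the difference between handing out an internal array and handing out a copy.

```java
// Hypothetical stand-in for the signer bookkeeping inside Class.
class SignerRegistry {
    private final Object[] signers = { "Alice", "Bob" };

    // Flawed: hands out the internal array, so a caller can mutate it.
    Object[] getSignersLeaky() {
        return signers;
    }

    // Fixed: return only a copy; mutations by the caller are harmless.
    Object[] getSignersSafe() {
        return signers.clone();
    }
}

public class LeakDemo {
    public static void main(String[] args) {
        // Attack on the leaky version: overwrite a recorded signer.
        SignerRegistry r1 = new SignerRegistry();
        r1.getSignersLeaky()[0] = "Mallory";
        System.out.println(r1.getSignersLeaky()[0]); // Mallory

        // The same attack against the copy changes nothing internal.
        SignerRegistry r2 = new SignerRegistry();
        r2.getSignersSafe()[0] = "Mallory";
        System.out.println(r2.getSignersSafe()[0]);  // Alice
    }
}
```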

Consider a situation in which a developer, Alice, has been granted no security privilege on a Web user's system. In fact, contrary to what the original JavaSoft statement about the bug claimed, Alice can be completely unknown to the system. In other words, code signed by Alice is not trusted any more than the usual applet off the street. If the Web user (using the HotJava browser -- currently the only commercial product that supports JDK 1.1.1) loads an applet signed by Alice, that applet can still step out of the sandbox by exploiting the hole.

The fact that the system need not have Alice's public key in its database is important. It means that Alice can be any arbitrary attacker who knows how to sign an applet with a completely random identity. Creating such an identity is easy, as is signing an applet with that identity. This makes the hole very serious indeed.

The hole allows Alice's attack applet to change the system's idea of who signed it. This is especially bad if Alice is not granted privilege to run outside the sandbox, but Bob is. Alice's applet can use the getSigners() call to change its level of permission to include all of Bob's privileges. Alice's applet can get the maximum amount of available privileges doled out to any signer known to the system.

If you liken the signature/privilege identities to coats in a closet, Alice's attack applet can try on each coat and attempt various disallowed things until it discovers which of the coats are "magic" and allow it to gain privilege. If a magic coat is discovered, Alice's applet can step out of the sandbox and do things it should not be allowed to do. Trying on coats is as simple as attempting a disallowed call and watching to see what happens.
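The coat-closet probe can be sketched in miniature. Everything here is a hypothetical stand-in: a real attack would rewrite the signers array and then attempt genuinely dangerous operations, catching the Security Exceptions that the non-magic coats produce.

```java
import java.util.Arrays;
import java.util.List;

public class CoatProbe {
    // Pretend policy: only code signed by Bob may leave the sandbox.
    static void guardedCall(String signer) {
        if (!signer.equals("Bob")) {
            throw new SecurityException("denied for " + signer);
        }
    }

    // Try a coat on: attempt the disallowed call and watch what happens.
    static boolean fits(String coat) {
        try {
            guardedCall(coat);
            return true;          // magic coat: no veto
        } catch (SecurityException e) {
            return false;         // vetoed: try the next coat
        }
    }

    public static void main(String[] args) {
        List<String> coats = Arrays.asList("Alice", "Carol", "Bob");
        for (String coat : coats) {
            System.out.println(coat
                    + (fits(coat) ? ": magic coat!" : ": no privilege"));
        }
    }
}
```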
