A New Approach to Mobile Code Security

Dan Seth Wallach

A Dissertation
of Doctor of Philosophy
by the Department of
Computer Science

January 1999

© Copyright by Dan Seth Wallach, 1999.
such as Java. Security-passing style, and its predecessor, stack inspection, allow
the system to capture the complex security relationships that occur when trusted
and untrusted code are run together and interact closely.
subject may be acting on behalf of another, or may be acting on its own behalf.
These systems generally have neither mechanisms to capture the full security
context of a request nor policies expressive enough to resolve whether these
requests should be allowed or denied. Issues such as these arise in mobile code
systems, requiring new mechanisms to address their security.
also allow us to produce an implementation that has extremely low overhead (in
principle, just over one instruction per method invocation) based on static analysis
of the program to be run and dynamic caching to make common cases execute
faster.
Acknowledgments
It is an amazing thing that this dissertation exists, due in no small part to help and
cajoling from a large cast of friends and colleagues. My advisor, Ed Felten, was an
unending source of useful and pragmatic advice. Andrew Appel gave expert di-
style’ and helped give my system the name ‘SAFKASI’ (pronounced saff-KAH-zee,
the security architecture formerly known as stack inspection). My Secure Inter-
net Programming colleagues Drew Dean and Dirk Balfanz have also been a great
flaws in the security of the Java system and building sneaky applets to take advan-
tage of those flaws. Thanks also to Ken Steiglitz for reading my dissertation and
catching all the silly mistakes that managed to slip by everybody else.
In the course of doing my work, I have gotten all kinds of useful commentary
from anonymous conference and journal referees. I also thank Martín Abadi, John
Wilkes, Dave Gifford, Norman Hardy, Olin Sibert, and Mark Miller for their feed-
spent at Netscape and was designed and built with Jim Roskind, Raman Tenneti,
and Tom Dell. Thanks also to Warren Harris, Nick Thompson, Jim Gellman, Bob
Relyea, Tom Weinstein, John Hines, and Tara Hernandez for all their help making
my internship and my code successful. I probably still owe Tara a bottle of tequila
for breaking the build.
Along the way, stack inspection has seen the tug and pull of the industry’s Java
battles. For all the flames and invective, many people gave me extremely useful
feedback and clarifications of how things were really working. Thanks to Li Gong,
Marianne Mueller, Larry Koved, Bob Blakley, Jeff Bisset, Mike Jerger, and Mike
Toutonghi. Thanks also to Kenneth Zadeck and David Chase for their useful and
entertaining perspectives.
To actually get security-passing style working, I eventually gave up on the bugs
and strange behavior of Sun’s JDK and turned to Kenneth Zadeck, David Chase,
standing of the intricacies of the Java class file format were invaluable to me.
Thanks also go to Paul Martino, Greg Humphreys, and Benji Jasik of Ahpah Soft-
ware for allowing me free use of their SourceAgain Java decompiler, which proved
Thanks to Jim Roberts, Chris Tengi, Chris Krantz, and Matt Norcross for putting
the National Science Foundation, by the Alfred P. Sloan Foundation, and by dona-
tions from Bellcore, Intel, Merrill Lynch, Microsoft, Sun Microsystems, and Trin-
To Mom and Dad
Contents
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
1 Introduction 1
1.1 My Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Structure of the Dissertation . . . . . . . . . . . . . . . . . . . . . . . . 2
3.3.1 Overflowing Buffers . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4.1 Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4.2 Enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4.3 Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4.4 Accountability . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5 Security Architectures for Java 49
5.2 Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.2.1 First Approach: Processes . . . . . . . . . . . . . . . . . . . . . 54
5.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.3.10 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.3.11 Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.4 Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6 Access Control Logic 95
8 Security-Passing Style: Efficient Infrastructure for Access Control 124
List of Tables
eration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
List of Figures
Chapter 1
Introduction
1.1 My Thesis
— and hardware memory protection — to isolate and protect one program from
another, modern type-safe languages allow separation and resource controls to
Once type safety is addressed, the major remaining difficulty in a language run-
time is determining the identity whose authority must be checked before perform-
system heap, so the applets all share a common set of system classes responsible
for various dangerous operations such as access to the network and file system. As
each applet might have different privileges, the system classes must impose some
ing that it can be built easily above any type-safe language runtime and that it
extends naturally to support secure remote procedure calls. This dissertation con-
tributes a new form of stack inspection called “security-passing style,” defined as
spection system, having the same desirable security properties without interfering
implementation and measures its performance using a recent optimizing Java run-
time. Current Java compilers and runtimes still have a way to go before achieving
the efficiency of hand-tuned assembly code, so we also consider what the overhead
unpublished material. This section briefly describes each chapter and cites its in-
fluences.
ter points out how mobile code violates many of the assumptions made by
traditional commercial security. This material borrows from [DFWB97].
Chapter 2 defines what exactly we mean by “security” and quotes the rele-
vant parts of the Orange Book [Nat85]. The Orange Book defines what se-
curity generally means for an operating system and defines specific features
that a system should support in order to receive various security ratings.
Rather than presenting each and every flaw we found, this chapter presents
a taxonomy of problems with examples drawn from our experience studying
Many of the security mechanisms proposed for Java have been tried in one
form or another in traditional operating systems. Chapter 4 describes how
operating systems have traditionally built security architectures, both on lo-
cal machines and across computer networks. This chapter reviews a broad
Chapter 5 shows how traditional security architectures apply to Java and in-
troduces some new mechanisms that have been successfully applied to Java.
We originally designed and built two such systems (name-space manage-
ment and stack inspection) [WBDF97]. This chapter also discusses capability-
based Java systems and process-structured systems [BTS+ 98, Cv98] and eval-
uates all four architectures against each other.
Burrows, Lampson, and Plotkin [ABLP93], which we use later in the thesis.
Chapter 7 uses this logic to design security-passing style and prove its equiv-
alence with stack inspection. Chapter 7 also discusses how security-passing
to implement these primitives. This material has not been previously pub-
lished.
erly authorized individuals, or processes operating on their behalf, will
Commercial organizations are often more concerned that data cannot be modified
without sufficient authorization. In academia, the largest concern may well be that
scarce resources are consumed only by authorized users. And, just about every-
body would like to have enough of an audit trail to identify their attackers after
the fact, and present evidence to the police. These are the four pillars of computer
security: secrecy, integrity, auditability, and protection from theft of service.
trusted, you were 99% done. Once inside the system, each user could be con-
trolled in their ability to read and write files on the system. They could be billed
for their usage of disk space and CPU cycles. Sure, some system utilities may
have had bugs that users could exploit, but those were few and far between. It
was not really all that important, since only employees could get anywhere near
the computer, and they generally had it in their best interests not to attack their
employer! Even in university environments, where some users may have been
malicious toward their peers, information about system vulnerabilities was care-
[Figure 1.1 here: a firewall separating victim.org (User, Mail Server) from attacker.com, with traffic through the firewall marked "permission granted."]
In practice, the threat of attack from insiders was and is still a serious issue.
In conversations I have had with early timesharing computer users, many have
described discovering and exploiting security holes with the same pride one might
describe finding a $20 bill on the street. The security of these timesharing systems
relied as much on their users’ lack of malice as on any of their security mechanisms.
puters and those networks were eventually linked to the Internet, the rules began
to change. Many organizations were fairly permissive in their internal security, op-
erating under the assumption that their employees were not malicious, and that all
internal machines were sufficiently well controlled that no normal use of the sys-
tem utilities would result in a security problem. Of course, no organization could
safely make such assumptions about arbitrary machines on the Internet. To safely
walls, gateways between the inside and outside that would hopefully not hinder
internal people trying to get their job done, yet would restrict external malcon-
tents from wreaking havoc (see figure 1.1). While a number of different techniques
exist for building firewalls [CB94], they all work fundamentally by blocking exter-
nal attempts to connect to all but a handful of carefully chosen internal machines.
If secure information must travel over an insecure network, or just over the In-
ternet backbone, a very real danger is that the traffic might be sniffed — observed
wishes to connect to its field offices without expensive leased lines. All the de-
When properly deployed, firewalls and cryptography have the potential to re-
duce the network security problem to be no worse than the traditional security
problem. Of course, the traditional security issues have not gone away.
[Figure here: victim.org's firewall with attacker.com outside; inside, Mobile Code acting as the User is marked "permission granted" to the Mail Server, Web Server, and File Server.]
The firewall model was quite a success commercially, and the firewall industry
continues to grow today with numerous vendors fighting each other over features
and prices. Unfortunately, the firewall model makes a fundamental error: it as-
sumes that all the programs running inside the network are acting only on the
requests of internal users and that the data passing through the firewall is inconsequential. After all, no internal user would accidentally choose to leak a sensitive
corporate document to the open Internet. And, there should be no harm in letting
them download arbitrary files from the network. If files are all plain ASCII text
then there is clearly no danger. Even if a user downloaded a binary program, it
had to be explicitly installed in the system. Because this process was relatively
laborious, there was comparatively little danger of installing a dangerous program by
mistake.
With the modern Internet, users are not just getting their documents and e-mail
in plain text. Instead, documents are themselves programs or contain pro-
grams within themselves [Sib96]. Such documents are sometimes said to have “ac-
tive content,” containing exciting new gizmos like “scripting,” “applets,” “custom
controls,” “plugins,” or other strange marketing phrases like “crossware.” Funda-
mentally, they are all mobile code systems and they violate the assumptions made by
firewalls. Previously, a firewall could assume that an attacker could only be on the
outside. Now, with mobile code, an attack might originate from the inside as well,
mobile code technologies, the program can install itself and begin running with, at
most, a single mouse click. Following that one click, it might be possible for mobile
code, now running on a user’s machine, to act as if it were the user and attack the
corporate network.
Likewise, cryptography offers very little to help with mobile code. By using
digital signatures, cryptography can be used to certify the origin of mobile code,
and provide guarantees that the code received was not tampered with in transit.
Cryptography can make no guarantees about what the code might do when exe-
cuted.
In order to properly protect the users and their networks against attacks from
mobile code, either all mobile code support must be turned off, with the conse-
quent loss of functionality, or the users’ platforms must maintain adequate safe-
guards to control the actions of mobile code. Turning off mobile code is certainly
a simpler solution. Why not do it? The users will scream! If a company blocks
its employees from seeing the latest snazzy Web plugins, it may be preventing
its employees from interacting with suppliers and studying competitors. Like it
erly.
In essence, the biggest concern with mobile code is that it might be a way to
create Trojan Horses. When the invading Greeks wanted to sack Troy, they did it by
building a large wooden horse as a gift for the Trojans and pretending to abandon
their siege of the city. Unbeknownst to the Trojans, a number of Greek soldiers
hid inside the horse, which the Trojans brought inside their walls as a trophy. At
night, the soldiers emerged from their hiding place and opened the city gates to
the returning Greek armies. In the context of computer programs, we call a
program a Trojan Horse if it has apparently (or actually) useful features but also
contains hidden malicious functionality that exploits any privileges the program
issues will need to be addressed if we are to feel safe using our computers. Prob-
ably the single largest issue is known bugs in older existing systems. Most people
who attack computers are not computer researchers discovering new vulnerabil-
ities. Rather, they are skilled librarians, collecting known exploits like tools on a
tool belt. An attacker only needs to learn what software their target is using and
see if they have an appropriate exploit ready to go. While users can gain some
assurance by always upgrading their systems to the latest software versions and
applying security patches when available, much of the blame rests on the software
vendors who continue to ship buggy code. So long as feature lists and ship dates
are sufficient to gain market share and sell products, the effort to fix bugs or design
software correctly from the ground up will not be rewarded in the marketplace.
One promising trend is the shift away from the C and C++ programming lan-
guages to Java. Java, along with Modula-3, Scheme, ML, and most other modern
programming languages, supports type safety and strong abstraction. This means that,
while programs may continue to have bugs, it is not possible for a program to acci-
dentally reference memory it has not properly allocated or been given a reference
to, nor is it possible to confuse one data type for another. Type safety is a strong
enough property that it completely thwarts attacks that attempt to overflow in-
ternal buffers and overwrite the system’s memory with hostile code. While it is
possible to write secure code in C or C++, a great deal more effort is required be-
to come. In order to secure our future, we must realistically assess the legacy sys-
tems of our past and investigate systems that can retrofit security around them.
Chapter 2
While this dissertation is about the security of mobile code systems, it is impor-
tant to begin by talking about traditional computer security and what it means
for a system to be secure. With systems as early as Multics [BL76, Org72, Sal74]
and later in operating systems of all kinds and variations, vendors have worked
to build secure systems. The landmark “Orange Book” [Nat85], published by the
U.S. Department of Defense in 1985, sought to define common profiles for secure
systems, helping vendors reach consensus on what a secure system should be, and
then widely distribute such systems in the commercial as well as military mar-
ketplace. Based on research beginning with a task force in 1967, the DoD and its
contractors wrote down everything they believed they knew about building se-
cure software. The full series of books (“the rainbow books”) covers everything
from how software should be physically delivered to how version control should
be managed. The Orange Book introduced six criteria around which all secure sys-
tems could be judged, and specified a number of classifications, ranging from "D"
to "A1", against which a system would be evaluated.
Security Policy – There must be an explicit and well-defined se-
objects, there must be a set of rules that are used by the system to de-
termine whether a given subject can be permitted to gain access to a
every object with a label that reliably identifies the object’s sensitivity
level (e.g., classification), and/or the modes of access accorded those
system.
from modification and unauthorized destruction to permit detection
out the assigned tasks in a secure manner. The basis for trusting such
system mechanisms in their operational setting must be clearly docu-
ence monitor. A reference monitor is defined to be the portion of code that checks
each and every object reference and validates it against the system’s security pol-
icy [And72]. In order to be trustworthy, the reference monitor must be tamperproof, always invoked, and small enough to be analyzed and tested. Reference
This security kernel, and any external utilities it depends on to enforce system
security, are referred to as the trusted computing base or TCB.
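The reference monitor properties just listed (tamperproof, always invoked, small enough to analyze and test) can be illustrated with a toy sketch. The class name, method names, and ACL layout below are illustrative assumptions, not taken from any real TCB:

```java
import java.util.Map;
import java.util.Set;

// Toy reference monitor: every read of a protected object is routed
// through one small check against an access-control policy.
public class RefMonitor {
    private final Map<String, Set<String>> acl;  // object -> subjects allowed to read
    private final Map<String, String> store;     // object -> contents

    RefMonitor(Map<String, Set<String>> acl, Map<String, String> store) {
        this.acl = acl;
        this.store = store;
    }

    // "Always invoked": callers can reach object contents only via this method.
    public String read(String subject, String object) {
        Set<String> allowed = acl.get(object);
        if (allowed == null || !allowed.contains(subject)) {
            throw new SecurityException(subject + " may not read " + object);
        }
        return store.get(object);
    }

    public static void main(String[] args) {
        RefMonitor rm = new RefMonitor(
                Map.of("report", Set.of("alice")),
                Map.of("report", "quarterly numbers"));
        System.out.println(rm.read("alice", "report"));  // allowed
        try {
            rm.read("bob", "report");
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

Because the check is a few lines in one place, it is small enough to be analyzed and tested, which is exactly the property the Orange Book asks for.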
lists or Unix-style file permissions, although some commercial systems will also
want the ability to build restrictions about which applications can access which
data [CW87]. In contrast to the civilian sector, the military is very concerned about
maintaining its multilevel security (MLS) — classified, secret, top secret, and so
forth. A user with “secret” clearance should not be permitted to read a “top se-
cret” document or even learn of the existence of such a document. Likewise, there
policy [BN89], which gradually builds a wall around users as they view sensitive
information to prevent them from tainting other information considered to be com-
been formally verified. Formal verification has had its greatest successes in study-
While some operating system designers have attempted to create provably secure
systems [NBF+ 80], to require this for operating systems in 1985 was quite
about the validity of the system’s components would also help a system’s rating
by increasing our assurance of the system’s correctness.
Chapter 3
3.1 Introduction
The continuing growth and popularity of the Internet has led to a flurry of devel-
opments for the World Wide Web. Many content providers have expressed frustra-
tion with the inability to express their ideas in HTML. For example, before support
for tables was common, many pages simply used digitized pictures of tables. As
quickly as new HTML tags are added, there will be demand for more. In addition,
many content providers wish to integrate interactive features such as chat systems,
the notion of downloading a program (called an applet) that runs inside the web
browser. Such remote code raises serious security issues; a casual web reader
web page. Languages such as Java [GJS96], JavaScript [Fla97], Safe-Tcl [Bor94],
Limbo [Com97], Phantom [Cou95], Juice [FK97] and Telescript [Gen95] have been
proposed for running untrusted code, and each has varying ideas of how to thwart
malicious programs.
After several years of development inside Sun Microsystems, the Java language
was released in mid-1995 as part of Sun’s HotJava web browser. Shortly thereafter,
Netscape Communications Corp. announced they had licensed Java and would
incorporate it into their Netscape Navigator web browser, beginning with version
2.0. Microsoft later licensed Java from Sun, and incorporated it into Microsoft In-
ternet Explorer 3.0. With the support of many influential companies, Java effec-
tively became the standard for executable content on the web. This also made it
an attractive target for malicious attackers, and demanded external review of its
security.
Drew Dean and I first looked at Java in November, 1995 [DFW96]. Since that
time, the Princeton Secure Internet Programming group has found a number of
bugs in Netscape Navigator through all its various releases and later in Microsoft’s
Internet Explorer. As a direct result of our investigation, and the tireless efforts of
the vendors’ Java programmers, we believe the security of Java has significantly
improved since its early days [DFWB97]. In particular, Internet Explorer 3.0, which
shipped in August, 1996, had the benefit of nine months of our investigation into
Netscape’s Java. Still, despite all the work done by us and by others, no one can
Java is similar in many ways to C++ [Str94]. Both provide support for object-
oriented programming, share many keywords and other syntactic elements, and
can be used to develop stand-alone applications. Java diverges from C++ in a num-
execution. A Java runtime system may either interpret the bytecode directly or
compile it to native machine code [LY96]. Some newer Java systems statically com-
The standard Java distribution contains a large and growing collection of utility
code for every purpose, from basic data structures (hash tables, vectors, queues)
to string parsing routines, graphics and GUI functionality, networking and remote
procedure call support, database connectivity, cryptography, internationalization
Java programmers may combine related classes into a package. These packages
the apparent name hierarchy. In Java, public and private have the same meaning
as in C++: Public classes, methods, and instance variables are accessible every-
where, while private methods and instance variables are only accessible inside the
class definition. Java protected methods and variables are accessible in the class
or its subclasses or in the current (package, origin of code) pair. A (package, ori-
gin of code) pair defines the scope of a Java class, method, or instance variable
protected variables and methods can only be accessed in subclasses when they
class Foo {
    protected int x;
}
The definition of protected was the same as C++ in some early versions of Java; it
was changed during the beta-test period to patch a security problem [Mue96] (see
also section 3.4.2).
The Java runtime system is designed to enforce the language’s access semantics.
Unlike C++, programs are not permitted to forge a pointer to a function and in-
voke it directly, nor to forge a pointer to data and access it directly. If a rogue
applet attempts to call a private method, the runtime system throws an exception, preventing the errant access. Thus, if the system libraries are specified safely,
the runtime system is designed to ensure that application code cannot break these
specifications.
Java’s type safety can be mostly verified statically by the bytecode verifier which
examines every class before it is loaded by the Java system. Unfortunately, the Java
language has some features that prevent completely static verification. The type
system uses a covariant [Cas95] rule for subtyping arrays, so array stores require
run time type checks1 in addition to the normal array bounds checks. Cast ex-
pressions also require runtime checks. In addition to their performance penalties,
dynamic checks stretch the trusted computing base beyond the bytecode verifier.
In practice, the dynamic verification of Java classes is remarkably subtle and bugs
have been found regularly in the bytecode verifier [MF97, Sir97] and the class load-
The original version of Java distinguished remote code from local code. While local
code was permitted to do anything it wanted, remote code was restricted to the
Java “sandbox” security policy, which roughly states that applets may not access
the local file system at all and may only make network connections to their host of
origin.
1 For example, suppose that A is a subtype of B; then the Java typing rules say that A[] ("array of
A") is a subtype of B[]. Now the following procedure cannot be statically type-checked:

    void proc(B[] x, B y) {
        x[0] = y;
    }

Since A[] is a subtype of B[], x could really have type A[]; similarly, y could really have type A. The
body of proc is not type-safe if the value of x passed in by the caller has type A[] and the value of
y passed in by the caller has type B. This condition cannot be checked statically.

Since local code and remote code could co-exist in the same Java virtual machine (JVM), and can in fact call each other, the system needed a way to determine
if a sensitive call, such as a network or file system access, was executing “locally”
or “remotely,” since the security policy allowed more freedom for local code. The
original JVMs had two inherent properties used to make these checks:
Every class in the JVM that came from the network was loaded by a Class-
Loader, and includes a reference to its ClassLoader. Classes that came from
the local file system have a special system ClassLoader. Thus, local classes
can be distinguished from remote classes by their ClassLoader.
Every frame on the call stack includes a reference to the class running in that
frame. Many language features, such as the default exception handler, use
Combined, these two JVM implementation properties allow the security system to
search for remote code on the call stack. The system would actually count how
many “system” stack frames existed between the security check and the first “re-
mote” stack frame. This value, called the ClassLoader depth was used in making a
number of security decisions.
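The ClassLoader-depth computation described above can be modeled as a walk over an abstract call stack: count "system" frames outward from the security check until the first "remote" frame is found. The names and stack representation here are illustrative assumptions, not the actual JVM data structures:

```java
import java.util.List;

// Toy model of the ClassLoader-depth computation: frames[0] is the
// frame performing the security check; we count system frames until
// the first frame loaded by a remote (network) ClassLoader.
public class ClassLoaderDepth {
    enum Origin { SYSTEM, REMOTE }

    static int depth(List<Origin> frames) {
        int d = 0;
        for (Origin f : frames) {
            if (f == Origin.REMOTE) {
                return d;  // number of system frames above the remote code
            }
            d++;
        }
        return -1;  // no remote code on the stack: a fully trusted call
    }

    public static void main(String[] args) {
        System.out.println(depth(List.of(Origin.SYSTEM, Origin.SYSTEM, Origin.REMOTE)));  // 2
        System.out.println(depth(List.of(Origin.SYSTEM, Origin.SYSTEM)));                 // -1
    }
}
```

A security decision could then compare this depth against an expected value for the operation being checked, which is exactly what made the scheme fragile: the expected depths had to be kept in sync with the system's internal call structure.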
Later versions of Java replaced ClassLoader depths with a more general check-
ing mechanism called stack inspection. This will be introduced in chapter 5 and
To enforce the sandbox policy, all the potentially dangerous methods in the
system were designed to call a centralized SecurityManager class that checks if the
requested action is allowed (using the mechanisms described above), and throws
an exception if the request violates the policy.
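A minimal sketch of this call pattern, with a toy manager standing in for the centralized SecurityManager (the class and method names here are illustrative, not the actual JDK API):

```java
// Each dangerous operation consults a central manager before acting;
// the manager throws a SecurityException if the policy forbids it.
public class SandboxSketch {
    static class ToySecurityManager {
        private final boolean allowNetwork;
        ToySecurityManager(boolean allowNetwork) { this.allowNetwork = allowNetwork; }
        void checkConnect(String host) {
            if (!allowNetwork) {
                throw new SecurityException("connect to " + host + " denied");
            }
        }
    }

    // A "dangerous" library method: check first, then act.
    static String openConnection(ToySecurityManager sm, String host) {
        sm.checkConnect(host);           // consult the policy
        return "connected to " + host;   // only reached if allowed
    }

    public static void main(String[] args) {
        ToySecurityManager permissive = new ToySecurityManager(true);
        ToySecurityManager sandbox = new ToySecurityManager(false);
        System.out.println(openConnection(permissive, "example.org"));
        try {
            openConnection(sandbox, "example.org");
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

The essential design point is centralization: every dangerous method funnels through one policy object, so the policy can be audited in one place rather than scattered through the libraries.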
3.2.4 System Architecture
Because Java attempts to protect the local system from potentially malicious mo-
bile code, and it also tries to protect one such program from another, Java begins
to take on the appearance of an operating system instead of just a language. As
an operating system, the most conspicuously missing feature of Java is the use of
separate address spaces to separate processes. Instead, Java relies strictly on the
memory safety it gets from being type safe. This may have important ramifica-
by itself, does not provide all the guarantees of a traditional operating system, such
as fair scheduling, resource usage limits, multiple users, and more. Some recent
research projects have begun to address these issues [GWTB96, Dig97, MRR98,
BTS+ 98, TL98, Cv98, HCC+ 98, BG98], but no commercial products yet offer a com-
plete solution. These issues are discussed in more detail in chapter 5.
Different attackers may have different goals. Some may wish to steal secrets from
you. Others may wish to delete your files and crash your machine. Some may
to understand how attackers will go about breaking the system. This dissertation
is not meant to be a comprehensive list of every bug ever discovered in the Java
VM (most of which were never published) nor of every possible kind of Internet-
based security attack (although Howard [How97] has a nice summary), but instead
presents examples we and others have found that demonstrate each category.
3.3.1 Overflowing Buffers
security attack has been to overflow a fixed buffer. A common practice among
C programmers had been the use of gets() to read one line of text, terminated
by a newline, into a buffer. Because gets() has no argument specifying the max-
imum length of its output, it will happily overflow buffers that were allocated
with insufficient space. Similar issues occur with numerous C utility functions
like sprintf(). Attacks like this have been aimed at every conceivable Unix and
Windows utility. If the attacker knows something about where the buffer is allo-
cated (either on the stack or on the heap), it becomes possible to write executable
machine code directly into the system’s RAM and (particularly when overflowing
stack-allocated buffers) easily arrange for it to be executed.
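By contrast, Java's mandatory bounds checks close off this class of attack: an out-of-bounds store raises an exception rather than silently overwriting adjacent memory. A minimal sketch (class and method names are illustrative):

```java
// Every array store in Java is bounds-checked; writing past the end
// raises ArrayIndexOutOfBoundsException instead of corrupting memory.
public class BoundsCheckDemo {
    static String fill(int[] buf, int n) {
        try {
            for (int i = 0; i < n; i++) {
                buf[i] = 0xFF;  // checked on every store
            }
            return "filled";
        } catch (ArrayIndexOutOfBoundsException e) {
            return "out-of-bounds store rejected";
        }
    }

    public static void main(String[] args) {
        int[] buf = new int[8];
        System.out.println(fill(buf, 8));   // fits within the buffer
        System.out.println(fill(buf, 16));  // overflow attempt is rejected
    }
}
```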
nerable. In early alpha releases of Java, we found numerous cases where Sun used
fixed buffers and unsafe routines to write to them. Most of these problems were
fixed in the beta releases.
While an operating system can largely address this class of attacks by requir-
The goal in violating the type system is the same as in overflowing a buffer: in-
duce the system to execute arbitrary machine code and thereby work around any
restrictions imposed by the Java environment. To violate the type system, we need
a mechanism that allows an unchecked type cast. In C or C++, the programmer is
free to cast any object type to any other type. In fact, C’s union structure makes this
fast and convenient. Java goes to great lengths to prevent this (see section 3.2.2)
because unchecked type casts would allow an untrusted program to write a mem-
ory address into an integer, treat it as an object reference, and then write arbitrary
data anywhere in memory.
which is officially against the sandbox security policy, yet a number of tricks have
been discovered that have allowed them to be created anyway [MF97]. Since Class-
Loaders are responsible for the consistency of the Java type system [Dea97], it is
only natural that a malicious ClassLoader could arrange for an inconsistent type
system. With this, it becomes possible to treat a reference of one type as if it is
any other type in the system. Given this type caster, it is possible to write native
code into memory and execute it, although it is tricky and unportable. An easier
class with private members as if it is a reference to a class with all public members.
A simple but effective attack is to set the SecurityManager to null, effectively dis-
abling all system security and allowing a previously untrusted applet full access
to the system.
The system libraries need to exercise a number of dangerous privileges that are
buggy system code, you may be able to make it act on your behalf.
One such attack focused on the way Java handles fonts. Font properties are
stored in the Properties subsystem along with a fair amount of privileged in-
formation, such as the identity of the user and other browser settings. Normally, an
restricted information. This bug was initially discovered in 1996, but still appeared
as late as Sun’s JDK1.2beta4 release in mid-1998. The bug was fixed in later releases
of JDK1.2.
This attack, reading the system properties through the font subsystem, is an
example of how hard it is for developers to anticipate the secu-
rity implications of every line of code they write. One of the main goals of stack
inspection (discussed later in this dissertation) is to thwart this class of attacks.
Simple infinite for-loops or infinite memory allocation can destabilize and crash
the Java virtual machine. Even allocating thousands of windows can crash
some window systems. Simply playing an annoying tune might drive users away
from their machines. Mark LaDue has written a number of “hostile applets” that
demonstrate these attacks [LaD96].
There is no clean way for the system to distinguish an applet that plays
pleasant music from one that plays annoying noise. Similar issues apply to CPU usage.
3.3.5 Spoofing the User
One of the most frightening attacks is one where Java is used to simulate a pop-up
dialog box asking for the user's password
or credit card number. Even though Java's graphics are constrained to be within
the Web browser’s internal window, it is still possible to draw a simulated window
with simulated borders that can be dragged and resized! The only constraint is
that it cannot be dragged outside the browser’s internal window. This constraint
generally is not sufficient to protect users.
Windows NT addresses this problem by providing a trusted path be-
tween the user and the password system. This is done by requiring the user to
hit Control-Alt-Delete before an official password dialog will appear; NT will not
allow other programs to intercept this key sequence.
Unfortunately, this works for NT login, but most other password systems have
no such concept, particularly login screens for commerce-related Web sites, whether
running on NT or any other system. Some recent commercial trends toward sin-
gle sign-on might address this problem on the local system, but deploying such a
system where it can run compatibly across the Web and across different operating
systems will be a fascinating challenge.
3.4 Analysis
Through our studies of Java, beginning with the alpha releases of HotJava and
continuing through the latest releases from Sun, Netscape, and Microsoft, we have
found all kinds of security problems [DFWB97]. More instructive than the partic-
ular bugs we and others have found is an analysis of their possible causes. Policy
enforcement failures, coupled with the lack of a formal security policy, make inter-
esting information available to applets, and also provide channels to transmit it to
an arbitrary third party. The integrity of the runtime system can also be compro-
mised by applets. To compound these problems, no audit trail exists to reconstruct
an attack afterward. In short, the Java runtime system is not a high assurance
system.
3.4.1 Policy
The Java systems in Netscape Navigator, Microsoft Internet Ex-
plorer [Mic97a, Mic97b], and HotJava [Sun95] do not formally define a security
policy beyond the roughly stated “sandbox” policy. This contradicts one of the
oldest principles of secure system design: a security policy should state precisely how the system
is supposed to behave [Lan81]. In fact, Java has two entirely different uses: as a
general purpose programming language, like C++, and as a system for develop-
ing untrusted applets on the web. These roles will require vastly different security
policies for Java. The first role does not demand any extra security, as we expect the
operating system to treat applications written in Java just like any other applica-
tion, and we trust that the operating system’s security policy will be enforced. Web
applets, however, cannot necessarily be trusted with the full authority granted to
a given user, and so require that Java define and implement a protected subsystem
for running them.
3.4.2 Enforcement
Java fundamentally bases its protection on the type safety of the language. An in-
teresting issue is that Java's bytecode representation is strictly more expressive
than its source language; legal bytecode exists for which there is no equivalent Java
source. Tools like jasmin make it easy to write any desired bytecode, legal or ma-
licious [MD97]. Lindholm and Yellin [LY96] discuss many of the restrictions that
are placed on bytecode and implemented by the Java bytecode verifier, but their
treatment is informal. The author of a
system class generally reasons about the code's security using the semantics of the
Java source language. For example, a private variable may not be read or writ-
ten by code outside of the class that contains the variable. If this property were
enforced only by the compiler and not by the Java runtime, then hand-coded mali-
cious bytecode would be able to freely attack the system. More complex properties
relating to how a class calls its superclass constructor or when two classes are con-
sidered to be in the ‘same’ package are also relied upon for security, yet are not
discussed in the Java language specification [GJS96]. If any one of these security-
relevant properties turns out to be unenforced by the Java VM, then the code that
relied upon it would be vulnerable to attack.
More recent research on formal models of Java has begun
to address the safety of the language, in theory, and the bytecode restrictions, in
practice.
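The private-member guarantee discussed above really is enforced by the runtime and not only by the compiler. One way to observe the runtime check without hand-writing bytecode is through the reflection API, whose access checks mirror the language rules (class names here are hypothetical):

```java
import java.lang.reflect.Field;

// A class whose field the language promises outsiders cannot read.
class Vault {
    private int secret = 7;
}

public class PrivateDemo {
    // Returns true only if the private field could be read without permission.
    static boolean canReadPrivate() {
        try {
            Field f = Vault.class.getDeclaredField("secret");
            f.getInt(new Vault());   // no setAccessible(true): the VM must refuse
            return true;
        } catch (IllegalAccessException e) {
            return false;            // enforced at runtime, not just compile time
        } catch (NoSuchFieldException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(canReadPrivate());  // false on a correct JVM
    }
}
```

If the runtime failed to perform this check, hand-coded bytecode (or reflection) could read any "private" state in the system libraries.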
Java’s security must be enforced at a higher level as well. The Java “sandbox”
security policy, such as it is, specifies certain restrictions on a Java applet’s author-
ity to make network connections, open files, and learn system properties. To en-
force this, the Java SecurityManager is intended to be a reference monitor [Lam71].
A reference monitor must satisfy three properties:
1. It is always invoked.
2. It is tamperproof.
3. It is verifiable.
Unfortunately, the Java SecurityManager design has weaknesses in all three ar-
eas. It is not always invoked: programmers writing the security-relevant portions
of the Java runtime system must remember to explicitly call the SecurityManager.
A failure to call the SecurityManager will result in access being granted, contrary
to the security engineering principle that dangerous operations should fail unless
explicitly permitted. Nor is the SecurityManager easily verifiable: its checks are
scattered through the runtime and depend on subtle properties that are difficult
to analyze. Consider the code in figure 3.1. The Java authors identified two
/**
 * Applets are not allowed to link dynamic libraries.
 */
public synchronized void checkLink(String lib) {
    switch (classLoaderDepth()) {
    case 2:   // Runtime.load
    case 3:   // System.loadLibrary
        throw new AppletSecurityException("link", lib);
    default:
        break;
    }
}
entry points from which applet code might request the dynamic loading of a li-
brary and encoded some fairly fragile information about these entry points in the
SecurityManager. The security check is fragile because other system classes might
also provide a way for an applet to load a library, directly or indirectly. These could
easily be missed when the check was written, or added later without anyone
updating the SecurityManager. The next generation of Java security, which
appeared in Netscape Communicator 4.0, Microsoft Internet
Explorer 4.0, and Sun's JDK 1.2, was stack inspection, which was designed
specifically to generalize and clean up the ClassLoader depth issues. This is described in
detail in chapter 5. Stack inspection is actually a very interesting mechanism and
its design, formalization, analysis, and performance are among the main contributions
of this dissertation.
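The fragility of frame counting is easy to demonstrate: wrapping a call in one more method shifts every hard-coded depth check like the `case 2:` / `case 3:` above. A small sketch (method names are hypothetical; it uses the standard `Thread.getStackTrace` API):

```java
public class DepthDemo {
    // Approximate "stack depth" the way a depth-counting SecurityManager
    // might: by measuring the current call stack.
    static int depth() {
        return Thread.currentThread().getStackTrace().length;
    }

    // An innocent-looking wrapper adds one frame ...
    static int depthViaWrapper() {
        return depth();
    }

    public static void main(String[] args) {
        int direct = depth();
        int wrapped = depthViaWrapper();
        // ... which would invalidate any check keyed to a fixed depth.
        System.out.println(wrapped - direct);  // 1
    }
}
```

Any refactoring of the system libraries that adds or removes a frame on the path to the check silently changes which case is taken, which is exactly the fragility stack inspection was designed to eliminate.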
3.4.3 Integrity
The Java runtime has a substantial amount of code written in C. Sun’s JDK 1.0.2,
for example, had 121K lines of C or C++ code compared to 107K lines of Java code.
This means that over half of the JDK is potentially vulnerable to buffer overflow
attacks. We do not have access to current Java source, but the native code size has
certainly grown larger in newer releases. If more of the Java runtime were written
in Java itself, these potential vulnerabilities would not exist.
Bugs can also reveal internal state to an applet because the HotJava browser's state is kept in Java
variables and classes. Variables and methods that are public are potentially very
dangerous: they give the attacker a toe-hold into HotJava's internal state. Static
synchronized methods are also dangerous, because an applet can acquire their locks
and never release them. These are all issues that can be addressed with good de-
sign practices, coding standards, and code reviews.
The Java system does not include an identified trusted computing base (TCB).
Substantial and dispersed parts of the system must cooperate to maintain security.
The bytecode verifier and the interpreter or native code generator must properly
implement all the checks that are documented. The HotJava browser (a substantial
program) must not export any security-critical, unchecked public interfaces. This
does not approach the goal of a small, well defined, verifiable TCB. An analysis
of which components require trust would have found the problems we have
exploited, and perhaps solved some of them. More recent work has begun to address
this shortcoming.
3.4.4 Accountability
The Orange Book states: “Audit information must be selectively kept and protected so that actions affecting secu-
rity can be traced to the responsible party.” [Nat85] The Java system does not define
any auditing capability. If we wish to trust a Java implementation that runs byte-
code downloaded across a network, a reliable audit trail is a necessity. The level
of auditing should be selectable by the user or system administrator. As a min-
imum, files read and written from the local file system should be logged, along
with network usage. Some users may wish to log the bytecode of all the programs
they download. This requirement exists because the user cannot count on the at-
tacker’s web site to remain unaltered after a successful attack. The Java runtime
system should provide a configurable audit system.
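A configurable audit facility of the kind proposed here could be as simple as the following sketch (all names are hypothetical, not an API from any Java release):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the proposed audit trail: security-relevant events are
// appended to a log, and the user or administrator selects what is recorded.
public class AuditLog {
    private final List<String> entries = new ArrayList<>();
    private boolean logFileAccess = true;   // selectable by the user
    private boolean logNetwork = true;      // or the system administrator

    void fileRead(String path) {
        if (logFileAccess) entries.add("READ " + path);
    }

    void networkConnect(String host, int port) {
        if (logNetwork) entries.add("CONNECT " + host + ":" + port);
    }

    List<String> entries() { return entries; }

    public static void main(String[] args) {
        AuditLog log = new AuditLog();
        log.fileRead("/etc/passwd");
        log.networkConnect("attacker.example.com", 80);
        System.out.println(log.entries());
    }
}
```

The essential property is that the log survives the attack: entries must be written somewhere the applet cannot reach, which is why the proxy-server design below keeps the log off the user's machine entirely.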
While many firewall products purport to block applets, fewer offer support to log them, despite the rela-
tive ease with which it could be added to their products. Martin et al. [MRR97]
and LaDue [LaD96] have shown that blocking is somewhere between difficult and
impossible; an applet might, for example, arrive through an en-
crypted channel. Without cooperation from the Web browser, the firewall would
not be able to distinguish the applet from any other encrypted data. Perhaps sim-
ply logging all Java classes would be easier, and could provide sufficient evidence
to reconstruct an attack and identify its source. If the JVM were forced to load its
classes through a Web proxy server, that proxy server could host the tamper-proof
log. If the JVM sent an extra message to the proxy server before loading any class
(e.g., “now loading x/y/z.cls from foo.zip”), the server would know to log its
cached copy.
The Java VMs that ship with Netscape's and Microsoft's products make no attempt
to control attacks that simply consume resources such as using too many CPU
cycles or allocating too much memory. Several researchers have proposed archi-
tectures where Java applets run on distinct machines, separated from a user’s Web
browser, and displaying graphics remotely [SGB+ 98, MRR98, Dig97]. These sys-
tems fundamentally place no trust in the JVM itself and instead use OS-level pro-
tection to sandbox the entire JVM. Other researchers have proposed Java architec-
tures that run separate instances of the Java machine within the same address space
and can control the resource usage of each JVM instance [Cv98, BTS+ 98, TL98]. We
will discuss and analyze these architectures in more detail in chapter 5.
Chapter 4
Many services need to restrict who can use them and what they can do. A bank may allow customers access to infor-
mation on their accounts. A subscription Web site may allow known subscribers to
download the latest news and gossip. Generally, such systems are designed with
a large database containing the state of the server, wrapped with Web servers with
software that limits the kinds of requests and queries that can be made against the
database (sometimes called a “three tier architecture,” referring to the web client,
the web servers with their “application logic,” and a database system backing the
web servers). Obviously, a bank does not want its customers to arbitrarily access
the database and edit the amount of money present in their accounts!
A secure service is a layer of software (or hardware) that takes a dangerous prim-
itive service, such as access to the bank's database, and exports a safe interface,
such as a restricted set of queries. Even when a service is spread across a number
of computers, the complete system must still be secure against attacks from the
outside. Many of the techniques we might use to build secure single-computer
systems apply here as well.
Fundamentally, a security policy determines who
is allowed to access what. In the abstract, all security policies can be expressed
as an access control matrix [Lam71], stating for each subject and object, what kinds
of accesses are permitted from that subject to that object. In practice, a number of
different techniques have evolved to express the access control matrix. These are
discussed below.
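The abstract matrix can be rendered directly as a data structure; this toy sketch (hypothetical names) stores only the non-empty cells:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// The access control matrix in the abstract: for each (subject, object)
// pair, the set of access kinds permitted from that subject to that object.
public class Matrix {
    private final Map<String, Set<String>> cells = new HashMap<>();

    private static String key(String subject, String object) {
        return subject + "||" + object;
    }

    void grant(String subject, String object, String right) {
        cells.computeIfAbsent(key(subject, object), k -> new HashSet<>())
             .add(right);
    }

    boolean allowed(String subject, String object, String right) {
        return cells.getOrDefault(key(subject, object), Collections.emptySet())
                    .contains(right);
    }

    public static void main(String[] args) {
        Matrix m = new Matrix();
        m.grant("alice", "account-42", "read");
        System.out.println(m.allowed("alice", "account-42", "read"));   // true
        System.out.println(m.allowed("alice", "account-42", "write"));  // false
    }
}
```

The techniques discussed below (permission bits, ACLs, groups, capabilities) are all ways of storing or factoring this same matrix more compactly.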
Much of the work in security architectures has occurred in the pursuit of secure
operating systems, particularly for time-sharing applications. With hundreds or
thousands of users simultaneously sharing the same computer, the operating sys-
tem is responsible for ensuring an equitable division of the machine's resources
among its users.
4.1.1 Processes
The Unix systems we use today as well as Multics and numerous other systems
that came beforehand [Tan92] support the notion of a process. The process en-
capsulates a running program, preventing it from reading or
writing directly to any of the computer's devices. Processes are likewise prohib-
ited from directly handling interrupts or other hardware-related events. A process
may read and write only to memory mapped into its address space by the kernel.
In order for a process to interact with the outside world, it must either use a shared
memory buffer or make a system call. A number of different mechanisms are used
to implement system calls. Fundamentally, the kernel provides interfaces to read
and write files, make network connections, allocate and free memory, interact with
devices such as keyboards and mice, communicate with other processes, and so
forth.
The kernel must very carefully check the arguments passed to every kernel call and make
sure the calling process has permission to perform the requested operation. In Unix, each pro-
cess runs with a user ID and a group ID. Based on these two IDs, the kernel will
evaluate whether the process has sufficient privilege to access the requested re-
source. Traditional Unix systems label each file with a user and a group and have
permission bits associated with each object. These permission bits, nine in all, state
whether the user may read, write or execute the file, whether a member of the
group may read, write, or execute the file, and finally whether anybody else may
read, write, or execute. Some newer Unix systems as well as Microsoft Windows
NT, Multics, and other systems support access control lists (ACLs), which provide
a more general specification of access rights than Unix’s permission bits. An ACL
can express an arbitrary list of user and group ID’s and state what permissions are
granted to each of them. An ACL represents one row of the access control matrix
– saying, for one object, all the subjects that may access it.
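The nine-bit Unix check described above can be written down in a few lines. This sketch (hypothetical method names) follows the kernel's rule of consulting the owner bits if the user IDs match, else the group bits if the group IDs match, else the "other" bits:

```java
// The nine Unix permission bits: owner rwx, group rwx, other rwx,
// packed into an octal mode such as 0644 (rw-r--r--).
public class UnixPerm {
    static final int R = 4, W = 2, X = 1;

    static boolean mayAccess(int mode, int fileUid, int fileGid,
                             int uid, int gid, int want) {
        // Pick exactly one of the three bit triples, as the kernel does.
        int shift = (uid == fileUid) ? 6 : (gid == fileGid) ? 3 : 0;
        return ((mode >> shift) & want) == want;
    }

    public static void main(String[] args) {
        int mode = 0644;  // rw-r--r--
        System.out.println(mayAccess(mode, 100, 10, 100, 10, W));  // true: owner
        System.out.println(mayAccess(mode, 100, 10, 200, 10, W));  // false: group
        System.out.println(mayAccess(mode, 100, 10, 200, 20, R));  // true: other
    }
}
```

An ACL generalizes this by replacing the fixed owner/group/other triple with an arbitrary list of (principal, rights) pairs, at the cost of a longer lookup.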
When the kernel is evaluating the permissions on an object, it must be careful
in how the arguments are read from and written to the user process; it would be
unacceptable for a multi-threaded user process to change the name of a file while
it is being simultaneously validated by the kernel. This could result in the security
portion of the kernel approving a file to be read only to have another file read
instead. Likewise, the kernel must be careful in how it passes results back to a user
process to avoid accidentally leaking the secrets of another process that may have
previously occupied the same buffer in the kernel.
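This time-of-check-to-time-of-use hazard can be illustrated in miniature. In the sketch below (hypothetical names, with a mutable StringBuilder standing in for a shared user-space buffer), the copy taken before validation is the name actually used, so a concurrent mutation is harmless:

```java
// Sketch of the copy-before-check discipline described above.
public class Toctou {
    static String validateAndResolve(StringBuilder userBuffer) {
        String copy = userBuffer.toString();   // snapshot before checking
        if (!copy.startsWith("/tmp/")) {
            throw new SecurityException("access denied: " + copy);
        }
        // Simulate a concurrent attack: the user mutates the shared buffer
        // after the check. The kernel operates on its private copy anyway.
        userBuffer.setLength(0);
        userBuffer.append("/etc/passwd");
        return copy;                           // the name actually used
    }

    public static void main(String[] args) {
        System.out.println(validateAndResolve(new StringBuilder("/tmp/safe")));
    }
}
```

Had `validateAndResolve` re-read `userBuffer` after the check, the validated name and the used name could differ, which is exactly the race the copy prevents.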
The traditional solution to these security issues is to always copy the data. The
kernel will copy all string arguments before parsing them, therefore ensuring that
the file validated to be read is the file that is actually read. Then, when data is
actually read from a file or network, it is copied from the kernel's disk buffer back
into the user process's address space.
Two of the dominant costs in such a sys-
tem are copying data between address spaces and updating the system's page
tables [Ous90, ALBL91]. Both of these operations are quite expensive. Copying
reduces system throughput by requiring multiple cycles of the CPU reading and
writing the data to memory. This may also cause useful data to be ejected from
memory caches. Extraneous page table operations may also require cache flushing
in some virtual-memory mapped caches.
Operating system designers and system architects have considered a wide range
of solutions to reduce these costs. Data copying can be reduced by careful buffer
management and by mapping hardware devices directly into a user process’s ad-
dress space. To avoid forcing a cache flush on every context switch, some CPUs
with virtually-mapped caches include a process-ID tag as part of the cache address.
Alternately, single address-space systems [CLLBH92, CLFL94] place all processes
into a flat address space. When such a system switches between processes, it need
only change protection bits on the relevant pages.
These costs arise in any interaction between untrusted user code and trusted system code. Modern oper-
ating systems have begun to look at injecting user code directly into the kernel,
where costs can be even smaller. We discuss these systems later in section 4.3.
Multics organized its protection mechanisms as a series of concentric
rings, with more privileged code getting closer to the center and unprivileged code
on outer rings [Org72, Sal74, SS72]. Calls to inner rings were restricted, much like
Unix system calls. The GE mainframe supported ring crossing calls with no ad-
ditional overhead relative to normal procedure calls. This led to a system where
device drivers could run in a separate ring from the rest of the kernel. This sepa-
ration could isolate a fault in one kernel subsystem and prevent it from damaging
the rest of the system.
The benefit of ring structures is that they naturally follow the layering inherent
in an operating system design and provide nice separation and protection between
layers. Still, problems arise in deciding what portions of the system to assign to
what layers. Inner rings are allowed to freely read and write the memory of their
callers in outer rings, so code closer to the center must necessarily be more trusted.
Multics also supported the U.S. military’s multi-level security policy [BL76],
where a document that is classified top secret may not be read by a user with only
a secret clearance. Where access control lists provide discretionary access control,
multi-level security provides mandatory access control, imposed by the system
rather than chosen by individual users.
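The "no read up" core of the multi-level policy reduces to a comparison of security levels. A minimal sketch (hypothetical names):

```java
// Multi-level security in miniature: a subject may read an object only if
// the subject's clearance dominates the object's classification.
public class Mls {
    static final int UNCLASSIFIED = 0, CONFIDENTIAL = 1,
                     SECRET = 2, TOP_SECRET = 3;

    static boolean mayRead(int clearance, int classification) {
        return clearance >= classification;   // "no read up"
    }

    public static void main(String[] args) {
        System.out.println(mayRead(SECRET, TOP_SECRET));    // false
        System.out.println(mayRead(SECRET, CONFIDENTIAL));  // true
    }
}
```

Because the comparison is made by the system on every access, a user cannot opt out of it the way an ACL's owner can loosen a discretionary policy.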
4.1.3 Other Grouping Structures
Systems that support domain and type enforcement (DTE) allow processes to be
grouped into domains and objects into types, with access rules expressed between
domains and types; this is well suited to manda-
tory policies. Likewise, role-based access control systems [SCFY94] allow privi-
leges to be granted to roles rather than individual users. If a user is added to the
group corresponding to a specific role, then the user may exercise the privileges
of that role. All of these grouping structures add a level of indirec-
tion to the access control matrix, simplifying the expression of policies by either
grouping subjects or objects together and expressing policies against the aggregate
groups.
In Unix-style operating systems, a process may name any system object it wants,
whether a file or a system device, and the kernel then centrally decides whether
the process making the request is sufficiently privileged to make the request. In
a capability system, however, a process will not necessarily be able to name any
file or resource. Instead, the ability to access a resource is mediated by possession
of a capability, which can be thought of as an unforgeable pointer to that resource.
Either the hardware (for example, tagged memory)
will prevent an untrusted process from changing the value of the pointer, or the
kernel can supply “handles” or “descriptors” that allow an untrusted process to
refer to the resource only indirectly. A user's priv-
ileges will be a group of capabilities that are inherited by programs run by that
user. Capability systems are surveyed by
Hardy [Har85].
Plan 9 [PPTT90] can be thought of as an interesting variation on a capability sys-
tem. Historically, however, capa-
bilities were relatively expensive to invoke, creating issues with system perfor-
mance, and the confinement of privileges is problematic. The performance issue is
relatively easy to envision: either your hardware must support relatively exotic
semantics with tagged memory, or you must add a level of indirection between
a process and its capabilities. Either will constrain the performance of a system.
The confinement issue is much trickier: once a capability has been passed from a
user to a program or from one program to another, there must be a mechanism to
eventually revoke that capability. Without revocation, the capability can no longer
be said to “belong” to the user to whom it was originally granted. While a number
of extensions to capability systems exist to address these concerns [KL87], they all
either increase the cost of invoking a capability or increase the cost of sharing one.
4.2 Distributed Systems
Many assumptions that hold on a single machine no longer hold
true in a distributed system. For example, while all tasks on a single machine can
trust their common kernel, machines on a network cannot blindly trust one
another. An attacker may observe
traffic on the network or even inject their own messages that pretend to be from an-
other machine, so the mechanisms of the previous section are not by themselves
sufficient.
4.2.1 Firewalls
One common approach is to isolate a cluster of machines from the outside
network, forcing all traffic to pass
through a firewall. In this way, every machine in the cluster will implicitly trust
all messages it receives. This is probably the dominant method used in building
secure distributed systems today.
Capabilities extend quite naturally across a network. If remote object references in-
clude an unguessable string of bits (e.g., 128 random bits), then a remote capability
reference has all the security properties of a local capability [TMvR86, Gon89] (as-
suming the network itself resists eavesdropping; otherwise an observing
third party has just as much power to use the capability as its intended holder). By
themselves, networked capabilities offer no way for an object to identify its remote
caller. If the capability is invoked over an au-
thenticated channel (as provided by Kerberos [KN93], SSL [FKK96] / TLS [DA97],
or Taos [WABL94]), the channel’s remote identity might be useful. Gong [Gon89],
Deng, et al. [DBWL95], and van Doorn, et al. [vABW96] describe implementations
of this.
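A network capability of the kind described above might be minted as follows (the object name and token format are hypothetical; the 128 bits drawn from `SecureRandom` are what make the reference unguessable):

```java
import java.security.SecureRandom;

// Sketch of a networked capability: a remote object reference padded with
// 128 unguessable random bits, so possession of the string is the privilege.
public class CapToken {
    static String mint(SecureRandom rng) {
        byte[] bits = new byte[16];              // 128 random bits
        rng.nextBytes(bits);
        StringBuilder sb = new StringBuilder("obj-1234/");
        for (byte b : bits) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        System.out.println(mint(rng));
        // Two independently minted tokens virtually never collide.
        System.out.println(mint(rng).equals(mint(rng)));  // false
    }
}
```

Guessing a valid token requires searching a 2^128 space, which is why such a reference behaves like an unforgeable pointer even though it is just a string of bits on the wire.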
When two parties establish a network
connection, they will likely perform some kind of cryptographic handshake that
proves to each party the identity of the other party [Sch96]. Many remote object
systems, including DCE and CORBA [Hu95, DBWL95, Sie96], allow for access con-
trol lists associated with each object to be consulted when an object is invoked.
Taos [WABL94, LABW92, vABW96] is distinctive because it maintains com-
pound principals. So, as a request passes from one process to another and from one
machine to the next, the full chain of identities is maintained and can be passed
along. Likewise, Taos has a strong notion of delegation, whereby one principal can
make a statement that another principal can act on its behalf. This statement could
be sent across an encrypted channel or digitally signed and sent in the clear. Del-
egation statements can become quite complex, so Taos uses a theorem prover to
decide whether each request should be granted.
4.3 Mobile Code Systems
Modern research operating systems have invested a fair amount of effort in sup-
porting mobile code. One of the important motivations for mobile code, particularly
injecting code from user processes into the kernel, has been to overcome the per-
formance barriers inherent to crossing from user mode to kernel mode and back.
In addition to the need for a security architecture, such systems also need a way to
prevent the injected code from having arbitrary memory access to the kernel. So,
just as Java relies on a bytecode verifier and then an interpreter or compiler, other
mobile code systems have used interpreters, machine code rewriting [WLAG93],
trusted compilers [BSP+ 95, PB96, SESS94, SESS96], or machine-checkable proofs of
code safety [NL96].
Many interesting security architectures have also been explored by these sys-
tems. Vino [SESS94, SESS96] exports a transactional interface to its kernel exten-
sions. If the extension misbehaves, the transaction can be cancelled and its effects
undone. Flux [FHL+96] allows recursive virtual machines to manage each other's
resources. Scout [MP96] allows user and kernel code to cooperate in groups called
paths that are scheduled together and may be assigned resource limits as a group.
Consider the Unix password program, a classic trusted utility
that users may invoke to change their password. In general, users should have
sufficient privilege to change their own password, but should not have sufficient
privilege to change anyone else's, although an administrator override may
be necessary when a user forgets his or her password. In a Unix-style system,
the password program runs with full system privileges, and attackers are forever
searching for bugs that might allow a user to convince the password program to
perform a dangerous action on the user's behalf (see also the discussion of “the confused
deputy” in section 5.2.4). In a DTE system or a system with ACLs, the password
program may be granted a more fine-grained privilege to edit specifically the pass-
word file but no other. In a capability system, you could imagine each user be-
ing given a customized instance of the password program that would only allow
that user's own entry to be changed. In each case, the goal is to give the password
program only the minimum privileges
necessary for it to get its job done. As we will see in the next chapter,
each security architecture has pros and cons in how easily it can capture desired
security policies.
It is important to point out that not all security policies can be strictly expressed
as access control matrices. A security policy may be arbitrarily complex. It may
involve constraints on the time of day a request is made. It might require the au-
diting of every request. In general, the natural way to capture such a policy is
with a layer of trusted code that wraps the resource and enforces the pol-
icy. We call this a trusted subsystem. The Unix password utility is one example of
such a system. Clark and Wilson [CW87] discuss the concept of trusted subsystems
in great detail, and suggest that secure systems in the commercial world are fun-
damentally built from trusted subsystems rather than traditional access controls.
4.5 The Bad Guys Are Out There...
Many attackers use tools like COPS [Far93] or SATAN [FV93], which automate
the process of checking for known bugs in remote network systems. These freely
available tools, as well as commercial tools such as ISS’s Internet Scanner [Int98],
are designed to help systems administrators audit their own networks, but are
equally useful to attackers searching for vulnerable machines.
Once an attacker has broken into the machine, a common next step is to install
a back door that lets the attacker return more easily in the future. Software that imple-
ments sophisticated back door functionality, hiding the back door from the user
of a system by replacing system utilities that might be used to notice it, is freely
available for both Unix and Windows [O’B96, Cul98].
One would like to believe that, with so many ways to build a secure system,
one of them must actually work. Certainly, it must be possible to build a secure
system. Unfortu-
nately, several market forces push vendors in directions that conflict with security.
From the companies developing software to those deploying their systems,
there seems to be a strong preference for additional features instead of fewer bugs.
From users to technology executives, from salespeople to advertisers, features sell
products. Across the industry, development resources appear to be increasingly
devoted to new features rather than to reliability and security. Commercial
software has also grown immense; in a
system that large, it becomes impossible for any one developer to be aware of the
whole system and every conceivable interaction among components. This leads
to unforeseen control flow and data dependencies that may cause one module to
violate the invariants held by another module. Normally, that would lead to a bug
that should be caught in normal quality assurance testing of the product before it
ships. Security-related bugs, however, are different because they do not necessar-
ily cause quality assurance tests to fail or raise other alarms. Instead, the system
may be quietly permitting something that it should not. Security bugs thus can lie
dormant until an attacker discovers and exploits them.
Better technology can help. When code is written in a type-safe language, including Java, Modula-3, ML, and
Scheme but excluding C or C++, buffer overflow bugs largely disappear (see sec-
tion 3.3.1). Likewise, if programs were structured with a security kernel to reduce
their TCB (see chapter 2), large sections of the system that have no need to be
trusted could happily run with lower privilege.
Even if vendors applied the best design and debugging practices to their software, we would still have insecure sys-
tems. Security requires serious effort on the part of systems administrators to
properly install and configure their systems. Security also requires users to follow
safe practices. One user installing a malicious (or virus-infected) program from the
Internet could foil all the efforts of their administrators to protect their domain.
Given the increasing sophistication of attackers' tools combined with the in-
creasing complexity of commercial software, it becomes difficult to reach anything
beyond a pessimistic conclusion about security in the future. We hope the increas-
ing attention paid to software security and software engineering will result in more
sophisticated techniques (such as those espoused in this dissertation) being used in
industry. Likewise, with the market for security products such as firewalls grow-
ing steadily, application and operating system vendors may see security as a way
to distinguish their products.
Chapter 5
In Sun’s original Java system, there was no real security architecture. Instead, the
Java system relied on a number of ad hoc mechanisms from which to build security.
Later systems introduced genuine security architectures,
many of which have been borrowed from traditional operating systems (detailed
in chapter 4).
This chapter examines how several security architectures for
Java have been implemented and will evaluate them against a number of criteria
relevant to security, performance, and flexibility. In the case of mobile Java applets,
all of these architectures allow the traditional Java sandbox to be extended when
the code is “trusted.” Generally this trust is based on digital signatures that have
been verified against known principals.
To implement a flexible security policy, steps must be taken to identify the “source”
of any program, based either on network addresses or on digital signatures. The sys-
tem must then decide what privileges are appropriate, based on input from either
the user or system administrator. Any flexible system will have these same basic
requirements.
When a user buys shrink-wrapped software
in a store, the user decides whether to trust the program before installing it, based
on the reputation of the program and its vendor. To provide a comparable guarantee
for a program delivered over a network, the code can carry a
digital signature. The identity of a program's signer forms the basis of a trust de-
cision for the user: do you trust Company X to endorse potentially dangerous code
that you run on your computer? The digital signature also guarantees that the code
that was downloaded is the same as the code that its endorser originally signed.
The endorser does not necessarily have to be the same as the original developer of
the program.
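The two guarantees just described, the identity of the endorser and the integrity of the code, can be sketched with the standard `java.security.Signature` API. RSA with SHA-256 is used purely for illustration; the deployed systems used their own signature formats:

```java
import java.security.*;

// Miniature signed-code check: the signature verifies only if the code
// bytes are exactly what the endorser signed.
public class SignDemo {
    static KeyPair newEndorser() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            return gen.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    static byte[] sign(byte[] code, PrivateKey key) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(key);
            s.update(code);
            return s.sign();            // shipped alongside the code
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    static boolean verify(byte[] code, byte[] sig, PublicKey key) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(key);
            s.update(code);
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair endorser = newEndorser();
        byte[] code = "applet bytecode...".getBytes();
        byte[] sig = sign(code, endorser.getPrivate());
        System.out.println(verify(code, sig, endorser.getPublic()));  // true
        code[0] ^= 1;                   // tamper with a single bit
        System.out.println(verify(code, sig, endorser.getPublic()));  // false
    }
}
```

Note that verification binds the code to the endorser's public key, not to the original author: anyone holding the private key can endorse anyone's code, which is exactly the flexibility the text describes.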
Digital signatures can provide the same benefits and opportunities for Java.
However, Java offers the opportunity to grant more fine-grained privileges than the
all-or-nothing choice of whether to install a program. Each signer of a program
can be treated as a principal, in
the same way that every user of a traditional operating system is also a principal.¹
Then, the system’s security policy can maintain a table listing which principals are
allowed to endorse access to which resources, based on the desires of the system’s
user and administrator.
There is no reason a program cannot have multiple signatures, and hence multi-
ple principals. This means we must be able to combine potentially conflicting per-
missions granted to different principals. In practice,
Microsoft Internet Explorer 4.0 (MSIE 4.0) and the Sun Java Developer Kit 1.2
(JDK 1.2) only support a single signature per program [Mic97a, GS98].
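One conservative rule for combining multiple signers' permissions is intersection: grant only what every endorser allows. The sketch below illustrates that rule; it is one plausible policy, not the rule any particular browser used:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Combine per-signer permission sets by intersection: a program signed by
// several principals receives only the privileges all of them endorse.
public class Combine {
    static Set<String> combine(List<Set<String>> perSigner) {
        if (perSigner.isEmpty()) return Collections.emptySet();
        Set<String> result = new HashSet<>(perSigner.get(0));
        for (Set<String> s : perSigner) result.retainAll(s);
        return result;
    }

    public static void main(String[] args) {
        Set<String> signerA = new HashSet<>(Arrays.asList("net", "file"));
        Set<String> signerB = new HashSet<>(Arrays.asList("net"));
        System.out.println(combine(Arrays.asList(signerA, signerB)));
    }
}
```

A union rule (grant anything any signer endorses) is also conceivable but is clearly less safe, since a single overly generous endorser would then defeat the others.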
While all the Java systems maintain a table of which principals are allowed
which privileges, MSIE 4.0 also allows a digital signature to endorse a Java pro-
gram for only a specific set of privileges. This allows an endorser to say “I certify
this program is safe to use unrestricted networking, but not file system access”
rather than simply “I certify this program is safe.” The Jar (Java archive) file for-
mat [Sun96, Net96], used by NS 4.0 and JDK 1.2, does not currently support such
limited endorsements.
5.1.2 Administration
In Java, a critical issue with the system’s security policy is how to help non-technical
users make security-relevant decisions about who they trust to access particular
resources. One strategy to simplify the user interface, used in NS 4.0, pre-defines
groups of common privileges and gives them user-friendly names. For example,
¹ Principal and target, as used in this dissertation, are the same as subject and object, as used in the
security literature, but are clearer for discussing security in object-oriented systems.
a “typical game privileges” group might refer to specific limited file system ac-
cess, full-screen graphics, and network access to the game server. Another strat-
egy, used by MSIE 4.0, is to classify principals together into groups, called security
zones. These grouping strategies reduce the number of dialog boxes presented to
a user and make the dialog boxes simpler than deciding whether each individual
principal may be granted each individual privilege.
The systems also differ on when an applet may request privileges. NS 4.0 leaves this decision up to the applet. An applet may request any privi-
lege at any time. This would allow an applet to begin running in a demonstration
mode, for example, and later ask the user for the privilege to save work to the disk.
MSIE 4.0, on the other hand, requires the request for privileges to be encoded di-
rectly into the digital signature, forcing any user dialogs to occur before the applet
begins running. While this allows a single user dialog to specify permissions at
once, avoiding “dialog fatigue,” it also forces an applet to ask for every possible
permission it might ever need, since it will never get a chance to ask for a privilege
later.
Many organizations wish to administer a centralized security policy.
These organizations need access to the Web browser’s policy mechanism either
to pre-install and “lock down” all security choices or at least to pre-approve applets.
If an applet is known to be trustworthy, all users should not be burdened with dialogs asking them to grant it privileges.
Likewise, if a Web site is known to be malicious, an administrator could block it
entirely.
Both NS 4.0 and MSIE 4.0 have extensive support for centralized policy admin-
istration in their Web browsers. While a sophisticated user may not necessarily be
prevented from reinstalling their Web browser (or operating system) to override
the centralized security policies, normal users can at least benefit from their site’s
centralized administration.
5.2 Architectures

This section considers four architectures for securing mobile programs.

Processes Each applet can run in its own process, reusing the protection boundaries of the underlying operating system.

Capabilities Capabilities are unforgeable pointers that could be safely given to user code. Java provides an efficient
environment for implementing capabilities.

Name-space management Different applets can be made to see different classes with the same names. By restricting an applet’s name space, we can
limit its activities.

Extended stack inspection The original Java method of searching the stack for unprivileged code can be extended to include code sources on the call stack.
5.2.1 First Approach: Processes
Several research systems run each applet in a separate instance of the Java virtual machine, although multiple such VMs will
run in the same native operating system process. Alta allows memory to be shared
across applets, whereas the J-Kernel requires applets to communicate with remote
procedure call mechanisms. GVM [BTS+ 98] runs applets in effectively separate
processes, although it allows limited memory sharing. JRes [Cv98] uses bytecode
rewriting to allow memory and CPU usage to be “charged” to the applet responsible for that usage, even when
multiple applets are running in the same virtual machine.
Taking process separation a step further, the Digitivity Cage [Dig97], the Java
Playground [MRR98], and Kimera [SGB+ 98] dedicate a separate physical machine
for running untrusted Java applets and use remote procedure calls to display graphics
on the user’s machine. When an applet misbehaves, its process can be
immediately terminated and its entire heap can be deallocated at once. Because
no two virtual machines have direct references to each other’s data, there is no
way for one applet to tamper with another directly; each cross-process request
gives the system a chance to check whether the
sender of the message is sufficiently trusted to grant access to the desired resource.
The two largest drawbacks to process models are the overhead of marshaling
and copying data across processes, as well as the general overhead associated
with running a distributed object system, even though all object references may
be local. The costs of various IPC mechanisms have been measured using the Kaffe virtual machine [Tra98]. A normal
method call is far cheaper than a cross-process call, and no measurements were presented
for running real applications, but if these interprocess calls are in the critical
path, application performance will suffer.

5.2.2 Second Approach: Capabilities

Capability systems have a long history, and a number of research groups have
implemented such systems. This section discusses general issues for capabilities
in Java. Capabilities have
often been seen as a good way to structure a secure operating system [WCC+ 74].
A capability is an unforgeable reference to a protected resource. A program cannot use a capability unless it is
first explicitly given that capability, either as part of its initialization or as the result
of invoking the service associated with another capability. Once a capability
has been given to a program, the program may then use the capability as often
as it wishes and may pass the capability, or a subset of the capability, to other pro-
grams, although some systems take steps to control the propagation of capabilities.
This leads to a basic property of capabilities: any program that has a capability is
assumed to be entitled to use it.
Capabilities in Java
Some systems used tagged memory to differentiate capabilities from other data
types and protect them from tampering. Other systems stored capabilities in the
kernel, providing a limited interface to user programs; Unix file descriptors are
an example of this style. Rather than relying on tagged or protected memory
addresses, Java uses type safety to prevent the forgery of object references. It
likewise blocks access to methods or member variables that are labeled private,
allowing an object to hide data from programs that have references to it.
The current Java class libraries already use a capability-style interface to repre-
sent open files and network connections (InputStream and OutputStream). How-
ever, static method calls and class constructors are used to acquire these capabilities.
In a purely capability-based version of Java, the initial capabilities would be passed as arguments to the program
(or perhaps stored inside the Applet object). For example, if an applet wished to
open a file, it would be forced to call Applet.getFileSystem(), which would con-
sult the system’s security policy and then either return a capability to access the file
system or nothing at all. Calls to the FileInputStream() constructor and similar
classes would need to be completely forbidden, since a pure capability model does
not provide any machinery for the constructor to determine who called it.
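To make the contrast concrete, the following sketch shows how a pure capability bootstrap might look. All of the names here (CapletRuntime, PolicyOracle, the single-method FileSystem interface) are hypothetical illustrations, not the actual Java or browser APIs.

```java
// Hypothetical sketch of capability-style bootstrapping. The class and
// method names (PolicyOracle, CapletRuntime, getFileSystem) are
// illustrative stand-ins, not real Java or browser APIs.
interface FileSystem {
    String read(String path);   // stand-in for getInputStream()
}

class RealFileSystem implements FileSystem {
    public String read(String path) { return "<contents of " + path + ">"; }
}

class PolicyOracle {
    // Stand-in for the system security policy.
    static boolean allowsFileAccess(String principal) {
        return principal.equals("trusted-signer");
    }
}

class CapletRuntime {
    // The runtime consults policy once, then either hands the program a
    // file system capability or nothing at all.
    static FileSystem getFileSystem(String principal) {
        if (PolicyOracle.allowsFileAccess(principal))
            return new RealFileSystem();
        return null;
    }
}

public class CapabilityBootstrap {
    public static void main(String[] args) {
        FileSystem fs = CapletRuntime.getFileSystem("trusted-signer");
        System.out.println(fs != null);                          // granted
        System.out.println(CapletRuntime.getFileSystem("unknown") == null);
    }
}
```

Because constructors for the underlying file system class are never exposed, code that was handed nothing simply has no way to name the file system at all.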
Interposition
Since a capability is just a reference to a Java object, the object can implement its
own security policy by checking arguments before using them to initiate a dan-
gerous operation. A capability’s object can easily contain another capability as a
private member and invoke the internal capability according to its own security
criteria. As long as both objects implement the same Java interface, the capabilities
can be used interchangeably.
For example, imagine we wish to provide access to a subset of the file system —
only files below a given subdirectory. One possible implementation is presented
in figure 5.1. A SubFS represents a capability to access a subtree of the file system.
Hidden inside each SubFS is a FileSystem capability; the SubFS prepends a fixed
string to all pathnames before accessing the hidden FileSystem. Any code that
already possesses a handle to a FileSystem can create a SubFS, which can then be
passed to an untrusted subsystem. Note that a SubFS can also wrap another SubFS,
restricting access to a subtree of a subtree. Interposition can implement richer policies as well: if
the security policy called for all file access to be logged, or if it called for interaction
with the user, all such behavior would be written as a capability and the applet
would use this new file system capability in exactly the same way as it used the
original. Because both capabilities implement the same interface, they can be used
interchangeably.
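The same idea extends to the logging policy mentioned above. The sketch below is illustrative only; its single-method FileSystem interface is a simplified stand-in for the one in figure 5.1. A LogFS records each access and delegates to whatever capability it wraps, and it composes freely with a subtree-restricting wrapper.

```java
// Illustrative sketch of capability interposition. The FileSystem
// interface here is a simplified stand-in for the one in figure 5.1.
import java.util.ArrayList;
import java.util.List;

interface FileSystem {
    String read(String path);
}

class PlainFS implements FileSystem {
    public String read(String path) { return "<" + path + ">"; }
}

// Restricts access to the subtree below a fixed prefix.
class SubFS implements FileSystem {
    private final FileSystem inner;
    private final String prefix;
    SubFS(String prefix, FileSystem inner) { this.prefix = prefix; this.inner = inner; }
    public String read(String path) { return inner.read(prefix + "/" + path); }
}

// Logs every access, then delegates to the wrapped capability.
class LogFS implements FileSystem {
    private final FileSystem inner;
    final List<String> log = new ArrayList<>();
    LogFS(FileSystem inner) { this.inner = inner; }
    public String read(String path) { log.add(path); return inner.read(path); }
}

public class InterpositionDemo {
    public static void main(String[] args) {
        // Wrappers nest: logging over a subtree restriction over the real FS.
        LogFS fs = new LogFS(new SubFS("/sandbox", new PlainFS()));
        System.out.println(fs.read("notes.txt"));   // </sandbox/notes.txt>
        System.out.println(fs.log);                 // [notes.txt]
    }
}
```

Since every wrapper implements the same interface, the applet cannot tell whether it holds the real file system, a restricted view, or a logged one.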
The main drawback to capabilities in Java is compatibility with existing code.
For example, removing the usual FileInputStream constructor from the class libraries
would constitute a major change to the standard Java class interfaces.
// In this example, any code wishing to open a file must first obtain
// an object that implements FileSystem.
interface FileSystem {
    public FileInputStream getInputStream(String path);
}

// This is the primitive class for accessing the file system. Note that
// the constructor is not public -- Java semantics restrict creation
// of these objects to other code in the same Java package.
public class FS implements FileSystem {
    FS() {}

    public FileInputStream getInputStream(String path) {
        // either return a capability for the file or nothing at all
        try { return new FileInputStream(path); }
        catch (IOException e) { return null; }
    }
}

// A SubFS wraps another FileSystem, restricting access to the
// subtree of the file system below its prefix.
public class SubFS implements FileSystem {
    private FileSystem fs;
    private String prefix;

    public SubFS(String prefix, FileSystem fs) {
        this.prefix = prefix;
        this.fs = fs;
    }

    public FileInputStream getInputStream(String path) {
        return fs.getInputStream(prefix + "/" + path);
    }
}

Figure 5.1: A capability-style interface to the file system.
Similar issues apply with other major system services. Additionally, class constructors
for the capabilities themselves must be carefully controlled. In figure 5.1, the FS
constructor is marked “package” scope, blocking the constructor from being called
by arbitrary classes. This provides protection only if an applet cannot declare its
classes to be in the same package as the capability. Current JVMs treat system
packages (those beginning with java, sun, sunw, netscape, and com.ms) specially,
and also enforce separation when the call crosses from applet code to system code.
Guaranteeing this protection in general would require significant changes.
5.2.3 Third Approach: Name-Space Management

This section presents a modification to Java’s dynamic linking mechanism that can
control which classes are visible to an applet. This design was implemented for
both MSIE 3.0 and NS 3.0. The implementation did not modify the JVM itself,
but changed several classes in the Java system libraries. The full system is
implemented in 4500 lines of Java; much of the code manages the graphical user
interface.
We first give an overview of what name-space management is and how it can
be used to enforce a security policy.
Table 5.1: An example name-space configuration for two principals, Alice and Bob.

    Original name      Alice             Bob
    java.net.Socket    security.Socket   java.net.Socket
    java.io.File       —                 security.File
    ...                ...               ...
Design

Name-space management works by controlling how names in a program are resolved into runtime classes. We can either remove
a class entirely from the name-space (thus causing attempts to use the class to fail),
or we can cause its name to refer to a different class that is compatible with the
original. Remote code can only reach system resources through classes that name them.
For example, a File class may be used to access the file system and a Socket class
may be used to access networking operations. If the File class, and any other class
that may refer to the file system, is not visible when the remote code is linked to
local classes, then the file system will not be available to be attacked. Instead, an
attempt to reference the file system would be equivalent to a reference to a class
that does not exist. Because each applet’s
class loader can potentially have a different name-space configuration, we can arrange
for the new classes to see the original sensitive classes, and we can block applet
classes from seeing the same sensitive class. For example, the File class could
be replaced with one that prepended the name of a subdirectory (see table 5.1),
much like the capability-based example in figure 5.1. In both cases, only a sub-
tree of the file system is visible to the untrusted applet. In order for the program
to continue working correctly, each substituted class must be compatible with the
original class it replaces (e.g., the replacement must be a subclass of the original
and must be careful to never return an instance of the original class).
Name-space management can also simulate a capability system. If different programs see different versions of the system classes, the
effect is identical to those programs being given different capabilities to system re-
sources, with the exception that while a program may pass a capability to another
program, it cannot pass its name space.
An interesting property of this design is that all security decisions are made
when classes are resolved and linked, rather than while the program runs.
Implementation in Java

In Java, a class’s ClassLoader controls its
dynamic binding to other classes in the Java runtime. Whenever a new class is referenced,
the ClassLoader of the referencer provides the implementation of the new
class (see figure 5.2). So, ClassLoaders will be used to implement the name-space
management policy.
[Figure 5.2: Class loading in Java. An HTML page names an applet
(codebase = …, code = Foo.class). Foo.java contains code such as:
    FileSystem fs = new FileSystem();
    File f = fs.openFile("foo");
    InputStream s = new InputStream(f);
Each class name the applet references, such as "FileSystem", is resolved by
the applet's class loader, which either supplies its own FileSystem class or
defers to the system.]
package security;

public class File extends java.io.File {
    public File(String path) {
        // Prepend a subdirectory so that only a subtree of the file
        // system is reachable. The "/sandbox" prefix is illustrative.
        super("/sandbox" + path);
    }
}

Figure 5.4: A replacement for java.io.File that restricts access to a subtree of the file system.
An applet’s AppletClassLoader will look for other classes from the same network source as the applet. If two ap-
plets from separate network locations reference classes with the same name that
are not system classes, each will get a different class because each applet’s Applet-
ClassLoader looks to separate locations for class implementations.
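The renaming step itself can be sketched as a small lookup table. This is a hedged illustration of the idea, not the actual implementation: a real class loader must also load bytecode, while this sketch models only the per-principal name resolution, and the class name NameSpacePolicy is invented for the example.

```java
// Illustrative sketch of per-principal name resolution, as in table 5.1.
// Real class loading (fetching and defining bytecode) is omitted.
import java.util.HashMap;
import java.util.Map;

class NameSpacePolicy {
    private final Map<String, String> substitutions = new HashMap<>();

    void hide(String className)          { substitutions.put(className, null); }
    void replace(String from, String to) { substitutions.put(from, to); }

    // Returns the class name to load, or throws if the class is hidden
    // (the link-time error described in the text).
    String resolve(String requested) throws ClassNotFoundException {
        if (!substitutions.containsKey(requested)) return requested; // full access
        String target = substitutions.get(requested);
        if (target == null) throw new ClassNotFoundException(requested);
        return target;                                               // substituted
    }
}

public class NameSpaceDemo {
    public static void main(String[] args) throws Exception {
        NameSpacePolicy alice = new NameSpacePolicy();
        alice.replace("java.net.Socket", "security.Socket");
        alice.hide("java.io.File");
        System.out.println(alice.resolve("java.net.Socket"));  // security.Socket
        System.out.println(alice.resolve("java.lang.String")); // unchanged
    }
}
```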
Our implementation works similarly, except that we replace each applet’s AppletClassLoader
with a PrincipalClassLoader. When the applet’s code makes a class
reference, a PrincipalClassLoader can do one of three things:

1. report that the class does not exist, if the calling principal cannot
see the class at all (exactly the same behavior that the applet would see if the
class had been removed from the class library – this is essentially a link-time
error),

2. return the class in question if the calling principal has full access to it, or

3. return a substitute class in place of the one requested.
In the third case we hide the expected class by binding its name to a subclass
of the original class. Figure 5.4 shows a class, security.File, that can replace
java.io.File. When an applet calls new java.io.File("foo"), it will actually
get an instance of security.File, which is compatible in every way with the original
class, except it restricts file system access to a subdirectory. Note that, to achieve
complete protection, other classes such as FileInputStream, RandomAccessFile,
and so forth need to be replaced as well, as they also allow access to the file sys-
tem. Likewise, there should be no way for an applet to ask for the superclass of
java.io.File, thus getting a handle to the real File class instead of its renamed
version. Preventing this would require some modifications to Java’s class reflection, since
any applet can call Class.getSuperclass().
New Java features such as reflection (a feature introduced in JDK 1.1 that al-
lows dynamic inquiry and invocation of a class’s methods) could also defeat name-
space management entirely unless the reflection system was specifically designed
to respect the renaming. Also, common super-classes that are shared among sep-
arate subsystems would be difficult to rename without introducing incompatibili-
ties (e.g., both file system and networking use the same InputStream and OutputStream
super-classes).
The Future of Name Space Management The ability to transparently replace a
class with another class, as name-space management does, is useful only if the
applet code continues to type-check properly after the replacement. Though there
is no problem in this regard in NS 3.0, MSIE 3.0, and Sun’s JDK 1.1, the semantics
of dynamic linking in Java have changed with JDK 1.2 [LB98]. Though the change
in JDK linking semantics is sound and well-motivated, it appears to have the side-effect
of breaking class substitutions that worked in previous versions of the system. For example, if class C has a method
M whose return type is C, and name-space management replaces C by D (a subclass
of C) in some applet, calls to M in the applet will fail to type-check — the byte-code
verifier will report them as type errors. The issue of compatibility of rewritten
classes is discussed in detail (in the guise of class version upgrades) in the Java
language specification.

5.2.4 Fourth Approach: Extended Stack Inspection

All the major Java vendors have adopted some variation of stack inspection,
which was described in section 3.2.3. Variations on this approach are taken by
NS 4.0 [Net97] (see footnote 2), MSIE 4.0 [Mic97b], and Sun JDK 1.2 [GS98] (see footnote 3).
(Footnote 2: This approach is sometimes incorrectly referred to as “capability-based security” in vendor literature.)
(Footnote 3: At Netscape, I participated in the design and implementation of NS 4.0’s Java security architecture along with Jim Roskind and Raman Tenneti in the summer of 1996. The Microsoft and Sun architectures were designed afterward and have different implementations while sharing the same basic structure.)
Stack inspection has very little prior art. In some ways, it resembles dynamic
scope of variables (where free variables are resolved from the caller’s environment
rather than from the environment in which the function is defined), as used in
early versions of LISP [MAE+ 62]. In other ways, it resembles the notion of effective
user ID in Unix, where the current ID is either inherited from the calling process or
asserted through an explicit setuid bit on the new program.
Java stack inspection’s roots lie in the original Java system’s use of “Class-
Loader depth,” discussed in more detail in section 3.2.3. This section will describe
how it has been extended to support flexible security by each of the major Java
vendors.
To explain how extensible stack inspection works, we will first consider a simpli-
fied model of stack inspection which resembles the stack inspection system used
internally in Netscape Navigator 3.0 [Ros96a].
The problem faced by Netscape was what Norman Hardy calls the confused
deputy problem [Har88]. Hardy describes a situation where the system Fortran com-
piler wished to log various statistics about its usage. The compiler was granted
write-privileges to a directory which contained the statistics file, but also contained
a file used to log system billing information. While no user had sufficient privi-
leges to directly access the billing file, the compiler did, by accident. Asking the
compiler to output debugging information to the billing file was sufficient to de-
stroy the system’s billing information. The Fortran compiler, granted a very broad
privilege, could be tricked into abusing that privilege on behalf of any user. This
problem was solved by granting the compiler a specific capability to access its
statistics file, rather than editing access control information in the filesystem.
Like Hardy’s compiler, Netscape’s trusted system code held privileges that were significantly broader than were ever granted to applets. The
simple stack inspection model addresses these concerns by allowing system code
to optionally “enable its privileges”. Before any file is opened or other restricted
operation is performed, the system will check to make sure these privileges have
been, in fact, enabled. If not, the system reverts to the restrictive applet security
policy.
Fundamentally, stack inspection relies on being able to reconstruct the call stack
and identify the source of every method or procedure on that stack. This generally
requires support from the language’s runtime system. When a method returns,
the privilege flag disappears with the rest of the stack frame. When privileges are
checked, a search begins at the most recent stack frame and continues outward. If
the search discovers the flag, privileges are granted. However, there might be a
case where system code enabled a privilege then called back to untrusted applet
code. This might occur in the event-dispatching loop of user-interface code. How-
ever, we do not want this event-handler to be able to take advantage of the privi-
lege flag preceding it on the stack. Netscape calls this a luring attack and addresses
it by considering the privileges of each stack frame as it performs the search. If the
search discovers the stack frame of untrusted code before it reaches a flag set by
trusted code, the search is terminated early and privileges are denied.
In Hardy’s example, the Fortran compiler would enable its privileges before
writing to its statistics file, but would not enable privileges when doing its normal
output.
Stack inspection satisfies the principle of least privilege (see Section 5.3.4), al-
lowing the Java system to operate with less than its full privileges active at all
times and thus reducing exposure to attacks. This proved extremely useful in
practice: every region of code that enables privileges can be located with a tool such
as grep and then subjected to code auditing. With limited time to audit a large
code base, this technique allows an audit to focus its efforts on code that will affect
the security of the system.
The stack inspection algorithm used in current Java systems can be thought of as a
generalization of the simple stack inspection model described above. Rather than
having only untrusted applets and fully trusted system code, the system supports
code from an unbounded number of principals, each of which may be granted
different levels of trust. Likewise, rather than having either full privileges or re-
stricted applet privileges, a number of more specific privileges called targets are
defined by the system.
Four fundamental primitives are necessary to use extended stack inspection:

enablePrivilege()
disablePrivilege()
revertPrivilege()
checkPrivilege()

(Footnote 4: Each Java vendor has different syntax for these primitives. This dissertation follows the Netscape syntax.)
As described in the previous section, privileges to use a target must be en-
abled before the target is used. This is done by calling enablePrivilege(T) on
the desired target T, placing a flag on the caller’s stack frame where it can be
found later by a stack search. After calling enablePrivilege(), the program may
continue normal execution. When execution reaches the code that manages a
security-relevant resource, such as the file system or network, this code will wish
to make a security check to validate that its caller and the rest of the callers on
the stack have sufficient permissions to use the resource. This is done by call-
ing checkPrivilege(T), which searches on the stack for a method that called
enablePrivilege(T), using the algorithm in figure 5.5.
If no such frame is found, checkPrivilege(T) will throw an exception
and deny access to the target. A call to disablePrivilege(T) places a flag on the
caller’s stack frame that causes any later search for T to deny access. The final
operation, revertPrivilege(T), is used to
clear flags related to T from the current stack frame.
The search, as figure 5.5 shows, examines the frames on the caller’s stack in sequence, from newest to oldest.
The search terminates, allowing access, upon finding a stack frame that has appro-
priately enabled its privileges. The search also terminates, forbidding access (and
throwing an exception), upon finding a stack frame that is either forbidden by the
local policy from accessing the target or that has explicitly disabled its privileges.
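The search can be simulated in a few lines. This sketch models stack frames as plain records rather than walking the JVM's real stack, and it adopts the NS 4.0 end-of-stack default (deny); it is an illustration of the algorithm, not any vendor's code.

```java
// Runnable simulation of the stack-inspection search. Stack frames are
// modeled as records; the real systems walk the JVM's own call stack.
import java.util.List;

class Frame {
    final boolean trusted;   // does local policy allow this code the target?
    final boolean enabled;   // did this frame call enablePrivilege()?
    final boolean disabled;  // did this frame call disablePrivilege()?
    Frame(boolean trusted, boolean enabled, boolean disabled) {
        this.trusted = trusted; this.enabled = enabled; this.disabled = disabled;
    }
}

public class StackInspection {
    // Walks frames newest-first; returns true if access is granted.
    // The end-of-stack default here is "deny", the NS 4.0 behavior.
    static boolean checkPrivilege(List<Frame> newestFirst) {
        for (Frame f : newestFirst) {
            if (!f.trusted)  return false;  // untrusted code on the stack
            if (f.enabled)   return true;   // found the enabled flag
            if (f.disabled)  return false;  // explicitly disabled
        }
        return false;                       // uneventful end of stack
    }

    public static void main(String[] args) {
        Frame system = new Frame(true, true, false);   // enabled its privilege
        Frame helper = new Frame(true, false, false);  // trusted, no flags
        Frame applet = new Frame(false, false, false); // untrusted code
        // system enabled, then called trusted helper: allowed
        System.out.println(checkPrivilege(List.of(helper, system))); // true
        // untrusted applet frame is newer than the flag: luring attack denied
        System.out.println(checkPrivilege(List.of(applet, system))); // false
    }
}
```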
We note that each vendor takes different actions when the search reaches the
end of the stack uneventfully (i.e., if enablePrivilege() was never called and all
stack frames are trusted classes): NS 4.0 denies permission, while both JDK 1.2
and MSIE 4.0 allow it. The NS 4.0 approach follows the principle of least privilege
(see Section 5.3.4), since it requires that privileges be explicitly enabled before they
can be used — this can help protect an applet against a hostile caller trying to
take advantage of the applet’s privileges. The MSIE 4.0 / JDK 1.2 approach, on
the other hand, may be easier for developers, since no calls to enablePrivilege()
are required in the common case where trusted applets are using trusted system
libraries. It also allows local, trusted Java applications to run on an older JVM
without support for stack inspection: they run as a trusted principal so all of their
accesses are allowed by default. Because NS 4.0 stack inspection is currently used
only in Netscape’s Web browser, compatibility with existing Java applications is not
considered to be an issue.
A trusted subsystem enables its privileges before calling a system method to access the protected resource. Untrusted code
would be unable to access the protected resource directly since it would be unable
to create the necessary enabled privilege. Figure 5.6 demonstrates how code for a
trusted file subsystem might be written.
Several things are notable about this example. The classes in figure 5.6 do not
need to be signed by the system principal (the all-powerful principal whose privi-
leges are hard-wired into the JVM). They could be signed by any trusted principal.
This could be used by a third party to provide new functionality to any untrusted
applet.
With the trusted subsystem, an untrusted applet now has two ways to open
checkPrivilege (target) {
    // loop, newest to oldest stack frame
    foreach stackFrame {
        if (local policy forbids access to target by class executing in stackFrame)
            throw ForbiddenException;
        if (stackFrame has explicitly enabled privileges for target)
            return;     // access allowed
        if (stackFrame has explicitly disabled privileges for target)
            throw ForbiddenException;
    }
    // reached the end of the stack without an explicit decision;
    // NS 4.0 denies access here, while MSIE 4.0 and JDK 1.2 allow it
}

Figure 5.5: The extended stack inspection algorithm.
a file: call through TrustedService, which will restrict it to a subset of the file
system, or go through the normal system interfaces, which may require the user’s approval to
open a file. Note that, even should the applet try to create an instance of FS
and then call getInputStream() directly, the low-level file system call (inside
java.io.FileInputStream) will perform its own stack inspection, find the unprivileged applet code on the stack, and deny access.
Details
The vendor implementations have many additional features beyond those described
above. We will attempt to describe a few of these enhancements and features here.
Threads An interesting issue is how to manage privileges that must cross thread
boundaries. Since separate threads have separate stacks, stack inspection might
seem to have a problem managing this case. There are two cases to address: when
one thread starts a child thread, and when two existing threads are communicat-
ing. In the case of child threads, MSIE 4.0 and JDK 1.2 have the child thread inherit
the privileges of its parent, effectively copying the enabled privileges from the par-
ent thread’s stack to the child thread before it begins execution. NS 4.0, in contrast,
starts the child thread with no privileges enabled, assuming it can always enable
any privilege it needs.
“Smart Targets” Sometimes a security decision depends not only on which re-
source (the file system, network, etc.) is being accessed, but on what specific part
of the resource is involved, for example, on exactly which file is being accessed.
Since there are too many files to create targets for each one, each vendor has a
// this class shows how to implement a trusted subsystem with
// stack inspection
public class FS {
    private boolean usePrivs;

    public FS() {
        try {
            PrivilegeManager.checkPrivilege("UniversalFileRead");
            usePrivs = true;
        } catch (ForbiddenTargetException e) {
            usePrivs = false;
        }
    }

    public java.io.FileInputStream getInputStream(String path)
            throws java.io.IOException {
        if (usePrivs)
            PrivilegeManager.enablePrivilege("UniversalFileRead");
        return new java.io.FileInputStream(path);
    }
}

Figure 5.6: A trusted subsystem implemented with stack inspection.
form of “smart targets” that have internal parameters and can be queried dynami-
cally for access decisions. JDK 1.2 implements these by creating subclasses of their
Permission class that are compared dynamically against
granted or denied permissions. NS 4.0 has similar support for subclasses of Target
that validate arguments.
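The flavor of a smart target can be sketched as follows; the FileTarget class and its covers() method are hypothetical names, standing in for the vendors' Permission and Target subclasses.

```java
// Hypothetical sketch of a "smart target": rather than one target per
// file, a single FileTarget carries a parameter and decides dynamically
// whether a requested pathname falls within it.
class FileTarget {
    private final String subtree;
    FileTarget(String subtree) { this.subtree = subtree; }

    // Dynamic access decision: does this target cover the requested file?
    boolean covers(String path) { return path.startsWith(subtree + "/"); }
}

public class SmartTargetDemo {
    public static void main(String[] args) {
        FileTarget homework = new FileTarget("/home/alice/homework");
        System.out.println(homework.covers("/home/alice/homework/hw1.txt")); // true
        System.out.println(homework.covers("/etc/passwd"));                  // false
    }
}
```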
Who can define targets? Each system offers a set of predefined targets that rep-
resent resources the JVM implementation wants to protect; they also offer ways to
define new targets. MSIE 4.0 allows only fully trusted code to define new targets,
while NS 4.0 and JDK 1.2 allow anyone to define new targets, while taking steps
to guarantee new targets are not confused with existing ones. In NS 4.0, targets
are named with a (principal, string) pair, and the system requires that the principal
field match the principal who signed the code that created the target. The prin-
cipal field effectively defines a name space for targets. Built-in targets belong to
the predefined System principal. In JDK 1.2, targets are classes in the Java library,
so the Java class name space itself keeps targets distinct.
Allowing anyone to define new targets, as NS 4.0 does, allows third-party li-
brary developers to define their own protected resources and use the system’s
stack inspection mechanisms to protect them. The drawback is that users may
be asked questions regarding targets that the Netscape designers did not know
about. If the third-party library designer wishes to answer these questions inter-
nally without querying the user or wishes to display a security dialog, that is up to
them. Regardless, NS 4.0 does not allow third-party libraries to create a target that
collides with a target in another principal’s name space.
JDK 1.2 extensions The stack inspection semantics of JDK 1.2 have changed sev-
eral times during its beta release cycles. Here are JDK 1.2’s important differences
from the other vendors’ systems. In JDK 1.2, one target can imply another: a
target representing the entire filesystem would imply a target representing a single
file. The implies() method on targets defines a directed graph of all targets.
Sun has not clearly defined whether the implies() relationships are transi-
tive, and if they are, whether this information may be used to optimize security
queries. One useful optimization might be to compute the transitive closure of the
implies() graph, which would then allow for constant-time queries. However, if
a target is allowed to change its mind about what other targets it implies, the se-
curity system might be forced to reevaluate the implies() relationships on every
security query.
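Assuming the implies() relationships really are transitive and static, the proposed optimization can be sketched as a reachability computation; targets here are plain strings, and the class and method names are illustrative.

```java
// Sketch of the optimization discussed above: precompute the transitive
// closure of the implies() graph so later queries are simple set lookups.
// Targets are plain strings here; real targets are objects with implies().
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ImpliesClosure {
    // edges: target -> set of targets it directly implies
    static Map<String, Set<String>> closure(Map<String, Set<String>> edges) {
        Map<String, Set<String>> closed = new HashMap<>();
        for (String t : edges.keySet()) {
            Set<String> reach = new HashSet<>();
            Deque<String> work = new ArrayDeque<>(edges.get(t));
            while (!work.isEmpty()) {
                String u = work.pop();
                if (reach.add(u))                       // newly reached target
                    work.addAll(edges.getOrDefault(u, Set.of()));
            }
            closed.put(t, reach);
        }
        return closed;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> edges = new HashMap<>();
        edges.put("AllFiles", Set.of("TmpDir"));
        edges.put("TmpDir", Set.of("TmpFile"));
        edges.put("TmpFile", Set.of());
        // After the closure, "does AllFiles imply TmpFile?" is a set lookup.
        System.out.println(closure(edges).get("AllFiles").contains("TmpFile")); // true
    }
}
```

If a target may later change its mind about what it implies, this precomputed closure becomes stale, which is exactly the reevaluation problem the text raises.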
Finally, between JDK 1.2beta3 and beta4, the enablePrivilege() call was
replaced with a doPrivileged() method whose argument must implement a
PrivilegedAction interface. So, code that would previously have been written
as:
    enablePrivilege();
    do_something();          // this executes with privileges
    revertPrivilege();
    do_something_else();     // this executes normally

is now written as:

    doPrivileged(new PrivilegedAction() {
        public Object run() {
            do_something();  // this executes with privileges
            return null;
        }
    });
    do_something_else();     // this executes normally
These two code examples are semantically identical. The new JDK 1.2 syntax, us-
ing inner classes, probably simplified the job of the compiler writer, since the stack
searching procedure need only search for the stack frame corresponding to the
doPrivileged() operation, rather than maintain security flags on all stack frames.
The downside of the JDK 1.2 semantics is that they encourage the use of inner
classes, which can be quite dangerous in security-relevant code. The Java virtual
machine has no fundamental support for inner classes. In particular, the JVM has
no notion of one class being nested within another and thereby gaining access to
the outer class’s private members and methods. Instead, the Java compiler will create
package-visible accessor methods as necessary, each of which could be a serious security hole.
Implementation size. NS 4.0’s stack inspection system required about 5500 lines
of Java source code (including javadoc in-line documentation) and about 600 lines
of C source code in the Java virtual machine, including small changes to the
VM’s internals.
JDK 1.2’s stack inspection system required about 5200 lines of Java source code
[GS98] and an unspecified amount of C code. We do not have any size information
for MSIE 4.0.
5.3 Analysis
Now that we have presented four systems, we need a set of criteria to eval-
uate them. The criteria used here are derived from the work of Saltzer and
Schroeder [SS75] and include several non-security issues such as compatibility
with existing code as well as performance. Table 5.2 summarizes the security properties of the four architectures.

5.3.1 Economy of Mechanism

Designs that are smaller and simpler are easier to inspect and trust.
Capabilities are fundamentally the simplest mechanism. They rely only on the ex-
isting Java type system and require no changes or extensions to the JVM. However,
a more fully capability-based Java system would require redesigning the Java class
libraries to present a capability-style interface — a significant departure from the
current class library, although capability-style classes (“factories”) are used inter-
nally in some Java classes.
Process models are relatively simple because they do not depend on the VM
for any security. Instead, they either use external operating system mechanisms
or simulate the OS interaction between a user and kernel. These mechanisms are
Table 5.2: Applying the evaluation criteria to the four architectures.

Economy of mechanism
  Processes: straightforward processes, but complex RPCs and reimplementation of security checks
  Capabilities: no special mechanisms required
  Name-space management: simple mechanism, but requires complex mappings to avoid type inconsistencies
  Stack inspection: moderately complex mechanism

Fail-safe defaults
  Similar defaults on all systems (*)

Complete mediation
  Processes: very strong
  Capabilities: problematic capability leaks
  Name-space management: very strong
  Stack inspection: strong, and more informative than others

Least privilege
  Processes: strong
  Capabilities: very strong
  Name-space management: hard to limit privileges
  Stack inspection: strong (*)

Code auditability
  Processes: trust the process abstraction, not the Java VM
  Capabilities, name-space management: must potentially audit the whole VM
  Stack inspection: simplifies VM auditing

Least common mechanism
  Processes: strong
  Others: lots of shared state

Accountability
  Processes: easy to track
  Capabilities: problematic
  Name-space management: reasonable
  Stack inspection: easy to track

Resource limits
  Processes: easy to enforce, but more copying necessary
  Others: no known solution

Psychological acceptability
  Too early to really know

Performance
  Processes: cross-process calls 10x-500x slower than local calls
  Capabilities, name-space management: fast and cheap
  Stack inspection: uses 1% to 9% of CPU

Compatibility (API)
  Processes: process separation may break cooperating or privileged applets
  Capabilities: complete incompatibility
  Name-space management: excellent
  Stack inspection: good (*)

Compatibility (VM)
  Processes: some use a stock VM, some make major changes
  Capabilities: excellent, no VM changes
  Name-space management: small VM hacks to manage name spaces
  Stack inspection: larger VM hacks to support stack inspection
fairly well understood but are not, in and of themselves, simple. Process models
also duplicate effort: because the JVM is not assumed to be
trustworthy, the process security infrastructure must repeat the runtime checks
normally done in Java. Some rules, such as the restrictions on Java network connections,
must be reimplemented outside the VM.
Name-space management depends on a table of name-space configurations. The mechanisms that remove and replace classes, and hence provide
for interposition, are minimal. However, they affect critical parts of the JVM
and a bug in this code could open the system to attack.
Stack inspection requires either changes to the JVM or a front-end to rewrite Java bytecode (see chapter 8). As before, changes to
the JVM could destabilize the whole system. Each method to be protected must
explicitly consult the security system to see if it has been invoked by an authorized
party. This check adds exactly one line of code, so its complexity is analogous
to the configuration table in name-space management. And, as with name-space
management, any security-relevant class that has not been modified to consult the
security system remains unprotected.

5.3.2 Fail-safe Defaults

Name space management and stack inspection have similar fail-safe behavior. If
a potentially dangerous system resource has been properly modified to work with
the system, it will deny access to unauthorized callers. With name-space management,
a resource that has not been mapped into an applet’s name space cannot
even be named. With stack inspection, privileges must be explicitly enabled before a dangerous
resource can be used. When no enabled privilege is found on the stack,
access to the resource will be denied by default. (MSIE 4.0 and JDK 1.2 sacrifice
some of this behavior for fully trusted code, as discussed earlier.) Process models
also fail safely: because all resource requests must pass through well-defined
“choke points” (e.g., system-call interfaces), it becomes easy
to prevent a process from acting beyond its privileges.
5.3.3 Complete Mediation

All of these systems provide mechanisms to interpose security checks between identified targets and anyone who tries to use them.
However, mediation is not complete if privileges are not confined to their right-
ful holders [Lam73]. It should not generally be possible for one program to del-
egate a privilege to another program; that right should also be mediated by the
security system. Capabilities fare poorly here: any
two programs that can communicate object references can share their capabilities
without system mediation. This means that any code granted a capability must
be trusted not to leak it. To restore confinement, a system must either limit
how capabilities can be used, or must place restrictions on how capabilities can
be shared. Some systems, such as ICAP [Gon89], make the objects referenced by
capabilities aware of “who” called them; an object can know who is supposed to
invoke it and refuse to work for anyone else. The IBM System/38 [BTR80] as-
sociates optional access control lists with its capabilities, accomplishing the same
purpose. Other systems use hardware mechanisms to block the sharing of capa-
bilities [KH84]. For Java, any such technique would be problematic. To make an
object aware of who is calling it, a certain level of inspection into the call stack must
be available. To make an object reference unshareable, you must either remove its
class from the name space of potential attackers, or block all communication chan-
nels that could be used for an authorized program to leak it (either blocking all
the FileInputStream class. Unfortunately, it could still likely see InputStream (the
superclass of FileInputStream), which has all the necessary methods to use the
object. If InputStream were also hidden, then networking code would break, as it
the Java class libraries.
Stack inspection has excellent confinement. Because the stack annotations are
not directly accessible by a program, they can neither be passed to nor stolen by
another program. The only way to propagate stack annotations is through method
calls, and every subsequent method must also be granted sufficient privilege to
use the stack annotation. The system’s access control matrix can thus be thought
of as mediating delegation rights for privileges. Because the access matrix is con-
sulted both at creation and at use of privileges, privileges are limited to code that
is authorized to use them.
for performance. These systems only use stack inspection to verify security for
otherwise expensive operations such as opening a disk file or network connection,
where much higher latencies are normal. Once the connection is open, the more
common read and write operations do not invoke the checkPrivilege() opera-
tion. Instead, the InputStream and OutputStream objects are treated as capabilities.
While a specific input or output stream could potentially be leaked across applets,
the general ability to open a file or network connection would still be contained.
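The pattern just described can be sketched as follows. All names here (the checkPrivilege() stand-in, the "UniversalFileRead" target, the ENABLED set) are hypothetical; a real browser would walk the call stack at this point rather than consult a static set.

```java
// Sketch of the capability-after-check pattern: the expensive open()
// operation is checked once, and the returned stream then acts as a
// capability with no per-read checks. The privilege machinery below is
// a toy stand-in for real stack inspection.
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Set;

public class GuardedOpen {
    // Toy policy: the privileges the current principal has enabled.
    static final Set<String> ENABLED = Set.of("UniversalFileRead");

    static void checkPrivilege(String target) {
        if (!ENABLED.contains(target))          // fail-safe: deny by default
            throw new SecurityException("access to " + target + " denied");
    }

    static InputStream open(String name) {
        checkPrivilege("UniversalFileRead");    // the one privilege check
        return new ByteArrayInputStream(name.getBytes());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = open("demo");          // checked here...
        while (in.read() != -1) { }             // ...but not on each read:
    }                                           // the stream is the capability
}
```

Even if such a stream leaked to another applet, only that one open resource would be exposed; the ability to open new files would remain behind the check.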
where privilege decisions can be made. Unlike stack inspection, there is never any
ambiguity in a process model over where a request came from, simplifying the
application of access controls.
A concern for any of these systems is privileged code that maliciously acts as a
“proxy” for unprivileged code. Such proxying is a general problem in any system
where communication is allowed between programs of different privilege levels. Neither
stack inspection nor name-space management has special provisions against
proxy attacks. However, all but the capability systems at least guarantee that the
privileged calls can be traced to the program that was originally granted the privi-
leges, and a simple extension to the capabilities system could give similar guaran-
tees.
Every program should operate with the minimum set of privileges necessary to
do its job.
The principle of least privilege applies in remarkably different ways to each system
we consider.
linked. If those privileges are too strong, there is no way to revoke them later —
once a class name is resolved into an implementation, there is no way to unlink
it. However, because Java’s dynamic linker is lazy (linking a class when it is first
used), some flexibility is available before linking has occurred.
wishes to discard a capability, it only needs to discard its reference to the capability.
Likewise, if a method only needs a subset of the program’s capabilities, the appro-
priate subset of capabilities may be passed as arguments to the method, and it will
have no way of seeing any others. If a capability needs to be passed through the
system and then back to the program through a call-back (a common paradigm in
GUI applications), the capability can be wrapped in another Java object with non-
public access methods. Java’s access modifiers provide the necessary semantics to
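The wrapping idiom described above might look like the following sketch (the class name is invented): the capability sits in a private field, and only code in the same package can retrieve it through the package-private accessor, so handing the wrapper to a callback does not expose the capability itself.

```java
// Hypothetical sketch of wrapping a capability for a call-back. The raw
// stream is reachable only through the package-private unwrap(), so code
// outside this package that receives the wrapper cannot extract it.
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class CallbackToken {
    private final InputStream capability;       // the guarded reference

    public CallbackToken(InputStream s) { capability = s; }

    // Package-private: invisible outside this package under Java's access rules.
    InputStream unwrap() { return capability; }

    public static void main(String[] args) throws Exception {
        CallbackToken t = new CallbackToken(new ByteArrayInputStream(new byte[] {7}));
        // A GUI callback would receive t; only trusted same-package code unwraps it.
        System.out.println(t.unwrap().read());
    }
}
```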
privilege is equivalent to taking responsibility for the subsequent use of that priv-
ilege. If a privilege is not enabled, the corresponding target cannot be used (in
NS 4.0, whereas MSIE 4.0 and JDK 1.2 relax this requirement — see Section 5.2.4).
Likewise, the lifetime and visibility of an enabled privilege are limited by the fact
unnecessary privileges.
way for a program to discard its privileges if they are not needed. The alternative
would be to provide extensions to the Java system that allow it to manipulate the
state of its controlling process. In contrast, one of the benefits of a process system,
especially one that runs the untrusted code on a separate machine, is that most
dangerous privileges are simply unavailable. There is no way for an applet to read
a user’s personal files if those files are not accessible from the remote machine.
However, if a process-based system intends to support access to the file system for
some applets (i.e., applets that have been granted privileges by a user), then more
privileges must be available. This lessens any benefit that might be gained from
running applets on a separate machine.
Large programs have bugs. The security architecture can help focus the efforts
of auditors to find bugs that impact security.
help a security audit rule out code that has no effect by virtue of its running with
reduced privileges, the audit can be focused on more dangerous code.
Name space management effectively defines a list of classes that must be in-
spected for correctness. With both capabilities and stack inspection, simple text
searching tools such as grep can locate where privileges are acquired, and the
transitive closure of code that is called after a privilege is enabled. If code is not in
this control flow, it will not be able to exercise the privilege.
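The code shape an auditor greps for might look like the following sketch. The enable/revert methods are hypothetical no-op stand-ins, modeled loosely on the privilege calls discussed earlier.

```java
// Sketch of the auditable region: everything between enablePrivilege and
// revertPrivilege, plus whatever it transitively calls, is what the audit
// must cover; code outside this control flow can never exercise the
// privilege. The two privilege methods are illustrative stand-ins.
public class AuditRegion {
    static void enablePrivilege(String target) { /* annotate current frame */ }
    static void revertPrivilege(String target) { /* restore prior state */ }

    static String readConfig() {
        enablePrivilege("UniversalFileRead");     // grep finds this line...
        try {
            return "config-data";                 // ...audit this call tree...
        } finally {
            revertPrivilege("UniversalFileRead"); // ...up to this revert
        }
    }

    public static void main(String[] args) {
        System.out.println(readConfig());
    }
}
```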
For example, Netscape ported Sun’s RMI (remote method invocation) to run
inside the Netscape VM. This was tricky because RMI was never designed to be used
from within applets: it wished to install its own SecurityManager and use its
own ClassLoaders. When reviewing this port, we focused our efforts on study-
ing the code immediately preceding and following calls to enable privileges and
were able to rapidly identify security-relevant bugs. As a result, RMI became available to
any applet in the Netscape VM. This would not have been possible without stack
inspection.
Process models are a very different challenge to audit. When the system does
not depend on any of Java’s security, it must then rely on either its own security
can be quite large, auditing them is non-trivial. Also, if the operating system was
written in a non-type-safe language such as C, there may be hidden opportunities
5.3.6 Least Common Mechanism
tion and a potential security hole, so as little data as possible should be shared.
The principle of least common mechanism concerns the dangers of sharing state
among different programs. If one program can corrupt the shared state, it can then
corrupt other programs depending on it. This problem applies equally to capabil-
was Hopwood’s interface attack [MF97], which combined a bug in Java’s inter-
face mechanism with a shared public variable to break the type system, and thus
applets.
5.3.7 Accountability
The system should be able to accurately record “who” is responsible for using
a particular privilege.
In the event that the user has granted trust to a program that then abuses that trust,
logging mechanisms will be necessary to prove that damages occurred and then
seek recourse.
In each system, the interposed protection code can always record what hap-
an administrator can learn which principal enabled the privileges to damage the
system. Because all the intermediary code is tracked when a privilege is checked,
this could also be recorded, allowing for a very complete picture of how privileges
granted and log this information when invoked. If the capability can be leaked
to another program (see Section 5.3.3), the principal logged will not be the same
as the principal responsible for using the capability. A modified capability system
Process models have enough information to record which applet was responsi-
ble for using a privilege because of the strong separation between applets.
Mechanisms should exist to fairly meter out finite system resources such as
memory, CPU, and disk space.
Traditional Java systems have had no resource limits. There was nothing to prevent
any Java applet from going into an infinite for-loop or from allocating memory
without limit. Traditional Java schedulers were simple round-robin systems that,
The Java operating systems designed by the Flux project at Utah [BTS+98, TL98]
are the only current systems to apply resource limits to Java, and they work with a
fairly traditional process model. While this separation increases the costs of shar-
ing between applets, it makes traditional resource management possible.
Systems like stack inspection can make binary access control decisions such
as whether a specific request to make a network connection should be granted.
However, they have no provisions to meter out limited resources. This will be an
All systems face a fundamental issue: the security policy must be specified some-
where. If the policy were expressed directly as an access control matrix, it would
be quite large and unwieldy. The number of possible principals is potentially un-
bounded and the number of targets is also quite large.
lazy evaluation. When a specific query is made to the security policy, if it has
not been previously answered, a security dialog box will ask the user to decide
whether the requested access should be allowed or denied. Asking the user ques-
tions too often can result in a condition called “dialog fatigue,” where the user
learns to ignore the dialogs, hitting the “OK” button automatically to continue
administrator to pre-load the system to avoid requiring the user to answer security
questions.
The security architectures presented here can all support these features to re-
5.3.10 Performance
ject references cannot be shared. If one process must reference the data of an-
other, a distributed garbage collection algorithm must additionally run. Still,
with 64-bit architectures, processes need not require the high overhead of con-
text switching. Multiple Java processes could run in the same flat address space,
gaining the same benefits associated with single address space operating sys-
vary widely. These costs are discussed in detail in chapter 8, and seem to vary
systems. However, all systems must pay similar runtime costs when they imple-
routines).
is free to inline methods within a process, it cannot make any optimizations which
cross processes. The security system must enforce a specific calling convention,
allowing it to intercept and manage cross-process calls.
allowed to perform any optimizations, since all the methods run within the same
environment. However, the compiler must guarantee that all the stack information
is preserved for later security checks. This is roughly equivalent to the problem of
maintaining debugging information in optimized code [Hen82, TA90].
ties, or with name-space management, better compilers will result in faster systems
all around. Because these systems are implemented on top of Java without under-
lying VM changes, any optimization that accelerates normal method calls will also
accelerate cross-boundary calls as well.
5.3.11 Compatibility
We must consider the number and depth of changes necessary to integrate the
security system with the existing Java virtual machine and standard libraries.
and stack inspection is that language-based protection can be implemented on top
of a type-safe language without diverging too much from the original specification
of that language. For both name-space management and stack inspection, old ap-
plets, those written against the original Java class libraries and unaware of the new
Systems with a process model should similarly be able to run unmodified ap-
plications. In cases where separate applets are intended to run in the same vir-
tual machine and share memory, there could be compatibility issues because static
variables will not necessarily be shared across the process boundaries. Likewise,
“signed” applets that can request additional privileges may not work properly if
the applet process is strongly separated from the user’s machine. An applet that
tries to access a restricted browser resource would be denied.
On the other hand, as noted in section 5.3.1, a capability system would require
new class libraries and would thus completely break compatibility with traditional Java
classes.
5.4 Combinations
5.4.1 Name-Space Management + Capabilities
works well for hiding classes. This could provide the basis for implementing a
strongly capability-based system. The classes implementing the capabilities would
be allowed to see the original system classes, but other code would not.
Stack inspection can be expensive to use, but has nice security properties. Capa-
bility systems are quite fast, but require a radically different style of programming
from Java’s common usage. The combination gets many of the best features of both
systems and it is already being used. As discussed in section 5.3.3, stack inspection
protects “important” calls, such as opening files and network connections, while
objects representing capabilities to use those files and network connections are re-
turned. While this follows good object-oriented design, it can only be secure if one
applet cannot leak its capabilities or steal the capabilities of another. If two applets
are in a situation where one can get a reference to any data from the other applet,
to open a file would be through a capability rather than through trying to con-
struct an appropriate object directly, then stack inspection could be used to block
anybody but the system from successfully calling Java’s traditional file interfaces,
while allowing the capability classes full access. For an example of how this might
work, see figure 5.6.
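One way such an arrangement could be sketched is shown below. The names are invented, and a toy walk over Thread.getStackTrace() stands in for real stack inspection: the raw file class refuses construction unless the capability class is its direct caller.

```java
// Toy version of the arrangement described above: RawFile stands in for
// Java's traditional file interface and is blocked by (simulated) stack
// inspection unless the capability class is the direct caller; all other
// code must hold a FileCapability object. Names and the stack walk are
// illustrative only.
public class FileCapability {
    static final class RawFile {                 // the guarded "system" class
        RawFile() { checkCallerIsCapabilityClass(); }
    }

    private static void checkCallerIsCapabilityClass() {
        StackTraceElement[] st = Thread.currentThread().getStackTrace();
        for (int i = 0; i + 1 < st.length; i++)
            if (st[i].getMethodName().equals("<init>")
                    && st[i].getClassName().endsWith("$RawFile")) {
                // The frame just below the constructor is its direct caller.
                if (st[i + 1].getClassName().equals(FileCapability.class.getName()))
                    return;
                break;
            }
        throw new SecurityException("raw file interface blocked");
    }

    private final RawFile raw;
    public FileCapability() { raw = new RawFile(); }  // the permitted path

    public static void main(String[] args) {
        new FileCapability();                    // succeeds: goes via the capability
    }
}
```

Direct attempts to construct RawFile from other code fail the stack check, while the capability class retains full access, mirroring the division of labor described in the text.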
If backward compatibility were not an issue, would there still be a place for
complete mediation that are not possible with pure capabilities (discussed in sec-
tions 5.3.3 and 5.3.5). Stack inspection can also be seen as a raw mechanism to
extend a capability system to address the problems inherent with pure capabili-
ties.
Existing systems that use processes to separate Java applets must support com-
munication among these applets as well as with the “kernel.” The facilities to do this
are closely related to remote procedure call systems. As discussed in section 4.2.2,
capabilities are a simple way of adding security to a distributed system, so they
would also work for process systems. In essence, any mechanism that provides
5.5 Conclusion
implemented in Java and two of them (extended stack inspection and name space
management) have been integrated in commercial Web browsers.
All four designs have their strengths and weaknesses. For example, capability
systems are implemented very naturally in Java. However, they are suitable only
for applications where programs are not expecting to use the standard Java class
Java’s libraries and newer Java mechanisms such as Java reflection may limit its
use.
Process models seem to offer good security, but they impose performance costs
that call their usefulness into question.
Stack inspection has been adopted by the commercial Web browsers, offering
relatively good security. The remainder of this dissertation will be focused on
Chapter 6
the details of the system and allow us to reason about it. In particular, we would
like the model to capture the “security state” of the system at any time and let us
express transitions from this state as mathematical operations.
There are no hard and fast rules of how one should model a system formally.
Instead, the most expedient path is to find a formal model of a similar system
and adapt it to stack inspection. The system that we decided to borrow from was
originally used to describe authentication and access control in the Taos operat-
ing system [LABW92, WABL94]. In Taos, the operating system maintains infor-
mation about every channel between processes on the same machine and across
the network. When a process receives a request, the process may ask the system
to identify who has connected to it. Because a channel may pass through multi-
ple points of trust (the local operating system, the network, the remote operating
system, etc.), the system explicitly puts these into the principal, creating a com-
pound principal. Taos actually included a theorem prover inside the system which
could, given these compound principals and a security policy, both expressed in
the same formal logic, generate proofs of whether a given request is authorized to
occur. The logic, a relatively simple propositional or modal logic with no negation
of statements (and with certain restrictions on the form of statements), allows the
theorem prover to run fast enough to not dramatically impact system performance.
We decided to adopt this logic, originally specified by Abadi, Burrows, Lampson,
and Plotkin (hence the name ABLP).
Stack inspection can be modeled using a subset of ABLP. This section will describe
the subset and give a general flavor for how it can be used.
The logic is based on a few simple concepts: principals, conjunctions of princi-
A principal is a person, organization or any other entity that may have the
right to take actions or authorize actions. In addition, entities such as pro-
(Targets are traditionally known as “objects” in the literature, but this can be
confusing when talking about an object-oriented language.)
A statement is any kind of utterance a principal can emit. Some statements are
made explicitly by a principal, and some are made implicitly as a side-effect
ing that we can act as if the principal P supports the statement s. Note that
saying something does not make it true; a speaker could make an inaccurate
that we should place faith in a statement only if we trust the speaker and it
is the kind of statement that the speaker has the authority to make. Thus, if
sider whether A’s utterance might be incorrect, and our degree of faith in s
will depend on our beliefs about A and B. When A quotes B, we have no
that B supports the same statement. If A ⇒ B, then A has at least as much au-
thority as B. Note that the ⇒-operator can be used to represent group mem-
This section presents a grammar for valid ABLP expressions. Note that, while
this grammar is not completely unambiguous, the axioms that operate on ABLP
ment could be an atomic statement such as “the sky is blue.” It could also be a
compound statement such as “Bob says the sky is blue” or “Alice speaks for Bob,”
as indicated with the ⇒ symbol. A statement may also be the conjunction of sev-
eral independent statements, as indicated with the ∧ symbol. Or, a statement may
imply another statement, as indicated with the ⊃ symbol.
Statement → AtomicStatement
Statement → Principal says Statement
Statement → Principal ⇒ Principal
Statement → Statement ∧ Statement
Statement → Statement ⊃ Statement
compound principal such as “Alice quoting Bob,” as indicated with the | symbol,
or a conjunction of principals, as indicated with the ∧ symbol.
Principal → AtomicPrincipal
Principal → Principal | Principal
Principal → Principal ∧ Principal
Statement → ( Statement )
Thus, it’s perfectly reasonable to make a statement such as
(Charlie ⇒ (Alice ∧ Bob)) ∧ (Charlie | Alice says X) ∧ ((Alice says X) ⊃ X)
where the first part is a form of delegation (Alice and Bob delegating their privi-
leges to Charlie), the second part is an assertion that “Charlie quoting Alice” wants
to do X (i.e., Charlie is claiming that Alice wants to do X), and the third part is an
access control rule stating that when Alice says she wants to do X, we will believe
her.
6.3 Axioms
Here is a list of the subset of axioms in ABLP logic used in this dissertation. We
omit axioms for delegation, roles, and exceptions because they are not necessary
If s and s ⊃ s′ then s′. (6.2)
(A | (B ∧ C)) = (A | B) ∧ (A | C) (6.9)
(A ⇒ B) ≡ (A = A ∧ B) (6.10)
Charlie ⇒ (Alice ∧ Bob) by axiom 6.11
(Charlie | Alice says X) ∧
(Alice | Alice says X) ∧ (Bob | Alice says X) by axiom 6.5
X by axiom 6.3
Now, in general, not all ABLP proofs are this easy. It is possible to encode prob-
lems in ABLP that are equivalent to the halting problem. However, by carefully
choosing a subset of ABLP, we can not only guarantee that proofs are decidable,
but we can also make efficient decision procedures for them. Chapter 7 presents
the subset of ABLP that we use to model Java’s stack inspection and presents an
efficient decision procedure for it.
With an understanding of how ABLP logic works, we can explain how it can be
used to model actual systems. A great amount of detail on this is available in
Lampson, Abadi, Burrows, and Wobber [LABW92]. ABLP can be used to model
the flow of control through a single system, from user to keyboard to motherboard
to device driver to operating system to user process. It can also be used to model
information passing across a network to the same level of detail. The key is quot-
ing. When an application receives a keystroke, it might want to verify that the
keystroke, in fact, came from the user. In the model, such an application would be
required to validate
(Kernel | DeviceDriver | Keyboard says KeyPressed(‘g’)) ⊃ KeyPressed(‘g’)
In order to do this, it must believe that each layer truthfully speaks for the layer
below it:
Kernel ⇒ DeviceDriver
DeviceDriver ⇒ Keyboard
(Keyboard says KeyPressed(x)) ⊃ KeyPressed(x)
Given the above beliefs and the axioms of ABLP logic, an application may safely
believe in the authenticity of its keystrokes.
prove that the window server speaks for the keyboard. Such a proof would require
modeling the event dispatch mechanism inside the server. If the window server
supported features like synthetic key events (where an application may simulate
keystroke events to drive another application), this would also need to be taken
into account in the model. As the model’s complexity grows, our certainty of
BAN logic [BAN90], has been applied to the underlying cryptographic protocols
as well. In the next chapters of this dissertation, ABLP will be used to model the
authentication and access control within Java.
Chapter 7
presenting a model of stack inspection using ABLP logic. While the model is much
simpler than the original stack inspection system, we prove the model is equiva-
lent to the original specification and we present an efficient decision procedure for
that there are a finite number of such statements, allowing us to represent the se-
curity state of the system as a deterministic pushdown automaton. We also show
that this automaton may be embedded in Java by rewriting all Java classes to pass
discusses an implementation based on this). Finally, we show how the logic al-
lows us to describe a straightforward design for extending stack inspection across
7.1 Mapping Java to ABLP
We will now describe a mapping from the stack, the privilege calls, and the stack
7.1.1 Principals
In Java, code is digitally signed with a private key, then shipped to the virtual
machine where it will run. If KSigner is the public key of Signer, the public-key
When Code is invoked, it generates a stack frame Frame. The virtual machine as-
sumes that the frame speaks for the code it is executing:
The transitivity of ⇒ (which can be derived from equation 6.10) then implies
Frame ⇒ Signer. (7.5)
We define Φ to be the set of all such valid Frame ⇒ Signer statements. We call Φ
the frame credentials.
Note also that code can be signed by more than one principal. In this case, the
code and its stack frames speak for all of the signers. To simplify the discussion,
all of our examples will use single signers, but the theory can support multiple
signers without difficulty.
7.1.2 Targets
Recall that the resources we wish to protect are called targets. For each target, we
create a dummy principal whose name is identical to that of the target. These
dummy principals do not make any statements themselves, but various principals
may speak for them.
For each target T, the statement Ok(T) means that access to T should be allowed
underlying the Java Virtual Machine (JVM). From the operating system’s point of
view, the JVM is a single process and all system calls coming from the JVM are
performed under the authority of the JVM’s principal (often the user running the
JVM). The JVM’s responsibility, then, is to allow a system call only when there
is justification for issuing that system call under the JVM’s authority. Our model
will support this intuition by requiring the JVM to prove in ABLP logic that each
[Figure 7.1: Four stack frames F1 through F4 issuing enablePrivilege and disablePrivilege calls, with the resulting security contexts SF1 through SF4; the example in the text below walks through it.]
7.1.4 Stacks
When a Java program is executing, we treat each stack frame as a principal. At any
point in time, a stack frame F has a set of statements that it believes. We refer to
this as the security context of F and write it SF. We now describe where the security
Starting a Program
When a program starts, we need to set the security context of the initial stack frame,
SF0 . In the Netscape model, SF0 = {}. In the Sun and Microsoft models, SF0 =
{Ok(T) | T ∈ Targets}. These correspond to Netscape’s initial unprivileged state
and Sun and Microsoft’s initial privileged state.
Enabling Privileges
Calling a Procedure
When a stack frame F makes a procedure call, this creates a new stack frame G.
A stack frame can also choose to disable some of its privileges. The call
disablePrivilege(T) asks to disable any privilege to access the target T. This
is implemented by giving the frame a new security context that consists of the
old security context with all statements in which anyone says Ok(T) removed.
revertPrivilege() is handled in a similar manner, by giving the frame a new
security context that is equal to the security context it originally had. The latest
JDK 1.2beta4 from Sun does not support any calls equivalent to these, so these
calls need not be modeled for Sun’s version of the architecture.
Example
Figure 7.1 shows an example of these rules in action. In the beginning, SF1 = {}. F1
then calls enablePrivilege(T1), which adds the statement Ok(T1) to SF1.
When F2 is created, F1 tells it Ok(T1), so SF2 is initially {F1 says Ok(T1)}. F2 then
SF3 initially contains F2 | F1 says Ok(T1) and F2 says Ok(T2). When F3 calls
disablePrivilege(T2), the latter statement is deleted from SF3. SF4 initially con-
tains F3 | F2 | F1 says Ok(T1). When F4 calls enablePrivilege(T2), this adds Ok(T2) to
SF4.
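The bookkeeping in this example can be sketched directly, as below. The class and its names are invented for illustration; statements are plain strings, and a quoted statement F | G says s is written as nested says.

```java
// Sketch of security-context propagation as described above (all names
// invented). enablePrivilege adds Ok(T); a procedure call hands the
// callee every statement prefixed with "<caller> says"; disablePrivilege
// deletes any statement in which anyone says Ok(T); revertPrivilege
// restores the context the frame started with.
import java.util.LinkedHashSet;
import java.util.Set;

public class SecurityContext {
    final String frame;
    final Set<String> stmts = new LinkedHashSet<>();
    private final Set<String> entry;             // snapshot for revertPrivilege

    SecurityContext(String frame, Set<String> inherited) {
        this.frame = frame;
        stmts.addAll(inherited);
        entry = new LinkedHashSet<>(inherited);
    }

    void enablePrivilege(String t) { stmts.add("Ok(" + t + ")"); }

    void disablePrivilege(String t) { stmts.removeIf(s -> s.endsWith("Ok(" + t + ")")); }

    void revertPrivilege() { stmts.clear(); stmts.addAll(entry); }

    SecurityContext call(String callee) {        // create the callee's frame
        Set<String> told = new LinkedHashSet<>();
        for (String s : stmts) told.add(frame + " says " + s);
        return new SecurityContext(callee, told);
    }

    public static void main(String[] args) {
        SecurityContext f1 = new SecurityContext("F1", Set.of());
        f1.enablePrivilege("T1");                // SF1 = {Ok(T1)}
        SecurityContext f2 = f1.call("F2");      // SF2 = {F1 says Ok(T1)}
        f2.enablePrivilege("T2");
        SecurityContext f3 = f2.call("F3");
        f3.disablePrivilege("T2");               // deletes F2 says Ok(T2)
        System.out.println(f3.stmts);
    }
}
```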
Before making a system call or otherwise invoking a dangerous operation, the Java
virtual machine calls checkPrivilege() to make sure that the requested operation
rived from Φ (the frame credentials), AVM (the access control matrix), and SF (the
security context of the frame that called checkPrivilege()).
logic, we now present an efficient decision procedure that gives the correct answer
for our subset of the logic. checkPrivilege() implements that decision procedure.
directed graph which we will call the speaks-for graph of EF . This graph has an
When examining the statement F1 | F2 | ⋯ | Fk says Ok(U), the decision procedure
terminates and returns true if both
for all i ∈ [1, k], there is a path from Fi to T in the speaks-for graph, and
U = T.
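The core of this check can be sketched compactly. The class and method names below are invented; the edges would come from Φ and the access control matrix, and a path search stands in for an explicit transitive closure.

```java
// Illustrative decision-procedure core: a quoted request
// F1 | F2 | ... | Fk says Ok(T) is granted only if every frame in the
// quoting chain can reach T along speaks-for edges. Names are invented
// for this sketch.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SpeaksForGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void speaksFor(String a, String b) {        // record an A => B edge
        edges.computeIfAbsent(a, k -> new HashSet<>()).add(b);
    }

    boolean reaches(String from, String to) {   // path search over => edges
        Deque<String> work = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>();
        while (!work.isEmpty()) {
            String n = work.pop();
            if (n.equals(to)) return true;
            if (seen.add(n)) work.addAll(edges.getOrDefault(n, Set.of()));
        }
        return false;
    }

    // Grant iff every quoting frame speaks for the target.
    boolean check(List<String> quotingChain, String target) {
        for (String f : quotingChain)
            if (!reaches(f, target)) return false;
        return true;
    }
}
```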
If the decision procedure examines all of the Class 3 statements without success, it
Proof: The result follows directly from the fact that EF has finite cardinality; there
are a finite number of principals that the system knows about, a finite number of
stack frames that must be considered, and a security policy of finite length. This
implies that each loop in the algorithm has a bounded number of iterations; and
In fact, the decision procedure runs quite efficiently. We can separately analyze
the runtime complexity and space complexity of each phase, as presented above. If
there are N rules in the access control matrix AVM(F) and a stack depth of D (i.e., Φ
has at most D elements), the cost of computing the transitive closure of the graph
will be, at worst, O((N + D)²) and consume O((N + D)²) space. Then, if there are k
statements in SF (each of which can have at most D principals in its quoting chain),
the cost of checking all statements will be O(kD). The total cost of the decision
(subject to the caveats in section 5.2.4) and the O(kD) complexity of the security
Theorem 2 (Soundness) If the check decision procedure returns true when invoked in
stack frame F, then there exists a proof in ABLP logic that EF ⊢ Ok(T).

Lemma 1 If there is a path from A to B in the speaks-for graph of EF, then EF ⊢ (A ⇒ B).
(A, v1, v2, …, vk, B)
in the speaks-for graph of EF. In order for this path to exist, we know that the
statements
A ⇒ v1,
vi ⇒ vi+1 for all i ∈ [1, k − 1],
and
vk ⇒ B
are all members of EF. Since ⇒ is transitive, this implies that
EF ⊢ A ⇒ B.
Proof of Theorem 2: There are two cases in which the check decision procedure
1. The decision procedure returns true while it is iterating over the Class 1
statements. This occurs when the decision procedure finds the statement
ments. In this case we know that the decision procedure found a Class 2
statement of the form
where for all i ∈ [1, k] there is a path from Pi to T in the speaks-for graph of EF.
It follows from Lemma 1 that for all i ∈ [1, k], Pi ⇒ T. It follows that
Conjecture 1 (Completeness) If the check decision procedure returns false when invoked
in stack frame F, then there is no proof in ABLP logic of the statement EF ⊢ Ok(T).
Although we believe this conjecture to be true, we do not presently have a com-
plete proof. If the conjecture is false, then some legitimate access may be denied.
If the conjecture is true, then Java stack inspection, our access control decision
procedure, and proving statements in our subset of ABLP logic are all mutually
equivalent.
Proof: The Java stack inspection algorithm (Figure 5.5) itself does not have a for-
mal definition. However, we can treat the evolution of the system inductively and
We also assume Netscape semantics. A simple adjustment to the base case can
be used to prove equivalence between the decision procedure and the Sun/Microsoft
semantics.
Base case: In the base case, no steps have been taken. In this case, the stack
inspection system has a single stack frame with no privilege annotation; in the
ABLP model, the stack frame’s security context is empty. In this base case,
checkPrivilege(T, S0) and check(T, M(S0)) will both return false.
Inductive step: We assume that N steps have been taken (N ≥ 0) and we are in
a situation where both checkPrivilege(T, S) and check(T, M(S)) would yield the
privilege(T) annotation on the current stack frame. In the ABLP model, it adds
Ok(T) to the current security context (a part of M(S)).
Java stack inspection algorithm will succeed because the enabled-privilege(T) flag is
immediately discovered. Likewise, a call to check(T, M(S)) will succeed because
Procedure call step: Let P be the principal of the procedure that is called. In
the stack inspection system, this adds to the stack an unannotated stack frame
the check will deny access because every statement starts with “P says” and
P does not speak for T.
P is trusted for T. In the stack inspection case, the stack search will ignore the
current frame and proceed to the next frame on the stack. In the ABLP case,
since P ⇒ T, the “P says” on the front of every statement has no effect. Thus
both systems give the same answer they would have given before the last
step. By the inductive hypothesis, both systems thus give the same result.
There are a number of cases in which Java implementations differ from the model
we have described. These are minor differences with no effect on the strength of
the model.
7.2.1 Groups
In the ABLP logic, a group is itself a principal, and group membership is represented by saying the member speaks for the group. Deployed Java systems use groups in several ways: a
user or administrator can divide the principals into groups with names like "local",
"intranet", and "internet", and then define policies on a per-group basis.
Netscape defines “macro targets” that are groups of targets. A typical macro
target might be called “typical game privileges.” This macro target would speak
for those privileges that network games typically need.
The Sun system has a general notion of targets in which one target can imply
another. In fact, each target is required to define an implies() procedure, which
can be used to ask the target whether it signifies a superset of the privileges as-
sociated with another target. This can be handled with a simple extension to the
model.
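The target-implication idea described above might be sketched as follows. The classes, method names, and target names here are hypothetical illustrations of the mechanism, not Sun's actual API.

```java
// Sketch of a "macro target" that speaks for a group of targets via an
// implies() check, in the style of Sun's target model. All class and
// target names here are hypothetical illustrations.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class Target {
    private final String name;
    Target(String name) { this.name = name; }
    String name() { return name; }
    // By default, a target implies only itself.
    boolean implies(Target other) { return name.equals(other.name()); }
}

// A macro target such as "typical game privileges" implies (speaks for)
// every target in its member set.
class MacroTarget extends Target {
    private final Set<String> members;
    MacroTarget(String name, String... memberNames) {
        super(name);
        this.members = new HashSet<>(Arrays.asList(memberNames));
    }
    @Override
    boolean implies(Target other) {
        return super.implies(other) || members.contains(other.name());
    }
}

public class MacroTargetDemo {
    public static void main(String[] args) {
        Target net = new Target("network-access");
        MacroTarget game = new MacroTarget("typical-game-privileges",
                                           "network-access", "play-sound");
        System.out.println(game.implies(net)); // the macro target covers it
        System.out.println(net.implies(game)); // but not the other way around
    }
}
```

A policy engine asking whether a granted target covers a requested one would call `implies()` in one direction only, mirroring the asymmetry of speaks-for.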
7.2.2 Threads
Java supports multiple threads of control, and hence multiple stacks can exist concurrently. When a new thread is cre-
ated in Netscape’s system, the first frame on the new stack begins with an empty
security context. In Sun and Microsoft’s systems, the first frame on the stack of the
new thread is told the security context of the stack frame that created the thread in
exactly the same way as what happens during a normal procedure call.
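The two thread-creation conventions can be sketched as follows. The SecurityContext class and the capture logic are hypothetical illustrations, not the actual vendor APIs.

```java
// Sketch of the two thread-creation conventions described above. The
// SecurityContext class and capture logic are hypothetical illustrations,
// not the actual Netscape or Sun/Microsoft implementations.
import java.util.Collections;
import java.util.List;

final class SecurityContext {
    final List<String> statements; // ABLP statements, e.g. "F says Ok(T)"
    SecurityContext(List<String> statements) {
        this.statements = Collections.unmodifiableList(statements);
    }
    static final SecurityContext EMPTY = new SecurityContext(List.of());
}

class InheritingThread extends Thread {
    // Sun/Microsoft convention: the new thread's first frame starts with
    // the security context of the frame that created the thread,
    // captured at construction time.
    final SecurityContext initialContext;
    InheritingThread(SecurityContext creatorContext, Runnable r) {
        super(r);
        this.initialContext = creatorContext;
    }
}

class FreshThread extends Thread {
    // Netscape convention: the new thread's first frame starts empty.
    final SecurityContext initialContext = SecurityContext.EMPTY;
    FreshThread(Runnable r) { super(r); }
}
```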
The model of enablePrivilege() in section 7.1.4 differs somewhat from the Netscape
implementation of stack inspection, where a stack frame F cannot successfully call
enablePrivilege(T) unless the local access credentials include F ⇒ T. The restriction imposed by Netscape is related to their user interface and is not necessary in our formulation, since the statement F says Ok(T) is ineffectual unless F ⇒ T. Sun
Java implementations do not treat stack frames or their code as separate principals.
Instead, they track only the public key that signed the code and call this the frame’s
principal. As we saw in section 7.1.1, for any stack frame, we can prove the stack
frame speaks for the public key that signed the code. In practice, neither the stack
frame nor the code speaks for any principal except the public key. Likewise, access
control policies are represented directly in terms of the public keys, so there is no
need to separately track the principal for which the public key speaks. As a result,
the Java implementations say the principal of any given stack frame is exactly the
public key that signed that frame’s code. This means that Java implementations
need not have an internal notion of the frame credentials described here.
7.3 Toward an Efficient Implementation

Recasting stack inspection as a decision procedure can help us find more efficient implementations of stack inspection. We improve the performance in two ways. First, we show that the evolution of security contexts can be represented by a deterministic pushdown automaton with a finite number of distinct states. Second, we show how this automaton can be folded into the program itself, the approach developed below as security-passing style. Our optimizations rest on two observations:
1. Interchanging the positions of two principals in any quoting chain does not affect the result of the decision procedure.
2. If P is an atomic principal, replacing P | P by P in any statement does not affect the result of the decision procedure.
Both observations are easily proven, since they follow directly from the structure of the decision procedure.
We also use the observation in section 7.2.4 that we need not consider frame
credentials, but need only consider the signer of a given stack frame. This means
that multiple stack frames corresponding to the same signature will be considered
to have the same principal.
It then follows that without affecting the result of the decision procedure we
can rewrite each statement in the security context into a canonical form in which
each atomic principal appears at most once, and the atomic principals appear in
some canonical order. After this transformation, we can discard any duplicate statements.
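The canonicalization can be sketched as follows; the representation of a quoting chain as a simple list of principal names is an illustration, not the dissertation's actual data structure.

```java
// Sketch of canonicalizing a statement's quoting chain, using the two
// observations above: principal order does not matter, and duplicates of
// an atomic principal (P | P) collapse to P. A chain is represented here
// simply as a list of principal names.
import java.util.List;
import java.util.TreeSet;

public class CanonicalChain {
    // Sort principals into a canonical order and drop duplicates.
    public static List<String> canonicalize(List<String> quotingChain) {
        return List.copyOf(new TreeSet<>(quotingChain));
    }
    public static void main(String[] args) {
        // "B | A | B says Ok(T)" and "A | B says Ok(T)" become identical.
        System.out.println(canonicalize(List.of("B", "A", "B"))); // [A, B]
        System.out.println(canonicalize(List.of("A", "B")));      // [A, B]
    }
}
```

Because canonical forms are equal exactly when the statements are equivalent, duplicate statements in a context can be discarded by set membership.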
While the number of possible security contexts can grow exponentially in the
number of principals and targets, it is nonetheless finite. Therefore, we can represent each distinct security context as a single state of a finite automaton, with says computations as transitions between states. Because stack
frames are created and destroyed in LIFO order, the execution of a thread can be
modeled by a pushdown automaton over these states.
Representing the system as an automaton has several advantages. It allows a security context to be represented by a single pointer into the automaton's state space, and a says computation to be performed by following a single transition edge.
Furthermore, the results of a security check can be stored along with the secu-
rity contexts. So in cases where the same security check may be made numerous
times (such as when one program opens a multitude of files), only the first check
would require invoking the decision procedure. Subsequent security checks could
consult a local cache and execute in constant time.
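The constant-time re-checking described here amounts to a memo table keyed by target and context; the class and method names below are hypothetical.

```java
// Sketch of caching security-check results per (target, context) pair,
// as described above: the full decision procedure runs only on the first
// check, and later checks of the same target in the same context are a
// constant-time cache lookup. All names here are illustrative.
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiPredicate;

public class CheckCache {
    private final Map<String, Boolean> results = new HashMap<>();
    private final BiPredicate<String, String> decisionProcedure;
    private int slowChecks = 0;

    CheckCache(BiPredicate<String, String> decisionProcedure) {
        this.decisionProcedure = decisionProcedure;
    }
    boolean check(String target, String contextId) {
        return results.computeIfAbsent(target + "@" + contextId, k -> {
            slowChecks++; // full decision procedure invoked only here
            return decisionProcedure.test(target, contextId);
        });
    }
    int slowChecks() { return slowChecks; }

    public static void main(String[] args) {
        CheckCache cache = new CheckCache((t, c) -> t.equals("FileRead"));
        for (int i = 0; i < 1000; i++) cache.check("FileRead", "ctx7");
        System.out.println(cache.slowChecks()); // the procedure ran once
    }
}
```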
One concern is that, because the space of all possible security contexts is ex-
ponential in the number of principals and targets, the amount of memory needed
will be similarly exponential. This concern is addressed by noting that very few of
these security contexts will ever be used. A lazy implementation, one which only
allocates memory for security contexts as they are needed, would only allocate
memory proportional to the complexity of its security needs. Thus, if the execu-
tion of a program only uses a handful of distinct security contexts, only those few
contexts will be allocated. Conversely, if a program is truly exponential in its secu-
rity complexity, it would need to run for an exponential amount of time in order to
cause the full security context space to be instantiated. Such degenerate cases are unlikely to occur in practice.
The implementation discussed thus far has the disadvantage that security state is
tracked separately from the rest of the program’s state. This means that there are
two subsystems (the security subsystem and the code execution subsystem) with
separate semantics and separate implementations of pushdown stacks coexisting
in the same Java Virtual Machine (JVM). We can improve this situation by implementing the automaton's state tracking within the program itself: we rewrite the program so that every method takes an extra argument, and this
extra argument is a pointer into the finite-state space of the automaton. This
eliminates the need to have a separate pushdown stack for security contexts or
maintain stack annotations on the existing run-time stack. We dub this approach security-passing style. Once a program has been
rewritten, it no longer needs any special security functionality from the JVM. The
rewritten program consists of ordinary Java bytecode that can be executed by any
JVM, even one that knows nothing about stack inspection. This has many advan-
tages, including portability and efficiency. The main performance benefit is that
the JVM can use standard compiler optimizations such as dead-code elimination
and constant propagation to remove unused security tracking code, or inlining and
tail-recursion elimination to reduce procedure call overhead.
Moreover, security-passing style expresses the stack inspection model within the existing semantics of the Java language, rather than
requiring an additional and possibly incompatible definition for the semantics of
the security mechanisms. Security-passing style also lets us more easily transplant the stack-inspection model into new settings, such as the RPC extension described in section 7.4.
[Figure 7.2: Stack inspection extended across an RPC between two virtual machines, VM1 and VM2. Frames F1, F2, and F3 on VM1 enable and disable privileges (Ok(T1), Ok(T2)); their statements arrive at VM2 quoted by VM1's key, e.g. KVM1 | K2 says Ok(T2).]
7.4 Remote Procedure Calls
RPC security has received a good deal of attention in the literature. The two pre-
vailing styles of security are capabilities and access control lists [TMvR86, Gon89,
Hu95, Obj96, vABW96]. Most of these systems support only simple principals.
Even in systems that support more complex principals [WABL94], the mechanisms do not carry the caller's full security context across RPCs. One of the principal uses for ABLP logic is in reasoning about access control in distributed systems, and we use the customary ABLP model of network communication to derive a straightforward extension of our model to the case of RPC.
7.4.1 Channels
When two machines establish an encrypted channel between them, each machine
proves that it knows a specific private key that corresponds to a well-known public
key. When one side sends a message through the encrypted channel, we model
this as a statement made by the sender’s session key: we write K says s, where K
is the sender’s session key and s is the statement. As discussed in section 6.4, the
public-key infrastructure and the session key establishment protocol together let
us establish that K speaks for the principal that sent the message.
In order to extend Java stack inspection to RPCs, each RPC call must transmit
the security context of the RPC caller to the RPC callee. Since each of the caller's
statements travels through the secure channel, a statement S of the caller's frame arrives on the callee side as KCVM says S, where
KCVM is a cryptographic key that speaks for the caller's virtual machine. The
stack frame that executes the RPC on the callee is given an initial security context
quoted by the key of the caller's virtual machine (or more properly, of its key); the callee will disbelieve these statements unless it trusts that key. To accomplish this, the caller transmits its security context along with the normal RPC data. The security context is
marshalled, transmitted, and unmarshalled like any other RPC data.
Figure 7.2 presents an example of how this would work. The Java stack inspec-
tion algorithm executes on the callee’s machine when an access control decision
must be made, exactly as in the local case.
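The marshalling step can be sketched as follows. The string representation of statements and the method names are illustrative only; they stand in for whatever wire format a real RPC system would use.

```java
// Sketch of transmitting a security context across an RPC, as described
// above: the caller marshals its statements with the ordinary RPC data,
// and the callee re-interprets each received statement S as
// "KCVM says S", where KCVM is the key speaking for the caller's VM.
// The string encoding of statements is an illustration only.
import java.util.List;
import java.util.stream.Collectors;

public class RpcContext {
    // Caller side: the context is marshalled like any other RPC argument.
    static List<String> marshal(List<String> callerContext) {
        return List.copyOf(callerContext);
    }
    // Callee side: quote every received statement by the caller VM's key.
    static List<String> unmarshal(String callerVmKey, List<String> wire) {
        return wire.stream()
                   .map(s -> callerVmKey + " says " + s)
                   .collect(Collectors.toList());
    }
    public static void main(String[] args) {
        List<String> callee = unmarshal("KCVM", marshal(List.of("Ok(T2)")));
        System.out.println(callee); // [KCVM says Ok(T2)]
    }
}
```

Note that the callee applies the quoting itself; the caller cannot escape it, which is what makes lying unprofitable in the next section.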
7.4.2 Dealing with Malicious Callers
A malicious caller might transmit an arbitrary, even ill-formed, security context. Regardless of the security context sent, each statement arrives at the callee
as a statement made by the caller’s virtual machine. If the callee does not trust the
caller, such statements will not convince the callee to allow access.
The most powerful statement a malicious caller MC can attempt is simply Ok(T); this will arrive at the callee as MC says Ok(T). Note that
this is a statement that MC can make without lying, since MC is entitled to add
Ok(T) to its own security context. Any lie that MC can tell is less powerful than
this true statement, so lying cannot help MC gain access to T. The most powerful
thing MC can do is to ask, under its own authority, to access T.
Malicious code on a trustworthy caller also does not cause any new problems. The
malicious code can add Ok(T) to its security context, and that statement will be
transmitted correctly to the callee. The callee will then allow access to T only if
it trusts the malicious code to access T. This is the same result that would have
occurred had the malicious code been running directly on the callee. This matches
the intuition that (with proper use of cryptography for authentication, confidentiality, and integrity) it should not matter on which machine the code is running.
7.5 Conclusion
Commercial Java applications often need to execute untrusted code, such as applets, within themselves. In order to allow sufficiently expressive security policies,
granting different privileges to code signed by different principals, the latest Java
systems use the stack inspection architecture. We have modeled stack inspection in terms of the access control
logic developed by Abadi, Burrows, Lampson, and Plotkin [ABLP93]. Using this
model, we can reason precisely about the requests stack inspection will allow or deny, and we
have used the ABLP expression of this model to suggest a novel implementation
for a Java-based secure RPC system. While the implementation of such an RPC system is future work, our model gives us greater confidence that the system would
be secure.
Chapter 8
Stack inspection has many benefits in structuring large systems with mutually untrusting authors. But its original definition, in terms of searching stack frames,
had an unclear relationship to the actual achievement of security, over-constrained implementations of the runtime, and interfered with interprocedural optimizations.
Our new semantics for stack inspection, based on a belief logic, and its implementation using the calculus of security-passing style solve all of these problems.
We can efficiently represent the security context for any method activation, and
we can build a new implementation by merely rewriting the Java bytecodes before they are loaded by the system. No changes to the Java runtime or bytecode
semantics are necessary. With a combination of static analysis and runtime optimizations, our new implementation shows competitive performance, yet will not
interfere with future compiler optimizations.
(1) SPSfun(function f(a1, ..., an) = E) =
        function f(a1, ..., an, s) = (let s' = says(owner(f), s) in SPS(E, s'))
(2) SPS(p.g(x1, ..., xm), s') = p.g(x1, ..., xm, s')
(3) SPS(E1 + E2, s') = SPS(E1, s') + SPS(E2, s')
(4) SPS(BeginPrivilege E, s') = SPS(E, Ok())
(5) SPS(CheckPrivilege(T), s') = check(T, s')

Figure 8.1: The rules for converting a program to security-passing style.
This chapter presents our new implementation and compares its performance
to traditional stack inspection implementations using the javac compiler and our
code transformer itself as benchmarks.
8.1 The Security-Passing Style Transformation
This section describes the design of the security-passing style transformation. Note
that we are using a somewhat simpler version than that described in previous
chapters, which is closer to the design of Sun’s JDK 1.2 system. This system only
supports BeginPrivilege(), CheckPrivilege(), and function calls. Furthermore, in
the simplified model, one may only enable privileges for a specific root target Troot,
where for all targets Tx we have Troot ⇒ Tx. As a shorthand, we write BeginPrivilege() with no
target argument and speak of Ok() with no target.
P → function f(a1, ..., an) = E
P → P P
E → p.g(x1, ..., xm)
E → E + E
E → let v = E in E
E → BeginPrivilege E
E → CheckPrivilege(T)
Binding method calls statically avoids extra work at runtime, and it avoids the danger that a privileged method may call what it
thinks is a method of the same class, yet is actually a method in a subclass. This
would be an example of a luring attack which we wish to avoid (see section 5.2.4).
Figure 8.1 shows the rules for converting a program to security-passing style.
The conversion SPSfun is applied to each function; SPS is applied to each expression. Rule 1 involves the introduction of new local variables s and s' whose names
are not used elsewhere. The function f is rewritten to take s as a new formal parameter, the security context, which will be the representation of a statement in the
(1) SPSfun(function f(a1, ..., an) = E) = function f(a1, ..., an, s) = SPS(E, s)
(2) SPS(p.g(x1, ..., xm), s) = p.g(x1, ..., xm, says(owner(p.g), s))
(3) SPS(E1 + E2, s) = SPS(E1, s) + SPS(E2, s)
(4) SPS(BeginPrivilege E, s) = SPS(E, Ok())
(5) SPS(CheckPrivilege(T), s) = check(T, s)

Figure 8.2: An alternative formulation of SPS conversion, in which the says computation is performed at the call site.
ABLP logic. We then construct a new security context with s' = owner(f) says s,
and SPS-convert the body of the function using s' for all outgoing function calls.
Rule 2 of Figure 8.1 shows the use of s' as the "extra" argument of an outgoing
call; rule 3 shows that most statements are unaffected by SPS-conversion. Rule 4
shows that BeginPrivilege discards the security context s' and simply uses Ok();
rule 5 shows that CheckPrivilege invokes the decision procedure check, described
in section 7.1.5.
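As a concrete illustration, here is the effect of the rules applied by hand to one small method. The SecurityContext representation, says, check, OK, and the owner and method names are stand-ins for the runtime support, not the dissertation's actual code.

```java
// A worked illustration of the SPS conversion rules, applied by hand to a
// small method. says(), check(), OK, and the names are hypothetical
// stand-ins for the real runtime support.
public class SpsExample {
    static final Object OWNER_F = "F";   // owner(f)
    static final Object OK = "Ok()";     // the context after BeginPrivilege
    static Object says(Object owner, Object s) { return owner + " says " + s; }
    static boolean check(Object target, Object s) { return s.equals(OK); }

    // Before conversion (conceptually):
    //   int f() { BeginPrivilege; CheckPrivilege(T); return g(); }
    //
    // After conversion: f gains a security-context parameter and computes
    // s' = owner(f) says s (rule 1); BeginPrivilege replaces the context
    // with Ok() (rule 4); CheckPrivilege becomes a call to check (rule 5);
    // and the outgoing call to g passes the current context (rule 2).
    static int f(Object s) {
        Object sPrime = says(OWNER_F, s);   // rule 1
        sPrime = OK;                        // rule 4: BeginPrivilege
        if (!check("T", sPrime))            // rule 5: CheckPrivilege(T)
            throw new SecurityException();
        return g(sPrime);                   // rule 2
    }
    static int g(Object s) { return 42; }

    public static void main(String[] args) {
        System.out.println(f("initial context")); // prints 42
    }
}
```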
To complete the definition of SPS conversion, we assume that the main function is invoked with an appropriate initial security context.
Stack inspection was originally implemented at Netscape (and then at Sun and Mi-
crosoft) by adding support for it to the runtime system. These extensions required
changing the stack frame representation, which in turn affected the garbage collec-
tor and JIT compiler.
In contrast, security-passing style can be expressed in "vanilla" Java bytecodes (or source), without stack-frame marks or any other
constraints on the Java virtual machine implementation. Every method has an
extra parameter for passing the security context, but this parameter and its representation are just Java.
8.2 Optimization
Traditional stack inspection implementations charge nothing to the vast majority of methods, which do not perform either operation. There is a cost only at the security operations themselves. Also, their system has a linear-time cost for CheckPrivilege(), which must scan a
potentially unbounded number of frames to recover the security context. After
this, with the current JDK 1.2 semantics, analyzing the security context could be
potentially as bad as O(N2 ) in the number of targets because Sun allows the target
speaks-for graph to change over time. If Sun allowed caching of the speaks-for
graph (see section 5.2.4), then a transitive closure could be computed once, allowing the final security check to be linear in the size of the security context, which is itself bounded.
Our semantics costs O(1) per operation (with the same caveats about checking
privileges), since a security context has a bounded-size representation. Even so, several further optimizations are possible, as we now describe.
8.2.1 Caller-says vs. callee-says
Suppose a function body g(..., s) contains a call f(..., s'), where s and s' are the security contexts involved; on entry, f then computes

    s'' = says(owner(f), s') = F says s'.

If f and g have the same owner G, and g's own context was s = says(owner(g), t) = G says t for some t, then because quoting by the same principal is idempotent,

    s'' = says(owner(g), t) = G says t = s,

again using the ABLP axioms, so the says computation at the boundary between g and f is redundant. As a result, g can call f with the security context it received, eliminating the computation of the says function inside g.
In practice, it is very common for one function to call another with the same owner; in such cases, no says computation is necessary (since owner(f) ⇒ owner(g) holds trivially).
Caller-says can be statically optimized if the caller is known to speak for the callee. Callee-says requires fetching the
owner of the callee, and can be statically optimized if the callee speaks for the
caller. Depending on how often these different speaks-for relations can be stati-
cally determined, and how often the owner of the callee can be determined statically, one convention or the other may turn out to perform best in practice. We use caller-says in our implementation.

8.2.2 Class Hierarchy Analysis

In Java, a variable p declared with class type C may point to an object of any subclass of C. Therefore, the method call p.f() may
invoke any of several actual method bodies, depending upon how f is overridden
[Figure 8.3: A sample class hierarchy for the call p.f(), where p is declared with type C. Class A defines f(); classes B, E, and H override it.]
in the subclasses of C.
Figure 8.3 illustrates a simple flow-insensitive class hierarchy analysis. Given a call p.f() where p is declared with type C, we find the nearest ancestor of C that defines or overrides f(), and put that ancestor into the set P. Then we examine all (direct and indirect) subclasses of C, and any of those that override f() are also put into P.
A static analysis of the program may be able to narrow the set of possible types
that p may take on at the site of the call p.f(), and this in turn narrows the set of
possible method bodies (callees) that this call site may invoke. Such an analysis aids ordinary optimization: a dynamic method
lookup is more expensive than a static procedure call, and if the set of callees can be
narrowed to a singleton, then the call p.f() can be implemented without run-time dispatch.
For security-passing style, it is not necessary to narrow the set of possible method
bodies to a singleton; it suffices to prove that all possible method bodies for this
call to f have a common owner. In fact, an even weaker property will suffice: for
caller-says, we require only that every possible owner of f have (nonstrictly) fewer
privileges than the owner of its caller, g; for callee-says, we require that all owners
of f have (nonstrictly) more privileges than the owner of g.
There has been much work on static analyses of object types. Class hierarchy
analysis [Fer95, DGC95] simply examines all the subclasses of C to see if any of
them overrides method f. If not, the definition of f in class C (or, if C does not define f, in its nearest ancestor that does) is the only possible callee. Dataflow analysis can further narrow the set of types that p may contain at the call site; this in turn prunes the set of possible method
bodies for f at this point, which in turn allows more precise dataflow analysis. This
iterative process is called interprocedural class analysis and has been shown to be practical.
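The flow-insensitive analysis of Figure 8.3 can be sketched as follows; the tiny class-graph representation here is an illustration, not a real bytecode analyzer.

```java
// Sketch of the flow-insensitive class hierarchy analysis described
// above: given a call p.f() where p is declared with type C, collect
// every class whose definition of f() might be the one invoked. The
// class-graph representation is an illustration.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Cha {
    final Map<String, String> parent = new HashMap<>(); // subclass -> superclass
    final Set<String> definesF = new HashSet<>();       // classes defining/overriding f()

    Set<String> possibleCallees(String c) {
        Set<String> result = new HashSet<>();
        // Nearest ancestor of C (including C) that defines or overrides f().
        for (String k = c; k != null; k = parent.get(k))
            if (definesF.contains(k)) { result.add(k); break; }
        // Every (direct or indirect) subclass of C that overrides f().
        for (String k : parent.keySet())
            if (isSubclassOf(k, c) && definesF.contains(k)) result.add(k);
        return result;
    }
    boolean isSubclassOf(String k, String c) {
        for (String a = parent.get(k); a != null; a = parent.get(a))
            if (a.equals(c)) return true;
        return false;
    }
    public static void main(String[] args) {
        Cha cha = new Cha();
        // A hierarchy modeled loosely on Figure 8.3:
        // A defines f(); B, E, and H override it; p has type C.
        cha.parent.put("B", "A"); cha.parent.put("C", "B");
        cha.parent.put("D", "C"); cha.parent.put("E", "D");
        cha.parent.put("H", "C");
        cha.definesF.add("A"); cha.definesF.add("B");
        cha.definesF.add("E"); cha.definesF.add("H");
        System.out.println(cha.possibleCallees("C")); // B, E, H in some order
    }
}
```

For SPS, the analysis would then check whether every class in the resulting set has a common (or dominated) owner, rather than whether the set is a singleton.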
System code. Java systems have a most-trusted principal, System, that has access to every target. Even without flow analysis or hierarchy
analysis the compiler can use the rule System ⇒ C to eliminate says computations
when System code is calling other methods (in the caller-says convention) or when other methods call System code (in the callee-says convention).
Leaf procedures. Many functions do not use their security context in any way.
A leaf procedure is one that makes no other function call and contains no Check-
Privilege operations; its security context argument is statically dead at all times.
A generalized leaf procedure is one that neither calls CheckPrivilege nor any native
methods, either directly or indirectly. Static analysis of the dynamic call tree can
conservatively identify many generalized leaf procedures; these procedures do not
require any security-context argument or a says computation.
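The fixed-point identification of generalized leaf procedures can be sketched as follows; the call-graph representation is an illustration of the idea, not the actual bytecode analysis.

```java
// Sketch of identifying generalized leaf methods by a fixed-point pass
// over a call graph: a method qualifies only if it reaches no native
// method and no CheckPrivilege, directly or indirectly. The call-graph
// representation here is an illustration.
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LeafAnalysis {
    static Set<String> generalizedLeaves(Map<String, List<String>> calls,
                                         Set<String> unsafe) {
        // Optimistically assume every ordinary method qualifies, then
        // iterate to a fixed point, removing methods that reach unsafe ones.
        Set<String> leaves = new HashSet<>(calls.keySet());
        leaves.removeAll(unsafe);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (String m : new HashSet<>(leaves))
                for (String callee : calls.getOrDefault(m, List.of()))
                    if (!leaves.contains(callee) && leaves.remove(m)) changed = true;
        }
        return leaves;
    }
    public static void main(String[] args) {
        Map<String, List<String>> calls = new HashMap<>();
        calls.put("a", List.of("b"));          // a -> b -> checkPriv
        calls.put("b", List.of("checkPriv"));
        calls.put("c", List.of());             // c calls nothing at all
        System.out.println(generalizedLeaves(calls, Set.of("checkPriv")));
    }
}
```

Methods in the resulting set never observe their security context, so the rewriter can omit the extra argument and says computation for them.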
The leaf analysis scans the bytecodes of every method (to a limited recursion depth) and is repeated until a
fixed point is reached. In practice, thousands of methods can be analyzed this way
in seconds. An ordinary optimizing compiler could discover the same dead arguments itself, because security-passing style has expressed all the says computations in the underlying programming language. However, the compiler would need to know that says results can be treated as
dead, even though says might have internal side effects to lazily compute part of
the transition graph.
Unfortunately, the invoke bytecode instructions are not sufficient for the leaf
analysis. If a getstatic or putstatic bytecode references a class that has not yet
been loaded, this will cause the class to be initialized, creating an implicit call to the class's static initializer. These implicit method invocations are not directly visible from Java bytecode,
making it difficult to pass the security context to the new method. To address
class initializers, we observe that, when a class is first loaded, it cannot make any assumptions about when its initializer will run, so initializers must already be pessimistic about any security context they might inherit. Our system gives class
initializers the same security context they would receive after an EnablePrivilege
operation. While not completely compatible with Sun's implementation, this solution simulates a possible outcome with which a class initializer should be prepared
to cope.
The implicit calls to create runtime exceptions are much simpler. By observa-
tion, all of the exceptions that might be thrown make only one native method call,
to fill in their stack trace, and are otherwise generalized leaf methods. Because the
security context is never used by these exception constructors, it is safe to allow them to run without one.

8.2.3 Representing Security Contexts

Since the number n of principals and targets is bounded, each security context can be represented with a finite table of labeled out-edges, so that says(o, s)
is computed by looking up o in the table for s.
Although n is bounded, it may not always be tiny (e.g., a stock market with
thousands of principals), so we lazily compute the tables and represent only those
security contexts that are actually reached. Following an untraversed edge requires
(1) looking up a “new” subset in a global hash table to see if this context has been
reached before, (2a) using the context-pointer from the table or (2b) creating a new
context data structure, and (3) installing the edge into the context that had lacked
the edge.
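The lazy, interned context table can be sketched as follows; representing a canonical context as a sorted set of principal names is an illustration, not the dissertation's actual data structure.

```java
// Sketch of the lazy context table described above: each security context
// keeps a table of labeled out-edges, and says(o, s) either follows an
// existing edge or, on a miss, canonicalizes the resulting context, looks
// it up in a global table, and installs the new edge.
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

public class ContextTable {
    static final Map<String, Context> interned = new HashMap<>(); // global table

    static final class Context {
        final TreeSet<String> principals;                   // canonical form
        final Map<String, Context> edges = new HashMap<>(); // labeled out-edges
        Context(TreeSet<String> principals) { this.principals = principals; }
    }
    static Context intern(TreeSet<String> principals) {
        return interned.computeIfAbsent(principals.toString(),
                                        k -> new Context(principals));
    }
    // says(o, s): follow the o-edge from s, creating it lazily on a miss.
    static Context says(String o, Context s) {
        return s.edges.computeIfAbsent(o, owner -> {
            TreeSet<String> next = new TreeSet<>(s.principals);
            next.add(owner); // quoting is idempotent and order-free
            return intern(next);
        });
    }
    public static void main(String[] args) {
        Context empty = intern(new TreeSet<>());
        Context ab = says("B", says("A", empty));
        Context ba = says("A", says("B", empty));
        System.out.println(ab == ba); // identical canonical contexts: true
    }
}
```

Because contexts are interned, equal contexts are pointer-equal, and memory is allocated only for contexts a program actually reaches.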
From a security context s there may be many consecutive says computations by the same principal; after the first, each is a self-loop returning s itself.
8.2.4 Open vs. Closed World Assumptions
Our static analyses assume that we can inspect all code before execution begins. This is often called a "closed
world assumption.” In systems where Java’s security features are often used, such
as Java applets or servlets, new code may arrive at any time. Currently, all of our
algorithms have been designed for a closed world. In particular, our class hierar-
chy analysis runs once, up front, and code is then generated based on properties
true in the closed world. Dean et al. [DGC95] discusses precisely this issue and
proposes a scheme for incrementally updating the analysis.
Keep in mind that the performance numbers in section 8.3 are based on a closed
world. Generally speaking, an open world has strictly less information available
from which to infer that an optimization is legal. This implies that, in general, an open world system will perform somewhat worse, although it should still be
possible for an open world system to closely approach the performance of a closed
world system. For example, in a Java system supporting dynamic code recompilation, such as Sun's forthcoming HotSpot [Gri98], it would be possible for an SPS implementation to redo its analyses and regenerate code as new classes arrive.

8.3 Implementation

Our SPS converter is built on a library that provides a relatively high-level interface to parse and edit Java class files.
The converter itself is about 2300 lines of code to do byte-code rewriting (SPS conversion). Our runtime support
(implementing the says and CheckPrivilege functions) is 1900 lines. Our system
loads, analyzes, and rewrites roughly 800 Java classes in 100 seconds. We made no
effort to tune the performance of the rewriter itself; achieving an order of magnitude improvement should be possible. Our static analysis identifies simple and generalized leaf methods. Because we require the full program for this
analysis, we cannot presently support the dynamic loading features of Java (see
section 8.2.2). Instead, we run the program from local disk with our specialized
classes.
Our system runs by modifying the class libraries of the NaturalBridge BulletTrain Java compiler system, whose authors offered us invaluable assistance with their unreleased product.
8.3.1 Making SPS Work
Security-passing style has some very nice theoretical properties, but actually implementing it in a real Java system raises several practical issues.
Native methods. Java programs can call native methods (functions not written in
Java) that might then call back to Java methods. We cannot apply SPS conversion
within the native methods. Instead, when calling from Java to native, we store the
security context s into a per-thread global variable; when calling from native back
to Java we fetch s as the security context for the Java code. If we assume that all
native method calls have the owner System, then since says(System, s) = s, this is
the correct behavior.
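The per-thread save and restore at the native boundary can be sketched with a ThreadLocal serving as the per-thread global variable; the method names here are illustrative.

```java
// Sketch of the per-thread save/restore at the native-code boundary
// described above, using a ThreadLocal as the per-thread global
// variable. The method names are illustrative.
public class NativeBoundary {
    static final ThreadLocal<Object> savedContext = new ThreadLocal<>();

    // Called just before transferring control from Java to native code.
    static void enterNative(Object s) { savedContext.set(s); }

    // Called by a stub when native code calls back into Java: recover the
    // context this thread stashed before it entered native code.
    static Object enterJavaFromNative() { return savedContext.get(); }

    public static void main(String[] args) {
        enterNative("System says Ok(T)");
        // ... native code runs, then up-calls into Java via a stub ...
        System.out.println(enterJavaFromNative());
    }
}
```

The JNI stub methods described next would call enterJavaFromNative() before invoking their SPS-converted siblings.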
We must also support up-calls, where native methods choose to call back into
Java methods. The standard mechanism for this, JNI (Java Native Interface), re-
quires the native code to specify a method’s complete signature, including the
types of its arguments and return value. This means the SPS-converter must gen-
erate stubs with the original signatures to receive a JNI up-call. A stub method will
retrieve the per-thread stored security context and then invoke its SPS-converted
sibling.
Reflection. Java code can use reflection to ask a class how many parameters each of its methods takes; since SPS conversion introduces extra
arguments, this is a problem that would have to be fixed by modifying the implementation of reflection; we have not yet done this.
Bootstrapping. In practice, bootstrapping proved to be the most difficult aspect of our implementation, because much of the bootstrapping code is written in Java itself. This makes the system extremely
sensitive about the order in which classes begin execution, and many classes which
appear to be normal are handled specially by the compiler. To address these con-
cerns, an SPS-converted program must bootstrap in three stages.
Classes involved in the very beginning of bootstrapping the runtime were iden-
tified by hand and added to a list of classes that are not modified by the SPS con-
verter. Instead, any calls to these classes are treated the same as calls to any native method: the security context is saved in the per-thread variable before the call.
Finally, when “real” security contexts are available, the application’s main rou-
tine can be invoked with a proper security context and execution continues nor-
mally.
Consistency and inheritance. Because many system classes must not be SPS-converted, an SPS-converted class might extend an unconverted class or vice versa. It is obviously important to maintain the consistency of the type
system, and SPS-converting only a subset of the classes can cause confusion.
Our rule is that if an interface is SPS-converted, then all classes that implement that interface must also be SPS-converted.
This rule implies that, if a class cannot be SPS-converted, its superclass may not be SPS-converted either. Calls to unconverted methods are handled the same way calls to native methods
are handled: the security context is saved and the method is invoked without the
security context argument.
Several issues must still be resolved to make this work. One problematic pattern did not occur anywhere else, so it was solved by adding a new method specifically to
java.lang.Thread.
Portability. As mentioned above, Java systems are relatively fragile during the bootstrapping process, so porting our system to another JVM would require identifying which classes need to be handled specially. Also, sufficient access to the system
bootstrapping process is required such that the SPS system can be loaded as early
as possible. Aside from these issues, the SPS runtime should be straightforward to port to other Java systems.
                                No Security   Stack         Security-
                                (baseline)    Inspection    Passing Style
No says (leaf)                                              1-4 cycles
says(o, s) = s (static opt.)    0             0             1-4
says(o, s) (cache hit)                                      33
says(o, s) (cache miss)                                     69
BeginPrivilege                  24 cycles     2200 cycles   57

Table 8.1: Microbenchmark costs of the security operations, in CPU cycles.
Javac Benchmark
Runtime (sec)    15.53    15.77    17.93    18.13
Stddev            0.26     0.09     0.18     0.08
Overhead            0%    1.60%   15.50%   16.78%
[Figure 8.4: Time (in microseconds) for a CheckPrivilege() operation as a function of the average stack depth, comparing stack inspection with security-passing style.]
8.3.2 Performance
Each microbenchmark was executed ten million times, allowing Java's millisecond-accurate timer to
resolve single-cycle differences in execution time. Table 8.1 shows the results. Figure 8.4 shows the variable cost of the CheckPrivilege() primitive when using stack
inspection at varying stack depths. For the macro-benchmarks, we compared SPS-converted code, with its cheap security checks, to normal code, performing expensive stack inspections for its security checks. As benchmarks, we used Sun's javac
Java compiler and our own SPS converter. Both benchmarks do a fair amount
of file reading and writing, requiring a security check for each file opened. Each
benchmark was executed ten times; we show performance numbers with the average and standard deviation of their runtimes. On these benchmarks, the BulletTrain stack inspection system imposed an overhead as low as 1.60%.
Currently, neither our security-passing system nor the BulletTrain stack inspection system has been heavily tuned for performance. Kenneth Zadeck of NaturalBridge
claims the BulletTrain implementation will be much simpler to tune for perfor-
mance [Zad98]. Certainly, if our benchmarks represent typical usage patterns for
security checking, stack inspection is probably the most efficient option. However,
if an application requires a dramatically higher number of security checks, our microbenchmarks suggest that security-passing style would provide better
performance.
Our SPS conversion operates on Java bytecodes, which are then compiled using the NaturalBridge BulletTrain
compiler (see section 8.3). Although this is a good compiler, it is not as efficient as
we might expect from hand-coded assembly language. Therefore, the cost of our
security-passing operations and of ordinary program execution are both higher than they would be with a more aggressive compiler.
                                      SPS Converter          Javac
                                     (stat)    (dyn)     (stat)      (dyn)
a  Leaf methods                      18.42%    84.47%    18.90%      56.92%
b  Statically identifiable
   dominated callee                  93.76%    41.16%    88.27%      70.40%
c  Save/restore cost                          1 instruction (estimated)
d  Fetch owner                                1 instruction (estimated)
e  Cache miss rate
   (one-word cache)                  0.649     0.649     1.146e-6    1.146e-6
f  Cache test cost                            3 instructions (estimated)
g  Fetch target                               1 instruction (estimated)
h  Look in table                             10 instructions (estimated)
   Total overhead
   (instructions / call)             1.368     1.146     1.287       1.068

Table 8.3: Measured parameters and estimated instruction costs for the "optimal" SPS overhead calculation.
8.4 Estimating Optimal Performance
We now estimate the overhead of SPS-converted code under a compiler assumed to generate the smallest and fastest possible code for the SPS primitives. These estimates are based on dynamic performance counters which were
added to the benchmarks discussed in section 8.3.2 and are summarized in ta-
ble 8.3.
1. Static analysis will identify between 18% and 19% of methods as leaf methods, which need no says computation at all.
2. A method that makes more than one call must save its security context variable into the
activation record before the first call, and fetch it back each subsequent call.
We approximate the register save/restore cost by assuming c = 1 instruction
per call.
3. The security context is usually passed in the same register, so that a method
g(: : : ; s) needs no move instructions to call f(: : : ; s). All says computations
are inlined.
4. Each class descriptor contains one field that points to the owner of that class.
When executing a dynamic method call p.f(), the class descriptor of p must be consulted to fetch the owner.
5. A one-word cache of the most recent says computation at each call site achieved nearly
perfect hit rates in the javac benchmark and fairly poor hit rates for the SPS
converter (missing nearly e = 65% of all tests). The cost of testing the cache is
estimated to be f= 3 instructions (fetch owner from context cache, compare,
branch) and the cost of processing a hit is estimated to be an additional g = 1
instruction (fetch target from one-word cache).
6. Each security context contains a data structure (list, hash table, ...) holding its labeled out-edges to other security
contexts. The average number of outgoing edges actually present (they are
calculated lazily) is approximately one. Querying this data structure (and lazily installing a missing edge) is the table lookup estimated at h = 10 instructions.
                 No Security    Method          Estimated
                 (baseline)     Invocations     Overhead
SPS Converter    94.25 sec      109 x 10^6      0.27-1.1 sec (0.28%-1.1%)
Javac            15.53 sec      23 x 10^6       0.056-0.22 sec (0.36%-1.4%)

Table 8.4: Estimated runtimes of benchmarks using "optimal" SPS code generation.
Under these assumptions, the average cost of says computations per method invocation is

    z = (1 - a)·c + (1 - a)(1 - b)·(d + f + (1 - e)·g + e·h)

instructions per call, which reproduces the totals in the bottom row of table 8.3.
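This cost model (reconstructed here from the parameters and totals of table 8.3, so treat the exact formula as an inference rather than a quotation) can be checked numerically against all four measured configurations:

```java
// Numerical check of the per-call says-overhead model against the
// parameters in table 8.3. The formula is reconstructed from the table's
// rows: non-leaf calls (1-a) pay the save/restore cost c, and those
// without a statically dominated callee (1-b) additionally pay for
// fetching the owner (d), testing the cache (f), and either a hit (g) or
// a miss (h), weighted by the miss rate e.
public class SaysOverhead {
    static double overhead(double a, double b, double e) {
        final double c = 1, d = 1, f = 3, g = 1, h = 10; // instruction costs
        return (1 - a) * c + (1 - a) * (1 - b) * (d + f + (1 - e) * g + e * h);
    }
    public static void main(String[] args) {
        // (a, b, e) for: SPS converter (stat, dyn), javac (stat, dyn)
        System.out.printf("%.3f%n", overhead(0.1842, 0.9376, 0.649));    // 1.368
        System.out.printf("%.3f%n", overhead(0.8447, 0.4116, 0.649));    // 1.146
        System.out.printf("%.3f%n", overhead(0.1890, 0.8827, 1.146e-6)); // 1.287
        System.out.printf("%.3f%n", overhead(0.5692, 0.7040, 1.146e-6)); // 1.068
    }
}
```

All four computed values agree with the table's totals row, which is some evidence the reconstructed formula matches the one used in the analysis.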
Based on these “optimal” overhead figures, we would like to estimate the runtime
overhead while running real programs. This requires measuring the number of
actual method calls made during a program’s execution and then adding the esti-
mated cost per method of SPS-converted code. The main difficulty is converting
from some number of instructions to an equivalent number of CPU clock cycles.
Modern CPUs can potentially execute multiple instructions per clock, but load
and store operations may take additional time, depending on whether the desired
memory address is cached or a host of other factors. In many cases, the latency
necessary to compute an intermediate value can be hidden if the value is not yet
needed and other instructions are ready to run. In the case of SPS-conversion, us-
ing caller-says semantics, the computed security context is not necessary until the
first callsite is reached, so it is entirely feasible for the cycle cost of SPS to be low.
However, measuring this in practice is beyond the scope of this thesis (and could
vary widely from one CPU to another). Instead, we present what the overhead
would be if each instruction for SPS conversion consumed exactly one clock cycle
(an optimistic estimate) or exactly four clock cycles (a pessimistic estimate).
These results, summarized in Table 8.4, are based on a 450 MHz system clock. With either benchmark, we estimate the overhead of an ideal SPS converter would add between a quarter of a percent and at most 1.4% to the program's runtime. This overhead is already small for applications like our benchmarks, yet would likely remain this low even in systems which made security checks more often.
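As a sanity check on Table 8.4, the arithmetic behind these estimates can be reproduced directly. The constant of roughly 1.1 instructions of says-computation per method invocation is an assumption of this sketch, chosen to be consistent with the "just over one instruction" figure; the class and method names are illustrative, not from SAFKASI.

```java
// Back-of-the-envelope reproduction of the Table 8.4 overhead estimates.
// Assumptions (mine): ~1.1 says-computation instructions per invocation,
// 1 cycle per instruction (optimistic) or 4 cycles (pessimistic),
// on a 450 MHz system clock.
public class SpsOverhead {
    static final double CLOCK_HZ = 450e6;      // 450 MHz system clock
    static final double INSTR_PER_CALL = 1.1;  // just over one instruction

    static double overheadSeconds(double invocations, double cyclesPerInstr) {
        return invocations * INSTR_PER_CALL * cyclesPerInstr / CLOCK_HZ;
    }

    public static void main(String[] args) {
        // SPS Converter: 94.25 sec baseline, roughly 109 million invocations
        System.out.printf("SPS Converter: %.2f to %.1f sec overhead%n",
                overheadSeconds(109e6, 1.0), overheadSeconds(109e6, 4.0));
        // Javac: 15.53 sec baseline, roughly 23 million invocations
        System.out.printf("Javac: %.3f to %.2f sec overhead%n",
                overheadSeconds(23e6, 1.0), overheadSeconds(23e6, 4.0));
    }
}
```

Under these assumptions the computed ranges round to the same values as the table, which is a useful check that the per-call constant and the clock rate are being combined consistently.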
8.5 Conclusion
Security-passing style is a simple semantics for the “stack inspection” security ar-
chitecture. Users can reason about the security of their systems using ABLP belief
logic, and implementors can reason about interprocedural optimization using the
semantics of the original source language, in the SPS-converted program. The implementation overhead is reasonable even with a relatively unsophisticated and SPS-unaware compiler, and with better compilers it should be possible
to implement security-passing with an overhead of approximately one instruction
per call.
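To make the conversion concrete, here is a minimal, hypothetical illustration of a method in security-passing style. The class and method names, and the simplified principal-set context standing in for the full ABLP machinery, are mine rather than SAFKASI's.

```java
// A minimal illustration of security-passing style: every converted method
// takes the security context as an extra argument, and a security check
// consults that argument directly instead of walking the call stack.
import java.util.Set;

public class SpsExample {
    // Stand-in for a full security context: just the set of principals
    // that "say" (authorize) the current request.
    static final class Context {
        private final Set<String> principals;
        Context(Set<String> principals) { this.principals = principals; }

        void check(String required) {
            if (!principals.contains(required)) {
                throw new SecurityException("missing principal: " + required);
            }
        }
    }

    // SPS-converted method: 'ctx' is the extra parameter added by conversion.
    // Under caller-says semantics, ctx describes the caller's authority.
    static void deleteFile(Context ctx, String path) {
        ctx.check("system");  // explicit check replaces stack inspection
        // ... actual file deletion elided ...
    }

    public static void main(String[] args) {
        deleteFile(new Context(Set.of("system")), "/tmp/scratch");   // allowed
        try {
            deleteFile(new Context(Set.of("applet")), "/etc/motd");  // denied
        } catch (SecurityException expected) {
            System.out.println("applet denied, as expected");
        }
    }
}
```

Because the context is an ordinary argument, an ordinary interprocedural optimizer can propagate, specialize, or eliminate it like any other value, which is the source of the low overhead claimed above.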
In the future we plan to try adding other information to the security context to control resource usage by callees. We will handle classes with multiple signers, use class hierarchy analysis to improve the static optimization of SPS, and integrate our system into a (dynamically linking) Java ClassLoader (although this would require VM hooks to invalidate the optimizations when new classes are loaded that contradict our previous flow analysis).
Chapter 9
Future Work
This dissertation has shown that security-passing style is a viable and efficient technique for access control. As implemented, the extra argument added to each procedure is used strictly for passing the security context. An interesting extension
would be to augment the security context with other information. Perhaps the se-
curity context could additionally help with resource allocation by specifying some-
thing about who is about to allocate memory and on whose behalf (similar to the
per-thread resource tracking in JRes [Cv98]). Similarly, security contexts might
prove useful for tracking CPU usage, allowing code to declare to a scheduler that
it is consuming cycles itself, or on the behalf of its caller. A system that combines
security-passing style with its scheduler and memory management may have in-
teresting properties.
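The resource-accounting idea might look like the following speculative sketch, in which a context records on whose behalf it acts and charges allocations along the whole delegation chain, in the spirit of the JRes-style per-principal accounting mentioned above. All names here are hypothetical, not part of SAFKASI.

```java
// Speculative sketch: a security context extended with memory accounting.
// An allocation is charged to the allocating principal and to everyone it
// acts on behalf of, so a memory manager or scheduler can see who is
// really consuming resources.
import java.util.HashMap;
import java.util.Map;

public class ResourceContext {
    final String principal;              // who is running
    final ResourceContext onBehalfOf;    // caller's context, null at the root
    static final Map<String, Long> bytesCharged = new HashMap<>();

    ResourceContext(String principal, ResourceContext onBehalfOf) {
        this.principal = principal;
        this.onBehalfOf = onBehalfOf;
    }

    // Walk the delegation chain, charging each principal along the way.
    void chargeAllocation(long bytes) {
        for (ResourceContext c = this; c != null; c = c.onBehalfOf) {
            bytesCharged.merge(c.principal, bytes, Long::sum);
        }
    }

    static long charged(String principal) {
        return bytesCharged.getOrDefault(principal, 0L);
    }

    public static void main(String[] args) {
        ResourceContext system = new ResourceContext("system", null);
        ResourceContext applet = new ResourceContext("applet", system);
        applet.chargeAllocation(4096);   // applet allocates on system's behalf
        System.out.println(charged("applet") + " " + charged("system"));
    }
}
```

Charging the entire chain, rather than only the immediate allocator, is what lets a scheduler distinguish code consuming cycles or memory for itself from code doing so on behalf of its caller.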
The static optimization presented here is fairly primitive, failing to use flow-sensitive information. It would be interesting to study the ability of a more aggressive optimizer to further reduce the overhead of
security-passing code. Likewise, the system presented here does not account for
dynamic loading of code. In current Java implementations, new code may arrive at
any time and violate previously valid properties about the class hierarchy. It would also be interesting to have remote invocation systems support security-passing style and integrate that with a local security-passing system. As more distributed systems are deployed, it would be beneficial to apply the security architecture uniformly across machine boundaries. Operating systems are increasingly adopting mobile code, dynamic linking, and object-oriented semantics. Languages are increasingly dealing with issues of access control and fair
resource allocation. Interesting solutions will be found all across the spectrum be-
tween languages and operating systems. A number of commercial systems from
the 1960s blurred the distinction between language and OS, and I expect these dis-
tinctions will begin to be blurred again as we address the computing needs of the
Internet age.
9.1 Conclusions
This dissertation has presented an analysis of security architectures that might be applied to a type-safe language, such as Java, and has introduced security-passing style, a new system for access controls in mobile code systems. Security-passing style addresses concerns that mobile code systems must inherently be inefficient or must rely on coarse-grained protection mechanisms. Because security-passing style can work seamlessly across networks, it can go farther than traditional operating systems security mechanisms in allowing for detailed and principled access controls. As mobile code is increasingly deployed, whether in the form of active networks, shared virtual realities, or programmed stock trading, the importance of a
sound security architecture, such as the one proposed in this dissertation, increases
likewise.
Bibliography
[ABLP93] Martín Abadi, Michael Burrows, Butler Lampson, and Gordon Plotkin. A calculus for access control in distributed systems. ACM Transactions on Programming Languages and Systems, 15(4):706–734, September 1993.
[AGS83] Stanley R. Ames, Jr., Morrie Gasser, and Roger G. Schell. Security kernel design and implementation: An introduction. Computer, 16(7):14–22, July 1983.
[BAN90] Michael Burrows, Martín Abadi, and Roger M. Needham. A logic of authentication. ACM Transactions on Computer Systems, 8(1):18–36, February 1990.
1998.
[BL76] D. Elliot Bell and Leonard J. LaPadula. Secure computer system: Unified exposition and Multics interpretation. Technical Report ESD-TR-75-306, MITRE Corporation, Bedford, Massachusetts, March 1976.
[BN89] David F. C. Brewer and Michael J. Nash. The Chinese wall security policy. In Proceedings of the 1989 IEEE Symposium on Security and Privacy, pages 206–214, Oakland, California, May 1989.
[BNOW95] Andrew D. Birrell, Greg Nelson, Susan Owicki, and Edward P. Wobber. Network objects. Software: Practice and Experience, 25(S4), December 1995.
[Bor94] Nathaniel S. Borenstein. Email with a mind of its own: The Safe-Tcl language for enabled mail. In IFIP International Working Conference on Upper Layer Protocols, Architectures and Applications, 1994.
[BSP+ 95] Brian N. Bershad, Stefan Savage, Przemyslaw Pardyak, Emin Gün Sirer, Marc Fiuczynski, David Becker, Susan Eggers, and Craig Chambers. Extensibility, safety and performance in the SPIN operating system. In Proceedings of the 15th ACM Symposium on Operating Systems Principles, pages 267–284, Copper Mountain, Colorado, December 1995.
[BSS+ 95] Lee Badger, Daniel F. Sterne, David L. Sherman, Kenneth M. Walker, and Sheila A. Haghighat. Practical domain and type enforcement for UNIX. In Proceedings of the 1995 IEEE Symposium on Security and Privacy, Oakland, California, May 1995.
[BTS+ 98] Godmar Back, Patrick Tullman, Leigh Stoller, Wilson C. Hsieh, and Jay Lepreau. Java operating systems: Design and implementation. Technical report, University of Utah, 1998.
[CCK98] Geoff Cohen, Jeff Chase, and David Kaminsky. Automatic program transformation with JOIE. In Proceedings of the 1998 USENIX Annual Technical Symposium, pages 167–178, New Orleans, Louisiana, June 1998.
//www.lucent-inferno.com/Pages/Developers/Documentation/
White_Papers/commedia.html.
[Cul98] Cult of the Dead Cow. Back Orifice, August 1998. http://www.
cultdeadcow.com.
[CW87] D. Clark and D. Wilson. A comparison of commercial and military computer security policies. In Proceedings of the 1987 IEEE Symposium on Security and Privacy, pages 184–194, Oakland, California, April 1987.
[DA97] Tim Dierks and Christopher Allen. The TLS Protocol, Version 1.0. In-
ternet Engineering Task Force, November 1997. Internet draft, ftp:
//ietf.org/internet-drafts/draft-ietf-tls-protocol-05.txt.
[DE97a] Sophia Drossopoulou and Susan Eisenbach. Is the Java type system sound? In International Workshop on Foundations of Object-Oriented Languages, January 1997.
[DE97b] Sophia Drossopoulou and Susan Eisenbach. Java is type safe — probably. In Proceedings of the 11th European Conference on Object-Oriented Programming (ECOOP ’97), June 1997.
[Dea97] Drew Dean. The security of static typing with dynamic linking. In Fourth ACM Conference on Computer and Communications Security, Zurich, Switzerland, April 1997.
[DFW96] Drew Dean, Edward W. Felten, and Dan S. Wallach. Java security: From HotJava to Netscape and beyond. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, Oakland, California, May 1996.
[DFWB97] Drew Dean, Edward W. Felten, Dan S. Wallach, and Dirk Balfanz. Java security: Web browsers and beyond. In Dorothy E. Denning and Peter J. Denning, editors, Internet Besieged: Countering Cyberspace Scofflaws. ACM Press, 1997.
[DGC98] Greg DeFouw, David Grove, and Craig Chambers. Fast interprocedural class analysis. In Proceedings of the 25th ACM Symposium on Principles of Programming Languages, San Diego, California, January 1998.
[Ele96] Electric Communities, Sunnyvale, California. The Electric Communities
Trust Manager and Its Use to Secure Java, September 1996. http://www.
communities.com/company/papers/trust/.
[ER89] Mark W. Eichin and Jon A. Rochlis. With microscope and tweezers:
An analysis of the Internet virus of November 1988. In Proceedings of
the 1989 IEEE Symposium on Security and Privacy, pages 326–343, Oakland, California, May 1989.
[Far93] Dan Farmer. COPS (Computer Oracle and Password System), May 1993.
http://www.trouble.org/cops/.
103–115, 1995.
[FHL+ 96] Bryan Ford, Mike Hibler, Jay Lepreau, Patrick Tullman, Godmar Back, and Stephen Clawson. Microkernels meet recursive virtual machines. In Proceedings of the Second Symposium on Operating Systems Design and Implementation (OSDI ’96), Seattle, Washington, October 1996.
96-58, Department of Information and Computer Science, University of California, Irvine.
[FKK96] Alan O. Freier, Philip Karlton, and Paul C. Kocher. The SSL Protocol, Version 3.0. Internet Engineering Task Force, November 1996. Internet draft, draft-freier-ssl-version3-01.txt.
[Fla97] David Flanagan. JavaScript: The Definitive Guide. O’Reilly & Associates, Inc., Sebastopol, California, 1997.
Telescript/TDE/TDEDOCS_HTML/telescript.html.
[GJS96] James Gosling, Bill Joy, and Guy Steele. The Java Language Specification.
Addison-Wesley, Reading, Massachusetts, 1996.
jecf_gateway.ps.
the 1989 IEEE Symposium on Security and Privacy, pages 56–63, Oakland, California, May 1989.
[Gri98] David Griswold. The Java HotSpot Virtual Machine Architecture. Sun Microsystems, 1998.
products/hotspot/whitepaper.html.
1995.
[GWTB96] Ian Goldberg, David Wagner, Randi Thomas, and Eric A. Brewer. A secure environment for untrusted helper applications: Confining the wily hacker. In Proceedings of the Sixth USENIX Security Symposium, San Jose, California, July 1996.
[Har88] Norman Hardy. The confused deputy. ACM Operating Systems Review,
22(4):36–38, October 1988. http://www.cis.upenn.edu/~KeyKOS/
ConfusedDeputy.html.
[HCC+ 98] Chris Hawblitzel, Chi-Chao Chang, Grzegorz Czajkowski, Deyu Hu, and Thorsten von Eicken. Implementing multiple protection domains in Java. In Proceedings of the 1998 USENIX Annual Technical Symposium, New Orleans, Louisiana, June 1998.
[Hen82] John L. Hennessy. Symbolic debugging of optimized code. ACM Transactions on Programming Languages and Systems, 4(3):323–344, July 1982.
1997. http://www.cert.org/research/JHThesis/.
[Hu95] Wei Hu. DCE Security Programming. O’Reilly & Associates, Inc., Sebastopol, California, 1995.
[HYHD95] Richard C. Ho, C. Han Yang, Mark Horowitz, and David L. Dill. Architecture validation for processors. In Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995.
Scanner.pdf.
ceedings of the 1984 IEEE Symposium on Security and Privacy, pages 2–12,
[KN93] John T. Kohl and Clifford Neuman. The Kerberos network authentication service (V5). Internet RFC 1510, September 1993.
[LABW92] Butler Lampson, Martín Abadi, Michael Burrows, and Edward Wobber. Authentication in distributed systems: Theory and practice. ACM Transactions on Computer Systems, 10(4):265–310, November 1992.
[LB98] Sheng Liang and Gilad Bracha. Dynamic class loading in the Java virtual machine. In Proceedings of the ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA ’98), October 1998.
[LDOW98] Jacob Y. Levy, Laurent Demailly, John K. Ousterhout, and Brent B. Welch. The Safe-Tcl security model. In Proceedings of the 1998 USENIX Annual Technical Symposium, New Orleans, Louisiana, June 1998.
[LY96] Tim Lindholm and Frank Yellin. The Java Virtual Machine Specification.
Addison-Wesley, Reading, Massachusetts, 1996.
edition, 1962.
[MD97] Jon Meyer and Troy Downing. Java Virtual Machine. O’Reilly & Asso-
ciates, Inc., Sebastopol, California, March 1997.
[MF97] Gary McGraw and Edward W. Felten. Java Security: Hostile Applets,
Holes, and Antidotes. John Wiley and Sons, New York, New York, 1997.
security/tech/authcode/authcode-f.htm.
microsoft.com/ie/security/ie4security.htm.
[Mic97b] Microsoft Corporation, Redmond, Washington. Trust-Based Security for Java, 1997. jsecwp.htm.
[MMS97] John C. Mitchell, Mark Mitchell, and Ulrich Stern. Automated analysis of cryptographic protocols using Murϕ. In Proceedings of the 1997 Symposium on Network and Distributed System Security (NDSS ’97), San Diego, California, 1997.
[MRR98] Dahlia Malkhi, Michael Reiter, and Avi Rubin. Secure execution of Java applets using a remote playground. In Proceedings of the 1998 IEEE Symposium on Security and Privacy, Oakland, California, May 1998.
[MTH90] Robin Milner, Mads Tofte, and Robert Harper. The Definition of Stan-
dard ML. MIT Press, Cambridge, Massachusetts, 1990.
[Nat85] National Computer Security Center, Fort Meade, Maryland. Department of Defense Trusted Computer System Evaluation Criteria, December 1985. DoD 5200.28-STD.
naturalbridge.com.
[NBF+ 80] Peter G. Neumann, Robert S. Boyer, Richard J. Feiertag, Karl N. Levitt,
documentation/signedobj/jarfile/.
signedobj/capabilities/index.html.
[NL96] George C. Necula and Peter Lee. Safe kernel extensions without run-time checking. In Proceedings of the Second Symposium on Operating Systems Design and Implementation (OSDI ’96), October 1996.
htmfiles/0511/x0011.htm.
[Obj96] Object Management Group. Common Secure Interoperability, July 1996.
October 1996.
[PD96] Seungjoon Park and David L. Dill. Verification of FLASH cache coherence protocol by aggregation of distributed transactions. In Proceedings of the 8th ACM Symposium on Parallel Algorithms and Architectures, June 1996.
[PPTT90] Rob Pike, Dave Presotto, Ken Thompson, and Howard Trickey. Plan 9 from Bell Labs. In Proceedings of the Summer 1990 UKUUG Conference, London, July 1990.
[Ros96a] Jim Roskind. Evolving the Security Model for Java from Navigator 2.x to Navigator 3.x. Netscape Communications Corporation, 1996. library/technote/security/sectn1.html.
[Ros96b] Jim Roskind. Java and security. In Netscape Internet Developer Conference, 1996. com/misc/developer/conference/.
[SA98] Raymie Stata and Martín Abadi. A type system for Java bytecode subroutines. In Proceedings of the 25th ACM Symposium on Principles of Programming Languages, San Diego, California, January 1998.
1974.
[Sch96] Bruce Schneier. Applied Cryptography. John Wiley and Sons, New York,
New York, 2nd edition, 1996.
[SESS96] Margo I. Seltzer, Yasuhiro Endo, Christopher Small, and Keith A. Smith. Dealing with disaster: Surviving misbehaved kernel extensions. In Proceedings of the Second Symposium on Operating Systems Design and Implementation (OSDI ’96), Seattle, Washington, October 1996.
[SGB+ 98] Emin Gün Sirer, Robert Grimm, Brian N. Bershad, Arthur J. Gregory,
[Sib96] W. Olin Sibert. Malicious data and computer security. In 19th National Information Systems Security Conference, October 1996.
[Sie96] Jon Siegel, editor. CORBA Fundamentals and Programming. John Wiley and Sons, New York, New York, 1996.
cs.washington.edu, 1997.
[Ste78] Guy L. Steele. Rabbit: A compiler for Scheme. Technical Report AI-TR-474, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1978.
1994.
on Lisp and Functional Programming, pages 1–12, New York, June 1990.
ACM Press.
[TL98] Patrick Tullman and Jay Lepreau. Nested Java processes: OS structure
for mobile code. In Eighth ACM SIGOPS European Workshop, Septem-
ber 1998.
http://www.transvirtual.com.
[vABW96] Leendert van Doorn, Martín Abadi, Michael Burrows, and Edward Wobber. Secure network objects. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, Oakland, California, May 1996.
[WABL94] Edward Wobber, Martín Abadi, Michael Burrows, and Butler Lampson. Authentication in the Taos operating system. ACM Transactions on Computer Systems, 12(1):3–32, February 1994.
[WBDF97] Dan S. Wallach, Dirk Balfanz, Drew Dean, and Edward W. Felten. Extensible security architectures for Java. In Proceedings of the 16th ACM Symposium on Operating Systems Principles, pages 116–128, Saint-Malo, France, October 1997.
[WF98] Dan S. Wallach and Edward W. Felten. Understanding Java stack inspection. In Proceedings of the 1998 IEEE Symposium on Security and Privacy, Oakland, California, May 1998.
[WLAG93] Robert Wahbe, Steven Lucco, Thomas E. Anderson, and Susan Graham. Efficient software-based fault isolation. In Proceedings of the Fourteenth ACM Symposium on Operating Systems Principles, pages 203–216, Asheville, North Carolina, December 1993.
[WRF96] Dan S. Wallach, Jim A. Roskind, and Edward W. Felten. Flexible, extensible Java security using digital signatures. In DIMACS Workshop on Network Threats, 1996.