Protecting the confidentiality of information manipulated by a computing system is one of the most important challenges facing today's cybersecurity community. Many complex systems, such as operating systems, hypervisors, web browsers, and distributed systems, require a user to trust that private information is properly isolated from other users. Real-world systems are full of bugs, however, so this assumption of trust is not reasonable.
The goal of this dissertation is to apply formal methods to complex security-sensitive systems, in such a way that we can guarantee to users that these systems really are trustworthy. Unfortunately, there are numerous prohibitive challenges standing in the way of achieving this goal.
One challenge is how to specify the desired security policy of a complex system. In the real world, pure noninterference is too strong to be useful. It is crucial to support more lenient security policies that allow certain well-specified information flows between users, such as explicit declassifications. Furthermore, the specified policy must be comprehensible to users at a high level of abstraction, yet must also apply to the low-level system implementation.
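For reference, classical noninterference over a two-level lattice can be stated roughly as follows (a textbook formulation, not necessarily the exact definition used in this work): whenever two initial states agree on all public (low-security) data, their observable behaviors must also agree.

    $\forall \sigma_1\,\sigma_2.\;\; \sigma_1 \approx_L \sigma_2 \;\Longrightarrow\; \mathit{behav}(\sigma_1) \approx_L \mathit{behav}(\sigma_2)$

A declassification policy deliberately weakens this statement by permitting specified flows from secret to public, which is precisely why a more lenient, generalized formulation is needed.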
A second challenge is that real-world systems are usually written in low-level languages like C and assembly, which are traditionally difficult to reason about. Moreover, even if we successfully verify individual C and assembly functions, how do we link them together? The obvious answer is to link after the C code is compiled into assembly, but this requires trusting that the compiler did not accidentally or maliciously introduce security bugs. This is a very difficult problem, especially since a compiler may fail to preserve security even when it correctly preserves functional behavior.
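As a concrete illustration of the last point, consider the following C sketch (hypothetical, not taken from the dissertation; load_secret_key and use_key are assumed helpers). Scrubbing a secret with memset is functionally a dead store, so an optimizing compiler may legally delete it, preserving functional behavior while leaving the secret in memory:

    #include <string.h>

    void load_secret_key(char *buf);   /* assumed helper: fills buf with a secret */
    void use_key(const char *buf);     /* assumed helper: uses the secret */

    void handle_request(void) {
        char key[32];
        load_secret_key(key);
        use_key(key);
        /* Dead store: 'key' is never read again, so the compiler may
           remove this call even though it is essential for security. */
        memset(key, 0, sizeof key);
    }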
A third challenge is how to actually conduct a security proof over low-level code. Traditional security type systems do not work well here, since they require a strongly-typed language; how, then, can a security violation be detected in untyped C or assembly code? In fact, it is common for code to temporarily violate a security policy, perhaps for performance reasons, without ever performing any observable behavior influenced by the violation; how can we reason that such code is acceptably secure? Finally, how do we conduct the proof in a unified way that lets us link everything together into a system-wide guarantee?
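The following C sketch (hypothetical; names are illustrative) shows the kind of transient violation in question. For speed, an entire record, secret field included, is copied with a single memcpy; the secret is scrubbed before anything becomes observable, so no observable behavior ever depends on it:

    #include <string.h>

    struct record { int public_id; int secret_key; };

    void make_public_view(const struct record *src, struct record *out) {
        memcpy(out, src, sizeof *out);  /* transient flow: secret copied too */
        out->secret_key = 0;            /* scrubbed before any observation */
        /* only *out, with secret_key zeroed, is ever made observable */
    }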
In this dissertation, we make two major contributions that achieve our goal by overcoming all of these challenges. The first is a novel methodology for formally specifying, proving, and propagating information-flow security policies using a single unifying mechanism, the "observation function". A policy is specified in terms of an expressive generalization of classical noninterference, proved using a general method that subsumes both security-label proofs and information-hiding proofs, and propagated across layers of abstraction using a special kind of simulation that is guaranteed to preserve security.
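A rough sketch of the idea in LaTeX notation (paraphrased; not the dissertation's precise statement): each principal $p$ is assigned an observation function $\mathcal{O}_p$ from machine states to whatever $p$ is permitted to observe, indistinguishability is equality of observations, and security requires execution to preserve indistinguishability:

    $\mathcal{O}_p(\sigma_1) = \mathcal{O}_p(\sigma_2) \,\wedge\, \sigma_1 \longrightarrow^{*} \sigma_1' \,\wedge\, \sigma_2 \longrightarrow^{*} \sigma_2' \;\Longrightarrow\; \mathcal{O}_p(\sigma_1') = \mathcal{O}_p(\sigma_2')$

Declassification is then expressed by choosing an $\mathcal{O}_p$ that already reveals exactly the data $p$ may learn, and a simulation that commutes appropriately with the observation functions propagates the guarantee down to the implementation.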
To demonstrate the effectiveness of our new methodology, our second major contribution is an actual end-to-end security proof, fully formalized and machine-checked in the Coq proof assistant, of a nontrivial operating system kernel. Our artifact is the first guaranteed-secure kernel involving both C and assembly code, including compilation of the C code into assembly. Our final result guarantees the following notion of isolation: as long as direct inter-process communication is not used, user processes executing over the kernel cannot influence each other's executions in any way. During the verification effort, we discovered and fixed some interesting security holes in the kernel, such as one that exploits child process IDs as a side channel for communication.
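To give a flavor of the child-ID side channel, here is a hypothetical reconstruction in C (the details and interfaces are illustrative, not the kernel's actual ones). If child IDs are drawn from a single global counter, a sender can encode a bit in how many children it spawns, and a receiver can decode it from the gap between the IDs of its own successive children:

    #include <sys/types.h>
    #include <unistd.h>

    /* Sender: spawn (bit + 1) children, advancing the global ID counter. */
    void send_bit(int bit) {
        for (int i = 0; i < bit + 1; i++)
            if (fork() == 0)
                _exit(0);
    }

    /* Receiver: fork one child per round and record its ID; the gap between
       two consecutive rounds' IDs reveals how many children others created. */
    pid_t probe(void) {
        pid_t id = fork();
        if (id == 0)
            _exit(0);
        return id;
    }

Closing such a channel requires either per-process ID allocation or an observation function under which the IDs a process sees carry no information about other processes.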
We also demonstrate the generality and extensibility of our methodology by extending the kernel with a virtualized time feature that allows user processes to time their own executions. With relatively minor effort, we prove that this new feature obeys our isolation policy, guaranteeing that user processes cannot exploit virtualized time as an information channel.
Keywords: Formal Methods, Information-Flow Security, Program Logics, Software Verification