ASSURANCE IN TRUSTED O.S.

Typical Operating System Flaws
Over the years, many vulnerabilities have been uncovered in many operating systems. They have gradually been corrected, and the body of knowledge about likely weak spots has grown over time.
Known Vulnerabilities

  1. User interaction is the largest single source of operating system vulnerabilities, for several reasons. The user interface is performed by independent, intelligent hardware subsystems, and the human-computer interface often falls outside the security kernel or the security restrictions implemented by an operating system. Code to interact with users is also much more complex and much more dependent on the specific device hardware than code for any other component of the computing system, so it is harder to review this code for correctness, let alone to verify it formally. Finally, user interactions are often character oriented, and in the interest of fast data transfer, operating system designers may take shortcuts by limiting the number of instructions executed during the transfer itself. Sometimes the instructions eliminated are those that enforce security policies as each character is transferred (the first sketch after this list illustrates the shortcut).
  2. Ambiguity in access policy: On one hand, we want to separate users and protect their individual resources. On the other hand, users depend on shared access to libraries, utility programs, common data, and system tables. The distinction between isolation and sharing is not always clear at the policy level, so it cannot be drawn sharply in the implementation.
  3. Incomplete mediation: Recall that Saltzer recommended an operating system design in which every requested access is checked for proper authorization. However, some systems check access only once per user interface operation, process execution, or machine interval. The mechanism to implement full protection is available, but the policy decision on when to invoke it is incomplete. Therefore, in the absence of an explicit requirement, system designers adopt the "most efficient" enforcement, that is, the one that leads to the least use of machine resources (the second sketch after this list contrasts the two approaches).
  4. Generality: Generality is a protection weakness, especially among commercial operating systems for large computing systems. Implementers try to provide a means for users to customize their operating system installation and to allow the installation of software packages written by other companies. Some of these packages, which themselves operate as part of the operating system, must execute with the same access privileges as the operating system. For example, there are programs that provide stricter access control than the standard control available from the operating system. The "hooks" by which these packages are installed are also trapdoors for any user to penetrate the operating system. Thus, several well-known points of security weakness are common to many commercial operating systems.
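
As a concrete, deliberately simplified illustration of the first point, the C sketch below contrasts a character-oriented output loop that enforces a per-character policy with a "fast" loop that omits the check. The names policy_allows and put_char, and the filtering policy itself, are hypothetical stand-ins, not the interface of any real kernel.

    #include <ctype.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical per-character policy: pass printable characters and
       newlines, suppress control bytes that could reprogram a terminal. */
    static bool policy_allows(int c) { return isprint(c) || c == '\n'; }

    /* Stand-in for the device output routine. */
    static void put_char(int c) { putchar(c); }

    /* Checked transfer: the policy is enforced on every character. */
    static void write_checked(const char *buf, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (policy_allows((unsigned char)buf[i]))
                put_char(buf[i]);
    }

    /* "Fast" transfer: to save instructions in the inner loop, the
       per-character check has been dropped, the shortcut described above. */
    static void write_fast(const char *buf, size_t n) {
        for (size_t i = 0; i < n; i++)
            put_char(buf[i]);
    }

    int main(void) {
        const char msg[] = "hello\x1b[2J\n";  /* contains an ESC byte */
        write_checked(msg, sizeof msg - 1);   /* ESC filtered: sequence defanged */
        write_fast(msg, sizeof msg - 1);      /* ESC reaches the device intact */
        return 0;
    }

The two loops differ by a single test per character, which is precisely the kind of instruction a designer chasing transfer speed is tempted to remove.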
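In the same spirit, the third point can be sketched as the difference between checking authorization once, when a handle is created, and repeating the check on every access. The access_allowed function and the revocation flag below are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy policy state: the subject's right can be revoked at any time. */
    static bool subject_authorized = true;

    /* Hypothetical reference-monitor check. */
    static bool access_allowed(void) { return subject_authorized; }

    /* Incomplete mediation: authorization was checked only when the handle
       was created, so a revocation after that point is never noticed. */
    static int read_once_checked(void) {
        return 42;   /* data returned unconditionally */
    }

    /* Complete mediation: every access repeats the check. */
    static int read_fully_mediated(void) {
        return access_allowed() ? 42 : -1;
    }

    int main(void) {
        subject_authorized = false;  /* simulate revocation mid-session */
        printf("once-checked read:   %d\n", read_once_checked());   /* 42 */
        printf("fully mediated read: %d\n", read_fully_mediated()); /* -1 */
        return 0;
    }

The once-checked version is cheaper per access, which is why, absent an explicit requirement, it tends to be the enforcement designers choose.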

Testing
Testing is the most widely accepted assurance technique. As Boebert observes, conclusions from testing are based on the actual product being evaluated, not on some abstraction or precursor of the product. This realism is a security advantage. However, conclusions based on testing are necessarily limited, for the following reasons:

  1. Testing can demonstrate the existence of a problem, but passing tests does not demonstrate the absence of problems.
  2. Testing adequately within reasonable time or effort is difficult because the combinatorial explosion of inputs and internal states makes exhaustive testing infeasible (a back-of-the-envelope sketch follows this list).
  3. Testing based only on observable effects, not on the internal structure of a product, does not ensure any degree of completeness.
  4. Testing based on the internal structure of a product involves modifying the product by adding code to extract and display internal states. That extra functionality affects the product's behavior and can itself be a source of vulnerabilities or mask other vulnerabilities (the second sketch after this list shows such instrumentation in miniature).
  5. Testing real-time or complex systems presents the problem of keeping track of all states and triggers. This makes it hard to reproduce and analyze the problems testers report.
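
To make the second limitation concrete: even a pure function of two 32-bit inputs has 2^64 possible input pairs, so exhaustive testing is hopeless. The back-of-the-envelope C sketch below assumes a generous, hypothetical budget of one billion tests per second sustained for a full year and computes the fraction of the input space that budget covers.

    #include <stdint.h>
    #include <stdio.h>

    /* Any pure function of two 32-bit inputs will do for the argument. */
    static uint32_t f(uint32_t a, uint32_t b) { return a * 2654435761u ^ b; }

    int main(void) {
        (void)f;  /* silence unused-function warning in this sketch */
        long double total  = 18446744073709551616.0L;   /* 2^64 input pairs */
        long double tested = 1e9L * 60 * 60 * 24 * 365; /* 1e9 tests/s, 1 year */
        /* Prints roughly 1.7e-03: over 99.8% of the space stays untested. */
        printf("fraction covered: %Lg\n", tested / total);
        return 0;
    }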
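The fourth limitation can also be shown in miniature. In the hypothetical C fragment below, a TRACE macro extracts and displays internal state when the test build is compiled with -DINSTRUMENT; the macro and build flag are invented for illustration.

    #include <stdio.h>

    /* Instrumented builds (compiled with -DINSTRUMENT) extract and display
       internal state; shipped builds compile the macro away entirely. */
    #ifdef INSTRUMENT
    #define TRACE(s) fprintf(stderr, "state: %d\n", (s))
    #else
    #define TRACE(s) ((void)0)
    #endif

    static int step(int s) {
        TRACE(s);       /* extra I/O alters timing and code layout */
        return s + 1;
    }

    int main(void) {
        int s = 0;
        for (int i = 0; i < 3; i++)
            s = step(s);
        printf("final state: %d\n", s);
        return 0;
    }

The binary observed under test is thus not the binary that ships, which is exactly the concern raised in the fourth point.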

Formal Verification

The most rigorous method of analyzing security is formal verification, which uses rules of mathematical logic to demonstrate that a system has certain security properties. In formal verification, the operating system is modeled, and the operating system principles are described as assertions. The collection of models and assertions is viewed as a theorem, which is then proven. The theorem asserts that the operating system is correct: formal verification confirms that the operating system provides the security features it should and nothing else.

Proving the correctness of an entire operating system is a formidable task, often requiring months or even years of effort by several people. Computer programs called theorem provers can assist in this effort, although much human activity is still needed.
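
The following Lean 4 fragment is a toy illustration of this style of reasoning, not a model of any real operating system: a hypothetical access rule is written as a definition, and a security assertion about it is stated as a theorem and proved with machine-checked logic.

    -- A toy illustration in Lean 4, not a model of any real system.
    -- Hypothetical rule: access is granted only when the subject's
    -- clearance dominates the object's sensitivity label.
    def granted (clearance label : Nat) : Bool :=
      decide (label ≤ clearance)

    -- Security assertion ("no read up"): whenever access is granted,
    -- the label really is dominated by the clearance.
    theorem no_read_up (c l : Nat) (h : granted c l = true) : l ≤ c := by
      simpa [granted] using h

A verification effort for a real operating system works the same way in principle, but with models and assertion sets that are vastly larger, which is why it takes person-years rather than minutes.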

Validation
Formal verification is an instance of the more general approach to assuring correctness: verification. Validation is the counterpart to verification, assuring that the system developers have implemented all requirements. Thus, validation makes sure that the developer is building the right product (according to the specification), and verification checks the quality of the implementation. There are several ways to validate an operating system.

  1. Requirements checking: One technique is to cross-check each operating system requirement with the system's source code or execution-time behavior. The goal is to demonstrate that the system does each thing listed in the functional requirements. This process is narrow, in the sense that it demonstrates only that the system does everything it should do. In security, we are equally concerned about prevention: making sure the system does not do the things it is not supposed to do. Requirements checking seldom addresses this side of compliance (the sketch after this list probes both sides).
  2. Design and code reviews: Design and code reviews usually address system correctness (that is, verification). But a review can also address requirements implementation. To support validation, the reviewers scrutinize the design or the code to ensure traceability from each requirement to design and code components, noting problems along the way (including faults, incorrect assumptions, incomplete or inconsistent behavior, or faulty logic). The success of this process depends on the rigor of the review.
  3. System testing: The programmers or an independent test team select data to check the system. These test data can be organized much like acceptance testing, so behaviors and data expected from reading the requirements document can be confirmed in the actual running of the system. The checking is done in a methodical manner to ensure completeness.
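
A hedged sketch ties the last two points together. In the C test below, the requirement labels (REQ-7, REQ-8) and the login function are invented for illustration; the shape of the suite shows how system tests can be traced to individual requirements and, crucially, can probe what the system must not do as well as what it must.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical system under test. */
    static bool login(const char *user, const char *password) {
        return strcmp(user, "alice") == 0 && strcmp(password, "s3cret") == 0;
    }

    int main(void) {
        /* REQ-7 (functional): a valid user with a valid password gets in. */
        assert(login("alice", "s3cret"));

        /* REQ-8 (prevention): the system must NOT grant access on a bad
           password or an unknown user, the "does not do" side that plain
           requirements checking tends to miss. */
        assert(!login("alice", "wrong"));
        assert(!login("mallory", "s3cret"));
        assert(!login("", ""));

        puts("all requirement checks passed");
        return 0;
    }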