Agreed-upon terminology and principles of classification (a taxonomy) are two of the necessary prerequisites to systematic studies in any field of inquiry [McK82:3]. The development of a comprehensive taxonomy in the field of computer security has been an intractable problem of increasing interest [Amo94:31]. Even the potential for partial success in this area makes this effort valuable.
The first step in the development of a comprehensive taxonomy for the classification of computer and network security attacks and incidents was to define computer security. This was done by first examining alternative definitions of computer security and then narrowing the definitions toward the following formal definition: Computer security is preventing attackers from achieving objectives through unauthorized access or unauthorized use of computers and networks. This formal definition provided a boundary to the computer and network security field that was then expanded into the taxonomy described in Chapter 6.
5.1. Simple Computer Security Definitions
In the early days of computing, computer security was of little concern. The number of computers and the number of people with access to those computers was limited [GaS96:11; Amo94:1]. The first computer security problems, however, emerged as early as the 1950's, when computers began to be used for classified information. Confidentiality (also termed secrecy) was the primary security concern [RuG91:9], and the primary threats were espionage and the invasion of privacy. At that time, and until recently, computer security was primarily a military problem, viewed as essentially synonymous with information security. From this perspective, security is obtained by protecting the information itself.
By the late 1960's, the sharing of computer resources and information, both within a computer and across networks, presented additional security problems. Computer systems with multiple users required operating systems that could keep users from intentionally or inadvertently interfering with each other [GaS96:15]. Network connections also provided additional potential avenues of attack that could not generally be secured physically. Disclosure of information was no longer the only security concern. Added to this was concern over maintaining the integrity of the information. Conventional wisdom dating from this period was that governments are primarily concerned with preventing the disclosure of information, while businesses are primarily concerned with protecting the integrity of the information, although this is becoming less the case [Amo94:4].
In their popular text on Internet security and firewalls, Cheswick and Bellovin define computer security to be "keeping anyone from doing things you do not want them to do to, with, on, or from your computers or any peripheral devices [ChB94:3]." Using this definition, computers are seen to be targets that can be attacked ("do to"), or tools that can be used ("do . . . with, on, or from"). From this perspective, computer security is distinguished from information security. "Computer security is not a goal, it is a means toward a goal: information security [ChB94:4]."
A more operational definition is presented by Garfinkel and Spafford in their text on Unix and Internet security: "A computer is secure if you can depend on it and its software to behave as you expect . . . . This concept is often called trust: you trust the system to preserve and protect your data [GaS96:6]." The authors intend for this definition to include natural disasters and buggy software as security concerns, but to exclude software development and testing issues.
These definitions are relatively informal, and as a result, they are not adequate for the development of a taxonomy of computer security problems. Ideally, a definition would unambiguously demarcate the boundaries of the field of concern. For example, natural disasters and buggy software can both result in damage to computer files, and, therefore, a very broad definition of computer security would include both of these. As a practical matter, however, the computer security field is not usually considered to be this inclusive. Garfinkel and Spafford include these concerns in their definition of computer security, but they then narrow their focus to "techniques to help keep your system safe from other people - including both insiders and outsiders, those bent on destruction, and those who are simply ignorant or untrained [GaS96:7]."
5.2. Narrowing the Definition of Computer Security
There are many events that could result in damage to or loss of computer files that are included in the broad, informal definitions of computer security, but they are more appropriately considered part of related security fields. Theft of computer equipment would certainly result in the loss of computer files, but this type of theft is similar to the theft of the copy machine, telephone, jewelry, or any other physical object. Methods to provide security for physical objects are well-developed, and are not unique to computer equipment. Environmental threats, such as earthquakes, floods, lightning, power fluctuations, humidity, dust, varying temperatures, and fire, can also result in damage to computer files, but they also can cause damage to other property. It seems customary for authors to include these threats within their broad computer security definitions, but they then proceed to exclude discussions of these problems in their texts or papers on computer security. The definition of computer security developed here is intended to explicitly exclude these areas.
Another similar area involves software. "Buggy" software is certainly a threat to computer files. Improperly implemented software could cause files to be damaged or lost. But this does not, of course, mean that we should include software development as a subset of the computer security field. Most software development issues, instead, fall outside of the computer security field. Software errors, however, clearly lead to security problems: they sometimes create vulnerabilities that can then be exploited. In fact, software that operates correctly can also be a security problem when it is operated in a manner which was not intended. Software problems will be included in the taxonomy developed in Chapter 6 as a method for the introduction of system vulnerabilities that could be exploited to breach computer security.
A common method to narrow the definition of computer security is to concentrate on the three categories of computer security: confidentiality, integrity, and availability [RuG91:9, Lan81:251].
Confidentiality requires that information be accessible only to those authorized for it, integrity requires that information remain unaltered by accidents or malicious attempts, and availability means that the computer system remains working without degradation of access and provides resources to authorized users when they need it [Kum95:1].
This concentration focuses computer security on the protection of computer files, and ensuring the availability of the computer and network system. This focus is too narrow for at least two reasons. First, as will be shown in Chapters 7 and 10, the most common type of attack seen on the Internet appears to be motivated by the objective to gain access to a superuser or root account on a Unix-based computer system. More specifically, the access sought is to a command interpreter or shell which has full access to the computer. In other words, the access sought is to a process that is operating (the shell) and not necessarily to the files. Many attackers indeed are attempting to use the process access to gain access to the files, but many are simply after the process access itself.
The other reason this focus is too narrow is found in the security architecture of Unix-based computer systems, where security is based on protection of objects, which include both processes and files. Access to processes is commonly restricted by accounts to which the user must log in, such as by entering the correct user name and password. Once an attacker gains access to a process, then the process must be used to gain access to files. In other words, access to a file system requires two steps: access to a process, then access to the file. This is illustrated by a typical Unix process, such as the /bin/cp utility (used to copy files). A user gets access to this utility upon successfully logging into an account. Access to the /bin/cp utility, however, does not mean that the user can now use this process to copy any file. When a process runs, it may access only a limited collection of files that are associated with the user [Tan92:193]. The user may, therefore, use the /bin/cp utility only to copy files for which that user has the appropriate permission.
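The two-step access model described above can be sketched as a short simulation. The file table, user IDs, and permission flags below are invented for illustration; a real Unix system enforces this check in the kernel using the full owner/group/other mode bits.

```python
# Toy model of the two-step Unix access control described above:
# a user first gains access to a process (by logging in), and that
# process may then open only files the user is permitted to read.
# All paths, uids, and permission flags here are illustrative.

# Hypothetical file table: path -> owner uid and simplified read bits
FILES = {
    "/home/alice/notes.txt": {"owner": 1001, "owner_read": True, "other_read": False},
    "/etc/motd":             {"owner": 0,    "owner_read": True, "other_read": True},
}

def can_read(uid, path):
    """Return True if a process running as `uid` may read `path`."""
    meta = FILES[path]
    if uid == meta["owner"]:
        return meta["owner_read"]
    return meta["other_read"]

def cp(uid, src):
    """Simulate /bin/cp: the user has access to the process, but the
    copy succeeds only if the user may read the source file."""
    if not can_read(uid, src):
        raise PermissionError(f"uid {uid} may not read {src}")
    return f"copied {src}"

print(cp(1001, "/home/alice/notes.txt"))   # owner of the file: allowed
print(cp(2002, "/etc/motd"))               # world-readable file: allowed
try:
    cp(2002, "/home/alice/notes.txt")      # neither owner nor world-readable
except PermissionError as e:
    print("denied:", e)
```

The point of the sketch is that gaining access to the process (`cp`) is a separate step from gaining access to any particular file: the second check is applied on every use.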
In addition to using processes to access files, processes may also be used to access data that is in transit across a network. In this case, these data are not contained in files which would be located in primary memory (the computer's volatile random-access memory), or in secondary memory (storage disks). They are instead a stream of data packets in transit. These can be accessed by processes operating at the origin host for the data transmissions, at the destination host, or at hosts in between through which the data pass.
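A minimal sketch of this third category: data in transit exists only as a byte stream between two processes, and it is read not from a file but by a process at one of the transmission endpoints. The loopback address and ephemeral port below are assumptions for the sake of a self-contained example.

```python
# Sketch: data in transit is accessed by processes at the endpoints
# of a transmission, not read from primary or secondary memory files.
import socket
import threading

received = []

def destination(sock):
    """Destination-host process: reads the data stream off the network."""
    conn, _ = sock.accept()
    chunks = []
    while True:
        data = conn.recv(1024)
        if not data:          # sender closed the connection
            break
        chunks.append(data)
    received.append(b"".join(chunks))
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # loopback, OS-chosen port (illustrative)
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=destination, args=(listener,))
t.start()

# Origin-host process: puts the data "in transit"
origin = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
origin.connect(("127.0.0.1", port))
origin.sendall(b"data in transit")
origin.close()

t.join()
listener.close()
print(received[0])
```

Between `sendall` and the destination's `recv`, the bytes belong to no file at all; an intermediate host forwarding the packets could read them with a process of its own, which is precisely the exposure the text describes.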
In summary, conceptualizing computer security as being based on providing confidentiality, integrity, and availability in a computer system [Kum95:1] narrows the focus to the files in a system. Confidentiality and integrity specifically refer to the prevention of disclosure, alteration or deletion of the information contained in computer files [RuG91:9-10]. As discussed above, however, this is only one of the levels of access in a typical computer security system. Access controls are used to restrict access to processes, files, and data in transit.
5.3. Toward a More Formal Definition
With these criticisms in mind, I used the following two questions as a starting point for developing a more formal definition of computer security:
5.3.1. What resources are we trying to protect? - As the previous discussion suggests, the resources that we want to protect are the processes, files and data in transit, on computers and networks. As stated by Tanenbaum,
A process is basically a program in execution. It consists of the executable program, the program's data and stack, its program counter, stack pointer, and other registers, and all other information needed to run the program [Tan92:12].
A file is "a collection of records or data designated by name and considered as a unit by the user [LaL96:441]." These are usually stored in secondary memory (disks). Data in transit are packets of data that are being transmitted across a network.
Some authors suggest including other objects, such as databases, or semaphores [Tan92:193]. At the level of abstraction required for this research, it seemed unnecessary to make these distinctions. As such, processes were assumed to include their variables (such as semaphores) and the temporary files in volatile memory, and files were assumed to include databases, directories, etc. that are stored in secondary memory.
From the operational viewpoint, processes, files, and data in transit are not independent categories. While processes can be targeted separately, files and data in transit can only be reached through processes. On the other hand, before a process is activated, it is stored as a file. The important point, however, is that processes, files, and data in transit are secured separately. Because of this, it is appropriate to include all three separately as the "resources we are trying to protect."
The exception to this is physical attacks. In these cases, files or data in transit could be reached without first accessing a process. An example of this would be stealing floppy disks, hard disks or entire computers. As stated earlier, methods to provide security for physical objects are well-developed, and are not unique to computer equipment. As such, theft of hardware will not be included in this definition of computer security. Another possibility, however, would be the use of a data tap where a cable carrying network traffic is "listened" to by a device external to the network. Even the electromagnetic emanations surrounding a computer, sometimes called Van Eck radiation [Sch94:141], can be "listened" to for data being processed on the computer. These types of physical attacks are of concern in this research, although later chapters show that there is no example of such attacks in any of the CERT®/CC records. Of course, they would be hard to detect if they had occurred.
5.3.2. Against what? - This question could be interpreted in several ways. One way is as a question about what is being used to perform an attack. For example, an attacker could use a self-replicating computer code, such as a virus or worm, or the attacker could run a shell script that exploits a software bug to defeat access controls on a process. These are all "tools" that the attacker may use to accomplish an objective (discussed in Chapter 6). From the operational viewpoint, this interpretation is on the "means" portion of "means, ways, and ends," which is a common paradigm in military strategy that "defines objectives, identifies courses of action to achieve them, and provides the resources to support each course of action [Gue93:xv]."
The somewhat opposite perspective is to interpret "against what?" to mean the "ends" part of "means, ways, and ends." Computers must, therefore, be protected against the "ultimate objective," "purpose" or "target" of an attack. From this perspective, computer security is about preventing crimes such as theft, fraud, espionage, extortion, vandalism, and terrorism.
A third interpretation, also from the "ends" part of "means, ways, and ends," has already been discussed: computer and network files and data in transit must be protected from being read, altered or deleted (Section 5.2). In addition, computers and networks must be available when we want them [Amo94:3]. Cohen presents this viewpoint as follows:
I have taken the perspective that, regardless of the cause of a protection failure, there are three and only three sorts of things that can result:
1. Otherwise defect-free information can become corrupt,
2. Services that should be available can be denied, and/or
3. Information can get to places it should not go. [Coh95:54]
Cohen terms each of these results disruptions, which he specifically calls corruption, denial, and leakage [Coh95:54-55]. Steps taken to prevent disruption, which we can term protections, have already been discussed as integrity, availability, and confidentiality.
Each of these interpretations has its conceptual advantages, as well as its limitations. Computer and network processes, files, and data in transit must be protected from the "means" of attack, such as computer viruses, the exploitation of system vulnerabilities, etc. They must also be protected from the "ends" of attack: crimes, including theft, fraud, espionage, extortion, vandalism, and terrorism. Files and data in transit must be protected from corruption or leakage, and computers and networks must be available for use. In short, all of these interpretations of "what" computer and network processes and files must be protected against should be included in the definition of computer security.
In order to provide such a comprehensive definition of computer security, I adopted an interpretation of "against what" as being against the "ways" of attacks. This perspective is between the "means" and "ends" perspectives presented above. Two example attacks will illustrate this interpretation. In the first example, an attacker copies a password file from the target system using TFTP (trivial file transfer protocol). The password cracking program crack is used on this password file to obtain the password of a user's account. The attacker then uses telnet to sign into this account. Once in this account, the attacker runs a shell script to exploit a vulnerability and gain root privileges which the attacker uses to copy sensitive files and software. In the second example, an attacker floods the target system with nuisance electronic mail (e-mail), which causes the target system's hard disk to reach its storage limits and the system to stop processing.
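One of the "means" in the first example, the crack program, can be illustrated with a toy dictionary attack. The real tool worked against Unix crypt(3) password hashes; this sketch substitutes SHA-256, and the account name, password, and word list are all invented for illustration.

```python
# Toy illustration of a password-cracking "means" like crack:
# hash each candidate word and compare it against the stolen hash.
# Real crack used crypt(3); SHA-256 here is a stand-in, and all
# account names, passwords, and words are invented.
import hashlib

def hash_pw(password):
    return hashlib.sha256(password.encode()).hexdigest()

# A copied "password file": account -> stored hash (values invented)
password_file = {"alice": hash_pw("sunshine")}

# Hypothetical dictionary of candidate passwords
wordlist = ["letmein", "password", "sunshine", "qwerty"]

def crack(stored_hash, candidates):
    """Return the candidate whose hash matches the stored hash, or None."""
    for word in candidates:
        if hash_pw(word) == stored_hash:
            return word
    return None

print(crack(password_file["alice"], wordlist))  # sunshine
```

The attack works because the password file, once copied (here via TFTP in the example), can be tested offline at leisure; the access controls of the target system are never consulted during the guessing.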
As shown in Table 5.1, in the first example, the "means" of attack include tftp, crack, telnet, a shell script, and the exploitation of vulnerabilities in the system. The "ends" of the attack are the leakage of sensitive files and software. In the second example, the "means" of attack is a flood of e-mail, with the "ends" being a denial-of-service shutdown of the system.
Table 5.1. Two example attacks

Example attack | Means | Ways | Ends
copies password file, gains access to user account, then root privileges | tftp, crack, telnet, shell script, vulnerabilities | unauthorized access | copy files, software
sends e-mail to flood system | e-mail program | unauthorized use | denial-of-service
Table 5.1 also shows the "ways" of each of the example attacks. In the first example, tftp, crack, telnet, etc., are all used to defeat the access controls on the system in order to accomplish the ends of the attack: to copy files and software. Here the attacker is not authorized for the access. This is different from the second attack where the access to the e-mail program and even the target system is authorized. The access, however, is used in an unauthorized manner in order to flood the target system with e-mail and cause it to shut down. This is the perspective taken in my definition of computer security: on the "ways" of computer and network attacks. The two "ways" possible are either to gain unauthorized access, or, given an authorized access, to use that access in an unauthorized manner.
This separation of the "ways" into unauthorized access and unauthorized use is not mutually exclusive, and using one or the other term is not exhaustive. More specifically, access and use are not the same concept, although they are related in an attack. For example, when an attacker bypasses access controls (unauthorized access) in order to accomplish an objective, the attacker is also making inappropriate use of computers and networks (unauthorized use). An alternative would be to use the two terms unauthorized access and authorized access. The problem with this combination is the use of the word "authorized," which implies that not only the access but also the action (use) is authorized. Because I felt that it was more important to emphasize the unauthorized nature of an attacker's activities, I chose to use the first pair of terms (unauthorized access and unauthorized use), but it should be understood that unauthorized use implies authorized access. In addition, it should be understood that unauthorized access implies that this access will result in an unauthorized use.
5.4. A Formal Definition of Computer Security
The choice of perspective is not a neutral process; it depends on the questions being asked and on the purpose of the investigation. As stated by Landwehr, et al.,
A taxonomy is not simply a neutral structure for categorizing specimens. It implicitly embodies a theory of the universe from which those specimens are drawn. It defines what data are to be recorded and how like and unlike specimens are to be distinguished. In creating a taxonomy of computer program security flaws, we are in this way creating a theory of such flaws, and if we seek answers to particular questions from a collection of flaw instances, we must organize the taxonomy accordingly [LBM94:214].
The taxonomy presented as part of this research was influenced by the desire to describe, classify and analyze the observed Internet security incidents; that is one of the primary reasons a taxonomy of attacks is being developed. It was also influenced by viewing attacks as processes that, when successful, lead attackers to their desired objectives. These influences, and the above discussions, lead to a definition of computer security based on the common characteristic of all attacks: the attacker is trying to achieve an objective. The definition used for this research is as follows:
Computer security is preventing attackers from achieving objectives through unauthorized access or unauthorized use of computers and networks.
This definition provides the desired demarcation of the computer security field. Concerns about computer equipment theft and environmental threats are excluded. Software flaws are included, but only if they result in vulnerabilities to the system that could be exploited to provide unauthorized access or use. Both the means used to gain unauthorized access or use (virus, Trojan horse, telnet, etc.) and the ends of attacks (corruption, disclosure, or denial-of-service leading to theft, espionage, fraud, etc.) are included because they require unauthorized access or unauthorized use. The definition also excludes unintentional events [Amo94:2].