
Certification Objective 1.01–Describe Principles of Information Security

First, let's define information security. If ten different people were asked to define information security, we might well receive ten different answers, but what is surprising is that they might all be correct. Nevertheless, the universal, classic definition of information security is brief and simple:

Information security is the confidentiality, integrity, and availability of information.

Indeed, all the principles, standards, and mechanisms you will encounter in this book are dedicated to these three abstract but fundamental goals of confidentiality, integrity, and availability of information and information processing resources—also referred to as the C-I-A triad or information security triad.

Confidentiality

In the context of information security, confidentiality means that information that should stay secret stays secret, and that only those persons authorized to access it may receive access. From ancient times, mankind has known that information is power, and in our information age, access to information is more important than ever. Unauthorized access to confidential information may have devastating consequences, not only in national security applications, but also in commerce and industry. The main mechanisms for protecting confidentiality in information systems are cryptography and access controls. Examples of threats to confidentiality include malware, intruders, social engineering, insecure networks, and poorly administered systems.
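To make the idea of access controls concrete, here is a minimal Python sketch of an access control list that protects confidentiality by permitting only explicitly authorized readers. The user names and document names are invented for illustration; real systems enforce such checks in the operating system or application, not in a simple lookup table like this.

```python
# Toy access control list: document -> set of authorized readers.
# All names below are hypothetical examples.
AUTHORIZED_READERS = {
    "payroll.txt": {"alice", "hr_admin"},   # only HR staff may read payroll data
    "plans.txt":   {"alice", "bob"},
}

def can_read(user: str, document: str) -> bool:
    """Grant read access only to explicitly authorized users (default deny)."""
    return user in AUTHORIZED_READERS.get(document, set())

print(can_read("alice", "payroll.txt"))   # True: alice is authorized
print(can_read("bob", "payroll.txt"))     # False: bob is not on the list
```

Note the default-deny behavior: an unknown document or an unknown user yields no access, which is the safe failure mode for confidentiality.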

Integrity

Integrity is concerned with the trustworthiness, origin, completeness, and correctness of information, as well as the prevention of improper or unauthorized modification of information. Integrity in the information security context refers not only to the integrity of information itself but also to origin integrity—that is, the integrity of the source of the information. Integrity protection mechanisms may be grouped into two broad types: preventive mechanisms, such as access controls that prevent unauthorized modification of information, and detective mechanisms, which are intended to detect unauthorized modifications when preventive mechanisms have failed. Controls that protect integrity include the principles of least privilege, separation of duties, and rotation of duties—these principles are introduced later in this chapter.
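A simple detective mechanism can be sketched with a cryptographic hash: record a fingerprint of the information while it is known to be correct, then recompute and compare later to detect unauthorized modification. This illustrative Python example uses SHA-256 from the standard library; the messages are invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest used as a tamper-evidence fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"Pay 100 to Alice"
baseline = fingerprint(original)          # recorded while data is trusted

# Later, the detective check: recompute the digest and compare.
tampered = b"Pay 900 to Alice"
print(fingerprint(original) == baseline)  # True: data unchanged
print(fingerprint(tampered) == baseline)  # False: modification detected
```

A hash alone only detects modification; to also attribute it, the baseline digest itself must be protected from unauthorized change (for example, stored on a separate, write-protected system).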

Availability

Availability of information, although usually mentioned last, is not the least important pillar of information security. Who needs confidentiality and integrity if the authorized users of information cannot access and use it? Who needs sophisticated encryption and access controls if the information being protected is not accessible to authorized users when they need it? Therefore, despite being mentioned last in the C-I-A triad, availability is just as important and as necessary a component of information security as confidentiality and integrity. Attacks against availability are known as denial of service (DoS) attacks and are discussed in Chapter 7. Natural and man-made disasters obviously may also affect availability, as well as the confidentiality and integrity of information, although their frequency and severity differ greatly: natural disasters are infrequent but severe, whereas human errors are frequent but usually not as severe. In both cases, business continuity and disaster recovery planning (which at the very least includes regular and reliable backups) is intended to minimize losses.
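At the very least, availability planning means regular and reliable backups. The following Python sketch shows the idea in miniature: every backup is a timestamped copy that can be restored if the original is lost. The file names are hypothetical, and a real backup strategy would of course add off-site storage, rotation, and restore testing.

```python
import pathlib
import shutil
import tempfile
import time

def backup(path: pathlib.Path, backup_dir: pathlib.Path) -> pathlib.Path:
    """Copy a file into backup_dir under a timestamped name."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{path.name}.{stamp}.bak"
    shutil.copy2(path, dest)              # copy2 also preserves metadata
    return dest

# Demonstration in a throwaway directory (hypothetical file name).
work = pathlib.Path(tempfile.mkdtemp())
source = work / "orders.db"
source.write_text("important records")
copy = backup(source, work / "backups")
print(copy.exists())                      # True: a restorable copy now exists
```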

Exam Watch 

Understanding the fundamental concepts of confidentiality, integrity, and availability of information and their interaction is crucial for this exam. Make sure you know their definitions, summarized here, and can give examples of controls protecting them:

  • Confidentiality is the prevention of unauthorized disclosure of information.

  • Integrity aims at ensuring that information is protected from unauthorized or unintentional alteration, modification, or deletion.

  • Availability aims to ensure that information is readily accessible to authorized users.

Now that the cornerstone concepts of confidentiality, integrity, and availability have been discussed, let's take a look at identification, authentication, and authorization processes and methods, which are some of the main controls aimed at protecting the C-I-A triad.

Identification

Identification is the first step in the identify-authenticate-authorize sequence that is performed countless times every day by humans and computers alike when access to information or information processing resources is required. While the particulars of identification systems differ depending on who or what is being identified, some intrinsic properties of identification apply regardless of these particulars; three of these properties are the scope, locality, and uniqueness of IDs.

Identification name spaces can be local or global in scope. To illustrate this concept, let's refer to the familiar notation of Internet e-mail addresses: while many e-mail accounts named jack may exist around the world, the e-mail address jack@company.com refers unambiguously to exactly one such user in the company.com locality. Provided that the company in question is a small one, and that only one employee is named Jack, inside the company everyone may refer to that particular person simply by his first name. That works because they are in the same locality and only one Jack works there. However, if Jack were someone on the other side of the world, or even across town, referring to jack@company.com as simply jack would make no sense, because the user name jack is not globally unique and refers to different persons in different localities. This is one of the reasons why two user accounts should never share the same name on the same system: not only would you be unable to enforce access controls based on non-unique, ambiguous user names, but you would also be unable to establish accountability for user actions.

To summarize, for information security purposes, unique names are required and, depending on their scope, they must be locally unique and possibly globally unique so that access control may be enforced and accountability established.
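The uniqueness requirement can be illustrated with a short Python sketch of a local identification name space that refuses duplicate user names and qualifies each name with its locality, much like the jack@company.com example above. The class and locality name are invented for illustration.

```python
class UserRegistry:
    """A local identification name space that enforces unique user names."""

    def __init__(self, locality: str):
        self.locality = locality
        self._names = set()

    def register(self, name: str) -> str:
        if name in self._names:
            # A duplicate would make access control and accountability ambiguous.
            raise ValueError(f"user name {name!r} already exists in {self.locality}")
        self._names.add(name)
        return f"{name}@{self.locality}"   # the globally scoped form of the ID

reg = UserRegistry("company.com")
print(reg.register("jack"))   # jack@company.com: locally unique, globally qualified
```

A second `reg.register("jack")` would raise an error: within one locality the name must be unique, while the name@locality form keeps it unambiguous globally.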

Authentication

Authentication, which happens just after identification and before authorization, verifies the authenticity of the identity declared at the identification stage. In other words, it is at the authentication stage that you prove that you are indeed the person or the system you claim to be. The three methods of authentication are what you know, what you have, and what you are. Regardless of the particular authentication method used, the aim is to obtain reasonable assurance that the identity declared at the identification stage belongs to the party in communication. It is important to note that reasonable assurance may mean different degrees of assurance, depending on the particular environment and application, and therefore may require different approaches to authentication: the authentication requirements of a national security–critical system naturally differ from those of a small company. Because different authentication methods have different costs and properties, as well as different returns on investment, the choice of authentication method for a particular system or organization should be made only after these factors have been carefully considered.

What You Know

Among what you know authentication methods are passwords, passphrases, secret codes, and personal identification numbers (PINs). When using what you know authentication methods, it is implied that if you know something that is supposed to be known only by X, then you must be X (although in real life that is not always the case). What you know authentication is the most commonly used authentication method thanks to its low cost and easy implementation in information systems. However, what you know authentication alone may not be considered strong authentication and is not adequate for systems requiring high security.
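Because passwords are secrets, systems should never store them directly; instead they store a salted, slowly computed hash and compare hashes at login. The following Python sketch uses PBKDF2 from the standard library to illustrate the idea; the iteration count and passwords are arbitrary examples, not recommendations.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000   # illustrative work factor; tune for your hardware

def hash_password(password: str, salt=None):
    """Return (salt, digest); a random salt defeats precomputed tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))   # True
print(verify_password("wrong guess", salt, digest))     # False
```

The constant-time comparison matters: comparing digests byte by byte with `==` could, in principle, leak timing information to an attacker.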

Exam Watch 

Strong authentication is the use of two or more different authentication methods, such as a smart card and a PIN, or a password and a form of biometrics such as a fingerprint or retina scan.

What You Have

Perhaps the most widely used and familiar what you have authentication methods are keys—keys we use to lock and unlock doors, cars, and drawers; just as with doors, what you have authentication in information systems implies that if you possess some kind of token, such as a smart card or a USB token, you are the individual you are claiming to be. Of course, the same risks that apply to keys also apply to smart cards and USB tokens—they may be stolen, lost, or damaged. What you have authentication methods include an additional inherent per-user cost. Compare these methods with passwords: it costs nothing to issue a new password, whereas per-user what you have authentication costs may be considerable.
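Many modern what you have tokens prove possession by computing a short-lived one-time code from a secret shared between the token and the server. The Python sketch below implements the time-based one-time password (TOTP) construction of RFC 6238 in miniature; the shared secret is an invented example, and a production system would use a vetted library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, when: float, step: int = 30, digits: int = 6) -> str:
    """TOTP-style one-time code, as generated by a hardware or phone token."""
    counter = int(when // step)                    # time window number
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret"                          # invented example secret
print(totp(secret, time.time()))  # token and server agree within a 30s window
```

Because both sides derive the code from the same secret and the current time, a stolen or observed code is useless once its time window passes—but a stolen token, like a stolen key, still compromises the factor.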

What You Are

What you are authentication refers to biometric authentication methods. A biometric is a physiological or behavioral characteristic of a human being that can distinguish one person from another and that theoretically can be used for identification or verification of identity. Biometric authentication methods include fingerprint, iris, and retina recognition, as well as voice and signature recognition, to name a few. Biometric authentication methods are less well understood than the other two methods, but when used correctly, in addition to what you have or what you know authentication, they may significantly contribute to the strength of authentication. Nevertheless, biometrics is a complex subject and is much more cumbersome to deploy than what you know or what you have authentication. Unlike what you know or what you have methods, which give a definite yes-or-no answer—either you know the password and have the token or you do not—biometric authentication systems report how closely you match the subject you are claiming to be; naturally, this method requires much more installation-dependent tuning and configuration.
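The "how closely you match" point can be illustrated with a toy matching sketch: a biometric system compares a freshly captured feature vector against the enrolled template and accepts only if the similarity score clears a tunable threshold. The feature vectors and threshold below are invented for illustration; real biometric matching is far more sophisticated.

```python
def match_score(template, sample) -> float:
    """Toy similarity score: 1.0 means identical feature vectors."""
    distance = sum((t - s) ** 2 for t, s in zip(template, sample)) ** 0.5
    return 1.0 / (1.0 + distance)

THRESHOLD = 0.8   # the installation-dependent tuning knob

enrolled = [0.61, 0.34, 0.88]   # features captured at enrollment (invented)

# A fresh capture is never bit-for-bit identical, only close enough.
print(match_score(enrolled, [0.60, 0.35, 0.87]) >= THRESHOLD)   # True: accept
print(match_score(enrolled, [0.10, 0.90, 0.20]) >= THRESHOLD)   # False: reject
```

Moving the threshold trades false accepts against false rejects, which is exactly why biometric deployments need per-site tuning.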

Authorization

After declaring identity at the identification stage and proving it at the authentication stage, users are assigned a set of authorizations (also referred to as rights, privileges, or permissions) that define what they can do on the system. These authorizations are most commonly defined by the system's security policy and are set by the security or system administrator. These privileges may range from the extremes of "permit nothing" to "permit everything" and include anything in between.
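As a minimal illustration, authorizations can be modeled as sets of permitted operations attached to roles, ranging from "permit nothing" (the empty set) to "permit everything." The roles and operations in this Python sketch are invented examples.

```python
# Role -> set of permitted operations (hypothetical examples).
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},   # close to "permit everything"
    "editor": {"read", "write"},
    "guest":  {"read"},
    # any unknown role falls through to the empty set: "permit nothing"
}

def authorize(role: str, operation: str) -> bool:
    """Grant the operation only if the role's permission set includes it."""
    return operation in PERMISSIONS.get(role, set())

print(authorize("editor", "write"))   # True
print(authorize("guest", "delete"))   # False
```

The check runs only after identification and authentication have established *whose* permission set to consult, which is why the three stages form one sequence.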

Exam Watch 

Authorization is the process of ensuring that a user has sufficient rights to perform the requested operation, and of preventing those without sufficient rights from doing so. At the same time, authorization is also the process that grants rights depending on the identity of the user—be it a human or another system.

As you can see, the second and third stages of the identify-authenticate-authorize process depend on the first stage, and the final goal of the whole process is to enforce access control and accountability, which is described next. User account management and access control in Solaris 10 are described in more detail in Chapters 9 and 10.

Accountability

Accountability is another important principle of information security that refers to the possibility of tracing actions and events back in time to the users, systems, or processes that performed them, to establish responsibility for actions or omissions. A system may not be considered secure if it does not provide accountability, because it would be impossible to ascertain who is responsible and what did or did not happen on the system without that safeguard. Accountability in the context of information systems is mainly provided by logs and the audit trail.

Logs

System and application logs are ordered lists of events and actions and are the primary means of establishing accountability on most systems. However, logs (as well as the audit trail, which is described next) may be considered trustworthy only if their integrity is reasonably assured. In other words, if anyone can write to or erase logs or the audit trail, they are not dependable enough to serve as the basis for accountability. Additionally, in the case of networked or communication systems, logs should be correctly timestamped and time should be synchronized across the network, so that events that affect more than one system can be correctly correlated and attributed.
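The timestamping point can be illustrated with a short Python sketch that writes log records with UTC timestamps, so that events gathered from several machines (whose clocks are synchronized, for example via NTP) can be merged into one coherent timeline. The logger name, message, and address are invented examples.

```python
import io
import logging
import time

def make_audit_logger(stream) -> logging.Logger:
    """Build a logger whose records carry UTC timestamps for correlation."""
    handler = logging.StreamHandler(stream)
    fmt = logging.Formatter("%(asctime)sZ %(levelname)s %(message)s")
    fmt.converter = time.gmtime          # format times in UTC, not local time
    handler.setFormatter(fmt)
    logger = logging.getLogger("audit-example")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

buf = io.StringIO()                      # stands in for a protected log file
log = make_audit_logger(buf)
log.info("user jack logged in from 192.0.2.10")
print(buf.getvalue().strip())
```

Writing to an in-memory buffer here is purely for demonstration; a real deployment would send records to an append-only file or a remote log host to protect their integrity.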

Audit Trail

The difference between the audit trail and logs is not clearly defined. However, we may say that logs usually record high-level actions, such as an e-mail message delivered or a web page served, whereas audit trails usually refer to lower-level operations such as opening a file, writing to a file, or sending a packet across a network. While an audit trail provides more detailed information about the actions and events that took place on the system, it is not necessarily more useful, in the practical sense, than logs, simply because the abundance of detail in an audit trail makes it more resource and time consuming to generate, store, and analyze. Another aspect in which logs and audit trails differ is their source: logs are usually generated by particular system software or applications, whereas an audit trail is usually kept by the operating system or its auditing module. Auditing and audit analysis in Solaris 10 are covered in detail in Chapter 5.

Functionality vs. Assurance

Having introduced the concept of accountability and how it is implemented on most systems, it's time to look at perhaps one of the most challenging issues of information security: the issue of functionality versus assurance. The best way to illustrate this is to refer to your own first-hand experience with computers: how many times has a computer failed to do something that you expected of it, and how many times did it do something you didn't want it to do? It is this difference between our expectations (as well as vendors' advertising of product features) and what happens in fact that is referred to as functionality versus assurance.

A particular system may claim to implement a dozen smart security features, but this is very different from being able to say with a high degree of confidence that it indeed implements them, implements them correctly, and will not behave in an unexpected manner. Another way of looking at the functionality versus assurance issue is that functionality is about what a system can do and assurance is about what a system will not do.

Although no quick and easy solutions are available in this case, we will discuss functionality and assurance issues in more detail in Chapter 3 of this book with regard to standards, certification, and accreditation.

Privacy

Privacy in the information security context usually refers to the expectation and rights of individuals to privacy of their personal information and adequate, secure handling of this information by its users. Personal information here usually refers to information that directly identifies a human being, such as a name and address, although the details may differ in different countries.

In many countries, privacy of personal information is protected by laws that impose requirements on organizations processing personal data and set penalties for noncompliance. The European Union (EU) in particular has strict personal data protection legislation in place, which limits how organizations may process personal information and what they can do with it. The U.S. Constitution also guarantees certain privacy rights, although the approach to privacy issues differs between the United States and Europe.

Since privacy is not only a basic human need but also a legally protected right in most countries, organizations should take necessary precautions to protect the confidentiality and integrity of personal information they collect, store, and process. In particular, organizations' information security policies should define how personal information is to be collected and processed. Because of these requirements, although not in the C-I-A triad, privacy is also an inseparable part of information security and must be addressed in all information security policies as part of the information security requirements.

Non-repudiation

Non-repudiation in the information security context refers to a property of cryptographic digital signatures that offers the possibility of proving whether a particular message has been digitally signed by the holder of a particular private signing key. Non-repudiation is a somewhat controversial subject, partly because it is so important in this age of electronic commerce and partly because it does not provide an absolute guarantee: a digital signature owner who wants to repudiate a transaction maliciously may always claim that his or her signing key was stolen and that someone else actually signed the transaction in question. The following types of non-repudiation services are defined in the international standard ISO 14516:2002, Guidelines for the use and management of trusted third party services.

Approval  Non-repudiation of approval provides proof of who is responsible for approval of the contents of a message.

Sending  Non-repudiation of sending provides proof of who sent the message.

Origin  Non-repudiation of origin is a combination of approval and sending.

Submission  Non-repudiation of submission provides proof that a delivery agent has accepted the message for transmission.

Transport  Non-repudiation of transport provides proof for the message originator that a delivery agent has delivered the message to the intended recipient.

Receipt  Non-repudiation of receipt provides proof that the recipient received the message.

Knowledge  Non-repudiation of knowledge provides proof that the recipient recognized the content of the received message.

Delivery  Non-repudiation of delivery is a combination of receipt and knowledge, as it provides proof that the recipient received and recognized the content of the message.

There is also a difference between the legal concept of non-repudiation and non-repudiation as an information security/cryptographic concept. In the legal sense, an alleged signatory to a paper document is always able to repudiate a signature that has been attributed to him or her by claiming any one of the following:

  • The signature is forged

  • The signature is the result of fraud by a third party

  • The signature was obtained through unconscionable conduct by a party to the transaction

  • The signature was obtained through undue influence exerted by a third party

In the information security context, one should keep in mind that the cryptographic concept of non-repudiation may, and often does, differ from its legal counterpart. Moreover, in some countries there is a trend of moving the burden of proof from the party relying on the signature (which is applicable to regular on-paper signatures) to the alleged signatory party, who would have to prove that he or she did not sign something. Chapter 11 of this book looks at cryptography in more detail.

