Following are the key points from certification objectives in this book that are covered on the exam. It's in your best interest to study this guide until you can answer all questions in Appendix B correctly before taking Sun's exam.
Information security is the confidentiality, integrity, and availability of information.
Confidentiality is the prevention of unauthorized disclosure of information.
Integrity is the means of ensuring that information is protected from unauthorized or unintentional alteration, modification, or deletion.
Availability ensures that information is readily accessible to authorized viewers at all times.
Identification is the means by which a user (human, system, or process) provides a claimed unique identity to a system.
Authentication is a method for proving that you are who you say you are.
Strong authentication is the use of two or more different authentication methods, such as a smart card and PIN, or a password and a form of biometrics such as a fingerprint or retina scan.
Authorization is the process of ensuring that a user has sufficient rights to perform the requested operation, and preventing those without sufficient rights from doing the same.
The principle of least privilege stipulates that users are granted no more privileges than those absolutely necessary to do the required job.
The purpose of the segregation (or separation) of duties is to avoid the possibility of a single person being responsible for different functions within an organization. Rotation of duties is a similar control that is intended to detect abuse of privileges or fraud and is practiced to avoid becoming overly dependent on a single member of staff. By rotating staff, the organization has more chances of discovering violations or fraud.
A security policy is a high-level document or set of documents that in particular identifies the information assets of the organization, stipulates who owns them and how they may or may not be used, and sets requirements for their use along with sanctions for misuse.
The security life cycle process is intended to prevent, detect, respond, and deter—and repeat the cycle again, keeping in mind the lessons learned.
Preventive controls include firewalls, logical and physical access control systems, and security procedures that are devised to prevent violations of security policy from occurring.
Detection controls include network/host intrusion detection systems, physical movement alarms, and cryptographic checksums on transmitted information (to detect unauthorized modifications).
Incident response is a subdiscipline of information security; it is the formal set of defined and approved actions based on the information security policy and best practices that are to be taken in response to a security incident.
Deterrent controls include good information security management, regular audits, security-aware staff, well-administered systems, good employee morale, and security certifications.
Security-aware employees are the best partners of the organization and its information security efforts, while staff that have no idea about security practices or simply don't care are the worst enemy. One doesn't need determined adversaries to suffer a security breach—a clueless insider who doesn't understand the consequences may expose the organization to risks that could otherwise have been avoided.
Security policies are one of the mechanisms that define and convey the information security requirements of the organization's management to the staff of the organization. The security policy is the high-level document or set of documents that in particular identifies the information assets of the organization, stipulates who owns them and how they may or may not be used, and sets requirements for their use along with sanctions for misuse.
Security procedures are developed by subject-matter specialists within the organization with the assistance of security professionals and/or information systems auditors. Procedures may be application- and/or version-specific and as such need to be kept current with the current information systems environment of the organization. System and security administrators play a key role in developing and enforcing security procedures.
Security guidelines are nonbinding recommendations on how to develop, define, and enforce security policies and procedures.
Security standards are mandatory either because they are dictated by the security policy, law, or regulations or because the entity in question has decided to adhere to the standard.
Physical security addresses the physical vulnerabilities, threats, and countermeasures necessary to control risks associated with physical destruction, unauthorized access, loss, theft, fire, natural disasters (floods, earthquakes, tornados), environmental issues (air conditioning, ventilation, humidity control), and all associated issues.
Although Sun Certified Security Administrator for Solaris certification candidates are not required to have Sun Certified System or Network Administrator certifications, they are expected to be familiar with subjects covered by their exam objectives and have at least six months of experience administering Solaris systems.
The Solaris operating system complies with Evaluation Assurance Level 4 (EAL4).
A secure system is a system that has certain security functionalities and that provides certain assurance that it will function in accordance with and enforce a defined security policy in a known environment provided it is operated in a prescribed manner.
A trusted system or component is one that has the power to break one's security policy. Trusted path is the term used to describe the secure communication channel between the user and the software (an application or the operating system itself). A trusted path exists when a mechanism is in place to assure users that they are indeed interacting with the genuine application or operating system and not with software that impersonates it.
A threat describes the potential for attack or exploitation of a vulnerable business asset. The term does not define the cost of an attack weighed against the benefit the attacker can obtain from it, nor does it describe an administrator's decision to accept a specific risk.
A vulnerability describes how susceptible you are to an attack and how likely you are to succumb to an attack if it occurs.
Risk assessment is a critical element in designing the security of systems and is a key step in the accreditation process that helps managers select cost-effective safeguards.
Attacks from disgruntled employees are most dangerous because they have the closest physical and logical access to the internal infrastructure, applications, and data. Disgruntled employees also have a good understanding of the organization's business and technical climate, structure, and capabilities.
A script kiddie is a novice with little experience who has access to tools and documentation that can be used to interfere with a system's assets.
Eavesdropping is a passive attack that affects confidentiality of information. Regular Internet protocols are insecure and prone to eavesdropping attacks, because they transmit information unencrypted. It is relatively easy to defend against eavesdropping attacks by using protocols that encrypt information before transmitting it over the network.
Traffic analysis is a passive attack aimed at such aspects of communication as time, direction, frequency, flow, who sent the communication, and to whom it is addressed. An important issue to note is that encryption does not protect against traffic analysis unless specifically designed and implemented with traffic analysis resistance.
Timing analysis is about measuring time between actions or events in information systems and networks. Timing analysis may be effective when used in concert with other attack types in complex attacks.
Social engineering is used by potential attackers to manipulate people into doing what the attacker wants without them realizing the hidden agenda. The only defense against social engineering is having security-aware and risk-aware staff and management.
Buffer overflow attacks are perhaps the most primitive and the most effective of attacks. In a buffer overflow attack, the target system or application is sent more data, or data different from what it is designed to handle, which usually results in a crash of the target or execution of part of the sent data. The data sent to the target may contain machine code or instructions that may be executed as a result of the buffer overflow, thus giving the attacker a way in or making it simpler to gain access.
Denial of service (DoS) attacks are directed at the availability of information and information systems. Denial of service attacks exhaust all available resources—be it network bandwidth, number of maximum simultaneous connections, disk space, or RAM—to prevent legitimate users from using the system.
Spoofing refers to attacks in which the source of information is falsified with malicious intent. Spoofing attacks are usually used to circumvent filtering and access control based on source addresses. The most effective defense against spoofing is the use of cryptographic authentication and digital signatures.
Man-in-the-middle attacks involve an attacker located physically or logically between two or more communicating parties on a network, actively masquerading as the remote party to each side of the communication. Man-in-the-middle attacks are difficult to protect against unless a well-designed and well-implemented cryptographic authentication system is in place.
Replay attacks are usually directed against simple authentication mechanisms but may also be used against poorly designed or implemented cryptographic protocols. During a replay attack, the attacker intercepts and records a valid authentication session and later replays whole or part of it again to gain unauthorized access. Replay attacks are active attacks that are usually launched after a successful eavesdropping or man-in-the-middle attack.
Connection or session hijacking refers to taking over an already established connection or session with assigned identity and authorizations. Insecure network protocols that do not provide continuous authentication and do not use strong cryptographic algorithms to protect the confidentiality and integrity of data transmissions are especially vulnerable to connection hijacking.
Brute-force attacks are usually used against passwords, cryptographic keys, and other security mechanisms. In a brute-force attack the adversary performs an exhaustive search of the set in question to find the correct password or cryptographic key. The defense against brute-force attacks is to make the time and computation required for an exhaustive search unaffordable by using a sufficiently large set—that is, longer passwords and keys.
In a dictionary attack, the adversary uses a list ("dictionary") of possible passwords or cryptographic keys to perform a search of the set to find the correct password or key. The defense against dictionary attacks is to use passwords or keys that are unlikely to be included in such a list.
Issue the logins command and view the /etc/shadow file to determine which accounts are locked or disabled and which do not currently have assigned passwords. These techniques are useful when identifying user login status.
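As a sketch, the login-status checks above might look like this (jdoe is an illustrative account name):

```
logins -p               # list accounts that currently have no assigned password
logins -x -l jdoe       # display extended login status for a single account
grep jdoe /etc/shadow   # a password field of *LK* indicates a locked account
```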
You can disable user logins either by creating a /etc/nologin file, by bringing the system down to single-user mode with the command init S, or by disabling user accounts from the Solaris Management Console (SMC) interface.
Failed login attempts from terminal sessions are stored in the /var/adm/loginlog file.
Syslog can monitor all unsuccessful login attempts. To do so, edit the /etc/default/login file and make sure that the SYSLOG=YES and SYSLOG_FAILED_LOGINS=0 entries are uncommented.
You can customize the System Logging Facility to log failed login access attempts after a predefined number of tries by editing the SYSLOG_FAILED_LOGINS=0 entry in the /etc/default/login file to some number such as SYSLOG_FAILED_LOGINS=3. At this point, the system will log access attempts only after the first three failures.
You can customize the System Logging Facility to close the login connections after some predefined number of failures by uncommenting the RETRIES entry in the /etc/default/login file, and making sure the value is set to some number (5 is the default value). By default, after five failed login attempts in the same session, the system will close the connection.
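Taken together, the /etc/default/login entries described above might read as follows (values are illustrative):

```
# Send login records to syslog
SYSLOG=YES
# Log failed attempts after the third failure
SYSLOG_FAILED_LOGINS=3
# Close the connection after five failed attempts (the default)
RETRIES=5
```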
The su program's usage is monitored through the /etc/default/su file via the SULOG=/var/adm/sulog entry, and the SYSLOG=YES entry determines whether the syslog logging facility logs all su attempts.
In real time, you can display superuser access attempts on the console by uncommenting the CONSOLE=/dev/console entry in the /etc/default/su file.
To disable remote superuser login access attempts (disabled by default), simply uncomment the CONSOLE=/dev/console entry in the /etc/default/login file.
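The corresponding /etc/default/su entries might read:

```
# Record every su attempt in this file
SULOG=/var/adm/sulog
# Also log su attempts through syslog
SYSLOG=YES
# Display attempts to become superuser on the console
CONSOLE=/dev/console
```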
Events that are capable of creating audit logs include system startup and shutdown, login and logout, identification and authentication, privileged rights usage, permission changes, process and thread creation and destruction, object creation and manipulation, application installation, and system administration.
The audit_control file can be modified to preselect audit classes and customize audit procedures.
The audit policy is set by the audit_startup script, which runs automatically when the auditing service starts.
The audit_warn script generates mail to an e-mail alias called audit_warn. You can change the alias by editing the /etc/security/audit_warn file and changing the e-mail alias in the script at the ADDRESS=audit_warn entry, or by redirecting the audit_warn e-mail alias to a different account.
When auditing is enabled, the contents of the /etc/security/audit_startup file determine the audit policy.
To audit efficiently, Sun recommends randomly auditing only a small percentage of users at any one time, compressing files, archiving older audit logs, monitoring in real time, and automatically increasing unusual event auditing.
In the audit_control file, the flags and naflags arguments define which attributable and nonattributable events should be audited for all users on the system.
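As an illustrative sketch, an /etc/security/audit_control file that audits login/logout (lo) and system-wide administration (as) events for all users might read (values are assumptions, not defaults):

```
# /etc/security/audit_control (illustrative excerpt)
dir:/var/audit
flags:lo,as
naflags:lo
minfree:20
```

Here dir: names the directory where binary audit files are written, flags: and naflags: select the attributable and nonattributable classes, and minfree: triggers the audit_warn script when free disk space drops below 20 percent.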
You can manually issue the bsmrecord command to add events that should be audited.
The audit_event file is the event database that defines which events are part of classes you can audit.
The audit event numbers—with the exception of 0, which is reserved as an invalid event number—are 1–2047 for Solaris Kernel events, 2048–32767 for Solaris programs (6144–32767 also includes SunOS 5.X user-level audit events), and 32768–65535 for third-party applications.
The audit_user file defines specific users and classes of events that should always or never be audited for each user.
Syslog audit files should never be placed in the same locations as binary data.
Syslog files should be monitored and archived regularly to accommodate potentially extensive outputs.
Execute the bsmconv script to enable the auditing service and the bsmunconv script to disable it.
Issue the audit -s command to refresh the kernel and the auditconfig -conf command to refresh the auditing service.
To display audit records formats, use the bsmrecord command.
To merge audit files into a single output source to create an audit trail, use the auditreduce command.
Device policy is enabled by default and enforced in the kernel to restrict and prevent access to devices that are integral to the system. Device allocation is not enabled by default and is enforced during user-allocation time to require user authorization to access peripheral devices.
To view device policies for all devices or specific ones, use the getdevpolicy command.
To modify or remove device policies for a specific device, use the update_drv -a -p policy device-driver command; where policy is the device policy or policies (separated by a space) for device-driver, which is the device driver whose device policy you wish to modify or remove.
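As a sketch of these commands (the ip driver and privilege names follow Sun's documented example):

```
getdevpolicy            # display device policy for all devices
getdevpolicy /dev/ip    # display device policy for a specific device
update_drv -a -p 'read_priv_set=net_rawaccess write_priv_set=net_rawaccess' ip
```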
The AUE_MODDEVPLCY audit event is part of the as audit class by default, which is used to audit changes in device policy. To audit device policies, you'll need to add the as class to the audit_control file flags argument.
Run the bsmconv script to enable the auditing service, which also enables device allocation.
The ot audit class is used to audit device allocation. To audit an allocatable device, you'll need to add the ot class to the audit_control file flags argument.
Users with the appropriate rights and authorization can allocate and deallocate devices. The authorization required to allocate a device is solaris.device.allocate. The authorization required to forcibly allocate or deallocate a device is solaris.device.revoke.
Users with the appropriate rights and authorization can allocate a device by issuing the allocate device-name command and deallocate a device by issuing the deallocate device-name command.
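For example, with device allocation enabled, an authorized user might work with the first CD-ROM drive as follows (sr0 is the conventional device name for that drive):

```
allocate sr0        # allocate the CD-ROM drive to the current user
deallocate sr0      # release the device when finished
deallocate -F sr0   # forcible deallocation (requires solaris.device.revoke)
```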
The Basic Audit Reporting Tool (BART) can report file-level changes that have occurred on the system.
To compare a control manifest with a new comparison manifest issue the command
bart compare options control-manifest compare-manifest > bart-report
The most common forms of DoS attacks include program buffer overflow, malformed packets (that is, overlapping IP fragments), Teardrop, Ping of Death, Smurf, Bonk, Boink, NewTear, WinNuke, Land, LaTierra, and SYN attacks.
After penetrating a target system, an attacker would typically attempt to erase any traces of the incident by deleting activity logs and leaving backdoors in place to allow later clandestine access to the system.
When default executable stacks with permissions set to read/write/execute are allowed, programs may be targets for buffer overflow attacks. A buffer overflow occurs when a program, process, or task receives more data than it was programmed to handle. As a result, the program typically operates in such a way that an intruder can abuse or misuse it.
During a SYN attack, the attacker abuses the TCP three-way handshake by sending a flood of connection requests (SYN packets) while not responding to any of the replies. To verify that this type of attack is occurring, you can check the state of the system's network traffic with the netstat command.
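One hedged way to check for this condition is to count half-open connections; a large number of sockets in the SYN_RCVD state suggests a SYN flood (exact output format varies by release):

```
netstat -an -f inet | grep SYN_RCVD | wc -l
```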
In a Teardrop attack, the attacker modifies the length and fragmentation offset fields in IP packets, which causes the target to crash.
Ping of Death is a malformed ICMP packet attack, in which an attacker sends an oversized ping packet in an attempt to overflow the system's buffer.
A Smurf attack involves a broadcasted ping request to every system on the target's network with a spoofed return address of the target.
To help prevent DoS attacks against the Solaris operating system, Sun advocates disabling executable stacks, disabling extraneous IP services/ports, employing egress filtering, using firewalls, monitoring networks, and implementing a patch update program.
Sun recommends that you always monitor programs that are executed with privileges as well as the users that have rights to execute them. You can search your system for unauthorized use of the setuid and setgid permissions on programs to gain superuser privileges using the find command:
find directory -user root -perm -4000 -exec ls -ldb {} \; >/tmp/filename
where find directory checks all mounted paths starting at the specified directory, which can be root (/), sys, bin, or mail; -user root displays files owned only by root; -perm -4000 displays only files with permissions set to 4000; -exec ls -ldb displays the output of the find command in ls -ldb format; and >/tmp/filename writes results to this file.
To defend against stack smashing, you can configure attributes so that code cannot be executed from the stack by setting the noexec_user_stack=1 variable in the /etc/system file. If you disable executable stacks, programs that require an executable stack will abort, so it's crucial first to test this procedure on a nonproduction system.
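The relevant /etc/system entries might look like this (comments in that file begin with an asterisk):

```
* Disable execution of code from user stacks
set noexec_user_stack=1
* Log attempted executions from the stack
set noexec_user_stack_log=1
```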
The /etc/inetd.conf file defines how the inetd daemon handles common Internet service requests. To disable an unneeded port and prevent unauthorized access to the associated service, comment out the service in the /etc/inetd.conf file with the hash character (#), and then restart the inetd process (or reboot the server) if the service was started through the inetd daemon.
Use the showrev -p command from a terminal session to view your system's current patches.
To install a patch, use the patchadd command: patchadd /dir/filename; where /dir/ is the directory that contains the patch and filename is the name of the patch.
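An illustrative patching session (the patch ID and directory here are hypothetical):

```
showrev -p                      # list patches currently applied to the system
patchadd /var/tmp/119254-06     # install a patch staged in /var/tmp
patchrm 119254-06               # back the patch out if it causes problems
```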
A Trojan horse program is a malicious program that is disguised as some useful software. Trojan examples include a shell script that spoofs the login program and a malicious substitute switch user (su) program.
Device-specific files in the /etc and /devices directories are common targets for attackers to attempt to gain access to the operating system, especially for creating backdoors to the system.
A worm is a self-replicating program that will copy itself from system to system, sometimes using up all available resources on infected systems or installing a backdoor on the system.
A logic bomb is code that is inserted into programming code and is designed to execute under specific circumstances.
A fork bomb is a process that replicates itself until it consumes the maximum number of allowable processes.
A rootkit utility can be used not only to provide remote backdoor access to attackers but also to hide the attacker's presence on the system. Some types of rootkit utilities exploit the use of loadable kernel modules to modify the running kernel for malicious intent.
To harden your system and help protect against Trojan horse programs, Sun recommends user awareness education, installing and updating antivirus software, removing unnecessary compilers, securing file and directory permissions, and monitoring path variables.
Path variables should not contain a dot (.) entry, which causes the system to search the current directory for executables or libraries; in particular, the search path for root should never contain the current directory.
To monitor and help prevent unauthorized changes from being made to system files, Sun recommends using the Automated Security Enhancement Tool (ASET), the Basic Security Module (BSM), Tripwire, and the Solaris cryptographic framework.
Automated Security Enhancement Tool (ASET) enables you to monitor and restrict access to system files and directories with automated administration governed by a preset security level (low, medium, or high). The seven tasks that ASET can regularly perform are system files permissions tuning, system files checks, user and group checks, system configuration files check, environment variables check, EEPROM check, and firewall setup.
To run ASET at any given time, simply log in as root or become superuser, and then issue the command /usr/aset/aset -l level -d pathname; where level is the security level value (low, medium, or high), and pathname is the working directory for ASET (the default is /usr/aset).
To avoid resource encumbrance, ASET tasks should be run during off-peak hours or when system activities are low.
Verify whether files were maliciously altered by using message digest algorithms. A message digest is a digital signature for a stream of binary data used as verification that the data was not altered since the signature was first generated. The MD5 and the Secure Hashing Algorithm (SHA1) are among the most popular message digest algorithms.
Using the digest command, you can compute a message digest for one or more files. In the Solaris cryptographic framework environment, you can perform digest computations using the following syntax: digest -v -a algorithm input-file > digest-listing; where -v displays the output with file information, -a algorithm is the algorithm used to compute a digest (that is, MD5 or SHA1), input-file is the input file for the digest to be computed, and digest-listing is the output file for the digest command.
The Solaris Fingerprint Database (sfpDB) is a free tool from Sun that allows you to check the integrity of system files through cryptographic checksums online. By doing so, you can determine whether system binaries and patches are safe in accordance with their original checksums stored at Sun, which includes files distributed with Solaris OE media kits, unbundled software, and patches.
Frequently using integrity checking mechanisms such as checksums and the Solaris Fingerprint Database can help detect maliciously altered programs.
If a rootkit is detected, Sun recommends restoring the operating system from trusted sources, followed by the reinstallation of applications, and finally restoring data from secured backups.
Kernel-level rootkits are not as easily detectable using integrity checking mechanisms given that the kernel itself is involved in the process. Sun recommends building a kernel that monitors and controls the system's treatment of its loadable kernel modules, especially for perimeter security or outside systems operating as gateways, web, and mail agents. If restricting loadable kernel modules is not practical, Sun recommends taking advantage of the Solaris Cryptographic services.
The system file (/etc/system) contains commands that are read when the kernel is initialized. These commands can be used to modify the system's operation concerning how to handle loadable kernel modules. Commands that modify the handling of LKMs require you to specify the module type by listing the module's namespace, thus giving you the ability to load a loadable kernel module or exclude one from being loaded.
With Role-Based Access Control (RBAC), system administrators can delegate privileged commands to non-root users without giving them full superuser access.
The principle of least privilege states that a user should not be given any more privilege or permissions than necessary for performing a job.
A rights profile grants specific authorizations and/or privilege commands to a user's role. Privilege commands execute with administrative capabilities usually reserved for administrators.
Sun's best practices dictate that you do not assign rights profiles, privileges, and authorizations directly to users, or privileges and authorizations directly to roles. It's best to assign authorizations to rights profiles, rights profiles to roles, and roles to users.
Applications that check authorizations include audit administration commands, batch job commands, device commands, printer administration commands, and the Solaris Management Console tool suite.
Privileges that have been removed from a program or process cannot be exploited. If a program or process was compromised, the attacker will have only the privileges that the program or process had. Other unrelated programs and processes would not be compromised.
Roles get access to privileged commands through rights profiles that contain the commands.
Commands that check for privileges include commands that control processes, file and file system commands, Kerberos commands, and network commands.
The four sets of process privileges are the effective privilege set (E), the privileges currently in use; the inheritable privilege set (I), the privileges a process can inherit; the permitted privilege set (P), the privileges available for use now; and the limit privilege set (L), the outside limit on a process's privileges—a set that processes can shrink but never extend.
With RBAC, a user role whose rights profile contains permission to execute specific commands can do so without having to become superuser.
A rights profile can be assigned to a role or user and can contain authorizations, privilege commands, or other rights profiles.
The rights profile name and authorizations can be found in the prof_attr database, the profile name and commands with specific security attributes are stored in the exec_attr database, and the user_attr database contains user and role information that supplements the passwd and shadow databases.
A role is a type of user account that can run privileged applications and commands included in its rights profiles.
Before implementing Role-Based Access Control (RBAC), you should properly plan by creating profiles and roles that adhere to company policy and abide by the principle of least privilege when assigning permissions.
A right is a named collection, consisting of commands, authorizations to use specific applications (or to perform specific functions within an application), and other, previously created, rights, whose use can be granted or denied to an administrator.
The roleadd command can be used to create roles and associate a role with an authorization or a profile from the command line.
From the command line, the usermod command associates a user's login with a role, profile, and authorization in the /etc/user_attr database, which can also be used to grant a user access to a role.
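A sketch of the sequence, assuming a role named printmgr, a user jdoe, and the standard Printer Management rights profile (names are illustrative):

```
roleadd -m -d /export/home/printmgr -P "Printer Management" printmgr
passwd printmgr              # set the password the role requires
usermod -R printmgr jdoe     # grant user jdoe the right to assume the role
su - printmgr                # jdoe assumes the role for privileged work
```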
A role is a special user account used to grant rights.
Users can assume only those roles they have been granted permission to assume. Once a user takes on a role, the user relinquishes his or her own user identity and takes on the properties, including the rights, of that role.
To audit a role, you should add the ua or the as audit class to the flags line in the audit_control file, and then start the auditing service.
Access control lists (ACLs) provide better file security by enabling you to define file permissions for each user class.
The ls command is used to list files and some information about the files contained within a directory.
The chown command is used to change file ownership.
The chgrp command is used to change group ownership of a file.
The chmod command is used to change permissions on a file. The command changes or assigns the mode of a file (permissions and other attributes), which may be absolute or symbolic.
When setuid permission is set on an executable file, a process that runs this file is granted access on the basis of the owner of the file. This permission presents a security risk as attackers can find a way to maintain the permissions that are granted to them by the setuid process even after the process has finished executing.
You should always monitor the system for unauthorized setuid and setgid permissions to gain superuser privileges.
Unless you have added ACL entries that extend UNIX file permissions, the plus sign (+) does not display to the right of the mode field.
To set an ACL on a file, use the setfacl command. Note that if an ACL already exists on a file, the -s option replaces the entire ACL with the new one. To verify the file has your ACL, issue the getfacl filename command.
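For instance, granting one additional user read access to a file might look like this (file and user names are illustrative):

```
setfacl -s u::rw-,g::r--,o:---,m:r--,u:jdoe:r-- report.txt
getfacl report.txt     # verify the new ACL entries
ls -l report.txt       # a + to the right of the mode field confirms the ACL
```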
Algorithms can be symmetric secret key or asymmetric public key computational procedures used for encryption. In symmetric algorithms, the same key is used for both encryption and decryption, and in asymmetric algorithms, two keys are used—one to encrypt and another to decrypt a message.
Providers are cryptographic plug-ins that applications, end users, or kernel operations—which are all termed "consumers"—use. The Solaris cryptographic framework allows only three types of plug-ins: user-level plug-ins, kernel-level plug-ins, and hardware plug-ins.
Random keys for use with the encrypt and mac commands can be generated with the dd command.
To create a symmetric key, use the dd command:
dd if=/dev/urandom of=keyfile bs=n count=n
where if=file is the input file (for a random key, use the /dev/urandom file), of=keyfile is the output file that holds the generated key, bs=n is the key size in bytes (for the length in bytes divide the key length in bits by 8), and count=n is the count of the input blocks (the number for n should be 1).
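For example, a 128-bit (16-byte) key suitable for AES could be generated as follows (the output path is illustrative):

```shell
# Read 16 random bytes (128 bits) from /dev/urandom into a key file
dd if=/dev/urandom of=/tmp/aeskey bs=16 count=1
```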
To compute a message digest for one or more files, issue the digest command:
digest -v -a algorithm input-file > digest-listing
where -v displays the output with file information, -a algorithm is the algorithm used to compute a digest (that is, MD5 or SHA1), input-file is the input file for the digest to be computed, and digest-listing is the output file for the digest command.
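A hypothetical session (the input file and output file names are examples, and the digest value is shown as a placeholder):

```
$ digest -v -a md5 /etc/motd > motd.md5
$ cat motd.md5
md5 (/etc/motd) = <32-character-hex-digest>
```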
To create a MAC of a file, use the command:
mac -v -a algorithm -k keyfile input-file
where -v displays the output in the following format: algorithm (input-file) = mac; -a algorithm is the algorithm to use to compute the MAC (type the algorithm as the algorithm appears in the output of the mac -l command); -k keyfile is the file that contains a key of algorithm-specified length; and input-file is the input file for the MAC.
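An illustrative session, assuming the sha1_hmac mechanism with a 160-bit (20-byte) key; the key and input file names are examples, and the MAC value is a placeholder:

```
$ dd if=/dev/urandom of=mac.key bs=20 count=1
$ mac -v -a sha1_hmac -k mac.key /etc/motd
sha1_hmac (/etc/motd) = <mac-value>
```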
To encrypt and decrypt a file, simply create a symmetric key and then issue the encrypt command:
encrypt -a algorithm -k keyfile -i input-file -o output-file
where -a algorithm is the algorithm to use to encrypt the file (type the algorithm as the algorithm appears in the output of the encrypt -l command); -k keyfile is the file that contains a key of algorithm-specified length (the key length for each algorithm is listed, in bits, in the output of the encrypt -l command); -i input-file is the input file that you want to encrypt (this file is left unchanged); and -o output-file is the output file that is the encrypted form of the input file. To decrypt the output file, pass the same key and the same encryption mechanism that encrypted the file, but to the decrypt command.
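A sketch of the full round trip, assuming the aes mechanism and hypothetical file names:

```
$ dd if=/dev/urandom of=aes.key bs=16 count=1
$ encrypt -a aes -k aes.key -i report.txt -o report.enc
$ decrypt -a aes -k aes.key -i report.enc -o report.clear
```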
To display the list of installed providers, issue the cryptoadm list command.
To display a list of mechanisms that can be used with the installed providers, issue the cryptoadm list -m command.
To display the mechanism policy for the installed providers and the provider feature policy, issue the cryptoadm list -p command.
To prevent the use of a user-level mechanism, issue the cryptoadm disable provider mechanism(s) command.
To disable a kernel software provider, issue the cryptoadm disable provider command; to restore an inactive software provider, issue the cryptoadm refresh command; to remove a provider permanently, issue the cryptoadm uninstall command.
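An illustrative sequence (the provider and mechanism names are examples; check the output of cryptoadm list -m for the actual names on your system):

```
# cryptoadm list -m provider=des
# cryptoadm disable provider=des mechanism=CKM_DES_ECB
# cryptoadm enable provider=des mechanism=CKM_DES_ECB
# cryptoadm refresh
```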
The Secure NFS service uses Secure RPC (Remote Procedure Call) to authenticate users who make requests to the service.
The authentication mechanism (Diffie-Hellman) uses Data Encryption Standard (DES) encryption with a 56-bit key to encrypt the common key shared between client and server.
Normally, a user login password is identical to the Secure RPC password, where the login process passes the secret key to the keyserver. If the passwords are different, then the user must always run the keylogin command. When the command is included in the user's environment configuration file (that is, ~/.login, ~/.cshrc, or ~/.profile), the command runs automatically whenever the user logs in.
The process of generating a conversation key when a user initiates a transaction with a server begins with the keyserver randomly generating a conversation key. The kernel uses the conversation key, plus other material, to encrypt the client's timestamp. Next the keyserver looks up the server's public key in the public key database and then uses the client's secret key and the server's public key to create a common key. At that point, the keyserver encrypts the conversation key with the common key.
When decrypting a conversation key after the server receives the transmission from the client, the keyserver that is local to the server looks up the client's public key in the public key database. The keyserver then uses the client's public key and the server's secret key to deduce the common key. The kernel uses the common key to decrypt the conversation key, and then calls the keyserver to decrypt the client's timestamp with the decrypted conversation key.
Returning the verifier to the client and authenticating the server starts when the server returns a verifier, including the index ID, which the server records in its credential cache; and the client's timestamp minus 1, which is encrypted by the conversation key. The client receives the verifier and authenticates the server. The client knows that only the server could have sent the verifier because only the server knows what timestamp the client sent. With every transaction after the first transaction, the client returns the index ID to the server in its next transaction. The client also sends another encrypted timestamp. The server sends back the client's timestamp minus 1, which is encrypted by the conversation key.
By requiring authentication for use of mounted NFS file systems, you increase the security of your network.
The Pluggable Authentication Module (PAM) framework lets you plug in new authentication technologies without changing system entry services, and configure the use of system entry services (ftp, login, telnet, or rsh, for example) for user authentication.
The PAM software consists of a library, various service modules, and a configuration file. The pam.conf file defines which modules to use and in what order the modules are to be used with each application. The PAM library provides the framework to load the appropriate modules and to manage the stacking process. The PAM library provides a generic structure to which all of the modules can plug in. The PAM framework provides a method for authenticating users with multiple services by using stacking. Depending on the configuration, the user can be prompted for passwords for each authentication method. The order in which the authentication services are used is determined through the PAM configuration file.
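The per-service stacking is visible in the configuration file itself. A minimal illustrative fragment of /etc/pam.conf (the module names follow the Solaris convention, but the exact stack on a given release may differ):

```
# service  module_type  control_flag  module_path
login      auth         required      pam_authtok_get.so.1
login      auth         required      pam_unix_auth.so.1
```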
If the PAM configuration file is misconfigured or the file becomes corrupted, all users might be unable to log in. Since the sulogin command does not use PAM, the root password would then be required to boot the machine into single-user mode and fix the problem.
For security reasons, PAM module files must be owned by root and must not be writable through group or other permissions. If the file is not owned by root, PAM does not load the module. To load the module, ensure that the ownership and permissions are set so that the module file is owned by root and the permissions are 555. Then edit the PAM configuration file, /etc/pam.conf, and add this module to the appropriate services. Finally, reboot the system before you verify that the module has been added.
To prevent rhost-style access from remote systems with PAM, remove all of the lines that include rhosts_auth.so.1 from the PAM configuration file. This prevents unauthenticated access to the local system from remote systems. To prevent other unauthenticated access through the ~/.rhosts files, remember to disable the rsh service by removing the service entry from the /etc/inetd.conf file.
Changing the PAM configuration file does not prevent the service from being started.
SASL adds authentication support to network protocols so applications can utilize optional security services by calling the SASL library as a security layer inserted between the protocol and the connection.
SASL mechanisms are named by strings, from 1 to 20 characters in length, consisting of uppercase letters, digits, hyphens, and/or underscores.
During the authentication protocol exchange, the mechanism performs authentication, transmits an authorization identity (that is, userid) from the client to the server, and negotiates the use of a mechanism-specific security layer.
The security layer takes effect immediately following the last response of the authentication exchange for data sent by the client and the completion indication for data sent by the server. Once the security layer is in effect, the protocol stream is processed by the security layer into buffers of cipher-text.
SASL provides services including plug-in support, determining the necessary security properties from the application to aid in the choice of a security mechanism, listing available plug-ins to the application, choosing the best mechanism for a particular authentication attempt, routing the authentication data between the application and the chosen mechanism, and providing information about the SASL negotiation back to the application.
In Solaris Secure Shell, authentication is provided by the use of passwords, public keys, or both, where all network traffic is encrypted. Some of the benefits of Solaris Secure Shell include preventing an intruder from being able to read an intercepted communication as well as from spoofing the system.
The standard procedure for generating a Solaris Secure Shell public/private key pair is to use the key generation program, ssh-keygen, with the -t rsa option.
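For example (the key file name is arbitrary; an empty passphrase is used here only so the example runs non-interactively, and you should choose a real passphrase in practice):

```shell
# Create an RSA key pair: the private key is written to demo_key
# and the public key to demo_key.pub.
ssh-keygen -t rsa -f demo_key -N ""
```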
To change your passphrase, type the ssh-keygen command with the -p option, and answer the prompts.
To start a Solaris Secure Shell session, type the ssh command and specify the name of the remote host.
You can avoid providing your passphrase and password whenever you use Solaris Secure Shell by automatically starting an agent daemon, ssh-agent. You can start the agent daemon from the .dtprofile script.
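A typical illustrative addition to the ~/.dtprofile script:

```
# Start the agent at desktop login and add the default keys;
# ssh-add prompts once for the passphrase.
eval `/usr/bin/ssh-agent`
/usr/bin/ssh-add
```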
You can specify that a local port be forwarded to a remote host. The connection from this port is made over a secure channel to the remote host. Similarly, a port can be specified on the remote side.
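For example (the host names and port numbers are hypothetical):

```
# Local forwarding: connections to localhost:2001 travel over the
# secure channel to port 143 on remotehost.
ssh -L 2001:remotehost:143 remotehost

# Remote forwarding: connections to remotehost:2001 are forwarded
# back to port 23 on the local host.
ssh -R 2001:localhost:23 remotehost
```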
Use the scp command to copy encrypted files between hosts. You can copy encrypted files either between a local host and a remote host, or between two remote hosts. The command operates similarly to the rcp command, except that the scp command prompts for passwords.
You can use Solaris Secure Shell to make a connection from a host inside a firewall to a host on the other side of the firewall. This task is done by specifying a proxy command for ssh either in a configuration file or as an option on the command line. In general, you can customize your ssh interactions through a configuration file. You can customize either your own personal file in $HOME/.ssh/config or you can customize an administrative configuration file in /etc/ssh/ssh_config. The files can be customized with two types of proxy commands: one for HTTP connections and another for SOCKS5 connections.
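An illustrative $HOME/.ssh/config excerpt using the HTTP proxy command that Solaris ships in /usr/lib/ssh (the host and proxy names are hypothetical):

```
Host *.outside.example.com
    ProxyCommand /usr/lib/ssh/ssh-http-proxy-connect \
        -h webproxy.example.com -p 8080 %h %p
```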
The Sun Enterprise Authentication Mechanism (SEAM) uses strong authentication that verifies the identities of both the sender and the recipient, verifies the validity of data to provide data integrity, and encrypts the data during transmission to guarantee privacy.
SEAM is based on Kerberos, which uses tickets for authentication. You need to authenticate yourself to SEAM only once per session; all transactions during the session are then automatically secured.
The initial SEAM authentication process begins when a user or service starts a SEAM session by requesting a ticket-granting ticket (TGT) from the Key Distribution Center (KDC). The KDC creates a TGT and sends it back, in encrypted form, to the user or service. The user or service then decrypts the TGT by using their password. Now in possession of a valid TGT, the user or service can request tickets for all sorts of network operations, for as long as the TGT lasts.
Subsequent SEAM authentication continues when a user or service requests a ticket for a particular service. The KDC sends the ticket for the specific service to the requesting user or service. The user or service sends the ticket to the server that hosts the requested service. The server then allows the user or service access.
A realm is a logical network that defines a group of systems under the same master KDC. Issues such as the realm name, the number and size of each realm, and the relationship of a realm to other realms for cross-realm authentication should be resolved before you configure SEAM.
Realm names can consist of any ASCII string (usually the same as your DNS domain name, in uppercase). This convention helps differentiate problems with SEAM from problems with the DNS namespace, while using a name that is familiar. If you do not use DNS or you choose to use a different string, then you can use any string.
The number of realms that your installation requires depends on the number of clients to be supported, the amount of SEAM traffic that each client generates, how far apart the clients are, and the number of hosts that are available to be installed as KDCs (each realm should have at least two KDC servers—a master and a slave).
When you configure multiple realms for cross-realm authentication, you need to decide how to tie the realms together. You can establish a hierarchical relationship between the realms that provides automatic paths to the associated domains. You could establish the connection directly, which can be especially helpful when too many levels exist between two hierarchical domains or when there is no hierarchical relationship at all. In this case the connection must be defined in the /etc/krb5/krb5.conf file on all hosts that use the connection.
The mapping of host names onto realm names is defined in the domain_realm section of the krb5.conf file.
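An illustrative fragment of the domain_realm section of /etc/krb5/krb5.conf (the realm and domain names are hypothetical):

```
[domain_realm]
    .example.com = EXAMPLE.COM
    example.com  = EXAMPLE.COM
```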
For principal names that include the FQDN of a host, it is important to match the string that describes the DNS domain name in /etc/resolv.conf. Although the DNS domain name itself can include both uppercase and lowercase letters, SEAM requires that you use only lowercase letters when entering the FQDN for a host principal.
When you are using SEAM, it is strongly recommended that DNS services already be configured and running on all hosts. If DNS is used, it must be enabled on all systems or on none of them. If DNS is available, the principal should contain the Fully Qualified Domain Name (FQDN) of each host.
The ports used for the KDC are defined in the /etc/services and /etc/krb5/krb5.conf files on every client, and in the /etc/krb5/kdc.conf file on each KDC. They include ports 88 and 750 for the KDC, and port 749 for the KDC administration daemon.
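The corresponding entries look like the following illustrative excerpts:

```
# /etc/services
kerberos        88/udp          kdc     # Kerberos V5 KDC
kerberos        88/tcp          kdc
kerberos-adm    749/tcp                 # Kerberos administration

# /etc/krb5/kdc.conf
[kdcdefaults]
        kdc_ports = 88,750
```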
Slave KDCs generate credentials for clients just as the master KDC does, and they provide backup if the master becomes unavailable. Each realm should have at least one slave KDC. It is possible, however, to have too many slave KDCs: each slave retains a copy of the KDC database, which increases the risk of a security breach, and the database must be propagated to each server, which can delay getting updates out to the whole realm.
The database that is stored on the master KDC must be regularly propagated to the slave KDCs.
All hosts that participate in the Kerberos authentication system must have their internal clocks synchronized within a specified maximum amount of time, which is known as the clock skew.
An encryption type is an identifier that specifies the encryption algorithm, encryption mode, and hash algorithms used in SEAM.
To restrict network access to the server using telnet, ftp, rcp, rsh, and rlogin to Kerberos-authenticated transactions only, edit the telnet entry in /etc/inetd.conf. Add the -a user option to the telnet entry to restrict access to those users who can provide valid authentication information:
telnet stream tcp nowait root /usr/sbin/in.telnetd telnetd -a user
Edit the ftp entry in /etc/inetd.conf. Add the -a option to the ftp entry to permit only Kerberos-authenticated connections:
ftp stream tcp nowait root /usr/sbin/in.ftpd ftpd -a
Disable Solaris entries for other services in /etc/inetd.conf. The entries for shell and login need to be commented out or removed:
# shell stream tcp nowait root /usr/sbin/in.rshd in.rshd
# login stream tcp nowait root /usr/sbin/in.rlogind in.rlogind
Restricting access to both master KDC servers and slave KDC servers so that the databases are secure is important to the overall security of the SEAM installation.