
9.2. Host Security

Going backward from applications, host security is the first layer we encounter. Though we will continue to build additional defenses, the host must be secured as if no additional protection existed. (This is a recurring theme in this book.)

9.2.1. Restricting and Securing User Access

After the operating system installation, you will discover many active shell accounts in the /etc/passwd file. For example, each database engine comes with its own user account. Few of these accounts are needed. Review every active account and cancel the shell access of each account not needed for server operation. To do this, replace the shell specified for the user in /etc/passwd with /bin/false.
Restrict the users to whom you provide shell access. Users who are not security conscious represent a threat. Work to provide some other way for them to do their jobs without shell access. Most users only need a way to transfer files and are quite happy using FTP for that. (Unfortunately, FTP sends credentials in plaintext, making it easy to break in.)

Finally, secure the entry point for interactive access by disabling insecure plaintext protocols such as Telnet, leaving only secure shell (SSH) as a means for host access. Configure SSH to refuse direct root logins, by setting PermitRootLogin to no in the sshd_config file. Otherwise, in an environment where the root password is shared among many administrators, you may not be able to tell who was logged on at a specific time.
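A minimal sshd_config fragment reflecting this advice might look as follows; the directives are standard OpenSSH, and the file usually lives in /etc/ssh/sshd_config:

```
# /etc/ssh/sshd_config (fragment)
# refuse direct root logins; administrators log in as themselves and use su or sudo
PermitRootLogin no
# accept only protocol version 2; version 1 has known weaknesses
Protocol 2
```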

If possible, do not allow users to use a mixture of plaintext (insecure) and encrypted (secure) services. For example, in the case of the FTP protocol, deploy Secure FTP (SFTP) where possible. If you absolutely must use a plaintext protocol and some of the users have shells, consider opening two accounts for each such user: one account for use with secure services and the other for use with insecure services. Interactive logging should be forbidden for the latter; that way a compromise of the account is less likely to lead to an attacker gaining a shell on the system.

9.2.2. Deploying Minimal Services

Every open port on a host represents an entry point for an attacker. Closing as many ports as possible increases the security of a host. Operating systems often have many services enabled by default. Use the netstat tool on the command line to retrieve a complete listing of active TCP and UDP ports on the server:

# netstat -nlp
Proto Recv-Q Send-Q Local Address    Foreign Address   State      PID/Program name
tcp        0      0*          LISTEN     963/mysqld
tcp        0      0*            LISTEN     834/xinetd
tcp        0      0*           LISTEN     834/xinetd
tcp        0      0*            LISTEN     13566/httpd
tcp        0      0*            LISTEN     1060/proftpd
tcp        0      0*            LISTEN     -
tcp        0      0*           LISTEN     834/xinetd
tcp        0      0*            LISTEN     979/sendmail
udp        0      0*                       650/syslogd

Now that you know which services are running, turn off the ones you do not need. (You will probably want port 22 open so you can continue to access the server.) Turning services off permanently is a two-step process. First you need to turn the running instance off:

# /etc/init.d/proftpd stop

Then you need to prevent the service from starting the next time the server boots. The procedure depends on the operating system, but there are generally two places to look: on Unix systems a service is either started at boot time, in which case it is permanently active, or started on demand, through the Internet services daemon (inetd or xinetd).
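For example, on a Red Hat-style system the boot-time case might be handled as follows; the service name is an illustration, and Debian-style systems use update-rc.d instead:

```
# permanently disable a boot-time service (Red Hat-style systems)
# chkconfig proftpd off
# the Debian-style equivalent:
# update-rc.d -f proftpd remove
# for on-demand services, set "disable = yes" in the relevant file
# under /etc/xinetd.d/ and restart xinetd:
# /etc/init.d/xinetd restart
```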

Reboot the server (if you can) whenever you make changes to the way services work. That way you will be able to check everything is configured properly and all the required services will run the next time the server reboots for any reason.

Uninstall any software you do not need. For example, you will probably not need the X Window System on a web server, nor KDE, GNOME, and related programs.

Though desktop-related programs are mostly benign, you should uninstall some of the more dangerous tools such as compilers, network monitoring tools, and network assessment tools. In a properly run environment, a compiler on a host is not needed. Provided you standardize on an operating system, it is best to do development and compilation on a single development system and to copy the binaries (e.g., Apache) to the production systems from there.

9.2.3. Gathering Information and Monitoring Events

It is important to gather information you can use to monitor the system or to analyze events after an intrusion takes place.

Synchronize clocks on all servers (using the ntpdate utility). Without synchronization, logs may be useless.
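One common way to keep clocks synchronized is a periodic ntpdate run from cron; the hourly schedule and the pool.ntp.org server here are assumptions, so substitute your own time source:

```
# /etc/crontab fragment: synchronize the clock once an hour
0 * * * * root /usr/sbin/ntpdate -s pool.ntp.org
```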

Here are the types of information that should be gathered:

System statistics

Having detailed statistics on the behavior of the server is very important. In a complex network environment, a network management system (NMS) collects vital system statistics via the SNMP protocol, stores them, and acts when thresholds are reached. Having some form of an NMS is recommended even with smaller systems; if you can't justify such an activity, the sysstat package will probably serve the purpose. This package consists of several binaries executed by cron to probe system information at regular intervals, storing data in binary format. The sar binary is used to inspect the binary log and produce reports. Learn more about sar and its switches; the amount of data you can get out of it is incredible. (Hint: try the -A switch.)
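Assuming the sysstat package is installed and its cron jobs have been collecting data, a couple of typical sar invocations look like this:

```
# CPU utilization recorded today
sar -u
# everything sysstat collected (the -A switch mentioned above)
sar -A
# sample live CPU utilization: one-second intervals, three samples
sar -u 1 3
```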

Integrity validation

Integrity validation software, also often referred to as host intrusion detection software, monitors files on the server and alerts the administrator (usually in the form of a daily or weekly report) whenever a change takes place. It is the only mechanism to detect a stealthy intruder. The most robust integrity validation software is Tripwire ( It uses public-key cryptography to prevent signature database tampering. Some form of integrity validation software is absolutely necessary for every server. Even a simple approach such as using the md5sum tool (which computes an MD5 hash for each file) will work, provided the resulting hashes are kept on a different computer or on read-only media.
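A minimal sketch of the md5sum approach, with illustrative file names: record a baseline of hashes, move it to safe storage, and re-check against it later:

```shell
# work in a scratch directory with a stand-in "binary"
mkdir -p /tmp/integrity.demo
printf 'pretend binary contents\n' > /tmp/integrity.demo/app.bin
# 1. record the baseline (store this file off-host or on read-only media)
md5sum /tmp/integrity.demo/app.bin > /tmp/integrity.demo/baseline.md5
# 2. later, verify; md5sum -c reports OK or FAILED for each file
md5sum -c /tmp/integrity.demo/baseline.md5
```

In practice the baseline would cover directories such as /bin, /sbin, and /usr, and the verification step would run from cron, mailing its report to the administrator.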

Process accounting

Process accounting enables you to log every command executed on a server (see Chapter 5).

Automatic log analysis

Except maybe in the first couple of days after installing your shiny new server, you will not review your logs manually. Therefore, you must find some other way to keep an eye on events. Logwatch ( looks at the log files and produces an activity report on a regular basis (e.g., once a day). It is a modular Perl script, and it comes preinstalled on Red Hat systems. It is great for summarizing what has been going on, and unusual events become easy to spot. If you want something that works in real time, try Swatch ( Swatch and other log analysis programs are discussed in Chapter 8.

9.2.4. Securing Network Access

Though a network firewall is necessary for every network, individual hosts should have their own firewalls for the following reasons:

  • In case the main firewall is misconfigured, breaks down, or has a flaw

  • To protect from other hosts on the same LAN and from hosts from which the main firewall cannot protect (e.g., from an internal network)

On Linux, a host-based firewall is configured through the Netfilter kernel module ( In user space, the binary used to configure the firewall is iptables. As you will see, it pays off to spend some time learning how Netfilter works. On a BSD system, ipfw and ipfilter can be used to configure a host-based firewall. Windows server systems have similar functionality, but it is configured through a graphical user interface.

Whenever you design a firewall, follow the basic rules:

  • Deny everything by default.

  • Allow only what is necessary.

  • Treat internal networks and servers as hostile and give them only minimal privileges.

What follows is an example iptables firewall script for a dedicated server. It assumes the server occupies a single IP address and the office occupies a fixed address range. It is easy to follow and to modify to suit other purposes. Your actual script should contain the IP addresses appropriate for your situation. For example, if you do not have a static IP address range in the office, you may need to keep the SSH port open to everyone; in that case, you do not need to define the address range in the script.

# IP address of this machine (placeholder; replace with your server's address)
# IP range of the office network (placeholder; replace with your own range)
# location of the iptables binary
# flush existing rules
$IPT -F INPUT
# accept traffic from this machine
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A INPUT -s $ME -j ACCEPT
# allow access to the HTTP and HTTPS ports
$IPT -A INPUT -m state --state NEW -d $ME -p tcp --dport 80 -j ACCEPT
$IPT -A INPUT -m state --state NEW -d $ME -p tcp --dport 443 -j ACCEPT
# allow SSH access from the office only
$IPT -A INPUT -m state --state NEW -s $OFFICE -d $ME -p tcp --dport 22 -j ACCEPT
# To allow SSH access from anywhere, comment the line above and uncomment
# the line below if you don't have a static IP address range to use
# in the office
# $IPT -A INPUT -m state --state NEW -d $ME -p tcp --dport 22 -j ACCEPT
# allow traffic related to connections already established
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# log and deny everything else
$IPT -A INPUT -j LOG
$IPT -A INPUT -j DROP

As you can see, installing a host firewall can be very easy to do, yet it provides excellent protection. You may also consider logging unrelated outgoing traffic: on a dedicated server, such traffic may be a sign of an intrusion. To use this technique, you need to be able to tell what constitutes normal outgoing traffic. For example, the server may have been configured to download operating system updates automatically from the vendor's web site; that is an example of normal (and required) outgoing traffic.

If you are configuring a firewall on a server that is not physically close to you, ensure you have a way to recover from a mistake in firewall configuration (e.g., cutting yourself off). One way to do this is to activate a cron script (before you start changing the firewall rules) to flush the firewall configuration every 10 minutes. Then remove this script only after you are sure the firewall is configured properly.
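A sketch of such a safety net; the path to iptables and the 10-minute schedule are assumptions to adjust for your system:

```
# temporary crontab entry: flush all filter rules every 10 minutes
# */10 * * * * /sbin/iptables -F
# remove the entry only once you have confirmed you can still log in
```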

9.2.5. Advanced Hardening

For systems intended to be highly secure, you can take that final step and patch the kernel with one of the specialized hardening patches, such as grsecurity ( or SELinux.

These patches will enhance the kernel in various ways. They can:

  • Enhance kernel auditing capabilities

  • Make the execution stack nonexecutable (which makes buffer overflow attacks less likely to succeed)

  • Harden the TCP/IP stack

  • Implement a mandatory access control (MAC) mechanism, which provides a means to restrict even root privileges

  • Perform dozens of other changes that incrementally increase security

I mention grsecurity's advanced kernel-auditing capabilities in Chapter 5.

Some operating systems have kernel-hardening features built in by default. For example, Gentoo supports grsecurity as an option, while the Fedora developers prefer SELinux. Most systems do not have these features; if they are important to you, consider using one of the operating systems that support them. Such a decision will save you a lot of time. Otherwise, you will have to patch the kernel yourself. The biggest drawback of using a kernel patch is that you must start with a vanilla kernel, then patch and compile it every time you need to upgrade. If this is done without a clear security benefit, the kernel patches can be a great waste of time. Getting mandatory access control right, in particular, takes a lot of time and nerves.

To learn more about kernel hardening, see the following:

9.2.6. Keeping Up to Date

Maintaining a server after it has been installed is the most important thing for you to do. Because all software is imperfect and vulnerabilities are discovered all the time, the security of the software deteriorates over time. Left unmaintained, it becomes a liability.

The ideal time to think about maintenance is before the installation. What you really want is to have someone maintain that server for you, without you even having to think about it. This is possible, provided you:

  1. Do not install software from source code.

  2. Choose an operating system that supports automatic updates (e.g., Red Hat and SUSE server distributions) or one of the popular free operating systems that are promptly updated (Debian, Fedora, and others).

For most of the installations I maintain, I do the following: I install Apache from source, but I install and maintain all other packages through mechanisms of the operating system vendor. This is a compromise I can live with. I usually run Fedora Core on my (own) servers. Updating is as easy as doing the following, where yum stands for Yellowdog Updater Modified:

# yum update

If you are maintaining more than one server, it pays to create a local mirror of your favorite distribution and update servers from the local mirror. This is also a good technique to use if you want to isolate internal servers from the Internet.
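A sketch of the mirroring approach; the host names, paths, and rsync module below are assumptions, and your vendor will document its own mirror locations:

```
# nightly cron job: mirror the vendor's updates directory to a local web server
# rsync -av rsync://mirror.example.com/fedora-updates/ /var/www/html/updates/
#
# clients then point at the mirror, e.g., in /etc/yum.repos.d/local.repo:
# [local-updates]
# name=Local updates mirror
# baseurl=http://updates.internal/updates/
# enabled=1
```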
