
9. Detecting Honeypots

9.1 Detecting Low-Interaction Honeypots

9.2 Detecting High-Interaction Honeypots

9.3 Detecting Rootkits

9.4 Summary

Although honeypots are a great resource for investigating adversaries or automatic exploitation via worms, the amount of information we can learn depends on how realistic the honeypots are. If an adversary breaks into a machine and immediately notices that she broke into a honeypot, her reaction might be to remove all evidence and leave the machine alone. On the other hand, if the fact that she broke into a honeypot remains undetected, she could use it to store attack tools and launch further attacks on other systems. This makes it very important to provide realistic-looking honeypots. For low-interaction honeypots, it is important to deceive network scanning tools, and for high-interaction honeypots, the whole operating system environment has to look very real. This is not a problem for a physical high-interaction honeypot, but for a system running under a virtual machine, it becomes more difficult to hide its nature.

In this chapter, we discuss several techniques for detecting different kinds of honeypots. We show how to detect low-interaction honeypots like Honeyd or nepenthes, as well as how to tell whether one has broken into a virtual high-interaction honeypot based on UML or VMware. To illustrate how adversaries typically proceed in attacking or detecting honeypots, we introduce several of the techniques and diverse tools available to help them.

Although honeypot detection might seem to be of more benefit to malicious adversaries, in computer security, it is important to understand all aspects of a system. If you don't understand the flaws of your technology, you will not be able to fix them.

9.1. Detecting Low-Interaction Honeypots

We already know that low-interaction honeypots do not provide a complete operating system environment to adversaries. So, clearly, one way to detect them is to notice that they cannot be broken into or that they do not provide interesting or complicated services. With low-interaction honeypots, it is also possible to create configurations that are completely unrealistic, such as running a Windows web server together with a Unix FTP server. However, low-interaction honeypots are most often used as network sensors and are not really meant to withstand targeted attempts at detecting them.

The main level of interaction with a low-interaction honeypot is via the network. In practice, this means that there is a physical machine with a real operating system in which the low-interaction honeypot is running. Resources are shared by the operating system between all processes that run on it. If we can find a way to take resources away from the honeypot process, we will notice that the honeypots are slowing down or have higher response latencies than before. If we could log into the operating system, we could start a CPU-intensive process to create this effect. However, as we usually don't have this level of access, we have to find ways to create the extra load via the network. For example, if the low-interaction honeypot system was colocated with a web server, expensive HTTP requests to the web server could slow down the low-interaction honeypots.

A very simple experiment to demonstrate this interaction is the following. Machine A runs the NetBSD operating system. On A, we deploy Honeyd to create a low-interaction virtual honeypot B on a second IP address. We run two different measurements. The first measurement uses the ping tool to send 100 ICMP ping requests to B.

$ ping -c 100 | tee ping.noload
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=255 time=0.443 ms
64 bytes from icmp_seq=1 ttl=255 time=0.430 ms
64 bytes from icmp_seq=2 ttl=255 time=0.434 ms
64 bytes from icmp_seq=3 ttl=255 time=0.421 ms

The second experiment places additional load on the NetBSD machine A by sending as many ping packets as possible to its IP address. This is called a ping flood and is available via the -f flag to root users. While A is receiving and handling all the extra network traffic, we measure the latency of 100 ICMP pings to B again:

$ ping -f
$ ping -c 100 | tee ping.loaded
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=255 time=0.541 ms
64 bytes from icmp_seq=1 ttl=255 time=0.595 ms
64 bytes from icmp_seq=2 ttl=255 time=0.802 ms

From the recorded data, we created a histogram with 30 disjoint intervals, also known as bins, and associated the latency of each ping reply with the corresponding bin in the histogram. For both measurements, loaded and unloaded, we expect a Gaussian distribution. The two distributions can be differentiated if their intervals ā ± σ(a) do not overlap, where ā and σ(a) are the mean and standard deviation of distribution a. The latencies for the unloaded ping experiments have a mean of 0.444 ms with a standard deviation of ±0.043, whereas the loaded latencies have a mean of 1.29 ms with a standard deviation of ±0.34. Because there is no significant overlap between these distributions, it is very simple to distinguish between the two cases with only a few samples. Figure 9.1 shows a visualization of our experimental results. As we can see, the graph shows no significant overlap either.
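The overlap criterion can be checked mechanically. The following sketch (the function name is ours) applies it to the means and standard deviations measured above:

```python
def intervals_overlap(mean_a, sd_a, mean_b, sd_b):
    """True if the intervals [mean - sd, mean + sd] of two
    latency distributions overlap."""
    return max(mean_a - sd_a, mean_b - sd_b) <= min(mean_a + sd_a, mean_b + sd_b)

# Values measured in the text: unloaded 0.444 ms +/- 0.043,
# loaded 1.29 ms +/- 0.34.
print(intervals_overlap(0.444, 0.043, 1.29, 0.34))  # False: distinguishable
```

With no overlap, even a handful of latency samples suffices to decide whether the host machine is under load.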

Figure 9.1. The latency distribution of ICMP ping requests to a Honeyd virtual honeypot for two different measurements. The first measurement records the latency when no additional load has been placed on the host machine. The second measurement records the latency when the host machine is receiving an extra load.

Our simple experiment has verified that we can measure a correlation of load on the host machine with the latencies provided from the virtual honeypot. Any other correlations can also be exploited. For example, if you were to create multiple honeypots via Honeyd or LaBrea, placing a load on one of them is going to affect the latencies and response times of all the others. This is an experiment that you can conduct in a fashion similar to the preceding one. If you were to ping flood one virtual honeypot, the others will become slower. On the other hand, the same experiment against physical servers is not going to show a correlation unless you manage to create congestion on your network.
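One way to quantify such a correlation is the Pearson correlation coefficient between a load indicator and the observed latencies. The sketch below uses made-up illustrative numbers (the latency values and variable names are our own) to show how strongly the latency of honeypot B tracks the load placed on a colocated honeypot C:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative (made-up) latencies of honeypot B, in ms, while a second
# honeypot C on the same host is alternately idle (0) and flooded (1).
load = [0, 0, 0, 1, 1, 1, 0, 1]
latency = [0.44, 0.45, 0.43, 1.2, 1.3, 1.1, 0.46, 1.25]
r = pearson(load, latency)
print(r)  # close to 1.0 -> B and C likely share a physical host
```

Against two truly separate physical servers, the same measurement should yield a coefficient near zero.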

Clearly, sending a ping flood to another network on the Internet is easy to detect and too expensive to conduct on a large scale. So far, we have tried to measure the correlation of CPU resources on different IP addresses and used that as a way to detect a honeypot. If we could easily derive the physical attributes of a machine just by looking at its network packets, we might be in an even better position to discern a virtual honeypot from an actual physical server. As it turns out, TCP provides us with some information directly reflecting the state of the underlying server. We are talking about the TCP timestamp option that is used by network stacks to determine Retransmission Timeout (RTO). The timestamp is updated at a certain frequency from the physical clock in the machine. We also know that all physical clocks have a certain clock skew. That is, they are gaining or losing time the longer they run.

By opening TCP connections to a host and recording the provided timestamp for each connection, it is possible to observe the skew over time. We expect that each physical system or operating system is going to exhibit a different kind of skew. Tadayoshi Kohno et al., researchers from the University of California, San Diego, used this idea to fingerprint physical devices over the network [49]. In one of their measurements, Honeyd was configured to simulate 100 Linux and 100 Windows machines. They measured the clock skew of all 200 honeypots and noticed that they were the same! This is rather unusual, as we would expect the clock skew on every machine to be slightly different. Because the clock skew was the same on all 200 machines, it was obvious that they were simulated by the same physical server and that the measurements reflected the clock skew of that single machine. This information disclosure has been fixed in recent versions of Honeyd by providing a different clock skew to each operating system and each virtual honeypot. However, as with the preceding ping latencies, any measurement that allows us to derive information about the underlying hardware makes a honeypot detectable.

A completely different kind of approach is to analyze the network responses from low-interaction honeypots for discrepancies. For example, LaBrea tries to tarpit incoming TCP connections. To do so, it makes use of legal but rarely used TCP techniques. Anyone who looks at a tcpdump of a LaBrea connection can tell immediately what is going on.

Systems like Honeyd, on the other hand, try to deceive adversaries into believing that they are talking to a real machine. Any discrepancies in its network behavior can be used to detect it. In the past, there have been several ways to detect a virtual honeypot created by Honeyd in an almost trivial fashion. In January 2004, Honeyd Security Advisory 2004-001 was released and provides us with one such example:

Topic: Remote Detection Via Simple Probe Packet

Version: All versions prior to Honeyd 0.8

Severity: Identification of Honeyd installations allows an
       adversary to launch attacks specifically against
       Honeyd. No remote root exploit is currently known.


Honeyd is a virtual honeypot daemon that can simulate virtual hosts on
unallocated IP addresses.

A bug in handling NMAP fingerprints caused Honeyd to reply to TCP
packets with both the SYN and RST flags set. Watching for replies, it
is possible to detect IP addresses simulated by Honeyd.

Although there are no public exploits known for Honeyd, the detection
of Honeyd IP addresses may in some cases be undesirable.

This sounds a bit dull, but what it means is that a single TCP packet with both SYN and RST set, sent to an open port, could solicit a reply from Honeyd. No other machine on the Internet would reply to such a packet. A single-packet fingerprint allows for efficient scanning of large portions of the Internet. As it turns out, some people in the underground have been doing exactly that. Three months before the security advisory was released, the Phrack High Council (PHC) wrote the following in a fake Phrack publication:

Project Honeynet Enumeration
by anonymous Phrack High Council Member

[...] As a token of our gratitude for your continued patronage of the
true underground scene, we would like to present a list of honeypots
for recreational packeting purposes.




The PHC people found a number of experimental Honeyd installations that one of the authors had been running at the University of Michigan. In case there is any confusion about the message, PHC is asking for others to launch denial of service attacks against these IP addresses. At the time the fake Phrack issue was published, the preceding honeypots had not been operating for over six months. That means that anyone who knew about the flaw could have mapped the Internet for more than nine months before an official fix to this problem was available.
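The probe behind this episode is trivial to construct: a single TCP segment with both the SYN and RST flag bits set. A minimal sketch using Python's struct module follows; the tcp_header helper is our own, and checksum computation and raw-socket transmission, which a real scanner would need, are omitted:

```python
import struct

# TCP flag bits as defined in RFC 793
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def tcp_header(sport, dport, seq, flags):
    """Build a minimal 20-byte TCP header with the given flag bits.
    The checksum field is left zero; a real probe tool would fill it
    in and send the segment over a raw socket."""
    offset_flags = (5 << 12) | flags  # data offset 5 words + flag bits
    return struct.pack("!HHIIHHHH", sport, dport, seq, 0,
                       offset_flags, 8192, 0, 0)

# The probe from the advisory: SYN and RST set in one segment.
probe = tcp_header(40000, 80, 0, SYN | RST)
flags = struct.unpack("!H", probe[12:14])[0] & 0x3F
print(flags == (SYN | RST))  # True: both flags present
```

No correct TCP stack answers such a segment, so any reply immediately pinpoints a vulnerable Honeyd installation.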

Our last example of detecting Honeyd via network probes comes from John Oberheide of Merit Network Inc., who noticed that Honeyd reassembled fragmented IP packets incorrectly. According to RFC 791, corresponding fragments are identified by matching the source address, destination address, identification number, and protocol number. Unfortunately, Honeyd did not implement the matching step correctly and forgot to compare the protocol number when reassembling fragmented packets; see the code snippet from Honeyd's ipfrag.c:

#define DIFF(a,b) do {              \
   if ((a) < (b)) return -1;        \
   if ((a) > (b)) return 1;         \
} while (0)

int
fragcompare(struct fragment *a, struct fragment *b)
{
   DIFF(a->ip_src, b->ip_src);
   DIFF(a->ip_dst, b->ip_dst);
   DIFF(a->ip_id, b->ip_id);
   /* a->ip_p, the protocol number, is never compared */

   return (0);
}
As you can see, the IP protocol field is not being compared. This resulted in Honeyd reassembling fragments with the same source address, destination address, and identification number but with different protocols. In normal operation, this does not affect the functionality of the honeypot because it is improbable that fragments would match on only three of the four fields. However, it is easy for an adversary to craft packets that exhibit this problem. Such packets would be reassembled by Honeyd, most likely resulting in a reply packet, whereas other operating systems would just discard them.
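The difference between the two matching rules can be modeled in a few lines. The sketch below (the type and function names are our own) shows two crafted fragments that agree on everything except the protocol number; Honeyd's buggy 3-tuple treats them as one packet, while the correct RFC 791 4-tuple keeps them apart:

```python
from collections import namedtuple

Fragment = namedtuple("Fragment", "ip_src ip_dst ip_id ip_p offset payload")

def honeyd_key(f):
    """The buggy 3-tuple match used by vulnerable Honeyd versions."""
    return (f.ip_src, f.ip_dst, f.ip_id)

def rfc791_key(f):
    """The correct 4-tuple from RFC 791: protocol is part of the match."""
    return (f.ip_src, f.ip_dst, f.ip_id, f.ip_p)

# Two fragments agreeing on everything except the protocol number
# (6 = TCP, 17 = UDP).
a = Fragment("src", "dst", 1234, 6, 0, b"SYN...")
b = Fragment("src", "dst", 1234, 17, 8, b"...rest")

print(honeyd_key(a) == honeyd_key(b))  # True: Honeyd would reassemble them
print(rfc791_key(a) == rfc791_key(b))  # False: a correct stack keeps them apart
```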

To test this theory, John Oberheide developed a fingerprinting tool called Winnie, which can be downloaded from

The way he chose to trigger this flaw was to split a TCP SYN packet into several fragments, where the protocol field of only one of the fragments was different from TCP. The same approach also works for ICMP ping packets. A correctly implemented network stack would drop these fragments because it cannot reassemble a complete packet; the fragment with the differing protocol field would remain missing. Not so for Honeyd: it would accept these fragments and reassemble them into a complete TCP SYN packet. The complete TCP SYN packet would then trigger a response: a SYN/ACK for an open port or a RST for a closed port.

To find Honeyd installations, an adversary need only send out a large number of these fragments to IP addresses all over the Internet and listen for any replies. Since real operating systems correctly implement fragment reassembly, the adversary can have high confidence that all responding hosts are Honeyd-based honeypots.

Detecting nepenthes installations is also possible. Since nepenthes only emulates the vulnerable parts of network services, this is rather easy to detect. An attacker could, for example, scan a given machine for open TCP ports. He could use nmap and enable version detection via the command line switch -sV. Nmap then tries to identify the network service and its version for an open TCP port. The following listing provides a sample output of scanning a default installation of nepenthes running on a Linux machine:

$ sudo nmap -sV

Starting Nmap 4.11 ( http// ) at 2007-01-17 14:46 PDT
Interesting ports on (
Not shown: 1658 closed ports
21/tcp open ftp
22/tcp open ssh OpenSSH 4.3 (protocol 2.0)
25/tcp open smtp?
42/tcp open nameserver?
80/tcp open http?
110/tcp open pop3?
135/tcp open msrpc?
139/tcp open netbios-ssn?
143/tcp open imap?
220/tcp open imap3?
443/tcp open https?
445/tcp open microsoft-ds?
465/tcp open smtps?
993/tcp open imaps?
995/tcp open pop3s?
1023/tcp open netvenuechat?
1025/tcp open NFS-or-IIS?
2105/tcp open eklogin?
3372/tcp open msdtc?
5000/tcp open UPnP?
10000/tcp open snet-sensor-mgmt?
17300/tcp open kuang2?


The suspicious machine has many open TCP ports and, most importantly, a really uncommon combination of open network ports. Besides common network services like FTP, SSH, HTTP, and POP3, the host also seems to have several Windows services running. Furthermore, several of the open network ports are rather uncommon, such as TCP port 17300, which is commonly used by the backdoor left by the Kuang2 virus. Such a configuration would presumably not be used by a legitimate server and is thus a strong hint that this system could be a honeypot.
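This kind of plausibility check is easy to automate. The following sketch flags implausible port combinations like the one in the scan above; the service groupings, threshold, and function name are our own illustrative choices, not part of any existing tool:

```python
# Hypothetical service groups; port numbers taken from the nmap output.
UNIX_PORTS = {22}                      # OpenSSH
WINDOWS_PORTS = {135, 139, 445, 1025}  # msrpc, netbios-ssn, microsoft-ds
BACKDOOR_PORTS = {17300}               # Kuang2 virus backdoor

def honeypot_hints(open_ports):
    """Collect reasons why an open-port combination looks implausible."""
    hints = []
    if open_ports & UNIX_PORTS and open_ports & WINDOWS_PORTS:
        hints.append("Unix and Windows services on one host")
    if open_ports & BACKDOOR_PORTS:
        hints.append("known backdoor port open")
    if len(open_ports) > 15:
        hints.append("unusually many open ports")
    return hints

# The open ports from the nepenthes scan shown above.
scan = {21, 22, 25, 42, 80, 110, 135, 139, 143, 220, 443, 445,
        465, 993, 995, 1023, 1025, 2105, 3372, 5000, 10000, 17300}
print(honeypot_hints(scan))  # all three hints fire for this scan
```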

Moreover, nmap is able to identify the version of only one service (OpenSSH on TCP port 22). The ports on which nepenthes emulates vulnerabilities cannot be correctly identified, since nepenthes does not offer enough interaction for identification. This is another hint for an attacker to conclude that this host is in fact a honeypot.

A clear sign that a given host is running nepenthes can be found if you just connect to TCP port 21:

$ nc 21
220 ---freeFTPd 1.0---warFTPd 1.65---

Normally, you would expect to get the banner of the FTP server. But nepenthes replies with two different banners: one for freeFTPd and the other for warFTPd. Both FTP servers have vulnerabilities that are emulated by nepenthes, and by sending this combined banner, nepenthes tricks common automated exploits. But a human can clearly identify this uncommon response and conclude that this is indeed a honeypot.
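A detection script need only read the greeting. The sketch below (the daemon list and function name are ours) classifies the banner shown above; a real tool would obtain the banner over a TCP connection to port 21 rather than from a string:

```python
# FTP daemon names that nepenthes emulates in its combined banner; a
# real server would announce at most one of these in its greeting.
KNOWN_DAEMONS = ["freeFTPd", "warFTPd"]

def looks_like_nepenthes(banner):
    """A 220 greeting naming more than one FTP daemon is a strong
    hint that the responder is a nepenthes honeypot."""
    found = [d for d in KNOWN_DAEMONS if d in banner]
    return banner.startswith("220") and len(found) > 1

print(looks_like_nepenthes("220 ---freeFTPd 1.0---warFTPd 1.65---"))  # True
print(looks_like_nepenthes("220 ProFTPD 1.3.0 Server ready."))        # False
```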
