
A Testbed for Study of Distributed Denial of Service Attacks

Andrew H. Barkley
Steve Liu
Quoc Thong Le Gia
Matt Dingfield
Yashodhan Gokhale


Distributed denial of service attacks are a growing issue in Internet security, as simple tools for their creation have become available. A controlled environment for analysis and defense of orchestrated attacks similar to those in the wild is a necessary first step in their prevention. In this paper, we present a testbed for generation, control and defense of DDoS attacks. Our experiments show that, with proper countermeasures in place, it is possible to protect public network resources from DDoS attacks.

The Attack

A Denial of Service (DoS) attack on a system is a malicious misuse of resources that leaves the system unable to perform adequately. By generating an excessive number of legitimate requests to the target system, a DoS attack attempts to consume the computing resources of the system rather than compromise its contents. DoS attacks are thus more a “stick in the spokes” style of sabotage than the “breaking and entering” techniques that are the traditional concerns of computer security. Misuse of resources is a difficult problem to guard against, as a myriad of possible misuses must be considered, and it is very difficult to differentiate legitimate requests from malicious ones, since their formats are identical.

A Distributed Denial of Service (DDoS) attack is an attack in which processes on many machines work together to perform a DoS attack on a target machine. Examples of such attacks are the ones that rendered many popular commercial websites inaccessible in early 2000. These attacks exploited a weakness in the Transmission Control Protocol (TCP), as its mechanism for imposing reliability on an unreliable medium can easily be misused. A simplification of the attack is that many machines send forged TCP “SYN” packets at a target machine in unison, at a combined rate much higher than the target machine can service. This “SYN flood” attack has many side effects that congest both the network and the protocol stack of the target machine. Such coordinated attacks generally require intrusion into the many attacking machines so that attacker processes can be installed.
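To make the resource-consumption arithmetic concrete, the following sketch estimates how quickly a fixed SYN backlog is exhausted by forged SYNs. The backlog size, SYN rate, and timeout values are illustrative assumptions, not measurements from our testbed.

```python
def backlog_fill_time(backlog, syn_rate, timeout):
    """Estimate the time in seconds until a listen backlog of
    `backlog` half-open slots is exhausted by forged SYNs arriving
    at `syn_rate` packets per second, given that half-open entries
    expire after `timeout` seconds. If expiry keeps up with
    arrivals, the backlog never fills and None is returned."""
    if syn_rate * timeout < backlog:
        return None  # steady-state occupancy stays below capacity
    # Otherwise the backlog fills before the first entry expires.
    return backlog / syn_rate
```

Even a modest forged-SYN rate exhausts a typical backlog in seconds, which is why the combined rate of many attacking machines is so effective.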

The possibilities of similar DDoS attacks seem endless, but they have an important feature in common: extreme amounts of traffic directed at a particular target machine.

The extreme traffic directed toward a target machine under a DDoS attack creates network congestion around the target machine that will be called a “hot spot” [2]. Naturally, the traffic destined for the hot spot is called “hot traffic,” while other traffic is called “cold traffic.” A hot spot often causes congestion of neighboring routers, so cold traffic can be greatly affected by attacks as well.

The Testbed

The problem of DDoS attacks suggests the need for a testbed environment in which DDoS attacks can be safely coordinated and attack detection and prevention schemes can be safely tested, without affecting operational networks.

Desirable characteristics of the testbed are:

  1. Central control and monitoring of DDoS attacks
  2. Use of inexpensive components that are logically equivalent to their realistic counterparts with respect to experiments
  3. Ability to tie in prevention schemes
  4. Integration of the monitoring and prevention data

In our experiment, we built a testbed using x86 machines running Linux. Each of these machines has multiple network interface cards, allowing machines to serve as routers, attack generators, or attack targets. The relatively low routing performance of the test machines has not caused any problems, and the configurability of the machines greatly simplifies the monitoring process.

The Linux kernel has simple facilities to manipulate packet routes between machines of a multi-staged network. Although this feature has potential as the basis for a prevention system in itself, the Linux kernel's firewall facility offers a simpler feature that allows finer control and monitoring of traffic flow. The firewall facility allows installation of rules for counting and forwarding packets of arbitrary protocol. Native system calls are necessary to install and remove filters, but their statistics are available in piped output that behaves as a file. Fortunately, these existing kernel facilities provide the bulk of the computation-intensive component of the system, so all that really needs to be custom-built for the monitoring and prevention component is a distributed collection and control framework.
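As an illustration of consuming those file-like counter statistics, the sketch below parses the per-rule packet counters out of a verbose firewall listing. The listing format here is a hypothetical iptables-style `-v -x` layout; the exact listing produced by the kernel firewall used in the testbed differs in detail.

```python
def parse_packet_counts(listing):
    """Extract (packets, bytes, target) tuples from a verbose
    firewall listing. Data rows are recognized by starting with
    two integer counter columns; headers and chain banners are
    skipped. The format is illustrative, not the testbed's own."""
    counts = []
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0].isdigit() and fields[1].isdigit():
            counts.append((int(fields[0]), int(fields[1]), fields[2]))
    return counts
```

Polling such counters at fixed intervals and differencing successive samples yields the per-rule packet rates the router process needs.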

The collection and control framework is implemented by running a “router process” on each machine in the testbed. The router process implements an ASCII text stream protocol with which to communicate with its neighbors in order to control them and gather information. For simplicity, our testbed is organized into a single attack target and multiple attack sources forming a tree topology (figure 1), with the target as the root of the tree. The tree topology constraint allows simpler analysis, as there are no redundant routes to the target. This simplification costs little generality: even in dynamic router configurations of more complex topology, the shortest-path spanning tree is fixed with respect to the target, ignoring load balancing across redundant paths.
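A minimal sketch of such a router process's command handling might look like the following. The command names and reply format are illustrative, not the actual protocol used in the testbed.

```python
class RouterProcess:
    """Sketch of the per-machine router process: it parses
    one-line ASCII commands from a neighbor and replies with a
    one-line status."""

    def __init__(self):
        self.monitoring = False   # whether firewall counters are active
        self.packet_count = 0     # hot-traffic packets seen so far

    def handle(self, line):
        fields = line.strip().split()
        if not fields:
            return "ERR empty command"
        cmd = fields[0]
        if cmd == "ENABLE":       # begin monitoring hot traffic
            self.monitoring = True
            return "OK"
        if cmd == "DISABLE":      # stop monitoring
            self.monitoring = False
            return "OK"
        if cmd == "POLL":         # report gathered statistics
            return "COUNT %d" % self.packet_count
        return "ERR unknown command"
```

A one-line text protocol like this keeps the per-router footprint small and makes the framework easy to drive from scripts or the GUI.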

Figure 1: Example Network Topology

Given the constraint of a single target machine, each of the router processes is configured with neighbor information, labeling each neighbor as either the upstream machine, the next-hop router toward the target machine (null on the target itself), or a downstream machine that uses the local router to reach the target machine (null at the leaves of the tree). Both congestion control commands and experimental instructions are propagated from the root down to the leaves of the hierarchy. The DDoS experiments can be externally enabled, disabled, and polled for statistics through commands to the root machine of the tree.
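The root-to-leaf propagation just described can be sketched as a simple recursion over the downstream-neighbor configuration. The machine names and command string below are invented for illustration.

```python
def propagate(node, command, downstream, deliver):
    """Deliver `command` to `node`, then recurse into each of its
    downstream neighbors. `downstream` maps each machine to the
    machines that use it as their next hop toward the target
    (an empty list, i.e. null, at the leaves of the tree)."""
    deliver(node, command)
    for child in downstream.get(node, []):
        propagate(child, command, downstream, deliver)
```

Because every machine knows only its immediate neighbors, a command issued at the root reaches the whole tree without any machine needing the full topology.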

There is a need for two other components in the testbed: attack generation and a means of attack detection to act as a trigger for the router processes. The type of attack chosen for the initial system is valid HTTP traffic, so the attackers need to make web page requests from the target machine acting as a web server. This has the advantage that it simulates both a possible DDoS attack and real-world worst-case loads on the web server.

The web server chosen to act in attack detection and triggering of the router processes is the open-source Apache HTTP server suite. To minimize changes to the Apache structure, and to avoid excessive performance loss, a simple modification was made to the Apache source code so that it sends a UDP packet to its host machine on a known port upon receiving each web page request. Another process listening to the designated UDP port counts these requests over time intervals, and triggers the monitoring system when the rate of requests exceeds a threshold. The process also implements a simple text stream protocol to retrieve the time interval statistics.
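The interval-counting logic of that listener process can be sketched as follows. It is driven by explicit timestamps rather than a live UDP socket so it can be exercised offline; the interval and threshold values are illustrative, and the first interval is taken to start at time zero.

```python
class RequestRateMonitor:
    """Sketch of the process that receives the web server's
    per-request UDP notifications, counts them over fixed
    intervals, and fires a trigger when an interval's count
    exceeds the threshold."""

    def __init__(self, interval, threshold):
        self.interval = interval      # interval length in seconds
        self.threshold = threshold    # requests per interval
        self.window_start = 0.0
        self.count = 0

    def notify(self, now):
        """Record one request at time `now`; return True when the
        interval just closed exceeded the threshold."""
        if now - self.window_start >= self.interval:
            triggered = self.count > self.threshold
            self.window_start = now
            self.count = 1
            return triggered
        self.count += 1
        return False
```

With a one-second interval and a threshold of 60, this reproduces the server-side detection behavior used in the experiment described later.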

The last component is an attacker process that can be remotely configured and triggered with another simple text stream protocol. Its parameters are the number of threads with which to concurrently make requests, the root URL of the website from which to begin traversing the site, and the duration of the attack. The process can be configured for few threads to simulate normal traffic, and many threads to increase the load on the web server. When many attacker processes are triggered in unison, the desired DDoS behavior results.
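A minimal sketch of the attacker process's threading structure follows, with the page fetch injected as a parameter so the sketch runs without a web server. The real process traverses the site from the root URL and runs for a configured duration rather than a fixed request count.

```python
import threading

def run_attack(fetch, root_url, n_threads, n_requests):
    """Sketch of the attacker process: `n_threads` workers each
    issue `n_requests` fetches of `root_url` concurrently.
    Few threads approximate normal traffic; many threads load
    the server, and many such processes in unison form the DDoS."""
    def worker():
        for _ in range(n_requests):
            fetch(root_url)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Scaling the thread count per process, and the number of triggered processes, is what lets the same component play both the normal client and the attacker.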

Data is centralized and attacks are launched from a GUI application (figure 2). This interface provides graphical feedback with current data while an experiment runs, and allows exporting the experimental data to a file for later analysis.

Figure 2: Router 1 in the GUI

A Prevention Algorithm

Monitoring a DDoS attack has limited use in the absence of a prevention scheme, as the target system will soon be saturated by the attack as a hot spot develops, yielding no further useful information and usually disabling the target system. With this in mind, the router processes were designed with a few configurable parameters to control the firewall facility of the operating system kernel. If a threshold rate of packets of hot traffic is exceeded on a router, the router will drop all hot traffic until the rate is below threshold for a certain duration. Thus, offending hot traffic will be choked off away from the hot spot while cold traffic is unaffected. Of course, it is ideal to choke off the traffic as close to the source as possible. The blocking duration is meant to allow the hot spot to disappear, as the congestion will dissipate when no longer maintained by hot traffic. Configuration of this threshold and blocking duration are externally performed through the router process' communication protocol. This is an extremely simple and effective algorithm as long as the threshold and blocking duration for each router are chosen wisely in advance.

The dynamic behavior of the system can be stated as three steps:

  1. The target process detects the onset of an attack and enables the router processes to prevent it.
  2. Upon being enabled, the router processes monitor for traffic of the given protocol and destination. If the rate of that traffic exceeds a threshold rate through a router, the router will block the traffic for a short while.
  3. Traffic returns to normal, and the routers no longer block packets. If the traffic is normal for a suitable amount of time, the routers can safely cease monitoring.
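The steps above can be sketched as a small state machine, driven here by per-interval rate samples rather than live packet counts; the threshold and duration values in the usage below are the ones from our experiment, but the class itself is an illustrative simplification.

```python
class ChokeFilter:
    """Sketch of the per-router choke: when the hot-traffic rate
    exceeds `threshold` packets per second, all hot traffic is
    dropped for `block_duration` seconds, letting the hot spot
    dissipate while cold traffic passes unaffected."""

    def __init__(self, threshold, block_duration):
        self.threshold = threshold
        self.block_duration = block_duration
        self.blocked_until = 0.0

    def observe(self, now, rate):
        """Feed one rate sample at time `now`; return True while
        hot traffic should be dropped."""
        if rate > self.threshold:
            self.blocked_until = now + self.block_duration
        return now < self.blocked_until
```

Because blocking ends purely on the timer, an attack that outlasts the blocking duration is detected and blocked again, which is exactly the oscillation seen in the third wave of the experiment.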

The effectiveness of this algorithm is largely dependent on the topology of the routers with respect to the attacking traffic, and the choice of the threshold and blocking duration parameters.

It is of note that decoupling the detection and monitoring/prevention components allows for a prevention model in which attacks are detected by target processes that then enable the monitoring/prevention system for that type of hot traffic. This reduces the necessary work by the routers. However, there are some DDoS attacks such as “ICMP smurfing” in which there is no target process beyond perhaps the TCP/IP stack to detect a hot spot, as the destructive side-effect is meant only to be consumption of network bandwidth by the hot spot. The disadvantage of this situation is that the routers would constantly need to monitor for such an attack to prevent it; however, preventing it has much less impact as less valid hot traffic is present to be blocked. 

Comparison with Available Solutions

To prevent DDoS attacks, there are several solutions proposed by the CERT advisory board. The solutions are classified into three categories: detection, prevention, and response [3][4][5]. Detection involves a variety of tools to detect the presence of DDoS tools such as trin00 and TFN on a system. Prevention includes setting up firewalls and packet filters and increasing the security of the server. Response covers how to protect data and recover the system after an attack.

All of the solutions proposed by CERT and other Internet organizations lack one important feature: defense of the system during the attack, so that the server remains intact for normal traffic while the attacking traffic is filtered out. Deploying a firewall that reads the source IP address of every incoming packet can filter out the unwanted packets, but under a high traffic load it imposes excessive delay on the routers, so a hot spot can still form.

Rather than preventing the DDoS attack indirectly like solutions proposed by CERT, our system can prevent the attack actively without reading the source IP address of every incoming packet. By counting the packets on every router that forwards packets to the target, we can identify the source of unusually high traffic and prevent it from maintaining a hot spot. Because we do not filter by the IP address, our system can prevent DDoS attacks that fake the source IP address of the packets as in SYN-floods or smurfing.

A Simple Experiment

A proof-of-concept experiment needs to exhibit four characteristics:

  1. Correct behavior in the absence of an attack
  2. Attack detection
  3. Blocking of attacking traffic
  4. Return to normal behavior when the attack subsides

Figure 1 shows the topology of the test network. Lines connecting machines indicate broadcast networks. All machines are of similar configuration, and all run the router process so that traffic information can be logged when the system is enabled. We used a low-end machine for the root node to make it easier to overload with a DDoS attack. That machine runs the modified web server that acts as the target. The two designated router machines are configured with a blocking threshold of 200 packets per second and a blocking duration of ten seconds. The client machines run the attacker software. Clients 1 and 3 use the attack software to provide normal traffic levels to the web server through both routers. The two clients of router 1 serve as attackers, so only router 1 should perform any blocking.

The type of attack used is HTTP based, using authentic traffic. The web server is given a baseline of traffic that it should handle normally. Three 30-second waves of traffic are directed at the web server through router 1. Areas of interest in the attacks are shown with thick lines in figure 3. In figure 4, the line for router 1 is thick when it is blocking. The first wave, from seconds 36 to 66, is intentionally below the server's detection threshold of 60 requests per second, and is served normally by the web server. The second wave, from seconds 117 to 147, is above threshold, and the web server triggers the routing system to begin monitoring HTTP traffic, resulting in router 1 blocking the last ten seconds of the wave. The normal traffic from client 1 is also blocked, as visible in both figures.

Figure 3: Example Web Server Statistics

Figure 4: Example Router Statistics

The third wave from seconds 196 to 226 is also above threshold, and is detected early by router 1. However, because the wave is 30 seconds long and the blocking duration is ten seconds, the wave is detected and blocked twice.


The experiment is indeed successful in exhibiting the four desired experimental characteristics. It also shows some practical use for the system as it is, because a similar experiment with the routers set to larger thresholds managed to reliably crash the server machine on the second wave. Therefore, even small configurations like this could aid in system availability.

An interesting behavior is exhibited by the attackers when their traffic is being blocked: they reduce their packet output, as shown in figure 4. Because the TCP packets from the attackers and well-behaved clients are simply dropped during blocking, the attackers themselves slow down as their TCP/IP stacks wait for acknowledgements to their requests from the web server. This causes the oscillation of blocking witnessed in the third attack wave. Interestingly enough, this is a sort of reverse DoS attack as a prevention mechanism. Under other flooding attacks that are logically connectionless with respect to the attacker, the router would be able to continue to detect an attack during blocking, as there would still be incoming packets of the specific type to count, although they would be dropped thereafter.

Future Work

The testbed currently is limited to HTTP traffic. It can be readily extended to other types of traffic to investigate attacks related to other protocols and other service types.

Another large limitation is in the algorithm itself, which currently requires intelligent configuration by hand of the router thresholds and blocking durations. More intelligent algorithms to automatically tune those parameters for desired performance are under development. These algorithms will actively tailor the prevention system to be as specific as possible in blocking attacks, actively improving the service to as many valid hosts as possible. The performance of these algorithms is of critical importance when the experiment is scaled to a larger tree with multiple levels of routers, as the routers will need to communicate with each other under high-load conditions.

An undesirable effect of the algorithm is in configurations with multiple tiers of routers. If a packet successfully reaches the target, it is counted by every router along the path. If the routers are of similar threshold, it is likely that multiple routers along the path will block concurrently. This could be less than ideal if only one router is needed to block the offending traffic, as valid hot traffic is blocked as well. However, the worst-case is still better than the results of the DDoS attack even for the simple algorithm.
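The multi-tier counting effect can be illustrated with a small calculation: each flow's rate is charged to every router on its path, so an upstream router sees the sum over its subtree and tends to trip alongside its children. The flow names, rates, and paths below are invented for illustration.

```python
def rates_per_router(flows):
    """Charge each flow's packet rate to every router on its path
    toward the target (nearest router first). `flows` maps a flow
    name to a (rate, path) pair; the result maps each router to
    the total rate it counts."""
    seen = {}
    for rate, path in flows.values():
        for router in path:
            seen[router] = seen.get(router, 0) + rate
    return seen
```

With two 150-packet/s attack flows behind one edge router and a 50-packet/s valid flow behind another, a shared 200-packet/s threshold trips both the edge router and the upstream router, so the valid hot traffic is blocked as well.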

The simple algorithm should generalize well to more highly connected networks than the tree topology used in the experiment, as the routers would still be capable of the same monitoring and blocking requirements. However, a new complication in enhancing the algorithm surfaces as the network topology changes. The router process would likely need more detailed knowledge of the topology in order to effectively control itself with information from its neighboring routers.

Another necessary generalization is of the target machine. It is unlikely that one machine will be the target and detector of all DDoS attacks through the cloud of routers around it. There is a simple solution, in that the router processes would be enabled for a particular destination machine, but this does add complexity to the propagation of commands between routers, similar to the generalization of network connectivity.

It is obvious that even the type of algorithms previously mentioned are unrealistic if used as rules for all routers. There is no point in every router on the Internet attempting to defeat a single DDoS attack. However, if only the routers that are necessarily involved participate in prevention, it is a different story. If all routers between the attacking networks and the target run the prevention algorithm, then the attack is traced back toward its sources and blocked as close to them as possible. In the presence of an attack, this would cause the least inconvenience to cold traffic. It also provides administrators an indicator of the source of the attack, so they can secure their systems. This example is the most extreme of possible active defenses. Unfortunately, there is a significant delay in the system: at the onset of the attack, the preventing routing structure would need to be grown in increments as the networks are monitored for the particular attack, and it would take many increments to cover the major paths of the attack. If the attack is launched from topologically disparate points, there may be little merit in this type of algorithm, as only routers close to the target would need to block the attack.

From the results of the testbed experiments, it would seem that a more robust and fast implementation could lessen the effects of modern DDoS attacks in the real world, as long as there is not a more troublesome DDoS threat of exploitation in the new routers.


[1] A. Barkley, J. Liu, Q. LeGia, M. Dingfield, and Y. Gokhale, “A Testbed for Study of Distributed Denial of Service Attacks (WA 2.4),” in Proc. IEEE Systems, Man, and Cybernetics Information Assurance and Security Workshop '00, West Point, New York, June 2000.

[2] J. Liu, K. Shin, and C. Chang, “Prevention of Congestion in Packet-Switched Multistage Interconnection Networks,” IEEE Trans. on Parallel & Dist. Systems, vol. 6, no. 5, pp. 531-541, May 1995.

[3] “CERT® Advisory CA-2000-01: Denial-of-Service Developments,” Carnegie Mellon Software Engineering Institute, Jan. 2000. [Accessed Sept. 2000]

[4] “CERT® Security Improvement Modules,” Carnegie Mellon Software Engineering Institute, July 2000. [Accessed Sept. 2000]

[5] “Strategies to Protect Against Distributed Denial of Service (DDoS) Attacks,” Cisco Newsflash, Feb. 2000. [Accessed Sept. 2000]