IP traceback

IP traceback is any method for reliably determining the origin of a packet on the Internet. The datagram nature of the Internet makes it difficult to determine the originating host of a packet: the source address supplied in an IP packet can be falsified (Internet protocol spoofing), allowing for denial-of-service (DoS) attacks or one-way attacks (where the response from the victim host is so well known that return packets need not be received to continue the attack). The problem of finding the source of a packet is called the IP traceback problem. IP traceback is a critical ability for identifying the sources of attacks and instituting protection measures for the Internet. Most existing approaches to this problem have been tailored toward DoS attack detection. Such solutions require large numbers of packets (tens of thousands) to converge on the attack path(s); by nature, a solution requiring such packet volume is specifically targeted toward DoS attacks and tends to be probabilistic.

A brute-force solution to traceback would have every router mark every packet as it passes through, or, alternatively, have every router keep a record of every packet that passes through it. Such solutions are infeasible because (1) they require a large and unbounded amount of space in each packet (or at each router) and (2) they impose a large overhead on every router. Thus, most existing approaches to traceback attempt to reduce these two costs.

IP traceback was first suggested by [http://www.acm.org/sigs/sigcomm/sigcomm2000/conf/paper/sigcomm2000-8-4.pdf Savage et al.] and later extended by many [http://sherry.ifi.unizh.ch/savage00practical.html others].

Of the approaches described below, some are probabilistic and a few are deterministic: [http://christian.bongiorno.org/iptraceback/docs/iptraceback.PDF], [http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=4065007&arnumber=4087711&count=73&index=11]. Some store information about packets in the routers along the way; others store information in the packet itself, in either the fragment ID field or the options field.

Packet marking

Probabilistic packet marking

Savage et al. suggest probabilistically marking packets as they traverse routers through the Internet. More specifically, they propose that each router mark the packet, with low probability (say, 1/20,000), with either the router's IP address or the edges of the path that the packet traversed to reach the router.

For the first alternative, node marking, analysis shows that obtaining the correct attack path with 95% accuracy can require as many as 294,000 packets [1]. The second approach, edge marking, requires the two nodes that make up an edge to mark the packet with their IP addresses along with the distance between them (the latter needs 8 bits to represent the maximum hop count allowed in IP). Edge marking requires more state in each packet than simple node marking but converges much faster. Savage et al. suggest three ways to reduce this state to something more manageable.
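
The following minimal Python sketch illustrates the baseline edge-sampling marking procedure before those reductions; the dictionary fields ('start', 'end', 'distance') stand in for the overloaded IP header fields, and the probability value is illustrative.

```python
import random

MARK_PROBABILITY = 1 / 20_000  # illustrative value taken from the discussion above

def edge_sample_mark(packet, router_addr):
    """Edge-sampling marking procedure run by each router on every packet.

    The packet is assumed to carry three marking fields: 'start' and 'end'
    (router addresses) and 'distance' (a hop count toward the victim).
    """
    if random.random() < MARK_PROBABILITY:
        # Start a new edge: record this router and reset the distance.
        packet['start'] = router_addr
        packet['distance'] = 0
    else:
        if packet.get('distance') == 0:
            # The previous hop just marked; complete the edge.
            packet['end'] = router_addr
        # Every non-marking router increments the distance, so the victim
        # learns how many hops away the recorded edge is.
        packet['distance'] = packet.get('distance', 0) + 1
```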

The first technique is to XOR the addresses of the two nodes forming an edge with each other. Node "a" inserts its IP address into the packet and sends the packet to "b"; upon detecting the mark (by seeing a 0 in the distance field), "b" XORs its address with the address of "a". This new quantity is called an edge ID and halves the state required for edge sampling. The next technique is to fragment this edge ID into "k" smaller fragments, then randomly select one fragment and encode it, along with its offset, so that the corresponding fragment from a downstream router can be matched during reconstruction.

Once enough packets have been received, the victim holds all edges (and all fragments) and can reconstruct the attack path, even in the presence of multiple attackers. The low marking probability keeps the per-router overhead small, and only a fixed amount of space is needed in each packet.
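
A hedged victim-side sketch for the XOR edge-ID variant, covering a single attack path only (the multi-attacker case must try combinations of marks); 'marks' maps each observed distance to its edge ID.

```python
def reconstruct_path(marks):
    """Rebuild a single attack path from (distance -> edge ID) marks produced
    by the XOR edge-ID scheme sketched above, with addresses as 32-bit ints.

    A mark at distance 0 is simply the address of the router adjacent to the
    victim; a mark at distance d >= 1 is the XOR of the routers d and d+1
    hops away, so the path can be unrolled one XOR at a time.
    """
    path = [marks[0]]                      # router nearest the victim
    d = 1
    while d in marks:
        path.append(marks[d] ^ path[-1])   # recover the next router upstream
        d += 1
    return path
```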

Due to the large number of combinations required to rebuild a fragmented edge ID, reconstructing such an attack graph is computationally intensive, according to research by Song and Perrig. Furthermore, the approach results in a large number of false positives; for example, with only 25 attacking hosts in a DDoS attack the reconstruction process takes days and yields thousands of false positives [2].

Accordingly, Song and Perrig propose the following traceback scheme: instead of encoding the IP address interleaved with a hash, they encode an 11-bit hash of the IP address and maintain a 5-bit hop count, both stored in the 16-bit fragment ID field. This is based on the observation that a 5-bit hop count (32 hops maximum) is sufficient for almost all Internet routes. Further, they suggest that two different hashing functions be used so that the order of the routers in the markings can be determined. Next, if a given hop decides to mark, it first checks the distance field for a 0, which implies that a previous router has already marked the packet. If so, it generates an 11-bit hash of its own IP address and XORs it with the previous hop's hash. If it finds a non-zero hop count, it inserts its own IP hash, sets the hop count to zero and forwards the packet on. If a router decides not to mark the packet, it merely increments the hop count in the overloaded fragment ID field.
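
A sketch of how the overloaded 16-bit fragment ID field can carry an 11-bit address hash plus a 5-bit hop count, following the marking logic of the paragraph above; hash11() and the marking probability are illustrative stand-ins, not the authors' actual choices.

```python
import hashlib
import random

MARK_PROBABILITY = 0.04  # illustrative, not the authors' value

def hash11(ip_addr: str) -> int:
    """Illustrative 11-bit hash of a router address (a stand-in, not the authors' function)."""
    return int.from_bytes(hashlib.sha256(ip_addr.encode()).digest()[:2], 'big') & 0x7FF

def mark_fragment_id(frag_id: int, router_ip: str) -> int:
    """Overloaded 16-bit fragment ID: high 11 bits hold the address hash,
    low 5 bits hold the hop count (distance)."""
    h, dist = frag_id >> 5, frag_id & 0x1F
    if random.random() < MARK_PROBABILITY:
        if dist == 0:
            h ^= hash11(router_ip)           # previous router already marked: XOR the hashes to form an edge
        else:
            h, dist = hash11(router_ip), 0   # start a fresh mark
    else:
        dist = min(dist + 1, 31)             # not marking: just count the hop
    return (h << 5) | dist
```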

Song and Perrig [2] note that this is not robust enough against hash collisions and therefore suggest using a set of independent hash functions, randomly selecting one, and hashing the IP address together with a function ID (FID) before encoding it. They state that this approach essentially reduces the probability of collision to (1/(2^11))^m. For further details see Song and Perrig [2].

Deterministic packet marking

Belenky and Ansari outline a deterministic packet marking scheme. In their letter they describe a more realistic topology for the Internet – composed of LANs and ASes with a connective boundary – and attempt to put a single mark on inbound packets at the point of network ingress. The idea is to put, with probability 0.5, either the upper or the lower half of the IP address of the ingress interface into the fragment ID field of the packet, and to set a reserved bit indicating which half of the address the field contains. Using this approach, they claim to be able to obtain zero false positives with probability 0.99 after only 7 packets.
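
A minimal sketch of this deterministic marking and the victim-side recombination, assuming the ingress address is handled as a 32-bit integer; the field names are illustrative.

```python
import random

def dpm_mark(packet, ingress_ip: int) -> None:
    """At the ingress edge router: with probability 0.5 store either the upper
    or the lower 16 bits of the ingress interface address in the fragment ID
    field, and flag which half was stored in a reserved bit."""
    if random.random() < 0.5:
        packet['frag_id'] = (ingress_ip >> 16) & 0xFFFF  # upper half
        packet['half_flag'] = 1
    else:
        packet['frag_id'] = ingress_ip & 0xFFFF          # lower half
        packet['half_flag'] = 0

def dpm_recombine(upper_half: int, lower_half: int) -> int:
    """At the victim: once both halves have been observed (a handful of
    packets suffices), rebuild the full 32-bit ingress address."""
    return (upper_half << 16) | lower_half
```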

Unfortunately, their approach is light on details: it does not address how upstream routers should handle packets that arrive already marked, nor does it discuss trans-leaf, shared or transit segments. It also ignores the fact that NAT makes IP addresses non-unique across network topologies. No follow-up work to this paper appears to exist.

Rayanchu and Barua provide another spin on this approach, called DERM. It is similar in that an encoded form of the input interface's IP address is placed in the fragment ID field of the packet; where it differs from Belenky and Ansari is that the address is encoded as a 16-bit hash. Initially they choose a known hashing function, and they acknowledge that collisions would occur if there were more than 2^16 edge routers doing the marking. In truth, far fewer routers suffice to produce a collision (by simple probability), and the collision space is further enlarged because inbound interface addresses are used and any router will very likely have multiple interfaces, all with differing IP addresses.

They attempt to mitigate the collision problem by randomly selecting a hash function from a universal family, distributed across routers, and applying it to the IP address (a hash identifier is discussed, but how it is communicated to the receiver is left unclear). In either hashing scenario, the source address and its hash are stored together in a table for later lookup, along with a bit indicating which portion of the address has been received. Through this more complicated procedure with random hash selection, they are able to reduce address collisions.
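
A hedged sketch of the DERM-style lookup: the victim precomputes a table from 16-bit marks back to candidate ingress addresses, so hash collisions show up as multiple candidates behind one mark. The hash function and the hash_id parameter here are illustrative, not the authors' universal family.

```python
import hashlib
from collections import defaultdict

def derm_hash16(ip_addr: str, hash_id: int = 0) -> int:
    """Illustrative 16-bit hash of an ingress interface address; hash_id stands
    in for the randomly selected member of the hash-function family."""
    digest = hashlib.sha256(f"{hash_id}:{ip_addr}".encode()).digest()
    return int.from_bytes(digest[:2], 'big')

def build_lookup(edge_interface_ips, hash_id: int = 0):
    """Victim-side table: mark value -> candidate ingress interface addresses."""
    table = defaultdict(list)
    for ip in edge_interface_ips:
        table[derm_hash16(ip, hash_id)].append(ip)
    return table
```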

By using a deterministic approach they reduce the time required to reconstruct their mark (the 16-bit hash). However, by encoding the mark through hashing they introduce the possibility of collisions, and thus false positives. Like Belenky and Ansari's scheme, their approach does not address NAT – a significant shortcoming.

Shokri and Varshovi introduced the concepts of dynamic marking and mark-based detection with DDPM ("Dynamic Deterministic Packet Marking"). With dynamic marking it is possible to find the attack agents in a large-scale DDoS network, and in the case of a DRDoS attack it enables the victim to trace the attack one step further back toward the source, finding a master machine or the real attacker with only a small number of packets. The proposed marking procedure also increases the probability of detecting a DRDoS attack at the victim through mark-based detection, in which the detection engine examines the marks of packets to identify the varying sources of a single site involved in a DDoS attack, significantly increasing the probability of detection. To respect the end-to-end argument and fate-sharing, and to keep the scheme scalable and deployable, only edge routers implement the simple marking procedure. The fairly negligible delay and bandwidth overhead added to the edge routers makes DDPM implementable.

Router based approach

In this type of solution, the router is charged with maintaining information about the packets that pass through it. For example, Sager proposes logging packets and data-mining them later. This has the benefit of being out of band, and thus of not hindering the fast path; however, at the link speeds of core routers it simply does not scale: terabytes of storage would be required, and even if that much space were available, the sheer I/O demand on local disks would slow everything down. Probabilistic sampling techniques can, of course, reduce the data demands of this scheme.

Snoeren et al. propose marking within the router. Their idea is to generate a fingerprint of each packet, based upon the invariant portions of the packet (source, destination, etc.) and the first 8 bytes of payload (which is unique enough to give a low probability of collision). More specifically, m independent simple hash functions each generate an output in the range [0, 2^n − 1]; a bit is set at each generated index, and the combination of bits set by all the hash functions forms the packet's fingerprint. All fingerprints are stored in a 2^n-bit table for later retrieval. The paper presents a simple family of hash functions suitable for this purpose together with a hardware implementation.
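
A software sketch in the spirit of the digest table just described: m independent hashes over the packet's invariant fields and the first 8 bytes of payload each set one bit in a 2^n-bit table. SPIE itself uses purpose-built hash functions and a hardware implementation, so SHA-256 and the parameters below are illustrative only.

```python
import hashlib

class PacketDigestTable:
    """A minimal Bloom-filter-style packet digest table."""

    def __init__(self, n_bits: int = 20, m: int = 3):
        self.size = 1 << n_bits            # 2**n bit positions
        self.m = m                         # number of independent hash functions
        self.bits = bytearray(self.size // 8)

    def _indices(self, invariant_fields: bytes, payload_prefix: bytes):
        # Derive m "independent" hashes by prefixing a function index.
        for i in range(self.m):
            d = hashlib.sha256(bytes([i]) + invariant_fields + payload_prefix[:8]).digest()
            yield int.from_bytes(d[:4], 'big') % self.size

    def record(self, invariant_fields: bytes, payload_prefix: bytes) -> None:
        """Set one bit per hash function when a packet is forwarded."""
        for idx in self._indices(invariant_fields, payload_prefix):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def maybe_seen(self, invariant_fields: bytes, payload_prefix: bytes) -> bool:
        """Traceback query: false positives are possible (and grow as the table
        fills), but false negatives are not."""
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indices(invariant_fields, payload_prefix))
```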

The space needed at each router is limited and controllable (2^n bits), though a small n increases the probability of hash collisions (and thus of false identification). When a packet is to be traced back, its fingerprint is checked against the tables of candidate originating routers. As time passes, the fingerprint information is “clobbered” by the hashes of other packets, so the selectivity of this approach degrades with the time elapsed between the packet's passage and the traceback query. A notable drawback is that the scheme requires significant changes to router infrastructure – as is true of packet-marking schemes as well.

Another take on router-based schemes comes from Hiroaki Hazeyama et al. They integrate the SPIE approach outlined by Snoeren [6] (above) with the recording of layer-2 information: the network ID (VLAN or true ID), the MAC address of the layer-2 switch that received the packet and the link ID it came in on. This information is put into two lookup tables, both keyed by the switch (layer-2 switch) MAC address. They rely on the MAC:port tuple as the means of tracing a packet back, even if the MAC address has been spoofed.
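
A rough sketch of the layer-2 bookkeeping this implies; the exact table schema used by Hazeyama et al. is not reproduced here, so the fields below are illustrative.

```python
from collections import defaultdict

# Per-switch records of where packets entered at layer 2:
# switch MAC -> list of (network/VLAN ID, ingress link ID, observed source MAC).
link_records = defaultdict(list)

# Per-switch packet-digest tables (e.g. SPIE-style structures as sketched above).
digest_tables = {}

def record_layer2(switch_mac: str, vlan_id: int, link_id: int, src_mac: str) -> None:
    """Store the MAC:port tuple used later to trace a packet back to its ingress link."""
    link_records[switch_mac].append((vlan_id, link_id, src_mac))
```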

This mechanism could certainly work, but the space available on any given switch for storing such information is finite. To mitigate this, they reuse Snoeren's hashing approach and implementation (SPIE), modifying it to accept their layer-2 information as hash input. By their own admission, their algorithm is slow (O(N^2)), and with only 3.3 million packet hashes being stored, the digest tables become invalid after roughly one minute. This dictates that any attack response must happen in real time – a possibility only within single-administrative-domain LANs. To their credit, they evaluated real hardware implementations of their idea, which alleviates some concerns about plausibility.

Out-of-band approaches

In the ICMP traceback scheme, Steven M. Bellovin proposes that routers send, with some low probability, an ICMP traceback packet forward to the destination host of an IP packet. This obviates the need to maintain state in either the packet or the router, and the low probability keeps both the processing overhead and the bandwidth requirement low. Bellovin suggests that the selection be based on pseudo-random numbers to help thwart attempts to time attack bursts around it. The drawback of this approach is that routers commonly block ICMP messages because of the security issues associated with them.
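
A minimal sketch of the probabilistic decision a router would make; send_icmp() is a hypothetical stand-in for however the router originates ICMP messages, and the probability value is illustrative.

```python
import random

ITRACE_PROBABILITY = 1 / 20_000  # illustrative low sampling probability

def maybe_emit_icmp_traceback(packet, router_addr, send_icmp):
    """With low probability, emit an out-of-band ICMP traceback message toward
    the packet's destination, carrying this router's identity and enough of
    the sampled packet to correlate it with the attack traffic."""
    if random.random() < ITRACE_PROBABILITY:
        send_icmp(dst=packet['dst'],
                  payload={'router': router_addr,
                           'sampled_headers': packet['headers']})
```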

Trace-back of active attack flows

In this type of solution, an observer tracks an existing attack flow by examining incoming and outgoing ports on routers starting from the host under attack. Thus, such a solution requires having privileged access to routers along the attack path.

To bypass this restriction and automate this process, Stone proposes routing suspicious packets on an overlay network using ISP edge routers. By simplifying the topology, suspicious packets can easily be re-routed to a specialized network for further analysis.

This is an interesting approach: by the nature of DoS, any such attack will be long-lived enough for tracking in this fashion to be possible. Layer-3 topology changes, while hard to hide from a determined attacker, can alleviate the DoS until the routing change is discovered and adapted to. Once the attacker has adapted, the re-routing scheme can adapt and re-route again, causing the DoS attack to oscillate and granting some ability to absorb its impact.

Other approaches

Burch and Cheswick propose controlled flooding of links to determine how the flooding affects the attack stream. Flooding a link causes all packets traversing it, including the attacker's, to be dropped with the same probability; so if flooding a given link slows the attack traffic, that link must be part of the attack path. Upstream routers are then recursively “coerced” into performing this test until the attack path is discovered. The most important problem with this approach is that it is resource-intensive and highly intrusive; indeed, it may itself be viewed as a DoS attack.
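
A hedged sketch of the recursive test, where get_upstream_links() and flood_and_measure() are hypothetical helpers returning a router's upstream links and the fraction by which briefly flooding a link slows the observed attack.

```python
def trace_by_flooding(router, get_upstream_links, flood_and_measure, threshold=0.3):
    """Recursively test the upstream links of 'router': a link whose brief
    flooding noticeably slows the attack traffic at the victim is taken to be
    on the attack path, and the search continues from the router behind it."""
    path = [router]
    for link, upstream_router in get_upstream_links(router):
        slowdown = flood_and_measure(link)   # fraction by which the attack slowed
        if slowdown > threshold:
            path.extend(trace_by_flooding(upstream_router, get_upstream_links,
                                          flood_and_measure, threshold))
            break
    return path
```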

The traceback problem is complicated by spoofed packets, so a related effort, known as ingress filtering, aims at preventing spoofing in the first place. Ingress filtering discards spoofed packets at the ingress points of a network by tracking the set of legitimate source networks that can use a given router. Park and Lee present an extension of ingress filtering at layer 3: a means of detecting false packets, at least down to the subnet, by making use of existing OSPF routing state so that routers can make intelligent decisions about whether a packet should be forwarded. Their approach is sensible and can be implemented inside routers with little overhead.
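
A minimal sketch of the basic ingress-filtering check itself (not Park and Lee's OSPF-based extension): the router drops packets whose source address could not legitimately arrive on the interface. The prefixes below are illustrative; in practice they come from routing or configuration state.

```python
import ipaddress

# Source prefixes legitimately reachable through this ingress interface
# (illustrative values only).
ALLOWED_PREFIXES = [ipaddress.ip_network('203.0.113.0/24'),
                    ipaddress.ip_network('198.51.100.0/24')]

def ingress_filter(src_ip: str) -> bool:
    """Return True if the packet should be forwarded, False if its source
    address is not legitimate for this ingress point."""
    src = ipaddress.ip_address(src_ip)
    return any(src in prefix for prefix in ALLOWED_PREFIXES)
```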

Ingress filtering is a recommended IETF practice and is widely deployed. However, a sophisticated attacker can still spoof packets from within the set of networks that a given ingress filter allows, which is especially true for transit networks, where the set of legal sources may be large. Extensions of ingress filtering down to the leaf networks have been proposed to further limit spoofing. Regardless, by blocking packets with known illegitimate source addresses at routing devices, the space of possible attack origins is substantially reduced.

Ultimately, absent something on the order of a legal mandate, ubiquitous IP traceback may never be deployed.

See also

*Egress filtering
*Ingress filtering
*Internet protocol spoofing

