Network Security

Core Internet Protocols

Paul Krzyzanowski

November 14, 2024

Introduction: Internet Design Principles

The design of the Internet traces its origins to the initial goals of the ARPANET, the precursor to the modern Internet, developed in the late 1960s. ARPANET was designed to be a robust, decentralized network capable of connecting diverse computer systems, ensuring communication even in the event of failures. These goals laid the groundwork for what would evolve into the global Internet. The key design principles of the Internet include:

  1. Packet Switching and Breaking Data into Packets: The Internet uses packet switching, where data is broken into small, manageable packets. Each packet is sent independently and reassembled at the destination, enabling efficient and flexible use of network resources.

  2. Interconnection of Networks with Routers: The Internet is a network of interconnected networks, achieved through routers that connect independent and diverse networks into one cohesive system. Routers forward packets between networks, forming the backbone of the Internet. This makes the Internet a logical network that’s built over multiple different physical networks.

  3. Store and Forward Delivery: Routers implement a store-and-forward mechanism, temporarily storing packets before forwarding them to the next hop. This ensures delivery even under temporary network disruptions.

  4. No Centralized Control: The Internet operates without a centralized hub or control. Each router makes its own decisions about the best next hop for packets, contributing to the resilience and scalability of the network.

  5. Assumption of Unreliable Communication: The Internet assumes that the underlying communication may be unreliable. Protocols at higher layers, such as TCP, ensure reliability by managing retransmissions and sequencing.

  6. End-to-End Principle: Intelligence is handled at the edges of the network, not within the core. Endpoints (e.g., user devices or servers) are responsible for critical functions such as confidentiality, authentication, integrity, prioritization, reliability, sequencing, and compression. This was a key design decision for keeping the core of the Internet efficient. If you need security, reliable communication, or guaranteed sequencing, you have to implement it in software at the endpoints.

  7. Layered Architecture: Internet protocols are organized into layers, with each layer handling a specific aspect of communication. For example, the physical layer handles hardware transmission, the network layer (IP) manages packet routing, and the transport layer (TCP/UDP) ensures data delivery.

Protocol Layers

Networking protocol stacks are usually described using the OSI layered model. For the Internet, the layers are:

  1. Physical. This represents the actual hardware: the cables, connectors, voltage levels, modulation techniques, etc.

  2. Data Link. This layer defines the local area network (LAN). In homes and offices, this is typically Ethernet (802.3) or Wi-Fi (802.11). Ethernet and Wi-Fi use the same addressing scheme and were designed to be bridged together to form a single local area network.

  3. Network. The network layer creates a single logical network and routes packets across physical networks. The Internet Protocol (IP) is responsible for this. There are two versions of this that are currently deployed: IPv4 and IPv6.

    • IPv4 was first deployed in 1983 and supports 32-bit addresses, which have been exhausted due to the growth of the internet.
    • IPv6, designed as a successor, uses 128-bit addresses; its coordinated global rollout began with the World IPv6 Launch in 2012. However, adoption has been slow in regions where IPv4 is still widely used, such as the U.S., since IPv4 and IPv6 networks are not directly interoperable.
  4. Transport. The transport layer is responsible for creating logical software endpoints (ports) so that one application can send a stream of data to another via an operating system’s sockets interface.

    • TCP (Transmission Control Protocol) ensures reliable, connection-oriented communication using sequence numbers, acknowledgment numbers, and retransmission mechanisms.
    • UDP (User Datagram Protocol) provides a simpler, connectionless protocol that sends packets to a destination without guaranteeing delivery or reliability, suitable for time-sensitive applications like streaming or gaming.
  5. Application. The application layer is where user-facing applications interact with the network. This layer includes the protocols and standards used by applications to communicate over the transport layer. Examples of application layer protocols include HTTP/HTTPS for web browsing; SMTP, POP, and IMAP for email; DNS for resolving domain names to IP addresses; and FTP for file transfers. This layer relies on transport-layer protocols to deliver data to the appropriate destinations. It is where the internet becomes meaningful to users, enabling interactions like loading web pages, sending messages, or streaming media.
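
To make the transport layer's port abstraction concrete, here is a minimal sketch using Python's standard socket module: a UDP receiver bound to a port and a sender that addresses it by IP address and port number. The loopback address, port 9999, and the message are arbitrary choices for illustration.

    # Minimal sketch: an application handing data to the transport layer (UDP)
    # through the operating system's sockets interface.
    import socket

    # "Server": bind a UDP socket to a port and wait for one datagram.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # "Client": send a datagram to that port. IP delivers it to the right host;
    # the port number lets the OS deliver it to the right socket.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)
    print(data, "from", addr)

    sender.close()
    receiver.close()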

Data link layer

In an Ethernet network, the data link layer is handled by Ethernet transceivers and Ethernet switches. Security was not a consideration in the design of this layer, and several fundamental attacks exist here. Wi-Fi also operates at the data link layer and adds encryption for wireless data between the device and the access point. Note that Wi-Fi’s encryption is not end-to-end, between hosts, but ends at the access point.

Switch CAM table overflow

Sniff all data on the local area network (LAN).

Ethernet frames[1] are delivered based on their 48-bit MAC[2] address. IP addresses are meaningless to Ethernet transceivers and to switches since IP is handled at higher levels of the network stack. Ethernet was originally designed as a bus-based shared network; all devices on the LAN shared the same wire. Any system could see all the traffic on the Ethernet. This resulted in increased congestion as more hosts were added to the local network.

Ethernet switches alleviated this problem by using a dedicated cable between each host and the switch and extra logic within the switch. The switch forwards an Ethernet frame only to the Ethernet port (the connector on the switch) that is connected to the system with the desired destination address. This switched behavior isolates communication streams: other hosts can no longer see the messages flowing on the network that are targeted to other systems.

Unlike routers, switches are not programmed with routes. Instead, they learn which computers are on which switch ports by looking at the source MAC addresses of incoming Ethernet frames. An incoming Ethernet frame indicates that the system with that source address is connected to that switch port.

To implement this, a switch contains a switch table (a MAC address table). This table contains entries for known MAC addresses and their interface (the switch port). The switch then uses forwarding and filtering:

When a frame arrives for some destination address D, the switch looks up D in the switch table to find its interface. If D is in the table and on a different port than that of the incoming frame, the switch forwards the frame to that interface, queueing it if necessary.

If D is not found in the table, then the switch assumes it has not yet learned what port that address is associated with, so it forwards the frame to ALL interfaces.

This procedure makes the switch self-learning: the switch table is empty initially and gets populated as the switch inspects source addresses.

A switch has to support extremely rapid lookups in the switch table. For this reason, the table is implemented using content addressable memory (CAM, also known as associative memory). CAM is expensive and switch tables are fixed-size and not huge. The switch will delete less-frequently used entries if it needs to make room for new ones.

The CAM table overflow attack exploits the limited size of this CAM-based switch table. The attacker sends bogus Ethernet frames with random source MAC addresses. Each new source address adds an entry to the switch table, eventually filling it up and displacing legitimate entries. With the CAM table full, legitimate traffic will now be flooded to all links. A host on any port can now see all traffic.
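
Here is a rough sketch (Python, not real switch firmware, with an artificially tiny table) of the self-learning and forwarding behavior described above, and of why a full table forces the switch to flood frames:

    CAM_SIZE = 4
    cam = {}                      # learned MAC address -> switch port

    def handle_frame(src_mac, dst_mac, in_port):
        # Self-learning: remember which port this source address arrived on.
        if src_mac not in cam and len(cam) >= CAM_SIZE:
            cam.pop(next(iter(cam)))          # table full: evict an older entry
        cam[src_mac] = in_port

        # Forwarding and filtering.
        out_port = cam.get(dst_mac)
        if out_port is None:
            return "flood to every port except %d" % in_port   # unknown destination
        if out_port == in_port:
            return "filter (drop): destination is on the incoming port"
        return "forward only to port %d" % out_port

    # An attacker blasting frames with random source addresses keeps the table
    # full of junk, so frames for legitimate hosts get flooded to every port.
    print(handle_frame("aa:aa:aa:aa:aa:01", "02:00:00:00:00:05", 3))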

Countermeasures for CAM table attacks require the use of managed switches that support port security. These switches allow you to configure individual switch ports to limit the number of addresses the table will hold for each switch port.

Another option to prevent this attack is to use a switch that supports the 802.1X protocol. This is a protocol that was created to improve security at the link layer. With 802.1X in place, all traffic coming to a switch port is initially considered to be “unauthorized”. The switch redirects the traffic, regardless of its destination address, to an authentication server.

If the user authenticates successfully, the authentication server then configures a rule in the switch that will allow traffic coming from that user’s MAC address to be accepted by the switch. The port becomes “authorized” for that specific address. This is a common technique that is used to allow users to connect to public access wireless networks.

VLAN hopping (switch spoofing)

Sniff all data from connected virtual local area networks.

Companies often deploy multiple local area networks in their organization to isolate users into groups on separate networks. This isolates broadcast traffic between groups of systems and allows administrators to set up routers and firewalls that can restrict access between these networks. Related users can all be placed on a single LAN. For instance, we might want software developers to be on a physically distinct local area network from the human resources or finance groups. Partitioning different types of employees onto different local area networks is good security practice.

However, users may be relocated within an office and switches may be used inefficiently. Virtual Local Area Networks (VLANs) create multiple logical LANs using a single switch. The network administrator can assign each port on a switch to a specific VLAN. Each VLAN is a separate broadcast domain so that each VLAN acts like a truly separate local area network. Users belonging to one VLAN will never see any traffic from the other; it would have to be routed through an IP router.

Switches may be extended by cascading them with other switches: an Ethernet cable from one switch simply connects to another switch. With VLANs, the connection between switches forms a VLAN trunk[3] and carries traffic from all VLANs to the other switch. To support this behavior, the IEEE 802.1Q standard defines an extended Ethernet frame format: 802.1Q simply takes a standard Ethernet frame and adds a VLAN tag that identifies the specific VLAN number from which the frame originated.
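
As a rough illustration of the tagging, the sketch below (Python, with made-up addresses, VLAN number, and payload) packs an 802.1Q tag into an Ethernet frame: the 0x8100 tag protocol identifier and a 16-bit field holding the priority bits and the 12-bit VLAN ID are inserted between the source MAC address and the original EtherType.

    import struct

    dst = bytes.fromhex("ffffffffffff")      # destination MAC (broadcast here)
    src = bytes.fromhex("020000000001")      # source MAC
    vlan_id = 42                             # 12-bit VLAN number
    priority = 0                             # 3-bit priority code point
    tci = (priority << 13) | vlan_id         # tag control information
    ethertype = 0x0800                       # the encapsulated protocol (IPv4)
    payload = b"...IP datagram goes here..."

    # 802.1Q tag = TPID 0x8100 followed by the TCI, inserted before the EtherType.
    frame = dst + src + struct.pack("!HH", 0x8100, tci) + struct.pack("!H", ethertype) + payload
    print(frame.hex())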

A VLAN hopping attack employs switch spoofing: an attacker’s computer sends and receives 802.1Q frames, and the switch will believe that the connected computer is another switch and consider it to be a member of all VLANs on the system.

Depending on switch tables and forwarding policies, the attacker might not receive all the traffic, but the attacker can make that happen by performing a CAM table overflow on the switch. The attacker’s computer will receive all broadcast messages, which often come from services advertising their presence. The attacker can also create and inject Ethernet frames onto any VLAN. Recall that all higher-level protocols, such as UDP, are encapsulated within Ethernet frames.

Defending against this attack requires a managed switch where an administrator can disable unused ports and associate them with an unused VLAN. Auto-trunking should be disabled on the switch so that each port cannot become a trunk. Instead, trunk ports must be configured explicitly for the ports that have legitimate connections to other switches.

ARP cache poisoning

Redirect IP packets by changing the IP address to MAC address mapping.

Recall that IP is a logical network that sits on top of physical networks. If we are on an Ethernet network and need to send an IP datagram[4], that IP datagram needs to be encapsulated within an Ethernet frame. The Ethernet frame has to contain a destination MAC address that corresponds to the destination machine or the MAC address of a router, if the destination address is on a different LAN. Before an operating system can send an IP packet, it needs to figure out what MAC address corresponds to that IP address.

There is no relationship between an IP and Ethernet MAC address. To find the MAC address when given an IP address, a system uses the Address Resolution Protocol, ARP. The sending computer creates an Ethernet frame that contains an ARP message with the IP address it wants to query. This ARP message is then broadcast: all network adapters on the LAN receive the message. If a computer receives this message and sees that its own IP address matches the address in the query, it then sends back an ARP response. This response identifies the MAC address of the system that owns that IP address.

To avoid the overhead of issuing this query each time the system has to use the IP address, the operating system maintains an ARP cache that stores recently used addresses. To further improve performance, hosts cache any ARP replies they see, even if they did not originate them. This is done on the assumption that many systems use the same set of IP addresses and the overhead of making an ARP query is substantial. Along the same lines, a computer can send an ARP response even if nobody sent a request. This is called a gratuitous ARP and is often sent by computers when they start up as a way to give other systems on the LAN the IP:MAC address mapping without them having to ask for it at a future time.

Note that there is no way to authenticate that a response is legitimate. The asking host does not have any idea of what MAC address is associated with the IP address. Hence, it cannot tell whether a host that responds really has that IP address or is an imposter.

An ARP cache poisoning attack is one where an attacker creates fake ARP replies that contain the attacker’s MAC address but the target’s IP address. This will direct any traffic meant for the target over to the attacker. It enables man-in-the-middle or denial of service attacks since the real host will not be receiving that IP traffic. Because other systems pick up ARP replies, the ARP cache poisoning reply will affect all the systems on the LAN.
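
To see why this forgery is so easy, here is a sketch (Python, with made-up MAC and IP addresses) of a forged ARP reply: a handful of packed fields with nothing that authenticates the sender.

    import socket, struct

    def arp_reply(sender_mac, sender_ip, target_mac, target_ip):
        # ARP header for Ethernet/IPv4: hardware type 1, protocol type 0x0800,
        # hardware address length 6, protocol address length 4, opcode 2 (reply).
        return struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2) + \
               sender_mac + socket.inet_aton(sender_ip) + \
               target_mac + socket.inet_aton(target_ip)

    attacker_mac = bytes.fromhex("020000000099")
    victim_mac   = bytes.fromhex("020000000001")

    # "10.0.0.1 is at the attacker's MAC address" -- any host that caches this
    # will send traffic meant for 10.0.0.1 to the attacker instead.
    packet = arp_reply(attacker_mac, "10.0.0.1", victim_mac, "10.0.0.20")
    print(packet.hex())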

There are several defenses against ARP cache poisoning. One defense is to ignore replies that are not associated with requests. However, you need to hope that the reply you get is a legitimate one since an attacker may respond more quickly or perhaps launch a denial of service attack against the legitimate host and then respond.

Another defense is to give up on ARP broadcasts and simply use static ARP entries. This works but can be an administrative nightmare since someone will have to maintain the list of IP-to-MAC address mappings and update it whenever new machines are added to the environment.

Finally, one can enable something called Dynamic ARP Inspection. This essentially builds a local ARP table by using DHCP (Dynamic Host Configuration Protocol) Snooping data as well as static ARP entries. Any ARP responses will be validated against DHCP Snooping database information or static ARP entries. The DHCP snooping database is populated whenever systems first get configured onto the network. This assumes that the environment uses DHCP instead of fixed IP address assignments.

DHCP server spoofing

Configure new devices on the LAN with your choice of DNS address, router address, etc.

When a computer joins a network, it needs to be configured to use the Internet Protocol (IP) on that network. This is most often done automatically via DHCP, the Dynamic Host Configuration Protocol. It is used in practically every LAN environment and is particularly useful where computers (including phones) join and leave the network regularly, such as Wi-Fi hotspots. Every access point and home gateway provides DHCP server capabilities.

A computer that joins a new network broadcasts a DHCP Discover message. As with ARP, we have the problem that a computer does not know whom to contact for this information. Since it also does not yet have an IP address, it sends the query as an Ethernet broadcast, hoping that it gets a legitimate response.

A DHCP server on the network picks up this request and sends back a response that contains configuration information for this new computer on the network:

  • IP address – the IP address for the system
  • Subnet mask – which bits of the IP address identify the local area network
  • Default router – gateway to which all non-local datagrams will be routed
  • DNS servers – servers that the system can query to find IP addresses for a domain name
  • Lease time – how long this configuration is valid
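
As a rough sketch of how these items travel in the reply, the assigned IP address is carried in a fixed header field (yiaddr), while most of the rest are numbered DHCP options (codes from RFC 2132); the values below are made up for illustration.

    offer = {
        "yiaddr": "192.168.1.50",     # the IP address handed to the new host
        "options": {
            1:  "255.255.255.0",      # option 1: subnet mask
            3:  "192.168.1.1",        # option 3: default router (gateway)
            6:  "192.168.1.1",        # option 6: DNS server(s)
            51: 86400,                # option 51: lease time, in seconds
        },
    }
    # A spoofed DHCP server only needs to change options 3 and 6 to pull the
    # victim's traffic and DNS lookups toward machines the attacker controls.
    print(offer)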

With DHCP Spoofing, any system can pretend to be a DHCP server and spoof responses that would normally be sent by a valid DHCP server. The attacker can provide the new system with a legitimate IP address but with a false address for the gateway (the default router). This will cause the computer to route all non-local datagrams to the attacker.

The attacker can provide a false DNS server in the response. This will cause domain name queries to be sent to a server chosen by the attacker, which can give false IP addresses to redirect traffic for chosen domains.

As with ARP cache poisoning, the attacker may launch a denial of service attack against the legitimate DHCP server to keep it from responding or at least delay its responses. If the legitimate server sends its response after the imposter, the new host will simply ignore the response.

There aren’t many defenses against DHCP spoofing. Some switches (such as those by Cisco and Juniper) support DHCP snooping. This allows an administrator to configure specific switch ports as “trusted” or “untrusted.” Only specific machines, those on trusted ports, will be permitted to send DHCP responses. Any other DHCP responses will be dropped. The switch will also use DHCP data to track client behavior to ensure that hosts use only the IP address assigned to them and that hosts do not generate fake ARP responses.

Network (IP) layer

The Internet Protocol (IP) layer is responsible for getting datagrams (packets) to their destination. It does not provide any guarantees on message ordering or reliable delivery. Datagrams may take different routes through the network and may be dropped by queue overflows in routers.

Source IP address authentication

Anyone can forge the source address of an IP datagram.

One fundamental problem with IP communication is that there is absolutely no source IP address authentication. Clients are expected to use their own source IP address but anybody can override this if they have administrative privileges on their system by using a raw sockets interface.

This enables one to forge messages to appear that they come from another system. Any software that authenticates requests based on their IP addresses will be at risk.
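
The sketch below (Python, Linux-specific, requires root privileges) illustrates the point: a raw socket lets a privileged program write any source address it likes into the IP header. The addresses, ports, and payload are made up for illustration.

    import socket, struct

    forged_src = "203.0.113.7"          # an address we do not own
    dst        = "192.0.2.10"
    payload    = b"hello"

    # IPv4 header: version/IHL, TOS, total length, ID, flags/fragment offset,
    # TTL, protocol (UDP), checksum (0: the Linux kernel fills it in), src, dst.
    ip_header = struct.pack("!BBHHHBBH4s4s",
                            0x45, 0, 20 + 8 + len(payload), 0, 0,
                            64, socket.IPPROTO_UDP, 0,
                            socket.inet_aton(forged_src), socket.inet_aton(dst))
    # Minimal UDP header: source port, destination port, length, checksum (0 = none).
    udp_header = struct.pack("!HHHH", 12345, 53, 8 + len(payload), 0)

    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    s.sendto(ip_header + udp_header + payload, (dst, 0))
    s.close()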

Anonymous denial of service

The ability to set an arbitrary source address in an IP datagram can be used for anonymous denial of service attacks. If a system sends a datagram that generates an error, the error will be sent back to the source address that was forged in the query. For example, a datagram sent with a small time-to-live (TTL) value will cause the router at which the TTL reaches zero to respond with an ICMP (Internet Control Message Protocol) Time Exceeded message. Error responses will be sent to the forged source IP address, and it is possible to send a vast number of such messages from many machines (by assembling a botnet) across many networks, causing the errors to all target a single system.

Routers

Routers are nothing more than computers with multiple network links, often with special-purpose hardware to facilitate the rapid movement of packets across interfaces. They run operating systems and have user interfaces for administration. As with many other devices that people don’t treat as “real” computers, there is a danger that these routers will have simple or even default passwords. For instance, you can go to cirt.net to get a database of thousands of default passwords for different devices.

Moreover, owners of routers may not be nearly as diligent in keeping the operating system and other software updated as they are with their computers.

Routers can be subject to some of the same attacks as computers. Denial of service (DoS) attacks can keep the router from doing its job. One way this is done is by sending a flood of ICMP datagrams. The Internet Control Message Protocol is typically used to send routing error messages and updates and a huge volume of these can overwhelm a router. Routers may also have input validation bugs and not handle certain improper datagrams correctly.

Route table poisoning is the modification of the router’s routing table either by breaking into a router or by sending route update datagrams over an unauthenticated protocol.

Transport layer (UDP, TCP)

UDP and TCP are transport layer protocols that allow applications to establish communication channels with each other. Each endpoint of a channel is identified by a port number (a 16-bit integer that has nothing to do with Ethernet switch ports). The port number allows the operating system to direct traffic to the proper socket. Hence, both TCP and UDP segments[5] contain not only source and destination addresses but also source and destination ports.

UDP, the User Datagram Protocol, is stateless, connectionless, and unreliable.

As we saw with IP source address forgery, any system can create and send UDP messages with forged source IP addresses. UDP interactions have no concept of sessions as far as the operating system is concerned and do not use sequence numbers, so attackers can inject messages directly without having to take over a session.

TCP, the Transmission Control Protocol, is stateful, connection-oriented, and reliable. Every packet contains a sequence number (a byte offset) and the operating system assembles received packets into their correct order. The receiver also sends acknowledgments so that any missing packets will be retransmitted.

To handle in-order, reliable communication, TCP needs to establish state at both endpoints. It does this through a connection setup process that comprises a three-way handshake.

  1. SYN: The client sends a SYN segment.
    The client selects a random initial sequence number (client_isn). This is the starting sequence number for the segments it will send.

  2. SYN/ACK: The server responds with a SYN/ACK.
    The server receives the SYN segment and now knows that a client wants to connect to it. It allocates memory to store the connection state and to store received, possibly out-of-sequence segments. The server also generates an initial sequence number (server_isn) for its side of the data stream. This is also a random number. The response also contains an acknowledgment of the client’s SYN request with the value client_isn+1.

  3. ACK: The client sends a final acknowledgment.
    The client acknowledges receipt of the server’s SYN/ACK message by sending a final ACK message that contains an acknowledgment of server_isn+1.
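
A minimal sketch of the sequence-number bookkeeping in this handshake (Python; the modulo keeps the values within the 32-bit sequence space):

    import random

    client_isn = random.getrandbits(32)          # step 1: SYN carries client_isn
    server_isn = random.getrandbits(32)          # step 2: SYN/ACK carries server_isn
    syn_ack_acknum = (client_isn + 1) % 2**32    #         ...and acknowledges client_isn+1
    final_acknum   = (server_isn + 1) % 2**32    # step 3: ACK acknowledges server_isn+1

    print(client_isn, server_isn, syn_ack_acknum, final_acknum)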

Note that the initial sequence numbers are random rather than starting at zero as one might expect. There are two reasons for this.

The primary reason is that message delivery times on an IP network are unpredictable, and it is possible that a recently closed connection may receive delayed messages, confusing the server on the state of that connection.

The security-sensitive reason is that if sequence numbers were predictable, then it would be quite easy to launch a sequence number prediction attack where an attacker would be able to guess at likely sequence numbers on a connection and send masqueraded packets that will appear to be part of the data stream. Random sequence numbers do not make the problem go away but make it more challenging to launch the attack, particularly if the attacker does not have the ability to see traffic on the network.

SYN flooding

In the second step of the three-way handshake, the server is informed that a client would like to connect and allocates memory to manage this new connection. Given that kernel memory is a finite resource, the operating system will allocate only a finite amount of TCP buffers in its TCP queue. After that, it will refuse to accept any new connections.

In the SYN flooding attack, the attacker sends a large number of SYN segments to the target. These SYN messages contain a forged source address of an unreachable host, so the target’s SYN/ACK responses never get delivered anywhere. The handshake is never completed but the operating system has allocated resources for this connection. There is a window of time before the server times out on waiting for a response and cleans up the memory used by these pending connections. Meanwhile, all TCP buffers have been allocated and the operating system refuses to accept any more TCP connections, even if they come from a legitimate source. This window of time can usually be configured. Its default value is 10 seconds on Windows systems.

SYN flooding attacks cannot be prevented completely. One way of lessening the impact of these attacks is the use of SYN cookies. With SYN cookies, the server does not allocate memory for buffers & TCP state when a SYN segment is received. It responds with a SYN/ACK that contains an initial sequence number created as a hash of several known values:

	hash(src_addr, dest_addr, src_port, dest_port, SECRET)

The SECRET is not shared with anyone; it is local to the operating system. When (if) the final ACK comes back from a legitimate client, the server will need to validate the acknowledgment number. Normally this requires comparing the number to the stored server initial sequence number plus 1. We did not allocate space to store this value, but we can recompute the number by re-generating the hash, adding one, and comparing it to the acknowledgment number in the message. If it is valid, the kernel believes it was not the victim of a SYN flooding attack and allocates resources necessary for managing the connection.
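
Here is a minimal sketch of that idea in Python, following the hash shown above (real implementations also encode a timestamp and MSS information in the cookie):

    import hashlib, os, struct

    SECRET = os.urandom(16)      # known only to this kernel

    def syn_cookie(src_addr, dst_addr, src_port, dst_port):
        material = "%s|%s|%d|%d" % (src_addr, dst_addr, src_port, dst_port)
        digest = hashlib.sha256(material.encode() + SECRET).digest()
        return struct.unpack("!I", digest[:4])[0]        # 32-bit initial sequence number

    def ack_is_valid(src_addr, dst_addr, src_port, dst_port, ack_number):
        # Recompute the cookie instead of having stored it, then compare to ACK-1.
        expected = (syn_cookie(src_addr, dst_addr, src_port, dst_port) + 1) % 2**32
        return ack_number == expected

    isn = syn_cookie("198.51.100.5", "192.0.2.10", 40123, 443)
    print(ack_is_valid("198.51.100.5", "192.0.2.10", 40123, 443, (isn + 1) % 2**32))  # True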

TCP Reset

A relatively simple attack is to send a RESET (RST) segment to an open TCP socket. If the sequence number is one the receiver considers valid, the connection will be closed. Hence, the tricky part is getting an acceptable sequence number to make it look like the RESET is part of the genuine message stream.

Sequence numbers are 32-bit values. The chance of successfully picking the correct sequence number is tiny: 1 in 2³², or approximately one in four billion. However, many systems will accept a large range of sequence numbers approximately in the correct range to account for the fact that packets may arrive out of order, so they shouldn’t necessarily be rejected just because the sequence number is incorrect. This can reduce the search space tremendously, and an attacker can send a flood of RST packets with varying sequence numbers and a forged source address until the connection is broken.

Routing protocols: Autonomous Systems and BGP

The Internet was designed to connect multiple independently managed networks, each of which may use different hardware and link-layer protocols. Routers allow traffic to flow between these networks, which include local area networks as well as wide area networks.

A range of IP addresses as well as the underlying routers and network infrastructure, all managed as one administrative entity, is called an Autonomous System (AS). For example, the part of the Internet managed by Comcast is an autonomous system (Comcast has 42 of them in different regions). The networks managed by Verizon constitute a few autonomous systems as well. For our discussion, think of ASes as ISPs or large data centers such as Google or Amazon. Incidentally, Rutgers is an Autonomous System: AS46, owning the range of IP addresses starting with 128.6.

Routers connected to routers in other ASes use the Border Gateway Protocol, or BGP, to determine where to route their data. With BGP, each autonomous system exchanges routes and reachability information on IP addresses with other autonomous systems to which it has connections. For example, Comcast can tell Verizon what IP addresses it can reach. Verizon passes that information to the other ASes to which it has links. BGP uses a path vector routing algorithm (a variant of distance vector routing) to enable the routers to determine the most efficient path to use to send packets that are destined for other networks. Unless an administrator explicitly configures route preferences, BGP will generally pick the shortest route.

BGP Prefixes

In BGP, a prefix refers to a block of IP addresses that a router advertises as reachable. The prefix indicates the range of IP addresses that a particular network (or Autonomous System, AS) can deliver traffic to. A prefix is written in CIDR (Classless Inter-Domain Routing) notation, such as 128.6.0.0/16. Here, 128.6.0.0 is the starting IP address of the range and /16 is the prefix length, indicating that the first 16 bits identify the network and the remaining bits span the range of addresses within it.
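
A quick sketch using Python's ipaddress module shows what a prefix denotes; the /16 fixes the first 16 bits, leaving 2¹⁶ addresses in the block. The sample address is arbitrary.

    import ipaddress

    net = ipaddress.ip_network("128.6.0.0/16")
    print(net.network_address, net.broadcast_address)   # 128.6.0.0 .. 128.6.255.255
    print(net.num_addresses)                            # 65536: 16 bits are left free
    print(ipaddress.ip_address("128.6.13.5") in net)    # True: it falls inside the block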

BGP Hijacking

So what are the security problems with BGP? Edge routers in an autonomous system use BGP to send route advertisements to the routers of connected autonomous systems. An advertisement is a list of IP address prefixes the AS can reach (shorter prefixes mean a larger range of addresses) and, for each prefix, the AS_PATH: the sequence of ASes that sent the advertisement for that IP prefix.

These messages are sent over a TCP connection between the routers without authentication, integrity checks, or encryption. With BGP hijacking, a malicious AS or a malicious party that has access to the network link or a connected router can send advertisements for arbitrary routes. The information will propagate throughout the Internet.

Path forgery

Routers at the borders of each AS take into account various parameters in deciding how to build routing tables. For example, some ISPs set up no-cost peering agreements, where they route traffic between each other for free. A network administrator may prioritize that versus taking a route that costs the company money. In the purest form, however, the goal of a routing algorithm is to find the best route to the destination, which generally means the one that results in the smallest number of hops.

One input routers use to determine routes is the AS_PATH for each prefix in BGP advertisements, which tells them the number of AS hops a packet will take to the final destination. If other attributes (e.g., cost to connect) are the same, routers will favor the shortest path (BGP is essentially a path vector routing protocol, a form of distance vector routing).

If an attacker advertises a prefix with the shortest path, other ASes will route traffic through that AS, believing it is the most efficient route to those addresses.

Prefix forgery

Another form of attack is sending BGP advertisements with longer prefixes than legitimate advertisements. For example, AS46, belonging to Rutgers, may advertise the prefix 128.6.0.0/16, which represents any address where the top 16 bits are 128.6 (0x8006). A longer prefix identifies a smaller group of addresses. For example, 128.6.13.0/24 would refer to systems where the top 24 bits match 128.6.13 (which happens to refer to the iLab machines).

Routers give a higher priority to longer prefixes. This logic is useful since it allows an organization to provide different routes to subsets of its IP address space. For instance, if Rutgers had to provide a different route to the set of iLab systems, it could advertise 128.6.13.0/24 along with 128.6.0.0/16. This would allow routes to 128.6.13.* to be handled differently from all other routes within 128.6.0.0/16.

This behavior gives an attacker the opportunity to advertise more specific routes (longer prefixes), which will result in prioritizing those routes even if they are not the shortest path.
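
A small sketch of longest-prefix-match selection shows why the more specific advertisement wins, whether it is legitimate or forged; the route descriptions below are illustrative only.

    import ipaddress

    advertisements = {
        ipaddress.ip_network("128.6.0.0/16"):  "legitimate AS46 route",
        ipaddress.ip_network("128.6.13.0/24"): "more specific route (could be forged)",
    }

    def pick_route(addr):
        # Of all advertised prefixes that contain the address, the longest wins.
        addr = ipaddress.ip_address(addr)
        matches = [net for net in advertisements if addr in net]
        return advertisements[max(matches, key=lambda net: net.prefixlen)]

    print(pick_route("128.6.13.5"))   # the /24 wins over the /16
    print(pick_route("128.6.4.2"))    # only the /16 matches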

The YouTube BGP hijack

A high-profile BGP attack occurred against YouTube in 2008. Pakistan Telecom received a censorship order from the Ministry of Information Technology and Telecom to block YouTube traffic to the country. The company sent spoofed BGP messages claiming to offer the best route for the range of IP addresses used by YouTube. It did this by using a longer address prefix than the one advertised by YouTube (longer prefix = fewer addresses). Because a longer prefix is more specific, BGP gives it a higher priority.

YouTube is its own AS and announces its network of computers with a 22-bit prefix. Pakistan Telecom advertised the same set of IP addresses with a 24-bit prefix. A longer prefix means the route supports fewer addresses and thus refers to fewer computers, and BGP gave Pakistan Telecom’s routes a higher routing priority. This way, Pakistan Telecom hijacked those routes.

Within minutes, routers worldwide directed their YouTube requests to Pakistan Telecom, which would simply drop them. YouTube tried countermeasures, such as advertising more specific networks, including /26 networks that cover blocks of only 64 addresses. The AS to which Pakistan Telecom was connected was also reconfigured to stop relaying the routes advertised by Pakistan Telecom, but it took about two hours before routes were restored.

For more information on past high-profile BGP attacks, take a look at A Brief History of the Internet’s Biggest BGP Incidents by Doug Madory, Director of Internet Analysis at Kentik.

Dealing with BGP attacks

By attacking BGP, the attacker can redirect traffic to malicious servers configured with the same IP address as legitimate ones, intercept traffic to snoop on it or record it, or perform a denial of service attack to keep IP traffic from reaching legitimate systems.

There are currently approximately 71,000 autonomous systems and most have multiple administrators. We live in the hope that none of them are malicious, their administrators cannot be bribed or blackmailed, and that all routers are properly configured and properly secured.

It is difficult to change BGP since tens of thousands of independent entities use it worldwide. Two partial solutions to this problem emerged. The Resource Public Key Infrastructure (RPKI) framework simply has each AS get an X.509 digital certificate from a trusted entity (the Regional Internet Registry). Each AS signs its set of route advertisements with its private key, and any other AS can validate that list of advertisements using the AS’s certificate. This ensures that another AS cannot claim to be the origin of IP address prefixes it does not own; that is, a malicious AS cannot advertise a longer prefix for a range of addresses that belongs to someone else. However, it cannot stop an AS from advertising that it can get there in a smaller number of hops.

An alternate, but related, solution is BGPsec, which is an IETF standard. Instead of signing an individual AS’s routes, every BGP message between ASes is signed. This ensures that the entire path is signed and prevents intruders outside the AS from intercepting traffic and changing the BGP path to redirect traffic.

Both solutions require every AS to employ them. If some AS is willing to accept untrusted route advertisements and relay them to other ASes as signed messages, then the integrity guarantee is meaningless. Moreover, most BGP hijacking incidents took place because legitimate system administrators misconfigured (or reconfigured, based on their government’s directives) route advertisements. They were not the actions of attackers who hacked into a router.

Domain Name System (DNS)

The Domain Name System (DNS) is a tree-structured hierarchical service that maps Internet domain names to IP addresses. A user’s computer runs the DNS protocol via a program known as a DNS stub resolver. It first checks a local file for specific preconfigured name-to-address mappings. Then, it checks its cache of previously found mappings. Finally, it contacts an external DNS resolver, which is usually located at the ISP or is run as a public service, such as Google Public DNS, Cloudflare DNS, or OpenDNS.

We trust that the name-to-address mapping is legitimate. Web browsers, for instance, rely on this to enforce their same-origin policy, which involves validating content based on the domain name rather than its IP address.

However, DNS queries and responses are sent using UDP with no authentication or integrity checks. The only validation is that each DNS query contains a Query ID (QID). A DNS response must have a matching QID so the client can match it with the query it issued. These responses can be intercepted and modified or simply forged. Malicious responses can return a different IP address that will direct IP traffic to different hosts.
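
The sketch below builds a minimal DNS query by hand and sends it over UDP, showing that the 16-bit QID at the start of the header is the only value a reply must echo to be accepted. The resolver address (Google Public DNS) and the queried name are arbitrary choices for illustration.

    import socket, struct, random

    qid = random.getrandbits(16)
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)   # recursion desired, one question
    qname = b"".join(struct.pack("B", len(p)) + p for p in [b"example", b"com"]) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)                # QTYPE=A, QCLASS=IN

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    s.sendto(header + question, ("8.8.8.8", 53))
    reply, _ = s.recvfrom(512)
    s.close()

    reply_qid = struct.unpack("!H", reply[:2])[0]
    print("query ID matches reply ID:", reply_qid == qid)      # a forged reply only has to guess this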

Pharming attack

A pharming attack is an attack on the configuration information maintained by a DNS server – either modifying the information used by the local DNS resolver or modifying the information in a remote DNS server. By changing the name to IP address mapping, an attacker can cause software to send packets to the wrong system.

A pharming attack is a cyberattack that redirects users from a legitimate website to a fraudulent one, often to steal sensitive information such as login credentials or payment details. Unlike phishing, which tricks users into clicking on fake links, pharming manipulates the underlying network or system to achieve redirection without the user’s knowledge. Specific methods of pharming attacks include:

  1. Modifying the Victim’s hosts File: Attackers use malware or social engineering to change the victim’s local hosts file (/etc/hosts on Linux, BSD, and macOS systems; c:\Windows\System32\Drivers\etc\hosts on Windows). This file maps domain names to IP addresses, and altering it can redirect traffic intended for a legitimate domain (e.g., bank.com) to a malicious IP address controlled by the attacker.

  2. Compromising the Router or DHCP Server: By exploiting vulnerabilities in routers or DHCP servers, attackers can modify DNS server settings. This causes all devices on the compromised network to contact the attacker’s DNS server to resolve domain names.

  3. DNS Server Poisoning: Attackers target DNS servers themselves, inserting malicious entries into the server’s cache. When users attempt to access a legitimate domain, the DNS server provides the attacker’s malicious IP address instead of the correct one.

Preventative measures include keeping devices and firmware updated, using secure DNS (like DNSSEC), and monitoring for suspicious activity in network configurations.

DNS cache poisoning (DNS spoofing attack)

DNS queries first check the local host’s DNS cache to see if the results of a past query have been cached. This yields a huge improvement in performance since a network query can be avoided. If the cached name-to-address mapping is not valid, then the wrong IP address is returned to the program that asked for it.

A DNS cache poisoning attack, also known as DNS spoofing, involves corrupting the DNS cache with false information to redirect users to malicious websites. In the general case, DNS cache poisoning refers to any mechanism where an attacker is able to provide malicious responses to DNS queries, resulting in those responses getting cached locally.

For instance, if an attacker installs malware that inspects Ethernet traffic on the network, the malware can detect DNS queries and issue forged responses. The response’s source address can even be forged to make it appear to come from a legitimate server. The local DNS resolver will accept the data because there is no way to verify whether it is legitimate.

Here is another way that a browser-based DNS cache poisoning attack can be performed, via JavaScript on a malicious website. The attack takes advantage of the fact that a DNS response for a subdomain, such as a.bank.com, can contain information about a new DNS server for the entire bank.com domain. The goal of the attacker is to redirect requests for bank.com, even if the IP address for the domain is already cached in the system.

The browser requests access to a legitimate site but with an invalid subdomain. For example, a.bank.com. Because the system will not have the address of a.bank.com cached, it sends a DNS query to an external DNS resolver using the DNS protocol.

The DNS query includes a query ID (QID) x1. At the same time that the request for a.bank.com is made, JavaScript launches an attacker thread that sends 256 responses with random QIDs (y1, y2, y3, …). Each of these DNS responses tells the resolver that the DNS server for bank.com is at the attacker’s IP address.

If one of these responses happens to have a matching QID, the host system will accept it as truth that all future queries for anything at bank.com should be directed to the name server run by the attacker. If the responses don’t work, the script can try again with a different subdomain, b.bank.com. The attack might take several minutes, but there is a high likelihood that it will eventually succeed.

Summary: An attacker can run a local DNS server that will attempt to provide spoofed DNS responses to legitimate domain name lookup requests. If the query ID numbers of the fake response match those of a legitimate query (trial and error), the victim will get the wrong IP address, which will redirect legitimate requests to an attacker’s service.
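
A rough back-of-the-envelope calculation (assuming 256 forged replies raced against each lookup and a uniformly random 16-bit QID) shows why repeated attempts eventually succeed:

    guesses_per_try = 256
    p_single = guesses_per_try / 2**16            # chance a single lookup gets poisoned
    for tries in (1, 50, 200, 1000):
        p = 1 - (1 - p_single) ** tries           # chance of at least one success
        print(f"{tries:5d} lookups -> {p:.1%} chance of success")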

Note the difference between pharming and DNS cache poisoning lies in their scope and methods:

Pharming is a broader attack that redirects users from legitimate websites to malicious ones by manipulating the domain name resolution process. It can involve altering the victim’s local hosts file, compromising routers, or poisoning DNS servers.

DNS cache poisoning (DNS spoofing) focuses on manipulating DNS responses to mislead users temporarily, until the poisoned cache entries expire.

DNS cache poisoning defenses

Several defenses can prevent this form of attack. The first two we discuss require non-standard actions that will need to be coded into the system.

Randomized source port

We can randomize the source port number of the query. Since the attacker does not get to see the query, it will not know where to send the bogus responses. There are 2¹⁶ (65,536) possible ports to try.

Double queries

The second defense is to force all DNS queries to be issued twice. The attacker will have to guess a 16-bit query ID twice in a row and the chances of doing that successfully are infinitesimally small.

DNS over TCP

We can make these attacks far more difficult by using DNS over TCP rather than UDP. Inserting a message into a TCP session is much more difficult than just sending a UDP packet since you need to get the correct sequence numbers as well as source address and port numbers. You also need to have access to a raw sockets interface to create a masqueraded TCP segment.

DNS servers can be configured to use either or both protocols. TCP is often avoided because it creates a much higher latency for processing queries and results in a higher overhead at the DNS server.

DNSSEC

The strongest solution is to use a more secure version of the DNS protocol. DNSSEC, which stands for Domain Name System Security Extensions, was created to allow a DNS server to provide authenticated, signed responses to queries.

Every response contains a digital signature signed with the domain zone owner’s private key. For instance, Rutgers would have a private key and responses to queries for anything under rutgers.edu would be accompanied with a signature signed with Rutgers' private key. This authenticates the origin of the data and ensures its integrity – that the data has not been later modified.

The receiver needs to validate the signature with a public key. Public keys are trusted because they are distributed in a manner similar to X.509 certificates. Each public key is signed by its parent domain. For example, the public key for Rutgers.edu would be signed with the private key of the owner of the .edu domain, EDUCAUSE. Everyone would need a root public key to verify this chain of trust.
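
The sketch below is not real DNSSEC (which uses DNSKEY, RRSIG, and DS records); it only shows the underlying sign-and-verify idea, using the third-party cryptography package with an Ed25519 key standing in for a zone's key pair.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    zone_key = Ed25519PrivateKey.generate()          # the zone owner's private key (illustrative)
    answer = b"www.rutgers.edu. A 128.6.1.1"
    signature = zone_key.sign(answer)                # accompanies the response data

    public_key = zone_key.public_key()               # distributed via the chain of trust
    try:
        public_key.verify(signature, answer)         # raises if the data was altered
        print("answer verified")
    except InvalidSignature:
        print("answer was tampered with")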

DNSSEC has been around since 2008 and is in use, but widespread adoption has been slow. It is difficult to overcome industry inertia and a lack of desire to update well-used protocols. It also requires agreements between various service providers and vendors. Administrators can be reluctant to use it because it’s more compute-intensive and results in larger data packets.

DNS Rebinding

Web application security is based on the same-origin policy. This restricts the resources that JavaScript can access. Browser scripts can access cookies and other data on pages only if they share the same origin. The underlying assumption is that resolving a domain name takes you to the correct server.

The DNS rebinding attack allows JavaScript code on a malicious web page to access private IP addresses in the victim’s network.

The attacker configures the DNS entry for a domain name to have a short time-to-live value (TTL). When the victim’s browser visits the page and downloads JavaScript from that site, that JavaScript code is allowed to interact with the domain thanks to the same origin policy. However, right after the user’s browser downloads the script, the attacker reconfigures the DNS server so that future queries will return an address in the user’s internal network. The JavaScript code can then try to request resources from that system since, as far as the browser is concerned, the origin is the same because the name of the domain has not changed. The attacker may have to guess but most local addresses start with 192.168 or 10.0. By continuing to use short TTL values, the JavaScript code can continue to issue DNS queries and allow the attacker to try different addresses.

Summary: short time-to-live values in DNS allow an attacker to change the address of a domain name so that scripts from that domain can now access resources inside the private network.

See this Medium article for more details and a walkthrough of how attacks take place.


  1. At the data link layer, packets are called frames.

  2. MAC = Media Access Control and refers to the hardware address of the Ethernet device. Bluetooth, Ethernet, and Wi-Fi (802.11) share the same addressing formats.

  3. A trunk is the term for the connection between two switches.

  4. At the network layer, a packet is referred to as a datagram.

  5. At the transport layer, we refer to packets as segments. Don’t blame me. I don’t know why we need different words for each layer of the protocol stack.

Last modified November 27, 2024.