Plenty has already been written about last week's distributed denial of service (DDoS) attack targeting the servers of Spamhaus, an organization that fights email spam. Now that the event has passed, one point remains: the attack, which flooded exchange points with data at a rate of 300 Gbps, marks the beginning of large-scale DDoS campaigns that shift tactics to exploit infrastructure vulnerabilities. Companies of all sizes need to review and update their security strategies.
Call it the age of asymmetrical cyber warfare. Direct attacks against servers, a standard DDoS tactic, are less successful as companies harden security at previously vulnerable points. That's what attackers initially tried against Spamhaus. When that failed, they redirected their traffic toward other vulnerable points in the regional network, attempting to bog down key connection points in Amsterdam, London, and Hong Kong.
Hitting the network at new points to exploit well-known and lesser-known vulnerabilities isn't a new tactic, but it's bound to increase.
We saw last fall how unseen weak points and unforeseen contingencies can affect networks. Servers handling major websites, including Gawker, The Huffington Post and the IEEE, went offline or became difficult to reach after Hurricane Sandy. The cause was not the storm itself but power outages at the server "hotels" that house thousands of companies' servers in the New York City area. The outages occurred largely because the backup diesel generators keeping the hotels powered simply ran out of gas.
While it's extremely unlikely that attackers could exploit the "out of gas" vulnerability to shut down servers, it is important to look at vulnerabilities throughout the infrastructure and protect against possible exploits. Yes, that is an obvious statement on its surface. But data center hotel operators in Manhattan didn't expect to have to struggle to get fuel deliveries following the storm.
Did it really happen? Does that matter?
Within a day of the size and extent of the Spamhaus attack being reported, critics began questioning the 300 Gbps figure. Gizmodo theorized that the attack was not happening at all, because the Internet appeared to be operating at normal levels at the attack's supposed height. Others accused the parties in the attack (Spamhaus, its provider CloudFlare, and reputed attacker CyberBunker) of staging it as a publicity stunt.
"There definitely was an attack," said Carlos Morales, vice president, global sales engineering and operations at Arbor Networks, in an interview with FierceTelecom. He noted that data from the company's customers, which have end user stations at various points around the Internet, showed that a DDoS event was underway.
"The type of attack in place was a DNS reflection attack, using open DNS resolvers to proxy an attack and reflect or amplify to a victim," Morales said. In some cases a request can be amplified up to 30 times, he explained.
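The arithmetic behind that amplification factor is straightforward. The sketch below uses assumed packet sizes (the 64-byte query and 1,920-byte response are illustrative, chosen only to reproduce the roughly 30x factor Morales cites), but it shows why reflection is so attractive to attackers:

```python
# Back-of-the-envelope DNS amplification arithmetic.
# Packet sizes are assumptions chosen to illustrate a ~30x factor.
query_bytes = 64       # small spoofed UDP query sent to an open resolver
response_bytes = 1920  # much larger response the resolver reflects back

amplification = response_bytes / query_bytes
print(amplification)   # 30.0

# Because the query's source address is spoofed to the victim's address,
# the victim absorbs the amplified traffic, not the attacker:
attacker_gbps = 10
victim_gbps = attacker_gbps * amplification
print(victim_gbps)     # 300.0
```

In other words, an attacker who controls only a modest amount of outbound bandwidth can, by bouncing spoofed queries off open resolvers, direct a far larger stream at the target.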
While Arbor Networks was not involved in mitigation and did not record the attack traffic in its data banks, the size of the attack as reported by Spamhaus security provider CloudFlare caught other providers' attention. "The Spamhaus attack was so over the top that it was hard not to notice," said Morales.
If the attack was so large, why didn't parts of the Internet become inaccessible due to that traffic congestion? Mike Smith, senior security evangelist at Akamai, told FierceTelecom that the distributed nature of the Internet played a role in reducing its impact.
"You'll see isolated pockets of problems, but the system is designed to route around that damage. What we've seen, if you were inside of Amsterdam [AMS-IX] where some target servers were, you would see some degradation. If not, you didn't have much of a problem," Smith said. "I know this is clichéd, but if you think of the Internet as the highway system, it would be like major interstates being congested. If your route crosses that path you'll see the jam, but if you're not part of that network you won't see any of that."
Both Smith and Morales said that while a 300 Gbps attack is unprecedented, it doesn't necessarily mean parts of the Internet will shut down.
"We've done live streaming events … those by themselves are bigger than 300 gigs," said Smith. "That has an impact in some places and is usually more widespread than the network congestion in Amsterdam."
Resiliency doesn't mean security
The ability of most network traffic to reroute around the DDoS event doesn't mean the attackers couldn't have been successful.
First, the size of the attack is a clear indication that it can and will happen again. "Now that it's been broken open, I suspect yes," said Morales. "It doesn't take DNS amplification to do it. Take for instance the financial attacks (recent DDoS events targeting financial institutions like TD Bank). They're 50, 60 gigs each one. They've been up above 100 gigs aggregate. Each of those attacks is using no more than one-tenth of the bots available to attackers. So they could easily quadruple the size of the attack. It's definitely got us on pins and needles."
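Morales's figures imply considerable headroom. A rough sketch of that arithmetic, assuming attack traffic scales linearly with the number of bots mobilized (an assumption, not a claim from the interview):

```python
# Headroom estimate implied by Morales's figures (all values from his quote;
# the linear-scaling assumption is ours).
per_attack_gbps = 60     # "50, 60 gigs each one"
aggregate_gbps = 100     # "up above 100 gigs aggregate"
botnet_headroom = 10     # inverse of the "one-tenth of bots" actually in use

# At full strength, the same botnet could push ten times the observed aggregate:
ceiling_gbps = aggregate_gbps * botnet_headroom
print(ceiling_gbps)      # 1000

# "Quadrupling" the attack, as Morales suggests, sits well inside that ceiling:
quadrupled_gbps = 4 * aggregate_gbps
print(quadrupled_gbps)   # 400
```

The point is not the exact numbers but the margin: even a fourfold increase would use less than half of the capacity already demonstrated to be available.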
Smith agreed that large-scale events will become more frequent. "It's a function of network speeds and the number of attacking nodes attackers can get ahold of. As bandwidth gets faster, the ability to put through attack traffic increases."
Second, the event focused attention on a known exploitable point in DNS servers.
"Some coverage was unduly breathless, but the attack was significant for at least three reasons beyond its sheer size: 1) it exposed the extent to which DNS servers were unsecured, 2) it revealed deeper vulnerabilities in DNS, and 3) it suggested the difficulties legal systems encounter when faced with transnational exploits," according to Monday's Cyberwire newsletter.
Third, the Internet, despite its distributed nature, still has some critically vulnerable points in its network infrastructure. "If people were cutting undersea cables, like in the Middle East, it has more impact than the attack on Spamhaus," said Smith. Last week's attempted cut of the SeaMeWe-4 submarine cable off Egypt is an example.
Which leads to the real lesson of the Spamhaus incident: Companies need to take a fresh look not just at how they secure their servers and related infrastructure, but also at their defense and contingency planning.
While security providers like Arbor and CloudFlare have been aware of the possibility of 300 Gbps-plus DDoS events for at least four years, many service providers and their enterprise customers aren't as prepared as they should be.
"Too many companies turn a blind eye to security," said Morales. "That's not the best way to run a company, especially its security. They're setting themselves up for a big failure."
Smith pointed to the importance of best practices in securing the network. "BCP 38 is a best practice for Internet providers. You only allow traffic out of your network that originates inside your network," he explained. "It's been common wisdom for a while, but there are ISPs that do not do egress filtering. They allow people to spoof the addresses they're coming from. … This has become an Internet health problem."
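The filtering logic BCP 38 prescribes is simple: forward an outbound packet only if its source address belongs to your own network. A minimal sketch, using the reserved documentation prefix 203.0.113.0/24 as a stand-in for a provider's allocation (in practice this check is implemented in router ACLs or unicast reverse-path forwarding, not application code):

```python
import ipaddress

# Sketch of BCP 38 egress filtering: permit outbound traffic only when
# its source address falls inside the provider's own prefix, so customers
# cannot spoof other networks' addresses (as DNS reflection requires).
OWN_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # documentation range

def permit_egress(src_ip: str) -> bool:
    """Return True if the packet's source belongs to our own network."""
    return ipaddress.ip_address(src_ip) in OWN_PREFIX

print(permit_egress("203.0.113.7"))   # True  -- legitimate customer traffic
print(permit_egress("198.51.100.9"))  # False -- spoofed source, dropped
```

If every provider applied this check at its edge, spoofed-source attacks like DNS reflection would lose their raw material, which is why Smith frames non-compliance as an Internet health problem.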
Morales echoed that sentiment. "For ISPs, we encourage them to implement best practices: (have) reverse path forwarding at the edge; protect the routing infrastructure and have the control plane on a separate network; and have sufficient mitigation capacity to deal with types and sizes of attacks."
He advised enterprises to get in touch with their ISPs and cloud providers to make sure they provide mitigation services. "The last thing you want to do is be stuck negotiating managed services while dealing with a problem."--Sam