In today’s day and age, people often take on more than they can handle. The first task is fine, the second gets a little harder, but as the third, fourth, fifth, and sixth start piling on, they become overwhelmed and end up crashing! The same thing happens to websites when they are hit with a Distributed Denial of Service (DDoS) attack. One attack that fits this situation particularly well is the HTTP Flood.
The HTTP Flood is a common layer 7 attack used in DDoS campaigns. Effective layer 7 attacks generally require the attacker to have a better understanding of the target website and its functionality, because the attack is crafted specifically for the site being targeted. This is also why an HTTP Flood typically requires less bandwidth than other attacks. This particular variant is often called an HTTP GET Flood, and for good reason: it uses valid GET requests to retrieve information. GET is just one option, as floods can also be performed with PUT, POST, and other HTTP methods. From a less technical standpoint, a GET Flood is simply valid GET requests arriving at such an extremely high volume that the server cannot respond to them all quickly enough.
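To make concrete why these requests are so hard to filter, here is a minimal sketch of what a single request in a GET flood looks like on the wire. The host, path, and headers are illustrative assumptions; the point is that each request is fully valid HTTP and individually indistinguishable from legitimate traffic.

```python
def build_get_request(host: str, path: str) -> str:
    """Return a raw, valid HTTP/1.1 GET request as a string."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "User-Agent: Mozilla/5.0\r\n"
        "Accept: text/html\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
    )

# A flood is nothing more than requests like these, repeated at extreme
# volume, often from many different source machines at once.
requests = [build_get_request("example.com", f"/search?q=item{i}") for i in range(5)]
print(requests[0].splitlines()[0])  # GET /search?q=item0 HTTP/1.1
```

Because nothing in any single request is malformed, defenses cannot simply drop "bad" packets; they have to reason about volume and behavior across many requests.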
Because this attack is accessible and relatively easy to launch, the requests made are usually for specific and complex information, in order to overwhelm the server's CPU and memory. The high volume of requests also means botnets are typically used for this type of attack. This enables a large number of requests to be sent out at once, overwhelming the website much more quickly.
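The effect of choosing complex requests can be sketched with a toy cost model. The paths and per-request CPU costs below are invented purely for illustration: a dynamic search that hits a database costs the server far more to serve than a cached static page, so an attacker gets much more exhaustion per request.

```python
# Hypothetical CPU cost (in seconds) to serve each path -- illustrative
# numbers only, not measurements from any real server.
COST = {
    "/index.html": 0.0001,          # cached static page, cheap to serve
    "/search?q=rare+item": 0.05,    # dynamic query hitting the database
}

def cpu_seconds(paths):
    """Total CPU time the server spends answering this list of requests."""
    return sum(COST[p] for p in paths)

static_flood = ["/index.html"] * 1000
search_flood = ["/search?q=rare+item"] * 1000

print(cpu_seconds(static_flood))  # roughly 0.1 seconds of CPU
print(cpu_seconds(search_flood))  # roughly 50 seconds of CPU, 500x heavier
```

The same request volume ties up hundreds of times more server capacity when every request is expensive, which is exactly why GET floods target specific, complex resources.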
To think about this from a less technical perspective, let's imagine a retail store clerk. In this example, the clerk is the web server and the customers are sending GET requests for store items. A customer comes in and asks for a product; easily enough, the clerk heads to the product's location and returns with the request fulfilled.
Now imagine the clerk comes back and there are two customers requesting products, then tens of customers, then hundreds, and then thousands. A single clerk would become overwhelmed and exhausted, unable to serve any of the customers. This is what happens to a server during an application-layer attack. To continue the example, these customers are not asking for a generic product; they are all asking for different, specific products, just as flood GET requests are made specific in order to overwhelm the CPU's capacity. If one customer requests a sack of potatoes instead of a can of soup, and then every customer begins requesting heavier products, the clerk's capacity is exhausted just as the CPU's capacity is.
Large companies are no strangers to this type of attack, but many believe their current DDoS mitigation services and processes protect them completely. Mitigating these attacks is the right action, but vulnerabilities at specific layers of the OSI model may still go undetected, so it is important to continually verify the quality of these protection services. One crucial suggestion for sharpening a DDoS mitigation system is to test the main layers of the OSI model by performing controlled DDoS attacks, then fixing whatever is needed to perform at an optimal level. This way, companies can test their systems and measure the reaction of the team responsible for protecting them. It may sound extreme to probe for vulnerabilities by attacking your own infrastructure, but it is the most effective way to avoid real attacks and their implications. My suggestion for anyone with DDoS mitigation in place is to battle-test it and see where the vulnerabilities are lurking. Effective methods of battle-testing your DDoS mitigation can be found in our Online Webinar. Mitigating DDoS attacks is one thing, but ensuring the mitigation is effective is what it takes to be fully protected!
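One of the basic building blocks such mitigation systems rely on is per-source rate limiting. The sketch below is a minimal sliding-window limiter, assuming requests are keyed by client IP; the class name, limits, and IP address are hypothetical, and real mitigation platforms layer far more on top of this (behavioral analysis, challenges, global scrubbing).

```python
from collections import defaultdict, deque
import time

class RateLimiter:
    """Minimal sliding-window rate limiter keyed by client IP (sketch only)."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, client_ip, now=None):
        """Return True if this request is within the per-IP budget."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over budget: drop or challenge the request
        q.append(now)
        return True

limiter = RateLimiter(max_requests=100, window_seconds=1.0)
allowed = sum(limiter.allow("203.0.113.5", now=0.0) for _ in range(150))
print(allowed)  # 100 -- the 50 requests beyond the cap are rejected
```

Note the limitation this sketch makes obvious: a per-IP cap works against a single noisy source, but a botnet spreading the same volume across thousands of addresses slips under it, which is precisely why mitigation needs to be battle-tested rather than assumed to work.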