4 Common Misconceptions in DDoS Mitigation

After several years in cybersecurity, and specifically in the DDoS mitigation space, I keep running into the same widespread misconceptions.

Here are the top four misconceptions I hear from customers: 


1: DDoS attacks are always big

Based on my interactions with CISOs, and backed by recent reports on the subject, one of the top misconceptions is that DDoS attacks are always big. Networking is a complex topic – there are many ways to make a service unavailable, and most of them do not rely on massive traffic rates. This misconception likely springs from the fact that big attacks are the ones that get widespread media attention.

An attack only needs to reach a minimum rate, and MazeBolt’s data shows that this rate typically falls between 0.5 and 10 Gbps. Research confirms that large attacks of 100 Gbps and above have fallen by 64 percent since 2019, while attacks of 5 Gbps or less have seen a startling 158 percent increase.

Enterprises struggle to distinguish low-rate attacks from legitimate traffic while also maintaining a low false-negative rate. Like big attacks, small attacks can bring down services rapidly and create an equivalent impact on the business, which should urge companies to be prepared and review their web security arrangements.

2: DDoS attacks may be common, but we have never experienced one

This thought is by far the most harmful of all DDoS misconceptions. DDoS attacks are getting smarter and sneakier, and companies aren’t sufficiently prepared to dodge these devious threats. In 2020, successful DDoS attacks grew by 200%. From Netflix to Twitter, Wikipedia to international banks, gaming and gambling sites, DDoS attacks have spared no industry segment. Although the statistics may suggest that attacks target “certain types” of enterprises, in reality 9 out of 10 businesses have reported experiencing an attack, with an average downtime of 30 minutes. Gartner estimates that a single minute of downtime costs most businesses $5,600, or more than $300,000 per hour. The aftermath of a DDoS attack brings monetary loss, operational challenges, and loss of customer trust. Ironically, enterprises that believe they are safe from web attacks are the ones that suffer the most debilitating damage, because they are unprepared.
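The downtime figures above can be sanity-checked with simple arithmetic. A minimal sketch, using only the Gartner per-minute estimate quoted in this post (the function name is illustrative):

```python
# Illustrative downtime-cost arithmetic based on the Gartner estimate
# cited above: $5,600 per minute of downtime.
COST_PER_MINUTE = 5_600  # USD per minute (Gartner's average estimate)

def downtime_cost(minutes: float) -> float:
    """Return the estimated outage cost in USD for a given duration."""
    return minutes * COST_PER_MINUTE

# One hour: 60 * 5,600 = 336,000 USD -- "more than $300,000 per hour"
print(downtime_cost(60))  # 336000
# The 30-minute average outage mentioned above:
print(downtime_cost(30))  # 168000
```

This confirms the per-hour figure quoted above is consistent with the per-minute estimate.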

3: I don’t care about region, so geo-blocking will be effective

Geo-blocking is used to block malicious traffic using IP geolocation software. Enterprises reject traffic originating from locations that have a history of launching DDoS attacks. The problem with this arrangement is that genuine traffic from these locations also gets blocked: geo-blocking restricts traffic wholesale and thereby stunts business growth. Despite this problem, enterprises very often see it as a quick fix for preventing DDoS attacks. However, hackers are smart, and they find ways to bypass geolocation blocking by spoofing their IP addresses.

In most cases, geo-blocking is not accurate. It provides an approximate, “best guess” estimation of where traffic originates, and IP ranges being sold between countries and regions adds further inaccuracy. Geo-blocking ultimately fails if the attacker spoofs the source IP or launches a reflection attack from an allowed location.
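To make the mechanism concrete, here is a minimal sketch of a geo-blocking filter: a lookup from source IP to country, then a drop decision. The lookup table is a stand-in for a real IP geolocation database, and the ranges and country codes are purely illustrative, not real data:

```python
import ipaddress

# Stand-in for an IP geolocation database. Real deployments use commercial
# databases, and the mapping is only a "best guess" -- ranges change hands
# between regions, which is one source of the inaccuracy discussed above.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "XX",   # hypothetical "risky" country
    ipaddress.ip_network("198.51.100.0/24"): "YY",  # hypothetical allowed country
}

BLOCKED_COUNTRIES = {"XX"}

def country_of(src_ip: str) -> str:
    """Best-guess country lookup for a source address."""
    addr = ipaddress.ip_address(src_ip)
    for net, country in GEO_TABLE.items():
        if addr in net:
            return country
    return "UNKNOWN"  # no match: the database simply doesn't know

def allow(src_ip: str) -> bool:
    """Drop traffic whose source maps to a blocked country."""
    return country_of(src_ip) not in BLOCKED_COUNTRIES

print(allow("203.0.113.7"))   # False: blocked -- including legitimate users there
print(allow("198.51.100.7"))  # True
# An attacker who spoofs a source address from an allowed range passes
# the filter unchanged, which is why geo-blocking fails against spoofing.
print(allow("198.51.100.99"))  # True, even if the packet really came from "XX"
```

The filter only ever sees the claimed source address, so both failure modes described above fall out directly: legitimate users in a blocked range are rejected, and spoofed or reflected traffic from an allowed range sails through.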

4: My DDoS mitigation solution offers full protection

Even with the most sophisticated DDoS mitigation and testing solutions deployed, most companies carry a staggering 48% DDoS vulnerability level. The gap stems from DDoS mitigation solutions and infrequent Red Team DDoS testing being reactive, rather than continuously evaluating and closing vulnerabilities.

Currently, mitigation solutions are unable to reconfigure and fine-tune their DDoS mitigation policies on an ongoing basis, leaving visibility limited and forcing teams to troubleshoot at the worst possible time: when a successful DDoS attack has already brought systems down. These solutions are all reactive, responding to an attack rather than closing DDoS vulnerabilities before one happens.


Identifying vulnerabilities through DDoS Red Team testing is not workable in the long run. Such testing simulates a small variety of DDoS attack vectors in a controlled manner to validate the human response (the Red Team) and the procedural handling of a successful DDoS attack. Red Team testing is a static test run on dynamic systems, usually carried out twice a year. It does not diagnose a company’s overall vulnerability to DDoS attacks, and any information gained is valid only for that point in time. Further, Red Team testing disrupts IT systems and requires a planned maintenance window.

Introducing RADAR™ Testing for DDoS Attack Protection

RADAR™ testing, MazeBolt’s new patented DDoS protection solution, is part of the MazeBolt security platform. It works continuously and non-disruptively, delivering advanced intelligence to remediate any DDoS vulnerabilities found in your network. With RADAR™ testing, organizations achieve, maintain, and verify the continuous closing of their DDoS vulnerability gaps, reducing the vulnerability level to damaging DDoS attacks from an average of 48% to under 2% on an ongoing basis.
