With the world having witnessed two years' worth of digital transformation in just a few months, the number of businesses engaging with customers primarily via digital channels is now truly staggering. Cybercriminals have been quick to recognise this and have ramped up their attacks against web applications accordingly. And while phishing and ransomware remain the top causes of breaches, Barracuda researchers found that scammers are increasingly turning to bots and automation to make their attacks more efficient, more effective and harder to detect.
Bots are programs coded to automate a specific task on the internet. While good bots make it easier to share information or conduct searches, bad bots are used for malicious purposes, such as automated attacks that exploit vulnerabilities in web applications. These attacks range from fake bots posing as Googlebot to avoid detection, to distributed denial-of-service (DDoS) attacks that flood an online service with junk traffic until it is knocked offline. Bot attacks now encompass everything from web scraping, where bots are used to harvest content or data, to bots that try to defeat CAPTCHAs or commit ad fraud, card fraud and inventory fraud. In a recent Threat Spotlight, Barracuda Networks identified that bots pretending to be Googlebot or similar accounted for over 12% of web application attacks, while application DDoS was surprisingly dominant, making up more than 9% of attacks.
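Fake Googlebots are one of the more tractable cases to detect, because Google publicly documents a verification procedure: reverse-resolve the client IP, confirm the hostname sits under a Google domain, then forward-resolve that hostname and confirm it maps back to the same IP. The sketch below is illustrative only, not any vendor's implementation; the function names and domain list are assumptions for the example.

```python
import socket

# Per Google's documented check, a genuine crawler reverse-resolves
# to a hostname under one of these domains.
GOOGLE_DOMAINS = (".googlebot.com", ".google.com")

def hostname_is_google(hostname: str) -> bool:
    """Pure check: the reverse-DNS name must sit under a Google-owned domain."""
    return hostname.rstrip(".").endswith(GOOGLE_DOMAINS)

def is_genuine_googlebot(ip: str) -> bool:
    """Reverse-resolve the IP, check the domain, then forward-confirm it.

    A bot can freely forge its User-Agent header, but it cannot forge
    Google's DNS records, which is why the check resolves both ways.
    """
    try:
        hostname = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False  # no PTR record: treat the Googlebot claim as fake
    if not hostname_is_google(hostname):
        return False
    try:
        # Forward-confirm: the hostname must resolve back to the same IP.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
```

Note the suffix check requires a leading dot, so a hostname like `googlebot.com.attacker.net` does not pass.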
Cybercriminals are becoming increasingly reliant on automated tools to target web applications. Barracuda Networks researchers found that nearly 20% of the attacks detected were fuzzing attacks, which probe for the points at which applications break in order to exploit them. Fuzzing is an automated process of finding exploitable software bugs by feeding randomly generated permutations of data into a target programme until one of those combinations reveals a vulnerability. In a two-month sample of blocked web application attacks from November and December 2020, the top five categories of attacks using automated tools were fuzzing attacks, injection attacks, fake bots, application DDoS and blocked bots. Nearly 12% of the attacks were injection attacks, with most attackers using automated tools such as sqlmap to try to break into applications. This trend shows no signs of receding, given the relative ease of execution and the potential for large returns for cybercriminals.
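To make the idea of fuzzing concrete, here is a deliberately naive sketch: it feeds random printable strings into a target function and records the inputs that crash it. Real fuzzers (and the attack tools the researchers observed) are far more sophisticated; `naive_fuzz` and the toy `parse_field` target are invented for illustration.

```python
import random
import string

def naive_fuzz(target, runs=500, max_len=64, seed=42):
    """Throw random printable strings at `target`; keep the inputs that crash it."""
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    crashes = []
    for _ in range(runs):
        data = "".join(
            rng.choice(string.printable) for _ in range(rng.randrange(1, max_len))
        )
        try:
            target(data)
        except Exception as exc:
            # Any unhandled exception marks a potential vulnerability to triage.
            crashes.append((data, exc))
    return crashes

# Toy target with a hidden bug: it cannot handle an unescaped semicolon.
def parse_field(value: str) -> str:
    if ";" in value:
        raise ValueError("unescaped delimiter")
    return value.strip()

crashes = naive_fuzz(parse_field)
```

After a few hundred iterations the random inputs stumble onto the semicolon bug, which is exactly the dynamic attackers rely on at much larger scale.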
There has also been an overwhelming number of data exfiltration attempts targeting credit card numbers, social security numbers and other sensitive data. VISA card numbers accounted for more than three-quarters of these attacks, followed distantly by JCB at more than 20%, with MasterCard, Diners and American Express seen in much smaller volumes. Clearly, the volume of automated traffic online is growing.
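Card-number exfiltration can be spotted in outbound traffic because card numbers have a well-defined shape: 13 to 19 digits that satisfy the standard Luhn checksum. The sketch below shows that detection idea in minimal form; the regex, names and thresholds are illustrative assumptions, not a production data-loss-prevention rule.

```python
import re

# Candidate card number: 13-19 digits, optionally separated by spaces or dashes.
PAN_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum, mod 10."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_card_numbers(payload: str) -> list[str]:
    """Scan outbound text for digit runs that pass the card checksum."""
    hits = []
    for match in PAN_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

The Luhn filter matters because phone numbers and order IDs also look like long digit runs; requiring the checksum to pass cuts the false positives dramatically.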
While not all bots are bad, there are enough malicious ones to warrant close attention from organisations. Cybersecurity teams should take the necessary steps to safeguard their applications against bad bots, as digital transactions are expected to continue in full swing even in the post-pandemic world. To detect advanced automated attacks effectively, businesses should deploy a well-configured web application firewall, delivered as a service, with anti-bot protection, DDoS protection, API security and credential stuffing protection.
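A managed firewall does far more than any single technique, but one building block common to anti-bot, application-DDoS and credential stuffing protection is per-client rate limiting. The sliding-window limiter below is a minimal sketch of that idea; the class name, thresholds and in-memory storage are assumptions made for the example (a real deployment would use shared state and richer client fingerprinting).

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Throttle per-client request rates: a crude first line of bot defence."""

    def __init__(self, max_requests: int, window_s: float):
        self.max_requests = max_requests
        self.window_s = window_s
        self._hits = defaultdict(deque)  # client id -> timestamps of recent requests

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        """Return True if this request fits within the client's budget."""
        now = time.monotonic() if now is None else now
        window = self._hits[client_id]
        # Drop timestamps that have aged out of the sliding window.
        while window and now - window[0] > self.window_s:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # over budget: block, delay, or challenge the client
        window.append(now)
        return True
```

Against a credential stuffing run, for example, a budget of a handful of login attempts per IP per minute forces the attacker either to slow down dramatically or to spread across many addresses, both of which make the campaign easier to spot.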
It is also important to recognise that a disconnect between teams is what allows bots to threaten business operations. In most organisations, effective bot management relies on the application security and security operations teams, yet it is the e-commerce, fraud and network security professionals who commonly consume the data from bot management tools. This disconnect can leave the commerce or fraud teams out of critical bot management decisions. To safeguard web applications effectively against newer attacks, businesses must emphasise collaboration between internal teams, including security, customer experience, e-commerce and marketing.
Automated attacks using bots are highly likely to keep increasing as traffic between web applications continues to rise. It is therefore vital to stay informed about current threats and how they are evolving: newer attacks tend to be overlooked due to a lack of understanding, and shadow applications are often deployed without appropriate protections. IT security teams can better prepare by working with other departments across the business to track bad bots and keep them from becoming a security risk to the organisation as a whole. Organisations should look for a WAF-as-a-Service or WAAP solution that includes bot mitigation, DDoS protection, API security and credential stuffing protection, and make sure it is properly configured to protect against these newer attacks.
By Toni El Inati, RVP Sales, META & CEE, Barracuda Networks