How Bot and Fraud Mitigation Can Work Together to Reduce Risk


Onions are great for analogies, as are buckets full of stuff from the beach. In this piece, I’d like to look at how these two analogies can help us understand the way bot and fraud mitigation work together, helping enterprises improve their security postures and lower their fraud losses.

The obvious analogy when it comes to an onion is that of peeling away different layers. When we look at the digital channels (web and mobile) of online applications, we find a whole host of different activities. Some of that activity is desired, while much of it may not be. Yet, more often than not, we try to defend our online applications as a whole, without peeling away the individual layers of activity that would give us a far clearer view of what is actually going on.

Similarly, say I give you a bucket of water, sand, and rocks from the beach, and I ask you to pull out all of the rocks. You could certainly put your hands in the bucket and attempt to pull out the rocks one by one. Or, you might grab some sort of strainer, pour the contents of the bucket through it, and find yourself left with all of the rocks. The first method is a brute-force approach of sorts - diving right in without considering whether tools might help complete the job more efficiently. It is equivalent to looking at the onion without peeling away any of the layers. The second method uses tools to complete the work more efficiently. That is akin to peeling away the layers of the onion to better understand it.

When looking to detect security breaches and fraud events within our online applications, we must first understand that we most likely have a combination of automated traffic (bots), manual fraud (fraudsters), and legitimate customer traffic (what we want). Having all three of these mixed together creates a large volume of data, much of it noise. It is extremely difficult to identify, analyze, and investigate any traffic of interest when looking across the entirety of the traffic, noise included.

Thus, to protect our online applications from security and fraud threats more effectively, we must revisit our analogies. We must peel away the layers of the onion, and we must strain the bucket of beach stuff. Or, to put it another way, we can take a three-pronged approach that lets us monitor our online applications for security and fraud issues with far less noise (a brief code sketch after the list illustrates how the three prongs fit together):

1. Automated traffic: Bots, whether good or bad, are not the legitimate human customers we desire. In some cases, bots can make up 80%-90% of the traffic an online application sees. As such, the first step to improving security and fraud monitoring is removing all of the automated traffic. Rules and signatures aren’t enough here - understanding how to differentiate the intent and behavior of bots vs. humans is key. Successfully filtering out the automated traffic reduces both the noise level and the risk level tremendously and allows the security and fraud teams to focus on what remains: manual (human) traffic. Some of that traffic will be legitimate and wanted, while some will be fraudulent and unwanted.

2. Manual fraud: Fraudsters are highly motivated, creative, and clever. They make their living figuring out how to abuse the business logic of your online applications in order to cause you fraud losses. They learn how to hide among your legitimate users. When you figure out how to detect them, they change their behavior. Here again, rules and signatures are not enough - an understanding of how to differentiate the intent and behavior of fraudsters vs. legitimate users is a must. Doing that successfully makes it possible to filter out and block the majority of fraud, which greatly reduces fraud losses and further cuts the noise clouding visibility. Beyond those benefits, it opens the door to a third possibility.

3. Reduce friction: If we can reliably identify the traffic we don’t want, we can also reliably identify the traffic we do want. And if we can do that, we can stop piling user experience friction onto our known good users. In other words, if I can reliably tell that a given user is both human and legitimate, why hassle them with friction such as MFA challenges? Conventional wisdom holds that those friction points are essential to stopping security breaches and fraud attacks, but that wisdom breaks down once we can stop the attacks without burdening the user. The ability to reliably identify unwanted traffic (whether bot or fraud) opens up an entirely new realm of possibilities for reducing user experience friction. That, in turn, mitigates another risk that is often overlooked: the risk of lost revenue due to users abandoning the application.
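To make the ordering of those three prongs concrete, here is a minimal sketch of the layered triage in Python. Everything in it is a hypothetical placeholder: the Request fields, the score names, and the thresholds stand in for the output of real behavioral detection models, and none of it refers to any particular product or API.

    # A minimal, illustrative sketch of the layered "peel the onion" approach.
    # All names, signals, and thresholds here are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Request:
        """A simplified view of one request to an online application."""
        user_id: str
        automation_score: float  # 0.0 (human-like) .. 1.0 (bot-like)
        fraud_score: float       # 0.0 (legitimate) .. 1.0 (fraudulent)

    # Hypothetical thresholds; in practice these would come from behavioral
    # models, not hard-coded constants.
    BOT_THRESHOLD = 0.8
    FRAUD_THRESHOLD = 0.7
    TRUSTED_THRESHOLD = 0.2

    def triage(request: Request) -> str:
        """Apply the three layers in order: bots, then fraud, then friction."""
        # Layer 1: strain out automated traffic first - it is often the bulk
        # of the volume, so removing it cuts the most noise.
        if request.automation_score >= BOT_THRESHOLD:
            return "block: bot"
        # Layer 2: among the remaining (human) traffic, filter manual fraud.
        if request.fraud_score >= FRAUD_THRESHOLD:
            return "block: fraud"
        # Layer 3: traffic that is confidently human AND legitimate can skip
        # extra friction such as an MFA challenge.
        if (request.automation_score <= TRUSTED_THRESHOLD
                and request.fraud_score <= TRUSTED_THRESHOLD):
            return "allow: frictionless"
        # Everything else is allowed, but keeps step-up friction.
        return "allow: with friction"

    if __name__ == "__main__":
        traffic = [
            Request("scraper-123", automation_score=0.95, fraud_score=0.10),
            Request("fraudster-7", automation_score=0.10, fraud_score=0.90),
            Request("customer-42", automation_score=0.05, fraud_score=0.05),
            Request("unknown-99", automation_score=0.50, fraud_score=0.40),
        ]
        for req in traffic:
            print(req.user_id, "->", triage(req))

Note the ordering: automation is strained out first because it typically dominates the volume, so each later layer operates on a smaller, cleaner slice of traffic - the rocks come out of the bucket before we sort what remains.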

In recent years, advances in the ability to reliably detect automation (bots) and fraud have opened up new possibilities for security and fraud teams. By understanding how to identify, isolate, and remove unwanted bot and fraud traffic, enterprises can remove the noise clouding the visibility they have into their online applications. This, in turn, allows those enterprises to focus on better protecting their online applications and on optimizing the user experience for their legitimate users.


By Joshua Goldfarb on Wed, 10 Aug 2022 13:46:08 +0000