Fighting deepfakes via payments


“Since many of these deepfake software services accept credit cards, payments providers are on the front lines of detecting these companies,” writes one corporate compliance officer.

Alan Primitivo is director of compliance and product operations at Burlingame, California-based G2 Risk Solutions.

Recent headlines have spotlighted how criminal actors use “deepfakes” to spread misinformation and carry out social engineering attacks.

The social disruption caused by these hyper-realistic forgeries has captured public attention and concern. The absence of comprehensive regulation, combined with the growing availability of deepfake software-as-a-service tools, makes the problem especially difficult to tackle.

The payments industry plays a critical role in the deepfake fight. The digital commerce landscape is inundated with online services that facilitate deepfake creation via artificial intelligence.

Since many of these deepfake software services accept credit cards, payments providers are on the front lines of detecting these companies and stopping malicious actors from profiting.

Deepfakes were once the domain of technical experts, requiring significant computing power and expertise. However, this landscape is rapidly changing.

"Deepfakes-as-a-service" websites now allow almost anyone with an internet connection, credit card, and malicious intent to create highly convincing deepfakes. These platforms offer easy-to-use interfaces, pre-trained AI models, and even access to databases of source material.

The existence of affordable and accessible deepfake creation services makes this serious problem easy to scale — and difficult to police in the absence of comprehensive laws.

Since many deepfake service providers accept credit card payments, the payments system is thrust into the problem. Credit card brands, along with the payments providers who enable merchants to accept credit card payments, often set standards that are considerably ahead of legislation.

Card brands are issuing steep assessments against banks and payments providers that offer payment services to deepfake creation service providers, even if they do so unintentionally.

For financial, social, and ethical reasons, payments providers must actively identify these merchants and quickly shut off payment services to cripple their operations. To identify and stop deepfake creation services, the payments industry requires advanced technology and human expertise.

While some malicious merchant sites do little to hide their intentions, most employ techniques to evade detection. These can include misrepresenting themselves during onboarding, omitting key information, or laundering transactions to disguise their true nature.

Using advanced technology, it is possible to process vast amounts of information across the entire global merchant landscape to flag suspicious websites and identify highly complex transaction patterns. However, technology alone is not yet capable of complete accuracy, nuanced evaluation, or investigation. Human judgment remains crucial.
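To make the idea of automated flagging concrete, here is a minimal, purely illustrative sketch of rule-based merchant screening. The field names, keyword list, and thresholds are all assumptions for illustration; real payments-risk platforms use far richer signals, and any flag like this would only route a merchant to human review, not trigger automatic action.

```python
# Hypothetical sketch of rule-based merchant screening. All field names,
# keywords, and thresholds are illustrative assumptions, not the rules of
# any real payments provider.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Transaction:
    merchant_id: str
    descriptor: str   # billing descriptor shown on the cardholder statement
    amount: float

# Keywords that might suggest a deepfake-creation service operating behind
# a generic merchant account (illustrative list).
SUSPECT_KEYWORDS = {"faceswap", "deepfake", "voiceclone"}

def flag_merchant(txns: list[Transaction]) -> list[str]:
    """Return human-readable reasons a merchant warrants manual review."""
    reasons = []
    # Rule 1: the billing descriptor mentions a suspect service category.
    if any(k in t.descriptor.lower() for t in txns for k in SUSPECT_KEYWORDS):
        reasons.append("descriptor matches suspect keyword")
    # Rule 2: many identical low-value charges can indicate a subscription
    # service being laundered through an unrelated storefront.
    amounts = Counter(round(t.amount, 2) for t in txns)
    top_amount, count = amounts.most_common(1)[0]
    if len(txns) >= 20 and count / len(txns) > 0.8 and top_amount < 50:
        reasons.append("uniform low-value recurring charges")
    return reasons
```

In practice, rules like these would be one input among many; the article's point is that such automated signals surface candidates, while analysts make the final call.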

To find these bad actors, payments risk analysts must often play detective. In many cases, bad actors are processing payments via unwitting third parties who have no idea that they are participating.

The deepfake threat is serious, but deploying technology alongside human ingenuity can equip payments providers with the tools necessary to create a more trustworthy digital commerce landscape.


By Alan Primitivo on June 12, 2024