As banking leaders and fintech builders know all too well, the rise of generative AI is making it increasingly difficult to know what is real and what is not in online and digital communications.
Fraud is a challenge that is keeping bankers, business owners, and investors up at night. In Visa’s 2025 Global eCommerce Payments & Fraud report, 98% of surveyed merchants said they experienced one or more types of fraud in the past 12 months. With generative AI, that fraud can take the form of a forged signature, a fabricated video, or a falsified financial record.
That wave of AI-enabled deception has exposed a growing vulnerability across industries that rely on digital trust. “It’s becoming a lot harder to answer: ‘Was this really you?’ Everything we believe and accept as real needs to rely on verifiable records,” said Pat Kinsel, CEO of Proof, which launched a new product today designed to verify digital content through cryptographic digital identity. It’s a tall order, but demand is growing as businesses handling sensitive transactions, such as mortgage closings, insurance claims, or fund transfers, look to prove ownership without adding friction to the identity check.
Speaking of fraud, today we’re also examining the rising risks of stablecoin-related financial crime. As stablecoins gain traction through regulatory acceptance and adoption by major financial institutions, their speed and value, combined with the absence (at least so far) of the technological safeguards built into incumbent payment rails, make them an attractive vector for malfeasance. The same qualities that make stablecoins efficient tools for modern finance also make them powerful instruments for fraud, posing new challenges for regulators and the institutions implementing them.
–The Editors