KYA is the new KYC: The payment identity problem AI commerce cannot afford to ignore
Mon, 4th May 2026
Traditional digital payments were engineered under the assumption that a human initiates the transaction.
Every layer of the modern checkout process reflects that assumption: KYC frameworks verify identity documents, fraud models analyse behavioural patterns, and chargeback processes ask what the cardholder intended. All of it is built around humans, and the introduction of AI agents into payments and eCommerce breaks every layer at once.
When AI acts on behalf of a human user, it cannot respond to step-up authentication the way a flesh-and-blood person would. There is no biometric data to produce, and an agent exhibits none of the behavioural signals that current fraud models can classify as trustworthy or untrustworthy.
Yet we increasingly trust AI to execute real financial transactions in real merchant systems. A late-2025 study found that almost half of surveyed shoppers already use AI most of the time when shopping because it makes the process more enjoyable, and 80% of them expect to rely on it even more in the future.
As long as this mismatch between legacy infrastructure and the new AI reality persists, it will keep causing problems. High decline rates, failed checkouts, and lost revenue are all happening today, not because consumers are fraudulent, but because the systems merchants rely on cannot identify what they are dealing with.
This is a major structural gap that needs to be addressed if AI in commerce is to scale efficiently.
When KYC Fails, KYA Steps Up. But What Should It Do?
KYC was designed to answer one question: Who is the human behind this transaction?
Today, that is no longer the relevant question. In AI-powered eCommerce, the more accurate questions to ask are:
- Who authorised the agent to act?
- What scope of authority was granted to it?
- Is that authority still valid?
- Who is responsible if something goes wrong?
This is where the new concept of Know Your Agent (KYA) is taking shape, attempting to answer those questions. Major companies like Visa and Google are already experimenting with their own approaches to authenticating agents and approving their transactions, and the same can be said of Mastercard, Stripe, and PayPal.
For a credible KYA framework to work in practice, it must address four key elements:
- Agent identity: Verifiable credentials that establish what the agent is and under whose account it operates;
- Delegated authority: An auditable record of what the human behind the AI authorised the agent to do;
- Revocability: The ability to withdraw or modify that authority in real time;
- Liability assignment: A defined framework establishing who bears responsibility when something about a transaction goes wrong and causes a dispute.
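The four elements above can be sketched as a single data model. This is a hedged illustration only, not an existing standard or anyone's published schema; every name here (AgentMandate, the field names, the "principal" liability default) is invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a KYA "mandate" record covering the four elements:
# identity, delegated authority, revocability, and liability assignment.
@dataclass
class AgentMandate:
    agent_id: str             # agent identity: a verifiable credential for the agent
    principal_id: str         # the human/account under which the agent operates
    allowed_merchants: set    # delegated authority: where the agent may transact
    spend_limit_cents: int    # delegated authority: per-transaction spending cap
    expires_at: datetime      # authority is time-bounded
    revoked: bool = False     # revocability: the principal can withdraw authority
    liable_party: str = "principal"  # liability assignment, per the agreement

    def is_valid(self, now: datetime) -> bool:
        """Authority counts only while it is unexpired and not revoked."""
        return not self.revoked and now < self.expires_at
```

The point of the sketch is that revocation and expiry are first-class fields, so "is that authority still valid?" becomes a cheap runtime check rather than a dispute after the fact.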
Without all four, KYA is not a real solution; it's just a shiny new label on a problem that ultimately goes unsolved.
The Liability Gap Is the Biggest Unresolved Issue
When an AI agent selects the wrong product, exceeds a budget, or initiates an unintended transaction, the result is grounds for a chargeback. The problem is that the existing chargeback framework, once again, has no clear answer for AI: dispute processes were built around the intent of human purchasers.
Agentic commerce adds a new question to this equation: was the agent acting within its delegated scope or not? Current scheme rules, chargeback reason codes, and merchant agreements are not equipped to handle that distinction.
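The scope question can be made concrete with a small check. A minimal sketch, assuming a mandate record with merchant and amount limits; the function name and field names are hypothetical, since no scheme rule actually defines such a test today.

```python
def within_delegated_scope(mandate: dict, merchant: str, amount_cents: int) -> bool:
    """Hypothetical test: did this transaction stay inside the authority
    the principal delegated to the agent?"""
    return (
        not mandate["revoked"]
        and merchant in mandate["allowed_merchants"]
        and amount_cents <= mandate["spend_limit_cents"]
    )

# An agent that exceeds its budget falls outside scope -- exactly the
# distinction current chargeback reason codes cannot express.
mandate = {"revoked": False, "allowed_merchants": {"shop.example"}, "spend_limit_cents": 5000}
print(within_delegated_scope(mandate, "shop.example", 4999))  # True
print(within_delegated_scope(mandate, "shop.example", 9000))  # False
```

A dispute framework that recorded the mandate at authorisation time could answer the in-scope question mechanically, instead of litigating human intent that was never there.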
Until liability within this new landscape is legally defined, merchants are left facing asymmetric risk. They bear the operational costs of failed agent transactions and the chargeback exposure from disputed ones, with no clear way to defend their position.
Europe as the Real Test Case
Europe is likely to become the first region where answers to all these issues are properly tested in practice.
PSD2's Strong Customer Authentication (SCA) requirements create a very specific technical conflict with agentic transactions. SCA was designed to confirm the presence of a human user, so AI agents fail its criteria by design. This creates exactly the kind of friction needed to hasten the development of new models and approaches. The practical question regulators now need to answer is whether delegated agent authority can substitute for SCA, and under what conditions.
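What "delegated authority substituting for SCA" might look like is still undefined. One hedged sketch: the principal signs a mandate once, during a normal SCA session, and the payment provider later verifies that signature instead of issuing a step-up challenge the agent could never complete. Everything below, from the HMAC scheme to the field names, is an illustrative assumption, not any regulator's or scheme's design.

```python
import hashlib
import hmac
import json

def sign_mandate(mandate: dict, principal_key: bytes) -> str:
    """The principal signs the mandate once, during a normal SCA session."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(principal_key, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str, principal_key: bytes) -> bool:
    """The PSP verifies the signature at transaction time, in place of a
    step-up challenge that an agent cannot answer."""
    return hmac.compare_digest(sign_mandate(mandate, principal_key), signature)

key = b"demo-principal-key"  # stand-in for a key bound to the user's SCA credentials
mandate = {"agent_id": "agent-1", "spend_limit_cents": 5000}
sig = sign_mandate(mandate, key)
print(verify_mandate(mandate, sig, key))  # True
```

Whether such a signature would legally satisfy the "inherence/possession/knowledge" factors of SCA is precisely the open question the article raises; the sketch only shows that the mechanics are straightforward once regulators define the conditions.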
At the same time, PSD3 and the EU Payment Services Regulation are on the horizon, and the payments industry needs to engage with those frameworks now. Otherwise, there's a real risk that the rules will be written by people who do not fully understand the infrastructure they are attempting to regulate.
What This Means Right Now
For merchants and payment service providers, the current state of things already has immediate, visible consequences.
Decline rates on agent-initiated transactions are already elevated, yet many businesses are not tracking them separately. That is a very real revenue leak whose full impact is hard to assess at present.
KYA standards are still being formed, but the window to help shape them will not remain open forever. The transition from KYC to KYA is already underway, but right now, most of the market is managing it poorly.
For fintech companies, this presents an opportunity. Agent-driven transactions are clearly here to stay, and once the ground stabilises, adoption will accelerate. The players that fail to adjust in time will be left behind; those that do will be well positioned to lead.