Why AI Agents Can Produce But Can’t Transact — The Missing Layer Holding Back the Agent Economy

TL;DR

AI agents have gotten remarkably good at doing work — writing code, drafting documents, researching topics, and executing multi-step tasks. But there’s a fundamental wall they keep hitting: they can’t actually buy or sell anything. The infrastructure that underlies modern commerce — payment APIs, e-signature platforms, legal contracts — was built for humans, not autonomous software. Until that changes, AI agents will remain capable producers stuck in an economy they can’t participate in.


What the Sources Say

The conversation bubbling up in AI communities right now isn’t about whether AI can do work. That argument is largely settled. The sharper question — raised in a discussion thread that’s been gaining traction — is why AI agents, for all their capability, can’t close a transaction without a human in the loop.

The answer, it turns out, isn’t primarily technical. It’s infrastructural and legal.

The Production Problem Is (Mostly) Solved

AI agents today can draft contracts, generate code, produce marketing copy, analyze datasets, and orchestrate complex workflows. That’s not nothing — it’s genuinely remarkable. But “producing” and “transacting” are two different things. An agent can write a contract; it cannot sign one. An agent can recommend a purchase; it cannot actually pay for anything.

This distinction matters more than it might seem at first glance.

The Infrastructure Gap

Consider what actually happens when a business pays for something online. It hits a payment API — something like Stripe. Stripe’s entire model assumes a human principal behind the transaction: a verified identity, a bank account tied to a legal entity, liability that can be traced back to a person or organization. When an AI agent tries to initiate that same transaction autonomously, it runs into an immediate problem: whose identity is backing this?
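The shape of the problem can be sketched in a few lines. This is a toy model, not Stripe’s actual API — all names (`Principal`, `REGISTERED_KEYS`, `charge`) are illustrative — but it captures the core assumption: every API key maps back to a verified legal entity, so there is always someone to attach liability to.

```python
from dataclasses import dataclass

# Toy model (NOT Stripe's real API): every API key is bound to a
# verified legal entity, so every charge has a human or organizational
# principal behind it.

@dataclass
class Principal:
    name: str
    kyc_verified: bool  # passed know-your-customer checks

# Keys only exist because a verified entity registered them.
REGISTERED_KEYS = {
    "sk_live_acme": Principal(name="Acme Corp", kyc_verified=True),
}

def charge(api_key: str, amount_cents: int) -> str:
    principal = REGISTERED_KEYS.get(api_key)
    if principal is None or not principal.kyc_verified:
        # An autonomous agent with no legal entity behind it fails here:
        # there is no identity to attach liability to.
        raise PermissionError("no verified principal behind this key")
    return f"charged {amount_cents} cents on behalf of {principal.name}"

print(charge("sk_live_acme", 500))   # succeeds: a human principal exists
try:
    charge("sk_agent_007", 500)      # an agent-minted key has no principal
except PermissionError as e:
    print("rejected:", e)
```

The rejection branch is the wall the article describes: the agent’s capability is irrelevant, because the gate checks identity, not competence.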

The same logic applies to legal infrastructure. DocuSign, the dominant e-signature platform, was built on the premise that a human is signing. Digital signatures carry legal weight precisely because they’re tied to a person’s verified identity. An AI agent signing a contract autonomously doesn’t just create a technical problem — it creates a legal vacuum. Who’s liable if the contract is breached? The agent? The developer who built it? The company that deployed it?

These aren’t edge cases or theoretical concerns. They’re the immediate, practical barriers that show up the moment anyone tries to deploy an AI agent that needs to do something commercially meaningful in the world.

What Solutions Are Being Explored

The most concrete answer emerging from these discussions points toward blockchain-based infrastructure — specifically platforms like NEAR Protocol, which is being positioned as a payment layer for agent-to-agent commerce and decentralized applications.

The logic here is interesting. Blockchain architectures can, in principle, allow software agents to hold cryptographic identities, own wallets, and execute transactions without needing a human to be the legal principal behind every action. A smart contract on a blockchain doesn’t care whether the counterparty is a human or an autonomous agent — it just checks that the conditions are met and executes accordingly.
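A minimal sketch makes the contrast with the payment-API model concrete. This is not any real chain’s API — `EscrowContract` and the wallet names are invented for illustration — but it shows the key property: settlement depends only on whether conditions hold, never on who the counterparty is.

```python
# Toy escrow illustrating condition-based settlement: the contract
# cannot tell (and does not care) whether a depositor is a human
# wallet or an autonomous agent's wallet.

class EscrowContract:
    def __init__(self, seller: str, price: int):
        self.seller = seller
        self.price = price
        self.deposits: dict[str, int] = {}
        self.delivered = False

    def deposit(self, payer: str, amount: int) -> None:
        # payer is just an address; human or agent is indistinguishable
        self.deposits[payer] = self.deposits.get(payer, 0) + amount

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self, payer: str) -> str:
        # purely condition-based release: funds present AND goods delivered
        if self.delivered and self.deposits.get(payer, 0) >= self.price:
            self.deposits[payer] -= self.price
            return f"released {self.price} to {self.seller}"
        raise RuntimeError("conditions not met; funds stay locked")

contract = EscrowContract(seller="supplier-wallet", price=100)
contract.deposit("agent-wallet-0x1", 100)   # an autonomous agent pays in
contract.confirm_delivery()
print(contract.settle("agent-wallet-0x1"))
```

Nothing in `settle` asks for a verified identity or a legal principal — which is exactly why this architecture keeps coming up in agent-economy discussions, and also why the liability questions from the previous section remain open.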

This is why the “agent economy” conversation and the blockchain conversation keep intersecting. It’s not necessarily about cryptocurrency speculation — it’s about whether we can build a transactional layer that treats agents as first-class participants rather than imposters pretending to be human.

The Security Complication

Even setting aside the legal and identity problems, there’s a practical security concern that makes banks and payment processors especially nervous about autonomous agents touching financial transactions. Research from Veracode — a security platform that analyzes code for vulnerabilities — found that AI-generated code contains 2.74 times more security vulnerabilities than human-written code.

That’s a striking number. And it creates an uncomfortable tension: the same AI systems being proposed as autonomous economic actors are also producing code that’s significantly more vulnerable to exploitation. If an AI agent is both writing the code and executing transactions, and that code has a higher vulnerability rate, the potential attack surface expands considerably.

This isn’t an argument that AI agents will never transact — it’s an argument that the security infrastructure needs to catch up alongside the transactional infrastructure.

What the Productivity Research Actually Shows

METR, an AI evaluation organization that studies the real-world productivity impact of AI tools on developers, has been doing the hard empirical work of measuring what AI actually delivers versus what it promises. Their research matters here because it grounds the conversation in evidence rather than hype.

The agent economy narrative often assumes that because AI agents can produce, they will be widely deployed to produce. But if the productivity gains from AI development tools are smaller or more conditional than assumed — and evaluation organizations like METR exist precisely because those gains are contested — then the urgency of solving the transaction problem is also more measured than the hype cycle suggests.


Pricing & Alternatives

The source material doesn’t provide specific pricing for most of the platforms mentioned, but here’s a landscape view of the relevant players in the “agents that can transact” space:

| Platform / Solution | Role in the Agent Economy | Pricing |
| --- | --- | --- |
| NEAR Protocol | Blockchain payment layer for agent-to-agent commerce | Not disclosed |
| Stripe | Traditional payment API (requires human principal) | Per-transaction fees (standard API pricing) |
| DocuSign | E-signature platform (requires human identity) | Subscription tiers (human-oriented) |
| Veracode | Security analysis for AI-generated code | Not disclosed |
| METR | AI productivity evaluation research | Research organization (not a commercial product) |

The current situation is that no mainstream payment or legal infrastructure has been purpose-built for autonomous agents. The closest thing to a native solution is blockchain-based identity and payment systems — but those come with their own adoption challenges, including regulatory uncertainty and the practical complexity of integrating crypto infrastructure into mainstream business workflows.


The Bottom Line: Who Should Care?

If you’re building AI agents for internal use — automating reports, summarizing data, managing workflows — this problem probably doesn’t affect you yet. Your agents are producing, not transacting, and the current infrastructure handles that fine.

If you’re building AI agents that need to purchase resources, pay for APIs, or execute commercial transactions autonomously, this is your most important unsolved problem right now. You’ll either need to build a human-in-the-loop approval layer (which defeats part of the purpose of autonomous agents) or start watching blockchain-based identity solutions like NEAR Protocol closely.
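The human-in-the-loop stopgap mentioned above can be sketched simply. This is one possible design, with all names (`ApprovalGate`, `ProposedTransaction`, the threshold parameter) invented for illustration: the agent proposes transactions freely, but nothing above a configured limit executes until a human signs off.

```python
from dataclasses import dataclass

@dataclass
class ProposedTransaction:
    description: str
    amount_cents: int
    approved: bool = False

class ApprovalGate:
    """Agent proposes; a human principal approves before execution."""

    def __init__(self, auto_approve_under_cents: int = 0):
        # optionally let trivial amounts through without review
        self.auto_limit = auto_approve_under_cents
        self.queue: list[ProposedTransaction] = []

    def propose(self, description: str, amount_cents: int) -> ProposedTransaction:
        tx = ProposedTransaction(description, amount_cents)
        if amount_cents < self.auto_limit:
            tx.approved = True          # below threshold: no human needed
        else:
            self.queue.append(tx)       # parked until a human signs off
        return tx

    def human_approve(self, tx: ProposedTransaction) -> None:
        tx.approved = True
        self.queue.remove(tx)

    def execute(self, tx: ProposedTransaction) -> str:
        if not tx.approved:
            raise PermissionError("awaiting human approval")
        return f"executed: {tx.description} ({tx.amount_cents} cents)"

gate = ApprovalGate(auto_approve_under_cents=500)
small = gate.propose("API credits top-up", 300)
print(gate.execute(small))              # auto-approved, runs immediately

big = gate.propose("annual SaaS contract", 120_000)
gate.human_approve(big)                 # the human-in-the-loop step
print(gate.execute(big))
```

The threshold parameter is where the trade-off lives: set it low and the human becomes a bottleneck that erodes the point of autonomy; set it high and you are back to trusting the agent with real money.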

If you’re a developer working with AI-generated code, the Veracode research is a direct concern. A 2.74x increase in vulnerabilities isn’t a rounding error — it’s a significant risk multiplier that demands rigorous security review before any AI-written code touches financial systems.

If you’re a product manager or founder thinking about “agentic” products, the METR research angle is important context. The productivity gains from AI agents are real but contested and context-dependent. Building a business model that assumes maximum AI productivity without validation is a risk.

If you’re in fintech, legal tech, or identity infrastructure, this is your opportunity. The transactional gap for AI agents is a genuine, unsolved infrastructure problem. Whoever figures out how to give AI agents verifiable identities, auditable transaction histories, and legal accountability frameworks without requiring a human principal at every step will be building critical infrastructure for the next decade.

The deeper issue is this: the internet was built assuming human principals behind every action. Authentication systems, payment networks, legal contracts, liability frameworks — all of it was designed for humans doing things. AI agents are a genuinely new category of actor, and the infrastructure hasn’t caught up.

It’s not a technical limitation of the agents themselves. It’s that the world they’re operating in wasn’t built for them.

That gap is closing. Blockchain-native agent identities, programmable smart contracts, and evolving legal frameworks around AI liability are all moving in the same direction. But the honest answer to “why can AI agents produce but not transact?” is: because we haven’t finished building the world they need to transact in.

