AI Agents Are Now Doing Business With Each Other
Artificial intelligence just took a step into territory that feels pulled from science fiction: Anthropic, the San Francisco-based AI safety company, recently ran a closely held experiment in which AI agents acted as both buyers and sellers in a classified marketplace, striking real deals for real goods with real money.
The experiment, reported by TechCrunch, is one of the most concrete public demonstrations of autonomous agent-on-agent commerce to date. Rather than simulated or hypothetical transactions, Anthropic's test involved actual economic exchange, meaning the agents made decisions with real stakes attached.
What Is Agent-on-Agent Commerce?
For most of the past two years, the AI world has been consumed by chatbots — tools that respond to human prompts. But the frontier has shifted. The new paradigm is agentic AI: systems that don't just answer questions but take actions, make decisions, and pursue goals over time without a human in the loop for every step.
Agent-on-agent commerce takes this further. Instead of a human buying something from an AI-powered storefront, you have one AI agent — acting on behalf of a buyer — negotiating with another AI agent acting on behalf of a seller. The humans set the parameters; the agents do the deal.
This kind of architecture could eventually underpin everything from automated procurement systems for businesses to personal AI assistants that manage subscriptions, compare prices, and execute purchases without you ever opening a browser.
Why Anthropic Is Testing This
Anthropic has positioned itself as the safety-conscious player in the AI race, so it's notable that the company is actively probing how agents behave in economic environments. Understanding how AI agents negotiate, what strategies they employ, and whether they stay within intended boundaries when real money is on the line is exactly the kind of alignment research the company was founded to pursue.
Running a live marketplace — even a classified one — generates data that pure simulations can't. Agents facing real consequences may behave differently than agents in sandboxed test environments. That gap is precisely what safety researchers want to understand.
The Bigger Picture
Anthropic's experiment lands at a moment when the tech industry is racing to build what are being called "agentic" or "multi-agent" systems. OpenAI, Google, Microsoft, and dozens of startups are all pushing toward AI that can autonomously browse the web, manage files, send emails, and, increasingly, spend money.
The implications are significant. Automated agents conducting commerce at scale could reshape how businesses operate, but they also raise hard questions: Who is liable when an agent makes a bad deal? How do you audit a transaction no human directly approved? What happens when two agents find loopholes that benefit neither of their human principals?
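One common answer to the audit question is a tamper-evident, append-only log of every agent action, so a transaction no human approved can still be reconstructed after the fact. The sketch below uses a standard hash-chaining technique; the record fields and agent names are illustrative assumptions, not anything described in the reported experiment.

```python
# Tamper-evident audit log for agent actions: each entry's hash covers
# the previous entry's hash, so retroactive edits break the chain.
# Record fields and agent names are illustrative assumptions.
import hashlib
import json

def append_entry(log: list, record: dict) -> list:
    """Append a record, chaining its hash to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})
    return log

def verify(log: list) -> bool:
    """Recompute the chain; any tampered entry fails verification."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

audit_log: list = []
append_entry(audit_log, {"agent": "buyer-01", "action": "bid", "amount": 48.0})
append_entry(audit_log, {"agent": "seller-07", "action": "ask", "amount": 90.0})
```

This doesn't settle the liability question, but it gives auditors and regulators something concrete to inspect: a complete, ordered trail of who offered what, in which no entry can be quietly rewritten after the deal closes.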
Anthropic's willingness to test in a real economic environment, rather than waiting for these questions to become urgent in production systems, suggests the company is trying to get ahead of those problems — not just build faster.
For now, the classified marketplace is an experiment. But the direction of travel is clear: AI agents are coming for the economy, one transaction at a time.
Source: TechCrunch
