Low-Ambition Companies Will Suffer from AI

There are two ideas in the AI Zeitgeist that you come across almost daily. The first one is this:

"If you want to be a competitive worker, you need to know how to work with AI. Because if you don’t, you’ll be outpaced by the workers who do."

The second idea, sometimes even part of the same take, goes like this:

"Companies that adopt AI are going to do layoffs because AI agents can do the work of humans that are slower and more expensive."

It baffles me that these two ideas can somehow coexist when they are very obviously at odds with each other. At the very least, they misrepresent how AI agents work and the role that humans play in managing them.

The Overstated Autonomy of AI

My mom lives in Southern California, where she can take a Waymo to get around. She absolutely loves it. She’s relieved to not have to talk to an Uber driver, she likes the pace and consistency in how a Waymo drives, and she loves the convenience of doing it all from her phone.

But she still has to tell the Waymo where she wants to go. It doesn’t decide for her. Nor does it schedule the trips for her. Even if, in the near future, it started to recognize her habits, noting how she wants to go to the store at 8am on Wednesdays, it would still be deriving its purpose from my mom’s intentions. And this is all for a pretty narrowly defined task: go from point A to point B. AI today doesn’t self-generate intention.

A manager who decides to replace employees with AI agents might think, “I’ll just give these AI agents my intentions and manage the agents instead of people.” Even assuming a fleet of agents can actually do extensive work autonomously today (they can’t), there’s still a huge constraint: the manager’s intentions.

Intentions need detail to lead to good decisions. They need elaboration. You can’t just tell an agent, “I want to make a lot of money,” and expect it to fill in the blanks. There are too many blanks. If such a thing were possible, a manager could just tell their employees the same thing. “Go make me money.” That’s hardly management at all, if you think about it.

Anthropic, the makers of Claude, illustrated all of this perfectly in a video they released just this morning. Meet Claudius, the AI agent who runs a vending machine business.

In this video, Anthropic is transparent about some of the pitfalls they encountered trying to get an autonomous AI agent to run a simple, profitable business. Claudius was easily manipulated by customers, got confused about what was real and what wasn’t, and lost a lot of money. In the end, it only worked when they gave Claudius a boss (Seymour Cash, another AI agent). Of course, Seymour had his own bosses: the humans designing the experiment.

This is all part of work being done by Andon Labs, who turned this experiment into a benchmark called VendingBench (recently updated to Version 2). The purpose of the benchmark is to test how well agents can sustain a set of complex tasks over a long time horizon. Even brand-new frontier models, while capable of making a profit, can still end their runs prematurely.

The reality today is that AI agents are not truly autonomous. In my opinion, they won’t be for a long time to come. (There are good arguments they should never exist.) To succeed, they need to know how to choose what to work on, especially for anything longer than just a few hours at a time. For now, they don’t have a way to meaningfully make that choice absent human direction.

(Perhaps in the near or distant future, AI agents will choose entirely on their own what problems to solve or what products to produce. In the dystopian versions of this, we have no reason to think that they’ll want to produce anything that’s actually helpful to human beings.)

Elaborate, detailed intention is what matters to a successful AI agent; otherwise, it’s a Waymo with no destination. This is why prompt engineering is a thing. And if you want a team of agents, you need to elaborate intentions for each of them, repeatedly. No one human manager can do this at scale, just as they can’t effectively manage a team beyond a certain number of employees. The manager is the constraint.
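
To make that concrete, here’s a minimal sketch of what elaborating intention can look like in practice, using the Anthropic Python SDK. The model name, the business, and the constraints are just placeholders for illustration, not anything from the Claudius experiment.

```python
# A rough sketch: the same request with a vague intention versus an
# elaborated one, using the Anthropic Python SDK (pip install anthropic).
# The model name, product, numbers, and constraints are placeholders.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Vague intention: too many blanks for the agent to fill in.
vague_system = "Go make me money."

# Elaborated intention: a destination, constraints, and success criteria.
elaborated_system = (
    "You help manage a small online store selling refurbished keyboards. "
    "Goal: raise monthly profit 10% within one quarter. "
    "Constraints: keep ad spend under $500/month, never price below cost, "
    "and escalate any supplier change to a human before acting. "
    "Each week, report the actions you took and their measured results."
)

for system_prompt in (vague_system, elaborated_system):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use any current model
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": "Propose this week's three highest-impact actions."}],
    )
    print(response.content[0].text)
```

The second prompt isn’t magic. It’s just a manager’s intention written down with enough detail that the agent has somewhere to go.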

Ambition

Recognizing the constraint, I can’t think of any reason for a manager to replace employees with agents, unless the manager is low-ambition, thinking, "My old team could do X, and now I can have agents that do X." Why in the world, if you can use faster and cheaper AI, would you stop at X?

Instead, keeping employees and training them is the only reasonable thing to do, because it means expanding the constrained resource of intention. Employees who share the team’s vision can make decisions about intention, elaborating it until it’s granular and detailed enough for agents to go do the work.

Software development is where the biggest employment impacts are happening now. And companies are already starting to see the mistake of replacing developers with agents. It turns out junior developers are worth more with AI, not less. I’ve done a lot of coding with AI agents since May. When the agent screws up or produces something buggy, the likeliest cause is my failure to give the coding agent a clear enough set of intentions. I’ve had to learn extensively about how different technologies work so I can get the agent to write better code. Laying off developers, instead of giving them AI agents to direct, is low-ambition and shortsighted.

There are definitely workers today performing commodity tasks, things like data entry that AI can do easily and quickly. But those workers are squandered if they’re just laid off. What goes out the door with them is the ability to manage AI agents with intention. Organizations will need more of that, not less.

Even just in the short run, I’m confident that the market will reward high-ambition companies that are hiring and training people to direct AI agents. Those companies will produce far more, and faster. And they will leave the low-ambition, fire-all-the-humans companies in the dust.