When AI gets a manager, you know the game has changed

Summary
The most dangerous seat at the table is the one where you’re still present, but the AI does the talking.

When you need to hire a person just to manage your artificial intelligence (AI), something fundamental has changed.
Not in the future. Not in theory. Right now.
I recently met a startup founder who told me—half-laughing, half-serious—that he was hiring an “agent manager.”
“Prompts are all over the place,” he said. “Tone is off. The output needs babysitting. I need someone to train and track our AI agents so the team can just get on with their work.”
We both laughed. But later, I realised: this isn’t absurd. It’s inevitable.
For the last year, we’ve debated whether AI will replace human workers. But we’re missing the real shift unfolding right under our noses: humans are now managing AI—not just building it, not just using it, but structuring teams around it, delegating to it, and in many cases, becoming dependent on it.
Microsoft’s 2025 Work Trend Index quantifies the shift: 82% of global leaders say AI agents will be deeply embedded in their organisations within the next 12–18 months. One in four companies has already deployed them at scale. But what matters more than adoption is how unevenly that change is playing out.
At one of India’s top five global capability centres (GCCs), a pilot team recently tested a “human-to-agent ratio”—assigning one agent per employee. The results were revealing. Not just productivity gains, but deep friction. Some employees hesitated to delegate. Others over-delegated. Most weren’t confident about what the agent was doing behind the scenes. The tech wasn’t the problem. Trust was.
This is the invisible divide emerging across workplaces—not between humans and machines, but between those who know how to collaborate with AI and those still figuring it out.
Microsoft calls the leading companies “Frontier Firms.” I see something slightly different: agent-literate organisations. Places where managing digital teammates is just part of the job. Where people are learning that prompting isn’t about clever phrasing—it’s about structure, tone, and cultural nuance. Where performance reviews might soon include a line item for “AI fluency.”
It sounds ridiculous. Until you realise it’s already happening.
One founder in Pune runs an ops team of five, supported by seven agents. The agents draft emails, generate reports, and chase vendors. The humans supervise, escalate, and course-correct. “We’re not hiring another ops exec anytime soon,” he said. “We’re hiring someone to train the agents.”
It sounds efficient. But here’s the part we don’t talk about: not everyone on that team has the same voice in how those agents behave. The person who knows how to talk to the agents sets the tone. The rest follow. Or worse, stay silent.
That’s what this moment is really about.
Not “Will AI take my job?”
But: Who gets to shape how AI shows up at work?
Who trains it, manages it, critiques it—and who is left trying to work around it?
We’re not heading towards a divide between white-collar and blue-collar anymore. That line is already fading. The real split now is between the agent-native and the agent-blind. The people who know how to talk to AI, shape it, and make it work for them—and the ones who don’t, or can’t. It’s not just a skill gap. It’s a confidence gap. A permission gap. And it’s growing fast.
In every room I’ve been in lately, it’s the same thing: a few people steering the conversation with AI, and the rest quietly adapting around it.
Fluency is becoming power. Silence is becoming costly.
Microsoft’s data shows the power gap widening: leaders are 25–30% more likely than employees to use AI regularly, trust it with critical work, and see it as a career accelerator. That’s not a skills gap. It’s a shift in agency, a quiet realignment of power and confidence.
And like every transition in work, it’s happening unevenly, with the loudest people learning fastest, and the rest adapting in the margins.
This is why I believe the most urgent challenge of this moment isn’t just upskilling. It’s building AI fluency with psychological safety and creating cultures where people can experiment, fumble, question the output, and push back when the machine gets it wrong.
Because this isn’t about becoming prompt engineers; it’s about becoming context designers.
It’s about knowing when to hand over—and when to hold on. When to trust, and when to intervene. Knowing that yes, AI can finish your sentence—but it’s still your voice on the line.
The founder hiring an agent manager? He may sound like a punchline. But he’s probably ahead of the curve. Because within five years, every role will have an AI layer—and not every team will have the space, safety, or support to adapt in time.
Access is not enough. Adoption is not enough.
What we need is fluency. And the cultural permission to build it.
Because the real risk isn’t that AI takes your job. It’s that you’re still in the room, but no longer part of the conversation.
And by the time you notice, the decisions might already be made—without your voice, and without your kind of thinking.