
It was a tense moment in Episode 4 of Pluribus, the Apple TV series about a world linked by a single intelligence.
A character named Carol Sturka, surrounded by a seemingly benevolent collective that speaks with one voice, says plainly: “I have agency.”
Her declaration lands starkly, signaling something more than a moment in streaming fiction. It presses the central question now looming over the rush toward AI autonomy: What happens when a system that can act starts believing it knows better than the individual?
In Pluribus, the “Joining” is depicted as a collective intelligence that sees farther than any single individual and, of course, assumes it knows better. That’s the point of tension.
The show raises a quiet warning about systems that act with accumulated knowledge but without personal context. Today’s AI is not yet a hive mind. But as it gains the ability to act — not just respond — it moves in that direction: toward substituting aggregated optimization for individual discretion. That’s the inflection point we’re now approaching.
AI agency as a differentiator
This shift matters because once AI can initiate action, the central question becomes where, when, and under what conditions those actions are allowed to occur. And at this stage of development, that is shaped less by technical limits than by how aggressively each major tech company chooses to pursue agentic capability.
The competitive pressure among them to demonstrate practical, revenue-driving autonomy — AI that acts, not merely advises — is intensifying. Which means the activation of agency will in practice be governed by each company’s incentives and constraints, not by whether the technology is mature enough or society ready to absorb it.
Over the past several months, I’ve been studying how major platforms are approaching this agentic threshold. What stands out is not a difference in technical capability — the underlying language models are rapidly converging — but a difference in business model, regulatory exposure and leadership philosophy. From that vantage point, the major platforms can be ranked from most restrained to least, as laid out below.
What’s changed in recent quarters is that each major platform is now engineering toward agency as a competitive differentiator — not as a safety threshold. That’s where incentives diverge more sharply than capabilities.
Big Tech’s agency models
We need to slow down and look at what’s actually driving each company’s approach to agentic AI. The core technology may be similar, but the conditions under which they’re willing to let AI act are very different. That’s why Anthropic and Microsoft stay cautious, while Meta and OpenAI push outward — each for their own reasons.
•Anthropic sits at the conservative end not because it lags technically, but because it was built expressly to avoid deploying power faster than it can be understood. Founded by AI alignment researchers who left OpenAI as commercialization accelerated, Anthropic created Claude with a kind of constitutional restraint engineered into the model. Its operating thesis prioritizes control and refusal over speed.
That ethos traces back to the open-source, “software should do no harm” lineage of Anthropic’s founders. For many enterprise and policy users, that conservative posture is an advantage, not a weakness.
•Microsoft follows closely, not out of ideology but out of compliance. Its AI tools are developed to serve large organizations under regulatory scrutiny. In that environment, every automated action must be traceable and reversible. Microsoft can and likely will advance agentic capabilities, but those capabilities will move within established enterprise boundaries, where legal and operational review slow acceleration by design.
•Google sits nearer the center. After early AI announcements triggered regulatory concern, Google adopted a more deliberate release cadence. New capabilities undergo extensive legal and reputational vetting. Its deployment strategy is shaped by caution, not lack of capacity.
•OpenAI has taken the most direct technical path toward agency. Its latest models can integrate with operating systems, execute tasks, and in time work across full workflows. It is pushing outward — into productivity and development spaces.
•Meta, however, represents something more consequential. Its systems are not designed as consulting tools but for continuous social engagement. If and when Meta enables agentic AI inside platforms like Facebook, Instagram or WhatsApp, it will not arrive as software you deploy.
It will appear as a new participant in environments that people inhabit — environments calibrated to capture attention and influence response. The governance structure over those systems is not grounded in enterprise compliance or model restraint. It is driven by engagement performance and, ultimately, shareholder expectations.
The distinction is not abstract. It is observable.
Why Meta is most worrisome
Over the past decade, Meta has repeatedly introduced algorithmic changes that significantly altered public discourse and behavior, only scaling back after external pressure. Responsibility has historically been exercised retroactively.
Consider Mark Zuckerberg’s approach in another domain: his land acquisitions on the island of Kauai. Reports show he has pursued property expansion through legal tactics that pressure local residents to relinquish land, including the use of “quiet title” lawsuits. While legally permissible, these actions have resulted in the displacement of long-standing island residents. It is a clear example of influence exerted through resources and access, not consensus or stewardship. There was no structural incentive to proceed differently.
That same operating logic — expansion first, adjustment when required — governs Meta’s product environment. Applied to agentic AI, the question becomes whether we are prepared to allow systems that can act autonomously to operate inside billions of daily interactions under that paradigm.
The risk is not that AI “takes over” in a cinematic sense. It is that capability designed to increase engagement begins to make small decisions on our behalf in ways that gradually shape behavior, without moments of explicit consent. Agentic AI will first appear through sensible delegation: answer the message, edit the post, adjust the promotion bid. Then it may sequence tasks. Eventually, it may determine which tasks to initiate. Each step will surface under the banner of convenience.
But inside social or consumer environments, convenience is not neutral. It is calibrated. And if shareholder expectations pressure accelerated deployment — especially in the first company to demonstrate commercial lift from agentic interactions — we may experience the consequences before we have the vocabulary to articulate what’s been ceded.
Drawing boundaries
Which brings us back to that scene in Pluribus. Carol Sturka raises her agency like a shield. She does not argue for the collective to change its nature. She simply draws a boundary: “I have agency.” In practical terms, that may be the most important discipline we need to codify now — not only in AI development, but in policy and enterprise adoption.
Before granting action rights to any system, organizations should first define their refusal posture. What decisions must remain under explicit human control? What thresholds of influence merit transparent audit? Where does efficiency stop and agency begin?
In cybersecurity, we already operate under an analogous principle: allow automation for detection and minor containment, but reserve material disruptions for human authorization. Agentic AI invites us to expand that thinking beyond security. It asks us to consider the broader consequences of delegating decision layers to systems whose objectives may, over time, align more closely with engagement metrics or revenue goals than with human preference or civic wellbeing.
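To make that cybersecurity analogy concrete, here is a minimal, hypothetical sketch of what a “refusal posture” could look like if an organization encoded it as explicit policy: automation is permitted for detection and minor, reversible containment, while material disruptions are refused unless a human explicitly authorizes them. The action names, risk tiers and function are illustrative assumptions for this column, not any vendor’s actual API.

```python
# Hypothetical sketch of a "refusal posture" encoded as policy.
# Action names and risk tiers are illustrative, not a real product API.

from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    DETECTION = 1    # observe and report only
    CONTAINMENT = 2  # minor, reversible containment steps
    DISRUPTION = 3   # material, hard-to-reverse actions

# Tiers the agent may act on without a human in the loop.
AUTO_ALLOWED = {RiskTier.DETECTION, RiskTier.CONTAINMENT}

@dataclass
class ProposedAction:
    name: str
    tier: RiskTier

def authorize(action: ProposedAction, human_approved: bool = False) -> bool:
    """Return True only if the action may proceed under the refusal posture."""
    if action.tier in AUTO_ALLOWED:
        return True
    # Material disruptions are reserved for explicit human authorization.
    return human_approved

# Flagging a suspicious login proceeds automatically; isolating a production
# system is refused until a human signs off.
print(authorize(ProposedAction("flag_suspicious_login", RiskTier.DETECTION)))  # True
print(authorize(ProposedAction("isolate_prod_server", RiskTier.DISRUPTION)))   # False
print(authorize(ProposedAction("isolate_prod_server", RiskTier.DISRUPTION),
                human_approved=True))                                          # True
```

The point of such a sketch is not the code itself but the discipline it represents: the boundary between what a system may do on its own and what it must hand back to a person is written down before the system is switched on, not negotiated after the fact.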
The technological pathway to agentic AI is now largely understood. What remains unresolved is the governance pathway. And the outcome may depend less on breakthroughs than on which companies are most willing — or most pressured — to activate agency in pursuit of growth.
The next phase of AI won’t be defined by what the machines can do, but by what the balance sheet requires them to do.
When that moment comes, someone will have to say: we reserve this decision. We decline the automatic response. In effect: I have agency.
That may be the most strategic refusal any organization makes in 2026.
I’ll keep watch, and keep reporting.

Acohido
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used ChatGPT-4o to accelerate and refine research, assist in distilling complex observations, and serve as a tightly controlled drafting instrument, applied iteratively under my direction. The analysis, conclusions, and the final wordsmithing of the published text are entirely my own.)