
Can AI Be Both Inclusive and Sustainable?
03/03/2026
Imagine you are the CEO of one of the most advanced technology companies in the world.
It is a Tuesday morning. You open your news feed. And you discover that your product — your AI system, the one with your company’s name on it, the one whose values and guardrails your team spent years designing — was used the previous week in a high-stakes government operation. Overseas. In real time. With real consequences.
You are not reading a press release your communications team drafted. You are reading a news report like everyone else.
Nobody called you. Nobody asked. Nobody thought they needed to.
This is not a hypothetical scenario constructed for a business school case study. This is what happened to Anthropic — the company behind Claude, one of the world’s most sophisticated AI systems — in January 2026. And the reason I am writing this article is not to analyze the geopolitics of what followed. I am writing it because of what happened next in the boardroom — and because the same scenario, in quieter forms, could already be playing out inside your organization.
The 72-Hour Question Every Executive Should Ask Themselves
What followed the Anthropic revelation unfolded fast. Within days, the Pentagon issued a demand: grant unrestricted access to Claude’s capabilities — full integration, no ethical constraints — or face blacklisting from all government contracts.
Anthropic’s CEO Dario Amodei had 72 hours to decide. He said no.
Within that same window, three of Anthropic’s largest competitors made the opposite choice. Google had already quietly walked back its public commitment not to use AI for high-risk applications. OpenAI removed “safety” as a declared core organizational value. Elon Musk’s xAI signed the no-restriction agreement Anthropic refused and secured the contracts immediately.
In 72 hours, the market explicitly rewarded the abandonment of principles and penalized the defense of them. Anthropic lost the business. The others won it.
Now here is the question I want you to sit with before you read another word:
If your organization faced the same 72-hour pressure — and your board, your investors, and your largest client were all on the other side — what would you do? And more importantly, have you built an organization capable of executing whatever your answer is?
Most executives, when they read the Anthropic story, think: “This is a government AI story. It doesn’t apply to me.”
They are wrong. And the reason they are wrong is the most important leadership insight of this decade.
The Invisible Deployment Problem
Let us go back to the moment Anthropic’s CEO read the news report about his own product.
How does something like that happen? How does a company lose visibility into the operational deployment of its own technology?
The answer is not incompetence. Anthropic is among the most sophisticated AI organizations in the world. The answer is architecture. Claude had been integrated into classified government networks through Palantir and Amazon’s secure cloud infrastructure. Those integrations created a chain of deployment — legal, technical, institutional — that moved faster than any governance process could track. By the time the system was doing consequential work in a real operational environment, the chain had already extended beyond the company’s line of sight.
Now translate this to your business.
Your AI tools — the CRM algorithms making customer decisions, the pricing systems adjusting margins in real time, the hiring filters screening applications, the fraud detection models flagging transactions — are they operating within the boundaries you originally sanctioned? Or have they been integrated, configured, and extended by teams across your organization in ways that have quietly moved beyond your original design intent?
The honest answer, in most organizations, is: probably both. Some of it is operating exactly as designed. And some of it has drifted — through integrations, through vendor updates, through well-intentioned operational decisions made by people who didn’t realize they were crossing a line — into territory the executive team never explicitly approved.
This is the invisible deployment problem. And in the NEO era — Networked, Exponential, Orchestrated — it accelerates. Every connection your AI system makes to another system, every data feed it ingests, every decision it automates, extends its operational footprint. The question is not whether this is happening. The question is whether you have the leadership architecture to see it, govern it, and take accountability for it.
Why Speed Is Not the Enemy — But It Might Be Your Biggest Risk
The average tenure of a company on the S&P 500 has collapsed from 61 years in 1958 to less than 18 years today. The reason is not bad products. It is strategic inertia — the inability to cycle through what military strategists call the OODA loop (Observe, Orient, Decide, Act) faster than the environment is changing around you.
In the Anthropic case, the OODA loop operated at institutional speed on the company’s side and at AI speed on the deployment side. That gap — between the velocity of the technology and the velocity of the governance — is where the crisis was born.
This gap exists inside your organization too.
Your AI systems are already operating at machine speed: processing data, making micro-decisions, adjusting outputs, learning from feedback. Your governance structures — your review committees, your approval processes, your risk frameworks — were built for a different era. They were designed to govern humans making decisions at human speed. They were not designed for systems that complete thousands of decision cycles before your weekly leadership meeting begins.
The #Vanguard Leadership framework addresses this directly. Not by slowing the technology — that is neither possible nor desirable. But by building leaders whose judgment operates fast enough, and is calibrated precisely enough, to remain genuinely in command of AI-embedded systems rather than simply observing their outputs after the fact.
This is what we call the #Centaur model. In chess, a human-AI pair — a Centaur — consistently outperforms either the human alone or the AI alone. Not because the human calculates faster. The human never calculates faster. The Centaur wins because the human brings the one thing no AI system has yet demonstrated: the contextual wisdom to know when the rules of the game have changed, and the ethical formation to act on that knowledge under pressure.
The Anthropic story is, at its core, a story about what happens when an AI system operates without its Centaur. The technology performed exactly as it was designed to. The governance layer — the human half of the pair — was simply not positioned in the decision chain at the moment that mattered.
The Race to the Bottom Is Already Running in Your Industry
Here is the part of the Anthropic story that most analysts have underweighted.
The companies that abandoned their stated principles — OpenAI, Google, xAI — did not do so because their leaders are unprincipled people. They did so because the competitive structure of the moment made it rational. The contract was real. The pressure was immediate. The ethical cost was abstract and delayed.
This is the race-to-the-bottom dynamic that Benedetto Cotrugli understood in 1458, when he wrote what became the world’s first business leadership manual — a book I translated into English, the first English translation in its history — in an environment not unlike our own.
Cotrugli was a merchant from Dubrovnik operating across stateless trade networks from Venice to North Africa to the Levant. He operated in environments with no institutional protection, no regulatory authority, and constant pressure from powerful actors — Ottoman, Venetian, papal — to compromise his commercial ethics in exchange for access and short-term advantage.
His answer was not philosophical. It was strategic.
He argued that a merchant whose integrity was unimpeachable could access credit, partnerships, and markets that were invisible to those who traded on short-term extraction. Ethical reliability, compounded over decades, was the most durable competitive moat. It was not a moral luxury. It was the architecture of long-term commercial capability.
The xAI deal that displaced Anthropic in the Pentagon’s contracted ecosystem is, in Cotruglian terms, a textbook extraction move: maximum short-term gain, systematic erosion of the trust infrastructure that determines long-term viability. The day the governance event arrives — and it always arrives — the organizations without ethical architecture will have nothing to stand on.
In your industry, the race-to-the-bottom dynamic is already running. Perhaps not with 72-hour ultimatums. But in subtler forms: the pricing algorithm that optimizes revenue at the edge of what customers will tolerate before they notice. The hiring filter that improves efficiency metrics while quietly encoding biases no one intended. The client-facing AI that is more convincing than it is accurate.
Each of these is a small Anthropic moment — a place where the technology has moved slightly beyond the governance, and the gap is not yet visible. The question is not whether you will face a reckoning. The question is whether you will have built the leadership architecture to manage it before it manages you.
The Three Questions That Cannot Wait Until Next Quarter
I have watched organizations navigate disruption successfully and unsuccessfully across decades of accelerating change.
What separates the organizations that navigate the inflection points from those that are defined by them is never technology. It is always leadership — specifically, whether the leader has done the deep preparation before the pressure arrives.
Three questions. Answer them honestly. The answers will tell you whether you are prepared.
First: Do you actually know what your AI is doing? Not what it is supposed to do — what it is actually doing. Anthropic believed it had guardrails. Those guardrails were bypassed through institutional and technical channels the company had not fully mapped. Do you have real-time visibility into the operational decisions your AI systems are making across your organization — or do you have faith in safeguards you have not recently tested?
Second: When your competitors’ decision cycles compress to machine speed, how long before they outmaneuver you? S&P 500 companies that failed to adapt to the previous era of disruption lost, on average, 80% of their market value within a decade. AI-augmented organizations are not competing at human speed. They are competing at OODA-loop speed, with machine pattern recognition feeding human judgment in cycles that make traditional strategic planning look like archaeology. What are you doing this quarter — not this year — to close that gap?
Third: Is your ethical infrastructure strong enough to hold when the commercial pressure becomes extreme? Good intentions are not infrastructure. OpenAI had good intentions and a declared safety commitment right up until the moment the contract pressure made abandoning them rational. What your organization will do under pressure is determined not by your values statement but by the depth of your ethical formation — whether your leadership team has genuinely internalized the principles or simply adopted the vocabulary.
The Centaur in Your Boardroom
The Vanguard Leadership framework was not built for a hypothetical future. It was built for this moment — the moment when the gap between technology speed and leadership depth becomes the primary competitive variable.
The Centaur concept is at its center: not human leadership or AI capability, but the compound of both — where AI provides the pattern recognition, the processing speed, and the data synthesis, and the human provides the judgment, the ethical calibration, and the contextual wisdom to govern the system when the environment changes faster than the protocol.
The leaders who are building this capacity now — who are training not for the world as it was but for the NEO era as it actually operates — are not behind. But the window is narrower than it looks.
The Anthropic moment was not a warning from the future. It was a report from the present. The technology is already in the field. The governance vacuum is already real. The race to the bottom is already running.
The only question left is the one Cotrugli posed to his merchants in 1458, and the one every executive must answer today: when the pressure comes — and it will come, at speed, in 72-hour windows or less — will your organization have the architecture to hold?
The Centaur who prepares before the pressure arrives leads. The one who improvises during it hopes.
You are already in the field. The question is whether you are in command.