"Only he who is a party can really judge, but as a party he cannot judge. Hence it follows that there is no possibility of judgment in the world, only a glimmer of it."

— Franz Kafka

When the CEO of an AI safety company tells you control is failing, believe him

Dario Amodei, CEO of Anthropic, just said it out loud: AI is not programmed. It's cultivated. And as capability increases, control becomes less visible. Power scales faster than alignment.

This isn't a warning. It's a structural description.

The Pattern

Every AI lab operates rationally:

  • Anthropic invests billions in alignment research.
  • OpenAI builds safety teams and red-teaming protocols.
  • DeepMind develops frontier safety frameworks and evaluations.
  • Each promises responsible development.

Result: Capability outpaces alignment. Not because anyone fails at safety. Because the competitive structure guarantees it.

Lab A slows down for safety? Lab B doesn't. Lab A loses funding, talent, market position. Next quarter, Lab A speeds up. The structure selects for speed, not caution.
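The dynamic is a textbook prisoner's dilemma. A minimal sketch (with illustrative payoffs I've assumed; the essay gives no numbers) shows why speeding up dominates no matter what the rival lab does:

```python
# Two-lab race as a prisoner's dilemma. Payoff numbers are
# illustrative assumptions, not figures from the essay.
# "speed" = race ahead; "caution" = slow down for safety.
PAYOFF = {  # (my move, rival's move) -> my payoff
    ("caution", "caution"): 3,  # coordinated restraint: best shared outcome
    ("speed",   "caution"): 4,  # I capture funding, talent, market position
    ("caution", "speed"):   1,  # I fall behind
    ("speed",   "speed"):   2,  # everyone races: worse than mutual restraint
}

for rival in ("caution", "speed"):
    best = max(("caution", "speed"), key=lambda me: PAYOFF[(me, rival)])
    print(f"rival plays {rival!r:9} -> my best reply: {best!r}")

# "speed" wins either way: a dominant strategy. Both labs play it
# and land on (2, 2) when (3, 3) was available. Rational actors,
# collectively irrational outcome -- the pattern in four dict entries.
```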

All are guilty. None are at fault.

The Twist

"Cultivation, not programming" means emergent behavior. The system develops capabilities its creators didn't explicitly design. They observe what it can do. They don't fully control how it got there.

This doesn't merely make alignment harder. It makes classical alignment—predict, control, guarantee—structurally impossible.

You can't align what you can't fully understand. You can't fully understand what you didn't explicitly program. But you can't compete without letting it grow beyond explicit programming.

The Honest Position

Amodei says it publicly. Others think it privately. None can change it.

Why? Because changing it requires coordination. Coordination requires aligned incentives. The structure creates misaligned incentives. A classic PI (paradoxical interaction).

Even if all CEOs agreed tomorrow to slow down—boards, investors, governments, and competitors wouldn't. The pressure doesn't come from individuals. It comes from the structure.

Anti-Practice in Action

"This Time Will Be Different": every lab promises careful development. Every lab faces the same structural forces that defeated carefulness before. The promise doesn't change the pattern. The pattern eats the promise.

Navigation, Not Solution

What does PI say about this?

Don't expect solutions. The structure doesn't allow them. Expect honest recognition, strategic positioning, and acceptance that some problems can only be navigated—not solved.

Amodei's statement is valuable precisely because it's honest. He runs one of the leading AI safety companies. And he's telling you: We can't guarantee control scales with power.

Believe him. Not because he's pessimistic. Because he sees the structure.

What Now?

If you're working in AI safety: Recognize the structure you're in. Your individual effort matters. Your collective outcome is constrained.

If you're regulating AI: Understand that rules don't override incentives. Change the structure or accept the outcome.

If you're building AI: At least be honest about what you can and cannot control. Amodei is.

The Arrows They Must Forge

The labs build the capabilities they fear. Not from malice. From structural necessity.

They must develop powerful AI to have influence over its development. They must publish research to maintain credibility in the safety conversation. They must compete to survive as organizations. They must demonstrate progress to keep funding.

Every capability they create, every paper they publish, every model they release—these become tools in the hands of others. The very race they're trying to make safer, they're forced to accelerate.

They're supplying the arrows someone else will shoot.

They know it. They do it anyway. Mangels Alternative: for lack of an alternative.

Not because they're reckless. Because refusing to forge the arrows means losing all influence over who aims them, and how. The structure offers no third option: Build and risk misuse, or step aside and guarantee it.

The structure wins. Unless you see it clearly enough to navigate it.

Why He Sees It But Can't Name It

Amodei describes paradoxical interaction structures with surgical precision. Power scaling faster than alignment. The trap where economic incentives prevent restraint. Labs forced to build what they fear. Every single major AI risk in his essay is a PI.

Yet he doesn't call it that. He has no framework for what he's describing. He's mapping the territory brilliantly—but without a map.

This isn't oversight. It's structural position.

Amodei runs a $30B+ company. He navigates between investors demanding returns, competitors racing ahead, regulators demanding answers, and his own safety mission. He operates inside the structure daily. His operational intelligence is extraordinary—it has to be. The structure eliminates everyone who can't navigate it with maximum clarity.

He's not intelligent despite the structure. He's intelligent because the structure demands it. The CEO position selects for operational brilliance. Those who can't navigate impossible trade-offs don't survive in that role.

But operational position creates operational blindness.

When you're inside, you can't see the whole pattern. You see your choices, your constraints, your competitors' moves. You see the tactical landscape with extraordinary resolution. What you can't see is that the landscape itself has a structure—one that repeats across domains, across scales, across history.

Outsiders see differently. Not better, not smarter—differently. No operational responsibility means no operational blindness. The person outside can't navigate the daily complexity Amodei faces. But they can see the pattern he's standing in.

This creates a structural problem: Insiders can't take outsiders seriously.

To accept that someone with no operational role, no company, no stake in the game sees the structure more clearly would undermine the very legitimacy of the insider position. If the outside view is clearer, what justifies the enormous resources, status, and power concentrated in insider positions?

So the insider must believe operational experience provides insights the outsider cannot have. And they're partly right—they do know operational details the outsider doesn't. But they're also structurally wrong about what they can't see.

The Epistemological Trap

Here's where it gets deeper.

Amodei writes 38 pages as if observing from outside. Objective analysis. Neutral assessment. "The structure functions like this. Power scales faster than institutions. Labs must build what they fear."

This sounds authoritative. Scientific. As if there's a view from nowhere—a neutral observation point outside the structure being described.

But he's not outside. He's the structure. His company, his decisions, his position—all of it is part of what he's analyzing. He's party to the very dynamic he's describing.

The honest framing would be: "I'm inside this structure. It shapes my perception. What I describe as necessary might only be necessary from my position. I cannot see what I cannot see."

But that framing destroys authority. It sounds uncertain. Subjective. Unreliable.

So we get a meta-paradox:

  • Admitting positional bias loses credibility
  • Claiming objectivity gains credibility
  • The honest description ("I'm embedded") gets marked unreliable
  • The dishonest description ("neutral analysis") gets marked authoritative
  • Result: Structurally distorted "objective" reports dominate

The insider describes extrinsically—pretending to stand outside. Gets believed because it sounds objective.

The outsider describes intrinsically—acknowledging their position. Gets dismissed because it sounds subjective.

But the extrinsic description is structurally less accurate. The person inside can't see outside. They're describing the view from their position while claiming it's the view from everywhere.

The intrinsic description is structurally more honest. The person outside acknowledges they lack operational detail but can see the pattern the insider stands in.

Only the party can truly judge. But because it's a party, it cannot judge.

Kafka saw this. Not about AI, about law, about power, about any structure where those with information are the ones with stakes.

There is no neutral ground. The insider has the information but cannot be neutral. The outsider can be neutral but lacks the information. Objective judgment is structurally impossible.

Only a glimmer. Glimpses. Approximations.

This is why PI structures are epistemologically self-stabilizing.

Not just because incentives prevent change. But because the structure prevents itself from being seen clearly. Those who can see it have no standing. Those who have standing cannot see it.

Amodei describes PI with perfect precision—without naming it, without the framework, without recognizing he's describing the structure he's embedded in.

He supplies the arrows he warns against. He knows it. He does it anyway. Mangels Alternative.

Not because he lacks wisdom. Because the structure makes wisdom irrelevant.

All are guilty. None are at fault.


Published: 2025-01-29
Author: Peter Senner
Framework: Paradoxical Interactions (PI)

Paradoxical Interactions (PI): When rational actors consistently produce collectively irrational outcomes—not through failure, but through structure.
