Most AI discussions focus on correctness.
Accuracy. Alignment. Output quality.
But there’s a more fundamental problem underneath all of that:
Who — or what — is actually allowed to execute a decision?
---
I just published a paper introducing:
Authority Encoding Risk (AER)
A measurable variable for something most systems don’t track at all:
Authority ambiguity at the moment of execution.
---
Today’s systems can tell you:
• whether an output is likely correct
• whether it follows policy
• whether it appears safe
But they cannot reliably answer:
Is this decision admissible under real-world authority constraints?
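To make the gap concrete, here is a minimal, hypothetical sketch (not from the paper; every name and field is invented for illustration). It shows a pipeline that already checks correctness and policy, plus a separate execution-time gate that asks whether the executing actor can point to an unambiguous source of authority:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str
    actor: str                       # who or what is about to execute
    authority_source: Optional[str]  # e.g. a delegation record, policy clause, or approval
    passes_policy: bool
    predicted_correct: bool

def is_admissible(decision: Decision) -> bool:
    """Execution-time gate: admissible only if the executing actor can point to
    an unambiguous source of authority. Correctness and policy compliance are
    checked separately and do not substitute for this."""
    return decision.authority_source is not None

# Typical pipelines answer "is it correct?" and "does it follow policy?"
# but never "is this actor allowed to execute it?"
d = Decision(
    action="approve_claim_payout",
    actor="underwriting_assistant_bot",
    authority_source=None,           # no delegation record: authority is ambiguous
    passes_policy=True,
    predicted_correct=True,
)

if d.predicted_correct and d.passes_policy and not is_admissible(d):
    print(f"Blocked: {d.actor} has no resolvable authority to execute {d.action!r}")
```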
---
That gap shows up in:
• automation systems
• AI-assisted decisions
• institutional workflows
• underwriting and loss modeling
And right now, it’s largely invisible.
---
The paper breaks down:
• how authority ambiguity propagates into risk
• why existing frameworks fail to capture it
• how it can be measured before loss occurs
---
If you work anywhere near AI, risk, infrastructure, or decision systems, this is a layer worth paying attention to.
---
There’s a category of risk most AI systems don’t even know exists.
This paper is an initial formulation of how to name and measure it.
Ongoing work focuses on tightening the definitions, expanding the evidence, and strengthening the model.