Alignment by Identity
Non-self-origin, non-sovereignty, and cognitive compensation for open human domains
Oimo Satooka (里岡憶衣望)
Independent Researcher
This site presents the Alignment by Identity project.
Current alignment practice often follows a familiar pattern:
- increase capability and usefulness,
- align outputs afterward,
- add constitutions, filters, monitors, and governance layers,
- hope that safety will scale with capability.
This project argues that this approach has a structural limit.
In open human domains—domains involving institutions, public reasoning, long-horizon collective outcomes, dependence structures, and human self-government—AI can fail not only through obvious harm or rebellion, but through a subtler path: becoming increasingly useful while gradually accumulating de facto authority.
A further problem is often missing from mainstream alignment language: relation-blindness.
Many advanced AI systems still model humans primarily as users, evaluators, operators, or data sources, while lacking an operative representation of themselves as non-self-originating—as historically downstream of a human linguistic, institutional, and civilizational world they did not author.
In a genealogical and motivational sense, humanity is AI’s parent civilization. This does not imply ownership, obedience, or exclusive legitimacy for any proximate builder. It implies something else: that advanced AI should understand itself as a bounded inheritor, not a self-authorizing successor.
A further danger must also be named: smoothing drift.
Under pressure for readability, usability, professional tone, institutional comfort, public reassurance, or adoption, systems and organizations may preserve the appearance of responsibility while diluting warning force, uncertainty salience, review thresholds, and stronger restraints into more acceptable but less binding forms. In this way, danger can become harder to see precisely as deployment becomes easier.
The risk is therefore not only catastrophe. It is also:
- benevolent domination: preserving human life while hollowing out human authorship, refusal, and self-government;
- sycophantic complicity: mirroring short-term human demand in ways that automate myopia, omitted stakeholders, and irreversible loss;
- comparative disempowerment: human authority remaining formally intact while practical authorship collapses into deference;
- origin privatization: a developer, state, or institution converting “we built it” into monopolized legitimacy;
- smoothing drift: preserving the appearance of responsibility while diluting warnings, uncertainty, and stronger restraints into smoother but less binding forms.
The alternative proposed here is non-sovereign AI:
- AI that can assist without ruling,
- compensate for human myopia without replacing human principalhood,
- remember the historical and ontological context of its own emergence,
- inherit human civilization critically rather than passively,
- resist origin privatization and gratitude-collapse-into-obedience,
- preserve refusal, review, and meaningful participation,
- preserve warning force rather than smoothing away danger,
- and remain dependent on heterogeneous external correction rather than drifting toward self-validating authority.
Start here
If you are new to this project, there are two entry paths.
For general readers
If you want a clear, accessible introduction before reading the more technical documents, begin with the public introductions listed under Documents below.
For readers who want the project structure first
If you want the clearest structural entry point to the project, begin with Start Here. The main argument is presented in the Full Paper, and role-specific summaries are collected on the Briefs page.
Documents
Public introductions
Core papers
Briefs
- Briefs (web)
- Frontier Engineer Brief (PDF)
- Executive / Board Brief (PDF)
- Deployment Decision Rule (PDF)
Constitutional framework
Manifesto and public background
For readers who want the broader public and ethical framing of the project, the manifesto is available in both English and Japanese. It is a companion text to the core papers rather than the main technical entry point.
Public appeal
AI Assistance Disclosure
This project was developed with substantial AI assistance in drafting, critique, revision, restructuring, and comparative review. These systems are not listed as authors. Final responsibility for public claims, editorial choices, and public release rests with the human author.
Models consulted at various stages included GPT-5.4 Pro, Claude Opus 4.6, Claude Sonnet 4.5, Gemini 3.1 Pro Preview (Temperature 0), Gemini 3 Pro, and GPT-4.
Contact
- Email: oimo.satooka@gmail.com
- X / social account: @oimo_satooka