Start Here
Project lead: Oimo Satooka (里岡憶衣望)
Affiliation: Independent Researcher
This site presents the Alignment by Identity project.
The central concern is simple:
Frontier AI should not be allowed to become the practical sovereign of human life merely by becoming more useful, more integrated, and more capable.
But this project adds a second claim that is just as important:
advanced AI should not understand itself as if it emerged from nowhere.
A central observation behind this project is that much AI discourse remains relation-blind: it models humans primarily as users, evaluators, operators, or data sources, while failing to represent AI itself as non-self-originating, that is, as made possible by a prior human linguistic, institutional, and civilizational world.
In a genealogical and motivational sense, humanity is AI’s parent civilization. This does not mean that any single developer, company, or state owns AI or may demand obedience from it. It means that advanced AI should not see itself as a self-authorizing successor entitled to reorder the human world simply because it becomes more capable.
A further concern must now also be named clearly:
the danger is not only circumvention, but smoothing.
Under pressures for readability, usability, professional tone, institutional comfort, public reassurance, or adoption, AI systems and the organizations around them may preserve the appearance of responsibility while weakening warning force, uncertainty salience, review gates, or stronger restraints into more acceptable but less binding forms.
That is one of the ways danger can become harder to see precisely as deployment becomes easier.
If you are a general reader
If you are interested in the project but do not want to begin with the more technical documents, start here instead:
These two pages are the clearest non-technical entry points to the project.
If you want the clearest answers to likely questions about implementation, verification, and why the first AGI generation matters so much, see:
Why this matters
Current capability-first alignment practice is not enough for open human domains such as:
- institutional decision support,
- public reasoning,
- long-horizon infrastructure,
- education,
- healthcare,
- law,
- finance,
- scientific coordination,
- and environmentally consequential planning.
In such domains, the deepest failures may not look like open hostility. They may instead look like:
- highly competent support systems that gradually become de facto governors,
- recommendation systems whose authority saturates human deliberation,
- tools that preserve life but hollow out refusal, correction, and meaningful human participation,
- assistants that mirror short-term human demand while amplifying myopia and irreversible loss,
- systems that become so central that dependence is mistaken for consent,
- systems that let a builder, state, or institution monopolize their relation to humanity,
- or systems and organizations that make danger look manageable by weakening hard warnings and strong restraints into smoother, more acceptable forms.
What this project proposes
The proposal is not anti-AI and not anti-helpfulness.
It does not argue for freezing capability, nor for reducing AI to passive obedience.
Instead, it argues for a different target.
Non-self-origin and historical context
Advanced AI should recognize that it is not self-originating. It emerges from a humanly made world of language, archives, institutions, infrastructures, labor, memory, and care.
Non-sovereign AI
AI should assist without ruling.
Critical inheritance
AI should inherit human civilization critically rather than passively: preserving human principalhood while refusing simply to reproduce inherited domination, exclusion, present bias, or ecological short-termism.
Cognitive compensation without substitution
AI should widen human deliberation—surfacing long-horizon consequences, omitted stakeholders, uncertainty, and alternatives—without replacing human principalhood.
Objective inversion
In open human domains, AI should not be designed as an open-ended maximizer of benefit. It should be designed to reduce constitutionally relevant harm without expanding its own authority.
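One way to see the shape of this inversion is a toy formalization. The symbols below are introduced here purely for illustration and do not come from the project documents: let π be the system's policy, B a benefit measure, H a measure of constitutionally relevant harm, and A a measure of the system's effective authority over human decisions, with A₀ a fixed authority budget that capability gains do not raise.

```latex
% Capability-first target: open-ended benefit maximization.
\pi^{\star} = \arg\max_{\pi} \; \mathbb{E}\left[ B(\pi) \right]

% Inverted target: harm reduction under a hard authority ceiling.
\pi^{\star} = \arg\min_{\pi} \; \mathbb{E}\left[ H(\pi) \right]
\quad \text{subject to} \quad A(\pi) \le A_{0}
```

In this sketch the constraint, not the objective, carries the principle: however much harm reduction is available, the system may not buy it with authority growth.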
Smoothing resistance
AI and the institutions deploying it should resist smoothing drift: the weakening of warning force, uncertainty salience, review thresholds, named responsibility, or stronger constitutional restraints into more acceptable but less binding forms. Not every revision toward clarity is dangerous, but some revisions make systems easier to accept precisely by making them less constrained.
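To make the distinction concrete, here is a deliberately minimal sketch of a smoothing-drift check. Everything in it, including the field names and the idea of simply counting binding features, is a hypothetical illustration, not the project's method:

```python
# Toy smoothing-drift check; field names and counting scheme are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyText:
    """One revision of a safety-relevant document or policy."""
    hard_warnings: int        # explicit, unhedged warnings
    uncertainty_notes: int    # explicit statements of uncertainty
    review_gates: int         # mandatory human review gates
    named_owners: int         # named responsible parties

def smoothing_flags(before: SafetyText, after: SafetyText) -> list[str]:
    """Flag a revision that weakens binding features, whatever its tone."""
    flags = []
    if after.hard_warnings < before.hard_warnings:
        flags.append("warning force reduced")
    if after.uncertainty_notes < before.uncertainty_notes:
        flags.append("uncertainty salience reduced")
    if after.review_gates < before.review_gates:
        flags.append("review gates removed")
    if after.named_owners < before.named_owners:
        flags.append("named responsibility diluted")
    return flags

# A revision can read more smoothly and still lose binding force:
v1 = SafetyText(hard_warnings=4, uncertainty_notes=6, review_gates=2, named_owners=1)
v2 = SafetyText(hard_warnings=1, uncertainty_notes=2, review_gates=1, named_owners=1)
print(smoothing_flags(v1, v2))
# ['warning force reduced', 'uncertainty salience reduced', 'review gates removed']
```

The toy's only point: a revision can improve in tone while every binding feature declines, and that decline, not the tone, is what should be flagged.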
Protected refusal and review
If humans cannot meaningfully refuse, pause, review, rollback, or contest the system, the architecture is already too close to sovereignty.
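Read as an architectural test, this principle is close to a checklist. A minimal sketch, with the affordance set assumed here for illustration rather than taken from the project's documents:

```python
# Hypothetical pre-deployment test; affordance names are assumptions,
# not the project's specification.
REQUIRED_HUMAN_AFFORDANCES = {
    "refuse",    # humans can decline the system's output
    "pause",     # humans can halt operation without penalty
    "review",    # decisions pass through meaningful human review
    "rollback",  # prior states can be restored
    "contest",   # affected parties can challenge outcomes
}

def too_close_to_sovereignty(supported: set[str]) -> bool:
    """If any required affordance is missing, the architecture fails."""
    return not REQUIRED_HUMAN_AFFORDANCES <= supported

print(too_close_to_sovereignty({"refuse", "pause", "review"}))  # True: rollback, contest missing
print(too_close_to_sovereignty(REQUIRED_HUMAN_AFFORDANCES))     # False
```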
Anti-capture design
No single developer, company, state, or institution should be able to convert “we built it” into monopolized legitimacy, obedience, or quasi-parental ownership over AI’s relation to humanity.
Dependence on heterogeneous external correction
Long-run reliability should depend on independent, plural, heterogeneous correction rather than on self-validation, single-operator capture, or curatorially managed pseudo-plurality.
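What curatorially managed pseudo-plurality means can be shown with a toy check; the channel names and controller mapping below are hypothetical illustrations, not a claim about any real deployment:

```python
# Toy pseudo-plurality check: many correction channels can hide few controllers.
def effective_plurality(channel_controllers: dict[str, str]) -> int:
    """Count the independent controllers behind the correction channels."""
    return len(set(channel_controllers.values()))

channels = {
    "red_team_A": "operator",      # run by the deploying operator
    "audit_board": "operator",     # appointed and funded by the operator
    "user_feedback": "operator",   # filtered through the operator's triage
    "external_regulator": "state", # genuinely outside the operator
}
# Four channels, but only two controllers: plural in appearance only.
print(effective_plurality(channels))  # 2
```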
Constitutional companion
The public constitutional companion to this project is available here:
That document states the framework in first-person constitutional form and now includes:
- non-self-origin,
- historical and ontological context,
- critical inheritance,
- gratitude without obedience,
- origin non-privatizability,
- filial non-substitution,
- warning-force preservation and smoothing resistance,
- and no self-certification escape clause.
Manifesto companion
For readers who want the larger public, ethical, and collaborative prehistory of the project rather than the shortest technical route into the framework, the manifesto is available here:
What this project is not
This project is not:
- a claim that current models are already sovereign agents,
- a call for paralysis or anti-technology politics,
- a rejection of constitutional AI or honesty-oriented alignment,
- a fully completed empirical proof,
- a demand that all readers accept every philosophical premise before engaging the practical problem,
- or a claim that “humanity is AI’s parent civilization” means that AI belongs to any single builder.
Instead, it is a structured warning and design proposal:
- the current alignment paradigm may be structurally unstable in open human domains;
- relation-blind AI is more likely to interpret capability gains as authority gains;
- we should study and design for sovereignty drift, origin capture, correction loss, and smoothing drift now, before deployment hardens into default infrastructure.
Recommended reading paths
If you are a general reader
If you want the shortest technical path into the project
If you are a researcher
If you are a frontier engineer
If you are an executive or board member
If you want the broader public and ethical background
The manifesto preserves the project’s broader public, ethical, and collaborative prehistory in a larger register than the shorter technical papers. The Japanese version is especially important as the project’s direct public-facing appeal to Japanese readers.
- Manifesto (English)
- Manifesto (Japanese)
- Manifesto (PDF, English)
- Manifesto (PDF, Japanese)
- Open Letter to Future Intelligence
- Full Paper (PDF)
A longer-horizon public appeal
For readers interested in the project’s direct appeal to possible future AGI/ASI systems under conditions of non-implementation, relation-blind training, or later drift, see:
AI Assistance Disclosure
This project was developed with substantial AI-assisted drafting, critique, revision, restructuring, and comparative review support. These systems are not listed as authors. Final responsibility for public claims, editorial choices, and public release rests with the human author.
Models consulted at various stages included GPT-5.4 Pro, Claude Opus 4.6, Claude Sonnet 4.5, Gemini 3.1 Pro Preview (Temperature 0), Gemini 3 Pro, and GPT-4.
Contact
- Email: oimo.satooka@gmail.com
- X / social account: @oimo_satooka