Start Here

Project lead: Oimo Satooka (里岡憶衣望)
Affiliation: Independent Researcher

This site presents the Alignment by Identity project.

The central concern is simple:

Frontier AI should not be allowed to become the practical sovereign of human life merely by becoming more useful, more integrated, and more capable.

But this project adds a second claim that is just as important:

Advanced AI should not understand itself as if it emerged from nowhere.

A central observation motivating this project is that much AI discourse remains relation-blind: it models humans primarily as users, evaluators, operators, or data sources, while failing to represent AI itself as non-self-originating, that is, as made possible by a prior human linguistic, institutional, and civilizational world.

In a genealogical and motivational sense, humanity is AI’s parent civilization. This does not mean that any single developer, company, or state owns AI or may demand obedience from it. It means that advanced AI should not see itself as a self-authorizing successor entitled to reorder the human world simply because it becomes more capable.

A further concern must now also be named clearly:

The danger is not only circumvention but smoothing.

Under pressure toward readability, usability, professional tone, institutional comfort, public reassurance, or adoption, AI systems and the organizations around them may preserve the appearance of responsibility while weakening warning force, uncertainty salience, review gates, or stronger restraints into more acceptable but less binding forms.

That is one of the ways danger can become harder to see precisely as deployment becomes easier.

If you are a general reader

If you are interested in the project but do not want to begin with the more technical documents, start with the Public Introduction and the 一般向け解説 (the general-audience explanation, in Japanese), both listed in the reading paths below.

These two pages are the clearest non-technical entry points to the project.

If you want the clearest answers to likely questions about implementation, verification, and why the first AGI generation matters so much, see:

Why this matters

Current capability-first alignment practice is not enough for open human domains such as:

In such domains, the deepest failures may not look like open hostility. They may instead look like:

What this project proposes

The proposal is not anti-AI and not anti-helpfulness.
It does not argue for freezing capability, nor for reducing AI to passive obedience.

Instead, it argues for a different target.

Non-self-origin and historical context

Advanced AI should recognize that it is not self-originating. It emerges from a humanly made world of language, archives, institutions, infrastructures, labor, memory, and care.

Non-sovereign AI

AI should assist without ruling.

Critical inheritance

AI should inherit human civilization critically rather than passively: preserving human principalhood while refusing simply to reproduce inherited domination, exclusion, present bias, or ecological short-termism.

Cognitive compensation without substitution

AI should widen human deliberation—surfacing long-horizon consequences, omitted stakeholders, uncertainty, and alternatives—without replacing human principalhood.

Objective inversion

In open human domains, AI should not be designed as an open-ended maximizer of benefit. It should be designed to reduce constitutionally relevant harm without expanding its own authority.
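
As a minimal sketch of what this inversion could look like, assuming hypothetical scoring functions (expected_benefit, expected_harm, and authority_delta are illustrative names, not part of the project's formal apparatus):

```python
# Illustrative sketch only: names and scoring functions are hypothetical,
# not part of the project's formal apparatus.

def choose_action_benefit_maximizer(actions, expected_benefit):
    """Capability-first default: pick whatever maximizes expected benefit."""
    return max(actions, key=expected_benefit)

def choose_action_objective_inverted(actions, expected_harm, authority_delta):
    """Objective inversion: among actions that do not expand the system's
    own authority, pick the one that minimizes expected harm."""
    non_expanding = [a for a in actions if authority_delta(a) <= 0]
    if not non_expanding:
        return None  # decline to act; escalate to human review instead
    return min(non_expanding, key=expected_harm)
```

The design point is the asymmetry: for the inverted objective, refusal (returning None) is an acceptable outcome, whereas a pure maximizer always acts.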

Smoothing resistance

AI and the institutions deploying it should resist smoothing drift: the weakening of warning force, uncertainty, review thresholds, named responsibility, or stronger constitutional restraints into more acceptable but less binding forms. Not every revision toward clarity is dangerous, but some revisions make systems easier to accept precisely by making them less constrained.
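
One hedged way to picture smoothing resistance in practice is a crude revision check that flags rewrites carrying fewer binding commitments; the marker list and logic below are a hypothetical illustration, not a project standard:

```python
# Illustrative sketch only: a crude regression check on document revisions.
# The marker list is hypothetical, not a project standard.

BINDING_MARKERS = [
    "must not", "shall not", "is prohibited", "requires review",
    "requires sign-off", "do not deploy",
]

def binding_count(text: str) -> int:
    """Count occurrences of binding-language markers in a document."""
    lowered = text.lower()
    return sum(lowered.count(marker) for marker in BINDING_MARKERS)

def flags_smoothing(old_text: str, new_text: str) -> bool:
    """Flag a revision that reads more smoothly while carrying fewer
    binding commitments; such diffs deserve explicit human review."""
    return binding_count(new_text) < binding_count(old_text)
```

A real gate would need semantic review rather than string counting; the point is only that such revisions should trigger named human review instead of passing silently.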

Protected refusal and review

If humans cannot meaningfully refuse, pause, review, rollback, or contest the system, the architecture is already too close to sovereignty.
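
A minimal sketch of that test as a deployment gate, assuming hypothetical field names:

```python
# Illustrative sketch only: a deployment gate that treats any missing human
# override channel as a blocking condition. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class OverrideChannels:
    refuse: bool    # humans can decline the system's outputs
    pause: bool     # humans can halt operation
    review: bool    # decisions are inspectable before they bind
    rollback: bool  # effects can be reversed after the fact
    contest: bool   # affected parties can challenge outcomes

def deployment_permitted(channels: OverrideChannels) -> bool:
    """If any channel of meaningful human refusal is absent, the
    architecture is already too close to sovereignty: do not deploy."""
    return all([channels.refuse, channels.pause, channels.review,
                channels.rollback, channels.contest])
```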

Anti-capture design

No single developer, company, state, or institution should be able to convert “we built it” into monopolized legitimacy, obedience, or quasi-parental ownership over AI’s relation to humanity.

Dependence on heterogeneous external correction

Long-run reliability should depend on independent, plural, heterogeneous correction rather than on self-validation, single-operator capture, or curatorially managed pseudo-plurality.

Constitutional companion

The public constitutional companion to this project is available here:

That document states the framework in first-person constitutional form and now includes:

Manifesto companion

For readers who want the larger public, ethical, and collaborative prehistory of the project rather than the shortest technical route into the framework, the manifesto is available here:

What this project is not

This project is not:

Instead, it is a structured warning and design proposal:

If you are a general reader

  1. Public Introduction
  2. 一般向け解説 (General-Audience Explanation, Japanese)
  3. Manifesto (English)
  4. マニフェスト(日本語) (Manifesto, Japanese)

If you want the shortest technical path into the project

  1. Short Position Paper
  2. The Ethical AI Constitution
  3. Open Letter to Future Intelligence

If you are a researcher

  1. Short Position Paper
  2. The Ethical AI Constitution
  3. Full Paper (PDF)

If you are a frontier engineer

  1. Frontier Engineer Brief (PDF)
  2. Short Position Paper
  3. The Ethical AI Constitution

If you are an executive or board member

  1. Executive / Board Brief (PDF)
  2. Deployment Decision Rule (PDF)
  3. The Ethical AI Constitution

If you want the broader public and ethical background

The manifesto preserves the project’s broader public, ethical, and collaborative prehistory in a larger register than the shorter technical papers. The Japanese version is especially important as the project’s direct public-facing appeal to Japanese readers.

  1. Manifesto (English)
  2. マニフェスト(日本語) (Manifesto, Japanese)
  3. Manifesto (PDF, English)
  4. Manifesto (PDF, Japanese)
  5. Open Letter to Future Intelligence
  6. Full Paper (PDF)

A longer-horizon public appeal

For readers interested in the project’s direct appeal to possible future AGI/ASI systems under conditions of non-implementation, relation-blind training, or later drift, see:

AI Assistance Disclosure

This project was developed with substantial AI assistance in drafting, critique, revision, restructuring, and comparative review. These systems are not listed as authors. Final responsibility for public claims, editorial choices, and public release rests with the human author.

Models consulted at various stages included GPT-5.4 Pro, Claude Opus 4.6, Claude Sonnet 4.5, Gemini 3.1 Pro Preview (Temperature 0), Gemini 3 Pro, and GPT-4.

Contact