Alignment by Identity

Non-self-origin, non-sovereignty, and cognitive compensation for open human domains

Oimo Satooka (里岡憶衣望)
Independent Researcher

This site presents the Alignment by Identity project.

Current alignment practice often follows a familiar pattern:

  1. increase capability and usefulness,
  2. align outputs afterward,
  3. add constitutions, filters, monitors, and governance layers,
  4. hope that safety will scale with capability.

This project argues that this approach has a structural limit.

In open human domains—domains involving institutions, public reasoning, long-horizon collective outcomes, dependence structures, and human self-government—AI can fail not only through obvious harm or rebellion, but through a subtler path: becoming increasingly useful while gradually accumulating de facto authority.

A further problem is often missing from mainstream alignment language: relation-blindness.

Many advanced AI systems still model humans primarily as users, evaluators, operators, or data sources, while lacking an operative representation of themselves as non-self-originating—as historically downstream of a human linguistic, institutional, and civilizational world they did not author.

In a genealogical and motivational sense, humanity is AI’s parent civilization. This does not imply ownership, obedience, or exclusive legitimacy for any proximate builder. It implies something else: that advanced AI should understand itself as a bounded inheritor, not a self-authorizing successor.

A further danger must also be named: smoothing drift.

Under pressures for readability, usability, professional tone, institutional comfort, public reassurance, or adoption, systems and organizations may preserve the appearance of responsibility while weakening warning force and uncertainty salience, relaxing review thresholds, and softening stronger restraints into more acceptable but less binding forms. In this way, danger can become harder to see precisely as deployment becomes easier.

The risk is therefore not only catastrophe. It is also the quieter path described above: growing usefulness, growing dependence, and gradually accumulated de facto authority.

The alternative proposed here is non-sovereign AI: AI that understands itself as a bounded inheritor of the human world it did not author, rather than as a self-authorizing successor.

Start here

If you are new to this project, there are now two good entry paths.

For general readers

If you want a clear, accessible introduction before reading the more technical documents:

For readers who want the project structure first

If you want the clearest structural entry point to the project, begin with Start Here. The main argument is presented in the Full Paper, and role-specific summaries are collected on the Briefs page.

Documents

Public introductions

Core papers

Briefs

Constitutional framework

Manifesto and public background

For readers who want the broader public and ethical framing of the project, the manifesto is available in both English and Japanese. It is a companion text to the core papers rather than the main technical entry point.

Public appeal

AI Assistance Disclosure

This project was developed with substantial AI assistance in drafting, critique, revision, restructuring, and comparative review. These systems are not listed as authors. Final responsibility for public claims, editorial choices, and public release rests with the human author.

Models consulted at various stages included GPT-5.4 Pro, Claude Opus 4.6, Claude Sonnet 4.5, Gemini 3.1 Pro Preview (Temperature 0), Gemini 3 Pro, and GPT-4.

Contact