Human-AI Autonomy Scale — by In the Void

From AI in the void to Human in the void.

AI in the void

AI is absent or irrelevant to the workflow.

AI in the loop

AI assists, but does not act independently.

Human in the loop

AI executes; the human supervises and approves.

Human in the void

AI operates independently; humans are not required.
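The four phases above form an ordered classification, which can be sketched as a simple enumeration. This is purely an illustration of the scale's structure; the framework itself defines no code API, and the `AutonomyPhase` name is hypothetical:

```python
from enum import Enum


class AutonomyPhase(Enum):
    """The four phases of the Human-AI Autonomy Scale,
    ordered from least to most AI autonomy."""
    AI_IN_THE_VOID = "AI is absent or irrelevant to the workflow"
    AI_IN_THE_LOOP = "AI assists, but does not act independently"
    HUMAN_IN_THE_LOOP = "AI executes; the human supervises and approves"
    HUMAN_IN_THE_VOID = "AI operates independently; humans are not required"


# Walking the enum traverses the scale in order:
for phase in AutonomyPhase:
    print(phase.name.replace("_", " ").title(), "-", phase.value)
```

Because `Enum` preserves definition order, iterating over the class always yields the phases from "AI in the void" to "Human in the void", matching the direction of the scale.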

Level 0 — AI in the void

No AI Usage

No AI tools are used in the workflow. Coding, debugging, testing, and delivery are fully manual.


Capabilities

  • No machine-assisted implementation
  • No automated code generation
  • No AI-mediated review loops
  • No AI support in documentation
  • No AI participation in delivery

Typical examples

  • Manual coding and debugging only
  • Human-authored test cases and fixes
  • No autocomplete beyond the language server
  • Traditional release and QA process
  • No AI tooling in the stack

Human role

  • Defines and executes all work
  • Owns every implementation detail
  • Performs all verification manually
  • Makes every delivery decision
  • Provides all iteration and direction

Reference format: Level 0 — Human-AI Autonomy Scale | AI in the void

Use the level number and phase name when sharing to keep comparisons clear.

Most frameworks ask: “What can the AI do?” This one asks: “Who is doing the work — the human or the AI?”

The Human-AI Autonomy Scale is a developer-first framework that measures ownership of execution and decision-making in software development workflows.

Other frameworks measure model capability; this one measures how work is actually performed in practice.

Same tools. Completely different reality.
Two teams can use the same assistants and still operate at very different autonomy levels.

  • Hiring clarity: “This role operates at level 6–7.”
  • Team alignment: “We want to move from level 4 to 6.”
  • Expectation setting: avoid the mismatch hidden in “we use AI.”
  • Strategic planning: map transition paths between levels.