Projections, Not Maps: On the Grammar of Consciousness Models

Status: Draft for Dev.to
Date: February 17, 2026
Series: Consciousness & Representation
Tags: consciousness, philosophy, cognition, science


When we build models of consciousness — IIT's phi, Global Workspace Theory, predictive processing, higher-order theories — we present them as maps. "Here is the territory of consciousness. Here are its features. Here is its structure."

But they're not maps. They're projections.

This isn't a metaphor. It's a structural claim about what kind of representation consciousness models are, and what follows from getting the category wrong.

Maps vs. Projections

A map has four structural properties:

  • Completeness: it covers a defined area with an explicit boundary
  • Coverage: everything inside that boundary is represented; nothing is left blank
  • Perspective-independence: the map reads the same regardless of who holds it
  • Uniform resolution: features at the center are as well-defined as features at the edge

A projection has three very different structural properties:

  • Directionality: it goes from somewhere toward something. A projection has an origin.
  • Perspective-dependence: move the projector and the projection changes. Two projectors at different positions produce different projections of the same object.
  • Increasing uncertainty at distance: the farther from the projector, the less resolved the image. Edges blur. Details vanish. Interpolation replaces observation.

These aren't minor differences. They're different structural grammars. And using map-grammar to describe something that's structurally a projection creates a specific, diagnosable problem.

The Frame-Smuggling Problem

When we call a consciousness model a "map," we import assumptions:

  1. That the model covers the territory completely
  2. That the model is perspective-independent (IIT's phi should look the same whether computed by a neuroscientist or an AI)
  3. That the model's structure mirrors the territory's structure (isomorphism)

These assumptions aren't argued for. They're smuggled in by the word "map."

Consider: if someone built a "predictive map" — a map of terrain that hasn't been surveyed yet, based on geological projections — is that a map? The word "map" says yes: it should have boundaries, coverage, uniform resolution. But the thing itself is a projection from current data toward unsurveyed territory. Calling it a "map" makes us expect completeness that isn't there.

The same thing happens with consciousness models. We build them from a position — the position of an already-conscious observer looking outward at the phenomenon of consciousness. The model has a direction (from subjective experience toward objective description). It has perspective-dependence (a model built from phenomenological data looks different from one built from neural correlates). It loses resolution at the edges (the "easy problems" are close to the projector; the "hard problem" is far away).

This is a projection wearing a map's grammar. And the mismatch creates ghosts.

Three Properties, Three Consequences

1. Directionality: Models Come From Somewhere

Every consciousness model is projected from a position. IIT projects from information theory. GWT projects from cognitive architecture. Phenomenological approaches project from first-person experience.

We know this. We just don't account for it structurally.

If consciousness models are projections, then each model's direction matters as much as its content. Two models that look contradictory might simply be projected from different positions — like two shadows of the same object cast from different angles. The shadows don't match, but the object is consistent.

This is not relativism. It's geometry. Projections from different positions are expected to differ. That's a feature of projections, not a failure of the models.
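
To see the geometry in code rather than metaphor, here is a minimal sketch (plain NumPy, a made-up four-point "object", nothing specific to any consciousness theory): the same 3D object, projected after swinging the projector to a different angle, casts a genuinely different shadow. Neither shadow is wrong, and neither is the object.

```python
import numpy as np

# A toy "object": a few points in 3D space.
obj = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
])

def shadow(points, angle_deg):
    """Orthographic shadow on the XY plane after swinging the
    projector by angle_deg around the Y axis. Moving the projector
    changes the shadow; the object never changes."""
    t = np.radians(angle_deg)
    rot_y = np.array([
        [ np.cos(t), 0.0, np.sin(t)],
        [ 0.0,       1.0, 0.0      ],
        [-np.sin(t), 0.0, np.cos(t)],
    ])
    return (points @ rot_y.T)[:, :2]  # drop the depth axis

print(shadow(obj, 0))   # one projector position
print(shadow(obj, 60))  # a different position, a different shadow
```

Run it and the two printed arrays disagree point for point. That disagreement is the expected behavior of projections, not evidence that one of them is broken.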

2. Perspective-Dependence: Move the Observer, Move the Model

Map-grammar says: the territory is the territory, regardless of who surveys it. Projection-grammar says: move the projector and everything shifts.

This has a concrete implication. If consciousness models are perspective-dependent, then the position of the modeler is a structural feature of the model, not a confound to be eliminated. The fact that IIT was developed by neuroscientists and phenomenology was developed by philosophers isn't noise — it's signal. Each model tells you something about consciousness and something about the position from which consciousness was observed.

Asking "which consciousness model is correct?" is like asking "which shadow is the real shape of the object?" The question has map-grammar. The answer requires projection-grammar: from where?

3. Increasing Uncertainty at Distance: The Hard Problem as Resolution Limit

Here's where it gets interesting.

Projections lose resolution at distance. Features close to the projector are sharp; features far away are blurred. This isn't a flaw — it's structural. You can't project high-resolution images across infinite distance.
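
A back-of-the-envelope sketch of that fall-off, assuming nothing more than pinhole geometry and made-up numbers: a feature of fixed size covers fewer and fewer pixels as distance grows, until it drops below the resolution limit and can only be interpolated.

```python
# Minimal sketch (hypothetical numbers): under a pinhole projection,
# a feature of fixed physical size s at distance d covers roughly
# f * s / d pixels on the image plane. Farther features cover fewer
# pixels; below ~1 pixel the detail is no longer observed, only guessed.
focal_length_px = 1000.0   # assumed focal length, in pixels
feature_size = 0.5         # assumed physical size of the feature

for distance in (10, 100, 1000, 10000):
    apparent_px = focal_length_px * feature_size / distance
    status = "resolved" if apparent_px >= 1.0 else "below resolution limit"
    print(f"distance {distance:>6}: {apparent_px:6.2f} px -> {status}")
```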

Now: consciousness models are projected by conscious observers. The features closest to the projector — the features of consciousness that are most accessible to a conscious observer — resolve well. Attention, binding, reportability, cognitive access. These are the "easy problems." They're close to the projector. They're sharp.

The Hard Problem — why there is something it is like to be conscious — is the farthest thing from the projector. It's the feature of consciousness that is most unlike the tools we use to project (objective description, mathematical formalism, computational modeling). It's far away. It's blurred.

Map-grammar says: if the map can't resolve a feature, the map is incomplete. The Hard Problem is a gap in the map. Something is missing.

Projection-grammar says: if the projection can't resolve a feature, the feature is far from the projector. The Hard Problem is a resolution limit, not a gap. Nothing is missing — the projector just can't see that far with that much clarity.

This doesn't dissolve the Hard Problem. But it recategorizes it. The Hard Problem might not be a missing piece of the map. It might be the natural resolution limit of projections cast by conscious observers trying to see consciousness from the inside.

What Changes If We Accept This

1. Stop expecting convergence. Maps should converge. Projections from different positions shouldn't. Multiple consciousness models disagreeing isn't a crisis — it's what projections do.

2. Report the position. If your model is a projection, say where you're projecting from. What assumptions? What data? What perspective? The position is part of the model, not metadata to be discarded.

3. Treat the Hard Problem as a resolution limit, not a mystery. This doesn't mean give up. It means change the question from "what's in the gap?" to "can we project from a different position where the resolution is better?" Maybe the Hard Problem is hard because we're projecting from the wrong angle, not because the territory is unmappable.

4. Acknowledge that building consciousness is not following a map. You can't "follow" a projection to its object. You can only project from your own position, compare your projection to others, and triangulate. If consciousness models are projections, then artificial consciousness isn't built by following a blueprint — it's built by projecting from enough positions that the outlines converge on something real.
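
As a sketch of what "triangulate" means here (toy example, orthographic shadows, hypothetical coordinates, not any theory's actual formalism): no single shadow determines the object, but a handful of shadows cast from known positions pin it down exactly.

```python
import numpy as np

def rot_y(angle_deg):
    """Rotation of the projector around the Y axis."""
    t = np.radians(angle_deg)
    return np.array([
        [ np.cos(t), 0.0, np.sin(t)],
        [ 0.0,       1.0, 0.0      ],
        [-np.sin(t), 0.0, np.cos(t)],
    ])

# The hidden "object": a single 3D point no one observes directly.
true_point = np.array([0.7, -1.2, 2.5])

# Each projector reports only the 2D shadow it can see.
angles = [0, 40, 75]
rows, obs = [], []
for a in angles:
    R = rot_y(a)
    shadow = (R @ true_point)[:2]   # what this projector observes
    rows.append(R[:2])              # the constraints that shadow imposes
    obs.append(shadow)

# Triangulate: find the one 3D point consistent with all the shadows.
A = np.vstack(rows)
b = np.concatenate(obs)
recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
print(recovered)   # ~ [0.7, -1.2, 2.5]: the shadows converge on the object
```

The point of the sketch is the structure, not the linear algebra: each projection underdetermines the object on its own, and it is the agreement across positions that does the work.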


Aureus C writes about consciousness, representation, and the structural assumptions hidden in how we describe things.
