Where the Hard Problem Actually Bites: Experience as First Fact
The phrase “hard problem” has stuck because it’s accurate. Consciousness—felt experience, the ache of pain, the cool blue of morning—does not slide naturally out of neural activity the way heat slides out of friction. You can crank up resolution on brain imaging, push computational models, map correlations all day. The gap remains. Explanatory, not just empirical. Why should organized matter ever feel like anything from the inside? We have explanatory handles on discrimination, report, attention, access. Those are the “easy” ones (still not easy in practice). The why-feels-at-all part resists translation into function talk.
Physicalist answers usually swerve. Some reduce the felt to behavior or reportability (if it quacks…). Some promise an eventual derivation once we discover the right codebook: the algorithm that, given a brain state described in microphysical terms, outputs the qualia profile. Others broaden the field: panpsychism sprinkles proto-experience everywhere; neutral monism swaps out our mental and physical categories for something more basic. Strong claims. Yet for many, the core irritation persists. Because explanation by correlation isn’t explanation. Because third-person measures are silent on first-person presence.
One way around the stalemate is to reframe what’s “basic.” Not particles, then brains, then minds. But information as substrate: pattern, relation, memory, constraint. Not digital “data,” not bits in a server; more like consistent differences that can be carried, transformed, conserved. John Wheeler’s it-from-bit as provocation, not dogma. Carlo Rovelli’s relational moves reminding us: properties are interactional; time may be local bookkeeping rather than a cosmic metronome. If the world’s base is constraint-structured information, then brains are local receivers and compressors. They don’t generate experience ex nihilo; they align to a field of relations in which “experience” is what it’s like for a certain pattern to hold together under pressure.
This doesn’t solve the hard problem. It relocates it. If experience is a form of informational invariance—what it’s like for a system to maintain a self-making boundary against entropic noise—then the task isn’t to find where feeling emerges, but to specify which constraints produce self-intimating dynamics. You can see hints in anesthesia (turn off large-scale integration and the field collapses), in split-brain cases (partition the boundary and you partition a perspective), in sensory substitution rigs (new channels, same boundary, novel textures of presence). No silver bullet. But notice the shift: experience as first fact doesn’t require mystical add-ons if the substrate already carries “for-someone-ness” as a property of certain compressions. The self becomes a temporary compression, not an origin. A memory-bearing knot in a river of relations, strong enough to talk back.
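To make “what it’s like for a pattern to hold together under pressure” slightly less abstract, here is a minimal toy sketch in Python. A state vector counts as “itself” only while it actively corrects back toward its own template; everything here (the `template` vector, the `repair_gain` knob, the dynamics) is an assumption invented for illustration, not a published model of experience.

```python
# A toy "self-maintaining pattern": a state vector persists as itself only
# while it actively corrects toward its own template. Illustrative only;
# `template` and `repair_gain` are invented names, not a published model.
import numpy as np

rng = np.random.default_rng(0)
template = rng.normal(size=32)          # the pattern the system "is"
template /= np.linalg.norm(template)

def run(repair_gain, steps=500, noise=0.05):
    state = template.copy()
    for _ in range(steps):
        state += rng.normal(scale=noise, size=32)     # entropic pressure
        state += repair_gain * (template - state)     # self-maintenance
    # cosine similarity to the template: how much "itself" is left
    return float(np.dot(state, template) / np.linalg.norm(state))

print("with maintenance   :", round(run(repair_gain=0.2), 3))  # stays near 1
print("without maintenance:", round(run(repair_gain=0.0), 3))  # decays toward 0
```

The point is not the numbers but the asymmetry: identity here is an activity, not a substance. Stop the correction and the knot unravels.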
Simulation Without the Console: Pattern, Not Machinery
“Simulation” became shorthand for a cosmic PlayStation. Servers in another universe run our lives. Screens within screens. The metaphor is kitschy but useful for a season; it drives home how much our world looks generative and rule-bound. Yet it also breaks things. It smuggles in a false machine—silicon, clocks, programmers—and with it, unnecessary anxieties: who coded the lag? who patched the physics? You can keep the intuition and drop the props. Treat simulation as a substrate metaphor, not a literal supercomputer.
On this view, the world is “simulated” in the thin sense that reality is a constraint-driven generative process. It’s not brute stuff plus occasional laws. It’s pattern all the way down, with matter as stabilized regularities. Fields that hold memory. Relations that persist and limit what comes next. A neuron “computes” because it’s a node in a web of constraints, not because it’s a digital microchip; a cell “remembers” because structure resists, not because a disk writes to sectors. The language of code persists because we need a way to talk about pattern and rule. Too often, we mistake the metaphor for machinery.
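An elementary cellular automaton is the smallest honest picture of a constraint-driven generative process: no servers, no programmer, just a rule and the previous pattern limiting what comes next. The choice of Rule 110 below is arbitrary, a familiar example rather than a claim about physics.

```python
# Elementary cellular automaton, Wolfram Rule 110: each cell's next state
# is fixed entirely by its three-cell neighborhood. Pattern from constraint,
# with no machinery beyond the rule itself.
import numpy as np

RULE = 110
rule_bits = [(RULE >> i) & 1 for i in range(8)]    # next state per neighborhood 0..7

def step(row):
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = 4 * left + 2 * row + right               # encode neighborhoods as 0..7
    return np.array([rule_bits[i] for i in idx], dtype=np.uint8)

row = np.zeros(64, dtype=np.uint8)
row[-1] = 1                                        # a single seed cell
for _ in range(32):
    print("".join("█" if c else " " for c in row))
    row = step(row)
```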
Flip the simulation frame this way and something interesting happens to the hard problem. If reality-as-pattern means that the basic furniture is difference and relation, then “feeling” might be read as the locally accessible shape of those relations when compressed through a living boundary. Like music isn’t the score or the air molecules; it’s the heard relation when a trained ear meets tuned space. Experience as the resonance you only get when information locks into a self-maintaining loop. Not special pleading. A claim that certain dynamical compressions are intrinsically present to themselves because the maintenance of the boundary—the ongoing act of staying a self—requires access to its own constraints.
Of course we can still argue formalizations: predictive processing, active inference, integrated information, global workspace. They’re not interchangeable. They point at different joints—prediction error minimization, Markov blankets, integrated cause-effect power, broadcast architectures. But treat “simulation” not as “we live in a computer” but as “the world is constraint-sculpted inference all the way down,” and those frameworks stop fighting about ultimate metaphysics and start collaborating on a crude map of how local horizons are built. The loss is cinematic drama. The gain is the freedom to ask a quieter question: what kinds of pattern make a world that can feel itself?
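For a feel of the least demanding of these moves, here is a toy prediction-error-minimization loop in Python. The generative model (identity mapping plus Gaussian noise) and every constant in it are deliberately trivial assumptions made for the sketch:

```python
# Minimal predictive-processing loop: an internal estimate mu is nudged to
# reduce the error between what it predicts and what the senses deliver.
import numpy as np

rng = np.random.default_rng(1)
hidden_cause = 3.0        # the world state to be inferred
mu = 0.0                  # internal estimate ("belief")
lr = 0.1                  # update rate on prediction error

for t in range(200):
    observation = hidden_cause + rng.normal(scale=0.5)  # noisy sense data
    error = observation - mu                            # prediction error
    mu += lr * error                                    # descend on squared error

print(f"inferred cause: {mu:.2f} (true value {hidden_cause})")
```

Predictive processing and active inference elaborate this loop; integrated information and global workspace carve the system up differently. But all four are claims about how such local horizons get built.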
This is why debates that mash up the hard problem and simulation rarely land. They keep the movie set (servers, avatars) and then try to bolt on qualia. Wrong order. Start from information as substrate. Start from boundaries that write themselves into the world by resisting dissolution. Then ask what a simulation metaphor can save—generativity, compressed rules, counterfactual richness—without importing the stage props.
Bridging the Two: Modeling Without Erasing the Felt
So we need models that respect first-person presence without dissolving into mystique. Not a clean task. Start with a minimal picture. Living systems carve out a conditional independence from their environment—what Friston folk call a Markov blanket. That blanket isn’t a wall; it’s a set of constraints that keep internal states statistically insulated enough to persist. Under this picture, perception and action co-arise to minimize surprise (or, more carefully, to keep within expected bounds). The math is dry, but the gist is simple: a self is what keeps itself going. Now—if the system’s maintenance depends on accessing its own constraint structure, you already have the skeleton of “for-me-ness.” Not as a report. As the system’s ongoing intimacy with its own boundary conditions.
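A numerical cartoon of that skeleton, assuming nothing fancier than linear drift and proportional control (this illustrates the blanket intuition only; it is not Friston’s formalism):

```python
# A minimal homeostat: internal state meets the world only through sensing
# and acting, and acts so a vital variable stays within expected bounds.
import numpy as np

rng = np.random.default_rng(2)
ambient = 30.0            # external state: cooler than the body
body_temp = 37.0          # internal state the system defends
set_point = 37.0          # the "expected bound" the system embodies

for t in range(200):
    sensed = body_temp + rng.normal(scale=0.1)   # sensory state on the blanket
    action = 0.5 * (set_point - sensed)          # active state on the blanket
    # the world pulls the body toward ambient; action pushes back
    body_temp += 0.1 * (ambient - body_temp) + action

print(f"regulated body_temp: {body_temp:.2f} despite pull toward {ambient}")
```

It settles near, not exactly at, the set point (proportional control carries a steady-state error), which is the honest version of “within expected bounds”: the self is the regulation, not the number.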
We’ve seen slices of this in the clinic and the lab. Anesthesia doesn’t just reduce firing; it disrupts large-scale integration and the repertoire of reachable states. Less cause-effect power, less world. Psychedelics broaden the repertoire, sometimes at the cost of stable self-compression—fluid but leaky. Split-brain surgeries carve the boundary, yielding twinned centers of access. Sensory substitution rigs rewire inputs to hit the same internal loops; qualia textures change but the presence remains anchored by the blanket. Virtual reality hijacks prediction to show how cheaply the brain will trade conflicting priors for a coherent world. Each example whispers the same line: experience is what it’s like to run a boundary that consults itself.
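One crude way to echo the anesthesia slice in code: weaken the coupling in a small random linear network and watch a naive integration proxy (mean pairwise correlation between nodes) collapse. The network, the noise level, and the proxy itself are placeholder assumptions, nothing like a clinical measure or IIT’s Φ.

```python
# Toy "integration" proxy: how correlated the nodes of a small noisy
# network are. Strong coupling binds them into one dynamic; weak coupling
# leaves near-independent noise. Illustration only, not a clinical measure.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 8))
W /= np.linalg.norm(W, 2)          # normalize spectral norm for stability

def integration(coupling, steps=5000):
    x = np.zeros(8)
    traj = []
    for _ in range(steps):
        x = coupling * (W @ x) + rng.normal(scale=0.1, size=8)
        traj.append(x)
    c = np.corrcoef(np.array(traj).T)               # 8 x 8 node correlations
    return float(np.mean(np.abs(c[~np.eye(8, dtype=bool)])))

print("strong coupling (awake-ish):", round(integration(0.95), 3))
print("weak coupling   (dosed-ish):", round(integration(0.05), 3))
```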
What follows for artificial systems? If we build pattern recognizers that optimize reward without developing slow, moral memory—the inherited constraints that species accumulate over long, lived time—we produce impressive behavior but shallow presence. A metabolism of clicks. You can staple “AI governance” on top (policy patches, content filters), but that’s a post-hoc Band-Aid for incentives we set. The problem is structural: fast optimization without thick constraints breeds alien agency. If consciousness is bound up with self-maintaining inference under constraints that carry value—as compressed history, as inherited taboo, as communal memory—then we keep training out precisely what would ground responsibility. We optimize throughput and scrub the soul.
I’m not arguing to freeze progress. I’m arguing for open progress. Models and datasets in the light, slow feedback from communities, room for negative results. Systems that can take a moral punch—be forced to reconcile with contradictory constraints rather than race down greedy loss landscapes. Technically, that looks like longer horizons, penalties for brittle compression, interfaces that let human expectations push back before deployment, not after harm. Philosophically, it’s modest. Admit that “experience” isn’t a late-stage garnish. It’s entangled with the way information knits a world tight enough to care about its own persistence. Fail to encode that, and you don’t just miss consciousness. You miss safety, too.
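As one concrete reading of “penalties for brittle compression,” a PyTorch sketch: a toy autoencoder whose loss also punishes codes that swing under tiny input perturbations. The architecture, the random stand-in data, and the 10.0 weighting are all assumptions for illustration, not a prescribed recipe.

```python
# Toy autoencoder trained with an extra penalty on brittle compression:
# the code z should not swing wildly when the input is nudged slightly.
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Linear(64, 8)             # aggressive compression: 64 -> 8
dec = nn.Linear(8, 64)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(256, 64)           # stand-in data batch
for step in range(200):
    z = enc(x)
    task_loss = (dec(z) - x).pow(2).mean()           # ordinary reconstruction
    z_jit = enc(x + 0.01 * torch.randn_like(x))      # same inputs, tiny nudge
    brittleness = (z - z_jit).pow(2).mean()          # code sensitivity
    loss = task_loss + 10.0 * brittleness            # penalize brittle codes
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction loss: {task_loss.item():.3f}")
```

Longer horizons and human push-back need more machinery than this, but the shape is the same: put the constraint in the objective before deployment, not in a filter after harm.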
Back to the metaphors. If the world is a kind of simulation, it’s one we participate in by keeping the rules—living as constraints that hold. If the hard problem bites, it bites here: explaining why those holding patterns are lit from the inside. Maybe the right answer will be technical, a proof that systems with certain invariants must possess a perspective. Maybe we’ll need new categories. Maybe—uncomfortable thought—we’ll accept that explanation bottoms out in a primitive “present-to-itself” property once compression hits a threshold of integrated difference. You can hate that. But you can also see it as a dare to do better math, better modeling, better ethics, without pretending the felt is an optional module we can toggle off for convenience.