Consciousness is not what it seems
The hard problem of consciousness is not hard. It is confused.
The standard framing assumes that physical processes and conscious experience are two separate things requiring a bridge. Remove that assumption and the problem looks very different.
But that shift takes work.
Saying “consciousness is self-modeling” and declaring victory would be circular. The real question is this: why does the “something more” feeling arise so reliably, and why is it not evidence for what it seems to be evidence for?
The Self-Model Opacity Problem
Any system that models itself modeling will experience an explanatory gap that does not correspond to an ontological gap.
Your brain builds a representation of yourself as an entity in the world. This representation includes your body, your boundaries, your states. But here is the critical constraint: the self-model cannot include a full, transparent, real-time account of the mechanisms that generate it. It can include partial facts about its own implementation. It cannot include a complete picture at the same level of detail, running in parallel, without compression and tradeoffs.
The system has to run on itself.
So the self-model is substantially opaque to its own substrate. You experience yourself as an experiencer, but you have limited introspective access to the machinery producing that experience.
This creates the predictable effect that when you introspect, you find experiences. When you look at the brain, you find neurons. These seem like two different kinds of thing because you are accessing the same system through two channels that cannot fully see each other.
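To make the opacity claim concrete, here is a toy sketch in Python. It is not a model of the brain; the class, the parameter count, and the compression into “tone” and “arousal” are invented for illustration. The point is only that the same system looks entirely different through its two access channels.

```python
# Toy illustration of self-model opacity. Everything here is a placeholder.
import statistics


class SelfModelingSystem:
    def __init__(self):
        # "Substrate": thousands of low-level parameters the system runs on.
        self.substrate = {f"unit_{i}": (i * 37 % 100) / 100 for i in range(10_000)}

    def _compress(self) -> dict:
        # The self-model is a lossy summary. It cannot carry the full substrate
        # in parallel with the processes it is summarizing.
        values = list(self.substrate.values())
        return {
            "overall_tone": statistics.mean(values),   # felt "mood", not unit states
            "arousal": statistics.pstdev(values),       # felt "intensity", not mechanism
        }

    def introspect(self) -> dict:
        # First-person channel: only the compressed summary is available.
        return self._compress()

    def inspect_substrate(self) -> dict:
        # Third-person channel: the raw machinery, with no summary attached.
        return self.substrate


system = SelfModelingSystem()
print(system.introspect())               # {'overall_tone': ..., 'arousal': ...}
print(len(system.inspect_substrate()))   # 10000 parameters, none of them a "tone"
```

The two printouts describe the same system, but nothing in the summary looks like a unit and nothing in the unit list looks like a tone. The mismatch comes from the compression, not from a second kind of stuff.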
The Bridge Principle
But this still leaves the central question: why should self-modeling feel like anything? Why isn’t it just information processing with no experiential character?
Here is the bridge principle: wherever a system has first-person access to its own states, there is something it is like to be that system. “First-person access” means that the system represents its own states to itself in a way that can guide behavior, generate reports, and be compared against other states.
Now consider what it would mean to deny “there is something it is like” while granting first-person access. You would be saying the system represents its states to itself, but there is nothing those states are like from the system’s perspective.
By “from the system’s perspective” I mean this: there is a representational space inside the system that is the basis for its own self-ascriptions, comparisons, and control. Your visual field is a representational space. So is the felt sense of pain, and so is the “I am deciding” narrative. If state X is present in that space, then there is a fact of the matter about X as given to the system. That “as given” fact is what “what it is like” is trying to refer to. Treating it as an extra property over and above the representational fact is exactly the move I am rejecting.
If you insist there can be the full representational structure and still “nothing it is like,” then you are treating “as given to the system” as a meaningless decoration. But “as given” is precisely what distinguishes first-person access from mere information flow. Remove it and you have not described a system with first-person access and no feel. You have described a system with no first-person access at all, only processing. That is the bait-and-switch the p-zombie intuition relies on.
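To pin down what “first-person access” means in this functional sense, a deliberately minimal sketch may help. The state names, numbers, and methods below are placeholders invented for the example, not a theory of minds.

```python
# Minimal sketch of first-person access as defined above: states held in a
# representational space that grounds report, comparison, and control.
class MinimalAgent:
    def __init__(self):
        # The "representational space": states as given to the system itself.
        self.given_states = {"pain": 0.8, "warmth": 0.2}

    def report(self) -> str:
        # Self-ascription is read off the representational space.
        strongest = max(self.given_states, key=self.given_states.get)
        return f"I am in {strongest}"

    def compare(self, a: str, b: str) -> str:
        # Comparison between states happens inside the same space.
        return a if self.given_states[a] > self.given_states[b] else b

    def act(self) -> str:
        # Behavioral guidance is driven by the same states.
        return "withdraw" if self.given_states["pain"] > 0.5 else "continue"


agent = MinimalAgent()
print(agent.report())                   # "I am in pain"
print(agent.compare("pain", "warmth"))  # "pain"
print(agent.act())                      # "withdraw"
```

To keep the access while subtracting the “feel,” you would have to delete given_states; but then report, compare, and act have nothing to operate on, and the access disappears with it.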
The philosophical zombie thought experiment asks us to imagine all the functional access with “the lights off inside.” But what are the lights? They cannot be the access itself, since we stipulated access is present. They cannot be the states being accessed, since we stipulated those are present too. The “lights” turn out to be a placeholder for something additional that the thought experiment never specifies.
I am not claiming this is a logical contradiction per se, but I am claiming that when you try to specify what is being subtracted, the subtraction keeps slipping into an unspecified extra that does no explanatory work.
The Smuggled Assumption
Here is the only way to reject the bridge principle without changing the subject.
You must claim there is an additional fact beyond all functional, representational, and self-model facts. Call it P. P is phenomenal consciousness, the “what it is like,” conceived as a property over and above any facts about access, report, memory, attention, discrimination, learning, or self-model content.
Notice what P must be. If it changed any of those things, it would be a functional difference, and we would be back in the territory where the bridge principle applies.
So P is explanatorily idle. By stipulation it cannot show up in any third-person discriminations, predictions, reports, or control.
Some will object that P need not be an extra cause. It can supervene on physical states while the physical does all causal work. Fine. But then P adds no explanatory leverage. The world runs exactly as it would without positing P. You have preserved the hard problem by introducing a property that, even on your own view, explains nothing new.
That is not a solution. It is a way of making the problem insoluble by stipulation.
Views like Russellian monism treat the intrinsic nature of the physical as doing the work P was meant to do, but the challenge remains the same: what new explanatory leverage does that buy?
If you want to maintain the hard problem, you must say, “Yes, I am positing an extra property that makes no functional difference, cannot be operationalized, and does no explanatory work, but I believe in it anyway.”
That is a position you can hold, but it is not the default. It is a metaphysical commitment that needs defense.
Why the Question Misfires
Most people who take the hard problem seriously are not substance dualists. They are property dualists or nonreductive physicalists. They grant that everything is made of brain stuff. They still think there is an explanatory gap.
The problem is not dualism of substances, but rather dualism of access. We have third-person access to brain activity and first-person access to experience. These two channels have very different properties. Third-person descriptions are public and decomposable. First-person access is private and opaque to its own structure.
When we ask “why does brain activity produce experience,” we are demanding a third-person explanation that feels like first-person access. That demand cannot be met, because the mismatch is generated by the access asymmetry itself.
Evidence That Constrains the Options
Consider split-brain patients. When the corpus callosum is severed, the two hemispheres can no longer share information directly. You can present information to one hemisphere that the other cannot access.
The left hemisphere, which controls speech, will confidently explain actions initiated by the right hemisphere, even though it has no access to the actual reasons. If the right hemisphere sees “walk” and the patient stands up, the left hemisphere generates a plausible explanation (“I wanted to get a drink”) without any awareness of confabulating.
This is what you should expect if “what it is like” is a property of what makes it into the self-model, not a separate glow attached to processing. The self-model constructs a narrative. It does not passively record its own causal story. First-person reports are constructed and can be wrong about causes and contents. That supports opacity. It does not prove there are no phenomenal properties, but it does show that introspection is not the transparent window it feels like.
Steelmanning the Opposition
The best version of the hard problem claims that even after you have explained every functional fact about the brain, there remains a further question.
Why is there phenomenal experience at all?
My answer requires separating two claims.
Claim 1: The “gap feeling” is explained by self-model opacity and access asymmetry. This is an explanatory claim about why the sense of mystery is stable.
Claim 2: There is no further ontological fact beyond the functional facts. This is a metaphysical claim.
The opponent can accept Claim 1 and deny Claim 2 by positing P.
My response: positing P is not free. P must be explanatorily idle by construction. If P made any difference to behavior, report, or self-model, it would be functional, and the bridge principle would apply. If P supervenes without adding causes, it does no work.
The phenomenal realist owes an argument for why we should believe in a property that cannot be detected, cannot be used, and makes no difference to anything. I have not seen one that does not ultimately reduce to “it just seems that way,” which is exactly what self-model opacity predicts.
Objections
Objection 1: You collapsed phenomenal consciousness into access consciousness.
No. I am arguing that the distinction, as typically drawn, is unstable. When you try to specify what phenomenal consciousness is over and above access, you get either (a) a functional difference, which collapses back into access, or (b) an intrinsic property with no explanatory role. I have not collapsed the distinction; I have shown that maintaining it requires a commitment most people do not want to make explicitly.
Objection 2: A system can represent “I am in state X” with no inner feel. It is just computation with self-referential data structures.
Specify what “inner feel” means over and above the representational facts. If it means the state is available for report, comparison, and behavioral guidance, then you have described access. If it means something additional, you are back to positing P. There is no stable middle ground.
Objection 3: You have just redescribed the problem.
No. I have located the generator of the problem. The gap feeling is real, and it arises from self-model opacity. But its existence does not entail that there is a corresponding gap in reality.
Implications
Research into neural correlates of consciousness is valuable, but it should be framed as mapping mechanisms of self-modeling and integration, not hunting for an extra ingredient. I suspect the truth lies somewhere at the intersection of Minsky’s society of mind, Hofstadter’s strange loops, Baars’s global workspace, and Dennett’s multiple drafts.
Whether AI systems can be conscious becomes a tractable question: check whether they implement rich self-modeling, not whether they have a property that by construction cannot be detected.
The idea of uploaded intelligence then becomes feasible. I’ve written about this extensively here: https://blog.thegrandredesign.com/p/navigating-the-ship-of-theseus
Philosophical traditions that treat consciousness as fundamental (idealism, panpsychism, certain contemplative traditions) are building on a feature of self-models, not a fact about reality. The “primacy of experience” is epistemic, not ontological.
The feeling that materialism “leaves something out” is itself part of the physical process. If you want to insist something is left out, specify what it is and what work it does. If you cannot, the insistence is the self-model doing what self-models do.


