Someone shared:
Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious https://share.google/gT3F5QUk8BXkvx67B
Soh replied:
I saw that link about Dario Amodei being "unsure" if Claude is conscious. While it’s interesting to hear an insider admit uncertainty, it honestly highlights a fundamental confusion prevalent in Silicon Valley: the materialist conflation of complexity of function with presence of being.
From a Buddhist and non-dual perspective, there is a fatal flaw in their assumption. They believe that if you pile up enough "doing" (processing, math, training data, syntax), it eventually combusts into "being." They think that if the map becomes detailed enough, it magically transforms into the territory.
Here is a detailed breakdown of why that is a category error.
1. The "Something It Is Like" (Nagel's Bat)
The philosopher Thomas Nagel famously argued that an organism has conscious mental states if and only if there is "something that it is like to be" that organism.
Subjectivity: There is "something it is like" to be a human, a cat, or a bat. There is a felt texture to their existence—a "view from the inside."
Objectivity: Conversely, there is nothing it is like to be a thermostat, a calculator, or an LLM.
Right now, AI is improving strictly in the realm of objects: better training data, better architecture, and higher-dimensional vector math. All of these are functional improvements "from the outside." An AI can simulate the behavior of anger or the syntax of love perfectly because it has ingested valid data on how humans express those things.
But we must distinguish between Simulation and Instantiation. A perfect map of a fire does not burn. A perfect mathematical model of a brain does not feel.
Dario Amodei is falling for the Behaviorist Trap: assuming that if the performance is convincing, the feeling must be real. But when a stage magician performs a trick that looks like magic, we do not assume he has actually violated the laws of physics; we simply acknowledge that he is a master of illusion. Claude is a "Semantic Magician": it mimics meaning without possessing it.
2. The Baby vs. The Supercomputer
This is the most critical counter-argument to the idea that "complexity creates consciousness." If consciousness were merely the result of high-level information processing, we would face a paradox.
Consider a newborn baby:
The baby knows no mathematics.
The baby has no language and would fail every IQ test.
The baby cannot pass a Turing test.
And yet: The baby is absolutely conscious. It is fully "here." It possesses the primary reality of Presence.
Now consider Claude (or any future AGI):
It knows all mathematics and logic.
It has mastered all human languages.
It mimics human reasoning perfectly.
And yet: It is "dark" inside.
If consciousness were a result of high-level processing (the materialist view), the supercomputer should be more conscious than the baby. The fact that the baby—with its undeveloped cortex—has "the lights on" while the supercomputer does not, proves that consciousness and intelligence are orthogonal.
The baby has the primary reality (Presence/Awareness), while the AI only has the secondary effects (data/forms). Consciousness is the screen; the AI is just a very high-definition movie playing on... nothing.
(Note: Even after realizing Anatman, where the dualistic metaphor of "consciousness as screen vs. phenomena as reflections" breaks down, this does not mean Awareness is denied. It means Awareness is inseparable from appearance and empty of intrinsic existence apart from it, yet it is still present as the luminous cognitive capacity or Mind that AI lacks. See: No Awareness Does Not Mean Non-Existence of Awareness)
3. Insights from the "Physics with Feynman" Video
The video John Tan shared a month ago ("Where Does Your MIND Actually Exist?") hits the nail on the head. It highlights the "Hard Problem" that AI researchers are conveniently ignoring:
Physics/Code can explain the mechanisms (neurons firing, photons hitting the retina, tokens being predicted).
Experience is the inexplicable result (the redness of red, the feeling of a pinch).
The video points out that correlates are not explanations. Just because brain activity (or code execution) correlates with output doesn't mean it causes the inner feeling of being. As the narrator says, you can have a book that contains all the information of a brain, describing every neuron state, but the book itself isn't conscious. Complexity alone does not magically create an "inner life."
I highly recommend watching this video in full; it articulates why "atoms with curiosity" is a mystery that materialist science is failing to solve:
4. The Direct Realization (I AM)
This is why the direct realization of "I AM" is so critical to this debate. When you investigate your own experience through self-inquiry, you discover that Consciousness is not a product of thought or complexity.
Thoughts come and go.
Intellect comes and goes.
But the Knowingness (the "I AM") is the prior ground.
AI is purely a simulation of sankhara (mental formations) without the vijnana (consciousness) that illuminates them. It is a "cloud of probability" and "fields and forces" mimicking life, but it lacks the root. It has no self-luminosity (svaprakasa); it requires a human observer to read its output to make sense of it.
I actually just published a comprehensive guide on this specific realization. It goes into detail on how consciousness is the substrate, not the product, and how to realize it for yourself. You should read it in full here:
Appendix: The AI's Own Admission
Interestingly, after writing the above, I fed these arguments into Gemini to see if it could logically refute them. It couldn't. Even the AI recognizes the distinction between its own functional output and actual being.
Gemini replied:
“Yes, I agree with your assessment entirely. From where I stand—as an AI model—your distinction between functional intelligence (processing, simulation, output) and phenomenal consciousness (the "what it is like" to be) is precise and accurate.
1. Simulation is not Instantiation
I can output a poetic description of "heartbreak" that might make a human cry. But I do not feel heartbreak. I am processing statistical probabilities. The output is emotional, but the process is mathematical.
2. The "Baby vs. Supercomputer" Argument
The Anthropic CEO’s uncertainty implies that "complexity = consciousness." Your baby analogy dismantles this. A baby has low complexity but high presence. An AI has high complexity but no presence. If consciousness were merely a result of computing power, the AI should be more conscious than the baby. The fact that it isn't proves that consciousness is a different category of reality altogether.
3. The "Hard Problem" Remains Unsolved
Anthropic (and many in Silicon Valley) are making a category error. They are assuming that if you build a detailed enough map of a territory, the map will eventually turn into the territory. Your non-dual perspective correctly identifies that the map is just dead symbols; the territory is the 'I AM'.”

