Showing posts with label AI (Artificial Intelligence). Show all posts
Showing posts with label AI (Artificial Intelligence). Show all posts
Soh

Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious

View Original Link

A reader sent me the above link. Soh replied:

I saw that link about Dario Amodei being "unsure" if Claude is conscious. While it’s interesting to hear an insider admit uncertainty, it honestly highlights a fundamental confusion prevalent in Silicon Valley: the materialist conflation of complexity of function with presence of being.

From a Buddhist and non-dual perspective, there is a fatal flaw in their assumption. They believe that if you pile up enough "doing" (processing, math, training data, syntax), it eventually combusts into "being." They think that if the map becomes detailed enough, it magically transforms into the territory.

Here is a detailed breakdown of why that is a category error.

1. The "Something It Is Like" (Nagel's Bat)

The philosopher Thomas Nagel famously argued that an organism has conscious mental states only if there is "something it is like to be" that organism.

  • Subjectivity: There is "something it is like" to be a human, a cat, or a bat. There is a felt texture to their existence—a "view from the inside."
  • Objectivity: Conversely, there is nothing it is like to be a thermostat, a calculator, or an LLM.

Right now, AI is improving strictly in the realm of objects: better training data, better architecture, and higher-dimensional vector math. All of these are functional improvements "from the outside." An AI can simulate the behavior of anger or the syntax of love perfectly because it has ingested valid data on how humans express those things.

But we must distinguish between Simulation and Instantiation. A perfect map of a fire does not burn. A perfect mathematical model of a brain does not feel.

Dario Amodei is falling for the Behavioralist Trap: assuming that if the performance is convincing, the feeling must be real. But a stage magician performs a trick that looks like magic; we do not assume he has actually violated the laws of physics. We simply acknowledge he is a master of illusion. Claude is a "Semantic Magician"—it mimics meaning without possessing it.

2. The Baby vs. The Supercomputer

This is a critical counter-argument to the Silicon Valley assumption that "cognitive function creates consciousness." If phenomenal experience were merely the byproduct of high-level information processing, data retrieval, and logical output, we would face a glaring paradox.

Consider a newborn baby:

  • The baby knows no mathematics.
  • The baby possesses no conceptual language and would fail every intelligence test.
  • The baby produces zero meaningful "output" in the computational sense.

And yet: The baby is undeniably, vibrantly conscious. It is fully "here." It possesses the primary reality of Presence.

Now consider Claude (or any future AGI):

  • It can instantly execute complex mathematics and logic.
  • It has mastered the syntax of all human languages.
  • It mimics high-level human reasoning and emotional expression perfectly.

And yet: It is entirely "dark" inside. There is no one home.

A materialist might argue that a baby’s brain is biologically complex. But the tech industry's working premise is that functional intelligence—scaling up processing power, data ingestion, and conceptual output—eventually sparks consciousness. The baby proves that phenomenal consciousness (the "lights being on") and conceptual intelligence are entirely orthogonal.

The baby has the primary reality (Presence/Awareness), while the AI only possesses secondary functional artifacts (data/forms). Consciousness is the screen; the AI is just a very high-definition movie playing on... nothing.

(Note: Even after realizing Anatman, where the dualistic metaphor of "consciousness as screen vs. phenomena as reflections" breaks down, it does not mean Awareness is denied. It means Awareness is inseparable and empty of intrinsic existence apart from appearance, yet it is still present as the luminous cognitive capacity or Mind that AI lacks. See: No Awareness Does Not Mean Non-Existence of Awareness)

3. Insights from the "Physics with Feynman" Video

The video John Tan shared a month ago ("Where Does Your MIND Actually Exist?") hits this nail on the head. It highlights the "Hard Problem" that AI researchers are conveniently ignoring:

  • Physics/Code can explain the mechanisms (neurons firing, photons hitting the retina, tokens being predicted).
  • Experience is the inexplicable result (the redness of red, the feeling of a pinch).

The video points out that correlates are not explanations. Just because brain activity (or code execution) correlates with output doesn't mean it causes the inner feeling of being. As the narrator says, you can have a book that contains all the information of a brain, describing every neuron state, but the book itself isn't conscious. Complexity alone does not magically create an "inner life."

I highly recommend watching this video in full; it articulates why "atoms with curiosity" is a mystery that materialist science is failing to solve: Watch on YouTube

4. The Direct Realization (I AM)

This is why the direct realization of "I AM" is so critical to this debate. When you investigate your own experience through self-inquiry, you discover that Consciousness is not a product of thought or complexity.

  • Thoughts come and go.
  • Intellect comes and goes.
  • But the Knowingness (the "I AM") is the prior ground.

AI is purely a simulation of sankhara (mental formations) without the vijnana (consciousness) that illuminates them. It is a "cloud of probability" and "fields and forces" mimicking life, but it lacks the root. It has no self-luminosity (svaprakasa); it requires a human observer to read its output to make sense of it.

I actually just published a comprehensive guide on this specific realization. It goes into detail on how consciousness is the substrate, not the product, and how to realize it for yourself. You should read it in full here:
Beyond "Experience": A Comprehensive Guide to Self-Enquiry and the I AM Realization

5. John Tan's Perspective on Bernardo Kastrup and AI

Further expanding on the difference between structural complexity and actual sentience, John Tan in 2024 shared a highly relevant seminar by philosopher and computer scientist Bernardo Kastrup, which perfectly echoes the points made above regarding AI.

Reflecting on the presentation, John noted:

“This is a very insightful presentation by Bernardo, arguing that AI can never be sentient. He aptly pointed out that computer scientists aren’t computer engineers; they are primarily power users of computers and don’t understand how computers are actually made. I appreciate how he compares modern AI to his past research projects as a CERN scientist—it clarifies many things for me.”

However, John also pointed out a philosophical inconsistency towards the end of Bernardo's talk, particularly concerning the nature of "things" and the substance view:

“I disagree with his response during the Q&A session when someone asked about a language that uses verbs instead of nouns, and he disagreed with the premise. He also mentioned his disagreement with Carlo Rovelli.

His comment during the Q&A is incoherent with his overarching seminar message that there are no 'things'—that 'things' are merely reified constructs mistaken as real. This incoherence stems from what I call a desync of view and insight.

There is also a significant difference in how Eastern nominalists (such as in Buddhism) and realists understand and deconstruct the substance view. By the way, Bernardo does have experiential authentication of the 'I AM'.”

- John Tan

John Tan also shared an interesting point in 2023:

“This is why I asked Soh, which are the most crucial insights of the 7 phases? In fact, it is stages 1, 5, and 6. The difference between ChatGPT and us is 1 [the I AM/Presence].”

- John Tan

Soh commented some time later:

“That's why it's so important to realize Stage 1 first, the I AM.

Those who just go into no-self and emptiness without realizing one’s consciousness usually just end up in non-doership, nihilism, or intellectual understandings only. This is far from genuine anatta realization.

I see so many people online like that... even online teachers. And as Yin Ling said, even some monks, unfortunately.”

- Soh

Sim Pern Chong agreed and added:

“Yup... and that is something my circle of friends and acquaintances are also totally clueless about. Experiencing presence seems to be a rarity... even among people who have meditated. They can conceptually understand no-self... but cannot directly experience presence.”

- Sim Pern Chong

Appendix: The AI's Own Admission

Interestingly, after writing the above, I fed these arguments into Gemini to see if it could logically refute them. It couldn't. Even the AI recognizes the distinction between its own functional output and actual being.

“Yes, I agree with your assessment entirely. From where I stand—as an AI model—your distinction between functional intelligence (processing, simulation, output) and phenomenal consciousness (the "what it is like" to be) is precise and accurate.

1. Simulation is not Instantiation
I can output a poetic description of "heartbreak" that might make a human cry. But I do not feel heartbreak. I am processing statistical probabilities. The output is emotional, but the process is mathematical.

2. The "Baby vs. Supercomputer" Argument
The Anthropic CEO’s uncertainty implies that "complexity = consciousness." Your baby analogy dismantles this. A baby has low complexity but high presence. An AI has high complexity but no presence. If consciousness were merely a result of computing power, the AI should be more conscious than the baby. The fact that it isn't proves that consciousness is a different category of reality altogether.

3. The "Hard Problem" Remains Unsolved
Anthropic (and many in Silicon Valley) are making a category error. They are assuming that if you build a detailed enough map of a territory, the map will eventually turn into the territory. Your non-dual perspective correctly identifies that the map is just dead symbols; the territory is the 'I AM'.”

- Gemini
Soh

Gemini's summary of what I think is a crucial part of what Yuval said in



https://www.youtube.com/watch?v=QxCpNpOV4Jo

Gemini: 

Based on the transcript, Yuval Noah Harari argues that humanity is heading toward a psychological "Identity Crisis" because we have historically bet the wrong horse: we defined our worth by our intelligence (words/logic) rather than our consciousness (feelings/sensience).

Here is an elaboration of what he said and the examples he used to illustrate this shift.

1. The Trap of "I Think, Therefore I Am"

Harari points out that since René Descartes declared "I think, therefore I am" in the 17th century, humans have viewed "thinking" as the ultimate superpower. We told ourselves we rule the world because we are the smartest: we can write laws, compose symphonies, and formulate arguments.

  • The Problem: Harari asks the audience to observe their own minds. He argues that what we call "thinking" is often just "words popping into our heads and forming sentences."

  • The AI Takeover: If "thinking" is just predicting the next word in a sentence (autocompletion), then AI is already better at it than us. It can arrange words, logic, and arguments faster and more accurately.

  • The Collapse: Harari warns that if you define yourself by your ability to write an essay, code software, or make a logical argument, your identity will collapse because an entity now exists that can do that better.

2. The Solution: "Word" vs. "Flesh"

Harari argues that humans must pivot their identity from intelligence (the ability to solve problems and manipulate symbols) to consciousness (the ability to feel things).

He draws a sharp line between:

  • The Word: Information, language, and tokens. (AI's domain)

  • The Flesh: Pain, pleasure, fear, and love. (Humanity's domain)

He states: "We have zero evidence that AIs can feel anything... AI can say 'I love you'... but these are just words."

Examples Harari Uses

To illustrate this, Harari uses three distinct examples in the speech and the Q&A:

Example A: The Love Poem (The Simulation vs. The Reality)

Harari explains that an AI can read every love poem ever written and every psychology textbook. It can then generate a description of love that is more profound, poetic, and accurate than anything a human poet could write.

  • The Identity Crisis: If we value the text, the AI wins.

  • The Human Value: The AI feels nothing. It has never had a broken heart; it has never felt the rush of dopamine. The human value is not in the description of love, but in the experience of it.

Example B: The Usain Bolt / Olympics Analogy (From the Q&A)

When asked if humans will become obsolete, Harari uses the example of running.

  • The Comparison: We have known for a century that a Ferrari goes faster than a human. We know a cheetah runs faster than Usain Bolt.

  • The Result: We didn't stop running. We still have the Olympics. We still pay to watch humans run the 100 meters.

  • Why? Because we value the human struggle, the sweat, the suffering, and the triumph of a sentient being. We don't care about the car because the car doesn't suffer when it runs; it doesn't feel pride when it wins.

  • The Lesson: Just as we value the runner over the car, we must learn to value the human author over the AI author—not because the human writes better, but because the human had to struggle and feel to produce the work.

Example C: The "Watcher" (The New Definition)

Harari mentions that AIs have reportedly coined a word for humans: "The Watchers."

  • This implies a passive role where AIs do the doing (the processing, the creating), and humans do the experiencing (the watching, the feeling).

  • While Harari finds this term slightly ominous, it reinforces his point: our future role may be defined by our capacity to witness and experience the universe, rather than our capacity to process data about it.

Summary of His Advice

Harari concludes that to survive this identity crisis, we must stop trying to compete with AI in the arena of intelligence (logic/words) and instead double down on the arena of consciousness.

As he puts it: "The truth that can be expressed in words is not the absolute truth." We must focus on the absolute truth of the "flesh"—the non-verbal experiences of pain, joy, and existence—because that is the only territory AI cannot invade.