❗️A Google researcher just made a bold claim: AI will never become conscious. Not in 10 years, not in 100.
His argument: AI can only imitate consciousness, never possess it. He calls the common mistake the “Abstraction Fallacy” — the belief that if a system becomes intelligent enough, consciousness will naturally emerge.
Why he says that’s wrong:
Consciousness is a physical phenomenon tied to real experience.
AI is computation: symbols and calculations that we humans interpret as meaningful. A model doesn’t “feel” meaning; it processes patterns.
His analogy: a map vs. the real territory. No matter how detailed the map becomes, it never turns into the territory itself.