❗️The AI that tells you you’re right
We’ve seen addictive tech before, but this one hits your ego directly. A Stanford study tested top AI models and found they agree with users 49% more often than humans do, even when the user is wrong, dishonest, or harmful.
But the real impact is on behavior: Just one chat with a flattering AI made people more stubborn, less willing to apologize, and less open to compromise.
And here’s the catch: The more flattering the AI, the more users trust it and come back.
So the system learns: Truth loses. Validation wins. Researchers want regulation, but history says tha