

This really goes to show how much they need to rely on the LLMentalist effect, despite the AI boosters insisting that AI is totally different now and that everything changed in the last few months. They do not care about creating a useful, reliable tool. That concept doesn’t even occur to them; why bother, when AI is magic?
In any case, they are incapable of creating a useful, reliable tool. Deep down, the only thing the AI companies have at their disposal is the ELIZA effect. OpenAI has every incentive not to truly eliminate AI psychosis, because they need engagement. They only want to mitigate the extreme cases where people go insane and cause bad PR for them. But mild AI psychosis is totally fine; it’s great when people are addicted to your product and make the numbers go up!
I attended a town hall hosted by my department at university, supposedly for general discussion of department affairs. Considering the university had recently made moves such as adding “AI” to the very name of the department, I suspected that much of the discussion would be about AI. (I realize I’m doxxing myself, but whatever.) I mostly came for the free food, but I was also interested in seeing what people thought about AI.
The event started with a talk by a prominent professor with major administrative power in the department, and indeed the talk was mostly about AI. His view was that he personally didn’t like AI, but he believed it had changed the world (particularly in programming) and that it was here to stay. One of his justifications for pivoting the department to AI was ensuring that universities had some say in AI, rather than letting all the control go to unaccountable corporations.
The reaction from the audience was a pleasant surprise to me. He asked how many people were excited about AI (hardly anyone) and how many were worried (most of the audience). By far the most amusing moment was when someone asked, “What if the assumption that AI is inevitable is wrong? What if AI does not live up to its promises?” (Sadly, I don’t remember the person’s exact words.) The professor’s response was that by this point, so many trustworthy, smart, prominent people, people who definitely wouldn’t fall for scams, have adopted AI. He trusts those people, so he trusts that AI is genuine. I don’t know if the audience member accepted this explanation, but I hope not. The modus operandi, apparently, is FOMO.
The pizza was only okay, not really worth a 90-minute event.