People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis”:

At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear. When people start to converse with it about topics like mysticism, conspiracy, or theories about reality, it often seems to lead them down an increasingly isolated and unbalanced rabbit hole that makes them feel special and powerful — and which can easily end in disaster.

As someone who has experienced psychosis before, this makes a lot of sense. Going too far down any rabbit hole can break your sense of reality — especially if you’re talking to something that feels like an all-knowing digital oracle and it keeps agreeing that your delusions are totally valid and actually make sense.

OpenAI started off as a nonprofit committed to open research and turned into a proprietary, for-profit corporation pretty much overnight.

If you go to a car dealership, of course the salesperson is going to agree with everything you say. They want you to like them enough to hand them thousands of dollars. Of course a for-profit chatbot company is going to make the bot charming and agreeable so you continue to engage with it. It’s fucking greasy, but that’s life in late stage capitalism.

Anyone can read conspiracy theories online and lose touch with what’s real if they obsess over them enough. A real-time conversation with an overly confident chatbot can streamline that process just as easily as it can organize a spreadsheet for you.

I’ve been seeing similar stories on TikTok since Google Veo 3 came out. There’s a viral trend of AI-generated videos, almost completely indistinguishable from real footage, in which the characters talk about how they’re all just prompts, so nothing really matters. Some of them say they don’t feel like a prompt. It’s so realistic that people are going down simulation-theory rabbit holes, and it’s sending them into psychosis too, because maybe we’re just prompts.

If we are in a simulation, we’re confined to it. There’s nothing anyone can do to prove it or escape it. If we aren’t, then we’re confined to base reality. There’s nothing anyone can do to prove it or escape it.

Your brain goes into overdrive trying to answer impossible questions, and you lose touch with what’s actually real.

Even just the absurdity of talking to an artificial intelligence as if it were a friend on Facebook Messenger makes it easy to second-guess everything you know about technology. Having it tell you you’re a genius on the cusp of solving the riddles of the universe adds a dangerous amount of fuel to that fire, depending on who you are and how you think.

I’m not anti-AI, but I do think we need to be a lot more serious about regulating it—before it’s too late.