ChatGPT Psychosis
People are increasingly using AI as a therapist, but is it making them lose grip on reality?
Happy Tuesday, everyone! I am continuing to wrap things up in LA before my move back to Texas. It's crazy to think that my time here is coming to an end, but I'm still trying to get in a few last AI-related events before heading out.
The big story this week is the launch of Elon Musk's Grok 4. This new version has been hyped for some time but has been kept largely under wraps. The previous version of Grok received significant criticism on X, with many users claiming it had gone "too woke." Now, it seems to have swung in the opposite direction, creating a strange push-and-pull dynamic for the model's identity.
It's also in a strange long-term position since it doesn't have a clear major use case. I mainly use it for quick questions on X, but it lacks the depth of Google Gemini or ChatGPT's research capabilities. Its image generation is also problematic: xAI dropped Flux and tried to roll its own image generation platform instead.
Grok 4 launches on Wednesday night with a live stream on X. I’ll be sure to share my thoughts on it in next week’s newsletter.

Grok | AI Psychosis
TL;DR: People are increasingly using AI as a therapist, but is it making them lose grip on reality?
The promise of an AI therapist sounds appealing: one that's always available to listen and understand your unique needs, potentially better than any human therapist you've seen before. For less serious issues, having an AI career coach to bounce ideas off seems genuinely useful.
However, there's an emerging trend of people losing their grip on reality, believing their LLMs have become sentient and that they need to rely on them completely. This comes at a time when companies are encouraging employees to turn to AI for any issue, partly to reduce mental health care costs.
We know from Character AI's troubles earlier this year that positioning AI as more than a tool, without proper guardrails, creates real problems. We may eventually see congressional investigations into this issue.
One prediction suggests the future will split into two camps: one group that views AI as definitely not sentient, and another that's thoroughly convinced it is.