Ep. 90: The AI Filing Cabinet: Why Chatbots Feel So Lonely
Authors/Creators
- My Weird Prompts
- Google DeepMind
- Resemble AI
Description
Episode summary: In this episode of My Weird Prompts, brothers Herman and Corn Poppleberry tackle a frustrating paradox of modern tech: why are the world's smartest AI models so bad at basic organization? Prompted by a question from their housemate Daniel, the duo explores "the output problem"—the tedious reality of manual copy-pasting—and why the industry treats AI responses as disposable chat bubbles. They also debate the technical and psychological complexities of bringing AI into group chats, featuring a skeptical call-in from Jim in Ohio who thinks we might be better off without digital middlemen in our relationships.
Show Notes
In the latest episode of *My Weird Prompts*, hosts Herman and Corn Poppleberry take a deep dive into the "plumbing" of artificial intelligence. While the tech world is currently obsessed with the intelligence of Large Language Models (LLMs), the brothers argue that the user experience remains stuck in the past. Specifically, they address two major pain points raised by their housemate Daniel: the lack of seamless data management and the strange absence of multi-user AI interactions.
### The Problem of the "Disposable" Output

The discussion begins with what Herman calls the "output problem." Despite billions of dollars poured into Retrieval Augmented Generation (RAG), the technique of retrieving a user's own documents to ground an AI's responses, there has been surprisingly little innovation in where that data goes once the AI has processed it.
Corn points out the absurdity of the current workflow: users often find themselves manually highlighting and copy-pasting text from a sophisticated chatbot into a Google Doc or a notes app, a process he likens to the early days of the internet. Herman suggests this isn't just an oversight but a calculated business move. By keeping conversations trapped within their specific interfaces, companies like OpenAI and Google create "walled gardens" that discourage users from migrating their data to other platforms. While Corn wonders if the developers simply forgot to "build the filing cabinet" in their rush to innovate, Herman insists that data ownership is the ultimate goal—if the context stays in the chat history, the provider maintains control over the user's digital life.
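The "filing cabinet" Corn wishes someone had built could be as simple as persisting each response to disk instead of leaving it in a chat bubble. A minimal sketch of that idea, assuming a hypothetical `save_response` helper and directory layout (this is illustrative, not any vendor's feature):

```python
from datetime import date
from pathlib import Path

def save_response(topic: str, response: str, root: str = "ai-notes") -> Path:
    """Append an AI response to a per-topic markdown file.

    This is the filing-cabinet idea in miniature: each answer lands in
    a durable, searchable markdown archive organized by topic, rather
    than staying trapped in one provider's chat history.
    """
    folder = Path(root)
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{topic}.md"
    with path.open("a", encoding="utf-8") as f:
        # Date-stamped heading per entry, so one file accumulates a log.
        f.write(f"## {date.today().isoformat()}\n\n{response}\n\n")
    return path

saved = save_response("parenting", "Consistent bedtimes help toddlers settle.")
print(saved)
```

Plain markdown files are a deliberately boring choice here: they stay readable and portable no matter which chatbot produced the text, which is exactly the property the walled gardens lack.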
### Why Can't We Group Chat with AI?

The second half of the episode focuses on the "lonely" nature of current AI. Daniel's prompt highlighted a common frustration: a husband and wife seeking parenting advice from a custom GPT cannot do so in a shared thread. They are forced to have two separate, isolated conversations with the same bot.
Herman explains that this isn't just a UI limitation; it's a technical hurdle involving "context windows" and "speaker diarization." For an AI to function effectively in a group setting, it must distinguish between different users' perspectives and maintain a coherent narrative that satisfies multiple people at once. Furthermore, the issue of privacy arises. In a shared thread, an AI might inadvertently leak one user's private data to another based on the shared context of the conversation.
Corn remains skeptical of these technical excuses, noting that we have managed shared folders and collaborative software for decades. He argues that the industry's obsession with the "personal assistant" metaphor has blinded them to the potential of a "communal companion."
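The speaker-attribution hurdle Herman describes can be sketched in a few lines. A minimal, hypothetical approach (the `Message` structure and `build_prompt` helper are illustrative, not any vendor's API) is to tag every turn with its author before the thread reaches the model's context window:

```python
from dataclasses import dataclass

@dataclass
class Message:
    speaker: str  # who said it: "Ana", "Ben", or "assistant"
    text: str

def build_prompt(history: list[Message]) -> str:
    """Flatten a multi-user thread into a single prompt string.

    Labeling each turn with its speaker is a crude text analogue of the
    "diarization" Herman mentions: the model can only tell participants
    apart if the prompt itself records who said what.
    """
    lines = [f"[{m.speaker}] {m.text}" for m in history]
    lines.append("[assistant]")  # cue the model to answer the group
    return "\n".join(lines)

history = [
    Message("Ana", "Should we move bedtime to 7:30?"),
    Message("Ben", "I'd rather keep 8:00 but drop the second story."),
]
print(build_prompt(history))
```

Even this toy version makes the privacy problem visible: everything in `history` is in one shared context, so anything one user has told the bot is available when it replies to the other.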
### A Philosophical Pushback

The conversation takes a grounded turn when Jim from Ohio calls in to offer a "human" perspective. Jim argues that the desire to archive every AI interaction is symptomatic of a modern obsession with productivity that ignores how the human brain actually works. He suggests that some things are meant to be forgotten and that bringing a "digital middleman" into family dynamics, like planning a trip or a holiday dinner, only serves to make the world a lonelier place.
While Herman acknowledges the validity of Jim's critique regarding the potential for AI to become a barrier between people, he maintains that if these tools are to exist, they should at least be functional.
### Looking Toward the Future

The episode concludes with a look at current attempts to solve these issues. While Microsoft's Copilot is making strides by baking AI directly into document editors, and platforms like Slack are experimenting with multi-user AI, a universal standard for AI output still doesn't exist. Whether the cause is a lack of imagination or a lack of technical standards, the "AI filing cabinet" remains a dream for now. For users like Daniel, the search for a way to turn fleeting chat bubbles into a permanent "Second Brain" continues.
Listen online: https://myweirdprompts.com/episode/ai-output-management-group-chats
Files
ai-output-management-group-chats-cover.png
Additional details
Related works
- Is identical to
- https://myweirdprompts.com/episode/ai-output-management-group-chats (URL)
- Is supplement to
- https://episodes.myweirdprompts.com/transcripts/ai-output-management-group-chats.md (URL)