Published January 1, 2026 | Version v1

Ep. 128: AI's Dial-Up Era: Looking Back from 2036

  • 1. My Weird Prompts
  • 2. Google DeepMind
  • 3. Resemble AI

Description

Episode summary: In this forward-thinking episode of My Weird Prompts, hosts Herman Poppleberry and Corn kick off the year 2026 by traveling a decade into the future. They imagine a world in 2036 where the "cutting-edge" AI of today is viewed as an adorable, clunky relic of the past—much like we view the screeching sounds of dial-up internet today. From the death of prompt engineering to the rise of zero-latency, embodied intelligence, the duo breaks down why our current obsession with context windows and text boxes is just a passing phase. They dive deep into the transition from "command-based" to "intent-based" computing, where AI understands your needs without requiring complex instructions. Herman explains the shift from monolithic models to federated swarms of specialized agents, and how the "hallucination" bug of the 2020s will eventually be seen as a primitive technical limitation. Whether you're curious about the future of robotics or the evolution of persistent holographic memory, this episode provides a fascinating roadmap for the next decade of innovation. Tune in to find out why your current smartphone might soon feel like a rotary phone.

Show Notes

As the calendar turned to January 1, 2026, *My Weird Prompts* hosts Herman Poppleberry and Corn took a moment to look not just at the year ahead, but a full decade into the future. Prompted by a thought experiment from their housemate Daniel, the duo spent the episode "time traveling" to 2036 to look back at the current state of artificial intelligence. Their conclusion? The sophisticated tools we use today—the LLMs, the image generators, and the coding assistants—are destined to become the "dial-up modems" of the future.

### The Death of the Prompt

One of the most striking insights from the discussion was the predicted obsolescence of "prompt engineering." In 2026, users pride themselves on their ability to craft complex instructions, using delimiters and "chain-of-thought" techniques to coax the best results out of a model. Herman argues that by 2036, this will seem as primitive as using a rotary phone.
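The command-based scaffolding the hosts describe can be sketched as a small helper that wraps a user's intent in the delimiters and explicit reasoning instructions of the era. Everything here is illustrative boilerplate, not a real API; the delimiter names and instructions are assumptions for the sketch.

```python
# A sketch of 2026-style "prompt engineering": the user's intent is wrapped
# in delimiters and step-by-step instructions before it ever reaches a model.
# All section markers and phrasing here are illustrative, not a real API.

def build_prompt(task: str, context: str) -> str:
    """Wrap a task in the delimiter + chain-of-thought boilerplate of the era."""
    return (
        "You are a careful assistant.\n"
        "Think step by step before answering.\n"
        "### CONTEXT ###\n"
        f"{context}\n"
        "### TASK ###\n"
        f"{task}\n"
        "### ANSWER ###"
    )

prompt = build_prompt(
    task="Summarize the meeting notes in three bullet points.",
    context="Notes: budget approved; launch moved to Q3; hiring freeze lifted.",
)
print(prompt)
```

The point of the episode is that all of this ceremony exists only to compress human intent into text the model can parse; "intent-based computing" would make the scaffolding itself obsolete.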

We are currently in a "lossy" phase of technology, where we must translate human intent into rigid strings of text. Herman suggests that the future lies in "intent-based computing." In this future, AI will possess such deep context regarding a user's life, professional history, and personal preferences that it will no longer require a three-paragraph explanation. A simple glance or a vague suggestion will suffice, as the machine will already understand the nuances of what "professional" or "creative" means to that specific individual.

### From Context Windows to Holographic Memory

The hosts also tackled the technical limitations of modern AI memory. Today, developers and users celebrate when a model's "context window" expands to a million tokens. However, Herman describes the current state of AI as a "brilliant assistant who gets hit with an amnesia ray every time you walk out of the room."

By 2036, the concept of a "window" will likely be replaced by what Herman calls "persistent, holographic memory." Instead of a blank slate at the start of every chat, a personal AI will have a continuous, decade-long relationship with its user. It will remember a casual comment about architectural styles from years prior and seamlessly apply that knowledge to a current project. The manual management of AI memory will become a relic of a more cumbersome era.
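The "manual management of AI memory" the hosts call a relic in the making looks roughly like this: with a fixed context window, older conversation turns must be dropped to fit a token budget. This is a minimal sketch in which token counts are approximated by whitespace-separated words; real tokenizers and memory strategies are more involved.

```python
# A sketch of manual context-window management: keep only the most recent
# turns that fit a fixed token budget. Word counts stand in for real tokens.

def trim_to_window(turns: list[str], budget: int) -> list[str]:
    """Keep the most recent turns whose combined word count fits the budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):       # walk newest-first
        cost = len(turn.split())
        if used + cost > budget:
            break                      # older turns fall off the "window"
        kept.append(turn)
        used += cost
    return list(reversed(kept))        # restore chronological order

history = [
    "User: I love Art Deco architecture.",
    "AI: Noted! Great choice.",
    "User: Design a poster for my cafe.",
]
print(trim_to_window(history, budget=12))
```

Note what gets lost: the casual comment about architectural styles is exactly the kind of detail that falls off the window today, and exactly what "persistent, holographic memory" would retain for a decade.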

### Zero Latency and the End of the "Thinking" Pause

One of the most relatable points of the episode was the "dial-up screech" of 2026: latency. Even the fastest models today have a slight delay as they generate tokens. Herman predicts that 2036 will be the era of "zero-latency intelligence."

Powered by specialized hardware—potentially optical or neuromorphic chips—AI responses will be instantaneous or even predictive. The duo joked about how future generations will find it hilarious that we used to sit and watch text scroll across a screen, waiting for a machine to "think." This shift will be supported by a move away from massive, energy-hungry server farms toward high-performance edge computing, allowing sophisticated models to run on local devices with minimal power consumption.

### The Embodiment of Intelligence

Perhaps the most significant leap discussed was the transition of AI from "inside the screen" to the physical world. While 2026 sees the early stages of humanoid robotics, these machines are still largely experimental. In a decade, Herman and Corn envision a world where the distinction between software and hardware is blurred.

"The room itself will be intelligent," Herman noted. We will look back at "dumb houses" with the same pity we reserve for life before electricity. In 2036, AI won't just be an app you open; it will be integrated into smart materials, ubiquitous sensors, and physical actuators that can perform tasks as simple as folding laundry or as complex as home surgery.

### The Rise of the Specialized Swarm

The episode also critiqued the current trend of "monolithic" models—the massive, trillion-parameter systems that try to be everything to everyone. Herman predicts a shift toward "federated swarms of specialized agents."

Instead of one giant brain that occasionally hallucinates because it is trying to balance logic with poetry, the future will involve thousands of tiny, hyper-specialized intelligences working in perfect coordination. You might have a "legal agent" and a "poetry agent" collaborating under the direction of a "core personal agent." This modular architecture will not only increase efficiency but will likely solve the "hallucination" problem that plagues current systems.
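The orchestration pattern Herman describes can be caricatured in a few lines: a core agent routes each request to a hyper-specialized sub-agent. The agent names and keyword-based routing below are invented purely for illustration; real multi-agent systems would negotiate, delegate, and verify far more richly.

```python
# A toy sketch of the "federated swarm" idea: a core personal agent dispatches
# requests to narrow specialists. Routing by keyword is a deliberate
# simplification for illustration only.

SPECIALISTS = {
    "legal": lambda q: f"[legal-agent] Reviewing clause: {q}",
    "poetry": lambda q: f"[poetry-agent] Drafting verse about: {q}",
}

def core_agent(query: str) -> str:
    """Dispatch to the first specialist whose domain keyword appears."""
    for domain, agent in SPECIALISTS.items():
        if domain in query.lower():
            return agent(query)
    return f"[core-agent] Handling directly: {query}"

print(core_agent("Check this legal clause for me"))
print(core_agent("Write poetry about rain"))
```

The modularity is the point: each specialist can be small, auditable, and grounded in its own domain, rather than one giant model balancing logic against poetry.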

### Solving the "Purple Fringe" of AI

Comparing AI hallucinations to the "purple fringing" found in early digital cameras, Herman explained that today's errors are a byproduct of the current architecture—specifically, the fact that models are essentially playing a high-level game of autocomplete. By 2036, the integration of formal logic and real-time grounding will make these errors a thing of the past. The idea that we currently have to tell an AI to "take a deep breath" or "think step-by-step" will be a source of comedy for our future selves.
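The "high-level game of autocomplete" framing can be made concrete with a deliberately tiny bigram model: it returns the statistically most common next word from its corpus, with no notion of truth at all. The corpus and example are invented for illustration; production models are vastly larger, but the fluent-without-grounding failure mode is the same in kind.

```python
# A toy illustration of the "autocomplete" framing of hallucination: a bigram
# model picks the most frequent continuation it has seen, nothing more.

from collections import Counter, defaultdict

corpus = "the moon is bright the moon is made of rock the moon is bright".split()

# Count which word follows which in the training text.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most common continuation, however little the model 'knows'."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("is"))  # statistics, not understanding
```

The model's answer is whatever was frequent, not whatever is true, which is exactly why Herman expects formal logic and real-time grounding, rather than bigger statistics alone, to retire the bug.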

As the episode concluded, Corn and Herman left the audience with a profound question: if we have access to zero-latency, perfectly truthful, and persistent intelligence at all times, how does the very nature of human learning and knowledge change? While they didn't have all the answers, one thing was clear: the AI we marvel at today is only the beginning of a much larger, much stranger journey.

Listen online: https://myweirdprompts.com/episode/ai-future-2036-retrospective

Notes

My Weird Prompts is an AI-generated podcast. Episodes are produced using an automated pipeline: voice prompt → transcription → script generation → text-to-speech → audio assembly. Archived here for long-term preservation. AI CONTENT DISCLAIMER: This episode is entirely AI-generated. The script, dialogue, voices, and audio are produced by AI systems. While the pipeline includes fact-checking, content may contain errors or inaccuracies. Verify any claims independently.

Files

ai-future-2036-retrospective-cover.png

