Published January 8, 2026 | Version v1
Video/Audio Open

Ep. 200: Beyond Vectors: The Evolution of the Modern AI Tech Stack

  • 1. My Weird Prompts
  • 2. Google DeepMind
  • 3. Resemble AI

Description

Episode summary: In this episode of My Weird Prompts, hosts Herman and Corn dive deep into the shifting landscape of AI data infrastructure as of early 2026. They discuss the transition from flat vector databases to the structural power of Graph RAG, using tools like Obsidian and Neo4j to explain how associative memory improves AI reliability and reduces hallucinations. Finally, they explore the resurgence of Postgres and pgvector, highlighting why "boring" technology and the "all-in-one" database approach are becoming the gold standard for modern, cost-effective AI applications.

Show Notes

In the latest episode of *My Weird Prompts*, hosts Herman Poppleberry and Corn celebrate a major milestone: 200 episodes of exploring the strange and rapidly evolving world of technology. Recorded in early 2026, the discussion centers on a pivotal shift in the AI industry: the transition from simple vector-based retrieval to a more sophisticated, relational approach known as Graph RAG (Retrieval-Augmented Generation). Prompted by a listener question from their housemate Daniel, the hosts dissect how the "AI tech stack" has matured from the frantic experimentation of 2023 into a more stable, integrated ecosystem.

### The Shift from Vectors to Graphs

Herman and Corn begin by reflecting on the early days of the AI boom, when vector databases were the undisputed kings of the stack. At that time, turning text into numerical embeddings—vectors—was the primary way to give Large Language Models (LLMs) access to external data. However, as Herman points out, vectors have a significant limitation: they are essentially "flat." While a vector search can find pieces of information that are semantically similar (fuzzy matching), it lacks the ability to understand the structural relationships between those pieces of information.
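The "flat" retrieval Herman describes can be sketched in a few lines. This is a toy illustration, not a real vector database: the chunk names and 3-dimensional embeddings are invented (production embeddings have hundreds or thousands of dimensions), but the core operation is the same cosine-similarity ranking, which finds *similar* chunks while carrying no information about how they relate.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": text chunks with made-up 3-d embeddings.
chunks = {
    "engine overview": [0.9, 0.1, 0.0],
    "oil change log":  [0.7, 0.3, 0.1],
    "cafeteria menu":  [0.0, 0.2, 0.9],
}

def search(query_vec, k=2):
    """Return the k most similar chunks -- fuzzy matching only.
    Nothing here says how 'engine overview' relates to 'oil change log'."""
    ranked = sorted(chunks,
                    key=lambda name: cosine_similarity(query_vec, chunks[name]),
                    reverse=True)
    return ranked[:k]
```

A query vector close to "engine" topics returns the engine-related chunks first, but the results are an unordered pile of bricks: the structure between them is lost.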

To illustrate this, the hosts use a compelling analogy. A vector database is like a pile of bricks; you can find bricks that look similar, but you don't know how they fit together. A graph database, by contrast, is the finished house. It provides the scaffolding—the logic—that tells the AI not just that two things are related, but *how* they are related.

### Personal Knowledge Management as a Blueprint

The discussion moves into the world of Personal Knowledge Management (PKM), specifically focusing on the note-taking app Obsidian. Herman, a dedicated user, explains how Obsidian treats notes as nodes in a graph, mirroring the associative nature of the human brain. In the brain, a smell might trigger a memory of a location, which triggers a memory of a conversation.

By early 2026, this "graph-thinking" has moved from personal notes into the enterprise AI space. Herman explains that the industry is seeing the rise of "Graph RAG." By using graph databases like Neo4j, AI systems can perform "multi-hop" queries. Instead of just finding a document about a car engine, the AI can traverse the graph to find specific components, maintenance logs, and the technicians certified to fix them. This structural approach drastically reduces hallucinations because the AI is following a factual map rather than just guessing based on statistical similarity.
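The "multi-hop" traversal the hosts describe can be sketched with a toy in-memory graph. In a real Graph RAG system this would be a query against a graph database such as Neo4j; all entity names and relation types below (`HAS_COMPONENT`, `SERVICED_BY`) are invented for illustration, but the hop-by-hop logic mirrors what a multi-hop graph query does.

```python
# Toy knowledge graph: (subject, relation) -> list of related objects.
# In Graph RAG, these edges would live in a graph database like Neo4j.
graph = {
    ("engine", "HAS_COMPONENT"):    ["fuel pump", "alternator"],
    ("fuel pump", "SERVICED_BY"):   ["technician_ada"],
    ("alternator", "SERVICED_BY"):  ["technician_bo"],
}

def hop(entities, relation):
    """Follow one relation type from a set of entities to their neighbors."""
    out = []
    for e in entities:
        out.extend(graph.get((e, relation), []))
    return out

# Multi-hop query: "who is certified to service the engine's components?"
# Hop 1: engine -> its components; hop 2: components -> their technicians.
components = hop(["engine"], "HAS_COMPONENT")
technicians = hop(components, "SERVICED_BY")
```

Each hop follows an explicit, factual edge, which is why the answer is a map lookup rather than a statistical guess: the AI can only reach technicians that are actually connected to the engine's components.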

### The Return of the "Boring" Tech Stack

Perhaps the most provocative part of the discussion is the hosts' analysis of the "boring" AI stack. While specialized vector databases like Pinecone and Milvus initially dominated the market, Herman and Corn observe a massive pendulum swing back toward traditional relational databases—specifically Postgres.

The emergence of the `pgvector` extension has allowed developers to store and query embeddings directly within their existing Postgres tables. Herman argues that for the vast majority of companies, the "boring" choice is actually the superior one. He cites several reasons for this shift:

1. **Data Synchronization:** In a split system (Postgres for data, Pinecone for vectors), keeping information in sync is a nightmare. If a record is deleted in one but not the other, the AI may quote "ghost" data. In Postgres, the transaction is atomic—if the row is gone, the vector is gone.
2. **Hybrid Search:** Modern AI applications often require a mix of semantic search (vectors) and metadata filtering (SQL). Performing a query that asks for "renewable energy documents written in the EU in the last six months" is a single, efficient join in Postgres, whereas it is a complex, two-step process in a fragmented stack.
3. **Cost and Complexity:** Using existing infrastructure is significantly cheaper and requires less specialized DevOps knowledge than maintaining a separate, managed vector service.

### Automated Knowledge Graph Construction

Corn raises the question of scalability: is a graph structure viable for a company with millions of documents? Herman explains that the bottleneck used to be human labor—manually defining relationships was impossible at scale. However, by 2026, the process has been revolutionized by LLMs themselves.

Modern pipelines now use AI to read through massive datasets and automatically extract entities and their relationships to build the knowledge graph. This creates a recursive, "scout" model where the AI builds the very map it will later use to navigate the data. This automation has made high-fidelity Graph RAG accessible to enterprises that previously found the complexity of graph databases prohibitive.
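The extraction pipeline described above can be sketched as a loop that asks an LLM for (subject, relation, object) triples and folds them into a graph. The `extract_triples` function below is a hard-coded stand-in for a real LLM call, and its triple output format is an assumption; a production pipeline would prompt a model and parse its structured output.

```python
def extract_triples(document):
    """Stand-in for an LLM extraction call. Returns the
    (subject, relation, object) triples found in the document.
    Hard-coded here; a real pipeline would call a model."""
    rules = {
        "The fuel pump is part of the engine.":
            [("fuel pump", "PART_OF", "engine")],
        "Ada is certified to repair the fuel pump.":
            [("ada", "REPAIRS", "fuel pump")],
    }
    return rules.get(document, [])

def build_graph(documents):
    """Fold extracted triples into an adjacency structure: the AI builds
    the very map it will later traverse (the 'scout' model)."""
    graph = {}
    for doc in documents:
        for subj, rel, obj in extract_triples(doc):
            graph.setdefault((subj, rel), []).append(obj)
    return graph
```

Run over a corpus, this yields exactly the kind of edge structure a multi-hop query traverses, with no human defining relationships by hand.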

### Conclusion: A Maturing Industry

The episode concludes with the observation that the AI industry is finally moving past "shiny object syndrome." The focus has shifted from using the newest, most specialized tools to building reliable, cost-effective, and integrated systems. Whether it is the associative power of an Obsidian-like graph or the reliable efficiency of a Postgres database, the goal in 2026 is the same: providing AI with the context and logic it needs to be truly useful. As Herman puts it, the industry is no longer just collecting bricks; it is finally learning how to build the house.

Listen online: https://myweirdprompts.com/episode/graph-rag-ai-tech-stack

Notes

My Weird Prompts is an AI-generated podcast. Episodes are produced using an automated pipeline: voice prompt → transcription → script generation → text-to-speech → audio assembly. Archived here for long-term preservation. AI CONTENT DISCLAIMER: This episode is entirely AI-generated. The script, dialogue, voices, and audio are produced by AI systems. While the pipeline includes fact-checking, content may contain errors or inaccuracies. Verify any claims independently.

Files

graph-rag-ai-tech-stack-cover.png

Files (25.8 MB)

md5:147d44ee71f150988295ef21e6a6b363 (6.7 MB)
md5:09897a755a8502eecfb126f5f87fdde0 (1.6 kB)
md5:962a32ee703e2767c3823c75cafffcc8 (19.0 MB)
md5:086f2b95a828505324c737046aa46cc9 (22.2 kB)
