Published June 16, 2025 | Version v2
Presentation | Open Access

Dealing with Generative AI, Harms and Mitigation Techniques

Creators

Ben Zhao
Description

This keynote addresses two key questions: are large language models (LLMs) the right interface for data and information access, and what harms do AI models pose to institutions such as libraries today? Zhao explains that current LLMs, while powerful, are fundamentally pattern-matchers rather than true reasoning systems. Newer techniques such as chain-of-thought processing, self-verification, and retrieval-augmented generation (RAG) offer partial improvements but rest on the same unreliable foundations. On the second question, Zhao highlights the growing problem of AI-driven web scraping, noting that most mitigation strategies offer limited protection. He concludes that today's generative AI LLMs are fundamentally flawed, and that meaningful progress will require new architectures built on better understanding and ethically sourced data. In the meantime, AI-driven crawlers pose an immediate threat, and with most conventional defences proving ineffective, commercial network-level blocking remains one of the few viable mitigation strategies.
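To make the RAG idea mentioned above concrete, here is a minimal sketch of the technique: retrieve the documents most relevant to a query, then ground the model's prompt in that retrieved text rather than in its parametric memory. This is an illustrative toy (keyword-overlap retrieval, invented example documents), not code from the keynote; real systems use embedding-based retrieval and an actual LLM call.

```python
def retrieve(query, documents, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, documents):
    """Assemble an LLM prompt grounded in the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical document collection for illustration.
docs = [
    "The library's reading room closes at 9 pm on weekdays.",
    "Interlibrary loans are processed within three business days.",
]
prompt = build_prompt("When does the reading room close?", docs)
print(prompt)  # the prompt that would be sent to an LLM
```

Note that the retrieval step narrows what the model sees but, as the talk argues, the generation step still rests on the same pattern-matching foundation.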

View on YouTube: https://www.youtube.com/watch?v=HL62C3U8epE

Files (5.1 GB)

OR2025-Ben-Zhao-keynote final.pdf (5.8 MB, md5:d4fb402bf0401af35c2f9b2c72ab22c9)
(unnamed, 5.1 GB, md5:db38b02954f47fa050ae1eff0bbb1520)

Additional details

Dates

Available: 2025-06-16