Applying User Experience Principles when Researching Generative AI Interfaces
Description
Generative AI tools such as chatbots powered by large language models (LLMs) present a new modality for UX
researchers evaluating system interfaces with users. While conversational UIs help users understand
these tools more quickly, they introduce new challenges for researchers: how user prompting behaviors vary,
how generated content changes between instances, and how to contextualize AI-generated content alongside
other parts of the interface.
How do you evaluate the UI elements of generative AI chatbots? How do conversational UIs and content
generated in real time change your approach and evaluation criteria? This presentation covers a case study
in which Northwestern University Libraries evaluated an LLM-based research tool, and compares and contrasts
that approach with evaluations of non-generative AI experiences. The tool evaluated — the homegrown
generative AI research tool for Northwestern's Digital Collections (supported by an Institute of Museum and
Library Services grant) — uses a conversational UI to help users learn about and discover resources from a
large catalog of materials.
Attendees will learn approaches for researching and evaluating generative AI experiences, best practices for
developing a test plan, and how to translate findings and recommendations into actionable changes to the
experience.