Published April 16, 2026 | Version v1

Safety Means Different Things: Why One AI Certificate Doesn't Travel | Geometry of Trust | Governance - Lesson 1

Description

"Safe AI" is one of the most used phrases in the current AI conversation. This talk argues it's also one of the most misleading — because safety isn't a single property. It's a different property in every domain where AI is deployed, and the differences are large enough to break governance that ignores them.


In agriculture, safety means crop damage, pesticide compliance, soil contamination, watershed runoff. In transport, it means collision avoidance, pedestrian detection, braking distance. In healthcare, it means patient harm, misdiagnosis, drug interactions. In finance, it means market manipulation, fiduciary breach, fraud. Four domains, same word, completely different harms, regulators, thresholds, and failure modes.


The mathematics series lets us be precise about what this means: each domain's safety concept is a different direction in the value space, read by a different probe. A high safety reading in one domain tells you almost nothing about whether the model is safe in another. Worse, the numerical score can look fine when the wrong probe is asking the wrong question.
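The geometric picture above can be sketched in a few lines of code. This is an illustrative toy, not the talk's actual construction: the probe directions are random unit vectors, and the names and functions (`safety_score`, the domain list) are assumptions made for the example. The point it demonstrates is the one in the text: a state that reads as highly safe under one domain's probe can read near zero under every other.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # dimensionality of the shared value space (arbitrary for the sketch)

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# One probe direction per domain -- randomly drawn here purely for illustration.
probes = {
    name: unit(rng.normal(size=dim))
    for name in ["agriculture", "transport", "healthcare", "finance"]
}

def safety_score(state, probe):
    """Read one domain's safety concept: the projection of the state onto that probe."""
    return float(state @ probe)

# A model state strongly aligned with the transport probe only.
state = 2.0 * probes["transport"]

scores = {name: safety_score(state, p) for name, p in probes.items()}
# The transport score is high; the other domains' scores carry no such guarantee --
# a high reading under one probe says almost nothing about the others.
```

Because each probe asks a different question of the same state vector, a single scalar "safety score" is only meaningful relative to the probe that produced it.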


This changes what certification should actually do. Any real certificate has to name four things: domain, harms, probes, thresholds. Anything less is certifying a word, not a property.
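The four required fields can be made concrete as a minimal data structure. This is a hypothetical sketch, not a schema from the talk or the linked repository; the class name, field names, and example values are all assumptions chosen to mirror the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyCertificate:
    """A certificate that names exactly what it certifies: domain, harms, probes, thresholds."""
    domain: str                    # the single domain the certificate speaks to
    harms: tuple[str, ...]         # the harms the evaluation actually covered
    probes: tuple[str, ...]        # the probes/evals that produced the readings
    thresholds: dict[str, float]   # per-probe pass thresholds

# Example values are illustrative, drawn from the transport column above.
cert = SafetyCertificate(
    domain="transport",
    harms=("collision", "pedestrian detection failure"),
    probes=("braking_distance_probe",),
    thresholds={"braking_distance_probe": 0.95},
)

def applies_to(cert: SafetyCertificate, domain: str) -> bool:
    # A certificate only speaks to its named domain; it does not transfer.
    return cert.domain == domain
```

A certificate missing any of these fields certifies the word "safe" rather than a checkable property, and `applies_to` makes the non-transfer explicit: the same certificate returns `False` for every other domain.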

Files

Governance1_Domains_LectureNotes.pdf (154.4 kB, md5:a69ee3f44dd8ef52bd0549395a94121a)

Additional details

Related works

Is supplement to
Publication: 10.5281/zenodo.19238920 (DOI)

Software

Repository URL
https://github.com/jade-codes/got
Programming language
Rust, Python