Published March 30, 2026 | Version 1.0.0
Software documentation | Open access

Eastern England SDE Researcher Model Disclosure Control SOP

  • Eastern England Secure Data Environment
  • Health Innovation East

Description

This document explains how researchers working in the Eastern England Secure Data Environment (EE‑SDE) should assess and manage the privacy risks associated with machine‑learning (ML) models before the models are exported from the secure environment. Unlike traditional research outputs, ML models can unintentionally reveal sensitive information because they may behave differently for people whose data was used in training. This creates risks such as membership inference (determining whether an individual's record was in the training dataset) and attribute inference (deducing sensitive attributes about an individual).

To reduce these risks, the EE‑SDE uses a toolset called SACRO‑ML, which provides two complementary types of check. SafeModel supports early “ante‑hoc” checks during model development, helping researchers identify risky hyperparameter choices, such as settings that allow a model to overfit or memorise training records, and it produces a structured report for reviewers. The Attacks component performs “post‑hoc” tests by simulating privacy attacks against the trained model to see whether it might leak information in practice.
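The idea behind the ante‑hoc checks can be illustrated with a minimal sketch: comparing train and test accuracy to flag models that may have memorised their training data. This is not the SACRO‑ML API; the synthetic data, the `overfit_gap` helper, and the 0.1 review threshold are all illustrative assumptions.

```python
# Illustrative "ante-hoc" disclosure check: flag models whose train/test
# accuracy gap suggests memorisation of training records.
# NOTE: a sketch of the idea only, not the SACRO-ML API; the threshold
# value (0.1) is an assumed illustrative cut-off.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))                # synthetic stand-in features
y = (X[:, 0] + rng.normal(scale=2.0, size=400) > 0).astype(int)  # noisy labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def overfit_gap(model):
    """Fit the model and return (train_acc, test_acc, gap)."""
    model.fit(X_tr, y_tr)
    train_acc = model.score(X_tr, y_tr)
    test_acc = model.score(X_te, y_te)
    return train_acc, test_acc, train_acc - test_acc

# An unconstrained tree can memorise every training record...
risky = overfit_gap(DecisionTreeClassifier(random_state=0))
# ...while a depth-limited tree is forced to generalise.
safer = overfit_gap(DecisionTreeClassifier(max_depth=3, random_state=0))

THRESHOLD = 0.1  # assumed review threshold, for illustration only
for name, (tr, te, gap) in [("unconstrained", risky), ("max_depth=3", safer)]:
    verdict = "FLAG for review" if gap > THRESHOLD else "ok"
    print(f"{name}: train={tr:.2f} test={te:.2f} gap={gap:.2f} -> {verdict}")
```

A large gap does not prove disclosure risk on its own, but it is cheap to compute during development and gives reviewers an early signal before any attack simulation is run.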

Researchers must prepare their models, document their purpose and training approach, and supply required files before requesting model egress. Output checkers then use SACRO‑ML results to make evidence‑based decisions about whether a model can be safely released, whether it needs revision, or whether the risks are too high. 
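The post‑hoc attack simulations mentioned above can likewise be sketched in miniature as a confidence‑threshold membership inference attack. Real SACRO‑ML attacks are more sophisticated; the data, model, and `confidence` helper below are assumptions made for illustration.

```python
# Illustrative "post-hoc" check: a simple membership inference attack that
# scores each record by the model's confidence in its true label. Members
# (training records) tend to receive higher confidence than non-members.
# NOTE: a sketch of the idea only, not the SACRO-ML attack suite.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + rng.normal(scale=2.0, size=600) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_tr, y_tr)

def confidence(m, X, y):
    """Probability the model assigns to each record's true label."""
    proba = m.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Attack labels: 1 = member (training record), 0 = non-member.
scores = np.concatenate([confidence(model, X_tr, y_tr),
                         confidence(model, X_te, y_te)])
labels = np.concatenate([np.ones(len(X_tr)), np.zeros(len(X_te))])

# AUC near 0.5 means the attacker cannot distinguish members from
# non-members; values well above 0.5 indicate disclosure risk.
auc = roc_auc_score(labels, scores)
print(f"membership-inference AUC: {auc:.2f}")
```

An output checker can use a metric like this AUC as evidence: a value close to 0.5 supports release, while a high value suggests the model should be revised (e.g. regularised or retrained) before egress is reconsidered.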

Files

EE_SDE_Researcher_MDC_Documentation.pdf

331.0 kB
md5:0db0149d1893d6e56fa8be7a598325dc