Deep learning-based segmentation and fat fraction analysis of the shoulder muscles using quantitative MRI
Description
The stability of the shoulder joint is mainly provided by the four rotator cuff muscles. When one or more of these muscles are torn, arthroscopic rotator cuff repair (ARCR) has been associated with significant short- and long-term improvements in pain score, function, and strength, and offers a less invasive alternative to complete shoulder arthroplasty. However, various anatomical shoulder parameters can reduce the success rate of ARCR. One crucial factor for a successful repair is the fatty infiltration of the torn muscle, which is typically estimated in five discrete stages using the Goutallier classification. The grade is assessed visually on an oblique sagittal T1-weighted shoulder MRI slice at a defined anatomical position (the Y-view), which limits its reliability and makes it strongly dependent on the reader’s experience. Specialized MR sequences such as two-point Dixon (2PD) allow volumetric fat fraction (FF) analysis and have been shown to be more reliable than the Goutallier grade, but the time-consuming manual work required to extract such a metric has limited their use in clinical routine. To our knowledge, no automated system for such an analysis of the rotator cuff muscles exists.

We therefore propose a fully automated end-to-end pipeline for quantitative FF analysis on 2PD data, including data alignment and Y-view slice detection for result validation. We developed a complete web-based application for FF calculation and morphological analysis of patient-specific shoulder anatomy from 2PD data, which automatically performs client-side anonymization of the DICOM data, segmentation of the muscles and bony anatomy, Y-view detection, and calculation of the FF of the supraspinatus (SSP) muscle. To segment the SSP muscle and the humerus and scapula bones, we employed nnU-Net, a state-of-the-art convolutional neural network framework for medical image segmentation. The network was trained on 23 2PD images of non-tear patients with their corresponding ground-truth masks. Similarly, a landmark detection algorithm was developed and trained on 30 2PD images; the detected landmarks were used to automatically identify the Y-view slice along the scapular wing. For the SSP, we extracted the average FF over the complete muscle volume and on the Y-view slice. Fivefold cross-validation was used to compute the segmentation accuracy against the manual ground-truth segmentation. For efficient computation and easy accessibility, the workflow was integrated into a web application and deployed to the cloud, where a fully interactive 3D volume viewer allows inspection of the MRI data and segmentation results.

The SSP was segmented with a Dice coefficient of 90%, and the average FF over the complete SSP volume differed by approximately 1.5% from the ground truth on the evaluated cases. The detected landmarks allowed axial scapula alignment and Y-view slice detection. We were thus able to demonstrate that fully automated quantitative fat fraction analysis of the supraspinatus on non-tear 2PD data is feasible. Deployment in a cloud environment enables flexible scaling of the infrastructure and allows clinicians to analyze a patient’s shoulder within 10 minutes, making the approach potentially applicable in clinical settings. This work lays the foundation for further automated quantitative analysis on larger datasets, which might lead to better surgical outcome predictions in the future.
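As a rough illustration of the FF computation described above, the sketch below derives the voxel-wise Dixon fat fraction FF = F / (F + W) from the fat and water images and averages it over a binary muscle mask. This is a minimal sketch under stated assumptions; the array names, the epsilon guard, and the way the Y-view slice is indexed are illustrative and not taken from the thesis.

```python
import numpy as np

def mean_fat_fraction(fat, water, mask, eps=1e-8):
    """Voxel-wise Dixon fat fraction FF = F / (F + W), averaged over a binary mask.

    fat, water : 3D arrays from the two-point Dixon reconstruction (same shape)
    mask       : boolean array marking the segmented muscle (e.g. the SSP)
    eps        : small constant to avoid division by zero (illustrative choice)
    """
    fat = fat.astype(np.float64)
    water = water.astype(np.float64)
    ff = fat / (fat + water + eps)   # per-voxel fat fraction in [0, 1]
    return float(ff[mask].mean())    # average FF over the masked region

# Hypothetical usage: volume-wide FF and FF restricted to the Y-view slice,
# where `y_view_idx` is assumed to be the slice index found by the landmark step.
# ff_volume = mean_fat_fraction(fat_img, water_img, ssp_mask)
# ff_yview  = mean_fat_fraction(fat_img[y_view_idx], water_img[y_view_idx], ssp_mask[y_view_idx])
```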
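The segmentation accuracy is reported as a Dice coefficient; a minimal sketch of that standard metric for two binary masks follows (function name and the handling of empty masks are illustrative assumptions, not details from the thesis).

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2 * |P ∩ G| / (|P| + |G|) for two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention chosen here: two empty masks count as a perfect match.
    return 1.0 if denom == 0 else 2.0 * intersection / denom
```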
Files

Name | Size | MD5 checksum
---|---|---
Herren_MSc_Thesis_lic.pdf | 3.7 MB | md5:08790b8a07a072e17be10ef48e179de4
(name not listed) | 228 Bytes | md5:f8430fbbb9f7766368b262ffceaf8f3a