Semantic Understanding of Histopathological Images in Clear Cell Renal Cell Carcinoma
Description
ABSTRACT
Growing cancer incidence, which will only increase with a growing world population, drives demand for accurate cancer screening, diagnosis, and treatment. To tackle this problem, digital pathology takes advantage of the modern era of information technology and advanced algorithms that may perform at a level similar to or better than existing human medical personnel. Facilitating such development requires a strong biomedical data infrastructure as well as large-scale datasets and expert knowledge. Producing semantically rich digital histopathological images by annotating scanned glass slides, known as whole-slide images, requires software capable of handling this type of biomedical data and support for procedures that align with existing pathological routine. Demand for large-scale annotated histopathological datasets is on the rise because they are needed for the development of artificial intelligence techniques that promote automatic diagnosis, mass screening, and phenotype-genotype association studies. This thesis presents three main findings: OpenHI, an open annotation framework for efficient collaborative histopathological image annotation with standardized semantic enrichment at pixel-level precision; a demonstration of the proposed framework on diagnostic slides of clear cell renal cell carcinoma (ccRCC) from the TCGA KIRC project; and an exploratory analysis of feasible histopathological image analysis methods, including a newly proposed approach based on the interpretation of an existing grading guideline.
The framework’s responsive processing algorithm can perform large-scale histopathological image annotation and serve as biomedical data infrastructure for digital pathology. It can be extended to annotate histopathological images of various oncological types. The framework is open-source and available at https://gitlab.com/BioAI/OpenHI. OpenHI was implemented for the annotation of ccRCC tissue slides, and best practices and methods for simulating accurate virtual magnification are discussed. Annotation results from the trial show that using pre-defined clusters for multi-expert annotation partially aligns the opinions of two annotators. Finally, selected image features that align with the description of the grading guideline were tested on an in-house dataset with manually localized nuclei. These features achieve a nuclei-level classification accuracy of 71.24% with 10-fold cross-validation on a linear SVM-based classifier and an overall accuracy of 71.80% (SD = 2.06) on a neural network-based classifier. Further implementation at the slide level and patient level may yield better performance.
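The nuclei-level evaluation described above (a linear SVM with 10-fold cross-validation) can be sketched as follows. This is a minimal illustration using scikit-learn with a synthetic feature matrix; the dataset dimensions, feature semantics, and grade labels are placeholders, not the thesis's actual in-house data or feature set.

```python
# Hedged sketch: nuclei-level classification with a linear SVM and
# 10-fold cross-validation, analogous to the setup in the abstract.
# The feature matrix is synthetic; the thesis uses hand-crafted features
# derived from the grading guideline on manually localized nuclei.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_nuclei, n_features = 500, 8                 # placeholder dataset size
X = rng.normal(size=(n_nuclei, n_features))   # e.g. nuclear size/shape/texture features
y = rng.integers(0, 4, size=n_nuclei)         # e.g. four hypothetical grade labels

# Standardize features, then fit a linear SVM per fold.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
scores = cross_val_score(clf, X, y, cv=10)    # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.4f} (SD = {scores.std():.4f})")
```

With random features and labels this yields near-chance accuracy; the point is only the evaluation protocol (stratified 10-fold scoring of a linear SVM over per-nucleus feature vectors).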
Notes
Files
pargorn-puttapirat-master-thesis.pdf (45.8 MB)
md5:5212cbe6b4d5a6371e8ecacbefc9fe9c