Automated Enriched Medical Concept Generation for Chest X-ray Images.

Published in MICCAI ML-CDS, 2019

Recommended citation: Gasimova, A., 2019. "Automated enriched medical concept generation for chest X-ray images." In Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support (pp. 83-92). Springer, Cham.

Decision support tools that rely on supervised learning require large amounts of expert annotations. Using past radiological reports obtained from hospital archiving systems as training data has many advantages over manual single-class labels: they are expert annotations available in large quantities, covering a population-representative variety of pathologies, and they provide additional context to pathology diagnoses, such as anatomical location and severity. Learning to auto-generate such reports from images presents many challenges, such as the difficulty of representing and generating long, unstructured textual information, accounting for spelling errors and repetition/redundancy, and the inconsistency across different annotators. We therefore propose to first learn visually-informative medical concepts from raw reports and, using the concept predictions as image annotations, learn to auto-generate structured reports directly from images. We validate our approach on the Indiana University chest X-ray dataset, which consists of posterior-anterior and lateral views of chest X-ray images, their corresponding raw textual reports, and manual medical subject heading (MeSH) annotations made by radiologists.
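As a rough illustration of the first stage described above, the sketch below turns MeSH-style annotation strings into binary multi-label concept targets that an image classifier could be trained against. The annotation format (';'-separated concepts with '/'-separated modifiers such as location and severity) and the helper name are illustrative assumptions, not taken from the paper.

```python
def build_concept_labels(annotations):
    """Map each image's MeSH-style annotation string to a binary concept vector.

    Hypothetical format: concepts separated by ';', each optionally carrying
    '/'-separated modifiers (e.g. "Cardiomegaly/mild"); only the head term is
    kept as the concept label here.
    """
    per_image = [
        sorted({part.split("/")[0].strip().lower()
                for part in ann.split(";") if part.strip()})
        for ann in annotations
    ]
    # Vocabulary is the sorted union of all head concepts seen in the corpus.
    vocab = sorted({c for concepts in per_image for c in concepts})
    # One binary vector per image: 1 if the concept is mentioned, else 0.
    labels = [[1 if c in set(concepts) else 0 for c in vocab]
              for concepts in per_image]
    return vocab, labels

vocab, labels = build_concept_labels([
    "Cardiomegaly/mild; Pulmonary Atelectasis/base/left",
    "normal",
])
# vocab  → ['cardiomegaly', 'normal', 'pulmonary atelectasis']
# labels → [[1, 0, 1], [0, 1, 0]]
```

These multi-label vectors could then serve as the per-image annotations from which a structured report generator is trained, as outlined in the abstract.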

Paper link

Poster