Patient Aware Active Learning for OCT Classification

Personnel: Yash-yee Logan, Ryan Benkert, Ahmad Mustafa, Mohit Prabhushankar, Ghassan AlRegib

Goal: To make active learning sample selection clinically meaningful for medical imaging.

Challenges: A single disease can present itself in visually diverse forms across patients. This diversity is captured in medical metadata but remains unexploited in existing active learning paradigms. Additionally, medical datasets are highly imbalanced, both across classes and across patients. As a result, existing methods train models that fail to properly account for the disease manifestations of patients from whom less data is available.

Our Work: We develop a framework [1] that incorporates clinical insights into the sample selection process of active learning and can be combined with existing acquisition algorithms. The framework captures diverse disease manifestations across patients to improve the generalization performance of OCT classification. We also demonstrate that active learning paradigms developed for natural images are insufficient for handling medical data [2].
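The core idea of patient-aware sample selection can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the published method: it assumes each unlabeled sample carries a patient identifier in its metadata, and it queries samples round-robin across patients so that under-represented patients still contribute to the labeled set.

```python
import random
from collections import defaultdict

def patient_aware_select(pool, budget, seed=0):
    """Select `budget` samples from an unlabeled pool while maximizing
    patient diversity.

    pool   -- list of (sample_id, patient_id) pairs
    budget -- number of samples to query for labeling

    Hypothetical sketch: samples are grouped by patient and drawn
    round-robin, so every patient contributes one sample before any
    patient contributes a second.
    """
    rng = random.Random(seed)
    by_patient = defaultdict(list)
    for sample_id, patient_id in pool:
        by_patient[patient_id].append(sample_id)
    # Shuffle within each patient; a real acquisition function would
    # rank by model uncertainty instead of choosing at random.
    for samples in by_patient.values():
        rng.shuffle(samples)
    selected = []
    patients = list(by_patient)
    while len(selected) < budget and any(by_patient.values()):
        for p in patients:
            if by_patient[p] and len(selected) < budget:
                selected.append(by_patient[p].pop())
    return selected
```

With a budget of three and three patients in the pool, each patient contributes exactly one sample, regardless of how many scans each patient has.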

Figure: Workflow of the patient-aware active learning framework.

References

  1. Y. Logan, R. Benkert, A. Mustafa, G. Kwon, and G. AlRegib, "Patient Aware Active Learning for Fine-Grained OCT Classification," IEEE International Conference on Image Processing (ICIP), Bordeaux, France, Oct. 16-19, 2022. [PDF][Code]

  2. Y. Logan, M. Prabhushankar, and G. AlRegib, "DECAL: DEployable Clinical Active Learning," in International Conference on Machine Learning (ICML) Workshop on Adaptive Experimental Design and Active Learning in the Real World, Baltimore, MD, Jul. 2022. [PDF][Code]