Evidence-based medicine is the preferred approach among clinicians for treating patients. Content-based medical image retrieval (CBMIR) is widely used to extract evidence from large archives of medical images. Developing effective CBMIR systems for clinical practice is essential given the enormous volume of medical images with heterogeneous characteristics, viz. modalities, organs, and diseases. Deep neural hashing (DNH) has achieved outstanding performance and become popular for fast retrieval on large-scale image datasets. However, DNH still falls short on medical images, whose retrieval often requires knowledge of the semantic similarity of these characteristics. This work proposes a structure-based hashing technique, termed MODHash, to address this challenge. MODHash retrieves images that are semantically similar with respect to the above characteristics according to user preference. The network of MODHash is trained by minimizing a characteristic-specific classification loss and a Cauchy cross-entropy loss across training samples. Experiments are performed on a radiology dataset derived from publicly available Kaggle, Mendeley, and Figshare datasets. MODHash achieves 12% higher mean average precision and 2% higher normalized discounted cumulative gain than the state of the art for top-100 retrieval. Characteristic-specific retrieval performance is also evaluated, demonstrating that MODHash is an effective DNH method for retrieval under user preferences.
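The abstract names the two loss terms but does not spell them out. As a rough illustration only, the sketch below shows how such an objective is commonly assembled in PyTorch, assuming the standard pairwise Cauchy cross-entropy used in deep Cauchy hashing (Cao et al., 2018) together with one softmax classifier head per characteristic (modality, organ, disease); the bandwidth `gamma`, code length `K`, and all function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the two loss terms the abstract names; this is NOT the
# MODHash code, only a common formulation of the same ingredients.
import torch
import torch.nn.functional as F


def cauchy_cross_entropy(h, s, gamma=20.0, eps=1e-6):
    """Pairwise Cauchy cross-entropy over a batch of relaxed hash codes.

    h: (B, K) real-valued codes (e.g. tanh outputs).
    s: (B, B) binary similarity matrix (1 = similar pair, 0 = dissimilar).
    gamma: Cauchy bandwidth (assumed hyperparameter).
    """
    K = h.size(1)
    # Cosine-based surrogate for Hamming distance between code pairs.
    hn = F.normalize(h, dim=1)
    d = (K / 2.0) * (1.0 - hn @ hn.t())
    d = d.clamp(min=eps)  # avoid log(0) and division by zero
    # Similar pairs are pulled toward d -> 0, dissimilar pairs pushed away.
    loss = s * torch.log(d / gamma) + torch.log(1.0 + gamma / d)
    return loss.mean()


def characteristic_classification_loss(logits_per_char, labels_per_char):
    """Sum of cross-entropy losses, one classifier head per characteristic
    (e.g. modality, organ, disease)."""
    return sum(F.cross_entropy(lg, lb)
               for lg, lb in zip(logits_per_char, labels_per_char))


if __name__ == "__main__":
    torch.manual_seed(0)
    h = torch.tanh(torch.randn(8, 32))             # 8 codes of length K=32
    labels = torch.randint(0, 3, (8,))             # e.g. modality labels
    s = (labels[:, None] == labels[None, :]).float()
    print(cauchy_cross_entropy(h, s).item())
```

Under this kind of formulation, similar pairs are concentrated within a small Hamming radius while dissimilar pairs are repelled, which is what makes Cauchy-based objectives attractive for Hamming-space retrieval; how MODHash weights or combines the two terms per characteristic is not stated in this abstract.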