Identifying the invasive cancer area is a crucial step in the automated diagnosis of digital pathology slides of the breast. When examining the pathological sections of patients with invasive ductal carcinoma, several evaluations must be made specifically for the invasive cancer area. However, little existing work can effectively distinguish the invasive cancer area from ductal carcinoma in situ in whole slide images (WSIs). To address this issue, we propose a novel architecture named ResMTUnet that combines the strengths of the vision transformer and the CNN, and uses multi-task learning to achieve accurate invasive carcinoma recognition and segmentation in breast cancer. Furthermore, we introduce a multi-scale input model based on ResMTUnet with a conditional random field, named MS-ResMTUNet, to perform segmentation on WSIs. Our systematic experiments show that the proposed network outperforms other competitive methods and effectively segments invasive carcinoma regions in WSIs, laying a solid foundation for subsequent analysis of breast pathology slides. The code is available at: https://github.com/liuyiqing2018/MS-ResMTUNet
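The abstract does not spell out the architecture, so the following is only a minimal PyTorch sketch of the general idea it describes: a dual-branch encoder that fuses CNN and vision-transformer features, with a segmentation head and an auxiliary classification head trained jointly. All module names, dimensions, and the loss weight `lam` are illustrative assumptions and do not reproduce the released ResMTUnet implementation (see the linked repository for that).

```python
# Illustrative sketch only: dual-branch (CNN + Transformer) encoder with
# joint segmentation and classification heads (multi-task learning).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchMultiTaskNet(nn.Module):
    def __init__(self, num_classes=3, embed_dim=256, patch=16):
        super().__init__()
        # CNN branch: captures local texture cues at 1/4 resolution.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.BatchNorm2d(embed_dim), nn.ReLU(inplace=True),
        )
        # Transformer branch: models long-range context over image patches.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        # Segmentation head over the fused features.
        self.seg_head = nn.Conv2d(2 * embed_dim, num_classes, kernel_size=1)
        # Auxiliary classification head (e.g. IC vs. DCIS vs. benign at patch level).
        self.cls_head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, x):
        b, _, h, w = x.shape
        f_cnn = self.cnn(x)                                   # (B, C, h/4, w/4)
        tokens = self.patch_embed(x)                          # (B, C, h/16, w/16)
        th, tw = tokens.shape[2:]
        tokens = self.transformer(tokens.flatten(2).transpose(1, 2))
        f_vit = tokens.transpose(1, 2).reshape(b, -1, th, tw)
        f_vit = F.interpolate(f_vit, size=f_cnn.shape[2:], mode="bilinear", align_corners=False)
        fused = torch.cat([f_cnn, f_vit], dim=1)              # feature fusion of both branches
        seg_logits = F.interpolate(self.seg_head(fused), size=(h, w), mode="bilinear", align_corners=False)
        cls_logits = self.cls_head(fused.mean(dim=(2, 3)))    # global average pooling
        return seg_logits, cls_logits

# Joint multi-task objective: segmentation + classification, weighted by an assumed lambda.
def multitask_loss(seg_logits, cls_logits, seg_target, cls_target, lam=0.5):
    return F.cross_entropy(seg_logits, seg_target) + lam * F.cross_entropy(cls_logits, cls_target)
```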
We introduce the Transformer architecture into IC segmentation, combining it with a CNN to form a dual-branch encoder.
We account for other categories prone to confusion with IC, using a classification branch to differentiate them effectively.
We incorporate multi-scale input and CRF techniques to enhance WSI inference in our model, as sketched below.
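As a rough illustration of the multi-scale-plus-CRF idea (not the authors' released pipeline), the sketch below averages softmax maps predicted at several input scales and refines the result with a fully connected CRF via the pydensecrf package. The scale set, CRF parameters, and helper names are assumptions, and the model interface matches the hypothetical sketch above.

```python
# Minimal sketch (assumed, not the released MS-ResMTUNet pipeline): multi-scale
# patch inference followed by dense-CRF refinement for WSI segmentation.
import numpy as np
import torch
import torch.nn.functional as F
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def multiscale_probs(model, patch, scales=(0.5, 1.0, 2.0)):
    """Average class probabilities predicted at several input scales (scales are illustrative)."""
    h, w = patch.shape[2:]
    probs = []
    with torch.no_grad():
        for s in scales:
            x = F.interpolate(patch, scale_factor=s, mode="bilinear", align_corners=False)
            seg_logits, _ = model(x)                 # classification head is ignored at inference
            seg_logits = F.interpolate(seg_logits, size=(h, w), mode="bilinear", align_corners=False)
            probs.append(seg_logits.softmax(dim=1))
    return torch.stack(probs).mean(dim=0)            # (1, n_classes, H, W)

def crf_refine(rgb_patch, probs, iters=5):
    """Refine averaged probabilities with a fully connected CRF.

    rgb_patch: uint8 array of shape (H, W, 3); probs: float32 array of shape (n_classes, H, W).
    """
    n_classes, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(np.ascontiguousarray(unary_from_softmax(probs)))
    d.addPairwiseGaussian(sxy=3, compat=3)            # spatial smoothness term
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=np.ascontiguousarray(rgb_patch), compat=5)
    q = np.array(d.inference(iters)).reshape(n_classes, h, w)
    return q.argmax(axis=0)                           # per-pixel class labels

# Usage (hypothetical):
#   probs = multiscale_probs(model, patch_tensor)[0].numpy().astype(np.float32)
#   mask  = crf_refine(rgb_uint8_patch, probs)
```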