Yolo-MLSAM: SAM Based Breast Cancer Microcalcification Cluster-Segmentation Method
DOI: https://doi.org/10.54097/1mw0pm54

Keywords: Medical image, Breast cancer, Object detection, Semantic segmentation, Semi-automatic annotation

Abstract
Although the HQ-SAM model has improved the accuracy of fuzzy-boundary segmentation, it still struggles to segment medical images accurately, especially small targets such as breast cancer microcalcification clusters; in addition, its prompt operation is labor-intensive and cumbersome. To address these problems, a novel SAM-based segmentation method for breast cancer microcalcification clusters is proposed. The method first uses a YOLOv8 neural network to locate the lesion region accurately, then applies the MLSAM model to perform fine-grained semantic segmentation of that region, and finally provides a semi-automatic annotation function that greatly reduces the cost and complexity of manual involvement. Experimental results show that, compared with the HQ-SAM model, the proposed method significantly improves segmentation performance, reaching a Dice similarity coefficient of 81.78%.
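To make the detect-then-segment pipeline described in the abstract concrete, the following is a minimal illustrative sketch built on the publicly available ultralytics YOLOv8 and segment_anything (SAM) APIs. The weight file names, the Dice helper, and the use of the vanilla SamPredictor in place of the paper's MLSAM model are assumptions for illustration only, not the authors' implementation.

# Sketch: YOLOv8 proposes lesion boxes; SAM turns each box prompt into a mask.
# The paper's MLSAM decoder is stood in for by the vanilla SamPredictor here.
import numpy as np
import cv2
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

# 1) Detection stage: a YOLOv8 model fine-tuned on mammograms (hypothetical weights file).
detector = YOLO("yolov8_microcalc.pt")

# 2) Segmentation stage: SAM with a ViT-B backbone, prompted by the detected boxes.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("mammogram.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

masks = []
for box in detector("mammogram.png")[0].boxes.xyxy.cpu().numpy():
    # Each detected lesion box is passed to SAM as a box prompt; no manual clicks needed.
    mask, _, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])

In a semi-automatic annotation workflow of this kind, the annotator would only confirm or adjust the predicted boxes and masks rather than draw lesion contours from scratch.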
License
Copyright (c) 2025 Journal of Computing and Electronic Information Management

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.