Full metadata record
DC Field | Value | Language
dc.contributor.author | Tran, Ngoc Du | -
dc.contributor.author | Nguyen, Dinh Quoc Dai | -
dc.contributor.author | Nguyen, Ngoc Linh Chi | -
dc.contributor.author | Pham, Van Truong | -
dc.contributor.author | Tran, Thi Thao | -
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 157-168. | vi_VN
dc.description.abstract | Polyp segmentation is important in helping doctors diagnose and provide an accurate treatment plan. With the emergence of deep learning in the last decade, deep learning models, especially Unet and its evolved variants, have achieved superior results on medical segmentation tasks compared to earlier traditional methods. To preserve location information, Unet-based models use skip connections between encoder and decoder feature maps of the same resolution. However, same-resolution connections have two problems: 1) high-resolution feature maps on the encoder side contain low-level information, whereas high-resolution feature maps on the decoder side contain high-level information, which leads to a semantic imbalance at the connection; 2) in medical images, objects such as tumours and cells often vary widely in size, so context information from a single encoder scale is not enough for accurate segmentation during decoding, and full-scale context information is needed. In this paper, we propose a model called CTDCFormer that uses PvitV2_B3 as the backbone encoder to extract global information about the object. To exploit the full-scale context information of the encoder, we propose the GCF module, which applies a lightweight attention mechanism between the decoder's feature map and the encoder's four feature maps. Our model CTDCFormer achieves superior results compared to other state-of-the-art methods, with Dice scores of up to 94.1% on the Kvasir-SEG dataset and 94.7% on the CVC-ClinicDB dataset. | vi_VN
dc.publisher | Springer Nature | vi_VN
dc.subject | Polyp Segmentation | vi_VN
dc.subject | Multi Context Decoder | vi_VN
dc.title | A Multi Context Decoder-based Network with Applications for Polyp Segmentation in Colonoscopy Images | vi_VN
dc.type | Working Paper | vi_VN
Appears in Collections:CITA 2023 (International)
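
The full-scale fusion idea described in the abstract — a decoder feature map attending jointly to encoder feature maps at all four scales — can be sketched as single-head scaled dot-product attention over flattened tokens. This is a minimal illustrative sketch, not the paper's GCF module: the shared channel width, identity projections, and single attention head are assumptions made for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_scale_attention(decoder_map, encoder_maps):
    """Attend from decoder tokens (queries) to tokens gathered from
    encoder maps at every scale (keys/values).

    decoder_map:  array of shape (H, W, C)
    encoder_maps: list of arrays (Hi, Wi, C), one per encoder scale
    returns:      fused context of shape (H*W, C)
    """
    C = decoder_map.shape[-1]
    # Flatten the decoder map into query tokens: (Nq, C).
    q = decoder_map.reshape(-1, C)
    # Flatten and concatenate all encoder scales into one token set: (Nk, C).
    kv = np.concatenate([m.reshape(-1, C) for m in encoder_maps], axis=0)
    # Single-head scaled dot-product attention with identity projections.
    attn = softmax(q @ kv.T / np.sqrt(C), axis=-1)
    return attn @ kv
```

For example, a decoder map of size 8x8x16 attending to encoder maps at 32, 16, 8, and 4 pixels per side yields a fused context of shape (64, 16), one context vector per decoder token.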

