Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2736
Title: AMG-Mixer: A Multi-Axis Attention MLP-Mixer Architecture for Biomedical Image Segmentation
Authors: Le, Hoang Minh Quang
Le, Trung Kien
Pham, Van Truong
Tran, Thi Thao
Keywords: MLP-Mixer
image segmentation
Axial Attention
Multi-axis MLP
Issue Date: Jul-2023
Publisher: Springer Nature
Abstract: Multi-Layer Perceptrons (MLPs) were previously used primarily for image classification, but the emergence of the MLP-Mixer architecture has demonstrated that MLPs remain effective in other visual tasks. To obtain strong results, however, weights pre-trained on large datasets are typically required, and the cross-location (Token Mix) operation must be adapted to the task at hand. Inspired by this, we propose AMG-Mixer, an MLP-based architecture for image segmentation. In particular, recognizing the importance of positional information, we propose an AxialMBconv Token Mix built on Axial Attention. Additionally, to ease Axial Attention's receptive-field constraints, we propose a Multi-scale Multi-axis MLP Gated (MS-MAMG) block that employs Multi-Axis MLP. Even without pre-training, AMG-Mixer outperforms state-of-the-art (SOTA) methods on benchmark datasets including GLaS, Data Science Bowl 2018, and the ISIC 2018 skin lesion segmentation dataset. The code is available at https://github.com/quanglets1fvr/amg_mixer
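For illustration, below is a minimal PyTorch sketch of axial-attention token mixing of the kind the abstract refers to. The class name, shapes, and hyperparameters are our own assumptions, not the authors' implementation; see the repository above for the official code. Factoring 2D self-attention into row-wise and column-wise passes reduces the cost from O((HW)^2) to O(HW*(H+W)) while keeping positional structure along each axis.

    # Illustrative sketch only (not the AMG-Mixer code): axial attention as a token mixer.
    import torch
    import torch.nn as nn

    class AxialAttentionTokenMix(nn.Module):
        """Hypothetical token mixer: self-attention applied along the width axis,
        then the height axis, of a (B, C, H, W) feature map."""
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            # Attend along W: treat each row as an independent sequence of length W.
            rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
            rows, _ = self.row_attn(rows, rows, rows)
            x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
            # Attend along H: treat each column as an independent sequence of length H.
            cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
            cols, _ = self.col_attn(cols, cols, cols)
            return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)

    # Usage: the mixer preserves the input shape.
    mixer = AxialAttentionTokenMix(dim=64, heads=4)
    out = mixer(torch.randn(2, 64, 32, 32))  # -> torch.Size([2, 64, 32, 32])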
Description: Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 169-180.
URI: https://link.springer.com/chapter/10.1007/978-3-031-36886-8_14
http://elib.vku.udn.vn/handle/123456789/2736
ISBN: 978-3-031-36886-8
Appears in Collections: CITA 2023 (International)
