Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/4011
Title: VLF-VQA: Vietnamese Lightweight Fusion Architecture for Visual Question Answering
Authors: Nguyen, Toan
Quan, Tho
Keywords: Vision Language Models
Visual Question Answering
LoRA
Issue Date: Jul-2024
Publisher: Vietnam-Korea University of Information and Communication Technology
Series/Report no.: CITA;
Abstract: In this paper, we propose a lightweight transformer-based approach to address the challenges of Visual Question Answering (VQA). While many Vision Language Models (VLMs) are built on Large Language Models (LLMs) with billions of parameters and require significant training resources, we optimize the GPT-2 language model following a fusion architecture. To achieve this goal, we modify GPT-2 by incorporating a cross-attention block to align image and text features from two frozen encoders. During training, we apply the LoRA fine-tuning technique to minimize training costs while maintaining effectiveness. Our research focuses on three main aspects: training cost, natural output, and support for the Vietnamese language. We evaluated our approach on datasets for VQA and image captioning, achieving results comparable to existing methods with far lower resource consumption. Our code and trained weights are available at https://github.com/naot97/VLF-VQA
Description: Proceedings of the 13th International Conference on Information Technology and Its Applications (CITA 2024); pp: 58-69.
URI: https://elib.vku.udn.vn/handle/123456789/4011
ISBN: 978-604-80-9774-5
Appears in Collections:CITA 2024 (Proceeding - Vol 2)
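The two ideas the abstract combines can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which uses GPT-2 and frozen pretrained encoders); it only shows, on toy random features, (1) a cross-attention step where text tokens act as queries over image-patch keys/values, and (2) a LoRA-style linear layer where a frozen weight is augmented with a trainable low-rank update. All shapes and names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_feats, image_feats, Wq, Wk, Wv):
    # Queries come from the text tokens, keys/values from the image patches,
    # so each text position attends over (aligns with) the image features.
    Q = text_feats @ Wq
    K = image_feats @ Wk
    V = image_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def lora_linear(x, W, A, B, alpha=16, r=4):
    # LoRA: frozen weight W plus a low-rank trainable update (alpha/r) * A @ B.
    # Only A and B would receive gradients; W stays frozen.
    return x @ (W + (alpha / r) * (A @ B))

rng = np.random.default_rng(0)
d = 8                              # toy hidden size
text = rng.normal(size=(5, d))     # 5 text-token features
image = rng.normal(size=(10, d))   # 10 image-patch features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

fused = cross_attention(text, image, Wq, Wk, Wv)  # shape (5, d)

r = 4
W = rng.normal(size=(d, d))        # frozen projection
A = rng.normal(size=(d, r)) * 0.01
B = np.zeros((r, d))               # B initialized to zero: LoRA starts as a no-op
out = lora_linear(fused, W, A, B)  # identical to fused @ W at initialization
```

With `B` initialized to zero the LoRA branch contributes nothing at the start of training, so fine-tuning begins from the frozen model's behavior and only gradually learns the low-rank correction.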