Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/4019
Full metadata record
DC Field | Value | Language
dc.contributor.author | Nguyen, Ti Hon | -
dc.contributor.author | Do, Thanh Nghi | -
dc.date.accessioned | 2024-07-30T09:18:25Z | -
dc.date.available | 2024-07-30T09:18:25Z | -
dc.date.issued | 2024-07 | -
dc.identifier.isbn | 978-604-80-9774-5 | -
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/4019 | -
dc.description | Proceedings of the 13th International Conference on Information Technology and Its Applications (CITA 2024); pp. 100-111. | vi_VN
dc.description.abstract | Our investigation aims to propose a high-performance abstractive text summarization model for the Vietnamese language. We base the model on a transformer network with a full encoder-decoder architecture to learn high-quality features from the training data. Next, we scale down the network size to increase the number of documents the model can summarize in a given time frame. We trained the model on a large-scale dataset comprising 880,895 documents in the training set and 110,103 documents in the testing set. The summarization speed for the testing set improves significantly, taking 5.93 hours on a multi-core CPU and 0.31 hours on a small GPU. The F1 test results are also close to the state of the art, with 51.03% for ROUGE-1, 18.17% for ROUGE-2, and 31.60% for ROUGE-L. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Vietnam-Korea University of Information and Communication Technology | vi_VN
dc.relation.ispartofseries | CITA; | -
dc.subject | Abstractive Text Summarization | vi_VN
dc.subject | Transformer | vi_VN
dc.subject | Vietnamese Large-scale Dataset | vi_VN
dc.title | THASUM: Transformer for High-Performance Abstractive Summarizing Vietnamese Large-scale Dataset | vi_VN
dc.type | Working Paper | vi_VN
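
The abstract above reports F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L. As a rough illustration of what that metric measures (not the evaluation code used by the authors), the sketch below computes ROUGE-1 F1 as unigram-overlap F1 between a candidate and a reference summary; the whitespace tokenization is an assumed simplification, since Vietnamese evaluation normally applies word segmentation first.

```python
# Minimal ROUGE-1 F1 sketch -- illustrative only, not the authors' evaluation code.
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference summary."""
    # Assumption: naive whitespace tokenization; a real Vietnamese evaluation
    # pipeline would typically run a word segmenter before counting unigrams.
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Overlapping unigrams, clipped by the counts in each summary.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Example usage with placeholder English text:
print(rouge_1_f1("the model summarizes the document",
                 "the model produces a summary of the document"))
```
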
Appears in Collections:CITA 2024 (Proceeding - Vol 2)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.