Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/6186
Full metadata record
DC Field | Value | Language
dc.contributor.author | Nguyen, Huu Khanh | -
dc.contributor.author | Nguyen, Van Viet | -
dc.contributor.author | Nguyen, Kim Son | -
dc.contributor.author | Luong, Thi Minh Hue | -
dc.contributor.author | Nguyen, T. Vinh | -
dc.contributor.author | Vu, Duc Quang | -
dc.contributor.author | Nguyen, Cong Huu | -
dc.date.accessioned | 2026-01-19T09:01:32Z | -
dc.date.available | 2026-01-19T09:01:32Z | -
dc.date.issued | 2026-01 | -
dc.identifier.isbn | 978-3-032-00971-5 (p) | -
dc.identifier.isbn | 978-3-032-00972-2 (e) | -
dc.identifier.uri | https://doi.org/10.1007/978-3-032-00972-2_49 | -
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/6186 | -
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 1581); The 14th Conference on Information Technology and Its Applications (CITA 2025); pp. 671-682 | vi_VN
dc.description.abstract | In this study, we explore applications of mini language models in the legal domain, specifically Phi-3.5 Mini, Qwen 2.5 3B and Llama 3.2 3B, for legal multiple-choice question answering. We fine-tuned these models on the CaseHOLD dataset to adapt them to the structural and semantic nuances of legal language and reasoning. The results show that fine-tuning significantly improves the performance of these models, with Phi-3.5 Mini achieving a Micro F1 score of 76.93%, exceeding previous bests for miniaturised models. Qwen 2.5 3B and Llama 3.2 3B achieved similarly competitive scores of 74.27% and 75.40%, respectively, reinforcing their viability as resource-efficient alternatives to larger models. Mini language models offer performance competitive with specialized models such as Legal-BERT and Caselaw-BERT, while requiring fewer computational resources and retaining general natural language understanding ability. The results of this study illuminate the potential of mini language models as a way to broaden access to state-of-the-art legal natural language processing tools, and we propose directions for future work exploring their versatility across various legal tasks and datasets. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Springer Nature | vi_VN
dc.subject | Mini language models | vi_VN
dc.subject | CaseHOLD | vi_VN
dc.subject | Phi-3.5 | vi_VN
dc.subject | Qwen 2.5 | vi_VN
dc.subject | Llama 3.2 | vi_VN
dc.subject | Legal Question-Answering | vi_VN
dc.title | Fine-Tuning Mini Language Models for Legal Multiple-Choice Question Answering: A Comparative Study of Phi-3.5, Qwen 2.5 and Llama 3.2 | vi_VN
dc.type | Working Paper | vi_VN
Appears in Collections: CITA 2025 (International)
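The abstract reports results as Micro F1 scores. As a minimal illustrative sketch (not code from the paper): for single-label multiple-choice QA, where each question has exactly one gold answer and one predicted answer, micro-averaged F1 reduces to overall accuracy, since every wrong prediction counts as both a false positive and a false negative.

```python
# Minimal sketch of micro-averaged F1 for single-label predictions,
# the evaluation metric reported in the abstract. Function name and
# example data are illustrative, not taken from the paper.

def micro_f1(gold, pred):
    """Micro-averaged F1 over single-label predictions."""
    tp = sum(g == p for g, p in zip(gold, pred))  # correct answers
    fp = len(pred) - tp  # each wrong answer is a false positive...
    fn = len(gold) - tp  # ...and a false negative for the gold label
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 3 of 4 answers correct -> micro F1 = 0.75 (= accuracy)
print(micro_f1([0, 1, 2, 3], [0, 1, 2, 0]))  # 0.75
```

Because precision and recall coincide in this single-label setting, the reported Micro F1 values (e.g. 76.93% for Phi-3.5 Mini) can be read directly as the fraction of questions answered correctly.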

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.