Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/6186

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Nguyen, Huu Khanh | - |
| dc.contributor.author | Nguyen, Van Viet | - |
| dc.contributor.author | Nguyen, Kim Son | - |
| dc.contributor.author | Luong, Thi Minh Hue | - |
| dc.contributor.author | Nguyen, T. Vinh | - |
| dc.contributor.author | Vu, Duc Quang | - |
| dc.contributor.author | Nguyen, Cong Huu | - |
| dc.date.accessioned | 2026-01-19T09:01:32Z | - |
| dc.date.available | 2026-01-19T09:01:32Z | - |
| dc.date.issued | 2026-01 | - |
| dc.identifier.isbn | 978-3-032-00971-5 (p) | - |
| dc.identifier.isbn | 978-3-032-00972-2 (e) | - |
| dc.identifier.uri | https://doi.org/10.1007/978-3-032-00972-2_49 | - |
| dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/6186 | - |
| dc.description | Lecture Notes in Networks and Systems (LNNS, volume 1581); The 14th Conference on Information Technology and Its Applications (CITA 2025); pp. 671-682 | vi_VN |
| dc.description.abstract | In this study, we explore the application of mini language models in the legal domain, specifically Phi-3.5 Mini, Qwen 2.5 3B, and Llama 3.2 3B, for legal multiple-choice question answering. We fine-tuned these models on the CaseHOLD dataset to adapt them to the structural and semantic nuances of legal language and reasoning. The results show that fine-tuning significantly improves the performance of these models, with Phi-3.5 Mini achieving a Micro F1 score of 76.93%, exceeding previous bests among mini models. Qwen 2.5 3B and Llama 3.2 3B likewise achieved competitive scores of 74.27% and 75.40%, respectively, reinforcing their viability as resource-efficient alternatives to larger models. Mini language models offer performance competitive with specialized models such as Legal-BERT and CaseLaw-BERT, while requiring fewer computational resources and retaining general natural language understanding ability. The results of this study illuminate the potential of mini language models as a way to broaden access to state-of-the-art legal natural language processing tools, and we propose directions for future work exploring their versatility across additional legal tasks and datasets. | vi_VN |
| dc.language.iso | en | vi_VN |
| dc.publisher | Springer Nature | vi_VN |
| dc.subject | Mini language models | vi_VN |
| dc.subject | CaseHold | vi_VN |
| dc.subject | Phi-3.5 | vi_VN |
| dc.subject | Qwen 2.5 | vi_VN |
| dc.subject | Llama 3.2 | vi_VN |
| dc.subject | Legal Question-Answering | vi_VN |
| dc.title | Fine-Tuning Mini Language Models for Legal Multiple-Choice Question Answering: A Comparative Study of Phi-3.5, Qwen 2.5 and Llama 3.2 | vi_VN |
| dc.type | Working Paper | vi_VN |
| Appears in Collections: | CITA 2025 (International) | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.