Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/4272
Title: | Pre-trained Self-Attention Framework: An Efficient Mechanism for Source Separation |
Authors: | Ha, Minh Tan; Fhadli, Muhammad; Nguyen, Kim Quoc; Vu, Quang Duc |
Keywords: | The suggested method is tested and assessed on a conventional dataset; Model is relearned, enhanced |
Issue Date: | 2024 |
Publisher: | Springer Nature |
Abstract: | In this work, a pre-trained self-attention framework is proposed for single-channel speech separation. First, all layers in the pre-trained self-attention framework are frozen. The model is then retrained in three stages using a learning-rate scheduling mechanism, and the layers of the framework are unlocked following the schedule. In this way, the model is retrained, enhanced, and updated from its previous knowledge. This is an effective way to improve the performance of an advanced model while significantly reducing the time and cost of training. The method is beneficial for applying existing models to a similar task or for enhancing model performance. In this strategy, the pre-trained system outperforms the non-pre-trained system because the later phases of the model’s training reuse features extracted by the previously trained early phases. The suggested method is tested and assessed on a conventional dataset. Experimental findings suggest that the method outperforms both the baseline framework and current methods on the monaural speech separation task. (A minimal code sketch of the staged unfreezing procedure appears after this record.) |
Description: | Lecture Notes in Networks and Systems (LNNS, volume 882); The 13th Conference on Information Technology and Its Applications (CITA 2024); pp. 99-110. |
URI: | https://elib.vku.udn.vn/handle/123456789/4272 ; https://doi.org/10.1007/978-3-031-74127-2_9 |
ISBN: | 978-3-031-74126-5 |
Appears in Collections: | CITA 2024 (International) |
Use of materials in the Digital Library must comply with copyright law.
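The staged retraining described in the abstract follows the familiar "gradual unfreezing" transfer-learning pattern: freeze the pre-trained network, then unlock successive groups of layers while lowering the learning rate at each stage. Below is a minimal sketch of that pattern, assuming a PyTorch model; the layer grouping, learning rates, epoch counts, and the train_one_epoch helper are hypothetical illustrations rather than the paper's exact configuration.

# Minimal sketch of three-stage retraining with scheduled layer unlocking.
# All numbers and names below are illustrative assumptions, not the
# authors' setup.
import torch
from torch import nn

def staged_finetune(model: nn.Module, layer_groups, train_one_epoch):
    # Stage 0: freeze every parameter of the pre-trained model.
    for p in model.parameters():
        p.requires_grad = False

    # Hypothetical schedule: each stage unlocks one more group of layers
    # and retrains the trainable parameters with a smaller learning rate.
    schedule = [
        {"lr": 1e-3, "epochs": 5},  # stage 1: e.g. top layers only
        {"lr": 5e-4, "epochs": 5},  # stage 2: middle layers added
        {"lr": 1e-4, "epochs": 5},  # stage 3: whole network unlocked
    ]

    for group, cfg in zip(layer_groups, schedule):
        # Unlock the next group of layers following the schedule.
        for p in group.parameters():
            p.requires_grad = True
        # Re-create the optimizer over the currently trainable parameters.
        optimizer = torch.optim.Adam(
            [p for p in model.parameters() if p.requires_grad],
            lr=cfg["lr"],
        )
        for _ in range(cfg["epochs"]):
            train_one_epoch(model, optimizer)

# Example call, with hypothetical layer groups ordered from the group
# unlocked first to the group unlocked last:
# staged_finetune(model, [model.decoder, model.masker, model.encoder], run_epoch)

Re-creating the optimizer at each stage is one simple way to apply the per-stage learning rate to exactly the parameters that are currently unlocked; an alternative would be a single optimizer with per-group parameter groups and a learning-rate scheduler.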