Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/4272
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ha, Minh Tan | - |
dc.contributor.author | Fhadli, Muhammad | - |
dc.contributor.author | Nguyen, Kim Quoc | - |
dc.contributor.author | Vu, Quang Duc | - |
dc.date.accessioned | 2024-12-04T04:04:20Z | - |
dc.date.available | 2024-12-04T04:04:20Z | - |
dc.date.issued | 2024-11 | - |
dc.identifier.isbn | 978-3-031-74126-5 | - |
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/4272 | - |
dc.identifier.uri | https://doi.org/10.1007/978-3-031-74127-2_9 | - |
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 882); The 13th Conference on Information Technology and Its Applications (CITA 2024); pp. 99-110. | vi_VN |
dc.description.abstract | In this work, a pre-trained self-attention framework is proposed for single-channel speech separation. First, all layers of the pre-trained self-attention framework are frozen. The model is then retrained in three stages using a learning-rate scheduling mechanism, and the layers of the framework are unlocked according to the schedule. In this way, the model is relearned, enhanced, and updated from its previous knowledge. This is an effective way to improve the performance of an advanced model while significantly reducing training time and cost. The method is beneficial when applying existing models to a similar task or when enhancing model performance. In this strategy, the pre-trained system outperforms the non-pre-trained system because the later phases of training repurpose features extracted during the previously trained early phases. The suggested method is tested and assessed on a conventional dataset. The experimental findings show that the methodology outperforms the baseline framework as well as current methods for the monaural speech separation task. | vi_VN |
dc.language.iso | en | vi_VN |
dc.publisher | Springer Nature | vi_VN |
dc.subject | The suggested method is tested and assessed on a conventional dataset | vi_VN |
dc.subject | Model is relearned, enhanced | vi_VN |
dc.title | Pre-trained Self-Attention Framework: An Efficient Mechanism for Source Separation | vi_VN |
dc.type | Working Paper | vi_VN |
Appears in Collections: | CITA 2024 (International) |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
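The abstract describes a staged fine-tuning strategy: the pre-trained self-attention model starts fully frozen, and its layers are unlocked over three training stages while the learning rate follows a per-stage schedule. The sketch below illustrates that general idea in PyTorch. It is a minimal, hypothetical example only: the model attribute `encoder_blocks`, the grouping of layers, the learning rates, and the number of epochs per stage are all assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): three-stage fine-tuning of a
# pre-trained self-attention model with gradual layer unfreezing and a
# per-stage learning-rate schedule. Layer names and hyperparameters are
# assumed for demonstration only.
import torch


def set_trainable(modules, trainable):
    """Freeze or unfreeze every parameter in the given modules."""
    for m in modules:
        for p in m.parameters():
            p.requires_grad = trainable


def staged_finetune(model, train_loader, loss_fn, device="cpu"):
    # Hypothetical grouping of the model's self-attention blocks into three
    # groups, unlocked one stage at a time (late layers first).
    blocks = list(model.encoder_blocks)          # assumed attribute
    groups = [blocks[-2:], blocks[2:-2], blocks[:2]]
    stage_lrs = [1e-3, 3e-4, 1e-4]               # assumed per-stage learning rates
    epochs_per_stage = 5                         # assumed

    # Start with every block frozen, as in the described strategy.
    set_trainable(blocks, False)

    for group, lr in zip(groups, stage_lrs):
        set_trainable(group, True)               # unlock this stage's layers
        params = [p for p in model.parameters() if p.requires_grad]
        optimizer = torch.optim.Adam(params, lr=lr)
        # Decay the learning rate within the stage as well.
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

        for _ in range(epochs_per_stage):
            for mixture, sources in train_loader:
                mixture, sources = mixture.to(device), sources.to(device)
                estimate = model(mixture)        # separated-source estimates
                loss = loss_fn(estimate, sources)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            scheduler.step()
```

The key design point mirrored here is that each stage reuses the parameters (and therefore the features) learned in the previous stage, which is why the pre-trained, gradually unfrozen system can outperform training the same architecture from scratch.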