Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/2723
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hoang, Van Thanh | - |
dc.contributor.author | Tu, Minh Phuong | - |
dc.contributor.author | Kang-Hyun, Jo | - |
dc.date.accessioned | 2023-09-26T01:34:27Z | - |
dc.date.available | 2023-09-26T01:34:27Z | - |
dc.date.issued | 2023-07 | - |
dc.identifier.isbn | 978-3-031-36886-8 | - |
dc.identifier.uri | https://link.springer.com/chapter/10.1007/978-3-031-36886-8_25 | - |
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2723 | - |
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 297-305. | vi_VN |
dc.description.abstract | EfficientNet is a convolutional neural network architecture created through a neural architecture search with the AutoML MNAS framework, which optimizes for both accuracy and efficiency. It is built from MobileNetV2’s inverted bottleneck residual blocks together with squeeze-and-excite blocks. With a far lower parameter and computation burden, EfficientNet is competitive with the best models on the ImageNet challenge. This paper presents a compact, mobile-oriented version of EfficientNet that achieves similar accuracy on the ImageNet dataset while running nearly twice as fast. | vi_VN |
dc.language.iso | en | vi_VN |
dc.publisher | Springer Nature | vi_VN |
dc.subject | EfficientNet | vi_VN |
dc.subject | MobileNetV2 | vi_VN |
dc.subject | AutoML MNAS framework | vi_VN |
dc.title | A Compact Version of EfficientNet | vi_VN |
dc.type | Working Paper | vi_VN |
Appears in Collections: | CITA 2023 (International) |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
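The abstract above mentions the squeeze-and-excite blocks used alongside MobileNetV2's inverted bottleneck residuals. As a rough illustration (not the paper's implementation), the following pure-Python sketch shows the squeeze-excite idea: globally pool each channel, pass the pooled vector through a small reduce/expand bottleneck, and use the resulting sigmoid gates to reweight the channels. Shapes, weights, and the reduction structure here are illustrative assumptions.

```python
import math

def squeeze_excite(x, w1, w2):
    """Squeeze-and-excite channel gating (illustrative sketch).

    x:  feature map as a list of C channels, each an H x W list of lists.
    w1: reduce weights, shape (C_reduced, C)  -- hypothetical values.
    w2: expand weights, shape (C, C_reduced) -- hypothetical values.
    """
    # Squeeze: global average pool each channel down to one scalar
    s = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in x]
    # Excite, step 1: reduce with ReLU
    h = [max(0.0, sum(w * v for w, v in zip(row, s))) for row in w1]
    # Excite, step 2: expand with sigmoid -> one gate in (0, 1) per channel
    g = [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, h)))) for row in w2]
    # Scale: reweight every spatial position of each channel by its gate
    return [[[v * g[c] for v in row] for row in ch] for c, ch in enumerate(x)]

# Tiny worked example: 2 channels of 2x2, one reduced unit.
x = [[[1.0, 2.0], [3.0, 4.0]],
     [[0.0, 0.0], [0.0, 0.0]]]
w1 = [[0.4, 0.0]]          # pooled means are [2.5, 0.0] -> h = [1.0]
w2 = [[0.0], [0.0]]        # zero logits -> both gates are sigmoid(0) = 0.5
y = squeeze_excite(x, w1, w2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, never amplify them; in EfficientNet-style blocks this gating sits inside the inverted bottleneck, letting the network cheaply emphasize informative channels.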