Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2300
Full metadata record
DC Field | Value | Language
dc.contributor.author | Dang, Hoang Quan | -
dc.contributor.author | Nguyen, Duc Duy Anh | -
dc.contributor.author | Do, Trong Hop | -
dc.date.accessioned | 2022-08-16T03:08:34Z | -
dc.date.available | 2022-08-16T03:08:34Z | -
dc.date.issued | 2022-07 | -
dc.identifier.issn | 978-604-84-6711-1 | -
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2300 | -
dc.description | The 11th Conference on Information Technology and its Applications; Topic: Image and Natural Language Processing; pp. 136-144 | vi_VN
dc.description.abstract | Deep learning is becoming increasingly popular and is especially well suited to large datasets. Training deep learning networks, however, requires vast computing power. Exploiting GPUs or TPUs can partly address this demand, but training a large neural network such as ResNet-152 on a database like ImageNet, with about 14 million images, is still not easy. In this article, we therefore leverage not only the power of a single GPU but also the power of multiple GPUs to reduce the training time of complex models using the data parallelism method, with two approaches, Multi-worker Training and Parameter Server Training, on two datasets: Flowers and 30VNFoods. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Da Nang Publishing House | vi_VN
dc.subject | Distributed computing | vi_VN
dc.subject | Data parallelism | vi_VN
dc.subject | Deep learning | vi_VN
dc.title | Distributed Training with Data Parallelism Method in TensorFlow 2 | vi_VN
dc.type | Working Paper | vi_VN
Appears in Collections: CITA 2022
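
Below is a minimal sketch of the Multi-worker Training approach named in the abstract, using TensorFlow 2's tf.distribute.MultiWorkerMirroredStrategy. The cluster addresses, synthetic dataset, and toy model are illustrative assumptions only; they are not taken from the paper, which trains larger models on the Flowers and 30VNFoods datasets.

```python
# Minimal sketch of synchronous multi-worker data parallelism in TensorFlow 2.
# Cluster addresses, synthetic data, and the toy model are placeholders,
# not the configuration used in the paper.
import json
import os

import tensorflow as tf

# Each worker machine runs this same script with its own "index"; index 0 is
# the chief. The worker addresses below are hypothetical.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["10.0.0.1:12345", "10.0.0.2:12345"]},
    "task": {"type": "worker", "index": 0},
})

# The strategy replicates the model on every worker and all-reduces gradients
# after each step (data parallelism). Training starts once all listed workers
# have launched.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

per_worker_batch = 32
global_batch = per_worker_batch * strategy.num_replicas_in_sync


def make_dataset():
    # Synthetic stand-in data; the paper uses the Flowers and 30VNFoods datasets.
    images = tf.random.uniform((1024, 64, 64, 3))
    labels = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
    return (tf.data.Dataset.from_tensor_slices((images, labels))
            .shuffle(1024)
            .batch(global_batch)
            .repeat())


with strategy.scope():
    # Model and optimizer must be built inside the strategy scope so their
    # variables are mirrored across workers.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(make_dataset(), epochs=2, steps_per_epoch=50)
```

In this scheme each worker processes a different shard of each global batch and gradients are averaged across workers every step; the paper compares this synchronous approach with Parameter Server Training, where variables live on parameter server tasks and workers update them asynchronously.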
