Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/4286
Full metadata record
dc.contributor.author: Le, A.Huy
dc.contributor.author: Vu, C.D.Quang
dc.contributor.author: Tran, T.Binh
dc.contributor.author: Le, T.Van
dc.contributor.author: Vuong, B.Thinh
dc.date.accessioned: 2024-12-06T04:26:18Z
dc.date.available: 2024-12-06T04:26:18Z
dc.date.issued: 2024-11
dc.identifier.isbn: 978-3-031-74126-5
dc.identifier.uri: https://elib.vku.udn.vn/handle/123456789/4286
dc.identifier.uri: https://doi.org/10.1007/978-3-031-74127-2_23
dc.description: Lecture Notes in Networks and Systems (LNNS, volume 882); The 13th Conference on Information Technology and Its Applications (CITA 2024); pp. 271-282
dc.description.abstract: The study uses a reinforcement learning model to address robot collision-avoidance problems in intelligent warehouse environments. We simulate a virtual warehouse and optimize the robot control strategy to avoid collisions while minimizing travel time, aiming to maximize storage performance. In this simulation, robots move on a grid map, performing tasks such as picking up items and delivering them to designated locations. Throughout the research, we apply reinforcement learning methods, including Deep Q-Learning and Double Deep Q-Learning, comparing and evaluating them to achieve the best results. Additionally, we examine several heuristic algorithms to optimize performance. With these approaches, we successfully optimized the movement performance of the robots and reduced the risk of collisions, opening up the possibility of deployment in real warehouse environments; the simulation was made as faithful as possible to ensure the accuracy and practicality of the research findings. In our experiments, we use the PyTorch package to build the models and the settings for our environments. We tested our system in a grid environment with 5, 16, and 32 robots to evaluate the performance and stability of the proposed method. Evaluation is based on criteria such as task completion rate, planned versus actual path length, and other factors. The results obtained from the tests were quite positive: the average rate at which each robot reached its destination was as high as 99.78%.
dc.language.iso: en
dc.publisher: Springer Nature
dc.subject: Robots in Smart Warehouses Using a Focused Decentralized Reinforcement Learning Model
dc.subject: In our experiments, we use the PyTorch package to build models and settings for our particular environments
dc.title: Collision Avoidance Problem for Robots in Smart Warehouses Using a Focused Decentralized Reinforcement Learning Model
dc.type: Working Paper
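The abstract's core technique, Double Q-Learning, decouples action selection from action evaluation to reduce the value overestimation of standard Q-Learning. The sketch below is not the authors' code: it uses a tabular version on a hypothetical 4x4 single-robot grid (the paper uses deep networks in PyTorch on multi-robot maps), and the grid size, rewards, and hyperparameters are illustrative assumptions chosen only to show the update rule.

```python
import random

random.seed(0)
N = 4                                           # assumed 4x4 grid; goal at (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up

def step(state, a):
    """Apply action a; grid walls keep the robot on the map."""
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    done = nxt == (N - 1, N - 1)
    return nxt, (10.0 if done else -1.0), done  # step cost encourages short paths

# Two Q tables: one selects the greedy next action, the other evaluates it
# (roles chosen at random each update) -- the decoupling that cuts overestimation.
QA = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}
QB = dict(QA)
alpha, gamma, eps = 0.5, 0.95, 0.1              # illustrative hyperparameters

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        q = {a: QA[s, a] + QB[s, a] for a in range(4)}          # act on QA + QB
        a = random.randrange(4) if random.random() < eps else max(q, key=q.get)
        s2, r, done = step(s, a)
        if random.random() < 0.5:
            best = max(range(4), key=lambda b: QA[(s2, b)])     # QA selects...
            target = r + (0.0 if done else gamma * QB[(s2, best)])  # ...QB evaluates
            QA[s, a] += alpha * (target - QA[s, a])
        else:
            best = max(range(4), key=lambda b: QB[(s2, b)])     # QB selects...
            target = r + (0.0 if done else gamma * QA[(s2, best)])  # ...QA evaluates
            QB[s, a] += alpha * (target - QB[s, a])
        s = s2

# Greedy rollout from the start; on this grid the shortest route takes 6 steps.
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    a = max(range(4), key=lambda b: QA[(s, b)] + QB[(s, b)])
    s, _, done = step(s, a)
    path.append(s)
    if done:
        break
print(len(path) - 1)                            # greedy steps to the goal
```

In the deep variant the paper describes, the two tables become an online network and a target network: the online network selects argmax actions and the target network supplies their values, which is the same select/evaluate split shown above.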
Appears in Collections:CITA 2024 (International)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.