Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2729
Full metadata record
dc.contributor.author: Mai, Van Quan
dc.contributor.author: Nguyen, Duc Dung
dc.date.accessioned: 2023-09-26T01:45:36Z
dc.date.available: 2023-09-26T01:45:36Z
dc.date.issued: 2023-07
dc.identifier.isbn: 978-3-031-36886-8
dc.identifier.uri: https://link.springer.com/chapter/10.1007/978-3-031-36886-8_20
dc.identifier.uri: http://elib.vku.udn.vn/handle/123456789/2729
dc.description: Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 240-249.
dc.description.abstract: Despite the remarkable results of Neural Scene Flow Fields [10] in novel space-time view synthesis of dynamic scenes, the model has limited ability when only a few input views are provided. To enable few-shot novel space-time view synthesis of dynamic scenes, we propose a new approach that extends the model architecture to use shared priors learned across scenes to predict appearance and geometry in static background regions. Throughout the optimization, our network is trained to reconstruct unseen regions from either the image features extracted from the few input views or the learned prior knowledge, depending on the camera view direction. Multiple experiments on the NVIDIA Dynamic Scenes Dataset [23] demonstrate that our approach achieves better rendering quality than the prior work when only a few input views are available.
dc.language.iso: en
dc.publisher: Springer Nature
dc.subject: NeRF
dc.subject: View synthesis
dc.subject: Few-shot view reconstruction
dc.title: Few-Shots Novel Space-Time View Synthesis from Consecutive Photos
dc.type: Working Paper
Appears in Collections:CITA 2023 (International)
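
The abstract above describes a static-background radiance field that can be conditioned either on image features extracted from a few input views or on a prior learned across scenes, with the choice driven by the camera view direction. The following PyTorch snippet is a rough, hedged sketch of how such conditioning could be wired up; it is not the authors' implementation, and all class and parameter names (StaticBackgroundField, scene_prior, blend, etc.) are assumptions made for illustration only.

```python
# Illustrative sketch only -- NOT the code of the cited paper.
# Idea sketched: blend pixel-aligned image features from a few input
# views with a prior shared across scenes, weighted by view direction,
# before querying a NeRF-style MLP for color and density.

import torch
import torch.nn as nn


class StaticBackgroundField(nn.Module):
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        # Shared prior: a small latent learned across scenes (assumption).
        self.scene_prior = nn.Parameter(torch.zeros(feat_dim))
        # MLP mapping (position, direction, conditioning feature) to RGB + density.
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (r, g, b, sigma)
        )
        # Tiny network deciding, from the view direction, how much to trust
        # image features versus the learned prior (assumption).
        self.blend = nn.Sequential(nn.Linear(3, 16), nn.ReLU(),
                                   nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, xyz, view_dir, img_feat):
        # xyz, view_dir: (N, 3); img_feat: (N, feat_dim) pixel-aligned features
        # sampled from the few input views by an external image encoder.
        w = self.blend(view_dir)                        # (N, 1) blending weight
        prior = self.scene_prior.expand_as(img_feat)    # (N, feat_dim)
        cond = w * img_feat + (1.0 - w) * prior         # blended conditioning
        out = self.mlp(torch.cat([xyz, view_dir, cond], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3:])
        return rgb, sigma


if __name__ == "__main__":
    field = StaticBackgroundField()
    xyz = torch.randn(1024, 3)
    view_dir = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    img_feat = torch.randn(1024, 32)
    rgb, sigma = field(xyz, view_dir, img_feat)
    print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```

The returned color and density would then feed a standard volume-rendering integral along each camera ray, as in other NeRF-style pipelines; the dynamic (scene-flow) branch of the full method is omitted from this sketch.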

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.