Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2732
Full metadata record
DC Field | Value | Language
dc.contributor.author | Le, Trieu Duong | -
dc.contributor.author | Nguyen, Duc Dung | -
dc.date.accessioned | 2023-09-26T01:53:39Z | -
dc.date.available | 2023-09-26T01:53:39Z | -
dc.date.issued | 2023-07 | -
dc.identifier.isbn | 978-3-031-36886-8 | -
dc.identifier.uri | https://link.springer.com/chapter/10.1007/978-3-031-36886-8_17 | -
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2732 | -
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 205-216. | vi_VN
dc.description.abstract | The goal of video frame synthesis is to generate new frames from given frames: it predicts one or several future frames (video prediction, VP) or in-between frames (video frame interpolation, VFI). This is one of the challenging problems in computer vision. Many recent VP and VFI methods employ optical flow estimation in the prediction or interpolation process; while this helps, it also makes the models more complex and harder to train. This paper proposes a new model for the VP task based on a generative approach that does not rely on optical flow. Our model uses simple building blocks such as CNNs to reduce hardware requirements; it learns motion quickly within the first few training epochs and still predicts high-quality results. We perform experiments on two popular datasets, Moving MNIST and KTH, and compare the proposed model with SimVP on four metrics during training: MSE, MAE, PSNR, and SSIM. The experiments show that our model captures spatiotemporal correlations better than previous models. (The code is available at github.com/trieuduongle/STG-SimVP.) | vi_VN
dc.language.isoenvi_VN
dc.publisherSpringer Naturevi_VN
dc.subjectVideo Predictionvi_VN
dc.subjectVideo Frame Extrapolationvi_VN
dc.subjectGANvi_VN
dc.titleSTG-SimVP: Spatiotemporal GAN-Based SimVP for Video Predictionvi_VN
dc.typeWorking Papervi_VN
Appears in Collections: CITA 2023 (International)
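
The abstract above reports results on four per-frame metrics: MSE, MAE, PSNR, and SSIM. The following is a minimal, illustrative sketch of how those metrics are commonly computed between a predicted frame and its ground-truth frame, using NumPy and scikit-image. It is not the authors' released code (see github.com/trieuduongle/STG-SimVP); the function name frame_metrics, the data range, and the example values are hypothetical.

# Illustrative sketch only; not the authors' implementation.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_metrics(pred: np.ndarray, target: np.ndarray, data_range: float = 1.0) -> dict:
    """Per-frame MSE, MAE, PSNR, and SSIM for grayscale frames with values in [0, data_range]."""
    mse = float(np.mean((pred - target) ** 2))    # mean squared error
    mae = float(np.mean(np.abs(pred - target)))   # mean absolute error
    psnr = peak_signal_noise_ratio(target, pred, data_range=data_range)
    ssim = structural_similarity(target, pred, data_range=data_range)
    return {"MSE": mse, "MAE": mae, "PSNR": psnr, "SSIM": ssim}

if __name__ == "__main__":
    # Toy usage on random 64x64 frames (Moving MNIST frames are 64x64 grayscale).
    rng = np.random.default_rng(0)
    target = rng.random((64, 64)).astype(np.float32)
    pred = np.clip(target + 0.05 * rng.standard_normal((64, 64)).astype(np.float32), 0.0, 1.0)
    print(frame_metrics(pred, target))

In practice such metrics are averaged over all predicted frames and over the test set; lower MSE/MAE and higher PSNR/SSIM indicate better predictions.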
