Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/2732
Full metadata record
| DC Field | Value | Language |
|---|---|---|
dc.contributor.author | Le, Trieu Duong | - |
dc.contributor.author | Nguyen, Duc Dung | - |
dc.date.accessioned | 2023-09-26T01:53:39Z | - |
dc.date.available | 2023-09-26T01:53:39Z | - |
dc.date.issued | 2023-07 | - |
dc.identifier.isbn | 978-3-031-36886-8 | - |
dc.identifier.uri | https://link.springer.com/chapter/10.1007/978-3-031-36886-8_17 | - |
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2732 | - |
| dc.description | Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 205-216. | vi_VN |
| dc.description.abstract | The goal of video frame synthesis is to generate new frames from given frames. In other words, it aims to predict one or several future frames (video prediction - VP) or in-between frames (video frame interpolation - VFI). This is one of the challenging problems in the computer vision field. Many recent VP and VFI methods employ optical flow estimation in the prediction or interpolation process. While this helps, it also makes the model more complex and harder to train. This paper proposes a new model for solving the VP task based on a generative approach without utilizing optical flow. Our model uses simple methods and networks such as CNNs to reduce hardware demands; it can learn motion quickly within the first few training epochs and still predict high-quality results. We perform experiments on two popular datasets, Moving MNIST and KTH. We compare the proposed model and the SimVP model on four metrics during training: MSE, MAE, PSNR, and SSIM. The experiments show that our model can capture spatiotemporal correlations better than previous models. (The code is available at github.com/trieuduongle/STG-SimVP.) | vi_VN |
dc.language.iso | en | vi_VN |
dc.publisher | Springer Nature | vi_VN |
dc.subject | Video Prediction | vi_VN |
dc.subject | Video Frame Extrapolation | vi_VN |
dc.subject | GAN | vi_VN |
dc.title | STG-SimVP: Spatiotemporal GAN-Based SimVP for Video Prediction | vi_VN |
dc.type | Working Paper | vi_VN |
Appears in Collections: | CITA 2023 (International) |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
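The abstract evaluates models on MSE, MAE, PSNR, and SSIM. As a quick reference, the sketch below shows one plausible way to compute these per-frame metrics with NumPy; the function names and the simplified single-window SSIM (computed over the whole frame rather than with a sliding Gaussian window, as standard SSIM implementations do) are illustrative assumptions, not the paper's actual evaluation code.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between two frames."""
    return float(np.mean((pred - target) ** 2))

def mae(pred, target):
    """Mean absolute error between two frames."""
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB (inf for identical frames)."""
    m = mse(pred, target)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

def ssim_global(pred, target, data_range=1.0):
    """Simplified SSIM using global statistics over the whole frame
    (standard SSIM averages the index over local windows instead)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    return float(((2 * mu_p * mu_t + c1) * (2 * cov + c2)) /
                 ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)))
```

For identical frames these return MSE = MAE = 0, PSNR = inf, and SSIM = 1, which is a convenient sanity check when wiring up an evaluation loop.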