Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2732
Title: STG-SimVP: Spatiotemporal GAN-Based SimVP for Video Prediction
Authors: Le, Trieu Duong
Nguyen, Duc Dung
Keywords: Video Prediction
Video Frame Extrapolation
GAN
Issue Date: 2023
Publisher: Springer Nature
Abstract: The goal of video frame synthesis is to generate new frames from given ones: it predicts one or several future frames (video prediction, VP) or in-between frames (video frame interpolation, VFI). This is a challenging problem in computer vision. Many recent VP and VFI methods employ optical flow estimation in the prediction or interpolation process; while this helps, it also makes the model more complex and harder to train. This paper proposes a new model for the VP task based on a generative approach that does not use optical flow. Our model relies on simple building blocks such as CNNs to reduce hardware demands; it learns motion quickly within the first few training epochs and still predicts high-quality results. We perform experiments on two popular datasets, Moving MNIST and KTH, and compare the proposed model with SimVP on four metrics during training: MSE, MAE, PSNR, and SSIM. The experiments show that our model captures spatiotemporal correlations better than previous models. (The code is available at github.com/trieuduongle/STG-SimVP.)
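The abstract compares models on MSE, MAE, PSNR, and SSIM. As an illustration only (not the paper's code), the standard definitions of the first three metrics can be sketched in NumPy; SSIM is omitted here because it requires windowed local statistics:

```python
import numpy as np

def mse(pred, target):
    # Mean squared error between a predicted frame and the ground truth
    return float(np.mean((pred - target) ** 2))

def mae(pred, target):
    # Mean absolute error
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB, assuming pixel values in [0, max_val]
    m = mse(pred, target)
    if m == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / m)

# Toy example: an 8x8 all-zero "frame" and a prediction that is off by 0.1
target = np.zeros((8, 8))
pred = np.full((8, 8), 0.1)
print(mse(pred, target))   # 0.01
print(psnr(pred, target))  # 20.0
```

Lower MSE/MAE and higher PSNR indicate predictions closer to the ground-truth frames, which is the sense in which the paper tracks these metrics during training.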
Description: Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 205-216.
URI: https://link.springer.com/chapter/10.1007/978-3-031-36886-8_17
http://elib.vku.udn.vn/handle/123456789/2732
ISBN: 978-3-031-36886-8
Appears in Collections: CITA 2023 (International)