Predictive Learning: Using Future Representation Learning Variantial Autoencoder for Human Action Prediction (1711.09265v2)

Published 25 Nov 2017 in cs.CV

Abstract: Unsupervised pretraining has been widely used to aid human action recognition. However, existing methods focus on reconstructing frames that have already been observed rather than generating frames that occur in the future. In this paper, we propose an improved Variational Autoencoder model that extracts features strongly tied to upcoming scenarios, an approach also known as Predictive Learning. Our framework works as follows: two-stream 3D-convolutional neural networks extract both spatial and temporal information as latent variables; a resampling step then maps these to normally distributed probabilistic latent variables; and finally, a deconvolutional neural network uses the latent variables to generate the next frames. Through this process, the model is trained to focus on generating the future and therefore extracts features that are highly connected to it. Extensive experiments on the UT and UCF101 datasets show that future-frame generation does improve prediction performance. Moreover, the Future Representation Learning Network achieves a higher score than other methods when only half of a video is observed, indicating that Future Representation Learning outperforms traditional representation learning and other state-of-the-art methods on human action prediction to some extent.
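
Below is a minimal, illustrative PyTorch sketch of the pipeline the abstract outlines: a two-stream 3D-convolutional encoder, a reparameterized Gaussian latent code, and a deconvolutional decoder that generates a future frame. All module names, layer sizes, output resolution, and the choice of RGB plus optical-flow inputs are assumptions made for illustration; none of these details are taken from the paper itself.

```python
# Hedged sketch of the predictive-VAE idea: encode an observed clip with two
# 3D-conv streams, resample a Gaussian latent code, and decode a future frame.
# Layer sizes and inputs are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FutureRepresentationVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Spatial stream: RGB clips of shape (B, 3, T, H, W).
        self.spatial_enc = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Temporal stream: stacked optical flow of shape (B, 2, T, H, W).
        self.temporal_enc = nn.Sequential(
            nn.Conv3d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Deconvolutional decoder: renders a small (16x16) future RGB frame.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (64, 4, 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb_clip, flow_clip):
        h = torch.cat(
            [self.spatial_enc(rgb_clip), self.temporal_enc(flow_clip)], dim=1
        )
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization ("resample") step: z ~ N(mu, sigma^2).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        future_frame = self.decoder(z)
        return future_frame, mu, logvar


def loss_fn(pred_future, target_future, mu, logvar, beta=1.0):
    # Reconstruct the *future* frame plus the usual KL term toward N(0, I).
    recon = F.mse_loss(pred_future, target_future)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

In a setup like this, training would minimize loss_fn on (observed clip, future frame) pairs, and the encoder's features would then be reused for the downstream action-prediction task.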

Citations (4)

