A Review of Scene Dynamics Control in Large Text-Guided Video Prediction Models
Affiliation: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences

CLC Number: TP391.7


    Abstract:

    In recent years, the rapid development of generative AI has made text-driven video prediction models a focus of both academia and industry. Video prediction must model temporal dynamics while maintaining consistency, which requires precise control over scene structure, subject behavior, camera movement, and semantic expression. A central challenge is controlling scene dynamics accurately enough that predictions are both high-quality and semantically consistent. Researchers have proposed several key control methods, including camera control, reference-video control, semantic enhancement, and subject-feature control. These methods aim to improve generation quality, ensuring that outputs remain coherent with the historical context while meeting user requirements. This paper systematically reviews the core concepts, advantages, limitations, and future directions of these four control approaches.

History
  • Received: December 01, 2024
  • Revised: December 08, 2024
  • Accepted: December 11, 2024
  • Online: December 11, 2024