Wan: Open and Advanced Large-Scale Video Generative Models
Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++
HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo
[CVPR 2025 Highlight🔥] Identity-Preserving Text-to-Video Generation by Frequency Decomposition
Unofficial Implementation of Animate Anyone by Novita AI
Official Implementation of VideoDPO
[SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation
Reinforcement Learning Framework for Visual Generation
The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation
Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer
AI-Generated Video Detection via Perceptual Straightening (NeurIPS2025)
A comprehensive, click to install, fully open-source, Video + Audio Generation AIO Toolkit using advanced prompt engineering plus the power of CogVideox + AudioLDM2 + Python!
Model-Agnostic AI Video Framework
Very Vary Shorts Generator using ChatGPT4 and Diffusion
🎥 Generate videos effortlessly with the Wan 2.6 pipeline, utilizing fal.ai APIs for automated script-to-video conversion with customizable voice and visuals.
Lightweight TypeScript + Express backend with a React frontend for creating and rendering video templates.
A novel approach to video generation models and their archives; website link given below.
Another project of mine: it uses the Ollama model dolphin-phi to split a prompt into per-frame prompts, passes those frame prompts to an image generator (an SDXL model from Hugging Face), and then uses ffmpeg to fuse the generated frames into a video. The individual frames are also saved to a folder.
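The pipeline described above (LLM splits a prompt into frames, an image model renders each frame, ffmpeg fuses them) can be sketched roughly as below. This is a hypothetical outline, not the repository's actual code: the Ollama and SDXL calls are left as a placeholder, and only the frame-prompt splitting and the ffmpeg command assembly are concrete.

```python
def split_into_frame_prompts(story: str, n_frames: int) -> list[str]:
    # Placeholder for the dolphin-phi call via Ollama: here we simply
    # derive one prompt per frame by appending a frame index.
    return [f"{story}, frame {i + 1} of {n_frames}" for i in range(n_frames)]


def build_ffmpeg_cmd(frame_pattern: str, fps: int, out_path: str) -> list[str]:
    # Assemble the ffmpeg invocation that fuses numbered frames
    # (e.g. frames/frame_%03d.png) into a single video file.
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frame_pattern,
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        out_path,
    ]


if __name__ == "__main__":
    prompts = split_into_frame_prompts("a cat chasing a butterfly", 4)
    # Each prompt would be sent to the image generator here, saving
    # frames/frame_001.png, frames/frame_002.png, ... to disk.
    cmd = build_ffmpeg_cmd("frames/frame_%03d.png", 8, "out.mp4")
    # subprocess.run(cmd, check=True) would then produce out.mp4.
    print(prompts, cmd)
```

In practice the placeholder would be replaced by an Ollama chat call and a `diffusers` SDXL pipeline, with `subprocess.run(cmd)` executing the final fuse step.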