<aside> 👉🏻

This is an LLM-generated, hand-fixed summary of the #wan-chatter channel on the Banodoco Discord.

Generated on April 7, 2025.

Created by Adrien Toupet: https://www.ainvfx.com/

Ported to Notion by Nathan Shipley: https://www.nathanshipley.com/

Thanks and all credit for the content to Adrien and to the members of the Banodoco community who shared their work and workflows!

</aside>

<aside> ⚠️

Some images are missing.

Also, Adrien notes: “Please note I didn't verify all the information, so take it as a general overview that might contain error and not as a strict guide.”

</aside>

Banodoco Discord #wan-chatter summary

Last updated: April 7, 2025. This content was generated using AI; accuracy is not guaranteed, and it may contain errors or omissions.

1. Introduction to Wan

Wan 2.1 (originally called WanX) is an open-source suite of AI video generation models released by Alibaba Cloud starting in February 2025. It represents a significant advance in open-source video generation, producing quality that rivals some closed-source alternatives. The suite includes Text-to-Video (T2V) and Image-to-Video (I2V) models, along with specialized "Fun" versions for control and inpainting, and the powerful VACE module.
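As a quick way to try the base models, here is a minimal text-to-video sketch using the Hugging Face diffusers integration. Treat it as an assumption-laden example rather than a reference: the repository ID, class names, and generation settings reflect the diffusers Wan 2.1 integration as of early 2025 and should be checked against current documentation.

```python
# Minimal Wan 2.1 text-to-video sketch via Hugging Face diffusers.
# Assumes the "Wan-AI/Wan2.1-T2V-1.3B-Diffusers" checkpoint and the
# WanPipeline / AutoencoderKLWan classes added to diffusers in 2025.
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"  # smaller 1.3B T2V model

# The VAE is typically kept in FP32 for quality; the transformer runs in BF16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="A cat walking through a neon-lit alley at night, cinematic lighting",
    height=480,
    width=832,
    num_frames=81,       # roughly 5 seconds at 16 fps
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "wan_t2v_output.mp4", fps=16)
```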

Key Features

2. Model Variants

Core Models (Wan 2.1)

| Model | Size | Description | VRAM (approx.) |
| --- | --- | --- | --- |
| Wan2.1-T2V-1.3B | 1.3B | Smaller text-to-video model (5.6GB FP32). Good base for VACE. | 8GB+ |
| Wan2.1-T2V-14B | 14B | Larger text-to-video model (FP32/BF16). Higher quality. | 24GB+ |
| Wan2.1-I2V-14B-480P | 14B | Image-to-video model optimized for 480p output. | 24GB+ |
| Wan2.1-I2V-14B-720P | 14B | Image-to-video model optimized for 720p output. | 32GB+ |
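To make the table concrete, here is a small, purely illustrative helper (not part of any Wan tooling) that maps a VRAM budget to the largest variant from the table above. The thresholds are just the approximate figures listed; quantized or offloaded setups can get by with less.

```python
# Illustrative only: picks the largest Wan 2.1 variant from the table above
# that fits a given VRAM budget. Thresholds are the approximate figures
# listed in the table; quantization/offloading can reduce them in practice.
def suggest_wan_variant(vram_gb: float) -> str:
    variants = [
        (32, "Wan2.1-I2V-14B-720P"),
        (24, "Wan2.1-T2V-14B"),       # same tier as Wan2.1-I2V-14B-480P
        (8,  "Wan2.1-T2V-1.3B"),
    ]
    for min_vram, name in variants:
        if vram_gb >= min_vram:
            return name
    return "Below 8GB: consider quantized or CPU-offloaded setups"


if __name__ == "__main__":
    print(suggest_wan_variant(12))  # -> Wan2.1-T2V-1.3B
    print(suggest_wan_variant(24))  # -> Wan2.1-T2V-14B
```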

"Fun" Models (Released March 2025)