
Wan 2.2 has quickly become one of the most powerful models for AI creators, delivering exceptional results in both video and image generation. With its flexibility, speed, and ability to handle multiple workflows in a single setup, Wan 2.2 is a must-have for anyone working with ComfyUI.
Why Wan 2.2 Stands Out
Wan 2.2 isn’t just another AI model — it’s a versatile tool designed for both creativity and efficiency. Key advantages include:
- All-in-One Workflow – Text-to-video, image-to-video, text-to-image, and upscaling in a single setup.
- Superior Image Fidelity – Excellent skin textures, accurate proportions, and realistic lighting.
- Faster Renders – Optimized workflows and LoRAs can cut generation time by half.
- Flexibility – Works for both artistic and photorealistic projects.
Recommended Setup
In our workflow, we run Wan 2.2 on RunPod using an H100 GPU for fast processing. Our custom ComfyUI template makes it easy to install and configure all required models, including:
- 14B Image-to-Video and Text-to-Video models (each split into a high-noise and a low-noise checkpoint)
- 5B model for upscaling
- LoRAs for texture detail and speed improvements
This setup ensures optimal performance whether you’re generating cinematic videos or high-resolution still images.
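For reference, here is a minimal sketch of how the required files could be pulled into a ComfyUI installation with huggingface_hub. The repository IDs, filenames, and the /workspace path are placeholders rather than the exact sources used in our template, so substitute the files referenced in the video.

```python
# Minimal sketch, assuming a huggingface_hub-based download into ComfyUI's
# model folders. Repo IDs, filenames, and the /workspace path are placeholders;
# substitute the exact files referenced in the template and video.
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_MODELS = Path("/workspace/ComfyUI/models")  # typical RunPod volume layout (assumption)

DOWNLOADS = {
    "diffusion_models": [
        ("your-org/wan2.2-t2v-14b", "wan2.2_t2v_high_noise_14B.safetensors"),  # placeholder
        ("your-org/wan2.2-t2v-14b", "wan2.2_t2v_low_noise_14B.safetensors"),   # placeholder
        ("your-org/wan2.2-i2v-14b", "wan2.2_i2v_high_noise_14B.safetensors"),  # placeholder
        ("your-org/wan2.2-i2v-14b", "wan2.2_i2v_low_noise_14B.safetensors"),   # placeholder
        ("your-org/wan2.2-ti2v-5b", "wan2.2_ti2v_5B.safetensors"),             # placeholder
    ],
    "loras": [
        ("your-org/wan2.2-loras", "wan2.2_speed_lora.safetensors"),            # placeholder
    ],
}

for subdir, files in DOWNLOADS.items():
    target = COMFYUI_MODELS / subdir
    target.mkdir(parents=True, exist_ok=True)
    for repo_id, filename in files:
        local_path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)
        print(f"fetched {filename} -> {local_path}")
```

Our RunPod template handles this automatically; the script is only useful if you prefer to provision a machine by hand.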
Optimizing Results
From our testing, the best results come from:
- Using custom samplers for sharper visuals and better motion consistency
- Applying LoRAs strategically — for example, only on the low-noise model to preserve motion fluidity (see the sketch after this list)
- Upscaling with the 5B model for enhanced detail without losing quality
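To make the high-noise/low-noise split concrete, here is a small, self-contained sketch (plain Python, not ComfyUI node code) of how the sampling steps divide between the two 14B models when a LoRA is attached only to the low-noise stage. The boundary fraction and step counts are illustrative assumptions, not settings taken from the tutorial.

```python
# Illustrative sketch (plain Python, not ComfyUI node code): how a two-stage
# Wan 2.2 run divides its denoising steps between the high-noise and low-noise
# 14B models. The boundary fraction and step counts below are example values,
# not recommended settings from the tutorial.

def split_steps(total_steps: int, high_noise_fraction: float = 0.5) -> tuple[int, int]:
    """Return (high_noise_steps, low_noise_steps) for a given switch point.

    high_noise_fraction is the share of the schedule handled by the high-noise
    model; the remaining steps go to the low-noise model, which is where a
    detail or speed LoRA would be attached in this approach.
    """
    high = round(total_steps * high_noise_fraction)
    return high, total_steps - high

if __name__ == "__main__":
    for steps in (20, 30):
        high, low = split_steps(steps)
        print(f"{steps} total steps -> high-noise: {high}, low-noise (LoRA applied here): {low}")
```

The idea behind restricting the LoRA to the second stage is that the high-noise model establishes overall motion, while the low-noise model refines detail, so a LoRA applied there sharpens textures without disturbing the motion already laid down.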
Try Wan 2.2 Without Installation
If you don’t want to set up everything locally, you can use Kaijugen, our cloud-based image and video generation platform. Simply upload or link an image, enter your prompt, and generate directly from your browser.

Watch the Full Tutorial
This article covers the highlights — but if you want to see the exact workflow in action, along with parameter settings and real output comparisons, watch our full tutorial: