

FurkanGozukara edited this page Oct 16, 2025 · 1 revision

Wan 2.2 & FLUX Krea Full Tutorial - Automated Install - Ready Perfect Presets - SwarmUI with ComfyUI



Install Wan 2.2 and FLUX Krea with literally one click and use our pre-made, highest-quality presets. I did literally hundreds of parameter tests to bring you the best Wan 2.2 and FLUX Krea configurations, so that you can generate the highest-quality videos from images or from text. Moreover, with our FLUX preset, you will be able to generate much better images than FLUX Dev by using the FLUX Krea Dev model. This tutorial shows you everything step by step in the easiest possible way.

🔗 Follow the link below to download the zip file that contains the SwarmUI installer and the AI models downloader Gradio app - the one used in the tutorial for downloading models, presets, and the prompt generator guide txt ⤵️

▶️ https://www.patreon.com/posts/SwarmUI-Installer-AI-Videos-Downloader-114517862

▶️ How to install SwarmUI main tutorial: https://youtu.be/fTzlQ0tjxj0

🔗 Follow the link below to download the zip file that contains the ComfyUI 1-click installer with Flash Attention, Sage Attention, xFormers, Triton, DeepSpeed, and RTX 5000 series support ⤵️

▶️ https://www.patreon.com/posts/Advanced-ComfyUI-1-Click-Installer-105023709

▶️ RunPod SwarmUI & ComfyUI Install Tutorial: https://youtu.be/R02kPf9Y3_w

▶️ Massed Compute SwarmUI & ComfyUI Install Tutorial: https://youtu.be/8cMIwS9qo4M

🔗 Python, Git, CUDA, C++, FFMPEG, MSVC installation tutorial - needed for ComfyUI ⤵️

▶️ https://youtu.be/DrhUHnYfwC0

🔗 SECourses Official Discord 10500+ Members ⤵️

▶️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388

🔗 Stable Diffusion, FLUX, Generative AI Tutorials and Resources GitHub ⤵️

▶️ https://github.com/FurkanGozukara/Stable-Diffusion

🔗 SECourses Official Reddit - Stay Subscribed To Learn All The News and More ⤵️

▶️ https://www.reddit.com/r/SECourses/

Video Chapters

00:00:00 Introduction: The Ultimate Wan 2.2 Tutorial with Optimized Presets

00:01:03 Free Prompt Generation Tool & Introducing the New FLUX Krea Dev Model

00:02:01 How SwarmUI & ComfyUI Enable Video Generation on Low-End Hardware

00:02:46 Quick Start Guide: Downloading the Latest SwarmUI & ComfyUI Installers

00:03:10 Step-by-Step: How to Update or Perform a Fresh Installation of ComfyUI

00:03:51 Step-by-Step: How to Update or Perform a Fresh Installation of SwarmUI

00:04:18 Essential Setup: Configuring the SwarmUI Backend for ComfyUI

00:04:53 One-Click Setup: Downloading All Required Wan 2.2 Models Automatically

00:05:46 Importing the Ultimate SwarmUI Presets Pack for Best Results

00:06:22 Wan 2.2 Image-to-Video Generation: A Complete Step-by-Step Guide

00:07:33 How to Generate Amazing, Detailed Prompts for Free with Google Studio AI

00:08:12 Starting Your First Generation & How to Monitor Logs for Errors

00:08:53 Pro Tip: How to Fix Low GPU Utilization and VRAM Issues for Max Speed

00:10:32 Wan 2.2 Text-to-Video: Choosing the Right Preset & Workflow

00:11:22 Generating a Detailed Dinosaur Animation Scene from a Simple Text Prompt

00:12:15 In-Depth Analysis: 8 Steps vs 20 Steps & The Impact of LoRA on Quality

00:13:11 Finding the Best Parameters: A Deep Dive into CFG Scale & Step Counts

00:13:42 Advanced Optimization: Using TeaCache for Text-to-Video Generation

00:15:31 FLUX Krea Dev vs FLUX Dev: A Detailed Side-by-Side Image Comparison

00:16:26 How to Easily Train Your Own LoRAs on the New FLUX Krea Dev Model

00:17:02 Complete Workflow for Generating High-Quality Images with FLUX Krea Dev

00:18:20 The Final Verdict: Side-by-Side Result of FLUX Krea Dev vs FLUX Dev

00:19:20 An Experiment: Attempting to Generate Still Images with Wan 2.2

00:21:18 Final Thoughts, Summary, and What's Coming Next in Future Tutorials

Wan Development Team Announces Wan2.2 Video Generation Model

The Wan development team has announced the release of Wan2.2, positioning it as a significant advancement in their video generation model series. The team highlights four key technical improvements in this iteration.

The model introduces a Mixture-of-Experts (MoE) architecture specifically designed for video diffusion models. According to the developers, this approach separates the denoising process across different timesteps using specialized expert models, which they claim expands overall model capacity while maintaining computational efficiency.
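As a rough illustration of the idea (the function name and the exact 50% boundary below are assumptions for this sketch; later in this tutorial, the presets are described as switching from the high-noise to the low-noise model halfway through sampling):

```python
def pick_expert(step: int, total_steps: int, boundary: float = 0.5) -> str:
    """Hypothetical routing: which expert model denoises a given sampling step.

    Early (noisier) timesteps go to the high-noise expert, the rest
    to the low-noise expert.
    """
    progress = step / total_steps
    return "high_noise_expert" if progress < boundary else "low_noise_expert"

# For a 20-step schedule, steps 0-9 go to the high-noise expert,
# steps 10-19 to the low-noise expert.
schedule = [pick_expert(s, 20) for s in range(20)]
```

Because only one expert is active per timestep, total capacity grows while per-step compute stays roughly the same as a single dense model.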

Wan2.2 incorporates what the team describes as "cinematic-level aesthetics" through curated training data that includes detailed annotations for visual elements such as lighting, composition, contrast, and color tone. The developers state this enables more precise control over cinematic style generation and customizable aesthetic outputs.

The training dataset for Wan2.2 represents a substantial expansion compared to its predecessor, with the team reporting increases of 65.6% more images and 83.2% more videos in the training corpus.

Some background music by NoCopyrightSounds : https://gist.github.com/FurkanGozukara/681667e5d7051b073f2e795794c46170

Video Transcription

  • 00:00:03 Greetings everyone. As you know, Wan 2.2 was published a few days ago,

  • 00:00:09 and since it was published, I have been relentlessly testing and analyzing it

  • 00:00:15 to bring you the easiest installation and the very best workflow and presets,

  • 00:00:22 so that you can use it directly on your computer in the most accurate and easiest way.

  • 00:00:30 To find out the very best hyperparameters, the  very best configuration, the very best preset, I  

  • 00:00:35 did literally, literally hundreds of generations,  and the results are just amazing. As you see,  

  • 00:00:43 I did tens of grid testing in SwarmUI to find out  what is working and what is not working because  

  • 00:00:51 this is a new model. Many of the workflows  you will find elsewhere will be sub-optimal  

  • 00:00:56 because testing this model to find out the very  best parameters is really taking huge time.  

  • 00:01:03 I used a cloud machine with 8 GPUs to test all of the possible parameter combinations.

  • 00:01:10 Moreover, I will show you how to generate amazing  prompts with our video models prompt generation  

  • 00:01:16 TXT file for free inside Google Studio AI. All  you need will be just upload your image or type  

  • 00:01:24 whatever the action scene you want, and it will  generate a very detailed, amazing prompt for Wan  

  • 00:01:31 2.2 or Wan 2.1 models. Moreover, FLUX Krea  Dev has been published just yesterday,  

  • 00:01:37 and I also have presets for it and one-click to  download and install and use it. I also made a  

  • 00:01:44 very good comparison, and this model is just  amazing. So we are going to see all of them.

  • 00:01:49 I can assure you that this is the very best Wan 2.2 tutorial that you will find, the easiest one

  • 00:01:55 to install and follow and get the best results.  Even though this model is very powerful, we are  

  • 00:02:01 utilizing the ComfyUI backend inside SwarmUI so  that we are able to generate amazing videos with a  

  • 00:02:10 minimal amount of hardware power. So even if your  computer is not very powerful, you will be still  

  • 00:02:16 able to generate these amazing quality videos.  We already have SwarmUI and ComfyUI installation  

  • 00:02:24 video, so check it out for a full tutorial of how  to install. This is the tutorial, the link will  

  • 00:02:29 be in the description of the video. Moreover, if  you want to use SwarmUI and ComfyUI on RunPod, we  

  • 00:02:34 also have an installation tutorial for that. And finally, in this tutorial, I show

  • 00:02:40 how to install and use SwarmUI and ComfyUI on Massed Compute. So we have full installation tutorials.

  • 00:02:46 Therefore, I will be quick in this video. Download  the latest SwarmUI model downloader from the link,  

  • 00:02:54 it will be in the description of the video, and  download the latest ComfyUI installation zip file  

  • 00:03:00 from the post, like this. It will also be in the description of the video. So first of all,

  • 00:03:05 you need to update your existing installation  or make a fresh installation. Actually,  

  • 00:03:10 both of them are extremely easy. First, let's  update our ComfyUI. So copy the zip file,  

  • 00:03:16 go back into wherever you have installed  like this, paste it there, right-click,  

  • 00:03:21 and I will extract all files here and overwrite.  Then all I need to do is just double-click and  

  • 00:03:26 run the windows_update_comfyui.bat file, run. It  will update the ComfyUI to the latest version,  

  • 00:03:33 and I am ready. If you are making a fresh  installation, all you need to do is just  

  • 00:03:38 double-click and run the windows_install.bat  file. It will install the ComfyUI. I recommend  

  • 00:03:43 Python 3.10 and install with both Flash Attention  and Sage Attention because we are utilizing both.
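If you want to verify the installation afterwards, a quick sanity check is to probe for the two attention packages from the ComfyUI Python environment (assuming the usual PyPI module names `flash_attn` and `sageattention`; the helper name is just for illustration):

```python
import importlib.util

def attention_backends_available() -> dict:
    """Report whether Flash Attention and Sage Attention can be imported."""
    return {name: importlib.util.find_spec(name) is not None
            for name in ("flash_attn", "sageattention")}

print(attention_backends_available())
```

If either entry prints `False`, re-run the installer and pick the option that builds both backends.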

  • 00:03:51 For the SwarmUI, copy the downloaded zip file,  then move it into your existing installation,  

  • 00:03:57 extract all the files, overwrite the previous  ones, then windows_update_swarmui.bat file.  

  • 00:04:02 It will update SwarmUI to the latest version  and it will start the instance of SwarmUI. If  

  • 00:04:08 you are making a fresh installation,  you should just double-click and run  

  • 00:04:11 the windows_install_swarmui.bat file. After  SwarmUI started, you need to add your backend.  

  • 00:04:18 Let me demonstrate how to add your backend. So I will delete both of my

  • 00:04:22 existing backends like this. You will get this empty screen. ComfyUI self-starting,

  • 00:04:28 okay. You need to give the ComfyUI path. My ComfyUI is installed here, so I copy its path,

  • 00:04:34 paste it here, then this backslash, main.py,  and in the extra arguments, I'm going to use  

  • 00:04:41 --use-sage-attention to get the maximum speed  and save. This will be running on my first GPU.  

  • 00:04:47 If I add another instance, same way, and if I  make it GPU ID 1, it will use the second GPU.
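SwarmUI exposes these as separate backend fields (ComfyUI path, GPU ID, extra arguments). As an illustrative sketch only, the resulting launch arguments per backend look roughly like this (`--cuda-device` and `--use-sage-attention` are ComfyUI launch flags; the helper itself is hypothetical, not SwarmUI's actual config format):

```python
def backend_launch_args(comfy_dir: str, gpu_id: int,
                        extra_args=("--use-sage-attention",)) -> list:
    """Assemble the pieces described above: main.py path, GPU id, extra flags."""
    return [f"{comfy_dir}/main.py", f"--cuda-device={gpu_id}", *extra_args]

# One backend per GPU: GPU 0 and GPU 1, both with Sage Attention enabled.
backends = [backend_launch_args("C:/ComfyUI", g) for g in (0, 1)]
```

Adding a second backend with GPU ID 1 is what lets SwarmUI spread generations across both cards.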

  • 00:04:53 Okay, so how you are going to use the Wan  2.2? First of all, you need to download  

  • 00:04:57 the models. So for downloading the models,  windows_start_download_models_app.bat file,  

  • 00:05:02 run. It will start our updated downloader. This  is the latest version, version 6. We have improved  

  • 00:05:08 it significantly. All you need to do is just  SwarmUI bundles and download the Wan 2.2 Core 8  

  • 00:05:16 Steps bundle. This will download the following models into the correct folders automatically

  • 00:05:21 with full speed, then you will be able to use  them. The speed will be really, really fast  

  • 00:05:26 depending on your network, and if a file has already been downloaded, it will skip it. Moreover,

  • 00:05:32 you can use search. For example, let's say you  want to download the Krea model, just type Krea  

  • 00:05:39 here, and you see Krea FLUX Dev. So download it as  well because I am also going to show you Krea too.

  • 00:05:46 Once your downloads have been completed, go  to your presets, import preset, choose file,  

  • 00:05:52 go back to your installation folder, select the  amazing SwarmUI presets, currently version 7 is  

  • 00:05:58 latest, overwrite existing presets, which  I recommend. If you have your own presets,  

  • 00:06:04 then you can change their name to not have  conflicts. Currently, it failed for some reason,  

  • 00:06:09 let me refresh. Sometimes it may happen. Import  preset, choose file, amazing presets, okay,  

  • 00:06:15 overwrite, import. And then you will get all  these amazing presets that we set up for you.

  • 00:06:22 Okay, as a demonstration, let's begin with  image-to-video. First of all, Quick Tools,  

  • 00:06:27 Reset Params to Default. This is so important.  Quick Tools, Reset Params to Default. Then,  

  • 00:06:32 you see there is Wan 2.2 Image to Video 8 Steps.  8 steps is working amazing with Wan 2.2. However,  

  • 00:06:39 with text-to-video, not yet. We need a new  LoRA. I tested all the quick LoRAs, they are  

  • 00:06:45 not at a good level yet. So click here and direct  apply. You see, I click this three hamburger menu,  

  • 00:06:51 direct apply. It will set all the parameters.  All you need is setting up your init image here.  

  • 00:06:57 Choose file, and let's use this image as a test.  Resolution, use closest aspect ratio. You see,  

  • 00:07:04 it set the resolution. Then what you can change,  you can change the resolution from here. You can  

  • 00:07:11 change the number of frames. Currently, it is set to 73 frames, so it will be about 3 seconds because the

  • 00:07:18 video FPS is 24. And where do you set the FPS? The FPS is set under Advanced Video. So you can set as many

  • 00:07:26 frames as you want, up to 121, which is about 5 seconds. Then you need to type your prompt.
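The frame-count math here is simple: duration is frames divided by FPS (the helper name is just for illustration):

```python
def clip_duration_seconds(frames: int, fps: int = 24) -> float:
    """Video length in seconds at the preset's default 24 FPS."""
    return frames / fps

# 73 frames is about 3 seconds; the 121-frame maximum is about 5 seconds.
```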

  • 00:07:33 For prompting, I have prepared an amazing file.  So go to the Google Studio AI, this is free,  

  • 00:07:38 and in the installation folder, you will see that we have the video models prompt generator guide.

  • 00:07:42 Then upload your test image. You don't  even need to type anything. By the way,  

  • 00:07:48 Google Studio AI is currently free to use, you  don't need anything. So run, and it will generate  

  • 00:07:53 a prompt for us. If you want a specific style,  specific action, you can just describe it here  

  • 00:08:00 slightly, and it will still generate an amazing  prompt for you. Okay, we got a very detailed,  

  • 00:08:06 amazing prompt as you are seeing. So I returned  back to my SwarmUI and I copy-paste my prompt  

  • 00:08:12 and I hit generate. The initial start of the  generation totally depends on your hard drive  

  • 00:08:17 and your RAM amount. So it will first load the  models, then it will start the generation. If  

  • 00:08:22 your hard drive is not an SSD, it will take huge  time. And you see, we will see the preview here.

  • 00:08:27 To monitor what is happening with the generation,  go to server, go to logs, select the debug from  

  • 00:08:33 here, and follow what is happening. You will also  get this message because we are using Wan 2.1 fast  

  • 00:08:40 LoRA at the moment. However, it is still working  partially and we are getting good results. I  

  • 00:08:46 compared it with and without LoRA, and LoRA  is really working with image-to-video models.  

  • 00:08:53 However, with text-to-video, it is not that good,  I will show. Currently, because I am recording a  

  • 00:08:58 video and I have other stuff, it is not utilizing  the block swapping accurately. Therefore, you see  

  • 00:09:04 I am not getting a full power of my GPU. So either  I need to restart my computer, stop recording,  

  • 00:09:10 and do everything, or I need to edit my backend  and I need to add --reserve-vram like 10 GB,  

  • 00:09:18 save. It will restart the backend and start a  new generation. So follow your watt usage from  

  • 00:09:25 here to fully utilize your GPU. If you are not  seeing high watt usage here, it means that it  

  • 00:09:32 is using shared VRAM, not block swapping. It  can happen in some scenarios, usually when you  

  • 00:09:38 are running a lot of other stuff on your computer.  So pay attention to that. Now if I just generate,  

  • 00:09:45 it should utilize my GPU much better. Let's  see. Let's follow the watt usage from here.  

  • 00:09:50 The beauty of SwarmUI is that it utilizes ComfyUI, therefore it has all the optimizations

  • 00:09:57 and improvements of ComfyUI. And yes, you see I am utilizing 525 watts

  • 00:10:05 out of 575 watts, and this is great utilization.  So if you are utilizing like 80-85% of your GPU,  

  • 00:10:13 it's a good one, and the preview already started  to appear. So this generation will be exactly like  

  • 00:10:19 this one. You see the prompt is also here, and if I set everything else, it will also be the same.
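The utilization check described above boils down to comparing power draw against the card's rated maximum (a hypothetical helper; the 525 W and 575 W figures are the ones shown in the video):

```python
def gpu_power_utilization(draw_watts: float, tdp_watts: float) -> float:
    """Power draw as a fraction of the card's rated maximum."""
    return draw_watts / tdp_watts

# 525 W out of 575 W is about 91% -- comfortably above the ~80%
# level suggested as healthy in the tutorial.
utilization = gpu_power_utilization(525, 575)
```

If this number stays low during sampling, the backend is likely spilling into shared VRAM instead of block swapping.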

  • 00:10:25 And how are you going to generate text-to-video? So let's interrupt all sessions from here. For

  • 00:10:32 text-to-video, I will first do Quick Tools, Reset  Params to Default. Then, from the presets, there  

  • 00:10:38 are two presets for text-to-video. Currently,  there isn't a proper LoRA for speeding up the  

  • 00:10:45 text-to-video. I have tested all the LoRAs; they are made for Wan 2.1 and are not working

  • 00:10:50 really well in Wan 2.2. But I learned that a new LoRA will be published soon and I will update my

  • 00:10:56 downloader and preset after that, hopefully it  is published. So we have Wan 2.2 Text to Video 8  

  • 00:11:02 Steps, and we have Wan 2.2 High Quality 20 Steps.  So currently I recommend 20 steps, but if you  

  • 00:11:10 don't want to wait, you can use 8 steps. Both work exactly the same way. So click here, direct apply,

  • 00:11:16 but don't forget Quick Tools, Reset Params to  Default. Okay, direct apply. Then what we need  

  • 00:11:22 is typing our prompt here. Again, we can use our  prompt instructor file here. We can do like this,  

  • 00:11:30 so I will delete this, delete this, and delete  this, and I will type, "Generate me an epic scene  

  • 00:11:37 about a dinosaur." Okay, like this, and run. Then  this will generate me a really good scene prompt  

  • 00:11:45 that includes a dinosaur. Whatever the prompt you  want, this is a really, really good one. You see,  

  • 00:11:50 this is 13,000 tokens, so this is a very big  file that says how to generate a prompt. Okay,  

  • 00:11:58 we got an amazing prompt. So now all  I need to do is just type it here,  

  • 00:12:02 then I need to set my frame count. The frame count  in text-to-video is set from here. So you can  

  • 00:12:08 change it to increase or decrease your duration  and generate. Now it should start generation.

  • 00:12:15 And what kind of results you can expect? Like this  one. You see, this is such an amazing animation.  

  • 00:12:21 And what is the difference between 8 steps and 20  steps? So these are the LoRA comparisons. None of  

  • 00:12:29 them is looking really good. Perhaps this one  is best, which we are using. So let me show you  

  • 00:12:34 a proper comparison of 8 steps versus 20 steps.  The top one has LoRA and the bottom one doesn't  

  • 00:12:42 have a LoRA. You see, this is the difference. LoRA significantly changes the color, and the quality

  • 00:12:49 is not that great. When you look closely, you will notice that the texture is not as good as at

  • 00:12:55 20 steps. And as you do more steps, of course, it gets better. For example, I have tested 20 steps,

  • 00:13:02 25 steps, 30 steps, 40 steps, 50 steps, and  the quality was excellent at the 40 steps. I  

  • 00:13:11 also have compared CFG scales. As you are seeing, CFG 3.5 performed best for me. CFG 5 is not that

  • 00:13:18 great and CFG 7 broke it. Many people compare no-LoRA workflows with CFG 1, and they are

  • 00:13:25 making a mistake. When you don't use the fast LoRA, the CFG has to be bigger than 1. You see the 20

  • 00:13:32 steps is generating excellent videos, excellent  animation. If your animation is not that great,  

  • 00:13:37 that is because of the LoRA that you are  using, fast LoRA. I also have compared  

  • 00:13:42 TeaCache. Currently, TeaCache is not working  with Wan 2.2 image-to-video, but it is working  

  • 00:13:47 with text-to-video. With 10% threshold and  20 steps, the result is really, really good,  

  • 00:13:54 still good. So you can also use TeaCache. However, when I increased it to 15% or 20%,

  • 00:14:01 it broke the animation a lot until 50 steps were done. And this is the 50 steps result. This

  • 00:14:09 is the very best result I can say. 50 steps, 15%  TeaCache threshold, and this is just an amazing  

  • 00:14:17 animation if you ask my opinion. So I recommend  you to test TeaCache as well if you are patient  

  • 00:14:24 to see which result is performing best. Also, our  text-to-video is being generated. Oh, by the way,  

  • 00:14:31 don't forget to set your resolution from here. You see, currently it is generating at 960 by 960.

  • 00:14:38 I can set it like this or like this, and it will  generate in that resolution. The generation is not  

  • 00:14:44 that slow. Currently, even though I am recording a video and doing a lot of other stuff, it is 21

  • 00:14:48 seconds per iteration for 73 frames and HD resolution. And I am utilizing, let's see, yes, it is changing

  • 00:14:57 the model. My presets are using both high noise  and low noise Wan 2.2 models. So it will do the  

  • 00:15:04 first 50% of the generation with the base model,  then it will switch to the other model. Okay,  

  • 00:15:11 the video has been generated. Yeah, it's decent quality, and we used the 8 steps LoRA. You see,

  • 00:15:19 the animation is actually really good. You see the animation at the back and the video it generated.

  • 00:15:24 So the prompt that has been generated with our  prompting guidance was really, really great.
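As a back-of-the-envelope estimate (assuming sampling dominates and ignoring model loading and VAE decode), total generation time is roughly steps times seconds per iteration; the helper name is hypothetical:

```python
def estimated_sampling_seconds(steps: int, seconds_per_iteration: float) -> float:
    """Rough wall-clock sampling time: step count times per-iteration cost."""
    return steps * seconds_per_iteration

# At the ~21 s/it observed in the video, the 8-step preset samples
# in roughly 168 seconds; the 20-step preset would take about 420.
```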

  • 00:15:31 Okay, what about FLUX Krea Dev? I have tested  the FLUX Krea Dev with seven different prompts,  

  • 00:15:38 and on the left, we see the FLUX Krea Dev, and  on the right, we see the FLUX Dev. The results  

  • 00:15:43 are significantly better with FLUX Krea Dev. We  can see that this is way better looking. This has  

  • 00:15:48 more quality, realism. And another instance, you  see this is supposed to be like a real picture.  

  • 00:15:55 The FLUX is like 3D render. However, the FLUX  Krea Dev is very realistic. Another instance,  

  • 00:16:01 again, we see that FLUX Krea Dev is just amazing.  You see the prompt is here. By the way, I used  

  • 00:16:07 the prompts that I generated for Wan 2.2 with  the guidance we made, but the prompts are just  

  • 00:16:13 working amazing. This is another case. In every  case, FLUX Krea Dev is yielding better results,  

  • 00:16:19 even in anime, you see. So FLUX Krea Dev is a  significant improvement. Moreover, since it is  

  • 00:16:26 using the same architecture, our LoRA training and  DreamBooth training is fully working. Hopefully,  

  • 00:16:31 I will make a comparison video to show you, but  right away, you can use our Kohya installer, our  

  • 00:16:37 Kohya training presets, and train with FLUX Krea  Dev. All you need to do is just changing the base  

  • 00:16:44 model to FLUX Krea Dev, nothing else. And you see,  this is another instance. And I also added FLUX  

  • 00:16:50 Krea Dev to the downloads of our Kohya installer,  so it will be automatically downloaded for you  

  • 00:16:56 so that you won't have any issues and you will be  able to right away start using this amazing model.

  • 00:17:02 And how to generate images with FLUX Krea Dev?  What is the proper workflow? The proper workflow  

  • 00:17:07 is so easy. Again, go to Quick Tools, Reset  Params to Default, then go to presets and use our  

  • 00:17:14 official FLUX Dev preset. So click here, direct  apply. Then from models, you need to select FLUX  

  • 00:17:21 Krea Dev, and let's use this prompt to generate  an image. And our resolution is this. Okay,  

  • 00:17:27 hit generate. Okay, it says that we are missing  the model. Okay, let's generate again. Yes,  

  • 00:17:33 it is starting to generate. The preset might  have different model naming when it is saved. So  

  • 00:17:40 therefore, pay attention to the model names, just  change the preset selected model. You can also use  

  • 00:17:46 GGUF models. I also added them to the downloader,  but if you use GGUF, make sure to edit metadata  

  • 00:17:53 and set the architecture correctly if it is not recognized correctly. After saving and changing

  • 00:18:00 the metadata, you can restart the SwarmUI and  it will be fully usable in the preset. Okay,  

  • 00:18:06 the image is being generated right now, so let's  see what we will get. By the way, Wan 2.2 is also  

  • 00:18:13 really realistic. We can also generate images with  Wan 2.2. I will also show you that in a moment.  

  • 00:18:20 And we got the result. So this is the result of  FLUX Krea Dev. Let's also compare with the FLUX  

  • 00:18:26 Dev. So I will just change the model. I made the  reuse parameters, so we can make a comparison.  

  • 00:18:31 And Sage Attention is working with FLUX models, so it is really fast. Let's see the speed. Yeah,

  • 00:18:38 the speed was 1.20 iterations per second, even though I am running a lot of other stuff that uses the GPU. Yes,

  • 00:18:45 it is loading the model, and it started  generation. By the way, FLUX Dev and FLUX Krea  

  • 00:18:50 Dev are using the same amount of VRAM, RAM, same  speed, exactly same architecture, but they just  

  • 00:18:57 fine-tuned the model and significantly fine-tuned  the model to make it a better model, and it is  

  • 00:19:02 really a better model if you ask my opinion. So  whatever the workflow you had with FLUX Dev should  

  • 00:19:08 work with FLUX Krea Dev. Okay, we are getting  the image. And yes, this was FLUX Krea Dev,  

  • 00:19:13 and this is the FLUX Dev. You see, the difference is extremely significant.

  • 00:19:20 Let's also try to generate this image with Wan 2.2. How can you do that? Again, Quick Tools,

  • 00:19:26 Reset Params to Default, then go to presets and  select our high-quality preset, direct apply.  

  • 00:19:33 All I need to do is now I will type my prompt and  I will change the number of frames. So currently,  

  • 00:19:41 the number of frames is set here. So I'm going to make it one frame, and let's save as WebP and

  • 00:19:48 generate. Let's see what we are going to get.  Always monitor what is happening from server,  

  • 00:19:52 logs, debug menu. Make sure to change this  to debug to see what is happening. This  

  • 00:19:57 should also be pretty fast. Yes. You see, even though I am doing 20 steps,

  • 00:20:01 it is 1.1 seconds per iteration. It is almost equal to FLUX, a little bit slower. However,

  • 00:20:07 it will change a model because this is a two-model  setup, the new Wan 2.2 models. And after this,  

  • 00:20:14 I will try a bigger resolution to see what  is happening. So for example, let's go back  

  • 00:20:19 to our resolution and let's make it custom, 1920 by 1080. Okay, generate. Okay, this is

  • 00:20:26 the first image we generated. Yes. So currently,  we need a better workflow for Wan 2.2 generation,  

  • 00:20:34 for image generation. Hopefully, I will also  research this and bring you the accurate preset  

  • 00:20:40 to generate images with Wan 2.2. It is for some  reason not working yet. I need to research this,  

  • 00:20:46 but this is the logic. This works really well with Wan 2.1, but probably we need to change some

  • 00:20:52 other parameters for Wan 2.2. We will see that.  Let's see the higher resolution as well. I wonder  

  • 00:20:57 what we will get. Maybe the higher resolution  will be better. The speed is still decent,  

  • 00:21:01 you see, 2.5 seconds per iteration. That is expected; we increased the resolution to double,

  • 00:21:06 almost double, maybe more than double actually.  Maybe we need more steps. And this is the higher  

  • 00:21:12 resolution image generation. As I said, hopefully  I will research this, so stay subscribed. Ask any  

  • 00:21:18 questions that you have. Hopefully, see you in  another more amazing tutorial video. And you  

  • 00:21:23 can use all these presets. We have tutorials  for almost literally everything. You can just  

  • 00:21:28 message me from Patreon, YouTube, or Discord, and I will reply to you as soon as possible.
