
How To Do SDXL LoRA Training On RunPod With Kohya SS GUI Trainer and Use LoRAs With Automatic1111 UI

If you don't have a GPU, or don't have a strong one, or you are using a Mac that does not support Stable Diffusion or SDXL training, then this is the tutorial you are looking for. In this tutorial, we will use RunPod, a cheap cloud GPU service provider, to run both the Stable Diffusion Web UI (Automatic1111) and the Kohya SS GUI trainer to train SDXL LoRAs. By watching this tutorial, you will be able to train and generate images as easily as on your own computer by using the RunPod cloud service.

1 click Auto Kohya Installer ⤵️

https://www.patreon.com/posts/84898806

Tutorial GitHub Readme File ⤵️

https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/How-To-Install-Kohya-LoRA-Web-UI-On-RunPod.md

SECourses Discord To Get Full Support ⤵️

https://discord.com/servers/software-engineering-courses-secourses-772774097734074388

My LinkedIn ⤵️

https://www.linkedin.com/in/furkangozukara/

My Instagram ⤵️

https://www.instagram.com/gozukarafurkan/

My Medium ⤵️

https://medium.com/@furkangozukara

My CivitAI ⤵️

https://civitai.com/user/SECourses

Hopefully I will do more research on SDXL training. I will investigate training only the U-Net without the text encoder, and I also plan to build a workflow for celebrity-name-based training. Furthermore, full SDXL DreamBooth training is on my research and workflow preparation list. Stay subscribed for all of it.

00:00:00 Introduction to easy tutorial of using RunPod to do SDXL training

00:01:55 How to start your RunPod machine for Stable Diffusion XL usage and training

00:03:18 How to install Kohya on RunPod with a single click installer

00:04:22 Important things about using SDXL on RunPod

00:05:38 Step by step installation of Kohya SS GUI trainer on RunPod

00:07:08 How to terminate automatically started Automatic1111 Instance to free up VRAM

00:07:26 How to change relauncher.py file to prevent infinite auto relaunch of Automatic1111 Web UI

00:08:00 Where do you need to download and put Stable Diffusion model and VAE files on RunPod

00:08:58 How to download Stable Diffusion models into RunPod

00:09:25 How to terminate previously started Automatic1111 Web UI instance to free up VRAM before training

00:10:03 How to start Kohya GUI after installation

00:10:42 How to save and load Kohya training configuration on RunPod

00:10:54 How to upload and set your training images

00:12:01 How to install runpodctl

00:13:15 Why runpodctl is useful

00:13:51 How to configure Kohya LoRA training on RunPod for Stable Diffusion XL

00:15:58 What is the importance and effect of Network Rank (Dimension)

00:19:41 How to update Automatic1111 Web UI on RunPod

00:20:04 How to tell whether your Kohya training has completed

00:21:57 How to start using your trained or downloaded SDXL LoRA models

00:22:13 Where the training checkpoint files are saved

00:22:46 How you should connect to Automatic1111 Web UI interface on RunPod for image generation

00:23:15 How to set best Stable Diffusion VAE file for best image quality

00:23:33 How to set full precision VAE on RunPod --no-half-vae

00:24:18 Where to find good Stable Diffusion prompts for SDXL and SD 1.5 models

00:25:05 How to activate your trained LoRA via appending to the prompt

00:25:47 How to improve faces / fix them via using Adetailer automatic inpainting extension

00:26:44 How to reload last used prompt and all settings with 1 click

00:26:52 How to enable and use adetailer extension

00:27:06 In which folder generated images are saved and how to download them very fast on RunPod

00:29:24 How to test face quality and verify your trained LoRA or DreamBooth models / checkpoints

00:30:10 How to do prompt engineering to improve and fix faces

00:32:06 How to join our helpful community Discord and how to contact me

Video Transcription

  • 00:00:00 Greetings everyone.

  • 00:00:01 In this video I will show you how you can use RunPod services to do Kohya SS LoRA

  • 00:00:09 training on SDXL even if you don't have any GPU.

  • 00:00:14 Because this is a cloud service, the tutorial works both on RunPod Stable Diffusion Web

  • 00:00:19 UI template and also RunPod Fast Stable Diffusion template.

  • 00:00:24 So I will show you step-by-step installation and training with Kohya SS GUI on RunPod.

  • 00:00:31 As you are seeing right now, I will show you the important steps, the tips and tricks.

  • 00:00:37 Moreover, I will show you how you can update Automatic1111 Web UI to the latest version

  • 00:00:44 and use best VAE and generate amazing pictures like this.

  • 00:00:48 I will show you the LoRA training logic of SDXL by using Kohya GUI Trainer on RunPod.

  • 00:00:55 Moreover, I will show you the extension of After Detailer (ADetailer) to automatically

  • 00:01:02 fix and improve faces of distant shots.

  • 00:01:05 With ADetailer, it has automatic face inpainting, but it also has automatic hand inpainting

  • 00:01:13 as well.

  • 00:01:14 So you can also test this, but I didn't find it very useful.

  • 00:01:17 So it is up to you to test.

  • 00:01:19 After watching this tutorial, you will be able to generate amazing quality images like

  • 00:01:24 this.

  • 00:01:25 And all of the training will be done by only using RTX 3090 Pod, which is a pretty cheap

  • 00:01:33 Pod, only 35 cents per hour.

  • 00:01:36 So I have prepared a very detailed GitHub readme file that I will share all of the links

  • 00:01:42 and instructions to follow this tutorial.

  • 00:01:45 Actually, this file was used in my latest RunPod Kohya tutorial.

  • 00:01:50 So I updated it.

  • 00:01:51 I will show both automatic installation and step-by-step manual installation.

  • 00:01:56 Let's go to our RunPod account from here.

  • 00:01:59 Let's login.

  • 00:02:00 I will start 2 machines and I will do both automatic and manual installation.

  • 00:02:06 You can choose either Stable Diffusion Web UI template or RunPod Fast Stable Diffusion

  • 00:02:13 template.

  • 00:02:14 Both of them are working.

  • 00:02:16 When you are watching this tutorial, you may see different versions.

  • 00:02:20 Don't worry.

  • 00:02:21 I will update the GitHub readme file if it becomes necessary.

  • 00:02:25 So let's begin with the Web UI version.

  • 00:02:28 Let's continue.

  • 00:02:29 Deploy.

  • 00:02:30 Okay.

  • 00:02:31 You see, we got an error.

  • 00:02:32 So we need to pick another Pod.

  • 00:02:34 Sometimes this happens.

  • 00:02:35 So I will select extreme speed.

  • 00:02:37 I am using RTX 3090 because it is pretty good.

  • 00:02:42 Also pretty cheap.

  • 00:02:43 I don't suggest using the RTX 4090 at the moment because of the drivers.

  • 00:02:49 I think it is not working at the full performance.

  • 00:02:52 So let's go with RTX 3090.

  • 00:02:54 Okay.

  • 00:02:55 Template is selected.

  • 00:02:56 Let's deploy.

  • 00:02:57 Okay.

  • 00:02:58 Let's go to my Pods.

  • 00:02:59 Let's say this will be auto installation.

  • 00:03:02 Okay.

  • 00:03:03 Let's start another Pod quickly.

  • 00:03:05 I'm going to use same template.

  • 00:03:07 Let's go to my Pods.

  • 00:03:09 Installation is exactly the same for the Web UI template and the Fast Stable Diffusion template.

  • 00:03:15 So this will be manual.

  • 00:03:17 Wait until the connect button appears here.

  • 00:03:21 After you get the connect button, click it.

  • 00:03:24 Connect to JupyterLab.

  • 00:03:25 So the JupyterLab is loaded.

  • 00:03:28 Let's begin with automatic installation.

  • 00:03:30 Open this post.

  • 00:03:32 Everything is also written here.

  • 00:03:34 It is so easy to use.

  • 00:03:36 We will download these three attachments.

  • 00:03:39 Let's download them.

  • 00:03:41 Then upload them into the workspace.

  • 00:03:43 Click this upload icon.

  • 00:03:45 Select them and upload as you are seeing right now.

  • 00:03:48 Then open a new terminal.

  • 00:03:49 Then you need to only execute these 2 commands.

  • 00:03:53 It will totally automatically install Kohya and download necessary models.

  • 00:03:59 So copy this.

  • 00:04:01 Paste it.

  • 00:04:02 Hit enter.

  • 00:04:03 It will start doing processing.

  • 00:04:04 Open another terminal like this.

  • 00:04:06 Copy this.

  • 00:04:07 Paste it and hit enter.

  • 00:04:08 And nothing else you need to do for automatic installation.

  • 00:04:12 Now we can begin manual installation.

  • 00:04:15 Running the app after automatic or manual installation is exactly the same.

  • 00:04:20 You just copy this and execute it for running.

  • 00:04:23 Before showing you manual installation, there are two important things: You should download

  • 00:04:28 your models and choose them.

  • 00:04:30 Also, if you don't know how to use Kohya, how to do Kohya LoRA training, please watch

  • 00:04:37 this tutorial and also this tutorial.

  • 00:04:40 This is the SDXL tutorial that I just released about 1 day ago.

  • 00:04:46 It is 85 minutes and it has 73 chapters.

  • 00:04:50 And here is my older tutorial.

  • 00:04:52 In this tutorial, I have shown how to do training with SD 1.5 version based models such as Realistic

  • 00:05:00 Vision or DreamShaper.

  • 00:05:02 And here is my best-images-finding script tutorial video.

  • 00:05:07 And this tutorial is super important.

  • 00:05:09 If you don't know how to use RunPod, please watch this tutorial.

  • 00:05:13 This tutorial is amazing.

  • 00:05:15 It is over 100 minutes, fully chaptered.

  • 00:05:19 Watch this and it will help you significantly.

  • 00:05:21 I have put the resources links here so you can quickly download the Realistic Vision

  • 00:05:27 model from here.

  • 00:05:29 Best VAE from here and SDXL best VAE from here.

  • 00:05:34 So let's do first manual installation and download the models.

  • 00:05:37 The manual installation is also pretty straightforward.

  • 00:05:42 You need to execute each one of the commands one by one.

  • 00:05:46 So let's begin from this.

  • 00:05:47 Copy it.

  • 00:05:48 Let's connect to our manual installation pod.

  • 00:05:50 Connect to JupyterLab.

  • 00:05:52 Meanwhile, automatic installation is doing everything for us.

  • 00:05:55 Let's open terminal.

  • 00:05:57 Paste the command.

  • 00:05:58 Hit enter.

  • 00:05:59 Make sure that the command has finished completely.

  • 00:06:02 Then select the second command.

  • 00:06:04 Paste it.

  • 00:06:05 Do not skip any of the commands.

  • 00:06:07 It is asking you to continue.

  • 00:06:09 Yes.

  • 00:06:10 Then copy this command.

  • 00:06:11 Paste it.

  • 00:06:12 Then copy this command.

  • 00:06:14 Paste it.

  • 00:06:15 Hit enter.

  • 00:06:16 It will clone the Kohya-SS repository.

  • 00:06:18 Then copy this command.

  • 00:06:20 Paste it.

  • 00:06:21 We will move into the Kohya-SS folder.

  • 00:06:24 Then copy this command.

  • 00:06:25 Paste it.

  • 00:06:26 Hit enter.

  • 00:06:27 Then we will activate the virtual environment.

  • 00:06:29 Copy this command.

  • 00:06:30 Paste it.

  • 00:06:31 Virtual environment is activated as you are seeing here.

  • 00:06:35 Then copy this command.

  • 00:06:38 Execute it.

  • 00:06:39 Then copy this command and this will do the final installation.

  • 00:06:43 Execute it.

  • 00:06:44 This will take a lot of time.

  • 00:06:46 Just patiently wait until it is completely finished; it won't display the status

  • 00:06:52 of the package being installed.

  • 00:06:54 You need to wait until that package is installed, and then it will move to the next package.

  • 00:06:59 So this may take a lot of time depending on your selected RunPod.
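
For reference, the manual steps described above boil down to roughly the following shell session. This is a minimal sketch assuming the standard bmaltais/kohya_ss repository and the /workspace working directory; the exact, up-to-date commands are in the GitHub readme file linked above.

```bash
# Sketch of the manual Kohya SS GUI install on RunPod (see the readme for the exact commands).
cd /workspace

# Clone the Kohya-SS repository and move into it.
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss

# Create and activate a Python virtual environment.
python -m venv venv
source venv/bin/activate

# Install the requirements; this final step can take a long time.
pip install -r requirements.txt
```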

  • 00:07:04 Okay both automatic and manual installations are completed.

  • 00:07:09 Since we are using the SD Web UI template, we need to kill the automatically started Automatic1111

  • 00:07:16 Web UI instance.

  • 00:07:17 You see it is currently using 30% GPU.

  • 00:07:22 So we need to change the relauncher.py file.

  • 00:07:26 Changed relauncher.py file is shared on the Patreon post.

  • 00:07:30 Open it, download the relauncher.py file.

  • 00:07:34 Enter inside Stable Diffusion Web UI folder.

  • 00:07:37 Click upload icon.

  • 00:07:39 Upload the relauncher.py file, overwrite the existing one, then restart your pod.

  • 00:07:44 So if you're not my Patreon subscriber, here is the manual way.

  • 00:07:47 Enter inside Stable Diffusion Web UI folder.

  • 00:07:49 Double click relauncher.py file.

  • 00:07:51 Copy this line.

  • 00:07:54 Change the while line here like me.

  • 00:07:57 Save it.

  • 00:07:58 Don't forget to save the file, then restart the pod.

  • 00:08:01 That's it.
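
If you prefer patching the file from a terminal instead of editing it by hand, something like the sketch below works. It assumes the stock relauncher.py relaunches the Web UI inside a `while True:` loop and that a `launch_counter` variable exists; open your copy of the file first and adapt the condition to what you actually see.

```bash
cd /workspace/stable-diffusion-webui

# Keep a backup of the original file.
cp relauncher.py relauncher.py.bak

# Change the always-true loop condition so the Web UI is launched at most once.
# (Hypothetical condition; check the variable names in your relauncher.py.)
sed -i 's/while True:/while launch_counter < 1:/' relauncher.py
```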

  • 00:08:02 So the automatic installer already downloaded the Realistic Vision 5.1 model,

  • 00:08:07 the SD 1.5 best VAE file, and the SDXL best VAE file.

  • 00:08:14 Let me show you them.

  • 00:08:15 When we enter inside Stable Diffusion Web UI, inside models, inside Stable Diffusion

  • 00:08:21 folder, we will see the Realistic Vision model here.

  • 00:08:24 Also, when we check out the VAE folder, we will see the base safetensors model here.

  • 00:08:32 The SDXL base model and refiner model are included in both the Stable Diffusion Web UI template

  • 00:08:40 and Fast Stable Diffusion template, so you don't need to download them yourself.

  • 00:08:46 Now I will close my auto installation and continue with manual installation because

  • 00:08:51 the rest is almost the same.

  • 00:08:54 So let's connect to manual installation.

  • 00:08:57 Let me show you how you can download these models into the respective folders.

  • 00:09:01 For example, let's download Realistic Vision version 5.1.

  • 00:09:05 Copy it.

  • 00:09:07 Enter inside models, inside Stable Diffusion.

  • 00:09:10 Open a new terminal here, paste and it should work.

  • 00:09:13 You see it has started downloading, then to download VAE, enter inside VAE folder, open

  • 00:09:20 a new terminal, copy the command, paste it and hit enter.
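
These downloads are plain wget calls run inside the target folder, roughly as in the sketch below. The URLs are placeholders; copy the real ones from the resources section of the readme.

```bash
# The checkpoint goes into the Stable Diffusion models folder.
cd /workspace/stable-diffusion-webui/models/Stable-diffusion
wget "https://example.com/RealisticVision_v5.1.safetensors"  # placeholder URL

# The VAE files go into the VAE folder.
cd /workspace/stable-diffusion-webui/models/VAE
wget "https://example.com/sd15-best-vae.safetensors"  # placeholder URL
wget "https://example.com/sdxl-best-vae.safetensors"  # placeholder URL
```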

  • 00:09:25 So for training without using GPU memory, I will kill the automatically started Automatic1111

  • 00:09:32 Web UI instance.

  • 00:09:33 To kill it I will copy this, open a new terminal, paste it, and I will kill it.

  • 00:09:39 You will also get this message.

  • 00:09:40 It is not important.

  • 00:09:42 What is important?

  • 00:09:43 The important thing is that you will see the pid here, which is the process id.

  • 00:09:48 It is terminated.

  • 00:09:49 Also, when you check your my pods section, you should see GPU memory used 0%.

  • 00:09:56 So we will be fully able to utilize the memory of the GPU.
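
One plausible way to do the kill step from a terminal, assuming the Web UI was started through relauncher.py:

```bash
# Find the process ids of the relauncher and the Web UI it spawned.
ps -ef | grep relauncher

# Terminate the process, replacing <pid> with the process id shown above.
kill -9 <pid>
```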

  • 00:10:00 Then open a new launcher, terminal and for starting the Kohya GUI, we will copy this

  • 00:10:06 command.

  • 00:10:07 This is the same for both automatic and manual installations.

  • 00:10:10 After installation, paste it and hit enter and it will start Kohya GUI with a shared

  • 00:10:17 Gradio link.

  • 00:10:18 We will get the Gradio link here.
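
The start command is essentially: activate the virtual environment and launch the GUI with a shared Gradio link. A sketch, assuming the standard kohya_ss gui.sh launcher:

```bash
cd /workspace/kohya_ss
source venv/bin/activate

# --share makes Gradio print a public link that you can open in your browser.
./gui.sh --share --headless
```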

  • 00:10:20 Okay, let's open it and the Kohya GUI has started.

  • 00:10:25 You see LoRA, DreamBooth, all other tabs are here.

  • 00:10:28 What is different about RunPod training compared to doing it on your own computer, on a Unix system

  • 00:10:35 or Windows?

  • 00:10:37 The only difference is the paths, nothing else.

  • 00:10:39 For example, if I want to save the configuration, I need to type it here like this and click

  • 00:10:45 save.

  • 00:10:46 Then when I check the workspace folder here I will see the test1.json file.

  • 00:10:53 Now I will show you an SDXL training with my training data set.

  • 00:10:58 To do that, I will use 2700 ground truth real person images for regularization.

  • 00:11:05 I have explained all of these in very much details in this tutorial.

  • 00:11:11 So watch this tutorial to learn how to do training.

  • 00:11:15 This tutorial currently that you are watching is all about how to replicate this tutorial

  • 00:11:21 on a RunPod machine like it is on your computer.

  • 00:11:26 So the instructions to download images on RunPod are shared here.

  • 00:11:30 Okay, I have downloaded and extracted all of the images.

  • 00:11:35 Let me show you some of them.

  • 00:11:37 For example, this one, it is getting opened or this one.

  • 00:11:41 These are super high quality ground truth regularization images.

  • 00:11:46 Or maybe let's open another one.

  • 00:11:48 This one okay here, another one as you are seeing.

  • 00:11:53 So now I will upload my training images.

  • 00:11:56 For uploading and downloading, I suggest you use runpodctl.

  • 00:12:01 I have explained how to use runpodctl in this tutorial, how to install it in this tutorial.

  • 00:12:08 The runpodctl is also shared in this Patreon post, click it.

  • 00:12:13 Then you can download this auto installer and it will automatically install runpodctl for you.

  • 00:12:19 After installation when you type runpodctl like this, you should get this runpodctl message.

  • 00:12:28 Then it means it is ready.

  • 00:12:30 Now I will upload my training images by using runpodctl.

  • 00:12:34 All of them are 1024 pixels by 1024 pixels.

  • 00:12:39 I suggest you use images that all have the same dimensions.

  • 00:12:43 I have explained it in a lot of detail in this tutorial.

  • 00:12:47 So how are we going to upload them?

  • 00:12:49 First, I will make a folder here named train, then enter inside that folder, then

  • 00:12:56 I will move to the parent folder, copy the path of the folder, open a new cmd, then type runpodctl

  • 00:13:05 send, put a quotation mark like this, paste the folder path, add the closing quotation mark,

  • 00:13:11 and hit enter, and it will give you a command.

  • 00:13:14 Copy it.

  • 00:13:15 Why is this useful? Because if you had 1000 images, uploading them manually would be much harder.

  • 00:13:19 So open a terminal here, paste the command and hit enter.

  • 00:13:24 Alternatively, you can use this upload icon, select all of your files from here, and upload

  • 00:13:31 them.

  • 00:13:32 This is the alternative way, and you see all of the images are uploaded.
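
The runpodctl transfer pattern is: run send on the machine that has the files, which prints a one-time receive command, then run that command on the other machine. Roughly (the code below is illustrative, not a real one):

```bash
# On the machine that has the files (e.g., your local computer):
runpodctl send "C:\my images\train"
# It prints a one-time receive command, something like:
#   runpodctl receive 1234-example-one-time-code

# On the RunPod terminal, paste the printed command:
runpodctl receive 1234-example-one-time-code
```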

  • 00:13:36 Now I am totally ready to start training.

  • 00:13:40 The training is completely same as I have shown and explained in this tutorial video.

  • 00:13:46 The only things that change are the paths.

  • 00:13:49 So let's begin with selecting our base model.

  • 00:13:53 Select custom here.

  • 00:13:54 Let's go inside where our model is located.

  • 00:13:58 Inside here.

  • 00:13:59 Let's copy the path.

  • 00:14:00 Right click, copy path, then paste it here like this.

  • 00:14:05 Don't forget to put the slash (/) at the beginning of the path.

  • 00:14:07 People often make this mistake.

  • 00:14:10 Select SDXL model, save.

  • 00:14:13 Then go to the tools.

  • 00:14:14 Go to the deprecated tab.

  • 00:14:17 In here set your parameters.

  • 00:14:19 Let's get the path of our training images which are here.

  • 00:14:23 Let's copy the path.

  • 00:14:24 So here I copy the path.

  • 00:14:26 I will make the repeating 25.

  • 00:14:28 I will copy the path of the regularization images.

  • 00:14:31 They are also called classification images.

  • 00:14:34 Repeating 1.

  • 00:14:35 The destination.

  • 00:14:37 Now this is important.

  • 00:14:38 I will make the destination inside LoRA folder of Automatic1111 Web UI so I will be able

  • 00:14:44 to use them directly.

  • 00:14:46 So I copy the path.

  • 00:14:47 I always put the slash (/) at the beginning of the path, just here.

  • 00:14:51 Click prepare training data.

  • 00:14:53 Then here you will see the message that the folders have been generated.

  • 00:14:58 Then click copy info to folders tab.

  • 00:15:01 Go back to training, go back to folders and you will see the folders are set like this.
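
For orientation, the prepared folders follow Kohya's `<repeats>_<token> <class>` naming convention. With 25 repeats for training images and 1 repeat for regularization images, the destination folder looks roughly like this (names are illustrative):

```
<destination folder>/
├── img/
│   └── 25_ohwx man/   # training images: 25 repeats, instance token + class name
├── reg/
│   └── 1_man/         # regularization (classification) images: 1 repeat
├── log/               # training logs
└── model/             # trained LoRA .safetensors checkpoints are written here
```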

  • 00:15:08 Give a name.

  • 00:15:09 Let's say runpod_kohya.

  • 00:15:11 We are still not ready yet.

  • 00:15:12 Click save.

  • 00:15:13 Go to the parameters.

  • 00:15:15 These are the parameters that I am using currently.

  • 00:15:17 There could be better parameters.

  • 00:15:19 I am in research.

  • 00:15:20 So the number of epochs is 8.

  • 00:15:23 bf16, bf16, cache latents and cache latents to disk.

  • 00:15:27 The learning rate is 4e-4, like this.

  • 00:15:30 Constant learning rate scheduler.

  • 00:15:32 The optimizer will be Adafactor.

  • 00:15:35 I won't use bucketing because all my images have the same dimensions.

  • 00:15:40 Max resolution.

  • 00:15:41 Don't forget to change this as well.

  • 00:15:43 I am setting the text encoder learning rate and the U-Net learning rate equal to the learning rate.

  • 00:15:49 Hopefully I will also research training only UNET not training text encoder and make another

  • 00:15:55 video about that.

  • 00:15:56 So stay subscribed.

  • 00:15:58 Network Rank (Dimension).

  • 00:15:59 This is really important.

  • 00:16:01 Network Rank (Dimension) has a certain effect.

  • 00:16:04 When you open this tweet that I have shared, you will get to this page.

  • 00:16:09 When I use a higher Network Rank (Dimension), such as 256,

  • 00:16:15 it is able to learn more details about the subject that we are training.

  • 00:16:20 However, what is happening?

  • 00:16:22 The extra network layers being used during inference override the existing

  • 00:16:30 knowledge of the model.

  • 00:16:32 So the subject we trained becomes better.

  • 00:16:34 However, when you look at these trees, you see they are not looking good.

  • 00:16:40 This is 256.

  • 00:16:41 When we compare the same prompt with the Network Rank (Dimension) 32, we see the tree details

  • 00:16:48 are much better.

  • 00:16:50 They are looking much better.

  • 00:16:51 So there is a trade-off between higher Network Rank (Dimension) and lower Network Rank (Dimension).

  • 00:16:56 If you use higher Network Rank (Dimension), you will learn more details about the subject

  • 00:17:01 that you are training.

  • 00:17:02 It can be style, it can be person.

  • 00:17:04 It can be whatever you are training.

  • 00:17:06 However, the model will lose its existing knowledge.

  • 00:17:11 So for this training I will use 32.

  • 00:17:13 You can also use 256.

  • 00:17:15 You can use 128.

  • 00:17:17 It is totally up to you.

  • 00:17:19 You can try different Network Rank (Dimensions) and compare the results according to your

  • 00:17:23 purpose.

  • 00:17:24 So let's go with 32 network.

  • 00:17:26 Alpha 1.

  • 00:17:27 The advanced tab is now located here, so in the advanced tab we just need to select Gradient

  • 00:17:33 Checkpointing and that's it.

  • 00:17:36 You don't need to change anything else.

  • 00:17:38 Save.

  • 00:17:39 Before starting training.

  • 00:17:41 Click print training command and check out the command from this.

  • 00:17:45 Open the terminal window, verify that everything is looking good.

  • 00:17:49 The number of images found, number of steps.
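
For reference, the printed command corresponds roughly to the sd-scripts invocation below: a sketch using the parameters chosen above and hypothetical paths. The command printed by the GUI is the authoritative one.

```bash
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="/workspace/path/to/sd_xl_base.safetensors" \
  --train_data_dir="/workspace/path/to/img" \
  --reg_data_dir="/workspace/path/to/reg" \
  --output_dir="/workspace/stable-diffusion-webui/models/Lora/model" \
  --output_name="runpod_kohya" \
  --resolution="1024,1024" \
  --network_module=networks.lora --network_dim=32 --network_alpha=1 \
  --learning_rate=4e-4 --unet_lr=4e-4 --text_encoder_lr=4e-4 \
  --lr_scheduler=constant --optimizer_type=Adafactor \
  --max_train_epochs=8 --train_batch_size=1 \
  --mixed_precision=bf16 --save_precision=bf16 \
  --cache_latents --cache_latents_to_disk \
  --gradient_checkpointing
```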

  • 00:17:51 I explain everything in very much details in this tutorial video.

  • 00:17:56 So don't forget to watch it.

  • 00:17:58 For SD 1.5 watch this tutorial.

  • 00:18:01 Everything is same with SD 1.5.

  • 00:18:03 You set the parameters, models, paths, everything the same; just the learning rate and the

  • 00:18:09 optimizer used may change.

  • 00:18:11 I haven't recently tested for SD 1.5 models.

  • 00:18:15 So for learning RunPod, again, this is the best tutorial.

  • 00:18:18 Okay, we are ready to start training, so just click, start training and watch the terminal

  • 00:18:24 where you have started your Kohya GUI instance.

  • 00:18:27 It will show you everything like you are seeing right now.

  • 00:18:31 I am not using captions at the moment, but I will also hopefully do some research on

  • 00:18:36 using captions for training a subject.

  • 00:18:39 Possibly also for teaching a style and make tutorial videos about them as well.

  • 00:18:44 So stay tuned for that too.

  • 00:18:46 So it is now caching our images.

  • 00:18:48 The image caching happens only once.

  • 00:18:50 So the training has been started and it has been over 16 minutes.

  • 00:18:54 The training speed is 1.39 seconds per it, which is pretty decent when you consider we

  • 00:19:02 are just using RTX 3090 a very cheap GPU.

  • 00:19:08 So this is the speed, and hopefully it will be completed in 1 hour 44 minutes.

  • 00:19:14 So it is going to take about 2 hours for 5200 steps with batch size 1.

  • 00:19:21 So the training has been completed.

  • 00:19:23 It took exactly 2 hours for 5200 steps.

  • 00:19:27 Now we will begin using our trained LoRA models and how are we going to do that?

  • 00:19:34 We are going to start our Stable Diffusion Automatic1111 Web UI instance on RunPod.

  • 00:19:40 Before starting your instance, I suggest you update it to the latest version.

  • 00:19:45 I have an auto updater script on my Patreon post.

  • 00:19:49 Let's update both automatically and manually.

  • 00:19:52 I am keeping this file up-to-date.

  • 00:19:55 Let's download the 1 click auto1111 SDXL file, upload it into our workspace folder, open

  • 00:20:03 a new terminal.

  • 00:20:04 By the way, how did I know my Kohya training was completed?

  • 00:20:08 You are looking at the terminal where you have started your Kohya.

  • 00:20:12 Don't forget that.

  • 00:20:13 All right.

  • 00:20:14 We have uploaded our updater file.

  • 00:20:16 Then we will execute this command to run this auto updater file.

  • 00:20:21 It is going to update Automatic1111 Web UI to the latest version.

  • 00:20:25 It will install xFormers, enable it and it will download the best VAE file.

  • 00:20:31 So how could you update it yourself?

  • 00:20:34 It follows these commands that you are seeing.

  • 00:20:38 This is the auto updater file right now.

  • 00:20:41 It may change in the future, but this is the way of updating.

  • 00:20:45 You do git stash, do git checkout master, git pull so it is updating to the latest version.

  • 00:20:50 However, there is 1 tricky issue.

  • 00:20:52 When you do git stash, git checkout master, and git pull, it will overwrite

  • 00:20:59 the webui-user.sh file, the file that we use on RunPod.

  • 00:21:05 So before updating it, enter inside Stable Diffusion Web UI folder and take a backup

  • 00:21:12 of your webui-user.sh file.

  • 00:21:16 You see currently it is already overwritten by the updater.

  • 00:21:19 Then it will get replaced again after the update.

  • 00:21:22 So make sure that you have downloaded this file onto your computer, then upload it again

  • 00:21:28 after the updating.

  • 00:21:29 This is crucial.

  • 00:21:30 Then another crucial thing is you need to activate the virtual environment of the Automatic1111

  • 00:21:37 Web UI, then install xFormers like this.

  • 00:21:40 After that the rest is so easy.

  • 00:21:42 You see we are downloading the best VAE file and in this line it is actually reverting

  • 00:21:48 back the webui-user.sh file.
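
Put together, the manual update boils down to something like this sketch (paths assume the RunPod template layout; back up webui-user.sh first, since the pull overwrites it):

```bash
cd /workspace/stable-diffusion-webui

# Back up the RunPod-specific launch file; the update overwrites it.
cp webui-user.sh /workspace/webui-user.sh.bak

# Update the Web UI to the latest version.
git stash
git checkout master
git pull

# Activate the Web UI's virtual environment and install xFormers.
source venv/bin/activate
pip install xformers

# Restore the backed-up launch file.
cp /workspace/webui-user.sh.bak webui-user.sh
```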

  • 00:21:50 So the update has been completed and also Automatic1111 Web UI instance has been started.

  • 00:21:58 You can also manually start your Web UI instance by using this command that is shared on the

  • 00:22:03 GitHub readme file because before training we have terminated the running instance of

  • 00:22:09 Automatic1111 Web UI to free up the used vram.

  • 00:22:13 So where are the LoRA files saved?

  • 00:22:16 Because we have saved them inside models folder, inside LoRA folder during our training, our

  • 00:22:23 LoRA files are already in the LoRA folder of Automatic1111 Web UI.

  • 00:22:27 Therefore, the Automatic1111 Web UI will see all of the LoRA files automatically.

  • 00:22:32 Moreover, I rename the latest epoch's checkpoint to follow the same naming convention

  • 00:22:40 as the previous checkpoints.

  • 00:22:42 So let's rename this as the checkpoint 8 like this.

  • 00:22:46 Okay, let's connect to our Web UI instance, connect, connect to http port and we will

  • 00:22:53 connect the Automatic1111 Web UI instance.

  • 00:22:55 Okay, it is loading the model.

  • 00:22:57 You see it has started the other model that we downloaded.

  • 00:23:00 Now, we will switch to the base model because we did the training on base model.

  • 00:23:05 So whichever model you used for your training will work best with your LoRAs.

  • 00:23:10 SDXL LoRAs are not compatible with SD 1.5 models, and let's also set our best

  • 00:23:17 VAE file.

  • 00:23:19 To do that, let's go to user interface.

  • 00:23:21 Let's add SD VAE to the quick settings list, then apply settings and

  • 00:23:28 reload the UI.

  • 00:23:29 There is another important thing with SDXL, actually.

  • 00:23:33 So it is better to also add.

  • 00:23:35 Let's refresh this file.

  • 00:23:39 Okay, --no-half-vae.

  • 00:23:41 This is necessary for SDXL.

  • 00:23:43 Automatic1111 is doing this automatically.

  • 00:23:46 However, if you also put this command, it is useful.

  • 00:23:49 Because in some cases, the Automatic1111 Web UI fails to enable 32-bit full VAE precision.

  • 00:23:58 So therefore, if you manually add this, it is better.
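
In practice this means adding the flag to the COMMANDLINE_ARGS line of webui-user.sh, roughly like this (your other flags may differ):

```bash
# In /workspace/stable-diffusion-webui/webui-user.sh:
export COMMANDLINE_ARGS="--xformers --no-half-vae"
```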

  • 00:24:00 All right.

  • 00:24:01 I will also update my auto updater for this change so it will automatically add that in

  • 00:24:07 future downloads.

  • 00:24:08 Let's also select the SD VAE safetensors from here and we are ready to start generating

  • 00:24:17 our images.

  • 00:24:18 So click the Stable Diffusion link here on the GitHub readme file.

  • 00:24:22 This is my main repository.

  • 00:24:24 We have reached 700 stars.

  • 00:24:27 I hope you also Star the repository.

  • 00:24:29 Fork it.

  • 00:24:30 Also Watch it.

  • 00:24:31 If you also become a sponsor of mine, I would appreciate that very much.

  • 00:24:34 All of these are helping me.

  • 00:24:36 In here I have amazing prompts list for Stable Diffusion so I will use some of the prompts

  • 00:24:43 from here.

  • 00:24:44 For example, let's try this prompt.

  • 00:24:46 Okay, this is the positive prompt.

  • 00:24:49 Let's copy paste it.

  • 00:24:50 Then there is negative prompt here.

  • 00:24:52 So let's also copy paste it and let's apply some high resolution fix like 1.2 like 20%.

  • 00:25:01 Let's make the denoising 50%.

  • 00:25:03 Let's make the CFG 9.

  • 00:25:05 And one more thing.

  • 00:25:06 You need to also append your trained LoRA.

  • 00:25:09 So where is the LoRA?

  • 00:25:11 The LoRA selector is currently behind this icon.

  • 00:25:13 But when the development branch of Automatic1111 Web UI is merged into the main branch, this icon

  • 00:25:19 will be gone.

  • 00:25:20 And the LoRA tab will be here.

  • 00:25:22 So don't get confused.

  • 00:25:24 So these are our LoRA files.

  • 00:25:25 For example let's append the seventh one.

  • 00:25:28 This is usually the best one with these settings.
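
Appending a LoRA adds a tag of the form `<lora:filename:weight>` to the prompt; the file name and weight below are illustrative:

```
photo of ohwx man <lora:runpod_kohya-000007:1>
```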

  • 00:25:30 Okay, let's close it and let's generate.

  • 00:25:33 The generation is happening.

  • 00:25:35 It is 2.25 it per second right now.

  • 00:25:38 It is doing the high resolution fix.

  • 00:25:40 Okay.

  • 00:25:41 Image is generated.

  • 00:25:43 However, the face is not very good because it's a distant shot.

  • 00:25:46 So how can I fix the face?

  • 00:25:48 For that I will use an extension called ADetailer.

  • 00:25:53 So let's load from here, search for detailer, then you will go to the after detailer extension.

  • 00:26:00 Install it.

  • 00:26:01 I don't know if this extension requires any new library installation, because if it

  • 00:26:07 requires something new to be installed, you need to remove --skip-install from here.

  • 00:26:12 This is added by default.

  • 00:26:14 Currently, since I did the auto upgrade, it is removed, so don't forget to remove that.

  • 00:26:20 Okay, after extension has been installed, let's restart the Web UI and we can watch

  • 00:26:26 what is happening from our terminal where we have started our Web UI.

  • 00:26:32 Okay, it is restarting.

  • 00:26:33 It is downloading the necessary face detection models.

  • 00:26:36 It has downloaded and the Web UI started.

  • 00:26:39 Okay, it is still reloading.

  • 00:26:41 Okay, reload almost completed and reload completed.

  • 00:26:45 Let's click this to reload the last values.

  • 00:26:49 It has reloaded the seed as well.

  • 00:26:51 Now I will use adetailer.

  • 00:26:52 You see, I have the adetailer extension here.

  • 00:26:55 I just need to enable it and if I leave this prompt input blank, it will use the inputs

  • 00:27:03 from here.

  • 00:27:04 So let's generate and see the difference.

  • 00:27:06 The generated images will be in the outputs folder, in the txt2img subfolder.

  • 00:27:10 You see you can download all of them either one by one or using runpodctl which is the

  • 00:27:15 best way.

  • 00:27:16 So you can start a new terminal here, type runpodctl send with the folder path

  • 00:27:23 like this, and it will give you a command to download all of the images inside that folder.

  • 00:27:29 Okay, it is generating first, then it is going to inpaint the face.

  • 00:27:33 Okay, for some reason this image didn't work well.

  • 00:27:37 This is interesting.

  • 00:27:39 Let's manually inpaint so I send it to the inpaint.

  • 00:27:42 Okay, image has come.

  • 00:27:43 So let's mark the face and let's inpaint only masked.

  • 00:27:49 All right?

  • 00:27:50 Yeah, looking good.

  • 00:27:51 Let's make the denoising strength 50% and try again.

  • 00:27:54 Okay, it is somewhat similar to it but not much.

  • 00:27:57 So I will just try photo of ohwx man and the LoRA.

  • 00:28:02 I want to see the difference.

  • 00:28:04 Okay now I see the face similarity.

  • 00:28:07 Yes it is now much better.

  • 00:28:09 It is similar to me; since the shot is too distant,

  • 00:28:12 it is not very visible, but it is exactly like me.

  • 00:28:16 So let's change the prompt.

  • 00:28:17 Or maybe let's try another one with another seed.

  • 00:28:22 I will generate 12 images and then we can look at all of them.

  • 00:28:27 By the way, this is running on cloud, therefore you don't need a GPU.

  • 00:28:30 You probably already know that, but I am just reminding you.

  • 00:28:35 Okay, this image has 2 faces, so let's just skip it.

  • 00:28:39 Even though I skipped it, it is still inpainting the face, so I will also skip that part.

  • 00:28:43 Okay, now move to next image.

  • 00:28:45 I just noticed a mistake of mine.

  • 00:28:48 I should make the inpaint width and height 1024 because we did the training with that.

  • 00:28:55 I will also increase the inpaint denoising strength so let's make them like this.

  • 00:29:01 All right.

  • 00:29:02 Oh, by the way, I don't need to, because you only need to set this if you choose

  • 00:29:07 use separate width and height.

  • 00:29:08 So this wasn't a mistake.

  • 00:29:10 So I am just going to increase the inpaint denoising strength to 50%.

  • 00:29:15 Okay, the generations are completed.

  • 00:29:17 However, the face is not matching very well.

  • 00:29:21 The quality is very good, but the faces are not very good.

  • 00:29:25 So let's try another prompt which I like to test the face quality.

  • 00:29:32 It is somewhere around here, so it is here.

  • 00:29:35 I prefer this one to test the face quality.

  • 00:29:38 All right here.

  • 00:29:40 And let's also copy the negative prompt and then we can re-evaluate better.

  • 00:29:44 Oh by the way, I will close the high resolution fix.

  • 00:29:47 Maybe that is the reason.

  • 00:29:49 So let's just try first without high resolution fix.

  • 00:29:53 Okay.

  • 00:29:54 Okay we got the results.

  • 00:29:55 They are pretty decent.

  • 00:29:57 Let's look at them.

  • 00:29:58 These are not cherry picked results.

  • 00:30:00 They are looking very decent.

  • 00:30:01 We also applied the adetailer.

  • 00:30:04 You are seeing right now.

  • 00:30:05 Really really good details.

  • 00:30:07 Looking decent.

  • 00:30:08 So what we need to do is we probably need to change the prompt of the after detailer

  • 00:30:15 in our last prompt.

  • 00:30:17 Okay, I made some changes and look again.

  • 00:30:20 Yes, now we can see the face very clearly.

  • 00:30:23 The face.

  • 00:30:24 Of course, without generating a lot of images, you won't get the best results so you need

  • 00:30:30 to generate a lot.

  • 00:30:31 Okay, this is a very good one.

  • 00:30:33 Okay, okay this is also looking pretty good.

  • 00:30:36 The face is there.

  • 00:30:37 These are not cherry-picked.

  • 00:30:38 These are the first results.

  • 00:30:40 So as you generate more you will get much better results.

  • 00:30:43 Okay, this is also really good.

  • 00:30:45 Yeah, this is also really good one.

  • 00:30:47 Yeah, the model is working, everything is working.

  • 00:30:50 You see.

  • 00:30:51 Pretty good and you need to do some testing of course.

  • 00:30:54 So what I did extra: I added sunglasses to the negative prompt because it was producing

  • 00:30:59 sunglasses in every image.

  • 00:31:00 Why?

  • 00:31:02 Because we used sunny day in the prompts.

  • 00:31:04 That is why.

  • 00:31:06 I normally have eyeglasses.

  • 00:31:08 Therefore it has associated me with eyeglasses.

  • 00:31:10 However, when sunny day is added, it associates me with sunglasses instead of eyeglasses, and

  • 00:31:17 in the After Detailer (ADetailer), I simplified the prompt to photo of ohwx man.

  • 00:31:25 Actually, I just typed photo ohwx man.

  • 00:31:28 So you need to try and see which one is doing the best for you.

  • 00:31:33 This is how you use RunPod for LoRA training and how to use your trained LoRAs.

  • 00:31:40 So if something gets changed I will update this file.

  • 00:31:43 This file will always be kept up-to-date for installing Kohya SS GUI on RunPod.

  • 00:31:50 So this is your source for the installation commands and instructions.

  • 00:31:57 These instructions work for both the Web UI template and the Fast Stable Diffusion

  • 00:32:04 template.

  • 00:32:05 I hope you have enjoyed.

  • 00:32:06 Please join our Discord server.

  • 00:32:08 When you click this link, you will see our Discord server page.

  • 00:32:11 Join the server.

  • 00:32:12 We have over 4000 members.

  • 00:32:15 You can also follow me on Twitter, just click this link.

  • 00:32:18 This is my Twitter profile.

  • 00:32:20 You can also purchase my Udemy course if you wish.

  • 00:32:24 You can also support me with Buy Me A Coffee from here.

  • 00:32:28 I would appreciate that very much.

  • 00:32:30 You can also support me on Patreon.

  • 00:32:31 I would appreciate that very much.

  • 00:32:33 You can also follow me on LinkedIn.

  • 00:32:35 When you click here.

  • 00:32:36 This is my LinkedIn profile.

  • 00:32:38 You can connect with me.

  • 00:32:39 You can follow me.

  • 00:32:41 I have 2600 followers.

  • 00:32:43 You can also follow me on CivitAI.

  • 00:32:45 When you click this link you will get to my page and you can follow me from here.

  • 00:32:50 So far I have 54 followers.

  • 00:32:52 I also started to be active on Medium and DeviantArt as well.

  • 00:32:57 So follow me on Medium and DeviantArt too.

  • 00:33:00 This is all for today.

  • 00:33:02 I hope you have enjoyed.

  • 00:33:03 Please subscribe.

  • 00:33:04 Join, leave a comment.

  • 00:33:07 Ask me anything you wish.

  • 00:33:09 Join our Discord server.

  • 00:33:10 If you support me on YouTube by joining,

  • 00:33:13 I would appreciate that very much.

  • 00:33:15 I have so much new stuff to experiment with, research, and publish relating to SDXL training.

  • 00:33:23 Hopefully I will also release them.

  • 00:33:24 I am working on a woman classification data set.

  • 00:33:26 Hopefully I will also release that.

  • 00:33:27 So stay subscribed.

  • 00:33:28 Hopefully see you later.
