Invoke AI Full Install and Run Tutorial for Windows, RunPod and Massed Compute: 1-Click Easy Guide
Full tutorial link > https://www.youtube.com/watch?v=BuxFBYAUGIY
If you are looking for a guide to install and use the latest Invoke AI easily on your local Windows computer, or want to use it on the best cloud services such as RunPod and Massed Compute, this tutorial is all you need. On RunPod, we install on the official PyTorch template.
Video Links
🔗 Tutorial Used Installers Zip File and the Page
🔗 RunPod Register
🔗 Massed Compute Register
🔗 SECourses Discord Channel to Get Full Support
🔗 SECourses Reddit
🔗 SECourses GitHub
🔗 Invoke AI Repo
Video Chapters
00:00:00 Intro
00:00:40 How to download the installers zip file and install Invoke AI on Windows
00:02:11 How to start the InvokeAI app after the installation has been completed
00:02:49 How to set and manage your models, download models, and use existing models
00:03:07 Generating a first image in the Invoke AI app on Windows
00:03:38 How to install Invoke AI on Massed Compute, an ultra cheap, fast and secure cloud service
00:06:25 How to start InvokeAI on Massed Compute after the installation has been completed
00:07:10 How to use a PowerShell command to securely access an app running on Massed Compute from your computer
00:08:10 How to use existing models on Massed Compute inside the Invoke AI app
00:08:34 Generate an image with InvokeAI running on a Massed Compute cloud machine
00:09:01 How to install Invoke AI on RunPod
00:09:43 How to set the proxy port to connect to later
00:11:03 How to start and use InvokeAI on RunPod after the installation has been completed
00:11:44 How to download a model from the starter models and use it in Invoke AI
InvokeAI: Your Gateway to Powerful and Customizable AI Image Generation
InvokeAI is an open-source project that offers a powerful and user-friendly interface for generating AI images. Building on the foundation of Stable Diffusion, InvokeAI provides a comprehensive suite of tools and features, making it accessible to beginners and experienced users alike. This article covers InvokeAI's capabilities, guiding you through its installation and usage and exploring its most notable features.
What is InvokeAI?
InvokeAI is more than just a graphical user interface (GUI) for Stable Diffusion. It's a complete ecosystem designed for creative exploration with AI image generation. It boasts a robust command-line interface (CLI), a user-friendly web interface, and a rich set of features that empower you to craft stunning visuals with unprecedented control.
Key Features of InvokeAI:
- Versatile Image Generation:
  - Text-to-image synthesis: Generate images from detailed text prompts, leveraging the power of Stable Diffusion's models.
  - Image-to-image generation (img2img): Transform existing images based on text prompts, allowing for creative edits and variations.
  - In-process image prompting: Guide the generation process by providing image prompts alongside text, influencing the composition and style.
  - Outpainting: Extend the boundaries of existing images, seamlessly generating new content that blends with the original.
- Advanced Control and Customization:
  - Extensive prompt engineering options: Utilize negative prompts, prompt weighting, and various sampling methods to fine-tune your image generation.
  - Customizable model support: Integrate different Stable Diffusion models, including custom-trained models, to explore diverse artistic styles.
  - Seed control: Recreate specific image generations by using the same seed value, ensuring consistency and reproducibility.
  - Upscaling and detail enhancement: Refine generated images using built-in upscaling techniques for higher resolution and finer details.
- User-Friendly Interfaces:
  - Intuitive web interface: Access a feature-rich graphical interface through your web browser, simplifying the image generation process.
  - Powerful command-line interface: Leverage the CLI for advanced users who prefer scripting and automation.
  - Node editor: Visually connect different image processing nodes to create complex workflows and effects.
- Active Community and Development:
  - InvokeAI benefits from a vibrant community of users and developers, continuously contributing to its improvement and expansion. Regular updates ensure access to the latest features and advancements in AI image generation.
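As a quick orientation before the walkthrough below: on a manual (non-installer) setup, the web UI is typically started from the Python environment InvokeAI was installed into. A minimal sketch, assuming the `invokeai-web` entry point and a root folder of `~/invokeai` (both the entry-point name and flags may differ across InvokeAI versions; the installer's `Invoke.bat` wraps this step for you):

```shell
# Minimal launch sketch; entry-point name and flags may vary by InvokeAI version.
# Activate the virtual environment the installer created:
source .venv/bin/activate          # on Windows: .venv\Scripts\activate
# Start the web server; the UI is then served on http://localhost:9090 by default.
invokeai-web --root ~/invokeai
```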
00:00:00 Greetings everyone, today I will show you how to install Invoke AI on your Windows computer,
00:00:06 on RunPod, and on Massed Compute. Additionally, I will show something new that I haven't shown
00:00:14 before: how to use applications running on Massed Compute on your PC as if they were running
00:00:22 on your localhost. You see, this is a localhost URL, but it is running on Massed Compute.
00:00:28 It is amazing! Just keep watching. I will begin by showing Windows, then Massed Compute, then
00:00:34 RunPod. You can check the video description to find the accurate video chapters. So let's
00:00:40 begin with installing on Windows. Download the attachments from here; the link to this file will
00:00:45 be in the description of the video. Move the zip file wherever you want to install, extract the zip
00:00:51 file, enter the extracted folder, and double-click the Windows install.bat file to start it. You
00:00:58 see, it is asking you to have Python 3.10 or Python 3.11, C++, CUDA, and also to enable Windows long
00:01:07 paths. So let's enable Windows long paths from here. Double-click it; it will ask you for
00:01:13 permission, then click yes here, and it will be done. Then click the Windows install.bat file
00:01:19 again. Press any key. For the requirements, I already have an excellent tutorial here; the
00:01:24 link is here. Watch it, learn it. Also, for CUDA 12.4 and C++ tools, I have a new post, so you can
00:01:33 also check it out if you encounter any issues. After the installer initializes, it will ask you
00:01:38 some options. Just hit enter here; hit it again if it fails, hit it multiple times, yes, now it is working. Okay,
00:01:46 it is asking you to select the installation folder. I want to install it wherever I started
00:01:51 the bat file, so I deleted the folder path and hit enter here. It will ask you like this;
00:01:57 hit yes, and hit enter. Then it will ask for your GPU. I have CUDA, so 1, and hit enter, and it
00:02:04 will start the installation automatically for you. Just wait patiently. Okay, so the installation has
00:02:10 been completed. It says "Installation successful, press any key." Then when you return to your
00:02:16 folder, you will see the Invoke.bat file. This is the file that will start Invoke AI. Double-click
00:02:22 it, select option 1, and that's it. It will start Invoke AI on your local Windows computer. It is
00:02:29 downloading some of the necessary files, and it is getting started. Okay, it has started. You see,
00:02:36 this is the URL: Uvicorn is running on this URL. To open this URL, I am going to click here, copy,
00:02:45 go to your browser, and paste, and Invoke AI has started. You can set your models from the models
00:02:51 folder: scan the folder and enter wherever your checkpoints are. You can download models from
00:02:57 the starter models here quickly, and then you can generate images from here. You need to select your
00:03:02 model. Let's generate an image as an example, then we will move on. So I will scan this folder,
00:03:09 and I will use Juggernaut XL version 9. I will click; this is my existing model, but you
00:03:14 can download models quickly from here: just click the plus icon, and it will download them. Okay,
00:03:19 then I go to the canvas. I have a prompt like "a car." The model is selected. Invoke. Now it will
00:03:26 start generating an image for me. You can see the progress in the CMD window. Okay, this is
00:03:31 the speed. It is not a great speed on an RTX 3090, if you ask my opinion, and the image is generated.
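The "Windows long paths" requirement mentioned above corresponds to a well-known registry value. As an alternative to double-clicking the file shipped in the zip (an assumption on my part about what that file does), it can be set from an elevated command prompt; a reboot may be needed for it to take effect:

```shell
# Enable Win32 long path support (run as Administrator):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```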
00:03:38 Okay, now I will move to Massed Compute. Please use this link to register. Open it. After
00:03:43 registering and setting up your billing, go to Deploy. Here, select a GPU. I suggest you use the RTX
00:03:52 A6000 GPU. If this is not available, use the RTX A6000 alt config. In some cases, it may not have enough
00:04:01 RAM. Then you can select 2. If you need more space, you can also increase the number of
00:04:06 GPUs to have more disk space. Then select the Creator category and select SECourses. You see, the current
00:04:13 price is $1.25 because it is 2 GPUs. Select 1. Now the price is 62 cents. After we apply our
00:04:21 coupon SECourses and verify, it will become 31 cents for this amazing GPU and machine. Then click
00:04:28 deploy. Our coupon is permanent, and our image is amazing. You will see it! So it is starting. Wait
00:04:35 until the initialization has been completed. So I am going to use ThinLinc Drive. You can
00:04:40 download it from here. It is very straightforward to install and use. Just set it up from here;
00:04:47 then open your ThinLinc client from here. Go to options, go to local devices, drives, and set
00:04:53 your synchronization folder. Set it as read and write. In the other tutorials, I explained this
00:04:59 in detail. Okay, so the machine has started. Let's connect as usual, and from here, copy your IP and
00:05:07 password, and connect. Continue. Let's download the file again and move it into your Massed Compute
00:05:14 synchronization folder like this. Click start. The Massed Compute deal is just amazing: amazing
00:05:20 price, amazing powerful GPU, and amazing speed their machines have. You will see it. Go to
00:05:27 home, go to the thin drives, Massed Compute. This is my synchronization folder. Then find your zip
00:05:33 file and drag and drop it into your downloads folder here. Wait until it's copied. Yes,
00:05:39 it is copied. Right-click and extract here, then enter this folder. You will see that it
00:05:46 has a Massed Compute instructions file. You really should read my instructions file. For installation,
00:05:53 we are going to copy this command, open a terminal here, paste it, and hit enter. The installation
00:05:59 is the same as on Windows, actually. Just hit enter on this question, then delete this part, so
00:06:06 it will be installed wherever you have copied and pasted your files. Hit enter, then hit yes, and
00:06:12 hit enter. If you get this message, just cancel it. Okay, I am going to use an NVIDIA GPU, so select 1
00:06:18 and hit enter, and it will start the installation. Installation on Massed Compute is just super fast.
00:06:25 Their machines are just super fast. Okay, so the installation has been completed. You see,
00:06:30 "Installation successful." Now we are going to start Invoke AI. To start it, copy this
00:06:36 line, go to the installation folder, open a new terminal, paste it like this, hit 1,
00:06:43 and hit enter. Then you will see that it will start here. Okay, you see "Uvicorn running on"; let
00:06:52 me zoom in to show you. Okay, here, so open this link, and you can use it inside the ThinLinc
00:06:59 client. However, it is not very fast, so if you want to use it on your computer in a secure way,
00:07:05 we are going to follow the commands here. First of all, go back to your running Massed Compute
00:07:10 instance, copy the IP from here, you see, copy it, go back to your ThinLinc client,
00:07:16 and change the IP address here to the IP address of your machine. Then copy it, open PowerShell
00:07:24 on your Windows computer like this, paste it, and hit enter. It may ask you some questions;
00:07:30 just type "yes, yes, yes." Then it will ask you for the password, so copy the
00:07:36 password from here. It is copied. Then return to PowerShell, right-click, then hit enter. It
00:07:42 will approve your connection. Okay, after that, to connect, we are going to use this local URL.
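The PowerShell command used here is standard SSH local port forwarding, which is why a localhost URL ends up serving the remote app. A minimal sketch, assuming InvokeAI's default port 9090 and an `Ubuntu` login user (substitute the IP and password shown in your Massed Compute panel; the exact port and user name may differ in your setup):

```shell
# Forward local port 9090 to port 9090 on the remote machine over SSH.
# While this session stays open, http://localhost:9090 on your PC is served
# by the InvokeAI instance running on the Massed Compute machine.
ssh -L 9090:localhost:9090 Ubuntu@<instance-ip>
```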
00:07:51 Since this SSH session is running, this local URL is bound to the Massed Compute machine. So actually,
00:07:59 this application is not running on my computer right now; it is running on Massed Compute.
00:08:05 As usual, you can download the models from the starter downloads, or you can scan the
00:08:11 folders of the Massed Compute machine. The Massed Compute folder is here. Just copy-paste it and
00:08:17 scan the folder. You see, it is working because it is running on Massed Compute, not on my
00:08:22 computer. Now I can add the models from here, or I can download any model I want from here. Actually,
00:08:29 I'm going to show the FluxDev download on RunPod. It is the same; there is no difference. Then go to
00:08:34 the canvas and wait for the models to appear. I think it is still adding. Yes, added. Now I
00:08:41 should be able to select it. Yes, select it, then just hit Invoke. You see it shows the queue, and
00:08:46 you can see the images are getting generated. I can see that on my Massed Compute machine. Yes, it is 5 iterations
00:08:54 per second, which is really, really good. You can see it here. So it is working amazingly well, as expected.
00:09:01 Now, as a next step, I will use RunPod. Okay, so for RunPod, please use this link to register.
00:09:08 Then let's log in, set up your billing, and load some money. Then deploy. You can use Secure Cloud
00:09:14 or Community Cloud. I am going to use Secure Cloud for faster setup, and I am going to use US Texas
00:09:21 3; this is the fastest one. Then let's pick the 4090. Use this template; I didn't test the other
00:09:29 ones. So, to select that template, type Torch and pick RunPod Torch 2.1. I am not sure if others will
00:09:37 work or not. Set your volume disk as you wish. Let's set it to 100 gigabytes, and you can use the
00:09:43 RunPod proxy to connect, so don't forget that port 9091. Then set the overrides, and we are ready. Deploy
00:09:51 on demand. Then go to "My Pods." I already have a lot of tutorials for RunPod. You can watch them
00:09:57 to learn how to use RunPod better. You can always message me, and I will link you to the accurate
00:10:04 tutorials. Just wait until the connect button appears. So the connect button appeared. Let's
00:10:09 connect. Connect from the Jupyter Lab interface. Let's download the file again. You see it is
00:10:15 downloaded. Wait for Jupyter Lab to open. Okay, it's opened. So I am going to upload the file:
00:10:22 select the downloaded Invoke AI zip file like this and wait until the upload is completed. Yes,
00:10:27 then right-click and extract the archive, so it will extract everything into here. Refresh, then
00:10:33 open the RunPod instructions txt file. Copy this, open a new terminal, paste, and hit enter. Then
00:10:41 it will ask us the options; just wait. Okay, same as usual, just hit enter. I will delete this, so I
00:10:48 want to install it into the workspace. Hit enter, then type "yes," and hit enter, and then just
00:10:55 wait. Okay, then for the GPU type "NVIDIA": hit 1, and hit enter, and the installation has started. Okay,
00:11:03 so the installation has been completed on RunPod. Let's return to the RunPod instructions.
00:11:08 Copy this and open the terminal. I make my installers all the same way, so once you learn one
00:11:14 of them, it works for all. Hit 1, and hit enter. Wait until you see the local started URL. As
00:11:22 usual, it shows our GPU cuDNN version, and yes, you see the local URL has started. Then to connect,
00:11:30 click connect and connect to HTTP service port 9091, and it is started. Now I can use it. Currently, I don't
00:11:40 have any models, so I am going to download a model from the starter models. Let's download FluxDev. Since
00:11:46 this is a very powerful GPU, we can use it. So it is going to download its full size. You can also
00:11:51 download quantized ones. You see, the download is in progress. It is really, really fast. This
00:11:57 machine is fast. Yes, it is downloading right now, so wait until all downloads are completed
00:12:03 and progress is completed. It is hashing after downloading them, so it is taking some time. We
00:12:10 can see that here in the CMD window, but it should be fast. So it is all completed. Let's generate an
00:12:18 image. So I click here, then I go to the canvas. Now FluxDev is here. Let's type a prompt:
00:12:24 "amazing sports car, race car on the roads." Okay, something like this; it's not very important. Then
00:12:32 we can see the speed of generation here. It is first loading the checkpoint shards. Yes,
00:12:38 pretty fast. It is fully utilizing the VRAM. You see, it is almost using it entirely,
00:12:44 and the generation speed is 1.6 iterations per second. It is slower than SwarmUI. With SwarmUI,
00:12:51 you can use --fast and generate images faster, but yes, it is here. So this is how you can install
00:12:58 and run Invoke AI on RunPod. I hope you have enjoyed it. I am not an expert on Invoke AI,
00:13:05 but this is how you install it and start using it. Hopefully, see you in future tutorials.
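As a closing note on the RunPod side: the HTTP service proxy used in the tutorial maps each exposed port onto a predictable hostname, which is handy to know if you prefer a direct URL over the connect button. A sketch, with `abc123` standing in as a placeholder for your actual pod ID (shown on the My Pods page):

```shell
# RunPod proxy URL pattern: https://<pod-id>-<port>.proxy.runpod.net
# With port 9091 exposed at deployment, InvokeAI would be reachable at:
curl -I "https://abc123-9091.proxy.runpod.net"   # pod ID here is a placeholder
```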
