AUTOMATIC1111's WebGUI

GUI Details



This page will detail every slider, button and option available on the WebGUI so you have a better idea of what you're doing. All information is up to date as of 25/02/2023. However, due to the speed at which the WebGUI updates, some information may be outdated in the future.

Model Switcher

The model switcher is located in the top-left corner of the GUI, and it allows you to easily load and switch between different models. All models are located in the ~/models/stable-diffusion/ folder, but you can create your own folder structure within that folder to organize your models better.

For beginners, it's recommended to have one model named model.ckpt, as it is the default name used to load a model on startup. However, you can change the startup model by editing the webui-user.bat file with the command line argument --ckpt {insert path to model file here}.
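For example, the relevant line in webui-user.bat might look like this (the model path shown is a placeholder; substitute the path to your own model file):

```
set COMMANDLINE_ARGS=--ckpt models\my-model.ckpt
```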

It's also important to note that you can rename your model files however you like, and the GUI will still detect them as models as long as the file ends in either a .ckpt or .safetensors extension.


The refresh button located next to the model switcher will reload the list of models. This is useful if you've added or changed models while the GUI is open, and you don't need to reload the entire GUI to do this - just click the refresh button.

Main Tabs

The main tabs shown below are the main screens of the GUI. You can switch between each tab to access different features. Switching tabs does not interfere with any processing happening on the current tab, so you can switch away while generating images, training models, or doing anything else. However, the GUI can only run one job at a time, so you cannot start another process on a different tab while one is already running.

Additional tabs may appear depending on the extensions you install on the GUI.

The following sections will provide more details about the default tabs available on a clean install of the Web GUI.

Txt2Img Tab


The txt2img tab is the most used tab of the WebGUI and allows you to generate images using text prompts, textual embeddings and other features you can add using extensions. If you want to generate a completely new image that isn't based on any source material, this is the tab to use.

Below is an overview of what every button and slider on the interface does. More detailed information about the technical aspects of some options is available on their respective pages on this site, for example the Models page for a list of models and how they work.

The most important part of the interface is of course the prompt inputs. You'll see two text boxes: one for your main prompts, and a second for your negative prompts. Prompting is the main way to specify how you want the image to look, and I have already covered it in great detail on the Prompts page.

You can also press Ctrl+Z to undo the last text you added or removed.

Sampling Method This allows you to switch between the various sampling methods available. More information on sampling methods is on the main page. Generally, they can all produce the same quality imagery and the differences are minor, but it's still useful to try different sampling methods and find one you prefer.


Sampling Steps is basically how many generation cycles you want the model to run before completing the image. More steps is generally better; however, you can achieve great results with as few as 20 steps depending on your model, sampling method and other parameters.


Face Restoration is an additional model that can be used to improve facial features. While it's a simple checkbox on this screen, there are a few settings available in the settings tab. Currently, there are two face restoration models available (CodeFormer & GFPGAN). They are not downloaded with the installation of the GUI and will only be downloaded the first time you use them. If you need to edit or change the model files, they are located in the ~/models folder of the GUI installation.

While face restoration can create more coherent faces, more often it will generate a generic face that does not resemble the original face at all. You can reduce the effect of the restoration with a slider in the settings, but results can still be quite different from the original face.


Tiling is a feature that enables you to generate seamless textures by forcing the generated image to tile in any direction. This can be useful in game development, 3D modeling, and other applications where seamless textures are required. When enabled, tiling will create a seamless loop in the generated image, making it easy to repeat the texture as many times as needed.

It works surprisingly well; however, it may be harder to get exactly what you're looking for using your prompts.


Hires. Fix

Hires. fix is a powerful feature that allows you to upscale the generated image while adding more detail, without using as much VRAM as generating the image at a higher resolution natively. Enabling Hires. fix opens up a whole new section of the UI, with many parameters to fine-tune the upscaling process. The following sections explain these parameters in more detail.

It does still require more VRAM than a normal generation and the larger you want to upscale, the more VRAM required.

Upscaler There is not a lot of information available about the upscalers in Hires Fix. Unlike sampling methods, the upscaler models can create different results depending on the model, sampling steps and other parameters. The differences are subtle but noticeable.

Hires Steps is basically the same as the sampling steps. It allows you to choose how long the upscaler should run. If set to 0, the upscaler uses the same number of steps as the sampling steps. If you set this slider to anything other than 0, the upscaler will run for exactly that many steps.
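The fallback behaviour can be sketched in Python (hypothetical function name, not the WebUI's own code):

```python
def effective_hires_steps(sampling_steps: int, hires_steps: int) -> int:
    """A Hires Steps value of 0 means "reuse the main sampling step
    count"; any other value is used as-is."""
    return sampling_steps if hires_steps == 0 else hires_steps
```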

Denoising Strength allows you to specify how much the upscaler will influence the original image. With low values, the original image will remain relatively unchanged, whereas higher values will result in the upscaler generating a higher level of detail or changing certain parts of the image.

If you are using any Latent upscaler model, it is recommended to set the denoising strength to 0.5 or above, as lower values may result in blurry or pixelated images. Due to this, it is suggested to use the Latent upscalers only if you want more detail to be generated in the image. If you simply want to upscale the resolution, you can use any non-Latent model. However, please note that a high denoising strength may still change certain elements of your image, regardless of the chosen upscaler model.


Upscale by is an easy way to set how large you want the output image to be. You can upscale anywhere from 1 to 4 times the original image dimensions. However, the larger the upscale, the more processing power is required, especially when using the Latent upscalers.


Resize width/height to allows you to specify the exact dimensions to which you want to upscale (or downscale) the image. Unlike the Upscale by option that simply multiplies the dimensions, these sliders enable you to choose custom dimensions that don't necessarily fit evenly into the original image.

Keep in mind that trying to set the width/height to values that don't match the original aspect ratio will crop the image to fit the new dimensions, which may not always result in the desired output.
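As a rough sketch of this cover-then-crop behaviour (a hypothetical helper; the WebUI's exact resampling details may differ):

```python
def resize_and_crop(src_w, src_h, dst_w, dst_h):
    """Scale the source so it fully covers the target box, then
    centre-crop the overflow on the mismatched axis."""
    scale = max(dst_w / src_w, dst_h / src_h)   # cover, don't letterbox
    scaled_w, scaled_h = round(src_w * scale), round(src_h * scale)
    crop_x = (scaled_w - dst_w) // 2
    crop_y = (scaled_h - dst_h) // 2
    return scaled_w, scaled_h, crop_x, crop_y
```

For instance, resizing a 512x512 image to 1024x768 scales it to 1024x1024 first, then crops 128 pixels from the top and bottom.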


Width & Height This option should be fairly self-explanatory, but it allows you to adjust the dimensions of your generated image. Most models are trained on a size of 512x512, so that should be your starting point, unless the model specifies a different dimension, such as SD 2.0, which supports 768x768. Keep in mind that the larger the image you try to generate, the more VRAM is required.

Since most models are trained on square images, generating landscape or portrait images may create glitchy artwork more often. It's recommended to use the Hires. fix if you are trying to create images with a different aspect ratio or if the image size is much larger than 512x512.


Batch Count allows you to generate a set number of images one at a time, without increasing the processing power required. This feature simply queues a certain number of images to be generated sequentially. However, if you have Batch Size set, it will generate Batch Size images for every Batch Count iteration.

It's also important to note that all the images generated will use the same parameters each time as there is no in-built way to vary the parameters in a batch generation.
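The interaction between the two settings is simple multiplication, as this minimal illustration shows:

```python
def total_images(batch_count: int, batch_size: int) -> int:
    """Batch Count jobs run one after another; each job generates
    Batch Size images at once, so the totals multiply."""
    return batch_count * batch_size
```

So a Batch Count of 4 with a Batch Size of 2 produces 8 images in total.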


Batch Size is similar to Batch Count, however instead of sequentially generating images, the WebGUI will generate all images simultaneously. This will increase VRAM usage dramatically and should only be used with powerful GPUs. In general, you should keep this set to 1 and use Batch Count instead. Batch Size is only useful if you're trying to generate images faster at the cost of requiring a lot more VRAM. Even though it generates all images at once, you will still have the separate images in your output directory.


CFG Scale, which stands for Classifier-Free Guidance Scale, is a slider that allows you to control how closely the model adheres to the prompts you provide. Lower values produce more creative and varied results, while higher values result in the generated image closely matching the prompt. It's generally recommended to use a value between 4 and 18 for best results, as extremely high or low values can lead to weird artifacts in the generated image.
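Under the hood, classifier-free guidance blends the model's unconditional and prompt-conditioned predictions. A simplified sketch with plain numbers (real implementations apply this per element to latent tensors):

```python
def cfg_combine(uncond: float, cond: float, cfg_scale: float) -> float:
    """Push the prediction away from the unconditional output and
    toward the prompt-conditioned one; larger scales follow the
    prompt more strictly."""
    return uncond + cfg_scale * (cond - uncond)
```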


Seed is a crucial value that acts as the image's ID. Because computers are unable to create true randomness, the seed serves as the starting point for all of the model's pseudo-random calculations.

Stable Diffusion's randomness is deterministic, meaning that if you know the seed value (along with the other parameters) for a specific image, you can recreate the image exactly as the original. However, as previously mentioned, the seed value is only a starting point, and changing other parameters in the model can result in subtle or extremely different images being generated.
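You can see this determinism with any seeded pseudo-random generator. A quick illustration using Python's standard library (not Stable Diffusion's actual noise generator):

```python
import random

def pseudo_noise(seed: int, n: int = 4) -> list:
    """The same seed always yields the same 'random' sequence, which
    is why a known seed lets you reproduce an image exactly."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]
```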

The dice button next to the seed input will set the input to -1, which prompts the WebGUI to pick a random seed each generation.

The Recycle button next to the seed input allows you to reuse the last generated seed, which is helpful if you want to create variations of an image you previously liked. You can retrieve the seed with this button and experiment with other settings to achieve different results.

Tick the checkbox next to the seed input to access the extra section, which lets you add more randomness elements to your generation.


Seed Extras


The seed extras section allows you to add even more randomization to your generations so you can create more varied outputs of a similar object or subject.

Variation Seed works exactly like the normal seed input; it simply lets you use a second seed alongside the original. It can be left at -1 to use a different variation seed each generation, even if the main seed is set to a specific value.


Variation Strength, as the name suggests, lets you control how much the variation seed affects the main seed and overall generation. At 0 it has no influence, while at 1 the image is generated almost entirely from the variation seed, producing a drastically different result. Intermediate values retain the major characteristics of the main seed.


Resize seed from width/height sliders can introduce a lot of variety to your original image. These sliders choose what size the variation seed should generate its image at. If set to 0, the variation seed uses the same dimensions as the original image. Otherwise, the variation seed will generate its part of the image as if the final image were at the specified size, even if the final image has different dimensions. In other words, changing the variation seed's width and height changes its output even when all other values are equal.
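A simplified sketch of the idea behind variation seeds, assuming linear interpolation of the two noise sources (the WebUI interpolates the latent noise and may use spherical rather than linear interpolation):

```python
import random

def blended_noise(main_seed: int, var_seed: int, strength: float, n: int = 4):
    """Generate noise from both seeds, then mix them by Variation
    Strength: 0.0 keeps the main seed's noise, 1.0 uses only the
    variation seed's noise."""
    main = random.Random(main_seed)
    var = random.Random(var_seed)
    return [(1 - strength) * main.random() + strength * var.random()
            for _ in range(n)]
```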


Script The script option allows you to use or add different ways to generate your images by running Python scripts that interact with the WebGUI. These scripts can modify the image generation process, offer new ways to prompt the model, and more. Scripts are community-driven, and new ones are created all the time.

You can install new scripts from the Extensions tab or manually copy the Python script into the ~/scripts folder of the WebGUI installation. Once a script is installed, it will show up as an option under the Script dropdown menu. When you run a script, it will interact with the WebGUI to generate images according to the script's instructions.



Generate is used to... generate the images. Simply click the button to begin processing; if you have Batch Count set to greater than 1, the set number of images will be queued for generation.

If you right-click the Generate button, you have the option to generate forever. As the name suggests, it will keep generating images until you right-click and choose to stop. Generating forever works the same as clicking the Generate button normally, so if you have a batch count set, it will still generate the full batch for each cycle.

When you click Generate, two grey buttons will replace it: "Interrupt" and "Skip". Interrupt will stop generation completely at the next available point. Skip will simply skip over the current image and start generating the next image in the queue, if there is one. Interrupt will not stop the generate forever option.

If you are using the Generate Forever option, it's recommended to either have a good cooling system on your computer or keep an eye on the temperatures, as long periods of high temperatures can degrade electronics faster.


Arrow Button, similar to the recycle button for seeds, will insert the previously used prompt.

Trash Button will clear both prompt textboxes. It will ask for confirmation before clearing, to prevent accidental deletions.


Picture Button opens a new section that lists textual embeddings, hypernetworks, models, LoRAs and any other modifiers in a more visually appealing way.

Click on a listed embedding, checkpoint, etc., and it will automatically be activated or added to the prompt list.

On your screen, the listed files will not have a preview image as shown below. This is because you need to create the preview yourself or add a preview image to the corresponding folder and title the image {prompt_name}.preview.png.

To add your own preview image using the GUI, you can generate an image, then hover over the title of the embedding and click the "replace preview" button that appears. This will copy the last generated image to become that embedding's preview image.


Save Icon Button, the floppy disk icon as seen above, allows you to save the current prompt/negative prompt to reuse later with the "styles" dropdown menu. To overwrite a saved prompt, simply click save again and name the prompt exactly the same.


Styles Dropdown is how you choose from your saved prompts. You can select multiple saved prompts and add them to your active prompt list. Once you've selected your saved prompt(s), click the clipboard button to paste them into the positive or negative textboxes.



This is the output section where you'll see the results of generations. You can also send the generated image to other tabs using the respective buttons.

If you have the setting active, you will be able to see a preview of the image being generated on this screen as well.


Folder button will open the location that the images are saved in.


Save button lets you manually save the selected image, although by default, images will automatically save to the ~/outputs folder of the WebGUI. You can turn off automatic saving in the WebGUI settings.


Zip button is similar to the save button, but it will zip all generated images into a single zip file for easier storage. This is useful if you're generating large batches of images at once.


The text section below the preview provides all the important parameters that were used to create the image. It's useful for figuring out some information about your generations when using more advanced features like wildcards or matrix prompts.

PNG Info Tab


The PNG Info tab can be a really useful tool for loading up images you've previously generated or loading images other people have generated.

You can drag and drop any .png image that was generated in the WebGUI and it will display all of the parameters used to generate that specific image, including the seed, model and embeddings used. This is a great way to retrieve old images or save prompts/workflows to be reused later.

It's important to know that this will only work with .png files that were created using the AUTO WebGUI. Any other images will not work, and AUTO images won't work if they have been compressed or the metadata has been removed.


You can click one of the "send to" buttons to transfer all of the image parameters to one of those tabs to easily regenerate that image, or use it as a starting point to generate similar images. Be sure to set the seed back to random, as transferring the image parameters includes setting the seed value to that exact image's seed.

Extensions Tab


The Extensions Tab is where you can easily install and manage plugins & add-ons to your WebGUI. The Installed tab shows you what extensions you have already installed. As you can see in the screenshot above, I already have the Control Net & Dreambooth extensions installed. Built-in extensions will have their URL as "built-in" and they come installed with the WebGUI already.

You can disable installed extensions by unticking the checkbox next to their names and reloading the UI. To completely remove an extension, you will need to delete its corresponding folder in the ~/extensions folder of the WebGUI.

You can check if your extensions have an update available by clicking the "check for updates" button. When clicked, the "update" column will change to a checkbox that says "behind" if there is an update available. You can untick extensions that you don't want to update.

Once the check for updates is complete, you will need to click the "apply and restart UI" button to actually install the updates and reload the UI to activate any changes.

Remember to check for updates often, as everything to do with Stable Diffusion is changing at a very rapid pace.

The Available tab shows a list of extensions that have been directly added to AUTO's GitHub page. You can also search other lists by entering your own Extension Index URL; however, I would recommend using the Install from URL tab if you want to install an extension that doesn't appear on the list.

To see the list of extensions, just click the Load from button, then click the "Install" button next to the extension you want to install. It should be a seamless process; however, there isn't a good progress bar at the moment, so it's hard to know when an extension has finished installing. Once downloaded, simply reload the GUI or restart the entire program to load the extension into the WebGUI.

The Install from URL tab allows you to install any extension using a link to its GitHub repository. There are many people contributing to AUTO's WebGUI, and not all of their extensions will be listed on AUTO's GitHub. This tab should be fairly self-explanatory: all you need to do is paste the GitHub repo link in the URL input, optionally choose a name for the extension, then click install and reload the GUI as with any other extension.