πŸ”ž NSFW πŸ”ž Stable Diffusion / AI art thread
"Alien technology that allows for superpowers" edition - featuring a helpful guide to help you install SD


ninturez0 · Staff member · supreme janitor · comfy friend · Joined Jan 25, 2022
Stable Diffusion is an AI art generator, or more specifically, a "latent text-to-image diffusion model". You type in some words, maybe give it an image to work with, and it spits out one or more images. It's essentially a free and open source version of DALL-E, and you can very easily run it on your own system without too much hassle, assuming your graphics card can handle it.

AI art has been a hot new trendy tech thing for the months leading up to the release of SD, but OpenAI's DALL-E and the like have numerous filters and restrictions on your prompts and the content generated from them, so you can't make porn or whatever. This is not the case with SD. You can do whatever the fuck you want with SD, which has caused a massive media shitstorm over the public release of the model.

stable-diffusion-webui (AUTOMATIC1111's fork, also known as voldy) is a very nice fork which includes fancy features like inpainting and upscaling and shit. Here's a stolen 4chan guide that is good enough to be copypasted by mainstream tech news outlets, so I'm going to steal it too:

--GUIDE--
Step 1:
Install Git (page)
-When installing, make sure to select the Windows Explorer integration > Git Bash

Step 2: Clone the WebUI repo to your desired location:
-Right-click and select 'Git Bash here'
-Enter git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
(Note: to update, all you need to do is type git pull within the newly made webui folder)

Step 3: Download the 1.4 AI model from huggingface (requires signup) or HERE
-(torrent magnet)
(Alternate) 1.4 Waifu model trained on an additional 56k Danbooru images HERE (mirror)
-(torrent magnet)
(Note: Several GB larger than normal model, see instructions below for pruning)
comparison
Step 4: Rename your .ckpt file to "model.ckpt", and place it in the /stable-diffusion-webui folder

Step 5: Install Python 3.10.6 (Windows 7 ver) (page)
Make sure to choose "add to PATH" when installing

Step 6 (Optional):
This reduces VRAM usage and allows you to generate at larger resolutions or batch sizes for a <10% loss in raw generation speed
(For me, singular results were significantly slower, but generating with a batch size of 4 made each result 25% faster on average)
-Edit webui-user.bat
-Change COMMANDLINE_ARGS= to COMMANDLINE_ARGS=--medvram --opt-split-attention

Step 7: Run webui-user.bat from your File Explorer. Run it as normal user, not as administrator.
  • Wait patiently while it installs dependencies and does a first time run.
    It may seem "stuck" but it isn't. It may take up to 10-15 minutes.
    And you're done!
Usage
  • Open webui-user.bat
  • After loading the model, it should give you a local address such as '127.0.0.1:7860'
  • Enter the address into your browser to enter the GUI environment
    Tip: Hover your mouse over UI elements for tooltips about what they do
  • To exit, close the CMD window
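If you'd rather script it than click around in the GUI: newer builds of voldy can expose an HTTP API when you add --api to COMMANDLINE_ARGS in webui-user.bat. The endpoint path (/sdapi/v1/txt2img) and payload field names below are from memory of recent builds, so double-check them against your version (the server also serves interactive docs at /docs). A minimal sketch:

```python
import base64
import json
import urllib.request

# Assumes webui-user.bat was launched with --api in COMMANDLINE_ARGS,
# which serves an HTTP endpoint alongside the normal GUI.

def build_payload(prompt, steps=28, cfg_scale=7, width=512, height=512):
    """Assemble the JSON body for a txt2img request."""
    return {
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": width,
        "height": height,
    }

def txt2img(prompt, host="http://127.0.0.1:7860"):
    """POST a prompt to the local webui and save the first result."""
    body = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        host + "/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Images come back as base64-encoded strings
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))

# e.g. txt2img("a comfy cottage at sunset")  # writes out.png
```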
--RUNNING ON 4GB (And under!)--
These parameters are also useful for regular users who want to make larger images or batch sizes!
It is possible to drastically reduce VRAM usage with some modifications:
  • Step 1: Edit webui-user.bat
  • Step 2: After COMMANDLINE_ARGS= , enter your desired parameters:
    Example: COMMANDLINE_ARGS=--medvram --opt-split-attention
  • If you have 4GB VRAM and want to make 512x512 (or maybe up to 640x640) images,
    use --medvram.
  • If you have 4GB VRAM and want to make larger images, or you get an out of memory error with --medvram,
    use --medvram --opt-split-attention instead.
  • If you have 4GB VRAM and you still get an out of memory error,
    use --lowvram --always-batch-cond-uncond --opt-split-attention instead
  • If you have 2GB VRAM,
    use --lowvram --opt-split-attention.
-Otherwise, do not use any of these (Increases generation time)-
src: https://rentry.org/voldy / (archive)
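The flag table above boils down to a few cases; here it is encoded as a throwaway Python helper (the function name and parameters are mine, purely to illustrate the logic):

```python
def vram_flags(vram_gb, want_large=False, still_oom=False):
    """Pick COMMANDLINE_ARGS for webui-user.bat per the table above.

    vram_gb    -- how much VRAM your card has
    want_large -- True if you want images bigger than ~640x640
    still_oom  -- True if you already hit out-of-memory with --medvram
    """
    if vram_gb <= 2:
        return "--lowvram --opt-split-attention"
    if vram_gb <= 4:
        if still_oom:
            # last resort when --medvram still runs out of memory
            return "--lowvram --always-batch-cond-uncond --opt-split-attention"
        if want_large:
            return "--medvram --opt-split-attention"
        return "--medvram"
    # plenty of VRAM: no flags (they only slow generation down)
    return ""
```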

I made the mistake of installing this shit last week, and I have been struggling to be productive ever since, because all I want to do is generate cool art on my computer. Here are some of the cool images I have gotten out of it:

image001.png
image002.png
image003.png
image004.png
image005.png
image006.png
image007.png
image008.png
image009.png
image010.png
image011.png
image012.png
 
Since posting this thread, there are now multiple public fine-tuned models that you can use to generate gay furry porn with your graphics card.


I will upload these to nnty.fun very shortly.
 
also some tips since i've been messing with this for over a week:
sampling steps: 0-40, the closer to 40 the more detailed and accurate the results. 5 is the minimum unless you want psychedelic halos; values higher than 40 don't help with the default ancestral ("Euler a") sampler
cfg scale: 15 is a good balance; lets the ai be creative without straying too far from the prompt
resolution: don't touch this. anything higher than the default 512x will result in weird repeated patterns about 9 times out of 10; not recommended for characters, but it might work for landscapes if you want some really fantastic ones. it will also eat your VRAM like crazy and double the render time; 1024x is the max i can do with a 1080 ti, and it uses around 90% of its 11GB of VRAM
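The rules of thumb above can be turned into a quick sanity check before you queue up a batch. This is just my encoding of the post's advice (the thresholds are rules of thumb, not hard limits, and the function is hypothetical):

```python
def check_settings(steps, cfg_scale, width, height):
    """Flag txt2img settings that go against the rules of thumb above.

    Returns a list of warning strings; an empty list means nothing to flag.
    """
    warnings = []
    if steps < 5:
        warnings.append("steps < 5: expect psychedelic halos")
    elif steps > 40:
        warnings.append("steps > 40: wasted time, no extra detail with the default sampler")
    if cfg_scale > 15:
        warnings.append("cfg > 15: may stick too rigidly to the prompt")
    if max(width, height) > 512:
        warnings.append("above 512px: expect repeated/joined patterns ~9 times out of 10")
    return warnings
```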

here are more good holos
 

Attachments

the >512x res issue illustrated, seems like the AI only knows how to generate 512x images then tries to join them
creates some really trippy landscapes if that's what you're going for
 

Attachments

and this is a "fluke" at 1024x using the same parameters as the image above, in this case it didn't really stick to the prompt (i wanted a specific coastal town) but it gave me something beautiful
you'll only get these flukes about once every 10 tries at 1024x but man they're worth it
 

Attachments

voldy has a fix for generating images at high resolution without being weird about it; update your shit and it will appear below the height and width settings, labeled "Highres. fix"
 
Is it better than using webui + the novel models?
Don't know about it being *better*, but it's far easier and cleaner, and the outputs are 1:1 with what NAI could output on the day of the leaks. (NAI outputs will never be truly 1:1 as long as they keep fucking with the samplers under the hood)

You can also run multiple models in it (whether they're from the leak or not) but they require a restart until someone hacks the model selector in.
Check here: https://rentry.org/sdg_FAQ#naifu-novelai-model-backend-frontend - there's also a guide on how to use other models

Not as flexible as webui, but it's pretty much as 1:1 as you can get to the true NAI experience

You can probably skip redownloading the model itself if you already have the ones from the leaks
 
Wow some of those landscape and cityscape images are really pretty. I'm continually impressed with what these AI models can do, especially ones like this that anyone can run on their own machines locally.
 
Here are some images me and faen made a few days ago using Stable Horde; the first one is upscaled and set to monitor rez to be a wallpaper. He has more of those here
They all had BLAME! in the input :>
 

Attachments
