This post was submitted on 30 Nov 2023
106 points (91.4% liked)

AI Generated Images


Community for AI image generation. Any model is allowed. Creativity is valuable! Posting the model used is recommended for reference, but it's not a rule.

No explicit violence, gore, or nudity.

This is not an NSFW community, although exceptions are sometimes made. Any NSFW posts must be marked as NSFW and may be removed at any moderator's discretion. Any suggestive imagery may be removed at any time.

Refer to https://lemmynsfw.com/ for any NSFW imagery.

No misconduct: Harassment, Abuse or assault, Bullying, Illegal activity, Discrimination, Racism, Trolling, Bigotry.

AI Generated Videos are allowed under the same rules. Photosensitivity warning required for any flashing videos.

To embed images, type:

“![](put image url in here)”

Follow all sh.itjust.works rules.


29 comments
[–] [email protected] 7 points 11 months ago (3 children)

Generated on your own PC, how? Is there a tutorial you followed that you'd be willing to share?

[–] [email protected] 12 points 11 months ago (2 children)

Check here: https://easydiffusion.github.io/ Older models still work with my GTX 970 from 2016, but I can't run SDXL on it. With stuff like Dreamshaper 8, I can generate an image in less than a minute. I plan to replace my PC soon (maybe in the January sales) and will get a bigger GPU so I can run SDXL. (That said, I'm afraid that in a couple of years models will be even bigger and will need those dedicated multi-GPU racks like the ones used in CT scanners.)

But at the moment, if your PC has a gaming GPU, there's no need to pay for credits on an online service; you already paid for the PC :)
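
For reference, a minimal sketch of the same kind of generation as a script, using Hugging Face's diffusers library rather than Easy Diffusion's web UI (the Hugging Face model id and the prompt below are assumptions, not from the comment):

```python
# A minimal sketch of text-to-image generation with Hugging Face's
# "diffusers" library -- an equivalent scripted workflow, not what
# Easy Diffusion runs internally. Model id and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",      # Dreamshaper 8, as mentioned above
    torch_dtype=torch.float16,  # half precision to fit smaller GPUs
)
pipe = pipe.to("cuda")  # ROCm builds of PyTorch expose AMD GPUs as "cuda" too

image = pipe("two turtles resting on a log, photorealistic").images[0]
image.save("turtles.png")
```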

[–] [email protected] 4 points 11 months ago (2 children)

Only works with nvidia tho. 😠

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)
[–] [email protected] 1 points 11 months ago (3 children)

Is it still Nvidia-only, or does AMD work? I'm in the market for a new PC, and I heard AMD works better on Linux. Google also tells me that AUTOMATIC1111 runs on AMD, but I don't know if anyone has tried it.

[–] [email protected] 2 points 11 months ago

All I saw when I checked your link was this:
System Requirements
Windows 10/11, Linux or Mac.
An NVIDIA graphics card, preferably with 4GB or more of VRAM or an M1 or M2 Mac. But if you don’t have a compatible graphics card, you can still use it with a “Use CPU” setting. It’ll be very slow, but it should still work.
8GB of RAM and 20GB of disk space.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (2 children)

I do Stable Diffusion on an AMD GPU, an RX 7900 XTX, on Linux. So, yeah, that works.

The problem is that well-performing support is relatively new (the RDNA 3 cards are okay), and older cards may or may not be practical; that particular card is their latest generation. I agree that, generally speaking -- not talking specifically about generative AI -- it's preferable to use AMD on Linux these days. For generative AI -- not Linux-specific, but in general -- Nvidia started earlier than AMD; people have mostly written generative AI stuff to run on Nvidia on Windows and are now adding AMD support. Problem is that Nvidia also charges considerably more than AMD for their hardware, and you want a high-VRAM card to do Stable Diffusion... I'd probably recommend at least 16GB if possible.

Also, a popular library for doing some generative AI stuff, "transformers", doesn't currently run on AMD cards. Stable Diffusion can run with it -- it speeds operations up -- but doesn't require it. But a few other things, like Tortoise TTS, a piece of software that can generate speech in someone else's voice given some samples of that voice, do require transformers.

AMD has been putting out Linux support for generative AI before Windows support, actually -- they just got out Windows support for my card, and it's been usable on Linux for a while.

If you're going Linux and you're willing to get an RDNA3-based card (new and high-end and a lot of VRAM) and you don't need to run transformers (which you don't for Stable Diffusion), yeah, I'd say go AMD. If you're going Linux and you don't care about generative AI, then I'd get AMD whatever. If you're trying to use older cards, I'd consider Nvidia; I had been using an older Nvidia card prior to this, and while it could do the generative AI stuff on Linux, for everything else I'd prefer the AMD card.

Be warned that generative AI stuff is really VRAM-hungry; I would prioritize getting something with a ton of VRAM over performance (so if you go Nvidia, don't get their -Ti models, which run more quickly but have less memory). With Stable Diffusion, VRAM places hard caps on the size of the image that you can process at one time (though you can generate lower-resolution and then upscale an image in chunks).
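
As an aside, if VRAM is the limit, the diffusers library exposes a few memory-saving switches that trade speed for memory; a minimal sketch, assuming a pipeline like the one sketched earlier in the thread:

```python
# Memory-saving switches in diffusers; each trades some speed for VRAM.
# Assumes `pipe` is a StableDiffusionPipeline like the sketch further up.
pipe.enable_attention_slicing()  # compute attention in slices
pipe.enable_vae_tiling()         # decode large images in tiles
# Or offload idle submodules to system RAM; use this *instead of*
# pipe.to("cuda"), and it needs the `accelerate` package installed:
pipe.enable_model_cpu_offload()
```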

Right now, AMD cards of the sort that I'd recommend run something like $500 to $1000. If you can wait another year or so, I'd assume that hardware prices for higher-VRAM cards will come down (though Stable Diffusion's VRAM requirements have risen as newer models aimed at higher-resolution images come out, so...)

Your main system's specs don't matter much for Stable Diffusion -- throwing a lot of CPU or main memory at it won't make much difference. Faster storage might speed up initially loading the model onto the card, dunno. But basically, all the heavy lifting happens on the GPU, so the real requirement is a beefy, high-VRAM GPU; unlike with games, you can do fine with an older computer and a high-end GPU if your only concern is running Stable Diffusion.

Another option, if one wants to just dabble with Stable Diffusion doing generative AI a bit, is to rent a computer in a datacenter somewhere that has a high-end GPU in it. I have not done this, but vast.ai does this sort of thing, rents machines out on something like an hourly basis. That can be more cost-effective if you're only going to be sporadically doing this sort of thing. An Nvidia 4090 might run something like $2000, but a remote computer with one could rent for (checks) 50 cents an hour. So if you only have time to play around with this on, say, every other weekend, you could rent the thing for a weekend for $24. Buying the card would require 83 weekends to break even.
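
Spelling out the rent-versus-buy arithmetic from that last paragraph (the prices are the commenter's ballpark figures, not current quotes):

```python
# Rent-vs-buy break-even, using the ballpark numbers above
# (~$2000 for a 4090 vs. ~$0.50/hour for a rented one).
card_price = 2000.00   # USD, assumed purchase price
hourly_rate = 0.50     # USD/hour, assumed rental rate
weekend_hours = 48     # one full weekend

cost_per_weekend = hourly_rate * weekend_hours        # $24
weekends_to_break_even = card_price / cost_per_weekend
print(f"${cost_per_weekend:.2f} per weekend, "
      f"break-even after {weekends_to_break_even:.0f} weekends")
# -> $24.00 per weekend, break-even after 83 weekends
```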

[–] [email protected] 1 points 11 months ago

I'm running a 3060 with 12 GB of VRAM on a 2014 rig. Can confirm it works well; the only sad part is the occasional random whole-computer crash.

[–] [email protected] 1 points 11 months ago

Thanks for your insight.

So indeed, it looks like AMD compatibility is still a work in progress or requires expensive hardware (knowing that in a year, models may have become even heavier).

Renting GPU time might be the way to go.

[–] [email protected] 2 points 11 months ago (1 children)

I want to find time to try out SD on my desktop but I’m running an AMD GPU. Would love some insights too!

[–] [email protected] 2 points 11 months ago

See my response to parent comment.

[–] [email protected] 2 points 11 months ago

I picked up a 4090 a few months ago specifically for Stable Diffusion. The generation times are insane. For comparison, I've also used a mobile 3060 and a 7900 XT.

AMD compatibility with AI is very bad right now. Progress is being made, but it's slow. If you check back in a year or so and it's been fixed, I would honestly pick up a second-hand 7900 XTX; it would blow your 970 out of the water at a much better price point than my 4090.

[–] [email protected] 5 points 11 months ago

I was inspired to try this out by this Fireship video.

Here's the Fooocus home page, with pretty much all of the instructions to get started. If you're on Windows, you need a few spare gigabytes of disk space. My PC barely met the system requirements, so it was kinda slow but tolerable.

[–] [email protected] 1 points 11 months ago

I don't know how they did it, but I cloned the AUTOMATIC1111 repository from GitHub and ran the webui.sh script.

[–] [email protected] 6 points 11 months ago (1 children)

Cute turtles :)
It looks very realistic! Would you consider cross-posting it to [email protected]? That community is still new and could use some nice AI photos.

[–] [email protected] 3 points 11 months ago (1 children)
[–] [email protected] 2 points 11 months ago
[–] [email protected] 6 points 11 months ago* (last edited 11 months ago) (1 children)

My first attempt did not go so well:

Easy Diffusion v3.0.6 "Sailor moon taking a big hit from a bong"

[–] [email protected] 3 points 11 months ago (1 children)
[–] [email protected] 2 points 11 months ago (1 children)

Are you using SDXL? If so, you need to set the resolution to 1024x1024.
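
SDXL models are trained around 1024x1024, which is why the resolution matters. A minimal scripted sketch of the same thing with diffusers (the model id and prompt are assumptions; the comments don't name a specific SDXL checkpoint):

```python
# SDXL is trained around 1024x1024, so request that size explicitly;
# SD 1.5's default 512x512 gives SDXL mangled results. The model id is
# the reference SDXL base checkpoint (an assumption -- the comment
# doesn't say which SDXL model is in use).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe("cute turtles on a log", width=1024, height=1024).images[0]
image.save("sdxl_output.png")
```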

[–] [email protected] 2 points 11 months ago

Sweet, a tip! I'll try that. I've just been playing with settings and making things worse.

[–] [email protected] 2 points 11 months ago (2 children)

Upside: Oh look at that turtle in the background, going into Turtle Mode. This wolf is OK. No need to be shy.

Downside: Oh why does that turtle on the left have three legs goddamn it.

[–] [email protected] 2 points 11 months ago (1 children)

It's mid-run, and the leg is obscured by the shell.

[–] [email protected] 2 points 11 months ago

CW: PRIMORDIAL MAMMALIAN FEAR

Oh yeah, reminds me of this video: "I'm a cute turtle. Just let me rest in peace. ...Why are you messing with me. ...You're seriously fucking with me? FEAR ME, PUNY MAMMAL. I AM A FUCKING REPTILE. ...Yeah you just stay back where you belong."

[–] [email protected] 1 points 11 months ago

Injuries happen in nature. Three-legged tortoises are valid!

[–] [email protected] 2 points 11 months ago

I like turtles

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

u fokin' wot m8? (-the turtle probably)

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

"the rules have changed"

[–] [email protected] 1 points 11 months ago

I love coyotes too! Did you read that Dan Flores book?