this post was submitted on 18 Jul 2023
24 points (100.0% liked)

Stable Diffusion

Hi there, I've seen a few videos on YouTube showing off ComfyUI, and it looks incredibly powerful for fine-tuning the outputs of SD. It also looks dauntingly complicated to learn to use effectively.

For those of you who have played around with it: do you think it gives better results than A1111? Is it indeed better for fine-tuning? How steep was the learning curve for you?

I'm trying to figure out whether I'd want to put in the hours to learn it. If it improves my ability to get exactly the images I want, I'll go for it. If it just does what A1111 does, dressed up differently, I'll sit it out :)

[–] [email protected] 2 points 1 year ago (1 children)

The repository links to a list of examples, but the best way to learn is just to mess around with it. It is fairly intuitive to work with (especially if you have used another node-based UI before, like Blender).

The UI also lets you import and export your current setup (what I call a workflow) as a JSON file. If I get some time, I might share some of mine via a pastebin link or something.
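To give a rough idea of what working with such an exported file looks like, here's a short sketch that loads a workflow JSON and lists its node types. The structure shown (a `nodes` list with `type` fields) is a heavily simplified assumption for illustration; the actual file ComfyUI exports contains more fields.

```python
import json

# Hypothetical, heavily simplified workflow export. A real file comes
# from ComfyUI's save/export button and carries many more fields.
workflow_json = """
{
  "nodes": [
    {"id": 1, "type": "CheckpointLoaderSimple"},
    {"id": 2, "type": "KSampler"}
  ],
  "links": []
}
"""

workflow = json.loads(workflow_json)

# List which node types the workflow uses, e.g. to eyeball a shared
# pastebin workflow before loading it into the UI.
node_types = [node["type"] for node in workflow["nodes"]]
print(node_types)  # -> ['CheckpointLoaderSimple', 'KSampler']
```

Because the export is plain JSON, sharing a workflow really is just sharing a text file.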

[–] [email protected] 1 points 1 year ago (1 children)

I just figured out that I can drag any of my images, including ones made with A1111, into the UI, and it sets up the corresponding workflow automatically. I was under the impression this would only work for images created with ComfyUI in the first place. Either way, this gives great starting points to work with. I'll play around with it tonight and see if I can extract upscaling and ControlNet workflows from existing images as a starting point.
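For the curious: drag-and-drop can work like this because the generation metadata is embedded in the image file's text chunks, which you can read without any ComfyUI-specific tooling. Below is a stdlib-only sketch that parses PNG `tEXt` chunks and pulls out an embedded payload. The `"workflow"` keyword and the toy JSON payload are assumptions for illustration; the sketch builds its own tiny PNG in memory so it runs standalone.

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: value} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, value = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = value.decode("latin-1")
        pos += 8 + length + 4
    return chunks

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble a PNG chunk with its CRC, for the toy example below."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a toy PNG carrying a (hypothetical) workflow JSON in a tEXt chunk.
workflow = json.dumps({"nodes": []})
png = (PNG_SIG
       + make_chunk(b"tEXt", b"workflow\x00" + workflow.encode("latin-1"))
       + make_chunk(b"IEND", b""))

extracted = png_text_chunks(png)
print(json.loads(extracted["workflow"]))  # -> {'nodes': []}
```

In practice you'd read the bytes from an actual generated image instead of building one; the parsing half stays the same.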

[–] [email protected] 1 points 1 year ago

I didn't even know that; that's pretty cool. I'll have to try it later.