this post was submitted on 16 Sep 2024
12 points (92.9% liked)

Stable Diffusion

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (2 children)

I don't think so. They're going to have to do a lot better than a tutorial to win people back. That said, it also sucks that the two Flux models are distilled, which makes them close to impossible to fine-tune.

[–] [email protected] 1 points 1 month ago (1 children)

Kohya now supports Flux fine-tuning. I have seen nice examples on Civitai.

[–] [email protected] 0 points 1 month ago

Those might just be LoRA merged models, not full fine-tuning. From what I heard, fine-tuning doesn't work because the models are distilled. You'd have to find a way to undistill them to train them.
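For anyone unfamiliar with the distinction: "merging" a LoRA just bakes a small low-rank delta into the existing checkpoint weights; none of the base weights were ever trained, unlike a full fine-tune. A minimal numpy sketch with toy sizes (all names here are illustrative, not any particular trainer's code):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # toy layer width and LoRA rank (r << d)
W = rng.standard_normal((d, d))  # base model weight from the checkpoint
A = rng.standard_normal((r, d))  # trained LoRA factors (down/up projections)
B = rng.standard_normal((d, r))
scale = 0.8                      # LoRA strength applied at merge time

# Merging bakes the low-rank update into the checkpoint weight.
# The base W itself was never touched by training.
W_merged = W + scale * (B @ A)
```

So a "LoRA merged model" on Civitai can look like a fine-tune in the file format while only ever containing base weights plus a rank-r correction.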

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a finetune, or have I misunderstood something?

[–] [email protected] 0 points 1 month ago (2 children)

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.

[–] [email protected] 2 points 1 month ago

Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model then, that still gives me amazing results 😊

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

Quite the opposite. LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous (but also much more powerful).
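To illustrate why (a toy numpy sketch, not any particular trainer's code): during LoRA training the base weights stay frozen and only a small low-rank delta is learned, so the original model is always recoverable by dropping the adapter. Full fine-tuning updates every entry of W directly, which is where the forgetting risk comes from:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                             # toy layer width and LoRA rank (r << d)
W = rng.standard_normal((d, d))         # frozen base weight -- never updated
A = rng.standard_normal((r, d)) * 0.01  # trainable LoRA factor
B = np.zeros((d, r))                    # B starts at zero, so the delta starts at zero

# Effective weight with the adapter attached:
W_lora = W + B @ A

# Before any training step the model is exactly the base model:
assert np.allclose(W_lora, W)

# The adapter trains only 2*r*d parameters instead of all d*d:
lora_params = A.size + B.size
full_params = W.size
```

Removing the adapter (or just zeroing B @ A) restores the base model exactly; a full fine-tune has no such escape hatch.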