submitted 2 days ago by [email protected] to c/[email protected]
[-] [email protected] 1 points 2 days ago* (last edited 2 days ago)

People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a finetune, or have I misunderstood something?

[-] [email protected] 0 points 2 days ago

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.

[-] [email protected] 1 points 8 hours ago* (last edited 8 hours ago)

Quite the opposite: LoRAs are very effective against catastrophic forgetting, while full fine-tuning is much riskier (but also much more powerful).
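For the curious: a LoRA leaves the base weights frozen and only learns a small low-rank update on top of them, which is why it can't easily overwrite what the base model knows. A minimal NumPy sketch of the idea (all names and sizes here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 4, 8

# Frozen base weight from the pretrained model (never updated during LoRA training).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero, so the adapter begins as a no-op
# and training can only gradually add a rank-`rank` correction.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def lora_forward(x):
    # Base output plus the scaled low-rank update; alpha / rank is the
    # scaling convention used in the original LoRA paper.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted layer matches the frozen base layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B (d_in·rank + rank·d_out numbers) get gradients, the worst a LoRA can do is add a low-rank perturbation, whereas full fine-tuning can move every entry of W.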

[-] [email protected] 2 points 2 days ago

Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model then, that still gives me amazing results 😊

this post was submitted on 16 Sep 2024
11 points (92.3% liked)
