catbat


The generator is frequently updated, but this is what we know for now (based on all the different topics here about it); a rough code sketch pulling these settings together follows the list:

-the "style" add some prompts and negative prompts to your own prompt, so you have to add it on your prompt on automatic1111, you can find them here: https://perchance.org/t2i-styles#edit

-the model is different depending on the prompt/style. For example, Realistic Vision (confirmed) and maybe Deliberate v2 are used alternately for photo-related images (mainly Realistic Vision since an update about a month ago), and I think Deliberate v2 is used for other kinds of styles too (it is a very versatile model). It was definitely used at some point, because someone found it in the properties of earlier JPEG files, and I think it is still used. We don't know if other models are used for other styles

-sampling method: it seems to be Euler a; that is also what someone found in the properties of earlier JPEG files

-sampling steps: no more than 20

-"no loras, no embeddings, no fancy sampler stuff, no hi-res fix, no hidden ‘quality’ prompts" (from @[email protected] )

-BUT (and this is the "problem" lol) "there is hidden stuff to reduce suggestive/nudity/etc." (these are the words of the owner, @[email protected])
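
Putting the points above together, here is a minimal sketch of what a local reproduction attempt might look like, assuming the diffusers library. The checkpoint file name, the style fragments, the guidance scale and the extra "safety" negative terms are all guesses or placeholders based on this thread, not confirmed perchance settings:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Deliberate v2 was reportedly found in old JPEG metadata; Realistic Vision is the
# other candidate for photo-style prompts. The file name is a placeholder: point it
# at whichever community SD 1.5 checkpoint you want to test.
pipe = StableDiffusionPipeline.from_single_file(
    "Deliberate_v2.safetensors", torch_dtype=torch.float16
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler a
pipe = pipe.to("cuda")

user_prompt = "portrait of an old fisherman, detailed skin"
# The "style" from https://perchance.org/t2i-styles#edit appends extra prompt and
# negative-prompt fragments; these strings are placeholders, copy the real ones.
style_prompt = "photo, dslr, natural lighting"
style_negative = "drawing, painting, cartoon"
# Guessed anti-nudity terms; the actual hidden stuff is unknown.
hidden_negative = "nsfw, nude, suggestive"

image = pipe(
    prompt=f"{user_prompt}, {style_prompt}",
    negative_prompt=f"{style_negative}, {hidden_negative}",
    num_inference_steps=20,    # "no more than 20"
    guidance_scale=7.0,        # CFG scale is unknown; 7 is a common default
    width=512, height=768,     # resolution is unknown too, and it changes the result
).images[0]
image.save("test.png")
```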

And THIS (the hidden stuff) is why I think we just cannot reproduce the perchance text2img generator locally in automatic1111, even if we use Realistic Vision or Deliberate v2 with the right sampling method/steps and the exact same prompt and style prompts. Stable Diffusion is very, very sensitive to very small changes (change just one word in a prompt and the images can be very different). Even the resolution changes the result (portrait vs landscape). So without this "hidden stuff" and the other settings we don't know, it is for now impossible to reproduce it... I tried adding negative prompts like "nsfw" or "porn" and things like that, but it doesn't reproduce the perchance generator.
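
If you want to see this sensitivity for yourself, here is a quick sketch (again assuming diffusers; the prompt and checkpoint file are placeholders) that fixes the seed and only flips the resolution between portrait and landscape; the two outputs come out noticeably different even though nothing else changed:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint, same caveat as in the sketch above.
pipe = StableDiffusionPipeline.from_single_file(
    "Deliberate_v2.safetensors", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse at sunset, photo"
for width, height in [(512, 768), (768, 512)]:        # portrait vs landscape
    gen = torch.Generator("cuda").manual_seed(1234)    # identical seed for both runs
    image = pipe(prompt, num_inference_steps=20,
                 width=width, height=height, generator=gen).images[0]
    image.save(f"lighthouse_{width}x{height}.png")
```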

So we really hope he will give us all the settings (in fact it could be explained in two minutes ^^) and then we will stop bothering him about it lol

Upvoting the topic/comment can help ^^

[–] [email protected] 1 points 10 months ago

Yes, I tried again and again to reproduce the perchance text2img, but there is really something going on behind the scenes. The hidden stuff makes the difference, I think. We know the model (for photos it is alternately Deliberate v2 and Realistic Vision), and he said he uses no LoRA, "no embeddings, no fancy sampler stuff, no hi-res fix, no hidden ‘quality’ prompts", BUT "there is hidden stuff to reduce suggestive/nudity/etc". I tried using negative prompts like "nsfw", "porn" and things like that, but I never get exactly the same thing as in perchance. It is very frustrating. Please @[email protected], could you help us? It will literally take you 2 minutes to explain the full settings and we will all leave you alone about it lol. Thank you very much!!

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

Thank you for your answer!

Could you please tell us what the previous model used for photos was (before "realistic vision", which is, imho, not as good as the previous one), along with the sampling method/steps and the hidden stuff you use to reduce suggestive/nudity/etc. (I suppose these are hidden prompts, right?)? This "hidden stuff" can make a big difference in the results (SD is very sensitive to small changes)! I don't have bad results on my local automatic1111; I'm just trying to get the same results with the same prompts, so I just need this simple info (and I think it's the same for the other people who asked for it). Thank you!

[–] [email protected] 1 points 11 months ago

It's not certain that he uses Deliberate v2, because even with it you won't get the same results as in the perchance generators. Maybe he uses Euler a too, but really, we need info directly from him. And I think the settings were changed a few days/a week ago; the results are now slightly different

[–] [email protected] 3 points 11 months ago (6 children)

Deliberate v2 (or maybe Artius, but I don't think it's the right checkpoint), but like you, I just cannot replicate what I do on perchance. We need to know the exact checkpoint/LoRAs and the settings / the hidden prompts (all the "art style prompts" are known though, check my post), etc. In fact it could be explained in two minutes ahah, but without it we could work for months without finding the right settings lol. So yes, we need the help of the dev! please!!! ^^

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

I think the hidden prompts/settings have been changed recently? The results are different from before with the exact same prompts. The text2image generators are still very good, but it's frustrating to depend on settings that can change at any time without having any control over them. So it would be so cool to know the models/LoRAs and prompts/settings (if possible the previous ones too) used for the generators! please! ^^

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (2 children)

Yes, I understand it can change later on. But for now it's impossible for me to get the same result with the exact same prompts + art style prompts, for example with Deliberate v2, and I've run tons of tests with a lot of models + sampling methods; it's very, very frustrating. In a few seconds the dev could solve my whole problem with just a message, or even a private one, by telling me the name of the model + LoRA (if there are any) + the prompts under the hood actually used in the generators.

Could you please ask him if he could give me this simple information? It would help me soooooo much! I just would like to replicate on my automatic1111 what I get in the generators, that is all ^^ and it's just a matter of having the right models/LoRAs and prompts/settings, it's very easy ahah ^^ but without this info it's just impossible to do

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (5 children)

Thank you! But I think it is really about the model (+ maybe a LoRA)

My point is to experiment with prompts on the perchance generator: it's super fast, easy to use, it's super cool!!! And then I would like to take the prompts I experimented with on perchance and liked and use them on my PC, adding things like LoRAs and better resolution to improve them. But for that I need the same model (+ LoRA, if there are any), of course
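
For what it's worth, once the base checkpoint is known, that "improve it locally" step is straightforward. Here is a hedged sketch with diffusers; the checkpoint and LoRA file names are hypothetical, and in automatic1111 the equivalent would simply be adding something like <lora:add_detail:0.8> to the prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder: this only makes sense once the real perchance checkpoint is known.
pipe = StableDiffusionPipeline.from_single_file(
    "the_perchance_checkpoint.safetensors", torch_dtype=torch.float16
).to("cuda")

# Hypothetical detail-enhancing LoRA file in the current directory.
pipe.load_lora_weights(".", weight_name="add_detail.safetensors")

image = pipe(
    "prompt that worked on perchance, plus the known style fragments",
    negative_prompt="style negative fragments (plus whatever the hidden terms are)",
    num_inference_steps=20,
    width=768, height=1152,                 # higher resolution than the generator
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("improved.png")
```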

Is it possible to get all the info (the name of the checkpoint merge/model and LoRA, for example), or if it is a custom one, to share it? It is the same question as in the "post regarding the model", and honestly you (perchance) kinda evaded the question ahah. If we don't have the "recipe" (model and LoRA, if there are any), we cannot replicate what we do on perchance, and so it is less useful to work on it because the prompts won't give the same results, you see what I mean

 

Hi, I would like to know which Stable Diffusion model the Perchance text2image generator uses. I am not able to get the same (very good) results "at home" with the base model or custom ones, so it seems it's a specific one? I would like to try adding some LoRAs, so it would help me to have the name of the model, or if it's a custom one, could you share it?