this post was submitted on 27 Feb 2024
3 points (71.4% liked)

Perchance - Create a Random Text Generator

445 readers

⚄︎ Perchance

This is a Lemmy Community for perchance.org, a platform for sharing and creating random text generators.

Feel free to ask for help, share your generators, and start friendly discussions at your leisure :)

This community is mainly for discussions between those who are building generators. For discussions about using generators, especially the popular AI ones, the community-led Casual Perchance forum is likely a more appropriate venue.

See this post for the Complete Guide to Posting Here on the Community!

Rules

1. Please follow the Lemmy.World instance rules.

2. Be kind and friendly.

  • Please be kind to others in this community (and in general), and remember that for many people Perchance is their first experience with coding. We also have members for whom English is not their first language, so please take that into account too :)

3. Be thankful to those who try to help you.

  • If you ask a question and someone has made an effort to help you out, please remember to be thankful! Even if they don't manage to help you solve your problem, remember that they're taking time out of their day to try to help a stranger :)

4. Only post about stuff related to perchance.

  • Please only post about Perchance-related stuff, like generators on it, bugs, and the site.

5. Refrain from requesting Prompts for the AI Tools.

  • We would like to ask you to refrain from posting requests for help specifically with prompting/achieving certain results with the AI plugins (text-to-image-plugin and ai-text-plugin), e.g. "What is a good prompt for X?" or "How do I achieve X with Y generator?"
  • See Perchance AI FAQ for FAQ about the AI tools.
  • You can ask for help with prompting at the 'sister' community Casual Perchance, which is for more casual discussions.
  • We will still be helping/answering questions about the plugins as long as it is related to building generators with them.

6. Search through the Community Before Posting.

  • Please search through the Community Posts here (and on Reddit) before posting, to see if a similar post already exists.

founded 1 year ago

Hi all,

first off, I'd just like to say how blown away I am by the potential of Perchance. I bet to most of you this is baby stuff, but for me this is my first step into this world and it's just incredible.

So I have a question...more about whether it's possible; then I'll properly wrap my head around the coding of it.

I'm looking to create a short sequence of scenes, like a still animation. Sometimes the background will stay the same but the character would change pose, say. Or maybe the background (a kitchen scene, for example) may change camera angle/view and the character would change position/pose. I'm not looking to create frame-by-frame stuff, just scene changes that retain features throughout. I can totally see it being possible to do; I was just hoping to hear some advice from people with much more experience than I have.

If any of that doesn't make sense (most probably!) please just ask and I'll try to better explain.

TIA

Sam

p.s. Oh, I should probably state that I plan to use t2i to create the scenes, then overlay/combine the character and adjust accordingly.

top 41 comments
[–] [email protected] 3 points 8 months ago* (last edited 8 months ago) (1 children)

cool idea and welcome!

started typing stuff but i think more info would be handy first.

of note: i know that on the site Playground, which is advanced and uses Stable Diffusion XL to generate images (perchance uses the Stable Diffusion version before XL), you could generate a character and a scene, then edit the scene file, copy-paste the image of the character onto it, and then ask the ai to recognize what is in the picture and redraw it. You would then get a 'seamless' combo picture of the specific character in a different scene. The issue doing that here is that perchance, as far as i know, does not have the ability to recognize an image and generate something from it. I will eventually make my own SDXL server, but there are many steps between here and there, so we will eventually have this power, just not yet.

you could also split a prompt into two parts, a character and a scene, and allow editing the scene prompt and character prompt separately, so the same character could be generated with multiple different background scenes. very doable and simple code-wise.
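as a rough illustration, the Lists-panel side of that split might look something like this (a minimal sketch; the list names and prompts are made up, not from an existing generator, and it assumes the [image(...)] usage of the text-to-image-plugin described elsewhere in this thread):

```
image = {import:text-to-image-plugin}

scenePrompt
  a cozy kitchen, morning light
  a cozy kitchen seen from the doorway

characterPrompt
  an old granny in an apron, side profile
  an old granny in an apron, back turned

output
  [image(scenePrompt + ", " + characterPrompt)]
```

because the scene and character prompts live in separate lists, you can hold one constant and vary the other, keeping the same character across different backdrops.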

it is also slightly harder, but within reach, to get the images out of the generator. so if, for example, you want to place a big box at the top left of your screen that is an image of a background, and then place another image on top of it, you can, and i or we can show you how. sounds like this may be what you are going for, and it's highly doable; i bet i could show you even though i don't know t2i.
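for the 'box with an image on top' idea, one common approach (just a sketch; the sizes, positions and placeholder src values are illustrative) is absolute positioning in the HTML panel:

```html
<!-- Hypothetical sketch: layer a character image over a background image.
     Replace the src placeholders with your generated images. -->
<div style="position: relative; width: 512px; height: 512px;">
  <!-- background fills the whole box -->
  <img src="BACKGROUND_IMAGE_URL" style="position: absolute; left: 0; top: 0; width: 100%; height: 100%;">
  <!-- character sits on top, positioned within the box -->
  <img src="CHARACTER_IMAGE_URL" style="position: absolute; left: 30%; bottom: 0; width: 40%;">
</div>
```

the outer div makes a positioning context, so the two images stack in source order: background first, character on top.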

If you want to do something like the above and have transparent backgrounds for the image of the character, i believe that is something perchance lacks currently, though I believe I can create something that will postprocess your character image in that way.

that's just what comes to mind right now for "is it possible?"

if you figure out more of what you want and more detail i will respond more exactly. :)

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago) (3 children)

p.s. I've just looked up the site Playground you suggested. The landing page highlights the very thing: two images integrated. I've not been on the site, so I have no idea as to the UI and what's involved...but if I'm honest, I quite like the idea of coding it myself and really deep-diving in. Also, I think having the images separate might still be handy, I don't know!? But it'll be quite fun to explore. This whole project has evolved for me...I want to create a portfolio of music compositions for film/TV etc., and was going to use old silent movies. But then I pondered the idea of creating my own. Animation would be too labour-intensive (I think...well, at these preliminary stages anyway), so I thought about creating stills. I started exploring AI possibilities, discovered Perchance today and was blown away by it! It fits what I'm looking for perfectly. Coding it bespoke to my project will provide so much versatility and potential. So it looks like I'm back to school!

Also, I wasn't sure if the image would differ if I altered the prompt itself, and so would lose continuity? I was thinking of maybe setting up the generator to have conditions that cover the alterations, so it would only change, say, 'viewpoint', and use the same seed? Again, I'm completely new to this and just piecing bits together, so this is most likely a completely arse-about-face way to do it! lol

[–] [email protected] 3 points 8 months ago (1 children)

It would be quite hard in Perchance, since it currently doesn't have any 'image-to-image' or 'inpainting' capabilities, which might be what you are looking for. I would suggest looking into creating your own local Stable Diffusion setup (Stable Diffusion being the model that Perchance uses). Other than that, prompting in Perchance is quite customizable; see this prompting guide.

Unfortunately, we don't want users to post here asking for 'prompts' or how to get a certain image with the plugin or with a certain generator (see Rule #5), since people would mostly come to think that Perchance is an AI website. It isn't; it is first and foremost an engine for creating random text generators. We have a channel on Discord that is much more relaxed about asking for image 'prompts'.

[–] [email protected] 3 points 8 months ago (1 children)

Thank you for your very informative reply (although now the cogs are whirring in all manner of directions!).

I totally understand what you're saying, and I see that if I were requiring that realm of functionality, i.e. inpainting, then a different approach would be better. I'm intrigued by the suggestion of my own local Stable Diffusion setup. I didn't even know that was a possibility, and in the long run it may be the better option even if what I require is available here. So, in my thinking (and please forgive my naivety, I'm green as grass at present!): I would generate one image, say the background first. Write the generator to have quite specific conditions, i.e. depth of view, positioning, scale, size etc. (I'm totally clueless as to the limitations here, so again please correct me if this isn't possible). Result: one image, the backdrop. Then generate the character image, maybe in a whole separate generator. Again, very specific with scale or pose, positioning etc., and without a background. So now both images are created in the manner required. My thought was the locked-layer combination!? To combine the two images 🤷🏻‍♂️ and voila. The next step would be to adjust parameters within each generator, but revolving around the same seed (I might be talking nonsense here as, again, I'm just learning), to adjust in the desired way while retaining the features, i.e. colours, style etc., for continuity.

Am I just wishing on a star do you think and better suited to my own local?

Also, I'm not seeking any prompt advice here, and I respect that this isn't the place for that either. I just wanted to reach out to experienced minds to question the plausibility of my thinking, as with my limited knowledge I could only piece together an idea from what is already available, tailoring it to my needs. I mean, even if I create two images and use Photoshop to layer them, it's not an issue. My main focus is the retention of detail whilst manipulating the image.

Thank you again for a very thought provoking reply. Off I go to see what a 'local' setup looks like! :)

[–] [email protected] 3 points 8 months ago (1 children)

I would generate one image, say background first. Write the generator to have quite specific conditions I.e. depth of view, positioning, scale, size etc etc (I’m totally clueless as to the limitations here so again please correct if not possible). Result, one image. Backdrop.

Images can be fine-tuned with tags (although the AI only tries its best to follow them), so you can change the conditions of the images.

Then generate the character image. Maybe whole separate generator. Again, very specific with scale or pose, positioning etc and without background.

You can also do the character in the same generator; no need to code another one, you just need to change the prompts.

My thoughts were the locked-layer-combination!? To combine the two images 🤷🏻‍♂️ and voila.

The layering part is quite easy to implement (see this example generator I made a while back), but you would need to crop the character image so it can overlay successfully on top of the background. If you were to use 'inpainting', you could just 'paint over' the current image without needing to layer and position multiple images.

The next step would be to then adjust parameters within each generator but revolving around the same seed (might be talking nonsense here as again, I’m just learning) to adjust in the desired way but retaining the features I.e. colours, style etc for continuity.

Using the same seed is a good idea for keeping the 'same' essence, or a replicable composition (see the prompting guide that I linked earlier). But yeah, a local setup with 'inpainting' would be better; there are also other 'plugins' for local setups that let you instruct the AI area by area. Also note that you might need a powerful computer (and graphics card) to run local models quickly.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

ahhh I see, tags will be involved for sure then. I'll take a look at your generator too and try to get my head around what's going on. I wonder; you mentioned 'cropped image' in your reply...well, it just got me thinking about how images are 'recognised', and whether there is a possibility for the AI to differentiate the character from the background (if the background is given transparency) and create the crop itself? As in, a lasso-plugin type thing that can auto-detect? Just an abstract thought, so I don't know if it's completely ludicrous or not!? 🤷‍♂️

Yes, GPU....might be my Achilles heel at present. I'm just running off a laptop, 8GB RAM kind of crap! Apparently it could run Stable Diffusion, but not SDXL without being slooooow! Would having a local Stable Diffusion, and not XL, be pointless?

[–] [email protected] 2 points 8 months ago (1 children)

I don't think there are any AIs that can auto-crop an image, at least none that are currently available and can be integrated on the site. I think the v1.5 models are the cream of the crop currently, since there are a lot of models for them (and they have less censorship compared to XL, I think). I haven't made a local setup myself, so I can't really point you in a direction. I would suggest looking into the StableDiffusion subreddit (or the community here on Lemmy) to start.

[–] [email protected] 2 points 8 months ago

I've been looking into the image cropping and finding some JavaScript plugin subreddits, so I will delve a little deeper to see what people have been coming up with. I'm also going to try SDXL on my laptop to see what happens...if it takes an age to do anything I'll resort to SD....Thanks for all the pointers 👍

[–] [email protected] 2 points 8 months ago

I would like to have a generator that I can use for projects afterwards too...just keep on developing it as required.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago) (1 children)

"to create a portfolio of music compositions for film/TV etc"

do you make music?

here's a cute thing:

setInterval(function(){
  // do something here
}, 3000)

is JavaScript for 'do something every 3000 milliseconds'

if you have constant BPM in your songs you can align your 'animation' exactly to your song :)
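building on that, here's a sketch of how the timing maths could work, assuming a constant tempo (the bpm value, scene list and showScene callback are placeholders, not from any existing generator):

```javascript
// Hypothetical sketch: advance a sequence of still scenes in time with a song.
// `scenes` and `showScene` stand in for your own images and display code.
const bpm = 90;                 // your song's tempo (beats per minute)
const beatsPerScene = 4;        // e.g. change scene once per bar of 4/4
const msPerBeat = 60000 / bpm;  // 60,000 ms per minute / beats per minute
const msPerScene = msPerBeat * beatsPerScene;

function startSlideshow(scenes, showScene) {
  let i = 0;
  showScene(scenes[i]); // show the first scene immediately
  return setInterval(function () {
    i = (i + 1) % scenes.length; // wrap around to loop the sequence
    showScene(scenes[i]);
  }, msPerScene);
}
```

at 90 BPM with 4 beats per scene, that's a scene change roughly every 2.7 seconds, locked to the song's bars.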

[–] [email protected] 2 points 8 months ago

Yes, I do make music. This is a really cool idea to have in your back pocket...once I'm a bit further down the line I'll see if it comes to fruition! thanks

[–] [email protected] 3 points 8 months ago (1 children)

"you could also split a prompt into two parts and have a character and a scene and allow editing the scene prompt and character prompt separately so the same character could be generated with multiple different background scenes" may be what you are seeking and most doable, though it may not do it to the quality you want :)

does this sound like what you would like to try for first? i could make an example generator that does this :)

[–] [email protected] 3 points 8 months ago (1 children)

Ello allo!

What an amazing response, thank you for taking the time to do so.

We're totally on the same method of thinking; I'll use a kitchen scene as an example again (it's also part of the story I'm writing), and within the kitchen is an old granny. What I'm hoping to achieve is a similar effect to classic animations, but minus the animated character. My plan: generate a kitchen image in a specific perspective, and then use that image to further generate replica images, just with different viewpoints/perspectives. Then generate the granny (say, full body, side profile) so she's "facing the cooker" in a kitchen backdrop. And simply overlay and position the two together. It doesn't need to be any more technical than that. No movements etc. Next scene: same backdrop, for example, but now generate a granny with her back to us (with exactly the same details from the previous scene, i.e. this granny looks like the earlier granny) and overlay the two differently to incorporate granny at the sink. Next scene...close-up of granny at the sink. So the backdrop has changed focus and zoomed in, and so has granny, but all with the same features.

This is the first encounter I've had with Perchance, or even with coding for such a thing, but after discovering several plugins like 'pose', 'physical description AI image' and 'image layer combiner', I had a good feeling that what I'm looking to achieve would actually be possible. I kinda wanna go through the challenge of it a bit too, as I'd like to get a deeper understanding, so that when I want to manipulate stuff later I have more control and a greater scope of what can be achieved.

Again, I really appreciate the considered response. I was looking through some posts and could see that it's a lovely community. :)

many thanks, Sam

[–] [email protected] 3 points 8 months ago (2 children)

my suggestion is: start having fun with perchance :)

i made this with you in mind https://perchance.org/astartforyou to explain at least a little :)

i love your learning attitude and that you yourself are growing :)

I hope you find perchance an avenue into coding :)

[–] [email protected] 4 points 8 months ago (2 children)

Yeah, I also hope so 😊

Also @Alllo, the starter generator is pretty neat; it just needs some code refinements so more people who are new to Perchance can understand it better 😄

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

haha 'code refinements' = story of my life

that literally was my best attempt at making a starter thing lol

i tried to go over every line with care and make the html simple and exact.

funny you immediately notice im a noob anyway, lol!

you could always go over it and refine it as you see fit to its next better state :)

[–] [email protected] 3 points 8 months ago (1 children)

@Alllo Every time I wake up, I immediately see a post or reply of yours, and if I want to reply to it, I do. I'll definitely take a look at the code at some point, and maybe transform it into my own entire generator 😄

[–] [email protected] 2 points 8 months ago

Thanks guys, this is greatly appreciated :) I'm going to sink my teeth in and try and get up to speed a bit (well, maybe a hurried walk, or a jog in fact. Not quite speed! lol)

[–] [email protected] 2 points 8 months ago (2 children)

Thank you too. It's really lovely to see such welcoming people. For me, I'm super keen to explore the possibilities/capabilities of these incredible tools. I'm also super aware that doing that requires a thorough understanding, which I am most definitely without! I find it interesting when first starting something like this. Everything is so alien. Nothing fully makes sense. Slowly the picture starts to become clearer and you begin to see the possibilities growing. 👍

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

You're welcome @mindblown! There are lots of examples and sources that can help you learn about creating Perchance generators, not just image generations:

https://perchance.org/plugins
https://perchance.org/tutorial
https://perchance.org/examples (see the More Examples section)
https://perchance.org/advanced-tutorial
https://perchance.org/perchance-snippets

Feel free to check them out when you want to explore more of the platform! 🙂

[–] [email protected] 2 points 8 months ago

Great stuff, thank you! So I think I'm going to wrap my head around the elements I'm looking to utilise and give myself priority learning topics (there's a lot of random generator stuff that I won't be using: lists etc.). I built a 3D printer many moons ago and so had to dive into Visual Studio and Python, so I have a very limited understanding of coding, certainly not HTML specifically, but I get the premise. What I'm mindful of is not losing sight of what I'm looking to achieve. I all too easily follow rabbit holes and lose track of the main objective 😂

[–] [email protected] 2 points 8 months ago (1 children)

The Perchance Hub - Learn Tab also has a couple of articles that might be useful for learning.

[–] [email protected] 2 points 8 months ago

Thank you 👍 As I say, I'm going to see if I can set up a local install and then start learning and developing what I'll specifically need for my project. I don't need to get absorbed in all facets of SD just yet, and generally, as things move along, more comes to light and I'll swot up on the hoof. I like to know what I'm doing, or what things do, rather than just blindly copy and paste, so I'll let the elements I use teach me.

[–] [email protected] 2 points 8 months ago

I've just opened this one up....thank you. That artist list of yours is a bloody handy addition! 👍

Thank you for your support. It's really surreal how coding has entered back into the journey....it's a story for another time. Really excited to see what wonders will be discovered along the way.

[–] [email protected] 2 points 8 months ago (1 children)
[–] [email protected] 1 points 8 months ago

very. much masterpiece. i BELIEVE that's a vioneT creation

[–] [email protected] 2 points 8 months ago

@mindblown Woah, this has to be the most-commented post on this community; there's so much discussion poured into this very topic.

[–] [email protected] 2 points 8 months ago (2 children)

@[email protected] @[email protected] @[email protected] So I wanted to give little update. Not only to keep you guys in the loop but to pull myself back in to check. There are so many rabbit holes I keep delving in to and the reality is, I think (bar a couple of bits) I have the functionality and tools here anyway. I'm going to develop the structure of it here. Or at least a base version to expand on.

My plan..... Although it has been suggested that one generator would be sufficient, my idea is to use multiple generators and funnel the outputs down. As an example: first phase, scene/background generation, with all the variables available to create the initial style/scene; one image is created and put to the side temporarily (Output = Image A). Continue on to character generation (the initial variable being how many characters are in the scene; more cycles would be added to accommodate the character count); the image created from this run is the base-position character (Output = Image B). The overlay/combining-images 'how to' is still in the works, so we'll just say that's happening now; what I would like is for that to be a relatively straightforward process (Output = Image C). BUT....that's not the end! The next generation is the image-manipulation phase. Most of the required data will automatically be funnelled down from the previous one, with an initial selection to ascertain which outputs are changing (which will then automatically populate the variables that are the firm unchangeables). The variants for Output A will be different from those for Output B (A's being framing, focus, lighting etc.; B's being poses, focus), and input selections will be limited to suit (as the core data that's already been funnelled down and populated is fundamental to retaining continuity, so it won't be amended in any way). Output D &/or E is generated. Then the Output C generator is used again, maybe with the further addition of referencing the earlier Image C as well; my thinking is to further ring-fence the intended style/feel (but maybe it would have a negative effect...not sure). Output = Image F. This cycle can then be repeated to create additional scenes.

I intend to utilise this system (creating scenes with multiple variations, all generated from the same core data) as my method for creating 'Acts', meaning all of the relevant data is still live, and is utilised that way too by funnelling it throughout. Once I've created a base model of this idea, it'll give me further insight into what works, what doesn't, what it needs, what it doesn't, etc., and whether I'm barking up the wrong tree or not!
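In code-ish terms (just a plain-JavaScript sketch of the idea; all the names and fields are made up, nothing Perchance-specific), the funnel might look like:

```javascript
// Sketch of the "funnel": every stage receives the same core object,
// so seed/style continuity is enforced by construction.

function makeScene(core, framing) {
  // stand-in for the background generator (Output A)
  return { kind: "scene", core: core, framing: framing };
}

function makeCharacter(core, pose) {
  // stand-in for the character generator (Output B)
  return { kind: "character", core: core, pose: pose };
}

function combine(scene, character) {
  // stand-in for the overlay/combination step (Output C)
  return { kind: "composite", core: scene.core, layers: [scene, character] };
}

// One set of core data shared by every scene in an "Act":
const core = { seed: 12345, palette: "warm", artStyle: "storybook" };

const shotA = combine(makeScene(core, "wide"), makeCharacter(core, "at the cooker"));
const shotB = combine(makeScene(core, "close-up"), makeCharacter(core, "at the sink"));
```

The key point being that every stage receives the same core object, so the continuity settings can never drift between outputs.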

Would love to hear your feedback/suggestions/concerns/criticisms! In my head it works....but that doesn't mean it actually works! lol!

S

[–] [email protected] 2 points 8 months ago (1 children)

@mindblown My head was wobbling a little bit reading all the technical details about the image generation, but anyway, that's also kind of interesting, especially for this one...

I would like to see the progress on your generator though; just make sure to write the technical details maybe a bit more understandably for the less-into-AI-image people (like me).

[–] [email protected] 2 points 8 months ago

Absolutely 👍 I hadn't really thought that far ahead, but as you've mentioned it and I've pondered, I think I would initially provide a condensed version (the background gen, character gen and combination) publicly, just until I've had some time to explore the boundaries of the original. But who knows...as and when the time comes, I'll be sure to be clearer with my explanation 👍

[–] [email protected] 1 points 8 months ago (1 children)

sounds like a great starting direction with distinct goals.

i think it doesn't matter if it actually works (tho it sounds both legit and doable), since you can adapt and make a 'next best thing' for any one part that doesn't, and some parts may even work better than you thought, with empowering nuances you don't yet see until getting to them. aka it sounds like a great path. nice getting it into a sequence and a system.

i've been doing a similar first push into node.js and databases. basically pushed way far, used a lot of things, and got some core things running successfully on that first push. then, yesterday, i stood back, went over nearly the entire system i had made when exploring, and, now understanding it, remade it in my own way to do what i want, how i want.

up to you how you learn, tho definite thumbs up to your direction, how you are getting into it, and that initial 'info gathering into deciding prioritizations upon a clearer picture' phase. it will help when you ask questions.

have fun delving in :)

[–] [email protected] 2 points 8 months ago (1 children)

Yeah, I agree...what I'm trying to do is wrap my head around the components and then further expand on those. In doing so, new approaches/methods come to light. For example...earlier, in my head, I envisioned the 'stacking' of generators.....that image is now evolving as I'm discovering input/output formatting, advanced hierarchical lists, multiple sub-listing etc. etc.

That's great man, nice work 👍 We sound very similar in our approaches. For the last few hours I've pulled up generators that have applicable functions and taken a good look at how they've been coded. Just breaking it all down and wrapping my head around it.

Is there a way of viewing a plugin's source code at all? I really want to look over a few and see how they're put together. The interesting thing I'm seeing with coding is the multiple approaches for achieving the same result. Some look crisp and smooth-running....some look convoluted and hard work for all involved, even the AI! lol

[–] [email protected] 2 points 8 months ago (1 children)

If you mean how the AI generates the image, that code is server-side. But you can look at the client-side code of any generator on Perchance: go to its page, then click 'edit' on the top navigation bar and it will open the code panels.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

Ahhh, now I understand. I initially thought all plugins were like the t2i one....so with this example below, where is the 'pose-generator-simple' code?

image = {import:text-to-image-plugin}
pose = {import:pose-generator-simple}
verb = {import:verb}

If I paste it at the end of perchance.org in the address bar I get a 'random pose' generator....so is {import:pose-generator-simple} using the generator at https://perchance.org/pose-generator-simple as the plugin? And therefore the HTML IS the code?

[–] [email protected] 2 points 8 months ago (3 children)

So, in Perchance there are two places to put code: the Lists panel and the HTML panel.

Most of the time, you 'import' a 'plugin' in the Lists panel, e.g.

image = {import:text-to-image-plugin}

output
  ...

So, to see the code of the text-to-image-plugin, you go to perchance.org/text-to-image-plugin.

In your example, you are importing the text-to-image-plugin, pose-generator-simple and verb generators. To see their code, you just add those 'names' to 'perchance.org/{name}'.

By default, all generators in Perchance can be imported, and the data to import can be specified.

The text-to-image-plugin only outputs $output(...), which is a function. To use it in your generator you would write [image(...)], since the output of the text-to-image-plugin is a function and you imported it into the namespace image (image = {import:text-to-image-plugin}).
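Putting the two panels together, a minimal hypothetical example (the prompt text is made up) would be, in the Lists panel:

```
image = {import:text-to-image-plugin}

prompt
  an old granny in a cozy kitchen
```

and then in the HTML panel you would write [image(prompt)], which calls the plugin's $output function with your prompt.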

[–] [email protected] 2 points 8 months ago

I'm finding your Physical Description V2 image generator really interesting to analyse 👍

[–] [email protected] 2 points 8 months ago (1 children)

And it all becomes instantly clearer. Thank you 😀👍

[–] [email protected] 1 points 8 months ago (1 children)
[–] [email protected] 2 points 8 months ago (1 children)

Woah, that is minimal indeed! ;) It's cool though man, I understand what's happening now. I presumed all plugins were programmed using JavaScript, so I didn't make the connection that the generators were themselves the "plugins". It totally makes sense now. 👍

[–] [email protected] 2 points 8 months ago

and besides...I've no time for plugins...I'm elbow-deep in KhanAcademy drawing ellipses 😂 😂

[–] [email protected] 2 points 8 months ago

I've jumped on that KhanAcademy course to help fill in the blanks I'm unaware of 👍