cool idea and welcome!
I started typing a reply, but I think more info would be handy first.
Of note: I know that on the site Playground, which is more advanced and uses Stable Diffusion XL to generate images (Perchance uses the pre-XL Stable Diffusion), you could generate a character and a scene, edit the scene image to paste the character's image onto it, then ask the AI to recognize what is in the picture and redraw it. That would give you a 'seamless' combined picture of the specific character in a different scene. The issue with doing that here is that Perchance, as far as I know, cannot take an image as input and generate something from it. I will eventually set up my own SD XL server, but there are many steps between here and there, so we will eventually have this power, just not yet.
You could also split a prompt into two parts, a character part and a scene part, and allow editing each separately, so the same character could be generated against multiple different background scenes. Very doable and simple code-wise.
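A minimal sketch of the split-prompt idea, assuming prompts are plain comma-separated strings; all names here are illustrative, not actual Perchance APIs:

```javascript
// hypothetical helper: join a fixed character prompt with a swappable scene prompt
function buildPrompt(characterPrompt, scenePrompt) {
  return `${characterPrompt}, ${scenePrompt}`;
}

// the same character, re-rendered against different scenes
const character = "a red-haired knight in silver armor";
const scenes = ["in a misty forest", "on a sunlit beach"];
const prompts = scenes.map(scene => buildPrompt(character, scene));
```

Each string in `prompts` would then be fed to the image generator, keeping the character description constant while the scene varies.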
It is also slightly harder, but within reach, to get the images out of the generator. So if, for example, you want to place a big box at the top left of your screen showing a background image, and then layer another image on top of it, you can, and I (or we) can show you how. It sounds like this may be what you are going for, and it is highly doable; I bet I could show you even though I don't know ti2.
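A sketch of the layering idea using plain CSS positioning, assuming you already have the two image URLs out of the generator; the function names are illustrative:

```javascript
// hypothetical helper: inline style for one layer in an absolute stack
function layerStyle(zIndex) {
  return `position:absolute; top:0; left:0; z-index:${zIndex};`;
}

// build markup that stacks the character image over the background image;
// this string could be dropped into the page's HTML
function stackedImages(backgroundUrl, characterUrl) {
  return `<div style="position:relative;">` +
    `<img src="${backgroundUrl}" style="${layerStyle(0)}">` +
    `<img src="${characterUrl}" style="${layerStyle(1)}">` +
  `</div>`;
}

const html = stackedImages("background.png", "character.png");
```

The `position:relative` wrapper anchors the absolutely positioned images, and the higher `z-index` puts the character on top.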
If you want to do something like the above with a transparent background for the character image, I believe that is something Perchance currently lacks, though I believe I could create something that post-processes your character image in that way.
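One way such post-processing could look, as a sketch only: take the flat RGBA pixel buffer a canvas gives you via `getImageData().data` and zero the alpha of near-white pixels. The function name and threshold are assumptions, and real images would need a smarter mask than a flat color threshold:

```javascript
// hypothetical post-process: make near-white background pixels transparent.
// `pixels` is a flat RGBA array (the layout canvas ImageData uses:
// [r, g, b, a, r, g, b, a, ...]).
function makeBackgroundTransparent(pixels, threshold = 240) {
  for (let i = 0; i < pixels.length; i += 4) {
    const [r, g, b] = [pixels[i], pixels[i + 1], pixels[i + 2]];
    if (r >= threshold && g >= threshold && b >= threshold) {
      pixels[i + 3] = 0; // zero the alpha channel for this pixel
    }
  }
  return pixels;
}
```

After processing, the buffer would be written back with `putImageData` and the canvas exported as a PNG to keep the transparency.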
That's just what comes to mind right now for "Is it possible?"
If you figure out more of what you want and can give more detail, I will respond more precisely. :)