A Stable Diffusion guide from Alex Inglewood

Stable Diffusion really struggles to produce single images that depict multiple separate elements and characters. Frequently, AI-generated characters get lost in the background, with their hands and clothing blending into other elements in surreal and deformed ways. This limitation is a big problem for me when I make the images that accompany my original short stories, because I want a character with distinguishing features set cohesively in a detailed background that matches the text of the story.

In my last article, I discussed how to use in-painting alone to insert an AI character into an image. I have also previously discussed how NOT to use img2img to insert an AI character. While in-painting has a lot of advantages, after much experimentation I have decided that rendering elements separately with the Stable Diffusion Web UI and then compositing them in a third-party image editor such as Photoshop or GIMP is by far the fastest way to get consistent, high-quality results.

The process I am about to describe is a faster and much simpler version of the video tutorial produced by Albert Bozesan. With Photoshop, it makes sense to create the AI character first and then composite it into a separate background image. This is the exact opposite of the in-painting order, for which I generated the background image first and then added the character afterwards. The advantages of using Photoshop will hopefully become clear as I walk through what I did.

For the character of Rodriguez in my story True Justice, this is the prompt I used in Stable Diffusion:

Wide angle portrait, cell shaded comic of a (frowning) chibi ((police officer)) sitting behind a (desk) with (stacks of paper). ((Flat background)), subtle colors, black outlines, ((post grunge)), concept art by josan gonzales, wlop, james jean, victo ngai, david rubin, mike mignola, deviantart, art by artgem

The "flat background" part of the prompt is key here. It told Stable Diffusion to generate an image with a plain, single-color backdrop behind the character, which makes it very easy to use selection tools such as the magic wand to cut out Rodriguez later. I will admit that I generated dozens of different characters until I got the one above, but I think that is a major advantage of this method. Had I tried to render the background at the same time, it is highly likely that elements I cared about would have been dropped, or that SD would have rendered them in a deformed manner that blends the foreground into the background.

The cell shaded comic art style plus the plain background has several advantages:

- Without a busy background, SD is more likely to render anatomically correct and distinct body parts.
- With a simpler prompt that focuses only on the character and a couple of foreground elements, I am much more likely to get a combination I want.
- It's easy to remove or edit specific elements in Photoshop afterwards. For example, in a few steps you will see that I remove the odd stack of books on the left, which Stable Diffusion probably intended to represent a desk phone.

Making the Background in Stable Diffusion

In the story, the main character goes to meet Rodriguez in a police station. Using a similar prompt to the one I used for Rodriguez, I got a pretty decent looking background, but stylistically it did not match the cute cartoon look of the character. Since I already knew what the character looked like, I was able to use img2img, a CFG of 14, and a denoising strength of 0.3 to get a style and color palette that better complemented Rodriguez:

Background after several img2img generations to match the character

To generate the above, I used the following prompt:

Establishing shot of the interior of a police station. Subtle warm colors, black outlines, intricate, highly detailed, post grunge, cell shaded cartoon concept art by josan gonzales and wlop, by james jean, victor ngai, david rubin, mike mignola, deviantart, art by artgem

Using Photoshop To Put the Character and Background Together

After getting an AI character and background I like, I used the upscaler to increase the size of both images by 4x, then imported them into Photoshop. Using various selection tools, I created a clipping mask to remove the flat grey background from behind Rodriguez so that the police station would show through. I also removed the diagonal books and used adjustment tools to change Rodriguez's sleeves from yellow to white.

The below is already looking pretty good, but the lighting on the foreground elements does not match the lighting in the background.

The result after changing the blend mode to Color.
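For readers who prefer to script the img2img step rather than click through the Web UI, here is a rough sketch of how the settings described above (CFG 14, denoising strength 0.3) could be sent to the AUTOMATIC1111 Stable Diffusion Web UI's HTTP API, assuming the server was started with the `--api` flag. The URL, step count, and helper names are my own illustrative choices, not part of the original workflow.

```python
import base64
import json
from urllib import request

def build_img2img_payload(init_image_b64: str, prompt: str) -> dict:
    # Field names follow the AUTOMATIC1111 Web UI img2img API.
    return {
        "init_images": [init_image_b64],
        "prompt": prompt,
        "cfg_scale": 14,            # how strongly to follow the prompt
        "denoising_strength": 0.3,  # low value preserves the original composition
        "steps": 30,                # illustrative; tune to taste
    }

def run_img2img(base_url: str, image_path: str, prompt: str) -> bytes:
    # Encode the source image, POST it to the img2img endpoint, and
    # decode the first returned image back into raw PNG bytes.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    payload = build_img2img_payload(b64, prompt)
    req = request.Request(
        base_url + "/sdapi/v1/img2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return base64.b64decode(result["images"][0])
```

Running several passes of this (feeding each output back in as the next init image) mirrors the "several img2img generations" used to pull the background toward the character's palette.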
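The magic-wand cutout that the flat background makes possible can also be approximated outside Photoshop. The sketch below, using Pillow rather than the author's Photoshop workflow, samples the background color from a corner and makes every pixel within a tolerance of that color transparent; the tolerance value is an assumption, and a real composite would still need manual cleanup.

```python
from PIL import Image

def cut_out_flat_background(img: Image.Image, tolerance: int = 30) -> Image.Image:
    # Magic-wand-style cutout: pixels close to the flat background
    # color (sampled at the top-left corner) get alpha 0.
    img = img.convert("RGBA")
    pixels = img.load()
    br, bg, bb, _ = pixels[0, 0]  # assume a corner shows the flat background
    w, h = img.size
    for y in range(h):
        for x in range(w):
            r, g, b, a = pixels[x, y]
            if abs(r - br) + abs(g - bg) + abs(b - bb) <= tolerance:
                pixels[x, y] = (r, g, b, 0)  # make background transparent
    return img
```

The resulting RGBA image can then be pasted over the police-station background with `Image.alpha_composite`, which is roughly what the clipping mask accomplishes in Photoshop.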