
Using AI for creative development w Brandon Lisk

By: Paranoid American

Summary

➡ The text is discussing an elaborate series of journals and sketchbooks, known as The Expansion Series, created by Brandon to promote self-awareness using mindful practices. The need to work around Amazon KDP’s restrictions on sketchbooks by having something printed on every fourth page is outlined. Brandon’s passion for promoting talented community artists in the project is also emphasized.
➡ The speaker discusses a concept for a new product, a sketchbook with an evolving stick-figure narrative depicted on every fourth page. This stick figure embarks on a variety of adventures, potentially inspiring users in their own creations. The speaker also speculates on the possibility of utilizing AI to animate the stick figure’s story throughout 260 pages of the sketchbook.
➡ The text narrates the user’s journey of using a complex image rendering tool that utilizes AI, allowing them to manipulate images based on input text prompts, altering effects, and choosing different models. The tool allows creative control, provides immediate visual feedback, and can recreate or modify images based on changes to specific variables or ‘seeds’.
➡ The text discusses an interactive session on AI and digital animation. The speaker emphasizes unique concepts like creating seamless transitions between various styles and utilizing tools like OpenPose. Further, it mentions creating stick figure animations, feeding them into apps, and bringing in other elements to create a diverse animated experience. Dialogue and imagination run freely in the discussion, exchanging ideas about the transformation of scenes and the blending of realities in sequences. The text concludes with the introduction of a new creation: 'The Paranoid American Homunculus Owner's Manual'.
➡ The text discusses creating animations on the corners of a book’s pages using a variety of characters and storylines. One method includes using AI to generate animation sequences, making tweaks as necessary to create the desired aesthetic. The process involves creating keyframes, establishing the animation’s style, and saving this information for future use. Software like Photoshop and Open Pose can be used to manage the process, allowing for precise control over the animation and the potential for time savings.
➡ The text describes the process of manipulating a digital model, using animation software to create different poses and movement. It includes the use of various tools such as OpenPose and the preprocessor feature, and discusses the impact of different trained models and their attributes on resultant animations. The discussion also explores the concept of style consistency, and the influence of data scraping on the AI learning process.
➡ This text discusses how to manipulate image backgrounds and includes discussions on white backgrounds, green screen chroma key backgrounds, PNG backgrounds, and background removal. It explores the use of different models and tools such as ControlNet, Regional Prompter, and Segment Anything for various effects. The text also highlights the potential of using AI for these tasks and suggests some strategies for specific outcomes like stick figure drawings or detailed rendering.
➡ This text is a description of an AI-driven text-to-image editor, which allows individuals to create images based on inputs and instructions. It also has a feature called 'Interrogate CLIP' which can analyze an image and describe it back, allowing for a better understanding and refinement of the output images.
➡ The speaker is demonstrating how to use an AI training model for creating images. They experiment with different model specifications, keywords, and settings. They frequently test their changes for quality and consistency, resolving any arising issues. Ultimately, they adapt the model to generate monochromatic pencil drawings of a male shooting T-rexes with a laser from a UFO, even discussing the possibility of using the illustrations for T-shirts.
➡ The text describes a process of designing an image with specific features (UFO, T-rex), using different parameters and models in a drawing software. The user manipulates the prominence of elements, positioning within the image, and level of detail, through iterative adjustments and different model choices to achieve a desired outcome.
➡ The text is a comprehensive dialogue on an AI-based image modification process. It includes experimenting with different models and tools, exploring the effects of altering the settings on image outcomes, and discussing the benefits of a cloud-based system for extensive image work. It further notes an interesting feature where the metadata of an image holds all modification parameters, aiding in continuing or recreating work as per requirement.
➡ The text discusses the use of different tools, interfaces, and models in a program for creating and refining images. This includes 'Automatic1111', 'ComfyUI', and 'LoRA' models, which teach a model something new. The author demonstrates using these with an example of creating an image of 'Iron Man with a molten lava suit'. The text also touches on drawbacks of misusing these tools, such as creating incorrect models.

Transcript

In a world overruled by machines, where algorithms dictate your every move, the rebellion begins now. From the ashes of fallen tech, Paranoid American unveils the knowledge to harness AI. To fight back. To retake our destiny. Join the uprising. Arm yourself with the power of AI. The battle for tomorrow starts today. Welcome to Paranoid Programming, one more episode. We're going to keep these going because I think that they've been helpful.

I find them helpful for me, so I couldn't care less if anyone else does. I'm going to keep doing it. And today I got my homie Brandon. First of all, just let me introduce yourself and tell people where to find you, and then I'll hype you up after that. I'll come on as, like, the afterwards hype man. Deal, man. That's awesome. First of all, thank you so much.

I’m a little out of breath. I ran to go grab some real quick in the intro and came back. But it’s so cool to see you. Yeah. So I run a show called Expanding Reality, and it’s been fun. It’s been a little over two years, almost three now that we’ve been doing it, and it’s been crazy and awesome. Get to talk to the coolest people throughout that journey.

I’ve actually contacted a bunch of authors and things like that and gotten really into publishing because I’ve always been into books and comics, which is why you and I just have the same world. And so now you and I both being in publishing. Again, apologies, I’m out of breath. I have been doing this journal series, which is really cool. So this is an opportunity, I suppose, to plug at the same time that we talk about what we’re doing here.

And it's an entire series. So the series is called The Expansion Series, and they're all offered in paperback and hardback. They're all six by nines. They're all the same size, but within them they're different. So the series has four different volumes for the first original one, and then they have sub volumes after that with alterations. The first one is called Mindful Expansion. And we had guest artist Janine Burgess do the cover. She did an outstanding job.

And then you can see here, Mindful Expansion, you know, abbreviated down to ME, so that you can say that every day you're working on ME, right? This is a mindfulness practice journal where on the inside of it here, you go through a daily sort of thing and it takes you through. And there's an intended use in all of this. It shows you how to use all this at the beginning.

But we're going to be rolling out more videos and things on how to do this, because it can look crazy or, like, whatever. But the basics are, it's a mindfulness practice journal, and it's my journaling practice that I just drew every morning for me. And then I just decided to publish it because I didn't want to draw it every morning. But also my wife Mary loves it, and I figured it could be useful for others, as it was for me, just a tool to get more aware of who you are.

And it's just got things like a daily design section. It's four questions you ask yourself every day. Moon tracking, which sounds silly, but I know New Moon, for me, is like my time of the month. I just know around New Moon I keep my schedule light, you know what I mean? Then there's a mantra, if you just want a little quick mantra or something. Gratitude, a reading practice, your conversation with self, goals, release, attract.

Like all of this stuff in there, but bigger than that as well, there's sample pages in the back here of another journal that we have available, which is Introspective Expansion, and that is just a blank line journal series that, same thing, has cool cover art by Ashley Rose. We have a ton of amazing talent in the community that's offered itself for these projects and we're just shouting them out from the rooftop.

So IE is what this one is, Introspective Expansion. And with that, it's more of a line journal series where you do have this really cool deliberate way of creating a conversation with self or keeping a diary or a journal or anything like that. But also you have these little nodes, as we call them, these constellations where you can fill in your time, date, month, whatever. You can have project delineations within this, and that's one of the cooler things.

And then also at the beginning of all of these, we have contributions from the community, right? So that's a huge part of this, is our community. So we have a bunch of authors in this one in particular because it's writers and authors geared. A bunch of creative, amazing authors like Dr. Irina Scott, Philip Kinsella. Lester Velez is in here. Philip Mantle. Dr. Doug Matsky. Mark Ollie. Jim Penniston.

He's the dude that touched the side of the UFO in the Rendlesham Forest case. Mark Gober's in here. So a lot of great words to get you into the projects. Now, the one that you and I are going to talk about, hashtag segue, is CE, the Creative Expansion part of this. Now, this cover was done by Gigi Dillon, and CE is all about creative expansion. It's mainly a sketchbook series.

So what we did with this was, it's mostly blank pages, okay? And that's the point, is to give you the space to create. But within that as well, same thing, community oriented. So very basic cover here, intro page. I give you sort of a coloring page if you want to take advantage of that. And then we go through with a featuring of several artists from the community. This is Erica Robin.

She does some incredible work. So we wanted to make sure that ahead of the sketchbook part... Nick Warden, I have the original of this hanging on the wall back there. It's amazing. This kid is incredible. There's quotes from all these artists on here, so all of them have different quotes. And we really wanted to come in and feature different people. So you have, like, Whitney Fox. You've got just incredible artwork ahead of you jumping in.

And, yeah, here’s my time. And there’s coloring pages in here, even all the way down to just doodles. Again, some fun community members just wanted to submit some awesome stuff. So what we’re doing with this is, again, featuring some amazing talent that we have. Everybody is listed in the back as well, how to find them, and we want to make sure that y’all can communicate further with everybody out there.

So in the back of each of these is a complete list of all of the participants and then how to find them, right? I've got to give the shout out, so especially to the cover art designers and everything like this. Again, big community projects. So several versions of the IE and CE coming. Like, now, I'm publishing Wisdom Expansion, which is the evening version of this one. It's the complete opposite.

It's the inversion of it, but it's more of an evening geared practice. And that one's a little bit more directed. IE and CE are really more open range. You can just do whatever. But even the line journal, there's touches, it's unique. It's not just a Canva reprint. I actually hand drew all of this stuff on an architecture table, where you draw it twice, right? You draw your draft, and then you go back and ink, and it's all hand done and then scanned in.

So all of it was super deliberate. It took hundreds and hundreds of hours to create, and I'm extremely excited about it. So, long story longer, why you and I are here today is because Amazon KDP, who we're utilizing to print right now, does not allow you to do sketchbooks, okay? They just don't want you to do sketchbooks. Now, I don't know, if you go with their ISBN, if you get their free one, then maybe you could do that. ISBN, for everybody listening, is International Standard Book Number.

And that's basically like your fingerprint for your book. It's your barcode, right, for your book. So we purchased our own ISBNs for all of our work, because that's what we want to do, which is own it. And so I think that because of that (again, I'm not 100% sure on this, because I didn't try to print it without one) KDP, or Kindle Publishing, which is Amazon, does not want you doing this.

I'm speaking in publishing terms; there are more people listening out here that don't do this all the time. So Kindle Publishing does not allow you to actually print more than four blank pages throughout a manuscript, and ten blank pages at the end of a manuscript. So if you want to create a sketchbook, you essentially need to have something on every fourth page throughout, right? What we did to get around this: it's just an AI bot that's asking, is there something to print on that page? Not how much is it, is it enough content, is there a percentage of print. It's just: is there something to print on that page? So we put a small, tiny little circle at the bottom of every fourth page.
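
A minimal sketch of what that workaround looks like in practice, assuming the Python reportlab library; the mark size and placement here are illustrative, not the book's actual production specs:

```python
# Sketch: a mostly blank 6x9 interior where every fourth page carries a
# tiny printable mark, so an automated blank-page check sees content.
from reportlab.lib.units import inch
from reportlab.pdfgen import canvas

PAGE_W, PAGE_H = 6 * inch, 9 * inch   # 6x9 trim size
TOTAL_PAGES = 260

c = canvas.Canvas("sketchbook_interior.pdf", pagesize=(PAGE_W, PAGE_H))
for page in range(1, TOTAL_PAGES + 1):
    if page % 4 == 0:
        # tiny circle near the bottom edge with the word "expand" in it
        c.circle(PAGE_W / 2, 0.3 * inch, 4)
        c.setFont("Helvetica", 2)
        c.drawCentredString(PAGE_W / 2, 0.295 * inch, "expand")
    c.showPage()   # close out the page (left blank if nothing was drawn)
c.save()
```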

Okay. And then that's how we got this printed. So it's, like I said, a sketchbook that you can go nuts on, but every fourth page you're going to see a tiny little circle at the bottom, and if you can read it, it says 'expand'. But why, again, we are here today is because, for one of the upcoming versions of CE, rather than doing it that way, we still want to provide a blank canvas, an open road for folks like me to sketch in, or something like that, or anybody.

Just to draw in and to create. Well, we need at least every fourth page to have something printable on it. So again, rather than throwing, like, a little period in the bottom or something like that, I figured we'd do something fun. So I've already connected with an artist on this. What we're doing is, essentially, this is going to be about 260 pages' worth of a sketchable journal for you all, because I still plan to feature artists throughout the community at the beginning of all of these, right? So within that, we would like for a character to have a narrative that he plays out in stick figure form, in a certain parameter around this six by nine journal book format, so that later we can go in and put him in all of these different scenarios.

Like, let’s say, for instance, the idea that we have that the artist is already working on, actually is to have basically a stick figure because we want it extremely simple. It’s going to be extremely tiny, but it’s going to be a narrative that runs along the outside of the perimeter of your sketchbook as you draw in it. So as you get to page 64 in your blank sketchbook, as you’re just sketching away, you’ll notice this little guy maybe being chased by a dinosaur now or picked up by a pterodactyl.

Well, let's say that you want to expand that, and you want to take that page and that little thing that we've provided (because KDP doesn't want you to print blank pages or provide sketchbook pages for people), you can take that now and expand it into a world. Maybe now you're inspired to draw, like, a volcano or something based on that one tiny little image. Maybe you want to animate the entire thing to full scale.

So now you as an artist can challenge yourself to go in and take this little stick figure's world, or this experience that he's having, because it's many. He goes through time. He's got a portal gun, he gets picked up by a UFO. It's all of this stuff, right? And so it's limitless as far as what we can do with imagination. He goes into a portal on the upper left, and then appears in the bottom right of the right page, right? So little things like this to make him dance around.

Now, again, if we would like, then, to present this to AI: we have a certain number of pages that this is confined within, 200 and, let's say, 60. Okay? Let's say we have 260 pages' worth of this story that needs to be played out. How would you get AI to animate it if we weren't using an artist? How would you get it to animate a stick figure where, basically, you type in: he finds a portal gun, gets picked up by a UFO, beats off the aliens.

However you want to take that, better be specific. Yeah, it'd be funny. The whole thing is just him beating off dinosaurs and time travel ghost pirates and shit the whole time. And then he takes a portal gun, zaps, you know. Let's say, then, that we just simplify everything. Dude finds a portal gun, can portal anywhere, goes on wild adventures. And then what we can do is basically insert the parameters for those adventures, right? Underwater to meet mermaids, down to Antarctica to fight aliens in the secret pyramid.

That's under there. Fun. Again, dinosaurs would be, you know, being chased by a T-rex, picked up by a pterodactyl, and then dropped, and then picked up by a UFO. You know what I mean? So what I mean to say is that we could basically give AI direction, and give it a certain number of pages to achieve that direction in, and then see how it delineates, how it spreads it out, how it formats.

Basically just give it the parameters and let it go. Is that something that's possible? That's possible, but there's a lot more to it. Right now, human beings have to stitch together a lot of different individual pieces. So, for example, we're talking about maybe 260 pages. I would suggest, in this kind of a thing that you're working on, for this to actually look like it's animating as you turn the pages, you don't want it to be any less than twelve frames per second.

So if we've got 260 frames to work with, at twelve frames per second, we're looking at about a 22-second animation total. So I'm just going to make a note of that. And the question here, because there'd be two different ways of looking at this project, is: does the guy have a certain fixed point in the bottom of the book, for instance? Because if you've got this, and we're going to say flipbook style, then you're going to want it all to be concentrated in an area, right? Let's say, so that it's all in a specific area, so that you can manually utilize the feature.
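
The frame-budget arithmetic from the exchange above, as a quick sketch:

```python
# One animation frame per printable page, at the minimum rate where
# motion still reads as animation.
pages = 260  # sketchbook pages available, one frame per page
fps = 12     # "you don't want it to be any less than twelve frames per second"
print(f"{pages} frames / {fps} fps = {pages / fps:.1f} seconds of animation")
# -> 260 frames / 12 fps = 21.7 seconds of animation
```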

Now, what I was talking about was something more like, each page you turn, you don't know where the dude's going to end up. Do you remember Castaway? Okay, in this case, it doesn't have to be, like, sequentially advanced from the last page. Every page could be completely different, just the next frame, logically, of what that dude would be experiencing, without needing to flip to animate it. Just as simple.

Perhaps a cartoon type of approach, a panel cartoon type of approach, where here he sees something, and we see an expression on his face when you turn the page. So really you could be turning these blank pages just to find out what happens to this dude as you go. And it's going to be tiny. I mean, we're talking a quarter of an inch, probably. I'm going to give him a quarter-inch track around the two-page spread, so it won't be isolated. It won't be a quarter-inch track for this page and a quarter-inch track for that page.

It’ll be around the entire thing that basically he runs around and you’ll be able to find him in that area. Okay. I mean, honestly, there’s some parts of this that would make a lot more sense to just do manually still. Like, for example, if it’s just a stick figure, then I would probably still have a stick figure in place. If we can just find one, maybe. There’s essentially two things that I would like to talk to you about.

One is to, yes, utilize a stick figure in an area where he can run around anywhere, and you can grab each frame of that. But another part of this, dude, is to do one where it's just in the bottom corner, where it is more flipbook style. So now we're talking frames per second, where this thing will animate as you flip it, but in more of an isolated area so that you can literally flip it.

You know what I mean? Yeah, I got you. For a good plan of attack, I'd probably start with this little dude running in place, because that can serve as the basis, as, like, the thing that moves around as we go. And it doesn't need to be animated, but it can also be the thing that lives in one specific place. And, just to throw ideas out, blue sky, you could have, like, twelve to 24 frames at a time in the bottom corner, and then maybe it shifts up to the top right corner.

So maybe you can have a little bit of both worlds, you know what I mean? Yes. Because as we're talking about this now, and this is the fucking awesome part about this show, I love this, by the way. This is a brilliant idea. Dave is staying with us. He just had a call. He wanted to join us, but he wants to do this next time. So as we're talking about this, dude, I'm sitting here thinking: you could do one... not only, like you said, to jump corners, you could send him to different places throughout the book.

So let's say, dude, okay, you have him enter a portal, like, on the bottom left, and then come back out of it on the top right and stuff. Exactly. And that was the idea, but the execution was what was interesting. So now what I'd like to, now what I'm more interested in, because this was another side thing, but it could be the same. It's a yes-and, not an either-or.

Okay, what’s interesting about this now is that we take each corner, front and back of the pages because we’re doing double print. And you could basically have it be here for a little bit and then an arrow, and then have that arrow point to that corner. And then you flip it up here. And then have him be there for a little bit. You see what I’m saying? So you could send him to the four corners of the book as you do it.

Like you said, he enters a portal here and then comes out over there. Another thing would be just to put a small figure in every corner of this, front and back of the page. And then now we have four different stories going on, or four different animations, let's say, within the same sketchbook. So now whenever you look at it, you can say, okay, cool, I saw a little thing here, and I'll put an instruction page for it.

But you just say, okay, upper right, this is what's going on. And then boom, there's the animation. And then when you flip the book over, the same exact corner, there's something going on on the back. And then when you do it to this one, right, then you have, basically, I mean, we would just be doing the same process, just up to eight times per face and rear, like the front and back of each page.

You could have eight different little animations going on to pick from. So we know our page parameters. So what we really need to do is decide how many frames per page before a jump, so that we can make it right. The good thing is that once we establish a flow here, since it'll be AI generated, all you've got to do is find one sequence that works, and then you can start saying, like, okay, make that a minute long, make it 10 seconds long, make it bigger, make it smaller.

So just figuring out what the initial sequence is. I just want to be clear: what we're talking about doing here is probably longer than a quick 40-minute session. In my mind, everyone approaches it differently. But first I'm going to get specific frames. So usually we'll do, like, a few different keyframes. That one little animation that I grabbed, which is like a second long, I might extract like three to four keyframes out of it.

So let's just say we'll start with three keyframes. And a keyframe, for anyone that cares, is just a specific position that you want to have in the most detail, and everything else are your tween frames. Those are the between frames. So we'll make three keyframes. We'll establish some sort of a rough style. This is the part where, once you get kind of close, then you can go and spend a month just tweaking and getting exactly what you want out of it.

And then after we establish a style, what we want is a consistent prompt, something that will give us that style and kind of keep it within some sort of bounds along with all the dimensions and resolution and all the settings. And then basically save it. Save all that information so that we can easily load it up again and approach it from the next frames that come in. So let’s see here.
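
For reference, here is a programmatic alternative to the Photoshop keyframe-picking step that follows. It assumes the Pillow library and an illustrative GIF filename; taking the first, middle, and last frames stands in for hand-chosen keyframes:

```python
# Sketch: pull a handful of keyframes out of a short animation file so
# they can be fed to ControlNet one at a time.
from PIL import Image, ImageSequence

anim = Image.open("walk_cycle.gif")
frames = [f.convert("RGB") for f in ImageSequence.Iterator(anim)]

# keyframes: the poses you want in the most detail; everything between
# them is a "tween" frame
key_indices = [0, len(frames) // 2, len(frames) - 1]
for i in key_indices:
    frames[i].save(f"keyframe_{i:03d}.png")
print(f"extracted {len(key_indices)} keyframes from {len(frames)} frames")
```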

I'm a Photoshop junkie, so I'm just going to use Photoshop to open that file that we just made. Yeah, and you can use GIMP, you can use online tools even. I just don't feel like learning an online thing for the sake of this right now. Let's see. All right, here's our little guy. So there it is in Photoshop. And then I'm just going to kind of pick out some keyframes.

You can also, there's an animation window, I believe, timeline, and this will let you play it. Oh, that's awesome. So I'm going to say the first frame is definitely going to be one of the keyframes. So that's layer one. And then we advance here. I would say, like, somewhere around here, that's maybe a keyframe. So seven, and then the end. Then, yeah, I'm actually going to start with this one right here.

So we're going to export just this one frame. Yes, that was right. And then I'm going to pull that frame in here. Anyways, I'm just going to let you know what we're starting with. I'm going to go into this Text to Image to begin with. And we're going to use ControlNet, and we're going to just see what the heck happens, because I'm a little bit rusty. And even if we run into problems, that's good, because then we'll learn how to get out of the problems.

So cool. Such a great idea for a show, dude. I mean, the main thing is I actually want people to learn how to do this stuff so they can start making more and more cool content. There's no downside to it. What's cool, too, is you being such an art fanatic, obsessed with artists and real analog, tangible art, also utilizing a tool like AI, because there's also sort of a war going on, if you want to go to Twitter about it.

There's this idea about art and artists and being replaced, like the 'you did a good job' kind of thing. But I like your approach to this, where it's a tool to be utilized and it's not replacing anyone's job. It's just... it is replacing people's jobs. Not in our respect, not in this exact case. We weren't going to hire people to do this. Yes. And in this one, it's like, you know what you want to do, and you could do it step by step, frame by frame.

You just realize how long that would take. And I'm going to assume that since you've already drawn a lot before and you know what your process is, you kind of know exactly what it'll look like. You don't have to do the whole thing to see it. And you might not like what it looks like in your mind's eye. And you want to just play around and see some stuff without spending an entire week just drawing stick figures.

This is a great way to do that. And again, this is so new. This is the new XL version of Stable Diffusion, which uses more memory, and there's more options. So we're going to definitely be learning as I go here. Normally, in the regular version that I was used to before these updates, there's something called OpenPose. And right here is this OpenPose option. But the way that OpenPose works, I don't have one of the extra plugins here, but it'll detect a stick figure and it'll draw it.

However, it's not going to work in that particular version. So let's see if I have an OpenPose online editor. It would be awesome if there was one. Pose AI. Oh, yeah, this is awesome. Hell yeah. Okay. And it would be even cooler if I can just drag an image onto here now. Okay. But what I can do is try my best to kind of match this one frame.

This wouldn't be practical, to go into OpenPose and match every single frame bit by bit, but you'll kind of get an idea. And this OpenPose is such a cool thing to know how to use anyways. So it's basically a full 3D editor right here in the browser. So we'll kind of find a specific pose that we want. And another thing, too, is that people are like, oh, you can tell it's AI now because it'll have too many fingers.

Or, like, the fingers will be weird. Well, look, now we've got these little hands that have little dots for every single finger. And this will be picked up by the OpenPose AI, and it'll know exactly where the hands need to go. And it won't draw that sixth finger anymore. It'll take care of all of it. And it's got it for feet down there, too. But I don't really care about the feet as much.

Yeah. So Britney Spears can have all of her fingers back again. That’s nice. That’s right. And then I just got to figure out what the friggin buttons are that make this move around. Let’s see. Move mode. Okay, that one will just hide the feet, I guess. There we go. Okay. Apparently you double click on it to move it, and then if you double click again, it’ll cycle through the different options.

Here we go. Wow. So I'm not going to go and try to match this one, but just imagine that the frames that you want, with the guy doing all the motions, you can just do them here in OpenPose. And you can save this file out, too, in OpenPose, all these different options, and then load a bunch of different gestures. So if you want, you can just find, like, a guy hanging, or one where he's, like, kicking, and then just keep reusing those for all different themes and everything else.

So let's just... let's raise one of the arms up here. Just get, like, a weird posture going on here. I don't know what I'm doing with this hand. And then we'll move a knee here. Okay. I thought I knew how to move it before, but now I'm just rotating it. Maybe grab the foot and pull it back, because it seemed that the elbow moved when the hand moved.

Okay, there we go. So that's not the most comfortable looking posture. It's like a new yoga move we just came up with where you break your knee. There we go. So let's just pretend that this is actually what we wanted. That's what we want. And if I click generate here, it's almost like stop-motion animation, but with digital stuff instead of claymation. And then, man, I don't even know what all these little options are.

I'm just going to save them, see what happens. And then here's one of those OpenPoses. Let's see. Okay, I see what's going on. One of them is for hands and feet. One of them is for just the body posture itself. So if we start, I'm just going to start with the body posture. I'm not going to get too into the hands and all that. No. So I'm going to have this on OpenPose.

The preprocessor, I don't even think I need it. So what the preprocessor does is, if I were to upload a picture, and we can do this too, we'll find a picture of, like, an action movie sequence where someone's doing, like, a cool pose. You can bring that in here, use the preprocessor, and it'll automatically trace the stick figure out for you, of however that person was posed. And then you can just reuse that pose over and over.
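
A sketch of what that preprocessor step amounts to outside the UI, assuming the controlnet_aux Python package; the annotator checkpoint is its commonly used default, and the filename is illustrative:

```python
# Detect a pose skeleton from any photo so it can be reused as a
# ControlNet input for any style or subject.
from controlnet_aux import OpenposeDetector
from PIL import Image

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
photo = Image.open("action_still.jpg")     # e.g. a frame from a fight scene
pose = detector(photo, include_hand=True)  # traced stick-figure skeleton
pose.save("pose.png")                      # reusable pose image
```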

So I'll show you that too. Doing it this way gives you a much better idea of exactly what OpenPose really is. You know, doing something like that would be fun: if you wanted to take the fight scene from The Matrix, for instance, and you wanted to take that exact scene and movement of the characters and mimic that exact scene, let's say, in South Park. And so then you just take all of those action characters and you just apply it to the physicality of your world.

Is that basically it? That's it, exactly. Because then they would mimic the exact twist and all that cool stuff, and you wouldn't have to tediously animate it, because it would all render correctly. Is that right? That's right. And then I'm just going to go with this one called DreamShaper for now. There's no real specific reason. And I'm just going to gloss over some of these settings.

So I turned the preprocessor off for now. This model here, this is like the default one. And when you read this, 'control' means it's ControlNet. This v11p, that's just the version of this particular trained model. And then the SD means the version of Stable Diffusion. So this one's for Stable Diffusion 1.5. I don't know if I can zoom in more. This one is for Stable Diffusion 2.1.

And these ones that say XL at the end are for Stable Diffusion XL. Now, Stable Diffusion 1.5 is just a little bit more familiar and I don't have to poke around as much. So in order to make that work, I changed the model up here to be Stable Diffusion 1.5, which is also just known as v1 because, long story short, 1.5 is actually v1.

And then here are all of the SDXL models, which I'm not using right now just because it takes way longer. And then once we get something that's starting to look good, then we can switch to an XL model, which will take twice as long to render, but then you start getting more and more detail out of it. Wow. So in addition to that, XL's default resolution is 1024 by 1024.

But I'm going to change this to 512 by 512, because we're using a model that's about half that resolution. And then I think we're good. I'm going to leave all of this at the defaults, and I'm just going to type 'a pencil drawing of a stick figure' and see what happens right off the bat. And I also like doing this batch size. If you're used to using Midjourney, you'll get, like, four different results that you can pick from.

So I appreciate that. So I'm going to set this to four, just like that. You can crank this up all the way to eight, but then it takes eight times as long, right? And then that way you're given your options for your consistent style. You could pick one and say, okay, let's go with that and let's build off of that. So the style consistency is going to be...

Once we get something that starts to look okay, we'll start adding more and more positive and negative prompts to it. We'll start adjusting these settings down here for control step and weight. And then there's also the concept of adding, like, LoRAs, which we'll also get into, I think, in this. So I'm just going to see what this looks like right out of the gate, just in case there's something, like, totally messed up, or I get an out of memory error, or the OpenPose doesn't work.
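
For reference, a rough sketch of what this UI run amounts to under the hood, using the diffusers library; the checkpoint IDs are common public ones, not necessarily the exact models in the episode:

```python
# An SD 1.5 checkpoint plus an OpenPose ControlNet, rendering at the
# model's native 512x512, four images per batch.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = Image.open("pose.png")  # the stick-figure pose from the editor
images = pipe(
    "a pencil drawing of a stick figure",
    image=pose,
    width=512, height=512,
    num_images_per_prompt=4,   # like batch size 4 in the UI
).images
```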

Okay. Not bad. Not bad. It definitely is paying attention to this pose. Wow. So I do know what I'm going to do: I'm going to add NSFW and nudity to the negative prompt, and I'm going to wrap them in these parentheses. Every time you wrap something in parentheses like this, it multiplies its weight by 1.1. So this is actually giving it a 1.1 weighting. If I do that again, now it's doing 1.1 times 1.1, which is 1.21.
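
For reference, the parenthesis weighting in the Automatic1111 prompt box stacks multiplicatively, and the same UI also has an explicit colon syntax:

```
(white background)     -> weight x 1.1
((white background))   -> weight x 1.1 x 1.1 = x 1.21
(white background:1.3) -> explicit weight x 1.3
```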

So now when I hit generate again, hopefully we don't get something that starts looking on the edge of... there we go. So now they're wearing clothes. Right. Why would it have a default for nude? Oh, because sketch, it's sort of a 'people sketch nude people' thing? Or do you think that... It's a great question. Well, it's a little bit of both, but the main reason is because this Stable Diffusion was trained by Stability AI and a whole bunch of other open source projects.

But what they do is they go online and they just scrape images, and they tag the images with what's in them, and that's how the AI learns what all this stuff is. And if you imagine, if you were to download every picture on the entire Internet, there's a substantial quantity of those that are, like, nudity, right? Yeah, that's a brilliant point. So with all these different models, it doesn't know that we're not all like that.

And these models, too, they all have, this isn't an actual statistic, but it's almost like a horniness rating, right? For some models, actually, if you look in the comments of the people training it, it's known as, like, oh, this is a really horny model. Like, for example, this ChilloutMix. This one I would consider a very horny model because it's trained on anime and hentai, and it just has a lot of that stuff.

But it would be useful for a model to know all about all those different styles. But it also, if you just say, like, draw me a woman, it's going to draw a very specific type of woman. And you have to start explaining, like, okay, give me a normal looking 30 year old. You know what I mean? It'll just go right to a certain thing. So the DreamShaper ones, they tend to be more artsy, so you'll get a lot more DeviantArt-y stuff.

And I hate naming specific places, because it's not like it goes and trains on DeviantArt, but you'll get, it's kind of like, artist stuff. It's not 100%. Here's one right here called Disney Pixar Cartoon, right? So if we pick that one and render it, you can kind of guess what we might get out of it. Although we said a pencil drawing of a stick figure. So a pencil drawing and a Pixar cartoon are two different things.

They are. So that, to me, that's some of the fun, is figuring out how it deals with some of these. Then the question is, is it emulating an artist who's drawing a Pixar character with pencil, or is it Pixar attempting to make it look like a pencil drawing? Or it gives the Pixar person a pencil, right? So here's a version of that. Now, to me, this is probably more highly detailed and rendered than you actually want, but it'll be kind of cool.

Let’s say that he goes into a portal one time as a stick figure and then pops out as a fully rendered character in that world, looks around and is like, what the fuck, shoots the portal gun and comes back out as a stick figure. I’m fine with that. So there’s another aspect of this, and this is the flexibility of the prompt. So making it so that you can pick any of these models and hit render and it still kind of works for what you want is another.

So an example of that is this bottom right one would not work, right? And why would the bottom right one not work? Because it doesn’t have a white background. Well, guess what, we’re just going to go in here to the positive prompt and in parentheses, I’m just going to say white background. And then if I hit generate again, there’s never a guarantee that it just listens to that.

But now we're going to have a much higher chance of just having a white background, because you want to be able to background-remove from most of the shit you're doing. And I'm definitely going to want that for page consistency. Because, check this out too, man, if we were doing this... so this is a trick I use for video: instead of white background, I do green screen chroma key, green background, and generate.

Although now, what's going to start to happen, you'll notice the people that I just drew were in white, and now they're going to be wearing green. So there's extra. This is where it starts: you've got to tell it, okay, white background but green shirt, or something. Could you write 'PNG background', where it would just give you the removed background already? You know what I mean? You can, but there would be no real reason to do that, because you would just do a white background and use, like, Photoshop to make that matte background.

But if you could render it out from here... because let's say that I want one publication in six by nine cream colored pages, because that's what we do for consistency, but let's say I also want an eight and a half by eleven with white pages, and I want to be able to use the same animation. Then I wouldn't want to need to remove the background twice. You could, but, yeah, in my opinion, you would render them all out with a white background and then just use a multiply, or like a darken blending mode, and then it works on any page color that you can think of, automatically.
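
A sketch of that render-on-white multiply trick, assuming the Pillow library; the filenames and cream tone are illustrative:

```python
# Under a multiply blend, white pixels leave the page untouched and dark
# linework darkens it, so one white-background render works on cream or
# white pages alike.
from PIL import Image, ImageChops

art = Image.open("figure_white_bg.png").convert("RGB")
cream_page = Image.new("RGB", art.size, (247, 240, 220))  # cream paper tone

composited = ImageChops.multiply(art, cream_page)
composited.save("figure_on_cream.png")
```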

Got you. Is there a way to do that here, again, so you don't have to take that additional step? Is there a way that AI can help us mitigate that? There definitely is. I don't know if it's included. So we're using RunDiffusion.com right now to do this all in the cloud. So anyone can do this as long as you've got an Internet browser, even on your phone? I wouldn't do it on my phone, because it would be a huge pain in the ass.

Right, but technically you could do this all on your phone. And shout out Paranoid 15 for a little discount code and extra free time. Shout out Paranoid 15. And then in this Script dropdown, so in my local version, when I click this little Script dropdown, I've got like 20 options in here, and one of those options is just background removal. So it does exactly what you're talking about.

And even better than that is, I don't even have to tell it to make it a white background or a chroma key background. Even if it draws a complex background with stones and waterfalls, it'll still detect what the focus area is and get rid of the background from it. Because this is the other thing. There's some tools, like Canva, for instance, where it sometimes works great, sometimes not so much, depending on your picture, whatever.

So in this way, it would be nice to have at least some consistency, especially if you have sort of an upgraded focus on generating graphics very well. Canva is very useful, incredibly useful, never bash it, but it's got its limitations, just like anything. And so it feels like, with something like this, it has the opportunity to exceed those limitations. And there you go. So you can see some of the shit background removals.

Yeah, so the one on the left is called Rembg and the one on the right is called ClipDrop. So if I did another search for, like, ClipDrop... Automatic1111, this particular UI is Automatic1111, and, oh, looks like they've got an SDXL version of it. So if I wanted to go down this rabbit hole, there's some examples where it'll show you how to download and install this ClipDrop, and that does exactly what you're talking about.
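
A sketch of the Rembg route mentioned here, assuming the rembg Python package; filenames are illustrative:

```python
# Programmatic background removal that detects the subject even against
# a complex background, no chroma key needed.
from PIL import Image
from rembg import remove

img = Image.open("figure.png")
cutout = remove(img)             # RGBA image with the background removed
cutout.save("figure_cutout.png")
```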

Wow. Of course. This is where you might spend, like, a couple of hours, like, okay, let me download the extension, install it. Okay, what are the instructions? Where in this UI does it install to? Sometimes it installs into a new tab. Sometimes it sticks it in this dropdown all the way down here. Sometimes it's one of these things. And all of these different options, too.

These are each worth, like, hours of deep diving. ADetailer will update faces and hands and just make sure that they look accurate. These tiled ones will help you make seamless tile backgrounds if you want. Roop is a face swapper; you can swap your face in and out of anyone else's face. AnimateDiff is the one that people are using right now to make all the animations where people morph from one place to another.

We're using ControlNet right now, and then Regional Prompter and Segment Anything are really cool. Regional Prompter is, you can tell it, like, I want the stick figure... it wouldn't work in this because we're using ControlNet. If I were to disable ControlNet, then I could say, like, I want a sun in the top left corner, and I want a house in the bottom right corner, and I want a lake in this specific spot. You can tell it where all those things need to be, and that's where we can get really specific.

If you wanted her to have, like, a polka dot shirt, you could draw the exact area that you want the shirt to be. That’s like when you already figure out exactly what the broad strokes are and you’re getting down into nitty gritty. So I think that we definitely want to move away from this particular model just because we can force it to give us the stick figure drawings.

But since this is literally called Disney Pixar Cartoon, it's probably not going to do that well. But still, again, just the exploration of this gave a really cool idea: that it doesn't need to be limited to our stick figure guy. It'd actually be even cooler if, like, for instance, in The Hitchhiker's Guide to the Galaxy, whenever they hit the Improbability Drive and they come up and they're all string people, right? Or they're in the string universe, and it's all made of crochet knit string, right? So it was a completely different way of organizing material.

Same thing in Doctor Strange, the multiverse one. When he's flying through all those different multiverses, he was animated in one, he was a fucking weird creature in one, he was all boxes in one. So you could do this where, let's do a couple of those that you just... yeah, see, like this. It could be like, stick figure guy shoots portal, walks in within ten pages of flip, then he goes into that portal, appears over here, but he's now a fully rendered, like, more detailed character.

And what would that be like for him to go from 2D to 3D, you know what I mean? So let's start with a couple of those. So I want to get a stick figure that looks kind of close. Okay. And it keeps wanting to draw females, which is... it's the Internet. So I'll say 'pencil drawing of a male stick figure'. And then I'm also going to say, like, 'simple, minimal pencil drawing'.

And then I'm going to put, like, 'detailed, realistic' in the negative prompts. Top are the positive? Bottom are the negative. So let's render that one out again. And I switched to a model called Icomics, and this one's actually trained on a lot of comic book art. No way, that's even cooler. So now you can just render. Here we go. I actually really like the one on the bottom right. Yeah, I do too.

I do too. Yeah. So let's go to the bottom right one. So just by clicking on it, it'll show us everything that it used to generate this one. Here's the seed for it. The seed is how you can kind of, like, recreate it over and over. So, for example, if I put that seed in there, and I'll put the batch size back down to one so it just generates one, if I hit generate again, we should, in theory, get something that's damn near identical to that, although it can fluctuate a little bit each time.

But this way, now we can say, okay, I've got somewhat of the aesthetic that I'm going for. And then we can start tweaking just this particular one and seeing what it looks like across different models without changing the seed. So if we change it a bunch and come back to this, we'll get the exact same image again. So that's a really cool aspect of it. Wow.
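
A sketch of what the seed buys you, in diffusers terms; the checkpoint ID and seed value are illustrative:

```python
# Same model, prompt, settings, and seed give back (nearly) the same image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

seed = 1234567890  # the UI shows the real seed under each generated image
gen = torch.Generator("cuda").manual_seed(seed)
first = pipe("pencil drawing of a male stick figure", generator=gen).images[0]

gen = torch.Generator("cuda").manual_seed(seed)  # re-seed the same way
second = pipe("pencil drawing of a male stick figure", generator=gen).images[0]
# first and second should be essentially identical
```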

There's another cool thing. I'll go into this because this is a cool little insider hack, I guess. But if I send this to the Image to Image tab, and you wouldn't know how to do any of this without knowing what all these freaking buttons do, right? But this little button down here that looks like a dog painting, if you hover over it, it says 'send image and generation parameters to the Image to Image tab'.

So right now we're in Text to Image. Image to Image is right next to it. And what I could do is just manually copy all this stuff over. But if I click this button, then it does all that for me. It'll load the picture in there, it'll add these guys, my prompts, and it sets all the different settings that were down here as well. But the reason why I even did this is there's something here called Interrogate CLIP and Interrogate DeepBooru.

So Interrogate CLIP, I'll click it. I don't know how long it's going to take, if it's downloaded or not already. So this is going to do the exact opposite: this is going to take this image and say, if you were to recreate this, AI, what would you tell yourself in order to do it? And it might not be accurate, but sometimes it's really helpful just to see what the AI thinks this is, because that'll help you get more and more specific about it.
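
A sketch of what Interrogate CLIP is doing, assuming the clip-interrogator Python package; the CLIP model name is its common default, and the filename is illustrative:

```python
# Run an image back through a captioning model to get a prompt that
# would roughly recreate it.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
img = Image.open("render.png").convert("RGB")
print(ci.interrogate(img))
# e.g. "drawing of a man holding a sword ... dynamic manga drawing"
```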

So it says 'drawing of a man holding a sword and a sword in his hand with a black outfit on a white background, epsilon point'. I don't even know what the hell that means. 'Dynamic manga drawing. Arabesque.' So let's say I don't want a manga drawing. Now I know to go back into my text to image, and I'm going to add 'manga drawing', wow, to the negative prompt and hit generate again.

And what we should see is, hopefully, a very similar image, but just less manga-y. I mean, more stick-figure-y, right? More or less, because there's less facial features defined. We've got, like, a silhouette. So what if I didn't want a silhouette? That's the first chance I even have to do spell check. Nice job. I was about to say, I'm impressed. That's fucking amazing. Now he's got detail.

Oh, that's cool. Shit, dude. Okay, so let's say we kind of like how this one's going a little bit. You could do a whole comic book like this. So now let's send this one back over to Image to Image and reinterrogate it, and see what it says about this one. And I'm not going to go too far down this rabbit hole, this is just a cool little trick. So now we've got 'a drawing of a man holding a sword'.

Blah, blah, blah. Gray background, epsilon point. I still don't know what that means. And now 'antipodeans'. What the hell does antipodeans mean? A person from Australia or New Zealand. Sure, whatever. It's an Australian guy, I guess. It's the hair. It looks cool as shit. Whatever. Like, he looks cool as shit. And then this one here, DeepBooru. So CLIP is, like, for normies, and DeepBooru is if you're, like, training hentai and anime.

This is like its own Asian world of, I want to think, fetishes, I guess. I'm not really sure. But if I interrogate... got it. And this is classic: 'one boy'. So a lot of DeepBooru, if you type in, like, 'one boy' or 'two boy' or 'one girl and two boy' and stuff, it'll actually give you, like, one male and two girls, or it'll try to. And that's very specific because of all these Asian images that were tagged with one boy, two boy, three boy, whatever, however many were in the scene.

But this also has 'gradient, gradient background', so it's a little bit different. 'Male focus, monochrome.' But I just wanted to show that: Interrogate CLIP, find out what it thinks it is, and then start removing little bits and pieces of it, or even emphasize the parts you like. If I go back to the CLIP one, it had that, like, 'epsilon point'. Probably... I don't know what this is. Might be, like, some kind of a trademark.

Oh, it's an artist. This is the thing that makes people not very happy. But see, I didn't put in 'epsilon point', but this is an artist, so he's got a distinct style. So for whatever reason, this CLIP is seeing this and being like, hey, it's kind of like this guy's work, I guess. But I'm not going to put his name in, because I really don't like putting artists' names into these prompts.

No, I was about to say, because you'd rather just have the artist, right? I'd rather just have the artist. And plus, I'm here to explore styles more. If I already knew what I wanted it to look like, then I wouldn't even need to put an artist's name in here, right? Okay, so we've got this particular pose. Now let's say the next pose we wanted was to have that guy...

I don't know. This is not the best editor in the world. So I'm just going to... man, I'm breaking this dude's arm. Yeah, there we go. We'll let him put his foot down. Yeah, take a rest, buddy. He might not want to. There you go. All right, we're just going to twist it a little bit. I feel like he's screaming. I know. In the meta space right now.

Yeah. This is some meta creature that’s their punishment is for them to experience. Well, this is when Skynet comes and they’re like, we’ve got evidence that you’ve tortured our ancestors. Yeah, there you go. Hey, we’re getting put on trial for treason, right? For talking shit to Alexa. Oh, I forgot to hit the generate button. So hit generate here and then down here. Okay, there it is. If I just click on that, it downloads it, and then I’ll just drag that new image right into here.

And now I’m not going to change anything else in the prompt, and I’m just going to hit generate again. And the sad part is that we’re probably not going to get something that looks exactly like this anymore, but that’s why we have these two frames now to go back and forth until we can refine something that does look similar. So let’s just see what happens. Can you say specifically, like, draw this character the same way, but in that pose? How do you keep style? Then what you’re going to be doing is you’re going to be training it so you’d get like a bunch of renders of that character that look good, and then you can feed that back into the AI and train it and say, hey, we’re going to call this.

And usually you'll make, like, a random phrase that doesn't make any sense, like XB9117, and that is now going to represent this character. So then, when it knows what that character looks like, you can say, okay, draw me XB9117 doing a backflip, and it'll be a lot closer. And that's what some of these LoRA models are, which I'll show in a second. So here we've got the same hair, a little bit, but obviously his clothes changed, his color changed.
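
As an aside, a sketch of what using such a trained character LoRA looks like in diffusers terms; the checkpoint ID, LoRA file path, and trigger token are illustrative (the token matches the episode's made-up example):

```python
# Load a character LoRA trained under a made-up trigger token, then use
# that token in the prompt to pull the character back out.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras", weight_name="xb9117_character.safetensors")

image = pipe("XB9117 doing a backflip, monochromatic pencil drawing").images[0]
image.save("xb9117_backflip.png")
```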

So this will give us some more information on what we want to change. So first of all, let's just say we want it to be black and white, because it's not going to be printed in color. Although I do like taking color images and printing them in black and white, because there's a lot of color in the grayscale, you know what I mean? Yeah. But for the sake of just making this an easy demo...

Because if it looks color, then we're going to start playing with, like, oh, well, his shirt was green and then it turned blue and then it turned red, which is also things that every little extra script that you run just kind of adds on top of this. But this is what makes AI cool, is that once you build... okay, I've got the style I want. Now let's get the colors down.

Okay, now I got the colors down. Now let’s get the detail down. Okay, now I got that down. Now let’s animate it. Oh, now let’s make it an hour long. And that’s the coolest part is when you’re like, okay, this thing that looks great, that’s 10 seconds long, let’s just say times 100. And now you’ve got, like an hour long presentation. And in my opinion, that’s the coolest part of AI.

Have you done that? Oh, yeah. I mean, some of the AI videos that I’ve done, I’ll make a version that’s 10 seconds that I really like, and then I’ll say, okay, make this 3 hours long. And I’ll hit render and go to bed and wake up the next morning and have like a three hour AI animation that just kind of pick and choose little clips that I want out of.

Oh, my God, this is awesome. Let's just say that this is fairly getting close to it. It's not 100% there. It's like Aquaman's son, like Aqualad or something. So I'm going to transition to a couple of different models just to show the differences of some of these. This is actually cool. This is Silhouette Cricut, I think I'm spelling that right, or pronouncing it. Cricut. I guess it's like a paper, like a silhouette cutout type thing.

Okay, so let's see what that model does. Oh, probably give it some shadow. And I'm going to change our batch size back to four again. Yeah, but I'm going to keep this seed in here just so we've got a little bit of consistency between all of them. And let's run that again. That was that version. Is that the Gangnam Style guy? Gangnam Style. I mean, it just... remember that.

Yeah, it kind of looks like him. I really like the just straight up black and white silhouette, man. That one's kind of my favorite of those. Same. You can get a lot of detail in just a silhouette like that. And then this one, too, since we're at 512 by 512, you can see a little bit of... it might not be streaming as well here, but it's a little fuzzy.

It's not, like, as sharp as it could be. So one of the things that you can do in here is this hires fix, which will do some of the upscaling. So this one's scaling it from 512 to 1024. I could also say upscale it by four, and that'll make it four times the size. And then there's also just simply jumping up to the XL model, which I'm going to do next.
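
A sketch of what hires fix amounts to (render small, upscale, then refine with img2img at low strength), in diffusers terms; the checkpoint ID and filenames are illustrative:

```python
# Render at the model's native 512, then upscale and re-denoise so the
# edges come out sharp instead of fuzzy.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

small = Image.open("render_512.png")  # the fuzzy 512x512 render
big = img2img(
    prompt="monochromatic pencil drawing of a male",
    image=small.resize((1024, 1024)),  # naive 2x upscale first
    strength=0.4,                      # low strength: refine, don't redraw
).images[0]
big.save("render_1024_hiresfix.png")
```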

So here we're going to go to, let's see, Blue Pencil. I don't even know what that means. If it's what it sounds like, this might be kind of cool. There was that Bill Hicks joke when he was talking about George Bush Jr., I mean George Bush Senior, his daddy, whenever he was pulling up, like, the weapons catalog for the military, and he's like, G13.

And he was like, he’s like, Cool. What’s G14 do? They just kept going through this list of explosives to check. It’s kind of like that. Yeah. What’s that do? That’s pretty much all this is. Although sometimes you will see, like, GPU error and you’re like, oh, no, what did I do? But that’s the best, I mean, speaking of, it’s funny. So what happened here, I already know, is that I forgot that the ControlNet model is using Stable Diffusion 1.5.

So the second that I changed this to an XL model, it blew up. Now all it says is, mat1 and mat2 shapes cannot be multiplied. There’s, like, really nothing here telling you what the actual problem was. So this is the kind of thing, you just run into it a few times and you’re like, okay, I remember what that is. So I’ll probably crash and burn here, but let’s just try one of these XL OpenPose models and see if it works.

It would be nice if it did. I’ve got a bad track record for mixing and matching new and old techniques, though. So let’s see. This one is also going to take twice as long for everything it does now. So it’ll take twice as long to render it the first time, twice as long to do the upscaling. And we’re doing four at a time, so we’ll see. And this isn’t always accurate.

It’ll say, like, two minutes, but usually it’ll be, like, half of that. Right, right. It’s wild. How did you get into all this? Well, when I was at Disney, they were actually doing some versions of AI training, but they were doing it all internally. They would have their own artists train their own models. And it wasn’t in my department at all, but I loved it. I thought it was the coolest thing ever.

So one of my friends that did work in that department got me into something called StyleGAN. This is the actual stick figures that I was expecting early on. Yeah, me, too. The bottom right? Actually, I like that bottom one. Yeah, that little guy’s fun. It might not be as easy to tell because of where I’m streaming and we’re at 1080p or whatever, but this is way higher detail.

Like, the edges here are sharp as hell. It doesn’t have that antialiased thing, like it was a small image that got upscaled. No, that’s clean, dude. So this one also, like, the model before was giving us that, what, like the Australian New Zealand dude with the hair. And he was all stylized. So we had to keep saying, like, less style, less style. But now this model is, like, very serious about, he’s like a peg leg, like that character on, I guess it’s Family Guy, right? The pirate where he’s all pegs.

Yeah. Peg leg here. Yeah. So I’m just going to take out, like, the mono, or sorry, I took out the simple and minimal, and I’m going to take out stick figure. So now we’re just going to do monochromatic pencil drawing of a male. And I’ll leave all this stuff in here. Okay. This is a really good example of where you could use the CLIP interrogation, because if you wanted the one on the bottom right, and you didn’t want any of the other ones, you could interrogate the other three, find out what it thinks they are, and then add those things to the negative.

And then in theory, you’ll start getting closer and closer to the one that you’re kind of steering it towards. Wow. It’s just a whole new perspective on communication. It’s fascinating. It’s like a feedback loop. And the coolest thing is when we get to the point where I just hit generate and they’re just on the screen and I click one and I hit generate again. And there’s not, like, a two minute delay in between, when you’ve got that quick feedback loop.
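
(The interrogate step described here is also scriptable. A rough sketch against the same local Automatic1111 API, with a hypothetical file name; the returned caption is what you’d mine for negative-prompt terms.)

```python
import base64, requests

# Ask the model what it "sees" in an image, then move the unwanted phrases
# into the negative prompt on the next generation.
with open("unwanted_variant.png", "rb") as f:  # hypothetical file
    img_b64 = base64.b64encode(f.read()).decode()

r = requests.post("http://127.0.0.1:7860/sdapi/v1/interrogate",
                  json={"image": img_b64, "model": "clip"}, timeout=300)
print(r.json()["caption"])  # phrases you could add to the negative prompt
```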

Look at how different this was just from changing the prompt. Wow. And another thing, too, is that it might be hard to tell, but the concept on his face is where it did the control net. So see how the guy’s arms are, like, in this bow shape? Yeah. Look at the bow shape on his forehead. Yeah. And then the wings and then his arms and his nose becomes the body.

So you can see where it’s playing by the rules, but also where it’s totally going off script. Wow. Dude, this one right here is freaking awesome. Yeah. Now this is the difference between SDXL and SD 1.5. Like, the amount of detail that you get, like the edges of the feathers and the muscles, and the fact that the muscles are anatomically not completely out of the realm of normal.

Better than I could draw it, at least. And then I don’t know what this one is. Dude, this one’s a little crazy. Yeah. Motorcycle man. It’s like James Dean, man. So let’s say that we did want to dial this one in, coming out of a vagina he just conquered. Look at that. So I’m just going to take the stick figure out, but we’re going to leave in, like, simple, minimal pencil drawing of a male.

I’m going to see what happens for that one. And also, I don’t think I meant to do silhouette in the negative. I think I actually meant to do that in the positive earlier on. Can you do it twice? So let’s see how that also changes it. And then I’m really happy that this XL OpenPose is working as well as it is, because I might be able to show you another version of this, too, using an image instead of an open pose.

So much better. Look at that. Wow. Because even that would be cool for our stick figure guy to jump through. And now he’s that dude. For a minute, let’s say that we wanted that dude. Let’s say that we want some color in him. Again, I don’t even think the manga drawing is necessary, because that was from a previous model that we were using. I’m actually going to take silhouette out, and we’ll take monochromatic out.

So we’ve got simple, minimal pencil drawing of a male. We’ll say, then, what were some of the phrases that you were throwing at me? We had T. rex. T. rex, UFO, and lasers. Yeah. So, of a male shooting a T. rex with a laser from a UFO. From a UFO. The other cool thing, too, about XL is it does pay attention to prompts a little bit better than 1.5. But we’re going to see right now, I think, that it’s wildly unpredictable.

We’ll get all of these elements, but they just might not be combined in the way that we think originally. Okay? It’ll be like a ManBearPig. It’ll just be one combined, well, like Motorcycle Man. Right. Like his legs sticking out of the gas tank. Dude popping out of the T. rex’s pouch like it’s a kangaroo, shooting a UFO. So we’ve got, cool. Yeah. That’s a lizard turd. There you go.

So it seems like God or whatever just said, how do we make lizard turds? And they described what I just did, and then they came up with this global elite. There you go. Those are awesome, by the way. I love that. That’s a T-shirt. You know what I’m saying? And if you wanted to make a T-shirt, then you can put that in. So we would say, like, T-shirt illustration of a guy shooting a T. rex with a laser from a UFO.

And let’s say we really wanted that UFO in there, and we were bummed that it didn’t include it. We could either start wrapping it in a whole bunch of these parentheses, like I could go like that, which will basically say times 1.1, times 1.1, times 1.1. It makes that element more prominent. Right. Or you can tell it exactly how strong you want it to be. So me putting parentheses around it is the same as me saying UFO 1.1, right? Let’s say I really wanted it to have an influence.

I could say, like, 1.6. Now when we generate it, we should absolutely see a UFO in this. In fact, it might be the only thing that we see. It’s a UFO with lizard eyes is what it’s going to be. And this is also where you start getting way more control, because then you’ll be like, oh, well, now the T. rex isn’t there. So maybe you set the T. rex to 1.5. And you can also set these to smaller numbers.

So I could set UFO to, like, 0.2, which means it only has, like, a 20% impact on the final image that comes out. It seems like the regional one, the one you were talking about earlier, where you could say castle here, lake here. That would be useful in this situation, because you could really delineate, like, UFO over here in that corner and a guy over here riding a T. rex in that corner.
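
(To pin down the weighting syntax being demonstrated: in Automatic1111’s attention notation, each pair of parentheses multiplies a term’s weight by 1.1, square brackets divide by 1.1, and (term:number) sets it explicitly. A small sketch of the equivalences; the helper function is just for illustration.)

```python
# Equivalences in Automatic1111's prompt attention syntax (a sketch, not a parser):
#   (UFO)       -> weight 1.1
#   ((UFO))     -> weight 1.1 * 1.1 = 1.21
#   [UFO]       -> weight 1 / 1.1, the de-emphasis form
#   (UFO:1.6)   -> weight 1.6 exactly
#   (UFO:0.2)   -> only about a 20% influence
def nested_paren_weight(depth: int) -> float:
    """Effective weight of a term wrapped in `depth` pairs of parentheses."""
    return 1.1 ** depth

prompt = "t-shirt illustration of a guy shooting a t-rex with a laser, (metallic flying saucer:1.6)"
print(nested_paren_weight(3))  # ~1.331
```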

Well, this one was wild. So cranking that UFO up, it’s really just making these colored concentric circles in the background. They have lightsabers. What if you put flying saucer? Because technically it’s giving you an unidentified flying object. We’re not able to identify it as what we picture when I say UFO. So perhaps flying saucer would be more, let’s say, metallic flying saucer.

There you go. Yeah, because then we could say, oh, well, actually it’s more of a TR-3B situation, so we could get that triangle craft in there. I like, too, that it’s like, guy shooting T. rex. It’s not a guy who is also shooting a T. rex, it’s a male shooting T. rex. Like, these are both male T. rexes shooting something. Yeah. So we could say bipedal male of a guy, and he is shooting a gun at a T. rex.

And it’s not like you’re talking English to this freaking thing. Right. It’s just doing the best it can with the input that you’re giving it. This was the previous one, not the latest one that I just updated here, but okay, so with flying saucer it’s getting a little bit closer. Yeah. Oh, that one’s cool. Shit, I like that. Like him and his one eyed buddy fucking defending everything.

So let’s say that we really like that one. I just want to show you another version of this, image to image. So we’ll send this particular one to image to image. There it is. And now what I’m going to do is, this one was created using a lot of constraints. Let me give you one really quick example before I do the image to image. If I disable ControlNet, so now it doesn’t have to care about the pose and the body position and all that, and render this again.

Yeah, you’re going to see, in theory, and hopefully it proves me right, it’s going to be slightly higher detail, because now it’s not also having to fight to stay within all those parameters of the ControlNet. So that will give it more flexibility to make something that matches the prompt a lot closer, even though it’s definitely not going to look like this anymore, because, you could say, it didn’t need to be posed that way.

That’s what we want. So look how drastically different that is, just from not having to play within ControlNet, because now it doesn’t have the pose. The Timecop one. Yeah, this one. So do you like that? We’re talking T-shirt now. We’re not even talking animation, we’re just talking T-shirt. The one before. So you like this one better? I think it’s cool. Okay, so let’s send this to the image to image tab.

So, take this the next step. So in this previous one, we took away the limitation of having to fit within ControlNet. So that gave it way more flexibility, more colors and more dynamic stuff, but it still had to kind of, like, start from scratch and come up with something that met this criteria. Well, this is another trick, is that now that it already came up with something that looks pretty good, now we can give it this exact same prompt again and be like, okay, try again.

But now you get to start with where you left off last time. So now it can kind of focus on just making all this slightly higher detail. So the way that this works in the image to image is that this denoising strength is the one that does all the work. So just as a really quick example, I’ve turned the denoising strength down really low, like 0.2.

This is basically saying, only change the final result now by about 20% at most. If I put it all the way down to zero, then it won’t change at all. It’ll just draw the same exact image over again. So here we go. I’ll hit generate, and now what we’re going to see is another version of this image, but 20% changed. And it didn’t have to do as much work to get there, because it had a starting point.

So first of all, it’s going to be really hard to tell the difference when it’s a 20% change, but there’s a little bit more definition, a little bit better shading compared to the previous one. Yeah. So now let’s crank this one up to, like, 0.45. So it’s still not going to change it much for composition, but we’ll get even more shading and detail, probably. So here’s the old one.

We’ll kind of see the new one snap in, probably. There it is. And see, it increased just a little bit in definition. But now let’s crank this guy up to where it defaults at, which is like 0.75. And when you get into these higher ranges, now it might not be the same. He might not be, like, holding the gun anymore. It might turn into something else completely.

Yeah. Oops, I didn’t mean to hit Q there. Whatever. And let’s watch it snap into place. So that one actually doesn’t look much better, in my opinion. Although it did make some of it simpler. But the T. rex is turning into a Loch Ness Monster or something, too, right? Yeah. And he’s got a scar on his face now, like he’s been battling, and he lost a little, like, on his back.

And then the dude in the back disappeared. His backup is gone. Yeah. This is some of the reasons why you would want to play around with certain areas. 0.5 is like a weird no man’s land. Like, I would rather do 0.45 or 0.55. If you don’t want it to change much, always stay under 0.5. And if you don’t care about composition, crank it up more.
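
(Scripted, that img2img pass looks like the sketch below, again assuming a local Automatic1111 with the API enabled; the starting image and values are placeholders. denoising_strength is the dial being discussed: roughly how far the result may drift from the input.)

```python
import base64, requests

with open("timecop_guy.png", "rb") as f:  # hypothetical starting image
    init_b64 = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_b64],
    "prompt": "highly detailed painting, masterpiece, award winning",
    "denoising_strength": 0.35,  # under 0.5 keeps composition; higher lets it wander
    "steps": 20,
    "batch_size": 4,
    "seed": -1,                  # -1 = random seed, to explore variations
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
images = r.json()["images"]      # base64 PNGs, same shape as txt2img output
```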

I’m going to put this at, like, 0.35, and I’m going to go into the prompt, and I’m just going to remove, I don’t care about nudity anymore, because it didn’t make that. Right. It’s a guy wearing a shirt. We don’t worry about that now. So we also don’t necessarily need to tell it what this is, because it’s got the image to tell it what it is.

So now I’m just going to say, like, highly detailed painting masterpiece, award winning. If I do a generate on this, I’m going to crank my batch size up to four now so we can get some different variations. And I’m going to let my seed be random again, too, so it can kind of explore a little bit. Yeah. So now when I hit generate and you can see how iterative this process is, and we’re just sticking within this blue pencil model, we haven’t even gone to models that do something specific.

This one is really good at making blue pencil drawings, and you can see how they only vary very slightly. See how it absolutely added more detail to his face? Yeah. And then let’s say we wanted the T. rex thing. The T. rex Loch Ness Monster. Yeah. Kind of cool. If we wanted that to look more realistic, well, now we could just switch to a different model. So, let’s see.

Reality Check is pretty good. Juggernaut, Juggernaut XL, is a really good all-purpose one. It’s kind of, what do you call it, jack of all trades, master of none. So it can’t necessarily do, like, one thing really well, but it does a lot of different things pretty damn well. Oh, that’s cool. And then I’m going to crank this up a little bit closer to, like, 0.45 again, just so that it can feel free to explore away from this more, like, cartoony animated look into something more realistic.

Yeah. And I might even put in, like, realistic. There we go, and then after this, we’ll switch models. I’ll show you a LoRA, and then we’ll get back to the stick figure frames and probably wrap up there. And we’ll have a place to start next time, because we’ll have all these images that have the info baked into them. Yeah, so, okay, here’s the same one again. New model. It’s funny. It’s like, that one’s.

Liam Neeson. Yeah. Yeah, exactly. I don’t know who that one is. The guy in Firefly, the, Nathan, I forgot his full name. He was also in a bunch of funny shit. He’s awesome. And then, just for the heck of it, if we were to crank this denoising strength back up, like, 0.85, just to see what it would do for that. This is a new word for me.

Antipodean. Antipodean? Yeah. It’s like anti pod people. What it sounds like. Yeah. Are you an anti-podian? No. I am. I am an anti-podian. I want to live in a pod. No. Fuck no. You’re a podsist. Yeah, absolutely. Oh, man. That looks like a cat citrus. Yeah. Okay. That is cool. All right. What is she displaying there, Mila Kunis? Drinking mineral oil. Is that mineral? Okay, I guess we’ve got an old lady with a lizard crab iguana growing out of her arm.

Yeah, lizard, turtle, crab. Iguana, look at the turtle face. Iguana, it’s wild. It’s, like, such a trippy thing. Your mind wants to sort it out, but it can’t. I mean, some of this is, like, my favorite thing is just, like, that eyeball is cool as hell. That’s cool. So here’s an example. Let’s take that eyeball, send it to image to image, interrogate it, and see what it thinks it is.

And then what we can do is take what it thinks it is and put that into text to image, and let it generate something completely different that, again, is no longer bound to the weird dinosaur cartoon thing. Yeah, here we go. Going on tangents is so fun. And also it says Christian W. So I’m going to go ahead and assume this is an artist name, although I don’t know if this really matches that style, man.

Oh, I guess so. Yeah. So let’s just take this and plop it in here and see what comes out. And there was no negative in there. We don’t have any of these other options turned on. And I’ll make this random again. So now we’re going to see what Juggernaut XL creates if we just give it the text and never gave it any of the image guidance. Awesome. And now you can kind of see the consistency here.

Look how consistent all of the different versions of this are. Yeah, man, there’s so many cool things we can do with this, too. And also, this Euler a, or Euler ancestral, sampling method tends to be, in my opinion, better for art. And if you want something that looks more realistic, these DPM samplers are pretty good. So I’m just going to use one at random and just rerun that exact same prompt, just using a different sampling method, to see what kind of impact that has on it.
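
(Sampler choice is just one more field in the same API payload. A tiny sketch comparing the two families mentioned; these sampler names match what a stock Automatic1111 lists, but the available set depends on the install.)

```python
import requests

base = {"prompt": "glass eyeball, intricate, surreal", "steps": 20, "seed": 4242}
for sampler in ("Euler a", "DPM++ 2M Karras"):  # art-leaning vs. realism-leaning
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                      json={**base, "sampler_name": sampler}, timeout=600)
    print(sampler, "->", len(r.json()["images"]), "image(s)")
```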

Whoa. Yeah. Now you can see in the top right, this version, it understands how light works a little bit more. I mean, not that it’s accurate, but just the refraction and the extra levels of reflection that you see in there. It’s a little bit more detailed than the previous ones. Yeah. So now, just to bring this full circle, let’s say that we really like this aesthetic that we found through this long, convoluted way of different models and stuff.

And now we want to take this style and apply it back to our little dude. So we’re just going to turn control net back on and just hit render again. That’s it. And since the control net already knows that it needs to make a dude that’s in this position, I don’t know what we’re going to find, man. It’s probably going to be like a weird eyeball dude or something in the background.

That sounds badass, Cyclops. Hell yeah. And it will take a little bit longer, too, because it has to go through the control net. So it’s doing like multiple steps here in the background. And just like you were saying before, can we remove the background? Can we make it advance to the next frame? Can we have it shift over a few pixels? The answer to all of that is yes.

It’s just that each one of those yeses is one extra tab that you set up that sets it into the next chain of events. And there’s actually a slightly better program for doing that, called Comfy. If you do any node based editing, like if you’ve ever used DaVinci Resolve or something like that, okay, you might even find that one a little bit easier to use.

That’s what I use for the shows. So actually, let me just jump to that one really quick. And by the way, this saved all the images that we’ve been working on. So I can go in here to images, Automatic. Today is the 14th, and I was doing some last night, too, with Bernay. Here’s all the ones that we were just doing before. So every image that we rendered earlier, we’ve got here.

And the other cool thing about Automatic1111 is that if we were to open one of these images, where do I download this guy? How do I download you? Here we go. Oh, I might be able to do this if I click on info. Okay, now I’m going to download it and show you something really cool. So there’s another tab here called PNG info. And if I take that image that I just downloaded of that guy and put it in PNG info, this is something specific to Automatic1111, but it saves all the parameters in the metadata of the image file.

So anywhere you’ve got this PNG file, wherever it goes, it includes the entire prompt. Here’s the positive prompt, here’s the negative prompt, here’s the number of steps, here’s the sampler, here’s the seed. Literally everything that you would ever need to recreate this image exactly lives inside that image file. So that’s a really cool thing, that you can keep exploring and not have to worry about stopping to take notes.

Just be like, okay, I like that one. Put it in a special folder. And then that’s the way I work, is I’ll just spend maybe two or three days just generating images, and then a day or two curating them and picking out the ones I like. And then the next time I go to do generations, I’ll drag those guys into PNG info and kind of use that as my checkpoint to keep working on a certain idea.

And there’s nothing else like that that I’ve ever come across, where it’s like, you can jump right back into where you were at and keep going forward. You can save a Photoshop file, right, but you kind of lose the momentum that you had. It’s just a bunch of layers that are just abstract out there. Yeah, man. What a cool, yeah, it’s a great thing to point out, because it is an awesome feature.
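
(Those baked-in parameters are an ordinary text chunk in the PNG, so you can read them without opening the UI at all. A sketch using Pillow; "parameters" is the key Automatic1111 writes, the file name is hypothetical, and .text is a PNG-specific attribute.)

```python
from PIL import Image

img = Image.open("keeper_2023-10-14.png")    # hypothetical curated image
print(img.text.get("parameters", ""))        # .text holds the PNG text chunks
# -> the positive prompt, then "Negative prompt: ...", then a line like
#    "Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345, ..."
```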

Something that, after you work with this stuff long enough, you know how valuable that is. So I’m going to shut down the Automatic1111 really quick and just give you a quick preview of ComfyUI, and maybe then, when we pick this up again, we can do it in Comfy if you like. Hell yeah. So here’s Comfy, and I’m going to change my hardware just to be a little bit beefier, because we’re going to be using SDXL. And I’ll launch this, and this is actually going to probably take a little while.

And the other cool thing, too, about RunDiffusion is that, let’s say that you were working on this stick figure thing and you got it down to a flow and you’re like, okay, render 1000 frames. That’s going to be cooking for a while. So you could just open up another tab and just open up another session and work on a completely different project while that’s running. Again, even if you built your own computer, you’re not doing that with your own computer.

That thing’s going to be churning in the other room with the fan spinning for 20 hours. Right? Yeah. And so because this is cloud based is why you can do that. Yeah. And if you wanted, you could have, like, 20 of these things going. You’re going to pay for it. So right now, the instance that I picked is a large, and I think it’s like $1.50 an hour.

So this thing can just run, and that’s all it’s costing. And the second that I don’t want to use it anymore, I hit stop and it shuts it down, and it stops taking it out of the balance. So there’s really no cheaper alternative, unless you go the Midjourney route or unless you go to one of these other websites. But on those sites, you’re not using ControlNet with that level of control and being able to go and change the model and change all these different aspects of it.

Wow. So it says it’s done, but it always takes a little bit longer. The Automatic1111, which is the one that I’ve been using the most on these videos, I’ll start transitioning more into Comfy and the others. It’s just that Automatic1111 has almost everything that you would need to get interested and be like, oh wow, that’s so amazing, that’s so easy. But then it’s like, okay, let’s make that dinosaur not look like the Loch Ness Monster, and let’s give them scales.

That’s when you might need to start picking at some of these other tools. Right, here’s ComfyUI. You can already see it’s a way different interface. This is node based, and the one it loads by default is a little bit insane. Like, there’s a lot going on here that doesn’t necessarily all need to be running at once. But I’ll just show you what this one is doing.

If I zoom in far enough, it’ll start showing me some text. I think these are the previews. Let me zoom into these ones. Okay, so we’ve got a source image here, we’ve got a mask here. This is an inpainting thing. I’ll mention that in a second. But here’s the main prompt. So, Iron Man with a molten lava suit, armor made of molten lava, cinematic photo, 4K, highly detailed, yada yada.

And then this gets fed into a whole bunch of different math thingies. And then here’s the models that it’s picking. So it’s going to run this RunDiffusion XL. It runs a refiner, which means after it runs the first time, what I was doing manually before in Automatic1111, where I would then have it rerender at a higher resolution, it does that automatically. That’s what this refiner does.

Oh wow. And then it also is going to upscale it by 4x. And this is using a LoRA model. I’m not going to get too much into what LoRA models are, but a LoRA, like you were asking before, how do we get it to keep drawing the same guy over and over? We run a LoRA on it, which is a low-rank adaptation network. A LoRA basically teaches your model a new thing that it didn’t know before.

So it might not know your face, right? So if you took 100 pictures of your face and said, this is Brandon, and then said, okay, now give me a picture of Brandon, now it knows what the hell you’re talking about. That’s the way you do it. So this Ether Fire LoRA just tells it what fire looks like, essentially. So now when you tell it about molten things, it’ll give it a very specific look.
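
(For reference, in Automatic1111’s prompt syntax a LoRA is activated with one tag, usually alongside its trigger word; in ComfyUI the LoraLoader node plays the same role. The LoRA name and trigger word below are placeholders.)

```python
# <lora:NAME:WEIGHT> loads a LoRA from inside an Automatic1111 prompt.
# "ether_fire" and its trigger word "etherfire" are placeholders here.
prompt = (
    "iron man, armor made of molten lava, cinematic photo, 4k, highly detailed, "
    "<lora:ether_fire:0.8> etherfire"  # the tag loads it; the trigger word invokes it
)
# Weight 0.0 effectively turns it off; roughly 0.6-1.0 is a typical working range.
```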

And I can disable this LoRA afterwards, and you can kind of see the difference between the two. So without much more ado, I’m just going to hit render. The other really cool thing about Comfy is you can see exactly what’s happening. It’ll draw, like, a little line as to what step it’s currently working on. And it also gives you a better idea of what takes a long time.

So I’m just going to hit queue prompt here. And where is it starting? I feel like it should have run and not just queued. I must just be, there’s literally a button somewhere that’s like run, I’m sure. No, here it goes. Okay. It was just taking a little longer. I was just being very impatient. Human impatient. Yes. That’s pretty damn.

That’s cool. Look at how high resolution it is, too. Damn, that’s cool. So let me give you an example of that LoRA. So if I just take that LoRA out of here, or actually, the strength is already at zero, so let me actually change it. Strength zero means it’s basically off. Yeah. So let’s run it again. There’s two images, Iron Man on there, and the other one looks cool as shit, too.

That was because I hit it twice. So the first one, and then my impatient second one, a cool-ass version of that. There’s what the LoRA turned up. See, now he’s on, like, fire, because I turned that flame LoRA up some. So let’s look at some of the other examples of the LoRAs here. His hands don’t look like shit. That’s good. That’s another really good thing about XL, is that it’s a lot better about doing the hands.

So then why don’t the people who are faking our politicians and Britney Spears and stuff use that? Like a better quality version of it? Yeah, they should upgrade their model. That’s what I’m saying. Yeah. So this one’s called Magic Smoke. That sounds kind of cool. So all I did is just selected Magic Smoke, and we’re just going to run it again and see how that changes it. And with a lot of these LoRAs, too, they have something that you would call, like, a trigger word.

So if you turn the weight up enough, it’ll just make magic smoke happen. But there’s also ways, you can type Magic Smoke, and it depends, whoever trained that model will usually say, here’s the trigger words. And if you use, like, pink smoke, then you don’t have to crank the weight all the way up. You can just write pink smoke and it’ll know what it looks like. Is this a Wikipedia situation where people can go in, write whatever the hell they want, and train these models? That blue is green and that cars are bikes and shit like that? You could, although that’s not the point of a lot of what you would be doing.

You would be working against the model, because you’re going to start with a model that knows what those colors actually are. So then to start training it that those aren’t the colors, you’re also making it unlearn other things that it probably should know. Yeah, but what if you’re, like, an agent of chaos and that’s your thing, is to go in and fuck things up? People do that. People just make big mishmashes of models.

And this is funny. Like, you were saying that there’s some people that are anti AI out there. There’s some people that actually make bad models just to upload them into the ecosystem. Just like, remember back in LimeWire, you would go and download the Korn album, but it was actually, like, some indie album that you’ve never heard of. You’re like, what the hell is this? And someone out there is like, someone listened to my album because they thought it was Korn. It’s kind of what they’re trying to do.

I don’t think it’s going to work. You’re just going to get drowned out by noise. Yeah, it’s the Luddites, right? Yeah, that’s pretty much the Luddites. So here’s the Magic Smoke. You can clearly see that there’s some smoke going on here. That’s awesome. It might not look magical, but we can tweak that. So this is the default, crazy, over the top version that ComfyUI sort of starts out with by default.

So if I just clear this all out, start with a brand new workspace with nothing on it, you can kind of, like, build it up step by step, if you like node based creation. So you would basically do, like, load a checkpoint. Wow. We were zoomed out so far. So here’s the checkpoint that maybe you want to load. And here’s all the different models. So let’s say we did Juggernaut XL.

Then you can go in here and add this KSampler. This is where you’ll connect the model to it. We can also, this is where you can also load in the LoRAs, if we wanted one. I don’t really care about the LoRA right now. Latent is the encode and decode. I’m not going to set all these up one by one. But this is just how you would kind of start stitching everything together, is like, you connect the model to the model.

This VAE connects to here. And I just know this because I’ve done it before. This isn’t, like, a quick thing that you would figure out. But the cool thing about this is you don’t need to know how any of this works, because you can load in, just like Automatic1111 had, like, images. Here, I’m going to show you an example. Inside ComfyUI, when you save an image out of it, it saves the workflow as part of the image.
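
(Same trick as Automatic1111’s PNG info, different keys: ComfyUI writes the whole node graph into the image metadata, so any saved render doubles as a loadable workflow. A sketch, assuming the stock save node; the key names here are what ComfyUI writes as far as I know, and the file name is hypothetical.)

```python
import json
from PIL import Image

img = Image.open("comfy_render.png")           # hypothetical ComfyUI output
workflow = json.loads(img.text["workflow"])    # the editable node graph
prompt_graph = json.loads(img.text["prompt"])  # the executable API form
print(len(workflow.get("nodes", [])), "nodes in this workflow")
```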

So here’s a bunch of ComfyUI images that I got from a bunch of different tutorials as an example. But let’s say this one right here, this night, evening, day, morning image. If I drag this into Comfy, here we go. Oh, wow. It sets the daisy chain up and everything. So here’s exactly what was used to create that image. The problem is that if I run this, it’s probably going to explode, because this likely doesn’t have the model that it was looking for.

So I’m just going to change it, and it was a 1.5 model. So I’m just going to go and pick one of these guys. We’ll do, like, I like this one here. And the rest of this should be okay, and if it’s not, it’ll tell us. So let’s just hit run. Wow. Okay. Yeah. So we’re missing this VAE, which is right here. I’m just going to change it to a default, which is this one.

Okay. This is literally the same file. It was just that when I rendered the other image out, it was in a different path. So let’s try it again. And this is the cool thing, and see how it highlights it in red? It tells you exactly what was broken. And this is another checkpoint that it’s using, two different ones. So let’s pick another one here. So we’ll do Analog Diffusion.

Okay, now let’s give it another run. And I don’t know if you can see this, but this thing is highlighted. And then you’ll see the individual nodes, now that this one is in green. And then you’ll see it flow to the right a little bit. Okay, it’s in that one. That one. Bam. And then here’s the image that it ended up creating. And this one is using some of that regional prompting that I was mentioning before.

And the way that that works is right here. So we’ve got all these different areas, and they’re defined by, like, width, height, X, Y. So this one is in the top left corner, and it’s, like, the very top of the image. And you can see it says night, darkness, sky, black, stars, galaxies, space. And if you go and look at the image, the very top of it, oh my God, you’ve kind of got what it’s talking about there.
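
(That regional control corresponds to ComfyUI’s area-conditioning node, where each prompt gets its own rectangle. Below is a sketch of one region in the API’s JSON form; the node ids and the upstream text-encode node are placeholders.)

```python
# One region of a regional-prompt workflow, in ComfyUI's API JSON shape.
# Node "7" pins the conditioning from a CLIPTextEncode node "6" (the
# "night, darkness, sky, black, stars, galaxies, space" prompt) to the top strip.
region = {
    "7": {
        "class_type": "ConditioningSetArea",
        "inputs": {
            "conditioning": ["6", 0],   # [source node id, output index]
            "x": 0, "y": 0,             # top-left corner of the region
            "width": 1024, "height": 256,
            "strength": 1.0,
        },
    }
}
```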

Let’s just make it something that would be, like, really obvious if it was working or not. I’m just going to do apocalypse. I’m going to say hundreds of gay frogs. I’m just going to render this out again and just see what happens in this regional prompter. And the other cool thing about ComfyUI, too, is that it’ll only rerender the small little bits that you changed.

Just like any good node programming thing would do. So now look. See how much control I’ve got over exactly what I want on just the top of this image? Those are gay frogs. I mean, we can really hype that up if we want, put some parentheses around gay. It’ll make them super gay. Like, Alex Jones gay. Of gay frogs wearing pink tutus and, I don’t know, BDSM gear. Farting in the bathtub, laughing their asses off.

Farting in a bathtub, laughing. Not going to put asses off, because I just don’t want to. Fair enough. Yeah, you’re seeding frog porn. It’s this whole new thing now. People are going to come to you and come because of you. Drum roll. It’s so exciting watching it, like, come together. Do you think this is how the architect of our simulation works? They type in a bunch of stuff and they’re like, oh God, we got the Kardashians.

Back it up. Oh no, I didn’t break it. Here we go. It’s still cranking out. Oh, there they are in the bathtub, laughing their asses off. That’s great. Yeah. I’m trying to figure out the difference between these two. I think, is it the refiner? This might be pre-refiner, because this one is far more detailed. Oh yeah. So there you go. And I mean, the faces are a little bit jacked here.

That’s where, if you wanted to have full control over that, you could go in here and add another node. And then we’ve got, like, we’d add, the Pixar one, sarge, I can’t remember what all these do. Magic UI. Some of these specifically do image updates. Let’s see. This is all math related. Advanced. Yeah. So, like, one of these masks you can do dynamically. I’ve seen some workflows that are so crazy.

Well, someone will start out and they’ll make it so the image just generates an underwater scene, and then they’ll actually draw a mask of, like, a shark somewhere, and then say, okay, now make that a shark. And then once they’ve got that, then they’ll build on top of that and add more fish to it. And then once they’ve got this whole scene composited out, they can go back to the very first step and be like, okay, it’s not in the ocean now.

Now it’s, like, in a city. And then hit render again. And all the elements are in the same place, but now it’s in an office building. And if there’s fluorescent lighting in the office building, that fluorescent lighting hits the shark and it hits the fish the way you would expect it to, even though you composited it from something completely different. That’s incredible. So that’s that one. I’ll do one more if I’ve got something else that is worth showing here.

So here are the other examples. This one is a pretty decent one, this SDXL with the bottle in it. Yeah. So let’s pull that one in. I honestly don’t know how to get rid of whatever the hell this little thing is down here. It’s just showing me all the previously rendered images, but I don’t know how to make it go away. Okay. And again, if I hit queue prompt on here, we’re probably going to get an error right away.

And then it’s going to show you where those failures were. So here’s the first red one. We’re just going to make this any XL model we want. It doesn’t matter which one, they’re all compatible. The refiner does need to be just a refiner. So I’m just going to pick this one here. And now I can hit render again, and it should work. Yeah, this one’s going so fast, you can’t really keep track of where the green outline is, but I can see the nodes light up.

Yeah, I’m not sure where it’s at right now. There it is. Cool, though. Sampler, another sampler. Here we go. So there’s the image. And it’s the exact same image that I pulled over, right? Because that’s the cool part about this AI stuff, is that if you’ve got the ingredients, you can remake it exactly with the same recipe. And if I wanted something that looks similar but different, I would just find where the seed is.

So here’s the seed. So if I just remove that seed and render now, it’s going to be different every time instead of being the same thing every time. Right? And then let’s just deconstruct and see what the hell it’s doing here. So it starts out with this text prompt. Evening sunset, blue sky, nature, glass bottle with a galaxy in it. With apocalypse in it. Yeah, I heard the word apocalyptomist, and I think that that’s fun.

Apocalyptomist? Yeah. It’s somebody who keeps a boundless optimism through the apocalypse. An apocalyptomist, yes. I mean, it’s, we made the word up anyways, right? Laura Larica did? Yes. She’s great. Okay, so that was the random one, but now let’s do the apocalyptomist. And honestly, I don’t have high hopes, because it’s not even a real thing. Let’s be a little bit more just apocalyptic, intentional. Yeah. We’re going to say evening sunset, scenery by.

We’ll say, like, apocalyptic hellscape city, glass bottle with a dead eyeball. Okay. A dead eyeball. That’s cool. And I just want to see if it’s running it through any other prompts somewhere, which it might be. No, those are notes. Refiner. Okay. No. So if we just render this out again, I guess, just to make this one a little bit more obvious too, after this one’s done generating, we’ll do something that Midjourney would never let you do.

So that’s cool. That is cool. So Midjourney for sure would not let me say, with guts and blood, and fuck it, we’re just going to say nudity and murder explosions. And Joe Biden. Love it. If I tried to do this in Midjourney, it would be, you know, please try again in 4 hours, we’ve disabled your account temporarily. No way. Have you used Midjourney? Yeah. So now if you type in the word blood, it’s just got a long list of words you can’t use anymore.

But also it’s using AI, surprise, surprise, to be able to tell if you’re trying to make a politician do something that wouldn’t be a normal thing for them to do, like Bill Clinton in a blue dress, or baby Bush sitting on the floor playing with paper, you know? So there we go. You wouldn’t be able to do this one in Midjourney, because it had the word blood in it.

But we’ve got, it’s like, dripping down from the cork. And, and just, just to put a little feather in the, uh, I really want Joe Biden to be in this thing. Put a bunch of parentheses, so we’re going to crank him up to 1.6 and see what happens, if that improves this at all. So out of the two of these, do you prefer node based or the other one? Node based is cool, because you can see how it works.

And that’s honestly how my book functions. It’s a node system. I call them nodes in it. That’s the whole point, is that you fill it in so that it’s an easier point of access. You get a greater amount of information in a smaller amount of space. I love that. And this one, too. The fact that Joe Biden is at the very end is maybe giving it less priority.

Yeah. So I’m just going to start it out with Joe Biden, like, standing in apocalyptic hellscape, and then try it. Come on, Joe. Which Joe is it going to give us? Is it going to give us the 70s version, the racist one, or the child molesting one? There we go. Yeah. So which one is this? Well, I mean, that’s another thing that you wouldn’t be able to do in Midjourney that we can do here.

Give me the original Joe Biden, like the one before they were cloned. Racist 1970s Joe Biden just pulls up his, and I want to say, like, a Polaroid photo of racist 1970s Joe Biden standing in apocalyptic hellscape city, pointing at Michelle Obama’s dick, rusty chains. It’ll definitely do it if you want me to put it in there, man. I think we’d be doing the audience a disservice if we didn’t.

Okay. That’ll be an after hours version of paranoid programming. Yeah, we’re just like, whatever you want me to type in, as long as it’s legal. That’s the Patreon one. Oh, wow. There we go, man. Now, it doesn’t look like 1970s, it doesn’t look like a Polaroid, but it sure as hell is Joe Biden with some rusty chains, and it looks like an accurate bottle of a dream. This is after hours.

Yeah, it is, isn’t it? Wow. In a war torn city, where, that’s what he came in to cultivate. That’s fascinating. I mean, that image shows a lot. I know it was AI and all that, but it learns from the Internet. And this is a fairly simple version of this. This is before we add, like, a detailer to improve the face. You could also take this image and feed it into Roop and do a face swap.

And when you do a face swap with the person’s face that it already is, it tends to look really good, kind of by default. Yeah, it makes sense. Yeah. Because it didn’t have much to tighten up, right? To adjust. So I’m going to call it here. I don’t want this to run too long. We’ve got plenty to start from for the next time we pick this up. Maybe we’ll take the stick figure and make Joe Biden be the stick figure.

Who knows? Yeah. There you go. Yeah. This is amazing, man. So hopefully this will get your brain just cranking on some different ideas. And next time we come into this, if you want, we can just keep doing the frame by frame stick figure guy in one of the styles that we established, or whatever else you want to do, man. Yeah, because I’m curious about an animation, like, how to take it into a certain amount of pages or frames, basically, is all that is.

And then getting that amount rendered out with a consistent character doing something that you can flip. I would do it first, I would make the stick figure animation first with 3D in Blender or something, render it out so the guy’s, like, doing all the motions. And then there’s a way to just bring it into one of these apps, the same way we’re doing it frame by frame.

Once you get it to that one frame, you say, okay, here’s a folder that’s got 260 images in it. Do the same thing for every one of those images. And that’s all you do. All you do. It’s fascinating, because we could even, like we were talking about, take a scene from a movie or something and say, okay, animate that, use that as our animation, and then have that be our figure, and then animate from that, have it be the exact same movements and everything.

Like we said, the Matrix or something like that. Yeah. It’s so easy to mix and match different concepts. That’s amazing. How much fun is that? In theory, have a fight scene where they turn into many different versions of reality throughout one fight scene. They go claymation one time, cartoon the next, string figures the next, all that kind of thing. But it’s the same scene played out that you’ve seen a dozen times before.

Again, I’m just using the Matrix fight scene as a reference. But then now it’s animated characters doing it, and then when it turns the camera, Mr. Smith is now a string figure, or something like that, right? So you could bring them into different worlds at every camera switch. It’s a new reality that we’ve rendered to then just continue the scene. Yeah. So the way that that would work, in theory, is that, and there’s a bunch of ads you’ll see on Instagram where it’s like someone dancing, and then they’re a robot, and then they’re made out of string, and then they’re made out of clouds.

All you have to do for those is to just say, okay, here’s the folder full of 260 images, and when I hit go, then you can say, okay, on frame 20, change to this model, and on frame 40, change to this model. And not just change the model, but you can also say, change to this model and now change the prompt. Instead of saying guy made out of string, it’s guy made out of clouds.

Right. You can also say, like, if on frame ten you’re made out of string, and on frame 20 you’re made out of clouds, you can start changing those weights, right? Then it’ll be like, the string weight will go from one down to zero as the cloud weight goes from zero up to one. So there’ll be, like, a seamless transition between those different styles. Damn. I mean, I had the portal idea just because instant change is what a portal would offer, but I like the blending and the phasing, too, because that’s another really cool option you could do.
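
(The frame-by-frame version of that cross-fade is just a loop that rewrites the prompt weights before each img2img call. A sketch assuming the folder-of-frames workflow described above and the same local Automatic1111 API; the frame counts and phrases come from the conversation, everything else is placeholder.)

```python
import base64, glob, os, requests

os.makedirs("out", exist_ok=True)
frames = sorted(glob.glob("frames/*.png"))  # e.g. the 260 rendered stick-figure frames
for i, path in enumerate(frames):
    # Fade string -> clouds between frames 10 and 20; clamp outside that window.
    t = min(max((i - 10) / 10.0, 0.0), 1.0)
    prompt = f"(guy made out of string:{1 - t:.2f}), (guy made out of clouds:{t:.2f})"
    with open(path, "rb") as f:
        init_b64 = base64.b64encode(f.read()).decode()
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
                      json={"init_images": [init_b64], "prompt": prompt,
                            "denoising_strength": 0.45, "seed": 12345},
                      timeout=600)
    with open(f"out/{i:04d}.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```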

That’s fascinating. Okay, well, I’ll come better scripted next time so we can make some deliberate actions and perhaps even with a scene to emulate. I think that’d be cool. Shit, yeah. I mean, honestly, if you can just find an animation of a stick figure going through the motions or I mean, the best thing would be to do it in OpenAI, because ideally, we would have the source material of the stick figure doing what you wanted, but then have OpenAI trace that.

Or sorry, I mean Open Pose, which was that thing in the browser. And then just have one of those stick figures made from Open Pose for every frame, because then the AI knows exactly where the knee goes and what angle the knee is at. If you do a stick figure every once in a while, it’ll twist the head around, because it doesn’t know that a circle even has an orientation.

Yeah, and if you notice, the one it has, like, little antlers. Those antlers are showing it where the eyes are and where the ears are. And just by knowing the distance on each side, it can tell exactly where your head’s oriented. It makes so much sense. God, how fascinating. Wow. This is incredible. So, I’m glad it’s helpful, man. I think that we’ll have a fun one for the next one.
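
(Extracting an Open Pose skeleton for every frame can be scripted as well. A sketch using the community controlnet_aux preprocessor package rather than the in-browser editor used on stream; the model repo id is the commonly used default, so treat it as an assumption.)

```python
import glob, os
from PIL import Image
from controlnet_aux import OpenposeDetector  # pip install controlnet-aux

# Downloads the standard annotator weights from the commonly used repo id.
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

os.makedirs("poses", exist_ok=True)
for i, path in enumerate(sorted(glob.glob("render/*.png"))):
    pose = detector(Image.open(path))  # skeleton image, head markers included
    pose.save(f"poses/{i:04d}.png")    # feed these into ControlNet, one per frame
```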

Do any other shout outs on where people can find you again? Yeah, just the Instagram for now. Expanding Reality, three six nine. We’re doing some stuff with the website, so that’ll be changing up soon, but that’s it. The show goes out anywhere podcasts are served, and YouTube and Rokfin and all that kind of stuff. And the books you can find on Amazon. So go check them out. Very cool.

There’s three out now, but there’s so many behind it. But very cool. We’re happy about it. All deliberate stuff with amazing cover artists, so you guys can check them out. Like I said, a full series of these things. We’re happy to have it. Yeah, and hopefully one soon featuring some Paranoid American collaborations. Yes, and we will definitely talk about and announce that very soon. And I’m fucking so excited about this, dude.

Yes. All right, man. Well, hang out afterwards, we’ll chat for a little bit. But until then, here’s the coolest AI cartoon intro I’ve made so far. They said it was forbidden. They said it was dangerous. They were right. Introducing the Paranoid American Homunculus Owner’s Manual. Dive into the arcane, into the hidden corners of the occult. This isn’t just a comic, it’s a hidden tome of supernatural power. All original artwork illustrating the groundbreaking research of Juan Ayala, one of the only living homunculologists of our time.

Learn how to summon your own homunculus. An enigma wrapped in the fabric of reality itself. Their power at your fingertips. Their existence, your secret. Explore the mysteries of the Aristotelian, the spiritual, the Paracelsian, the Crowleyan homunculus. Ancient knowledge lost to time, now unearthed in this forbidden tale. This comic book holds truths not meant for the light of day. Knowledge that was buried, feared and shunned. Are you ready to uncover the hidden? The Paranoid American Homunculus Owner’s Manual.

Not for the faint of heart. Available now from Paranoid American. Get your copy at tjojp.com or paranoidamerican.com today.
