Experimenting with Firefly beta webinar
Join Howard Pinsky in this webinar as he dives into Firefly.
Good afternoon, everyone, and welcome back to Adobe Live. My name is Howard Pinsky, senior design evangelist here at Adobe. Hope you're all doing well on this Friday afternoon. I haven't seen many of you in quite a while, and many of you are already asking for boops in the chat. Boop. There we go. Today is all about Adobe Firefly. We're going to be diving in and having a little bit of fun for about half an hour, and I'd love some of your suggestions in the chat. We're going to be experimenting with some prompts, going over text to image, text effects, all sorts of fun stuff.

But before I dive in, I want to say a big hello, good morning, afternoon, or evening to everyone joining me on Behance today. If you are tuning in live, let me know in the chat who you are and where you're tuning in from. We've got Frank and Cody and Laura, General Kenobi, I always love that username. Gareth and Robert, Oliver, Jack, great to see you all. L3DR, also a great username. Don't know what it means, but it's wonderful. Many familiar faces, and new faces as well. Some of you I've seen in Discord. By the way, we have over 100,000 users in our Firefly Discord. That's insane.

All right, let's go ahead and hop over to my screen. Boop. There we go. And we're going to dive in. If you haven't explored Adobe Firefly yet, head on over to firefly.adobe.com. It's still in beta, so we can test things, we can break things, and we can get it to a spot where we can roll it out to a larger audience, but I think more importantly, so we can roll it out into many of our tools. I'm going to talk a little bit about that in just a second. So here's the page. If you're not part of the beta yet, you can request access. We are rolling people into the beta gradually, and we have a lot of people excited about this, so definitely request access if you haven't yet.

Let's go over this page a little bit. In the top right-hand corner, you can join us in Discord. It's a good time, especially if you're not in the beta yet and just want to hang out with people. If I hop over to Discord for a second: again, we have over 100,000 members in there, which is unbelievable. And it looks like they're streaming live in Discord right now, Meet the Adobe Firefly Team, at the same time as me. How rude. I'm just kidding. But this is kind of cool: a lot of the team members who are building these tools are in Discord, and we're going to be doing pretty frequent streams where you can talk with the team, give suggestions, and hop in and share the work you've done. I mean, look at this stuff. This is just delicious. Now I'm hungry. Thanks, Eva. And look at the cute little dragon dinosaur thing. I don't even know what it's called. But hop in there and have a good time.

All right. So, firefly.adobe.com. What can you do here? Right now we have two tools available: text to image and text effects. Like I mentioned earlier, I'm more excited about what is to come. Obviously, something like text to image is important for us to stress test how Firefly works, to get as much feedback as possible, and to improve the overall algorithms and diffusion models and all that stuff. As a non-engineer, I don't understand much of it, which is why we have engineers in Discord to talk to all of you. But down below, all the way down here, you can take a look at some of the things that we're exploring internally.
And some of these, maybe all of them, will be rolled out not just as standalone tools on firefly.adobe.com, but also in our applications like Photoshop and Illustrator and After Effects. That is what I'm excited about. Think of Content-Aware Fill: we introduced it, I believe, about 13 years ago, and it's wonderful at removing some objects in Photoshop. But even though it's kind of in the AI ballpark, it's not what AI is today. It looks at the pixels around your selection, figures out what might be there, and fills it in. With something like inpainting, and I'm just imagining and brainstorming right now, if you're in Photoshop painting over an object, it's not just going to look at the surrounding area; it's going to possibly look at your entire image, determine what should be there, and fill it in in a more, well, artificially intelligent way. I don't know if that's the word I'm looking for, but it's more intelligent.

So you have things like inpainting; personalized results, so you can maybe train your own model on your artwork, which is interesting; text to vector; extend image, think Content-Aware Fill or a content-aware extend, but again, more intelligent; 3D to image; text to pattern. The possibilities go on and on. I've seen so many people in Discord already giving amazing suggestions on how we can use some of this technology in Photoshop, Illustrator, and After Effects. Again, this is what I'm excited about. Obviously, text to image is kind of cool, and text effects are really neat stuff; they got big applause at Adobe MAX last year. Lode says inpainting is very powerful. I agree, I'm really excited for that one. There's also recolor vectors: our good friend Paul Trani demoed that in his last Firefly stream, I think, so definitely go back and check that out if you haven't already. And that should be available soon.
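To make the inpainting idea concrete: Firefly's version isn't public yet, so here is a minimal sketch of the same general technique, conditioning on the whole image plus a mask rather than only the surrounding pixels, using the open-source Hugging Face diffusers library. The checkpoint, file names, and prompt are placeholder assumptions, not anything Firefly-specific.

```python
# A rough sketch of mask-based inpainting with an open-source diffusion
# pipeline (Hugging Face diffusers). NOT Firefly's implementation, just the
# shape of the technique: the model sees the whole image plus a mask and
# generates what "should" be there, instead of cloning nearby pixels.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # example checkpoint, not Firefly
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="an empty park bench in soft afternoon light",  # what should be there
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```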
All right, text effects. Let's start with this, and then we'll dive into some text to image. And look at this. This is kind of cool, right? We showed this off at Adobe MAX last year; we sneaked it in. What's nice is you can dive down here to the bottom left-hand corner. I tried to position my camera so it's not blocking things, and it looks pretty good. So if I enter something like "pop", that's the text that's going to be displayed. And then we can enter a prompt like "exploding popcorn" and generate. It's going to take the text we entered, put it right here, then take our prompt, exploding popcorn, and create a text effect based on that. And that looks pretty wild, right? Do many of you have a need for this? It depends where you are in the design world; some of you may not. But if you're looking for something quick and a little bit stylistic, and you don't want to grab a bunch of popcorn and manually place it all over the place, something like this is wonderful. Or it's great if you're just trying to generate some inspiration.

Then over to the right, you have some sample effects. If we want to switch to flowers or snakes, that's basically going to replace the prompt, the exploding popcorn, right here. You can also choose whether you have a tight, medium, or loose fit. So what does that mean exactly? Tight is basically going to take the popcorn, in this case, and make sure it's really wrapped around the edges of the word "pop". As you go toward something like loose, you're going to see more popcorn, especially in this case, exploding popcorn, kind of fill the screen, as you can see there.

Now, I did share a tip in Discord not too long ago. Let me hop over there very quickly. If I hop over, look at this: Santa Claus. If I go to Tips and Tricks and scroll up a little bit, we have some variables that we're testing. Some of them are still internal, but some of them we're talking a little bit about. For text effects, there's outline strength. If I copy this and pop it at the end of the prompt, it's going to explode the effect a little bit more, pushing the bounds further than before. As you can see, there are some issues down at the bottom, which is why we're still testing some of these things. But that looks pretty cool. And again, delicious. I'm still getting hungry.

Then we have some fonts down here at the bottom we can switch between, and we have text color and background color, so our white and our gray. I did switch the text color, which is fine. And of course, you're noticing that some of the transparency isn't perfect. These things, not just in text effects but also in text to image, will improve over time. So anything you run into, definitely report it; you can report it at the top here. Again, things will improve over time.

It's me, Kingpin. Oh, Kingpin! Hey, great to see you from Discord. Text effects not working in Safari for me? So, Gail, this button right up here, definitely press that: Report a bug. We're obviously working on a lot of things. That's why we're in beta, just to make sure that things are working. So, text effects: really cool stuff.
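To recap the controls we just used: the Firefly beta exposes all of this only through its web UI, so the sketch below is purely hypothetical. Every field name is invented to summarize the knobs from the demo, and the outline-strength token is a placeholder since the exact syntax lives in the Discord Tips and Tricks post.

```python
# Purely hypothetical illustration: there is no public Firefly API for text
# effects, so these field names are made up to summarize the demo's controls.
text_effect = {
    "text": "pop",                  # the letters the effect is rendered into
    "prompt": "exploding popcorn",  # what the effect is made of
    "fit": "loose",                 # "tight" hugs the letterforms, "loose" spills outward
    "font": "(one of the fonts offered at the bottom of the UI)",
    "text_color": "#FFFFFF",
    "background_color": "#808080",
}

# The Discord tip is a test variable pasted onto the end of the prompt itself;
# the real token comes from the Tips and Tricks post, so a placeholder is used.
text_effect["prompt"] += " (outline-strength variable from Discord)"
```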
Let's go back over and dive into some text to image. I always get that backwards. So I'm going to go ahead and generate, and we have this beautiful gallery of images that have been generated by some Adobe employees and, I believe, some from the community as well. You can submit your own when you do generate something. If I click on one of these, it brings up the generation: a Scottish terrier fern. Just cool stuff, right? But of course, the magic of this is you can take whatever is building up in your mind. This is the kind of thing I love about these image generation technologies: you have all these wild dreams one night, and now you can bring those dreams to life using technology like this.

So maybe we'll just start with art. And again, give me some suggestions in the chat. I might want something like "a person standing on a cliff looking out to a beautiful landscape." I can't spell while I'm streaming. There we go. So I'm going to press Generate. Initially, assuming you don't have any very specific keywords like "art" inside your prompt, it's going to give you a result that looks like this: relatively photorealistic. Then over on the right-hand side, you can control the aspect ratio. In many other AI tools, it's not easy to customize your aspect ratio; you have to add a bunch of extra prompt text and stuff like that. But here I can just press Widescreen, and it regenerates at that aspect ratio, which is great. So here we go. And time-wise, it's not too bad.

Now, what you might notice is that some people's faces look a little bit wonky, depending on the angle. We've seen this in the past with other AI tools: faces and hands and things like that start off a little wonky, but that's why we're testing these things and why we're in beta. These things will improve. I wish I could show you some of the generations from months ago, before we even announced Firefly. They weren't great, but they've gradually gotten better, and they're going to continue to get better, which is wonderful. Cody says, "I like pineapple on pizza." My people! Yes, pineapple on pizza.

All right, so we have a relatively realistic result, but we might want something a little more stylistic. There are a few ways we can do this. We can add a comma and then maybe, let's say, "oil painting." I really can't spell while I'm streaming. Painting. There we go. Watch what happens when I press return: it recognizes that I typed "oil painting" and automatically applies that style down here at the bottom. It helps us out a little, so we don't have to browse through the styles and techniques on the right-hand side. But we can also switch the content type from, let's say, None to Art, and it's going to push the result even further in that direction.

Now, even though we don't train our models on current artists who are still living, or even some artists who have passed away, we do train the model on some historical artists, like Van Gogh, for example. So we can do something like "in the style of Van Gogh's Starry Night." Now, this isn't going to work for every artist, but for some, like Van Gogh, and I think Picasso might be in there, it will. And we get something that looks like that, which is incredible. So if you're trying to get some inspiration for a project, or just something a little bit stylized, that's one way to do it.

Now, speaking of inspiration, I know a lot of artists are trying to figure out how to use AI to get a rough sketch of something. So let's go ahead and get rid of "oil painting." We'll keep Art and keep the same prompt, but we'll add "basic sketch" and see what this gives us. And this is the fun thing about doing live demos of AI: you have no idea what's going to happen. So we get something like that. Then we can dive in here on the right-hand side; maybe we want this to be a bit of a doodle. And then you can bring this into Photoshop and do what you need to do with it, or just use it for some inspiration, which is really cool.

Muhammad is saying, "Really creative Adobe team, trying to save time for designers." Save time for designers, yes. And that's what I'm really excited about: again, text to image is kind of cool, but it's the time-saving features that we're building into Photoshop and Illustrator and After Effects that get me excited. And we're going to show a lot more of that, hopefully, in the weeks and months to come, which is going to be really fun. So again, very cool stuff.
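As a side note, that auto-style behavior from a moment ago, where typing "oil painting" got picked up and applied as a style automatically, boils down to simple keyword matching. Here is a toy version; the keyword list is invented for illustration and is certainly not Firefly's actual style catalog.

```python
# Toy version of the auto-style behavior: if the prompt contains a known style
# keyword (e.g. "oil painting"), apply that style automatically. The keyword
# list below is invented for illustration.
KNOWN_STYLES = ("oil painting", "basic sketch", "doodle", "watercolor")

def detect_styles(prompt: str) -> list[str]:
    """Return any known style keywords that appear in the prompt text."""
    lowered = prompt.lower()
    return [style for style in KNOWN_STYLES if style in lowered]

prompt = ("a person standing on a cliff looking out to a beautiful landscape, "
          "oil painting")
print(detect_styles(prompt))  # -> ['oil painting']
```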
You can get some inspiration just by doing a little bit of sketching and things like that. But it goes even further: we can create some photorealistic images. Let's go ahead... go ahead, go ahead. Let me get rid of those. And we'll do something like "photo of a burger." We're going to start simple, and I'm going to show you how you can build up your prompts. Starting off with "photo of a burger," we might get a relatively basic photo of a burger. Again, I have no idea what's about to happen, which is fun. And this photo of a burger looks fine. It looks like something you would take for Instagram while you're at, I don't know, In-N-Out or something. Let me know in the chat: what's your favorite burger place? In-N-Out, I think, is a little bit overhyped, but I know some people who live and die by it.

So this is fine. But one thing that I love about text to image is really figuring out the right prompts to use. "Photo of a burger," not bad. But maybe "on a rustic wooden table." Now we're starting to define the scene where our burger is going to live. And you might find yourself doing this because you might be on a stock website like Adobe Stock, browsing through beautiful images, but you just can't find the exact one you're looking for. So you might want to generate something that's a little more specific. A lot of votes for Five Guys. I like Five Guys; they're very greasy, though. MrBeast Burger, I haven't tried that yet. Big fan of Shake Shack, it's always good. Obviously not sponsored, but hey, Shake Shack.

So we're getting there. But we can continue to define what the environment looks like. Maybe we want a dimly lit rest... "restaurant" is another one of those words I just can never spell for some reason. Dimly lit restaurant environment.
And again, it's going to keep getting closer to what we're looking for. Let's see what happens. OK, not bad. Maybe we can do "dimly lit restaurant environment in the background." Now we're starting to define what the background of this photo is going to look like. See, now we're getting somewhere. And then maybe we want the overall vibe: "rustic style." And this should be about where we're trying to get with this particular photo. There we go. This is looking a lot more like what I had in my mind at the beginning.

If I go back, we can see where we've come from. This was our first generation, "photo of a burger." Very basic, but maybe that's all you want, just some inspiration for a photo. Then we started to define: on a rustic wooden table; dimly lit restaurant environment in the background; and then a little bit of rustic style for the overall vibe of the photo. And you can keep adding things. Maybe you want, let's say, "glass of soda," and that should put a glass of soda somewhere in this photo, maybe beside the burger. And in just a moment... there we go. Some of the glasses might look a little bit large and that sort of thing, but we're getting where we want to go.

Now, there are a few other options we can play with. If you hover over a photo, right over here you can submit a reference image, which I'll show you in just a second. But over here to the left, you can Show Similar. Maybe you really like this style here, or this one up here; maybe we'll go for this one down here. You can click on that, and it shows you additional generations that have roughly the same composition, maybe the same colors, that sort of thing. And we get something that looks like that. Very similar, especially these two down here. It almost looks like a McDonald's burger. Still hungry. I don't know why I do this to myself; I just make all these food generations.

You can also experiment with the content types. In this case, we may not necessarily need it, and probably either None or Photo are the only two that would make sense, unless you want to take this in a very different direction. Laura's saying it's impressive how fast it works. I agree; the speed at which these generate... ooh. It's going to regenerate this one. But the difference was pretty strong there, right? From None to Photo, we got much nicer-looking results. If I go back to Photo... ooh. OK. So, content types: definitely don't forget about your content types. Now I'm even more hungry. But yeah, going back to what Laura's saying, it is very impressive how fast this works. There are some other AI tools that are a lot slower; they could take a minute or two to generate four different photos, and this one does it very quickly, which is wonderful. So I'm hungry now.
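The burger walkthrough is really a habit worth forming: start with a simple subject, then keep appending clauses for setting, background, style, and props. Here is that exact progression as plain string building, using only the prompts from the demo; no API is involved.

```python
# The burger demo as prompt layering: each pass appends one clause, moving
# from a basic subject to a fully specified scene.
layers = [
    "photo of a burger",
    "on a rustic wooden table",
    "dimly lit restaurant environment in the background",
    "rustic style",
    "glass of soda",
]

# Each step reproduces one of the generations from the stream, basic to specific.
for i in range(1, len(layers) + 1):
    print(", ".join(layers[:i]))
```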
Anyway, let's go on to something a little bit less delicious. Well, it could be delicious. One thing I also posted in the Tips and Tricks section on Discord is that you can use something like Firefly to create patterns or backgrounds that you might want for your designs. So you might want something like "liquid swirls in beautiful purple and black colors," maybe with bits of gold, with gold powder. And we can generate that. So if you have a very specific background in mind, maybe for a live stream or a design you're working on, a flyer, a poster, whatever... ooh. Ooh. I mean, talk about luxury. Look at that. That's fancy.

It's wild to me that AI can generate things like this. If you had told me five years ago that we'd be doing stuff like this, I would have probably laughed at you. And now the world is changing so drastically that we can just type in a bunch of things and there we go. But I hope we all handle this well. Not just Adobe, because we're trying: our Firefly model is trained on Adobe Stock images; it's not scraping the entire internet like some of the other tools are doing. But we're in the wild west of AI right now. It's a scary place, and I don't know what's going to happen in the next four months, let alone five years.

So, things like this. And then we can do something like I showed you before: we have our reference image. If you click on those three dots, you can use this as a reference image. Now, what exactly does that mean? Well, it tells you right here: the reference image will be used to influence future generations. So we initially see very similar results, and we have this slider at the bottom, which I'll show you in a second. But let's say you love the gold and the black, but instead of purple you might want blue. So it's going to take those images and regenerate them with blue instead of purple. Let's see what happens. So we're now getting a little bit of blue in our generations, which is nice.

But you might remember that inside this popup we have our slider, so we can control the balance. If we slide it over to the left, it's going to stick more closely to the original image. I'll show you: we should see a result kind of like what we saw before, probably with a lot less blue. Now if we take the slider and drag it over to the right, it's going to look more at the prompt than at the reference image. So it's a balancing act; it depends how far from the reference image you want to get. And we should see more blue in just a moment. Woo. Oh gosh. OK, I think I found a new, strange hobby. I'm just going to generate these all day.

AI is weird. And this is the kind of thing that's somewhat exciting about it: it's inspiring me to create other things. Expiring me? I mean, maybe. But it's inspiring me to create designs based on these generations. I don't necessarily look at this as a one-and-done thing. I'm probably not going to just take this and upload it somewhere, or post it on Instagram. I want to take this and use it for something. I want to bring it into Photoshop and create a photo composition. I want to create some UI designs based on these colors. That's what I'm excited about, and I hope a lot of artists and designers are excited about this. And I totally get the fear that technology like this is instilling in people. Totally understand that. And that's why we're working so hard behind the scenes on the tools we're going to see in Photoshop and Illustrator and After Effects and who knows what else. That's what's exciting about all of this stuff.
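That slider, trading off "stay close to the reference" against "follow the prompt," has a close open-source analogue: the `strength` parameter in image-to-image diffusion pipelines. Here is a sketch of the concept using Hugging Face diffusers; the checkpoint and file names are placeholder assumptions, and this is not Firefly's actual implementation.

```python
# Sketch of the reference-image idea via img2img `strength` in diffusers:
# low strength stays near the init image, high strength follows the prompt.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, not Firefly
    torch_dtype=torch.float16,
).to("cuda")

reference = Image.open("purple_gold_swirls.png").convert("RGB").resize((512, 512))

# Slider toward the left ~ low strength; toward the right ~ high strength.
for strength in (0.3, 0.8):
    out = pipe(
        prompt="liquid swirls in beautiful blue and black colors with gold powder",
        image=reference,
        strength=strength,
    ).images[0]
    out.save(f"swirls_strength_{strength}.png")
```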
So, let's see, chat. Gareth says, "I'm obsessed with exploding powder paint at the moment." Well, let me go back and close this. Let's try "exploding powder paint in pastel colors." Oops. Pastel colors. I'm telling you, I can't type when I'm streaming. I don't know why. Lode says animations. Ooh, animating this sort of stuff would be really interesting. Oh, poof. Look at that. That is fancy, right? And you can take this in so many different directions: you can turn it into a photo, you can make some art with it. The possibilities are just absolutely endless.

So, that's basically going to wrap up today's stream. Look at that. That's fancy. Wrapping up today's stream with a bang. Definitely hop in Discord, chat with the community, have some fun in there, and share your work once you get into the Firefly beta. But I think more importantly, at least for me, share your suggestions. How do you want this technology to function and live in Photoshop? And I'm not talking about putting text to image directly in Photoshop just so you can generate things like this. I'm talking about things like inpainting, extending your images, maybe making variations, maybe changing the aspect ratio of a design so you don't have to redo it 1,700 times for different social media platforms. Definitely let us know what you want to see.

That's going to wrap things up for me today. A big thank you to everyone who has tuned in. Before I go, I'm going to leave you with a video that shows a little bit about what we're thinking about behind the scenes. Thanks, everyone. I'll see you all in the next one.