We focus on trying to understand why people are concerned about copyright. And I think we have a very strong perspective that this is not a violation of copyright when you're training these large models, and Stable Diffusion is trained on five billion images. I think it's important to respect artists' opt-outs, but to your point, this is not copying the image. This is learning the patterns between images so that it can understand what the color red is, what a hat looks like, and what a sunflower might look like in a morning sunrise.

Hi, this is your host Abil Bhartian. Today we have with us Kent Keirsey, founder and CEO of Invoke AI. Kent, it's great to have you on the show.

Yeah, thanks for having me.

Yeah, you're a founder and you're leading Invoke AI. I would like to know a bit about the company, because you created the company this year. So talk a bit about what led to the creation of this company. What is your own background? AI is a busy space, so what specific problem or niche did you feel nobody was solving that you wanted to solve?

Sure. Invoke has a little bit of an interesting background, because it evolved out of the open source ecosystem. There was an open source model released last year for image generation using generative AI called Stable Diffusion, and our repo was one of the first created to allow it to run on people's computers. In building that project and the community around it, we saw problems that professionals faced in using this in their day-to-day workflows. For professional creatives, people who do this for a living, we saw that we could improve the user experience and the interface, and work with them to build tools that enabled them to create, and really co-create, with AI rather than simply throwing a text prompt into the void and hoping that you get something useful from the model.
So that's how we found the problem, as we were building in open source. And then earlier this year, we spun up our commercial arm and started to work with enterprises, especially in media and entertainment, who are looking to accelerate their 2D asset creation pipelines with AI.

Perfect. And can you talk about your own background as well?

My background is varied, but primarily focused on product management throughout my career. I started out of school in technology consulting at Ernst & Young, and after a few years fell into the Atlanta startup scene. So I've worked on everything from B2B products, B2C products, and B2B2C products to two-sided marketplaces. I've done a little bit of everything, and I've seen everything from direct-to-consumer SaaS products to enterprise SaaS. So I have a varied set of experiences that helps frame my perspective around how to build good products for customers, but also how to apply this technology to customer problems.

How have you seen the evolution of AI? And by evolution I'm not talking just about the technologies, but usage. Once again, not in the consumer space, not Ring or Teslas, but in the enterprise space, where we do see some core challenges, core problems that need core solutions.

The biggest evolution in my mind is that we've seen a step change in capability, but we've also seen the release of open models that can be fine-tuned and customized to a business's use case. This deviates a little bit outside of where we focus at Invoke, but I think large language models have probably been one of the biggest drivers of generative AI. And there are open models that you can download and use locally as an end user, but also as an enterprise. Llama 2 made a lot of news recently; Meta released it openly, and most businesses would be able to use it commercially.
And when you have the capability to take this model and fine-tune it, you can really optimize it for your business's use case and develop strong IP around that model: both in how you use it, the prompt interface and some of the more advanced workflows where the prompt feeds back in on itself in an agentic workflow, and also through retrieval augmentation, pulling documents from a vector database or any type of document store into the model for context. There's a variety of ways you can apply this to an enterprise use case, and that customization is really where you unleash the potential in enterprise workflows.

And so we identified that same parallel in the asset creation space. These open models that you can pull down and run generate images, but if you're working on a very novel IP as a large entertainment company, the model is not going to understand when you prompt for your character or for the aesthetic of the world you're creating. Those types of customizations exist: you can train that model with new concepts and a new understanding. You can pass in your style as an artist, and you can teach it to generate high-quality images that fit your project. And that's where we see the most incredible potential for creators: taking these models and customizing them for their use case, whether you're an individual artist looking to embed your style in the model or you're an enterprise developing IP. You have the ability to take these models and make them your own.

Now, let's talk about the market space. You folks work in the whole creative space. There has been a lot of discussion; I used to write on this myself: hey, you know what, generative AI is going to take all the jobs of writers and artists. No, it's not. The arrival of Photoshop did not shut down a lot of camera stores; actually, it made us more creative.
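The retrieval workflow Kent describes here, pulling documents from a document store into the model's context, can be sketched in a few lines. This is a hypothetical illustration, not Invoke's implementation: it stands in a toy bag-of-words similarity for a real embedding model and vector database, but the shape of the pattern (embed, retrieve the top-k documents, prepend them as context) is the same.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector. A real system
    would use a learned embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Prepend the retrieved context to the question before it would be
    sent to a language model."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

# Hypothetical document store standing in for a vector database.
docs = [
    "Invoke provides a generative AI studio for image creation.",
    "Llama 2 is an openly licensed large language model from Meta.",
    "Fine-tuning adapts an open model to a business's own data.",
]
print(build_prompt("Which model is openly licensed?", docs))
```

In production, the `embed` step would call an embedding model and `retrieve` would query a vector database, but the final prompt assembly works essentially like this.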
It actually freed those creative professionals from doing mundane tasks so they could take on more projects. The invention of the wheel did take the jobs of folks who were carrying things on their backs, but it also created the whole automobile industry. So what I'm trying to say is, these are tools. Of course, a lot of jobs will go away; sorry, a lot of mediocre jobs will go away, but it will also open up a lot of high-level jobs, and AI will do some of the basic mundane tasks. Now let's look at the specific market, the specific use cases that you folks target, and then we'll talk about opportunities and challenges in that space.

I agree with you completely. I think the fear right now is one of being replaced: the skills that we've developed no longer being valuable, and us feeling like we don't have a place in the world. A lot of artists, when they saw what these systems could do, immediately responded out of that fear. And to your point, Photoshop did the same thing 20, 30 years ago. Where we're at now, I think a lot of artists are starting to see how this can empower them. Artists are realizing that these models don't really have a good understanding of their style unless they're trained, and the only person who can really train them well is the artist, who's got all of those high-res assets to train them with. And when they start to apply this to their problems, the more advanced professional workflows are no longer just "throw text at a model and hope you get an image." You can actually co-create with it. You can draw an image, a sketch, and pass that in. You can train it on your style. You can use rendering tools, whether you're an architect doing something in Revit or a 3D artist working in Blender or Maya. You can really use multimodal inputs to help guide the AI creative process in realizing your creative vision as an artist.
And for the most creative people that I talk to, the driving force behind what they do every day is a passion and desire for creating. And that is really what this technology is going to enable and unleash. You talked about changes. Yes, I think we will see new jobs created at enterprises for these types of roles, but I also think we're gonna see the creative world completely change. The individual creator now has the capability to do an end-to-end production. I think we're gonna see music, we're gonna see movies, we're gonna see entire video games all created by small teams that are now capable of doing AAA-quality work with that small team, because AI is helping them do it. And so I think we're actually gonna have a creative renaissance with AI, and it's gonna be really, really magical for a lot of people.

Now, I'm gonna ask some tricky questions, which are also about the real world that we live in. When it comes to content creation and when it comes to generative AI, or machine learning, it's about learning, right? You do need to have samples to go through, a lot of content to go through, to be able to create something. Without that, you cannot give an algorithm a vacuum and ask it to create a Lord of the Rings character. No, it needs a lot of references. Just like us humans, whoever we are, it's all about the things we have read, the people we came in contact with, the discussions we have had, the things we have heard. We learn throughout life. And now there is an ongoing debate; even some actors and writers, like Sarah Silverman, have sued a company like Meta so that it should not be able to learn from their books. But that's how we learn as humans. So there's a difference between copying and a difference in learning. It's a very complicated question.
I mean, I have read the whole Lord of the Rings books, and if I come up with my own novel, it's not that I'm going to copy them, but there could certainly be a lot of influence. So what I'm trying to understand is, since this is the field that you work in, what are your thoughts on this?

You know, we focus on trying to understand why people are concerned about copyright, and I think we have a very strong perspective that this is not a violation of copyright when you're training these large models, and Stable Diffusion is trained on five billion images. I think it's important to respect artists' opt-outs, but to your point, this is not copying the image. This is learning the patterns between images so that it can understand what the color red is, what a hat looks like, and what a sunflower might look like in a morning sunrise. Those types of patterns emerge from looking at multiple images, and any individual image is less important than the broad pattern being identified by the machine learning model. So we think it's important to separate those two out and focus on: how are these models trained? How are they distributed? And then how are copyrights obtained on the generated outputs? Our perspective is that it's important to have fair use for training, because otherwise I think we're gonna set ourselves back in innovation. But when you understand why people are concerned about this, it's because they don't want their style very easily replicated by somebody who's going in and typing in a prompt. And I think for most artists that I've talked to, if they actually try this technology and try to recreate their own style, they're unable to without a customized, fine-tuned model. And so if you go to Stable Diffusion, any of the websites that host those, or Midjourney, and try to replicate your style as an artist, you're gonna find it's really bad. If you use the specialized fine-tuned models, it's really good.
But that means there's a specialized model out there that's trained on your work as an artist. And I think that's where there are probably some issues: distributing models that are intended to recreate the likeness of an individual, or to recreate their style and commercially compete with them. That's probably where the bigger opportunity to restrict model creation and distribution lies, when it's targeted and specialized, because we believe that type of specialization should take place with the artist. They should be able to train their own model. They should be monetizing that and licensing that model out if someone wants to recreate work in their style. And I think it's a little bit nuanced, because this is a new age of intellectual property, but we think it's important for artists to understand the intellectual property they could potentially own if they approach this correctly.

When an individual like you or me consumes content, it's like one person reading a book, and then one person may write. But when it comes to AI, it's a bulk of machines going through my data. Look at AWS. They have disrupted almost everybody, because at the scale that they operate, no smaller company can compete. Nobody can literally compete with them. So a company like Meta, the amount of data they can chug through is much higher than smaller companies can manage. So it's not just about learning from that data; it's about the scale at which they operate. And that makes it an uneven playing field, not an even one. So it's much more complicated than that. Can you also talk a bit about whether it's artists or whether it's enterprises? You did touch on that. Where and what role does Invoke AI play? How do you help both parties? Or do you mostly help enterprises who have these artists on their team to create work or art? It could be games, it could be music, it could be films.

I think it's a good question.
And I think you talked about the big companies operating at a scale that nobody else can compete with. I think one of the important things in all of this, as AI develops, is making sure that people have access to the technologies, especially if their field is being disrupted by them. So this is a key philosophical perspective we have in why we're building in open source. Our tool can be used by anybody. They can download it and run it on their computer for free. It's not some faux free-trial version of our tool; the core generative AI studio can be run locally for free. Our goal is to really empower the entire world with good technology. And we do that by helping artists, individual artists who need to use these tools: helping educate them, teaching them how they can train their own model. We plan to help them monetize and distribute that model to potential customers. And we're helping enterprises understand the value of these models, train their own custom models, and deploy that studio across their teams. And so we really work across the entire ecosystem to make sure that we're building a foundation for the future. We wanna build an ecosystem that artists have a place in, where creativity thrives, and where Invoke can help evangelize the way that we believe this ecosystem and the world ought to work with AI at its core.

That's excellent, because I also wanna talk about the ongoing strike in Hollywood and some of their key points, like less use of AI. The fact is that we cannot go back to coal mining and horse-drawn carts. We do have to leverage AI and ML in creative processes, so rejecting it altogether is also bad. The second thing is a bit tricky, and I'll see whether I should keep it off camera or not: even if you look at mega-million-dollar Marvel movies, the FX artists are the ones who get paid the least, unlike the big actors.
So my point with this is, in all these creative processes, the artists are not getting paid what they deserve to get paid. So how do you look at that problem, since you are working in this space? How big is that problem? And how can Invoke also help solve it?

The important question here is: how do we create a sustainable ecosystem, right? A sustainable model for people to create. Looking at the writers' strike and where everything ended up in those negotiations, I think they've kind of sealed their fate in some sense. I think they landed on the wrong set of agreements, mainly because I think AI is going to be at the center of how content is created in the future. The funny thing is, I don't think it's the writers that lost; I think it's the studios. Mainly because I think the future of this ecosystem, of entertainment, is gonna be creator-centric. It's gonna be those people who can harness these tools to create those magical experiences and those wonderful movies and stories. They now have a new set of tools that can take their story, what they wanna tell, and make it real. And that was the role of a studio in the past, right? The studio used to be the enabler of creating your vision as a director, as a storyteller. Now we have these tools, the budget can be a lot lower, and those creators can go tell the story the way they wanna tell it. And so that's where I'm very optimistic about the future, about how creators can leverage these technologies to realize their vision. And that is what we wanna support; we wanna help evolve that ecosystem. I think the important piece here is that creators need to understand the intellectual property that they own and not give up that intellectual property to anyone else. They need to own that.
Whatever that secret special sauce is that makes them a valuable creator, whether it's their voice, whether it's their storytelling capability, whatever it is, they need to understand how these models are trained. They need to understand how they can optimize those models for their own creative purposes. And they need to be able to deploy that and use that in their creative process.

I just came back from Open Source Summit in Bilbao, Spain. And even at the foundation, they announced some projects around AI and ML, and they were like, you know, we need to democratize AI and ML. There are a lot of projects with the name "open" in them, but they are not open source; Llama 2 is a very good example. Talk a bit about, first of all, when it comes to AI and ML (actually, AI/ML is now turning out to be a legacy, traditional term; we have to talk about modern AI), what does open source mean here? Because AI/ML is not like a traditional LAMP stack, where those four components are all open source. We talk about models, we talk about frameworks; the software itself has no value without all the data, all the information. So talk a bit about what open source means for AI in general, and then let's talk about what open source means for Invoke AI.

So I think it's important to recognize that open source is probably the wrong label when we're talking about models, because the models aren't really like source code. And it's not really a data set; the data set is its own thing. The model has a license that you must follow to use it, and most of those licenses don't necessarily map to the licenses we're familiar with in the open source ecosystem; they're specific to models. So there's the code to run these tools and run inference on the model; that should be open source. But for the models themselves, you know, our perspective is we want them to be openly licensed.
And what we mean by that is, barring restrictions for abuse or misuse, those models should be free for anyone to use and free to create derivatives of. And the reason it's important that you can create derivatives of those models is that it allows people to fine-tune and customize them. From an enterprise perspective, you want assurance that if you fine-tune or customize that model for your business, it is yours, right? You're not stuck behind some sort of commercial license agreement. There are no hidden hooks to pull you back to a specific vendor. It is an open model that you can use and run your business with. And from our perspective, that's the only viable way that businesses will move forward in this space: they'll want to find that specialized model they can train from an open model, you know, one of the openly licensed models in the ecosystem, and use that internally to optimize their workflows. That way they can actually build this as a core capability of the business rather than something that they're just renting from a large tech company.

And what kind of trends are you seeing, once again using the term open source, in the whole AI or generative AI revolution, the movement that is going on today? Are you happy? Are you satisfied? Or are you like, no, we need to do more work in the context of open source?

I think we're seeing a lot of really good trends. Hugging Face is an excellent company that's advocating for openly licensed and open source AI research. A lot of what has come over the past year is off the back of some of those openly licensed models that were released. A lot of research has taken place, and I think we're seeing a really, really strong open source movement that is moving AI research forward.
I do think there are some concerning trends, where we see commercial licenses applied to what's called open source. It's no longer really open source at that point, but more like source-available, when you put a commercial license on top of it. And I think that goes against the ethos of what open source is, so I'm a little concerned about that. But with models specifically, I think we are seeing a lot of openly licensed models being released, and I think we're seeing the cost to train come down. So I'm pretty confident that we will continue to have these types of openly licensed models available, even if some of these larger providers end up closing theirs off. The research is out there, and I think we'll be able to train competitive openly licensed models regardless. So I think the trends are pointing in the right direction.

Kent, thank you so much for taking time out today to talk about this topic, talk to us about the company, and deal with some complicated questions. Thanks for all those insights, and I would love to chat with you again soon.

Thank you. Thank you.