Nice to see you here today. Our next speaker is Tammy Lister. Tammy currently works at Inpsyde as a developer focusing on WordPress. She has a hybrid background across product, design, psychology, and development, and she currently contributes to WordPress as part of her role. She's passionate about the open source community and drinking tea. The talk Tammy will give today explores the power of generative styling. Editorial styling has made significant progress since Snowfall, which inspired Gutenberg. With these evolutions, you can automatically generate styles considering content, mood, or data input. As a result, styling has become more sophisticated and accessible, truly generative. We'll review the past, looking at generative art and the AI technology that followed, then explore the tools available today, discover the need for intent, understand what happens beyond simple prompt-based styling, and explore the true power of generative styling. Tammy, thank you very much. The floor is yours.

Thank you. First of all, I'd like to thank everybody who helped me get the slides up, in particular Bernard, who is basically the reason we have slides working today. It was a group effort. So, as said, I currently work at Inpsyde and I am a core contributor.

This is Snowfall. It was an inspiration at the start of Gutenberg, a benchmark for what we aimed to make possible to create. In many respects, we've got there. In some, though, we are now starting to see how we can move from building this block by block into a space where, simply by adding content, the style is generated around us: the blocks are created and laid out around our content, flowing around our words. But where did this all begin? What are the roots of things like generative art, and where are we going today? Can we even create a generative Snowfall today? And if we can't, how far off would that be?
So, for those of you who don't know, and those of you who do, this is the Game of Life, often simply known as Life. It's a cellular automaton, devised by John Horton Conway in 1970. It's a zero-player game, and the earliest versions of it didn't actually involve computers, which is quite fascinating. Many generative forms can be traced back to this and its algorithms, and it's always a good starting point, I think, when we begin thinking about this. But as I hope to share, it's really not the starting point we should be thinking about. The rules can be set to anything, but typically, as with most generative things, there need to be rules. So what are the rules in editorial content, and how are these going to be shaped? How do we start to discuss them? Are we having those conversations now? That's something I want to start as well.

This quote from Marius really is fairly key in thinking about generative art. It's a simple definition: creating a system. It could be quite wide if you take it literally, and often in the origins of generative art it is indeed much wider than you would first think. Some of the earliest known generative art is plotter art; that's what most people picture when they start thinking about algorithmic art. Although, as I'm going to share, the concept of generative art goes much, much earlier than that and predates the digital. This definition from Wikipedia goes a bit further, highlighting the concept of autonomy. Although there is, of course, a fine line between that and the now-standard thing we call prompts, which feed most generated systems. So a truly generative system should perhaps be truly autonomous, but then is that still generative? It's quite mind-bending when you start thinking about it. What would that even look like in the future?
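Conway's rules are compact enough to sketch in a few lines. Here is a minimal Python illustration (my own, not from the talk) of the standard rules, representing the board as a set of live cells: a dead cell with exactly three live neighbours is born, and a live cell with two or three survives.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True: a period-2 oscillator
```

Everything interesting emerges from those two rules plus the starting pattern, which is exactly the "set the rules, then let the system run" shape the talk describes.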
Are we entering the space of totally generated content and styling? In this talk the term generative AI, and in fact many quotes about it, will be used, but before I go too far I want to offer some rough definitions. Probably everybody is going to have a different definition of these terms. Generative AI uses machine learning to generate; this is often the AI of content, as shown here, and it learns by supervised learning. Then there are AI and machine learning themselves. I use the term generative styling in my title, and I will use it in this talk, because for me it encompasses everything from recommending colors through to creating colors, maybe based on photographs; it goes a step beyond generative AI and really into styling itself. At the foundation of all those terms, for me personally, is something incredibly personal, which is generative art. Because it's that foundation, I want to share some background before moving on to how, within technology, and maybe within the work we're doing in WordPress and in editing, we can start to use it. There's a whole history to discover and to gain and form insights from. It's also not just visual art, and I wanted to share that because code is poetry. Many have explored generative text, not just visual art or styling, and that's key to note: poetry can also be generative. So when we start to think about different types of content, we need to think beyond images. But before I move on, I wanted to set down two terms: AI and machine learning. These often get fused together and really fuzzy when people try to define them. Everyone has an opinion on what they mean, but roughly, AI is the broader field and machine learning is a subfield of it. That's the rough way to look at it, really distilled down.
It's really key to differentiate between them, and I think that's probably about the best we're going to get, for today anyway. I can't include every example, because the timeline is boundless. I am going to share a resource, and if you want to dive into the history of generative art, I would encourage you to, just to learn that foundation. There is a resource that catalogs it, decade by decade; I think they've got up to the 80s, and it's really interesting to move back and forth through it. I thought it was important to go as far back as they do, into the deep history, and they start here. When we use all those modern terms, the story actually goes back to cave drawings and the I Ching, which is the first form of binary if you think about it, up to Gottfried Leibniz's computational thinking in the 1600s, and on to the first permanent photograph in 1826. It feels far away, but some of these dates are quite recent when you think about them. The resource is down here, and I'll have a link to share at the end. I highly recommend understanding these roots, because you start to understand the perspective, how far we've come, and, when we're talking about all these terms, how new we still are in exploring some of this. A lot of this fueled the artists who took up that work; Vera Molnár, whom I'm going to share in a minute, was definitely fueled by it. Moving forward, Grace Hopper pioneers programming and the term AI is coined. It's further back than we maybe think; we use these terms and say they're new, and they're not new. And then we have Noam Chomsky, from whose work a lot of the language we use today comes.
It's worth calling these out because they are our roots: our language, our art, and the foundation of what we are building and using today. We just sometimes don't know it when we're using them. If we look at those dates again, they're not that far away. Generative art itself actually has roots in Dada and surrealism, which is interesting when you think about it. And it came from designers experimenting with analog devices and mechanical systems, not even computers. This is the early art history of it, and the person I keep referring to, Vera Molnár, actually had classical art training and committed herself to researching fine, infinite variations of geometric shapes and lines. She saw herself as the drawing machine, before there was a time of drawing machines. I love this quote where she literally says she has no regrets: my life is squares, triangles and lines. I think that's pretty amazing. And she kept creating. She then learned FORTRAN and BASIC to be able to do it with computers, which I think is pretty awesome. Beyond her work, others grew, from Manfred Mohr to Harold Cohen, who designed a system called AARON, which was meant to make art independently. And it did, and it didn't. These were the seeds. From them grew Flash art, glitch art, all the fun things we like to think were new and really weren't, and early web apps, on into NFTs and beyond; whatever you feel about them, they all have their roots and they all have their part to play. The language of machine art became more commonplace and accepted, and as it did, we kept thinking we had found something new, but we probably hadn't.
Before we look at the state of things today, though, I wanted to share the work of a few people practicing now whom I think are key to thinking about generative styling, in particular how we might implement it in editorial work. Because all of this history is great, but how does it apply to what we might create on an enterprise site, and how are we going to implement it? I've looked at the past, but how do we take it forward? The work of Jaya Taubo, beyond being an inspiration for my own work, includes generating things like iconography. If you think about some of the visuals we want to create, it's really inspiring to think about someone having endless inspiration. This piece is from a language that appeared in a game, Journey. And, like most of these artists, the source code is shared, which is something akin to our community in particular: sharing source code. You can go to these links, get it, and iterate on it, and that's something really cool about a lot of these pieces. This next artist combines both static and non-static background pieces. They are based on natural forms, but unnatural is the best way to describe them. It starts you thinking about generated backgrounds and how those backgrounds could respond to the foreground as well. Then Vicky shows the impact of technology on the process of painting. She has an ongoing series called Soft Body Dynamics, created using 3D software that just generates and generates and generates. She picks one of those generations and then, using oil on linen, she creates a physical piece from it.
So that kind of hybrid is probably going to apply to a lot of work that starts to happen: AI works, but maybe it doesn't quite, and then we need to adjust and iterate. She's using it as a source of inspiration to take into the work. Katrina also uses algorithmic art, going back to those roots a little, which I think is really interesting; again, the link is shared. Think of all the possibilities: starter content for your site, a random image, different versions of different products, all generated with algorithms. Her work has extrapolations you can build up from. So, most of today is prompt-based: input leads to output. We're fairly comfortable with that now. We often glamorize it far more than it deserves; it's quite simple, but it is just prompted. The action is started somehow, fairly explicitly, if you trace it back. And although limited, the chains that happen can be quite complex, and because of that complexity we can fool ourselves into thinking it's not prompt-based. Prompts fuel generative styling to create images, or to create outlines before you write. That doesn't mean they can't be incredibly refined; prompt writing is an art form in itself. But it's really just a response, a reaction to a prompt. Today, as I said, we are reactive. That work by Katrina is randomized; it can appear fairly interactive, but it's not. You can go quite far with these kinds of pieces, but it's not really where we thought we were going to go.
For example, if you were going to write a post on the top five holiday spots, you could change the background to different color combinations using prompts, or you could pick three photographs for each holiday spot as a gallery and feel pretty cool that you didn't pick them yourself. You could use a recommended font or readability combination. You can do all of those things today. But most of what we consider AI today is simply a workflow. It's not the AI we were promised by sci-fi and the movies. It's not really thinking; it's push and react, one step above stimulus and response. That doesn't mean it's not useful, or cool, or exciting; it kind of is. Workflows can free us from mundane tasks and empower us. They can suggest an accessible color scheme, or advise on the most legible font at certain sizes, and correct us when we're doing something we maybe shouldn't. The FT and many organizations like it are starting to publish guidelines, and these are really worth considering when we work with clients and form our own guidelines on how we're even going to approach these pieces and approach our AI. This declaration shows a team formed to explore and experiment, but with a focus on ethics. A note of caution is worth sharing from the same statement. People fall into different camps on how they feel, but it's worth noting: yes, this excitement is amazing, but pause. Think about this establishment's commitment to journalism and to its readers. They don't want any fall in that; they want to reflect on it and use it for a purpose. Wired has also published guidelines, and these are pretty good if you're looking for a basis to spin up your own for your organization, which I think is useful.
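To make the "suggest an accessible color scheme" idea concrete, here is a sketch (my own illustration, not from the talk) of the check such an advisor would typically run: the WCAG 2.x contrast ratio between a foreground and background colour.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour with 0-255 channels."""
    def linearise(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L_lighter + 0.05) / (L_darker + 0.05), from 1 to 21."""
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# WCAG AA asks for at least 4.5:1 for normal body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A workflow tool only has to compute this for each candidate pairing and warn, or recolour, when the ratio drops below the threshold; that is the "advisory guide" role the talk describes.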
Most of these guidelines revolve around the word "may", which I think is quite a curious word to use. And, in a fascinating alignment, they are not using stock photography, but are using AI to spark ideas, which I think is interesting. AI is that helpful friend, that spark, that ease of workflow, which I really think is important to start thinking about. Art and visuals, it turns out, are actually easier to automate than code. This is a real number: if you used AI to generate code, you would get this percentage of vulnerabilities. The article shown here describes an experiment, a test run, where this was found to be true. This means that generative processing makes a lot more sense for styling than for things like code, which I find interesting, because some of the applications we first thought of might not be the best uses of these technologies; it might be the other ones that turn out better. What can be done today falls into some simple categories we're pretty used to; they've become household things: visual, audio, and text. They almost feel mundane as I show them. I couldn't have shared them a year ago, and some of them felt quite flash and new then, but they feel quite normal now. These are generative applications, and they are growing almost daily in our acceptance of them and the normalcy they have. But what is the state of generative styling today? It really lives in content creating or constructing, and it comes down to roughly the following: typically it's surface-level. It's recommendations; templates might be recommended; it takes the role of an advisory guide. That's pretty much it. It's all light, pretty surface. What about the future? Do we get that sci-fi?
Well, one of the things we need to do in the future, probably the near future now, is be okay with accidents, which in turn means being okay with experiments. This generative form isn't going to progress with caution and worry. Of course we need to not create things that harm, but we need to explore, and we need to generate what-ifs. One of the key parts of generative art, and of art forms in general, is embracing happy little accidents. In the future there are going to be a lot more fun visual experiments that are probably quite challenging for us. We're probably not going to see, and I say this loving minimal design, the minimal monotony of past content; we're probably going to see some challenging visuals, some quite engaging things, generated for us. I personally think that's quite exciting: to really explore what could be. But being open to embracing those happy little accidents is going to take a lot from us, beyond our own personal blocks. You can think of this as beyond prompts, as maybe truly responsive. Of course, today there is likely going to be a prompt at the root somewhere, because that's just where we are with the technology. But there are artists like Jon McCormack who have been creating life forms that are impossible to create outside computers, and he creates them within them, and I think this is just going to be the norm: the things that couldn't exist outside are going to be created within. The next step is to think how you could go and do that. What could you create for your clients that couldn't exist before, and how can you start extending and utilizing that? We are seeing it with things like product combinations, and we can do even more. What if, instead of prompts, you provide content and the system creates? We kind of have this today, but what more could we have?
This is closer to true generation, or at least not prompt-based: pure content over explicit input. Let's take that holiday article again. You add the content and then, without a prompt, styling is applied that fits. It just fits; it just works for you. Now, the prompt has in a sense been the content, so we're kind of there now; that's roughly what we have. But if we go beyond that, maybe there would be an acknowledgement somewhere in the system that you don't need photographs, or that you need better photographs, and they would appear. It would know the number of photographs you need for the amount of text you have. And as that grows, it would know what your style is, or what the style should be for that article, and it would be able to recommend and pass that on. It spirals when you start thinking like that. It gets a little sci-fi: it might actually know what your style is and give those recommendations, but proactively, even seeking; it's really able to respond to you rather than stay passive. There's an interesting aspect to this: generative art is actually creating its own style. You can see this influence from hyper-real food to the washes and colourisation, the glitches and manipulations. Boris Eldagsen took this a step further and won a Sony World Photography Award. The photograph he won with is not real; he sent in an AI image and won, and when he won he said, oh, I applied as a cheeky monkey, in this article, literally, just to see if it would be accepted. It was. So it raises a few questions, particularly about Boris, but also about what art is, what the art form is, and where inspiration comes in. We can see three hands.
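As a toy sketch of "the content is the prompt" (my own illustration; the talk names no specific mechanism), styling can be derived deterministically from the content itself. Here a small analogous-hue palette is seeded by hashing the article text, so the same article always gets the same styling without anyone writing a prompt.

```python
import colorsys
import hashlib

def palette_for(content, n=4):
    """Derive n analogous hex colours from the text itself.
    The same content always yields the same palette: no prompt, just input."""
    digest = hashlib.sha256(content.encode("utf-8")).digest()
    base_hue = int.from_bytes(digest[:4], "big") % 360 / 360
    palette = []
    for i in range(n):
        hue = (base_hue + i * 0.08) % 1.0              # step around the wheel
        r, g, b = colorsys.hls_to_rgb(hue, 0.55, 0.6)  # fixed lightness/sat
        palette.append("#{:02x}{:02x}{:02x}".format(
            round(r * 255), round(g * 255), round(b * 255)))
    return palette

print(palette_for("Top five holiday spots"))  # four deterministic hex colours
```

A real system would obviously analyse meaning rather than hash bytes, but the shape is the point: the content, not a typed prompt, initiates the styling.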
There are some obvious anatomical issues with this if you look further, but you don't, because you accept it as an art form, you accept the visualisation. We've grown to appreciate photography as an art form, so there's a lack of questioning of photography as an art form, which is curious. Generative art is creating a style of its own, because this isn't real. There's something to be said for that, and then there's the circling and repeating: people creating in the style of generative art. You've seen people create filters and apply them to normal, non-generated photographs, making generative hyper-real images out of photographs that weren't. There's a whole cycle going on. Casey Reas has an interesting take on what lies beyond the input and into the system: the system as the artwork. This then, in some sense, becomes the style, and influences and evolves; and this is one of the Processing co-founders, so someone who really knows generative art forms from their origin. Beyond prompts is really what we should be considering. What lies there when we let true generation happen? At what point do we stop prompting and really start those conversations? Or stop the conversations and just react; don't ask, just see. Beyond prompts is making connections between styles and between content, and then suggesting what style fits. This is where we go from the template to the first filled-out draft, and then more and more, and we start with less uncanny valley. We shouldn't think of AI as human, either. We tend to think of it as cute, and that's probably, in fact definitely, not a good idea. The inputs of now are going to create the reactive generations of tomorrow, maybe from profiles; I hate that word, but I'm going to say it anyway, it's a really challenging word. Don't think of it as human; thinking of it as human is a slippery slope.
So this article shares that Dickens quote, that it's either the best or the worst of times with AI, and I think that's really important to think about, depending on your perspective and where we go with this. A big factor in what will or won't happen is hardware advancement. It's not even that devices are going to be smaller; they could be bigger. We might need bigger screens, not smaller screens, for all of this. We're going to need AI tooling in partnership with hardware to really get there. These tools need to be accessible enough to allow advancement, attractive and interesting enough to pique curiosity, and functional enough to make economic sense to create with. Even the best tool, if it isn't cost-effective, is going to diminish; people need to iterate with it and use it. The key part, at least for the foreseeable future, is going to be the role of the human as the editor. That isn't changing with near-future or far-future tooling, as far as we can predict; it certainly hasn't up to this point. Even with the most dreamlike, far-from-prompting systems, the content produced needs some editing, some adjusting away from the hyperreal, in order not to hit that valley. And often, when we think about it, algorithms are part of the conversation: fractal art, for example, is often not even considered generative art, because the computer does all the work. So where do we draw the line, and how do we determine what is actually generative or not? What happens when the computer does it all? I don't have answers, but these are the kinds of questions we really need to start asking ourselves, asking in our companies, having policies for, and then asking collectively as a community: what do we want to create? This robot isn't real. But there was a robot like this in 2010 that brought up the question robots and AI so often bring up lately: the uncanny valley.
That's when our brain sees something that is almost too real: we question it, and we have a lot of problems with it. Generative art dances around the uncanny valley with what it creates. We fear something robotic but too human, yet we also attach humanization to it, cutify it, make it less scary, from cute logos for AI (how many AI products have cute logos? most of them; that's just what we do as humans, and we probably shouldn't) to faces on robots that don't need them. Robots do not need faces, but we put faces on them. Adorable, right? There is a danger in that valley, though. We need to be aware of it, and as we create products and build generative experiences, be aware that trust is key, and broken trust is really difficult to gain back. If the system starts doing too much before that trust has been built up, before we have that bond, and when we're dealing with different human generations of trust, that's a whole different conversation. We don't do well as humans with replicants; we don't want to be replaced. We don't do well sitting in that valley. But could AI even make us care more? Could it make us focus on what's happening more? Could it help us pay attention? Could it take up some of our burden? Could it make everything better, and could it enhance skills? Could it free us from those mundane workflows and really open us up to learn new skills? My feeling is it probably could, if we allow it to, but there are a lot of conversations we've got to have to be in a space to do that. We need to be aware that we have to let it take some of those burdens, and be okay with that. It does offer us opportunities to analyze, document, and augment human intelligence, to raise awareness for generations to come. But perhaps that's going to be really hard for us, because it means offloading some of what is us to it.
We are stuck in that gap where the computer is primitive today but where we might end up with something different. It's all uncertain, and AI isn't human, and shouldn't be thought of as human. The pivot to learning, to really getting to that point, is key here. What is the intent? What are we going to use it for? That point of intent is incredibly important: trust and intent. Knowing the intent, and trusting that intent, is going to be critical moving forward. There are certainly lessons from history. Where we are going can get strangely sci-fi fast, really hand-wavingly predictive and vague, and everyone has their own hopes and dreams and favorite science fiction. However, we need to ground ourselves in reality; everyone has a lot of life going on. AI shouldn't be human, and it isn't human, and AI probably does less in our day-to-day life than we think in some areas, and more in others; we should recognize where it does things. It truly can empower, it can lead to powerful creations and generation, and generative styling is already happening for us. We just need to be aware of where those workflows are and recognize them. Today, it's barely learning to crawl. That baby isn't doing very much, let alone walking. We have lessons to learn from generative art, because it's been around for a long while, and generative styling can go a long way. We have to experiment, and it starts with housekeeping. We're going to look back at today's workflows with the same mindset of primitive judgment we bring to those cave drawings I shared earlier, and it's going to form that timeline. In many senses, it's an exciting time to experiment and learn, considering how you can include and ease those flows. The things being built today are going to be the foundations of possibility. Right now there is often something a little off about these generations, and we can laugh and think they're kind of cute, but this is what's being generated today.
The system, though, is learning and improving with each of these generations. It doesn't see people; we can think it's adorable, but it doesn't. Editors need to edit out the flaws; that's the human interaction. Hopefully your summer holiday isn't this Frankenstein of body parts by the seaside; if it is, it's not relaxing, and I would encourage you to have a different summer holiday. That's what this was meant to be, and that's what the AI created for me when I asked for it in a prompt. I'm unsure it's relaxing for many in that picture. And whilst prompt-based fun is hilarious, and I do encourage you to have some fun with friends with prompts, it also teaches you about prompts, and it teaches the system, and we all evolve as we learn to do that. We are there already, though; we just don't always know it. And this quote, again by Marius, really sums up for me how what we think of as the future probably isn't the future, just as we think of generation as new and it truly isn't. Our human brains like to not always accept reality or history, and to think of the new as new, but the truth is that generative is here and we have to accept it. Now the question is how we want to use it, how we want to empower ourselves through it, where we want to learn from the past and not repeat some of those mistakes, and how we draw on the vast knowledge that is already there, because it has been around for a while and we just haven't really used it fully. So here are some of the resources; all of these images were generated by AI, so you can follow these resources, have fun with friends, and generate for yourself. And here's a QR code to get all those links, because I know that was a lot of quotes. I'm happy to answer questions. Thank you.

Thank you, Tammy. Questions, anyone? Well, that means you did a great presentation and answered every potential question we had today.
Thank you, Tammy. Oh, there's one. There's one. You can have a long question.

Yes. I don't have enough experience with this, but I don't understand what you mean by beyond the prompt. What is a prompt, and what does it mean to generate something without a prompt?

Yeah, so a prompt is, you generally say something and something happens; either you're physically adding a prompt or your code is issuing one. It's a chain reaction from something. Without a prompt, the system literally just creates; nothing would initiate it. That's why it gets very hand-wavy, but nothing would initiate it. So imagine a site exists, and this is where we're going to get a bit weird: a site exists, content gets added, and the system responds to every different bit of content that's added. At the moment you'd have to say, generate the styling for that; that's a midpoint, but that's the best way I can describe it today. At the moment we're very chain-reaction. Does that make sense? Yeah. And then it goes even weirder.

If we think of the possibilities of creating images or design with AI, where do you see the strengths and the use cases we might have today? For example, I love to create text with AI, though I learned from you that's not the best use case, but going into image and design is something I'm really eager about. Where are the strengths? Where are the weaknesses? What is possible today?

Text is fine. It's code it's not good at, things like code reviews. One of the good uses is text outlines. Things like Grammarly are really good: doing an outline, generate me an outline. There are too many different tools out there; why did I say Grammarly, oh my goodness, the outline tool, is it Grammarly Go or something? Things like that. Or create me an outline, and then you just fill the outline in. So good. It's again that workflow: take the stuff that fills my brain so I can do better stuff.
That's basically a good use of it. So, filling out outlines. A lot of the stuff that, back in the day, if you ever used Alfred or those kinds of tools, you would have as shortcuts and templates, key commands that would paste in templates; AI will do that for you. That's great to use today. Still not really AI, though; again, we're using a big term for a tiny little thing, but let's use that term. Those tools are great. Suggesting how you could use your language better, oh my goodness, so much better from a text perspective: parsing your content and suggesting how you could improve it. I use that all the time, and it is getting better and better and better for tone, for quality, all of that. It's generally just pattern matching at that level, so it's fooling you. Create a template. Image generation, 100% starter content. Find me multiple images that are like something. Recolouring en masse or stylising en masse, all of those kinds of things. I don't know if that answered it, but yeah. Tammy, we've been mainly talking about specifically asking AI for things, and we've already had a conversation about this, but what opportunities do you see in having AI give you recommendations, for a design, for example? You mentioned some examples, saying, hey, you are probably going to need five images for this length of text; generate those, and so forth. But what opportunities do you see in having AI act like someone in the background, as an agent or something? I would love that. Random thing, but I would love AI to give us the colour of the year from Pantone. The art nerd in me would be so excited if AI predicted it. It's just a random thing, but it also has so much knowledge that we don't always have. I remember studying art history, the books, having to study that content. It has access to all of that and is able to process it so well and give access to it.
So, from the advice and guiding perspective, yes, I think there's a definite guide from accessibility, from readability, from a business perspective: what content just looks better in these situations? What reads better in these situations? What reads better in these formats? All of those kinds of things, so legibility too. It has access to so much data that I just won't, or can only partially, know in my career. So things like that; recommendations are a big one. It sounds a small one, but it's a big one as well. Again, though, with all of that, you're still going to be an editor. People often say there's going to be no need for one, but I can't see, for quite a considerable time, humans not needing to be an editor. I don't think you're going to need to do input as much; it's going to do things, and "do things" is a big phrase, but you're still going to need the editing. I think there's going to be something of an art form in writing things. This is where it gets a bit weird, but if you think about it, people pay extra for craft; people pay extra for handcrafted objects. And you will still be able to tell what is AI for quite a considerable time. Three hands, weird hands; it can't do hands. It's going to take a long time before it does hands. See the baby. But also, I think people are still going to want that. Yes, you can get your article styled, but people will still want a handwritten article. The art of journalism, and it is an art form, is not going to go away, right? The art of these things. Is it going to narrow? I don't know. But I think there's still craft in these forms. As someone who has done art, I am not scared.
I am still wary, because I'm a human being and all human beings are wary, but I think about it being able to come up with combinations that I've never thought of. I am so opinionated on colours, for example, and I'm getting quite away from your question, sorry, but I'm really colour-sensitive, so I'm super challenged; I would love it to challenge me on colours, or challenge me on layout. Some of the things the AI generated, I was challenged by, and I loved that. So yeah, I want AI to challenge me. Anyone else, or shall we close the session? I think everything was quite clear from this explanation. You know how to find Tammy if you have further questions. Tammy, thank you very much for this amazing session. Thank you.