Welcome. I have something that I would like to share with you before we start. "I have been a great city, spinning in shouts. The sound of the road washed away. The mountain passes through. The streets are gone. The silence is raining. It sits in silence, glint its own." Who in here thinks that this poem was written by a human, and who thinks an AI? Raise your hand if you think it was an AI. About 25%. Those of you who raised your hands were correct. And in fact, deep learning has gotten so good at generating text descriptions from images that researchers are now starting to create pretty remarkable things. So with that said, we find that creativity is not only for humans. What do you think? What role will AI have in the fields of art, design, and creativity, and what expressions of that do we see today? Yeah. I mean, since I run a startup called Creative AI, we see it in almost any creative field. AI is a broad technology that touches many facets of society. It touches creative expression, from music being generated or semi-generated through interaction with machine systems, to art and visuals. We do a lot of visual design, and we just launched our first visual generative design tool, Bloma.ai, here at Slush. So that's visual design, which is already touched by AI. Do you have a specific example that you could mention, perhaps from music? I mean, I'm very much fascinated by finding expressions where it's human-in-the-loop AI. Where it's just the click of a button and you generate something, that's not very interesting. I mean, even creativity is, I think, not really an interesting concept. I know that's maybe a bit challenging to say. I think what's more interesting is what is actually happening. In the art space especially, there are a lot of people trying to work out an interesting symbiosis, where AI is just seen as another tool for creative self-expression.
And then there are too many examples to mention. There's Mario Klingemann, a German artist who just won the Lumen Prize Gold Award in London. Very prestigious work, generated with a bunch of AI models slowly building up the art piece in complexity, like a painting. That has been quite interesting, both from a technical perspective and from a visual perspective. And then we also have the art piece that was just sold at Christie's for $432,000, a Renaissance-style piece created from a data set of, I think, 15,000 images. So what does it mean when an AI has now been recognized as an artist? Do you think the AI is an artist itself, or where does the human end and the AI start? Well, first of all, hello. This is fantastic. It feels like I'm on a sort of Fortnite level. It isn't. It isn't. Yeah, it's crazy. Where does the, well, I should say that I'm a bit of a fraud being up here, because my colleagues who actually work on art and machine intelligence at Google, Kenric McDowell and others, should really be the ones representing and talking about this. But the sort of thing that I'm interested in and try to push on, to come back to your last question, is: where does the human end and the machine intelligence begin? That's certainly very pertinent to what I'm trying to do, which is looking much more at human augmentation, or human intelligence augmentation. Human in the loop, as Roelof said. Looking at the ways in which computers pursue solutions, particularly using learned systems, machine learning, those sorts of things, and how that can augment the ways that humans pursue solutions, or novelty, or, you know, interesting things.
And so looking at that as a partnership, rather than looking at the AI as an othered creature, which I think is part of the problem with the popular conception of AI, from the movies and also from the tech industry: creating these othered creatures that are meant to replace you, or be different from you and that you address, rather than thinking of it as an exoskeleton, or a kind of suite of tools that the human employs to explore new possibilities. And I think that's where AI and creativity does become exciting: when it becomes something that human intuition can use to explore things that are outside of human intuition. And I think that's what learning systems, AI, machine intelligence, whatever you want to call it, can help the human do. So I didn't answer the question about the auction; I'll leave that to you. But I do want to pick up on what you said about having AI be an active collaborator, or how you can use AI as a tool in your own creative process as a designer, as an artist, or even as a data scientist. What might that look like? I think, I mean, AI, as we all know, and luckily there are some talks around this, is inherently dependent as a technology on the data you feed it, and is inherently biased, right? And we should acknowledge that. In the right place, actually, bias can be a very positive thing. You can train an inherently, exceptionally stupid system to do just one thing really well, which then, in a particular context, can actually be exceptionally creative and useful. But yeah, like you said, I think what is interesting is more seeing AI and people, artists, designers, as a symbiotic relationship, right?
More as another tool, which can be very smart, so it almost becomes an active participant in your creative process as a designer or whatnot, where agency is potentially negotiated between you as a designer and a smart agent, or some type of intelligence. There's this concept, which we discussed shortly before, of the centaur versus the butler. The centaur is this concept from Garry Kasparov, right? When you had chess computers, and people were basically being beaten by chess computers, and now with AlphaGo, et cetera, the interesting next step is finding ways to work together with computers, as a sport, this kind of centaur principle. So I think that's a much more humanistic, interesting way of working with these things. Yeah, I mean, a couple of examples. This isn't really art or cultural products, but recently colleagues in Google's Accelerated Science program in Google AI, working with fusion engineers, produced a system called the Optometrist. And it's called that because it's a bit like when you go to the optometrist and they drop the lenses in and go: is it better like this, or is it better like that? Is it better like this? Is it better like that? That's what the computer system can do: it can generate so many different possibilities, put them in front of the human expert, and they can go, oh, it's better like that. And so they can follow their intuition through this kind of hyperspace of possibility, with the rocket boost of the computer system being able to run a simulation and give them the results incredibly quickly. So in terms of engineering, architecture, those sorts of things where you're looking for optimal solutions, this sort of centaur pattern, the new version of Steve Jobs's bicycle for the mind, is very powerful.
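The optometrist-style loop described here can be sketched in a few lines of Python. This is a minimal illustration, not the published algorithm: the function names are invented, a single number stands in for a full set of experiment parameters, and `simulated_expert` is a hypothetical stand-in for the human answering "better like this, or like that?".

```python
import random

def optometrist_loop(start, propose, prefer, steps=200):
    """Pairwise human-in-the-loop search: the machine proposes alternatives,
    and the expert only ever answers 'better like this, or like that?'."""
    current = start
    for _ in range(steps):
        candidate = propose(current)          # machine suggests a variation
        current = prefer(current, candidate)  # expert keeps the better of the two
    return current

random.seed(0)

def propose(x):
    # Perturb the current setting slightly; one number stands in
    # for a whole vector of experiment parameters.
    return x + random.uniform(-1.0, 1.0)

def simulated_expert(a, b):
    # Hypothetical stand-in for human judgment: an (unknown to the
    # machine) preference for settings near 7.
    return a if abs(a - 7.0) < abs(b - 7.0) else b

best = optometrist_loop(0.0, propose, simulated_expert)
print(best)
```

Because the expert only ever compares two options, the search follows human intuition without the human ever having to articulate an objective function.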
In terms of cultural production, often culture is not an optimization problem, right? Apart from perhaps for art dealers who want to optimize the price of things. Generally speaking, we are not entertained by the optimal cultural solution. We are entertained by things that are new, novel and exciting, or that recontextualize old things. And that is something which, at the moment, learned systems are not terribly good at, or do in very predictable ways. But in fact, this sort of centaur pattern with artists, musicians, sculptors, whatever it might be, is really a new material exploration for those creative people: a way to quickly look through these hyperspaces for the things that are interesting and exciting to them. It's not too different in some ways, but the human in the loop of the cultural exploration is looking for a very different thing. They're not looking for the optimal. They're not looking for the result. They're looking for something which speaks to them, and maybe speaks to a larger audience. It's like a possibility space, we like to call it. So AI can really help you have a possibility space, where none of the possibilities are necessarily even the end thing, but they can really help you as a kind of surprise search. There's actually quite a lot of work in evolutionary algorithms and genetic search that is even inspired by biological organisms finding surprises, what they call surprise search and novelty search. And that's a way of almost drilling down on, and again, I speak clearly from my own context, which is the design world.
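The novelty-search idea mentioned here can be sketched very simply: instead of scoring candidates by quality, score them by how far they sit from everything seen so far. A minimal sketch, with all names and numbers illustrative rather than taken from any particular library:

```python
import random

def novelty(candidate, archive, k=3):
    """Novelty = mean distance to the k nearest behaviours seen so far.
    'Behaviour' here is just a 2-D point; in practice it describes what
    a candidate *does*, not how good it is."""
    if not archive:
        return float("inf")
    dists = sorted(
        ((candidate[0] - a[0]) ** 2 + (candidate[1] - a[1]) ** 2) ** 0.5
        for a in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

random.seed(1)
archive = []
for _ in range(300):
    point = (random.uniform(0, 10), random.uniform(0, 10))
    # Keep only candidates sufficiently unlike everything in the archive,
    # pushing the search toward unexplored regions rather than an optimum.
    if novelty(point, archive) > 1.5:
        archive.append(point)

print(len(archive))
```

The archive ends up as a small, spread-out set of distinct behaviours rather than many near-copies of one "best" answer, which is exactly the possibility-space framing: a search for the different, not the optimal.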
So within design, where we always like to say we're the most creative profession in the world, a lot of our actual work is not very creative at all, unfortunately, and a lot of that kind of work can really be augmented by smart AI systems that help speed up the part which is not actually the real thing we're after when we're doing design, or when we're trying to be creative. So do you think that AI will be able to replace designers and artists? If not, what role would they have in the creative process? What work could you outsource to an AI, and what work would creatives do themselves? Well, I don't know. I think this dialogue comes, again, from thinking about the AI as an other that you outsource things to, that you give personhood to, and I think we're pretty far away from that in most cases, other than dressing up learned systems or complex systems with fake personalities. We're pretty far away from actually having an artificial other that we can ask questions of and have that sort of relationship with. But you don't think about outsourcing the job of going from A to B to a bicycle; you're on the bicycle, going from A to B. And I think this is where we're at right now with creativity, design, cultural output and AI systems or learned systems. But I think a lot of the work that's happening right now, crucially, a lot of the work of artists in particular, is to actually make the dynamics of these complex systems legible to a greater audience. And I think that's the main service I see: artists like Refik Anadol or James Bridle doing amazing work right now in terms of allowing people to have a way to speak about how these things fit into everyday life, and might fit into their jobs or their lives.
And in terms of AI and creativity right now, the artists are almost doing a better job than the technologists of creating a dialogue around these systems. I know that's slightly off topic, but I think it's also why they're deliberately playing with things like bias in learned systems: to talk about it, expose it, and create objects that can be discussed in a wider culture than in places like Slush. I think it's still early times, right? So historically, I would separate creative work into stages. First there were analog means, where things are analog and we do it ourselves, still with the help of tools, but non-digital tools. Then we have digital tools: desktop publishing, the camera, et cetera. Then we're at the third stage, and have been for a while, which is simulating the analog with the digital, the digital equivalent of the analog thing. And now we're in the stage where we're actually starting to build more intelligence, or agency, into those digital tools, where the tools become much more of an extension with a limited level of agency to them. Which is why we're having all the problems we're having, but also why it's so interesting. All the worry that AI is going to take over creative jobs comes from the fact that we don't feel there's control over these technologies. We don't understand them, but we have to be able to feel that this is actually empowering us. And we also have to be aware of the kinds of things we actually want to be doing. So this idea of augmentation versus automation, do we augment people or do we replace people, really drills down not to a technical question of whether we can do this. It should be more of a cultural question, or a political question: do we want to, right? And that's not an engineering question.
That's a question we all have to answer together. And what are some of the main challenges that you see right now with AI in terms of creative work? Well, yeah, I think that it's early, so there's not much control. It tends to be very much problem-driven and engineering-driven. And I say this because I studied anthropology. I'm a failed anthropologist; I studied anthropology for 10 years, and I was miserable at it, so I switched to computer science. But, yeah, I have no idea where I was going with this, so: it's early. Or other challenges that we see? I mean, because it's technical, it's engineering-driven, and it misses all the soft skills, and that's inherent in startup culture, where we have the problems we're having with gender inequality and all of that. I think it's an effect of being a tech bro culture which is very much engineering-driven. That's something we have to fix, and that can only help these systems and AI become more fine-tuned, or augmentative, to our own needs as a society rather than as a tech bro group of homies. Agreed. I think the other thing that's lacking right now, well, it's not lacking, we're right in the middle of it, but there's a sort of tool chain problem, which is that it's very hard for people who are not extremely invested in these sorts of skills to be able to use things like TensorFlow or other platforms to quickly make mistakes, and to do that on their own time and experiment on their own time.
And we're missing the kind of thing we had, if you think back 10 or 15 years, with the birth of intermediate languages like Processing, which was invented by artists, Casey Reas and Ben Fry and others, and platforms like Arduino, which opened up the tinkering aspects to new constituencies. That hopefully makes it more inclusive, but it also allows those folks to make mistakes on their own time, without having to ask the permission of others, or look to the time of engineers who perhaps wouldn't be interested in their ideas or what they're trying to express. And I think we're just on the cusp of seeing some really interesting work there. There's work from ITP in New York: a platform called ml5, building on top of TensorFlow to create these kinds of little libraries and sandboxes, so that people who are not computer scientists or AI-trained engineers, or familiar with machine learning techniques, can actually start to get at how this thing works as a material. And we've just been missing that the last couple of years. I think, you know, never make predictions on a stage, or a Fortnite level. But I think by the end of 2019, we'll see a whole bunch more creative middleware for people to start exploring the use of AI systems as a material for all sorts of different expressions. We have a question from the audience: what do you think of AI-based systems like Adobe Sensei, which help artists find assets and inspiration from a vast cyberspace? So... I haven't experienced it or used it myself. Yeah, I think it's great. You've used it? Okay. I mean, I think Adobe is really investing a lot in trying to find ways of using AI in their tools. The limitation there, I would say, is that a big corporate has its own customers to appeal to and cannot make radical changes.
But they are building a lot of these interesting things into their existing products. So, being able to spend less time searching through large volumes of images or stock photography to find the kind of content I'm looking for, with smart visual search. And there are some startups here at the booths doing some of that as well, I saw. I think that's a huge time saver. And it's great, yeah. Matt, I want to go back to your comment about the systems, because I know that you work a lot with the architecture of AI. What does that work entail, and where do you see some coming trends? Well, I would say this, wouldn't I, because it's what I work on, but I think one of the biggest changes we're seeing, other than the opening up of the tool chains to people who are not computer science professionals, is that we're seeing a rebalancing from the pattern of centralization that we've had over the last 10, 12, 15 years. The pendulum is not going to swing all the way back; it's still very useful to run stuff on other people's computers, in the thing that people call the cloud that actually isn't a cloud at all. But the pendulum is swinging back a little bit more towards decentralization, more towards AI being able to run where the action is: on device, in very small devices, in very energy-efficient, very fast ways, on new types of chips and with new approaches to running machine learning systems on device, which allows you to think about the architecture of how you use machine learning a lot differently. It can happen more locally. It can happen faster, with lower latency.
It can happen with more agency for the end user, the owner of the device. Rather than the old pattern of sending all of the data to some central location for the learning and the inference to happen, the inference can now happen where you are, where the action is happening. But the learning can happen there as well. And that's what our group works on, but it's prevalent now across pretty much all of the big players in AI: more of this kind of edge AI is starting to become possible, and become powerful as well. And I think that's really interesting, because it starts to make you think very differently about how you might build certain things, whether it's a creative solution or something more around productivity, or the environment, or whatever it might be. Again, it's like a new material, a new tool. But it changes perhaps your base assumptions about how you would architect something, and that's always interesting, to me at least. And I think it also connects to what you said earlier about play. If we go back to the idea that this is a technology which should be explored by as many people as possible, to find out what it can mean in a creative context and what it can do, then making these tools easier to use is one aspect, but making them usable in more, and easier, contexts, not only on a big massive GPU somewhere but actually on your phone, is a super important part of that. And it really goes back to the fact that a lot of the things we have right now digitally really come from experimentation, right?
So Xerox PARC, which invented desktop publishing et cetera, was famous for bringing children in. Alan Kay was a big fan of the concept of play: instead of doing user testing with fellow tech workers, actually having children come in and try things out, children who still have a mind unspoiled by what things actually could be. Being able to do that for AI, for large groups of people, I think that's the extension of this, and what this kind of technology definitely makes possible. When you ask what is going to happen, that's the progress I hope we'll see more of. I think the more constituencies can be involved in the generation of these things, the better. And right now we're just at this tipping point, from it having to be the big companies, not even startups, but companies which have very big data sets and very big compute facilities, to something which can now work on Raspberry Pis. And I think that at least gives me a little bit of hope for the next 10 years. And, you know, it means that there's going to be a lot of change, which I think is also incredibly exciting. I mean, there are two trends; we see both happening. As computer scientists, there's the trend where the biggest gains in deep learning are being made by just blowing up the models, right? Just making them bigger. With the largest networks now, it's literally an exponential graph of computation for really minimal gains in accuracy. So it's a really stupid way of doing computer science, but that's how a lot of these things work, unfortunately.
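Blowing up the models is one direction; the on-device AI discussed earlier pulls the other way, and usually relies on shrinking models first. One standard ingredient is 8-bit weight quantization, sketched minimally below with illustrative numbers and invented function names, not any specific framework's API:

```python
def quantize(weights):
    """Linear 8-bit quantization: represent floats as ints in 0..255 plus
    a scale and an offset. Storing one byte instead of a 32-bit float per
    weight is the kind of trick that shrinks a model roughly 4x so it can
    run on a phone or a Raspberry Pi."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0   # avoid a zero scale for constant weights
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    # Map the stored integers back to approximate float weights.
    return [v * scale + lo for v in q]

weights = [0.1, -0.5, 0.75, 0.33, -0.2]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

Each restored weight differs from the original by at most half a quantization step, which is why accuracy typically survives the 4x size reduction.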
But then there's the other trend, which of course we're a big proponent of, which is actually making things smaller and smarter. Yeah, yeah. Yeah. So how do you foresee AI being regulated in the future? Because you did mention that right now it's mainly the large corporations working with AI, but we're starting to see more people use it. Should there be regulation? And if so, what might that be? I can give a very short but maybe a bit radical statement on this. I think GDPR is one example of where we as a tech community essentially failed to hold ourselves to good standards, so that there is now regulation which forces us to do that, and then we complain that this regulation is too harsh on startups. And it's still a very weak form of it. The fact that people are allowed to take their data out of your system when they give you their personal data: what could be wrong with that? Maybe something for your business model, but it's inherently not a wrong thing. So I think the right kind of constraints and legislation can definitely help. Putting in as many constraints as we feel is the ethical and just thing to do, I'm a big proponent of that, because it also forces us to think about creative solutions, about the kinds of things we do want to have. This is where I should probably wear my "my views may not be representative of my employer" T-shirt. I should really make those; I'd make a lot of money. I think there are two things really. Or three things. One is that right now, the political class and the journalistic class are just getting to grips with the impacts of the changes in technology over the last 20 years or so. So they're running to catch up.
And I think, coming back to the point of artists and creative folks helping with the grammar and the legibility of these complex systems, that is essential. And then I'm sort of hopeful, because in other areas where engineering and science have had big impacts, we've learned to create ways that society, government and business can actually work safely. So I think thinking of it more like a complex engineering safety problem is helpful, because we have precedent there. We have precedent over the last 100 years of thinking about these things and looking at how these externalities play out. And right now we just seem to be at this moment where it's dawning on people that this is a huge problem with huge externalities, but in certain constituencies we don't have the language to have a good debate. I will need to cut you off there, Matt, but thank you so much, and thanks to all of you for coming. Thank you.