Hello, everyone. I hope you're having a good EMF. This talk is called The Future of Invention; it's largely a talk about artificial intelligence and machine learning, and how we can apply them to invention. Invention is one of those functions of human endeavor that is becoming infiltrated with data. Everywhere that an analog human used to make decisions and analyze information by hand is increasingly becoming data-driven. That's happening everywhere, but invention seems to be one of the last bastions where people still picture a genius in a shed coming up with magical ideas from who knows where. In fact it's a deterministic process, and the tools emerging to help creative work in art, writing, and music can also be used to augment technical, inventive creativity. I think this is going to have a huge impact on a massive range of fields, really all of them, and the implications of this change could be quite dramatic. It's a great time to think about them now, right at the cusp of these changes occurring. I'll start with a quote from Marc Andreessen, who coined the phrase "software is eating the world" around ten years ago. What he meant is that every area of human endeavor is affected by software, and that part of it is automatable by the software and algorithms we already have. Every area of industry, of economics, and so on will have some part of it automated by software; a lot of activity that used to be done manually will become software-driven. But now we have a new phrase: AI is eating software.
This is being driven by three converging factors: increasing amounts of data collected in every corner of our lives, increasing availability of compute power (the image here is of a supercomputer, I think one that Google uses), and the increasing availability and sophistication of the algorithms we use to process that enormous amount of data on that enormous amount of hardware. Those three things coming together mean this revolution is happening in many, many different fields. To quickly recap what a machine learning algorithm does, it's summed up by this image: the classic example of image recognition. You have a thing with really variable presentation, a dog. Before machine learning, you would have to write very explicit rules for how to recognize a dog within an image. With machine learning, instead, you give the machine a ton of data with labels saying this image contains a dog, and this image contains a dog, repeated millions of times across all the different kinds of dog images, and the system learns what a dog is, and what a cat is, and how to differentiate between them. And we can teach it a lot more than dogs and cats; it can learn to recognize and distinguish all sorts of objects within images. Beyond the applications we already know about, the algorithms shaping social media and many other areas of life, we're also seeing really big positive changes that are going to completely change the way medicine is delivered, with automated cancer diagnosis, for example; and not just cancer, this is already being applied to many other fields of medical diagnosis. And driverless cars, well, we're sort of getting there, right?
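The core idea, learning a category from labeled examples rather than hand-written rules, can be sketched in a few lines. This is a toy nearest-centroid classifier over made-up feature vectors; the features, labels, and numbers are all invented for illustration and bear no relation to how a real image model works:

```python
# Toy illustration of learning from labeled data: instead of writing
# explicit rules for "dog" vs "cat", we average labeled feature
# vectors and classify new inputs by the nearest class centroid.
# All feature values here are invented for illustration.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_examples):
    """labeled_examples: list of (features, label). Returns label -> centroid."""
    by_label = {}
    for features, label in labeled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Pick the label whose centroid is closest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Pretend features: [ear pointiness, snout length]
examples = [
    ([0.2, 0.9], "dog"), ([0.3, 0.8], "dog"),
    ([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
]
model = train(examples)
print(classify(model, [0.25, 0.85]))  # a dog-like input
```

The point is only that nowhere in the code is there a rule saying what a dog is; the boundary between classes comes entirely from the labeled data.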
It's always a couple of years away, but the capability is certainly getting there. And spoken, conversational user interfaces: I think probably everyone in the room has spoken to an Alexa or a Google Home by now, and this is completely changing the way we interface with devices. It's all driven by the same fundamental capability: recognizing complex, varying patterns that represent the same thing, whether the same words or the same objects in images. This image is from the movie Her, which is a great film about a very advanced spoken user interface. Now, this recognition step obviously has nothing to do with using the machine to create. It's a great tool for automating things we can already do ourselves. But the model, and I'm going to focus on images as the demonstration example, though all of this applies to words and other sources of data too, has learned a huge number of connections and relationships between the different types of objects in the images it was fed. It has absorbed an enormous amount of information to be able to tell you this is a dog, and this is a cat, and this is a light bulb, and so on. We can throw an enormous amount of compute at these models and make them hugely complicated, but figuring out what the model has actually learned, and benefiting from that in a way more useful than classifying dogs and cats, is a really recent, fast-advancing field. If we essentially turn these recognition algorithms on their heads and get one to generate the doggiest image it can possibly imagine, this is what comes out: dogs made of dogs. It really shows how the algorithm deconstructs what a dog is.
You've got fur texture all over it, and ear structures and eyes and noses taken apart like a Picasso. This gives us an insight into which features the algorithm is actually looking for when it recognizes a dog. But you can get it to generate one image, so why not a slightly different image? Why did it turn out like this? That's a really big question: what's the best way to communicate this hugely complicated model to a human, in a way a human will understand? A slightly better approach is to represent the model's entire relational space across many images. What you're seeing here is that generated image, but regenerated tens of thousands of times for the different things in the algorithm's recognition database: everything it can possibly recognize, with the most representative image of each category. You get this atlas, known as an activation atlas; the link is in the bottom right. I highly recommend distill.pub, it's absolutely amazing for understanding the inner workings of machine learning systems, all open-access papers, so do read through those. This gives a two-dimensional representation of everything the algorithm can recognize in image space, and if you zoom in (this version isn't dynamic), each of these is one image, and together they represent the machine's learned space. There may be really interesting things in here that the algorithm has learned and that a human would not have known to look for. So how do we get the algorithm to give us something that's going to be useful? If we want it to be creative, to spark something, how do we get it to pick?
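The "doggiest image" trick is usually called activation maximization or feature visualization: you freeze the trained model and do gradient ascent on the input pixels, rather than the weights, to maximize one class's score. A minimal sketch with a hand-written linear "model"; the weights and step size are invented, and a real network would need a deep learning framework:

```python
# Activation maximization in miniature: keep the model fixed and do
# gradient ascent on the INPUT to maximize the "dog" score.
# For a linear score s(x) = w . x, the gradient w.r.t. x is just w.
# Weights and step size are invented for illustration.

w_dog = [0.5, -0.2, 0.8, 0.1]   # pretend "dog neuron" weights

def score(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

def maximize_activation(w, steps=100, lr=0.1):
    x = [0.0] * len(w)               # start from a blank "image"
    for _ in range(steps):
        # gradient of (w . x) with respect to x is w itself
        x = [xi + lr * wi for xi, wi in zip(x, w)]
        # clip to keep the "pixels" in a valid range
        x = [max(-1.0, min(1.0, xi)) for xi in x]
    return x

doggiest = maximize_activation(w_dog)
print(doggiest, score(doggiest, w_dog))
```

The input saturates toward whatever pattern the "dog" weights respond to, which is exactly why real feature visualizations come out full of fur textures, ears, and noses: those are the patterns the unit was trained to fire on.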
You have to constrain it in a way that gives you something you didn't know you were looking for, which is a really difficult task. Stepping away from art for a second, these algorithms are also being used in music and writing. As an example of constraining the machine, Botnik was trained on all the books of Harry Potter and asked to generate additional chapters. Mostly it doesn't really make sense, but what it does learn is the style and the connections, and it actually does quite well at picking up the relationships and the personalities of the different characters in the story. Of course, to a human this is funny; what it generates doesn't really cohere. But that is also a kind of creativity: it could be a new way of writing that a human wouldn't have considered, maybe for good reason. And just to show there's a lot more going on than what I'm showing here, there was also a musical written by an AI. It received two-star reviews, but it happened. A lot of people are chasing headlines, trying to be the first to use AI to do X, and there are a lot of mechanical Turk humans doing a lot of work under the hood, because these systems can create interesting fragments but usually can't sustain a coherent overall arc. AIs are not composing whole sonatas yet. Some more examples from art, because this was a recent development: DALL·E 2, an algorithm by OpenAI, is an example of how you can constrain the algorithm to take that representational space and output something that is not exactly what you asked for. It fills in some of the gaps.
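Predictive-text tools like Botnik's are built on essentially this idea: learn which words tend to follow which in a corpus, then sample. Here is a minimal word-level Markov chain sketch; the two-sentence "corpus" is obviously invented, standing in for the Harry Potter books:

```python
import random
from collections import defaultdict

# Word-level Markov chain: record which word follows which in the
# training text, then generate by sampling from observed successors.
# The tiny "corpus" below is an invented stand-in for a real one.
corpus = (
    "the wizard raised the wand and the castle shook "
    "the wizard smiled and the owl flew over the castle"
)

def train_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)   # duplicates preserve frequencies
    return chain

def generate(chain, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:        # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = train_chain(corpus)
print(generate(chain, "the", 8))
```

Like Botnik's chapters, the output is locally plausible (every adjacent word pair really occurred in the corpus) but has no overall arc, which is exactly the failure mode described above.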
So you constrain it: "an astronaut riding a horse in a photorealistic style", and this is what it outputs. A completely automatically generated image; we've just given that prompt to the trained model. Another example here: "a bowl of soup that looks like a monster, knitted out of wool". You get these amazing outputs where you never explicitly said the monster should have horns, or that the bowl should be resting on wool; it has created those extra details, filling in the gaps you didn't specify. That might actually spark some really interesting thoughts about how to compose your images, and in the domain of AI-generated art it makes you think. It is a kind of creativity, although you're still giving it top-down direction about what we would find funny or what would make a good image. Images are a really good example implementation for AI because you have huge datasets with tons of labels, since everyone comments on their photos, and that gives these systems enormous amounts of material to learn from. But if we want the machine to generate art by itself, we don't really have any constraints, because what makes good art is a very difficult question to answer. It's very difficult to get a machine to give us something we would call interesting, creative, art, without input prompts. So you're always going to have human-derived input constraints, and there's a collaboration space between artists and the AI tools they're using. But it's certainly giving us a new way to create; it's a creativity tool. And there's a whole conversation going on right now, particularly in the AI art space but really everywhere these tools are being applied: is this really art? Is it really being creative?
What does creativity mean to us? Is it going to take over the art industry? Arguably there's a huge buzz around it right now, with a lot of money going into AI-generated art, and NFTs are part of that. I particularly like the headline that AI is blurring the definition of an artist. It's making us reconsider what an artist does and the ways we can be creative to create art. But for now, at least, the artists are safe, and this is just another tool we can use. To summarize that section: the prompt is always going to be human, unless you have constraints and success criteria that are objective and that the machine can learn. And tools that give us exactly what we ask for are never going to help us be truly creative in a new way, because we're still feeding our human creativity into the tool. So can we imagine a system that gives us something actually unexpected, with minimal or no human direction? That brings us to games. Games are a really good example of exactly this, because you have clear, unambiguous rules the machine can learn from, and clear, unambiguous success criteria: you know when you've won. The third thing games give you is that you can build a database of played games without humans being involved at all. The way AlphaGo was trained is that two AI players play off against each other, generating an enormous solution space of possible combinations of moves and strategies, and that space can go way beyond what humans have even considered as plausible moves or actions. There are strategies that haven't been discovered in games like Go simply because the number of combinations of moves is so large.
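Self-play as a data generator can be demonstrated with a much smaller game than Go. In this sketch, two random players play one-pile Nim (take 1 to 3 stones; taking the last stone wins), we tally which opening moves led to wins, and the statistics alone recover the theoretically correct opening, with no human games in the data. The game and all parameters are my own toy choices, not how AlphaGo actually works (it combines self-play with deep networks and tree search):

```python
import random
from collections import defaultdict

# One-pile Nim: players alternately take 1-3 stones; whoever takes
# the last stone wins. From a pile of 5 the theoretically correct
# opening move is to take 1, leaving the opponent a multiple of 4.

def play_random_game(pile, rng):
    """Play out one game between two random players.
    Returns (first_move, winner); winner is 0 for the first player."""
    player = 0
    first_move = None
    while pile > 0:
        move = rng.randint(1, min(3, pile))
        if first_move is None:
            first_move = move
        pile -= move
        if pile == 0:
            return first_move, player   # this player took the last stone
        player = 1 - player

def learn_opening(pile, games, seed=0):
    """Self-play: tally wins per opening move, pick the best win rate."""
    rng = random.Random(seed)
    wins = defaultdict(int)
    tries = defaultdict(int)
    for _ in range(games):
        move, winner = play_random_game(pile, rng)
        tries[move] += 1
        if winner == 0:
            wins[move] += 1
    return max(tries, key=lambda m: wins[m] / tries[m])

print(learn_opening(pile=5, games=20000))  # the tallies favor taking 1
```

No human ever told the system that multiples of 4 are losing positions; that structure emerges purely from machine-generated games, which is the same property that let AlphaGo find moves outside human practice.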
It's something like two times ten to the 170 legal positions: a very big number of possible configurations within the game. So we can use AI to search this possible strategy space much more efficiently and feed what it finds back to us. In the matches against Lee Sedol, the world champion in Go, there was a particular move, move 37, that completely changed the game of Go. The human commentators saw this move and were completely dumbfounded; they didn't know why it would have been made, and they said no human would have played it. Nevertheless, looking back at the game, that is the one move the analysts say gave the AI the victory, and it really shows the power of AI creativity. So, coming on to invention: why do we need new creativity tools? We already have ways to innovate, by applying things we know to new areas and by finding new technology for existing problems. These are the kinds of moves humans are quite good at, and in fact more and more patents are published every year. So is there really a problem? I found this graph while researching this talk, and it's really quite stark. It shows all patents published from 1835 to 2015, analyzed by the categories, the domains of human knowledge, that the inventions draw on. The black line going up at the end is refinements: taking an existing technology and adding something to it, improving it a little, tweaking it. The red line that rises and then crashes towards 2015 is combinations: taking two separate domains of knowledge, or two separate things, and combining them in a creative way that creates something new.
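The order of magnitude is easy to sanity-check: each of the 361 points on a 19x19 board is empty, black, or white, so there are at most 3^361 configurations, about 1.7 x 10^172, of which roughly 2 x 10^170 turn out to be legal positions. Two lines verify the exponent of the upper bound:

```python
# Upper bound on Go board configurations: each of the 361
# intersections is empty, black, or white, so at most 3^361.
configs = 3 ** 361
print(len(str(configs)))  # decimal digit count, i.e. the exponent plus one
```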
The two other lines, which stop around 1880, are originations and novel combinations: the totally new things. We're not getting much new physics; those are long gone. We can still have new combinations, but at the moment they're nosediving. Why is this happening? First, we have a growing burden of knowledge: around 2.6 million scientific articles are published a year, growing at roughly 4% annually. That is far too much information for a human to process. Second, humans are bad at thinking outside the box: we've had thousands of years of playing Go, and an algorithm was still able to show us a better move. And third, humans are really bad at thinking outside their own field. Most engineers have trouble keeping up with what's going on in their own field, let alone with the idea that something in someone else's field might be relevant to theirs, because the amount of noise there is much, much higher. That kind of cross-domain curiosity is hard to sustain in a systematic way, because you have to spend all of your time being an expert in your own field; you don't have the time to be an expert in several. So we really do need new tools to solve this problem and reignite our inventive, technical creativity. Can AI actually be applied to invention? We have constraints: the patent system states that any invention has to be novel; it has to have an inventive step, or be non-obvious, which is a slightly difficult one to encode, but not impossible; and it has to solve a problem, so it has to have value and address the details of a problem. We also have data: textual data about the technology spaces that exist.
All of this research information, those 2.6 million documents being published, can be processed and categorized, recognizing the different fields, and we can also analyze the patent database: we know which patents have been rejected and which have been accepted. So we can use this to train models. How do we use them? Can an AI take what we vaguely tell it we want and fill in the gaps with something sensible? Can it take what we know and tell us what we should want? The question of how to apply this is not straightforward either, and the trick is to get it to give us something that is useful and not just unexpected. I can also say that this is already happening: there are patent applications that have been filed where an AI is claimed as one of the inventors. DABUS is a system with, I think, two or three patent applications filed in many, many jurisdictions, with an AI inventor in collaboration with human inventors. The legal system is keeping up with this too, and we're looking at new ways to define invention and how the patent system should change to cope once we start to augment or even automate invention. You can imagine ways to approach these tools once you have them. You can say: I have a problem, what might help me solve it? As a non-expert in a particular field, you might know the problem, but you might not know all the different things that could be applied to it, so this can help you explore. Going back to the relational space we saw with the image generation at the start of the talk, we can explore that space of connections by giving it prompts: a problem, a product you have, or perhaps a technology you've developed for which you're looking for new problems to solve.
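A crude version of surfacing connections between fields is plain vector similarity over text. This sketch builds bag-of-words vectors and cosine similarity with nothing but the standard library; the three snippet "documents" are invented placeholders, and a real system would use far richer representations than word counts:

```python
import math
from collections import Counter

# Bag-of-words cosine similarity: a crude stand-in for the text
# analysis that links research across different fields.
# The snippet "documents" below are invented for illustration.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "battery_paper": "solid state electrolyte improves battery energy density",
    "drone_patent": "drone flight time limited by battery energy density",
    "textile_paper": "woven fabric tension controls knitted garment stretch",
}

query = vectorize(docs["battery_paper"])
ranked = sorted(
    (k for k in docs if k != "battery_paper"),
    key=lambda k: cosine(query, vectorize(docs[k])),
    reverse=True,
)
print(ranked)  # the drone patent shares vocabulary with the battery paper
```

Even this toy version connects a materials-science result to a drone problem through shared vocabulary, which is the kind of cross-domain link a single-field expert is unlikely to go looking for.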
Or, in the future, perhaps you say: I have a business with many technologies, many products, and many services. Can I have a tool that automatically keeps me up to date and tells me all of the relevant technology, everything happening around the world, that links into the work I'm doing? This is the only branded slide in the talk; it's from my company, Iprova, and these invention tools exist today. We use these programs to sense the technologies emerging around the world, make connections between them, and spark new ideas in technical inventors, who can then take that and make the next generation of inventions: combinations more creative than what a typical engineer within a single domain would be able to make. So this is where we are: at the moment we're giving humans information and ideas, sparking insights in the human mind. What the field is moving towards is helping to create the inventions themselves, done collaboratively, AI systems together with humans. Will this move on to fully automated invention, or will there always be human direction at the top? That's still an open question, but it's definitely the direction of travel. And this is fundamental, right? This is a tool for creating tools. Every invention we create has a function that changes people's lives, ideally improving them. When anyone can bring all of humanity's technical expertise to bear on unsolved problems, and perhaps 3D-print any new idea, then you have completely democratized invention, which is a really exciting goal to be aiming towards.
Of course it won't necessarily happen exactly like this, but that's why it's really important to look at this direction of travel now and think about how we want to implement these tools. Should patents protect these creations? I know many law firms are calling for specific legal definitions for such patents, with perhaps a shorter protection time frame. If you start to generate patents and inventions much more easily, that makes them somewhat obvious, at least obvious to a machine, or easier for humans to make, so arguably the protections should be weaker: perhaps a couple of years, or five years or so, instead of the 20 years that is the current standard. What we do know is that AI will eat the analog inventor, and these tools are going to impact our lives in a lot of ways. So that's it. Thank you.