Every now and then it's so nice to be at innovation conferences, and I happened to be at one last year in Telford, which is near lovely Birmingham, near lovely sunny Birmingham, and there I ran into the people of Harper Adams University. They had won an academic innovation prize for a concept called the Hands Free Hectare, and the Hands Free Hectare is an autonomous farm. So this is a hectare, as you can see, roughly, and it's absolutely strictly forbidden to get in there. Nobody is supposed to be on that field, right? And it completely takes care of itself. It has been unattended now for, I think, almost three years. So they're growing crops over there, they're harvesting, they're taking care of it, but all of it is done without human intervention. It's all done by autonomous systems. And it's a very interesting project that has been going on; it's so successful that they're now scaling it up to seven times the size, and they're starting to call it the Hands Free Farm. So they're doing much more now, quite a success. And the thing is, if you look at it, you see all sorts of autonomous devices, as you can imagine. It's relatively safe to have these things on a Hands Free Hectare, as they're not likely to bump into anything, so it's a nice place to experiment. They're working on the crops, and when it's time to actually harvest, there's another device that will take the harvest from the Hands Free Hectare and deliver it outside. It's all autonomous, they're all self-learning systems, they're all experimenting, of course, and it gets better and better and better, more and more completely unattended.
And then, of course, there are drones, you've got to have drones there as well, one of these examples of autonomous devices: looking after the soil, looking after the condition of the crops, checking when they're ready to be harvested, and all sorts of other things that you're simply not allowed to do yourself, because you cannot enter the field yourself. And the greatest thing about it, and who said that IT doesn't deliver value to society, the greatest thing is what they produce with it, which is Hands Free Hectare Gin, would you believe it. So IT actually does deliver value to society, you see. You can actually order this stuff on the web, so if you have a nerdy colleague, and you might have a nerdy colleague because we're, you know, the Open Group, here's the perfect gift, right, because nobody ever got that before. This is completely, let's say, AI-autonomously created gin. I think that's the perfect birthday gift. So that really got me thinking last year: here's a complete farm essentially running itself, completely hands-free, completely self-driving if you like. What would it be like if a business were that autonomous as well? Because this is a business. This is actually a business, and it's being taken care of by, let's say, machines, IoT and artificial intelligence. So what would it actually take to get a complete business like that as well? The idea being that I could just ask an AI, like, you know, "Alexa, just run my business, please." So I've been diving into it a little, and while I'm also writing some materials around the topic, I'd like to share a few of the insights with you. Now, I only have 40 minutes for this address. I already pity the lady that has to translate into Japanese, because I tend to speak relatively quickly, but you should hear me in Dutch, dear translator.
It would be twice as fast. So I'll try to get through the 40 minutes and give you some ideas of what I think is inside. At least if the slides still run, because now I'm crashing. Yeah, the Windows machine, huh? Yeah. Oh, there we are. Yeah. So it's not running Unix, of course not. I knew it. It's a Windows machine. Sorry, Bill. So one of the guiding materials we're using for this is what we call TechnoVision. It's a Capgemini piece of work. Did I mention I work for Capgemini? What we do every year, together with a merry band of other CTOs, is put together an innovation document, a trends document if you like. We're about to release the 2020 version in mid-December. And the main theme of it is Simplify. We couldn't just call it "Simple". In Italian you can say it, and in French, the same family, it's okay as well. But we learned from native English speakers that "simple" can also mean simple-minded, which we thought was a little bit, you know. Maybe off-topic. So we had to call it Simplify, and I got a few additional characters in the word. But the whole feeling is that technology is now reaching an inflection point at which we can create such seamless business experiences, whether they pertain to the customer or the employee or operations or our business partners, anything around it. It's so seamless. We can use technology to make something almost invisible: hands-free, self-driving, unattended, autonomous if you like, which is fascinating. Technology does so much for us. We just ask Alexa, right? Soon it just drives us somewhere and we don't need to do anything. So how to create these seamless, if you like, autonomous business experiences is one theme in this document. But on the other hand, there's also a dark side to simplicity, which first of all, of course, means you have to deal with a lot of complexity before you can be simple. This is an obvious observation.
But also, of course, simplicity makes us shallow if we don't watch out. We start to rely on algorithms, on machine learning models, maybe on fake data. We maybe don't think that much ourselves anymore, because the systems seem to be empathetic towards our needs and seem to think for us. So there's a dark side to it as well that we probably need to address, also from an architectural perspective. So we try to navigate that fascinating world a little bit. And mind you, I've been in IT ever since 1937 or something. At least it feels like that. We've been in technology quite a lot. But now we seem to be reaching that inflection point: we've been automating a lot, we've been making things more seamless, and now we seem to reach the point at which we can say, hey, that system will actually take care of itself. So agile is very good, because we can very quickly adapt to changing circumstances. What if the pinnacle of agile would mean the system does it itself? It adjusts itself continuously. And we're not attending, because we're not allowed to enter that hectare, we're not allowed to be on that field. The system does it itself. Would that be the pinnacle of agile? Maybe. I'm not saying we're already there, but we seem to be very clearly moving towards that state, which I find extremely interesting. So TechnoVision addresses a few of these things. We believe that both business and technology people should understand technology, so we're using relatively, we hope, compelling wording, so that people remember trends. We did a few funny things, we think, with the trend names. You can try to figure out all of the references to rock and pop songs, and even heavy metal and whatnot, there's all sorts of things. If you can find the Ozzy Osbourne one, you've done quite well. Yeah, you would, Steve, yeah, I know.
But anyway, this is really meant to make it easier to understand these different technologies as they are. And I've selected a few. There are actually 37, so within 40 minutes that seemed like a bit of a stretch. So I selected a few. A few of them are more, let's say, phenomena that we're currently seeing in terms of how we apply autonomy to the wide field of IT. And then there are also a few, let's say, best practices in terms of what we should do from an architectural perspective to actually be able to create these self-driving, autonomous systems. What are the architectural lessons we can derive from it? So the first thing I think is important to understand from an AI perspective: as I already said, many of these autonomous systems rely on a combination of being able to sense everything in real time around them. So you need a lot of sensors. This is a typical, let's say, characteristic of an autonomous system: it's able to sense around it. So we need IoT, the Internet of Things. We need to be connected in real time; we need 5G for that. We need to be able to store all of the data we're collecting. And then, of course, we need to be able to train the system based on that data and see what works and what doesn't work. And I think AI, artificial intelligence, our perception as humans of a system that seems to behave intelligently, is a crucial one. But from an architectural perspective, I think it's important to realize that there are different ways of teaching machines nowadays. On one hand, they are responsible for the breakthroughs we're currently seeing in AI, in autonomous systems. On the other hand, they're also a bit scary, because we don't understand all of it anymore. We call this trend, by the way, How Deep Is Your Math. That's an easy one in terms of a pop song reference. If you don't get that one, you're very young. How deep is your math?
And what we mean by it is that techniques like deep learning and reinforcement learning are ways to see patterns in huge amounts of data without really understanding the algorithms or the logic or the math or the statistics behind it, because there's not really something like that. It's just a very big, and by the way very dumb, pattern recognition machine. Deep learning neural networks have nothing to do with intelligence the way it works in people. Sometimes people tell me, "that's a deep learning neural network, that's the way humans recognize something." No, it's artificial. It's not real intelligence, it's artificial intelligence, right? But these deep learning neural networks are able, provided you feed them enough input and training data, to start recognizing patterns. And when they see patterns, they can categorize something, so they can recognize a situation, but they can also predict something, for example. These types of systems and derivatives of them are extremely crucial, and they're behind the breakthrough we're currently seeing in AI. And the scary thing from an architectural perspective is that, while we understand how these neural networks work, we don't fully understand why they recognize patterns. It's difficult enough to explain why they can tell a cat from a dog, but they can, provided they've had enough training data with pictures of cats and dogs. We don't know exactly how they see them. It's very difficult to explain, but it works. So from an architectural perspective, both for ourselves as architects and for the people we develop these systems for, we find ourselves with tremendous power, with these additions to what we already had in terms of data science. And on the other hand, it's very difficult to explain why it actually says something. So computer says no. Computer says yes, right? Computer says 43 instead of 42. Why is that?
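To make that "very dumb pattern recognition machine" point concrete, here is a purely illustrative sketch: a single artificial neuron, the building block of those deep networks, trained by gradient descent on made-up two-feature "cat versus dog" data. Every number and feature here is invented; the point is that no rule about cats or dogs is ever written down, the unit just fits whatever pattern is in the training data.

```python
import math

# One artificial neuron (logistic unit) trained by gradient descent.
# The two features are hypothetical stand-ins (say, ear shape and snout
# length, scaled to 0..1); the model encodes no notion of "cat" or "dog".

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y                      # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Toy training data: label 0 = "cat", 1 = "dog"
samples = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = [0, 0, 1, 1]
w, b = train(samples, labels)

def predict(x):
    return 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5 else 0

print(predict([0.15, 0.15]), predict([0.85, 0.85]))  # cat-ish vs dog-ish
```

Scale this one unit up to millions of units in many layers and you have a deep network: the same "fit the pattern" mechanics, which is exactly why explaining an individual answer is so hard.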
It's something fascinating in terms of the autonomous systems that we're currently seeing. Now let's, that's an easy one as well, I hope: Oops, AI Did It Again, no? Oh, come on. Never mind. You look it up. If you Google it, you might run into an "oh, that's probably a spelling error, did you mean", and you'll probably find it. So if you apply the notion of being completely aware in real time and being able to sense, and then, based on that input data, actually being able to learn what works and what doesn't work and what patterns evolve, and then being able to predict and actually, you know, prescribe what to do, if you understand all of that and you start to apply it first of all to IT infrastructure or IT operations, you get something that is currently quite popular, which is called AIOps. So applying artificial intelligence to IT operations is a very interesting thing. It's almost like, you know, a farm, really. A very merry farm, with all sorts of different animals and lots of things going on. But of course, we've already seen for years the whole topic of autonomy creeping into the area of IT infrastructure. I'm sure many of you have been visiting Oracle OpenWorld every now and then. It's quite a big event. And I still remember Larry Ellison announcing his autonomous database for the first time. You know, imagine a room full of tens of thousands of database administrators. And then he says, I have the greatest news ever. We've created an autonomous database. It does the job better than you do. Isn't that something? It will optimize itself. It will scale up and down itself. It will patch itself. It will run itself better than you, DBA, could ever make it run. Isn't that the greatest news ever? You know, a warm, warm bond with the audience immediately, everybody like, yeah, this is such a good feeling. But it's happening. And the same is happening with data warehouses as well.
I'm a bit in that business nowadays, and it's very difficult to tune a data warehouse, to actually be able to produce exactly the information and the views that you need at that point in time. And these autonomous data warehouses start to understand behavioral patterns, start to understand consumption patterns, start to adjust themselves automatically to the evolving needs, and are even able to predict what type of needs will evolve during the forthcoming minutes or hours or days or weeks, and will accommodate and adjust themselves continuously to those evolving needs. These are autonomous data warehouses and databases, and they're just examples of something we see across a broader range of infrastructure. And as I said, the whole notion of AIOps builds on the fact that there's so much logging data collected over the course of time while IT operations are running: there's infrastructure running, applications running, people using workstations, mobile phones, whatever, to access it. Things work, things perform or don't perform, things crash or don't crash. There's a spike in consumption somewhere at the offices or somewhere outside, and patterns start to emerge. And AI, which sort of puts itself right into that farm, if you like, of IT operations, starts to get the pattern, starts to see the pattern, starts to learn, starts to be able to predict what will happen, starts to understand what is happening, and can even tell you what should happen next and what you should do in order to have IT operations take care of itself almost completely autonomously. Usually it's for the better, usually it's for the better. Some of you may remember the famous outage of Amazon Web Services on the East Coast in the US, I think in 2017. It was down for several hours. The only reason for it was a human operator that wanted to start up a few instances and just put in two zeros too many.
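The core AIOps loop described here, collect telemetry, learn what "normal" looks like, then flag what isn't, can be sketched in a few lines. This is a deliberately naive rolling-statistics detector with invented numbers; real AIOps platforms use far richer learned models, but the shape of the idea is the same.

```python
import statistics

# Toy AIOps sketch: learn "normal" from a sliding window of past
# telemetry, then flag points that deviate too many standard deviations
# from it. All numbers below are hypothetical.

def detect_anomalies(series, window=5, threshold=3.0):
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady CPU load with one sudden spike at index 8
cpu_load = [40, 41, 39, 40, 42, 41, 40, 41, 95, 41]
print(detect_anomalies(cpu_load))  # → [8]
```

The step from this to "autonomous" is wiring the flagged index to an automated remediation (restart, scale-out, rollback) instead of a ticket for a human.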
It can happen to anybody on a Monday morning. Probably had a very good weekend, this operator. So he types in "launch a few instances", with just two little zeros too many, and takes sort of the whole East Coast down for a few hours. But it was a human, it actually was human intervention, that made that system crash. So sooner or later we start to realize that with systems like this, if we learn enough from them and collect enough data, we start to see the patterns better and better and better, and there's so much logging data collected every second of IT operations running that we start to realize there's a wealth of information over there that we can learn from and get better, autonomously, again and again and again. And by the way, imagine you give this address at an IT operations conference. You can imagine, you pull a Larry Ellison, right? It's not necessarily great news, but it happens to all of us, because this is not only about infrastructure. I'm a software engineer myself; as I told you, I worked on Unix in the beginning of my career, and I've always considered myself a software engineer. So when we looked at IT infrastructure and IT operations, it was a little bit like, you know, it's about time that that becomes automated and autonomous; they're annoying anyway, these guys, so that's good news, right? We're creating software over here, right? Which is a very human task, right? Well, maybe not quite. When Code Goes Low, Business Goes High. That's, yeah, that one's a bit difficult. Now, some of you Americans say, yes, yes, of course. I thought that one was pretty neat, by the way, When Code Goes Low. Again, if you have no clue, never mind. There are a few that you're bound to miss. But when code goes low, business goes high.
We have this phenomenon of low-code or even no-code development, in which we're using AI to understand what type of application services we need, how to develop them, and how to adjust them almost automatically to the evolving needs of the application users, or application service users if you like. And also, given the obvious lack of highly productive software engineers we have nowadays, we clearly see yet another evolution. It reminds me a little bit of 4GLs at the end of the '80s and in the '90s. You know, before we had the era of Java, which is, you know, like a Mordor era really, 20, 25 years of low productivity with a horrible programming language. But nowadays we see a lot of systems that use a lot of AI augmentation, using pattern-based ways of creating software applications, or application services if you like, in a fraction of the time you would typically need. And if it becomes no-code, which means you maybe just model some concepts and work on some templates, sooner or later you start to realize that there are patterns over there as well, and it starts to become more and more automated too. So, software engineers who think that they're, let's say, free or liberated from this idea of parts of IT becoming autonomous: we clearly see low-code, and then no-code development, and then autonomous code generation as something that's definitely coming our way as well. And if nothing else, it will help us deal with the lack of, you know, skilled, highly productive software engineers that we're bound to see everywhere already. The same, by the way, is happening in data, because after software engineering I started to do a lot of data science and analytics as well, and it's the same thing over there, and maybe the scarcity of skilled resources in data science and also in data engineering is even more apparent right now.
It's very difficult to find data scientists, and particularly one that also wants to talk to you, even more difficult. So, you know, actually creating solutions on top of that data, finding people that understand deep learning neural networks, for example, and can also position them next to the more statistical and algorithm-based ways of working with data, is very difficult. But also finding people that can collect all of the different data sources, coming in real time from all these different places, and know how to put it together, how to prepare that data, how to get it to the right quality and then make it available for training so that you actually can create these autonomous systems, is very difficult, and it's a very scarce capability. So again, we see this phenomenon of augmented analytics. One of the really good examples is DataRobot; the name says it all. It was, interestingly enough, created by a few of the best data scientists in the world. For those of you that know Kaggle, the worldwide data science community, they are consistently in the top 10 of Kaggle data scientists. And it's these people that created systems that will automatically create the trained models, the machine learning models, for you. You can literally choose from dozens of them, and then the system will choose for you, right? It does the work for you rather than you having to do all the science yourself. And it's interesting, because it's data scientists themselves, the very top of that food chain, that created AI-driven systems like that. And sooner or later, obviously, again: once you understand the consumption behavior of all that data and its evolving needs, we can predict it, we can prescribe how to get there successfully, and once we've done that, we could actually let the system take care of it itself as well.
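The "it chooses the model for you" idea can be illustrated with a hand-rolled sketch: fit several tiny candidate models, score each on held-out data, and return the winner. To be clear, this is not DataRobot's API, just the general shape of automated model selection, and all the data is made up.

```python
# Hand-rolled sketch of augmented analytics / automated model selection:
# try candidate models, score them on a validation set, pick the best.

def fit_mean(xs, ys):
    # baseline model: always predict the training mean
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # least-squares line through the training data
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def auto_select(candidates, train_set, valid_set):
    xs, ys = train_set
    vxs, vys = valid_set
    scored = []
    for name, fit in candidates:
        model = fit(xs, ys)
        mse = sum((model(x) - y) ** 2 for x, y in zip(vxs, vys)) / len(vxs)
        scored.append((mse, name, model))
    scored.sort(key=lambda t: t[0])     # lowest validation error wins
    return scored[0][1], scored[0][2]

train_set = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])   # roughly y = 2x
valid_set = ([5, 6], [10.1, 11.9])
name, model = auto_select([("mean", fit_mean), ("linear", fit_linear)],
                          train_set, valid_set)
print(name)  # → linear
```

Real AutoML platforms do the same thing at scale: dozens of model families, automated feature preparation, and cross-validated scoring instead of a single held-out pair.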
So you see a lot of impact of, let's say, do-it-yourself and autonomous systems and AI augmentation. We see it pop up in many places in IT already. So now we've seen infrastructure, we've seen software, we've seen applications, software engineering. There are also processes. Can't touch this? Anybody? Can't touch this, easy, huh? Again, that's so easy. They're all very easy, huh? You can't touch this. This is the phenomenon of touchless processes, which we see evolving a lot. Now, within TechnoVision, which is currently in its 12th edition, already when we started 12 years ago we had this big trend cluster we call Process on the Fly. And at that time, we found it very difficult to explain process technology to people, and why process was just as important as data and applications and IT infrastructure, and that you could automate it and manage it. At that time it seemed to, you know, not inspire too many people. But nowadays, of course, as we've seen, there's the whole notion of robotic process automation, whether it's in manufacturing and, let's say, industrial processes, like we have the forum over here at the Open Group as well, or whether it's more human labor, people behind screens doing their work. With robotic process automation, which is software agents, we see a lot of breakthroughs in automating very replicable, if you like boring, human processes. It's ironic, by the way: when I started my career in IT in 1937, you know, we called it automation, and what we were automating was really manual work. People were writing things in ledgers and so on, and we automated that, and IT became big. And the ironic thing is that what we're now automating is people behind screens working with applications, using different applications and cutting and pasting information between applications.
They look at the result on a screen, they take a decision, they launch another application and do something there. Funnily enough, that's what we're now automating, because we consider that a manual process. People working with applications is a manual process. Like a farmer on the land, if you like. Oh, that's manual, you're behind the screen. And robotic process automation has been the first step, and very successful in many organizations, in automating that tedious, error-prone, non-inspiring work behind the screen, which is so replicable, so predictable, that you can easily put it into rules and automate it. So there's a lot of rule-based automation. We've done some research; Capgemini has a very nice research institute. If you want to look for it, Google the Capgemini Research Institute. And then you're seeing that there are other things on the horizon that add cognitive capabilities to these highly automated processes, because people have cognitive capabilities. For example, recognizing an image, or seeing a very complex pattern, or understanding a text written by a human, or simply having to go through a contract, right? These are very cognitive, human-like capabilities that are not so easy to automate, because they're not necessarily very replicable. Nevertheless, what we're now seeing is that with AI, with all of the technologies I've shown you so far, we're able to add that more cognitive, if you like smart, behavior to these automated systems as well, sooner or later. And then you start to realize that it comes closer and closer to an actual hands-free, autonomous type of process. The process is not only on the fly, so not just an agile process in the sense that we can change the process when we like.
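The rule-based core of RPA described above can be sketched as a tiny software agent: it reads the structured values a human would otherwise copy between screens and applies explicit, replicable rules. The invoice fields, thresholds and statuses here are all hypothetical; real RPA tools drive actual UIs and APIs, but the decision logic they encode looks much like this.

```python
# Toy RPA-style agent: apply explicit rules to "screen" data.
# All field names, thresholds and statuses are invented for illustration.

def process_invoice(invoice, approved_suppliers):
    # replicable, predictable decision logic -- exactly what RPA automates
    if invoice["supplier"] not in approved_suppliers:
        return {"id": invoice["id"], "status": "escalate_to_human"}
    if invoice["amount"] > 10_000:
        return {"id": invoice["id"], "status": "needs_approval"}
    return {"id": invoice["id"], "status": "auto_paid"}

invoices = [
    {"id": 1, "supplier": "Acme", "amount": 500},
    {"id": 2, "supplier": "Unknown Ltd", "amount": 500},
    {"id": 3, "supplier": "Acme", "amount": 50_000},
]
results = [process_invoice(i, {"Acme"}) for i in invoices]
print([r["status"] for r in results])
# → ['auto_paid', 'escalate_to_human', 'needs_approval']
```

The cognitive step the talk describes next is replacing the "escalate_to_human" branch with AI that can read the unstructured document itself.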
It actually becomes a process that adapts itself on the fly, without us ever interfering with it, to changing circumstances, which I think is an incredibly interesting thing. One of the software platforms that puts this all together very neatly is originally a French startup called Aera Technologies. Maybe some of you know it, because there are some big companies already working with it. You put it somewhere in your supply chain, somewhere ERP-heavy, and it simply starts to look at all the flows, all the data flows, all the processes going on in that entire ERP-dominated supply chain. It starts to collect data, of course, because we need sensors, so it has sensors, right? It starts to collect all of that data in real time and starts to see patterns. And first it can say, like in traditional business intelligence: I understand the situation better now; I now have a 360-degree view of it. Then, in the second step, as we all know, it starts to become predictive. It can actually tell you what's likely to happen. It can forecast something because it sees patterns, right? So it can predict what might happen in your supply chain: that warehouse is running empty, those stocks over there are too high, you're not catering for the right consumption needs, and so on. It starts to predict what will happen: we'll see spikes, we'll see something going down, and so on. Then it starts to recommend, to prescribe if you like. It starts to say: if you want to achieve this particular business result, you'd better do this. You probably really should be doing that. You should be moving your goods to that other warehouse right now, in order to anticipate the spike that will clearly happen two and a half days from now. So it starts to recommend what you should do. And then, we all know these steps, right?
Particularly if you're in analytics, you know these famous three steps: it's business intelligence, then it becomes predictive, then it becomes prescriptive. And now the fourth step would be: hey, you know what? I'm recommending something. Guess what? I can do it myself if you like. Shall I do it? I can run it myself. Never mind, I'll do it myself, because I know you would approve. And "correct me if I'm wrong", by the way, is some sort of a feature you want to build in there, at least for the first few years: correct me if I'm wrong, but I'm doing this, right? And afterwards it probably doesn't even need to ask "correct me if I'm wrong" anymore, because it's probably right. So that's Aera Technologies, and there are others. They call themselves the self-driving enterprise. I really love that concept. The hands-free enterprise, if you like, the autonomous enterprise: they're all ways to describe an enterprise, a business, including its processes, that takes care of itself, optimizes itself, reads its environment in the end better than we can, and in the end will also be able to act faster, more decisively and more to the point than we would ever be able to ourselves. So that's what I call a touchless business experience, and there's a lot of talk in the industry right now about touchless processes. What do I need to do to not even touch that process anymore, because it will take care of itself, it will run itself? So these are a few of, I would say, the big things that we're typically doing in IT, right? We're involved in infrastructure, software engineering, applications, data, analytics, process management, you know. I could also say a lot, of course, about the user experience and about working together in teams, but, 40 minutes, so.
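The four maturity steps just described, describe, predict, prescribe, and finally act, can be sketched for a single hypothetical warehouse stock level. The trend rule, reorder point and batch size are all invented; the only point is the shape of the ladder, where the last rung executes its own recommendation.

```python
# Toy sketch of the four analytics maturity steps applied to one
# hypothetical warehouse stock level. All numbers and rules are invented.

def describe(history):
    # step 1, business intelligence: what is the situation?
    return {"current": history[-1], "average": sum(history) / len(history)}

def predict(history):
    # step 2, predictive: naive trend extrapolation of the last change
    return history[-1] + (history[-1] - history[-2])

def prescribe(history, reorder_point=20, batch=50):
    # step 3, prescriptive: recommend an action based on the forecast
    if predict(history) < reorder_point:
        return {"action": "reorder", "units": batch}
    return {"action": "none", "units": 0}

def act(history):
    # step 4, autonomous: execute the recommendation itself
    decision = prescribe(history)
    if decision["action"] == "reorder":
        history.append(history[-1] + decision["units"])  # stock arrives
    return decision

stock = [60, 45, 31]        # stock draining fast
decision = act(stock)
print(decision, stock)      # reorder triggered, stock replenished to 81
```

A "correct me if I'm wrong" phase, as the talk puts it, would simply insert a human confirmation between `prescribe` and `act`.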
But what I would also like to do is go a little bit more into a few trends that describe, from an architectural perspective, what we need to do to make these systems truly self-driving and autonomous. And we have these four over here that we've identified that I think are particularly relevant to that era of simplicity through autonomous systems. The first, thanks Ozzy Osbourne by the way: Crazy Data Train. The whole idea is training data, the data we need to create the machine learning models that are at the very heart of autonomous systems. We need to understand where to get that data from. And that's really one of the most important things we need to do, also from an architectural perspective: understanding that data supply chain, and creating the systems to optimally sense and collect that data and then make it available in the right format to learn from and to train our systems, is absolutely crucial. So there's a lot going on over there. Take autonomous cars, which of course are such an interesting phenomenon. Right now I see a few articles popping up saying it's not going so fast with autonomous cars. That reminds me a little bit of the internet at the end of the '90s, when they said, you know, the internet is very nice for e-commerce, but it will never have the bandwidth. Well, look nowadays, Friday evening and Netflix all over the world, and the bandwidth of the network seems to be able to deal with it, I guess. And it's a little bit the same right now with autonomous cars. You hear: these things cannot completely drive themselves, and it will take years. Yeah, it probably won't happen tomorrow. And then a few years from now we'll wonder where that freight train came from, because it all changed so quickly, right? Autonomous cars are obviously very dependent on sensor technology.
So they're a very good example of autonomous systems, but they also need a lot of training material to be able to recognize objects and traffic circumstances. So there are companies like Mighty AI that are completely specialized in creating training data for car manufacturers that need to train their systems with real-life situations. There's also a lot going on in creating synthetic data: produced, generated, fake data if you like, but in a good sense. So you get simulations, software that creates essentially unimaginable traffic situations, unless you're in Mumbai, but otherwise unimaginable traffic situations that no driver would ever actually run into. And still, the system has produced that situation and uses it to train the car. So the car will train itself to deal with situations that no ordinary driver is ever bound to end up in. And I think that's very crucial: it's very difficult to get all of the training data from live footage, so there's clearly also this approach that says, well, in that case we'll generate it, we'll do it synthetically. And then there's also a lot going on right now around federated learning. Google, for example, is pioneering this idea. We can learn so much from edge devices, like mobile phones and all sorts of other devices that are not really part of that central IT. We don't want to pull the data from them, for example for privacy and security reasons. But what if we could train the system on the spot, over there, at that data, and then go back again and leave it alone? Then it stays completely anonymous, and we're still learning from edge data, data at the very edge of our IT infrastructure, like mobile phones, like sensors on board cars, wherever, and we get a much more federated way of training.
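The federated idea can be sketched in its most stripped-down, FedAvg-style form: each device computes a model update locally on data that never leaves it, and only the parameters travel to the server for averaging. The "model" here is just a mean estimator and the device data is invented, but the privacy property is visible in the code: the server function never sees raw readings.

```python
# Minimal federated-learning sketch (FedAvg-style). Each device trains
# locally; only model parameters, never raw data, reach the server.
# Device readings below are hypothetical.

def local_update(device_data):
    # "training" on-device: here, just fitting a mean estimator
    return sum(device_data) / len(device_data)

def federated_average(devices):
    # the server only ever receives one parameter per device
    params = [local_update(data) for data in devices]
    return sum(params) / len(params)

# Three hypothetical phones, each holding private readings
devices = [[1.0, 2.0, 3.0], [2.0, 4.0], [3.0, 3.0, 3.0]]
global_model = federated_average(devices)
print(global_model)  # average of the per-device parameters
```

Real federated learning averages neural-network weight vectors over many training rounds, and adds secure aggregation so the server cannot inspect individual updates, but the data-stays-local structure is exactly this.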
So training data, and all the architectural means to get that data and actually feed it into the training systems, is a crucial one from an architectural perspective. Another one, of course: good times. You know, we first called this one good vibrations, by the way. If I look at the audience, it's not a problem if I say this: good vibrations. The Beach Boys — everybody, if I look at the audience, correct me if I'm wrong, probably all of you get that one. I ran into a few millennials that have never heard of the Beach Boys. So they saw good vibrations and they're like: I'm offended now. Happened to me. I'm not kidding, it happened to me. They said: I'm offended. Good vibrations, I don't like that. I have associations with this that I don't like. Why are you calling this good vibrations? So we had to cancel it. And we did, because — fair enough — people don't get it, don't get the Beach Boys. So we changed good vibrations to good times. Thank you, Nile Rodgers and Chic. But I think that's still a safe one. Correct me if I'm wrong, by the way — do send me a tweet otherwise, because we don't want to offend anybody nowadays. So good times it is. But this is all about using AI in a good way, using data in a good way. Because there are a lot of questions nowadays in terms of ethics, security, privacy. It's great that we collect all of that data, but what is that data? What do we base it on? Everybody's been discussing for years the car-driving algorithms that have to choose who to bump into, right? But there are many other ethical considerations, and they're all based on training data. The data that we gather — is that data actually unbiased? Is it fair data? Is it correct data? Are we doing things with that data that we as a company, as an organization, would consider ethical? So there's a lot going on in that area that we need to realize, and I think it's an architectural challenge.
Because it's so tempting and so easy to collect data and then have the computer say no or yes as a result of it. And we just say: yeah, it's the computer. It's a deep learning neural network, so who are you to argue? But we actually need to understand — and it's a big architectural challenge — how we can make people actually trust these systems, and even if there's black-box technology there, like deep learning neural networks, still be able to convince people that we've actually done the right thing and actually created all the mechanisms. Although it's an autonomous system, here's what it's based on: we can explain it, we can show it, we can make it auditable, so people can actually check it and control it. And it's also fair and it's unbiased. So these are all things that I believe are a crucial aspect of creating these autonomous systems in the end. And then there's also, of course, the role of the human. I feel for you, right? Feeling — emotional intelligence — is, I think, a crucial aspect of making AI successful. So we actually launched some research recently which is all about emotional intelligence, and what we found is literally that in the era of artificial intelligence, in which more and more becomes automated, we start to realize that the capabilities of humans need to be much more focused on EI rather than on IQ. So next to AI, artificial intelligence, you need emotional intelligence. And correct me if I'm wrong: if you know what's underneath, for example, deep learning neural networks — it's cold, hard silicon — it will not be able to be empathetic. It won't feel emotions. Whatever people think or say, whatever Elon Musk will predict, I personally don't think that anytime soon these systems will even remotely resemble something like an emotional, soulful human being.
So there's a very clear plea that we need to develop our EI skills much more in order for these AI systems to be successful, because a real, thriving business will need to be a combination of that very powerful but very cold-hearted AI with enhanced emotional intelligence skills. And it's also about the emotions that you feel. Another piece of research I would like to mention — you know, if you have this download urge this evening, there are quite a few things you could download, and one of the really good ones this year — was about autonomous cars. And the question we had there: what does it take? What does it actually take for a human being to step into a car, an autonomous car that will drive itself instead of you driving? What are the emotions that you will feel, and what do you need to overcome? What do the manufacturers of autonomous cars need to create, architecturally speaking, in order to build that trust base, so that you'll actually step into an autonomous driving car and trust it more than you would trust yourself behind the steering wheel? And that is all about understanding the emotions — having the empathy — of somebody that is supposed to step into that autonomous system, whatever it is. So whether it's working with a touchless process, or understanding a recommendation that comes from a black box, or literally stepping into an autonomous car, all of that requires the same type of empathy from the architectural perspective: understanding what it takes for people to actually embrace these autonomous systems. And I don't think, by the way, that AI will anytime soon be able to recognize emotions, although there are some very interesting things going on with AI right now that also make things more fair. This is a well-publicized system that is actually used by bartenders. So if you're at a rock concert or something and there's a break and you want to get your beer, right?
And it's very difficult, because everybody's pushing around, and for the bartender it's very difficult to see who's next. Of course you have relatively simple image recognition that sees who's first in line and who's next, right? So the system will tell you: that's the next one. Whoever's pushing ahead — no, that's the next one, because they were standing here longer. The second step, of course, will be that you probably need to ask for an ID from this person, which is nice as well. I personally have added a third one, which would be: this person has probably had enough. But that's a different thing. Not difficult to detect, I believe; if you have image recognition, it would be relatively easy to detect if somebody has had enough, right? Because — I guess for reading emotions, I don't think you need to really be empathetic to understand if somebody maybe had a few too many of these gin and tonics. That's about emotions. So the final thing I would like to say is: as an architectural tool, of course, we need to use AI ourselves as well. I'm absolutely fascinated by the whole notion of creative AI. This is the idea that we will use AI to create things for us, together with us. And this is the whole era of creative AI. There's a very interesting set of technologies underneath. One of them is GANs, generative adversarial networks, and they use two AI systems, essentially. If you have one AI system that can tell a cat from a dog, then why not put another neural network against it that will create something — a random picture — and the other neural network will say: that's not a cat. Okay, I'll try again. Is this better? Slightly better, still not a cat. So you go on and on and on, right? And from more or less random noise you create a picture of a cat, and if the other system says that's definitely a cat, you've created a picture of a cat that never existed. Still recognizable as a cat.
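That back-and-forth between the two networks can be sketched in miniature. This is a deliberately tiny toy, not a real image GAN: the "real data" is just numbers near 4.0, the generator is a two-parameter linear model, and the discriminator is a logistic regression — but the adversarial loop is the same one described above.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0      # generator: fake = a * noise + b
w, c = 0.1, 0.0      # discriminator: D(x) = sigmoid(w * x + c)
lr = 0.02

for step in range(4000):
    real = rng.normal(4.0, 0.5, size=32)   # "real" samples live near 4.0
    z = rng.normal(size=32)
    fake = a * z + b                       # generator fabricates samples

    # Discriminator step: push D(real) up and D(fake) down.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: adjust a, b to fool the current discriminator.
    d_fake = sigmoid(w * fake + c)
    dx = -(1 - d_fake) * w                 # d(-log D(fake)) / d(fake)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(f"generator offset b = {b:.2f}")
```

Under these assumptions the generator's offset `b` should drift toward the real data's mean of 4.0 — the "still not a cat / try again" loop from the talk, reduced to one dimension.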
Other areas that we see are, for example, reinforcement learning, in which you simply try millions and millions of different scenarios. You combine it with deep learning pattern recognition and you, for example, become unbeatable at Pac-Man or Go or any other game. So reinforcement learning — which is also very much about synthetic data — combined with that deep learning is another way to create something that we thought we as humans would not be able to create ourselves, because there's a limit to what we think we can produce ourselves. And artists use this too. There's the French artist collective Obvious, which created a completely fake set of renaissance-style portraits of a family which is not an existing family, la famille de Belamy. They simply trained the system to recognize renaissance paintings of people, and now they're able to create renaissance paintings of people that never existed. And actually — you may have heard of this — they've already been sold for hundreds of thousands of euros apiece, completely generated by AI and then put on canvas, which is the relatively easy last step, right? But if you can use this sort of tremendous power for art, for creativity, you can of course also use it for other things, like generating text. I'll be in Zurich this afternoon — I'm sorry I have to rush off quite quickly. I'm flying to Zurich, and among other things I'll be visiting IBM tomorrow, and they have a project going on there which is called Project Debater. Project Debater is able to create an argument around whatever topic that you cannot beat. It will simply go through all the text that is available around the topic. It will beat you on any topic. It creates a set of arguments that you cannot counter. You will lose — unless you have your own AI yourself, and see what happens, and then it gets better and better and better, right?
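The reinforcement-learning idea mentioned above — try a scenario over and over, remember which moves led to reward — can be shown with classic tabular Q-learning. This is an invented toy "game": a corridor of six cells where reaching the rightmost cell pays off. Real game-playing systems replace the table with a deep network, but the learning loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 6, 5
Q = np.zeros((n_states, 2))          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == goal else 0.0
        # Q-learning update: nudge toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = [int(np.argmax(Q[s])) for s in range(goal)]
print(policy)  # learned policy: always move right -> [1, 1, 1, 1, 1]
```

After a few hundred self-played episodes the system has taught itself the winning strategy without ever being shown it — the same "millions of scenarios" principle, at a scale that fits on a slide.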
So it can even create better arguments and generate debates, and the debate is just beginning, right? The same thing is happening, for example, in Adobe Sensei, which I find a very interesting tool, because what it does is: you just create a very coarse-grained sketch, if you like, of a campaign. I want to sell trench coats — which they tend to do, for example, at Burberry, you know? I want to sell trench coats, and I see an autumn scene over here, with some gas lights over there, and I want it to be 11 p.m., and, you know, it must be stormy, and so on and so on. And then the system starts to create the whole imagery itself from all the materials it has available. It even creates a marketing campaign. It even creates a mobile app and the website. And the funny thing is — and of course you can tell it, this one I don't like — it gets better and better, and once you've done it, it can say: I can apply that same logic now to an entirely different thing that you want to create, and I'll follow your way of designing, and it gets better and better and better at it. And, you know, imagine you wouldn't need marketing anymore. That's interesting — I mean, marketing people until now thought all these IT people were rendering themselves useless. Well, hey, we have good news for you: even in marketing you'll be able to do this. So that all brings me back to this original notion that I had, right? Alexa, run my business. Imagine the pinnacle of everything that we've done with IT so far. The very essence of agile architecture brings us to the phase in which we enter the office in the morning, and there's only Alexa there on our desk, and we ask it: Alexa, run my business. And what does it actually take to get there?
I think we got some clues from successful autonomous systems that we already see around us, and I believe that we'll definitely have the chance in our lifetimes to actually make this happen. We might see the first few companies at the stock exchange, just within a few years, that create tremendous value without actually employing anybody. And that's interesting — well, except maybe Alexa itself. By the way, there's very good news for some of you: there's a new voice theme coming up for Alexa. It's Samuel L. Jackson. So you can use the voice of Samuel L. Jackson for Alexa. You have a version with profanity and without. The version with profanity will be quite popular over here in Amsterdam, particularly if you're a pedestrian walking on the bicycle lanes — which I really would recommend; you try it, and then you get a flavor of what Samuel L. Jackson would say to you. So that's just a warning over here: the Samuel L. Jackson voice theme. Is there anything better than asking: Samuel, run my business? And he will say: I dare you, I double dare you. I can simply see this. So it's not only about gin, of course. Two weeks ago, I learned about this Swedish whiskey that's also produced by an AI. And the funny thing is — the key lesson over there, which I learned from the people of Fourkind, the Finnish company that actually created the models behind this — they didn't have enough recipes to really be able to understand how to create the optimal whiskey, given all the materials you have and given the market that you expect. And there are only so many experts that have actually put their expertise on paper on how to create whiskey. So what they actually did is they used relatively simple technologies. They did not use deep learning neural networks. They used a relatively simple method of reverse decision trees to create recipes. And then they used simple collaborative filtering.
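Since collaborative filtering comes up here: the 90s-style version is simple enough to sketch. This is a hypothetical illustration — the tasters, recipes, and ratings are invented, not Fourkind's actual data. A recipe a taster hasn't scored yet is predicted from the scores of tasters with similar palates.

```python
import numpy as np

ratings = np.array([      # rows: tasters, cols: whiskey recipes A..D
    [5, 4, 1, 0],         # 0 means "not yet tasted"
    [4, 5, 2, 1],
    [1, 2, 5, 4],
])

def predict(ratings, user, item):
    """Similarity-weighted average of other tasters' scores for `item`."""
    target = ratings[user]
    score, weight = 0.0, 0.0
    for other in range(len(ratings)):
        if other == user or ratings[other, item] == 0:
            continue
        mask = (target > 0) & (ratings[other] > 0)   # co-rated recipes only
        a, b = target[mask], ratings[other][mask]
        # Cosine similarity between the two tasters' co-rated scores.
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        score += sim * ratings[other, item]
        weight += sim
    return score / weight

# Taster 0 never tried recipe D; the most similar taster disliked it.
print(round(predict(ratings, user=0, item=3), 2))  # -> 2.03, so skip recipe D
```

Two competing candidate recipes scored this way, with the whiskey master saying "I like this one, I don't like that one", is roughly the loop the talk describes — no deep learning required.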
Collaborative filtering we already know from the 90s. So they actually used a few technologies that sort of remind you of this GAN, this generative adversarial network. It really wasn't like that, but still, you had two systems competing to create a better whiskey recipe, and then they had the actual distiller, the whiskey master, actually saying: I like this one, I don't like this one. So you can begin with features rather than immediately creating the self-driving, completely autonomous enterprise. I think in a way Elon Musk is doing the same thing with his Tesla. If you buy a Tesla, you can buy a self-driving feature: you pay 8,000 euros, and you get nothing. Elon Musk style, you know — you get nothing. But he promises you that within three years, with all the hundreds of thousands of cars driving around collecting data, he will add self-driving features over and over again. And hopefully, three years from now, you might have a level four, level five type of self-driving car as a result of it. And maybe as architects we need to do the same thing. We understand where we want to go. We want to learn in the meantime, because we realize we need a lot of data, a lot of situations to learn from. But as we go, we will be adding, for example, safe lane changing, or collision detection, or self-parking, or the car coming to you when you call it from the parking place, right? It's not exactly level five autonomous driving yet, but you're getting there. And I think that's an important architectural lesson for all of us on our way towards it. So thank you very much for bearing with me this morning. I only had one mission, really, from Steve, and that was to, you know, wake up the people a little bit. So I hope I succeeded there. I do think there's a role for humans. I even believe that there might be a certification from the Open Group sooner or later: no AI was harmed in creating this product.
You know, you get a certificate. Maybe we could start a little forum for it, you know, because there will be, of course — like the people who want vinyl instead of CDs — people that say: I want my whiskey with definitely no AI interfering in it, right? Which is their good right. So we'll see a counter-movement as well. And I think that would be a very good sign that we're almost there in terms of autonomous systems. So thank you very much for bearing with me. I do feel that sooner or later we'll all be able to write poems, because systems run everything else. So we'll express ourselves at the top of the Maslow pyramid and we'll be writing beautiful Japanese haikus, right? About the beauty of nature and the beauty of animals. But then again, as you can imagine, maybe the haiku is not so human as we thought after all. But still, it's still beautiful poetry for us to enjoy, hopefully — whether it's generated by a machine, or maybe, hopefully, even by ourselves. Thank you very much. Enjoy your days over here in Amsterdam. Thank you very much. Please take a seat. So we do have quite a few questions, but I'm not gonna get to all of them, so I'm gonna reward the people that put them in early. The first one: for autonomous IT work, is there an appreciation of the dynamics of the system being automated? And how does one assure closed-loop stability? Now, these are two very different questions, I believe. But I do think it's an architectural quality that we need to strive for, and I do think it's a call to action for all of us architects in the audience as well. And it has to do with trust as well, right? One of the key features you need to trust an autonomous system is that it's actually stable — or that you trust it to be stable.
And if that needs, for example, transparency in order to convince you of it — or you need to be able to audit it, through automated tools probably — again, to go through it. But also, as an architect, you should be able to show what you actually built into that system to create that stability, that robustness, the reliability of it, if you like. It's all very much needed to create the trust that we're all going to need to embrace these autonomous systems. But it's of course always a conundrum: I'm automating something, and maybe that's something we used to do ourselves. We're pulling a Larry Ellison there, right? And sooner or later we're all sitting in that room, just like the DBAs. And we're confronted with that big question: how rewarding is it, actually, to create that system as a more autonomous version of ourselves, so that we're no longer needed for it? Trust is a big thing there. Yeah. So: AI will eliminate many IT admin jobs, massively reducing the jobs available for humans. Can you suggest any jobs that AI might create? Well, first of all, writing poetry, of course. No — as I said, I do think the key is in the fact that sooner or later we rely more on our emotional intelligence capabilities and skills. It's maybe a little bit of a cliche in terms of getting higher on that Maslow pyramid. But I do believe that in the end, we all realize: if we can make these systems autonomous, it's because we created these tools ourselves. We've always been creating tools to enable us to do things that we might not be able to do ourselves as simple mortals. And now we've even created tools that sort of render ourselves useless in what we did so far, which means that we're all in this quest of understanding what we could do on top of it — which is not easy, particularly if you are that DBA, if you are that software engineer, if you've been running that process. It's a tough call.
And it's me calling out the problem, not you. Is there a place for another form of AI — or EI, environmental intelligence — where systems make choices based on environmental impacts? I can totally imagine that. The environment is a complex set of patterns, with a lot of data you can collect about it, and there are very specific objectives you want to achieve. So I believe you can train that system, or even simulate and create synthetic learning around it, on how to optimize environmental objectives. I do like, by the way, the use of AI for good purposes. So good times, as I just showed, is not only about using AI in a good way, but also about using AI for good, right? And I think it's one of these safe areas where we can all start with AI, by the way. Because if we're very close to personal data, for example, it might be difficult, or maybe security is a very big issue and a concern — it might be a passion killer. Whereas doing good with AI — from an enterprise perspective, what does good look like, and could we actually use AI for it? — is not only very rewarding, because we all want to do more good, I think, but also a very safe way to start experiencing how to work with AI and to actually build AI capabilities. Thank you. So, a statement, and then the question: do you agree with it? Oh, that's easy — that's a closed question. Today it does not appear that technology is the limit to achieving more and more autonomy, but rather how to define legal culpability when failures occur. Do you agree? I'm not sure if it's the bigger limit, but it's definitely an area we need to deal with. And we all realize an autonomous car is of course one of the well-known, almost cliche examples: okay, it runs into something. Even if it only happens once every few months, it will make the front pages, while people kill each other on the road every day, routinely, right? And it doesn't even pop up in the papers anymore. But it's very fair.
And as I said, the whole ethical use of AI, and understanding the ethics of it — and with that also the legal issues of it — is crucial. And again, a plea, by the way, to sometimes start in areas where you feel less of that pressure and can still experiment and build AI capabilities. And I believe that in this AI-for-good area you might find even fewer legal constraints, and still be able to do something really useful with artificial intelligence. But for sure, it's good news for the lawyers as well — unless we make them autonomous. Wouldn't that be nice? Legal is so easy to put in a system, I believe. Don't you think, Steve? Absolutely, yeah, absolutely. I saw that coming — time to change to do something else. Yeah. So we're not gonna get to all of them; we have to move on. But there is a last one that I particularly liked: what do you think Alexa will never do? I mean, even Samuel L. Jackson is on it now. I do believe that it will never be able to express emotions and be empathetic. It might artificially understand emotions, but actually feeling them, as a human being with a soul — personally, I'm not a believer in that at all. So sorry for that, Alexa, but you're still cold, hard silicon. Ron Tolido, thank you very much.