I mean, this was not supposed to happen in 2012. This was supposed to happen when my kids are grown up, not in my time, not in your time. And suddenly Google arrives and their cars just drive themselves. That is crazy. That is crazy. They passed the Nevada self-driving car test, and they've been working on it all along, and we'll see how far they've progressed. Secondly, it started a war. Once Google did that, everybody got into driverless cars, into automated cars: Ford, GM, Mercedes, Tesla. They're all into it, and it's a race. And it's not only the car manufacturers; suddenly the software companies are into it too. You've probably heard the sentence, I don't remember who said it: software is eating the world. We write software. We test software. Our software is everywhere, and now it's in driverless cars. So the big software companies, Google, Apple, Intel, Uber, Tesla, which is almost a software company, are also into it. But why? Why are driverless cars so important? Well, first of all, because driving is boring. Not in Bengaluru, I noticed; driving is not boring here. I mean, driverless cars are here for the United States; I think it'll take at least 20 years for them to reach, you know, the Indian level. But mostly driving is boring. People get bored very easily. They could be doing something else. And it's not just boring, it's dangerous. When people get bored, when they get distracted, they don't notice things. And when you don't notice things, you get hit. The cost is something like 30 billion, I don't remember the exact numbers. These things are important because people don't want that. And car manufacturers and software companies understand that if we get driverless cars, we are going to buy them like crazy, or whatever kind of business model they have.
And the reason is that instead of concentrating on the how, on the mechanics of driving from here to there, we can start concentrating on the what. What do we do when we arrive? That's the more important thing. And it is exactly the same with testing. Instead of concentrating on the how, how should I test an application, the whole business of selectors and all those machinations, we could start concentrating on the really important stuff. If we had autonomous testing, and I will define what autonomous testing is, we could start concentrating on what the test is, and not on the intricacies of Selenium WebDriver and explicit waits versus implicit waits, and oh, what was that function, oh my God, the timeout, and it's flaky, and all that. Something like 60 or 70% of our time goes on those mechanics, right? We should be concentrating on the interesting things. Okay, so that's the idea, the parallel. A little bit about me. As I said, 30 years of experience; it's actually 32. I'm really old. I go back to the 80s. Now there's front end and back end; we didn't have a back end, we just had front end, there was no such idea. I am, I was, and I always will be a developer. I did the whole startup thing: co-founded a company, was the CTO, the whole thing. I found I didn't like it and went back to my favorite hobby, which is just writing code. And since about 2000, even before that, I love to test my code. Testing the code I write is a passion. It's a thing I do, and I constantly research it and talk about it and think about it, since back in the days before testing was interesting, which is what led me to Applitools. Applitools is a visual testing company. It enables me to do something that I couldn't previously do, which is test how the application looks and not just how it functions. That was mind-blowing to me. So I'm an evangelist there. I speak at conferences, I write blog posts, I do Twitter.
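To make concrete what those "intricacies" look like: an explicit wait in Selenium is essentially a polling loop around a condition. This is not Selenium's actual source, just a minimal sketch of the mechanics that eat our time, and the `element_is_visible` stub here is a made-up stand-in for a real DOM check.

```python
import time

def explicit_wait(condition, timeout=10.0, poll_interval=0.5):
    """Poll a condition until it returns a truthy value or the timeout
    expires -- roughly what Selenium's WebDriverWait does for us."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll_interval)

# Simulate an element that only "appears" on the third poll.
state = {"calls": 0}
def element_is_visible():
    state["calls"] += 1
    return state["calls"] >= 3

print(explicit_wait(element_is_visible, timeout=5.0, poll_interval=0.01))
```

Every one of those timeouts and poll intervals is a knob we tune by hand today, and each mistuned knob is a flaky test.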
Please follow me. It's important; I get paid according to the number of followers. No, I'm just joking. I'm an evangelist, but I am also a software architect; I write code at Applitools. And that's enough about me. Autonomous driving. As I said, driverless cars are here and they're racking up serious mileage. Tesla's Autopilot is now at 222 million miles of driving. That's a huge amount. And Waymo. Waymo is Google; Google split into lots of companies, I don't know if you've heard, and Waymo is the driving one. So how much did Waymo do? I don't remember exactly. I think it was 3 million miles in 2017, and I think they're now at around 6 million. So wait, wait, wait. Does that mean Tesla is much better than Google at autonomous driving, if Tesla did 222 million and Google did so much less? And the answer is no. Tesla is not winning. Why? Because there are levels of autonomy, it seems. Back in the day there were lots of ways to define the levels, and then an American body, it doesn't really matter who, defined five levels of autonomous driving. And as we will see, Google is way above everybody. Google is really, really good at autonomous driving, so don't let the numbers fool you. Let's talk about the levels. There are actually six, because they go from level zero to level five. Level zero is no autonomy. Basically, if you have a car, then congratulations: you are the owner of a level zero autonomous car, which means no autonomy whatsoever. Level one is driver assistance. It's cruise control, but not the regular kind where you just limit your speed. I was in a demo once, a Volkswagen, and it was amazing. You're driving, and suddenly there's a car stopped ahead. And the guy next to you says, just press the gas, don't press the brakes. And I'm saying, but there's a car! He says, don't worry, just press the gas. And you press the gas, and the car slowly stops.
Because it senses there's a car in front of it. And this kind of cruise control is similar: it lets you do 100 kilometers per hour, but it knows how to keep its distance from the cars around it. So this is level one. Now, I don't want to go through the levels blindly; the important question is, what are the technological advances that gave us level one, level two, et cetera? For level one there are two of them. One of them is vision. To understand how to navigate and when to stop, the computer, the AI, needs to see what is in front of it, or behind it, or to the sides. And the way most autonomous cars see is LiDAR. LiDAR is light detection and ranging. They have this pulsating thing, a round thing spinning around and around, throwing pulses of light all over the environment, waiting for them to come back, and measuring the time it took. The time determines the distance, and that way they can build a 3D image of the environment. We don't do that. We don't have the pulses. But we have two eyes, and we're much smarter; even a fly, with a much smaller brain, manages it. We know how to combine the two images and figure out approximately what the distance is. LiDAR lets the car do it accurately. That's the vision part. But vision without brains is nothing. So the second advance is algorithms: algorithms that understand what is being seen. Those algorithms enable the cruise control and the automatic braking systems that we now have in level one. Level two is partial automation. This is where it gets interesting. Tesla and most car manufacturers are at level two, maybe level three depending on how you look at it. What does partial automation mean? It means that you can take your hands off the wheel and the car will drive itself, okay?
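The time-of-flight arithmetic behind LiDAR is simple enough to sketch. The pulse travels out and back, so the distance is half the round-trip time multiplied by the speed of light; the numbers below are illustrative, not from any real sensor.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_distance(round_trip_seconds):
    """Distance to the object that reflected a LiDAR pulse: the pulse
    covers the distance twice (there and back), so divide by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds bounced off something
# roughly 30 metres away.
print(round(lidar_distance(200e-9), 2))
```

Doing this millions of times per second, in every direction, is what turns those pulses into a 3D picture of the surroundings.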
But as some of the people driving Teslas found out, unfortunately too late for them, you have to be constantly monitoring what is happening. And if the AI fails, you have to take control back and drive it yourself. And it can't drive anywhere, not even in the United States. Highways are fine, regular roads are fine, but in difficult conditions we're not talking really good driving. So yes, we're starting to see commercial partial automation in Tesla and others. So what differentiates level one from level two? It's not vision; the vision work is done, and LiDAR is great. It's the algorithms. It's the algorithms that enable a car to actually drive itself. Maybe not well, but it drives itself, okay? And it's all about AI algorithms; the previous algorithms weren't really AI. And what are AI algorithms? These are machine learning algorithms: neural networks and deep learning. Let's talk about what those are. Let's talk about neural networks, because that is the main thing that made our current AIs what they are. This is the big step that happened somewhere in the 2000s that made us all AI crazy. How does a neural network work? A neural network gets input: numbers, just numbers. Those are the X1, X2, X3, X4 entering. They go through a complex series of nodes, and we'll talk in a second about what those nodes do. And out comes output, which is again numbers: maybe one number, maybe more, et cetera. That is a neural network. Numbers entering into nodes, going through some kind of processing, which we'll talk about in a second, and then coming out. And that's it. Those are neural networks. Now let's talk about how we use them. And remember, we're doing vision, okay? We're trying to understand what is in front of us. How does this work? There are lots of kinds of neural networks.
A disclaimer, by the way: I am not an AI expert. I sit next to the AI experts at Applitools; there are three of them, and they're the smart guys. We've been brainstorming like mad since the beginning. One of them arrived at Applitools around the same time as I did, and I love talking to him. I learned so much, which I think is for the best, because if he were standing here, you probably wouldn't understand a word. I hope I'm a good buffer between him and you. So how does a convolutional neural network work? The answer is: you put in an image, and out comes the probability that it's a dog, a cat, a bird, or whatever. What do I mean? Images are numbers. Remember, images are just numbers. So you feed the image into the neural network, it goes through all those nodes, which we'll talk about in a second, and out comes the probability. That is a convolutional neural network, and in all those demos of AI, you usually see a convolutional neural network. How does it work? How do those nodes process the information? Let's zoom in on a node, okay? This is a node, one of those white things on the slide. As you can see, each one gets input from multiple sources. It takes the inputs and multiplies them by weights: each number is multiplied by its own weight. Then we sum them up. That's it; there's also a small function applied at the end, an activation function, but the details don't really matter. Out comes a number, and that is the value of that neuron. And why is it called a neuron? Because in biology, our neurons perform very, very similarly. It's like a simulation of a real neuron in our brains and our bodies: we get electrical impulses, weights get applied, et cetera, and we sum them up.
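As a toy illustration of that multiply, sum, squash step: one neuron in code. The inputs and weights here are made up, and a real network has thousands of such neurons wired into layers, but this really is all a single node does.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One artificial neuron: multiply each input by its weight, sum the
    results, then squash the total through an activation function
    (a sigmoid here, which maps any number into the range 0..1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Three inputs, three weights -- the weights are what training adjusts.
print(neuron([1.0, 0.5, -1.0], [0.4, 0.6, 0.2]))
```

Change the weights and the same neuron computes something completely different, which is exactly the point the talk makes next.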
It's probably much more complex in biology, but then the neuron fires, pulsing out another electrical impulse. That's the whole program; it's that simple. It is so simple you would not believe that we can get from here to driverless cars. But the big question is: if this is an individual neuron, who decides the weights? It's all about the weights, okay? Because the same neural network can be used to identify a dog, can be used to drive a car, can be used for whatever. It's all about the weights. That is the difference between one neural network and another. And how do we figure out what those weights are? We don't guess, okay? What we do is train. You must have heard about training in AI, AlphaGo and all the rest; we train them. This is what happens, and it's very simple. It's almost embarrassingly simple, isn't it? That, for me, is AI. How do we train them? Let's take a cat, everybody likes cats. Let's train a cat recognizer. We feed in a picture of a cat, it goes through the network, with random weights at the beginning, and out comes: yes, it's a cat. So we say, great. Then we feed in another cat, and it says: no, it's not a cat. So we say, wrong answer. The answer should be one; the answer should be yes, it's a cat. So we go back and adjust the weights so that it will give the right answer. How? Mathematics I don't really understand; it's called back propagation. We propagate the error backwards and fix those weights. Then we feed in a picture of a dog, and it says: yes, it's a cat. Oh my God, wrong answer, and we back propagate. Then we feed in another cat, and it says: yes, it's a cat. Good, no back propagation needed, et cetera, et cetera. We feed in thousands and hundreds of thousands of pictures of cats and not-cats, or if you've seen Silicon Valley, hot dogs and not hot dogs; that's basically what they did.
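That feed-in, compare, adjust-the-weights cycle can be sketched with a single neuron and made-up "cat features". Real back propagation works the error through many layers; this toy version only nudges one neuron's weights against its error, but the idea, wrong answer means adjust the weights, is the same.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "images": two invented features per sample; label 1 = cat, 0 = not cat.
samples = [([1.0, 0.9], 1), ([0.9, 1.0], 1), ([0.1, 0.2], 0), ([0.2, 0.0], 0)]

weights = [0.0, 0.0]   # arbitrary starting weights
bias = 0.0
learning_rate = 1.0

for _ in range(200):                      # feed the data in, over and over
    for features, label in samples:
        z = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = sigmoid(z)
        error = prediction - label        # non-zero means "wrong answer"
        # "Back propagate": nudge each weight to shrink the error.
        for i, x in enumerate(features):
            weights[i] -= learning_rate * error * x
        bias -= learning_rate * error

cat = sigmoid(sum(w * x for w, x in zip(weights, [1.0, 1.0])) + bias)
not_cat = sigmoid(sum(w * x for w, x in zip(weights, [0.0, 0.1])) + bias)
print(round(cat, 2), round(not_cat, 2))
```

After a few hundred passes the weights have drifted to values where cat-like inputs score near 1 and the rest near 0; nobody wrote those weights down, the data produced them.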
They fed hot dogs into the machine and trained it using back propagation. After thousands and hundreds of thousands of images, those weights are correct, and we start feeding in real data, not training data, and we get the correct answers. How? Why does it work? Nobody really knows. Really. Nobody really, really understands. And what about the topology? How do they figure out how many neurons go in a layer, and how the layers are connected? Do they figure it out with math? No. Intuition. Really, they just guess. They have these intuitions about which topology is best for this kind of problem and which topology is best for that kind of problem. They just do guesswork. It's all a little bit like magic from my point of view. You know, when you're a developer, they tell you: don't do this, and I hate it when developers do this. Try something, and if it works, good; if it doesn't, try something else; if that works, good; if not, try again. Never do that. Never. Just think about what should work, try it, and if it doesn't work, figure out why it doesn't work and fix that. Don't do the try-this, if-it-doesn't-work-try-that loop. But in AI, that is what they do. What we call a hack, they call AI. And this is why mileage numbers are so important: because we need data. We need data. And who's feeding the data to Google, by the way, and to Facebook? We are. Like in The Matrix, but we're not batteries; we're data-producing machines. This is why mileage numbers are important: data is coming from those driverless experiments, the data is training the machines, and the machines are getting better and better and better. Now, there's machine learning and there's deep learning, and I couldn't figure out the difference, so I asked the smart person next to me, the guy who really understands AI, and he explained it to me. He said: machine learning is the whole thing.
It's not just neural networks; there are a lot of other statistical ways to solve problems, similar to neural networks. Deep learning is taking the neural network idea further. In the beginning, in the 70s, they tried neural networks, and they didn't really work. Why? Because, as Simon was saying earlier, we didn't have really good CPUs. One megahertz. I remember I got a two megahertz machine and it was like, oh my God, this is so fast. I got a 16K memory extension, I'm not joking, for my Sinclair ZX81, and I was like, oh my God, what do I do with so much memory? So much memory. And now I have 16 gigabytes, and I'm like, that's dope. So they tried it and it didn't work. It didn't work because machines were slow and could only handle a few layers and a few neurons; otherwise it would be too slow. But now? It's not that CPUs are getting that much faster. They are, but what do we have now that we didn't have back then? Call of Duty. Fortnite. How do these things work? I heard it; say it louder: GPUs. Graphical processing units, the same chips that make Call of Duty work so well, enable us to train machines, because they are very, very good at doing lots of similar things again and again and again. And so now our neural networks, instead of having a few neurons and a few layers, can be really wide and really, really deep. And suddenly we can figure out that this is a cat and this is a dog, which we couldn't do before, okay? So that's what deep learning is about: more layers, more nodes, more neurons in the network. Okay, but what has this got to do with driverless cars? The answer is, we're getting images all the time from driverless cars, camera photos or LiDAR images, and we look at them and figure out: what are the traffic signals, the speed limit signs, the stop signs? We need to find them. It's easy for us humans; it's very difficult for AI.
So yes, in comes an image, and out come the stop signs, the speed limit signs, all those kinds of signs. This is one way to use a neural network, and it's not just one way; there are literally tens and hundreds of neural networks working inside driverless cars. Another one is incredibly crazy: in goes an image, and out comes the angle of the steering wheel. That's it. In comes an image, out comes the steering angle. It's obviously not that simple, but it's not much more complicated than that, okay? So we have lots of neural networks inside a car, this kind, that kind, and more kinds. And it's not just neural networks; there are all kinds of AI algorithms in there. What about the levels? So that was level two. Level three is much more autonomy. Humans don't need to monitor anymore. They can't quite go to sleep, but they can read a book or do some work, and the car will beep to notify them when it can't handle a situation, because there are situations it cannot handle. That's fine. That's level three. Most companies are at level two, progressing towards level three. Google is way above that, at level four. Level four says: I can drive the car whenever; I don't need a human, I don't even need a steering wheel. They have cars without steering wheels. It can drive anywhere, and no human needs to monitor; just go to sleep, that's fine. But it's not level five, because level four can't handle difficult vision situations: fog, snowstorms, heavy rain, et cetera. So that's level four, and Google is working on moving to level five. Level five is the Holy Grail, and that's a Monty Python reference for whoever hasn't noticed. Any condition, any time, any moment. This is the highest level. Nobody has reached it yet, but I'm sure in the next five or ten years we will be there. Okay, that was autonomous driving.
Now comes the brainstorming part, where I was thinking with my AI partner: okay, what has this got to do with autonomous testing? I'll give you a blueprint of our way of thinking about the five levels. Level zero is no autonomy. And it's important to note, that does not mean manual testing. You write the tests, you run the tests; there's no AI involved. It's still automated testing, but it's not autonomous. Level one is testing assistance. What was the differentiator for level one in driving? The first differentiator was vision. If the computer can see, it will help us. And seeing is not just about seeing the page; it's also about seeing the DOM, which we'll talk about in a second. How can vision help us? The answer is actually pretty simple. Most of the tests we write look like this: action, action, validation; action, action, validation; action, action, action, validation. The actions do the clicks and the typing and everything, and the validations check: is the value here good? Is the value there good? Et cetera. All of it using those pesky, ever-changing selectors, or locators as we call them in Selenium. If AI can see the page, can that help us? And the answer is yes. Say the AI takes a full screenshot of the page and compares it to the previous one. I don't need those pesky selectors anymore; I just compare the two pages, and the AI will find the problem for me. It will even find bugs that I didn't think about, and that's important. Unfortunately, until the last four years or so, this didn't work. Why not? Because comparing pixels does not work. If I do a pixel comparison of two images, I get hundreds of false positives: because of anti-aliasing differences, because of GPU differences, because of lots of differences. We need a better understanding. We need something more.
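Here's a tiny illustration of why raw pixel comparison drowns you in false positives. The "screenshots" are invented 3×3 grayscale patches, and the tolerant version is a crude stand-in for illustration only, not how any real visual-testing engine actually compares images.

```python
# Two renderings of the "same" patch; the second has a one-unit
# anti-aliasing difference on a single edge pixel.
baseline = [[255, 255, 0], [255, 255, 0], [255, 255, 0]]
current  = [[255, 254, 0], [255, 255, 0], [255, 255, 0]]

def exact_match(a, b):
    """Naive pixel-by-pixel comparison -- flags every tiny rendering
    difference as a failure (a false positive)."""
    return a == b

def tolerant_match(a, b, threshold=3):
    """Ignore sub-perceptual differences -- a toy stand-in for the
    perceptual comparison a visual-testing AI performs."""
    return all(abs(pa - pb) <= threshold
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

print(exact_match(baseline, current))     # False -- a false positive
print(tolerant_match(baseline, current))  # True  -- no real change
```

A single threshold like this is still far too crude for real pages; the point is only that "equal pixels" is the wrong question, and "would a human see a difference" is the right one.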
We need algorithms that can see the page as it should be seen, as humans do. And that is level one AI: looking at the page as a human does and doing the validations for us, instead of us spending time figuring out the selectors and locators on the page. Just take a screenshot and let the AI do all the work. What about level two? What was the differentiator? The differentiator was AI. What happens when we introduce AI to those validations? We get better validations. We get things like: it's not the same data, but it's the same layout. So I can compare two pages that are entirely different in terms of data, but still figure out that they are the same page. I can figure out what the text is on each page. Not only that, I can group similar changes: if the header changed, I don't need to find it on every page separately. Level three is where we really get smart, because up to now we needed a baseline. We needed a human looking at the baseline and saying, yep, this is correct. It's still the human driving; the AI is just checking. But what happens in level three? And by the way, companies like Applitools, the AI testing companies, are at level two, progressing towards level three. What happens is that we train a computer, with lots and lots and lots of images, to figure out which changes are bugs and which are not, without the aid of a human. So it can figure out that this is a bug, and it can look at the data and try to figure out what that bug is. Because we will be able, it's not there yet, to look at data of real users interacting with the pages and figure out which data is correct and which is not. And there's only one way to do that: machine learning and deep learning. Regular algorithms just won't cut it. So this is level three: determining whether there is a bug, even without the aid of a human. Now, let's be honest, can an AI really determine that?
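The "same layout, different data" idea can be sketched by comparing element kinds and positions while deliberately ignoring the text inside them. The page structures, company names, and amounts below are all invented for illustration; a real layout-matching engine works on rendered output, not tidy tuples.

```python
# Two versions of the "same" page: different data, identical layout.
# Each element is (kind, (x, y) position, text content).
page_v1 = [("header", (0, 0), "Invoices"),
           ("table-row", (0, 40), "ACME Corp / $120"),
           ("table-row", (0, 60), "Globex / $340")]
page_v2 = [("header", (0, 0), "Invoices"),
           ("table-row", (0, 40), "Initech / $99"),
           ("table-row", (0, 60), "Umbrella / $1,200")]
page_v3 = [("header", (0, 0), "Invoices"),
           ("table-row", (0, 40), "Initech / $99")]  # a row vanished

def same_layout(a, b):
    """Same page if element kinds and positions match; the text content
    (the data) is deliberately ignored."""
    strip = lambda page: [(kind, pos) for kind, pos, _text in page]
    return strip(a) == strip(b)

print(same_layout(page_v1, page_v2))  # True  -- same page, new data
print(same_layout(page_v1, page_v3))  # False -- a real layout change
```

The data changed completely between v1 and v2, yet the comparison correctly says "same page", while the missing row in v3 is flagged; that is the kind of validation a plain pixel diff can never express.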
No. It can give a good probability that something is a bug, but humans will still need to monitor it. Level four is high automation. This slide is an example of an AI: guess which side is the AI and which is the human. No, the right one is the AI, playing a game of Pong. Nobody even told it what to do; it just figured it out by itself, and it's playing Pong. And the idea, again, is observation. The AI can observe users using the application, figure out what the flows are, and then drive the tests itself using its own data. And that is something that I think will take time; I think it's incredibly difficult, and my AI partner seems to agree with me. But theoretically, if we throw enough data at it, data of real human users interacting, it can start figuring it out. So level four is the AI driving the test: it is writing the tests without our supervision. Level five, I'm sorry, but for me it's science fiction, and I love science fiction. What is level five? Level five is the AI talking to the product manager. They discuss the application, and the AI writes the specs and writes the tests, and we're done, okay? Unfortunately, no human has ever been able to understand a product manager. None; neither tester nor software developer; probably the product manager herself doesn't fully understand. So we really, really, really need a good AI here. I'm sure it's very, very far off in the future. And I added another level. As you know, I am a huge science fiction fan, and there cannot be an AI talk without the Terminator and Skynet being discussed. So level six is Skynet. There are no more humans, okay? So there's nothing to test anymore. But it does beg the question of who tests the robots' software. And I'm picturing these little robot testers running around testing the robots, which sometimes have bugs, and all that.
But anyway, that's level six, Skynet, just a joke. So are we there yet? Are we at level four? Are we going to lose our jobs? Should we go looking for another job? First of all, it's important to understand, and I hope I got this through to you: what we call AI today doesn't really understand. It doesn't look at a banana and say, oh yeah, that's a banana. It just uses complex mathematical, statistical machinery to figure it out after being trained on billions of data items. I mean, my son just got his driving license, and how many kilometers did he need to train? A hundred? And these cars are at three million miles, sorry, kilometers, and the AI still isn't fully trained. We are much smarter than they are. We understand what we're looking at, and AIs don't understand. They're idiot savants. And to give an example of the idiocy: this is a banana, right? And what is this? It's a banana, right? We know that. But if you put this little sticker here, one especially prepared, then the AI thinks it's a toaster. That is the level of AI we're talking about. It's really, really good, but it's really easy to fool. It's not real AI, it's not real intelligence. So what we have, and what AIs don't have, is intelligence: we can think. And believe me, we need that, because the rest will be taken over by machines. So here's a simple check. Does your job require thinking, or is it rote and repetitive? If you're thinking in your job, you're good: AI will not be replacing you, we will still need you. Otherwise, wait, you're not lost yet. If a tool took away all the time you spend on the boring selector stuff, would you still be thinking while you work? Would you be thinking more? If the answer is yes, you're good, because AI will take away the boring parts and leave you with the thinking part.
So yes, you're good; AI will be helping you. Otherwise, can you start thinking? I mean, yeah, I know, after 20, 30, 40 years it's a bit late, but it's never too late. If you can start thinking, you're good; AI will not be replacing you, it will be helping you. Otherwise, I'm sorry, AI will be replacing you. In 10 years, 15, who knows, maybe five. So start thinking, people. Remember, AI is a tool. Use it. Lots of companies are building AI into their tools. It doesn't mean that you need to learn machine learning or deep learning; it doesn't mean that you need to understand these technologies. You can just use the tools, and more and more tools that use AI are coming to us. Don't worry about it. And the most important thing: don't panic. It's okay, you will not be losing your jobs. So thank you very much. Any questions? Thank you. And you can come over to me; oh, and please visit the Applitools booth, and we will talk about visual testing, not AI. Oh, we have a question. "Yeah, thanks a lot for the presentation, it was really good." Thank you. "So, not really a question, just a clarification: is there an implementation right now of level one, level two, level three? Is there any GitHub project we can have a look at?" For machine learning? No, but there's a really, really good talk about AI and testing from the Candy Crush people; you know Candy Crush? The company is called King. They use machine learning to test Candy Crush, because it's very difficult, et cetera. So yes, but as I said, you don't need to understand machine learning to do this. You just need to use tools that understand it, for example Applitools and others. There are others, and in the upcoming years there will be more and more. Okay, thank you. Yes, next question.
"For testing, you don't need to take care of the IDs and all. While you were presenting, you said that we don't really need to check the IDs to find elements. So how can we do that with functional testing? Is there an approach for that?" Yes, yes. As I said, if you use level one AIs and visual testing, then instead of checking those selectors and checking that the values are correct, you just take a screenshot and compare it against the previous one. And that will work today. Today. I had a talk at STPCon about this; there's also a webinar about visual testing as an aid to functional testing, so you can watch that talk, and there's lots of code on GitHub. Okay, sure. Thank you. I think we have time for one last question. "My name is Abdul. You talked about images and a lot of data. We do use a lot of image comparison, but most of the time it's not successful: one run it fails, one run it passes. How do we handle that? I have used Applitools and I find the same issue there." I would be surprised, but come to my booth. I really don't want to turn this into a marketing thing; come to my booth and we'll talk about it. "So how do you differentiate between the images, with just a lot of data? How do you define it, how do you improve it?" Come over to my booth. Okay. Yeah. Thanks. At least one person will come; now I know. Thanks, Gil. We have chocolates. Thanks, Gil. It was really interesting to hear about autonomous cars and how they relate to your work.