Thanks very much. Can you hear me? Okay, good. So the idea for this talk is to look at some trends that are changing the way we test software. I'm going to make a couple of predictions about what's going to happen over the next five to ten years in automated testing, and how teams need to prepare for it. Most of the things I'll talk about you can do even today if you want to get your hands dirty; some of them are still not that easy, but if you find any of these ideas interesting, you can try most of them now. Over the next five to ten years, though, I think they will become much cheaper, much better and much easier.

One of the things happening in the industry now is that everything is getting more and more fragmented, and that's going to create a huge challenge for testing. Over the last five years in particular, I think we've had it relatively easy, because most browsers are now reasonably cross-compatible and everything mostly works. What I think will happen over the next five to ten years is that the number of platforms is going to explode. People doing Android testing now can complain, for good reason, but over the next five years that's going to be multiplied a hundred times.

There are a couple of reasons for that. First, there's a prediction from ABI Research that there will be 40 billion devices connected to the Internet by 2020. Some of those devices will be reasonably good computers; some will be silly IoT experiments; but more and more we'll get unexpected stuff. For example, I'm working on a collaboration tool now, and we try to test it in as many browsers as we can. A couple of months ago we got a bug report that it doesn't work on a Samsung fridge. That's slightly ridiculous.
Why on earth would you use a collaboration tool on a Samsung fridge? But you can, so people will report bugs in it. According to IDC, the Internet of Things market is going to be worth 7.1 trillion dollars by 2020, so I think we'll see a lot of investment in new devices and new ways of communicating. We're already seeing things like the Amazon Echo and Google's equivalent, whatever it's called, which doesn't have a screen any more and is purely voice controlled. So the number of devices, and the number of ways we integrate with things, is exploding.

The other thing that's happening is that cloud is everywhere now. IDC estimates that by the end of 2017, 65% of companies worldwide will be on some kind of half-private, half-public cloud hybrid, and that trend is continuing. One thing that's becoming fantastically easy with the cloud is provisioning infrastructure, which makes testing cheaper. Ten years ago, the biggest problem most teams I worked with had was getting a relevant, production-like copy of the infrastructure for testing. If you're on the cloud now, that becomes trivially easy. But some things become almost impossible, because you no longer own the infrastructure. Any ideas around infrastructure integration testing become a lot more complicated, because, short of launching a nuclear attack on an Amazon data centre, you can't really cause the infrastructure to properly go down. So those are some of the trends that I think are going to both open up opportunities and cause serious problems for how we do testing, and make us rethink how we do it.

The first real opportunity I see is that we'll be able to start changing the balance of what's considered expected versus unexpected. Most teams today have this division of expected and unexpected: there's a bit of unit testing, component testing and some automated testing for the expected stuff, and then a whole set of exploratory testing to catch unexpected problems. What people typically do with exploratory testing is go through a bunch of heuristics, looking at different formats, boundary values and so on, and try to see what the system is going to do. In my view this is a bit silly, because most of what people say they're doing when they do exploratory testing is not really looking for unexpected stuff; they're applying heuristics for things they can predict and expect.

For example, there was a big thing two years ago where, if you typed "http:" without anything else into the Skype client, Skype crashed so badly you had to uninstall and reinstall it. Restarting Skype didn't work; restarting the machine didn't work. We can say that's unexpected and should have been caught in exploratory testing, but that's silly: it's just an invalid URL. Why on earth would we not test for invalid URLs? I think the big reason most teams consider things like that unexpected is that it just costs too much to go through all those cases. It's just economics. There are five trillion ways a URL can be invalid, so automating all of that would cost too much, and the maintenance would cost too much. But it's not unexpected. It's something we need to start treating as expected during testing. And this is where a couple of emerging developments come in.
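The Skype case above is a perfectly predictable class of inputs. As a minimal sketch of the idea, mechanically mutating a valid seed URL into a pile of broken variants and checking that nothing crashes, here is a dependency-free example; `validate_url` is a stand-in for whatever your application does with a URL field, not a real library call:

```python
# Minimal input-mutation sketch: generate predictably-invalid URLs
# from a valid seed and check that the code under test survives them.

def validate_url(url):
    """Toy URL check: must have a scheme, '://' and a non-empty host."""
    if "://" not in url:
        return False
    scheme, _, rest = url.partition("://")
    return bool(scheme) and bool(rest)

def mutate(seed):
    """Yield mechanically-broken variants of a valid URL."""
    yield seed.partition("://")[0] + ":"   # "http:" - the Skype case
    yield seed.replace("://", ":/")        # mangled separator
    yield seed + " "                       # trailing whitespace
    yield seed.upper()                     # case changes
    yield ""                               # empty input
    yield seed * 100                       # absurdly long input
    yield seed.replace(".", "\u3002")      # ideographic full stop

def survives_mutations(seed):
    """True if validate_url never raises, whatever it returns."""
    for bad in mutate(seed):
        try:
            validate_url(bad)  # must not blow up on any variant
        except Exception:
            return False
    return True

print(survives_mutations("http://example.com"))
```

The point is not the toy validator; it's that every variant here is generated by a rule, so none of them deserves to be called "unexpected".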
I think these are going to make testing like that significantly cheaper and easier to automate. We've known about mutation testing for at least fifteen or twenty years: applying algorithms to modify inputs and figure out whether something is going to break. This kind of testing exists in the Haskell community as QuickCheck, and there was a similar tool doing mutation testing in Java, whose name escapes me. So we've had tools for this already; it's just been too expensive to run and too expensive to maintain, and that's why people don't do it. But calling these cases unexpected is a bit weird, because there is, for example, a database published on GitHub by Max Woolf of 65 kilobytes of typically problematic strings. We can just take that and run through those things; it has all these weird edge-case combinations. I published a tool for Chrome called Bug Magnet that, on right-click, makes a ton of these typical problematic "unexpected" cases available, so you can just try how a field behaves with valid URLs, invalid URLs and a ton of other things. So we have heuristics for this, and we have databases of this stuff; we're just not using them for automation, because it's too expensive now.
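Driving a database of problematic strings through an input field is mechanically trivial. In real use you'd load Max Woolf's full list from GitHub and drive a real UI; in this sketch a tiny inline sample stands in for the file, and `handle_name_field` is a hypothetical function under test:

```python
# Sketch: run every "naughty string" through a form-handling function
# and collect the ones that make it blow up.

NAUGHTY = [
    "",                          # empty input
    "'; DROP TABLE users;--",    # SQL-injection-shaped
    "<script>alert(1)</script>", # markup in a text field
    "ThisIsALongString" * 50,    # absurd length
    "\u0646\u0635 \u0639\u0631\u0628\u064a",  # right-to-left text
    "\U0001F916\U0001F525",      # emoji outside the BMP
]

def handle_name_field(value):
    """Stand-in for the app: must return a string and never raise."""
    return value.strip()[:255]

def run_naughty_strings(strings, handler):
    """Return the list of inputs that crashed or confused the handler."""
    failures = []
    for s in strings:
        try:
            result = handler(s)
            if not isinstance(result, str):
                failures.append(s)
        except Exception:
            failures.append(s)
    return failures

print(run_naughty_strings(NAUGHTY, handle_name_field))
```

An empty failure list means every heuristic case was survived; anything else is a bug report you didn't have to wait for a fridge owner to file.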
What's really interesting is to look at the trends happening in the cloud. For example, Amazon has a device farm where they will run your tests on hundreds of real devices in parallel and report back in minutes, and that costs almost nothing. So all of a sudden it doesn't cost too much to run these tests on all the input fields in your application, because Amazon can do that at scale. They do it by automating through the accessibility APIs on all those devices.

There's another really interesting project that Jason Huggins is working on, called Tapster. Tapster is a physical tapping robot that you can automate, using a kind of stylus. The first version was developed using the technology used in 3D printers to move the pen around; you can download the 3D-print schematics from GitHub and print your own Tapster. That means we can now start automating the testing of phones and devices, and all those weird Samsung screens that explode if you leave them charging, not just through accessibility APIs that have to simulate a lot of things, but by really tapping. At the moment there's no device farm that does this, but given where Amazon is going with device farms for everything, I can imagine that in a couple of years somebody will start offering a device farm of Tapster robots running on real devices, on lots of these weird things, where you'll be able to send a database of weird but completely predictable strings to see whether any field in your application does something unexpected.

There are farms of applications now that are really interesting for this. BrowserStack lets you test and see how your app works in over a thousand combinations of operating systems and browsers. Sauce Labs, from the people who made Selenium, lets you automate over 800 combinations of browsers and operating systems using Selenium. That's already here.

Given those capabilities, running tests on all our fields for all the data points would cost a bit, but it's manageable now. Combine Amazon's hunger for undercutting everybody else by building cloud farms, devices that can physically tap, and applications like these, and what I think is going to happen over the next five years is amazing: we're going to get this combination of cloud device farms, browser farms and databases of testing heuristics that will mean we no longer have to call these things unexpected. It's 2016; when somebody types an apostrophe in a name, or an incomplete URL, it's irresponsible to call that unexpected now.

I think that's going to let us finally do automated UI mutation testing in a way that liberates humans to do exploratory testing for genuinely unexpected stuff. So my prediction is that a lot of what we do with exploratory testing today is simply going to be automated, because it's predictable; it's no longer unexpected. With that in mind, we will still have a ton of stuff that's completely unpredictable, because humans are unpredictable.
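Driving a browser farm mostly comes down to generating the matrix of capability dictionaries you hand to the grid. A sketch of that, loosely following the general Selenium "desired capabilities" shape (the exact keys each vendor expects vary, so treat these as illustrative):

```python
# Sketch: build the browser/OS test matrix you would hand to a
# Sauce Labs or BrowserStack style grid.
from itertools import product

BROWSERS = ["chrome", "firefox", "safari", "edge"]
PLATFORMS = ["Windows 10", "macOS 10.12", "Linux"]

def build_matrix(browsers, platforms):
    """Cross every browser with every platform into capability dicts,
    skipping combinations that don't exist in the real world."""
    matrix = []
    for browser, platform in product(browsers, platforms):
        if browser == "safari" and not platform.startswith("macOS"):
            continue  # Safari only ships on macOS
        if browser == "edge" and not platform.startswith("Windows"):
            continue  # (2016-era) Edge only ships on Windows
        matrix.append({"browserName": browser, "platform": platform})
    return matrix

matrix = build_matrix(BROWSERS, PLATFORMS)
print(len(matrix))  # chrome/firefox on 3 platforms each, safari 1, edge 1
```

Each dictionary in the matrix would become one remote session; the farm runs them in parallel, which is exactly why the economics have changed.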
I'm not saying this will replace testers, but it will give testers more time to do important stuff. When we start looking at humans being unpredictable, the big question is: is one person testing something using heuristics really better than five million random monkeys tapping on the keyboard? For some stuff, absolutely; but for some stuff it will be better to get five million random people to try to use something. This is where we can start detecting completely unexpected things, because humans are unpredictable. We had cases, for example, where people would misuse our system in weird and wonderful ways using Google Translate, which would break everything. It was impossible for me to predict this until we started working with real users and figuring out how they use the system, so our testers were not predicting those cases.

One really interesting thing happening in the industry is that there are more and more ways of organising groups of people. We're getting crowdfunding campaigns; we're getting crowd-management websites; and Amazon has this thing called Mechanical Turk, where you can hire 10,000 people to do a very simple task for almost no money. So if you're trying to figure out whether 10,000 people smoke-testing your stuff will break it in some weird and wonderful ways...
That is now accessible, and it's reasonably cheap. I think that's going to let smart testers focus on real, proper exploratory work: random human stuff we can farm out to random humans, and predictable databases we can farm out to mutation testing. The big problem with using this now is managing the information. I can organise a smoke test with 10,000 people for almost no money, but I cannot go through the feedback in any reasonable time. I will see that the system broke, but it's impossible to organise that information easily, or to figure out what actually happened where, because we lack management tools for that, and we lack organisation tools for running testing sessions like this. There are already some companies, like UserTesting, where you can pay to bring relevant users to your website, and they will film the sessions and show them to you. That's not going to scale to the level of Mechanical Turk, but there are some test-management and crowd-management tools emerging. What I would really love to see over the next five to ten years is the marriage of these two things, where we get better crowd-management tools, so smart testers can direct 10,000 people to do something and get a nice summary in five minutes: how many of those 10,000 people were able to complete the workflow, how many got confused, how many broke the system, how many did something weird?
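That five-minute summary is, at its core, just aggregation over raw session results. A minimal sketch, with an invented session format (a dict with an "outcome" field), standing in for whatever a real crowd-management tool would collect:

```python
# Sketch: collapse thousands of raw crowd-testing sessions into the
# handful of numbers a tester actually needs.
from collections import Counter

def summarise(sessions):
    """Summarise crowd sessions into counts and a completion rate."""
    counts = Counter(s["outcome"] for s in sessions)
    total = len(sessions)
    return {
        "total": total,
        "completed": counts.get("completed", 0),
        "confused": counts.get("confused", 0),
        "crashed": counts.get("crashed", 0),
        "completion_rate": counts.get("completed", 0) / total if total else 0.0,
    }

# Simulated feedback from a small smoke test:
sessions = (
    [{"outcome": "completed"}] * 7
    + [{"outcome": "confused"}] * 2
    + [{"outcome": "crashed"}] * 1
)
print(summarise(sessions))
```

The hard part isn't this arithmetic; it's capturing what each of those 10,000 people actually did so the "crashed" bucket leads somewhere.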
So my second prediction for the next five to ten years is that we'll start seeing these automated crowdsource-organisation tools that will help us get real humans to use our stuff on real devices. Not automated browser farms, which are good for predictable stuff, but really random, unpredictable stuff, where we'll be able to test at a statistically significant level. Doing a random test with three people, okay, they might break the system, but if everything works, does that really tell you anything? Doing a random test with 10,000 or 20,000 people is going to become really interesting. And I think what we'll start seeing there is not a replacement for smart testing, because it's impossible to do any smart testing that way, but smoke testing as a service. If we send this to people in Thailand and Finland and Germany and the Czech Republic and Brazil, and they start entering their real names and real addresses, will something weird break? What if they use it with Google Translate? That kind of smoke testing is going to become a lot easier to do with these crowd-management tools. And from a usability-testing perspective, this can give us automated focus groups. Lots of companies now make bold decisions about their products by doing focus groups with 10 or 15 people. That's not statistically relevant.
You might have picked the wrong 10 people and then made a stupid decision for your product. But imagine you could run an automated focus group with 20,000 people and get statistically relevant results. That's going to revolutionise how we do usability testing and product insight.

The other really interesting question left is: how do we help humans make better decisions, given all these management tools, given the explosion of devices, given these better, cheaper test-automation opportunities? That's the second big opportunity in the changes happening in the industry: assisting humans in making testing decisions. Besides the predictable stuff, there is always going to be unpredictable stuff, things we can only explore and look at. One of the really interesting developments of the last ten years in the testing community is approval-style testing. It never really took off, because it's too expensive to manage. As opposed to the type of test automation where you predict the result and compare whether the system does what you predicted, in approval-style testing you have a baseline of what the system does; you run your test, capture what the system did, and approve or reject the change. The first tool I heard about for this was called TextTest, which was really smart about comparing log files. TextTest is incredibly good for making very complex changes to big systems where you can't really predict everything that's going to happen. You tweak something, you run the process, and it compares log files; it's smart enough to discard timestamps, look at the important things, and just show you the difference: this is how the process changed, is this what you expected or not? If it is, you can use the new output as the baseline. That can help significantly with automating around legacy systems, complicated systems, unpredictable systems, systems with too many moving parts, but it's too expensive to manage. And looking at what's happening in the industry now, with the number of devices and platforms, and with deployment to the cloud where you no longer control the infrastructure, we will start seeing unpredictable stuff more and more, and approval-style testing is going to become a lot more important.

This is where a couple of really interesting ideas emerged recently. I tend to waste months of my life playing video games when I can, and one of the really exciting things in the video-gaming space now is a game called No Man's Sky. It came out a couple of months ago, but people have been writing about it for years, because it's a space exploration game with a mathematically generated universe of 18 quintillion stars. I have no idea how much 18 quintillion is, but it sounds huge. Each of those stars has planets orbiting it, and each of those planets is supposed to be playable for months and months, all completely mathematically generated. To create some sense of a realistic environment, they modelled Earth's animals and buildings and then applied some mutation maths: for a new planet they would take a giraffe and shorten the neck or change the colours, or take a lion and modify it somehow, or take some buildings and extend or shrink them, so it all looks realistic. Now, the big problem with something like this is: how do you test it?
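Part of the answer turns out to be exactly the approval-style idea above. Mechanically, the TextTest-style core (normalise away volatile noise such as timestamps, diff against an approved baseline, surface only real changes for a human to approve) fits in a few lines; the log format here is invented for the illustration:

```python
# Sketch of approval-style log comparison: timestamps are volatile,
# so they are normalised away before diffing against the baseline.
import re

TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def normalise(log):
    """Replace volatile timestamps so they never show up as diffs."""
    return [TIMESTAMP.sub("<TIME>", line) for line in log]

def diff_against_baseline(baseline, current):
    """Return (line_number, old, new) triples for real changes only."""
    changes = []
    for i, (old, new) in enumerate(zip(normalise(baseline), normalise(current))):
        if old != new:
            changes.append((i, old, new))
    return changes

baseline = [
    "2016-11-01 09:00:00 order received",
    "2016-11-01 09:00:01 payment ok",
]
current = [
    "2016-11-02 14:30:00 order received",    # only the timestamp moved
    "2016-11-02 14:30:02 payment declined",  # a genuine behaviour change
]
print(diff_against_baseline(baseline, current))
```

If the human approves the change, `current` simply becomes the new baseline; no expected results were ever written down.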
There's no expected result, and even if there were, there are too many expected results for any human to reasonably test in any given amount of time. And again, there is no right or wrong here; the mutations are mathematically generated. So the way they're testing this is amazing: pretty much the way NASA explores space. They've built software probes that fly around this universe and film what they see, and in the developer room there are big screens showing video of what these probes are seeing. At some point, when somebody sees something weird, they stop, they pause, they rewind: well, this doesn't really look good. Then they change the maths a bit, run it again, see how it looks, and they can approve or reject the change.

I think this whole idea of running probes through the system and filming what the probes do is going to revolutionise "unexpected" testing and help humans make better decisions. Okay, for a video game you can literally film what the probes see; but the general idea, that something flies through our software in some way, records what it sees or does, then shows it to us and asks "does this look good or not?", applies much more widely. For that, we need to be able to very quickly see a comparison of the old against the new, and there are some really interesting tools emerging in this space. Five or six years ago the BBC wrote a tool called Wraith, which compares screenshots of a website and just shows you the differences: help me make a decision, is this what I expected or not?
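The heart of a Wraith-style comparison can be sketched in a few lines. Here screenshots are modelled as 2D lists of pixel values so the sketch stays dependency-free; a real tool would load rendered PNGs instead:

```python
# Sketch of a visual diff: compare two screenshots pixel by pixel and
# report how much changed and the bounding box of the change.

def visual_diff(before, after):
    """Return (changed_pixel_count, bounding_box_of_change_or_None)."""
    changed = 0
    xs, ys = [], []
    for y, (row_a, row_b) in enumerate(zip(before, after)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if a != b:
                changed += 1
                xs.append(x)
                ys.append(y)
    if not changed:
        return 0, None
    return changed, (min(xs), min(ys), max(xs), max(ys))

# A 4x4 "screenshot" where a two-pixel region changed after a CSS tweak:
before = [[0] * 4 for _ in range(4)]
after = [row[:] for row in before]
after[2][1] = after[2][2] = 255

print(visual_diff(before, after))  # -> (2, (1, 2, 2, 2))
```

Real tools add fuzz tolerances and anti-aliasing heuristics on top, but the decision they support is the same: here is the difference, approve or reject.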
So we changed some CSS; here's a very quick visual diff; yes, I expected this, or let's go back. That was the early visual-comparison tooling, helping people make better decisions. Then tools like Xebia's VisualReview emerged, with some workflow built in: not just taking screenshots manually and comparing them, but "here are the five screenshots, do you approve or reject?", integrated into the build system. For a while there was a tool called DomReactor, now deprecated, which was really good at comparing how a site looks in two different browsers and visually highlighting the differences; I would love to see somebody pick something like that up again. And now we're seeing really interesting tools like Applitools, a management tool you can use on top of Selenium or a bunch of other automation tools, even QTP, I think. It records the session automatically for you and is really good at presenting the differences in the video, so you can make a good decision about whether a change is good or not, without having to predict everything up front.

These things are still emerging, and it's very early for these tools to be really effective. But what I would love to see is more ways of doing automated probes through software. TextTest does log files; Applitools does user interfaces. I think this is a really interesting perspective once you start looking at really complex systems deployed on complex infrastructures, the Internet of Things, things that are unknown and unexpected. What I think we're going to start doing as a community, a lot more, is designing probes that fly through our software and record
what's going on, and then have these management tools present the differences, so we can say: yes, that's the change I expected; or, hey, this is a bit of an unexpected change, let's talk to the business users about whether this is okay or not, and be able to manage large processes like that. So I think approval-style testing is going to become a lot cheaper and a lot easier.

There's of course other stuff when we talk about user interfaces that is more predictable, and there's a whole class of new tools emerging for user interface testing now that's really interesting. One of the biggest problems for UX and user interface testing is layout: we can do a lot of functionality testing, but even the best predictable functional tests can't tell you that something is 10 pixels over something else and looks really ugly, and that's why we need humans to look at it. We're now getting tools to do automated testing of layouts. For example, James Shore wrote a tool called Quixote, which is a unit-testing tool for CSS layouts. You can say something like "I expect the top edge of the navbar to be at this point", and it's not going to work this out computationally; it actually looks at the rendered screen, figures out where the element is and where it should be, and tells you it's a bit lower or a bit higher than expected. And you can use tools like Karma to run an automated test like this in 20 browsers and on 20 devices and very quickly get a report.
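Under the hood, assertions like that boil down to evaluating declarative rules against measured bounding boxes. A minimal sketch, with an invented box format and rule names (a real tool like Quixote or Galen measures the boxes from the rendered page):

```python
# Sketch of layout assertions over measured element bounding boxes.

def check_layout(boxes, rules):
    """Evaluate simple layout rules against measured element boxes.

    boxes: {name: (left, top, width, height)}
    rules: list of ("below", a, b) or ("inside", a, b) tuples.
    Returns a list of human-readable failures (empty means pass).
    """
    failures = []
    for kind, a, b in rules:
        la, ta, wa, ha = boxes[a]
        lb, tb, wb, hb = boxes[b]
        if kind == "below" and not ta >= tb + hb:
            failures.append(f"{a} should be below {b}")
        if kind == "inside" and not (
            la >= lb and ta >= tb and la + wa <= lb + wb and ta + ha <= tb + hb
        ):
            failures.append(f"{a} should be inside {b}")
    return failures

boxes = {
    "page":    (0, 0, 1024, 768),
    "navbar":  (0, 0, 1024, 50),
    "content": (0, 60, 1024, 700),
}
rules = [("below", "content", "navbar"), ("inside", "navbar", "page")]
print(check_layout(boxes, rules))  # -> []
```

Run the same rules against boxes measured on 20 different devices and you get a cross-platform layout report for free.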
It's amazing stuff, something we've never been able to do before. There are tools like the Galen Framework that do automated layout testing in a given-when-then style, letting people describe layout textually and then compare it across different devices and platforms. But the big problem with all of that is that the people most concerned about layout, and most relevant to say something about it, are graphic designers and UX people, and they are not the ones doing the testing, because all this stuff is still very technical. A developer or a tester has to automate the test, and then there's a round trip and a bottleneck feeding back to the UX people or designers.

That's where I think another interesting trend is emerging: we're getting much, much better prototyping and design tools, which designers are now using to almost build prototypes on their own, without any development. One really interesting app like that is POP, Prototyping on Paper. POP lets people take sticky notes, draw screens, buttons and labels on them, take photos, and very quickly assemble a workflow: this is a button, when I click it, it goes to this screen; this is a list; and so on. That's mostly used for rapid prototyping, but what I would really love to see is POP plus one of those technical layout-testing frameworks. We now have technical capabilities for automatically testing layouts.
We have technical capabilities for doing visual prototyping. When those two things get merged, what I would love to see is a completely new language that allows designers to describe visual tests by drawing something and saying ten pixels here, five pixels there. I don't think we're that far off technically; I think this is going to happen over the next five years. If you're looking for a startup idea, I think this would be an amazing thing to do, because it would shorten the feedback cycle for layout testing quite significantly, and that would be absolutely amazing.

The next opportunity I think we have over the next five to ten years is dealing with things that are impossible to predict. Humans are unpredictable. Google changes stuff all the time without telling anybody. Competitors do things we can't predict, or we depend on a third-party API that changes weirdly. I was changing between two flights six months ago in Paris, the Wi-Fi there was absolutely horrible, and Google decided to break their API for real-time collaboration; our entire system went down, so I had to fix it over a stupid Wi-Fi network, frantically debugging. Things like that are very difficult to deal with up front; we tend to leave them and test in production, which is okay, but not necessarily the best thing to do. Facebook is famous for releasing their stuff to 1% of users, then, if there are no problems, expanding that a bit further, then a bit further: testing in production. Lots of companies are starting to do that now, and I think we're beginning to accept testing in production as a viable strategy. But it costs too much to organise, it's not that easy to do, and there are lots and lots of companies where regulatory and compliance problems prevent people from doing it.

I think what's going to happen over the next five to ten years is that we're going to get a lot of help with things that are impossible to predict, primarily because we're getting better tools for operational awareness: better tools for what we can manage and how. There was a really interesting case study of things that seem impossible to predict, or whether they actually are, when Heathrow launched Terminal 5 in London. The first couple of days after launch were complete chaos. They lost or misplaced about 1,500 bags and had to ship them to a completely different airport to be sorted and returned; people got delayed; some bags were delayed for two weeks or so. At around the same time, Terminal 3 in Beijing opened, and it was flawless; everything worked perfectly. Mary, sitting over there, wrote a fantastic article a couple of years ago about how the Beijing terminal launch went basically flawlessly because they organised 8,000 people to try it out. They got people to come in, and some of them, I assume, were told to act as terrorists, some were told to be lost, some were told to be busy, and they tested the terminal before opening it. Of course, you can say it's easy to do things like that when you have the Chinese army and everything is free. But actually, Britain being Britain, there was a parliamentary inquiry after the Terminal 5 disaster, and it turned out Terminal 5 was tested by 16,000 volunteers. They ran 66 simulations before launching the terminal. They were just looking for the wrong stuff, so all those tests for the unexpected didn't help them, because they never really tested what happens when people bring that much luggage. And that's why I think this
whole problem of dealing with things that are impossible to predict is really, really interesting. We can automate as much as we want, but unexpected things will still happen. What we see now is a much bigger push towards monitoring on the client. With mobile devices becoming more and more fragmented, with browsers becoming more and more fragmented, and with deployment on magical cloud infrastructures, you can do as much testing as you want in your environment, but the app no longer runs on your hardware; it runs somewhere else. That introduces some really interesting potential problems and constraints. We get constant problems from people installing stupid ad-blocking software and misconfiguring it, and lots of weird problems from people running browsers I never knew existed. That's why I think monitoring on the client is going to become really important, along with adding all that data up.

One of the really interesting tools that came out recently is Hotjar, which is analytics on steroids: it does heat maps, it does workflows, it does all sorts of analytics and predictions on the data you're getting, to pinpoint where the problems are going to be. And there are lots of libraries coming out, like TrackJS, that do error analytics on the client: throwing away the outlier edge cases, somebody trying something on a completely weird browser on a completely weird device, but aggregating the data to give us statistically relevant information: you might have a problem here, you need to look at it.
You might you know start investigating this stuff so what I would really love to see is a combination of kind of tools like this with cloud deployment now where Maybe we can you know deploy this using mechanical Turk to 20,000 people Different countries different platforms different tools and use Tools like this to monitor what's going to happen and how it's going to work and then get statistically relevant results out of this So that's kind of I think what we're going to start having over the next five to ten is is much much better tools for behavior changes in production and Looking at well, you know are people doing something unexpected with this size is this exploding in weird and wonderful ways Are the workflows working as we expect and I think that's where we're going to start seeing much more interesting results and kind of Potentially integrating that with continuous integration getting reports like this Google is famous for this episode called 40 shades of blue where the Engineers and and the head of design had a big fight around the kind of changing the colors of the hyperlinks on the home page and This is a really interesting story because it's well documented for both sides You can read the blog post of the guy who kind of designed the stuff he was the head of design at Google then and He wrote a blog post why you cannot do good design at Google anymore And then there's a whole series of posts around kind of How they tested the stuff and what they've done so he wanted to kind of change the color on the home page to something that you know is is more beautiful and better and Whatever and they were asking why you know this change needs to happen and his idea was that This color is the you know He wanted to introduce some new blue color that is much more noticeable to a human eye and people will click more on the add links and What the engineers have done that night is they've deployed that color but also 40 other shades of 39 other shades of blue and 
They proved that his colour was nowhere close to being better than the current one. There was an article in the Guardian a couple of years later reporting that, at a conference in the UK, one of the business people working on the Google home page said the revenue difference was something like 250 million dollars, extrapolated to a whole year and the whole Google population. Stuff like that is unexpected. This guy was, I'm sure, a very qualified designer. He rose through the ranks to become head of design at Google; he knew colour theory, he knew all those things, but there was something unexpected that he couldn't predict. As a result he quit, and I think he's now working for Twitter, where they have amazing shades of blue but can't make money, while Google is making a ton of money, because they know how to do this kind of testing. I think with the cloud and these better tools, this is going to become possible for almost everybody, and we'll have those tests not just in production but integrated into a continuous build. The last opportunity I see is that big data is now everywhere, and we're getting better systems that can manage it.
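Deciding whether one shade of blue really outperforms another comes down to a standard significance test on click-through rates. Here is a small sketch of a two-proportion z-test using only the standard library; the traffic numbers are invented for illustration:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: shade A gets 1,100 clicks out of 100,000
# impressions, shade B gets 1,000 out of 100,000.
z, p = two_proportion_z(1100, 100_000, 1000, 100_000)
print(f"z={z:.2f}, p={p:.4f}")
```

With numbers like these the difference is small in absolute terms but, at this traffic volume, statistically significant, which is exactly why this kind of testing only pays off when you can cheaply expose an experiment to a lot of users.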
We're getting much, much better ways of analysing this stuff. Especially once we do client-side analytics and collect data on the client, that's a ton of information somebody needs to process. If we're doing crowdsourced testing, that's a ton of information we need to process. Of course we'll do dashboards and heat maps; that's already emerging. But one of the things that's really emerging now as well is the whole discipline of machine learning. The first time it really hit me that this is incredible was when I read an article, I think in 2012, about Target, the US supermarket chain, where they were sending pregnancy coupons to a teenage girl. Her father went into the shop and started yelling at the manager: why are you sending pregnancy and baby coupons to this girl? A week later it turned out she actually was pregnant. Based on her buying patterns, the supermarket was able to discover she was pregnant before her family knew. There's no explicit algorithm there; it's machine learning, a machine concluding things from patterns. Given that, imagine the possibilities for testing, if we can collect all this data and analytics from all these weird devices. It's not just creating heat maps and organising that information.
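As a toy illustration of a machine concluding things from patterns in a testing context, here is a tiny risk-scoring sketch over a hypothetical bug-fix history. It is nothing like a real learned model; it just weights recent fixes more heavily, in the spirit of the well-known bug-prediction heuristic that files fixed often and recently are likely to cause trouble again:

```python
import math

# Hypothetical bug-fixing commits: (file, age of the fix as a fraction
# of project history; 0.0 = oldest fix, 1.0 = most recent).
bug_fixes = [
    ("checkout.py", 0.95),
    ("checkout.py", 0.90),
    ("checkout.py", 0.40),
    ("login.py", 0.10),
    ("search.py", 0.85),
]

def risk_scores(bug_fixes):
    """Rank files by bug-fix history, weighting recent fixes more:
    a fix near age 1.0 contributes close to 1, an old one decays to 0."""
    scores = {}
    for path, age in bug_fixes:
        weight = 1 / (1 + math.exp(-12 * (age - 0.5)))  # logistic decay
        scores[path] = scores.get(path, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(risk_scores(bug_fixes))
```

Even this crude heuristic ranks the frequently and recently fixed file first; a real system would learn the weighting from data rather than hard-code it.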
It's about putting the machine to work learning from that data and predicting where the problems are going to happen. For a long time machine learning was too expensive, the domain of companies with too much money and too many really smart people, but it's now becoming available to almost anybody. Amazon has launched these monster machines with, I don't know, thousands of virtual CPUs that you can use for processing, and you can rent them relatively cheaply. Google open-sourced TensorFlow, their machine learning toolkit, the thing they use to power Google Brain, and you can benefit from that. Microsoft, being a quick follower of course, released their Distributed Machine Learning Toolkit as well. Stuff that really smart people developed, we can now just take: it's open source, and we can run it on the Amazon cloud, on the big data we collect, and learn from it. The opportunities are really amazing. MIT did an experiment a couple of months ago where they ran machine learning over the source code of lots of GitHub projects and tried to figure out how people fix the most common bugs. There's a brilliant paper where they were able to use a machine learning toolkit to automatically detect some very common types of bugs, fix them, and start submitting automated pull requests. This is where I think it gets really amazing: we can have machine learning analysing our development patterns and our deployment patterns, and start predicting where problems are going to happen. I'm almost expecting that in a couple of years we'll have this kind of big-data risk modelling where, as soon as you start developing something, it says: well, you might want to investigate this component here. I'm not entirely sure what to look for, but statistically there's a problem there, so try to investigate it a bit. That's where I think exploratory testing is going to move, and I wouldn't be too surprised to see something like this happening in a couple of years: as we develop, an assistant proposing where to go and test. So those are my predictions for the next five to ten years. Most of these tools are already available; if you want to manually combine them and experiment with these things, you can immediately benefit from this stuff. I hope I've given you at least some new ideas on how to start approaching testing. Thank you very much. I know I've run over a bit; I don't know if we have time for questions or not. Okay, two minutes for questions. Does anybody have any questions, or is it too early for that? So the question is: what are the top barriers to maturing testing to this level? I guess time. People seem to be too busy doing stuff. These are all new techniques and new technologies, and we need to invest in exploring them. All these tools are available even today, but they're not that popular yet, because I assume people are too busy doing exploratory testing on stuff that's completely predictable to start looking into this. So it's a chicken-and-egg problem. I don't know; it's probably contextual for every company. Good. Well, thank you very much for spending an hour of your life with me. I hope this was useful. Thank you.