Okay, so hi everyone. My name is Moshe Milman. I'm one of the founders of Applitools, and in today's session we're going to speak about how you can fix your automation challenges in the era of CI/CD. Before we get going, just a bit of background about myself. As I mentioned, my name is Moshe, one of the founders of Applitools, and since we started Applitools back in 2013, I've worked with many different companies, tech companies, pharmaceutical companies, financial services, banks, and retail companies, helping them improve their automation infrastructure and implement different types of automation frameworks. So I would like to think that I have quite a wide perspective on this field, and I would like to share some of that perspective and give you some of these insights as part of this session. I also used to be a professional basketball player before I got into software, but didn't do that well over there, so I switched to software. And it's my first time giving a conference talk in shorts at home, so this is pretty fun. I mean, I used to give a lot of them in person, but I guess it is what it is nowadays. So let's get going. The agenda for today: we start by speaking about some of the market trends and some benchmark numbers about what's happening today in the industry. In this section I'll share with you some results from a survey that our company did last year about the state of testing, where we surveyed about 300 companies about their testing, CI pipelines, automation, etc. You'll see some of these results here, and I'll also share some results of a different piece of research that my colleague Angie Jones (and I'm proud to call her a colleague) and myself did with some of the top brands in each of these industries.
So in each industry, like tech companies, financial services, retail, and pharmaceutical and healthcare, we picked a couple of top companies, top brands in that space, and we asked them specific questions about how they're doing things, what's working for them, and what's not working for them when it comes to testing, automation, and CI/CD. And we tried to look at the differences between what's happening with these top brands versus what's happening in the rest of the industry. In this session we'll also try to analyze some of these differences, see what challenges some of the companies in the industry are facing, which in some cases these top brands have already solved, and how you can fix some of these challenges. And we'll wrap up with Q&A at the end of the session. So let's speak about these market trends and start exploring. The first question was for the top brands; just to remind you, that's only the top companies in each industry, about 20 companies that participated in that part. The question was: how many of these companies have a major part of their testing automated as part of their CI pipeline? And we were looking for over 50% coverage that runs as part of the CI workflow or CI pipeline. Not so surprisingly, 100% of these companies actually had significant automation coverage running as part of their CI pipeline. And when you think about it, this is really important, because all these brands, or at least many of them, are doing continuous delivery, releasing many times per day. And if you would like to support this release velocity, you have to have a very high degree of automation; otherwise you just can't scale to this level of CI/CD.
Now, the surprise came when we compared this against the industry benchmarks, because the results were pretty different: when you look at the rest of the industry, over 50% of the companies had less than 50% automation coverage. So half of these 300 companies that were surveyed had less than 50% automation coverage. When we looked at the top brands, 100% of them had more than 50% automation coverage; when we looked at the rest of the industry, half the companies had less than 50% coverage. A huge difference between the top brands and the entire industry. Now, another interesting thing that we've seen in some of these top brands that we spoke with was a new role that started evolving. Some companies call it quality advocates; other companies gave it different names. The background for this role is that in many of these companies, the developers were actually responsible for writing the tests and maintaining the test infrastructure. In some cases it was either the developers or the test engineers, but even when it was the test engineers doing that, they were still embedded in the development team, working as part of the feature team or scrum team, and there was no longer a separate, dedicated QA or test automation team. Now, about the role of these quality advocates: many companies mentioned that when they made that transition from having a QA team to having the developers responsible for QA, they did have some challenges with the quality of the product, with who is responsible for quality, who is monitoring the coverage of the automated tests, and who is coaching the developers on how to write better tests.
I mean, some developers at least never really learned how to write tests and are not really passionate about writing them, so when you don't have someone who owns that or is responsible for it, in some cases you see some degradation in quality. So this new role of quality advocate is basically responsible for building the test infrastructure and, in some of the companies, coaching the developers on how to write better tests. The developers are writing the tests, but these quality advocates are looking at the work, coaching them to write the tests better, doing code reviews, etc., helping develop the test strategy, tracking the testing coverage, and monitoring the team's work as part of the ongoing release process. Now, another interesting thing we looked at was which programming languages are used by the top brands versus the rest of the industry. Here, too, there was a pretty significant difference. When we look at the top brands, 100% of the companies were using JavaScript as part of their test automation libraries. That doesn't mean they used only JavaScript; many of them used other frameworks as well. You can see about 40% had Java, about 20% were using Python, etc. But everyone was using JavaScript to some extent. And when we asked companies why they shifted to JavaScript, or how come JavaScript is so widely used, the answer was that as developers became responsible for a broader scope of testing, they wanted the developers to be able to write tests in a language they're comfortable with. In many of these companies the front-end development team is working in JavaScript, so they also switched a major part of the testing to JavaScript. Now, when we looked at the broader industry, there are two points in time when we checked the usage of different programming languages and test frameworks, not just in the top brands.
When we did that in 2019, I think around early to mid 2019, the results in the rest of the industry were that Java was by far the dominant language, at almost 45%. JavaScript came second with only 15%, then C# and Python were third and fourth, and Ruby came last. In 2020, the results were slightly different. Java still remained on top with quite a similar percentage, so not much change when it comes to Java. But we did see a huge jump in JavaScript, from 15% to 30%. C# and Ruby declined pretty significantly, and Python gained some momentum, but only a slight increase. So you can see this trend even in the rest of the industry: JavaScript is gaining a lot of popularity, and more and more companies are moving some of their test frameworks to JavaScript. After looking at the programming languages, we also wanted to see which test frameworks companies are using. Again, when you look at the industry, this was also really interesting, because when I go to conferences and listen to different talks and webinars, you hear a lot about Cypress and TestCafe and Playwright and some of these new frameworks, and sometimes you get to think, wow, these frameworks are taking over. But still, when we looked at the percentages, less than 30% of the industry was using these other tools, while almost 75% of the companies were using Selenium as their primary automated testing suite. So Selenium is still by far dominant when you look at the industry. When we looked at the top brands, the numbers were slightly different. In the top brands, when we looked at the test frameworks for web testing, Selenium had less than 50%, and Cypress had a bit more in these brands. In mobile testing, there was a pretty even distribution between Appium versus Espresso and XCUITest.
When we looked at component testing, about 30% of the companies were doing component testing and using Storybook. Storybook is also gaining momentum, and more and more companies are using it as part of their testing strategy. Visual testing was used by 100% of the companies, so 100% of these top brands were doing visual testing as part of their automated testing strategy, and all of them were doing it with Applitools as their visual testing solution. Now, when we looked at the entire industry for visual testing, the results were very different from the top brands. As I mentioned, in the top brands all of them were doing visual testing as part of their automation strategy. In the rest of the industry, about 50% were either not doing visual testing at all or doing it manually, without significant automation when it comes to visual testing. So it seems like here, too, there is a pretty significant gap between the top brands and the rest of the industry, and it can explain some of the challenges and solutions that we're going to speak about later. Now, when we looked at CI/CD, not surprisingly, all the top brands had a CI pipeline. You can see the CI/CD tools they're using; this is less interesting and we don't have enough time, so I'll skip that part. A lot of companies are using containers as part of their CI strategy and test environment strategy; if we have time, I'll share some more data points about it at the end of the session. And feature flags were another thing that many of the top brands mentioned as something they do as part of their release and testing strategy, so they can test some things in production or at a later stage. Some companies mentioned that they're moving away from feature flags, but it definitely came up multiple times within the top brands.
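To make the feature-flag idea concrete, here is a minimal sketch of the concept. The flag names, structure, and `isEnabled` helper are my own illustration, not any specific vendor's API; real flag services add rollout percentages, targeting rules, and remote configuration on top of this basic shape.

```javascript
// Minimal feature-flag sketch (illustrative only). Flags let you deploy
// code "dark" and turn a feature on later, or expose it only to test
// users in production before enabling it for everyone.
const flags = {
  newCheckout: { enabled: true,  allowList: [] },          // on for everyone
  betaSearch:  { enabled: false, allowList: ['qa-team'] }, // on only for QA
};

function isEnabled(name, user) {
  const flag = flags[name];
  if (!flag) return false; // unknown flags default to off
  return flag.enabled || flag.allowList.includes(user);
}
```

The new code path is then wrapped in a check like `if (isEnabled('newCheckout', user)) { ... }`, which is what lets teams test some things in production or at a later stage, as the top brands described.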
Okay, so after we went over some of these data points and the results from the survey, let's look at a few additional trends that we're seeing in the market. Here it's mainly my perspective; I wanted to share some of these other trends and then transition to the testing challenges and the solutions. So one trend, which I'm sure all of you are seeing, is AI-assisted testing, or using AI for testing. There is huge hype around AI, and pretty much every testing vendor now mentions that they're using AI as part of their solution, and sometimes it can be confusing. From my perspective, the three key areas where AI is used successfully when it comes to testing are these. First, reducing the test creation and maintenance overhead: there you have things like self-healing tests, detecting timing anomalies, and doing crawling in a smart way, being able to crawl the application in an automated way without writing code. The second area, which is probably the most mature use of AI for testing, is the visual testing side: visual AI, finding visual regressions, doing image processing and comparing screenshots, using deep learning for these types of use cases, and making sure that the look and feel of the UI is consistent across all the different browsers, devices, and screen sizes. And the last area, which is also gaining momentum, is test data generation and test result analysis: detecting anomalies in the results of the tests, their stability, the flakiness of the tests, et cetera. Now, another area that's regaining momentum is codeless testing. Sorry for the animated GIF; someone told me that in every talk today you have to include an animated GIF, otherwise it doesn't count, so I put one here which is to some extent related. So, codeless testing: it used to be not cool to speak about record and playback or codeless testing two or three years ago.
It used to be considered as something for companies who can't code, kind of a trade-off. But in the last couple of years I'm seeing these tools regaining traction and starting to gain momentum, and being adopted for various use cases. When we looked at the top brands, most of them are actually not using codeless testing, but I'm still seeing companies looking at this, and the tools are becoming more modern and more advanced. Some of these tools also implement AI, so these solutions are becoming legit if you don't have the skills in the team to create code-based testing, and there are even free and open-source tools like Selenium IDE, Katalon, and others. So if you don't have the skills in the team, definitely check them out. Now, another thing: there used to be a very clear separation between testing and development. It used to be different teams, different responsibilities, different leaders. From what I'm seeing, this is pretty much going away. Now the testing is done as part of the development team, by the developers or by test engineers who are part of the development team. There is no more throwing things over the wall, this attitude of "I'll give you the build and you check it; I don't care, it worked on my machine, everything is good." And there is no more this notion of a QA gateway, where only after QA has tested the build and declared it approved can it go out. Now a release can be sent to production at any point in time; the software needs to be releasable, and there is no such thing as "a release" like there used to be in the past. So, as part of that, developers started writing tests. In more and more companies the developers are involved in writing the tests; sometimes only the developers are writing the tests.
In other cases the test engineers are involved as well, but developers are helping. And there are two main schools of thought when it comes to testing; I'm sure you've seen them before. There is the Pyramid school, which basically says: do a lot of unit testing, do some integration testing, and very few end-to-end tests. The idea here is that unit tests are easier to build and maintain and quicker to execute, integration tests are a bit more complex, and end-to-end tests are the most difficult to write, take the most time to execute, and take a lot of time to maintain. So you should try to avoid a large number of end-to-end tests and only use end-to-end for things that can't be tested using integration and unit tests. And then there is the Diamond school. The Diamond school has the same idea, but says: instead of doing a lot of unit testing, do a bit less unit testing and test most of the things as part of your integration testing. The idea here is that unit tests, even though they're easier to build and run quickly, usually cover less, so you're less likely to find issues with them, while the integration testing tools are becoming more efficient, and their performance has improved significantly over the last few years. So now you can actually do less unit testing, test more with integration tests, and still do few end-to-end tests; both of these models agree on that last point. I also included one more model, the Testing Trophy by Kent C. Dodds. It's the same idea as the Diamond, with some enhancements around static analysis. I only included it because I wanted to mention Kent: if you're passionate about testing, I highly recommend that you follow him on Twitter. He writes some awesome stuff about testing, and he's a really smart person, so he comes highly recommended.
Recently I also heard about a new option that people are doing, which is no end-to-end testing at all. But the results here can be challenging, so I definitely don't recommend that option. Again, if you ask me which one is better, the answer is I don't really care, as long as you actually do testing and you properly test as part of your pipeline, as long as it's an integral part of the process, and as long as you don't do the last option of no end-to-end testing, because that is definitely not a good idea. Doing only unit testing or only component testing won't get you far. So it's definitely good to do a lot of unit testing and a lot of component testing, but you can't really overlook end-to-end testing. Another trend that I'm starting to see is that cross-browser testing is actually declining. About two or three years ago cross-browser testing was a huge trend, and everyone was pushing to do more cross-browser testing and to test on all the browsers the customers are using. It's still very important to do cross-browser testing, but the need for it is slightly reduced. The reason is that today there are actually only three main browsers remaining: you have Chrome, you have Firefox, and you have Safari. And if you look at mobile, it's pretty much only two browsers, only Chrome and Safari. You may ask: what about IE, what about Edge? So IE is mostly dead, I think you would agree with that; it's very near end of life. And Edge recently moved to Chromium, so it's becoming almost identical to Chrome, and it doesn't necessarily make sense to test Edge independently. The reality is that for most applications, the browsers are functionally the same. So if a functionality works on one browser, like a button click or a text box, the chances of that same functionality not working on a different browser, unless you do something crazy or unless it's a legacy browser, are very low.
So finding browser-specific bugs is becoming really rare, but the time and money you need to spend to find these bugs is not getting any cheaper. The only exception here is visual bugs. The rendering is still very different between different browsers, devices, screen sizes, retina displays, etc. So when you look at all of that, you still see a lot of differences between browsers, which still require cross-browser testing; it just needs a more efficient approach. So now that we've looked at these trends in the market and some of the results of the research, I wanted to switch gears and get into the automation challenges and the solutions for these challenges, which is the focus of this talk. So let's dive into these automation challenges. In this section I'm trying to cover some of the common issues that I'm seeing companies face as part of their automation strategy and automation implementation. This is what the modern software delivery cycle looks like, or at least this is how most companies would like to see it. Everyone is speaking now about CI/CD; everyone wants to do continuous delivery and release multiple times per day. In continuous delivery, everything happens all the time: you write code, you test, you build, you deploy, and it all happens continuously in kind of an endless cycle. If you would like to be able to do continuous delivery successfully, automation is extremely important. Automation is the key. And when I'm speaking about automation, it's not just saying "I have automated testing." These automated tests have to be an integral part of your CI pipeline, and they have to be really fast; the speed is extremely important. In the past you could run your tests overnight or during the weekend and come back on Monday and look at the results.
Now these tests have to run in seconds or minutes as part of every build or every pull request. They have to be reliable, so you can trust your tests; you can't spend a lot of time dealing with false positives or with environment issues. They have to be repeatable and consistent. And you have to have full visibility into the meaning of your test results: when a test fails, you have to understand why it failed and what the cause of the issue was. You also need to be able to roll back quickly, so if an issue is found after deployment, you can roll back without a lengthy, complicated process. And you need to have observability and the right metrics in your application to be able to monitor it in production and find issues even once the application is deployed, by looking at the users' behavior and the analytics trends, seeing how different metrics behaved in the previous version, comparing that to how these metrics are trending in the new version, and using that to find issues and anomalies. Now, there are many talks and webinars where I hear people speak about this magical new world where AI is testing everything: AI is writing the tests and executing the tests and analyzing the results and finding the defects, and you don't need to do anything, it's all done for you. Unfortunately, I think that we're still not there. The tools are definitely making progress when it comes to AI. I'm part of Applitools and we're focusing a lot on testing, so we definitely see the benefits of AI in testing as part of the solution, but it's still only helping in certain areas; it's not completely replacing everything. There are still a lot of areas where you need to pay attention to the processes and the different aspects that still can't be implemented with AI. So let's look at the test automation workflow and see in which areas companies are facing most of the issues.
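To make the metric-comparison idea concrete, here is a toy sketch. The function names and the simple mean-shift rule are my own simplification, not any monitoring product's API; production systems use far more robust statistics, but the shape of the check is the same: compare a metric's behavior in the new version against the previous one and flag a large shift.

```javascript
// Toy anomaly check (illustrative only): flag a metric whose mean shifted
// by more than a tolerated relative ratio between two versions.
function mean(samples) {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

// `tolerance` is a relative ratio, e.g. 0.2 means a 20% shift is suspicious.
function isAnomalous(previousSamples, newSamples, tolerance = 0.2) {
  const before = mean(previousSamples);
  const after = mean(newSamples);
  return Math.abs(after - before) / before > tolerance;
}
```

You would feed it something like checkout-completion rates or page-load times per version; a conversion metric that drops sharply right after a deploy is exactly the kind of anomaly this catches.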
So this is a typical workflow, and I'm sure you're all familiar with it: you make a product change, you execute the tests, you analyze the failed tests. If the failed tests are related to an infrastructure issue or a broken test, you fix it and rerun the tests; if they failed because of an issue in the product, you report a defect, find the root cause, and fix the bug. I mention a developer and a tester here, but as we discussed before, it might be the same person. And it goes on and on: you build a product change, you author tests to cover it, you run the tests, et cetera. Now, the first challenge is related to low test coverage. Many companies and teams are suffering from low test coverage; they can't keep up with the pace of development and maintain sufficient coverage in their automated tests. The main reason that we've seen for this challenge is that a lot of companies spend the majority of their time analyzing test failures and fixing broken tests. Actually, 40 to 70% of the test engineering team's time, depending on the company, was spent in this area, and this cut into the time available to author new tests. Because companies spend so much time analyzing the failures and fixing the broken tests, they don't have time to author new tests quickly enough, and coverage gaps start to accumulate. Now, a lot of companies use Selenium to automate end-to-end UI tests, testing the end-to-end functionality of the system through the user interface; the same applies to Appium and Cypress, et cetera. In these tests, every test is a sequence of steps where you navigate through the application, click some buttons, enter text, move between screens, and then validate the output of the system in these different areas.
So you do some assertions to validate the text, the content, the look and feel, and everything else. The problem with that approach is that you need to add a line of code for every UI element, each and every UI element on the screen, which takes a lot of code and a lot of time to build and maintain. And when the UI changes, you start getting flakiness in your tests, and it takes a lot of time to maintain them. Just to show you an example: every screen in an application nowadays, every page on your website or mobile application, sometimes has hundreds of underlying elements, and each of these elements has multiple attributes. Here is an example that one of my friends sent me, of security software that they're testing, which is a nightmare to test: try to imagine testing these charts and images and the colors and the text, and making sure the functionality is correct through the UI. And analyzing the test results is done using these complex log files, where it's really difficult to see where the issue is and identify the root cause. So to summarize the first challenge: it's low test coverage, because every assertion that you add increases the maintenance overhead, so you start asking the question, is it really worth automating? All these checks test the system functionality through the UI without really checking the actual UI, and they can only catch expected bugs. When you write assertions, you assert specific things; if you don't assert something, if you don't cover it in the code, it won't be covered in your tests, and if a new feature is added, it won't be covered in your tests. So you still need to do manual testing as part of the cycle, and it doesn't really scale. The second challenge is slow feedback. Again, we mentioned that the process of analyzing the failures and reporting bugs takes a lot of the time.
This process is done fully manually by most companies, and it causes a delay, because you need to wait for someone to look at the test results once the tests complete, analyze them, and report defects, and by the time these defects are reported, significant time has passed. Then the developer needs to do the root cause analysis and find the root cause of the issue, which is the third challenge. This takes a lot of time. The developer needs to do a context switch; they might already be on a different task. Then they revert the code, compare the current code with the last working version, find the differences, and fix the bug. This process is very manual, it takes a lot of time, and it causes delays that you can't afford in a continuous delivery pipeline. The fourth challenge is ineffective and slow cross-environment and cross-browser tests. As I mentioned, cross-browser testing is a declining trend, but it's still needed; most companies are doing cross-browser testing, they're just trying to find a more effective way to do it. And the current approach for cross-environment and cross-browser testing is ineffective and causes a lot of issues. Everyone wants to test on the same devices and browsers that their customers are using, but this prolongs the test execution time and the test result analysis time, and it requires an expensive setup, an expensive test lab with devices and browsers. And when you think about it and ask the question, what bugs are we actually trying to find? I mean, server bugs are 99% environment agnostic.
If you think about a broken database query, it doesn't matter what browser you use to access the application: if the DB query is broken, it will break in any environment; it's not environment dependent. Even application bugs in the UI are in most cases environment agnostic. The only exception here is maybe legacy browsers, but when you think about table sorting, input fields, these types of things, as we mentioned before, it's very unlikely to find issues that are specific to one browser. Now, the last challenge, which again I'm seeing a lot, is lack of skills and experience and access to talent. Many companies don't have the right skills when it comes to automated testing, and they don't have access to the right talent, people who know how to build and maintain scalable automated testing. More and more companies are coaching the developers to do it, and because in many cases the developers are strong software engineers, they can do it. Many companies still rely on automation engineers, but they're having trouble recruiting enough of them, because there are not enough people with this skill in the market. So now that we understand the challenges, let's look at how we can fix them, and in this section, which is the final section of the talk, we'll speak about some of the solutions to these problems.
So just to remind you, the first two challenges were low test coverage and slow feedback. When we think about low test coverage and slow feedback, one of the things that can help reduce the problem is visual testing. Just to align on the terminology, what does visual testing mean? It's the process of validating all the visual aspects of an application's user interface on all the different platforms. It goes beyond the functional testing of tools like Selenium, making sure that the colors, the fonts, the content, the buttons, everything appears correctly on all the different environments. It helps you prevent issues; I'm sure you're familiar with these visual issues that happen on specific environments, or visual regressions, where certain things are not presented well to the end user in web or mobile applications. The big benefit of this concept of visual testing is that it allows you to do full-page visual and functional validation. If you remember the example of the web page that we saw before, which includes hundreds of elements, each with multiple attributes: testing it with only Selenium, I would need to go to each element and assert it individually, while with visual testing I can add one visual assertion that checks the entire page and compares everything on that page with that one assertion, one line of code, removing a lot of the overhead in creating and maintaining the test. To show you an example that illustrates these benefits, think about a login page that I need to test. Every application has a login form, and it's probably one of the simplest forms in the application, but even that form has different fields and different buttons and different images. If I had to write code with only Selenium to cover it, I'd need to write quite a lot of code. You can look at the code here; it's pretty straightforward, but I had to write and maintain about 18 lines of code with about 21 locators, which may change over time. So think about how much time it takes to write this code, and think about how well it really scales. Versus, if you do this same coverage with Selenium plus visual testing, you pretty much replace most of the code with one line of code that just checks the screen visually: you just check the window, and this relies on only one locator, for the navigation; all the other locators are no longer needed. So when these locators change, when the application changes, you have much less maintenance. Additionally, many of the changes that may happen on the screen will not be covered by your traditional functional tests. Think about things that disappear, things that overlap, things that are added; we spoke about new features, and what happens with new features? In the first example, even though I had 18 lines of code, I still wouldn't find all these issues; I would have to add probably hundreds of lines of code, and I still wouldn't be able to find new features that are added. With the concept of visual testing, I can immediately find all these changes without requiring more code or more maintenance. Now, the key question when you think about visual testing: how does it actually work? You take a screenshot of all the screens in the application, and you compare it against screenshots from a new version or from a different environment. This way, every time a new version is released, you can immediately check all the screens versus the previous version and find all the differences, all the issues, and the expected changes. You can also compare against additional environments: if the iPhone 12 is released and you want to make sure that the look and feel on the iPhone 12 is consistent with the look and feel on the iPhone 11, you can just say, okay, let's check iPhone 11 versus
iPhone 12 but the big question that comes here that many people ask me when I explain about it is how does it scale like what about false positive what about false negatives how much noise and overhead is cost here and the good news is that with modern tools that are in this space and again of course I'm from Aputus so Aputus is one of these tools but there are other tools as well these tools are getting to a really high level of accuracy especially when you implement AI as part of it you can get to very accurate image processing not much false positives even like levels that you know like less than one error every million test and the reason for that is that the tools now know to look at the page beyond the pixel so it used to be in the past there are many tools that were available still many tools that are available which are doing pixel to pixel matching with thresholds there are a few other talks in the conference about visual testing so you can see the examples over there but these things don't really scale well because you know when browser versions are released things like that you get a lot of false positives the new tools know to look at the screen in a similar way to how a human looks at it and understand the structure the images the text and be able to deal with dynamic elements etc and the maintenance is much easier if you remember the log example that the user had to look at to find the issues now the maintenance is just a simple UI that shows you the differences it knows to group together similar differences so you don't have the same accept the same difference over and over again so it saves a tremendous amount of time in the test creation and the test maintenance effort so just to summarize the first and second challenges I mean with visual testing you get visual and functional coverage every team member can participate in this process and see the results visually and accept reject them and it's easy to create new tests it's easy to review the test 
results. So it solves a big chunk of this problem.

The third challenge is root cause analysis. And again, one of the improvements that some of the modern tools allow you to use today is that when you find a difference, when you find a defect, you can immediately see the underlying code changes which caused this issue. So you're not just shown the defect — you're immediately shown the underlying code changes, and the developer can fix it right away. The tester can, within one click, report all these details directly to the bug tracking system — to JIRA — or, you know, send it to a developer via Slack, and the developer can fix it right away. So some of the modern tools today allow you to do that, and it saves a lot of time in the root cause analysis process.

Now, the fourth challenge we spoke about was ineffective and slow cross-environment testing. And this one is actually interesting, because when you think about it — how do you run a cross-environment test today? How do you create one? Let's say it's a visual cross-environment test. A typical test would look something like this: you navigate to a page, click some button, check the screen; click some button, check the screen; click some button, check the screen; and so on. Now, it looks simple, but is it really simple? It's not that simple, because when you think about it, you need to run these tests — and visually test them — on different browsers, different responsive layouts (you know, portrait and landscape if it's mobile), different pixel densities, retina displays, etc. So it's becoming pretty complex, and what many companies end up doing is taking the same test we've seen before and running it 10 times: on Chrome, on Firefox, on different viewport sizes, on IE,
Safari, Safari on iOS, etc. And you end up with many different executions of this test. But is it really a good idea to run each test 10 times? Of course not — I think you would agree with me. So what's the solution? One option that companies try is to run it 10 times but parallelize it — run it 10 times in parallel. That's an option, but it still doesn't fully solve the problem, because it's still not a good idea to run it 10 times: if the test fails, say, 10 percent of the time, and you're now running it 10 times more often, you're going to have many more failures, and the environment will be much more expensive to scale — running it multiple times on all the environments, especially if you have a lot of tests.

So the new solution which companies are implementing is basically this: instead of parallelizing the test and running each test 10 times, you can parallelize the screenshots. You take the test, run it just one time — locally, or wherever you want to run it — and then, for every screen that you want to test, take whatever the browser needs to render that screen (you know, the DOM, HTML, etc.) and send that to a grid of browsers, rendering this image across all these environments. What this allows you to do is, with the execution of one test — one local test — you get coverage on all these environments. And as we discussed before, the functionality is not likely to change between different browsers, except for very specific cases, for which you can still use the previous approach; but for most tests you can use this approach, which is much faster and more stable.

So to summarize: this idea covers all the environments at the speed of running a single local test, it can find visual bugs, which are more likely to be environment-specific, and it comes at a fraction of the speed penalty and cost of the traditional approach. So again, a lot of people are asking, you know, in the chat: how do we do it? Show me how it
works. So let's see a quick demo of that approach. In this case I took a Selenium test which runs with Applitools — but you can do it with any other visual testing tool. You open the test, you navigate to a screen, you check it, you navigate to another screen, you check it, and then you close the test. So let's run the test and see what happens. When I run it, it opens Chrome locally, opens the login form, enters the parameters, logs in, gets to the second page — and it checks these two pages. Initially I ran this test on only one environment — just Chrome, on a certain viewport size — and I can go to the dashboard and look at the results of this test. Now that the test is completed, let's go to the dashboard, refresh it, and look at the results. Again, this one is with Applitools, but you can use any tool to do this. And you can see that all the screens were tested — in this case on one environment. No issues were found, because I didn't change anything; I ran it on the same version, so it passed, as I expected.

Now, what I did next was configure the test to run on 10 more environments — a total of 11 environments instead of just one. And you can see here Firefox and Chrome and Safari and IE, and iPhone X and Pixel 2 — and, you know, you could add 10 or 20 more, it won't change anything. The nice thing here is that the test still ran just one time, locally. I don't need to run this test 11 times; I'm only running it once, so from a stability and speed perspective we didn't change anything. And when I refresh, you can see that after 9 seconds I got the results from all these environments — from Chrome and Firefox and iPhone and Pixel — and in each of these tests I can see all the images, I can see how the page looks in all these environments, and if there are issues in specific environments, I would be able to find them. So you can just understand how much scale this can give you, and how quickly you can execute
the tests on all these environments, without all the flakiness and the cost involved with a traditional grid approach.

So, one more thing — I mean, next week Apple is launching some new products, so it just reminded me of "one more thing." Actually, two more things here. We spoke about the fifth challenge, which was the access to talent and skills in the team. So again, this is one thing which I'm really excited and passionate about: Test Automation University, from Applitools. We launched it over a year ago, and there are already over 60,000 engineers — students — who have taken courses and passed exams and, you know, earned certificates from Test Automation University. There are multiple different courses over there, for different skills and knowledge levels: some basic courses that take you through the basics of, you know, learning Java, learning JavaScript, learning Selenium, WebdriverIO — all the different tools that are common, and different languages — and there are also some advanced courses about design patterns and testing strategies and, you know, visual testing and other types of things. So if you're not familiar with TAU, I highly encourage you to go — you can see the link here — go and sign up for an account. Everything is free over there. It's a non-profit initiative to get more and more people educated about test automation, with some amazing instructors. My colleague Angie Jones is leading this program, and if you follow her on Twitter you will see some awesome stuff — almost every week a new course goes live. So, you know, some amazing instructors are involved in that.

And the second initiative — again, Selenium is an awesome open source project, and there are many other really good open source tools. We're big fans of open source; we also try to help develop some open source tools, like Selenium IDE. And one of the things we decided to do is to give free licenses of Applitools to open source projects, to try to give back to
the community a bit. So if you're a maintainer of an open source project, feel free to reach out and ask for an account, and we'll make sure to configure one for you and your team — for any open source project.

Yeah, so with that, we're running out of time, so I'd like to conclude the talk. As I mentioned, my name is Moshe — you can see my Twitter handle here. Feel free to ping me on Twitter or send me an email, and I would be glad to be in touch. I hope you found value in this talk, and again, if you have any questions, please don't hesitate to reach out. So, thank you very much.
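As an aside on the maintenance point from earlier in the talk — grouping similar differences so that one accept/reject decision covers all matching occurrences — here is a minimal pure-Python sketch. Every name here (`signature`, `group_diffs`, the diff dictionaries) is an illustrative assumption, not any real tool's API:

```python
# Minimal sketch of grouping similar visual differences so that a single
# accept/reject decision covers every matching occurrence.
# All names here are illustrative, not any vendor's actual API.

def signature(diff):
    # Group by what changed and where in the layout, regardless of
    # which page or test run the difference appeared in.
    return (diff["kind"], diff["region"])

def group_diffs(diffs):
    groups = {}
    for diff in diffs:
        groups.setdefault(signature(diff), []).append(diff)
    return groups

diffs = [
    {"kind": "text-changed", "region": "header", "page": "home"},
    {"kind": "text-changed", "region": "header", "page": "checkout"},
    {"kind": "element-missing", "region": "footer", "page": "home"},
]

groups = group_diffs(diffs)
# Three raw differences collapse into two review decisions.
print(len(diffs), len(groups))  # → 3 2
```

Accepting the `("text-changed", "header")` group once then resolves the same header change on every page where it appears — which is the time saving described above.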
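The "run the test once, render the screenshots everywhere" approach from the demo can be sketched in pure Python. All of the function and environment names below are illustrative stand-ins — real tools serialize the live DOM/CSS/assets and render them in actual browsers on a grid:

```python
# Toy sketch of parallelizing the screenshots instead of the test:
# one local execution produces one snapshot per screen, and only the
# rendering and comparison fan out across environments.
# Every name here is illustrative, not any real tool's API.

ENVIRONMENTS = ["chrome", "firefox", "safari", "ie11", "iphone-x", "pixel-2"]

def capture_dom_snapshot(page_url):
    # In a real tool this would serialize whatever the browser needs
    # to render the screen (DOM, CSS, assets) from the one local run.
    return {"url": page_url, "dom": f"<html><body>{page_url}</body></html>"}

def render(snapshot, environment):
    # Stand-in for server-side rendering of the same snapshot in a
    # given browser/viewport combination on the grid.
    return f"{environment}::{snapshot['dom']}"

def check_everywhere(snapshot, baselines):
    # The test ran once; comparisons fan out to every environment.
    return {
        env: ("pass" if baselines.get(env) == render(snapshot, env) else "diff")
        for env in ENVIRONMENTS
    }

snapshot = capture_dom_snapshot("https://example.com/login")
baselines = {env: render(snapshot, env) for env in ENVIRONMENTS}  # prior run
print(check_everywhere(snapshot, baselines))
```

The point of the sketch is the shape of the fan-out: adding an environment grows only the `ENVIRONMENTS` list and the comparison work, not the number of flaky, browser-driving test executions.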