Hi everyone. My name is Adam Carmi, and I'm the co-founder and VP R&D of Applitools. We're a company that provides a cloud service for automated visual testing. As part of my work I get to meet a lot of developers and testers, and I'm always curious to learn how they do visual testing. The two most common answers that I get are: one, they have no idea what visual testing is; and two, they think I'm asking them whether or not they're using Sikuli.

So the main thing I hope you'll take away from this session is that you understand what visual testing is, and that you can and should automate your visual tests. If, in addition, you remember that Sikuli is not a visual test automation tool, and tell that to everyone you know, that would be awesome.

We have a lot to cover in today's session. First, I'll explain what visual testing is and why it should be automated. Then we'll look at the different tools that are available, how they work, and the technology that they're based on. We'll conclude the session by explaining how automated visual testing can fit into your development or QA life cycle. Of course, there will be time at the end to answer any questions that you may have, but especially in this forum, feel free to stop me at any point if anything is unclear.

So, what is visual testing?
It is a quality assurance activity that is aimed to verify that the graphical user interface appears correctly to the end user. This goes beyond the traditional functional testing that you are used to doing with tools like Selenium, UFT, Appium and others, where the focus is to test the functionality of the application through the UI. What we are focusing on here is making sure that the UI itself appears correctly: that each UI element appears in the right color, shape, position and size, and that it doesn't overlap or hide any other UI element.

Now, this type of testing has become increasingly difficult to perform in recent years, mainly because of the explosion in the number of execution environments (browsers, devices, operating systems, screen resolutions) that applications are expected to run on.

Here you can see an example of a visual bug that we found in the Microsoft Azure management portal: you can see how the graph exceeded the expected bounds of the page. This is an example from Twitter: you can see how the notification timestamp overflowed on top of the notification that's below it. This is from the Financial Times: here the article title overflowed on top of the article body. And this is how the Amazon website looked for several hours for certain users on Amazon Prime Day, which was a huge sales day about six months ago.

I'm sure that you've all seen this type of bug before, hopefully not at your workplace, but I'm sure that you've seen them and you understand their severity. They can be very embarrassing, they can hurt the company brand, but in many situations they can completely cripple a website or application and end up costing a lot of money, as probably happened in this case.

So why should we bother automating this type of testing?
After all, we've been managing without automation and have been testing this stuff manually for 30 years. There are many reasons, but the most important one is that the test matrix is just too big to cover manually. Think of all the different web browsers, devices, operating systems and screen resolutions. We all know that if a site looks good on IE, it doesn't mean that it will look good on Chrome, right? And if an application looks good on a wide screen, it doesn't mean that it will look good on a smartphone. So we have to test across all these different environments, and doing it manually takes a lot of time and money, and it's also error-prone.

If your application or website is responsive, and most modern websites are, then you also need to factor in the different layout modes and test them across all these environments. If it's localized to several languages, then each language has its own fonts, images, resources and content, and all of these affect the UI, so we have to test all of these across all these different environments as well.

And the truth is that even if we don't change a line of code in our product, we still depend on third-party upgrades. The most natural example for websites is the browser itself: it updates every couple of weeks, right? And whenever that happens, it can introduce incompatibilities with our site. So even if we didn't change anything, it could still break in certain areas, right?
So we have to test all the time; we can't really focus our tests on just specific areas, and it's just impossible to do all of this manually.

When it comes to mobile applications, quality is even more critical. The fact is that it's much harder to roll back changes: unlike websites, where you can just fix the bug and push it to production, you can't push daily to the app store. And even if you find a way to work around that, frequent updates take battery and data and eventually upset your customers. And even then, customers are not forced to take your upgrades, so they can just decide not to upgrade and stay with your bug forever, right? In general, there's a much higher quality bar when it comes to mobile applications, simply because mobile users are much less tolerant of UI and UX bugs.

In addition, release cycles keep getting shorter and shorter. Many companies that are practicing continuous deployment, which is the current hype, let's say, are releasing code to production several times a day, and with such short release cycles there's hardly any time to do any type of manual testing, let alone making sure that the UI looks right in such a huge number of environments.

So let's talk a little bit about the tools and the technology. There are over 30 visual test automation tools out there today, and they all share the same simple workflow, which has four steps. In the first step, you drive the application under test and get screenshots. In the second step, the tool takes those screenshots and compares them with baseline images. These baseline images define the expected appearance of the application at that point in the test. In the majority of cases these are simply screenshots that were taken in previous test runs and were approved by a manual tester who looked at them and made sure that they are correct. In the third step, the tool takes the results of these comparisons, the screenshots and the baseline images, and generates a report which includes all the
differences that were found, if any. And in the fourth step, a human tester has to look at the report and decide for each change whether it is a bug, in which case he opens a bug report, or whether it is a valid change, because you just added a feature or fixed a bug or something, in which case he simply approves the new screenshots so they will be used as baseline images for subsequent runs.

So now it's time for a first demo, and I'll show you WebdriverCSS, which is an open source visual test automation tool that adds visual validation to WebdriverIO, one of the popular JavaScript language bindings for WebDriver. I know that the internet connection is not very good, so I hope that it will run smoothly, but if not, I'll just show you the outcome and you'll have to take my word that it actually does what I'm saying it does.

Okay, so for those of you who are not familiar with WebdriverIO, it's very simple. Let's just take a look at a very trivial piece of code that uses it. We create an instance of the driver and we indicate that we want to start a local Chrome browser. In this test we start by opening up the browser, then navigating to github.com, and we just wait 10 seconds. Very simple. It doesn't really test anything, it just automates the browser, but we can run it and see what we get.
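The trivial test I just described looks roughly like this. This is a reconstruction rather than the exact demo code, written against the WebdriverIO v2-era API that WebdriverCSS works with; the 10-second pause matches the demo.

```javascript
// A minimal WebdriverIO (v2-era) script: start a local Chrome,
// open github.com, wait 10 seconds, and close the browser.
// Note: this automates the browser but doesn't assert anything.
var webdriverio = require('webdriverio');

var browser = webdriverio.remote({
  desiredCapabilities: { browserName: 'chrome' }
});

browser
  .init()                     // start the browser session
  .url('https://github.com')  // navigate to the page under test
  .pause(10000)               // just wait 10 seconds
  .end();                     // close the browser
```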
Okay, so we have github.com opening up. You can see the sign-up button, which we'll come back to in a minute, and the fact that we also have a scroll bar, meaning that there is more content on the page that we cannot see on the screen right now. Okay, so we are waiting those 10 seconds, and the browser closes as we expect.

Now let's add a visual validation checkpoint to our test, and we'll do that with WebdriverCSS. We just add these few lines here that initialize WebdriverCSS and also indicate to it that we would like to test this website, github.com, in two different widths. The reason we do this is that the width of the browser is the primary factor that determines the layout of most responsive websites, and in particular of github.com. So by specifying the screen widths that we are interested in, we can actually force github.com to assume all these different layout modes, so we can test it in all of them and make sure it appears correctly in each.

Once we've done that, right after navigating to github.com we add our visual validation point, which is a call named webdrivercss. We provide a name for this checkpoint, in this case 'github', and we provide a list of elements on that page that we want to test. In this case we want to test the body element, which is the entire page, and we also give it a name, in this case 'homepage'.
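Putting those checkpoint lines together, the enhanced test might look like this. Again, a reconstruction rather than the exact demo code: the `init` options and the `webdrivercss` checkpoint call follow the library's documented API, the checkpoint name 'github', the element name 'homepage' and the 400-pixel width are taken from the demo, and the second width (1024) is my own assumption.

```javascript
// The same test, with a WebdriverCSS visual checkpoint added.
var webdriverio  = require('webdriverio');
var webdrivercss = require('webdrivercss');

var browser = webdriverio.remote({
  desiredCapabilities: { browserName: 'chrome' }
});

// Initialize WebdriverCSS and tell it which browser widths to
// test; each width forces github.com into a different responsive
// layout mode. (The 1024px width is illustrative.)
webdrivercss.init(browser, {
  screenWidth: [400, 1024]
});

browser
  .init()
  .url('https://github.com')
  // Visual checkpoint: capture the body element (the whole page)
  // under the checkpoint name 'github' / element name 'homepage'.
  .webdrivercss('github', [{
    name: 'homepage',
    elem: 'body'
  }])
  .end();
```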
Okay, so let's run this test and see what we get. The browser started, it did the first resize, now the second resize, taking screenshots as we discussed in the workflow, and we're done. Now, if we go to the folder where we just ran the test, you can see that I have two new images. The first one shows us that it's the 'github' checkpoint, for the 'homepage' element, at 400 pixels, with a 'baseline' suffix. Because this is the first time I ran the test, I still don't have anything to compare with, so the first run actually sets the baseline; from the second run onwards I always have something to compare against.

If we open this image, you can see that it actually consists of the entire page, even the parts that are below the viewport, below the fold. Now, those of you who work with Chrome know that you cannot just get a screenshot of the entire page, you only get the viewport. So what WebdriverCSS did is that it actually took multiple screenshots while scrolling the page and produced the full-page screenshot for us, which is great, because we get much more coverage.

Now, the next thing that we're going to do is simulate a bug on that page and see how WebdriverCSS captures it. The way we're going to do it is that we are going to execute some JavaScript just before the validation point: we'll locate the sign-up button and we'll just change its visibility to hidden. Basically, we'll hide the sign-up button. So let's run this test and see what we get. Okay, first resize, and no button. Second resize, without the button, and we are done.

If we look back at our folder, you can see that we have two new images with a 'regression' suffix. This is WebdriverCSS's way of telling us that it found a regression in the test. If we look at the regression image, we can see that it's the same image, only without the button, which is exactly the change that we made. But in addition, we also have a diff folder that now shows us a
screenshot of the regression image with all the differing pixels painted in pink, which is a very nice way to report the change and visually see it.

So what I want you to take away from this example is how easy it is to take your existing WebdriverIO or Selenium tests and, with just a few lines of code, enhance them so they are also able to capture visual differences and save a lot of work for the manual testers, who can then concentrate their efforts on the things that human beings actually need to think about and do, rather than just look at screens. Of course, it is also possible to tie the results of the comparison directly into the test, so that a visual difference would fail the test and we don't have to look at the folders and see if there are images there or not, but we don't have time to cover those settings, so we'll skip that for now.

Okay, so the next thing I want to do is go over each of the steps of the workflow in detail, and I'll start with the second step, comparing screenshots. The reason I'm starting there is that I'm sure you're all thinking right now: how stable is this image comparison? It's probably very flaky and unstable, I cannot rely on it. So let's get that out of the way, and then we can continue with the remainder of the steps.

If you'll allow me to quote Boromir from The Lord of the Rings: one does not simply do bitmap comparison. By bitmap comparison I mean pixel-to-pixel comparison. The reason is that if you do that, you'll get a lot of false positives, and you'll end up hating the tool and throwing it away. A false positive, in the context of visual testing, is a case where the tool tells you that there is a difference, but you cannot see it as a human being, or it's so small that you don't care about it.
And there are many, many reasons for these false positives to happen if you're just doing pixel-to-pixel comparison. Let's go over a few of the more common ones, so you'll have a clearer picture of what I'm talking about.

The first reason is an image processing effect called anti-aliasing. Maybe you've heard the term, but what is it in reality? We have here a navigation bar, and on it we have the Playlist tab, and you can see the Playlist tab magnified here at the center of the slide. Now, the Playlist tab appears to be white, and if you look at the DOM for that web page, it would say that the color is white. But if you look at the rendered pixels, you can see that many of them are not white; there are many shades of blue. These pixels are actually anti-aliasing pixels that were added by the rendering engine and the graphics card. The purpose of these pixels is basically to make the font look nicer, smoother and more beautiful to the human eye, but we cannot see any of this; we don't see those blue pixels here. This is a very good thing, and it has been used for decades now in computer science.

But what's the problem with it when we're doing visual validation? The problem is that if you are running your tests on more than one machine, which makes sense if you have more than one tester or if you're running in a test lab, then it's likely that you have a different machine running the same test. And a different machine might have a different graphics card, or different settings for the graphics card.
It might have a different screen connected to it, or maybe even the same hardware with just a different version of the graphics driver. All of these factors could end up producing a different implementation of the anti-aliasing algorithm, and when that happens, you simply get different pixels.

So if you look at exactly the same page rendered on a different computer, you can see that the anti-aliasing pixels are quite different: here they are blue, here they are purple and pink. But in the original images we cannot see the difference, because the algorithm produces the same effect of making the font look nice to us. And again, you can see, if I toggle between the two, how different the pixels are. The differences are significant: if you're just doing pixel-to-pixel comparison, this will fail your test for nothing. So you really need a sophisticated image matching engine that is able to understand that these are anti-aliasing pixels, that it's okay for them to be different, that a human being cannot see the difference, and that they should be ignored rather than failing your test.

Here's another example, one that involves a moving element. Who recognizes the website this fragment is taken from? Dropbox, exactly. In Dropbox we have this 'Upgrade account' element that is moving. The reason it's moving is that to the right of it there's a username, and whenever we run the test there's a different username, so the 'Upgrade account' element moves depending on the length of that username. But we would still like to validate this page, even with a moving element.

So one idea that we can come up with is this: instead of checking the whole page, let's just take a screenshot of the div or element that contains 'Upgrade account', and just compare that screenshot with the screenshot in the baseline. It is the same element, so it doesn't matter where it appears on the page. It's an excellent
idea, but it would still not work. Because if we do that, and place the two 'Upgrade account' screenshots one on top of the other, you can see that although it looks like 'Upgrade account' is moving as a whole, what the browser is actually doing is positioning every specific character in that text individually. You can see that the characters 'a', 'c' and 'o' here are actually positioned at different pixels. It's impossible to see this in the original 'Upgrade account' example, but it's obvious when you look at the pixels. And then again, you need a sophisticated image matching algorithm to be able to detect and ignore this difference, because it's invisible.

The last example I want to show you has to do with image scaling, which is very common. Any image that you have on your website or application is displayed through an image element, right? And whenever the size of the source image is different from that of the target element, the rendering engine has to fit it, so it has to scale it to fit the target bounds. And again, on two different computers you'll get different implementations of the scaling algorithm. So if we look at this rectangle here at the roof of the car, and we see the pixels that it contains, and we look at the rendering by another computer, you can see how different the pixels can be. And again, in the original image it's completely invisible to us as humans, but the change in pixels is quite substantial, and you need a smart algorithm to be able to detect and ignore it.

There are other reasons that make this difficult. First of all, you have arbitrary one-pixel offsets in element positions. This could be just elements that are placed one pixel to the side or downwards, or even an HTML table that has a column that gets to be one pixel wider and moves everything else on the page. You need to be able to handle dynamic content like dates, usernames, banners and
ads. There are moving elements, like the 'Upgrade account' example that we've seen, or animations. You also need to be able to compare images of different sizes, and of course it has to run super fast, otherwise your tests will take forever to run. And in addition to all that, there are those tools that always produce false positives no matter what they do.

But seriously, the reason I'm showing you all of this is to explain why, if you ever tried just comparing pixels, it didn't work, and also to say that in recent years the visual test automation tools have come a very long way in handling these issues and allowing you to really perform visual test automation at very large scale in a very stable way.

So let's go over these image comparison engines in more detail and see what are the primary ones that all these tools are based on. The first is ImageMagick. It's a very powerful command-line tool for doing general-purpose image processing operations, such as taking an image and scaling it, rotating it, converting it to grayscale or changing the format in which it is saved. One of the other things it does is allow you to compare two images, and it also provides a fuzz feature that allows you to overcome some of the false positives that I just mentioned. The way it works is that on the command line you simply call compare, provide the first image and the second image, and it will produce the number of pixels that are different between those images. In addition, if you provide a third image path, it will store a copy of the second image with all the differing pixels painted in pink, just like we saw with WebdriverCSS.

The tools that are based on ImageMagick rely on an error ratio to decide if two images match. And how is that done?
The tools take the number of differing pixels and divide it by the area of the image, and this provides an error ratio. Within the test you can decide what threshold of error you are willing to accept as a match. So you can decide that as long as the error does not exceed, say, half a percent, your test will pass; otherwise it will fail.

Next we have the three JavaScript engines: Resemble.js, Blink-Diff and LooksSame. They're implemented in JavaScript and are also based on pixel-to-pixel comparison, just like ImageMagick, and the tools that rely on them also use an error ratio to determine a match versus a mismatch. But they also have better abilities to handle some of the false positives that I described. For instance, Resemble.js has very good treatment of anti-aliasing pixels, and Blink-Diff does a good job of taking into account the fact that we as humans are less sensitive to color changes in some color ranges than in others, so it is more tolerant of differences in those colors. These engines do a very good job, and there are many, many tools that are based on them.

And this leads us to the third matching engine, which is our own Applitools Eyes engine that we've been developing for several years now, exactly to solve this problem. It handles all of these issues that I've shown you, and many others, very well: it can handle dynamic and moving content, etc., etc.
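The error-ratio check that the ImageMagick-based and JavaScript engines perform can be sketched in a few lines. The pixel buffers and the half-percent tolerance below are illustrative; real tools obtain the differing-pixel count from the comparison engine and let you configure the tolerance.

```javascript
// Error-ratio matching: count differing pixels, divide by the
// image area, and pass if the ratio is below a tolerance.
function errorRatio(pixelsA, pixelsB) {
  if (pixelsA.length !== pixelsB.length) {
    throw new Error('images must have the same dimensions');
  }
  var diff = 0;
  for (var i = 0; i < pixelsA.length; i++) {
    if (pixelsA[i] !== pixelsB[i]) diff++;
  }
  return diff / pixelsA.length; // fraction of differing pixels
}

// A checkpoint passes as long as the error ratio does not exceed
// the tolerance, e.g. half a percent (0.005).
function imagesMatch(pixelsA, pixelsB, tolerance) {
  return errorRatio(pixelsA, pixelsB) <= tolerance;
}

// Example: 1 differing pixel out of 1000.
var base = new Array(1000).fill(0);
var curr = base.slice();
curr[42] = 255;
console.log(errorRatio(base, curr));         // 0.001
console.log(imagesMatch(base, curr, 0.005)); // true
```

Applitools Eyes deliberately avoids this kind of global ratio, as we'll see next.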
The two most important things that distinguish it from the rest are these. First of all, it does not rely on an error ratio. The idea here is that it really simulates the human eye, and it will only show differences that a human can see; this has nothing to do with the size of the difference or the size of the area of the image. Just to give an example: if you have a very large web page, like we've seen in the demo before, and a comma changes to a period, or a plus changes to a minus, which is just a few pixels of difference, the tool will highlight that as a difference, because it's a very important one. On the other hand, if you have a table on your website and one of the columns becomes one pixel wider, shifting the rest of the image to the side and creating an eighty percent pixel difference, we still cannot see that as humans. The fact is simply that the column became one pixel wider, and the tool is smart enough to understand that a human cannot see it, and it will just ignore that difference.

And that brings me to the second thing that is special about it.
It is capable of performing structural, or layout, comparison of images, and this is very useful. I'll show you how it works with Applitools Eyes. In this example we have a web page where, on the left-hand side, we have the baseline image, which was taken on Chrome, and on the right-hand side we have the current image, the image that we're validating, which was taken on IE. If I toggle between the two, you can see how differently the two browsers render the same page. We have slightly different fonts, the positions are slightly different, you can see that the text in this paragraph wraps differently, and this scroll bar is different. But all of this is okay: browsers are different, and it's okay for them to render differently. Still, structurally, the web pages are consistent, and because of that, none of these differences are highlighted.

If I click this radio button over here, you can see that it does highlight a change, and if I zoom into it, you can see that it did pick up that on IE we have a missing element. So this is one example of how powerful structural matching is, in the sense that it allows you to have a single baseline image from one environment and use it to verify screenshots from many other environments.

Let's take a look at another example. This one is from Twitter: a baseline on a Samsung S4 and a current image from a Samsung S5, and we have several violations here. We can see that according to the baseline, the first switch should be aligned to the right of the image, and this is being violated over here. The second thing is that the last tweet should have an image next to it, but it's missing over here, right?
All these issues are correctly captured. On the other hand, if you look at these two tweets in the middle, you can see that although they have different images and different texts in them, they are still structurally equivalent, and therefore they are not marked as different.

And this leads us to the second very valuable use of layout matching, which is validating extremely dynamic applications. In this case we have the Yahoo website, where the baseline and current images were taken 24 hours apart. You can see that although the images are very different and the articles are very different, structurally they're equivalent, and therefore the test passed. You can actually monitor production systems this way. If I change this to a more strict match, you can see that all the dynamic parts are highlighted in pink, as you would expect, and the static ones, like this navigation bar, aren't. However, if I change this to exact pixel-to-pixel matching, you can see that the entire page is highlighted, simply because all the pixels are indeed different; it's just that the strict match algorithm was smart enough to detect those differences that a human cannot see, and ignore them.

Any questions about comparison of images before we move on? Yes?

[audience question]

You can compare a part, or you can compare the whole thing; you have full control.

[audience question]

Yeah, so now we will see how you can do that. As you saw, WebdriverCSS gave you the whole page. Different tools don't all have the same capabilities; some will only give you the viewport. If you're asking specifically about Applitools Eyes, you can decide which one. Specifically, all Applitools cares about is an image; it doesn't care where it came from. We have different SDKs that connect to different systems. So when we're working with Selenium
So when we're working with Selenium We asked the web driver for the image or set of images in order to create the full page screenshot when we're working with Uft we asked Uft for the screenshot, but once we get it we do the validation. Yes Yes, definitely Yes, yes, you can It's just images. So basically if you have two images you can compare them It doesn't care what the images came from if you draw them in Photoshop or rendered by browser, it's a set Yes Yes You can control it the way that you want to do it but specifically if you want to do cross device or cross browser then The the full answer is that you need to have a single baseline for every layout mode of your application So if you have like it doesn't make sense to compare a small Smartphone with a tablet right so different layout entirely for the for the application So you'll have one for small spawns run for tablets, but you don't need for each and every tablet You have it so sorry if so Yeah, yeah, so Specifically for the layout algorithm it allows for various images So if the image will be gone it will say that it's missing But if it's another image then we cannot say if it's broken or not So no because it cannot tell if it's just an image of something else or this image broken Woman has to look at that Okay, so now that we understand how we compare images how the different tools compares images And I hope that you have more confidence that it can work and you can scale your test with it because you can Okay So let's talk about how the different tools handle getting those screenshots and writing the application So as I mentioned Although all these different visual test automation tools share the same The same workflow we can still categorize them roughly in two categories The first is the quick feedback tools They share a similar setup where the tool renders the screenshots with a headless browser such as phantom.js and slimer.js and The tests are driven by a configuration file Usually this 
configuration file consists of a list of URLs that the tool will visit, render, screenshot and validate. The motivation for using this category of tools is to get fast feedback on code changes, usually for front-end developers. When a front-end developer changes a component or a page and wants to make sure that the change didn't break other components or other parts of the application, he would use these tools. They run in the background, they don't pop up browsers because the browser is headless, and because the tests are driven by a configuration file, you don't need to deal with maintenance, building tests, or learning new languages or technologies. It's very simple, easy and effective.

The downside is that it provides you with very partial coverage. First of all, with PhantomJS and SlimerJS you only cover the Safari and Firefox rendering engines; you don't cover IE and Chrome, which are quite significant and important browsers. Second, even for PhantomJS and SlimerJS, you're using old versions of those engines: real browsers update every couple of weeks, these projects update every few months, so you don't necessarily have the latest version. It's possible that your test will pass, but you still have a bug on the new, real browser. And of course, because you're using a configuration file to drive your tests, you're limited in the interactions that you can simulate, so you cannot really get to states of your UI that require filling out forms, clicking buttons or hovering over elements. You cannot do that with a configuration file.
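To make the configuration-driven approach concrete, here is a hypothetical config file for such a quick-feedback tool. The schema below is entirely illustrative (it is not the format of any specific tool), but tools in this category are driven by files of roughly this shape: a list of URLs, the viewport widths to render them at, and a tolerance.

```javascript
// Hypothetical quick-feedback tool configuration: which pages to
// render headlessly, at which viewport widths, and how much pixel
// difference to tolerate. No test code, just declarations.
module.exports = {
  engine: 'phantomjs',          // headless WebKit renderer
  viewports: [320, 768, 1280],  // responsive layout modes to cover
  pages: [
    { label: 'home',    url: 'https://example.com/' },
    { label: 'pricing', url: 'https://example.com/pricing' },
    { label: 'signup',  url: 'https://example.com/signup' }
  ],
  misMatchTolerance: 0.5        // percent of differing pixels allowed
};
```

Note what is missing: there is no way to express "fill in this form, then click this button, then validate", which is exactly the limitation just described.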
You cannot do it with a configuration On the other hand the other category of tools does allow you to verify everything Right and usually the setup would be to really render screenshot on real browsers and real operating systems to run a lot of Test and do that in parallel and not surprisingly those set of tools will be based on a web driver and a selenium grid to Accomplish that and the test themselves will be based on a web driver or some DSL on top of The motivation here is really to mainly for Test automation team for QA teams that really want to make sure automatically that the UI looks good On all the real execution environments and they want to you want to really test all the different states And you want to have many state test and run them in parallel in order to fit things up The disadvantages of this approach is that it does require you to invest in some test infrastructure setting up a grid running testing parallel and of course you need to maintain Test code right but on the other hand for professional test automation teams You already have those Environments in place and you have the expertise to implement and maintain the test so it's not really a disadvantage to add these tools On this slide you can see a list of some of the selenium tools that are available First of all, you can see that the majority of them are code based rather than configuration based You can see that for almost for every language binding There is a visual test automation tool that can be used to augment it and the third thing to notice is that None of these tools would work with native Applications only for web applications except for up the tools eyes that can verify any application that up you cannot Okay, let's move to the third step reporting differences So many of the tools simply report the differences as files on the file system Just like we saw with web browser CSS You have the baseline image and the regression or current image and then the different now This seems like 
a simplistic way of reporting, but it's actually very effective, because files are very easy to handle: you can easily send them to someone for review, and you can commit them to your source control and keep a history of the baselines and their changes. So it's quite a good way to report this kind of result. There are other tools, like Selenium Visual Diff, that provide you with a nicer dashboard, where you have statistics about the runs, what failed, how many differences were found, and nice thumbnails of the failures that you can drill down into and see up close.

When it comes to updating the baseline, the tools that are based on the file system will usually have a command-line tool that allows you to say that you are accepting a new baseline, and it will handle all the renames and move the images to the right directories, etc. Some other tools, like Gemini, provide you with a nicer UI where you can clearly see the difference and click a button to replace the baseline image with the current image. But then again, when it comes to visual testing, there are situations where even a single change or a single bug can fail many, many of your tests.
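The file-based comparison flow described above (baseline image, current image, diff mask) can be sketched roughly like this. This is a minimal illustration, not any tool's actual implementation: images are modeled as 2-D lists of RGB tuples rather than PNG files, and the optional `ignore` rectangle is my own illustrative addition for masking dynamic regions.

```python
# Minimal sketch of the pixel-to-pixel comparison file-based tools perform.
# Images are 2-D lists of RGB tuples; real tools diff PNG files, but the
# core idea is the same. `ignore` is a hypothetical (top, left, bottom,
# right) rectangle for masking dynamic content such as timestamps.

def diff_images(baseline, current, ignore=None):
    """Return (mismatch_count, diff_mask); diff_mask marks changed pixels."""
    mismatches = 0
    mask = []
    for y, (brow, crow) in enumerate(zip(baseline, current)):
        mask_row = []
        for x, (bpx, cpx) in enumerate(zip(brow, crow)):
            if ignore and ignore[0] <= y < ignore[2] and ignore[1] <= x < ignore[3]:
                mask_row.append(0)   # masked region: never counts as a mismatch
            elif bpx != cpx:
                mismatches += 1
                mask_row.append(1)   # changed pixel: highlighted in the diff image
            else:
                mask_row.append(0)
        mask.append(mask_row)
    return mismatches, mask

W, B = (255, 255, 255), (0, 0, 0)
baseline = [[W, W], [W, W]]
current  = [[W, B], [W, W]]          # exactly one pixel changed
count, mask = diff_images(baseline, current)
print(count)                         # -> 1
```

"Accepting" a new baseline in such tools then amounts to replacing the stored baseline file with the current screenshot, which is why a simple rename-and-move command-line step is enough.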
To take an extreme example: if you just change the header of your application, that fails every test, because every image in your test suite includes the header. Because of that, and because you don't really want to look at a thousand images of the same header change and accept, accept, accept (or reject), all of the tools provide you with a mechanism to override the baseline, which means: "I don't want to look at it, just accept all the new images as the new baseline, and that's it." But this is a very risky thing to do, because among those images there might be bugs that you just approved, and once you do that they become part of the baseline, and the tool won't be able to tell you that they are there. What you would really want is to be able to look just at the unique differences, not at all their occurrences. This is part of what is called automated maintenance, and I'll show you how that's accomplished with Applitools Eyes.

Let's take the GitHub example a step forward. Say we've built a suite of tests for GitHub; we have many, many tests, and all of them fail. We covered four different execution environments, different browsers, different devices, different form factors, and in total we had 76 mismatches. Now, it is very easy to see the thumbnails of the differences, see that the header changed, zoom into it quickly, see that indeed the GitHub logo went away, decide it's a bug, reject it, and then continue rejecting all the differences that show the same change. Still, if you have a thousand of these it could be overwhelming. So you can actually ask the tool to group those differences together, and once you've done that you're only left with two images to look at. Then you can quickly inspect the first one and see that the GitHub logo went away.
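The grouping step just described can be sketched as bucketing failing tests by a signature of their difference. Real tools derive this grouping from the diff images themselves; in this toy version each failure carries a precomputed label, and all names are illustrative, not any product's API.

```python
# Toy sketch of "group by unique difference" maintenance: one accept or
# reject decision on a group's representative resolves every failure in
# that group. The diff signatures here are hypothetical labels.

from collections import defaultdict

def group_failures(failures):
    """failures: list of (test_name, diff_signature) pairs."""
    groups = defaultdict(list)
    for name, signature in failures:
        groups[signature].append(name)
    return dict(groups)

failures = [
    ("home/chrome",  "logo-removed"),
    ("home/firefox", "logo-removed"),
    ("repo/chrome",  "logo-green"),
    ("repo/iphone",  "logo-removed"),
]
groups = group_failures(failures)
print(len(groups))     # -> 2 unique differences to review, instead of 4 failures
```

With grouping in place, a reviewer inspects one image per unique change rather than one per failing test, which is what makes suites with thousands of tests maintainable.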
The GitHub logo going away: that's a bug; we didn't want it to happen. By rejecting it you're done maintaining all the images that share that same difference, even though they come from different pages on different browsers. Then you look at the other one and see that in this case the GitHub logo changed color to green, which is what we intended, and you accept it. With that, maintenance is done: we can save the entire batch, and with a few clicks we've maintained the whole thing. There are a lot of features like that which really let you scale up your tests, and today we have customers running thousands of tests every day with very little maintenance overhead because of these abilities.

The next thing I want to talk about is how all of this fits into the development life cycle. The quick answer is that it fits in all stages. At the unit testing level, we already talked about how front-end developers can use it as a visual unit test to make sure that the components they build don't break when they change code; the results are easy to send to code review and easy to share. At the integration testing level, we've seen with WebdriverCSS how easy it is to take your existing end-to-end tests and just add visual validation points to them with a few lines of code. When you do that, you get an extra bonus: the tool becomes a very powerful collaboration tool within the team, because all of a sudden you have a dashboard where all the changes are documented and everyone can see them. So if a developer adds a feature, say a new button, then ten minutes later when the automated tests kick in, everyone on the team can see that change and immediately provide feedback. What you get is a drastically shortened feedback loop within the team, which is of course the goal of every agile team. As far as acceptance testing goes, it's very common to take a release candidate
that's about to be released, get screenshots from it, and compare them with a baseline of the previous release. This way you can see what changed and make sure there are no unexpected changes before you release. You can validate the staging environment with respect to the production environment, and so on. And of course there are many teams doing visual testing in production: you can find out that there are no missing resources on your production servers, and you can make sure there is no breakage due to third-party upgrades. If the browser upgraded some component, or if you're consuming data from a third party, you want to make sure it continues to arrive and nothing is broken. With this I conclude the session, and if you have any questions I'll be happy to answer.

Yes, excellent question. The question was how this approach compares with the Galen framework. The Galen framework, for those who aren't familiar with it, is one where you create a spec, a text file that is a specification of the page layout, and then it is able to check, according to the structure of the DOM, whether the page is consistent with that specification. The problem here, at least as I see it, is first of all: who wants to write a spec and maintain it? It takes time, you can make mistakes in the spec, and just imagine, with all the work you already have today, adding another task where every time something changes you need to go and update the spec. The second difference is that it doesn't really test what the browser is showing the user; it tests what the browser is supposed to show the user. It's perfectly possible for the DOM to be in a certain state, but for a bug in a new browser release to make the page look different from what you expect. Maybe the element is there, but the foreground is white and the background is white,
so it's effectively invisible: it's there in the DOM, but the user can't see it. So there are differences. It's a very nice framework and a very nice approach, but a different one. And from my experience of having to test these algorithms, I know how hard it is when your data changes and all of a sudden you have 50 failures that are just text changes; you look at the diffs and see that the number 45 changed to 43, and you have no idea what that looks like. You need to see it visually. With this approach it is visual, everyone on the team can do the maintenance, and it's clear: you know the product, this is how it looked, this is how it looks today, this is the difference. It saves a lot of time. I hope that answers it. Yes?

"Hi, is it possible to check on anything other than differences in pixels? As in, can you tell it to watch specifically for a particular condition and always error on that?"

I don't fully understand the question, but okay.

"I was wondering: say you have a very dynamic page, so you don't want a very strict image comparison, but you do want some way to say that if a particular little error icon or something appears, then even though the comparison isn't strict, the test should still always fail. Can I check for anything other than a literal comparison between the two images?"

Yeah. So basically, most of the tools I'm talking about here are for visual validation, and all of the tools in that list do pixel-to-pixel comparison. You can decide not to compare the whole page; you can decide to focus on certain parts of the page. When it comes to our tool, Applitools Eyes, you can also validate at the layout level, so it's not
So it's not Sensitive to some it can handle those dynamic as I shown in the example dynamic pages, etc And it would look for layout changes But beyond that if there is anything more strict that you can do that you want to do you can always Check that in other means you can augment the test in other ways that it would get you exactly where you want. Yes You can just it's free for ever just with minimal Our tool up a tool size is not open source. It's a commercial product But all the other tools web services says is open source. You're free to try it It has all the features that's limited usage For example, like I am running one of my selenium tests on one machine. I will assume So I can't hear you Hello, yes, fine. Yeah, I'm running one of my test and in one of the mic machine So I have a base screenshot. So and also tomorrow. I'm running the same scenario I'm capturing the same home page in another In another mission kind of thing. So I have two screenshots now So will it be same or the will it be validation will be failed because the screen resolution is changed Okay, but nothing has been changed apart from the screen resolution. 
Yes. So if by screen resolution you mean that you had a small screen and then a larger one: first of all, you shouldn't take a full-screen screenshot that covers the entire desktop. The right way to do it is to decide in your test what size the browser window should be, and then, whether it runs on a bigger or a smaller screen, it will be consistent. As part of your test plan you should decide which sizes you want to test in; it's not something you leave to chance. As you've seen with WebdriverCSS in the demo, you actually specify those widths, and the tool makes sure the screenshots conform to them, so the matching is not arbitrary. With Applitools Eyes, in addition, even if you don't specify it, the tool automatically detects the environment in which the test is running, the operating system, the browser, the screen resolution, and all of that, and it automatically creates a baseline for that specific environment. So by default it will never try to match a smaller screen resolution against a big one; it keeps different buckets for each of those environments and automatically knows which one to compare against. Okay.
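The per-environment baseline buckets just described could look roughly like this. The attribute names and return values are illustrative only, not Applitools' actual API; the point is that the baseline key includes the detected environment, so screenshots from different environments are never compared against each other.

```python
# Sketch of per-environment baseline buckets: the baseline key combines
# OS, browser, and viewport, so a 1024x768 run is never matched against
# a 1920x1080 one. All names here are hypothetical.

baselines = {}

def check(os_name, browser, viewport, screenshot):
    """Return 'new baseline', 'match', or 'mismatch' for this environment."""
    key = (os_name, browser, viewport)
    if key not in baselines:
        baselines[key] = screenshot      # first run in this env: save a baseline
        return "new baseline"
    return "match" if baselines[key] == screenshot else "mismatch"

print(check("win10", "chrome", (1024, 768), "img-A"))   # new baseline
print(check("win10", "chrome", (1024, 768), "img-A"))   # match
print(check("osx",   "safari", (1440, 900), "img-B"))   # new baseline, its own bucket
print(check("win10", "chrome", (1024, 768), "img-C"))   # mismatch, same bucket
```

Running the same page on a machine with a different resolution simply starts a new bucket with its own baseline, rather than producing a spurious failure against the first machine's baseline.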
"How can a manual tester use this?"

Okay, so specifically for Applitools Eyes we have a browser extension that lets you, while browsing a page, click a button and perform the visual validation on that page. That's one way to go, but the main way to work with the tool is to add it to your automation; that's the primary way. Now, the role that manual testers play in most companies is this: the automation engineers make a one-time effort to add validation points to the tests, and then they're done; they don't need to touch it anymore. All the maintenance work, looking at the baselines and approving them, is done by the manual testers. So instead of testing a small portion of the system and spending a lot of time doing it, you can test the whole system on a huge number of environments and spend a fraction of the effort you used to spend on that small part, because all you need to do is look at the changes. If there are no changes, you're free to do exploratory testing, or whatever testing you cannot automate. And once you have seen a change, you only need to approve it once; you don't need to look at it again and again. So in most cases it's the manual testers who actually operate the tool, once the automation engineers have added the validation points. Yes? I don't know how much time we have left. Okay.

"In some cases there might be a slight difference, some kind of watermark saying this is a test server versus a prod server, that kind of thing. In such cases, can we reuse a baseline across the servers: QA, staging...?"
"There will be differences, but for the baseline itself: if we take a baseline from any one server, can we reuse it across all three?"

It depends on your environment, and on whether it's production data or not. If you can have an account that is the same and shows the same deterministic results in both environments, then yes. If it's completely dynamic, you can test it with layout matching. And again, if it's the same data just on different servers, it doesn't matter, because the page should still look the same.

"No, say there are some known differences. For example, an image saying 'this is a test server' will always be present on the test server but may not be present on staging or prod."

Yeah, so in that case you can either choose to ignore that image, because you know it can differ, or you can test by layout. Or you can run regression tests on staging and regression tests on production separately, and then you'll have consistent results within each.

"Just one more, about the baseline images: will Applitools store our baseline images, or should we keep them ourselves?"

It depends on how you deploy it: if it's in the cloud, it's our problem; if it's on premise, it's your problem. Okay? Thank you.

"In the slides I saw that Applitools supports Appium as well. Is that for native mobile apps?"

Yes.

"So in that case there are various phones with different screen resolutions. Is it necessary to set up a baseline for every single device, or can I group them and have one baseline for many devices?"

You can do both. Many of the companies that work with us do have a baseline per device, because they invest so much effort in getting the mobile UI and UX right.
They want to make sure that it looks exactly right, in the right colors, with everything exactly the same, so they keep a baseline for each environment and use the automated maintenance I've shown you to accept and maintain it quickly. Other customers keep a single baseline for one device that represents a category of similar devices; they run strict regression testing on that specific device and then just make sure, by layout, that the rest are okay. So it's a trade-off: on one hand you maintain only one baseline, for one device; on the other hand you get less coverage, because layout matching won't catch an icon being replaced by a different icon you didn't intend, since all it checks is that there is an icon there. You balance what you want to achieve against how much you want to invest.

"So the second approach could miss some bugs, right? Would it be advisable to go with the first approach and have one baseline for each device?"

Everyone makes their own choice, so I cannot say one way or the other. I myself prefer to be strict everywhere, because with strict comparison you can be close to a hundred percent sure that if the tool tells you it's okay, then it's okay. With layout matching, if the tool tells you it's okay, you know there are no severe problems, but there could still be problems that the tool is not designed to find this way. So you cannot be certain it's okay; you still need to test it in other ways, exploratory testing, to make sure that really nothing broke in those environments, or you take the risk. Okay? Thank you.

I'm still available if anyone wants to talk; I'm here today and tomorrow. Thank you.