Welcome everyone to the session on why cross-browser and device platforms are ripe for disruption, by Raghavan Ambiya Ganandhan. We are glad you could join us today. Without further delay, over to you, Raghavan. Good morning, good afternoon, good evening to everyone, from wherever you are joining. My name is Raghavan Ambiya Ganandhan and I'm a principal software engineer at Expedia. Today the topic I'm going to talk about is why cross-browser and device platforms are ripe for disruption. These SaaS platforms, the cross-browser and device platforms, provide different browsers, different browser types, different browser versions, different operating systems and OS versions, mobile devices, mobile browser versions, and so on. Now, why this topic? Basically, what was acceptable two years ago was not acceptable last year, and what was acceptable last year is not acceptable today. It could be speed, it could be cost, it could be features. Mainly because technology is evolving so fast that unless we adapt to it, we're going to miss out on so many things — so many positive things. And that's what I'm going to talk about: a positive disruption. This is a small introduction about myself; my profiles are on LinkedIn, my GitHub repo, Twitter and my blog. I've been in this industry for almost 20 years now. I started as a test automation engineer, I liked it, and I've stayed in the industry for 20 years. I speak at conferences — my first conference was SeleniumConf 2015 in Oregon. I mentor people; that's one way I give back my knowledge and experience to the community. So what is the problem statement of this talk? As I mentioned before, cross-browser and device platforms are not built to handle the real scalability that shift left requires, in a cost-efficient way. Now, what is shift left? Shift left is the ability to validate your change
even before you merge your pull request, instead of validating it after you merge, on the right side of the pipeline. You want to do any test, any test automation, very close to the code, and that requires high scalability. Why do you need high scalability? Because you need a faster feedback cycle as well. So the ability of the current SaaS platforms to align with some of the key components and best practices of software engineering is lacking, and that's what we're going to discuss: what the problem is and how we can solve it. The talk is split into six parts. First I'm going to talk about the key ingredients of a CI/CD pipeline, because for everyone here, the core, the nucleus of why you're here, is that you want a very good pipeline that makes sure your product — your application, your code, your changes — reaches production faster, and at the same time with really good quality. Everything we do revolves around the key ingredients, the best practices, of a CI/CD pipeline, and we're going to look at that first because the rest of the talk revolves around it. The second part is the problems in the current cross-browser and device platforms: what are the things they are lacking that are not in alignment with those key ingredients? The third is the new use cases contributing to the problem. Things are changing — there are new browsers coming up, new things coming up. How do these new use cases add support to our argument that there is going to be a positive disruption, and why are some features lacking? The fourth is the evolution of supporting technology: we discuss the problems, and we're also going to discuss the supporting technologies that I think could create solutions for them. The fifth is disappearing outliers.
We're going to have a few slides on what actually created the SaaS trend in the first place — some of those drivers are disappearing, and we'll see which ones. And finally, test smart: testing never ends, it only stops, so how we can test smart is what we are going to discuss. So let's get into the topic. The first part is the key ingredients of a CI/CD pipeline. For any CI/CD pipeline, these are the six components you want to achieve. One: you want to fail often, which means you want to run tests frequently. If you fail often, you will find issues faster. You don't want to change something and find out the next day — you want to know now. Two: focused micro tests, meaning everything from unit tests up to the top of the pyramid is a focused test that targets one particular thing, so when it fails, you know why it failed. Three: test fast. This is linked to the first one — unless you can test fast, you cannot fail often. Imagine your pipeline takes one hour to run your tests; you cannot fail often that way. For that, you need the ability to run your tests concurrently — not just for your team or your project, but for your entire organization. Four: fail fast. You don't want to wait for everything to complete; whatever fails, stop it immediately. Five: shift left. We already talked about it, but the point is: whatever you run to fail often, whatever you test, whatever you run fast, whatever you fail fast — do it close to the code, before you even merge to your main branch. That's shift left. Six: visualize test results. In one project alone, in one pipeline, you might be deploying your changes maybe 15 times a day, depending on the number of changes and the number of teams and people.
But you don't have all the time in the world to go and look at all the failures every time they happen — that's a complete waste of time. We have the technology: we can use machine learning to bubble up the things that are relevant, the actual customer-impacting problems, so you know which are the important problems to look at. So visualizing results and making debugging easy is another important ingredient. Those are the key ingredients. Now we move to the next topic: the current problems in the SaaS platforms that offer cross-browser and cross-device features. First, the cost. If you want to use one of them, the cost is based on the number of parallel connections — you pay for one connection, or ten connections. If you buy ten connections and your organization is big, with ten teams each having lots of test cases, then your concurrency across the whole organization is just ten, which means your test runs are going to be slow. How fast you can go to production is going to be slow. How fast you get feedback on your change is going to be slow. You're going to be slow because of this business model. So your productivity, your quality, and how fast your changes reach production depend on how much money your organization can afford to spend. That's one big problem. Second: fixed cost, not based on actual usage. Think of the last two years of the COVID pandemic — and we are still in it. There are many Fortune 500, including Fortune 100, companies that didn't make even a single dollar for a few months. Do you think they'll be happy to pay a fixed amount per month or per year when they're not even making money? Paying regardless of usage does not reflect the current nature of technology, and that's a problem.
The third problem with the current SaaS platforms is that they are not suitable for the shift-left use case, because of the first point about parallel connections — shift left requires high scalability, as we'll see in detail now. I don't want to run my tests only after I merge; I want to run them for every branch, for every commit, so that I don't have problems later. So it's not just testing after your change is merged, it's testing every commit on every branch, and that requires a level of scalability that I doubt the current platforms are able to provide. We're going to dig deeper into this use case. I've been in the industry for 20 years, and 10 years ago it was not surprising that people released once every six months. Releasing once every three months was the norm, mainly because there was no concept of microservices or micro frontends. Your entire website was one monolithic application. When you made a single change, you weren't pushing a small microservice to production — you had to ship the entire code base to production. That's why releases were so slow. Things have changed. If you look at this website, each of these green circles might be a separate project, a micro frontend with its own CI/CD pipeline, going to production on its own. Customers might not see it; the pieces are stitched together by a proxy in the background, but they are independent, handled by separate teams, and they go to production on their own. Now, going deeper: if I map this to CI/CD pipelines, say there are 50 teams — just as a theory — and each team has five people and a CI/CD pipeline. Each pipeline runs 30 tests whenever you merge your pull request. If those 30 tests run sequentially and one test takes two minutes, that's one hour.
If they run in parallel, they might finish in two minutes. That's fine. But for an entire organization with 50 teams, in the worst case there might be 1,500 tests running concurrently. I agree it won't quite happen, because not every pipeline runs at the same time — the chances are very low. But even if you take one third of those tests running in parallel against the current platforms, it might cost you a million dollars or more, because if you need true concurrency, if you need really fast feedback, you need to buy at least, say, 700 or 800 parallel connections. Now, what I showed you was a normal pipeline. Let's go further into shift left. This slide is from the National Institute of Standards and Technology, from the US Department of Commerce. It shows that when you find a problem while coding, fixing it costs five times the amount of fixing it at the requirements stage or before. And as you go further to the right side of the pipeline, every time you find issues you block more people, more people need to get involved, you need to context switch — especially in production — and the cost keeps going up. Why would you test at the right side of the pipeline when you can do it while coding, close to the code, for a much cheaper price? The same slide, but expanded for one team doing shift left: in that team there are five people working on five branches, and they want to deploy and run the tests on their branch for every commit. One pipeline then requires 150 parallel sessions; across the organization, around 7,500. Again, even if we take one third of that, true parallelization using a SaaS platform is going to cost a million dollars or a few million dollars. I'm sure no CTO is going to sign a procurement deal worth that amount for cross-browser testing.
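The back-of-the-envelope numbers above can be reproduced in a few lines. To be clear, the team and test counts are the talk's illustrative figures, not real data:

```python
teams = 50
people_per_team = 5           # also the number of active branches per team
tests_per_pipeline = 30
minutes_per_test = 2

# Sequential run for one merge: 30 tests x 2 min = 1 hour of waiting.
sequential_minutes = tests_per_pipeline * minutes_per_test

# Worst case after merges: every team's pipeline fires at once.
peak_concurrency = teams * tests_per_pipeline

# Shift left: every branch tests every commit, not just merges.
per_team_shift_left = people_per_team * tests_per_pipeline
org_shift_left = teams * per_team_shift_left

print(sequential_minutes, peak_concurrency, per_team_shift_left, org_shift_left)
```

Running this prints 60, 1500, 150 and 7500 — the one-hour sequential run, the worst-case merge concurrency, and the per-team and organization-wide shift-left session counts quoted in the talk.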
Here's another view of the same thing: you can see different branches branch out, and for every commit you take the change, deploy it somewhere, test it, and destroy the stack. For each pipeline you use your own scalable grid to run your tests. Another reason for the problem, I think, is that the platforms are fully dependent on expensive real devices. There is no reason you have to test everything on a real device, and depending on real devices drives up the cost of the parallel connections and everything else. We're going to look at whether it's really important to run everything on a real device — I think it's not necessary, and that's another factor. The next one is cloud versus data centers. If you are relying on a data center, your ability to scale to use cases like shift left is going to be limited. But if you are in the cloud, that can help reduce cost and help you align with the best practices of the CI/CD pipeline. So we've looked at the current problems in the SaaS platforms — share your thoughts in the comments. The next topic is the emerging use cases that contribute to the problems we described. For example, I don't know how many of you have heard of in-app browsers. Certain platforms like Facebook, LinkedIn and Quora show websites inside their native apps to keep users within their walls, so they can track what happens. For example, if you open eBay in the Facebook native app on your iOS device, you will see something like this. This is not a real browser; it's something like a WebView on Safari, I think. You can go all the way to checkout, you can buy things, and you will still be inside the Facebook native app — you never leave. So I thought, let's see how good it is, and I tried to open my Gmail on iOS Chrome. This is how it looked, and this is how it looked in the in-app browser.
My point is, Akamai data shows that in-app browser traffic is increasing. Companies advertise on Facebook, LinkedIn, Quora and everywhere else, and the majority of users are not going to disable this feature and land on an actual browser — which you can do, but normal users don't know that. You might have tested on Chrome and Safari and the rest, but users are going to end up using your e-commerce site in the Facebook in-app browser. Is it tested for that? How are you going to test that? This adds to what we have to cover as part of our test discovery. It's a new use case — how would you even test it? Maybe you drive the Facebook app, enter something and open the site, or there is a way to simulate it using a WebView or something like that. The next one: desktop is still the king of conversion. Why? If you look at traffic patterns, I think every e-commerce site will say mobile traffic is maybe 60% and desktop is 40% or lower. That's fine. But look at the conversion rate — out of the people coming to your site, how many actually buy something? The global mobile conversion rate is 1.81, compared to 1.98 on desktop. So even with fewer visitors, desktop is giving you more money than mobile. Just because desktop traffic has decreased, you cannot reduce your desktop coverage — desktop is making you more money, so you still need to keep or increase coverage on desktop browsers. I also think this trend will saturate at some point, because people browse on mobile, and once they know what they want, they go back to desktop to buy — sometimes the purchase funnel is complex. The next one is release schedules: Chrome, Firefox and Edge each release roughly once every five weeks.
So if your website is sensitive to changes between browser versions, you might need to test not just on latest and latest minus one, but probably latest minus two as well. The majority of customers are on latest or latest minus one, because most regular users don't know how to disable auto-update, and browser releases roll out in phases — that's why about 80% of customers are on latest or latest minus one. Again, this might require you to increase your coverage, but it's very subjective. The next use case supporting the argument is region-specific browsers. Why region-specific browsers? In the past, for any big e-commerce website, probably more than 60% of revenue came from North America. That trend has changed: now probably more than 50% of revenue comes from non-North American countries. It doesn't mean revenue has decreased in North America; it means revenue is growing outside it, mainly because standards of living are rising and economies are growing — people want to experience new things, people want to travel. Which means, for example, in Vietnam there is the Coc Coc browser. It's still based on Chromium, but it's a customized browser. There's UC Browser in China and India — you can see the usage — UC Browser in Indonesia, Yandex Browser in Russia. You can see their share is close to Firefox's, even though they are based on Chromium. I always follow this approach: find a pattern and follow the pattern — it might lead you somewhere and predict what's coming. And this is the desktop web browser market in China, where you can see the QQ and Sogou browsers hold a reasonable piece. On mobile, Chrome is nowhere to be seen — it's just 8%. At one point I thought, OK, there aren't going to be any new browsers.
Brave and Vivaldi are based on Chromium — Brave focused on security and privacy, Vivaldi on customization. But there's a new browser called Flow, with a new rendering engine, not based on Chromium. Right now it's only available on Raspberry Pi, but this is a trend. I always think no technology and no tool is too big to become obsolete. So these are some of the new use cases, and now we're going to look at the evolution of new technologies. We saw the problems; these technologies are going to help us solve the ones we're currently experiencing on these platforms. The first two are not new: AWS bare metal, or the equivalent in Google Cloud or Azure, and AWS Windows instances, which let you scale things like Android emulators and desktop browsers. Among the newer options, AWS has released EC2 Mac — for the first time, I think about a year ago — a mac1.metal instance that allows you to scale and run, for example, the iOS Simulator, iOS Safari and macOS Safari, because their usage is really increasing. It's currently only offered as a dedicated instance, but it's still scalable to some extent. There's also a new trend of Mac mini clouds — the equivalent of AWS Linux instances, but with Mac minis. Right now it's primitive: people use them only as remote Mac machines or as Jenkins agents and things like that. But a new pattern is emerging — for example MacStadium's Orka tool, where you can run macOS on Kubernetes. You might ask: why not use a Hackintosh? I think Diego covered it — as an organization, it is legally not a good idea to use a Hackintosh, running macOS on non-Apple hardware. According to Apple's terms and conditions, you shouldn't be doing that.
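As a rough illustration of what making the iOS Simulator part of a grid could look like, here is a sketch that builds the `xcrun simctl` commands a provisioning script might run. `xcrun simctl` is Apple's real simulator CLI; the naming scheme and the one-simulator-per-node layout are my assumptions, not anything a vendor ships today:

```python
import shlex

def simulator_boot_commands(device_types):
    """Build shell commands to create and boot one iOS simulator per
    requested device type, so each can be attached to a test-grid node.
    Hypothetical layout: node names are derived from the device type."""
    commands = []
    for device in device_types:
        node_name = f"grid-{device.replace(' ', '-').lower()}"
        # `simctl create <name> <device type>` picks the newest runtime
        # when no runtime is given.
        commands.append(
            f"xcrun simctl create {shlex.quote(node_name)} {shlex.quote(device)}"
        )
        commands.append(f"xcrun simctl boot {shlex.quote(node_name)}")
    return commands
```

For example, `simulator_boot_commands(["iPhone 14"])` yields a create command followed by a boot command for a `grid-iphone-14` simulator. A real provisioning script would then register each booted simulator with the grid, which is the part no current SaaS vendor packages.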
As I said, no technology is too big to become obsolete. I know for a fact that in top universities — the IITs in India, Stanford — people are working on a new technology that takes the best of both worlds, VMs and containers. The outcome could change not just cross-browser testing but the entire cloud industry. For example, if a Selenium Hub Docker image takes one GB today, with this it might be one MB or less. Imagine the speed at which you could create instances and move faster, and the amount of storage you'd save. My point is, we shouldn't sleepwalk; we should open our minds, look at the technology, and see how we can make use of it. So, what are the disappearing outliers? Many years ago, IE was a very unique browser compared to the rest. Microsoft provided virtual desktops where you could go and test IE. Even Microsoft realized it wasn't going to be sustainable, and they moved to the Chromium-based Edge. IE 11 support itself ends on June 15, 2022, on certain OS versions, and MS Edge Legacy support has already ended. IE is one of the main reasons the SaaS platforms started evolving in the first place, and that reason itself is disappearing — one less thing to worry about. Those are the disappearing outliers. The next one is test smart. Do we have to test a change on everything? Testing never ends; it only stops. So how smart, how cost-efficiently, can we test? Say you have a change: do you have to test it on every browser on Windows, every browser on Mac, every browser on mobile devices, on iOS and Android? You will never finish. It would be insane — it's not smart. The way I normally put it is: first test your change on one browser and prove the code works, then take a small subset and test it on the other browsers.
So I thought of this as a proof by induction. In maths, induction proves that we can climb as high as we like on a ladder by proving that we can climb onto the bottom rung first — don't worry about the upper rungs, can I climb onto the first one? — and that from each rung we can climb to the next. Take that concept to automation: the base case proves the statement for n equals zero without assuming any knowledge of other cases. I don't want to worry yet about whether my change works on all browsers; I first worry about the base case — test your change on one browser and prove that the code change is working. Imagine that as the left-hand side of the equation. On the right-hand side is the inductive step: now that I know for a fact my code change works, does it work on browser x, y, z? And I don't have to run the entire suite there — just pick the important tests and run them on your top browsers. So that's another way to look at it: does your code change work on a browser? Yes. Now check whether it works on your other browsers with a small subset of tests. This gives a test pyramid for every change: Chrome is used for step one, then the Chrome emulator for all responsive tests, then Edge, Firefox, Safari and the iOS simulator, and then real devices only where needed. This gives you the scalability you need when you shift left, when you want to run fast and fail fast — the key ingredients we saw for the CI/CD pipeline. The same slide maps to the different rendering engines and JavaScript engines, where you can see we are not missing anything. So this is the idea we are looking into.
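That base-case/inductive-step split can be sketched as a simple test-selection plan. The suite names and the choice of Chrome as the base browser are illustrative, not from any real project:

```python
# Hypothetical suites: the full suite proves the code change (base case);
# the smoke subset proves rendering on each additional browser (step).
FULL_SUITE = ["search", "listing", "details", "checkout", "payment"]
SMOKE_SUITE = ["search", "checkout"]

def plan_runs(browsers):
    """First browser in the list gets the full suite; every other
    browser gets only the small smoke subset."""
    base, *others = browsers
    plan = {base: FULL_SUITE}
    for browser in others:
        plan[browser] = SMOKE_SUITE
    return plan
```

With three browsers, `plan_runs(["chrome", "firefox", "ios-safari"])` schedules 5 + 2 + 2 = 9 sessions instead of the 15 a run-everything-everywhere approach would need — and the saving grows with every browser you add.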
For example, take the different breakpoints of your page. Do you actually have to run on a real device to figure out whether your responsive page works? No, you don't — you can just use the Chrome emulator. We have to be really smart about using technology: understand it instead of sleepwalking and thinking, I want to use what everyone else is using. Think, and ask why a hundred times, before you start using something. A real device is not the solution for everything. So the positive disruption — the use cases and solutions you saw — can be mapped to this test and infrastructure pyramid. It could be SaaS or on-premise; it could be any cloud solution; and on top an orchestration engine, a container platform, Selenium, Appium or any other framework, and real devices where needed. This will help you scale — not just your team, not just your project, but across your organization, across any organization — and stick to and adhere to the best practices of software engineering: shift left, fail fast, run fast, and so on. As an example, your infrastructure might run on AWS, Google Cloud or Azure, and you can use the orchestration engine, the cloud and Docker to create browsers, scale them, run your tests, destroy them, and repeat, in a fully auto-scalable fashion. It could be browsers, mobile emulators, simulators — anything. And it is repeatable, not just within your team, but for multiple branches, across many teams, across your entire organization. There's no limit on parallelization, no limit on how fast you can run. That's the key thing everyone is striving for: how fast I can reach production; how fast I know whether my change has caused a production issue; how fast I know whether my change is going to break the pipeline with everyone waiting while I fix it; how much context switching I can avoid.
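To make the breakpoint point above concrete: ChromeDriver's real mobile-emulation capability lets one Chrome instance impersonate each viewport, with no device farm involved. Here is a sketch of the capability payload — the breakpoint values and pixel ratio are just examples:

```python
def emulation_capability(width, height, pixel_ratio=2.0):
    """Build the 'goog:chromeOptions' -> 'mobileEmulation' capability that
    ChromeDriver accepts for viewport emulation; with Selenium you would
    pass the inner dict via ChromeOptions.add_experimental_option()."""
    return {
        "goog:chromeOptions": {
            "mobileEmulation": {
                "deviceMetrics": {
                    "width": width,
                    "height": height,
                    "pixelRatio": pixel_ratio,
                },
            },
        },
    }

# One emulated Chrome session per responsive breakpoint,
# instead of one real device per breakpoint.
breakpoints = [(320, 640), (768, 1024), (1280, 800)]
capabilities = [emulation_capability(w, h) for w, h in breakpoints]
```

Each capability dict can be sent to any ordinary Chrome node on the grid, so responsive coverage scales exactly as far as the grid does.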
Even if you save only four hours per pull request, that's a lot of time across an entire year. So that is what this model is trying to achieve. Now, what should you expect from a SaaS platform, beyond adhering to the key ingredients of the CI/CD pipeline? As outcomes, what should we look for? One: save money — auto-scale the infrastructure, pay only for what's used, minimize expenditure on cross-browser testing. Two: make money — you should be able to go to production really fast, and not just reach production, but with quality code. These two are business interests; business people will really love them. Then, for engineers: save time — execute multiple test cases across multiple browsers concurrently, with no limit on concurrent testing; shrink the pull-request feedback cycle; complete a test run in a few minutes instead of a few hours. Enable shift left — avoid production incidents; help you test each one of your changes in your CI/CD pipeline, so each pull request is validated super fast, every commit can be tested, and multiple PRs can be tested concurrently. These are for engineers, and they will love them. Visualization I've already talked about — bubble up the actual failures that impact customers. And accelerate change: things are changing, the definitions of our roles have changed and are continuously changing, so the challenge is not to sit there and complain but to understand and pivot. There's a proverb: the best time to plant a tree was 20 years ago; the second best time is right now. If you sit and wait for somebody else to come and solve the problem, nobody is going to solve it for you. If this is our problem, we have to solve it. So that's the talk on the positive disruption.
Thank you for listening, and thank you very much. Thank you, Raghavan — that was a great talk, very informative. We are open for questions now. OK, people are typing in questions. We have a question from Pooja Shah; I'll read it out to you. Could you please share the most challenging pain points of inculcating the culture of shift left — what worked and what did not work? Good question. If you want to shift left, you may be a director, a manager, a senior director, a VP, a CTO or a CEO — you can go and tell a team, shift left, do everything close to the code. It will work for one month, two months, and that's it. If you want a real, successful shift left, you need a change in mindset. You need to build champions within each team who can take it forward and implement it. The way we implemented it at Expedia is that we built champions within individual teams. We identified the people on the same wavelength, helped them understand, made them champions, and had them as part of the solution — a community effort. Then they go back to their teams and drive it. No authority is going to change it for you; it's the mindset, and creating champions within every team and empowering them to go and change things. That's the only way to have a successful shift left. Sounds very sensible, and I hope that answers the question. We have one more question, from Ashwin Sapinarayanan: how do you group tests for the other browsers? I think it's very subjective — it depends on your individual case. For example, imagine you have five different pages. For my induction step one, I will run everything; I want to make sure my code change is working, and I don't care about browsers yet. But for the other browsers, I might have something like a small end-to-end test.
Can I go all the way through the funnel? And maybe, can I go to one particular page and do certain things? Because I'm already confident my code change is right; I just need a small sanity smoke test on, say, iOS Safari. So how do I choose? It's just your scope: if you have five pages in your funnel, make sure you have two tests per page or something like that. That's what I would do if I were you. Makes sense. I had a question myself: any recent developments in the device-farm area that you've noticed — changes we can adopt already? The place I'm coming from is this: of course, I could use the embedded browser, but is there a way to cross the chasm between real devices and embedded browsers? Any cheaper alternatives to the actual expensive devices? I think some of the solutions help — for example, AWS is providing Mac metal instances. If we have a way to run the iOS Simulator and make it part of the grid, then you can scale it however you want. And for Android, there's no problem running it on Linux, so you can already scale it easily: containerize it, add it to your Selenium grid, and use an orchestration platform. That fits perfectly into the infrastructure pyramid. However, I believe there's no vendor packaging that as a service yet, if I'm not wrong. Correct — you might be right. I don't know how many are providing the simulator and emulator; basically the cost, the business model, is the problem. Understood, thanks for that. All right, we're just about reaching time, so I'd quickly like to thank Raghavan for sharing his experience with us today.