Before getting into QA dashboards in Drupal, a bit of history on why I'm presenting this today. Around one and a half years ago, we were struggling with a problem in our company: closing our sprints on time. It was a very similar story across the board. We would complete all the development, but then the tickets would sit in the QA stream. There would be a few tickets stuck in QA and we could not close the sprint. And this was a trend. We tried to find out what exactly was happening, whether it was something we were doing wrong. We tried multiple different things; nothing worked out. So we went out to do some research. What's happening in the market? Is this a problem within our region, something we are not doing right? Or is it something other people are also struggling with, who maybe have a solution for us?

What we found in the market was quite interesting. The state of software testing is almost the same across geographies and industries. Automation: everyone wants it; not everyone has it. Functional testing is the most common type of automation. Very few people actually conduct performance and security tests consistently; for some it's a one-off thing, and some don't conduct any performance or security tests at all. The ratio of testers to developers is pretty small: for every ten developers, sometimes just one QA is allocated. QAs are frustrated. They don't have time, and they blame it on a lack of processes. Most of their time is spent on coordination and everything around the tests. And many people are struggling with the same problem we were.

A little further into our research, looking into where QA and testing are heading, we realized this is a growing market. This problematic space, testing, where we were struggling, is actually a growing market, and it is growing everywhere: in APAC, in North America. What exactly is happening? There were multiple articles, discussions, and conferences we attended where we heard a lot about QAOps and continuous testing, all the concepts people are talking about. Focus on automation: everyone wants automation right now. Breaking the silo: traditionally, every set of developers gets allocated one or two QAs, and that's how the testing happens; now, with so many teams in an enterprise and across organizations, people want to break the silo and have some cohesion between what the different testing teams are doing. Parallel testing is another very big concept coming up. There are so many kinds of testing now, performance, security, and everything else, that I cannot run them all serially. If I have to do a performance test, then a security test, then automation, it is going to take me a day, and I don't have a day to run all the tests and delay my deployments or further development. And when I'm talking about parallel testing, I need scalable infrastructure. I cannot run a performance test on my dev environment, and I cannot do it on my local machine; I need an environment that can scale just like production. And I cannot run my performance test on production either.

All of this together points at a very interesting trend: software testing as we knew it five years ago is going to change, and it is changing right now. We are at the cusp of a paradigm shift from software testing to a... oh, sorry.
If it is something that I'm doing, may I just keep on speaking, or do we fix it here first? Sure. So, basically, it's a paradigm shift that we are seeing from testing to digital assurance, and there are multiple reasons for it.

The first one is digital transformation. Organizations are moving away from their legacy systems; they are doing a lot of digital transformation exercises, and all of this transformation is fueled by distributed systems. We are moving away from monoliths. Businesses are evolving: where a digital footprint used to be an auxiliary thing, an augmentation of what they were doing on the ground, a lot of businesses are now digital-only or heavily digitized.

Then there is user experience, which has become paramount. Ten years ago, if a website I wanted to visit took four seconds to load, I would wait, and perhaps next time I would go there again. Now, take this example. I have two applications: Uber and DiDi. I'm standing just below this building and I ask for an Uber. The app takes a long time to load; my user experience is not good. So I install DiDi, I get a cab, I go. And once I've used DiDi, there is a marketing team sitting there to give me all the promotions to ensure I don't go back to Uber. Uber is losing revenue. So user experience, the way we had it five years ago, the way it is now, and the way it is evolving, is quite different.

Cyber attacks and security: I don't need to say much about that, it's all over the newspapers, that's how important it is. Costs are increasing and we need to keep them down; we cannot spend so much on testing. And the testing market has fragmented. Ten years ago, if you were talking about automation, by default it meant Selenium. Right now it means Nightwatch, Cypress, multiple other technologies you could be working with. And across teams, one team is using Nightwatch, another Selenium, another Behat or Cucumber. It becomes even more fragmented, and ensuring cohesion, finding the right talent, maintaining the talent pool, building cross-functional teams, all of that becomes more difficult. So that fragmentation needs to standardize at some point in time. Parallel automation we have already talked about: there are so many tests we have to run in parallel. And then there are diverse devices. Again, going back ten years, there were a few browsers I had to check, a few mobile phones, maybe a tablet. Now the devices where we put our content keep diversifying: I've got Alexa, I've got my watch, a kiosk, a website, a headless front end, an API product, all of these things. So the amount of testing we need to do to ensure I'm providing the best customer experience and my business is thriving has increased a lot. Everything I just spoke about, these are the boxes on this slide.

If this is the paradigm shift, it comes with challenges; as I said, we are sitting at the cusp of this transformation right now. One challenge: if I have to do all this extensive testing, I need the testing ecosystem, the infrastructure. Building that takes time, the DevOps engineers' time and the testers' time, and it also incurs cost. With DevOps, the speed of our development has increased, but the speed of testing, has it increased or not?
That depends on the teams and how they conduct things. But in a lot of cases the development speed has increased and the testing speed has not, and that is exactly the problem we were facing. The development was happening, but in the last one or two days of the sprint there were so many tickets to test that we were not able to finish the sprints as we wanted to. Low tester-to-developer ratios are frustrating the testers. Testing the application's quality needs more time, that's basically the frustration, along with the test coverage we need to ensure. I work at an SI, working for a client, and I cannot afford anything going wrong on their production system. So I need to ensure I'm doing all the testing, and even if the client is not really willing to pay for it, either I invest or I convince them why it is important for them.

And the solution is not something we are inventing here on our own; the industry is responding to this problem. This is not something we found and own the IP over. It is something I have researched on the net and that we found through multiple conferences, talking to a lot of people. Capgemini in 2020 came out with Smart Foundry, a platform for better QA. The World Quality Report talks about having a platform to improve QA. And there are multiple other companies doing similar things.

The solution we tried to implement internally, going clockwise around this slide: one, we wanted the infrastructure to be standardized, and to ensure we are not paying for stale infrastructure; we need to save costs. Two, there are so many reports. If I'm doing a performance test every sprint and a security scan every sprint, after 12 months I've got 24 different reports lying in my emails, sent around to different stakeholders. I need them in a central place. CI/CD: I definitely need that with all the testing I'm doing. And how about this: if I've got all these reports and can compare them, I can compare a report from January to one from December. How is my application faring? Is the performance going up or down? Have there been security incidents? Is the automation coverage the same, or going up or down? And a single platform with dashboards: one place where I log in and can see everything.

This led to this particular thing. It is internal; it is not really a product, and it is not open source right now. We are just using it to solve our internal problem, and it is what I want to demo today. We call it Tedboard, short for testing dashboard, with a beautiful logo we designed between ourselves. Basically, again going clockwise: on the infra side, we have achieved standardization and centralization; all the infra is in one single place for multiple teams. Again, as an SI, we are working for multiple clients, and those clients all have projects of different sizes. We get the infra on demand: if I'm doing a performance test every sprint, I don't want to pay for stale infrastructure for the two weeks until I need it next; I want it destroyed so I can save on costs. It is a single platform, so multiple different projects can live on the same thing. And with that multi-tenancy, I don't want one project to see the reports of another; only people authorized for that project, in the right role, can see them.
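As a minimal sketch of that project-level, role-based check, the names and structure here are hypothetical, not Tedboard's actual code; the real platform builds this on Drupal's users, roles, and permissions:

```python
# Hypothetical sketch of the multi-tenant access check described above.
# Illustrative only; Tedboard itself uses Drupal's roles and permissions.

class AccessControl:
    def __init__(self):
        self._roles = {}  # (user, project) -> set of roles

    def grant(self, user, project, *roles):
        self._roles.setdefault((user, project), set()).update(roles)

    def can_view_reports(self, user, project):
        # A user sees a project's reports only if they are a member of
        # that project with a role that carries report access.
        roles = self._roles.get((user, project), set())
        return bool(roles & {"tester", "business_analyst", "client"})

acl = AccessControl()
acl.grant("alice", "client-a-site", "tester")
assert acl.can_view_reports("alice", "client-a-site")
assert not acl.can_view_reports("alice", "client-b-site")  # another tenant
```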
So we enable role-based access controls. Then there is scheduling the tests: a performance test doesn't need to run during my working hours. I can schedule it to run on a Saturday night, get a report Monday morning, and see how it went; similar for security tests. So that's scheduling; CI/CD integration is something we are still working on. Reporting: live dashboards whenever I am conducting a test, so if I want to see how it is faring, or cancel a test partway through because I see errors, I can cancel it. Notifications: the test is going to run in one hour; the test has commenced; the test has completed; it's a success. I get these notifications in my email, so I don't have to go back in to find out the performance or security results for this sprint. And on the analysis side, one part is trends: as I was saying, how my application is faring across different sprints. Then comparisons. And another, more ambitious thing we are trying right now is building predictions: what goes into the code that actually breaks your performance, or what goes into the code that raises the chances of security vulnerabilities. That is still in progress.

Let me quickly show how this thing looks for us. I'm sorry for the bad UI; it's still internal and, as I said, not productized, so we have not really put a lot of effort into beautifying it. It does the work, and we are happy with it. There are different accounts; this is a demo account right now, so I'll just go into one. Consider that there is one account, one client, and they might be running multiple projects inside it. Oh, sorry, I got logged out. Okay. So there could be one project. Let me quickly go into the configuration to show the kinds of things we do with our projects and how we set them up here. For each project, I can set up the benchmarks for performance and for automation: what percentage of automation tests need to pass to call the sprint a success. I can do some more settings, as I mentioned, around scheduling.

For performance and security, I can create templates. Say, for example, I want a template for a 5,000-user performance test. I can set the test run duration, the ramp-up time, how many users I want to run the test with, the number of servers I want, say three or four, and, the most important aspect, the connection speed at which I want to test. My application will be opened in big cities, in small towns, somewhere out in the outback, over very different internet speeds. I want to run performance tests across all of those so I can tell my client: this particular application takes this much time to load at such-and-such locations or at such-and-such internet speeds. Then it's up to them; if they don't want to worry about slow internet connections, that's their call, but at least we have the full view. And that is the parallel testing we can do: instead of just one performance test, I can run three performance tests at different speeds. Right now we are using JMeter in the background, so I have to upload a JMeter script. And once I have a template, I can set up a recurring test: I just name it, maybe 'monthly test' if I'm running it monthly, and use the template I created. A rough sketch of what such a template might look like follows.
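In this sketch, the field names, the `run_jmeter` helper, and the speed profiles are all hypothetical illustrations rather than Tedboard's actual schema; only the JMeter flags (`-n`, `-t`, `-J`, `-l`) are the tool's real non-GUI CLI, and the `.jmx` script is assumed to read the properties via `${__P(...)}`.

```python
# Hypothetical test template, mirroring the settings described above:
# users, ramp-up, duration, servers, and connection-speed profiles.
import subprocess
from concurrent.futures import ThreadPoolExecutor

template = {
    "name": "5000-user monthly test",
    "users": 5000,
    "ramp_up_seconds": 300,
    "duration_seconds": 1800,
    "servers": 4,                                 # load-generator count
    "speed_profiles_kbps": [512, 4_000, 50_000],  # slow / average / fast
    "jmeter_script": "load-test.jmx",             # uploaded by the tester
    "schedule": {"recurrence": "monthly", "day": "Saturday", "time": "23:00"},
}

def run_jmeter(script, users, ramp_up, duration, results_file):
    # Real JMeter non-GUI invocation; -J sets properties the .jmx
    # script is assumed to read via ${__P(...)}.
    subprocess.run([
        "jmeter", "-n", "-t", script,
        f"-Jusers={users}",
        f"-Jrampup={ramp_up}",
        f"-Jduration={duration}",
        "-l", results_file,
    ], check=True)

# One run per speed profile, in parallel; the bandwidth shaping itself
# would happen on the load-generator instances, outside this sketch.
with ThreadPoolExecutor() as pool:
    for speed in template["speed_profiles_kbps"]:
        pool.submit(run_jmeter, template["jmeter_script"],
                    template["users"], template["ramp_up_seconds"],
                    template["duration_seconds"], f"results-{speed}kbps.jtl")
```

The point of the template is that one definition fans out into several parallel runs, and the `schedule` block is what a recurring test hangs off.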
I can make it run monthly, every Saturday or Sunday, with a start date and time and so on. Coming back to the demo project: this is how the summary page looks. I only have performance data right now; we are undergoing a compliance audit at the moment, so security and automation are on hold and I don't have that data. If I had security and automation data, I would see similar boxes here with those details. Going into performance, I get a list of all the tests that have run. It's not a very beautiful, well-orchestrated demo, I'm really sorry for that, so I would ask you to use a bit of imagination as to how it works in our projects. Every sprint, you run a test and it gets listed here with a status; you might run two tests in a sprint, and both get listed. I go into a particular sprint and I see all the data: for that particular performance test, what were the details? There was a benchmark we set up in the configuration, and the test is meeting it very well. There are the different pages and how each is performing. For the detailed time reports we use the Amcharts module in Drupal, which gives us very beautiful charting. We can delve into a particular chart, zoom in and out, toggle series on and off, and analyze that particular test a little further. So there is average response time, page response time for the different pages I have in this project, error result analysis, the TPS chart, all of that. And even for the errors: especially when I'm running a performance test, a lot of errors accompany the peak load, some network fallout or similar. So at what time the errors came, the error breakdown, which pages were getting errors and which were getting the most, all these standardized reports I get for each performance test.

And if I want to, say, compare the last three sprints, to take an example again: comparing my performance over the last three sprints, for different pages and overall. Across the runs, the comparison shows it improving, then degrading again for certain pages. One particular page, this "active goals" page, had a very high response time, and comparing average response time per page shows it has improved. All of that we are able to do across multiple projects on a single platform, instead of having these reports in emails or Google Drives. Anyone, specifically the BAs or the business people who want to see the health of a project, can just log in with their access to that project, go in, see, compare, and have the discussion right there and then.

I could not show the rest of the dashboards, so I have a screenshot just for this presentation. Just as for performance, we have dashboards for UI automation and mobile automation: the tests run and we get a standardized report. That also gives us insight into how our coverage through automation is changing over time. Is it flat or going up? There were some bad test cases that were taken out in a particular sprint, then fixed, and new test cases were added.
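Under the hood, any of these sprint-over-sprint comparisons comes down to aggregating the same metric across stored runs. A minimal sketch, assuming JMeter's CSV results format, where `label` and `elapsed` are real JMeter result columns; the file names and the `compare_sprints` helper are made up for illustration:

```python
# Compare average response time per page across sprint result files.
# A JMeter .jtl in CSV format includes "label" (sampler/page name) and
# "elapsed" (response time in milliseconds) columns.
import csv
from collections import defaultdict

def avg_response_by_page(jtl_path):
    totals, counts = defaultdict(int), defaultdict(int)
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["label"]] += int(row["elapsed"])
            counts[row["label"]] += 1
    return {page: totals[page] / counts[page] for page in totals}

def compare_sprints(jtl_paths):
    # One column per sprint, one row per page, like the trend view
    # in the dashboard.
    runs = [avg_response_by_page(p) for p in jtl_paths]
    pages = sorted(set().union(*runs))
    for page in pages:
        cells = ["{:8.0f}ms".format(r[page]) if page in r else "       -"
                 for r in runs]
        print(f"{page:30s}" + " ".join(cells))

compare_sprints(["sprint-21.jtl", "sprint-22.jtl", "sprint-23.jtl"])
```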
We get similar insights from the security test trends, how the security has been faring, how many high, medium, or low vulnerabilities there have been, and likewise for the API automation test trends. Couple that with your Jira reports, and the visibility we have into QA, the time we spend on it, and the results we get have all changed, just from changing our approach and putting this in place. And I specifically have to thank Drupal here, with its many modules: most of this has been built on contributed modules and some Form API work. Beyond a few API integrations, it's all modules, the Amcharts module and the rest. It was kind of a side project, which is why it has taken a year to reach where we are right now, adoption included. But the results it has provided have been immensely helpful for our teams, and we have got really good feedback from our testers and also from the clients, who are able to see these reports in real time now.

There are three or four major things where we think it has made a difference. One is the dynamic infrastructure build. How it used to happen a year ago: I have a project, I go to a DevOps engineer and ask them to set up the ecosystem so I can run a performance test. They come back and say they are not available for the next three days, come back after four. They put it in their pipeline and backlog, then start building; it takes five days. There are some bugs, some problem; we go back to DevOps; the same cycle repeats; and then we don't have time because another priority item has come up. Now, building that infrastructure is very easy. I can just log in to the platform, create a new project, and start running all these automated tests. All I need to do is write my automation, write my JMeter script, or have my ZAP proxy script ready; the infrastructure is taken care of by the platform. There is Jenkins in the back end that fires up an AMI, which creates the whole infrastructure for the test, for the duration of the test. It is there while the test is running; all the data is stored in Mongo or Drupal, depending on what kind of test we are running; and once the test finishes, the instance we created is terminated. So we are not paying anything extra, not even for the 10 minutes when a test is not running. A sketch of that lifecycle follows below.

We have already created a few plugins, and the list is getting bigger. We have JMeter and ZAP proxy; now we are moving to Burp Suite; and Selenium and Nightwatch are already built in, so any team using Selenium or Nightwatch can just upload their scripts and start using it. Postman is also there. And if there are more, like Protractor, then again, as a side project, it may take a developer two to two-and-a-half weeks to build those connectors, because the API is there, and then that connector becomes available to the whole company. Most of our projects run Selenium, Nightwatch, Postman and so on, and we have all those connectors right now, but it is all extendable, and it keeps extending as we get more tools in different projects.
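Coming back to the dynamic infrastructure, that on-demand lifecycle is essentially launch, run, collect, terminate. A minimal sketch with boto3 against EC2, assuming a pre-baked AMI with the test tooling installed; the AMI ID, instance type, and the `run_test_on` helper are placeholders, and in the real platform Jenkins drives this rather than a script like this one:

```python
# Sketch of the launch -> test -> store -> terminate lifecycle.
# Assumes a pre-baked AMI with JMeter/ZAP installed; IDs are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="ap-southeast-2")

def run_test_on(instance):
    """Placeholder: trigger the uploaded script on the instance and
    return the results (in the real platform, Jenkins does this and
    the results land in Mongo or Drupal)."""
    ...

# 1. Fire up the load-generator instance only when a test is due.
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # pre-baked test AMI (placeholder)
    InstanceType="c5.xlarge",
    MinCount=1, MaxCount=1,
)[0]
instance.wait_until_running()

try:
    results = run_test_on(instance)    # 2. run the scheduled test
    # 3. persist results centrally so dashboards can chart and compare them
finally:
    # 4. terminate immediately so no one pays for idle infrastructure
    instance.terminate()
    instance.wait_until_terminated()
```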
Now, onboarding a new project: if I get a new project and need to start a new sprint, getting the infrastructure, getting access to these reports, getting access to everything this platform provides is a five-minute job, which we feel really happy about whenever we are creating a new project. We just create an account for the organization and create a project inside it; it could be a dev site, staging site, or production, or there could be multiple different projects. We add the members by email and start running the tests. There is nothing else, no infrastructure work or coordination, that needs to be done.

Now, how has it really helped the organization, and especially the teams who are using it extensively right now? If I divide the whole structure into four specialized teams, the development team, the operations team, digital assurance (the testing teams), and marketing, which is mostly the client, then cross-functional collaboration has drastically improved across the board. The testing teams have more time now because they are not coordinating, not spending a lot of time on peripheral activities, not even on running the tests; it's all scheduled, running on Saturday or Sunday mornings depending on the project. And because they now have time, they have more interaction with the client and the marketing team to understand what the goal is. Are you really planning to take your product into low-network, high-latency areas or not? Then we will shape the testing, or come up with recommendations, based on that. Their understanding of the product, and the effort they put into testing with that understanding, has improved; that is the feedback we have got. And the problem we started from, that with DevOps the development was really fast while QA was taking time, has synchronized to an extent. I wouldn't say it's 100% of the best we could have achieved, but it is still far better than where we were a few months ago. The coordination between DevOps and the testing team, and the overall coverage, have improved, because, very honestly, we all know there are times when there is so much pressure to complete the sprint that the performance test slips to the next sprint, or to next month. Now that all runs regularly, which gives us very good coverage on all our projects.

That was it. Hope you liked it. Any questions? Happy to answer.

My question is, and I may have missed this a little because I was switching between two sessions: your product is for your internal use, right? It's quite specific. Do you have a goal or vision to make it public, open source it, or make it a more generic product?

That's a very painful question. You get me the funding, I'll open source it the next day. On a serious note: when we started this whole project, there was a lot of speculation and skepticism about whether it would work, whether we were putting our effort into something that might or might not work, because it was eating a lot of development time. Where we are right now, I have the confidence to present this at DrupalSouth, and things are improving. We definitely want to open source it at some point in time, or at least publish blogs, or at least lay out the path for how this can be achieved if you want to build it from scratch. The only thing is that until now we did not have enough data to call it a success. And as you have seen from the demo, it is pretty raw.
We did not build it at the time in a way that it could be open sourced, because when we want to open source it there are multiple other aspects we have to look into: coding standards, modularity, and all of that. The approach at the start, specifically in the first four months, was to get it to work somehow. Performance testing was our biggest pain: get it to work somehow. Once that was done, we expanded into security, then into automation. So the code I have right now, if I gave it to you, you would just throw it back in my face. But I assure you that this thing works and this approach definitely works. So perhaps next DrupalSouth I'll come up with an open source version, and if people like it and we see traction that more people want to use it, I'd be very happy to open source it.