Yeah, thanks. Thank you so much for the introduction, and again, thanks everybody for being here with us today. We're going to be talking about a couple of things: testing in general, and also this concept called the big ball of mud. We'll introduce that if you haven't heard of it before, and how to avoid it. It's generally where quality engineering has been headed. So Hari, why don't you tell us a little bit about yourself before I introduce myself?

Hey everyone, thanks a lot for joining us. I'm Hari Padmanaban. I'm a quality evangelist at Harness. I'm very passionate about quality engineering, mentoring, and helping people reach their true potential, and I've been part of a lot of transformational journeys in engineering, and in quality specifically, over the last couple of decades. Very excited to be here.

Yeah, and I'm Ravi Lachman. Don't short-sell yourself, Hari; Hari runs our entire quality engineering organization here at Harness. It's funny: in my career as a software engineer, my very first set of nemeses was actually the quality assurance team. I used to think, oh, these folks always make me rewrite what I did, or they don't understand the requirements. I was young and angsty. But as time went on, I really started to respect everything quality engineering does, and Hari has been crucial to our transformation here. We'll talk about some very specific problems that we faced at Harness. But to level-set, depending on where you're coming from, what are we talking about today? First, what even is a test? It's something you take in school, so why is it in software now? Then we'll hand it over to Hari for an introduction to quality engineering: as an engineering discipline, how do you make sure quality is a core focus of your product? And then boiling the ocean, which is a very interesting problem in software development, and even more so in quality engineering.

So, testing. What is a test? You might hark back to your school days, testing your academic knowledge, or during COVID you might have gotten a PCR test. But what is a test in software or technology land? Basically, you want to make sure that the features match the expectations, and those expectations can be functional or non-functional requirements. On the software side of the house it could be: hey, I'm making a calculator, can it divide properly? There are dozens of tests you can run to check that. All the way over on the infrastructure side: if you're making a certain change, I want to test that my machine came up, or that this particular port is open or that particular port is closed. So you're testing functional requirements, and you're also testing non-functional requirements such as scalability, robustness, and security. And here I'm stealing Hari's words, because Hari has been mentoring me: the requirements for what you test have always been there; it's the methods you use to test that have been changing. There's a huge push for test automation and a huge push for test coverage. We'll look into that a little later, but basically testing is just as important as ever.
It's building confidence in the system or the functions or features that you're building. It's a core principle of software engineering that you should test, right? We don't all get things right the first time, so you want the confidence to systemically show what's going on. And going back to the question, did the feature match the expectations? That could fall anywhere on the spectrum of functional and non-functional requirements, and it's a very human-centric problem: as humans, we interpret things differently. That's why my first set of nemeses (yes, that's a real word) in my career was the quality assurance folks. "Oh, they didn't read the requirements." "No, you didn't read the requirements." And the product manager would probably say we were both wrong. It's really about understanding how we can prove it. Going back to our calculator example: how can we prove that we can divide? If you divide by zero, do we handle it gracefully? How do we prove this particular thing? It's about systemically showing that the feature matched the expectation, and making sure multiple people agree that this is what you're trying to build.

So let me talk about a couple of different kinds of tests. Depending on where you fall on the spectrum, whether you're coming from a software engineering perspective, an infrastructure perspective, or somewhere in between, say a DevOps perspective, the first thing you'd write is a unit test. It's one of the very first things I had to do as a professional software engineer out of university. I never had to write tests in university, but early on in the professional world I had to write JUnit tests. The unit test covers the smallest area you can test, and this is actually going to play into the big ball of mud. Unit tests are very specific: you're testing a very specific piece of functionality. Here I might be testing what happens when I divide by zero, core functionality. Or, on the infrastructure side, I need to test specifically that a particular port is blocked. You're basically testing what you've changed. And here's where the big ball of mud starts to come in: a lot of times you're coming into the middle of a software project or an infrastructure project, and you're just incrementally adding unit tests. That's a problem with feature sprawl and with test sprawl, which Hari will get into a bit later. But the unit test is the smallest unit of testing, and it's the core of where the big ball of mud comes from.
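To make that concrete, here is a minimal sketch of what the divide-by-zero unit test could look like in JUnit 5. The Calculator class and its divide method are hypothetical, purely for illustration; they are not from any real codebase discussed in the talk.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test: stands in for whatever small piece of code you just changed.
class Calculator {
    double divide(double numerator, double denominator) {
        if (denominator == 0) {
            throw new ArithmeticException("Division by zero");
        }
        return numerator / denominator;
    }
}

class CalculatorTest {

    private final Calculator calculator = new Calculator();

    @Test
    void dividesTwoNumbers() {
        // Core functionality: the happy path for exactly the unit that changed.
        assertEquals(2.0, calculator.divide(10.0, 5.0));
    }

    @Test
    void divideByZeroIsRejected() {
        // The edge case: dividing by zero should fail loudly rather than return garbage.
        assertThrows(ArithmeticException.class, () -> calculator.divide(10.0, 0.0));
    }
}
```

The point is the scope: the test exercises exactly the unit that changed and nothing else.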
Second would be the integration test. After you've checked that your calculator handles divide-by-zero gracefully, it's time to deploy it, or to see how it behaves in the broader picture of your application or platform. Usually this is a bit more infrastructure-related: you're testing compensating controls, you're testing a shared component. In this example, I tested that my calculator can divide by zero, but my calculator also sits behind a login. Does the login service provide proper authentication and authorization to my calculator? (I'm not sure why you would build that, it's just an example.) The integration test looks at the broader piece of the puzzle; it takes a bit more of a purview. You go from the unit test, where you focus on exactly what you were building, to how it plays into the broader application or platform. I'll show a small sketch of what a test like that could look like in a moment.

And there are all sorts of tests out there. These are oversimplified examples of unit and integration tests, but there are many ways to test. For example, if you want to build confidence that your particular piece of software or infrastructure is robust, you might run a load test or a performance test: given a specific amount of load, part of your deployment process might run that load for some amount of time to see what happens to your infrastructure and application. Or a soak test, where you run at load for a very long, extended period to see what goes on. We could keep talking about different types of testing, fuzz testing, security testing, and so on, up to even more modern approaches such as chaos engineering or chaos tests, where you purposely pull infrastructure away or purposely create network black holes to see how your application handles those scenarios. The amount of testing is always expanding. I asked Hari a funny question, "what comes after chaos engineering? You have your ear to the street," and he gave me a good answer: the problems we're trying to test for have always been there; we just have different ways of going about measuring them.

So, going back to how we test the calculator and our division feature, and getting a little ahead of ourselves here, there are three types of bias implied in your testing. When you're looking at how valid a test is, the first question is: who wrote the test? A lot of times in engineering there are business controls around whether the author can also be the approver. Should the author be the one testing it? Only up to a point; if you're doing pair programming, usually someone more senior, an SDET or test engineer, ensures the test is even valid. Second, how do you validate the test itself? And third, when do you execute the test? That last one is something we've seen here at Harness: our build times were getting crazy because of this big ball of mud phenomenon we had to deal with.
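Here is that sketch: a hedged illustration of what an integration test for the calculator-behind-a-login example might look like. The base URL, endpoints, credentials, and token handling are all assumptions made up for the example, not a real API.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class CalculatorIntegrationTest {

    // Hypothetical base URL of an already-deployed test environment.
    private static final String BASE_URL = "http://calculator.test.example.com";

    private final HttpClient http = HttpClient.newHttpClient();

    @Test
    void divisionWorksBehindLogin() throws Exception {
        // Step 1: authenticate against the shared login service (hypothetical endpoint and payload).
        HttpRequest login = HttpRequest.newBuilder(URI.create(BASE_URL + "/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"user\":\"qa\",\"password\":\"secret\"}"))
                .build();
        String token = http.send(login, HttpResponse.BodyHandlers.ofString()).body();

        // Step 2: call the calculator through the shared login layer, carrying the token.
        HttpRequest divide = HttpRequest.newBuilder(URI.create(BASE_URL + "/calc/divide?a=10&b=5"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        HttpResponse<String> response = http.send(divide, HttpResponse.BodyHandlers.ofString());

        // The assertion now spans two services: the login service and the calculator.
        assertEquals(200, response.statusCode());
        assertEquals("2", response.body().trim());
    }
}
```

Compared with the unit test, the assertion now depends on two services and the environment they're deployed into, which is exactly why these tests are slower and tend to run later in the pipeline.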
I've seen this build-time problem in multiple places without being able to put my finger on it and say this is actually becoming a real problem: our build times have been growing exponentially as we add more features, as people come in and build incremental features. And this really leads us to the big ball of mud. So what exactly is the big ball of mud? Here's a distributed-trace diagram, not of our application, just one I found on the internet. When you're dealing with modern microservices or a modern distributed system, you get what I like to call the fog of development: no one person has the entire purview. When you're building something, especially as a software engineer, you focus on your module, your set of features, your part of the application, or in this example, your services. I've never been on a project from inception; usually it's already started. I must have been on 12 or 13 dev teams in my career, and I've come close to starting at inception, but I've never started at day zero or day one. So usually you're focused on building a certain amount of infrastructure and a certain amount of features, and you write test cases to cover that. For modern teams this has become more of a problem: if we go back to add a specific feature, we need X amount of code coverage, X amount of test coverage, and we rely on the quality engineering teams to tell us, hey, make sure you're covering this and this. But what ends up happening, especially now that we build multiple times a day (earlier in my career we built maybe once a day and deployed maybe once a month), is that we run all the test cases. As a human, I know what I wrote; I added a little bit incrementally to the platform and to the test suite. But as a safety mechanism, we execute the entire test suite: oh, Ravi changed something, run everything. The example I like to use, or one a friend on the team likes, is replacing the gas cap on your car. You'd test that the gas cap is secure, but you wouldn't run every other test: test the airbags, test the brakes. You wouldn't give the car a barrage of tests. Well, we actually do that in software. We consistently run regression tests all the time, saying, "I don't know exactly what I changed, so let's make sure," and this has been increasing our build times. This is what we like to call the big ball of mud. It's similar to an anti-pattern that keeps rolling and rolling and rolling, or some might argue it's more of a strangler pattern that you keep adding and adding to.

So I'm going to hand it over to my buddy Hari here to talk more about the principle and profession of quality engineering, and what you can do to both measure and avoid the big ball of mud. Hari, I'll stop sharing so you can share your screen. Thanks for setting it up. Give me a sec, and here we go. Thanks once again, Ravi. So Ravi has set the foundation in terms of the basics of testing and the big ball of mud.
That's the perennial problem that impacts the whole of software engineering. I'd like to start with quality engineering, and before getting into quality engineering, I want to step back and cover the basics: what is testing, what is quality assurance, and how are they related to quality engineering, or what is the difference? By definition, in the testing phase, what we do is check or validate whether a particular application or product meets the requirements. We have a set of objectives, we check at the end of the cycle whether they are met or not, and then say it's a green thumbs-up (or not) so that it can go to production for customer usage. So testing basically just checks whether the requirements are met. In this process we're trying to answer the question: have we built it right?

Quality assurance, by contrast, focuses a lot more on systems and processes throughout the development lifecycle, and tries to ensure we have the right checks and balances to confirm we are building the product in the right way. We have requirements, we do a lot of reviews (requirements reviews, design reviews), we have change control processes, we have the test cases, and we try to ensure we are building the product the right way. Whereas testing, which comes in later, ensures that we have built the product right.

Now, both testing and quality assurance were fit and fine for many, many decades; they served the purpose. This was the era of traditional development models: the waterfall model, iterative models, spiral development models. It used to be fine; although there were limitations in testing and quality assurance working in silos, they never used to be as pronounced. With the advent of agile, where we move to production much more frequently, things started showing their limitations. People started looking at it and saying: quality assurance and testing, the way we are practicing them, are not keeping pace with our business objectives of going to production much more often. It even got to the point where people started saying, because of those limitations, we don't need testing anymore, we don't need quality assurance anymore. That's how the need for quality engineering came into the picture. Where testing and quality assurance worked in silos and focused more on processes and validations, quality engineering focuses a lot more on the mindset. The quintessential need for quality engineering is about engineering quality: there is a huge shift in mindset toward embedding quality processes throughout the product development lifecycle and not working in silos anymore. We work with everybody, from the product managers to the developers to customer success and so on. We work very closely with them and ensure we are engineering quality at every stage, and that we achieve the right quality at the right speed and frequency.
Now, if somebody asked me to define quality engineering, or to come up with a statement for the vision of quality engineering, I would use the first principles of quality and come up with something like this: the vision of quality engineering is to provide the utmost value to customers and be a force multiplier in the organization by establishing a culture of continuous quality.

In the traditional world, continuous quality was not a need, because we used to have monthly, quarterly, sometimes even yearly releases. The quality team had a lot of time to run all the test cases and all the scripts and then certify whether it could go to production. There were times when production releases got delayed, but in the agile model we don't have that luxury. So quality engineering had to reinvent itself, or rather, quality teams had to reinvent themselves into the quality engineering mindset, so that we achieve continuous quality. Continuous quality means achieving quality at both speed and efficiency. In today's world, speed without quality, or quality without speed, has no relevance. We cannot say, "I'm going to take a month or three months, run all my test cases, and give you a quality release; just don't ask me about time." That wouldn't work. On the other side, we cannot say, "I'll give you a daily or weekly release, but I'm not confident about the quality." Neither of those works. That's the essence of quality engineering: ensuring we achieve continuous quality at speed and with efficiency.

The other day Ravi and I were talking about problem statements, and he asked me: what is the biggest challenge, and opportunity, in quality engineering today? The biggest one, both challenge and opportunity, is continuous quality. Let's assume we have enough bandwidth to automate everything, to empower everyone in the organization for equitable quality ownership, and to continuously monitor and improve the efficiency and effectiveness of our quality engineering. Let's assume all of that is taken care of (and that by itself is a big problem we can talk about on a different day). We still have a perennial problem that has been plaguing the quality industry for a long time. I call them the quality blind spots, and there are two major ones that still exist and have never been answered thoroughly. The first: do we know whether the tests we have are enough to achieve quality? We may have automated, say, tens of thousands of test cases, manual, automated, performance, and so on, but can anybody confidently say we have enough test cases to achieve quality? The answer is no. There are different ways of optimizing it, but there has never been a clear solution that answers this thoroughly and convincingly. The second: even though we have a battery of tests, do we have to execute all of them to ensure good quality? And this holds across all the layers of testing, and for every test case and test step we have.
For every change, for every release, do we have to execute all the test cases, and is that the only way to achieve quality? That's been one of the biggest problems we have, and it brings us to the topic of boiling the ocean. If we have to run all the test cases and all the test scripts, a combination of manual and automated, for every change and every release, then we are effectively boiling the ocean. Take an example: you're walking by a river or the sea, some water source, and you need a cup of hot water. What would you do? Would you boil the sea, or would you take just the water you need, put it in a container, and boil that? If that's what you'd do with water, why do we do it differently in the testing world? Traditionally, you have your sanity and regression suites, and for every release we execute all of them. Is that the intelligent way? Probably not, and yet most of us do it. And that's at the release level. What about every change, every change within a particular microservice, within a particular module, within a particular Java file? If you make a change, do we have to run all the test cases just because we have embedded "run everything" into our PR merges before the change goes to the next step? Why are we boiling the ocean? If we blindly follow that model, we execute all the unit tests, all the component tests, the whole sanity and regression suite, whether we have a monorepo, a monolithic architecture, or a microservices architecture. Why are we doing things the way we have always done them? I think there is a better way to execute test cases efficiently without compromising on quality or adding risk.

The suggestion is to slice the test cases across each layer (unit, component, integration, system testing) and, depending on the change, execute only the relevant test cases. For example, say we have 15 engineers in an organization, each making two to three PR merges every day, and we have, say, 5,000 unit test cases, a number that scales as we add more engineers, modules, and features. If we have to execute all 5,000 or 50,000 test cases for every PR merge, and hypothetically each merge takes around 40 to 50 minutes, that's a mammoth waste of time; we would invariably spend roughly 12 man-days of effort in a day just executing the unit test cases for each PR merge. So what we are suggesting, and in fact this has been one of the critical issues plaguing and slowing us down as well, and what we are implementing, is the ability to map the change to the test cases. We figure out the call graph based on the change we have, we already have test cases mapped to each of the changes at each layer, and based on the change, the system intelligently picks the specific test cases and executes only those, then moves to the next step of the PR merge, build, deploy, and test.
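As a hedged sketch of the selection idea being described (not Harness's actual implementation, just a toy illustration with made-up test and method names), you can think of it as a reverse lookup from the methods a change touches to the tests whose call graphs cover them:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative only: a toy selector that maps changed code to the tests that exercise it.
public class TestSelector {

    // testId -> the set of methods (or files) that the test's call graph touches.
    // In a real system this map would come from instrumenting a prior full run.
    private final Map<String, Set<String>> callGraphByTest = new HashMap<>();

    public void record(String testId, Set<String> methodsTouched) {
        callGraphByTest.put(testId, methodsTouched);
    }

    // Given the methods changed in a PR, return only the tests whose call graphs overlap them.
    public Set<String> select(Set<String> changedMethods) {
        Set<String> selected = new HashSet<>();
        for (Map.Entry<String, Set<String>> entry : callGraphByTest.entrySet()) {
            for (String method : entry.getValue()) {
                if (changedMethods.contains(method)) {
                    selected.add(entry.getKey());
                    break;
                }
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        TestSelector selector = new TestSelector();
        selector.record("CalculatorTest.divide", Set.of("Calculator.divide"));
        selector.record("CalculatorTest.add", Set.of("Calculator.add"));
        selector.record("LoginTest.auth", Set.of("LoginService.authenticate"));

        // A PR that only touches Calculator.divide should trigger one test, not all 5,000.
        System.out.println(selector.select(Set.of("Calculator.divide")));
        // -> [CalculatorTest.divide]
    }
}
```

The hard part in practice is keeping that mapping accurate as the code evolves, which is where the instrumentation discussed later in the Q&A comes in.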
That is an intelligent way of testing, because we are not boiling the ocean; we are taking the required amount of water in a container, boiling it, and consuming it. And this can happen across all the layers. At Harness we started with unit testing and then functional testing, but you can do this at different layers; you need different solutions and approaches at each one. That is how we can ensure we are testing intelligently, embarking on an intelligent testing model rather than boiling the ocean.

Coming back to the quality engineering aspects: if I had to give you a playbook for how to be an effective quality engineering team, or how to build a quality engineering mindset in the organization, it would be this. Capture the key quality metrics, coverage metrics, and velocity metrics, and be aware of the Hawthorne effect, wherein what we measure is the behavior we get from the organization; so focus on the most critical, most important metrics. Follow the layering approach, both in automation and in feature sign-off. We do this at Harness: for every feature or story we are developing, there is automation at each layer, we measure and review the automation at each layer, and together we provide a sign-off. Unless the unit test, the UI test, the API automation, and the end-to-end automation are done for a story or feature, we don't provide the sign-off. We build that in right from each feature and each story, we build that layering approach all the way through, and then we maintain it. Automate whatever is possible. Don't just focus on the conventional functional tests, non-functional tests, API automation, and UI automation; automate the integrations, the build triggers, the execution of the automation, the notifications, reports, dashboards, and whatnot. Automate as much as possible, because that builds a lot more velocity.

To make sure our coverage is good enough and we are continuously improving it, focus a lot on customer-found defects, what we call CFDs, whatever leaks into production. Do a detailed root cause analysis, come up with action items, and measure improvement on those action items, because that can go a long way in improving coverage. When we started with a set of customers, the customers evolved and the customer use cases evolved (I'm talking mostly about a B2B kind of model), so we learn from each of those opportunities and build up our coverage as well as the risk aspects of our test cases, and make good progress on that front. Monitor the non-obvious metrics, beyond your defect leakage and your test case and automation coverage; focus on things that are not obvious in general. Do you monitor the production logs? Do you monitor environment usage? Do you monitor how customer onboarding is going? Do you monitor all of that? The primary objective of a quality engineering team is to make our customers successful; we need to delight them and make them successful, and only when we look at each of these aspects can we improve the maturity of the organization.
Then there's shift-left and shift-right, the quintessential approach of quality engineering: work very closely with the product management team and the developers, help them do intelligent, better testing right from the initial stage, and also shift to the right, onto the customers and the customer's domain, and understand them better. One example I can give: we took the top 10 or 15 customer scenarios, analyzed them, and used them to build our sanity suite and make our test cases much more robust. That helped us a lot. Understanding the customers, their current requirements, their future requirements, and how they scale is very, very important. The last part relates to the big ball of mud and intelligent testing: select your tests and run them intelligently, because that helps us achieve efficiency and make an effective impact in terms of quality engineering. So that would be my typical playbook. Any questions?

Yeah, cool. Thanks, Hari, that was pretty informative. Let me put up the QR code there for everybody and share my screen. If there are any questions, feel free to ask them; we'd like to keep it interactive if possible. If not, it'll be me asking Hari questions again, which is how I learn. First off, if you want to learn more, feel free to give this a scan or go to this URL. Let's see if we have any questions in the Q&A area. In the meantime I can ask Hari questions myself; he's one of my old nemeses from my career. Actually, a conversation you and I had recently would be pretty interesting for the audience. I asked, "what comes after chaos engineering?" and you gave me such an eloquent answer that I wish I'd written it down. In the testing world, chaos engineering and chaos testing are probably among the more bleeding-edge, newer ways of testing, although I was able to bring down production services years ago without any chaos monkeys, so services going offline is not a new concept for me. But what would you say is the next edge, the next generation of testing?

One thing, as I said, is continuous quality, and ensuring that quality is achieved at all levels. As we move from infrastructure hosted in our own labs to the cloud, and more pieces come together, there is more surface area exposed, and that's where security and chaos testing come in. With the cadence we're at, many of us go to production every day, or multiple times a day, and there are a lot of moving pieces. Things have moved well beyond functionality; functionality is probably the simplest thing to verify. If you look at performance, security, the chaos aspects, and infrastructure stability, that's big. But the industry still has not solved the two fundamental problems I shared. We don't know whether we have enough tests to cover all our requirements, and there are ways people have used model-based testing to come up with more test cases.
But again, we end up using fuzzy logic to come up with test cases, and that is not enough; if it were, this would be the simplest of problems to solve. The second aspect is: okay, what do we run? We are still scratching the surface. We have started implementing this for unit tests and functional tests, but there's a long way to go. I know many organizations that have huge teams, 100 or 200 quality engineers (I wouldn't even call it quality engineering), who run tests for days, weeks, and months. That's the scale at which people work. So: one, look at the foundational problem; two, look at the environment you're moving into, which keeps evolving; and three, can we apply machine learning to speed up the process and make things more effective? That's where things are headed.

Great answer; Hari's always very eloquent with his answers. We have a couple of questions coming in through the Q&A box, and I've also put up a backup slide of Hari's work here. It's a nice way of categorizing things: the who, what, when, where, and why of what you're testing also has importance. And that actually plays into the first question: when do we run a complete test suite versus only testing the impact of a code change? On that spectrum, when do you run everything, and when don't you? I'll give it to the master himself.

So, we definitely need a good number of end-to-end tests that take the integrations into consideration. But I would say, if you have much more frequent release cycles, the need to test everything end-to-end every time doesn't arise. Hypothetically, if you have a microservices model, or even a different model, or you're moving toward microservices, and you're doing frequent releases, then as long as your architecture doesn't fall into the big ball of mud pattern Ravi was talking about, you don't have to run everything every time. You'll still have, say, 10 to 20 percent end-to-end, depending on your product architecture and how far along you are in the microservices model, but the remaining 80 percent or so can be optimized.

Beautiful answer. I'll take the next question: speaking of enough test coverage, is static analysis, like SonarQube, enough? Let me take a stab at that. Like anything, there are dozens of ways to build confidence throughout the cycle; this one is more software-engineering specific. I consistently get C's and D's in SonarQube (SonarQube gives you a letter score), and I'd say C's and D's ship software, because a static analysis tool is just that: static analysis. Understanding what the tool or the test can actually check for is really important. Static analysis won't take your infrastructure into consideration; it analyzes your code without executing it. So it might notice that Ravi likes to put double parentheses instead of single parentheses; it's looking for stylistic things, and for anything that can be statically checked or inferred.
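As a purely illustrative example (hypothetical class, table, and query, not anything from a real codebase), this is the kind of pattern a static analyzer typically flags without ever running the code, along with the shape of the fix it usually suggests:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // A static analyzer would typically flag this: user input concatenated straight
    // into a SQL string with no sanitization, a classic injection risk.
    public ResultSet findUserUnsafe(Connection conn, String userInput) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + userInput + "'");
    }

    // The fix it usually suggests: a parameterized query, so the input is bound rather than spliced in.
    public ResultSet findUserSafe(Connection conn, String userInput) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, userInput);
        return stmt.executeQuery();
    }
}
```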
The common one is exactly that kind of thing: some basic security findings, like flagging that you're not sanitizing your inputs because there's no filter or binding attached to the call coming in. So it can tell you certain things, but it's coming from the development side. Static analysis is meant to run really quickly; it's meant to give you feedback in seconds, not the half hour or multiple hours a soak test or a load test can take, depending on how you have it set up. So I would say no, it's not enough. Should you have it? Yes, of course; using a static analysis tool is par for the course these days. I don't know if you have another opinion on that, Hari. I would say static analysis is important, but it would never be enough, because it talks about the limitations of the code, how somebody has written it, the code complexities. It can give us good insight, but it won't cover the functional requirements side. So it is needed, but it wouldn't be enough. Good.

Another question here: what languages are the hardest to test? Machine code, binary. The beauty of Java, and of 3GL or 4GL languages generally, is that the authors of the language give you a lot of call-graphing and ways of inferring things. But I'll let you take a stab at this, Hari; this should be fun. Well, I would only be able to answer that properly if I had worked in all possible languages. But I would say that, irrespective of the language, what matters more is the architecture and how you layer things: loose coupling versus tight coupling, how independent things are, how you layer the code (because you do need to layer the code, it can't all just be lumped together), and how you layer the automation. That's more important. Since I haven't worked in all possible languages it would be difficult for me to say, but more than the language, it has to be the architecture and the way you layer it. In the same Java language, you could mix all the code together, or you could layer it into a different structure, build a dependency graph, and run it in a much better way. So I'd say the architecture and the patterns matter more than the language. I agree, great answer there.

Let's see, two of these questions looked similar, though they may actually be a little different. The first is more of an academic question: if you're only testing what you wrote, how do you know that changing, say, two modules will not adversely affect something else? This goes back to: you made a change somewhere and something else blows up, so how do you validate what happens? That's the crux of the big ball of mud. So, as long as you're not tightly integrating everything, that impact will not be there. And I can vouch for this, because I also come from the traditional development models, where we used to take three months for development and one month to test everything before pushing it to production. The way you layer and integrate things can make a lot of difference.
You may still have to run the integration and end-to-end tests, but for the unit tests and the functional tests, you can clearly say that a change doesn't have an impact on the other side. Most of the approaches we are implementing right now optimize the unit tests and functional tests. If you have a microservices model, and you ensure that the contracts are fully integrated and tested, you can be safe in this aspect. Cool.

Let's see, another question here. What would be the criteria for selecting a test automation tool or framework? So if you were looking to bring on, say, Selenium or JUnit or something like that, how do you know it will be viable? Let me take a generic stab at this first and then get your answer. It's like picking any sort of technology; take testing out of the picture. If you're looking to build functionality, why do you pick a certain provider, whether that's Apache Kafka, Cassandra, or some Spring framework? Testing might be a little different, because you're getting more out of the framework than you would by building it yourself, but a generic answer would be vitality. If you're picking something open source, is the industry moving toward it? Even for a test framework like Mocha in the JavaScript and Node world: are people leveraging it, are people contributing to it, is the project continuing to evolve? That can be a big determinant. There can be other concerns too, because at the end of the day, especially for unit or functional tests, there are lots of ways to do it, and a lot of times the testing is language-specific. Looking at other frameworks, if you're doing UI tests there's robot-style tooling out there that isn't specific to a language; it's more specific to how the software functions at the end. But I'm not sure if you have anything else, Hari: as a leader, when you're looking to bring in a test package, do you drive any particular criteria?

Yeah, that's a fair answer. You basically have to consider who the audience is and what their skill set is, and choose a language or a tool that is well supported, highly extensible, has a good community, and meets your current and future needs. Choosing the right tool for the job is very important. And of course we all go through these kinds of questions: do we choose a Python-based framework, a Java-based framework, or a JavaScript-based framework? It depends on who the audience is, who's going to write it, their comfort, and the long-term objectives.
Most of us here have been using Java- and Python-based frameworks, and of late we have been working on a Java-based framework. Again, it all depends on your team and on sustainability, ensuring others can still use it when somebody on the team has moved on. So: extensible, simple to learn and use, good community support; that more or less meets the need. Awesome, thanks so much.

I guess the million-dollar question: what approaches would you suggest to run tests in an intelligent fashion? What we do here is try to map source code changes to test suites. I'll take a first stab at that and you can correct me if I'm wrong. Pretend I'm an engineer still doing feature development here at Harness. We're fortunate that we largely play in JVM land (going back to which languages are harder to test), so we're Java, Go, Scala, all of that. We're able to generate a call graph: this is the call diagram we generate when we build something, so let's make sure we have tests here, here, and here to cover it. And part of this problem, which was such a big one for us at Harness, is that we're actually building a product that will map the call graph for you and visualize it, though, Hari, I'm not sure how much detail we can share with the engineers just yet.

There are two ways to solve this problem: one is source code instrumentation and the other is bytecode instrumentation. The more we do with bytecode instrumentation, the more scalable it is. For example, if you have Java and Scala, which both run on the JVM, doing it at the bytecode level is much easier. Once you have the bytecode instrumentation and you can derive the call graph, you can map what changed to what needs to be run. Say you have a call graph of ten nodes, and you make a change at the eighth node: we can safely assume we only need to execute the eighth, ninth, and tenth nodes. That's how we can build it. And Ravi, I think this is a bigger problem for everybody in the industry, and I think we need to start supporting and exposing the solution we are building. Yeah, I think it's almost due time. You might get a preview if you scan this QR code; that's part of my circus showmanship, but it is coming, and I'm quite excited about it. Just to highlight: we're able to save anywhere between 40 and 60 percent of time and effort, because it's not just about execution time, it's also about the infrastructure we run on. And that's at the initial level; the more we optimize, the better we can get.

That makes perfect sense. Next question: a big issue in many projects I've worked on is unstable hardware, and it's hard to validate the infrastructure. What would be a good way of testing the infrastructure before deploying? I have my opinion, but I want you to go first, because mine might be more aggressive. Sure, sure.
Yeah, I think the simplest solution is to ensure that we provision the infrastructure for both Dev and QA the same way we provision it for production. As long as we maintain that, and we're able to run it that way, there will still be smaller issues, but it takes care of probably 90 percent of them, because the infrastructure provisioning is done through code and embedded within our own development and deployment cycle. We provision it, deploy our code, and then continue with our test triggers and result analysis.

Very diplomatic answer there. Now for the more aggressive one. This used to happen a lot: if I go back a decade ago, different sets of people would be provisioning, say, a VM, and it was statically provisioned. Very modern deployment today is infrastructure-aware: there's been a huge rise of infrastructure as code, with all the benefits that come with it, and our favorite word, containerization; ephemeral infrastructure, more buzzwords. You can provision the infrastructure at the time you're making a deployment. With those advancements, infrastructure is truly treated like a piece of software: you're able to specify exactly what you want, when you want it. Going back to the static provisioning days, I'd have to let somebody know a few days in advance, "hey, can you bring up robby.vm2.com? I have additional pieces of the application and infrastructure to deploy." There could be a lot of friction, because there isn't a consistent descriptor language between the teams, and it's also hard to prove exactly where the problem was: was it the application or the infrastructure? Why is it running slower now? Oh, because in our Dev and QA instances we had a four-core box, and in production they gave us a two-core box (I'm making this up), and there are some thread blocks happening that we can't resolve quickly. So it comes down to environment parity, which is the very valid point Hari made: have some sort of parity between your environments. Also, short of moving completely to automated infrastructure with at- or near-deploy-time provisioning, you might just want to run some tests every day against the infrastructure that's statically provisioned.

That ties into the other question right above it: is testing in production continuous testing? It's more of a mindset. Always test, always be testing, like "always be closing." I'll let Hari take the second part of the answer, but it's very intrinsic: when I was deploying to production I used to get a lot of angst, because your deployments are never over. When are you off the hook for changes? Your next deployment starts as soon as the last one has gone in. So make sure you have proper monitoring in place and proper measurement in place.
And if there's any way you can pre-execute the test suites, do that. But I'll give Hari the chance for more of an academic answer. So I'm assuming the question is: when there's a change, you push it into production and then test it there. If your production infrastructure supports it, say you have a blue-green deployment model, you're able to deploy into production and test it right there, and that can be part of continuous testing. But if the system is set up such that you don't do testing during development, or when it moves from development to staging and then to production, and you only test in production, that is not continuous testing. If you're testing from the time the code is checked in, through the whole process, and also in production, that would be continuous testing. Testing only in production, and not throughout the process, would not. Gotcha.

I think we're coming right up on time, so we have time for one more question. The last question: when looking into optimization, what three areas should you keep in mind? Kind of a wide paintbrush there, but I'll let my buddy Hari take it home; give us three things. Yeah, I would say, in terms of quality effectiveness: first, continuously optimize the infrastructure. We've seen this time and again; I come from a different environment setup, and at Harness we have continuous monitoring of the cloud cost where we've hosted all the environments, QA, production, whatnot, and there is a very, very high opportunity to optimize cost. The second optimization would definitely be on the testing itself: what are we testing, is it enough, are we doing more than we need? We should be able to look at that. And third, the whole process of delivering the software needs to be looked at for optimization. Those would probably be the top three areas. There can be more, but right off the bat, that's where we can get immediate value.

Cool, I think that's it; we're right at the 50-minute mark. So, Christina, I think we're all set. Sorry if we couldn't get to your question; feel free to reach out to us, and here's how to get in contact with us. Well, great. Thank you so much to Ravi and Hari for their time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you're able to join us for future webinars. Have a wonderful day. Thank you.