Good afternoon, everyone. Can we start off? I'm glad to see so many people here. I was a little worried, last day of the conference, second-last talk of the conference; I didn't know if I would have an audience, but thank you all for coming. I'm Anand Bagmar. I work with ThoughtWorks as a Principal Consultant, and I've been doing testing for about 15 to 17 years now, across different parts of testing. But this is not about what I have done or how I got here. It's more to share an idea or thought which got conceptualized into a usable product of sorts for many. And I wanted to share that concept with you, share that tool with you, and see what you think about it. And if it works for you, by all means, take it forward from there. This is a technical topic in that sense. And to help me understand how much detail of the technology I can go into, versus the concept, can I get a show of hands: who are hands-on testers or developers in the room? Anyone? Awesome. That's great. The others, I'm assuming, are leads or managers of sorts. Yeah? Great. So I think it's a very good mix. I'll still try to mix it up so that it doesn't get lost in just one way or the other for anyone. And I'm hoping Vodafone doesn't keep crashing on me, because the demo is going to depend on that. The Wi-Fi was not working, for sure. We'll get to it as it comes up. So I'm not worried about the Vodafone connection anymore; I have some sort of a fallback plan. Let's hope we don't need to go there. So what is the typical objective of any organization? Anyone? Awesome. Have you seen the slide? Had to check. Okay. So the main thing is to make money or provide some form of value. Now, it's essentially about the value. And of course, it cannot be value at no cost, of sorts. But to make money, you have to deliver on time, get to market on time, and make it usable by your audience.
And of course, none of that is going to help if the product or service that you're offering is not of good quality. Right? That is one part. That's the objective. What is the reality of organizations? Right? They are spread across the world, for various reasons: globalization, the cost factor, being available 24x7, the team size being just too large to have under one roof of sorts, mergers and acquisitions for sure, talent. Right? There could be many other reasons, but organizations are spread all across the world for good reasons. Now, since I'm talking about testing, and I'm a core tester at heart, what is one practice that makes teams successful? And I'm talking about testing, and I'm looking for answers. Awesome. Maybe you have seen this one. But you are absolutely spot on, right? Test automation is one of the practices, right, that is going to make teams successful. At the same time, what is a practice, or one of the practices, that makes teams unsuccessful? Spot on. I think this is the first time I'm getting the right answers on the first try, but that's great, right? And anyone think why that is the case, the same practice being both successful and unsuccessful? Okay, so that's a question, right? Why is it unsuccessful? And we'll address that shortly. That's one part. Return on investment, yes? Okay. Features keep changing. You were saying? Repeatable behavior. Repeatable behavior, fair. Getting obsessed with automation, spot on. That "writing code is in my DNA, I have to automate each and every test, regardless of whether it's adding value or not, whether the feature is changing or not. I have to automate each and everything." Any other reasons? Sorry, one more time. Yeah? Okay. That's a great point. So does that sort of answer the question of why it can become unsuccessful, right?
That if it is not done right, if the thought process is not right, if you don't identify the right candidates for automation in the first place, it's just writing more code. As Mr. Vinayaki mentioned, it ends up that you are testing the tests versus the tests actually validating the quality of the product, right? There are so many other factors: the composition of the team of sorts, the capabilities and skills; there are so many factors, right? So if done right, this is one of the great assets the team can have to ensure the health and quality of the product is good. At the same time, if it is not done right, this is one of the biggest wastes of money an organization can incur, because they're not getting any return on investment, okay? So where is automation placed in the food chain of the software development life cycle, right? Is automation treated as a first-class citizen in your organization? And a couple of questions on how you can potentially answer that, right? What is the value automation is giving to the team, and what value is automation getting from the team, for that matter? Are you able to put in the right practices to build it correctly? At the same time, what is the quality the automation is really resulting in? What is the feedback that you are getting out of it? Has it really been built well enough to configure, to scale, to spread out across the massive configurations and team sizes? Think about it. If any one of you is answering yes, think again, because if it is not done right, a short-term benefit is different from taking it live and maintaining it, having those results on a repeatable basis, right? So very quickly, I'm going to segue into some of the principles and practices I think are very important to build a test automation framework.
And by framework, I mean the glue, or the thin layer, that interacts with the product under test and the different set of things required to make the automation successful, the test implementation successful, okay? So, is this readable? Yeah, okay. So, some of the principles. Code quality. How many of us have heard or said that test code should be of production quality? At least some of us have heard that. I strongly believe that, because there is nothing different that you're building from a test automation perspective. It has to be of great quality, right? To have test code of great quality, you need to follow dev practices to ensure you're using the right design patterns and the correct level of abstraction layers in the framework, and make sure you keep on refactoring as a regular activity instead of an ad hoc, once-in-a-while, if-I-ever-get-to-it activity, right? Pair wherever required, because a test automation framework is also a complex piece of code. Certain bits are easy to implement or write code for; certain bits are extremely difficult or challenging in terms of logic. Pair consciously where you can to achieve the right kind of results and implementation. Make sure the framework is going to be extendable, and most important, evolve your framework. The analogy I always like to give for a test automation framework, right, is, again, very similar to building a product itself. If I have to build a bridge, I'm going to have all forms of designs and analysis and research done about it, the terrain and everything. I'll draw various different types of blueprints. I'll know exactly when to schedule which things, to build it correctly. But do I start off by ordering the light poles and the paint and everything required for the surface, the peripheral things about the bridge? No, I would start from the foundation, right? The same thing is very applicable for any product development, as well as test automation framework development.
You need to know what the vision is of the product that you are testing, or going to be testing against. With that vision in mind, start evolving your framework bit by bit, based on what is required, so that you do not over-engineer your framework at that current point in time. Let's look at certain practices for an automation framework. The first principle we spoke about was code quality, right? Being of production quality. The first thing I always believe myself, and always tell my team members: no copy-paste in test code. It is very easy to go, "Oh, this is a similar test. I'll just copy the code and tweak it around." Don't do that. Remember, your design principles and your practices are about code quality, refactoring, and evolving, right? That's the first thing. Make sure your framework is configurable. How many of us are going to have just one environment to run your tests against? No, right? So make sure your tests can run against multiple environments, for example. Think about the test data. How is it going to be stored? What type of test data is required? Is it sufficient to just keep it directly in the test? Do I need to separate it out into XML, YAML, or a database, for that matter? Think about it, and again, start implementing accordingly. Build or reuse tools and utilities. The more you reuse, the easier it is going to be for you to maintain it and build it forward. Use correct levels of logging. Let the test logs be verbose, because you would never look at the logs if the test is passing. If the test is passing, you just need to look at the result, whether it passed or failed. If it has failed, that is when you need to very quickly find out where the point of failure is and take corrective action accordingly. To help in logging, you can also, or rather should also, be taking screenshots at various points in time, assuming it's a UI-based test, right?
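As an illustration of the configurability point above, environment-specific settings and test data are often pulled out of the tests into a small YAML file that the framework loads at runtime. The file name, keys, and values below are made up for the example, not taken from any particular framework:

```yaml
# Hypothetical config/environments.yml; names and URLs are illustrative only.
# The framework would pick the block matching, say, TEST_ENV=staging at runtime,
# so the same tests can run unchanged against multiple environments.
staging:
  base_url: https://staging.example.com
  admin_user: staging_admin
perf:
  base_url: https://perf.example.com
  admin_user: perf_admin
```

Keeping this out of the test code means adding a new environment is a config change, not a code change.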
Take video recordings wherever required, and if your framework is good, you could potentially say: if the test has passed, I don't need to store any of these artifacts, let's just delete them; even though storage is cheap, I don't need them. If the test has failed, then I want all of that available to whoever requires it, to do a root cause analysis as quickly as possible. Last but not least, CI. Who has not heard about CI, or continuous integration? Hang on. Okay. So these are the things that are most important. Again, by no means are these the only things, but these are the things that stand out to me; if you wake me up in the middle of the night and ask me, these are what I would say are the most important things in terms of building a framework. So let's have some more interaction, right? How many of us in the room here are part of large organizations, let's say beyond 500 people? Almost everyone. Awesome. Okay. What is the average number of products in your organization? A few, or tens? Hundreds. Hundreds? Yeah. Several hundreds. Okay. Very complex types of environments, right? What is the average number of projects across all these? I'm guessing it's going to be x times the number of products, at least, right? Fair assumption? Yeah. What is the typical technology stack across all these products? Is it all Java-based, or all C++, or all .NET? Mixed, right? Awesome. A fun environment, right? How many of us are part of merged or acquired companies, or have been part of one? Again, a very fair number, right? Because it's a reality. We are living in a very different world, and this is not uncommon at all, right? When companies acquire or merge with other companies, do they think about the technology stack? They think about the business proposition and how it builds into that complete lifecycle, based on the space or domain they are in, right? Last? Well, not really, but: distributed. Everyone sitting under the same roof? No?
In very few cases, it could be a really small company compared to these other-sized companies, right? Only in that case could it all be co-located on one floor of sorts, if not multiple floors in the same building, right? How many of us have heard about a common test automation framework? Do you just want to explain quickly what you mean by common test automation? Okay? Everyone clear about what it means? So what we are saying is (he mentioned a model-driven framework; that's one type of framework), what a common test automation framework in organizations essentially means is that there is one single framework, decided upon some particular technology stack and whatever tools around it, and then the different products and projects have to use that for automation across the board. This common framework provides the most basic utilities, tools, logging, reporting, and test data management kinds of mechanisms, but then the implementation of tests for each of the products and projects inside it has to use the same thing, regardless of what the product under test in that portfolio is. It could be Java, Siebel, SAP, .NET, any of these technology stacks; you still have to use the same framework for the product under test, right? Is it good or bad? So this is a different model of sorts, and it's good to know, but that's what I mean by a common test automation framework, really, right? Yeah. Yeah. So that is the difference. So what is the name of the framework that you're using? Okay. So what I understand is there is a common set of processes and guidelines around where tests are going to be stored and how they're going to be stored, and the actual automation might differ, in some cases using Cucumber, in some cases directly using Selenium, Java, whatever, right?
And what I'm calling a common test automation framework is a common set of tools, utilities, and processes that need to be followed by each and every product under test in that organization for automating against it, right? So there's a slight difference of sorts. Okay. But what do you think: if you are forced, in that complex environment, regardless of the product-under-test technology stack, to use that common test automation framework, is that good or bad? Both, right? So why would it be good? Why would it be bad? Cost reduction, that's one. That's potentially the good side. But the resulting behavior is forcing everyone, right? But again, this is context-dependent, right? Each context determines the answer: good, bad, or unsure in some cases, right? But is it easy or difficult? Right? It's difficult in the sense that the technology of the product under test might be so different that it is not as easily supported by the framework. That's why it's difficult. It's easy because it reduces cost, and it's easier to onboard people and share that knowledge base across teams, right? It becomes easy also in certain aspects. So again, no one common answer to those. So here's my question to everyone. If the product technology stacks cannot be unified in an organization, then why are the testers, or the testing, forced to use the same technology stack across that organization? Questions? Yeah? Exactly. Absolutely. Absolutely spot on. Again, it's very context-based, experience-based, what the right thing to do is, right? And with that question, I want to take an example, go through a case study, right, to drive the point even further. So everyone has heard of Microsoft Outlook, right? It's an email client. It started off with a desktop version for Windows. Then there's also a desktop version for Mac. There's a web version. There's an Android version of the app. And whatever various other versions, right? It does not matter.
How will you automate the testing of Outlook? Can anyone use a common test automation framework for it? So what would you typically do, right, from experience and research? You would say, okay, for Windows, if I want to go open source, I might use White, or maybe QTP. Right? It works for Windows. For Outlook on Mac, I might use Automator. For web access, I can use pretty much anything, right? Selenium, Ruby, anything. For Android, maybe a Robotium and Java combination. Again, there is a bunch of tools available for each of these. And great, my problem is solved for each version individually, but there's no automation across these different versions of Outlook. Now, how will you automate an integration test across these different versions? So for example, take a user who has installed Outlook on Windows and, let's say, has the Android native app available for testing, right? And whatever other version. My test definition is: an email drafted in one of the products should be reflected correctly in the other product. That is my integration test, whether the sync is working correctly. How will I automate this test? Because what we have said now is, you've got a test framework for Outlook on Windows and a test framework for Outlook sync on Android, and this is my end-to-end integration test, and I need to be talking across these, right? I modify the draft from here, and I want to verify the updated draft is seen over there. How will I automate this kind of scenario, when I've chosen different tools and technologies for each? So the problem is, you cannot use the same technology across them, right? And this case study is a made-up case study, but a very real one that you can connect with, in that sense, right? It drives the point better. So what we're looking at is something that is going to go across these. Let's leave this case study aside. I had a problem that I experienced a couple of years ago on one of my projects.
I built a Cucumber-Capybara-WebDriver-based framework for one of the flagship products of an organization. We made it work. We proved it successful. And then the organization went ahead and started off their own automation practice. They hired a lot of automation people to do that. And for whatever reason, the new test manager chose a different technology stack for automation. Not because the stack that I had used was incompatible with the other products, but for whatever other reasons they had. But the problem is that these products were talking to each other in a certain way, and now we had ended up with two different frameworks for products that talk to each other. How were we going to do the integration testing around that? Either he moved the new tests into the framework that I had created, or I had to let go of a year's worth of automation effort and migrate my tests into the new framework. It was a lose-lose situation, right? And that's where I came up with a solution. I thought of a solution at that point in time, and I implemented it. I called it TaaS. So what is TaaS? To automate the last mile, the integration test across different products, this solution works beautifully. It is a platform-independent solution. It's an operating-system-independent solution. It is a technology-stack-independent solution. What this allows you to do is use the correct set of tools and technologies to automate the product under test. If that means there's just one tool and technology, by all means, then you don't have a problem, right? It's one framework that you can use. But you are not restricted by the testing tools and technology. You're actually looking at what it is that needs to be tested and automated, and choosing the right set of tools for that. And then, to do the last-mile automation of the integration test, you would use a framework like TaaS to bridge the gap between these frameworks.
I want to put in some disclaimers here before we get into the demo and look at the technical aspects of how this works, right? This is not a product-integration tool; it is a tool for testing the integration between products. And it's important to understand this difference, because it doesn't have any security support of sorts, which might be required between the different products. It is probably not as robust, either. This is just for testing, to enable the integration testing, right? So please keep that under consideration. This is also not a load testing tool. Again, I'm putting this as a disclaimer before explaining how TaaS works, because I've had questions asking, okay, can this be used for load testing? No, this is not for that. This is purely from an integration test automation perspective. So how does this work? Now, this is not about any one tool. Let's use the Outlook example itself, right? On Windows, I'll have to use something that can interact with the Windows desktop. So White or QTP, right? On Mac, I've never used QTP, but I'm assuming QTP doesn't work on Mac, right? How will I automate on Mac? I'm supposed to use something else. It's not because of a limitation of the tool, right? I'm using something else for a very valid reason. Now, I need to test the integration between these two products. I've configured the same account, anandbagmar@hotmail.com, in Outlook on both these products. How will I ensure the sync is happening correctly, that updates are going through correctly? That is the problem we are trying to solve. We're not trying to solve a tool problem. In fact, I'm saying use the correct tool to test that product. Don't get biased or forced into using something else, which might not be optimal. Only then can you automate the right test with that tool. Exactly. That is the integration test, right? So we have Outlook on Windows; let's say it's tested with QTP, right?
And there are hundreds or thousands of test cases automated over here, just for Outlook on Windows. For Outlook sync on Android, for some reason, I'm using Robotium, and I have a complete set of tests applicable for this version of Outlook, automated over here. It's only for the very specific scenarios, like sync, that I need to look at the integration test. That is the scope we are talking about. So now, if we have these, and this end-to-end test framework is nothing but an orchestrator, what the TaaS architecture says is that you have a thin layer, a TaaS server, on top of each of these test frameworks, which essentially become service providers, and you add a thin layer, a TaaS client, on top of your end-to-end orchestration, and now the client can talk to the servers and get the work done for you. So the TaaS integration test will say, okay, create and save an email as draft. The TaaS client will talk to the QTP framework and say, okay, run the test to create and save an email as draft for me; give me the response back. The response could be that email ID or whatever, right? A unique ID to identify that draft. The client takes that information and cross-checks with Outlook sync on Android: hey, do you have an email in the draft folder corresponding to this ID? Give me the response. Okay, modify that draft. Take the response. The client is again going to tell this server, okay, verify this updated draft is available to you. It's pure orchestration: sending certain requests (I don't care how you do it, give me the response back), taking the response, going to some other system or product, passing that information along, getting a new set of information back, and orchestrating that integration test out of it. The product, the Outlook product, not the server. Outlook on Windows? That is what it is. Yes, the Outlook client on Windows, right? Do I see an email in my draft folder in Outlook on Windows?
Do I see that corresponding email in the draft folder in Outlook on Android? The server is pushing it, you're right. We are not testing the server; we're testing the client side. No, this is just a very simplified example. Typically, if you have a B2B kind of thing, right, you've got warehousing and processing and whatever, and these are completely different types of systems. How will you really interact with them to see my shipment status, or, if I cancel the order, what has happened, right? Those kinds of things. Exactly. So this is just a highly simplified version of the problem, exactly. And we'll get right into the demo to look at an even more simplified example of how this works. So can you hold on to that question? We'll get to the demo, and that will explain a lot of how TaaS really works. And that is actually going to answer a lot of the questions that you have asked just now. And then we can recap again whatever is missing. It depends on the team, how CI- and CD-ready you are. It completely depends on that. If you have got these individual products automated to a great level, maybe your team's strategy or decision might be to keep the integration test manual, which is fair. Again, as long as it's a conscious decision made by looking at various parameters and complexities of the environment, maybe that is what you need to do. This is assuming you want to automate the last mile, right? And again, Outlook is just an example here, right? I'm pretty sure they're doing something automated for this. So quickly, in the demo, what we are going to see is: what is the TaaS server? It is a Ruby project, but I'll walk through exactly how it works. The TaaS service provider, in place of the Outlook on Windows we spoke about, right, is a Cucumber-JVM project, which is going to do a simple search on the Google home page, validate the results, and return the search count to me. Right?
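The orchestration flow described above (create a draft through one product's test framework, then verify it through the other's) can be sketched in a few lines. This is purely illustrative: `FakeTaasClient` is a stand-in for the real TaaS client, and its `invoke_contract` method simulates both service providers with an in-memory draft store so the flow can be read without running any real servers. The contract names are made up for the example.

```ruby
require 'securerandom'

# Stand-in for the real TaaS client: instead of making HTTP calls to
# TaaS servers sitting on top of the QTP and Robotium frameworks, it
# simulates both "products" with a shared in-memory draft store.
class FakeTaasClient
  def initialize
    @drafts = {} # simulated sync store shared by both products
  end

  # In real TaaS this would be a REST call to the named service provider.
  def invoke_contract(provider, contract, params = {})
    case contract
    when 'create_and_save_draft'
      id = SecureRandom.uuid
      @drafts[id] = params[:body]
      { 'draft_id' => id }                       # the unique ID we negotiated
    when 'verify_draft_exists'
      { 'found' => @drafts.key?(params[:draft_id]) }
    end
  end
end

client = FakeTaasClient.new

# Step 1: ask the Outlook-on-Windows framework to create and save a draft.
resp = client.invoke_contract('outlook_windows', 'create_and_save_draft',
                              body: 'hello from windows')

# Step 2: ask the Outlook-on-Android framework whether the sync brought it over.
check = client.invoke_contract('outlook_android', 'verify_draft_exists',
                               draft_id: resp['draft_id'])
puts check['found'] # prints "true": the sync assertion of the integration test
```

The point is that the orchestrator only ever sees contract names, input parameters, and responses; which tool executes each step is hidden behind the provider.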
And the client in this case is a command-line invocation. It's a thin client that I have, using the TaaS client itself, but not from any other integration framework right now, to keep it simple. Okay? So, let's see if this is going to work. TaaS is an open source project. It's available as a Ruby gem on RubyGems. At the same time, you can go directly to GitHub and download the source code from there, if you do not want the gem itself. Okay? That is how you would get it. What I have is a TaaS server sample project, which is again a very thin project, and you can start consuming it and writing this thin layer of tests on top of your framework immediately. I'm using RubyMine as my editor right now. What this does is... sorry, this is not clear at all. I wonder if it's clipping the line. There you go. Okay, great. So, how many of us over here are comfortable with Ruby, or understand Ruby? Okay, very few. So, I'll just quickly read through what it really means. There is a simple readme file with the basic steps for how to start using this. The Gemfile specifies the library dependencies that you want to use; in the Ruby world, they're called gems. So this just specifies which gems I want to use in this particular project. The Rakefile is my build file. Corresponding to Rake on the Java side would be Ant, or Maven, or Gradle; those would be the equivalent build files. But over here, I just have one command to start the TaaS server. Right? That is all I have over here. The crux of TaaS is the contract. A contract, by definition, is an agreement between two different parties. Right? So, let's say Dilip is the test manager, lead, or the person who wrote the tests for Outlook on Windows. Right? He owns, or rather is responsible and accountable for, that repository of sorts. And for the integration test, I know I want to execute certain tests on the product that Dilip has tested already, has written automation for already.
I go to Dilip and say, hey, Dilip, this is my integration test. I want to create and save an email as draft. I'm sure you've got hundreds of tests already, on various combinations, but this is the specific test that I have. What information do you need from me in order to implement, or make available, a separate test of a similar type to what you already have, that I can call? Right? So he's going to say, I've got 10 different variants of this test; maybe it is this one that you want. I'm creating a copy of it for you. But for you to use this test, you need to give me a certain set of information. Right? Which is the email ID, for example, for which you want to create and save this email as draft. Is there a subject or recipient or body that you want included as part of that draft? That is the requirement he's given. I say, okay, fine, that's what you need; I'll give that information to you. But what I want in return from you is a unique identifier, so I can identify this draft email. Because in my draft folder, I could have hundreds of emails. Right? I want to uniquely identify the email you have saved for me. Give that information to me. And on similar lines, this is what happens. Right? So the input parameters, based on our negotiation, are what has come out as required for me to execute this test. In this case, I want to search for something; that is the input parameter. If I don't provide this parameter, I cannot call this contract. Right? And when I invoke this contract, I know that whenever it completes, I'm going to get a result count back in the response from this test. That is the contract agreement. Now, that is one part: what to execute and what to expect in return. And this is where the value of TaaS really starts coming in. So this is a simple YAML file, which is a much simplified version of an XML file, for those who don't know YAML. "Contracts" is the highest level, and then you define each specific contract under it.
For my integration test, in this case, the name of the contract is just "cucumber". Bad name; I would rather have named it "create and save email as draft". Timeout. Now, because it's a contract and I'm executing a test, Dilip is going to tell me, okay, fine, I'll run this test for you; we've agreed on the input and output parameters, but it's going to take at most 30 seconds for the test to complete before you get a response. So we are, again, coming to a common understanding of what a timeout means, right? At most, right? It's an upper limit. So I know that it's going to take at most 30 seconds for this test to complete, or it will be done sooner. The directory is where, again, Dilip (in fact, he's the one who is now ironing out the details over here) says which directory I need to go to before executing a certain command to run the test, right? Because this was on a Mac machine, the directory is in that format, along with the command that I need to execute to run that specific test he has created for me, okay? The input-param format, we'll leave aside for now; that's a more advanced feature of sorts. So what this gives us is the timeout, directory, and command. Now, as an end-to-end integration test, my default timeout might be 10 seconds. But before invoking a test which Dilip is providing for me, I know I need to change my default timeout. Otherwise, my test will time out and fail unnecessarily; it's a false failure, right? That's one part, from the client side, from the orchestrator side: adjust based on what is required. On the other hand, if Dilip does not give me a response in that much time, I know I need to terminate that connection and call it a timeout error, because I don't know how much more time it's going to take, right? That's one part. The second thing, where this becomes really powerful: Outlook on Windows runs on a Windows machine, right?
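Putting the pieces just described together, a contract file could look something like the sketch below. This is a reconstruction from the talk, not the gem's actual file: the exact key names and structure in the real TaaS project may differ, and the paths and commands are invented for illustration.

```yaml
# Hypothetical taas_contracts.yml, reconstructed from the talk.
contracts:
  create_and_save_email_as_draft:
    # Upper limit, in seconds, before the client treats the call as a
    # timeout error; the orchestrator raises its own default to match.
    timeout: 30
    # Directory to cd into before running the command (a Mac path here;
    # on the Windows provider this would be a Windows path instead).
    directory: /Users/dilip/outlook-windows-tests
    # The command that runs the one test Dilip copied out for this contract.
    command: cucumber features/create_and_save_draft.feature
```

Because the directory and command are just strings executed by the local TaaS server, the same contract shape works whether the provider is QTP on Windows, Robotium on Android, or Cucumber on a Mac; that is how the OS and tool get abstracted away.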
So I need a Windows path over here, and the command could be a QTP-based command for how to run a specific test. Whereas Outlook on Android is probably going to need a Linux path, right? So, just by specifying in the contract the directory and how to execute that specific test, we have abstracted the OS, the platform, and the technology out of it. The TaaS server is a simple Ruby server. You just install it and start the server, and it will run any command that you have given, from the directory you have specified. So now you start seeing the difference, right? It's a very thin wrapper. TaaS is implemented using Sinatra. What Sinatra does is take this YAML file, read it, and create a web service on top of it, a simple REST web service. Now, if it's a web service, I don't care where it is implemented or what implements it. I can call it from my browser, from the command line, or from other tests, and orchestrate that, right? The other thing that happens as part of this is in my Cucumber framework, which the TaaS server is now going to call; I'm just implementing a specific test for that contract. In the Before hook, when the test starts running, all the input parameters come to me as environment variables with a specific prefix. I extract those environment variables at that point, and I use the information provided as input parameters to run the test. At the same time, after the test completes, I know what the contract needs as part of the output. I'm going to put that out in JSON format on the console, because when the TaaS server invokes a request, it starts running it as a separate process, and it has handles on the standard output and error. So it gets to know all that information.
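The service-provider side just described, reading prefixed environment variables and writing JSON between markers on the console, can be sketched as below. Note the assumptions: the `TAAS_` prefix and the `===TAAS_RESULT_...===` marker strings are invented for this sketch; the real TaaS gem defines its own prefix and markers.

```ruby
require 'json'

TAAS_PREFIX = 'TAAS_' # illustrative; the real prefix is defined by the gem

# Collect the input parameters that the TaaS server passed in as
# environment variables with the agreed prefix, e.g. TAAS_SEARCH_TERM.
def taas_params
  ENV.select { |k, _| k.start_with?(TAAS_PREFIX) }
     .map { |k, v| [k.sub(TAAS_PREFIX, '').downcase, v] }
     .to_h
end

# After the test completes, print the contract's output as JSON between
# markers so the TaaS server, which holds handles on this process's
# stdout, can parse the result back out of the console log.
def report_result(result_hash)
  puts '===TAAS_RESULT_START==='
  puts result_hash.to_json
  puts '===TAAS_RESULT_END==='
end

# Example use, as you might do in a Cucumber Before/After hook pair:
params = taas_params            # e.g. { "search_term" => "taas" }
result_count = 42               # ...the actual search test would run here...
report_result('result_count' => result_count)
```

The important property is that the test framework never talks HTTP itself; it only reads environment variables and writes to stdout, which any tool on any stack can do.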
Because we've printed the JSON between special markers in the console, the TaaS server is able to parse that information out: it extracts the JSON between those markers, creates a JSON object from that string value, and sends it back to the client that invoked the request. The complete output of the test run is also sent back, because as an orchestrator I might want to know at which step it failed. Since we're talking about test frameworks, that kind of information going back is fine; if it's not, it can easily be hidden.

In the interest of time, I'm just going to show you a couple of other quick things. We spoke about the contract details, so what we need to do is: specify the contract, implement the contract, return the results, and run the server. Running the server is as simple as a single command line. The command is a rake task for the TaaS server, and -v is just verbose. When I start it, the server is running. It's a thin wrapper; you don't need a separate machine or anything. All we are doing is starting the server, passing in the absolute path of the contract file, and it starts up.

What happens as a result? If you look from a browser (this is a very plain UI we created on top of it), you go to that Sinatra service, on that port, and you see the contract name, which is hidden by the board here, unfortunately, and its other parameters. So this potentially becomes your ready reckoner. If someone needs to consume this, maybe for manual testing, say I just want some data created in some system to use for further testing, I can invoke it with a REST client from the browser, giving it this information. It becomes very powerful: I don't have to worry about how Dilip has actually automated it.
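The server-side flow, running the command as a separate process and pulling the JSON out from between the markers, could look roughly like this. Again, the marker strings and the shape of the returned hash are assumptions, not the gem's actual internals:

```ruby
# Sketch of the server side: run the contract's command as a separate process
# in its directory, capture the console output, and extract the JSON result
# from between the markers. Marker strings are illustrative assumptions.
require "json"
require "open3"

START_MARKER = "__TAAS_RESULT_START__"
END_MARKER   = "__TAAS_RESULT_END__"

# Pull the JSON payload out of the full console output, or nil if the
# implementing test never printed one between the markers.
def extract_result(console_output)
  m = console_output.match(
    /#{Regexp.escape(START_MARKER)}(.*?)#{Regexp.escape(END_MARKER)}/m
  )
  m && JSON.parse(m[1])
end

# Run the command from the contract's directory and package everything up:
# the parsed result plus the full stdout/stderr for the orchestrator's logs.
def run_contract(directory, command)
  stdout, stderr, status = Open3.capture3(command, chdir: directory)
  {
    "result" => extract_result(stdout),
    "output" => stdout,
    "errors" => stderr,
    "exit"   => status.exitstatus
  }
end
```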
Maybe he is using a stub and returning dummy data to me, or he is actually going to the database and manipulating something, or he is running a test on top of the product. Anything is possible, but I don't really need to care about it, because my focus is: once he gives me that information, is the same thing seen somewhere else as well?

The TaaS client, as we said, just needs to collect the input parameters based on the contract that has been defined and invoke the contract. Invoking the contract is again just a simple web service call, the same kind we have done programmatically before: you create a hash of parameters and send it across. TaaS is implemented in Ruby, but the client can be in any language, or scripting language for that matter, because it's just about creating a connection to that service and making that specific call. You consume the result and continue with the test orchestration on the client side.

So, in summary: the contract decouples the technology barrier. The timeout makes sure you have predictability in the test execution. Input parameters are passed as environment variables; the TaaS server sends them that way, so your implementing test should be able to read them and execute accordingly. The result comes back as JSON, a very standard format, and the console output, errors, and logs are written as part of that too, along with any exceptions that might have happened. And TaaS itself is implemented in Ruby, as I said.

So why is this a good idea? It helps you automate the last mile of testing. There is no code duplication: I don't need to implement Dilip's test in my framework again, especially when he already has a test like that. The implementation of the contract lies with the framework that is testing the product.
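A minimal client could look like the sketch below, in plain Ruby, though any language with an HTTP library would do. The host, port, URL shape, and JSON payload format are assumptions for illustration:

```ruby
# Sketch of a TaaS client: invoking a contract is just an HTTP call to the
# Sinatra service. Host, port, and payload shape are illustrative assumptions.
require "json"
require "net/http"
require "uri"

# Build the request separately so it is easy to inspect and test.
def build_contract_request(host, port, contract_name, params)
  uri = URI("http://#{host}:#{port}/#{contract_name}")
  request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  request.body = JSON.generate(params) # the negotiated input parameters
  [uri, request]
end

def invoke_contract(host, port, contract_name, params, timeout: 30)
  uri, request = build_contract_request(host, port, contract_name, params)
  http = Net::HTTP.new(uri.host, uri.port)
  http.read_timeout = timeout # honour the contract's timeout, not the client default
  JSON.parse(http.request(request).body) # the result comes back as JSON
end
```

The orchestrating test would call `invoke_contract`, consume the parsed JSON result, and carry on with its own flow.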
So if the way to create and save an email has changed, Dilip's tests would already have changed to cater to that. All he needs to do is update this one test as well, and from a consumer perspective nothing has changed, as long as the contract still remains valid. The test evolves as the product changes, and it decouples the technologies for you.

This also helps a lot with manual testing and with setting up test data. Anyone can use it: manual testers, people doing integration testing. BAs can use it to set up or validate functionality, to set up certain things in different systems and use them. The most important aspect that comes out of this is that each product can be tested in the "best" possible way. I've put "best" in quotes because it's very subjective, very contextual. I'm not saying what is best; it depends on a lot of other parameters. But based on all those considerations, you can use the right technology stack for that testing.

You can use TaaS today. It's open source, with Apache 2 licensing, and it's available on GitHub and on RubyGems. The sample project I mentioned, the TaaS server, is also open source and available on GitHub. You can just download it and update the contract file (the instructions I still need to keep updating), and with very basic steps you can get the TaaS server started, as long as you have the correct contract file specified, pointing to the right test. You can get started with it, use an HTTP client from a browser, and start consuming those tests. Most important: negotiate the contract, then set up and configure the TaaS server, which is very lightweight. You don't need separate servers. If, for example, you have two or three different frameworks on the same machine, you can have a separate TaaS server for each of them, or just one TaaS server for all of those frameworks, with different, unique contract names.
It will route through to the specific services. Does it really work? I have seen it work on a live project. More than 1200 downloads of TaaS say it works; I have not had any complaints. There have been feature requests and support requests, for sure, which is going to happen anyway. But there are teams using it.

Before you use it, though, identify whether TaaS is really the right thing for you, whether it is going to solve a specific problem. Now that the concept is available to you, you can also create something similar very easily, something that works best in your environment, and do that last mile of automation. If you want to help enhance TaaS, there are a couple of tasks I have identified which I need to work on; I have just not had the time. It is open source, and I will be accepting pull requests, or collaborators to work on this. Looking forward to your emails on that.

This really helped me be creative. One thing is talking about a tool or product, something that has been created. But the other thing it helped me do is not just resort to "I'll do this testing manually" or start duplicating code. I could come up with a creative solution, an innovative solution to the problem, not something that has been defined and used many times before. That doesn't mean you have to reinvent the wheel every time; again, look at it in the right context. It also became another open source contribution, which I feel really happy about.

With that, I would like to say thanks. I think we are about five minutes over, but I appreciate your time. Any questions, if we have time? Looking forward to those.