So a few of us were in the workshop with Linda over the last two days about Fearless Change. One of the big learnings was: sitting is the new smoking. The more you sit, the more harm it is doing to your body. And she equated its effect to being as bad as smoking, even secondhand smoking. So let's practice that. Let's not make sitting our smoking exercise. And given that it is after lunch also, let's all stand up for a few minutes. Two minutes? We'll do some exercises. We have to make sure we wake up, and in the process I'll also wake up before the start of the session. Okay? Have you had a good lunch? Very good lunch or good lunch? A very good lunch means you just increase the exercise time that much more. Back there? Too late. Too late. So what we want to do is just a few energizers of sorts. The first thing: raise your hands to the side. Down. Turn around, except the last row. Except the last row. And let's just introduce ourselves to the person we have in front of us. We just want to break the ice. And... okay, that's enough. So, yeah, please take a seat. Thank you for bearing with me. I'm just trying to clear my voice and get started. And at the same time, hopefully that has given you a few more minutes of wake-up time. Good afternoon, everyone. No, that was a little slow. Anyway, this is not a class; this is not another lecture. Thank you for coming to this session. I know there are other interesting choices also. I am going to talk about... the title at least says: To Deploy or Not to Deploy? Decide using Test Trend Analyzer. A very quick introduction about myself. I'm Anand Bagmar. I'm currently playing the role of a Test Practice Lead at ThoughtWorks. Titles aside, I'm a hands-on tester. I do everything related to testing, and I've been doing so for more than 15 years.
Very quick contact information, if you want to connect with me later. If you want to have follow-up discussions, you can get my ID and more contact information from the details on this page. And that's all I'm really going to say about myself. What do you really expect from this session? Why are you here? Because if there is a mismatch, as was said in the morning, it would be great to use the law of two feet. So why are you here? What do you expect from this session? It will not work in your case. Just a heads-up: don't be surprised if it is not going to work, and I'll tell you the reason why. Understand your perspective? Yes, we'll be coming back to that in some ways. Understand your thought process: whether you have done enough analysis and decided whether to go ahead, check in and deploy, or to stop because there is a problem. We'll definitely be talking about that; that is really the crux of the next 40-45 minutes that we'll be interacting with each other. Any other thoughts? Any other expectations? You mentioned the test trend analyzer: how do you go about selecting that tool, and why that tool? That answer is easy: I created it. But that's not the real answer; we'll be talking more on that, on why Test Trend Analyzer. Okay? So now, why did I say it's not going to work for you? That's the risk I see, especially when I go to conferences, and when I've seen others go to conferences: we come for immediate solutions in certain ways. And more often than not, we take a solution out of context, and what was right in one situation will not be the right fit in ours. So the reason I'm saying it is not going to work is for you to really look at all the different parameters, to really investigate and see if it will help you. If yes, then this is the best tool you can ever have. Okay? But it's not about the tool. It's more about the thought process.
So with that: what are the criteria for determining if your code is ready to get into the next phase of the deployment cycle? Sorry? It compiles. It compiles: the first, most important check, yes? Passes all the tests. Passes all the tests, okay? Defect-free. Sorry? Defect-free. Defect-free and passes all the tests. We are going to pick up on this topic after about ten minutes, so hold on to that thought for sure, okay? But yes? Making sure that dependencies are not cyclic and they don't cause issues. Making sure dependencies are not cyclic and they don't cause issues, okay? So there are many different criteria, right? And depending on the nature of the application you're working on, the domain, there are many more parameters that need to be added and considered. Is it an easy process? Can we just take a half-minute look at whatever the product is and say good to go or not? No? Okay. So, before we get to the next stage, let's get a pulse of who you guys in the room are. How many of us are not in the software industry? Okay, everyone is from the software industry. How many of you are working with an agile-based methodology? One more second. Okay, let me change the question: how many of you are not working with an agile-based methodology? Okay, there are a few. This is still going to be applicable for you also. Though I'll talk a lot in the agile context, it is still going to be applicable for other methodologies also, okay? How many of us are managers of any different sorts, at any different levels? How many of us are managers in this room? One, okay? How many development managers? Development managers mostly? Any testing managers? There's none. Okay? PMs? Project managers? Any business managers? Any developers? Okay? Any testers? We've got a really good mix. For each role there is a different perspective, so as we go through this, switch into your own perspective, and keep in mind the product that you are on now. Okay?
And there are various reasons for it. What is the main objective of any organization? Make money? Okay. I would also equate it to providing value. And seriously, don't hold back just because you've seen the slides in some way. Okay? But many like that one. One of the main objectives is to make money, or provide some value, give some service to your customer base. The other objective: I cannot make money or provide value if I'm delayed, or I take too long to do that. Right? So it has to be in a timely fashion. I have to get my service or product out to market in a timely fashion. And are these the only things? Quality. Most important, right? But actually, none of these is the most important, because none of these can exist independently for an organization. If any of these is missing, the objectives cannot be met. So quickly, I'm going to rush through a couple of things about reality in organizations. We are in a global, globalized software industry of sorts, and for various reasons. There's globalization. There are cost factors. We want access to the best talent and availability; there's talent in all parts of the world. Team sizes are too huge to have only under one roof. And there are also mergers and acquisitions, which result in teams really getting spread across the world. Okay? Given this ecosystem, given the values of the organization, each team, each product team or each program of work, needs to have some set of processes and guidelines, and use and adopt some patterns, which are going to get them on the path of achieving their objectives. Right? So what is one of the practices that makes teams successful on this path? Collaboration? Communication? There are many other practices, right? What are the practices that we use in our day-to-day activities? Processes? You mean more in terms of the process? You are right. And so far, you are all right. But there are probably at least a hundred more practices.
And I can keep going until I get the answer I'm looking for, or you keep giving me more answers and I keep telling you they're the wrong answer. Learning from mistakes. Learning from mistakes? That's not really a practice though, is it? Yeah, maybe it is. Okay? Continuous improvement. Okay? That's one of the practices. Retrospectives. Discipline, in what sense? So, what I'm looking for is more on the technical side. And, okay, that's fine, you guys are not going to get the answer. I'm looking for: test automation. Okay? This is one of the practices. Not the only practice. It is one of the practices that we need to employ to get on the path of having good quality and getting the product out in good time, to make money and provide value. What is one of the practices that makes teams unsuccessful? Sorry? Time consumption. Time consumption in what way? Sorry? To deliver the product. To deliver the product, okay? Bad quality, okay? Not bad, but that's not a practice; that's a measure of whatever practice you are going to follow. Lack of quality control. Lack of quality control, okay? Wrong estimation. Wrong estimation. How many of us have been part of only successful projects and never had any problems? So why are we not able to come up with more practices which have not worked for us? Requirements not analyzed properly. Requirements not analyzed properly; now we are getting somewhere. Don't we have practices in place for that? The team is not set up in various ways. The team is not set up correctly, especially not set up to hit the ground running, in various ways, okay? You guys are not going to get this answer either. It is, again, test automation. Okay? Why? Because if test automation is not done right, it is going to be more of a crippling factor for the team than an enabler to get to the next stage. So it is very important to do test automation correctly. What is test automation? Anyone? So, it's the safety net.
Why do we say it's the safety net? Something that is working right now, I have written a test around it, and I want to know if that breaks at any point. It's a safety net, right? The net: how many of us know about fishing? Almost everyone, right? Fishermen use nets to catch fish. Would a fisherman use a fishing net with a very big grid size to catch small fish? At the same time, if a fisherman wants to catch big fish, does it help him to have a very small grid size? He ends up catching a lot of trivial things which he's not interested in. Automation is exactly like that. If you don't have effective automation, you might end up with so many tests which are not really providing any value. At the same time, if you have a lot of gaps in your automation, things will still slip through, and you will see issues and defects later on in your product life cycle, which are going to be very costly to fix. How many of us have heard about the test automation pyramid? A few of us. So, very quickly, I'll explain what the test automation pyramid is. It is a pyramid, with a circle on top. What the pyramid represents is the various different types of tests which are applicable for your product under test. It could start right from unit tests, integration tests, view tests, web service tests, UI tests, and various other types that are or are not listed over here. The circle on the top represents the very focused manual or exploratory testing that is required, that is essential, on the product. Now, why? Because not everything should be automated, again. You don't want to automate things which are going to take a lot of time or cost to automate. Why is this pyramid important? Because as you start going from the bottom of the pyramid to the top, the cost of implementing each test increases, the cost of getting feedback from each test increases, and of course the time that it takes. Why?
Because at the unit test level, on a dev machine, I can run the tests against the code by just compiling and running them. I don't have to deploy into any environment or any container. But as I move up that pyramid, I need to get more and more pieces of my product together in order to enable those types of tests. So what this means is we need to make our tests very effective: the tests at the lowest level of the pyramid should touch the most granular pieces of the code. The inverted pyramid alongside represents the product, and the impact of each test on the product. A unit test touches a granular piece of code; as you move up, a UI test touches the breadth of the product: all integration points are set up, for example, all systems are set up, databases are hooked up, external layers and databases are configured correctly. It takes a long time to get all this done. So the least number of tests should be the ones that have an impact on the widest breadth of the product, not the granular pieces. What this also means is that the view tests and below are more technology-facing tests. Take a checkbox: that's a granular test; I don't need a full-fledged UI test framework for that. At the same time, the top-level tests, web services, UI, and manual exploratory tests, are business-facing tests: what value am I really providing? What feature am I really validating for the product? So that is what the test pyramid is. But if you look at reality, and this is where I want you to introspect on the products that you are working on, I see there are at least two anti-patterns that most organizations are involved with, that most teams end up with. One is called the ice cream cone anti-pattern, where what we have is the least number of unit tests and the maximum number of UI tests. And what happens when you have so many UI tests? Do they run reliably every time? Do they give consistent results every time? How much time does it take to run and get feedback from these?
As a result, what happens is that in spite of spending all this time on UI automation, we have to put a big effort into manual and exploratory testing. This is a classic anti-pattern which a lot of organizations end up with. It might be a good place to get started, but if you don't have the right focus in the teams, you'll end up over here. The second is the dual test pyramid anti-pattern. Can anyone guess what these two pyramids are? One is the developers' test pyramid; the second is the QA team's test pyramid, and there is a big divide between them. It's a silo, not really a divide. If your test team is working on one side of this wall and the development team is on the other, what happens is there might be a lot of scenarios and test cases that get missed out in automation because of lack of collaboration. There might also be a lot of duplication between these two pyramids, again because of lack of collaboration. As a manager (I know Hitesh is a manager right now): would you rather duplicate a test, or would you rather miss a test? Can't afford to miss a test. It's an obvious answer. That is the biggest risk to any organization, any team: missing critical scenarios. You're not going to be able to test completely per se, but you don't have to miss any critical scenarios. So this is an anti-pattern where we definitely want to tear down the wall. So, in each of your teams, what does your test pyramid look like? Is that the right pyramid to be in? It's something for you to determine. The pyramid is a philosophical thing. It represents equal sides, right? All three sides are equal. But it never says what the ratio should be between UI and unit tests, for example, or UI and web service tests. Only the team can determine the correct split. Eventually it's all about how quickly I can get my feedback, so that all my tests are complementing each other to give a sense of quality. Okay? Given that we know all this: now, we all know how continuous integration works.
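The pyramid-versus-ice-cream-cone distinction above can be sketched in a few lines of Python. This is a minimal illustration, not anything TTA actually does; the layer ordering and the counts are hypothetical, and, as the talk says, only the team can decide what the right ratios are.

```python
def pyramid_shape(counts):
    """Classify a suite's per-layer test counts, ordered bottom (unit) to top (UI).

    Returns "pyramid" when counts shrink toward the top, "ice-cream cone"
    when they grow toward the top, and "irregular" otherwise.
    """
    shrinking = all(a >= b for a, b in zip(counts, counts[1:]))
    growing = all(a <= b for a, b in zip(counts, counts[1:]))
    if shrinking:
        return "pyramid"
    if growing:
        return "ice-cream cone"
    return "irregular"

# Hypothetical layer counts: [unit, web service, UI]
print(pyramid_shape([1200, 300, 40]))  # the healthy shape
print(pyramid_shape([40, 300, 1200]))  # the anti-pattern
```

The check says nothing about whether a given ratio is right for a given product; it only flags which of the two shapes the numbers currently resemble.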
Good assumption, bad assumption: we need to get some coffee for everyone. Everyone seems to be dozing off. Does everyone know continuous integration? Has anyone not worked with continuous integration before? Everyone has; everyone at least gets the basic idea. Okay? Simple thing: any developer does a check-in, and you want to run a suite of tests automatically on every check-in, as often as possible, to ensure nothing is broken. And there will be various different types of tests that you want to run, in a particular sequence for that matter, to get the earliest feedback. Now, when you run these tests, what do teams do with them? You go and look at the dashboard. You might have various different sets of tests in there, but from the dashboard you will actually try to make sense of what the state of quality of the product is. I had 10,000 unit tests, out of which 1% failed: is that good or bad? Okay? After the unit tests I have got my web service tests, and they all passed fine. My functional tests all passed fine. In most cases that is sufficient: as long as my build is clean. It works well for smaller teams, and again, "small" is a relative term in that sense, but if you have a certain number of jobs in Jenkins, for example, you will still be able to aggregate the results out of them and make meaningful dashboards. How many of us really work in small teams? What is the size of the teams? Is that one product team? That is not small. The team is 90 people? More than that? Okay, that is a situation you can relate to. The point I am trying to make is: for small teams, these dashboards are sufficient to give a sense of quality, to give a sense of the trend of what is happening in your product, and to see if everything is going fine. There are still certain limitations that we will see.
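The kind of dashboard roll-up described here, aggregating pass/fail counts from a handful of CI jobs into one pass percentage per test type, can be sketched as below. The job names, field names, and numbers are all hypothetical; a real dashboard would pull these figures from the CI server rather than a hard-coded list.

```python
from collections import defaultdict

def pass_rate_by_type(job_results):
    """Roll up per-job pass/fail counts into a pass percentage per test type,
    the kind of figure a small team's dashboard would show at a glance."""
    totals = defaultdict(lambda: [0, 0])  # type -> [passed, total]
    for job in job_results:
        bucket = totals[job["type"]]
        bucket[0] += job["passed"]
        bucket[1] += job["passed"] + job["failed"]
    return {t: round(100.0 * p / n, 1) for t, (p, n) in totals.items()}

# Hypothetical results exported from a few Jenkins jobs.
jobs = [
    {"job": "app-unit",       "type": "unit",       "passed": 9900, "failed": 100},
    {"job": "app-webservice", "type": "webservice", "passed": 480,  "failed": 20},
    {"job": "app-ui",         "type": "ui",         "passed": 45,   "failed": 5},
]
print(pass_rate_by_type(jobs))  # {'unit': 99.0, 'webservice': 96.0, 'ui': 90.0}
```

Note that this only works because like is aggregated with like: as the talk points out later, averaging unit tests together with functional tests would produce a number that means nothing.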
However, what happens in really complex teams? Think about the globalized picture that we saw earlier: typically teams of 300, 400 people spread across the globe in various different time zones, working on different functionalities, and this is not even talking about the external dependencies the organization works with. It is such a large team. Siddish has been part of one such team: there were about 50 people from Europe, and at the same time his team had about 50 jobs in the Go server they had there. Now, whatever form of aggregation you do for these 50 jobs, you are not going to get a proper representation of what the quality really means; there is a lot of effort that needs to be put in. Okay? And if the teams are really large in that sense, we have to start creating functional teams of sorts, break it down into various different levels. And that is where what really tends to happen is: you have a product group, and products one and two share one CI server; products three and four, let's have another server set up for you, and you can set up your jobs and manage your continuous integration there. But all these products are really interrelated; they are not independent. They all need to go live at some similar, synchronized time. So someone, a CEO or a CTO, wants the status: are we good for the release of this core banking platform in three years' time? How are we going to answer that? We've got different information in different servers, and that information might be in a different shape each time. So what happens?
Different team members, managers, or whoever, start doing this work manually and create this kind of consolidated report for someone, to share and say: yes, we are on track. And if we put caffeine on continuous integration, we get to continuous delivery, where potentially every second, if the build has passed and everything has passed, I should be ready to go live, ready to take it to the next level. With continuous delivery, imagine the effort it is going to take to collate and get these reports together in a continuous fashion for someone who very legitimately needs them. There's a good reason for avoiding doing this manually. But that is one aspect; how accurate the data is, that is the other aspect. So how do you really know your product is ready to go live, or ready to go to the next level? It's not really an easy thing, unless you are in a small team where you can manage it much more effectively. Also, this is the part where I was asking earlier: does a green build really mean there are no defects? How do you really try and manage that? These were the problems which I had come up against on one of the projects I was working on about three years ago, and I had this idea: I need a better way, a more data-driven approach, to enable making better decisions. Realistically, I cannot, and don't even want to, spend time building a one-size-fits-all product or solution; it's not possible. It's like the pyramid: there is no right answer unless you put in a lot of context. So how can we get data in a uniform fashion, an enterprise-wide solution, to enable the relevant stakeholders to take better decisions, to enable teams to take better decisions: am I ready to go to the next level? And that is how TTA, the Test Trend Analyzer, came about. Though it says trend analyzer, there is definitely much more to this than just trend analysis, and you will see that in the form of a demo. So there are a couple of aspects to this demo that I am going to talk about. One is the trend analysis capability of
TTA; the second is failure analysis, which is also equally important, and I'll tell you why in the demo itself. Then dashboards: we don't want to replicate information which is available in various other systems that our enterprise is using, just because we want one unified tool. Each tool does things in its own best way possible, and we already have them in the enterprise; we don't want to duplicate anything, we don't want to replace anything. At the same time, we want to provide a consistent, holistic view of the product and its quality in one place. So we will talk about dashboards, some admin, and how to upload data to the dashboards. So with that, we will switch to a demo. I have TTA running on my machine right now. There is a reason why I was not going to zoom in: Suryash was one of the developers who helped build the UI for this, and then he ran off to another project. So I have left it in the same state, and I was waiting for this moment to tell him: when you left me, I lost that. So it's not really about the content over here; I think the concept, what we are trying to solve with this, is more important. That is the reason I am now going to zoom in, but please be aware that this project was really built as a pet project of sorts, a side project of sorts, built by QAs, some right out of college, with a little help from others. So again, it's not about the implementation; it's more about the concept. The first thing we are going to look at here is trend analysis. That, by all means, is very important, especially in enterprise products, where the product development life cycle itself might run for years, if not more. There are also different versions over a period of time, and the product suite is around for decades, for a good reason; core banking is a classic example in that way. So in that case I don't just want to know the current state of the product; I want to know how it compares with the earlier state. I want to know if I am making progress or not. It is very important to
understand. Behind each square box are the different types of tests that have been automated for sub-project 1.1: functional, integration, and unit tests, in that sequence. What it says is there are 350 functional tests, 98 integration tests, and 99 unit tests, and the time it takes to run each of these. So at a single click, using the latest data available with TTA for the sub-project, I know what my current pyramid looks like. Now this becomes very valuable for me, to say, from a practice perspective: this definitely is not the right shape. I know my product better; I need to be doing more at the unit test level, I need to invest more on the unit test side, and I need to reduce my functional tests. Not just reduce the numbers, but have fewer, more effective tests. So this can become a conversation point for the team members: this is where we are; where do we want to be in the next three months? And you can sort of use this as a reference to start planning. So this data becomes very powerful. With additional functionality implemented on top of this, I could say: show me two pyramid views for the same project, one from three months ago and one from today; how do they compare? I don't want to create two different views for this; I want a side-by-side comparison. You can do various things once the data is available. Okay, that's one. A very quick second example: in this case it's not a pyramid at all, as I just have functional and integration tests, but it might be right for my product. This is not telling you what is right or wrong; it's just telling you what your product is doing at this point in time. So that is the pyramid view. Next, let's look at comparative analysis. I know there is a lot of effort in my organization being put into automation of various different types. What value is it really bringing to the team? How is the quality of the product trending across all these different types of tests? I don't want to go to the CI server and say: this is the aggregation of my unit tests, this is the aggregation of my functional tests, this
is the aggregation of my sanity tests, and figure out what all those job aggregation groups are. You cannot aggregate unit tests and functional tests together; that's not going to provide any value, in Jenkins or any CI server. So how do you really figure out what is happening? In the comparative analysis view, I select a time range, a particular project and sub-project, and then I select which types of tests I am really interested in. In this case I am going to have all the test types for which I have data selected; I just want to plot a graph of this. What this graph tells me is, for that time range, for each of these test types (and there is a legend with colors mapping which type of test it is), each circle represents a test run and the status of that run in that time period. So now I can look at it and say: most of my tests are at this level, but there are times when it suddenly dips. Why is that happening? Without this holistic view, in a large enterprise, where would you even start focusing to find out? I don't want to get lost in that big ecosystem of thousands and thousands of tests and so many different teams; I need a starting point. This becomes very powerful from that perspective: it is showing a trend, and it can show a trend for as much data as we really have. Let's look at another example. I have project one, which has got many different sub-projects as part of it, but I am interested in comparing only the unit tests across all these different sub-projects, and I want to see what is happening. So here is the unit test coverage and the pass percentage; the left side is the pass percentage. I want to know how the unit testing is doing across all these projects. Instead of seeing so many types of tests, you filter it down, narrow down the scope. Likewise you can do it for functional tests, sanity tests, regression tests. So again, at the click of a button, giving the right
parameters, you can look at data over a period of time, across the enterprise: different products, projects, sub-projects, whatever the classification might be, and make some sense out of it, and figure out: is there something that needs help? Is there something which is going very well, so that team can help other teams do better? It becomes a data-based approach instead of a gut-feel-based approach, and anyone can see this. The next thing which I have found to be very, very valuable is test execution times. How many of us do performance testing on our projects? Automated performance testing? Okay, not automated. Okay, we do performance testing, a couple of us. When do we typically do performance testing? Sorry? When there is a problem. Before a release? Sorry? Some before the next release. Any other patterns? Before we say the story is done: we have it as a criterion; the story cannot be called done until the performance criteria are met. Okay, an ideal situation, an almost ideal situation: performance benchmarks or requirements as part of the story. That is the ideal state; however, it is very difficult to get it down to that level in most cases. So I hope you continue going down that path, for sure. Okay. What I found of value is: we run tests anyway, in dev environments, QA environments, whatever the environment might be. Whether it is a functional test or any other type of test, each test has an execution time. Now, say I have written a login test from a UI perspective; whether it is a successful-login test or an unsuccessful-login test where I am validating the error message, that is a valid test. I can capture this number at test execution time. It will include a lot of other things also, right? Opening the browser and waiting for the browser to load, for the login page to load, before anything else happens. So it does have a lot of noise around the actual metric. But if you start capturing this metric on a consistent basis, you can figure out that my login test was taking 10 seconds after opening the browser and loading the login page.
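Capturing execution time per run and flagging the runs outside the "general band" can be sketched as below. This is one simple way to do it (a mean and standard-deviation band); the talk does not specify how TTA draws its band, and the durations here are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(durations, k=2.0):
    """Return (run_index, duration) pairs lying more than k standard
    deviations from the mean run time: candidate runs to investigate first."""
    m, s = mean(durations), stdev(durations)
    return [(i, d) for i, d in enumerate(durations) if abs(d - m) > k * s]

# Hypothetical login-test durations in seconds, captured run after run.
times = [10, 11, 10, 12, 11, 37, 10, 11]
print(flag_anomalies(times))  # [(5, 37)]
```

The point is only to narrow down which execution cycles deserve a look; whether a spike was a network delay or a real product change still takes human investigation.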
Can we capture this as a trend, to see whether my login test takes more than 10 seconds at any point in time? Does anyone see value in that? It is what I call a functional performance test; it is really not a true application performance test. It is not a separate environment or anything, and there are a lot of factors where things might go wrong. But just using that on the project I was working on, where there was no special performance testing planned at all but there had to be some form of performance testing done: since I am capturing this metric as part of the automation anyway, I could just pull in the CSV and look at it. And this becomes even easier, because the tests are running anyway. So what I do is, again, I select what exactly I am looking for and the time period; it tells me which tests have executed in this time period, and I select any one test and I say plot. What this does for me is, for that time period, it tells me that this test kept going for about 15 seconds, 20 seconds, 3 seconds, and at one point it went all the way to 37 seconds; there is a spike. Now, we know that because it is not a dedicated performance environment, there is a gap, and especially for the spikes at the top you can sort of assign less priority: there was a network delay, there was something. In certain cases you know for sure: the runs that ended very quickly, especially this one, something really went wrong; maybe the page itself did not load up, so you can ignore that. But then, looking at this graph, I can say: this seems to be the general band, around 22 seconds; anything out of this range is an anomaly. And now I have data points to focus on: which test execution cycle should I look at to identify that something really changed? For example, this one at 28 seconds is out of the range. Why did it suddenly go there? It then went down again; but what if it had not come down? What change happened in the product at that point in time which
potentially caused an increase in test execution time? There might be other reasons, but now, again, I have a direction on where to focus the root cause analysis; I am not shooting in the dark. Okay, so this is what the trend analysis does. Any questions or thoughts so far? We will get to that point now; the question was whether it can compare between browsers. It has the ability to do much more than what we just saw, but we will get to that part in a little while. Okay, next question: it seems there is a fair amount of overlap, so I am curious as to why, given some of the gaps that you have found in Sonar, you would choose to spin up a different tool. Yeah, so, and correct me if I am wrong, I have not worked directly on Sonar myself, but it captures metrics directly from the code itself, right? It has a plug-in ecosystem, your CI can feed it, Sonar can aggregate the projects, your tests are reported, your unit tests and your integration tests. There will be a fair amount of overlap in certain cases. The advantage of this one is that I could be running tests on my dev machine itself, and it would immediately capture that; that is what I wanted. From a functional testing perspective, I do not really know Sonar well enough to comment much, but I believe there are certain features that Sonar does not have which, when you get to the failure analysis, become really valuable for a team. So maybe we should talk after this session, so I can understand more about Sonar. Another question: does this mean that on certain dates... let us say over a project life cycle we are checking in test cases day after day, and you are running the tests and this information gets collected; let us say on a particular date it just shot up. So probably that is the point to look at, to analyze why it shot up on that day. Exactly, and this is exactly what it is doing: it is just giving you data points in a visual form, so that you can investigate at the correct place, where you are trying
to find out which builds to look at, which commits to look at, for example.

So let us look at failure analysis; I think a lot of value can come out of failure analysis. But first, compare analysis, which is what you were asking about: comparing between browsers. The current functionality does not support that. All we support right now is: I select a project, a sub-project, and which type of test I want, say integration tests; I automatically get the set of dates for which I have test runs, and I can compare. Okay, actually I have jumped a little bit ahead. Close your eyes and just listen to what I am saying for ten seconds. Say you get a report and you see 95% of tests passing consistently: last time it was 95%, the day before yesterday it was 95%, today it is 95%. Does that mean the same 5% of tests are failing every time? How would you know? You could compare them by hand, but what if you have thousands of tests? So you see the problem I am trying to solve.

That is not even a comparison yet; here is what compare runs does. I select two builds of similar types, two different runs, and say: compare these and tell me what is happening. In this case I see the number of unique tests failing on both days, the failures common across both runs; I want to identify those, the unique tests which have failed in both these test runs. So I might have A and B failing yesterday and B and C failing today; taken together that is A, B, C, all the tests that have failed. And then the combinations: tests which failed on both days, tests which passed consistently, tests which failed yesterday but not today, and fourth, tests which failed today but not yesterday. With this I know exactly where I need to focus. Which bucket is going to be most valuable for me? A test which was passing yesterday but is failing today: that is a regression, and that is where I need to focus.
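The four buckets listed above are plain set arithmetic over the failed-test names of the two runs. A minimal sketch of the idea, with function and key names that are my own, not the tool's API:

```python
# Sketch: the four buckets "compare runs" derives from two test runs.
# Inputs: the full set of tests, plus the failed sets from each run.

def compare_runs(all_tests, failed_yesterday, failed_today):
    return {
        "failed_both_days": failed_yesterday & failed_today,
        "passed_consistently": all_tests - (failed_yesterday | failed_today),
        "fixed": failed_yesterday - failed_today,        # failed yesterday, not today
        "regressions": failed_today - failed_yesterday,  # the most valuable bucket
    }

# The A/B/C example from the talk: A and B failed yesterday, B and C today.
buckets = compare_runs({"A", "B", "C", "D"}, {"A", "B"}, {"B", "C"})
print(buckets["failed_both_days"])  # {'B'}
print(buckets["regressions"])       # {'C'}  (passing yesterday, failing today)
```

With thousands of tests this stays cheap, since each bucket is a single set operation, which is exactly why the tool can do it across whole runs instead of you eyeballing two reports.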
With the kind of metadata that we capture, given the appropriate tests, we can extend the functionality to say: compare this type of test running against IE versus Firefox, or compare runs against RC1 versus RC2. The metadata is there; you just need to write the appropriate query and display it in a slightly more visual format so people can start focusing on what is happening. So it can be extended to support different browsers, different OSes, whatever combinations, or build numbers, as long as you are capturing that information; with extensions in the code to support that, you should be able to query it in TTA.

Okay, another important thing: 5% of tests failing. If I have got 10,000 unit tests and 5% of them are failing, are we sure they are all failing for different reasons? They could be, or could not be; 4% might be failing for the same reason. For that matter, something changed in a mock I am using, and that is why all the related tests are failing. So in failure analysis, again going through the same pattern, I select a specific run and do the failure analysis on it. Mind you, this makes a big assumption: whatever error logging you are doing in your automation, in whichever form of automation, has to be consistent in the messages it creates. What TTA does is, for a specific execution cycle, look at it, analyze it, and group the tests which are failing for the same reason. Based on that I can say: I have got a lot of failures in this particular test run, but 33% of my tests are failing with the error message for class 1.1, and these are the tests failing because of it; 3% of tests are failing with the error message for class 21.3, and these are those tests. Which tests will I try and fix first? Again, you are trying to identify the low-hanging fruit that will get you towards a green build as quickly as possible. That is what failure analysis helps you with: classification, or rather grouping, based on error messages, to help you focus on the right thing.
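The grouping just described can be sketched as a simple aggregation by error message, with the same caveat as above: it only works if your error logging is consistent. Names here are illustrative, not the tool's internals:

```python
# Sketch: group failed tests by their error message, as failure analysis
# does, so the biggest bucket surfaces the low-hanging fruit. Assumes
# consistent error messages across the suite, as the talk notes.
from collections import defaultdict

def group_failures(failures):
    """failures: iterable of (test_name, error_message) pairs."""
    groups = defaultdict(list)
    for test, message in failures:
        groups[message].append(test)
    # Largest group first: fixing its shared cause fixes the most tests.
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

failures = [
    ("test_login",  "ConnectionError: mock server unreachable"),
    ("test_logout", "ConnectionError: mock server unreachable"),
    ("test_search", "ConnectionError: mock server unreachable"),
    ("test_upload", "AssertionError: status 500 != 200"),
]
for message, tests in group_failures(failures):
    print(len(tests), message)
# 3 ConnectionError: mock server unreachable
# 1 AssertionError: status 500 != 200
```

Here one fix to the shared mock would turn three of the four failures green, which is the "low-hanging fruit" the talk is pointing at.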
Okay, one thing I want to show you over here before we run out of time: we spoke about different dashboards. Sonar has got excellent reports; if you are using Jira for requirements tracking, you can get excellent reports out of that for defects and whatever else. You do not want to duplicate any of that just because you want your tool to be popular, and it is never going to be possible to get all types of information into one system. So the provision I have made here is that, instead of having to do a lot of context switching, I can just add the link to that particular dashboard and give it a meaningful name. In this case I am going to name it TTA and say http://localhost:3000, and when I go to the home page I have got external dashboards configured, just pulling in reports from whatever might already be set up elsewhere. I am not duplicating data, and at the same time I do not have to context-switch too many times; one bookmark can get me everything. So that is it for the demo side of things.

Very quickly, because we are almost out of time, I am going to jump to how this really works and the value it brings to teams. Typically we run tests from CI on the machines configured with CI, and when the command comes in to a machine, you do a clean, compile, set up, run the tests, and send the results back to CI. The only difference now is that after you run the tests, you make a slight change to send the results to TTA first and then return to CI. CI still has the information it had earlier; the only difference is that you send the results over there as well. Why is this very powerful? My CI could be anything: Go, Jenkins, TeamCity, anything. My build tool can be anything: Gradle, Ant, whatever it might be, MSBuild, anything. My tests can be implemented in any language; as long as they produce xUnit- or TestNG-based reports, the programming language does not matter. Now all
that you need to do is look at the build tool and write that small snippet to send the results to TTA at the relevant time. That is the only difference; everything else is unaffected. So this becomes very powerful.

I will be uploading these slides to my blog today, and the video will also be available after this session; we can talk more then about how you can really use TTA and how it works. The most important thing is, once you set it up, there is a manual upload page where you provide all the details required for uploading test results, and you can see whether the upload is successful. What that means is you can check that your result types, the xUnit test reports that TTA needs to consume, have parity, a match, with what it expects. As long as the reports can be parsed, TTA is going to work for you. If they cannot be parsed, then TTA needs to be extended to support the report types your test framework generates, and then they can be parsed.

At the same time, there are things that are hard-coded, for example the types of test: functional, integration, unit. You might be using different terminology in your organization or your teams, and we will need to figure out ways to customize that, either importing it directly or making it extensible in TTA. Remember, this is a very nascent product; it started as a side project to prove a point, and there are a bunch of features I already have in mind, but it does not have a full, rich feature set right now. Still, try it out; it might work for you. How can you help? Try it out, give feedback, suggest features. More importantly, this is available on GitHub as an open source project. ThoughtWorks has allowed me, given me time and help, to build this, and we have made it open source. Send me pull requests and we can work collaboratively, in whichever fashion, to add more features and make this better and more useful. With that I would like to say thank you, and not go even further beyond the time I was supposed to take. Any questions?
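The parsing check described for the manual upload page, verifying that your xUnit-style reports match what the tool can consume, boils down to reading the standard JUnit XML attributes. A minimal sketch using only Python's standard library, not the tool's actual parser; real reports may also wrap suites in a `<testsuites>` element:

```python
# Sketch: extract summary counts from a JUnit/xUnit-style XML report,
# the kind of parsing the manual upload page validates. Illustrative;
# assumes a single <testsuite> root with the conventional attributes.
import xml.etree.ElementTree as ET

def summarize_report(xml_text):
    suite = ET.fromstring(xml_text)
    total = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return {"total": total, "failed": failed, "passed": total - failed}

report = """<testsuite name="integration" tests="3" failures="1" errors="0">
  <testcase name="test_login" time="12.4"/>
  <testcase name="test_search" time="3.1"/>
  <testcase name="test_upload" time="8.9">
    <failure message="AssertionError: status 500 != 200"/>
  </testcase>
</testsuite>"""

print(summarize_report(report))  # {'total': 3, 'failed': 1, 'passed': 2}
```

If your framework's report parses cleanly like this, the upload should work; if the structure differs, that is the point where the tool would need extending, as the talk says.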