So while the sound is getting set up, I think we should kick off in the interest of your time. Good morning, everyone. Did everyone have a good keynote session this morning? Okay, today I'm going to talk about enabling continuous delivery in enterprises with testing, and what that really means. This is a case study of a project I was working on: how we got onto the path of enabling CD in one enterprise. A very quick introduction about myself. I'm Anand Bagmar. I've been working with ThoughtWorks for five and a half years now, and I've been in the testing field for about 17, 18 years. I love testing; anything related to testing, I would love to get involved with, and I have been getting involved with. Enough about me. There's my Twitter handle over here, as well as some information on how you can find more contact details from my about-me page. Okay, so this is really about what you are here for. Before I start off on my rant about what I think the content is, I would like to know from you: what do you expect from this particular session? How you can continuously deliver, okay? That's a very loaded statement, but we'll try to address it in some ways. So testing is part of the DevOps cycle, and how is that really related? Seeing that. Any other thoughts? Okay, so infrastructure requirements and other performance-related aspects of continuous delivery, and how does testing fit into that? There were some challenges with continuous delivery, and how do you manage the dependencies around that? What is the role of testing in continuous delivery? To make sure you are actually continuously delivering, and not delivering one thing while breaking what was delivered earlier. So in enterprises, maybe some sections of the enterprise are on the path to enabling CD, whereas others might not be. So how do you really fit these different aspects together? 
Requirement changes, how to handle that? We will not be covering that. Enabling continuous delivery in the BI world. You could potentially take examples from this talk into that context, but we won't be covering that either. Sorry. Tools used for continuous delivery. Yes and no, sort of. I think we've got a fair mix of expectations over here, and the point is, we will not be talking about any of these. This is a magic show. We are going to prove that a triangle is equal to a pentagon. Confused enough? We'll see towards the end, if we get to the end, what we really mean by this. So this is a story, a true story for that matter, of a core banking implementation for one of the larger banks in Australia. They were on a path to continuous delivery. How many of us are from the banking segment over here? Excellent. Please don't shoot me if I say anything wrong. A quick show of hands again: how many of us are testers over here? Any testers? Okay. Developers? Excellent. Who am I missing? Of course, BAs, a few; PMs, product owners. Who else am I missing? Stakeholders, business, managers. Okay, excellent. So we've got a complete mix of what a team in any particular domain is structured of. Okay. The banking domain, as those of us in the banking or finance field know, is usually complex. But regardless of the complexity, every organization has the same overall objectives, right? They need to make money by providing some value. To do that, they need to get their product or service offering out to their customer base in proper time. Otherwise, it is not going to provide value to the customers, to the user base. But as some of us heard yesterday, I think Jeff Patton mentioned in one of the talks that just getting out on time is not sufficient. You need to have proper quality. 
The feature set that you are delivering to the end user should actually be valuable and usable by them. And these three things go hand in hand. If any one of these gets missed out, potentially it's not going to have the impact on the segment that you're targeting. Now, another reality is that organizations are spread across the world, and for various good reasons, right? Globalization. The cost factor: it's very expensive in one place versus the other, so you want to spread that out. You want 24/7 availability, to keep the machine rolling continuously at all times. Team sizes are just so large that the team cannot fit under one roof. You have to split across different floors, different offices in the same city, multiple cities, different countries, continents, whatever it might be. Team sizes do require that kind of split. Mergers and acquisitions happen, which means the organization you're merging with might not be in the same place. So again, you are spread across. And of course, talent is available all across the globe; you want to tap into that rich talent pool on the best possible conditions. So there was nothing different from this perspective in what this bank was going through. The main business, of course, was in Australia, but they also had business in New Zealand. And to help them with the implementation services, there was work happening from India and China as well. So there were multiple locations: a lot of locations in Australia and New Zealand, and quite a few locations in other regions as well, all helping deliver the product to their customer base. Let's talk about what the scope of the program really is. That is what truly adds to the complex nature of CD here. Hopefully, you will all be able to relate to this, even though you might not be from the same domain. 
So if you look at this weird-looking diagram, it's not an apple, it's not an amoeba or whatever other thing we might have learned in biology. The gray area is the complete core banking platform for the bank. This has a lot of systems inside it, but importantly, there are some external dependencies, external interfaces, integrations that need to happen with existing systems in the bank which are not directly related to core banking. These might be things like payment processing gateways or various other factors, compliance-related, it could be anything. The blue icons represent external systems that this platform needs to integrate with, external to the scope of the bank. These might be government organizations or regulatory authorities doing independent validations or compliance checks, whatever it might be. So these are external offerings or external systems that this core banking platform is closely integrated with in order to provide whatever the bank needs to do. The intention of this core banking implementation is to replace this gray box, which is a legacy core banking platform, essentially a mainframe system, with an off-the-shelf product from one of the many providers out there. You want to do a complete transformation from the gray to the red. Of course, we are buying an off-the-shelf product, and as we might have seen from our experiences, an off-the-shelf product can rarely be used directly in your own context. What that means is you need to customize it in whichever fashion is required, whether at superficial configuration levels or by actually changing the way the product works in order to fit it to your purpose. So there's a lot of customization that needs to happen in this whole process. 
Along with this customization, because you're changing the core engine of how banking works, you also need to change certain aspects of those external integrations, internal or external to the bank, in order to fit the new platform. So it is a pretty complex thing. That was a very, very simplistic view of the core banking platform. What it really means is this: a very high-level architecture view of the different systems the core banking platform entails. Red is the new product that is going to replace existing systems. Green is the existing bank systems that get affected by this, which need to be modified to work with the new platform. Blue is the external interfaces, the external systems that you need to integrate with. And there are also some grays, which are in scope but don't need any modification at this point in time. So we understand the scope, or do we? What this diagram represents is 130 systems, which means 290 interfaces, for this whole core banking transformation project. It is a huge scale. The very aggressive best-case estimate to do this complete transformation was five years. Realistically speaking, it was probably going to be around seven to ten years, which is again very aggressive. Now, for those who have been on a transformation journey, replacing one system with another or whatever, can you really do a big bang approach with that? You take the system down, flip the switch, start it up again, and you are on the new system. Does it really work like that, especially in the enterprise context? No, right? 
What you have to do is, in a phased fashion, start replacing one system with another. You start off on that journey of changing one piece at a time in some logical sequence, and eventually you'll be able to pull the plug on the old system, and there you go, you have implemented everything. By that time, of course, there will be new systems that need to change, because again, it's taken quite some time. What are the execution challenges in this particular context? There are multiple partners involved in helping the bank get onto this journey. With multiple partners, and we're talking about enterprises, right? This is not just one small services organization that is involved; it is much more than that. Large partners, large vendors. And each partner has their own policies, their own way of working, their own constraints on how quickly they can adapt or change on this path. So the bank is one party, and there are at least three different partners involved. Getting all four aligned together is a big challenge. The stakeholders involved: imagine, we are talking about 130 systems, purely from the core banking implementation perspective. There is a huge level of governance involved in all this, which means you multiply that at least three times, because the partner stakeholders also get involved in that governance model. So there are a huge number of stakeholders, and getting everyone aligned on what is to be done in which fashion is tricky. Not impossible, but tricky. Now, the biggest challenge is agile. The bank wanted this to be implemented in the agile way. Why? Because they wanted to see incremental value come out of this change. It's not a big bang approach: I've given this to you, see you after five years. It is not like that. Yes, this bank has money, but they don't have endless pockets, right? Any organization would be prudent in that sense. So how can you get incremental value? 
One of the good ways is agile. Now, how many of us in the room work in the agile way, or some variant of agile? Well, we are at an agile conference, so I'm guessing almost everyone. Now, if I put you all together and gave you a question sheet with very explicit questions about what agile means, do you think everyone would come out with the same definition? The same way of working, the same principles, the same practices? Principles, yes, but the practices are not going to be the same, right? Everyone has, unfortunately, a different interpretation of what agile is. Likewise in this case. There are various stages of agile adoption and understanding. One is: I don't know agile, but I want to learn it, so I'm a learner of sorts. The other is: I'm on the path, but I'm checklist-driven agile. I have a list of things: if I'm doing stand-ups, if I'm doing my iteration planning, my showcases and everything, I am agile. Is that really being agile? Not necessarily, but you are a step ahead of someone who doesn't know that. And then you get to being agile. You truly understand what is required, you understand the principles, you understand the context, and you pick the practices which are best suited in your context to implement and deliver as quickly as possible. So there are various stages of agile adoption, and the Agile Fluency model, which Martin Fowler has published about on his site, talks in much greater depth about what it really means. But the point over here, coming back to this context, is that the bank wanted to go agile. They had a certain conception of what agile means and how to implement it, but some of their partners were traditional waterfall-based companies, used to executing in that fashion. Some of them were on the agile track, but again, with a different understanding. 
So how do you get them all aligned? Because if you want to deliver this core banking platform transformation, you all have to be aligned as one team, one thought process, one clear understanding of what is to be done. Huge challenge. We spoke about the money. There is money available, but it's not an endless pocket. You have to be prudent, and you have to start seeing value out of what you're doing incrementally, not wait for five years to know if it has worked or not. You cannot just keep spending for five years without that kind of feedback. So that is a problem. And for various reasons, trust is a major, major issue, because again: different policies, different constraints, different ways of working, and the distributed nature, when you are not co-located in all possible ways for all team members. That is a huge problem. How do you build trust and keep that trust going for that sustained period of time? There are some other challenges also involved in this. We mentioned that it's not a big bang approach where you just flip a switch and now you're suddenly on a new system. You want to coexist with the existing system and start replacing it piece by piece. Now, that is a major challenge, because on top of the 130 systems that you already have, you are adding the complexity of many more systems. It's not just saying, okay, I'm going to replace my payment gateway from one to the other. You might change it for certain parts of your core banking implementation, but the other parts still need to use the older system, because everyone has not yet migrated over, right? So coexistence is a huge problem. Likewise for data migration. The data has to be there; it's banking, right? It would be great if my statement balance just doubled up, hey, I'm happy, but what if it reduces? I'm not happy. 
So data consistency is extremely important from this perspective, especially when you're coexisting, to make sure that the data is correctly stored. The scope is so huge that it is difficult to take a thin slice and say, okay, this is what I'm going to deliver in the next three months or six months. Because of the many stakeholders involved, the many systems involved, getting all of them aligned, and remember, it's not just the core banking implementation team; we're talking about so many other external systems as well. They also need to do some work in order to fit in with the new platform. So how do you get that aligned? That is a big problem. Defect management. With so many different partners involved, so many different systems involved, ownership of all the systems is not in one place. If I'm taking an off-the-shelf product and there are issues with the core product itself, how do I raise those issues? How do I get them prioritized and fixed, so that I can get the fixes into the system as quickly as possible? Likewise, if I'm customizing the product, and some other team is potentially doing that customization, how do I identify that and get those issues fixed? So it's a huge problem: how do you really manage all these issues, prioritize them correctly, and keep things going? And lastly, team distribution and integration. The distributed nature, the hugely integrated systems with mainframes and databases of various different sorts and so many other systems; it is a huge challenge to get that coordination done. So these were the various challenges, from the implementation perspective, for this core banking project on a path to say, yes, we want to do CD. Along with this, as if the existing challenges were not enough, because it's a bank, there are a lot of non-functional requirements that we had to cater for. Security, no-brainer, right? A banking system has to be secure. 
Performance: it has to perform in various different ways, right from the integrations of different systems to the end-user experience of what is really happening. Now, it's a hugely complicated system, where the end users are not just people who have accounts in the bank, which could be savings, current account, whatever it might be; there could be loans and various other insurance policies. There could be various types of customers, and you need to have the system perform on that basis. Auditability: if anything goes wrong, this is where government policies, each country's policies, also kick in, to say that if I'm in Australia or wherever, I need to conform with these other things. And if something goes wrong, I need to know exactly what happened at different points in time, so I can trace back to the origin and figure out what the problem was. Compliance: a very big thing, and different for each and every country. So even though the bank is operating in Australia, they might have different compliance requirements in New Zealand, for example. How do you cater to those? How do you implement them? How do you validate them? Big problems. And accessibility, that becomes a huge thing. So these are just the top five non-functionals that I've listed, but there are many more that we need to think about in this whole context. So how do we really get to having good code quality in order to start delivering value? The way the system worked, and this is a very, very simplistic view of the whole platform and the team structure, is that there is a core team for the off-the-shelf product that we've bought, and because it's so huge in nature, the bank is working tightly in collaboration with them on what the requirements are and how it should be done. 
The core team's developers would write code and put it in their Subversion system, which, because we are working as one team in some ways, would also get pushed to the bank's Subversion system, synced manually or through an automatic trigger, in whichever way. Only once this code is of decent quality does it get pushed into the main Subversion system, and that's where the code would get compiled again and tests would run against it. Now, it's against this particular Subversion system that the customization teams would start working: I have the off-the-shelf product, I need to add my customizations on top of it, and it's going to go into the same Subversion system. Likewise, the integration and configuration teams would be working on the same thing, and there is a systems team for questions like: how do I print a checkbook for this particular user from the new system? They would connect the printers and check, can I print the correct checkbooks over here? That kind of thing. Or the tellers in the bank itself: how do they connect their systems and see the data? So that becomes another systems team, looking at that type of integration. And of course the NFR teams, because it's such a huge thing. Ideally, you would want all of these people sitting together as cross-functional teams working on delivering the functionality, but for such a large scope, separate teams also have to be created to look at the whole and not just a part of what has been done. For the scope of this discussion, we are not going to focus on the NFR teams or the systems teams. We are going to focus on the core product team, the customization teams, and the integration and configuration teams. So in this kind of complex system, how do you really get early feedback and ensure that the quality of your product is good enough for you to take it to the next stage? 
For you to potentially say, yes, I'm ready to go live, or to get this particular feature in front of my internal users, bank users, so that they can start providing me early feedback. Remember, the objective is still this, right? Nothing has changed from this perspective. So how do you really achieve this? The first stage of getting to continuous delivery is, of course, continuous integration. You have to get this set up first. Anyone over here who's not aware of continuous integration? Everyone is, so I'm going to skip through. A very quick summary: it's integration verified by an automated build, including testing, to detect integration issues as early as possible. This would happen on each check-in that developers do. The process is: anyone checks in code, it goes to the Subversion system, and that triggers a build and tests automatically. Likewise for any developer from any team, wherever they might be sitting: whenever any change is made, you want tests to run around it as quickly as possible. Continuous delivery, we'll get to that part. How is automation integrated as part of CI, and why do we need to do that? We'll touch upon that briefly; if we don't get a chance to answer it explicitly, let's talk after the session so we can cover that part. So continuous delivery is a natural extension of continuous integration. The whole intention is to make deployments boring. Deployment is a tedious, very error-prone process, and if you can automate that part, you have taken care of a lot of issues that can potentially happen in your deployments, in moving code from one environment to the other, for example. 
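The check-in flow just described, code goes in, a build is triggered, tests run, and a failure stops everything, can be sketched as a tiny fail-fast pipeline runner. This is only an illustration of the idea; the stage names and the Python shape are my own, not the bank's actual tooling.

```python
# Minimal sketch of a check-in-triggered CI pipeline: run each stage in
# order and stop at the first failure, so the team gets feedback quickly
# and later stages never mask an earlier break.
# Stage names and step functions are illustrative, not any real setup.

def run_pipeline(stages):
    """Run (name, step) pairs in order; return (passed, results so far)."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:  # fail fast: stop at the first broken stage
            return False, results
    return True, results

# Hypothetical stages standing in for "compile, test, deploy".
stages = [
    ("compile", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulate a broken integration
    ("deploy-to-test-env", lambda: True),  # never reached
]

passed, results = run_pipeline(stages)
print(passed, [name for name, ok in results])
```

Running this prints `False` and only the three stages that actually ran; the deploy stage is skipped because an earlier stage failed, which is exactly the early-feedback behavior CI is meant to give.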
So again, there's a link over here to Martin Fowler's article on continuous delivery; you can read more about it there. The intention is that you quickly iterate over your business idea, and the color is looking really bad thanks to the light, but there's some business idea: you take that idea, build on it, design it, follow whatever process you need to implement it, and deliver it to your customers, and the cycle keeps repeating, right? You just want to keep doing that as often as possible, as quickly as possible, so that whatever feature set you're working on gets to the end user. So how do we get to continuous delivery? This is what we are really here for, right? We understand the principles of agile, I hope. To implement those principles, there are various different practices. The principles might be tens in number; the manifesto has got four values, which lead into many more well-defined principles of what needs to happen to implement the tenets of the manifesto. But those principles can have hundreds of different ways of being implemented, and which way is right for you depends on your team, your context, your product, your domain, your team's skill set, your team distribution, and various other things. One practice is essential, and that's what I want to ask you: do you know what it is? What is the one practice that is essential to make the team successful on the path to CD? Continuous improvement, communication, collaboration, early feedback; how? Exactly. So essentially it is automation, and remember, I'm saying it's one of the practices, right? It is not the only practice. All the answers you gave were correct, but the answer I was looking for is automation. To get to continuous delivery, you need to have good automation in place, otherwise what happens? Think about making one change in that big, complex ecosystem. 
You would have to manually regress each and everything in order to release it. Now, would that be a viable option? You've checked your code into CI, but if you don't have the backing of automated tests in this kind of environment, this ecosystem, is that really going to make sense? How much time would you need to regress all of that? So automation is one of the key practices that need to be implemented well in order to help with early feedback and knowing the quality of the product. Do I need to roll back some changes and fix them properly before moving ahead, or am I good to take the next step? Likewise, what is one of the practices that can make teams unsuccessful? Automation. Why? Automation done wrong. Can anyone explain? Passing but not really passing, excellent answer; let me quantify that. It's essentially a test saying assert true equals true. It's always going to pass, regardless of what's happening in your product. Other problems related to automation? Low coverage of tests? False alarms? Not all scenarios covered. Maintenance, sorry? Data, sorry? Baselining the data; very important, especially for such a complex system. Automation coverage is not complete, right? Sorry? It takes too long. Okay, now we're getting to the crux. Duplication. So there are various other things, right? Based on our experiences, we could potentially come up with a unique reason for each person in this room for why automation has gone wrong. And what happens when automation goes wrong? That's how we create a market for manual testers, okay? An unfortunate thing, but a true story again. So what is test automation? Test automation is a safety net. Everyone knows how fishing is done, right? We have fishing nets, and fishing nets come in particular grid sizes. Can I use any grid size to catch any type of fish? If I want to catch a large fish, would I use a net with a very fine grid? Does it make sense? 
If I do that, I'm going to catch a lot of junk and debris along with the fish, if I catch the fish at all. What is important is that you need to identify the right type of test, which is going to give you the right type of coverage, the right type of confidence in your test suite and its results, to say: yes, this is real feedback. This is not a false alarm; this is not a missed scenario. It might be that the test is out of date, but it has actually caught a real issue: the product functionality has evolved, but I have not updated my test for it. A valid situation. So you need to identify the right type of test. Now, what is the right type of test? Very subjective. For that, the test automation pyramid is a great reference for how to identify the right type of test and where it should be automated. Anyone not heard about the test automation pyramid? Good show of hands, so I'm going to go through this one quickly. The test automation pyramid is a concept introduced by Mike Cohn in his book, Succeeding with Agile. Martin Fowler has also written about it in very simple terms, but I think all of these definitions are somewhat lacking in what the pyramid means. So let me quickly explain. The pyramid here is drawn as an equilateral triangle. What it says is that there are various different types of tests in the pyramid, and these types of tests are, of course, dependent on the product under test, right? The names listed here as types of tests are for reference. It really depends on what product you are testing and whether a certain type of test is applicable for it or not. So let's assume that these are the types of tests applicable for my product under test. What the pyramid says is that the base is the widest, so we need to have the maximum number of tests at the lowest layer of the pyramid, which is the unit tests. 
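Before moving up the layers, here is a quick sketch of the "passing but not really passing" problem mentioned a moment ago: a tautological test next to one that actually checks the product. Those unit tests at the base of the pyramid only earn their place if they are the second kind. The `monthly_interest` function is a made-up stand-in for real banking logic, not anything from the actual project.

```python
# A tautological test vs. a meaningful one. `monthly_interest` is a
# hypothetical stand-in for product logic; the point is what each
# assertion can and cannot catch.

def monthly_interest(balance, annual_rate):
    # Made-up business rule: one month's simple interest.
    return balance * annual_rate / 12

def test_always_passes():
    # Anti-pattern: asserts nothing about the product. This stays green
    # even if monthly_interest is completely broken.
    assert True == True

def test_checks_the_product():
    # Meaningful: fails the moment the calculation regresses.
    # (Rounded to sidestep floating-point noise in the comparison.)
    assert round(monthly_interest(1200, 0.10), 9) == 10.0

test_always_passes()
test_checks_the_product()
print("both tests ran")
```

The first test inflates the suite's size and its green-ness without adding any safety; the second is a genuine strand in the safety net.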
As you move up the pyramid, there are more and more higher-level tests: integration, JavaScript, view tests, web service tests, UI tests, and on top of it there is an explicit mention of manual slash exploratory tests. Now, I am very hesitant to call these manual tests, because when we say manual test, typically what we mean is a detailed test description written out: click on this, enter this data, click on this, enter this data. Those are the worst kind of tests you can have. What is essential is that if you have the right tests automated, you can really focus on exploratory testing: things which may not have been automated because you did not think about them before, testing a particular functionality in certain weird combinations. That is exploratory; you may not have automated those kinds of things. So this is what the test pyramid says. Now, what happens is that as you move from the bottom to the top of the pyramid, the cost of implementing each test keeps growing. Why? Because at the unit level, I have the code checked out on my machine. I'm running tests against the code base, not necessarily having deployed it to any environment. It is against raw code: I just build it, compile it, and run tests against it. I do not deploy it anywhere. My unit tests might be based on just the classes or modules that I am changing, with tests written against only those. For integration tests, I need to start pulling in other modules or components together, and run tests against those. So the cost has gone up: I have to wait for someone else to have finished certain things and made them available to me. It might still run on my machine, but now I'm dependent on someone else, and that is a delay in feedback. So these tests are fewer in number, because remember, the pyramid tapers towards the top, but they are more costly to implement. 
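That jump in cost from unit to integration level can be sketched like this. The `apply_fee` and `Ledger` names are hypothetical, invented for illustration; the point is that the unit test needs nothing but the one function, while the integration test needs a second component wired in, which in a real program means waiting on another team's build or environment.

```python
# Sketch of the cost difference the pyramid describes. Names are made up.
# The unit test exercises one pure function in isolation; the integration
# test needs a second component (Ledger) to exist before it can run.

def apply_fee(balance, fee):
    # Granular logic: cheap to test the instant the code compiles,
    # no deployment and no other component required.
    return balance - fee

class Ledger:
    # A second component that the integration test depends on.
    def __init__(self):
        self.entries = []

    def record(self, amount):
        self.entries.append(amount)

def close_account(balance, fee, ledger):
    # Integration point: combines the pure logic with the ledger.
    final = apply_fee(balance, fee)
    ledger.record(final)
    return final

# Unit level: only the function under test is needed.
assert apply_fee(100, 3) == 97

# Integration level: the Ledger must exist, and in a real system,
# be built and deployed by whoever owns it.
ledger = Ledger()
assert close_account(100, 3, ledger) == 97
assert ledger.entries == [97]
print("unit and integration checks passed")
```

Here the "other component" is a ten-line class, so the extra cost is invisible; in the bank's ecosystem it is another team's mainframe interface or web service, which is exactly why the pyramid wants fewer tests at that level.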
As I get to JavaScript tests, likewise, it might not be just one script that I am using; there could be multiple scripts coming together, and that's how I need to run the test. For view tests, web service tests, and UI tests, you need to bring more and more of the system together and deploy it. For a view test, for example, I need to have a UI component on top to run the test, which means I need to build it and deploy it, maybe locally or elsewhere, before I can get to it. For UI or web service tests, I need to have the complete code base deployed in some environment; only then can I run tests against it. So there's a cost involved, and especially in a much more complex system, you need more and more of these things coming together, with the correct environment provisioned, in order to get that. So the cost keeps growing. Likewise, the value of each test needs to keep growing too. A unit test is about a granular code change that you have done: you've changed one parameter of a method for an important piece of logic, and you need tests around that. Whereas when you get to a view test, you need to think about it from a UI perspective: how is that really coming together? So you keep abstracting up the value of what the test is asserting against, validating against. Likewise for UI. And the bigger problem is time. For me to get an environment up with web services and UI deployed takes much longer when I've just made one method change in my code; I need to wait for the complete build and deploy cycle to be done to get to the stage where I can run the test. So it takes a long time to deploy. What the pyramid does not really say, though, and this is a missing link from the existing documentation in my belief, is exactly this: the inverted triangle is the impact of each test on the product. 
A unit test covers a very granular bit of the product, whereas a UI, exploratory, or web service test covers the breadth of the product. So the tests need to be focused in that fashion as well; that is very important. The last aspect, and this slide is actually messed up here, it shows correctly on my machine, is that the view tests down to the unit tests are the technology-facing tests, whereas the web service, UI, and exploratory tests are the business-facing tests, because you're covering the breadth of the product. What is the business functionality I'm trying to validate? That's what the focus should be. So this is what the test pyramid says. However, if you look at reality, there are two types of situations I've typically come across. One is called the ice cream cone anti-pattern. It's the same test pyramid, but completely inverted. Why? Because I've got minimal unit test coverage, and my maximum focus is on building that massive UI test automation framework, which is very slow, very brittle, doesn't provide the right feedback, and is not up to date with the product functionality: all the issues we spoke about on the functional automation side. And as a result, what happens? There is a larger effort of manual and exploratory testing on top of that. It's a typical anti-pattern. In that situation, I don't know why we still do UI automation; we should just scrap the whole thing and focus on manual testing. It would be easier. There is another pattern we typically see, especially in large organizations, called the dual test pyramid anti-pattern, where there are two test pyramids. Can anyone guess what they might be? No, these are all automated, right? There are two test pyramids: one for development and one for testing.
Developers are involved in automation, but because of the silos created in the way organizations are structured, the developers do not really talk to the testers as much. So they end up building their own test pyramid, and the testers are held up building their own test pyramid, focused on the specific layers they care about. However, there are two problems here. One, there could potentially be a lot of duplication across the two, because there's a silo, a wall in between: they're throwing it over the wall, saying, okay, I'm done, now you test it. The bigger problem is that you might miss out on very important scenarios that should be automated. That, to me, is the bigger problem, right? So with all this information about automation, which is one of the key aspects of getting to continuous delivery if you have the right type of automation in place, how do we really get to continuous delivery? This is what I started doing for the bank to get them on that path. Again, remember this is a very simplistic view of the whole thing. For reference, let's look at the legend. And by the way, all these slides will be available on SlideShare or from my blog; there will also be a link from the conference agenda. So let's look at the legend, the pyramid. We are focused on five different types of tests: unit, integration, web service, functional UI, and manual exploratory tests. Those are the pyramid layers we are using. So given the scope of the program, what we want to do for the bank, the first thing we have is a developer environment, where the developer is actually making changes based on the functionality to be implemented. In the developer pyramid, there are going to be non-existent UI tests, because I don't really care about UI at this point. I'm making core granular changes: mostly unit-level changes and integration tests, maybe some JavaScript tests or not.
So my developer environment should focus on these types of tests, mainly unit and integration. And of course manual exploratory testing, because I'm making changes, and if I can validate them as quickly as possible, that would be great. Next, when the developer has done this testing and is happy with the changes, the developer commits code into the version control system, and you get into the stubbed environment. In the stubbed environment, all other integrations are stubbed out. I'm just focused on getting all the developer code from my team into this environment; everything else has been stubbed out. What type of tests can I run here? I'm going to repeat the same tests, but potentially I can also validate some UI, because all the other changes are coming through as well. It's a separate environment, so I'm not necessarily blocking my machine to run the UI tests at this point, right? So we've got limited UI tests. Then once my stubbed environment is done, I move on to the semi-integrated environment. In the semi-integrated environment I might say, okay, from the context of the bank, I'm now also integrating with other systems from the bank. In this case, I do not need to run my unit tests again, because that has already been done in two places; I don't need to repeat it. However, now I'm going to run UI, JavaScript, and integration tests, because I'm integrating with these other systems. When I get to the completely integrated environment, I do not want to run my unit tests, and I do not want to run my integration tests, because those have already been done. Rather, they are limited in number, not completely excluded, right? Because now I'm potentially integrating with systems external to the bank, and I want to test those integrations too. And once all that is done, when I get to my pre-prod or UAT environment, I'm not going to run my unit tests.
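What "everything else stubbed out" looks like in code can be sketched with Python's standard `unittest.mock`. The `PaymentService` class and the core-banking client here are invented for illustration; the idea is that the team's own code runs for real while the external system is replaced by a canned double.

```python
from unittest import mock

class PaymentService:
    """Hypothetical service that depends on an external core-banking system."""
    def __init__(self, core_client):
        self.core_client = core_client  # real system in production, stub in tests

    def transfer(self, from_acct: str, to_acct: str, amount: float) -> str:
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.core_client.post_transfer(from_acct, to_acct, amount)

# In the stubbed environment, the real integration is replaced with a canned
# stub, so the team's own code can be exercised without waiting on anyone else.
stub_core = mock.Mock()
stub_core.post_transfer.return_value = "TXN-0001"

service = PaymentService(stub_core)
receipt = service.transfer("ACC-1", "ACC-2", 250.0)

assert receipt == "TXN-0001"
stub_core.post_transfer.assert_called_once_with("ACC-1", "ACC-2", 250.0)
```

The same `PaymentService` code later runs unchanged against the real client in the semi-integrated environment; only the wiring differs per environment.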
I'm not going to run my integration tests. I'm going to do the full validation from the web service and UI layers. And of course, in each of these stages, we've got the right type of manual slash exploratory testing done, okay? With me so far? So, what does this leave us with? In order to get early feedback in a complex environment, you need to think about what the right processes are to set up, what practices to use based on those processes, and what tools should be used to help implement those practices. Typically, we go the other way around: hey, wait a minute, our organization already has an enterprise license for this tool, let's use this tool. And I know this practice has worked in some other project before, let's use it. We've got a center of excellence which recommends these practices, we are going to use them. And then: how do we create a process around all of that? That's eating it the other way around. What is required is to understand what needs to be done, set up a process based on that, and then, based on the process, decide which practices and tools will really help you achieve the goals. You might end up in the same state, but it is a much more thought-through approach. Based on all this, you identify the correct and appropriate number of environments. Now, environments are costly, especially at the scale of what an enterprise is really delivering for a core banking system. For one stubbed environment, they needed five physical servers. Not really required at that point, because no one had even thought about virtualizing it, which could easily have been replicated again. So think about the right type of environments. There will be constraints. Question the constraints: are these valid constraints? Can you work around them?
But if you identify the right type of environments, and the right number of each, it will help you tremendously in getting that early feedback by running the right type of tests in each of them. Doing smart automation, using the right tools and practices, and identifying the right tests is very important, and more important still is identifying the right type of test for specific environments. So for a UI test you have implemented, you could say: this is targeted not for my developer environment, not for the stubbed environment, but for anything other than those, I want to run this test. What that means is you have to categorize your tests correctly. Is it a UI test, a JavaScript test, an integration test? Which environments is it applicable to? You need to put much, much more thought into that and make it configurable. DevOps is extremely essential, because once you have run all the tests you have and are ready to go from the developer environment to the stubbed environment, you don't want to manually configure the environment with whatever configuration changes, test data, or baselining changes might be required to get the stubbed or semi-integrated environment going. You need to automate all of these things, so that with the click of a button you can say: take this code base, deploy it in this type of environment, and it will do everything required, including running migration scripts or whatever else is there. The biggest thing is that testing cannot work in isolation. You cannot keep the testing teams separate from development activities or DevOps activities, because otherwise you might miss out on very important context that has to be carried forward. So collaboration is very, very essential. Test consolidation is equally essential. As the product evolves, say I wrote a login test early on.
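One lightweight way to make that environment categorization configurable, sketched here with Python's standard `unittest` and an invented `TEST_ENV` variable rather than any specific tool the team used, is to tag each test with the environments it applies to and skip it everywhere else:

```python
import os
import unittest

# Which environment is this run targeting? (hypothetical naming convention)
CURRENT_ENV = os.environ.get("TEST_ENV", "developer")

def runs_in(*environments):
    """Decorator: run the test only in the listed environments."""
    return unittest.skipUnless(
        CURRENT_ENV in environments,
        f"not applicable in {CURRENT_ENV!r} environment",
    )

class AccountTests(unittest.TestCase):
    @runs_in("developer", "stubbed")
    def test_interest_calculation_unit_level(self):
        self.assertEqual(2 + 2, 4)  # placeholder for a real unit-level check

    @runs_in("semi-integrated", "integrated", "uat")
    def test_login_through_ui(self):
        self.assertTrue(True)  # placeholder for a real UI-level check
```

The build pipeline then just exports the right value, for example `TEST_ENV=stubbed python -m unittest`, and the same test suite automatically narrows itself to what makes sense for that environment.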
Now I am in iteration 10, or iteration 100, whatever it might be. Do I still need that login test? Ask yourself that question. If it is not required, if it is covered as part of something else, remove it. Just increasing the number of tests is not going to improve the quality of your tests, or the quality of your product. Consolidation is extremely important, and so is maintenance. As the product evolves, am I being smart about updating my test framework? Have I figured out a better way to implement my test data creation, for example, to make the tests faster to run? You have to constantly maintain the framework. You have to constantly refactor it as part of regular activities, the way developers have tech tasks to refactor and clean up the code base periodically. You have to do that for your test automation code as well, and remember, test automation is not just functional, it's all types of tests. Test prioritization: if you do not prioritize your tests correctly, you will not be able to say, I want to run this as a smoke test. If my login test has failed, it doesn't matter whether I run my account transfer test at all; it is going to fail at the login itself. If you prioritize correctly and put tests into the right groups for your scheduled test runs, you will get earlier feedback. In a large system, you could end up with tens of thousands of unit tests and hundreds or thousands of functional tests, but if the first five tests can tell you that it doesn't make sense to proceed, that this is the worst build possible, you want that feedback within a few minutes. You don't want to wait two days for dozens of tests to run and give you the feedback you could have had in the first five minutes. Okay? You also need a common repository where all these tests, of all the different types, written by all the different teams, are stored. The common repository is essential.
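The smoke-first prioritization described a moment ago can be sketched in a few lines. The test classes and the login and transfer checks are placeholders, not the bank's actual suites; the point is purely the ordering: run the small high-priority group first, and abort the expensive rest on a bad build.

```python
import unittest

class SmokeTests(unittest.TestCase):
    # Highest-priority checks: if these fail, nothing else is worth running.
    def test_login(self):
        self.assertTrue(True)  # placeholder for a real login check

class RegressionTests(unittest.TestCase):
    def test_account_transfer(self):
        self.assertTrue(True)  # placeholder for a slower functional check

def run_prioritized() -> bool:
    """Run the smoke group first; skip everything else on a bad build."""
    runner = unittest.TextTestRunner(verbosity=0)
    loader = unittest.TestLoader()
    smoke = runner.run(loader.loadTestsFromTestCase(SmokeTests))
    if not smoke.wasSuccessful():
        print("Smoke tests failed: aborting before the slow regression run.")
        return False
    regression = runner.run(loader.loadTestsFromTestCase(RegressionTests))
    return regression.wasSuccessful()
```

Most CI servers and test runners support this kind of staged execution natively through groups or tags; the sketch just makes the fail-fast logic explicit.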
Otherwise, if I'm writing a functional test and it doesn't make sense at the functional layer, it might make more sense as, maybe, a view test or a JavaScript test. I can go directly into the code base and see: do I already have a test of this type? If not, maybe I can add it right there, instead of having it as functional UI automation. That is very important, because the more tests you push down the pyramid, the faster the feedback you will get out of them. So you need to be doing that. And lastly, you need a single dashboard that tells you the status of the quality of your product. CI dashboards will tell you what is happening with my unit tests, what is happening with my integrations. But how can you get a consolidated view of the quality of your product in one place? And this has to be transparent and visible to each and every role on the team, because as a manager, I would like to know the quality of the product. As a test lead, I want to know the quality of my functional tests, for example. As a team lead, I want to know, for my stream of work, the quality of all the tests for my stream. So if you have one dashboard, one place where all this data is collected, you'll be able to make more meaning out of it. So, yes, the question is: who has a say in defining the process, what the right process is to set up, and the practices and tools as well? If you keep it in the hands of one person, it is going to be a biased view anyway. At the same time, if you have a group this large sitting and thinking about process, it's going to be just chaos, nothing else. So it's important to identify the right roles, the right people from each role, who are going to contribute to what the way of working needs to be. It starts from the stakeholders, from where the business requirements are coming, down to the technical implementation or IT teams, to decide who has a say in how it is going to be delivered.
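The single dashboard mentioned a moment ago is, at its core, just an aggregation over result records collected from every team's runs. Here is a minimal sketch; the record shape, the streams, and the numbers are invented, and a real implementation would pull from the CI server rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical result records collected from each team's CI runs.
results = [
    {"stream": "payments", "layer": "unit",       "passed": 4980, "failed": 20},
    {"stream": "payments", "layer": "functional", "passed": 310,  "failed": 5},
    {"stream": "lending",  "layer": "unit",       "passed": 3400, "failed": 0},
    {"stream": "lending",  "layer": "functional", "passed": 120,  "failed": 2},
]

def dashboard(records):
    """Consolidate per-team, per-layer results into one quality view."""
    summary = defaultdict(lambda: {"passed": 0, "failed": 0})
    for r in records:
        for key in (("stream", r["stream"]), ("layer", r["layer"])):
            summary[key]["passed"] += r["passed"]
            summary[key]["failed"] += r["failed"]
    return dict(summary)

view = dashboard(results)
# A test lead can slice by layer; a team lead can slice by stream.
assert view[("layer", "functional")] == {"passed": 430, "failed": 7}
assert view[("stream", "lending")] == {"passed": 3520, "failed": 2}
```

The value is less in the code than in the agreement that every team publishes its results in one shared shape, so each role can slice the same data its own way.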
So you start thinking about the requirements themselves and how you would deliver them, and you keep going down in that fashion for all the roles: I need product owners, I need business analysts, I need developers, I need testers, I need these roles from the mainframe team as well as from, let's say, the Java implementation team. Because if it has to be a common process for the team, you have to involve the team in it. And when the right team comes together, that is where you can really get valuable input on why something would work in one case but not another. Why would it work in Australia but not in India? The cultural aspect also becomes very important, so you have to mix in these kinds of considerations. Yet the team can get really large in such cases, right? Especially if you look at a core banking program. So again, there's a governance structure of sorts. You start off with the top layer, setting up that governance structure: the stakeholders talking to each other, agreeing at a fundamental level how the different partners are going to collaborate and achieve the goal. That is at a high level, a 30,000-foot level. At a 10,000-foot level, who is going to be required to support that? Maybe the whole core implementation is split into five different streams. At the stream-head level, you would think about the right roles to come together and contribute accordingly, right? So it is a combination; it's not just one person who should decide. And of course it has to be in the best interest of the client, not in the best interest of the vendor and how it can keep getting business for 10 or 20 years and beyond. That becomes vested interest, and that is usually the point where the value is lost and it's all about the money. So, in the interest of time: is the triangle a pentagon now? Who said yes? Or is it really an n-dimensional polygon?
The answer truly lies in how many environments you have identified, where you can say: if I run the right type of tests in this environment, it will give me the quickest feedback. It will give the team working on that subset of changes the quickest feedback on whether those changes are good or bad. And you keep adding to that. So the answer truly is that it is an n-dimensional polygon, and I rest my case on the magic words. Let me rephrase what I just said: it definitely starts off slow, but release to production doesn't necessarily mean releasing to the true end user. It is about how you can get the changes you have made, as quickly as possible, to the next level, so that others can start interacting with them. If I'm building an API for a payment gateway, the quicker I release it to at least one partner, vendor, or consumer of that API, the sooner I'll start getting feedback, okay? So release to production doesn't necessarily mean going truly live to the end users. It might be to internal users, it might be to internal systems, but I have processes, practices, and tools well enough aligned to say that if I make a one-line change, I will be able to confidently run the tests as quickly as possible and say yes, it is ready to go to the next stage. That is what it truly means. From the bank's perspective, their milestones: the five-year program was split into three phases, phase one, phase two, phase three. They targeted a certain section of the business for each phase. To implement that, the first release was just: we will release to an internal customer, an internal bank employee. So if an end customer wants to set up a new insurance policy of a specific type, they would call the call center, and the call center employee would use this internal system, only for that one type of policy, and see if it can be done.
That is continuous delivery. It doesn't have to go all the way to the true end state, right? You prove it works. You prove the environments work, that the basic setup is working, and then you start scaling it up. So if I quickly go back to this other slide, this one: in a developer environment, for the team customizing the core product, this pyramid might make sense. For a mainframe system that comes together in a semi-integrated environment, for example, the UI layer potentially does not make sense; maybe the unit level doesn't make sense either, and there are just integration tests in some form that you have written. So each type of system you are building, or interacting with, will define its own test pyramid. It is not going to be the same pyramid all across. There might be ten different layers, and five of them might be grayed out for a specific team. It is possible. It is complex to build that kind of thing, but if you start thinking proactively about how you can get the quickest feedback for the changes you have made, you are heading in that direction. So the question is, in this case study, how were we able to achieve this type of strategy, to what stage, and what value was really derived from it? This program is still going on. It is going to go on for at least another three years; it is a massive thing. And on the value of getting early feedback: why did the bank want continuous delivery? No one actually asked that question. Why do you want continuous delivery? As I said, a bank has limited money. Even though it might have sanctioned X amount of dollars, no one from the bank, the CFO, whoever is actually going to sign the check, is going to give you that X amount of dollars immediately. He'll say, I'm going to give you 5% of that right now. Show me something, show me some ROI on that.
With this approach to continuous delivery, you start thinking in terms of how you can get value out quickly. I show something, I tell my stakeholders, here's the value you're seeing out of it, give me some percentage more, so I can spend on the next level of investment. So this approach has helped in terms of asking: what is the thinnest slice? How can I show it to some stakeholders, some user base, start deriving value out of it, and then take it on to the next level? Yeah. So the question is, how do you really set up stubbed environments for APIs? That's a very core automation-related session and discussion in itself. The way you would create the stubs depends on what you are trying to stub. Are you trying to stub a mainframe? Are you trying to stub an API? The technologies depend on that, and there are plenty of open source and commercial tools available for it. Plenty. Yeah. Okay, so, sorry, I'll pause right now, and we can head outside and continue the discussion. Thank you very much.
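As one illustration of the stubbing idea (the endpoint and payload here are invented, and a real team would more likely reach for one of the dedicated service-virtualization tools the speaker alludes to), a few lines of Python's standard library are enough to stand up a canned HTTP stub for an API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubHandler(BaseHTTPRequestHandler):
    # Canned response standing in for the real payment-gateway API.
    def do_GET(self):
        body = json.dumps({"status": "APPROVED", "txn": "TXN-0001"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Start the stub on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Code under test can now talk to the stub instead of the real gateway.
url = f"http://127.0.0.1:{server.server_port}/payments/123"
with urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
assert payload["status"] == "APPROVED"
```

For a mainframe, the same principle applies but the protocol changes; the stub speaks whatever interface the system under test expects, which is why the choice of stubbing technology follows from what is being stubbed.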