Thank you. So we've got half an hour. When I first put this talk together it was a big topic, not just because it's mainframes, but because it's a big area to talk about. First I'm going to tell you a little about myself, just so you understand where I'm coming from. I've been at ThoughtWorks 12 years, a principal consultant, Canadian, and I live on Vancouver Island; these are things that are of interest to people. I've been working in the areas we're talking about for over 10 years. Before getting into IT I worked in rehabilitation with people with severe disabilities, and I worked in research. I think experiences like that colour the way I see the world and how I think about problems, so those sorts of bits of information are very important for people to understand.

So: COBOL gets a bad rap. These are very common quotes about how people feel about COBOL within the high-tech industry. We use the word "legacy" a lot, and we don't like it. In fact I get asked by people in our own company things like, "Why do you work in this area? Why don't you do something useful?" I actually get questions like that, which throws me off. People badmouth green-screen applications especially. Then it kind of hit me one day. I was doing some Node.js work in vi, plugging along, and I looked down and realized that the vi editor I love so much is a command-line text editor written by Bill Joy in 1976. I realized right then that Frampton had just come alive in my command line. So I'm not going to badmouth mainframe systems or green-screen systems, because if something is durable and good, there might be something there.

Then there's the right way to build software. We all talk about it: you've got your test-first, you've got your CI, you've got your integration work in your pipeline right through to production, and ops is right in there.
Everybody's doing well; you're Freddie Mercury at Wembley in 1986. That's how we all are, right? Does anybody in the room have a situation like that, where that's what they're working in? If anybody had put up their hand, I was going to make sure they bought drinks for everybody else in the room, because I don't think that's the reality. The point is, we talk about legacy systems, but think about Twitter and how many fail whales showed up in the early days. Was that suddenly a legacy application because it wasn't doing what it was meant to do? "Was it fit for purpose?" is the question we should have been asking. So legacy has nothing to do with technology or platforms, I'm discovering.

I'm going to talk about COBOL and why it's relevant in our world and why we should pay attention to it. That's Google in 1999. Yeah, they were kind of nerdy back then too. We can compare COBOL systems to the big things like Google and Amazon. These are big things, and COBOL is big. I'm going to give you some stats, taken with a grain of salt, because it's difficult to quantify how big COBOL is in the world and in mainframes. But Google is everywhere; I'll agree with that.
However, it is estimated that 60 to 75 percent of the world's transactions happen on COBOL mainframe systems. We use them every day. If you use a credit card, you're probably interacting with a COBOL system. If you go through traffic lights in major cities, you're probably talking to a mainframe somewhere. If you have insurance, or you're getting a prescription filled, there's a good chance you're dealing with a COBOL mainframe system somewhere underneath the covers; you might not see it. I do about 10 to 15 searches a day, mainly for pho places and places where I can buy Thai food, but the average American is estimated to interact with a COBOL system 13 times a day, through things like credit card transactions at banks. Amazon recorded about 27 million sales in their last peak holiday period. Google now serves over 5 billion searches a day. COBOL accounts for around 30 billion transactions a day worldwide, and that's a very conservative estimate. So these are big systems. They're part of our ecosystem; we live with them. And lastly, Google has hundreds of millions of lines of code, but it's estimated that somewhere between 180 and 240 billion lines of COBOL are running in production today. It's also estimated that the average COBOL developer supports between two and five million lines of production code. These are things we should care about. If you're a little worried about that number, it's actually in line with the C++ and Java numbers as well; the order of magnitude worldwide is quite large.

So what does this mean to us? The first question is: why don't we replace these systems? Everyone says, let's just get rid of them. The strangler pattern is what we talk about: let's just drop these systems and put in something we like, something we enjoy. (If you search for these dancers, by the way, you're going to find some amazing videos of them.)

Replacing a COBOL mainframe system like-for-like is a problem. You're going to go and do a business case like this: I want to replace this with a technology I like. Suppose you're a Java guy; you're going to replace it with Java, you're going to spend years doing it, you're going to spend a lot of money, and you're basically going to have the same thing you started with, except it's going to be in something you like. It's very difficult to make that business case and get investment. Also, these replacements cost tens of millions to billions of dollars. All you have to do is a little research on how often people have tried to replace Sabre, and look at the years of effort, and the organizations and businesses that have been stuffed trying to do this type of work. And lastly, there's a high likelihood of failure. The US Postal Service considered rewriting their COBOL-based system, with 15 years of "legacy information" in it as they called it; then they realized that was core business logic, and they replatformed instead. These large systems have decades of additions. Workflows are embedded into them, and they're part of the DNA of a company. You can spend hundreds of millions of dollars and not get a lot of return for replacing them. So, especially in the fields of finance and insurance, people don't replace these systems.

And lastly, with these big integrated systems within companies, there's something I think of as muscle memory. That's Martha Graham. I think she's universally considered the 20th century's most important dancer, and to many people the mother of modern dance. She can do that movement because of muscle memory.
She trained the same movements over and over and over again, repetitiously, until eventually she could do them without conscious effort. In fact, if she thinks while she dances, she can't do it; she has to let go and let the muscle memory take over. Years later, dancers can replicate movements like that because of muscle memory. I think large enterprise systems imbue muscle memory into the companies and organizations they live in. People have muscle memory from using them, teams have muscle memory from using them, and the whole organization's processes evolve organically with the systems over time. It's not just a matter of putting business logic in; it changes the way information flows in the company, the way people communicate, the way people talk. So if you're talking about replacing a large system like this, you have to think about that factor.

And lastly, I've come to realize that legacy is a myth. That was a UFO spotted over Italy in 1960. Apparently it had to do with John F. Kennedy, the first Catholic president, being elected, and the president and the Pope and the aliens somehow all coming together. So I think there's a lot of legacy there as well. What I believe is a myth is this: it really doesn't matter whether you have COBOL, mainframes, Java, AWS, or Ruby on your stack. The question is: is the system fit for purpose? Is it doing its job? If the answer is truthfully yes, then the next question is how we invest in the system and take advantage of it. If it isn't fit for purpose, for whatever reason, then it's certainly a good time to replace it. But if it is, you should invest in it. And if we're really very good at our CD practices, then we should become very good at bringing these folks into the fold, and bringing mainframes into the fold. So I've stopped using the word legacy.
I think this is the heart of CD: growing the tent, making it bigger, bringing other types of technologies and other types of thinking on board, and improving. I'm going to talk about the Suncorp simplification program. Suncorp is an insurance company and bank out of Australia, and I was fortunate enough to work with them for about a year and a half before coming back to Canada and the US. They have a strategy that's working. They're evolving and improving their systems and their company simultaneously, they have a strategy around their mainframes, it's working across the company, they're getting tangible rewards, and the company is growing and prospering. So I'm going to talk about that experience. By the way, this is a very subjective measure; I'm giving you my own personal view, my own personal story, and I think there's a lot more to come of this. So: there's a multi-year effort at Suncorp, what they're calling simplification, across banking and insurance. I was working on the insurance side. Overall they're investing 300 million dollars over several years, and they're looking for a benefit of 225 million per annum in savings.
That's a significant investment and a significant improvement. On the insurance side, their margins increased after the first year of the simplification program by something in the neighbourhood of one percent. That doesn't sound like a lot, but when you look at the scale and size of an insurance business, that's a number you want to pay attention to.

They also had an agile transformation program in place already. They'd spent several years working on culture, and many of their areas had learned around agile principles. You can go up to a business area, go up to finance, and you see cards on the wall; you see them having stand-ups in the finance department. They're an interesting company that way, because they've adopted the culture and some of the thinking. Not in all areas of the business, and certainly not on the mainframe side yet, but throughout the company. So there were some skills in place, some values in place, and they also worked very, very hard at this.

The simplification program is not about replacing technology per se, but about aligning around the customer and the customer experience. A big part of it was replacing their customer-facing systems, their online systems, and improving their internal processes for working with customers. So it wasn't a matter of just changing a system and getting some benefits; they were aligning the company around a particular goal. And lastly, they took on a strategy of saying: we're going to retire 14 mainframe systems, pick our bet, bet on one system, and invest in that. If you look at the whole lifecycle of systems (explore, grow, sustain, retire), they took one system off sustain and moved it back to invest, and the rest of the systems are on track to be sustained and eventually discarded. That's a very clear goal and a clear strategy that's working for them: simplification, cleaning up, and alignment. There's nothing wrong with having a mainframe system; it's probably not good to have 15 of them. That's what they were looking at.

So this program is ongoing. I was originally working on the program team as the test manager, working through the test plan, and the mainframe team was running into some trouble getting traction in the program. When you have hundreds of people working on a very complex program, and if you've ever worked in one of those big internal programs you know there's a lot going on, they were getting a little lost in terms of their ability to get going. So I went down as delivery manager to work with them. When I showed up we had about six to seven months to production and a feature hadn't been done yet, so people were starting to get a little nervous. I showed up and I was embraced and loved as soon as I walked in the door; they couldn't wait to see me. The reason why is that they were doing very well, thank you very much, in many ways, and had been for a lot of years, and suddenly this agile guy shows up, and they were wondering what was going to happen. One of the things I had to do right away was start learning about listening and adapting. I didn't understand what a mainframe was, I didn't understand the culture there, and I didn't understand the way people think and talk about problems. I had to listen very carefully.

So why is this working? Why does a program like this work? Has anyone worked on a program where you've had hundreds of millions of dollars and hundreds of people working? Have you ever been involved with those? They don't generally work; there's a very big failure rate in those types of endeavours. So why did this work?
The first thing I'm going to talk about is organizational factors. That's 1922, on a 22-storey building, standing on two chairs with not much under them. Organizational factors are very much a balancing act.

First of all, there was a clear, direct business case. I often see business cases that read like a Hercule Poirot novel, and I still don't know whodunnit; they're unfathomable and unattainable. This one made sense right from the get-go. It was approved by the company's board in 20 minutes. When you talked about the benefits and the direction, they got it right away. Everybody could align to it, so every team knew what the goals were. It wasn't some vague thing about improving something or other; the goals were very straightforward. The technology strategy was simple and very clear: move to one system, brand at a time (they have different brands), and refresh the customer systems on top of it, so there was a new look for customers and much more interactive, much more responsive systems. And lastly, align those with consolidated, people-oriented processes within the company, to be able to respond to customers. Very clear, very simple, very direct. Milestones were understood: big milestones by product and by brand, and people understood what those looked like and what the measurement on them looked like. And lastly, the estimation. When people start a program of this size, they often come up with models, big spreadsheet models; they do a lot of forward calculations; they run Monte Carlo simulations; they do all sorts of things. This team looked at past large projects that worked and used those as the basis.
They said that's a good place to start. They did the old velocity measure, but at a corporate level, and that was a very sensible thing to do. The goals they set were very challenging, they weren't easy goals, and the time frames were very aggressive, but they were all doable, and the question became "why can't we do them?" as opposed to "why aren't we doing them?" So it moved ahead very quickly.

There were shared values in the program, top to bottom. There was a shared leadership model, and people grew leadership skills within the program. It was key to find local leaders, grow their capability, and support them, and the program team as a whole was focused very much on leadership as opposed to technical management. We were expected to show individual leadership, to reflect back to each other and learn leadership skills, and to imbue that into the people around us. People were encouraged and brought forward to be leaders.

Secondly, there was gender balance in this program, top to bottom. I'm not going to go into a lot of detail around that, but in any successful endeavour I've been part of in my life, there's been gender balance within the organization and within the teams. I've seen both sides: in the IT industry it's easy for me to look at the old-guy teams, but having worked in rehabilitation, a very female-dominated industry, I was often the only guy on the team. What I've learned is that gender balance across the board really helps, top to bottom, and the delivery teams had gender balance.

They started with a lot of unknowns. There were a lot of risks. They didn't say "we have to get everything known," and they didn't spend a lot of time trying to prevent risk. They spent time fixing problems as they came up and moving on: don't do it twice. They spent their time moving forward, taking steps, getting going in the right direction. You don't learn, you don't get better, until you start walking. And they developed evolutionary strategies, which means they tried things, and if those weren't working they didn't beat themselves up; they moved on and tried the next thing. If things were working but they were slowing down, or not getting the metric returns, they would improve things: card walls, program walls, the way teams interacted. The scrum-of-scrums process with 20 teams was reworked I don't know how many times before we found something that worked. So it wasn't a matter of saying "this is the way we're going to do it, that's our standard." There was constant growth and exploration and trying to get better at things: small incremental changes, measured each week to see how we were doing.

And there was program alignment, business and customer alignment, right from the get-go. One of the problems you often get with these types of teams is a clear separation between the people who use the software, the people who support it from a business point of view, and the teams writing it, especially in these large legacy-style systems. So we brought them together and put them in alignment: the people who were going to support the software from the business point of view were also working with the people who would support it from a technical point of view.

We spent a lot of time measuring the management teams very, very closely, much more closely than we measured the delivery teams. Delivery teams are pretty good at measuring themselves. But we measured the management teams' performance daily and weekly; that was the granular level we measured people's performance at, and it was very outcome-based. We didn't say you were 40 or 60 percent done on a task; you were done or not done. It was very much an outcome-based organization.

And lastly, mistakes were learning possibilities. We learned from them. There was no problem with making mistakes; the problem was making the same mistakes a lot, not learning from them, not getting better. If you didn't make mistakes, nothing was going to happen. And there was a no-fault escalation policy. What I mean by that is: teams needed decisions, teams needed help, and it didn't matter what it was; if they weren't able to solve it, we encouraged escalation. In fact we spent a lot of time going through teams looking for problems, pulling them out, and saying we'd take that on from a management point of view. Some of those would be things like "I need the CFO to make a decision," in a company that size. Someone from the management team would take it on, and they'd have a week to get it. It was very visible to the team that needed the decision what was going on with it, and that person was held accountable daily. It was nerve-wracking, because you'd have to stand in front of a lot of people every day and they'd ask: how are you doing on that? Have you got that problem solved? Have you moved it along? We were pushed very hard to be accountable down into the teams and across the organization. So: personal accountability, a lot of responsibility, but a lot of support at the same time.

These aren't technical things; these are things around changing the organization and its structure. The mainframe team, which was very conservative, came into this type of environment and started to respond. They started to change and get much better, because they were empowered.

So now I'm going to talk about some of the technical factors. That's the typewriter of the future, in 1970.
I love the typewriter of the future. Test strategies were key. That's how you tested a football helmet in 1912; if you're looking for direct and tangible feedback, this is one of my favourite pictures.

So, test strategies. There was already some automation in place around some of the green-screen parts of the application. When I got to the mainframe team, I decided we needed to invest more, and luckily we were able to make decisions around investment as a group. So we invested much more in test engineering: not so much in functional testing, because these systems are relatively stable and relatively strong, but in the risky areas, like integration. We had a pricing system we needed to integrate with that was very challenging to integrate with, because pricing and risk systems are finicky. We also integrated with some of the new systems, so all the new interactions with the website were automated. And lastly, we supported a great deal of UAT, what people traditionally think of as UAT. In the agile world we think about business testing early and often; we think about testing individual stories and features. But a big part of this was testing the processes and the people and the readiness for production: could we run the company on this new platform and this new integration? So we invested a lot of time in UAT. We gave automated testing tools to business testers and let them run tests themselves.
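The kind of table-driven check we handed to business testers can be sketched roughly like this. This is a minimal, hypothetical stand-in: the real program used Concordion, a Java tool, driving the actual systems, and every name and pricing rule below is invented for illustration.

```python
# Hypothetical sketch of a table-driven business test. The premium calculator
# is a stand-in; in the real program this would call the system under test
# through its integration points.

def quote_premium(vehicle_value, driver_age):
    """Stand-in pricing rule (invented for illustration)."""
    base = vehicle_value * 0.05
    if driver_age < 25:
        base *= 1.5  # young-driver loading (assumed rule)
    return round(base, 2)

# The scenario table is the part a business tester maintains and reruns.
SCENARIOS = [
    # (description,          vehicle_value, driver_age, expected_premium)
    ("standard driver",      10_000,        40,         500.00),
    ("young-driver loading", 10_000,        21,         750.00),
]

def run_scenarios(scenarios):
    """Run every row and report pass/fail, like a tiny Concordion table."""
    return [(desc, quote_premium(value, age) == expected)
            for desc, value, age, expected in scenarios]

if __name__ == "__main__":
    for desc, passed in run_scenarios(SCENARIOS):
        print(f"{desc}: {'PASS' if passed else 'FAIL'}")
```

The point of the split is the division of labour: business testers edit the table, the automation layer does the driving, and that's what lets non-technical people run and extend tests themselves.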
We let them record tests and say, "we found a defect, I'm going to show it to you," and share that around. We let them feed back through those testing tools into our test automation suite, so the things they were learning about the process, and about fitting it onto the new software, fed back into the test cycles. We also used UAT as a way to see whether the processes were working in correct alignment with the systems; we had to bring those two back together, and we had to see how usable the systems were and how people could work with them. We also supported training. We had to train 5,000 people in a month, so we used our automation engineering to help support training: getting environments ready, getting data ready. And we put a lot of data into those systems. We believed right from the get-go that the best testing is still done by people, by experts who are business people and experts who are technical people; automation supports those efforts, it doesn't replace them. You can bring up a high level of automation and then bring people up to a much higher level of testability and testing, because people are very good at what they do. People are going to be the best judge of fitness. The business environments were turned on as early as we could. This was painful, extremely painful, with a mainframe system and batch systems, but we made the effort and it was worth it in the long run by a long shot. And lastly, we used exploratory testing techniques in the large. We did mission-based testing for parts of the systems when we could, and we taught exploratory testing techniques to business people. If I were going to teach skills to high school students, I would teach them exploratory testing techniques and sic them on airline websites; let me tell you, the things they could record on Facebook would be fun.

Defect management was a value that we set early. We realized that changing a business process, or changing the way people worked with the system, is a good solution to a defect or a gap; it isn't always a technical solution. So we worked on that together, and I would say over half the defects we resolved were resolved through business process changes or training changes. If I could have recorded more metrics about that on the program, I would have. What we learned is that business decision makers became very sensitive to what was possible in the system, and vice versa; there's a virtuous cycle there. So instead of asking how we were going to fix something, we had really good conversations: "you can probably fix that, am I right? Or should we change the process, or should we adjust together?" Very quick, very meaningful conversations, because we understood each other's worlds much better. And lastly, the solutions were managed jointly with them. It wasn't a matter of saying "we got the bug solved, it's moving up through the stack"; we resolved it by making sure people could use the system, could be trained on it, and that we could manage our business processes on it. That was very important. It was the best triage experience in 25 years for me.

Environments. I'm going to pick up the pace here. Mainframe build pipelines don't exist; they're very difficult to do. We started with a very big release. We released early to production so we could make sure things were working. This was new territory for us, releasing in the large: we released 47 systems into production over a two-week period, over a million lines of code. The next thing we did, right away, was plan the next release and plan where we could automate, based on the experience we'd just had. We found places where we could deploy automatically and configure automatically, and learn from that, and we started to get repeatable configurations.
This had all been done manually, so we started to take steps in the right direction. You couldn't do it in the large, but we could start stepping in that direction, and we learned from our manual processes. The manual process is your friend; you don't have to automate everything, and you should look at those processes first. Think about improvement as your goal: continuous improvement in these systems and in your processes, analyzing, measuring work in progress, deploying it, and then investing in automation in the right places. These are big systems to bring into a continuous delivery world, so how do you start? Start in the small: look at the manual processes and get very good at them.

And testability. That's the London double-decker bus tilt test. Now that's feedback for you, baby.

CD investment. We spent a lot of time here, and I learned something about language and understanding. The team wanted to replace a key section of code that was funky, no other way to put it; we had a lot of technical debt in one area. They wanted to spend a lot of time improving it, on a very tight schedule. Out of a four-month test-development window, I was going to take the best developer we had and turn him loose for six weeks on this problem, and it was not a decision I was comfortable with, because of the tight time frames; so I didn't make a decision. Then the lead developer came up to me and said: well, we're going to deploy it, it should work for most of the upcoming systems we're going to be working with, and we'll have new features there, but we don't have to turn the features on right away. And I went "bing": I realized what they were talking about was feature toggling, and I'd been listening with the wrong ear, a prejudiced ear if you will. Once I realized they were describing feature toggling, it was suddenly an easy decision for me. This is a good practice. It had been translated into another system, another way of thinking about the problem and attacking it, but it was the same principle. At that point you say yes to it, and then you put plan B in place to make sure you still get to production. That's something I learned about listening in an environment I was unfamiliar with; it was very difficult at first.

We made integration testing easier. We got the QAs and devs to work together. Logs, integration messages, data elements: we started to expose them and make them easier to get at, and we learned to trigger individual batch commands. If you've ever worked with batch systems, they're the devil, and trying to test batch commands is very, very difficult, so we learned how to trigger them independently. What we did was ring-fence the system, the integration points, and how we worked with the system, and we put testability all around that. We got a handle on this system, and then we started to spread out into the other systems. We attacked it that way.
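What that lead developer described, shipping the rewritten code dark and turning it on later, is feature toggling. Here's a minimal sketch of the idea, with every name and pricing rule invented for illustration; on a COBOL system the toggle might live on a control record rather than in a config map.

```python
# Minimal feature-toggle sketch (all names hypothetical). The risky rewrite
# ships switched off, so deploying it and activating it become two separate
# decisions.

TOGGLES = {
    "new_pricing_path": False,  # ships dark; flipped on once proven
}

def is_enabled(name):
    return TOGGLES.get(name, False)

def legacy_pricing(policy):
    return policy["base"]        # stand-in for the existing path

def new_pricing(policy):
    return policy["base"] - 5.0  # stand-in for the rewritten path

def calculate_price(policy):
    """Route through the rewrite only when the toggle says so."""
    if is_enabled("new_pricing_path"):
        return new_pricing(policy)
    return legacy_pricing(policy)

if __name__ == "__main__":
    policy = {"base": 100.0}
    print(calculate_price(policy))      # old path: 100.0
    TOGGLES["new_pricing_path"] = True  # the later, separate decision
    print(calculate_price(policy))      # new path: 95.0
```

The design point is that the schedule risk and the code risk are decoupled: the deployment can happen on time even if the new path isn't trusted yet.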
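Triggering one batch job independently can be done several ways; one documented route on z/OS is the FTP server's JES interface, where an uploaded JCL deck is submitted as a job. A rough sketch, with the host, credentials, deck, and program name all invented; this is not a claim about the mechanism the program actually used.

```python
# Hypothetical sketch: submit one ring-fenced batch job via the z/OS FTP
# server's JES interface (SITE FILETYPE=JES). Every name here is an
# illustrative assumption.
import ftplib
import io

def build_jcl(jobname, program):
    """Compose a one-step JCL deck that runs a single program."""
    return "\n".join([
        f"//{jobname} JOB (ACCT),'ISOLATED TEST',CLASS=A,MSGCLASS=X",
        f"//STEP1    EXEC PGM={program}",
        "//SYSOUT   DD SYSOUT=*",
    ])

def submit_job(host, user, password, jcl_text):
    """Upload the deck as a job submission; returns the server's reply."""
    ftp = ftplib.FTP(host)
    ftp.login(user, password)
    ftp.voidcmd("SITE FILETYPE=JES")  # uploads now go to JES, not a dataset
    reply = ftp.storlines("STOR TEST.JCL",
                          io.BytesIO(jcl_text.encode("ascii")))
    ftp.quit()
    return reply

if __name__ == "__main__":
    print(build_jcl("TESTJOB1", "PREMCALC"))
```

Wrapping a single job like this is what "trigger them independently" buys you: a test can run one step in isolation instead of waiting for the whole nightly schedule.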
I've got two minutes, so: what did we learn? One thing I learned is that mainframe folks want into the continuous delivery world, and we need to invite them in, work with them, and be positive about it. Don't evaluate legacy systems as legacy; evaluate whether they are fit for purpose and whether they're worth the investment, and if they are, bring them in. Recognize CD practices when you see them, even if they don't look like CD practices. Think about them differently, think about your principles, go back and question the way you think about things, and adapt good practices into new areas. That's very important. Mainframe people are overflowing with ideas. It's a very conservative culture, but when you go in and start to work with them, you discover they're keen to try things; they just haven't had the opportunity. They have lots of good ideas, and they have a vocation toward their craft and their systems, like everybody else here. You have to encourage that, find those people, and bring out those ideas in them. It's extremely important. You need clear business outcomes when you're working with these large enterprise systems. You can't just say "I want to improve systems"; you have to have a clear, measurable goal and align to it. Then you can get investment, and you can get the virtuous cycle going. And lastly, if you're going to start anywhere with these systems and with CD practices, I strongly encourage you to start with test practices. Certainly things around build, configuration, and deployment are very important, but I would treat those as work-in-progress improvements and invest heavily in test engineering right out of the gate. I think it's the best approach, and it gives you the most bang for the buck. So I have a minute left, or two, is that correct?
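As an aside before the demo that follows: driving a 3270 green screen from test code can be sketched roughly like this. The scraping helper is pure Python; the driver function assumes the third-party py3270 package (a wrapper around the s3270 emulator) and a reachable host, and every field position and name below is a made-up illustration.

```python
# Sketch of green-screen (3270) test automation. screen_value() is pure
# Python; create_policy() is an untested sketch that assumes py3270/s3270
# and a live host.

def screen_value(screen_rows, row, col, length):
    """Pull one field out of a captured 24x80 green screen by position."""
    return screen_rows[row][col:col + length].strip()

def create_policy(host, company, product):
    """Drive a live terminal session (assumed py3270 API; hypothetical fields)."""
    from py3270 import Emulator          # pip install py3270; needs s3270
    em = Emulator(visible=False)
    em.connect(host)
    em.wait_for_field()
    em.fill_field(4, 20, company, 30)    # row/col/length are assumptions
    em.fill_field(5, 20, product, 10)
    em.send_enter()
    status = em.string_get(22, 2, 40)    # read the status line
    em.terminate()
    return status

if __name__ == "__main__":
    # Scrape a captured screen instead of a live session:
    screen = [" " * 80] * 24
    screen[4] = (" " * 20 + "ACME INSURANCE PTY").ljust(80)
    print(screen_value(screen, 4, 20, 30))
```

The positional scraping is why green screens make such fast, durable test endpoints: fields live at fixed coordinates, so reading and asserting on them is cheap.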
Two or three minutes. Well, if someone's got a question, put your hand up; meanwhile I'm going to do a quick technical look here. One of the perceptual filters I had was around GUI automation. I don't like GUI automation; I think it's a bad thing to do. That's just my prejudice coming in. What we found out is that green screens are durable, green screens are very fast, and they have a lot of strengths for GUI automation. So you're going to see a little test running here; there are a few little mistakes in the demo, but that's okay. The green screens are remarkably durable and remarkably quick, and I found them to be an incredibly good testable endpoint. It's kicking off a little test here, and it's actually going through an amazing number of workflows: it's creating a new company and creating new contracts for that company. That generally takes an analyst about 10 to 15 minutes to do. What we discovered through this was that green screens are amazingly fast and amazingly durable; not just the systems themselves, but testing them is very fast. What's going on at the bottom is that jobs are running, it's waiting for the jobs to run, and then it's kicking off batch jobs in the background, which is quite remarkable. Because we had this capability, we used it to support UAT testing: we could set up 500 to a thousand policies any time we wanted for UAT to run against, and we could set up training environments with thousands and thousands of policies for people to learn from. We used Concordion. I'm going to stop it right there; this is just an example of some of the Concordion output. I was very surprised that GUI tests work so well in a mainframe environment. We didn't think they would; we put a lot of time into the test engineering and into working with the system to make it work properly.

When you look at those systems and that type of speed, you need to work with the strengths of the mainframe system. The testability and the testing of a mainframe system can be done in different ways, and the automation can be done in different ways. Normally I wouldn't attack a system through a GUI, but I learned that the programs and the GUI screens are very well aligned to each other; they're building blocks. You can take a program and a GUI screen, put them together, and build them into very virtuous blocks of testing. It was quite something. Anyway, I'm sorry I didn't set that up properly in the slide; I'm terrible at slides, obviously, and I've run over my time. If you want to catch me in the hallways to talk, I'd be happy to talk with you more. It's been a pleasure talking to you today, and I hope to see you soon. Thank you very much.