For our next talk, I'd like to welcome to the stage Laurel Dixon Bull and Randy Langehenig. Laurel is the IBM Offering Manager for UrbanCode Deploy. She manages the business end of the product, focused on pricing and packaging, while also keeping close to the product's technical capabilities and client feedback. In her 17 years at IBM, Laurel has worked in a variety of areas, including internationalization and localization, the client early access programs, and software release management. Randy started his career at IBM right out of college, in the testing organization for the AIX development team, and has held different roles inside IBM, from developer to testing lead to client-facing technical specialist, among others. Randy's area of focus, though, has been DevOps, including build automation, automated testing, and continuous integration and deployment. He has been applying these DevOps principles with the clients he works with for over 15 years, and he's passionate about seeing folks achieve success with their business goals. Today, Laurel and Randy will share with us how to transform legacy IT into a z/OS DevOps team. Now, you may be thinking to yourself: when I came to KubeCon, I did not expect to hear about the mainframe. This reminds me of a time I was speaking with an IT leader at a large enterprise, and they shared with me, "My organization is a museum of technology." And I think this is true for many large organizations: they have many different exhibits, technologies from different eras, that all coexist within the same organization. So for many of you, I'm curious: how many of you have Kubernetes, but you also have virtual machines, and bare metal, and z/OS? These teams need to work together, and sometimes that can be very, very difficult across those disparate types of technology.
So today Laurel and Randy will share the core principles and fundamental concepts behind how to transform a legacy team into a modern one, and they'll even give us a live demo of how GitLab Ultimate on z/OS works. Let's check it out. Thank you for joining Randy and me today for this talk about GitLab Ultimate for z/OS. The title of our talk is "How to Transform Legacy IT into a z/OS DevOps Team." Randy and I work in the IBM DevOps team, and our mission is to help customers deliver quality software faster. Randy is a DevOps technical specialist at IBM, and I am a product manager in the IBM DevOps team. In case you're wondering, z/OS is the operating system used on mainframe computers. You might be wondering: why are we talking about z/OS at KubeCon? Do they have anything to do with each other? Well, yes, they do. Most, if not all, organizations are investing in hybrid platforms and solutions. So most organizations have a combination of legacy and more modern technologies and platforms, and this has been accelerated by the pandemic. Everything has changed since the pandemic: more work is done online, more buying is done online, and we have very little tolerance for slow or clunky systems for purchasing or working. Consequently, agile teams are working hard to go faster and deliver changes, and legacy teams are struggling to keep up. GitLab offers a DevOps platform that helps the agile teams work faster and the legacy teams become more agile. This year, IBM and GitLab partnered to add z/OS features to GitLab Ultimate. IBM, of course, brings a lot of experience and skill in mainframe, or z/OS, technology. And GitLab obviously has a lot of experience in software delivery platforms, with key collaboration, integration, security, and delivery capabilities. To give a quick refresher on why GitLab is so important to DevOps teams in general, take a look at this summary from a Forrester study about how a DevOps platform like GitLab solves common problems for businesses.
In the Forrester study, they found that more than half of the customers surveyed use six or more tools in their software delivery toolchain. And over 70% said that end-to-end governance or visibility is a challenge, and no wonder, with six different tools. And understandably, almost 70% said that handoffs between teams slow down product delivery. The reality is that organizations could improve their software delivery speed and quality by reducing the complexity related to having multiple tools: one for the code repository, several for tests, collaboration, deployments, et cetera. Because when you have all these tools, many people are involved just to maintain them, and the integrations between the tools can be tricky and awkward. This complexity also drives up cost while it slows down delivery. Now, imagine that we're talking about a z/OS software team. If you're new to the topic of z/OS, keep in mind that 70% of the world's data still resides on the mainframe. And if you use the z/OS operating system yourself in day-to-day work, you know it's a legacy system, but it still does most of the processing and transactions for banks, retail, and healthcare systems today. And if you spend more time with modern technologies, you might not realize that the mainframe can process over 30 million transactions a day. That's even more than Google. And IBM's latest mainframe hardware can process up to 146 million transactions per second. So that's why it's still around and vital to the global economy. And here's a fun fact: the Z in z/OS means zero downtime. Zero downtime. It's not only incredibly fast, it's stable. Hardware-related faults amount to less than three seconds a year, or seven nines, that's 99.99999% availability. It's really the holy grail for availability. And you don't have to replace the Z machine every year; the machine hardware is designed for redundancy, with very few operating system updates.
Another fun fact is that 90% of credit card transactions go through Z systems. For example, if you pay bills online, you use a nice front end that's user-friendly and modern-looking; it might be personalized with your name and your account information. But the actual transaction processing that pays the electric bill, that's done on the Z back end. And this is true almost anywhere. If you order a Porsche or a Corolla on a cool web front end, and you make all your selections and pick out your accessories, maybe you use a marketplace, and then you can follow the delivery of your new car down to the hour. You're using the front end, but we know what system is transacting all that data: that's the z/OS system. So through the partnership between IBM and GitLab, IBM has added functionality to the GitLab platform for Z developer teams. The biggest contribution is the build system for z/OS languages like COBOL and PL/I. This is the build tool called Dependency Based Build, or DBB. You plug DBB into the GitLab platform, and you have the ability to adapt GitLab to the needs of the z/OS team for z/OS application builds. GitLab for z/OS also takes care of the translation for z/OS languages from the Git repository and adds a CI runner for Z. And we have more features underway. There are also easy integrations with other IBM Z tools, like those for tests, deployments, and development, if you want them or if you already own one of these specialist tools. So why is GitLab so valuable, and specifically GitLab for Z, for a Z team? Well, GitLab addresses major pain points common to z/OS software development. First of all, those teams are generally isolated from the rest of the application teams. The Z team works separately, and so many times when I visit customers and I meet an application development team and I ask, "Who runs your mainframe dev team? I'd like to talk with them," nobody knows. It's not always the case, but a lot of times it's true.
They're so separate, they don't know each other. DevOps really should be bringing everyone together. You may think you're doing DevOps, but as we can see, that often isn't true, and that's why we're offering GitLab for Z, to make it possible. Another pain point for the Z development team is that they use separate, non-agile tools. The Z application team typically uses legacy tools, a lot of times green screen, and these legacy tools are not agile, so changes to Z code are actually seen as risky. The Z platforms run enormous applications. Now, keep in mind that source code management is the core of everyone's pipeline, and in the z/OS application development world these tools are decades old. The z/OS legacy tools have become a hindrance to the Z dev teams, and it's also becoming clear that they cannot move as fast as the distributed teams. So having a modern source code repository like Git gives the Z dev team all the code at their fingertips, not just the code being changed. And once you have a more agile SCM, and with a build tool like DBB that can build only what's changed, not the entire application as is done with the legacy tools, development becomes much more agile. And then there's this opportunity to move faster, because you know you can only move as fast as your slowest contributor. And then finally, another pain point is that z/OS development skills are becoming rare. And this is something to consider: z/OS developers are quickly aging out and retiring. Many opted to retire in the wake of the pandemic. Before the pandemic, the average age of a Z developer was over 55; it's under 50 now, and by the end of the year we expect it to be fast approaching 45. So we're losing a lot of the older devs, and we're quickly onboarding replacements, and onboarding to a modern tool like GitLab is going to be a lot easier than onboarding to a legacy tool. And you can move your developers around now that you have a common platform.
They don't have to be so siloed, and the silos won't be as stark once you can move people around on the same DevOps platform for the enterprise. The pandemic has also accelerated interest and investment in hybrid platforms and solutions to support the business flexibly. Hybrid systems provide the security, stability, and scale of the mainframe along with the agility of the front-end systems. So development really needs to be hybrid as well, and there really should be one platform to support all developers. So these things, the siloed nature of the Z team, the tools they use, the rarity of Z skills, all combine to slow down delivery, introduce risk, and keep special Z skills in a separate team. Those are the big problems. A DevOps platform like GitLab reduces the toolchain complexity, replaces expensive, slow legacy tools, and unites the Z developers with the distributed developers. Now they can both be more agile. So with that, now that I've given you a bit of runway and context, I'd like to give Randy a chance to talk. Randy is going to give you a demo of GitLab with Dependency Based Build to illustrate the z/OS build capability, and a couple of other things. Randy, I think you want to show a chart first and then you'll go into the demo, right? Yes. Thank you, Laurel. All right. Okay, in this demonstration, I'm going to be playing the role of a mainframe developer responsible for an application called Catalog Manager. Our team is working to modernize the way we develop our Z code to adopt more agile methodologies, moving away from the waterfall approach we used to follow. To help us with this modernization, we are using GitLab, as you can see on the slide, to drive our development, including using GitLab and Git from a source code management perspective, as well as using the really nice pipeline capabilities from a CI/CD perspective, as you see on the slide.
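For readers who want to picture what such a pipeline definition could look like, here is a minimal, hypothetical `.gitlab-ci.yml` sketch. The stage names, runner tag, variables, script paths, and commands are all illustrative assumptions, not the configuration used in this demo.

```yaml
# Hypothetical .gitlab-ci.yml sketch for a z/OS application build.
# Stage names, the runner tag, variables, and commands are all
# illustrative assumptions, not the demo's actual configuration.
stages: [build, test, package, deploy]

build:
  stage: build
  tags: [zos]          # routes the job to a GitLab runner on z/OS
  script:
    # DBB impact build: compile only changed programs plus dependents
    - $DBB_HOME/bin/groovyz build.groovy
        --workspace $CI_PROJECT_DIR --application CatalogManager
        --hlq $BUILD_HLQ --outDir $CI_PROJECT_DIR/out --impactBuild

test:
  stage: test
  tags: [zos]
  script:
    - ./run-zunit.sh   # hypothetical wrapper that runs ZUnit tests
  artifacts:
    reports:
      junit: zunit-results.xml   # surfaces in GitLab's test report UI
```

The `junit` report keyword is how the ZUnit results mentioned later in the demo can show up directly in GitLab's merge request and pipeline views.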
We're also using some great IBM Z DevOps tooling to help us achieve our goals of delivering quality software at a much quicker pace than we had in the past. Let's go take a look at this new modern environment. Let me share my screen. All right. For this demo, I'm going to be working, again, with this application named Catalog Manager, as you can see here. This is my project within GitLab, and this application is written in COBOL and uses CICS, or Customer Information Control System, to handle the transaction processing on the z/OS operating system. Now let's take a look. One of our testers found a bug in the program and opened a new issue, as you can see here. Let's open up that issue and take a quick look. All right. We can see that this issue's been assigned to me, and I've got a milestone as well as a due date here and some labels, all the great features within GitLab that we all love. You can see in the description of the bug he's provided a really nice set of screens to show me. This is the 3270, green-screen view of the application, and you can see that as he gets to our item, this item 0010, and clicks on it, the description displayed for this item on this page is showing a debug message. This is what we're going to fix in this demonstration today. So what I've done is I've actually already created a merge request for this bug. Let's go to that merge request really quick here, and you can see that I've already committed some code to fix this problem. Let's look at that code change really quick. Before I do that, though, I do want to show you IBM Wazi Analyze. If you are a new developer working on a program like this one, Catalog Manager, using Wazi Analyze we provide a developer with a visual of the interactions between the programs, so you can see the call chain that's happening with your program.
And I can see here, before I did my changes, that we were making a call to an assembler program called WAIT, and this isn't actually a good thing. This looks like a bug to me. So having this visualization of these Z-related programs is very helpful for our developers, especially as we onboard new developers, as Laurel was mentioning earlier. Okay, so let's go back and look at our code change. I'm going to open this up in the Web IDE within GitLab to show you the changes that were made. And if we scroll down, you can see that there was a bit of code I had to change to fix this bug, and here it is, right in this area. You can see that it was making a call to that assembler program called WAIT that we saw earlier in the Wazi Analyze view. Now I've commented out these lines to correct this bug, I believe, and I've run my pipeline against my feature branch to do some validation of those changes. One thing I'd just like to point out here from a mainframe developer perspective: a lot of our clients use an intelligent IDE. An example is IBM Developer for z/OS, which understands COBOL code and PL/I code. It allows you to view your COBOL code and your copybooks, if you have those, side by side as you're doing your development. It's really quite nice. And a lot of our clients are using Visual Studio Code to do that same type of work from an IDE perspective, and of course that's easily done as you integrate with GitLab and with our IBM tooling. So this looks great. Looks like this was committed for our merge request. Let's go back to that merge request really quick. And for the purpose of this demo, of course, I could have somebody on the team approve this change; in this case, I'm bypassing that. I'm going to go ahead and mark this merge request as ready so that we can take a look at a running pipeline and what's going on behind the scenes.
I'm going to go ahead and click the merge button here. Again, there would likely be somebody else on the team that would do this, but I'm going to do it here for us so we can take a look at a running pipeline. Right, so this will begin to run our pipeline against, in our case, the main branch (formerly master), which is our development branch. So we begin running the pipeline there. Let's go ahead and take a look at it as it runs. You can see now it is running the build phase of this pipeline, and this is really where IBM Dependency Based Build comes into play, as Laurel was mentioning earlier. What it's doing is initiating what we call an impact build, or sometimes it's referred to as an intelligent build. I say that in that it detects the code that you've modified or changed since the last commit to development, and it will compile that code along with any other program that was dependent upon the code you've changed. So it's much faster than what we used to do in the past, because in the past, as a developer working on an application that included, let's say, a hundred COBOL programs, it would have to compile all of those, and it would take quite a bit of time. You can see that our build has already completed here, which is fantastic. So it's an intelligent build. Along the way, let's take a look at that build process where DBB was called. You can see the output here. As we scroll up in this output log, you can see that the build finished in a clean state, which is good to see. One of the things, as we're moving faster and becoming more agile, we want to do that, but we also want to ensure that we have good-quality code. And for the mainframe developers, IBM has the ability to run ZUnit test cases, very similar to what you would do on the open systems or cloud-native side, where you have NUnit or JUnit type tests that run automatically at build time.
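The impact build Randy describes can be made concrete with a small sketch. This is my own toy model in Python, not DBB's implementation: given the set of changed files and a map of each program's dependencies, it computes the rebuild set as the changed items plus everything that transitively depends on them.

```python
def impact_set(changed, dependencies):
    """Return the programs to rebuild: everything in `changed` plus all
    programs that depend on them, directly or transitively.
    `dependencies` maps a program to the things it depends on
    (called subprograms, copybooks, and so on)."""
    # Invert the graph: for each item, which programs depend on it?
    dependents = {}
    for prog, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(prog)

    to_build = set(changed)
    frontier = list(changed)
    while frontier:
        item = frontier.pop()
        for parent in dependents.get(item, ()):
            if parent not in to_build:
                to_build.add(parent)
                frontier.append(parent)
    return to_build

# Toy application: four programs and one shared copybook.
deps = {
    "ORDER":   ["CUSTCOPY", "PRICING"],
    "INVOICE": ["CUSTCOPY"],
    "PRICING": [],
    "REPORT":  ["INVOICE"],
}
print(sorted(impact_set({"PRICING"}, deps)))  # -> ['ORDER', 'PRICING']
```

A full build would recompile all four programs every time; the impact build touches only what the change can affect, which is why this approach scales to applications with hundreds of programs.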
We're doing the same thing here from the ZUnit perspective. And again, it's intelligent: it knows what code you changed and, based on that change, runs the corresponding unit tests. So we can see the results, and another thing that's nice here is that we transform those results into a JUnit report format that you're able to see within the GitLab interface, which is really quite nice. All right, so this is fantastic. Now let's go back to the pipeline. We can see now that it's running the analysis phase, and we can see that it's running a code quality check. Essentially, what's happening here is, again, another level of quality and completeness in our work. It's checking to see whether the code you changed meets rules that your company has set up to ensure quality. These rules are specific to COBOL or PL/I programs out of the box, but you can write your own rules as well. A few example rules: one is to avoid language elements that are obsolete in Enterprise COBOL 5.1. You can check for that, and if you find those, you can alert the developer of the situation so they can remediate it. Another example of a rule is avoiding a SELECT * in your EXEC SQL code. If you had that in your code, we'll identify it and alert you, and of course the rules have different severities. So if it's a high severity, we'll fail the build at that point so that you can remediate the issue very quickly. A very quick feedback loop for the developer, again ensuring really good-quality code. Then, of course, you can see my pipeline's already run; that's how fast it is. It's already completed the packaging phase. In packaging, what we're doing is integrating with UrbanCode Deploy to package up the artifacts of our build, creating an immutable artifact that's pushed, in this case, to Artifactory, which is a definitive asset repository, and it also integrates with UrbanCode Deploy to make it aware of this new version.
The version label is typically the build tag, or the pipeline IID in this case, so you have really nice traceability all the way back to the build, which is fantastic from that perspective. You can see this is using a command-line client for UrbanCode called buztool to integrate with and push these artifacts over to those different tools: Artifactory and, of course, UrbanCode Deploy. And then, to finish this all off, the remaining steps are just to do a deployment. So we have triggered an automatic deployment, in this case into our integration environment, using the integration with UrbanCode Deploy. This integration environment allows us to do some initial testing of our changes integrated with other applications, just to make sure there aren't any integration issues that need to be resolved. In this case, we've done our testing and everything looks okay. So we've set up our pipeline in GitLab to allow us to do a manual step here of initiating the deployment into our acceptance test environment. I'll go ahead and click that button. The acceptance test environment is actually a much more production-like environment with a lot more data, so we can do additional testing there before we go into production. Now, if we go back into UrbanCode Deploy, this is the UrbanCode Deploy web console. Let me log in to it really quick. We're going to take a look at the application Catalog Manager. It leverages a really nice application model, so the first part of the model I can see is the target environments. And then, of course, we have our components here that are being deployed, and it tracks inventory, what's deployed where, which is really, really nice. Let's look at the history. It's got a rich history of everything that's taken place. We can see we've added another layer of governance here, with an approval for this deployment into our acceptance test environment.
Let me go ahead and respond to that so we can kick off the deployment. And when I do that, you can see this interface gives us a really great audit trail. We can see the who, what, when, where, and how details at the top. We can see who approved it. And then, of course, if we go down further, we can actually see the deployment process as it's running, and you can see how quick it is. This is so great, because in the past, with our legacy solution, from a mainframe perspective, when we needed to promote a change for our application from dev to test, we couldn't just deploy our changes. We had to deploy all of the programs and promote all of them to test, and then promote all of them to production. That was a very time-consuming process, and risky, as Laurel was mentioning earlier. But you can see here, there's no need. It's really simple and much more agile. We're able to deploy our updates into the environment, and you can see it's complete. In this case, it's doing a phase-in of the CICS application so that it's updated to include the new changes we've just put in place. So with that, I'm going to go back to Laurel so we can see the end result of these changes. Randy, thank you. That was great. So we are now back in the issue that was originally opened, right? A screenshot from the original report. The person who originally reported the problem now sees the good result: you don't have that debug message in there anymore, and the catalog entry is perfect. So that was a great demo. Thank you for that. What you really showed was how the pipeline leveraged Dependency Based Build to perform an intelligent build of the application, including quality testing with ZUnit and code quality checks to ensure that the code meets the quality standards of the corporation or the enterprise. And you also showed how you can perform incremental deployments quickly to the z/OS LPAR on the mainframe system using IBM's UrbanCode Deploy.
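The SELECT * rule shown during the demo is easy to picture as a scanner over the source. Here is a toy illustration in Python, not IBM's actual rule engine; the rule message and severity label are assumptions.

```python
import re

# One illustrative rule: flag SELECT * inside EXEC SQL blocks.
# DOTALL lets the pattern span the multi-line EXEC SQL statement.
SELECT_STAR = re.compile(
    r"EXEC\s+SQL\b.*?\bSELECT\s+\*", re.IGNORECASE | re.DOTALL
)

def check_select_star(cobol_source):
    """Return a list of (line_number, message) findings."""
    findings = []
    for match in SELECT_STAR.finditer(cobol_source):
        # Convert the match offset into a 1-based line number.
        line = cobol_source.count("\n", 0, match.start()) + 1
        findings.append((line, "HIGH: avoid SELECT * in EXEC SQL"))
    return findings

src = """\
       EXEC SQL
           SELECT * FROM ITEMS WHERE ITEM-ID = :WS-ITEM
       END-EXEC.
"""
print(check_select_star(src))  # -> [(1, 'HIGH: avoid SELECT * in EXEC SQL')]
```

In a pipeline job, a non-empty list of high-severity findings would exit non-zero, failing the build and giving the developer the quick feedback loop described in the demo.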
So this is an exciting new world for our mainframe teams, using more modern tooling to meet goals of quicker delivery with great quality. We'll end the presentation with a suggestion that you listen to the replay of a webinar that a colleague, Chris Trowbridge, did with a customer from OneMain Financial, where they talked about using GitLab with DBB for IBM Z applications. It's a really interesting webinar, and there's a lot of discussion about the cultural differences and how to introduce new modern tools to legacy development teams. So with that, I'll end the webinar, and I want to thank you all for joining us today. Thank you.