Hi, everybody. We're going to talk about UPS. You've heard the DHL story and a lot of other stories; now you'll get the UPS story.

Here's who's on stage today. My name is Mark, and I'm an enterprise architect at UPS. With me are Rich West and Jignesh; they're product owners and managers at UPS, responsible for the application teams that have been onboarded onto OpenShift at UPS.

One important thing to understand is that UPS has a long strategic partnership with Red Hat. We've been working with Red Hat Enterprise Linux for a long time, and there was an interesting synergy around our adoption of the FuseSource stack: we had purchased Fuse for our integration problems, and then Red Hat acquired FuseSource, so it all came together. Another company presenting here told a similar story: they knew the FuseSource stack and were using its technologies to automate the deployment of their integration software. At that time that was a project called Fabric8, which could do things like deploy an entire application CI/CD stack. When FuseSource came together with Red Hat, there was a synergy where OpenShift covered some of that same functionality, and we were already familiar with that stack.
We'd been visiting a lot of OpenShift community events, and we started seeing an alignment: OpenShift covered some of the same functionality we were looking for. We wanted to create capabilities that accelerate the delivery of applications, get the infrastructure out of the way of application teams, and provide really robust runtime features like high availability and load balancing, which we knew we needed. Then, as we started using OpenShift, we saw there was really more there: the possibility to create what we think of as a private cloud, and a runway that gets us onto a hybrid cloud, which aligns well with our needs.

One question was: which would be the first application to justify that investment? For us it was this Edge platform and an application called SIP. So I'll turn it over to Rich.

Thank you. I'm going to show you a quick video about the application SIP, which is part of the Edge program. This is not a trailer for Avengers.

[Video] It starts with data, and UPS has one of the strongest data networks in the world. Today we're creating new ways to put it to better use. An example is Edge, a suite of more than 20 initiatives in development that work in unison to optimize operations. It leverages data to assign tasks and minimize driver and inside-employee overtime for on-road, PM-sort, and AM-sort operations. To get an idea of how Edge works, let's visualize the PM sort. At night, our operations tackle many tasks, some simple, others complex. With data we collect throughout the day, a central computer constructs a detailed, dynamic operating plan that breaks assignments down further into tasks for a variety of work groups to perform. One of those is the sort plan. In near real time, Edge analyzes and optimizes staffing resources and prioritizes next-task instructions for
employees unloading and sorting packages. For example, before vehicles return to the facility, Edge begins analyzing data and prioritizes which cars should be unloaded first. If others arrive with higher-priority packages, it modifies the sort plan and issues updated tasks wirelessly to unloaders and the management team. Edge also takes advantage of data to develop an operating plan. This plan separates tasks among employees, balancing workload and minimizing overtime, which is particularly critical when reviewing package exceptions. Throughout the day, some packages are not delivered for various reasons; Edge analyzes and divides the entire amount of work among available resources. As new exceptions arrive, the workload is rebalanced and communicated to employees in real time. Through optimization, Edge minimizes overtime, balances workloads, and improves quality. It's this dynamic use of data that creates greater value in the UPS network. Data feeds everything in the world, and UPS is using it to revolutionize its operations network, driving costs lower and margins higher. [End of video]

So SIP is one of the biggest initiatives in the Edge program, as we just mentioned. We synthesize information from over 40 systems, and based on the information we gather, we boost the speed of decision-making within our operations.
It also fuels our smart logistics network strategy. Because of our OpenShift platform, we can deliver as business needs change, and it also improves overall customer satisfaction. The information we provide to our supervisors is on a mobile device, so we can alert the supervisor to the area needing the most attention. If there are more packages in one area of the building than another, the supervisor can easily, with the information we're providing through this platform, make the decisions that get packages through the system as quickly as possible and improve customer satisfaction. With that, I'll turn it over to Jignesh to talk about our DevOps transformation journey.

Thank you, Rich. Good afternoon, everybody. I know this is a busy slide, but what we wanted to highlight with it is the ecosystem we put together that helped us reach our CI/CD goals: leveraging DevOps practices to minimize the handoffs between development team members and operations team members in getting code from a development environment to a production environment, with the ultimate goal of delivering software that meets the business's and the customers' needs. In the process of creating that ecosystem, and more importantly changing the culture (a lot of other partners talked about changing the culture, which is a very big thing),
we realized it was very important to have a platform that we can rely on and count on, and that integrates well with all the technologies and tools listed on the slide. Sure enough, OpenShift, with Red Hat's support, came through, and we were able to deliver this product called SIP that we just talked about. With that, I'll turn it over to Mark, and he can walk us through the lessons learned while bringing OpenShift into UPS.

Yeah, the remaining slides are basically lessons learned. One of the main things: once we decided we were actually going to build out OpenShift, there was a long vetting process. We also did a bake-off; you've heard about very similar processes at other large companies, and it was the same kind of thing. There was an early round of vetting of the solution, but once we decided to actually build it, it took only about four months, with Red Hat's help, to get it in place. OpenShift went GA in January, and by April 14th we delivered a production cluster, which the SIP application team ended up deploying on.
At the same time, we had to automate it. Our strategy was very similar to what others are doing: putting it on physical infrastructure, bare-metal servers, which means we needed to automate the deployment of bare-metal servers with scripting automation like Ansible. We didn't really have that skill in house at the time, so we worked with Red Hat to help us deliver it. Today we have three clusters: we did a very similar thing where we broke out our dev, stress, and production clusters as three distinct clusters. Across those clusters we have about 4,000 containers running; production has 1,500 or so across two data centers. So from a deployment perspective, we were able to achieve a very high rate of deployment.

The next thing you realize, and you've heard it today, is that it's not just about the infrastructure: it's about building a practice. Our group, as an enterprise team, had to achieve some transformation in the organization, and we have the same kinds of cultural issues other large organizations deal with. We wanted to enable things in all these areas, like new architectures, so we had to come up with patterns and practices around microservices. The first of these we took on was microservices: what are the governance problems you encounter when deploying microservices? You can't do microservices without automation, so we had to pick a stack for deploying all our applications quickly. We selected Jenkins. Then, as soon as you start building Jenkins pipelines, the question becomes: what are our testing tools, and how do we enforce policy?
So we pieced together a stack of tools that allowed for that, and at the same time, where necessary, plugged in our existing tools. A lot of custom code ended up being written to attach to things like our change management systems (we have existing change management tooling) and our business continuity systems: we had to forward traces off to our existing BCC continuity systems. All of that had to be solved. The main point is that the scope is really much larger than you realize; if you're trying to transform a large organization, you have to cover all these areas.

Our team is also responsible for training, so we developed in-house training. We identified our targeted training groups: who are the users of the platform? One group is obviously developers who are building new applications, so we have training tailored just for them, which focuses a lot on CI/CD and the nuts and bolts of the underlying platform. But then there's targeted training for infrastructure and platform teams. If I'm a team that owns an enterprise deployment of a database, what does OpenShift mean to me? It's different: their view of the infrastructure is different, and they may not use the same deployment approaches. They'll still have a pipeline that drives their deployments, but they may be doing Docker builds directly rather than S2I builds. So we had to target the training and really put it all together so we'd have a good foundation for transforming the organization.

Another thing we had to do was measure ourselves, with some kind of model for measuring ourselves. This is from a GOTO conference in Copenhagen.
It's an adapted maturity model; we added some things their model does not include, like governance, which is important at large organizations. We adapted it and started using it to measure ourselves. So we ask: where do we start? Which of these tracks are we behind on? What are the barriers, and what can we achieve in the next year? It gives you a nice model for judging yourself.

Another thing we encountered was size and scale. Everything we do at UPS has to scale, and one of the limitations we found was on the ingress router side. As you start throwing huge amounts of HTTP traffic at the cluster, you find that the basic setup (a basic five-node setup with at least two infra nodes running the HAProxy routers) just will not scale once you really throw large volumes at it. That means adding multiple HAProxy router instances, assigning them different ports so they can live on the same infrastructure, and then putting a hardware load balancer in front that balances traffic across them. Another thing we found: once you set that up, if you're going to do Docker pushes into that same infrastructure (this was our test environment; it may not be necessary for test, but if you're going to push directly in), then the ports you open had better also be opened on the HAProxy router and on the hardware load balancer, because Docker requires the ports open all the way through. Just small things you don't know until you do it.

Sizing was another thing: how do we plan out the purchasing of hardware?
Again with Red Hat's help, we put together a kind of three-point estimation model: what are our best-case and worst-case volumes? Make sure you include things like efficiency; for a given transaction per second, figure out how much CPU and memory you're going to need, in millicores and memory. Then build a normal distribution and say: for 90% confidence, given these best and worst cases, I will need this many servers to handle production. That's where we eventually got to. It took some time, but this is a maturing process.

The other thing, and we've heard it already today, is that some other companies are building reusable code bases for Jenkins. We encountered the same thing: the pipelines all tend to have the same basic steps in them (login, build, deploy, verify, promote), and it really is in your best interest to capture that in a shared code base you can give back to application teams, something that is at least a basic setup. For us it's a little more than basic, because we took Gitflow, with its entire branching strategy, and captured all of that in a pipeline. We did some very clever things: for feature branches, we spawn new infrastructure, creating OpenShift projects dynamically based on check-ins to feature branches. Things like that are only possible on this infrastructure. You can either build your own or wait for ours, because we're trying to open-source ours; we're going through our own little legal review process to get our code base out.

The other thing was Jenkins, and tuning Jenkins for scale.
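As a rough illustration, the three-point sizing approach described above could be sketched like this. This is a hypothetical reconstruction, not UPS's actual model: the PERT-style mean and spread, the 70% efficiency derating, and every number in the example are assumptions made for illustration.

```python
import math
from statistics import NormalDist

def servers_needed(best_tps, likely_tps, worst_tps,
                   millicores_per_tx, cores_per_server,
                   efficiency=0.7, confidence=0.90):
    """Three-point estimate of peak transactions per second, sized so
    that `confidence` of an assumed normal load distribution fits.

    Hypothetical sketch: a PERT-style mean/spread stands in for
    whatever distribution the real model used."""
    mean = (best_tps + 4 * likely_tps + worst_tps) / 6   # PERT mean
    std = (worst_tps - best_tps) / 6                     # PERT spread
    z = NormalDist().inv_cdf(confidence)                 # ~1.28 at 90%
    planning_tps = mean + z * std
    # CPU actually usable per server, after the efficiency derating.
    usable_millicores = cores_per_server * 1000 * efficiency
    return math.ceil(planning_tps * millicores_per_tx / usable_millicores)

# Made-up numbers: 1,000-4,000 TPS range, 50 millicores per
# transaction, 32-core servers.
print(servers_needed(1000, 2000, 4000,
                     millicores_per_tx=50, cores_per_server=32))  # 7
```

The point of the efficiency factor is that you never get a server's full core count for transaction work; scheduling headroom, daemons, and burst margin all eat into it.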
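The per-feature-branch idea can be sketched as below. This is a hypothetical illustration, not the code base UPS is open-sourcing: `oc new-project` is the standard OpenShift CLI command, but the naming convention and helper functions are invented for this example.

```python
import re
import subprocess

def project_for_branch(app: str, branch: str) -> str:
    """Map a Git branch to an OpenShift project name. Project names
    must be lowercase DNS-1123 labels, so slugify and truncate."""
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return f"{app}-{slug}"[:63].rstrip("-")

def ensure_branch_project(app: str, branch: str) -> str:
    """Create the per-branch project on first check-in.
    `oc new-project` fails harmlessly if the project already exists;
    a real pipeline would also apply quotas, RBAC, and teardown."""
    name = project_for_branch(app, branch)
    subprocess.run(["oc", "new-project", name], check=False)
    return name

# A pipeline triggered by a check-in to feature/JIRA-123_new_sort
# would deploy into a project named like this:
print(project_for_branch("sip", "feature/JIRA-123_new_sort"))
# sip-feature-jira-123-new-sort
```

A webhook on the Git server would call something like `ensure_branch_project` before the deploy stage, giving every feature branch its own isolated environment.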
You can distribute Jenkins, and we've heard a couple of companies doing that: distributing Jenkins masters and allowing app teams to spin up their own. We started from the perspective of building out a centralized Jenkins infrastructure, which we already had, and expanding it. But it has to scale now, so what do we do? We use the Kubernetes plugin: we create build agents in OpenShift, allow the Jenkins masters to offload build agents into OpenShift, and tune the Jenkins master to aggressively spin up build agents. The settings shown here are specific settings we encountered; it took two days of digging through the Jenkins code base before we figured them out.

So, the future. We've had some good success with OpenShift, and we're now in a spot where application teams are coming to us. We've done some of the early transformation work, and there's now a drive from application teams around the organization: they know their future, especially if they're building anything that runs on Linux, is to come build on OpenShift. We really think OpenShift will start to consume most of the Linux workload in our data centers. We want to take 3.9, build out a 3.9 cluster, and expand our workloads.
Expanding workloads was one thing we really didn't do at the time, because building persistent workloads and StatefulSets, from our perspective, really needs container-native storage; otherwise the management just becomes out of control. We started playing with it by building NFS mounts and trying to deal with that manually, and it just will not scale at the level we want. We're going to take advantage of some of the metrics and monitoring features and the Open Service Broker API that are in OpenShift 3.9, and really give our developers full-stack automation. Once we get persistence and these basics in place, we can deploy the full application, and that will complete the solution for our app teams. That's our hope.

That's it.