Well, hi, everybody. I'm Andrew Harrison, the lead IT DevOps engineer for Omnitracs and team lead for the Agents of Change. And I'm Brian Tominson, a senior consultant in container infrastructure at Red Hat. We're going to talk today about how Omnitracs implemented OpenShift to achieve a lot of the goals we've been working on for the last year.

Quick history about our company. Get in the Wayback Machine and go back to 1983: we created the first routing optimization system for the short-haul and last-mile industry. In 1988, Qualcomm created a system called OmniTRACS, a trucking transportation reporting and compliance system; the name was originally an acronym. We developed on that and turned that crank for several years. Between 2006 and 2012, Qualcomm and another company, Roadnet, expanded their dominance in the market. In 2013, Omnitracs was acquired by Vista Equity, and in 2014 we were able to merge with Roadnet and XRS. All of that to say, we've acquired a lot of companies, and from 2014 to 2019, trying to manage that M&A multi-cloud, multi-environment space was very difficult. So in the last year we've been working on the Omnitracs One project, the first unified platform across all of those various systems and companies. OpenShift was a major part of that, and our tight cooperation with Red Hat was a major part of that success as well. A quick piece about where we operate: most of North and South America, with a large presence in Brazil and in Canada as well.

So starting out, just a little over a year ago, here's what we had, what we were building on. Most of it was on VMware: almost 4,000 VMs spread out across a mix of everything. We had VCF, we had Vblock, we had standard ESX hosts, and some automation with vRealize Automation, but not a lot. A lot of it was manual.
We had a little bit in AWS, but everything there was, again, something manual that somebody had spun up; a lot of shadow IT going on. All of our project management style was waterfall. We had these huge monolithic apps and isolated tower groups that didn't communicate well and traditionally didn't work well together. They had people in them who wanted to collaborate, but the mechanisms weren't there. And everything was always driven toward a timeline rather than a goal or a project. That's where we came from.

In the last year, one of the major things we did was switch to OpenShift. We decided to containerize everything; we wanted to create a microservices platform for our company. We chose OpenShift 3.11 originally, because that was what was available at the time. As soon as OpenShift 4 was in tech preview, we had it in our environment. I don't like making declarative statements, but I've been told by people inside Red Hat that we were the first to roll it into production; take that for what it's worth. We rolled it out into production very early on, and we've been doing it for a long time, in terms of how long it's been in existence anyway. With OpenShift 3.11, we got immediate cost reduction because we moved everything out to AWS and got rid of on-prem costs. We wrote customizable playbooks to make sure everything was infrastructure as code and always adaptable and repeatable, and we were able to provision and interact with new infrastructure that way. We reduced our environment deployment time from over a week to less than two hours. And we had a real, total transformation in the IT department, and especially on our team, the Agents of Change, from the traditional waterfall style to a real agile methodology, which greatly reduced our release times. Then OpenShift 4 became available; we're currently on 4.1.18.
We have 4.2.2 in a sandbox cluster right now, which is kind of nice. Using the UPI installation, we reduced our deployment time from two hours to 40 minutes. We were able to manage and extend the cluster further through built-in operators. We got a lot more cost reduction by using the CoreOS model for the master nodes, so fewer RHEL node licenses to worry about. We have a direct line to the architecture and design teams through Red Hat Consulting and the other members of Red Hat working on our team. And all of that matured the agile process I was talking about before and made our true CI/CD pipelines possible. I'll hand over to Brian here for a minute to talk about how we operationalized OpenShift using operators and Ansible. Brian?

Yeah, so one of the things we ran into in the OpenShift 3 world was openshift-ansible, and sometimes that would take hours to run. There were a lot of quirks, a lot of moving parts, because it had to accommodate every base use case out there. With OpenShift 4, of course, you've got the IPI and UPI installers: installer-provisioned infrastructure and user-provisioned infrastructure. At the time, only UPI was available for AWS. So what we ended up doing was taking the UPI and wrapping it up in Ansible. Our first pass at that was successful and worked out pretty well, but then we decided we could do better. So we went back and set up our Ansible playbooks and our repo to be modular. We treated it just like any other software project, so our infrastructure is treated just like software: we version it, we release it, we test it, everything. Operators came in really handy with that, because an operator workflow is real simple: you just set up a CR and off you go. So we were able to greatly automate management of the cluster. We actually got to retire a bunch of our playbooks and then spun up a few new ones.
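As a rough illustration of what "wrapping the UPI in Ansible" can look like (this is a generic sketch, not the actual Omnitracs playbook; the directory variable and template name are made up), a playbook might render the install config and then drive `openshift-install`:

```yaml
# Sketch only: tasks that drive the UPI flow from Ansible.
# install_dir and the install-config template are hypothetical names.
- name: Render install-config.yaml from a Jinja2 template
  template:
    src: install-config.yaml.j2
    dest: "{{ install_dir }}/install-config.yaml"

- name: Generate the Kubernetes manifests
  command: openshift-install create manifests --dir "{{ install_dir }}"

- name: Generate the Ignition configs for bootstrap, masters, and workers
  command: openshift-install create ignition-configs --dir "{{ install_dir }}"

# From here, UPI means you bring your own infrastructure: on AWS that
# typically means CloudFormation or Ansible modules creating the VPC,
# load balancers, and EC2 instances that consume those Ignition files.
```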
We ended up running into a little bit of an issue with getting the dev teams set up; there was a bit of a learning curve on OpenShift 4 for them. So in an effort to make life easier for everybody, including ourselves, we actually built an operator they use to deploy all of their pipelines and their applications. They maintain everything as infrastructure as code. The operator reaches back into Bitbucket, looks for changes, and automatically reconciles, so it helps facilitate GitOps in that way. It's currently a Level II operator, so it can do seamless upgrades; we're working on getting it to Level III. But the cool part is that because it's an Ansible operator, the developers can fork it and make it service-specific if they need to, or they can just use the generic functionality. Next slide.

As far as Ansible goes, we have a very strict model on our team: basically, if it doesn't exist in Ansible, it doesn't exist. So we spent a lot of time getting the underlying OpenShift infrastructure set up and managed. Beforehand, and Andrew can clarify on this, they didn't really have a lot of automation around their old infrastructure, so this was kind of a new thing, having everything defined in code. We modularized the Ansible playbooks: we have a main repo, and we've got config, ops, add-ons, and things like that, all Git submodules under main. Any specific stuff goes in there. We let the UPI do its thing, then our config repo runs and takes care of setting up, for example, Active Directory and logging, all that sort of fun stuff. Then add-ons installs a bunch of operators that we like to use or that help with quality-of-life issues for the devs, and so on.
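To make the "set up a CR and off you go" workflow concrete, a dev-facing custom resource for an operator like the one described might look something like this. Everything here — the API group, kind, and fields — is invented for illustration; the talk doesn't describe the operator's actual schema:

```yaml
# Hypothetical CR: a dev team declares its app and pipeline, and the
# Ansible operator reconciles the cluster against the Bitbucket repo.
apiVersion: platform.example.com/v1alpha1
kind: AppDeployment
metadata:
  name: orders-service
  namespace: orders-dev
spec:
  gitRepo: https://bitbucket.example.com/scm/orders/orders-service.git
  branch: master
  createPipeline: true    # also stand up the team's Jenkins pipeline
  replicas: 2
```

Because the operator polls the repo and reconciles, a merged change in Bitbucket shows up in the cluster without anyone running a deploy by hand — which is the GitOps behavior described above.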
We can go from deploy.yaml in our main repo to an operationally ready cluster in 40 minutes now, which is just absolutely amazing, because we can tell the dev teams, hey, we're standing up a new cluster, go ahead and tag your repos now, and by the time the cluster's up they're ready to rock and roll. Absolutely. Yeah, and like I said, we release our infrastructure the same way we release software: you tag it as a version, push it out, and off you go. We really stuck to an operational excellence model and tried to step up on that. We incorporate everything from HashiCorp Vault for security and secrets management to Splunk logging, so we really go through not just day-two operations but day two through thirty as far as the care and feeding of a cluster. The operator model did not change the fact that we still needed Ansible for some things, but we templatized a lot of CRs, so a lot of things we want to do are now just one command, and that makes it really easy to delegate different tasks out to developers. Next slide.

So yeah, a few minutes ago we had that slide talking about what we had: that spread-out environment all over the place. Today we have everything in AWS; all of our infrastructure is set up there. We're using Artifactory for our images and Xray to scan them. We've got VictorOps integrated to do notification and alerting to our teams; these are all SaaS solutions we've adopted and brought into the mix. We took the SRE model and kind of turned it on its ear a little bit. We're spinning up 35 new scrum teams over the next year, and we're not going to be able to find SREs to work with each of those scrum teams; it's not going to work. So we took a different approach: a virtual-ops approach where we're embedding with these teams and teaching them to be the ops part of DevOps.
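A minimal sketch of what "templatized CRs, one command" might look like with Ansible's k8s module (the playbook, template path, and variable names are assumptions, not the actual Omnitracs tooling):

```yaml
# Hypothetical playbook: render a CR from a Jinja2 template and apply it.
# One command to delegate to a dev team:
#   ansible-playbook apply-cr.yml -e app_name=orders -e replicas=3
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Apply a templated custom resource to the cluster
      k8s:
        state: present
        definition: "{{ lookup('template', 'templates/app-cr.yaml.j2') | from_yaml }}"
```

The template fills in the app-specific fields from the `-e` variables, so the same playbook serves every team.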
They are going to be their own SREs. Through the role of architecture owner and the role of team lead, they have the ability and the proper permissions in OpenShift to spin up their own namespaces, start their own projects, and grant people access to them. So we're really giving them the ability to care and feed for their own areas, and at the same time giving them ownership of it. So now, when they're building their applications, they are actively thinking about the operationalization of it; that's a hard word to say, sorry. They know they can feed the logs into Splunk, get that information back, and get a notification in VictorOps if something goes wrong. It's just part of how they do their daily job now. Again, we're incorporating Splunk, and we've got HashiCorp Vault for our secrets and certs management. All of that ties into OpenShift very well; it's all very seamless. Sysdig was something we adopted early on, and because we're so far out on the edge with OpenShift, it took a little time to get those two working together, but we've had some great success, and they're working very closely with us and with Red Hat to bring their service offering to us. It's been a good journey so far. We've got Jenkins inside the cluster as an OpenShift plugin, again allowing the dev teams to own their pipelines. So we're not sitting back having to manage all of that; they own their own pipeline for their code, which has been a great thing.
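In OpenShift terms, "granting people access to a namespace" boils down to an RBAC role binding. A representative example (the namespace and group names here are made up) that gives a team's directory group edit rights in its own project:

```yaml
# Grant the (hypothetical) orders-team group the built-in edit role
# in the orders-dev namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orders-team-edit
  namespace: orders-dev
subjects:
  - kind: Group
    name: orders-team          # e.g. a group synced from Active Directory
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in role: manage workloads, not RBAC
  apiGroup: rbac.authorization.k8s.io
```

The same binding can be created in one command with `oc adm policy add-role-to-group edit orders-team -n orders-dev`, which is the kind of task a team lead can safely be delegated.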
Again, it's taking some of that operational burden off of the central IT team and dispersing it inside their own teams, getting everybody familiar with the operational part so they can run that virtual-ops rotation, where one person sits out of the sprint each time to handle ops for the team, instead of coming to us every time they need something. That takes us up to, oh, I'm sorry. There we go: lessons learned. One of the things we learned is that early adoption of OpenShift 4 was critical to our success. It depends on how important being out on the cutting edge is to your company, though. It caused a lot of headaches early on; there were blockers because external vendors hadn't caught up with the technology wave. For us, we were very much looking forward, very much understanding that we were going to be doing this for the next five years, and we didn't want to get stuck on an older version. So we took that risk, and that takes us to the second point: being very tightly coupled with Red Hat while we were doing this. They've been embedded with our teams for the last year; they are part of our team as far as I'm concerned. That tight coupling has been key to our success. It made it possible to overcome the early-adoption issues very quickly, and it drove innovation within our team and within Red Hat; the stuff we were working on was feeding directly back into their development to bring OpenShift to where it is today. That transformation we talked about earlier, from traditional systems administrators in an operational role to DevOps engineers, we did in less than a year. The guys on our team, myself included — the Omnitracs guys, not the Red Hat guys — weren't doing this a year ago. We were traditional systems administrators in our silos: we had Linux guys, we had VMware guys, we had network guys. Now we're all on a team together, doing this stuff every day.
It's been an amazing transformation, for the technology we're working in and for our careers; it's been great. And that tight partnership facilitated that shift very easily. Having the operators available allowed us to offer infrastructure as a service to the dev teams almost immediately, almost from day one, and it reduced the need for a very large team; we do this with four guys. Speaking of which, I was the guy who didn't include the slide about my team, so I'm going to call them out now. I want to thank Daniel Missel, Damon Edstrom, Don Harrington, and Charles Berry; those are the Omnitracs guys. And Brian Karsten-Klassohm, Vadim Zaraf, Ben Bateson, David Lewis, Christian Pulizzi, and Madhu Mari. Thank you to all of those guys. I didn't put the slide up there, and that's my fault; I apologize. Brian, anything else Red Hat picked up during this engagement?

Yeah, we actually learned quite a bit. We learned that maybe having three masters and three workers as a starting point isn't so great, so there were some tweaks there. We learned that the team working on the machine API for AWS are awesome people, because that helped out a ton; we're able to scale up now in less than five minutes, really easily, and the integration with AWS is automatic. And the operators on OperatorHub are only getting better. We've got some contributions that have gone up to, for example, the AWS services operator. There's some stuff we're still working on, like the database auth method for Vault, but we're working through it really well and it's been a lot of fun. Yeah, and that touches on another thing: this partnership isn't just with Red Hat, but with all your vendors.
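On OpenShift 4 in AWS, that sub-five-minute scale-up comes from the machine API: worker nodes are managed as MachineSets, and changing the replica count makes the cluster create or remove EC2 instances for you. A hedged sketch, keeping with the "if it doesn't exist in Ansible, it doesn't exist" rule (the MachineSet name and variables are illustrative):

```yaml
# Hypothetical Ansible task: scale an AWS worker MachineSet and let the
# machine API provision the EC2 instances and join them to the cluster.
- name: Scale the worker MachineSet
  command: >
    oc scale machineset {{ machineset_name }}
    --replicas={{ worker_count }}
    -n openshift-machine-api
```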
If you're going to be out in front like this, on the latest and greatest versions of OpenShift, you have to have good relationships with your vendors, because you're going to be going to them and saying, hey, we're doing this in a completely new way, and I need you to help me get it working, because it's not. And they will; they'll help you. They want to keep your business. So reach out to them and build that relationship, and it will definitely contribute to your success. Any questions? We're going to save the questions. Okay, we're saving questions so we can get to food. All right, yes, get to food; that's much more important than questions. Thank you, everybody. We appreciate it; thank you very much.