I'm Stephen Boyd. I work for the Electrical Training Alliance, a very small not-for-profit organization working with the National Electrical Contractors Association and the International Brotherhood of Electrical Workers. That's a mouthful. What we do is create the curriculum used by all of the apprentices going through an electrical apprenticeship program. Some 15 years ago we were still publishing and selling books, and then the World Wide Web crashed onto the scene and everybody started saying, hey, let's do these things online. So we became somewhat of a software company, embracing technologies such as Moodle for our learning management system and several homegrown applications to support test generation in a proctored manner. We have learning record stores, all of the things that educational institutions have to deal with as they work through their processes.

As you can imagine, working with those, you're dependent on other technologies. Moodle, for instance: you can only do so much before you're constrained by their technology. Although we've greatly modified it, and I lovingly call it our Franken-Moodle, we still use it and we're still dependent on those technologies.

Recently, with the demand for workers, we were given an opportunity to put our program online so that apprentices throughout the country could actually get to their curriculum. On average, some 40,000 people a year apply for apprenticeship, and they would take their actual first-year apprenticeship program through a process we're developing. Part of the grant required us to create tracking software. The slides go into a little bit of this, but I'll just talk through it. Imagine managing a training center; there are 250 of those across the country, each with anywhere from 15 apprentices up to several thousand in some of our larger programs. How many hours of training have they had? What curriculum are they taking? What are their online classes? All of that has to be tracked.

The software we wanted to develop would replace some of that, because the systems out there vary. Some use spreadsheets; Diane laughed at one point because I told her we have an application out there that uses FoxPro, for some of you senior developers. We can definitely work with those, but the problem was that all of our users ended up having to double-enter, sometimes triple-enter, their data: they would maintain the apprentices in their local software, then go to our learning management system and enter their users again, then get grades, export those, and go back to their own systems. So we wanted to create a system that allowed for more natural transitions, once-and-done technology if you will, and we took this opportunity. I'd been playing with OKD for some time, looking at replacing our LMSs or working within them and enhancing them in some way, and this became the prime opportunity for us.

So we were able to create a system that enhances those. We have several monolithic applications, and their development times are very slow; it takes several weeks to several months just to deploy features. We are moving in an agile way, but all of that technology is slow going with the testing and everything else. So, wanting to improve that process, we made some selections.
As I got into it, we worked with outside development partners. I had done all the business analysis work to get this up and running and handed it over to our project management office, and then COVID hit and they said, we're halting all external development, but since you're a developer as well, why don't you just go ahead and continue developing this? Which was a little more than I was planning on taking on at the time. So I did engage Red Hat, and I got a contract with them to help me get things set up and choose the right tools. In our case, we ended up using OpenShift Dedicated, which is a supported, managed service for the OpenShift Container Platform; it lags OCP somewhat.

So this is what we worked with, and we've got quite a bit going on. We chose a microservices approach, and we decided to use Quarkus for most of our stuff; we have a lot of Java on our back end. We do have some Node.js and a few other items, and as you know, our Moodle would be PHP, but none of that lives here. These are just augmenting services, and they work in concert with all of our other systems. We did choose Quarkus because it's very fast. We have a lot of services that don't need to be running all the time, and we don't want to take up space or resources when they're not needed; Quarkus allows for pretty much instant-on: do its function and then spin down. It's very, very lightweight.

We are using Atlassian Bitbucket for our repo, simply because we also use Jira and Confluence for our documentation and support. Since we were doing complete greenfield, we wanted CI/CD from the start, and I figured, since we were paying Red Hat anyway, let's get some good stuff in there. So we pulled in Red Hat OpenShift Pipelines, which uses Tekton. It's a native CI/CD pipeline that lets us build all of our services; it's capable of doing everything. However, we wanted to take a look at Argo CD as well, because I personally have a love-hate relationship with YAML. Diane said she loves YAML; we like that. I like version control in all things, and Argo CD gives us the ability to keep our deployment YAML version-controlled and to synchronize our deployments across our clusters.

So what do we have? We have two clusters, both OpenShift Dedicated. The weirdest thing for us was that our non-production cluster is about three times the size of our production cluster. The reason is that all of your stuff runs in your non-prod cluster: your pipelines, your image streams, and all of the work you're doing, including your testing and your image building. So we actually need more resources for our non-production cluster than we do for our production cluster. Any time we commit to Bitbucket, our pipeline automatically starts a build, and we have three namespaces inside our non-prod cluster: development, test, and UAT.
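To give a flavor of that trigger path, here is a minimal, hypothetical Tekton Pipeline of the kind OpenShift Pipelines would run on a commit. The git-clone and buildah tasks are the standard ClusterTasks that ship with OpenShift Pipelines, but the pipeline name, repo, and image references are placeholders, not our actual manifests:

```yaml
# Hypothetical sketch only -- not the Electrical Training Alliance's real
# manifest. A Bitbucket webhook (via a Tekton EventListener) would kick off
# a PipelineRun of this Pipeline on every commit.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: service-build
  namespace: development            # builds live in the non-prod cluster
spec:
  params:
    - name: git-url
      type: string                  # the Bitbucket repo that was committed to
    - name: image
      type: string                  # e.g. an image reference in a Quay org
  workspaces:
    - name: source
  tasks:
    - name: fetch
      taskRef:
        name: git-clone             # standard ClusterTask
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: build-and-push
      runAfter: [fetch]
      taskRef:
        name: buildah               # standard ClusterTask: build and push
        kind: ClusterTask
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
```

Promotion out of the development namespace is then a matter of tagging the built image forward rather than rebuilding it.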
Development is where everything gets built when we kick off from Bitbucket, and it lets me make sure that my services can see each other. Once they're talking and I've looked things over and made sure everything's working, I can promote and tag those images so that they end up in test. Once they're in test, we build our end-to-end tests, and we use JMeter for that. We do spot testing with Postman, but for the most part we can script JMeter into our pipeline, so we build out our JMeter tests inside the test environment. All of those services are private; there's no routing to them. You go in through the console and deal with what you want to do; you can port-forward to do some testing. Once we've promoted them, we feel they're stable internally, and we want our external partners who work with our learning management system and learning record store to actually access them, we promote once more to the UAT environment, and I kick it off and let them know. While it's in UAT, they have access to it. Although it's not on this diagram, we're using Red Hat OpenShift API Management, another managed service that basically gives you an on-cluster 3scale deployment. All of our APIs are fronted by 3scale, so that is how they would access them.

Part of our challenge was that we want everything to be destroyable. That's the whole beauty of the OpenShift world: you want it to be long-living, scalable, resilient, and quickly recoverable. So while the SREs at Red Hat are maintaining the cluster, we wanted to make sure that if there were a catastrophic event and a cluster went down, we could rebuild everything. Well, how do you do that quickly with databases? As we worked with Red Hat, we found that all of our databases would be better served off the cluster. There are lots of options for that, including Amazon offerings and everything else, but we worked with Cockroach Labs to get cloud-hosted, Postgres-compatible databases through Cockroach Cloud, and they have multiple ways to consume it, so it worked for us.

In our development and test environments, we're using a developer cloud account, which is free. You get something very small; there's no guarantee, no uptime. But it lets me do my testing, and it deploys to the cloud just like my production does, using the same technology. When we get to UAT, we need a little more longevity for our data, so we're using their Kubernetes operator to maintain a couple of nodes that we use for backups and everything else, just for UAT, so that the developers can have some long-lasting data. In our production cluster, we are paying for a single-region, three-node Cockroach Cloud cluster, which again is a Postgres-compatible database. What all this allows us to do is make sure that we could rebuild our cluster within, I'd say, hours. It takes a couple of hours to spin up an OSD cluster, but once that's available, we can redeploy everything, because nothing lives within the cluster as far as our images go. All of that works because we're using these technologies.
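Because CockroachDB speaks the PostgreSQL wire protocol, a Quarkus service can treat it as an ordinary Postgres datasource. As a hypothetical sketch (placeholder host, database, and credentials; assumes the quarkus-config-yaml and JDBC PostgreSQL extensions are on the classpath), a service's application.yaml might look like:

```yaml
# Hypothetical Quarkus application.yaml -- not one of our actual service
# configs. CockroachDB is Postgres-wire-compatible, so the stock PostgreSQL
# db-kind and JDBC driver work unchanged; only the URL points at Cockroach.
quarkus:
  datasource:
    db-kind: postgresql
    username: app_user                # placeholder
    password: ${DB_PASSWORD}          # injected from an OpenShift Secret
    jdbc:
      # 26257 is CockroachDB's default port; TLS as the cloud tier requires
      url: jdbc:postgresql://example.cockroachlabs.cloud:26257/appdb?sslmode=verify-full
  hibernate-orm:
    database:
      generation: validate            # schema migrations handled elsewhere
```

The same shape of config would work against the free developer tier in dev and test and the paid three-node cluster in production; only the URL and credentials change per environment.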
Our CI/CD builds the image on our OpenShift cluster and pushes it to Quay. Why Quay instead of Docker Hub or some other registry? Again, we went Red Hat, we were working with Red Hat, but Red Hat gives you security scanning through Quay: immediately, once an image gets pushed to Quay, it starts scanning it against all of the vulnerabilities your Red Hat stack covers, so you can always be sure that your stack is secure.

So, benefits. I'm just trying to keep an eye on the time, since we had some technical difficulties. We had a couple of things to look at. Because we have multiple users, including external users, using this technology, and we did not get rid of our monolithic applications, we needed to measure impact across the board. Our existing process for an applicant coming in was that they apply in person: they go to a training center and they apply, and there are about 50,000 of those a year; I was a little off on my memory earlier. They would go into their local training center and apply, and they would have to go to another training center and apply again if they wanted to apply to two regionally close training centers. For us, Baltimore and DC are very close, and you could walk into both of them and apply. After the rollout of this system, they can now apply to multiple programs anywhere in the country by filling out one application.

Next, entering on-the-job training reports. As apprentices go through their on-the-job training, they have to fill out evaluations. Those evaluations then go to their journey-level worker, who also fills out an evaluation, and they get submitted to the offices. All of these offices have very small staffs, anywhere from one up to maybe 15 people, but you can imagine that with 36,000 apprentices entering those reports each week across 250 training centers, that's a lot of data entry. After rollout, all of that is handled via a mobile app: the apprentice fills out their evaluation, it gets kicked over to the journey-level worker, they add their comments, and when it's submitted it's already in the system.

Finally, advancement operations. As you go through the process, a five-year program for inside, three-year programs for outside, residential, and telecommunications, you have to look at: have they met all the qualifications for education? Have they met all the qualifications for on-the-job training? Have they earned any certifications, regional or otherwise? For that same 36,000 a week, you'd have to go collect all of this data and then figure out who's eligible for advancement throughout the program. With the new system, all of that is done for you: it automatically looks at the guidelines and parameters you've set up, and the microservices go through the process.

Now, DevOps metrics, a little more applicable to what we're talking about here. Again, we did not replace any of our monolithic applications; we are supporting them with microservices. With our current process for our non-microservice-based deployments, a bug fix takes us, on average, one to two weeks; with the microservices, we're rolling out fixes in one to eight hours. A new feature can take anywhere from two to four weeks for our learning management systems, learning record stores, test generators, and other tools, whereas a new microservice feature takes anywhere from two to four days, and that's on the high end. In most cases we can have something out in a day. It's just rapid, and we do go with that model: we roll out our skateboards, then our bicycles, then our Harley-Davidsons. We go through the process and get these out there, and it allows for rapid cycles.
Huge epics take anywhere from two to eight months with our current monolithic applications, and we can roll out new epics in one to four weeks with our current system. So, in terms of OpenShift, whether it's OKD, OpenShift Container Platform, or, in our case, a managed service with OpenShift Dedicated, it allows us to rapidly deploy services to augment, support, or even fully deliver new technology. You can deploy front ends, middleware, or back ends, all of them supported securely; in our case, we're using 3scale through the Red Hat OpenShift API Management managed service. We have two clusters, non-prod and prod. We're using Tekton pipelines to get those deployments out, so every new service gets a new pipeline and pushes right through the process. Argo CD keeps all of our environments in sync, and Quay keeps our containers secure. That is all I had, unless there are questions.

Well, I think that was an awesome way of doing that, and thank you for rolling with the slide snafus and all of that. Looking to see if there are any questions from the audience in person; apparently the folks virtually are not figuring out how to use the Q&A that well, or they are surprisingly silent. I am always thrilled when we get to see architectural diagrams like you've shown and really get the deep dive into what people are actually doing. A great big shout-out to Cockroach Labs and the work that you're doing. Someone just asked: why do you like Argo more than Tekton?

It's not the technology. We actually started with Tekton and use it quite a bit. Once we've stabilized a service, we're using Argo CD only for the YAML management: deployments, maintaining version control, because it has Kustomize built right in. It's more of a preference; you can use either one for your deployments. We chose Argo CD because I had a steep learning curve with YAML documents.

You make fun of YAML, but YAML is the duct tape of the universe these days. We used to say Perl was the duct tape, and I'm showing the white hairs on my head these days, but I don't know where we'd be without YAML.

That doesn't make it fun. That doesn't make it fun, and boy, I hate the whitespace issues with YAML. It's quite a learning curve. I actually know one of my friends and colleagues from another startup, Ingy döt Net, Brian Ingerson, who is one of the authors of YAML. Shout-out to Brian and his cohorts; you could probably find them on Wikipedia somewhere, or in a coffee shop in Seattle. We probably owe them a lot more debts of gratitude and free coffees for all of the work they did and the pains in our lives it has become.

There is one other major, significant advantage to using Argo CD in your production environment, if nothing else, and that is the ability to lock it to a specific version. While Tekton is great for doing your CI/CD and getting deployments out there with Kustomize, Argo CD is always monitoring your versions. So if someone with authority, either accidentally or maliciously, does a deployment that you're not ready for, Argo CD will see that change, kick in, and bring you back to the approved version automatically. So there are some advantages to using Argo CD. But it's a flavor, it's a preference really; I like Tekton, it's very clear.
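To make that self-heal behavior concrete, here is a hypothetical Argo CD Application sketch (placeholder repo URL, paths, and names; not our actual manifest) with automated sync and selfHeal enabled, so any drift from the Git-approved version is reverted:

```yaml
# Hypothetical Argo CD Application -- not the actual manifest. With
# selfHeal on, Argo CD continuously compares the live state against the
# Kustomize manifests in Git and reverts any manual or unapproved change.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: our-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://bitbucket.org/our-org/deploy-manifests.git
    targetRevision: main
    path: overlays/prod              # Kustomize overlay for the prod cluster
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true                    # delete resources removed from Git
      selfHeal: true                 # revert drift back to the Git version
```

With prune and selfHeal on, the overlay in Git becomes the single source of truth for that namespace, which is exactly the lock-to-an-approved-version behavior described above.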
When you start getting up there in the number of services, though, think about having a pipeline for your deployment, a pipeline for your patching, and then a pipeline for your building: that's three, maybe four pipelines for every service, and if they're all in the same namespace, that's quite a long list of pipelines to manage. So again, personal preference; they both work wonderfully, and I've had no problems with either.

All right, there's one more question coming in, and I think we have time, depending on how long you pontificate. What criteria did you use to pick a storage provider, and what was the biggest challenge you had with storage?

Right. We looked at multiple options there. We actually played with Crunchy internally, and we went with plain-vanilla Postgres in containers for a while. Ultimately, what it boiled down to for me was, again, that we're a very small not-for-profit organization; it's me and one other guy working internally with our external developers. So going with a managed service provider made sense for us, but the operator works great as well. Even internally, I would say: find out what your uptime and survivability requirements are. With the national footprint we have, supporting some 250 programs country-wide, at some point, as adoption spins up, we are also going to need regional data, and Cockroach provides a very cloud-native system for going across multiple regions. Right now we're a single-region, three-node cluster, but Cockroach Labs assures me that whenever the time comes, they'll be right there to make sure I can spin up another three-node cluster in another region, and my data will be seamless. So those were the criteria we used in picking what would work for us.

Awesome.