How's everyone doing? Is everybody full? Don't go to sleep just yet. Or maybe I'll help you go to sleep, one of the two. Basically, like Diane just said, I'm from weather.com. We're an IBM company; at this point we're actually known as Watson Media and Weather. That's the umbrella company, and underneath that we do weather.com and wunderground.com. I'm assuming most of you have heard of those sites at one time or another. Even if you use a competitor's product, you've heard of us.

So with that said: we started out about two and a half years ago, when we moved everything to AWS and OpenShift. And life was good. We were really enjoying everything that OpenShift brought to the table. Then, as usual, life got messy, and in this particular case we got bought by IBM. Like a lot of companies, IBM encouraged us to move to the IBM Cloud. And to be perfectly honest with you, it's been great. I'm the first to admit I was reticent. I'm a DevOps guy. I don't like people throwing those kinds of changes at me, and I particularly don't like them picking the stuff I run on, because I run a production site. We do about 23, 24 RPS per page at any given time on any one of our clusters across the world, and we operate seven different clusters, each with about 60 nodes. So we're not a small site; we have a few pieces and parts out there. And the one thing we had to get really, really good at was migrating from one Kubernetes platform to another.

And then guess what? IBM and Red Hat got together, and we're actually moving back to OpenShift. Honestly, we've already done all the work to move back to OpenShift; we're just waiting on the budget people to come up with the money for us to actually do the move. It's totally an accounting thing.

Let me go ahead and show you some of the ways we did that. (And by the way, anybody who wants my contact information after this, feel free; and hopefully, if I get invited back at some point, we'll have a little better Wi-Fi.) One of the first things we did was terraform everything. All of our clusters, all of our configuration, production and non-production, it doesn't matter: we have everything in Terraform. And yes, we have it for OpenShift as well as for the IBM Cloud, so we're not stuck in either one; we can do both, and we can mix and match. As a matter of fact, what I was hoping to show you, but couldn't because we didn't get the budget in place in time, was IBM's hybrid cloud: our Red Hat on AWS next to our Red Hat on IBM Cloud. Next time. But just to show you it's not smoke and mirrors, we do already have OpenShift on IBM Cloud, and all of that is done through Kubernetes running our Terraform. If I want to start a new cluster, all I do is create the Terraform files; in practice I just clone what I already have and configure it for the need, and I've got my new cluster. That's all the work we have to do. All the Ansible pieces, all the certs, requesting the certs, getting the certs put into place, everything, and I do mean literally everything, we have in Terraform. We have to: we're three guys. Between wunderground.com and weather.com, we have three DevOps guys doing everything, while we have a whole lot more development teams, including three big CMS teams.
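To make that clone-and-configure workflow concrete, here's a minimal sketch of what one cluster definition could look like with the IBM Cloud Terraform provider. The ibm_container_cluster resource is real, but every name and value below is illustrative, not our actual configuration:

```hcl
terraform {
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm" # IBM Cloud provider
    }
  }
}

# One definition like this per cluster: standing up a new cluster is a
# matter of cloning the file and adjusting the name, location, and sizing.
resource "ibm_container_cluster" "prod_us_east" {
  name              = "prod-us-east"  # illustrative cluster name
  datacenter        = "wdc04"         # illustrative datacenter
  machine_type      = "b3c.16x64"     # illustrative worker flavor
  hardware          = "shared"
  default_pool_size = 60              # roughly 60 workers per cluster, as above
  kube_version      = "4.6_openshift" # an OpenShift-on-IBM-Cloud version string
}
```

Additional worker pools can be attached to a cluster like this with the ibm_container_worker_pool resource, which maps onto the worker-pool concept that comes up again later in the talk.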
And they are absolutely at it all the time; they're very busy. So one of the other things we did, and it was funny hearing some of the other folks talk about it: I'm sure you're familiar with Netflix and the load Netflix handles. We watched them quite a bit because, just like everybody else, we want to learn from the best, and right now nobody on the internet is handling more load with containers than they are. Most of the problems any of us are going to run into, they already have. This isn't an advertisement for Netflix DevOps; it just makes sense to go look where people are actually doing some really nice things. And one of the things they did is a mono repo. So we're in the process, not 100% migrated over yet, of moving all of our services, literally the wunderground.com services and the weather.com services, into a mono repo. Within that mono repo, when any one of these services gets spun up, you'll see a Jenkins.json there. That Jenkins.json is basically the Terraform equivalent of generating a Jenkinsfile, and that's what we use to do it.

As you might imagine, on the DevOps side in particular we've turned Terraform into a tool it was never designed to be. It's working out pretty well, because it's very declarative; it's just JSON. Yes, we've had to take some liberties, and we've worked closely with IBM, and I'm assuming at some point we'll work closely with the Red Hat Terraform team on the Terraform provider, but right now we're able to do both in pure Terraform and generate the whole thing. And we do all of this via Jenkins.

One of the other nice pieces we built into our migration environment, and it's what lets us be very flexible, is that we separated and abstracted three pieces: the build, which is all source code and product related; the pipeline, which decides where stuff needs to go; and the deploy, the piece that actually pushes stuff to any particular type of cluster. By abstracting those pieces apart, we let development concentrate on the Jenkins and automation work related directly to their build. For them it's all about building the container; they don't really care about anything beyond that. Our QA department then manages the pipeline piece; they're the ones who know where things should go and when they're good enough to be promoted to any particular environment. And the deploy side is where we spend all of our time. The deploy side is the operations side: keeping all of the deployments working properly. Honestly, with three people there's plenty of work there for us, and if it were not for this sort of automation we would have no chance with our particular crew. But like all of us, we have what we have, and you've got to make it work.
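As a flavor of what that per-service Jenkins.json generation could look like, here's a hypothetical Terraform sketch. The jsondecode, file, and templatefile functions and the local_file resource are real; the paths, JSON fields, and template are invented stand-ins, since the talk doesn't show the actual schema:

```hcl
# Hypothetical sketch: render a service's Jenkinsfile from its Jenkins.json
# via a shared template. All paths and fields below are invented.
locals {
  # Each service directory in the mono repo carries its own Jenkins.json.
  svc = jsondecode(file("${path.module}/services/forecast-api/Jenkins.json"))
}

resource "local_file" "jenkinsfile" {
  filename = "${path.module}/services/forecast-api/Jenkinsfile"
  content = templatefile("${path.module}/templates/Jenkinsfile.tftpl", {
    service_name = local.svc.name  # hypothetical field
    image_repo   = local.svc.image # hypothetical field
    deploy_env   = local.svc.env   # hypothetical field
  })
}
```

A generator along these lines keeps the build definition declarative, which fits the build/pipeline/deploy split described above: development owns the template inputs, while the pipeline and deploy stages live in separate, independently owned automation.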
But what I can tell you is that, thankfully, both Kubernetes and OpenShift (and you can't mention OpenShift without mentioning the Kubernetes underpinning it) have evolved so much, and so much contribution has come back from OpenShift into the Kubernetes community, that it's really propelled things and made our life much easier. Are any of you familiar with the IBM Cloud? So there are a few of you. Anybody who's familiar with the IBM Cloud is used to dealing with worker pools, which are just collections of nodes. You can do that same thing under OpenShift in the IBM Cloud, with the notable difference that in the IBM Cloud we also get to use the OpenShift web UI, which is the one thing I missed the most when I first had to migrate away from OpenShift to the IBM Cloud. So I'm tickled pink to be getting back into it. Why do I love it so much? Because in the non-prod environments, as developers are trying to figure out how to run things and how much resource they're actually going to need, I can give them free rein: the OpenShift console lets me define everything in such a way that I can put the correct controls in place while still giving them the freedom to do what they need to do. In a development-intensive environment like ours, you know how valuable that is.

That said, one of the other big things I was going to show you is some of the monitoring we put in place. The one I like the most is our aggregated Grafana: we're able to look at all of our clusters through one set of dashboards. It's all Prometheus driven; we put the Prometheus exporters in place to collect the statistics we need. We got such good information out of it that we changed our HPA, and we now drive our HPA off the Prometheus statistics. And because of that, we've been able to go back to the community version of the node autoscaler. How many of you are autoscaling nodes currently? You probably realize that's where you get your savings: the more I can bring the number of available nodes down in my off periods, the better. Well, it all depends, because I don't know how it's been for you, but our stuff runs pretty efficiently. We don't push CPU, so I don't get any scaling off CPU. We don't push memory, so I don't get any scaling off memory utilization. What we push is requests per second, and that's what drives things in our world: we're not CPU intensive or memory intensive, we're information intensive. It doesn't take a lot of compute to show a span of weather data. But we do get a lot of requests, and we would bottleneck real quickly if we did not have that custom metric set up to drive our HPA. Once we got that down, we could finally add back in both node autoscaling and an HPA that worked. What was happening before was that we were essentially crashing under the number of requests, because nothing was climbing, so the autoscaler had no reason to allocate more resources. Once we put the custom metrics in and started scaling by RPS, we finally had the real driver behind our sites, and now we scale appropriately. And of course there was some tweaking of the thresholds, because as you probably already know, nodes don't spin up just like that. It takes a few minutes to spin up a node.
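Here's a minimal sketch of what an RPS-driven HPA can look like when managed from Terraform with the Kubernetes provider. It assumes something like prometheus-adapter is already exposing a per-pod http_requests_per_second metric through the custom metrics API; the metric name, namespace, and numbers are illustrative, not our actual values:

```hcl
provider "kubernetes" {
  config_path = "~/.kube/config" # illustrative kubeconfig location
}

# Minimal sketch: an HPA scaled on a custom per-pod RPS metric rather than
# CPU or memory. Assumes prometheus-adapter (or similar) serves the metric.
resource "kubernetes_horizontal_pod_autoscaler_v2" "web" {
  metadata {
    name      = "weather-web" # illustrative
    namespace = "prod"        # illustrative
  }

  spec {
    min_replicas = 4
    max_replicas = 40

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "weather-web"
    }

    metric {
      type = "Pods"
      pods {
        metric {
          name = "http_requests_per_second" # hypothetical metric name
        }
        target {
          type = "AverageValue"
          # Deliberately below what a pod can truly sustain, so new pods
          # (and, behind them, new nodes) get requested before traffic
          # saturates the fleet. Node spin-up takes minutes, not seconds.
          average_value = "20"
        }
      }
    }
  }
}
```

Keeping the per-pod target under real capacity is one way to buy the node spin-up lead time described next.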
And so when you're spinning up nodes, you need to have allocated the node before you actually need it, and we do that by playing with the RPS level that drives the scaling.

That said, I'd guess you're all familiar with the site itself. As Diane told us, she wanted to hold off on questions until later. I had a lot more I wanted to show you, but the Wi-Fi is still running just a tad slow, and I'm not going to torture you with that. Are some of you familiar with New Relic? In the IBM Cloud world we use New Relic a lot. It's not a requirement; it's a choice. They have some pretty good visuals, but it's still being driven off our Prometheus data, the data emitted by our individual Node apps. We're predominantly Node. We have a little bit of PHP on the CMS side, just because that's where that world mostly lives with WordPress and Drupal (in our case it's mostly Drupal). But it's mostly Node, and we have agents in both of those stacks, plus some agents that work directly off our custom Prometheus data.

With that said, the other thing I'm really proud we were able to pull off is this aggregated view: I'm literally looking at all of my clusters at one time, and I can then zoom down to any individual one. This is all standard; the dashboard styling and the actual data behind it are nothing elaborate, nothing you haven't seen before. (Sorry, I talk with my hands; I bumped into the mic a few times.) We don't do anything special other than building basically a scraper that aggregates the data together, because we need to see how this whole thing is performing as a whole. And as some of you may have found out, you don't really see that looking at just one cluster when, as in our case, you have seven or eight clusters in production.

I guess that's about it for me. I'll keep it short and brief. If you want to contact me about anything, please feel free. It's been interesting wrestling with the Wi-Fi, and I appreciate your time. Thank you.