And next up, I want to invite someone who has been successful in pushing his corner of the world to be more open and collaborative. So please welcome James Pinnick.

Good morning, everybody. I am with Verizon Media and, to set the stage a little bit, Verizon Media is a house of brands, many of which are hopefully already part of your daily habits: Huffington Post, TechCrunch, AOL, and Yahoo. Now these are very popular sites, and running them requires quite a lot of infrastructure. Consequently, we operate one of the largest infrastructure environments in the world. Interestingly, as of this moment, it is overwhelmingly powered by open source. Hundreds of thousands of physical machines with millions of cores, and it's all largely managed with open source.

It was not always thus. When I began my journey at what was then Yahoo, our infrastructure was built almost entirely on custom platforms. We had our own bare metal system which tied into our hostname, asset, and network databases, probably a familiar story for many of you. And we did this because we had to. No one else operated at our scale. There was nothing off the shelf for us to take advantage of. We lived in a world of 1,000 custom tools: YAMAS, YBIP, Waikiki, Wysar, Wynst. There are quite a few more up on the screen. And the astute amongst you will note that there's a common theme here. I work with some brilliant engineers, but naming things has not always been our strongest suit. By the way, Apache Traffic Server, ATS, originally started at Yahoo as YTS. So even that, we started by sticking a Y on the front.

Which leads me to the next point: then, as now, we were very passionate contributors to open source. But the key fundamental building blocks of our data centers were all closed-source tools that we had built ourselves. But we're not unique. Our tale is not unique.
I've had the opportunity to speak with many infrastructure operators around the world, from very small enterprise groups to very, very large intergovernmental public agencies. And what I think is really fascinating is we all do the same things, although most are a lot more creative in naming their tools.

So why is this? Why do we all do the same things? Why do we all build the same fundamental tools ourselves, in a vacuum? This was a necessity at some point, sure. But why does this culture persist? And I'll tell you why.

One, we're kind of stuck in this rut. We assume nobody else needs to solve the same problems we need to solve. Who else out there needs to put an operating system on a machine, right? No, I'm special. I'm unique. I'm a snowflake. There's no point in putting this out there. Also, the code's terrible, and I don't want anyone to see it.

Two, building things yourself is fun. It's an opportunity to learn and grow, explore and create something on your own without any outside influence. And in the short term, it actually seems to have a much higher return on investment. You have an idea, you build a prototype, you go to production. You can go from an idea to production very fast.

But in the long run, you will lose out to open infrastructure every single time. Why is that? Because as I stand here right now, someone in the world is working on a piece of software that is moving my infrastructure forward a little bit more. They don't work for my company. They don't work for me. But somebody, somewhere, is improving my infrastructure. When I go to bed tonight, someone in the world is waking up, and they're going to start improving my infrastructure for me. They're going to push on the things that I care about just a little bit more. And when they go to bed, someone on my team is doing the exact same thing.
I once had a boss tell me that I was the only software architect he'd ever met who didn't insist on building everything from scratch himself. And he asked why I chose to use a certain open source cloud platform, OpenStack, instead of designing my own thing. And I said, the reason is because it exists. It exists, and it has an active and passionate global community who are constantly working to improve their product. The value of that force multiplication cannot be overstated.

However, I'm preaching to the choir here. So the bigger question is: how do we do it? If you thought moving over 20 years of infrastructure to open source tooling was easy, I've got news for you. It's hard. It's not an overnight thing. Yes, there are people around the world building tools for you, but it's not the saw that builds the house. It's the carpenter.

So we started out with OpenStack. We experimented with some virtualization clusters, and then we realized that we were dodging the biggest problem. We had to begin with the thing that runs your data centers, the thing that powers all of your infrastructure. We had to start with that: bears. Sorry, wrong deck. Bare metal. It ain't easy, but it is necessary.

Early on, we realized we needed to deal with this seemingly basic problem. I mean, turn on a machine, install an operating system. How hard can it be? Very hard. Very, very hard. So we realized, before we grabbed our tools and built this house, it's got to stand on something. It needs a foundation. Bare metal is the rebar and concrete which forms the foundation of your infrastructure. VMs, containers, and functions are all awesome, but each layer stands on the one beneath it, and they all stand on top of the foundation, which is bare metal. Bare metal is not flashy. It's not glamorous. It's not the latest cool new toy that people are making breathless blog posts about, but you've got to deal with it. So we invested there.
We changed the business to make infrastructure-as-a-service bare metal an option, then a default, and then it was mandatory. Like, one way or another, you're going to use this. What was once a heretical idea became common sense, and I really like that expression: from heresy to common sense.

Are we done? No, but we're managing the bulk of our infrastructure, hundreds of thousands of machines with millions of cores. We focused on the commodity 80% of the problem first. We took some of the special cases and made them into commodity cases, and now we're taking the rest of the special cases and focusing on them. It's very hard, and it takes a lot of the three Ps, passion, persistence, and persuasion, but it can be done.

On top of bare metal, we're running virtual machines, Kubernetes, containers. We have Kubernetes on bare metal and on VMs. Containers exist at all levels of the stack, and our production workloads run at all levels of the stack. One day, my dream is to push all of our workloads off of bare metal and into higher-order functions. Even then, if I'm successful, if I get everything into VMs and functions and containers, underneath it all there's still gonna be bare metal. That's why I'm really trying to drive this home: this is critical.

The other important part of how we got there is my team. Like I said, it's not the saw that builds the house, it's the carpenter. Well, these are my carpenters. We've made fantastic progress, and I am so proud of them and all that they have accomplished. This is them at a mandatory fun event.

I'm gonna finish up by saying: if we're doing this, you can do this too. I'm a special unique snowflake and so are you, I get it. But if we did it, you can too. If you're right now in this room with me, even if you're just using this as a quiet place to frantically finish up your slides for today, you're on the right track. If you're watching the recording, you're on the right track.
Put that ego aside and get involved. Don't just consume open infrastructure; participate and give back. The more expertise you put into the community, engineering and architectural expertise, but most importantly operational expertise, the better you're gonna be able to shape the product, and the better that product is gonna suit your needs. I hope that our experiences moving over four million cores to technology like OpenStack Ironic can help encourage you and give you the confidence to do the same. Thank you very much and have a great summit. Thank you.