Now, to get things kicked off, let's welcome the OpenStack Foundation COO, Mark Collier. Good morning, everyone. So how many of you are here at a summit for the first time? Let's see a show of hands. All right. This is amazing. Well, as you were filing in and getting your badges, you were probably looking around and going, who the heck are these people? And what are they doing here? What am I doing here? So to put it really simply, we are a community of people who build and operate infrastructure. But what's really special about this community is that we build and operate open infrastructure. And we're going to have a lot of time, like any tech conference, where we're going to hear about what's being built, what the tools are, the open source projects. We're going to hear from the people building them. And that's going to be awesome. We're going to be hearing about that throughout the morning. I want to start off by dedicating this keynote to all the operators out there, the people that actually operate the open infrastructure that we all rely on every day. I have so much respect for these people. They are doing amazing work to put all the stuff that the communities are building into action, into practice. We may not all realize it every day, but we are relying on the stuff that they do, the infrastructure they operate. We're talking about planes, trains, and automobiles. And sometimes we need all three to get where we're going. Now, some of you are too young to remember this movie, but that's okay. It's really good. It's called Planes, Trains and Automobiles. Okay, let's move on. Now, this infrastructure, this open infrastructure, is being operated all over the world. And so these operators, in my view, are really starting something special, a movement around open infrastructure. And it's happening all over the planet. And the types of problems they're solving for us are really inspiring in my mind. 
So we have, for example, the Ontario Institute for Cancer Research right here in Canada. They have a thing called the Collaboratory, in which they're combining a lot of different open source technologies to power their infrastructure, which then empowers the scientists who are in search of a cure for cancer. You may recognize the center photo as the CERN Large Hadron Collider. They are an amazing pioneer, not just in science, but in computer science. For 30 years, I feel like if you want to know what's the future of infrastructure, or just the future of technology in general, find out what CERN's running. You'll probably be running it in a couple of years. Last but not least, the Square Kilometre Array. This is a multi-decade project to build the largest machine in human history. And it's actually gonna be able to look deeper into space than any other instrument humankind has ever built. And it's all made possible because of open infrastructure. And so beyond research, we also have other ways in which our everyday lives are empowered by open infrastructure. Retailers, large and small, that we are all familiar with, banks, and of course, insurance companies. And in fact, Progressive is someone we're gonna be hearing from in just a few minutes. Now, the reasons people embrace or are drawn to open infrastructure as a strategy vary. So people have different reasons, right? We often hear cost and compliance. So I mentioned the Ontario Institute for Cancer Research, and they have told us that they're able to run their infrastructure at roughly 40% lower cost by using open components. And the compliance landscape, let's face it, is more complicated than ever. I mean, any of you that do business in Europe have probably concluded by now that GDPR is in fact a four-letter word. And of course, there are many other forms of compliance, but that's the one that people like to make fun of right now. So I had to slip that in there. 
So beyond cost and compliance, to me one of the most interesting developments in infrastructure and cloud in general is that our operators are being asked to do more for their businesses, for their end users. So people expect their infrastructure now to be able to handle artificial intelligence, machine learning. Containers are really a given these days at various levels of the stack because of how powerful they can be. And people are starting to experiment with serverless. And so this is the world that operators live in right now. More pressure on cost and compliance. More pressure to deliver additional functionality in their clouds. And on top of the functionality piece, they're actually being asked to do it in more places. So cloud is no longer just about the data center. We're going to be hearing a lot more about Edge throughout this week. As Jonathan mentioned, we have Edge keynotes and there's other content. We'll be hearing from some folks that are leaders in the Edge community later this morning, as a matter of fact. So when you put all of this together, one of the conclusions that I've come to, one that's a little different than perhaps how we thought cloud was going to turn out, is that there's a myth, I believe, that cloud is consolidating. And I really think that that's absolutely not what's happening. We were told in the early days of cloud, it's just going to be the lowest-speed x86 chips, really cheap hard drives, scale out as far as the eye can see. And that's it. That's the beauty of cloud. And that's not the way things are playing out. The ways in which cloud is diversifying are being driven by both hardware and software, and those in turn are being driven by the demands of the applications and the workloads, right? And so, for example, if you just look at the hardware landscape, there's, of course, x86, still a huge part of everybody's infrastructure, right? But GPUs are absolutely mainstream now. 
And it's not just for Bitcoin mining, believe it or not, or gaming, it's also for a lot of other applications, of course, HPC. And we're starting to see AI and machine learning algorithms that can operate much faster on GPUs. And there are other architectures emerging as well that people are experimenting with, both at the edge and in the data center, things like FPGAs and other custom processors. Google actually unveiled their third-generation Tensor Processing Unit a couple of weeks ago. And people are starting to look at ARM again for the server space. But really, I think the software is probably the biggest topic for the week here at an open source conference. And I thought it was really eye-opening when I looked at our schedule of topics. To me, this really tells the story of what life is like for an open infrastructure operator. So we have more than 30 different open source projects, many from different communities, that are being discussed this week. So why is that? Well, this is the way people are solving the hard problems when it comes to infrastructure and cloud. And so from the standpoint of an operator, this is also where the challenges come from. So they can draw from this incredible catalog or universe of things the builders out there are building to solve hard problems. And that's awesome. But putting that together is also the challenge, right? And so when we ask why it is that people are solving these problems in the open these days, it's really because it's the best way to solve problems. Scientists have been trying to tell us this for a long time, right? This is how science works. So why shouldn't computer science work the same way? And really, the broader point here is that one is not enough. One is never enough. One cloud provider is not going to be enough to power the needs of infrastructure globally. One open source project is not going to be enough. It's already not enough. 
There's no one open source project that can possibly power a complete cloud with all the capabilities people expect today. And one foundation is not enough. So none of us should be thinking with our blinders on in our silos about our piece all the time. We have to specialize, certainly, and build the components that we are good at. But we've got to keep looking at the big picture because that's what our operators need. And even with all of that open source that powers and makes possible open infrastructure, the open source components are actually not enough either. And that's because turning open source into open infrastructure is not trivial. You have got to integrate and operate all of these pieces. And a lot of the speakers we'll be hearing from this morning are talking exactly about how we're trying to tackle this problem as a community and as an industry at large. And even beyond the technical challenges, if you think about it for a moment from the standpoint of these operators, they are not only consuming open source from, say, 30-plus projects and communities, trying to keep pace with that and understand what the new releases are, trying to figure out the nuances of the different communities, trying to participate. That's a lot to ask of these operators. They're trying to deliver infrastructure we rely on. So we can't expect them to know every nuance of every community, right? For example, I mean, there's no way we could expect every operator of open infrastructure out there to know what kind of drink you're supposed to buy the Cinder team when you run into them, right? I mean, that's just absurd. By the way, it's Fireball. So that's one more that you can check off your list, and hopefully by the end of the week, you can complete your bingo card. So let's go back to who exactly is operating open infrastructure. 
So I talked about some big users earlier, mentioned the planes, trains, and automobiles, talked about the science, people that run open infrastructure for our benefit, as well as retail and finance. And it is true that the number of industries running open infrastructure is really impressive. And by the way, it's not just private cloud, it's also public cloud. So dozens of public clouds all over the world are built on open infrastructure, and that gives people another model to consume infrastructure. But I will let you in on one secret that I did recently discover, which is that logos actually don't operate infrastructure, as it turns out, it's people. So people actually operate the infrastructure. And if there's one thing that I would encourage you more than anything this week while you're here, is to get to know these people, learn from them, teach them, collaborate with them. These are the people that actually make open infrastructure happen. They're the ones putting together all these technologies and making something more powerful out of it than each component on its own could ever be. For example, Joseph Sandoval, who I hope is out here somewhere, and his team at the Adobe Marketing Cloud, they operate a cloud with over 100,000 cores with a team of four people. These are the four people, as it turns out. So this is your chance to meet them and learn from them. Another example would be Eli Elliott from Gap. So Gap is a massive retailer. They're operating a combination of different open source components for their infrastructure, OpenStack, Cloud Foundry and other pieces. The last person I want to mention is Ricardo from CERN. Now, Ricardo is somebody who is, again, working for this incredible organization that's driving innovation in infrastructure and other forms of technology, pushing the limits. So when I tell you, you know, the time is right this week for you to spend time with people like this and learn from them, you know, I'm no different. 
I mean, I would do anything to spend five minutes with Ricardo and just pick his brain. I mean, this is somebody that I know I could learn a lot from. Oh, hey, how you doing? Well, this is a weird coincidence. Okay, awesome. Well, welcome, Ricardo. Thank you. All right, so what kind of stuff are you doing at CERN? So we are doing a lot of container infrastructure, all based on OpenStack, so we've been playing a lot with federation these days. Okay, yep. Cool, is there something you can show us? Yeah, maybe. Do you want to see a live demo? A live demo, sure, why not? Let's see. Live demos never crash, right? That should be fine. Exactly. So this is an example of one of the applications we've been using recently, which is to try to process all this data that you mentioned with these big machines we have. Okay. So we are using Kubernetes, OpenStack Magnum, and Manila, and we'll try to do a demo of what we are using here. Okay. So this is, this will be live, so. Live? Uh-oh. Yeah. So there we go. This is running on real infrastructure out in the world. This is running from the CERN cloud. The CERN cloud, okay. So I kind of scripted the text so that I don't lose too much time, but we can see here that we have a bunch of Kubernetes clusters. The first three are deployed at CERN internally, and then we have one in T-Systems, which is a European public cloud based on OpenStack also, and then we have one in GKE. Okay, so each of the ones that say Atlas are referring to the CERN OpenStack private cloud. Yeah. That you operate, and then you have Google Compute Engine for GKE and T-Systems. So this is awesome. So you're leveraging all those different clouds. So what do we see in there? These are the three ones that we mentioned that are running at CERN. So for this demo, I have three clusters of 20 nodes each. So the first thing we have to do is to create the federation. 
So we're actually working to integrate this functionality into OpenStack Magnum so that we don't have to do this manually. For this demo, I already pre-created it. We can see that we're starting with an empty federation, no clusters in. Like, well, that's no fun. Yeah. So the other thing we're also working on in OpenStack Magnum is to automate the management of the federation inside. For the moment, this is still being reviewed. So we will use the native tools from Kubernetes. So let's add the first one. So we just added one cluster to our federation. OK. I'm adding a second one, because one is also not fun. And let's see. So we have two here. So there we go. We just added two clusters to our federation, two separate Kubernetes clusters deployed with OpenStack. So to start the demo, we'll use this tool that our physicists use to submit their workflows for data analysis. I'm very familiar with physicists. Of course. Who isn't? So we have a couple of examples here. So we have one analysis to kind of look for the Higgs boson. Sure. Yeah, I mean, I'm sure you all look for the Higgs boson in your spare time. We also have dark matter. Dark matter. I don't know if we want to find that. And supersymmetry. Supersymmetry. OK. That's kind of scary, too. All right, let's try to find the Higgs boson, shall we? Let's go. Oops. Oh, no. No. You know, they made me promise not to make a black hole joke, but I feel like maybe we might have made one. All right, wait, wait, wait, wait, wait, we'll get it back. It's live. You can see. Go, go, go, go. Maybe we should have gone with the dark matter. It's coming, it's coming. OK, you can see on his command line. Let's go, let's go. Here we go. OK. So we'll try to submit this demo, just to show that it's live. OK. And then, so this will generate the workflows. It's submitting the workflow into this federation. So we'll have to wait a bit because these are complex workflows, so we'll have to wait a couple of minutes. 
But we can see already that there's one job running there in this Atlas Recast X, the third from the top. We'll have to wait a bit while the workflow is submitted. OK, so just to give everybody a little bit more background on this, so you're talking a lot about federation. I notice you've got some Kubernetes commands in here, and we know you're running on some different clouds, OpenStack and Google-powered clouds. Like, what kind of federation are you talking about here? So we have a lot of infrastructure inside CERN, but we need to expand our compute resources because we have this big detector generating a petabyte a second that we can't handle, so then we filter it. A petabyte per second? Yeah, so we can't handle that. So we kind of filter the data. We filter the data down to something like a couple of gigabytes a second, and then we can manage that. You've got to throw most of the data away just because of physics. You guys know something about physics. Exactly. And then we still store something like 70 petabytes a year, so it's still quite a lot of data. We need a lot of compute power to process all of this. That's crazy. So yeah, that's why we have been looking at technologies that would allow us to expand our cloud outside of CERN. So you're federating Kubernetes across multiple clouds. Is that correct? That's it. OK. And so then you have this containerized workload that goes to your physics workflow engine, and oh, look, something's happening. There we go. So the submission of the workflow started. So we are using two of the clusters, the ones we added to the federation. We can see x and y if you look carefully. The second one from the top is the federation. If you look at the UIDs, they correspond to the same UIDs of the jobs in the clusters that are part of the federation. So one thing we can see, as we are saying, is that we have all this resource. You actually think in UUIDs, I guess. That's scary. With time it comes. 
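As an aside, the data rates Ricardo quotes hang together arithmetically. Here is a quick back-of-the-envelope check; the round 2 GB/s post-filter rate is an assumption standing in for his "a couple of gigabytes a second":

```python
# Sanity-check of the CERN data rates quoted in the keynote.
# Assumptions (round numbers, not official CERN figures): the detector
# produces ~1 PB/s raw, filtered down to ~2 GB/s before storage.

PB = 10**15                      # bytes in a petabyte (decimal)
GB = 10**9                       # bytes in a gigabyte (decimal)
SECONDS_PER_YEAR = 365 * 24 * 3600

raw_rate = 1 * PB                # bytes/second off the detector
stored_rate = 2 * GB             # bytes/second after filtering

# The filter discards all but a tiny fraction of the raw stream.
reduction_factor = raw_rate / stored_rate
print(f"reduction factor: {reduction_factor:.0e}")      # 5e+05

# Sustained over a year, the filtered stream adds up to tens of
# petabytes, consistent with the "~70 petabytes a year" in the talk.
stored_per_year_pb = stored_rate * SECONDS_PER_YEAR / PB
print(f"stored per year: {stored_per_year_pb:.0f} PB")  # 63 PB
```

So the filter throws away roughly 99.9998% of the raw stream, and even the surviving trickle accumulates to tens of petabytes per year, which is why CERN needs to federate compute beyond its own data center.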
So we mentioned we have these other clusters here that are not being used because we didn't add them to the federation, so let's give it a go. So let's add the third one that we have at CERN. OK. Let's add T-Systems, which is this public cloud, so external. This is not at CERN any longer. And the final one is GKE, as we mentioned, so let's do that. And we should now see that we have the five clusters there. And now these jobs take a while. So it might be that it takes a couple of seconds to show here, but hopefully we'll see it here. So we're now running this workload for your physicists who are analyzing data, looking for the Higgs boson, on five different clouds using Kubernetes, all through the magic of the software that you all have been able to integrate and build. Exactly. So there you go. We have jobs running on all the five clusters in this federation. That's amazing. Yeah. Well, listen, I know it takes a few minutes. Thank you. Yeah, let's give them a round of applause. I only jinxed this demo a little bit, so it's not a big deal. Listen, I know it takes a little while to actually discover whether this is going to find the Higgs boson. Do you want to show us this before we go? Yeah, so here we see the workflow is quite complex. Like, physics is really complicated sometimes. So all these steps are individual jobs, and this is what's being submitted. So if we had one hour, we could wait for the result. But I guess we don't have an hour. But I happen to know that Ricardo is doing some sessions later today and throughout the week. So maybe go to the session and find out if we actually found the Higgs boson this morning live. Thank you so much. Thank you. Appreciate you joining us. Yes. All right. Well, I love live demos, and I was sweating a little bit, but it seemed like we got through with no black holes. Just to wrap things up, I want to point out one more thing about operators. 
I've been talking about the operators that I'm inspired by, how they have unique challenges versus those of us who are on the build side, building the tools that they put into motion. But the fact is it's really not one or the other. We have a lot of operators who are also builders. We often refer to them as super users, in fact. And some of these are just amazing operators who have looked at all of the tools available and found either ways to improve those to fit their use case, or in some cases created new technology to fill those gaps, to make it easier to operate, to make it repeatable. If you think about putting all those technologies together, even if you get it working right once, you want to be able to do it over and over and over again, upgrade it; lifecycle management of that whole stack is a real challenge. And one example of some operators who are also builders are some folks from SKT and AT&T. They're starting a project this week, just kicking it off, called Airship. You can check it out at airship.org. And one of the great things about this community of developers is that the first thing they wanted to do to drive collaboration was create a Git repo inside the infrastructure that our communities are used to collaborating on. And they're already getting pull requests, and you can see some of the folks' names here. So it's really, again, all about the people. And if you are as inspired as I am by these operators that are changing the way infrastructure is run, and you want to change the way you operate, I just want to welcome you to the community that builds and operates open infrastructure. Thank you.