Good morning, OpenStack. Please welcome Mark Collier. Day two. You survived day one. That's impressive. It's been really amazing to get to reunite with old stackers from all over the world. I think Jonathan mentioned we have people from 63 countries here, which just blows my mind. And also just getting to meet a lot of new people. You know, we kicked off the forum, a lot's been going on, and everything I've heard from the first day has been great. So it's awesome to have you all here. I just wanted to start off by talking about something that kind of clicked for me recently, which is that we've reached this insane moment in history where all of the big things we're trying to accomplish rely on computing power. And that's a little bit of a crazy thing to think about. I mean, when I was a kid, I was just hacking on Apple IIs and, you know, skipping class to play Doom with my buddy Maxwell in our dorm room. So I didn't think that the whole world was going to be turning to computers and us computer geeks to actually build the tools they needed to do real work. But here we are. So it's a big responsibility, and we've got to rise to the challenge now. I mean, think about everyone who's trying to cure diseases, automate transportation, build augmented reality, everything that's really kind of wild and crazy out there in human achievement: they're counting on us, the computer geeks, to come through for them. You know, we've reached the point where all science is essentially computer science. I didn't see that coming, but it's a big opportunity. It's pretty cool, but it's also a huge responsibility. So as we work together to build the tools that people in every discipline are now counting on to make breakthroughs for humanity, we've got to make sure they work really well together, because if the tools break, we're not doing our job.
And we've also got to make sure that they're accessible to everybody on the planet. So today I want to talk about a lot of these tools that are emerging, and how we can do our part to make them work better together, because everybody's counting on us. And when it comes to making them accessible, there is some really good news. You know, it's not a new phenomenon that people are relying on computational power for breakthroughs, but I do think that this open-source-first methodology is relatively new. It's now considered accepted best practice that if you're solving a really hard problem, yes, it's going to be in technology, yes, it's going to be in software, but for the first time, open source is considered the best way to solve hard problems. So that's a pretty special thing for all of us who are involved in open source. Take an example here: AI and machine learning. This is the hottest area right now. So much has been written about it. It's captured our imagination as we think about, you know, how our planet unfolds over the next 10 to 20 years. There's probably nothing that's going to have a bigger impact than AI as a category, then machine learning, and then deep learning being kind of the latest ability unlocked by just massive computational power. And I guess, apparently, we're also going to revolutionize the tennis game for robots. So we've got a lot on our plate, is my point. If you look at AI and machine learning, these are things being led by the powerhouses of technology, the companies that have amassed the smartest talent they can possibly pack into their companies, on missions that will fundamentally determine the success or failure of their entire company. The biggest tech powerhouses in the world: companies like Google, Facebook, Microsoft. And for the first time, when they're solving what they consider to be the most important problems, they're doing it in the open.
And they appear to be doing it out of competitive imperative. You know, there was a time not long ago when there was a flurry of announcements from Facebook, Google, Microsoft: ours is open source. No, ours is open source. We want a bigger community. Well, we're going to build an even bigger community. This is a very welcome trend, in my opinion, having worked in open source and worked on OpenStack. So I think that's a pretty big turn of events. And with all that's been written about machine learning and AI, I mean, I've read a stack of sci-fi books this high about it, and half of that stuff is coming true, I think the role of open source in solving these problems, just in the last year or so, has actually been under-reported. And these are some of the projects that are emerging. And if we look at other hard problems in computer science being tackled, of course, they're all related in some way to cloud, because that's the way everything's being done these days. From big data to application management, on and on, some of the most interesting work is being done in the open. And if you look at CI/CD, there's Spinnaker, and Zuul is actually a project that was created right here in the OpenStack community, so give them a special shout-out. But everywhere you look: Kafka for real-time data, CockroachDB, that's kind of a crazy-sounding database. We'll see if we get to learn more about that a little bit later. We may have a demo, so there's your preview. What do all these things have in common? A couple of things. They're composable and they're cloud-native. Composable means that they're built to work together with other pieces in the system. You know, the days of a monolithic application that tries to solve every single aspect of the stack are dead. Every one of these components is built to rely on other parts of the system, higher in the stack or lower in the stack, depending on how you organize it.
Putting them in a stack is sometimes a challenge, because we're talking about distributed computing. But the notion that we need specialization in these different projects, to provide services that have expertise, is really proliferating. I think we're going to go further together as we see this continue to happen. So the work is being chopped up into smaller and smaller components, where individual projects are formed and standards start to emerge. And the key is, if you want to compose open infrastructure, you need good interfaces between all these components. So the opportunity to create more value by putting different projects and services together is greater than ever, but it's also a challenge if we don't create them in a way that's really designed to be composable. Cloud-native, the whole concept there, is that every one of those projects I showed earlier was built with the assumption that there's going to be programmable infrastructure it can rely on. They just take that for granted as a solved problem. And that's a pretty nice world to live in. If you can write software where you no longer have to think about the underlying hardware, about compute, storage, and networking, you just believe it's going to be there. You believe somebody else is handling it in some software, some service. That's very appealing, and that's why there's been so much innovation since infrastructure became automated. Certainly Amazon, Google, Microsoft, and the many OpenStack clouds out there have helped make that a reality for the developers building these other tools today. So that's where OpenStack comes in, of course. We've been working on automating compute, storage, and networking for seven years, and this is not a small problem. This is actually something that is extremely complicated.
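To make the interface point concrete, here's a minimal sketch in Python of what "designed to be composable" means: a higher layer codes against a small contract, and any backend that honors the contract can be swapped in without touching its consumers. All of the class and function names here are hypothetical, invented for illustration; they're not taken from any OpenStack project.

```python
from abc import ABC, abstractmethod

class BlockStorage(ABC):
    """A tiny storage contract. Any backend implementing it can be
    composed into the stack without changing the layers above."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict: ...

class InMemoryStorage(BlockStorage):
    """Toy backend that just records volumes in a dict."""
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        vol = {"name": name, "size_gb": size_gb, "status": "available"}
        self.volumes[name] = vol
        return vol

def provision_app(storage: BlockStorage) -> dict:
    """A higher layer composes against the interface, not the backend."""
    return storage.create_volume("app-data", 10)

backend = InMemoryStorage()
vol = provision_app(backend)
print(vol["status"])  # available
```

The point of the sketch is the seam: `provision_app` never names a concrete backend, which is exactly the property that lets projects specialize at each layer and still snap together.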
So it's great that all of the new tools out there can take for granted that this works, that it's a solved problem. But it's through the hard work of everybody in this community that we can now feel confident we have solved it for all those different uses. And if you think about the range of things in your data center you want to automate, it starts with bare metal, it goes to storage of all types, block and object, et cetera, and we now have file storage and identity management. I mean, these are things that every single cloud operator needs and applications expect to be there. And perhaps the hardest and most ambitious of all was the software-defined networking and network automation work that's happened through Neutron. And if you look at each of the projects that you can compose to solve those problems, they all have names. You know, sometimes we just talk about OpenStack in the abstract, or as one platform. But the reality is, as people want to compose open infrastructure, there's a great opportunity to get to know each of these individual OpenStack projects; the teams and the PTLs are all here this week. And this is what actually provides that capability to all the applications that are relying on it. At the end of the day, the key point here is that we are dependent on each other, relying on each other at each layer of the stack to do our part. And so Swift is for object storage, Ironic is the bare metal service, Neutron is networking, Cinder is block storage, and Keystone is that identity management service that I talked about. And to make them a little more accessible, we gave them all these cute little mascots. So if you want to know about Ironic, get to know Pixie Boots here; I believe that's what we call this guy. This is going to help people think about how OpenStack can be used as individual components.
Swift is probably one of the most forward-looking projects in that, from day one, it was built to be composable, meaning it was built to be used with or without other OpenStack components. So shout out to the Swift team for thinking ahead on that. And we've seen lots of users that have other automation systems but just want object storage, and they're able to bring in Swift. And so if we think about the composable open infrastructure landscape, this is just an example to show you what's possible out there when you combine best-of-breed tools at each layer. So yesterday eBay talked about how they have a massive OpenStack cloud, and they run a ton of Kubernetes on top of that. I thought it was kind of funny because they said, this is the Kubernetes portion that runs on top, it's not very big, and it's still thousands and thousands of VMs and nodes. So it looked pretty big to me, but at eBay scale, that's considered an experiment on top of the much larger OpenStack cloud. But they talked about how machine learning is the resource that people want more and more of, and they did live dynamic provisioning to add more. There's an application management platform for helping you manage the lifecycle of the application, and then, of course, as I said, all of these things take for granted that there's programmable infrastructure underneath. So if you wanted to build that yourself, if you wanted to compose some open infrastructure, you've got a lot of choices out there. You might pick TensorFlow, which is the project that came out of Google, for the machine learning piece, but there are, as I said earlier, a number of options. Kubernetes is certainly the most popular in our community for application management, according to our user survey, and then OpenStack provides the set of services for that programmable infrastructure layer.
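As a rough illustration of that composed stack, a machine learning workload might land on Kubernetes as something like the following Job manifest, with the cluster's nodes themselves being VMs provisioned by OpenStack underneath. The names, image, and training script here are placeholders for illustration, not details from the eBay deployment.

```yaml
# Hypothetical Kubernetes Job for a TensorFlow training run.
# Kubernetes schedules the pod; the node it lands on is a VM that
# OpenStack Nova provisioned, with networking from Neutron and any
# persistent volumes potentially backed by Cinder.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model            # illustrative name
spec:
  template:
    spec:
      containers:
      - name: trainer
        image: tensorflow/tensorflow:latest   # official TF image
        command: ["python", "train.py"]       # hypothetical script
      restartPolicy: Never
```

Each layer only talks to the one below it through an API, which is what makes swapping any single layer plausible.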
And so when we think about composable open infrastructure and how the world is depending on us to solve these crazy hard problems, it's really not just infrastructure as a service. I'm using infrastructure here in a bit of a broader context. It's really infrastructure for ideas, for tools for decision making. This is what humanity is counting on. So we've got a lot on our shoulders now that the world has realized that computing is the best path forward, whether you're trying to go to Mars or cure disease. We've got to build the right infrastructure for ideas. And, you know, those applications, those application frameworks, and that infrastructure: we're all dependent on each other. We need each other. And our users need us to make those tools work together. So if we need each other to make that stack a reality, then we've got to work really closely together in order to serve our users and everyone who's relying on all that infrastructure for powering the ideas of the future. However, there are a couple of things standing in our way. So, you know, when I was wasting time playing Doom in my dorm room, when I should have been going to computer science class, I thought that maybe I would end up working on computer games, but instead we just end up with cool graphics. This is the big bad guy that you've got to defeat at the end of the game. So there are a couple of things on his mind that we need to overcome. One of those is complexity. Lauren Sell and Thierry Carrez talked about this yesterday. We all know that we're living in increasingly complex times when it comes to data centers, and all these tools and stacks are proliferating. So if we're going to compose all these things, that's actually making things more complex in some people's minds.
But we've got to be responsible within the OpenStack community and make the components that people are going to pull in as they compose open infrastructure less complex. We've got a really cool demo here in just a few minutes to show some progress in that area, but working hard to fight complexity really matters. And this is a theme we hear time and time again, and we've already seen some progress in the forum just this week, with operators and developers getting together and saying, okay, you said you want it to be less complex; what does that mean? And as Thierry said yesterday, we've started to remove and deprecate certain features that weren't as commonly used in some projects. We're removing configuration options. We can't just have more of everything and expect that to be okay. We've got to really be disciplined about making things less complex, easier to consume, and therefore easier to compose with all the other things happening outside of the OpenStack world. The second thing that we have to guard against, and this is the hardest one, is the Not Invented Here syndrome. You've probably all heard of this: NIH. There's something in human nature, I believe, that drives us to want to reinvent, to rewrite, to do things over and over again, and repeat history. And sometimes that's the right thing. Sometimes we need to rewrite a tool. Sometimes we need to start from scratch. But oftentimes, that's a real waste of time. And with everybody in the world counting on us to build tools to solve their problems, we've got to make sure we don't do too much of this. And this is, again, I think one of the hardest things that's happening. I mean, to make a brief political aside, there's a lot of nationalism, let's say, that's become popular around different parts of the world. And it's a little bit like the NIH syndrome.
If it's not from my country, from people in my country, you know, I'm not interested. And I know, certainly, that this community can overcome that. I mean, we have people from 63 countries right here. So we work across boundaries every single day. That's what we do in the open-source community. But we have to make sure that we don't fall into this trap when we're looking between communities. So when we look at another open-source community and say, well, they're not like us: they use Slack. Oh, my God, no. We use IRC. You know, they use GitHub. We have our own Git repo. People are going to have different cultural norms and make different decisions, but we have way more in common, way more in common, with other open-source communities. And there's so much at stake that we're really starting to make big strides in this area. To give you a couple of examples: one of the things that we launched a couple of years ago is the app catalog. It turned out it really didn't need to be invented and live in OpenStack. So we've recently made the decision as a community, a hard decision, in the open, with a lot of discussion, to wind that down. So we're starting to make those hard calls. On the other side of it, when it comes to looking at open-source tools that already exist: just yesterday in the forum there was a discussion that I heard about from Dims, who's out there somewhere. We've been discussing the need for distributed lock management. This is something that a lot of different OpenStack projects need to rely on. And so we decided, let's not reinvent the wheel. Let's use etcd. So that's the direction we're going: incorporating that as an expected service in OpenStack clouds going forward. So we're going to do a lot, and we're going to ask a lot of other communities. For example, Kubernetes: if you look at block storage, persistent storage, there are all kinds of efforts popping up.
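To give a feel for what a distributed lock manager provides, here's a toy Python sketch of the compare-and-swap pattern that stores like etcd expose for locking. The real thing is replicated and fault-tolerant, and adds leases so a crashed holder's lock eventually expires; this in-memory stand-in (all names invented for illustration) shows only the acquire and release semantics.

```python
import threading

class FakeKV:
    """In-memory stand-in for a distributed key-value store with a
    put-if-absent (compare-and-swap) primitive. etcd offers this as a
    transaction guarded on the key not existing yet."""
    def __init__(self):
        self._data = {}
        self._mu = threading.Lock()

    def put_if_absent(self, key, value):
        with self._mu:
            if key in self._data:
                return False      # someone else holds the key
            self._data[key] = value
            return True

    def delete(self, key):
        with self._mu:
            self._data.pop(key, None)

class DistributedLock:
    """Lock built on put-if-absent: whichever service writes the key
    first owns the lock until it releases it."""
    def __init__(self, kv, name, owner):
        self.kv, self.name, self.owner = kv, name, owner

    def acquire(self):
        return self.kv.put_if_absent(self.name, self.owner)

    def release(self):
        self.kv.delete(self.name)

kv = FakeKV()
a = DistributedLock(kv, "locks/migrate", "service-a")
b = DistributedLock(kv, "locks/migrate", "service-b")
print(a.acquire())  # True
print(b.acquire())  # False while service-a holds it
a.release()
print(b.acquire())  # True
```

Standing up one battle-tested service with these semantics, rather than each project reinventing its own, is exactly the "don't reinvent the wheel" call being described.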
In that community, in that ecosystem, different companies are starting up and saying, we're going to be the persistent storage company. Well, that's certainly their right, and people are going to experiment and try to go the right direction for their community. But we have this thing called Cinder. It has 80 different backends. It's incredibly easy to stand up and consume in an independent way. So those are some examples where I think, if we work together, we can actually go a lot further by looking across communities. And I think that's one of the things we're going to try to accomplish this week.