our MC, Joe Weinman, to the stage. Thank you. So great to see you all back here today, which is like actually a miracle given the 101 traffic. So who came by helicopter? Nobody. It's very strange as MC trying to get here, because you feel like really you should be here first, and you're sitting there in traffic and you're like, oh no, it's not going to work out. So here we are. It's a jam-packed agenda with a lot of great keynotes, a lot of great panels. I'm not going to spend a lot of time on housekeeping except to say that out of the generosity of the OpenStack Foundation, you have received a highly valuable white paper, which you're probably sitting on, hopefully no longer. It's also available online at OpenStack.org, I assume. And because containers are just so hot, it addresses some of the latest developments and plans for OpenStack to address and integrate container management and things like that. So with that, perhaps it's no surprise that our first panel of the day will be addressing containers. And so we've got Frédéric Lardinois from TechCrunch, and a distinguished panel of true experts in the area. So without further ado, I will invite them to come out on stage. You're a panel of one. Panel of one? It's just me, apparently. Okay. Well, you want to interview me? Yeah, please. Containers are really good. Okay. Yeah. Bring them on. Okay. All right. So we're going to take a reality check on containers this morning. Get started. And we're one short, but that supposedly will be solved at some point. They can? Or should I? Well, why don't you get started? Because you want to give us an update on the container project. Most people probably know you, but why don't you introduce yourself briefly? Okay. Well, good morning, everyone. What's going on with Magnum? My name is Adrian Otto, and I'm your PTL for the Magnum project. For those of you that don't already know, Magnum is a containers-as-a-service solution for OpenStack.
You'll be reading about that in the little booklet you got on your chair this morning. The project was just incorporated into OpenStack recently. It was in, I think, March that we joined the OpenStack umbrella. And we currently have about 83 contributors to the project from 24 different affiliations. And the reason why I mention that now is because I gave that same figure when we met in Vancouver in North America, and I gave you the statistics, and those statistics were half of what they are now. And that community is extremely exciting, and I think it's exciting not just because of the technology that we're working on, but the way that we're doing it. We started with an idea that containers should be part of OpenStack, and we went to the community and asked how this should be done, and it all came together, rather than it coming from a single sponsor who had their own singular view of the world. It was much more of a collaborative effort. Did you feel a lot of pressure to integrate containers into OpenStack in some form? And how did the community approach you? Well, I mean, personally, I'm extremely excited by the technology. I have been since before it was popular. And we've been using containers at Rackspace at scale for many years, but when they started to become really accessible, it became obvious that this was going to change how developers perceived cloud, and that there needed to be an answer for how to connect these things together, because containers themselves don't really answer every problem that you have. All of the infrastructure problems are still infrastructure problems, and all of the application management problems are kind of a different set, and we needed a way to get these things to work together nicely. And so Magnum provides this concept of a bay, which is a unique concept that works as a security isolation boundary for where containers run. That was something OpenStack needed, because OpenStack has actually had containers for quite a long time.
For many years, we've had it in there, but it hasn't been safe to use in a multi-tenant way, because you could potentially have neighboring containers with hostile workloads that weren't properly protected from each other, and that would definitely be a problem. You introduced it in Icehouse, I think, right? The first release of Magnum? The first release of Magnum was for Kilo. Okay, for Kilo. It works with Kilo, and it works now with Liberty. All right. So what's coming? What's happening? So there's a bunch of new things since I addressed you in Vancouver. We now have support for the Mesos bay type, which I mentioned yesterday. This is Mesos with the Marathon framework on top. We now have a redundant, multi-master configuration for Kubernetes. Now, anyone who's set up Kubernetes knows that it's a lot easier to set up than some other systems, but to set it up in a highly available way, to set it up so that it scales really easily, is actually not as easy as it sounds. So having that all taken care of for you by OpenStack is a key benefit. So that's in there now, too. We also have some key security capabilities that we didn't have before. And I told you that I was going to try to make Magnum ready for primetime production applications by the Liberty release. And so by Tokyo we'll have all of this merged, which is basically a solution for TLS identity management, for access control and wire-level security, so that all of the communication within your Kubernetes clusters and within your Docker clusters and so on and so forth is properly secured and appropriate for public networks. And it's actually rather difficult to use TLS as an identity solution independent from something like Keystone. And so there was quite a bit of engineering work from a number of different companies working together to get that to work properly. So that'll be another key feature. All right. You think you'll be ready in time for Tokyo? Come hell or high water.
We'll make it work. You might also see a user interface show up. A bunch of the Horizon developers have expressed a sincere interest in the project, and we've got code already up for review that puts a Horizon plug-in together for Magnum. So we'll have something visually appealing for you to try out. All right. Great. Thank you very much. You're welcome. Why don't you guys introduce yourselves too now? We've talked briefly. Yeah. So my name is Sheng Liang. I work for a company called Rancher Labs, which I started a little bit less than a year ago. And we focus on container runtime and container management. You know, we have two products. One product is called RancherOS, which is a very small footprint Linux distribution optimized to run Docker containers. The entire Linux distro is 20 megabytes. You can actually use it to run KVM and OpenStack as well. I think there are some folks trying to do that. Our flagship product is a piece of open-source software called Rancher, which is a container management platform, a Docker management platform. And perhaps one way to think about it, it's maybe a little bit like Magnum, except it doesn't just work for OpenStack. It works for all kinds of different clouds. But that's a very rough comparison, since I'm following the Magnum introduction. I think that would probably be a good way to think about it. I'm really happy to be here at OpenStack Silicon Valley, because I've had some association with this community for many years. I always loved how OpenStack and these cloud efforts made infrastructure discussions really cool. I've never seen a group of engineers so passionate about infrastructure and coming together to discuss it. Before Rancher Labs, I was at Citrix for three years as the CTO of cloud platforms. So yesterday, Steve Wilson gave a keynote about Citrix's perspective on OpenStack, so I used to work with Steve.
And I got into Citrix because my company, cloud.com, was acquired by Citrix back in 2011. And we developed CloudStack, which many of you probably know. It became Apache CloudStack. But what a lot of you may not know is that cloud.com was a very early supporter of OpenStack. In the early days, back in 2010, we attended the very first OpenStack summit and really put a lot of effort behind promoting the project. So very happy to be back. All right. I was thinking of a short introduction, but it's fine. Derek, do you want to just quickly introduce yourself as well? I'm Derek Collison. I founded a company called Apcera about three and a half years ago. We're trying to flip the problem on its head a little bit. So multi-cloud, containers, deployment, orchestration, all of these things are important. But what we try to do is drive trust. And so three and a half years ago, I said let's flip the problem on its head and put policy, governance, and security at the base of the foundation. Don't bolt it on at the end. Since the foundation of the company, I would always talk about deploying diverse workloads. And our diverse workloads are anything from a greenfield app to a bare OS, but of course in between are container-based images and Docker, and we fully embrace them. But understanding how to secure the network in a multi-cloud environment, understanding the runtime, understanding what comprises a workload, and doing all of that transparently so DevOps can get their stuff done, but ITOps can feel like they're safe. That's the core of what Apcera does. I'll try to keep it a little bit shorter. I also created a messaging system called NATS, which is starting to get a lot of momentum around kind of a new way to do cloud-native infrastructure. Before that, I created and architected the system called Cloud Foundry at VMware, and then spent six years or so before that at Google. Well, the whole point of the session is to take a reality check on what's happening around containers.
So let's do that first. How many of you here are using OpenStack and containers together right now? In production? Nobody? Nobody yet? How many of you think you will use OpenStack and containers in some form together within the next six months to a year or so? All right, that's very different. How many of you think you can do it within a year? All right. Well, you probably have already. Yeah. It feels like there's a lot of hype in the container world still. When you're going out talking to vendors, talking to customers, they're probably asking about it, but what are their questions? What are their concerns? Derek, if you want to? I'll up-level it just a little bit. I think a lot of the things around containers that people have front of mind, security, networking, orchestration, persistence, how are these things going to get resolved? In my opinion, they're going to get resolved, and I'm not too concerned about that in terms of the time frame. But what I think is very interesting, and I'm interested in the audience's feedback and the panel's as well, is the macro theme around what's going on: everything we've been doing in IT, and I started this in the 80s, so it's been a long ride, we try to figure out a way to make it faster, better, cheaper, lighter, whatever that is, and we continue on that cycle. So I still remember the day when a PO went to Dell or HP, it took you six weeks to get a machine, and you put a single workload on the machine, and then the machine failed, and then rinse and repeat. What happens is that eventually you get to go so fast that you're not doing the same thing faster, you're changing the way you do things. And so for me, the big thing about containers is not that it's a faster way to orchestrate or, I mean, spin up workloads per se, but it's gotten to the point where it's so lightweight and it's so fast that it's fundamentally changing people's behavior in terms of architecting things.
So you're hearing things like microservices, right? It's always a good idea to decompose a system into smaller pieces, but it's usually a pain to figure out how to orchestrate and put all that together. But now the fact that we can decompose them, and there's not a massive tax around virtual machines and the memory, and that there are orchestration engines from OpenStack and Magnum and Rancher and Apcera's platform, of course, to try to take on the undifferentiated heavy lifting, it's changing our behavior, and that's the biggest thing. I mean, VMs aren't going to go away, right? They're the new legacy, but for me and for Apcera and I think the platform ecosystem at large, the bigger question is, how's it changing the way developers are developing systems? And that's the biggest thing that we've kind of attached ourselves to. Yeah. No, we're doing a company, Rancher Labs, that's like 100% focused on containers. So, I mean, obviously I'm personally very bullish on containers, and honestly, when I saw the number of hands that were raised from people who are using containers and OpenStack in production today, I would actually venture to say maybe the actual number could be even higher than we give ourselves credit for, because what I've seen is we talk to a lot of infrastructure teams, and they're running infrastructure and they see virtual machines getting scheduled every day. But a lot of times you might have less visibility into what's actually running inside the virtual machines. So it's entirely possible your DevOps team or your developers might be the... Oh, hi again. Sorry, I'm late. James, you made it. I got kept talking at breakfast. So I was saying it might be entirely possible that there are a lot more container workloads running, right? Without necessarily the infrastructure being aware of it today, which is totally okay. So I think I'm probably a lot more bullish than most people here.
You're looking at it from the technical perspective mostly, right? What are the demands you're seeing? Well, I think there's a big difference between hype and excitement, right? It starts with excitement, and it can turn into hype, but hype is something that we do when we're truly activated about something. We did it when cloud came around. We did it when software-defined networks came around, right? When you had programmable storage, we got excited about these things because they were breakthroughs. And now containers are truly at the point where they're beginning to break through. Does everybody agree that containers are a breakthrough? Just to put that out there. James, do you? Yeah, hi. Sorry, I'm behind. You know, I look at the biggest changes as really more organizational and architectural, I think. You know, I look at Netflix, and they built a series of microservices on Amazon, reduced their overall heap size and changed their whole architecture, and they did it with VMs. They did it with red/black deployments, and they got there. And I think they got a lot of the benefits that all these people that I talk to in enterprises are trying to get to. They're really focused on culture and app architecture. So my honest opinion is, where I tend to see this is, you could do this with VMs in the right way. There are obviously some really cool, you know, net positive benefits in terms of process spin-up and efficiency you get. But I tend to come at it from more of, what benefits did Netflix get from this regeneration, and how do you get that? They're now going to containers. So to me, that's the true exciting part of the market, which is the more cultural and architectural changes. And this is one of the differences from when VMware came out and gave you a fundamentally different hardware utilization ratio, kind of without change.
I think the container tech itself has been around a long, long time, even in mainframes and Solaris and stuff like that. So that's not ground-changing. The fact is that they're lighter weight, they're faster to spin up than VMs. And what's interesting to me is that there's this ecosystem of, hey, I can build faster now by building less. I build this one little piece and I assemble a whole bunch of other container images off the shelf that now, you know, with the help of Docker, which has really risen in terms of popularity, I term it the new tarball format for complex workloads. People will actually stand behind those formats. So Ubuntu, you know, Mark's here, will say, yeah, we'll support the Docker image that represents Ubuntu, and MySQL and Redis and all different types of things. So now, suddenly, it's assembling systems. You don't have to build them all. At least that's kind of one of the big themes, again, that I'm seeing that I think is advantageous. And whether it's containers, or maybe it's microtask virtualization in three years, we're not going to stop. We're going to keep going. But those are the themes that I think are powerful for us. Yeah. I mean, I think containers are extremely revolutionary, not necessarily in the sense that, say, it's a lighter-weight execution format or something that could, you know, potentially complement or displace virtual machines. I think that aspect has probably been around for a long time. But I get really excited when I see, you know, a lot of the early innovations that companies like Google and Netflix created around microservices, you know, automated deployment, seamless upgrades, sort of represented, which are nowadays getting codified in systems like, you know, Docker Compose and Kubernetes. And it's really making all that capability available.
And before containers were popular, standalone, you could get that, but you really had to, you know, maybe wholesale adopt a Netflix open-source project, or maybe, you know, the PaaS guys, by the way, have been doing this forever. So it's really not new. But just really bringing that experience down to the container layer so that a lot more people can start consuming it, I find that really exciting. Do you guys think that the end of virtual machines is near, or will they still be there in five or ten years? No. It's a legacy. You call it a legacy. Never going away? Why is that? Because there's a good reason why you want to use it. Okay. So 2003 was roughly the time frame that virtualization became available; between 2003 and 2006, it became widely adopted. Why is that? Well, before, it was very difficult to compartmentalize workloads and consolidate workloads into small footprints. And after virtualization, that became possible. And since 2003, right, we've got 11 years of innovation on top of that. For how do you manage, how do you do DR? How do you manage migration? How do you deal with storage management? How do you take care of networks, right? There's a ton of value in there that doesn't exist on bare metal to the same extent that people want. And they're going to continue to get it from virtualization. And containers are really not trying to solve an infrastructure problem. They're trying to solve an application bundling and distribution problem, making it more efficient for developers, eliminating environmental drift between your test environments and your production environments. These are extremely compelling reasons to be using containers that have nothing to do with what virtual machines are good at. So you'll see them used in combination for a very long time. I think, you know, on VMs as a legacy thing, I'll agree and disagree with Adrian just a teeny bit.
I think with a lot of the new architectures, the new way of thinking, when people are talking to vendors around different next-gen platforms, I don't think they're going to be thinking VMs unless it's at the lowest layer of the infrastructure. I think the patterns are changing. And I think they're not going to be stuck on containers. So I can promise you in five years we won't be talking about containers. We'll be talking about the next thing that's even lighter weight and provides better security and can be moved around. But VMs, pardon me, they're a part of our legacy, and certain workloads need them. But again, if you look across a very large scale operationally, what you're paying for the most when you're looking at your budget, outside of headcount, which is the only thing in IT that gets more expensive, it's memory. And so anything that's consuming lots of memory starts to become an issue, where you start decomposing one thing into ten different things, and it's still running in a JVM, which is kind of big, and then it's running in a VM on a hypervisor, which is kind of big. And the actual functionality that you do could probably run in two meg. Eventually that shows up on a spreadsheet, and eventually you change your behavior based on your cost structure there. Adrian, you rolled your eyes a little bit there when he started talking about... Oh, you read my body language. I shouldn't project my body language. Look, this is the reason why companies like Google use this technology. This is why it showed up in the kernel to begin with: because if your goal is to run a very large-scale system efficiently, you need a way to have not just the provisioned capacity, but the actual utilized capacity be the thing that you're optimizing for. And virtualization is the wrong model for optimizing for memory consumption. I completely agree on this point. But the reason why virtualization has made so much sense for the last 11 years is that our workloads do fit on one box.
Those legacy workloads do fit nicely on one box. Those boxes are getting bigger and bigger and bigger as we get better and better at making chips. But now that we have the ability to generate data faster than we've ever been able to generate it before, we're producing problems that are too big to fit on a single machine. And we do need big distributed systems in order to use these immense amounts of data. And so we need an answer for how to do that well. And containers offer a glimpse of hope to be able to do that in a smart way. So I think virtual machine means different things at different times. But the virtual machine as a server consolidation construct, a resource isolation concept, I think these will be around for a long time. But one aspect of the virtual machine that's also been used in the past is things like, you know, an application packaging format, like the AMI, you know, and I really think it's been a huge advantage of AWS to have such a rich AMI library, which OpenStack still lacks today. And there's been a lot of effort trying to beef it up. And that's the kind of aspect where I think containers would definitely be able to offer a lot of complementary value. So that's why I actually think, regardless of virtual machines versus containers, I just think OpenStack and containers is such a great, you know, complementary fit, because just the value of bringing in this extremely rich set of container images is just so powerful, you know, to an infrastructure system like OpenStack. The question I would ask, and I'm sure we'll have opinions on both sides, but assuming that containers can solve some of the security and isolation problems, and I think they will, does it make sense to have a container on a VM on a piece of hardware? To me, it doesn't. Long term. Maybe right now. From a performance perspective, it does not. Just from any perspective. From a security perspective, it will, and I want to give you a technical answer for why. Okay.
If you have two containers on the same host, the security isolation between them is the kernel. It's the syscall interface in the kernel, which as of version 3.14 of the kernel is something like 382 different system calls. That is an extremely wide attack surface that is extremely difficult to secure unless you know exactly what you're doing. So if you're a top-tier service provider, can you do that? Yes, probably. If you've got some balls, yes. Google can't. What are you talking about? Yes, they can't. Google Compute and the whole thing, they use virtual machines under their containers. Let me finish. Let me finish. When you have virtual machines that are side by side... They're pretty good with containers. Virtual machines that are side by side on the same host, the thing that is separating them is hardware virtualization, which by comparison is tiny. It's a tiny attack surface. It's much more realistic to secure that interface between the two things. Okay. So you're going to have a much lower probability between VMs that you're going to get through. Okay. And you didn't let me finish. You didn't hear my whole point. Okay. It doesn't make a whole lot of sense for you to run neighboring containers that belong to hostile workloads, because the probability that they're going to break through the syscall interface is high. Okay. Unless you know exactly what you're doing. And I'm going to say that most of us in this room don't have that level of prowess and capability. So we need other ways to do isolation. Virtualization is a great way to do isolation. Having separate virtual machines is one way, or physical machines is another way to do this. Right? So Magnum's answer to this is to embrace the fact that this is a weak security barrier and not use it for security isolation. It's not the right tool for the job. Virtual machines are the right tool for the job if you're trying to do security isolation.
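A rough way to picture what Adrian is describing: between two containers on one host, the isolation boundary is the kernel's syscall interface (the roughly 382 calls he cites for Linux 3.14), and hardening it means allowlisting only the calls a workload actually needs, seccomp-style. The sketch below is a toy model with an invented allowlist and invented call names; it does not touch a real kernel.

```python
# Toy model of narrowing the syscall attack surface, seccomp-style.
# The allowlist and the simulated calls are invented for illustration.

FULL_SYSCALL_SURFACE = 382  # rough count quoted on stage for Linux 3.14

# A workload that only reads, writes, and exits needs a tiny slice
# of the full interface.
ALLOWED = {"read", "write", "exit_group"}

def guarded_syscall(name):
    """Simulate a seccomp filter: block anything outside the allowlist."""
    if name not in ALLOWED:
        raise PermissionError(f"syscall '{name}' blocked by filter")
    return f"syscall '{name}' permitted"

print(guarded_syscall("read"))
print(f"exposed surface shrinks from {FULL_SYSCALL_SURFACE} "
      f"calls to {len(ALLOWED)}")
```

Real deployments would rely on the kernel's seccomp-bpf facility rather than anything like this; the point is only that a 382-call interface is far harder to reason about than a short allowlist, or than the comparatively tiny hardware virtualization interface Adrian contrasts it with.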
If you're trying to do maximum efficiency and memory utilization, then containers are a better tool for the job than virtual machines. So it really depends on what you're after, which tool you're going to use. I actually agree with Adrian very strongly, but not just because of the VM as an isolation boundary. Containers as an application packaging format are not the first application packaging format. We had JAR files. We had tar.gz. We had lots of other things. And I think virtualization provides more than enough value to stand on its own as containers come along. Because you could have made the same argument, say, 15 years ago, before virtualization was even established. Why would anyone ever want to use virtualization? Especially back then, servers were actually a lot less powerful. And virtualization was a lot less efficient as well. And it certainly made a lot more sense to run the Java VM or a .NET application on bare metal than on a virtual machine. And at the end of the day, virtualization won out. And I think there are just a lot of things virtualization does for us that we take for granted. We don't all run the same operating system. There are so many Linux distros, and we know why no one converges on the same Linux distro, which a containerized system would force you to do. So the way I see it, it's going to be around for a long time. James, do you disagree or agree at this point? Have you been convinced? I mean, I actually agree with Adrian. I just use Google's public cloud as the bar: if you think you can go 100% containers and bare metal and all of that, I would expect them to be the people that can do it. This is 2015. All right. If we set our lens to 2020... Yeah. Okay. We might have very different hardware with very different features. Virtualization is safe today. Yeah. Safe. I'm going to use that in quotations. Okay.
It's relatively safe today because there's hardware support that keeps it secure. I mean, my take on this whole conversation, though, is it's a bunch of infrastructure discussion that's separated from fundamental use cases and workloads. And what we've seen is that we really help people write microservices applications, because how are you going to go consume this new distributed, rapidly updateable infrastructure? And so as much work as we've done at the container layer, and that's all exciting, where we really help people is we have a project called Spring Boot, which is a very radically reduced surface area for Java programming that embeds the runtime and the app server into it. And we go into companies and they're like, hey, Spring Boot actually radically reduces the surface area of what we actually program to. It's very easy to move around. It's almost sort of like the Docker for Java. It's like taking it up a level of abstraction. And so that's where I'm really fascinated, because then we actually help people to write the apps that matter on this infrastructure. And what I see in the absence of that sometimes is people that are like, oh, I've got to do Docker, I've got to do Docker. And they jam their old monolith into a container, run it, and they're like, ta-da, I did it, right? And I think we philosophically betray them by sort of giving them the idea that they can win at an infrastructure layer before they win at an application layer. So that's more my perspective. Over time, the physics of how we implement these lower-level system calls I think will change. But that's what fascinates me. It's more of the upper-level stuff. Yeah, I think what I've seen over the last couple of years, and I'm very adamant about it, is that whether it's an infrastructure play or a platform technology play, we've really shifted from being opinionated to, you really need to be un-opinionated. It's even like with Spring Boot. It's like, assume you have Java. It's opinionated.
You have a Java system. My gut is that the un-opinionated systems will win. And so even though Docker does have an opinion, because it's based on a Linux kernel, they kind of let you throw everything in there. And to James' point, a lot of people will start with that as a first step. But you've quickly watched the ecosystem move to where they're getting very, very small payloads, very, very precise, that only do one thing. And that's kind of what people are pushing. To Adrian's point, for the here and now, I actually agree. But what's interesting, as we look at ourselves as people, we talk about hardware as a cool thing. We're running off 50,000-year-old hardware in between our ears. Our ability to think exponentially is zero. We can think linearly. The only way we see exponential is when we look backwards. So in 2020, all I can promise is everything that we think's going to be happening will all be wrong. And things are going to move so, so fast that we'll look back and go, oh, oops. And I don't have the answers. And so I try to separate between what's going on now, like even Spring Boot is relevant and applies if you use Java. In 2020, all the rules that we think are going to exist will not. It's hard to follow that one up. All my predictions are wrong now. Okay. Kubernetes and Docker Swarm are by design fundamentally different things. Yes, they are both trying to cluster container applications for you. Okay. But let's talk about Kubernetes first. Kubernetes is what we refer to as a declarative system.
Which means you describe the result that you want. And you present that to the system. The system has the magic built into it. It has the complexity built into it. The system carries out the work for you and produces the outcome. So a YAML file goes in. An orchestrated application comes out. Okay. That is very different from the scenario you would get from a Docker Swarm use case. Which is what we refer to as an imperative system. Where the instructions, the actual process, are not built into the system. The system is stupid. And it just follows the directions that you give it. Okay. So you give it these instructions. And it does exactly what you say. And you get an outcome. But the inputs to these systems are very, very, very different. Right. There's a very simplistic input to the declarative system. And there's a very rich and complicated input to the imperative system. Now, the advantage of the imperative system is that if you want to modify the process, you don't go and do software development on the system and modify the system itself. You modify the instructions. And you get the different behavior. So if you care about customizing the behavior, then you really want an imperative system. But if you don't care, and you're just going to consume whatever functionality the system knows how to do, you're like, great. The best practices are built in there. The best practices are good enough for what I'm trying to do. Then you describe your application and you feed it in and you're really happy. So there are different motivations for using completely different kinds of systems. And that's why they exist separately. That's why there's not just one. And also, they're opinionated, right? Kubernetes is an opinionated system. Apache Mesos is an opinionated system. Maybe a little bit less opinionated. Cloud Foundry is the most opinionated. Sure. But they're different opinions. And by far, maybe 10x, the highest revenue system. Only an order of magnitude.
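The declarative-versus-imperative contrast Adrian draws can be made concrete with a toy sketch: the declarative side takes a desired state and a reconcile loop converges on it, while the imperative side executes the given commands verbatim. Everything here (the function names, the "web" replica names) is invented for illustration and has nothing to do with the real Kubernetes or Docker Swarm APIs.

```python
# Toy sketch of declarative vs. imperative container orchestration.
# All names are invented; this is not a real orchestrator API.

# --- Declarative: describe the result, the system does the work ---
def reconcile(desired_replicas, running):
    """Converge the running set toward the desired replica count."""
    while len(running) < desired_replicas:
        running.append(f"web-{len(running)}")  # system decides how to act
    while len(running) > desired_replicas:
        running.pop()
    return running

# Desired state goes in; an orchestrated application comes out.
desired = {"name": "web", "replicas": 3}
state = reconcile(desired["replicas"], [])
print(state)  # three replicas, however the system got there

# --- Imperative: give explicit instructions, the system follows them ---
def run_commands(commands, running):
    """Execute each instruction exactly as given, in order."""
    for verb, name in commands:
        if verb == "start":
            running.append(name)
        elif verb == "stop":
            running.remove(name)
    return running

# The process lives in the instructions, not in the system.
state2 = run_commands([("start", "web-0"), ("start", "web-1"),
                       ("start", "web-2")], [])
print(state2)
```

The practical difference Adrian lands on falls out of the two shapes: to change the declarative outcome you change the desired state and trust the built-in process, while to change the imperative outcome you rewrite the instructions themselves.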
So every declarative system has a different opinion about how something should be done, OK? That's the whole point, isn't it? Right. So there's going to be some need for choice. Because you're not necessarily going to agree with this opinion or this opinion or that opinion. And you're going to need to have some options. And so Magnum recognizes this need for different kinds of expectations and provides some way that you can meet those expectations. All right. Well, thank you all for your opinions. I think we're getting kicked off the stage right now. No, I was just walking for exercise; I ate that pastry this morning. Smart man. Thank you guys very much. Thank you. So of course, the key function of OpenStack Silicon Valley is to help provide you clarity on your future application development direction. And so I hope that that really cleared things up. You definitely want to use either bare metal or type one or type two hypervisors or containers or higher level abstractions or none of the above. OK. So who plans on using all or none or something else? OK. I thought so. All right. We now have a sponsored few minutes with Jonathan Davidson, or rather Donaldson, from Intel, who's going to, as one of the sponsors helping you enjoy your spacious surroundings and lovely desserts and pastries, say something about what Intel is up to. Great. Thanks, Jonathan. Thank you. A round of applause for Jonathan. I think Jonathan Davidson actually works at Juniper now. But we both worked at Cisco at one time. So welcome, everybody. I'm Jonathan Donaldson with Intel. They're supposed to have my speaker notes up there, but I put them on here instead. Oh, there we go. So I just wanted to say welcome to everybody. I get the privilege to be able to introduce day two to everyone. And so we had a great list of presentations and discussion yesterday on panels. We had panels on innovation with the other Jonathan, Jonathan from the foundation.
We had essentially containers and you, both individually from Craig, Alex, and Mark. Always interesting dialogue with Randy. And finally, we heard from Diane yesterday on Intel's journey in this space and why we care so much about seeing OpenStack succeed. Today is another day packed full of discussion, debate, and education. So I implore you to spend as much attention and time as you can with the subjects. We're going to hear from Mark Shuttleworth shortly here right after me on how you operate OpenStack and some other interesting tidbits around that. Adrian will be up here. He's always enlightening on microservices. We'll hear about tooling and the list goes on and on and on. I think this is just, for me, one of the most interesting events for OpenStack that I've been to. And I've been with the community on kind of a more personal level for about two years now since I joined Intel from another company. And, you know, much like my kids, right, I've seen it grow significantly in those two years. And it's transitioned from, you know, kind of the mindset of, boy, I really wish I could do that, right? To more recently seeing, oh, I can do that. I just want to do that better than anyone else, right? So it's that kind of change and shift in mentality that I think is incredibly interesting in this community. And I want to see more and more of that type of thinking. I heard an odd comment yesterday from one of the analysts that we spent time with, and he said, aren't you worried about all of the projects that spin up around OpenStack, right? There's, you know, 600 Git repositories or something like that around OpenStack. And I said, no, I said, actually quite the opposite. If I stopped seeing new innovation and new projects spinning up around OpenStack, that's when I would seriously get worried, right? So I think, you know, Intel has big plans. You heard yesterday from Diane for public and private cloud and hybrid cloud, absolutely.
OpenStack plays a significant part of that. And, you know, what I'd like to do is, you know, on behalf of myself and Intel is to thank you very, very much for all the hard work that you've done up until this point and your dedication in making OpenStack a reality and what it is today. And I would like to, you know, challenge your innovative spirit, right, to help make OpenStack what it will be in the future. And with that, you know, I'd like to thank everybody for being here. I'd like to thank you for your attention. Thanks for your dedication and have a great day.