Well, next up, we've got a quick video from Intel, and then they'll be telling us about what they're working on these days in the land of OpenStack.

At certain moments, the rules change and a choice confronts you: stay the course and decline, or disrupt and accelerate. Andy Grove called these moments strategic inflection points and led Intel through several epic inflections. In a crucial transition, Intel shifted focus from its primary business, memory chips, to microprocessors. It was a wrenching experience filled with risk, but a timely choice that changed the world. Today, the emergence of data-rich cloud computing threatens to overwhelm the data center. Intel is responding with breakthrough technologies: silicon photonics for game-changing connection speed, and non-volatile memory for fast, vast data capacity. Risking current success to create future opportunity, embracing strategic inflection points: it's in Intel's DNA. Is it in yours?

Ladies and gentlemen, please welcome Imad Sousou.

Good morning, everyone. Today I want to talk to you about what I think is an inflection point, or how I feel we are at a crossroads in our OpenStack project. As all of us know, there are times when, faced with certain changes, we get one of two choices: we either act and end up flourishing, or we do nothing and most likely end up in decline. This is what Andy Grove called the two options at an inflection point: you either adapt or you die. And to us, it feels like OpenStack is at one of those moments, especially in a couple of areas, orchestration and scheduling, and, probably equally important, the development process that we use in OpenStack. We are at this inflection point, at the crossroads, where we do need to make a certain amount of change to keep our great project flourishing.
So now imagine how you would do OpenStack knowing everything that we know today: knowing that it's not only about VMs, that there are containers, that there's bare metal. What would you actually end up building if you were doing OpenStack from scratch, with all the data that you have today?

At Intel, this is exactly what we did. A handful of engineers, five or fewer, got the assignment: call it an experiment with no boundaries, none of the rules imposed in the project or anywhere else, to reimagine how this would look. As usual, they went out and spoke to customers; they were already part of the community, so they understood the state of the technology. They spoke to a lot of end users, so they understood all the pain points. And they ended up with what is effectively a redesign and re-implementation of the scheduler and a few key pieces of OpenStack, done in the Go programming language.

Along the way, they wrote down five criteria, five requirements or areas that they had heard from everybody and really wanted to solve. And by the way, none of this should be new to anyone. All instances are equal, or should be equal: right now we have three schedulers, Nova, Magnum, and Ironic, to manage VMs, containers, and bare metal. So why are there three? Same thing with built-to-scale, scale and performance, upgrades, deployment, security: all of these things are fairly well understood, and it really took a bit of work just to step back and do a complete re-implementation to see what it would look like. And now I'm going to show it to you. I'm not as brave as others, so it's not a live demo; it's a video that we shot a few days ago.
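The "all instances are equal" requirement can be made concrete with a small sketch. This is not the actual Intel code; it's a minimal, hypothetical Go illustration (all type and function names are invented) of the idea that one scheduler can place VMs, containers, and bare metal through a single abstraction, instead of three separate schedulers branching on workload type:

```go
package main

import "fmt"

// Kind distinguishes the workload types that today are handled by
// three separate OpenStack schedulers (Nova, Magnum, Ironic).
type Kind int

const (
	VM Kind = iota
	Container
	BareMetal
)

// Instance is a single schedulable unit. The point of "all instances
// are equal" is that the scheduler never branches on Kind.
type Instance struct {
	ID    string
	Kind  Kind
	CPUs  int
	MemMB int
}

// Node is a compute node with remaining capacity.
type Node struct {
	Name  string
	CPUs  int
	MemMB int
}

// Schedule places each instance on the first node with enough room,
// treating VMs, containers, and bare metal identically.
func Schedule(instances []Instance, nodes []Node) map[string]string {
	placement := make(map[string]string)
	for _, inst := range instances {
		for i := range nodes {
			if nodes[i].CPUs >= inst.CPUs && nodes[i].MemMB >= inst.MemMB {
				nodes[i].CPUs -= inst.CPUs
				nodes[i].MemMB -= inst.MemMB
				placement[inst.ID] = nodes[i].Name
				break
			}
		}
	}
	return placement
}

func main() {
	nodes := []Node{{"node-0", 8, 16384}, {"node-1", 8, 16384}}
	work := []Instance{
		{"vm-1", VM, 2, 4096},
		{"ctr-1", Container, 1, 512},
		{"bm-1", BareMetal, 4, 8192},
	}
	fmt.Println(Schedule(work, nodes))
}
```

A real scheduler would of course use smarter placement than first-fit, but the design point stands: once every workload is expressed as the same Instance, one scheduler suffices.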
So if we can play the video, please. In this demo, we are first launching 10,000 containers using the new scheduler: 10,000 Docker containers running Ubuntu, and you can see them being launched there. Now we're adding 5,000 VMs running Fedora, and now those VMs are running. All of this is on a 100-node cluster. The 10,000 containers took about 20-some seconds to launch, with all the properties I spoke about earlier, and as you will see, the 5,000 VMs running on that same infrastructure take roughly 40 seconds. So in total, the 5,000 VMs and 10,000 containers are all done in one minute or slightly less. And as you all know, this is an order of magnitude better than anything we can do today, just from thinking out of the box.

So our biggest challenge right now, obviously, is that we want that in OpenStack. We just opened it up on GitHub, and we would like to see this in OpenStack. But this is where we also started running into other problems, which is why I think we are also at an inflection point in really rethinking some aspects of how we work, especially when we take such hard positions. Obviously, one of the things, as I mentioned earlier, is that we developed all of this in Go. We have what I would call a dogma: nope, everything must be in Python. Well, why? Why can't we use the right tool and the right programming language for the things we want to do? And this goes across a lot of aspects of trying to evolve our community and our development model and development process, so that we are able to use the right tools and the right processes for what we want to accomplish. The project is called Ciao. It's on GitHub, and there's more information at clearlinux.org/ciao.
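Launching 10,000 containers in roughly 20 seconds implies the control plane dispatches launch requests concurrently rather than one at a time. As a rough illustration of that pattern, here is a hypothetical Go sketch (not the project's actual code; `launch` stands in for whatever RPC a node agent would receive) of fanning out launches through a bounded worker pool:

```go
package main

import (
	"fmt"
	"sync"
)

// launch stands in for sending a start command to a node agent;
// in a real system this would be a network call, not a local one.
func launch(id int) string {
	return fmt.Sprintf("instance-%d", id)
}

// launchAll starts n instances concurrently, bounding in-flight
// launches with a worker pool so 10,000 requests don't all hit
// the control plane at once.
func launchAll(n, workers int) []string {
	jobs := make(chan int)
	results := make(chan string)
	var wg sync.WaitGroup

	// Workers pull launch requests off the jobs channel.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs {
				results <- launch(id)
			}
		}()
	}

	// Feed the queue, then close results once all workers finish.
	go func() {
		for i := 0; i < n; i++ {
			jobs <- i
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	started := make([]string, 0, n)
	for r := range results {
		started = append(started, r)
	}
	return started
}

func main() {
	fmt.Println(len(launchAll(10000, 64)))
}
```

The worker count is the knob: too few and throughput drops, too many and the nodes get swamped; the demo's numbers suggest the real system tuned this carefully.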
And that's really all I wanted to talk to you about. I just want you to keep an open mind, and let's do all we can to keep OpenStack flourishing. Thank you very much, and enjoy the rest of the summit.