Right. While everyone's still finding seats, I want to give a big shout-out to Blue Box. The reason is that we thought we might be full this morning, so we asked Blue Box if they would allow us to stream this presentation out to the video wall, and they very kindly agreed. We hadn't been able to organize that with the OpenStack Foundation, but I still wanted to give a shout-out of appreciation to the good guys at Blue Box for making that possible for us. So, we still have, looks like, five seats, Mark? Five. No, I think we're good. Early bird. Very good. Let's get started. Bienvenue, welcome. It is fantastic to be here in Paris. Oh, I'm so glad that's not the AV system. Right. It's great to be here in Paris. I want to thank the Foundation for choosing such a wonderful city. I hope you have all had a wonderful first day and a half of ODS, and that the conversations here are everything that you hoped they would be. I must say that being in this kind of environment has really raised the quality of thoughtfulness and the caliber of conversation amongst all of the different vendors and participants in the forum. Without further ado, I would like to dive straight into what will be an overview of a series of presentations which we'll be holding in this room over the next couple of hours. I hope to show you some of the extraordinary things that OpenStack is now capable of doing, and some of the extraordinary things that people are doing with OpenStack. Now, who here has been to a Canonical keynote before? A couple of folks. 
Okay, so we have a tradition, and that tradition is that every six months at ODS we celebrate the extraordinary progress that OpenStack has made, and also the progress that's been made in the Ubuntu community and by Canonical, in terms of what OpenStack is capable of and in terms of how easy we've made it to use, by doing a live demonstration on stage of a deploy of OpenStack on bare metal. Now, this is why I am very gray, because doing this every six months is somewhat terrifying. It is live acrobatics in front of a studio audience, some of whom don't wish for us to succeed, although I always get the impression most people are cheering for us. Well, today is the summit of that journey, and today is a milestone for all of us, because my little secret is that every time we've done it, we've never needed it, but every time we've done it in the past, I've had a safety net: one of my colleagues sitting behind the curtain who would be able, if necessary, if I pushed the wrong button, to quietly make things move forward in the direction planned. Now, we've never used that, but I've got to tell you, it was the only thing that kept my heart still beating as we went into Canonical keynotes. Anyway, today I feel relaxed about this, because I'm not going to deploy a cloud, and in fact none of my colleagues are going to deploy a cloud. If you would be so kind, one of you will deploy an OpenStack cloud live on stage in front of everybody else. Now, to make sure that this is not a plant, I'm going to ask someone to throw a hopefully not very well-made paper aeroplane into the crowd. You don't look like a plant. Throw it anywhere you want, any direction. Hopefully it won't go very far. 
And whoever receives it... so, moving right along, go ahead, however you want to pick a random volunteer. Live demos, ladies and gentlemen! All right, all right, all right. I tell you what, wait, wait, wait, I think we can improve the aerodynamics of this structure, mainly by doing this and just throwing it that way. Now, if you work for Canonical or you know what's coming, then pass it on. If you don't want to come up on stage, you don't have to come up; pass it on anyway, why not, throw it in a random direction. You going to come up? Ladies and gentlemen! Tell us a little bit about yourself. My name is Russell. I've recently moved into the world of IT infrastructure. I work as a systems engineer, so I'm just getting into understanding what cloud's all about, really. Sounds good. How are you liking Paris? That's good. All right. And have you ever compiled your own kernel? No. Brilliant. Okay, so have you ever installed OpenStack? No. Brilliant. Would you like to? Are you nervous? I've got to tell you something: I'm nervous. But come over here. Let's have a look. So have you ever been to Texas? No. All right. So I'm going to teleport you to Texas through this thing called the Internet. It's a system of tubes. Okay. Not there. Here we go. So this is the University of Texas at San Antonio. And they've very kindly made available to us 76 machines of all sorts, just random hardware. A bunch of it is OCP hardware. There's some AMD SeaMicro; there's an AMD SeaMicro chassis in the mix over there. But it's a nice, healthy, random mix of stuff that we could cobble together. And all of those machines, except for one, are switched off with no operating systems installed. And in the course of the next little while, you're going to build an OpenStack cloud. How's your Chef? Non-existent. How's your Puppet? Same. How's your web browser? I can do that. All right. Good. So that's MAAS. 
Everybody knows what MAAS does. MAAS is Metal as a Service. It essentially does all of the PXE booting, BMC management, and operating system installation, all of that base data center management stuff, like software-defined infrastructure. And then on top of that, normally we would layer Juju to orchestrate all kinds of different things. But what we're going to do here is point the Canonical OpenStack Autopilot, which is here, at that MAAS. So just zoom out a little bit. This is Landscape, our standard systems management software for Ubuntu, and this is the now-in-public-beta OpenStack installer. So all we've done is: one of those machines is switched on, it's got Landscape installed, and we've pointed it at MAAS. And you can see that down here. It says we've registered a MAAS region controller; we've got at least five machines in there, and at least one of them has multiple disks and multiple network connections, and that's a whole Neutron thing. Right. My only clue is that that button over there starts the process. If you have to ask any questions, ask them; I'm not allowed to say anything. Okay. There you go. Good choice. So this is an interesting thing, right? That's not from browser history. That's MAAS, which knows the network and server layout, and it's telling Landscape: these are the networks you could use for your public gateways. And it knows the IP address ranges, so it's telling him which ranges will work for his external Neutron gateways. Oh, you guys are so boring. But it's a good choice. Don't get clever now. All right. So what's happened is Landscape has gone to MAAS and got the list of all of the machines that Landscape thinks could be useful for this cloud, and they're preselected. So you could deselect them or you could leave them all there; it depends how big the cloud is that you want to make. Up to you. Oh, big cloud. Okay. Don't ask me. Voilà. Now, thank you very much, Russell. That's all there is to it. 
So that software is available now. I guarantee you, all you need to do is set up MAAS, which is straightforward, PXE-boot all your servers off MAAS once, use a standard script to deploy Landscape on that, and then point Landscape back at MAAS, and you can use the rest of your hardware to build an OpenStack cloud. Now, the reason we've done this is because right now OpenStack deployments are largely limited by the number of OpenStack consultants in the world. Consulting is really expensive, right? And it's expensive twice. It's expensive the first time, when you get those consultants, and you have to get the good ones and you have to get them on time. And then it's expensive every time you want to upgrade or change that cloud. So what we wanted to do is to say, for folks who trust us to build a great reference cloud, we wanted to automate that process so that you can do it in your own data center with no consulting whatsoever. So we've really dramatically reduced the cost to people of getting a standard reference cloud. Because that's built into Landscape, of course, it also comes with all of the systems management for those underlying hosts and guests, the Ubuntu hosts and guests. You would use your normal management software for other operating systems that you mixed in. And there are a couple of cool things about that. We will evolve that reference architecture. We're going to do a walkthrough of our current reference architecture at 5:30pm here today. We'll share everything that we've learned about how to do a great OpenStack deploy; we've done this at some of the very largest institutions of the world. And we'd love you to reproduce that on CentOS or RHEL or SUSE or VMware, any platform you like. So we'll share what we've learned. But we wanted to encode what we've learned in a place where people could just say: just give me what Canonical knows. And that's what this Autopilot is all about. 
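The bootstrap just described (set up MAAS, PXE-boot the servers once, deploy Landscape, point it back at MAAS) can be sketched as a short shell script. The `maas` package name is real, but the registration file name and its keys below are illustrative assumptions, not the exact product syntax.

```shell
#!/bin/sh
# Sketch of the Autopilot bootstrap described above. Steps 1 and 2 are shown
# as comments because they need real hardware; step 3's registration values
# (URL, API key) are hypothetical placeholders.

# 1. Stand up MAAS on one machine:
#      sudo apt-get install maas
#      sudo maas createadmin --username admin --email admin@example.com
# 2. PXE-boot every other server off MAAS once, so MAAS enlists and
#    commissions it (no operating system is installed yet).
# 3. Deploy Landscape onto one MAAS-managed node, then register the MAAS
#    region controller with it, conceptually something like:
cat > landscape-maas.conf <<'EOF'
maas_url: http://maas.example.com/MAAS
maas_api_key: REPLACE_WITH_YOUR_MAAS_API_KEY
EOF
echo "Wrote $(wc -l < landscape-maas.conf) lines of MAAS registration config"
```

From there, the Autopilot web UI drives the rest: choosing networks, selecting machines, and pushing the button.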
And the cool thing is, as we learn more, we will update that: you update Landscape, you push the button, and your cloud will evolve to the latest, greatest reference architecture. So I think that's pretty cool. The problem with reference architectures and standard products, though, is that they tend to limit the number of choices you have. And I hope you saw there that in fact we've started to give people all the commonly trusted choices in OpenStack. And of course we want to extend that, because we want this to be a reference architecture that is uniquely flexible and customizable, and shows our partnerships with a number of vendors in the world. You may have heard of OIL, the OpenStack Interoperability Lab. And I'm delighted that there are now 26 global vendors who are part of the interop lab. There's a talk a little bit later, I think that's at 3:40. I've got times over here. Sorry, that's at 2:50. What we do is we build OpenStack 100 different ways every day: 3,000 builds every month, and we run 32,000 tests of OpenStack every day in this interop lab. And as these vendors' code goes through that validation process, at the end of it we will certify it. And if those vendors want, we will add their code, their plugins, their extensions and so on to this reference architecture's standard installer. So you'll be able to make the choices that Russell made, except you'll have a bunch more options. And I'm delighted to give you a preview of OpenContrail from Juniper, which we expect to land in this Autopilot in just a few months' time, during the course of the cycle. If you look at this list of OIL vendors, one of the things that I'm really proud of is that we really have attracted a lot of the SDN vendors to OIL. And so we really are serving as a platform for interoperability testing. 
We're able to say: okay, for a given SDN, let's test all of the storage options, let's test all of the hypervisor options, and the permutations and combinations of that, and then give feedback to those SDN vendors as to their interoperability with all of those other choices that you might make. And that's come about because of a strong focus on our part on the telco market. There is a talk later today, at 3:40, here on SDN and NFV. So for those of you who have heard a lot about NFV but are interested to see live demos of telco applications being deployed, live, onto clouds which you can interact with, we'll show you a little bit of the work that we're being asked to do on behalf of telcos to help NFV vendors integrate their stuff with each other. We obviously use Juju for live integration and deployment onto OpenStack and any other cloud or bare metal. And we'll be deep diving on all of that SDN and NFV at 3:40 this afternoon. Okay. So all of this is in the name of helping you go faster, whether you're a vendor or a customer. A year ago in Hong Kong we talked a lot about the telco industry, because at that time we were seeing telcos as the lead investors in OpenStack, and that's really why you see such a strong SDN presence on Ubuntu and with Canonical, and a strong NFV presence there. Then six months ago we talked about work that we were doing with the banking industry, and I'm delighted that those projects have continued to grow and we're now well represented. Oh, sorry, I should say we've had a bunch more telcos come on board, the most recent of which was WingCloud in China, and we're delighted to have them as a partner. In banking we've continued to grow, and we now have a number of projects on Wall Street and in London and around the world, and banks really are pushing our security story forward, to ensure the complete isolation of the infrastructure from workloads and so on, and also pushing performance. 
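The interop lab's build matrix described above is essentially a permutation sweep: every SDN paired with every storage and hypervisor choice. A toy sketch in shell (the component names here are illustrative examples, not the actual OIL inventory):

```shell
#!/bin/sh
# Toy sketch of an OIL-style permutation matrix: for each SDN, pair it with
# every storage backend and every hypervisor, yielding one build per combo.
matrix() {
  for sdn in opencontrail ovs nuage; do
    for storage in ceph swift; do
      for hyp in kvm lxc; do
        echo "build: sdn=$sdn storage=$storage hypervisor=$hyp"
      done
    done
  done
}
matrix   # 3 SDNs x 2 storage x 2 hypervisors = 12 build configurations
```

In the real lab, each emitted line would correspond to a full bare-metal deploy plus the test suite run against it, which is how a 32,000-tests-per-day figure accumulates.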
We've done a lot of work with the banking industry on containers and OpenStack, and I'll talk a little bit more about that later. But in the current phase, what we're seeing that's really interesting is tremendous acceleration with media companies. Media companies are a perfect fit for OpenStack for a bunch of reasons. First, their workloads are typically very scale-out friendly: a lot of transcoding, a lot of streaming. And they're also a great fit for Juju, because if you think about what a TV station does, it's launching new shows and properties every quarter, and you never know which of those is going to be a hit, with huge amounts of traffic, and which will be a miss without a lot of traffic. You don't know if they're going to last a year or ten years, so you need a really elastic infrastructure, and you're doing repetitive work setting up for each new show: websites, SMS gateways, competitions, forums, fan sites and so on. So that repetitive, rapid deployment of best-practice stuff is really a great fit for Juju. I wanted to highlight one of our customers and partners, folks who've taught us a lot in the last couple of months and whose focus on scale-out I've really come to appreciate, and that's Sky. A few words from Sky about their engagement. One of the fantastic things about Sky is we continue to innovate and push the boundaries in the way that we bring products to our customers and bring content to our customers. When we started our private cloud initiative, we wanted to be disruptive for a number of reasons, but principally we wanted to pick a selection of technologies, both from a hardware perspective and a software perspective, that were best in class, that would make sure that our cloud was designed from the ground up to be what we wanted it to be. The main characteristics we were looking for were a sustainable cost base, so a price tag for the cloud platform that makes it effective and viable at scale. 
We needed a platform that was obviously robust and scalable, and also a platform that brings innovation, and a fast pace of innovation, around OpenStack. Ubuntu helps us meet those, and more importantly realize those, because of the broad range of experience that Canonical bring to the deployment. Canonical has helped us understand how we engineer those characteristics into the platform from the ground up, and then, most importantly, how we maintain them moving forward. Looking at the risk is also key. For us, OpenStack was risky: there are many successful deployments, but there are also many failed deployments. Partnering with Canonical and Ubuntu, bringing their expertise together with our own engineering teams, would ensure that we de-risked the project as much as possible. Juju and MAAS are critical for helping us build, deploy and manage the cloud environment, because automation is one of the core benefits and core objectives for us from our private cloud initiative. Without tools such as Juju and MAAS we'd have more people doing things: more manual processes, more manual tasks. So having this ecosystem of tools to help us deliver automation all the way through, from tin to software, is critical. 
Now, the name of that project actually is the Linear Scale Data Center, and I think it shows the insight that they have: they really are trying to restructure their operations to fit the economics of the 21st century. There was lots of interesting stuff in that, but he called out economics, and this is something that we think a great deal about. OpenStack is really only going to work if it is economical in the 21st century for people to run private clouds. There's a hard limit on the economics for OpenStack, because of the public cloud and the excellent work that's been done in the public cloud. We try really hard to walk a fine line: we know we've got to be sustainable, we know we've got to be able to make long-term promises to customers, and we also know that those customers have to be confident that, in the long term, the economics of building on OpenStack with us are going to be highly competitive and advantageous to them compared to the public cloud. And that's a really good discipline to have, I think, in many ways. So I want to give you a little bit of a map of the way we engage with folks around OpenStack today. That's not it. That's not it either. You know what, I magically hit the wrong key; this is like an IQ test and I don't think I'm passing. But I pressed the right button. So the first thing we try to establish is whether somebody is really focused on the cloud as the opportunity in its own right. So, for example, telcos who are going to offer cloud services: for them, being way ahead of the curve on the cloud itself is really important, and those are kind of like Formula One type engagements, because really, with them, we've got to learn a lot, we've got to push forward on the science of the cloud and the scale of the cloud. On the other end of the spectrum we have folks for whom the cloud is merely a way to ensure that they can continue their operations in the security of their own data centers and their own environments. And so really that boils down to sort of 
a spectrum of tending to reference or tending to the extreme. On the reference front, you saw the Autopilot, right? Our goal there really is to reduce the cost of a reference cloud to simply the cost of supporting the Linux platform. So that's either $300 to $700 a node, or AZ pricing, which we've announced previously, of $50K depending on the scale of your AZ. We know that we have to cap that cost, because you're thinking about private versus public cloud. For folks who have a short-term skills blockage, we are also willing to actually build the cloud and operate the cloud until they have the team to operate it themselves, and this is proving a very useful bridging function. So that's BootStack: build, operate, and optionally transfer. A quick way to get a standard reference cloud up and running, have us handle all of the backups and other operations, all the monitoring, while you staff up your OpenStack skills base, either internally or by recruiting, and then we transfer the keys and pull out our engineers. That's all handled in your data center, remotely, on your hardware, and again that's $15 a day, or $5,000 per node for a full year. Typically those are three- to six-month engagements while people ramp up and build their cloud. For folks who want to go beyond the reference, we break that into two categories. We think of the first as tailored: in the tailored case, really what we're doing is accelerating the roadmap. Say, for example, you want to work with an SDN vendor that's not yet in OIL, or not yet in the Autopilot; that's where we would go, and essentially what we're saying is we'll do that necessary development for you at cost, because it's in our interest to accelerate what would be in our roadmap anyway. For the Formula One guys it's a much more intense engagement; we have onsite staff. This is where I think consulting actually makes sense. It doesn't make sense if you just want a cloud to run your workloads; it makes 
sense if the cloud is your business and you want to be ahead of the curve on the cloud. So we're trying to make sure that we're only engaged with consultants, and our partners are the sorts of consultants, who can handle these Formula One type engagements. Right. People often ask what our focus is in OpenStack, and I hope you would agree that we've tried to be really thoughtful about where we go in OpenStack and what we work on in OpenStack. This is how I think about it. What we have to prove to the world as an OpenStack community is the performance, reliability and scalability of OpenStack. That is the question on the table now: can you get to 500 nodes, can you get to 5,000 nodes, can you get to 50,000 nodes? That's the question that's on the table at the moment, and that's really where we focus all of our energy. And I think OpenStack is at a really important decision point in what it wants to be; there are really deep questions at board level about what the focus of the project is. This is the thought that I'd like to share: I think it would be very good for OpenStack to strongly say that the core of OpenStack is these four pieces. This is where we've focused, and the reason for that is we think everything else will come in time. Everything else will come in time. There are dynamics inside the project which are, I think, bringing in too much code, which has knock-on consequences for that core: bullshit as a service, money as a service, junk as a service, irrelevant vendor bluff, puff and distraction. And so we as a company are trying to focus very strongly, and work with other people who focus very strongly, on that core, because if we get that right, then all of the innovation that's happening on EC2 and on other public clouds will come to OpenStack. We're not going to beat all of those startups with committee meetings; we're going to make a viable platform that brings that innovation to the private cloud. That's the way we see it, and I hope folks appreciate that. To work on that 
core, to work on the performance and reliability of that core, is ten times more work per line of code, but I think it's ten times more valuable per line of code. To give you some taste of the sort of stuff we do there: every six months we do a scale-out performance test of the current release of OpenStack, and then benchmark that against previous ones to see how we've done. In this round we've got partner HP, and thank you very much, HP, for making available a series of Moonshot chassis. These are amazing hyperscale, very dense, new-style, new-architecture devices: very dense x86 and ARM cartridges. So we did this benchmarking on x86, and we were able to deploy the cloud itself, on a couple of hundred nodes, 500-odd nodes, in two and a half hours, which is a lot faster than we were able to do it six months ago. We were able to hit 100,000 VMs, which is our target threshold, in half the time that we were able to do it six months ago, so that's pretty good news. The other really good news is that Neutron has gotten much more scalable between Icehouse and Juno. This is Juno that I'm talking about, and that Autopilot that I showed you, that's Juno as well, so if you use the Autopilot you get Juno and beyond. So that's the good news: Neutron is much more scalable. There's still work to be done there, particularly in that Neutron doesn't support cells, so at the moment there is a hard limit on how far you can get, but within the bounds of that it's much faster, it's much more scalable. The bad news is we saw significant regressions in Nova, in Nova scalability. So we've got some patches for that; we will apply those patches to the stable branches of Juno that we co-maintain, and we will make sure that everybody gets those. They are, or will be, in Ubuntu OpenStack as well, so they will be part of the Autopilot installs that you do. But I think it goes to show that when you've got development moving very fast, and many developers working in 
kind of artificial environments like DevStack, these scalability issues can creep in without anybody noticing. And so we know that, just as we're doing continuous integration every day for correctness, we now want to do continuous integration every day for scale: take this sort of infrastructure and make it part of the CI/CD process, to help catch those issues in OpenStack very, very early. Right, so that's it for OpenStack for the moment. But what are people doing on top? Well, who's heard of Docker? Anybody not heard of Docker? Okay. Docker is amazing. It is profoundly changing the way developers are pushing code into production. It is the fastest, cleanest, neatest way for your devs to push their code into your production servers, and I think it's amazing, and kudos to Docker Inc for the work that they've done there. I think everybody knows that Docker was born on Ubuntu, and we continue to work really closely not only with Docker but with the developers who are using Docker; we're very passionate about them. There are actually six times as many Docker images on Ubuntu as there are on the next operating system in the list, and that gap is widening, because we continue to focus on whatever we need to do to make Ubuntu great for developers. So if you happen to be a developer in an institution which is fascinated by Docker, but has a corporate policy that limits you to, shall I say, a legacy Linux environment, you will be delighted to know that this is the new way for Ubuntu developers to get their code straight into production. 
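The dev-to-production flow being described can be sketched in a few lines of shell: package an app as a Docker image on an Ubuntu base, then run the same image anywhere a Docker daemon runs. The app file and directory names here are hypothetical; the Dockerfile directives themselves are standard Docker.

```shell
#!/bin/sh
# Minimal sketch of shipping an app as a Docker image on an Ubuntu base.
# The app (app.py) and the image tag are illustrative placeholders.
mkdir -p demo-app
printf 'print("hello from the container")\n' > demo-app/app.py
cat > demo-app/Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y --no-install-recommends python3
COPY app.py /srv/app.py
CMD ["python3", "/srv/app.py"]
EOF
# On a host with the Docker daemon running:
#   docker build -t demo-app demo-app/
#   docker run --rm demo-app
```

The point of the model is that the image built on the developer's Ubuntu laptop is byte-for-byte the artifact that runs on the production server, which is what collapses the gap between dev and ops.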
We're seeing that all over the enterprise market as well. One of my passions is all of the things that are growing up around Docker, and we're seeing a whole explosion of different ways to command and control Docker environments: there's Deis, there's Flocker, there's Fig from Docker Inc itself, Panamax from CenturyLink, Fleet from the great CoreOS guys, there's Kubernetes from Google, Diego from Pivotal, and OpenShift, who've said that they're going to rewrite again to use Docker. This is a fantastic Cambrian explosion of innovation, and we should relish and celebrate it. Now, all of those, with the exception of OpenShift, have chosen Ubuntu as their target platform of choice, and I want to make sure that all of those are available to you on Ubuntu, and not just on Ubuntu, but instantly deployable with Juju; and I'd love to have OpenShift as well. Yesterday we announced that we were working with Google to put reference images of Ubuntu, fully optimized, on the Google Cloud, which is a great milestone for both of us; it's been a lot of fun working with Google. And today I'd like to show you the fruits of that collaboration, which is Google's optimized Kubernetes, which I'd like to deploy for you live on bare metal. I don't think anybody's actually done this before, until we did it, and certainly not in front of a public audience. So, on this Orange Box (remember, inside here we've got 10 little Intel microservers) I want to deploy Kubernetes live on stage to those boxes. One of those boxes, actually a VM on the box that's running MAAS, is running Juju, so that's the Juju GUI. I'm not going to use the command line at all, because Kubernetes is already in this repository over here. So all you have to do is spin up Juju, on bare metal or on Azure or GCE or EC2 or OpenStack, you get this screen, you go in here and search for Kubernetes. Here it is, orchestration. And hopefully, if I press this button, you'll start to see lights coming 
on, touch wood, on this machine over here, and that will be MAAS being asked to provide metal on which it's been asked to put Ubuntu. We now have banks deploying SUSE with MAAS, and other banks deploying RHEL with MAAS, and yes, I'm glad to say, banks deploying Ubuntu with MAAS, and soon banks deploying Windows with MAAS as well. Anyway, in this case it's MAAS deploying Ubuntu onto new metal that's being requested over there, and then Ubuntu being installed onto that metal, and Kubernetes being installed onto that. And while we're waiting for it to come up... oh, you know what, I don't think I've actually kicked it off. This is where the man behind the curtain might be really useful. We'll come back to that one. Right, so Docker is absolutely amazing, and you're going to see an explosion of Docker command and control systems, and we're going to put all of that on Ubuntu, make it easily available on every cloud, Juju-deployable, and with packages in Ubuntu. But what else? Well, one of the most exciting topics of conversation generally at the moment is around containers, and many of you may know that Canonical leads the work at linuxcontainers.org, which is where LXC, the general system container, is developed. So I want to talk a little bit about what's coming there. I want to announce that, working together with that community, Canonical is going to lead the next big thing, which will be a new hypervisor purely focused on containers, called LXD. Now, I'm calling it a hypervisor for a very specific reason. The things that you count on from a hypervisor are security. Well, Canonical has led on MAC-based security, seccomp security and user namespace security, and we're bringing all of that to LXD. In addition, we're working with silicon vendors to provide hardware-guaranteed isolation of containers, so that all of the hardware guarantees that give you isolation guarantees in the chip in KVM will also be available to containers, without the overhead of virtualization. And 
because LXD will be a daemon, a small daemon written in Go that runs across many different machines, we'll be able to do live migration of containers from machine to machine. Now, in the spirit of taking our lives in our hands: if you want to see that in action, stick around in this room, because at high noon Tycho and Dustin are going to take their lives in their hands and do a live migration of Linux containers from machine to machine. It's an awesome, awesome demo. So that's a taste of the future. This is going to unleash new levels of performance for private clouds that are Linux on Linux. The one catch with containers is that it is Linux on Linux; if you want Windows, we're glad to give you Windows, which you'll have to put in either KVM or ESX, which we'll fully support as well. So, LXD: you heard it here first. Okay, so: clouds. Well, people don't build clouds... I don't know about you, but I don't think people build clouds because they like to see the blinking lights. And thank God, the lights are blinking! So yes, we're starting to get a bit of Kubernetes; that's where it was. So I don't think people build clouds for the blinking lights, right? They build them because they want answers; they build them because they want solutions. And so that's really what we're focused on now, working with multiple vendors. That's Kubernetes; but in telecoms there's a whole portfolio of solutions, NFV and otherwise, that we're being asked to accelerate the integration of and deliver with Juju onto clouds in a telecom-type environment. In the media environment it's the same, and one thread that comes up is customer engagement; another is big data. And so I'm delighted to announce that over the next six months we will bring to market solutions from three of the biggest big data companies in the world: MapR, Hortonworks and Cloudera. And of course we're going to deliver all of those as a series of Juju charms, so that you can deploy their solutions instantly on any cloud. Now, the really cool thing about this is, I gave 
this presentation to a financial industry technologists' event in London, and at the end I showed MapR and Hortonworks and Apache Hadoop just spinning up. At the end, a guy put his hand up and said: look, I don't have a question, I just want to bitch about it, because if I'd known this six months ago I could have saved myself the last six months; I've just spent six months deploying all of those solutions manually, and here we have them deployed instantly on any cloud or on bare metal. So, for evaluation purposes: all of them have strengths that might lead you to choose them for your particular application, but for evaluation purposes this is going to be absolutely the best way to choose your big data or Apache Spark solution of choice. We're also working, of course, with application vendors who are delivering intelligence on top of Hadoop, also as a series of Juju charms, and those are going to be demoed later today, in this room at 4:40pm: that will be Cloud Foundry, working with Pivotal, and Hadoop, working with all of the lead vendors. Now, I want to call something out here: our goal is not to muscle our way into these markets and compete with those vendors. I don't know about you, but I really don't like it when a platform vendor feels it has to buy or compete with the people who originated a technology, because innovation is hard, and platform companies tend to have other problems, so they don't innovate as much. So I feel we have to accelerate those companies; we have to let them bring their solutions to market, we have to let them compete and let them innovate. That's our agenda. We're not entering the Cloud Foundry market with a Canonical Cloud Foundry; we're supporting Pivotal, and we'll support other PaaSes and other institutions as well. We think Pivotal has the leading enterprise solution with Pivotal Cloud Foundry, but we want to see all of that innovation happen, so we're not going to try and crowd it out. We want to accelerate your 
ability to evaluate and choose their solutions, and their ability to find customers whose expectations they can really, really meet.

Okay. Most clouds start with a particular application in mind, and I think that's really, really good, because if we know what your application is, we really can tune the architecture to accelerate that application. But one thing I've observed is that a cloud gets built for a particular purpose, and if it's successful, the same developers then start to want to use the same cloud for other applications. And so there's kind of a trade-off, because on the one hand you will get better results, better performance and so on, if you optimize the cloud for your application. You know, if you know you're doing Hadoop, or you know you're doing Condor, or you know you're doing test and dev, then you would build a particular kind of cloud. But the reality is that over time you have to be able to evolve to the general if you're going to be successful. And that's one of the great things, I think, that we've developed: the ability to build a cloud using sort of mix-and-match components, with the architecture, the spread of services on metal, that's right for that thing, but then to evolve it over time. A useful and important characteristic.

I think we're running out of oxygen in this room, so I want to wrap up with an invitation to you and to your friends and colleagues. It's a great privilege for us to be working with Juniper, and I don't know if there's anyone from Juniper in the room, but thank you very much for joining us in this, to host and invite people from OpenStack to one of the world's top five, and I think top one, art collections, at the Musée d'Orsay, which I've probably mispronounced, but it is an extraordinary place. There'll be no decks or pitches; champagne is on Juniper, and there'll be guides if you want to learn a little bit about the masters. In an evening before dinner, before the other parties, you'll be able to see some of the art that
you've almost certainly seen throughout your life, incredible artists from an incredible diversity, all in one place. So I hope you'll be able to join us, with partners, spouses, friends, and I hope you'll have a lovely evening together. Anyway, thank you very much. Right after this, right after the break... actually, I think we have time for questions, which is the one thing we never had in a plenary keynote. Do we have time for a question? Well, we've got a man with his hand up; go for it.

Right, so I think the question is: how are we going to make OpenStack something that works well in a newly competitive environment, especially if we're partnering with vendors that are perceived to have expensive solutions? So the first thing is, we're very mindful of the costs that we inject into the equation. We've tried to model those as costs that will keep you economically productive on a private cloud in the face of competition from Microsoft, Google, Amazon, Facebook, Tencent and other mega data center operators. So we're very confident that our costs, the costs that we inject, will keep you well below the threshold in the face of that competition. The other vendors that we introduce to that, I think, are responding very well to the changing dynamics. You know, as times change, technology changes, the rules of economics change, and companies change. As an example of that, you know, Microsoft I think is sincere in their commitment to running Linux on Azure. Right, that's an extraordinary shift, not just from the leadership, although I think it does come from the top, with the support of the top, but also from the heart of the business. Now, I expect that all of the vendors that you saw on that page have plans, have strategies, to be able to offer value. Right, they know that if they don't, they won't be around. And our job is simply to make it really easy for you to evaluate their solutions, choose the ones that work for you, including on the economics, and then build a reference cloud that includes those
components, fully managed, to keep the costs down. Great questions.

Chris, I don't know, has it? Kubernetes is up. Those are the MAAS... if I just reload that, you'll see a bunch of green; those are the MAAS machines. I'm going to go and... no, I'm not going to do that, I'm just going to go and scale out Kubernetes quickly, and see if we've got another machine. Right, so we just asked for another machine for Kubernetes. And that is the cloud installer, in Texas, progressing. So what it's currently doing is deploying RabbitMQ, deploying Keystone, the Ceph RADOS gateway, MySQL, the OpenStack dashboard. So it's spun up a bunch of those machines, it's got the operating system on them, and it's now allocating which ones: it looks at the RAM, the number of cores, the number of network interfaces, and it dynamically allocates the right services to the right kinds of machines; it builds our reference architecture. It's busy doing all of that. It will probably take another hour or so, because that's 76 machines, and not all super fast, but when it's done, that will be a full reference cloud, and you're welcome to come and bang on it from our booth in the trade show area. Any other questions? I think we're out of time. Last question. So, first, it's not "L-X-D", that sounds like a drug; it's "lex-dee". LXD, LXD, good. LXD will come out over the next six months. We're doing this properly, as a full open project. It's written in Golang. It'll be commits to the standard LXC... LXC, nerdy, LXC... the LXC repositories under linuxcontainers.org. All right, thank you very much, have a great day. I hope you understand this.
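[Editor's sketch] The container live-migration workflow described in the talk can be illustrated with the `lxc` client commands that LXD later shipped. This is a hedged example, not from the keynote: the container name `web1`, the remote name `host-b`, and its address are invented for illustration, and the `run` stub prints each command instead of executing it, since the sketch assumes no LXD daemon is available.

```shell
# Sketch of the LXD live-migration workflow mentioned above.
# "web1", "host-b" and "10.0.0.2" are illustrative assumptions.
# run() prints the command rather than executing it, so this dry run
# works even without LXD installed; drop the stub to run for real.
run() { echo "+ $*"; }

run lxc launch ubuntu:14.04 web1      # start a container on the local host
run lxc remote add host-b 10.0.0.2    # register a second LXD daemon as a remote
run lxc move web1 host-b:web1         # live-migrate the running container
```

A real migration of a running container additionally needs CRIU support on both hosts; without it, the container must be stopped before moving.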