of engagement, and we won't see private cloud in its fullest free software expression. But the mission we should not forget about, the other half of this mission, is an economic one. Every cloud competes with the public cloud, and it's right that it should. If you're building an OpenStack, you are at some level competing with Amazon, with SoftLayer, with Google and with Microsoft, and it's healthy to face that competition head on. We should all be mindful of the long-term operating challenges, the operating costs, of private cloud infrastructure. And that has been our relentless focus throughout.

We just saw the latest OpenStack user survey. Who's seen it? It's super interesting; I really recommend it as a read, full of information. From the perspective of people building clouds, there are two numbers that really jumped out for me. The first is this shift: the number of clouds built with Juju grew by 50% in six months and is now ahead of clouds built with Chef, with Salt, with Packstack, the RHEL installer. Most importantly, I think that shift represents a growing awareness of the long-term operating costs of a large scale-out infrastructure, because uniquely, Juju allows us to crowdsource and collaborate on operations, not just configuration management. So that's a key theme for us. The other big piece of information there is that Ubuntu continues to be the platform of choice for OpenStack. What validates OpenStack is that every platform is here: you can build OpenStacks with Hyper-V, you can build them with VMware ESX, you can build them with RHEL, with CentOS, with SUSE, with Debian, with Ubuntu. And more people build on Ubuntu than on all of the other platforms combined. I think that's really testament to the extended community, and to the confidence that people have in their own ability to shape their OpenStack in an Ubuntu environment.

Now, people think we're here to talk about OpenStack, but I don't think it's cloud that will be the defining term of our era. I think it's something new, something different, something that people haven't really given a name to. And I want to do that today. That name is big software. You've all heard of big data. I want to remind people: think back to the dawn of big data, when the data professionals laughed. Right? In the late 90s and early 2000s, being a data professional was a serious career. People had been doing it for 15, 20 years. We were in a world where we had very sophisticated ways to model data, and as the scale of data grew, we increased the sophistication of our tools, to the point where some people started to realize that things were breaking down. We needed to move through a phase change; we needed to embrace, in addition, an unstructured view of data. And again, the professionals laughed. The professionals said, look, these kids just don't know how to do it right. These kids just don't have access to big enough machines. These kids haven't been trained properly. But really what was going on was a phase change, and phase changes are really powerful things. If I said to you that your mission was to get 100 tons of H2O to Houston this afternoon, you'd really want to know: are you dealing with ice, with water, or with steam? A two-degree Celsius difference can take you from any one of those phases to the next one up.
But it is an entirely different user experience, an entirely different set of safety regulations, an entirely different set of tools to deal with a new phase of H2O. In the same way, we needed an entirely different set of tools and thinking to deal with big data. And my proposition to you today is that we need an entirely different way of thinking about big software.

So what's happened? Applications used to be simple: two or three pieces spread across two or three machines. Today, of course, any interesting software is made up of many applications, many components, spread across many machines. This is the scale-out era, but it's really the big software era. Architecture today is really a discussion about how best to spread many pieces of software across many pieces of physical infrastructure. The architectural conversation in OpenStack is really about how to spread the 15, 20, 30 pieces of software that make up a modern OpenStack deployment across tens, hundreds, or thousands of diverse physical servers. Now that's a phase change. And just like the proverbial frog in the pot, the temperature rises and people try to attack the problem with their established tools, and it's only later that everyone steps back and realizes that was a boiling point, that was a change. That really is the thing that motivates me, and that's what motivates our team.

Now, we've talked for a long time about Juju as a way of modeling applications, specifically to get a handle on big software and this phase change. We just launched Ubuntu 16.04, which includes Juju 2.0 and a new tool, a very lightweight wrapper on top of Juju, which I just want to give you a quick taste of. This is something called conjure-up, and it makes it extremely easy for any member of the community to provide a walk-through installer of any big software, any big software at all. So I can conjure up OpenStack, but I can conjure up any big software. You all know that Juju is essentially charms modeling individual applications, and then a model of those applications spread across different servers. Here, this is a bundle specifically focused on OpenStack. There are a couple of different ways I can attack OpenStack in this conjure-up deployment, but I'm going to essentially say I want to do this like a developer: I want to build an OpenStack on a single machine, like a laptop, and I'm going to use container machines. I'm going to simulate many machines, but I'm actually using LXD containers for each of those machines. The reason that's interesting is that it allows me as a developer to experience a real OpenStack, for example a highly available OpenStack.

Now, who's used DevStack? Right, DevStack is a script that installs all the components of OpenStack in one process space. What we're doing here is essentially creating 20 or 30 process spaces, 20 or 30 container machines, and then installing all of the components of OpenStack as if they were installed in a cluster across many racks. And that allows us, on a single laptop, to have a real experience. I can go through here and change some of the configuration settings, or I can just say, build that out, and it will essentially spin up that OpenStack on a single machine. So I think that's really going to change the experience that developers have with OpenStack. We see a lot of bugs creeping into the OpenStack code base because of DevStack, because people essentially experience a naive or simplistic OpenStack in the development cycle.
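For the curious, here is a minimal sketch of the flow demonstrated above, assuming Ubuntu 16.04 with the conjure-up package available; the package source and spell options may differ slightly from the exact stage demo:

```
# Install conjure-up and run the OpenStack spell; choosing the
# localhost/LXD option builds the whole cloud in containers on one machine.
sudo apt install conjure-up
conjure-up openstack

# Once the deployment settles, inspect the Juju model behind it.
juju status
```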
Here you'll be able to have a highly available OpenStack on your laptop, on an airplane, hooked up to your Git tree, hacking away, testing what really happens when you shoot a node at 35,000 feet. So I think that's going to be a great thing for OpenStack development.

Let's talk a little bit about production and scale. The real test of success for an OpenStack cloud, for a serious business that's going to count on that OpenStack cloud, is not what it costs to stand it up. It's what it costs to operate it over time: all the backups, all the fixes, all the updates and upgrades. That's really where the long-term economics will be tested. There are other factors, but let's just start with operations. A key focus for us has been to automate and to crowdsource operational expertise, and that's reflected in another layer on top of Juju called the Autopilot. I want to show you where that's at today.

So here we have three racks of servers. Inside each of these are 10 little Intel NUCs, and we can use those as a simulation of a rack, effectively. This is showing a picture of number 36, which I think is the third one over on the right, my right, your left. You can see that it's got an OpenStack, it's got something deployed, there's a bunch of nodes active. But the Autopilot also has some spare capacity that I'm going to use to go build a cloud. The intent here is essentially to automate all of the decisions in terms of the allocation of resources, but also the operational decisions. So we're using the crowdsourced charms, which capture a bunch of information, and we're also using the fact that the underlying infrastructure is modeled. So MAAS will tell me what my network ranges are. I'm going to choose this network range; I know I've got more spare IP addresses on that one. It'll offer me a series of choices, and a new choice in this latest iteration is integration with Nagios. This was our top customer request, very interesting: Nagios is super widely used open source monitoring software, and it's really easy now to integrate an Autopilot-deployed OpenStack with your Nagios.

Now, the fundamental decision I need to make is at the level of resource allocation: how many servers do I want to give this cloud? I'm just going to choose all of the spare servers in that cluster. And this is a new option that we've presented: the ability to allocate specific roles to those machines. You'll see that because of the underlying modeling from the bottom up, the Autopilot will preclude me from putting network gateways, for example, on something that doesn't happen to have enough network interfaces, or it'll preclude me from putting storage on machines which don't have enough extra disks to provide useful storage services. So this is kind of a manual approach, allowing people to create their own architectures and experiment with their own architectures. But, and I'm just going to go back here, we retain what I think is the most important thing, which is fully automated deployment. And so here you can see that if I press go, I'm going to build a cloud across three availability zones that's all abstracted from the bottom up. I haven't had to enter a single IP address, and that wasn't just browser caching: it's modeled from the bottom up. We treat IP addresses as an actual asset and a resource. And I can crack on and install that. You will see lights coming on as that cloud gets built. So that gives you a reference architecture.
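The Autopilot drives all of this through a GUI, but the primitives underneath are MAAS and Juju. As a rough CLI sketch of standing up that substrate, assuming Juju 2.0 and a reachable MAAS region (the cloud name and endpoint URL here are placeholders):

```
# Describe the MAAS region to Juju (endpoint is a placeholder).
cat > maas-cloud.yaml <<'EOF'
clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://maas.example.com/MAAS
EOF

juju add-cloud my-maas maas-cloud.yaml
juju add-credential my-maas    # prompts for the MAAS API key
juju bootstrap my-maas         # stands up a controller on a MAAS node
                               # (argument order varied across 2.0 betas)
```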
And the important thing, I think, about our reference architecture is that we've considered it from an economic perspective first. The key thing here is maximum utilization of the resources that you have fed to that cloud. The way we achieve that is with a hyper-converged architecture, which is a lot of syllables. What it really means is that we use every disk, we use every core, and we dynamically spread load, including administrative overhead load. We use containers as a very efficient way to take a slice of a computer and guarantee a certain amount of performance to that slice. So, for example, we can have a RabbitMQ messaging subsystem, or a slice of a MySQL database that's feeding the entire cloud, on any given unit. The use of containers gives us live migration; it gives us a lot of profound administrative and operational benefits. And of course it leaves the maximum amount of capacity available to our users. So I'm just gonna kill that and that.

So, containers. It's difficult to have a conversation these days about technology without talking about containers. There is an enormous amount of really important work being done with immutable containers, like Docker containers, as a new kind of application primitive. Our focus sits alongside the work of Docker, and Ubuntu is the platform of choice for Docker: 70% of Docker images are Ubuntu based. But the really interesting thing for us is to enable people to treat containers as a hypervisor. And so the pure container hypervisor, LXD, is now fully integrated with Nova in OpenStack. Here I have, I think it's again on that last rack over there, an OpenStack deployed, and you can see it looks like any other OpenStack. I may have to log in again. But the key thing here is that the hypervisor in this OpenStack is LXD. So it's pure containers, and not a single VM here. If I create an instance, what I'm gonna be doing is creating a container, and I just wanna show you how fast that is.

The critical thing about containers as a hypervisor is density, latency, and performance. With containers, we have no virtualization of the physical infrastructure, so we really get bare metal performance. So I should spin up five of those. And I need to allocate, let's just use a Xenial image, mediums. In the latest version of LXD, we can in fact do very detailed quality of service: we can guarantee performance to containers, and we can also stop those containers from using excess capacity on the machine. So we really are starting to get to the point where containers are like a hypervisor. If you've done this before on a couple of Intel NUCs with KVM, you'll know that it can take a little while to spin up five machines. But in about 15 seconds, you should have five instances running on this OpenStack cloud.

This is really important for very high density applications. It's also important in cases where latency is crucial. For example, in telco NFV or real-time transcoding and streaming, any dropped packets, dropped frames, dropped microseconds or milliseconds are gonna be noticed by the end user. Using a container, we can very precisely place a workload on the underlying host CPU: pin it to a particular core, pin it to a particular thread, use NUMA primitives to get very accurate and fine granularity in the allocation of capacity to workloads. So we see this being interesting for high-density use cases, dev and test, idle workloads, but also very precise supercomputing or other workloads.
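To make that concrete, here is a small sketch of LXD's per-container quality-of-service controls, assuming LXD 2.0 on Ubuntu 16.04; the container name and limit values are illustrative:

```
# Launch a Xenial container, then apply QoS limits to it.
lxc launch ubuntu:16.04 nfv-worker
lxc config set nfv-worker limits.cpu 0,1            # pin to host cores 0 and 1
lxc config set nfv-worker limits.cpu.allowance 50%  # cap its share of CPU time
lxc config set nfv-worker limits.memory 4GB         # hard memory ceiling
```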
So, one last thing on the question of infrastructure. Today we see a very definitive move towards software-defined infrastructure. Software-defined storage is big software, right? It's scale-out software, multiple components across many machines. We have done that consistently with Swift and Ceph and Nexenta and various other storage components modeled in Juju charms. Today I'm delighted to announce the newest member of that program: EMC, with ScaleIO, now fully charmed and integrated into Ubuntu OpenStack. The driver for this work has really been telco adoption of Ubuntu OpenStack, and the desire to operate very high-performance software-defined storage at scale.

Infrastructure's exciting. This is essentially an infrastructure conference, but at the end of the day, nobody comes here to make disks spin and lights blink, right? This is all in the name of applications, business intelligence, answers. And those answers come from big software. The future of big data is machine learning; it's big software. The future of PaaS is container orchestration; it's big software. So in Juju, we're modeling big software on top of the cloud, to operate in a hybrid cloud environment. With Juju, we can place applications across public and private clouds, traditional virtualized infrastructure, and, as you've seen, containers and bare metal as well. This is a snapshot of some of the vendors who are producing charms to model their big software applications across all of those cloud and bare metal environments. In particular, I want to call out the rapid acceleration in telco usage of Juju for this purpose. These are a selection of the telcos who are using Juju, both for what they call the VIM, the virtualized infrastructure management layer (we call it a cloud), and also for the VNFs, the virtual network function layer (the rest of us call them apps, right?). And I want to invite up onto the stage Robert Schregler, who is lead architect and director of operations for Deutsche Telekom, to tell us a little bit about Deutsche Telekom's experience with OpenStack. Robert, welcome.

Thank you. Thank you very much. Now working, yeah. At Deutsche Telekom we are now approaching a completely new game as a software-defined operator, as we call it, where Deutsche Telekom wants to deploy the cloud infrastructure underneath its new applications all over Europe. Hence the nice picture here in the background. It consists of three main things for us, and these puzzle pieces are coming together now; you might know that we have been dealing with this for quite some time. First, the IP network strategy with TeraStream, an IPv6-based network approach which provides the connectivity underneath; BNG is the German variant of it. This is then combined with the SDN approach and real-time network service management, which we are currently building up with NETCONF/YANG to provision, modify, and also handle the change management of all the network elements automatically. And last but not least, and this is where I mostly work, the infrastructure cloud. This exists in two flavors. In the last years we experimented with a so-called front-end data center, which is data plane-oriented: high throughput, directly connected to the R2 router, and already completely automatically installed with Juju, on IPv6. But it's a very simplified model, a very simple installation in fact.
Now this year we started with the bigger installation and the tight integration into the IP network, on the so-called back-end data center networks, with a complex underlay network structure and an SDN solution in place; we use Contrail for that one. And OpenStack, completely automatically installed with Juju. So this takes a hell of a lot of planning to do it right: the hardware scaling in the right manner, the underlying infrastructure configuration correct. But then it is amazing. My journey: when I started three years ago with OpenStack, it took us three months to install it, with good planning. Now, once the planning is done, we install in three days. So this is really, really amazing. And yes, we have strange errors. Maybe one nice anecdote: we had bonded network connectors plugged just 90 degrees wrong, so small packets went through and big packets did not. So be aware, if you have these kinds of problems, check cables; it takes a long time to find. And this is really amazing for us, because at the end we are now onboarding applications on top of it. We are planning to put TV load on it, and we are planning to put simple applications like email on it. This is all nice and good. And the biggest problem, I think as you said, Mark, is managing the software, because the complexity now is in the software. And I disagree with the V-model approach: I think if you slice the software, if you manage your software in a way that your application lifecycle management approach, or software delivery lifecycle approach, lets you release and deploy the smallest chunks in a tested environment, then you're good to go. Thanks.

Robert, thank you very much. Finally, mission accomplished: Ubuntu-powered TVs. Just not quite the way we thought it was gonna happen. I'm super appreciative of one key thing, which is that Deutsche Telekom has insisted that all of this work happen as open source. And so the operational learnings, the operational insights, all of those little debugging tools, gotchas, glitches, all of that ironing out, is happening as open source in the charms. That's really important for other people who might relate to this complexity; for everybody to do this from scratch is some definition of insane. So it is a delight for us to be able to participate in that project and share that work with like-minded carriers around the world.

What really matters here is that portfolio of software on top of the cloud, that portfolio of VNFs. And here is a selection of VNF vendors delivering charms, not just for Deutsche Telekom, but for many of these telcos who are using charms to model those applications at scale. One of the more exciting projects for us is, interestingly, exploring the 5G network as a series of software-defined radio experiences. And I actually have that running on this rack. So here we have OpenStack, and that OpenStack is running a cloud EPC from Expeto. That's a VNF, essentially, a big software VNF. And those are powering this GSM base station. So the Texas Rangers are banging on the door, because we've got an unlicensed carrier in the room. If you turn on your phone and look for carriers, you'll find one called Test PLMN. And this phone is registered. I have the magic SIM card, you don't, but you can see the carrier. And to prove that it works, I want to try and make a Google Hangouts call to one of my colleagues. Very relieved is the answer to that. So I just want to clarify for you what you're seeing.
You're seeing an OpenStack deployed with the Autopilot on this box. You're seeing a full EPC, an Evolved Packet Core, which is a piece of software that's not just used by telcos; you might have seen Google's announcement that they're using an EPC to deliver Wi-Fi on trains in some cities. So that's a software-defined EPC experience driving this base station. For those of you who want to dig deeper, this is the next generation: a software-defined radio, again driven by a set of charms, providing 5G. This one is running over-the-air LTE. All of that running live on stage on OpenStack.

Another really interesting project in the telco space is trailblazing work by a global tier one telco, which is focused very much on performance, on price-performance at scale for VNFs. They've chosen to explore that work on Power, an architecture that is legendary for its scale and performance. And I'm delighted to welcome up on stage Mr. Doug Balog, the general manager of IBM's Power group, to tell us a little bit about Power in the context of NFV and other amazing things that are being done on Power today. Doug, welcome.

Thank you, thank you. So you're probably wondering what a hardware guy is doing up here talking about big software, to use Mark's term. Mark talked about phase changes, and I'd say in the hardware space, just like probably many of your companies, we're going through our own transformation. Why? Because if you think about functions like NFV or IoT aggregation for analytics, those have a thirst for performance, a thirst for analytics, a thirst for ingesting information and then trying to get insight out of that information. A little over two years ago, we started working closely with not only Ubuntu but companies like Mellanox and NVIDIA and Google, recognizing that the world we all knew, certainly called Moore's Law, the doubling of transistor density every two years and therefore the doubling of performance, is coming to an end; bad news for you. So while on one side we have this incredible set of applications, big software to use Mark's term, that desire performance, compute, ingest, we have a fundamental law-of-physics problem in front of us. We needed to crack that; we needed to solve that. So we created an open source foundation called the OpenPOWER Foundation. Its purpose was to really look at system design, given these big software applications we see, and fundamentally find ways to continue to deliver double the performance every two years, with more than just the processor alone. It was going to take things like accelerators. As I mentioned, NVIDIA was part of the original formation of the OpenPOWER Foundation, but since then we've seen not only GPU but FPGA members all join as well. With the Power architecture, we've open licensed it: anybody can license Power now to create derivatives, to do manufacturing, and to create new system designs. So whereas, I like to say, about two years ago we had five companies, five guys in a bar with a napkin, we now have over 200 companies around the world innovating around the Power architecture, focused on this set of problems we see around big software and analytics. Now it doesn't just stop at NFV; it continues on to areas that we think are pretty exciting, that at least at IBM we call the cognitive era. Hopefully some of you have heard of Watson? Watson, everybody? Yep.
Now it's evolved from the Jeopardy-playing machine of four years ago to really the way in which we're bringing data previously unseen by computers, unstructured information, into the world of healthcare, into IoT, into commerce, into finance, into retailing around the world, so that there is an aid to solving some of the big problems of the world. This aspect of cognitive, whether it be advanced analytics, artificial intelligence, machine learning, or deep learning, is a whole category of rich analytics that again has a fundamental thirst for compute, for I/O bandwidth, for memory bandwidth, for big memory to go with big software. That's what the OpenPOWER Foundation, with these members, is working on every day around the world, and we're pretty excited about the capability this provides us. Ubuntu has been a leader with us in this space since the day it was born, over two years ago. In fact, Canonical's own John Zannos is now, well, I like to call him the grand poobah of the OpenPOWER Foundation, or I guess it's president or chairman or some big title, so we're thrilled that we have that kind of partnership. But that's what's been going on in the hardware world: transforming as it needs to transform, whole open licensing models around Power that might surprise you, but really fundamentally focused on solving some of the rich analytics problems that we see in the world today. Because it's a world of big software, as you said, Mark.

It is so exciting for us to be working together. I couldn't help but notice that the company that open sourced machine learning first, Google, is blazing that trail on OpenPOWER. And so it is very exciting to be able to bring full choice of architectures and this amazing portfolio of capabilities to some of these new business problems. In fact, Google with Rackspace and IBM and others just announced, a couple of weeks ago, their plans to release OpenPOWER meets Open Compute meets OpenStack, based on POWER9, to the industry as an architecture for the data center. So we're pretty excited about that joint announcement, which took place at the OpenPOWER Summit in San Jose. Doug, thank you very much. Thank you, Mark, pleasure to be here.

So, riffing on that theme of machine learning, deep learning, neural networks, and the beginnings of AI: a team at IBM and a team at Canonical have, over the last month, done a little deep dive into open source machine learning, and I'd like to show you the fruits of that labor. I'd like to thank a company called Skymind and another company called Data Fellas, who worked with us on this. This is Deeplearning4j running live, using a model that was built on Ubuntu on Power. The way deep learning works is that you push large amounts of data through a learning algorithm to produce a model. And this model is actually very interesting: it is looking at OpenStack logs and doing predictive failure analysis. I want you to think about a large scale OpenStack deployment generating megabits per second of logging information at every level of the stack. It is far too much information to introspect manually. But what if we could bring deep learning to that challenge? So here you have a running OpenStack and, essentially, a real-time signaling service that, based on machine learning, gives us a predictive failure indication. It's essentially giving us an early warning of storms brewing in the logs of a large scale OpenStack cluster.
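The Juju topology behind this demo comes up in a moment. As a purely illustrative sketch of how such a pipeline might be modeled with Juju (every charm name below is a hypothetical placeholder, not the actual charms used on stage), it could look something like this:

```
# Hypothetical model of a log-analytics pipeline; charm names are placeholders.
juju deploy log-ingest                     # would stream OpenStack logs off the cluster
juju deploy dl4j-scorer                    # would score log windows against the trained model
juju add-relation log-ingest dl4j-scorer   # wire the stream into the scorer
juju status                                # watch the model converge
```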
That's a model built on Power, on Ubuntu, with OpenStack, with the help of Skymind and Data Fellas. This is another proof point of the same story, again open source machine learning: this one is looking at network intrusion data. So again, real-time analysis of streaming logs, with the aim of giving early warning of attacks in the network. Really profoundly interesting stuff. This I didn't talk about earlier, but when I talked about scale-out software-defined storage, this is an interesting example: a Ceph dashboard developed by Canonical on behalf of one of its customers, giving real-time performance analytics on a large scale Ceph cluster. Again, that's a straightforward juju deploy of the dashboard, and wiring it in with a relation command to your Juju-deployed Ceph cluster. And this is, of course, the Juju topology of the deep learning system that did that data analytics, that built that neural network model for both the OpenStack log analysis and the network intrusion analysis.

All right. So I just wanna recap. You've seen an interface that allows anybody to build OpenStack trivially, given four nodes, given 40 nodes, given 4,000 nodes. You've seen an OpenStack running that uses a pure container hypervisor, for the world's fastest OpenStack: lowest latency and bare metal performance. You've seen VNFs deployed with Juju running live on OpenStack, providing LTE services. And you've seen machine learning deployed with Juju on OpenStack on Power. The point is this: this conference is full of complexity, but the reality is we need to step back and think deeply about the core primitives that are needed to attack the problems of the new world. We will not survive this next decade as a private cloud forum and community unless we think really profoundly about the core primitives. I believe that with LXD, with MAAS, with Juju, we have three really important primitives to wrap around your OpenStack experience.

Everything we talked about here is on the booth: it's on the IBM booth, it's on the Ubuntu booth downstairs. The marketplace opens at six o'clock, and I believe there's a tradition of a beer crawl; in Austin, that should be spectacular. Welcome to Austin. Thank you very much for coming. Stick around, and we may have time for questions. Can we get a signal on that from the back? Do we have time for questions? If you have a question, please come to the mics. Right, well. If that was so completely definitively final, thank you very much for coming. There are a series of great sessions in here, lots to see on the agenda. Have a great week.