I'm going to just walk away right from that now. So I'm going to go ahead and introduce our first presenter this morning, Lew Tucker, vice president and CTO of the cloud computing team at Cisco, and his presentation, OpenStack: The Road Ahead for Cloud Computing Platforms. OK, great. Thank you, Gary. Yeah, make sure to get these as you leave, and then use them, because everything has to run on OpenStack. So oftentimes at these summits, I try to give us a perspective in terms of where we've been, where we're going, and why we're going there. And I would like this to be quite interactive as well, so I'm going to try to reserve about 10 minutes at the end to have you all either comment or ask questions. The talk yesterday was great, actually. I thought that was a really interesting perspective, particularly around open source and privacy, and how those of us in the open source movement really are advancing everything that we need to do to keep this world open. So when I look out at the changes that we're seeing in our industry and the disruptions that we're seeing, that I now take an Uber or I'm staying in an Airbnb, those are major, major changes. And when we look at those, what's behind those changes? I think of these three guys, because I think they encapsulate the big, big shifts that are going on. The first is the technology changes, exemplified by Gordon Moore's law, which is that the cost of computing drops by half every 12 to 18 months. And there are a lot of us looking at the semiconductor industry now and saying we may be at the end of Moore's law, but I actually don't think that the impact of Moore's law is declining in any sense. We're still seeing, as Gibson said, that the future is already here, it's just not evenly distributed. So we're still seeing Moore's law in effect in other parts of the industry going forward. The second one is Metcalfe.
Metcalfe talked about how, and the example was back in the age of the fax machine, if you've got one fax machine, it's not very much value. You've got two, you can communicate with your business or your grandmother. All of a sudden you have 1,000 or 10,000, and the value increases by the square of the number of things that are connected. So that's a statement about communications and what happens with communications. In the past I've also given talks about how communication is the way that we transmit information and we transmit cultural values. And that's what I think is the very human part of our existence: our culture is transmitted by information, and therefore it goes essentially at the speed of light. And then the third is the economic environment. That's where we're seeing new businesses being created, and what they're really doing, more than anything else, if you take Airbnb or you take Uber, is changing the way something is delivered. It's still transportation, it's still hospitality, but now the method changes because of the technology and the communications aspects. So how does this apply to OpenStack? As we continue to build out the OpenStack platform and the number of different projects that are coming along, I think it's good to look through this lens, because then we'll be able to see what I think is going to be driving us forward. I don't know how many of you read Bill Gates's book The Road Ahead back in 1995. There are enough of us graybeards here in the audience, I think, who probably remember when this came out. This was the PC era, and this is where many of us were looking at this going, well, the PC is changing a lot, because now we're starting to see communications come into the PC, and an isolated laptop or desk-side computer wasn't enough.
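Metcalfe's fax-machine scaling can be made concrete in a couple of lines. A minimal sketch (the function name is mine, and the device counts are just illustrative): with n connected devices, the number of distinct links grows as n(n-1)/2, on the order of n squared.

```python
def possible_connections(n):
    """Number of distinct pairs among n networked devices: n*(n-1)/2."""
    return n * (n - 1) // 2

# One fax machine: nobody to talk to.
print(possible_connections(1))      # 0
# Two machines: a single link (your business, or your grandmother).
print(possible_connections(2))      # 1
# 1,000 machines: roughly half a million possible links.
print(possible_connections(1000))   # 499500
```

That quadratic growth in possible links is the whole content of the law: each new device adds value to every device already on the network.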
And he really started talking, and having other people at Microsoft talking, about the information superhighway that we were about to see, which was the coming of the internet. That was a direction, and we've seen Microsoft sometimes struggle, but I think time and again they're adapting to the new environment going forward with their business and their technologies. Similarly for OpenStack. When OpenStack was started about six years ago, and for those of us who have been in it since the very beginning, it was about virtualized computing, so that we could spin up virtual machines: Nova. And then we could have scalable object storage: Swift. And then we said, well, there's networking, so we wanted to introduce the networking aspect of it. These were the fundamentals. But now the environment has changed a lot, the technology has changed a lot, and we're seeing a whole wave of new technology coming in. So we're seeing OpenStack similarly evolve, and we get to look at the road ahead and say: where is OpenStack going? How do we embrace these new technologies? There are many in the industry, I think, who believe one technology replaces another overnight. These waves are crashing on our shores, for sure, but oftentimes what we see is that the technology is picked up again and again. I was fortunate to be part of the very early Java team at Sun Microsystems. And now Java is legacy. Well, to me, that's success. Things have overtaken Java, but almost everybody coming out of computer science departments knows Java, and it's now become legacy, embedded into everything we do. So these waves of technology not only crash onto each other going forward, they also carry technology forward. And it's important for us in the OpenStack community to really think about this.
In many ways, I think what we're seeing now is that OpenStack is becoming the infrastructure play for infrastructure as a service that covers virtual machines, containers, and bare metal. And that means it's really becoming the software-defined infrastructure layer, which is what I'm going to be alluding to a little later as well. Of course, the biggest driving force in all of this is the internet. There are lots of numbers out there, and these numbers are getting so large they're just hard to fathom: almost four billion internet users today, and this is global. The growth of the internet is happening much faster outside of the US than within the US. And 26 billion networked devices. This is just phenomenal. So again, think back to Metcalfe's law: the value is increasing tremendously because of this. The communications that come with mobile computing have caused the internet to grow at this tremendous pace. And the modalities are changing: 82% of the traffic is going to be video. We're all doing FaceTime, we're sending pictures. These are much, much larger data sets being sent around the world, and we are clearly entering the zettabyte era. And at the core of it all, of course, is something we all see and say, which is that software is really changing the world. One reason for that, if you think about it, is that bits move faster than atoms. Bits mean pure information, and pure information is transmitted at the speed of light. Atoms have weight; you have to have FedEx trucks to move atoms. Even the fastest way to move data, as we know, is to pack it up on disk drives, put it in a FedEx truck, and ship it. But it's still physical. When we can transmit it all electronically, we can go at the speed of light.
And this is information: software encapsulates a program, and it's information that we're sending. It's not really the bits of the program but the program itself, the information content. That's what allows anybody around the world to access this software, and so open source, I think, is particularly important in that. And if we look at the growth of software itself, just measured by GitHub projects: 35 million projects in GitHub today, 14 million users. If you're thinking of writing a line of code, you'd best check whether somebody else has already written something pretty close to it that you might be able to clone, adapt, and use for your own purposes. This means that as an information medium, a communication medium, this is how we share information today. And it's just becoming embedded into everything. So it's no wonder that we're seeing this transition from the physical domain of computing into a virtualized domain. As we virtualize things, as things move from atoms into bits, it explodes in terms of the capabilities and the number of things that can be involved. So we're seeing things such as edge computing come about, where we're pushing computing more and more onto the edges of the internet, with IoT devices and everything else. This is growing faster and faster as we start to virtualize these domains. So it truly is the age of the software-defined data center. And if you talk to CIOs and everyone else, they are really finding that the operational costs of manually running their data centers are the hardest things to bring down. And the only way to bring down those operational costs is to turn it into software, turn it into automation: use automation through APIs to manage your infrastructure, instead of sending out trouble tickets and sending people out into the data center to change wiring or configurations.
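That idea of managing infrastructure through APIs rather than trouble tickets can be sketched in miniature: the desired configuration lives as data that can be versioned and audited, and an automation routine converges the system to it the same way every time. (A purely hypothetical sketch; the state keys and helper are mine, not any particular tool's API.)

```python
# Hypothetical configuration-as-code sketch: desired state is plain data
# that can be stored in version control, shared, and audited, and an
# automation system replays it identically every time.
desired_state = {
    "hostname": "web-01",
    "packages": ["nginx", "openssl"],
    "ports_open": [80, 443],
}

def apply_config(current, desired):
    """Bring `current` to `desired`; return the changes that were needed.

    Idempotent: applying the same template twice produces no new changes.
    """
    changes = {}
    for key, want in desired.items():
        if current.get(key) != want:
            changes[key] = want
            current[key] = want
    return changes

server = {"hostname": "web-01", "packages": ["nginx"], "ports_open": []}
print(apply_config(server, desired_state))  # first run: packages and ports change
print(apply_config(server, desired_state))  # second run: {} -- nothing left to do
```

The second call returning nothing is the point: a run book in a notebook depends on a person following it correctly, while code converges the same way on every run.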
We really need to be able to do this through automation, through software, because software and information move faster than anything else. So one way to look at it, and I think it's an easy mantra here, is that configuration and cabling now become code. And there are a lot of reasons why this is important. First and foremost, we write it down. We write it down in a way that the template can be shared, can be stored, can be audited; you have traceability of it. By turning configuration into code, we're asking automation systems to replicate it and do it the same way over and over and over again. Because this is a proven way, this is a configuration; we know that if you do it this way, you'll be okay. That's much, much better than somebody going to a notebook, opening up page 37, and following some sort of run book. Let's get it all into code. And as we do that, then that code, particularly when it's shared in open source, starts to become standards. This is the standard way of configuring your systems; this is the standard way of deploying this architecture. And this is something that the standards bodies will never be able to approach directly. It would just be too hard for us to write it all down and have it go through the normal standards process. Instead, these are shared and become de facto standards that we can all benefit from going forward. So we're seeing a real movement now, I think, with OpenStack setting many of the standards for cloud computing going forward. And that, I think, is a very, very important contribution. Now, businesses are also embracing open source at a much faster rate than ever before. I think it's because they want to avoid being locked in to a single vendor.
They really want the open, modular architectures exemplified by open source, and with the number of projects we're talking about, you have a large number of different solutions out there that somebody else has already proven and validated and that you can then copy and apply in your own situation. And again, as I mentioned, this then drives the standardization process. So oftentimes I think it's a mistake to ask whether the success of open source is dictated by the number of open source companies we have. I don't think so. Open source, as I've talked about before, is a way for us to collaborate. So the success is really how much open source software is being used. That's the proper metric. And as for the number of companies: if you look at, for example, Facebook and Google and others, they're very, very large contributors of open source software. They're not software companies. They're using it for their own purposes, but they find it highly advantageous to put that out into the community and have the community participate in it, because that helps their whole development environment. One of the most highly leveraged ways to use software, I think, is to share it with the rest of the community. In cloud computing, then, of course we're also seeing tremendous growth. The number of companies now using cloud computing is growing rapidly. And in fact, what we're seeing is that no single cloud is going to capture all of this business. We're seeing the very large cloud providers, sort of the big three or big four out there today, which is really wonderful, and we're seeing an awful lot of enterprises moving to them, but at the same time, they're setting up private clouds in their own data centers. And we're seeing about an even mix, and it goes back and forth, because there are sometimes tremendous advantages, which we've all talked about, to using a public cloud.
Somebody else takes all the pain of running that infrastructure, and you don't have to do it. But you may have regulatory reasons, you may have a configuration, you may have optimizations or a particular way of using computing resources that you can apply much more effectively when it's under your own control, in your own data center or at your own service provider. And so I think we're really moving into a world of many clouds, and we're calling this hybrid cloud computing. I don't think hybrid cloud itself is a thing; it's a use case, it's how we use clouds today. And so I think this is really the dominant mode over the next several years: we'll have this multi-cloud strategy going forward. And it will start to look very different as we go further and further out to the edge of the internet. Of course, this is driving huge traffic back into data centers. It's interesting to think about: as the internet has grown and cloud computing has grown, the network traffic within the data center has grown even faster. And that's because of the number of customers we're reaching through our data centers, the number of applications, the number of mobile devices walking into our businesses every day, and all of the information that's coming back into the data center. Almost everybody I know has a strategy around how to handle big data and the analytics they can derive from it. And increasingly, we're seeing the question of what artificial intelligence you now need to help you understand this large amount of data coming into your data center. So it is very much a hybrid IT world. Businesses not only have to worry about the four or five major business-critical applications they're running in the data center; they have a whole new set of technology, a whole new set of SaaS services that they rely upon.
They've got their data, they've got their applications, in multiple places: in their own data centers and also at cloud providers. And they really need to start thinking about this much more holistically than what is behind the four walls of their data center. This has tremendous impact on things such as security. The biggest thing now in security is really saying that you don't have a perimeter anymore. Perimeter defense is not the way to think about security. You can't wall things off. You really have to look at behavior. Who is accessing your systems? Where are the data flows going? Are you all of a sudden seeing an FTP server starting to pump out data from somebody's desktop? Well, maybe that's not desirable. Maybe that machine is not supposed to be handling FTP traffic. So you've got vulnerabilities that can exist within your own environment, and you have to think about security very differently because of that. Now, this slide tries to capture what I think is also on the minds of many CIOs: the waves of new technology that are coming about, the new shiny objects. Every day they're probably waking up, or sitting on an airplane, reading about yet another technology they have to become aware of because their competitors are starting to look at it. And so we're really struggling, I think, at this time. Those of us in this room, I'm sure, are feeling the same thing, because I'm feeling it. Every other day I'm waking up going, oh my God, here's another new project. What are they doing? How do I learn about it? Is this something I need to pay attention to and start tracking? Fortunately, since all of this is by and large open source, it's all available online. I can learn about it. I don't have to call up a company to get information about a new product; I can do all my research through the web.
And I can look at the code, and by and large I can bring it down onto my laptop and experience it directly. But it is difficult, and many people, I think, are really struggling as we move so fast. So I want to provide some context around these technologies, particularly when we're talking about containers. I think we're starting to see a new kind of layering of the stack going forward. Think of the lowest layer as the physical infrastructure, where you've got your servers, your routers, and your storage, with operating systems right on top of that. Now, on top of those individual servers and operating systems, we're creating a platform across the data center. And that's what I think cloud software is. Cloud platform software is really about abstracting all of those servers, all of that storage, all of those networking capabilities into a consistent layer where, through APIs, you can spin up virtual machines, you can spin up private networks, you can spin up storage devices, and all of that becomes abstracted. That's infrastructure as a service, and that's a software-defined data center. But one of the limits of that is that we've abstracted the data center with the same things that are in the data center. So I see a virtual machine. Well, with a virtual machine, you still essentially need to act like a system administrator for it. You have to patch the operating system, you have to bring in new applications. You can start and stop them really quickly, which is great, and I can run a half dozen on my laptop. But the model there, again, is that of a virtual machine; we just abstracted it and made it accessible by software. The next layer up that we're seeing now is really the containerized layer. That's where we're seeing Docker. Years ago, it was Solaris containers. Containers actually are pretty old technology, but they take a different point of view.
A container is an encapsulation of everything, all the resources, that an application needs. So this is a technology that's coming from the top down: I have this application, and I want to encapsulate all of the things that I need, which version of Python I'm using, which libraries I need. But it generally doesn't carry an operating system of its own; it has expectations about the operating system that it's sitting on top of. So it's a way to encapsulate applications without mimicking a virtual machine; the layer below is virtual machines or physical machines. Containers are really an application-focused way of describing what is in the application and what its requirements are. And they allow us to spin these up very, very rapidly: tens or hundreds of containers can be spun up every minute. So it's much, much faster, much lighter weight, and perhaps much more portable. Now we can take that container with a very high degree of confidence and run it on my laptop, on an EC2 instance, or on a VM in OpenStack. The portability increases dramatically because, again, it's an encapsulation of the application. So when we talk to people, I think it's important to separate the conversations according to what people are looking at. The system administrators are really looking at a software platform across their data centers, such as OpenStack, and the application folks are looking at the life cycle of their application: how do they manage it, how do they continually update their application and deploy new containers. At the same time, science is advancing. And this, I think, is really thrilling, because, as I actually used to be a neurobiologist, one of the reasons why I switched from neurobiology to computer science was that biology was moving far too slowly. In wet labs, we're doing experiments.
Those of you who have done anything in chemistry or biology know it takes a long time to make progress, and experiments can last years before you know the answer. When I moved into computer science, all of a sudden we could write code, and things went a lot faster. Now they're coming together. Science, particularly around genomics, is becoming data science. So data science, big data, computing science, all of these things are merging, and that accelerates it. This is a good book if you want an overview of what's going on in genomics at this time. We're seeing that as science is becoming a technology, and as technology is impacting science, science itself moves much faster. For example, here's the cost of producing a full genome. It was in the hundreds of millions of dollars in 2000, and it's been dropping. The top line there is where it would be if we used the exact same technology as before, just sped up by Moore's law: we'd still be at almost a million dollars for a genome today. Fortunately, science and computer science are also about algorithms, and because of algorithmic changes, we're moving much, much faster than Moore's law. And the big, big change, again, in terms of impact on the economics, is that when the cost of a full understanding of your genome as an individual drops below $100 or so, every one of us will have it done. This will be part of your medical record. Every doctor will have your genome sequenced because it'll be so cheap; insurance will actually mandate that you wouldn't prescribe something without knowing the entire genome of the patient you were treating at the time. So economics, algorithms, and this technology, I think, are what's driving most of these changes.
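That top-line projection is just compound halving. A back-of-the-envelope sketch, under stated assumptions (a hypothetical $100M starting cost around 2001 and a 24-month halving period for Moore's law; both numbers are illustrative, not from the talk's slide):

```python
def projected_cost(start_cost, years_elapsed, halving_months=24):
    """Cost if the only improvement were Moore's-law-style halving."""
    halvings = (years_elapsed * 12) / halving_months
    return start_cost / (2 ** halvings)

# Starting at roughly $100M: where would Moore's law alone put us 15 years on?
print(round(projected_cost(100e6, 15)))  # still in the hundreds of thousands of dollars
```

The actual sequencing cost fell orders of magnitude below that curve, which is the speaker's point: algorithmic improvements compounded on top of the hardware improvements.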
The other thing: if you look back at the history of computer science, and particularly at Palo Alto and the Homebrew Computer Club, a lot of things started out in the hobbyist world, with people building things themselves. And now, with the maker movement and Kickstarter and everything else like that, a lot of these hobbyists are driving a lot of the technology changes. We're not waiting for the big Intels of the world, or the Ciscos or IBMs, or even Google or Amazon. Hobbyists are getting into the game with 3D printing and drone technology, and that's rebounding back into the industry. So now people are talking about how to use drones, driven by the consumer and gaming side of drones, in manufacturing. And when you're talking about drones within a factory, you have to be able to do location by Wi-Fi, because GPS is not going to give you the accuracy that you need. So all of these things play on each other, again accelerating this rate of change going forward. The other thing is that know-how really knows no boundaries. That's another theme, I think, particularly when we're looking at things such as robotics. Robots, actually, are now mostly made in China, because know-how can go beyond national boundaries and everything else. This movement of information, which can be transmitted faster than anything, means that the entire world is now participating. And it's great to see what we're doing in OpenStack, because I think we're just at the point where the number of gold members is a little more than 50% outside of the US. That's great, because we are global.
So when we run these conferences, and this is one of the things, as vice chairman of the foundation, that I'm most involved in, we try to put them around the world, because we are a global foundation. And that, I think, follows simply from the fact that the technology and the know-how are going to come from everywhere. Again, that creates continued acceleration. Even some countries: I'm an advisor to Qatar, and they are largely dependent upon natural gas and oil. They know that's a limited resource, that it's going to run out. So they're taking 2% of their GDP and investing it every year in R&D. They're building science and technology parks, and they're putting universities there. They've got a very, very progressive approach towards a future they know has to be in information technology. So they're taking their petrodollars today and investing them in the things they need in their infrastructure to become a leader, not just in Al Jazeera, where we've probably all seen them, but in information technology and universities, trying to build some of the foremost institutions, particularly around things like Arabic natural language translation, taking most of Wikipedia and making it available to the rest of the world. Shenzhen: I don't know how many of you have been to Shenzhen; I haven't been. But from what I've read about it, it's simply amazing. This is the next Silicon Valley. The way they've collaborated between the different industries there is allowing them to move much, much faster. Silicon Valley, I think, was fueled largely by our financial systems, the venture capitalists and how money was applied. This is being driven by how ecosystems of companies can work together much, much faster. And I think it's a very interesting play. There's a great article from Wired, if you want to read more about it; it's very, very fascinating. Again, know-how doesn't know any boundaries.
So they can apply the same kinds of things and become a real powerhouse there in Shenzhen. So what else is happening that would affect OpenStack? Well, this is one I'm particularly fond of, since I was here in Boston in the early 90s as part of something called Thinking Machines. Thinking Machines was an artificial intelligence company back in the late 80s and early 90s. We wanted to build a machine that would be proud of us; that was our motto. And to do that, we had to bring together something like 65,000 processors and go after massively parallel supercomputers, because we were trying to compete with the processing required by the human brain. At the time, the Japanese also had a big effort called the Fifth Generation project, and this created a lot of angst here in the United States that we were going to be leapfrogged by the Japanese, because they were investing heavily in large-scale computing and artificial intelligence. We were way ahead of ourselves then. We didn't have the computing power necessary to really pursue AI. And so we went through a period, the AI winter, which refers to a dearth of funding: the federal agencies almost overnight stopped funding AI research for about 10 or 15 years. Well, Moore's law kept driving technology forward, until today it's achievable again. Now we can talk about self-driving cars. Now we can talk about IBM Watson. We can talk about Siri on my phone. All of these things are now possible because Moore's law has made computing cheap enough that we can start to apply some of these algorithms. And also, the amount of data that you can bring to machine learning has increased to the point where these systems are rapidly increasing their capabilities. So consider the latest thing: a Go champion has now been defeated.
And one of the interesting stories, if you read behind this, is that AlphaGo, right at the end, in the last couple of days, had sort of run out of human players and playbooks from previous games, and AlphaGo started playing itself. You have two versions of it, each playing the other, and it learned much, much faster at that point. That's kind of scary, I think, for those of us who read a lot of science fiction, but I think it shows, again, that data is the important thing here. And now AIs are actually going to be able to start having behaviors, and when they have behaviors, those behaviors create instances, examples, and they can start to learn faster. If you think about the way any infant learns, they learn through interaction. And a lot of the progress we're going to see in AI, I think, is that more and more, the behavior is not going to be pre-programmed; it's going to be learned through interactions with us. And that creates a whole raft of ethical considerations that I think are going to be quite interesting going forward. The other thing, which we all know about, is that the number of PhDs in AI coming out is not enough to satisfy what the industry is looking for today. Instead, we have to reinvest in learning itself. And again, because of the beauty of the internet, we can all learn through the web. In fact, for those of you who have heard about TensorFlow and want to play with it, Google put it up as a web app. You can actually design your own network. The software is freely available, so you can play with it, you can learn about it.
Here's something where, if you don't want to write any code, you can play around with changing the algorithms, changing the feature detections it's going to do, and see whether you can come up with the solution here, which is to identify this cluster versus the other things, to partition this into two sets. So it's really fun. You can watch how this actually evolves, and watch how many iterations it goes through before it arrives at the solution. So I advise you to take a look. We'll be posting these online, so the URLs will be available to you. The fact that we can learn by doing it ourselves is part of how we cope with this increasing rate of change. There's a real theme here. Similarly with jobs and online learning: you can now go online, and if you want to be trained in data science or machine learning, courses are available. And this, again, will help us meet the need that we have. One last area that I've been thinking about a lot is this whole notion of serverless, or Lambda, or functions-as-a-service computing. I think of this as very similar to what we've talked about in the past as dataflow architectures, where the flow of data is actually driving what the system does. Now, I think serverless is actually a poor name for this, because of course it's running on servers; that's the only way computing actually happens down below. But you don't have to think about that. What you have to think about is writing a small function, one piece of a larger system. And you can break systems up into these different pieces, given to different people to do, or given to different devices to do. You can have your Amazon Echo device, Alexa, doing your speech processing for you so that you can talk to it, and then you can fire off an event that will take place, perhaps on an EC2 instance, to do some processing of that.
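That flow, a device event firing off a small function somewhere else, can be sketched in a few lines. This is purely illustrative: the event names and functions are hypothetical, and in a real functions-as-a-service platform such as AWS Lambda the wiring and the servers are handled for you.

```python
# Hypothetical sketch of function-as-a-service composition: small
# functions subscribe to named events, and each function's output can
# fire the next event. No function here knows, or cares, where it runs.
handlers = {}

def on(event_name):
    """Register a function as the handler for an event."""
    def register(fn):
        handlers[event_name] = fn
        return fn
    return register

def fire(event_name, payload):
    """Deliver an event to its handler and return the result."""
    return handlers[event_name](payload)

@on("speech.captured")
def transcribe(audio):
    # Pretend speech-to-text; a real system would call out to a service.
    return fire("text.ready", {"text": audio["words"]})

@on("text.ready")
def handle_intent(msg):
    # One small piece of a larger system: answer one kind of request.
    if "gym" in msg["text"]:
        return "Thursday at 3 PM works."
    return "Sorry, I didn't catch that."

print(fire("speech.captured", {"words": "when can I get to the gym"}))
```

Each function is tiny, independently replaceable, and triggered only by the data flowing through the system, which is the dataflow character of this style.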
And so here's an example: how do I set up an assistant to tell me when I can get to the gym? Well, the assistant has to know what gym I like to go to and what's on my calendar. So it's going to interact with several systems to come up with the answer, to say that three PM on Thursday is just fine. This means I'm not even thinking about the servers. I'm not even thinking about where this software is running. I'm just saying I want this function to call this function to interact with this other function. And I think we're just at the beginning of how we start to think about this. So to me, this means there's a whole new raft here, not only of programming languages, but of frameworks and design principles for how we design these kinds of distributed systems, and I think that will be really exciting going forward. So I'm running out of time, unfortunately, but I just want to say that as we look forward to what we're doing in OpenStack, these are the things I think are coming, and we need to think about what the systems for that are. Fortunately, we're building this in online communities. This is community-developed software, so we all have a role to play in this. And we can look at OpenStack as being on two tracks right now. One is maturing; the other is continuing to innovate. So we're trying to categorize a lot of this through our marketplace and through our navigator, not only showing what the projects are, but allowing you to see the status of these projects, so you can see how mature and how widely adopted each one is. These are all considerations that need to go into our thinking when we're adopting different kinds of technologies. And I urge you also to take a look at the certification the foundation has now introduced.
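The gym-assistant example above can be sketched as small functions composed together, each one standing in for a separate service (a calendar, the gym's hours). Everything here is hard-coded and the function names are invented for illustration.

```python
# A toy sketch of the gym-assistant: "this function calls this function
# to interact with this other function." Each piece could run anywhere.

def free_slots(calendar):
    # calendar: hour -> busy? ; return the free hours in order
    return [h for h, busy in sorted(calendar.items()) if not busy]

def gym_open(hours, hour):
    start, end = hours
    return start <= hour < end

def suggest_gym_time(calendar, gym_hours):
    # Compose the two "services": first free hour while the gym is open.
    for hour in free_slots(calendar):
        if gym_open(gym_hours, hour):
            return hour
    return None

calendar = {13: True, 14: True, 15: False, 16: False}  # busy until 3 PM
print(suggest_gym_time(calendar, gym_hours=(6, 22)))   # 15, i.e. 3 PM
```

Nothing in the composition says where each function executes, which is the point: the caller reasons about functions and data, not servers.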
Because employers who are looking for experienced OpenStack engineers need to be able to know that these people have been trained properly. I'm not going to go into other areas, but verticalization is happening as well, in media distribution among others. We've got Comcast Xfinity, which is based on OpenStack. We've got DirecTV, based on OpenStack. So a lot of the video stuff is now moving very quickly to OpenStack, as well as OPNFV, which combines OpenStack with a lot of the work going on in network function virtualization. There are a number of sessions on these topics later today. And here's just the list; I'll leave it up while I open for a question or two. Questions? If so, please come up to the mic. We've got a couple of mics up here. I think I have about three minutes. Or I can bring a mic to you if you don't want to... Oh, here we go. Hi, Lou, thanks. So, aside from the network infrastructure, can you share two or three priorities for Cisco as far as contributions are concerned? Sure. I think we do it in perhaps two ways. One is that our real expertise is in networking, so we are continuing, as you'll see in the rest of these talks, to advance the state of networking. For example, coming up next is a talk about VPP, which is a much, much faster, line-rate, software-driven packet-forwarding technology. We developed that, put it out into another open source community, which is FD.io, and we're now making it available and integrating it, again, with OpenStack, and similarly with IPv6. So that's in the core area of expertise that we have in networking. But also, from a larger perspective, Cisco as a company is undergoing a transformation like everybody else.
So we are transforming from where Cisco has traditionally been and moving up into the software and services world. You've seen that we recently purchased AppDynamics, for example, for application performance management. We're seeing much more movement toward management itself moving to the cloud. We have a product called Meraki: how do you manage all of your WiFi access points? You don't want to do it individually, and they're across your entire enterprise, so you run that in the cloud as a SaaS application. And of course we've had things such as WebEx, which is SaaS. So increasingly you'll see Cisco moving into SaaS methodologies for managing things. Even one of our OpenStack offerings, Metacloud, is managed from the cloud as well: a private cloud, remotely managed for the customer, running in their own data center. One more, yes. Hello, great talk, thanks a lot. So OpenStack started as a microservices project to deploy cloud-native services. And now there's a new foundation, the CNCF, that kind of sounds like the next generation of OpenStack. And Kolla is one of the projects that integrated Kubernetes into OpenStack. Is there any working group or user group in the foundation to embrace the rest of the CNCF? Yeah, I think the larger question you're asking is: what is the impact of containers going to be on OpenStack? Yeah, and the rest of the CNCF. Well, with the CNCF and everything else, we're contributing to both. And particularly with a project such as Kolla, in many ways I believe we're seeing OpenStack become more like an application that can sit on top of containers and that can offer containers in turn. So containers work not only for deploying OpenStack as a set of containerized services, but you can also run containers on top of OpenStack.
OpenStack I still see as being much more focused. What OpenStack provides that you don't get just from CNCF technology is the management of the infrastructure itself. So OpenStack continues to be, I think, an integration platform for a lot of different technologies, while its primary purpose is still that software-defined infrastructure layer: managing the infrastructure itself and presenting it in a virtualized form. It can present it as virtual machines, or as containers, or even as bare metal, which you can then run just about anything on. And bare-metal machines, through something like Ironic, are now treated more like virtual machines. From the developer's point of view, they can say, OK, shut down that server, I don't want it anymore, and they can do it through an API. They don't have to file a trouble ticket and have somebody run out to the data center to do it. So I think there's still a role for those things, and I think we're seeing the different communities now work together. A lot of the containers today, like Kubernetes, I believe are running on top of OpenStack, and I think that's very positive. Thank you, Lou. Okay, thank you so much. A couple of things, everybody.
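"Shut down that server through an API" is concretely a single HTTP DELETE against OpenStack's compute (Nova) API. This sketch only builds the request rather than sending it; the endpoint URL, server ID, and token are all placeholders, and in a real deployment the token would come from Keystone.

```python
import urllib.request

# Deleting a server through the Nova API is one DELETE on its resource
# URL, instead of a trouble ticket and a trip to the data center.
# Endpoint, server ID, and token below are hypothetical placeholders.

endpoint = "https://cloud.example.com:8774/v2.1"   # made-up Nova endpoint
server_id = "0f5b4a9e-example-id"                  # made-up server UUID

req = urllib.request.Request(
    url=f"{endpoint}/servers/{server_id}",
    method="DELETE",
    headers={"X-Auth-Token": "TOKEN-GOES-HERE"},   # normally from Keystone
)

# We only construct the request here; urlopen(req) would send it.
print(req.method, req.full_url)
```

The same pattern covers bare metal through Ironic: the developer sees one API surface whether the "server" underneath is a VM or a physical machine.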