Well, welcome, everyone. Thank you for coming to this session. My name is Lew Tucker. I'm VP and CTO of Cloud Computing at Cisco Systems, and I'm also vice chairman of the OpenStack Foundation, so I've been involved with this since more or less the beginning. It's great to see the change: with each new release we're getting more and more usage, more and more users. This is not a very deep technical talk. It's really more about the transformations we're seeing take place, particularly in the data center, because of things such as OpenStack, as we build larger and larger systems and connect them up. I'm sure many of you could give the same talk. The larger phenomenon we're seeing is the continued impact and growth of the internet, impinging on what we're doing in these data centers, on our models for computing, and on our models for how we develop applications. When you think about it, more data was created in this last year than in the previous 5,000 years. We're continuing this digitization of our environment. Several years ago we passed the point where more virtual machines are deployed than physical servers, and within the next year roughly 70% of all machines will be virtual. And if you've seen the stats on packets flowing through the internet and the use of bandwidth, about two-thirds of the bandwidth is now essentially mobile video. That's a huge transformation for those of you who had RIM BlackBerrys or earlier kinds of cell phones; now everything is going to video.
And particularly as you travel to other parts of the world, you'll see people just sitting on subways watching movies on their phones. It's really striking. Then everybody's talking about these 50 billion connected devices. We're nowhere near that yet, but when we start connecting all of these different devices, it really changes the nature of computing. The old style of thinking about an application, where you bring it up on a server and run a couple of apps on that server, is really gone. These are all cloud-scale applications running over hundreds to thousands of virtual machines, across multiple geographies. That's a big change. If you've followed Geoffrey Moore of Crossing the Chasm, he's now talking about systems of engagement; search on Google or Yahoo or Microsoft for "systems of engagement" and Geoffrey Moore, and you'll find a great talk. His view on this is rather interesting. He says the traditional IT apps, ERP applications, HR, financial apps, how we manage our business, are systems of record, and they're done. Meaning we've finished developing them; they work just fine, and now they're moving into a reduce-the-cost cycle. There's no real innovation happening in those systems of record. Where IT organizations are expanding, and where the most rapid growth is, is in what he's calling systems of engagement: how the company engages with its employees and its customers. More and more companies are using the internet, mobile devices, and social networks to engage with their customers. So this is coming out of lines of business, out of marketing departments: systems for how the company engages with its supply chain, its customers, and the community around it.
Those are where the new developments are happening, and a lot of them are happening in very unexpected ways. You've all heard about bring your own device, for example; it's a big challenge for IT administrators. All of a sudden people are walking around with highly capable computers in their mobile devices, accessing their networks and wanting access to the information. I think this morning you heard people ask: why is it we have a richer environment at home than we often do in the office? So IT organizations are trying to catch up, bringing out a whole new set of applications designed around how they engage with employees, and how they engage with customers and learn more about them. One of the biggest drivers of big data, obviously, is analyzing all of the click streams, relationships, and as much information as we can get about our customers and their use of our products. That becomes part of the whole development cycle: mining that information and feeding it back in, so that when we do the next iteration of the application we know how to better target it for its users. And then there's the Internet of Things, which is moving to the Internet of Everything. It's a mistake to think about it solely from the point of view of devices, even though that's where the large growth is going to come from; it's also mobile applications, and it's connecting into things such as the smart grid and smart buildings. One of the apps I actually like, which I think is a great example both of this kind of movement and of open data, is in the city of San Francisco: the city government decided to put sensors in the roads and publish information about where the available parking spaces are.
They publish it so that anybody can build an application on top of it, one you can carry around on your cell phone to find the closest parking spot. That helps the city in a number of ways: it reduces congestion, it reduces pollution, all enabled by the fact that they essentially open sourced this repository of data so that innovators and entrepreneurs can develop applications that use that information to provide interesting new services. Factories are becoming robots. Coming from a background in robotics and AI, when I look at factory automation today, that's one large, large system. Look at a logistics company, or FedEx, shipping massive amounts of product: that's one big system, and they're treating it now as another example of a robot, because automation is being used to move things around even though it doesn't have the form of a physical human. These things, I think more than anything else, drive a virtuous cycle. We have the Internet of Everything, with more and more things coming onto the internet. That creates a need to process all of that information, which drives the need for big data analytics and streaming analytics. That drives the need for more and more cloud computing capability to run these kinds of applications. And that has an effect even down in the lower layers, when we look at software-defined networking. That's where we'll talk a little more about how this is affecting not just compute and storage, but now networking as well. In fact, this is the promise whenever any of us tries to explain what cloud computing is to somebody who asks: we're really talking about speed, as you heard this morning, the speed at which you can start to deploy applications.
In the old days, you designed an application, you coded it, you decided where you were going to run it, you might have to procure new systems in order to do that, and then you would install it and configure it, all of which takes numerous approvals and sign-offs. We're moving to a model where you go from design, to code, to pushing it. That's what we call continuous deployment. It's about how we accommodate code that's changing that fast, and that's where continuous integration is necessary. A lot of us involved in OpenStack are now running essentially the same continuous integration environments in our own facilities that we run at the foundation, so that we can take the updates to the code and deploy them in-house as well. So continuous integration and continuous deployment are something I think we're going to be talking a lot more about going forward. It means changing the traditional way of doing things to a model whereby an application is being changed several times a day. That's a whole different notion of what it means to release an app. Do you want the app as of 4:35 p.m. yesterday, or as of 6:00 a.m. this morning? That's the new way of approaching these things. And there's something else people see: because of Moore's law, we keep getting higher-performance computing and storage systems at lower cost. So the part of overall spend that's growing is the operating cost. Those operating expenses are what really need to be attacked next, because Moore's law is taking care of the rest.
Developing, deploying, monitoring, patching, and upgrading the application is what takes most of the expense in IT now. So how do you attack that? Virtualization was one early answer, and by itself it didn't have much impact, because in fact we saw what's known as VM sprawl: we just built more and more virtual machines. Now we had a larger number of servers to manage than before, because these were virtual and could be spun up very quickly. Automation, continuous integration, continuous deployment: all of these are an attempt to attack that growing percentage of IT spend associated with operating cost. And it's interesting in this community, because OpenStack as a community is largely made up of people trying to develop and deploy cloud platforms. We're using automation tools; we're using Chef, we're using Puppet. We think of this as something that needs to be automated, because installing OpenStack completely by hand, or managing it completely by hand, manually logging into servers and everything else, makes it impossible to attack these costs. So we really need to look at the automation tools necessary for this. How many people here consider themselves IT administrators? And do these kinds of things concern you today? It seems like every other month we're coming up with something new: programmability, network operating systems, network function virtualization, OpenStack. All of this is coming at them, and there's a huge re-education that changes not just what you know or how you operate, but how your teams themselves are organized. That's an awful lot of change coming very, very fast, and it's something I think we're trying to address with everything we're doing in OpenStack.
We would like OpenStack to make this person's life easier. With all of these changes, we want to be able to automate the infrastructure: automate deployment, update, and upgrade. Automation is the key to everything here. What we've done is virtualize the data center, the virtual data center people have talked about, so that we can make it easier for application developers to deploy their own applications in a very much self-service kind of model. At the same time, data centers themselves are growing and changing. There's a lot of emphasis now on data center scale; you're seeing new data centers popping up all over the landscape, and the principal factors here are power and cooling. It's been quite encouraging in the last couple of years to see many of the large data centers attack that problem and dramatically bring down the cost of the power and cooling needed, through a lot of innovative design. So the physical data center is one of the things that's changed, and data centers as a business seem to be becoming a real estate business, because an awful lot of property is involved. OpenStack holds this all together. The real transformation of the data center, I think, is that OpenStack is becoming this new platform that is part of the software stack of the data center. Many years ago we coined the term middleware. At the time it meant application servers, message buses, the kind of software you would use to stitch different applications together, and middleware became a whole industry. I think that notion is now being replaced by that of a cloud platform.
And that's why we're seeing such interest in OpenStack, with so many companies involved: we all have an interest in making sure this platform is designed in such a way that we can all use it. It can span different kinds of use cases, from a bank, to a private cloud deployment in an enterprise, to a large-scale service provider, to a company such as Comcast, which is putting Xfinity on OpenStack. All of these different use cases are what we're hoping to cover with the OpenStack projects today. I have this picture in my mind of a data center with a layer where a certain amount of your resources is devoted to the compute service, Nova. Another set goes to storage. Underlying this you've got the network service, Neutron, that ties it all together. Then you've got Ceilometer and Heat and everything else that builds on top of this. On top of that we can also put the PaaS platforms, whether Cloud Foundry or OpenShift or others. So we're creating this new layer cake. All the while, what we're trying to do is make it so applications can be deployed without having to contact somebody in the IT organization by filing a trouble ticket. We really want to make this self-service, dynamically provisioned, and scalable. And through the rest of these services, again, we're trying to make it easier. If you're developing a large-scale application, the last thing you want to do is stand up your own database or figure out how to write your own key-value store. Instead, use a service. Services are the way to think about OpenStack. When I describe it, it certainly is the open source software that allows anybody to build and deploy their own cloud. But it's also a set of loosely coupled services that, taken together, become this platform upon which applications rest.
And as a platform, it has a top layer, the developer APIs, that makes it easier and easier for applications to be developed. And it has a bottom layer, where it hooks into real physical systems, because at the end of the day we still have to push bits over wires, and we still have to heat up CPUs doing computations. We need the top and the bottom, and OpenStack is the layer that fits in between. Marc Andreessen, back in 2011, made the comment that software is eating the world, and it seems to be coming back around again: to get the kind of speed and agility we need to be competitive today, software and automation are an absolute requirement. It's kind of interesting, given that software is eating the world, the terms we've come up with: Chef has recipes, there's Salt. These are the new systems you have to know about in order to realize this kind of automation of the whole development process, the integration process, the deployment process, and then the management of your infrastructure. So automation becomes very key to this. We're doing this even on the foundation board, where we're putting material to be voted on into Gerrit and Jenkins; we use a lot of the same tools for the processes we have on the board today that we do for development. The network, of course, is changing, and this is where it's been interesting in the last couple of years. When we started Neutron, for example, then called Quantum, as a new service, it was in anticipation of a sort of revolution happening in networking: we saw the emergence of OpenFlow, we saw different controllers coming into place, and we started talking about an overlay and an underlay.
One of the things to think about as we scale out is the way we're deploying these things: traditional vertically scaled network hierarchies are turning into spine-leaf, because the amount of data traveling within the data center, from server to server, far exceeds what goes in and out of the data center. So the bisection bandwidth inside the data center needs to grow dramatically, and we're seeing new architectures built in the networking infrastructure. The new architecture also has to accommodate these overlays. An overlay is an application's own view of what it means to have a network. That network gets mapped onto physical links and everything else, naturally, but from the developer's point of view it allows them to think about their application as a set of servers over here talking to a database over there, on a kind of private network, and they're the only ones who can see it. That's done through an overlay; it can also be done through routing. So there are a lot of different changes taking place here.
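To make the overlay idea concrete, here is a toy sketch in Python of how tagging traffic with a per-network identifier (in the style of a VXLAN VNI) lets two tenants reuse the same private IP addresses over a shared physical fabric. The class and method names are invented for illustration; this is not any real OpenStack or VXLAN implementation.

```python
# Toy model of overlay isolation: each virtual network gets its own
# identifier (like a VXLAN VNI), and frames carry that identifier across
# the shared underlay, so overlapping tenant IP spaces never collide.
from dataclasses import dataclass


@dataclass(frozen=True)
class EncapFrame:
    vni: int          # which virtual network this frame belongs to
    src_ip: str
    dst_ip: str
    payload: bytes


class Overlay:
    def __init__(self):
        self._next_vni = 1000
        self._vnis = {}   # virtual network name -> VNI

    def create_network(self, name):
        """Allocate an identifier for a tenant's private virtual network."""
        vni = self._next_vni
        self._next_vni += 1
        self._vnis[name] = vni
        return vni

    def encapsulate(self, network, src_ip, dst_ip, payload):
        """Tag a frame with its network's VNI before it crosses the fabric."""
        return EncapFrame(self._vnis[network], src_ip, dst_ip, payload)


overlay = Overlay()
overlay.create_network("tenant-a-web")
overlay.create_network("tenant-b-web")

# Both tenants can use 10.0.0.0/24; the VNI keeps their traffic apart.
fa = overlay.encapsulate("tenant-a-web", "10.0.0.1", "10.0.0.2", b"hello")
fb = overlay.encapsulate("tenant-b-web", "10.0.0.1", "10.0.0.2", b"hello")
assert fa.vni != fb.vni
```

The point of the sketch is the mapping step: the developer sees only their private network, and the platform maps it onto the physical infrastructure underneath.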
When we looked at putting networking into OpenStack, we came back to the principles of OpenStack: it is a set of services, the services are loosely coupled, and the APIs and the abstractions representing those services are important. What we're trying to do is make those APIs work across a wide range of hardware and software, and each one of these services is driven by a community of experts who really can contribute to it. It's very pleasing to watch this happen, because we see competitors working together on a common project to drive it forward, and the number of companies involved in OpenStack today is a real tribute to the fact that the project technical leads we elect are doing an excellent job of keeping these communities and projects moving forward. The reason we break these things out is to make each concerned with a particular area, such as compute, storage, or identity, so the expertise can really be applied there and that community can come together and make decisions. And as I said, everything was changing below, in networking. Once we had server virtualization, led particularly by VMware and others, all of a sudden we had a virtual switch, because the virtual machines need to connect to the network in some way. So the whole notion of a virtual switch came into being; Cisco in fact has the 1000V, and there's Open vSwitch, which was designed into Linux. All of that has to do with virtualizing networking and moving it into the server as part of the virtualized environment. OpenFlow is a protocol for what's known as separation of the control plane and the data plane.
This allows software to direct how switches operate, how routes are formed, and how traffic moves through a network. Instead of an administrator having to log in and use a CLI, you can do this through software, and you can have the management plane running entirely in software on a number of servers in some cluster. This created a lot of interest in things called network controllers: the thing that manages these devices through software becomes a controller, another layer in the stack. There are some interesting trends here. For Neutron, for example, I think we had over a dozen plugins for different OpenFlow controllers, made by different companies and individuals, so there's a lot of activity around this notion of the controller. One of the things we were discussing in an earlier panel today is whether the controller is a separate thing, or whether Neutron as a service becomes a controller. I'm much more an advocate of pushing as much functionality out as possible; I want Neutron to be really about the developer abstraction, so that we meet that goal. Lastly, virtualized network services. NFV is particularly interesting, because just as we virtualized networking with Neutron, people are now asking: what about a firewall? A load balancer? A VPN? We have appliances that perform those functions; is there a way to virtualize them? Because once you virtualize them and they become software, you can dynamically scale them and deploy them at will. That dramatically increases the speed at which you can deploy and scale these kinds of services themselves. So networking evolved.
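The control plane/data plane split described above can be sketched in a few lines: a dumb switch matches packets against a flow table, and a separate software controller decides the rules and installs them. This is an illustrative toy of the idea, with invented names; it is not the OpenFlow protocol itself.

```python
# Toy sketch of control/data plane separation in the OpenFlow style.
class Switch:
    """Data plane: match destination -> output port, punt misses upward."""

    def __init__(self, controller):
        self.flow_table = {}          # dst address -> out port
        self.controller = controller

    def install_flow(self, dst, port):
        # Called by the controller, i.e. programmed from software.
        self.flow_table[dst] = port

    def forward(self, dst):
        if dst not in self.flow_table:
            # Table miss: ask the control plane what to do.
            self.controller.packet_in(self, dst)
        return self.flow_table[dst]


class Controller:
    """Control plane: software that computes routes and programs switches."""

    def __init__(self, topology):
        self.topology = topology      # dst -> port, learned or configured

    def packet_in(self, switch, dst):
        switch.install_flow(dst, self.topology[dst])


ctrl = Controller({"10.0.0.5": 3, "10.0.0.9": 7})
sw = Switch(ctrl)
assert sw.forward("10.0.0.5") == 3    # first packet triggers a flow install
assert "10.0.0.5" in sw.flow_table    # later packets hit the fast path
```

The key property is that the switch holds no routing logic of its own; all decisions live in the controller, which is exactly what makes the management plane programmable from a cluster of servers.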
In OpenStack, that meant we moved from the very simple model of Nova networking, a flat network where everybody gets a public and a private IP address and you're perhaps isolated by tenant on a VLAN, to Neutron networking, where Neutron is a network service. We want to keep compatibility with Nova networking, and we still have some work to do to achieve that, but we can now essentially ride on top of this tremendous change happening in the lower layers of the networking stack while keeping a consistent and easy-to-understand model for the application developer, because we don't want to require application developers to become network administrators. It started out really simple. The principle was that we want a northbound API that the developer looks at, which can do things such as create network, create subnet, create port, and attach a virtual machine to a network. That was the basic notion. Those become abstractions, particularly since we have a RESTful interface to them. To accommodate different implementations, whether you're on Cisco, Juniper, or anybody else, you want hardware plugins; think of them essentially as device drivers, able to instantiate that into the networking infrastructure. We also want to allow innovation, so with API extensions we can extend this naturally, for things such as quality of service and other capabilities we wanted to have which wouldn't necessarily be required for all of the plugins to support. On a plugin-by-plugin basis we can allow innovation to go forward. That worked for about two years, but there are issues. What do you do in a heterogeneous environment? Not everybody runs a completely uniform environment that's all Cisco or all anybody else.
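The northbound abstractions above (create network, create subnet, create port, attach a VM) can be shown with a minimal in-memory model. This is a toy with invented class names, not the real Neutron API or any OpenStack client; it only illustrates the shape of the developer-facing model that real plugins then map onto hardware.

```python
# Minimal in-memory sketch of the Neutron-style northbound abstractions.
import itertools


class ToyNetworkService:
    def __init__(self):
        self._ids = itertools.count(1)
        self.networks = {}
        self.subnets = {}
        self.ports = {}

    def create_network(self, name):
        nid = next(self._ids)
        self.networks[nid] = {"name": name}
        return nid

    def create_subnet(self, network_id, cidr):
        sid = next(self._ids)
        self.subnets[sid] = {"network": network_id, "cidr": cidr}
        return sid

    def create_port(self, network_id):
        pid = next(self._ids)
        self.ports[pid] = {"network": network_id, "device": None}
        return pid

    def attach(self, port_id, vm_name):
        """Plug a virtual machine into the network through the port."""
        self.ports[port_id]["device"] = vm_name


api = ToyNetworkService()
net = api.create_network("app-private")
api.create_subnet(net, "10.0.0.0/24")
port = api.create_port(net)
api.attach(port, "web-server-1")
assert api.ports[port]["device"] == "web-server-1"
```

In the real system each of these calls is a REST operation, and a plugin (the "device driver") translates it into configuration on whatever switches and servers sit underneath.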
They have lots of different equipment, and we also had to accommodate network services. So we moved to a model called ML2, which is really about separating those things out. We separate the type drivers, whether you're talking about an overlay with VXLAN or GRE, from the mechanism drivers, the different plugins that interface with the lower-level hardware, and we also have plugins for services. This is where you've seen things such as load balancing as a service and firewall as a service, and we're continuing to drive that work forward, because you often want services to chain together: a firewall followed by a load balancer followed by your machines. You need to be able to chain these services together. I'm not going to go into a lot of this, but remember the primary purpose here: to make sure we use the network to provide isolation of tenants. We're always operating in a multi-tenant environment, even if those tenants are different applications; they need to be isolated from each other, unable to sniff each other's traffic. This is how we separate the idea of segmentation from the particular device methodology being used in the ML2 environment. In terms of services, we're leveraging the fact that we have the notion of a router, and then a virtual router, so we can start to talk about how traffic flows and whether we can stitch traffic together to produce this kind of service chaining. Here, for example, is a router, the CSR 1000V from Cisco, where we essentially took the software from some of our large-scale routers and ran it as a virtual machine. This is important for VPN.
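Service chaining, a firewall followed by a load balancer followed by the application, can be sketched as composing functions over traffic in order. This is a deliberately simplified illustration; real chains are built from Neutron service plugins and virtual routers, not Python callables, and all names here are invented.

```python
# Toy sketch of service chaining: virtualized network functions applied
# in order to traffic on its way to the application tier.
def firewall(packet):
    """Drop anything not destined for the web port."""
    return packet if packet["dst_port"] == 80 else None


def make_load_balancer(backends):
    """Round-robin surviving traffic across backend VMs."""
    state = {"i": 0}

    def lb(packet):
        packet = dict(packet, backend=backends[state["i"] % len(backends)])
        state["i"] += 1
        return packet

    return lb


def apply_chain(chain, packet):
    for service in chain:
        if packet is None:        # an earlier service dropped it
            return None
        packet = service(packet)
    return packet


chain = [firewall, make_load_balancer(["vm-1", "vm-2"])]
ok = apply_chain(chain, {"dst_port": 80})
blocked = apply_chain(chain, {"dst_port": 22})
assert ok["backend"] == "vm-1"
assert blocked is None
```

Because each function in the chain is just software, you get the property described above: any of them can be scaled out or re-deployed at will, and the order of the chain is itself configuration.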
So this is a fully functional, essentially hardware-compatible VPN service that can now run as a virtual machine. We're working to make this available through Neutron, so that you can spin up VPN as a service and specify that you want to use one of the CSR 1000Vs instead of an ordinary Linux VM. In many ways, across this variety of things happening in networking, we're building the OpenStack cloud platform so that we can cover all of these different variations and bring forward a lot of the innovations, while making it a place where applications can continue to ride through as the layer below them changes, and changes quite dramatically. One of the changes that will be talked about in several other sessions here addresses this very question: do applications really want to program the network? We created abstractions for a network, a subnet, a port. But does that actually help anybody? It certainly helps in that you no longer have to go talk to your sysadmin to put a cable between this port and that port; now you can do it programmatically. But it's still the same old notion of creating subnets and everything else, and maybe it requires too much. System administrators absolutely do want to program at that level. They want to use software and have access to it, and in fact a lot of system applications are now being built on top of OpenStack using Neutron to do exactly that: VLAN provisioning and things like it. But for user-facing applications, I think we could do better, and that's this whole notion of moving into a policy-driven paradigm for application developers, making it easier to capture the intent of the application without forcing them to become network administrators. I'm sure some of you have seen some of this being developed over the last year.
Take the simplest case, a three-tier application: you've got your web tier, your app server tier, where you maybe use memcached and other things, and then your database tier. You want to load balance in front of your web tier, but you want to make sure your database tier is not directly accessible from the internet. So you create these tiers and these isolation zones, and today we do that using security groups, virtual routers, and everything else. The intent, which is really what we're trying to get at here, is that there's a policy we're trying to enforce. The policy might be that only traffic from the app tier is allowed into the database tier. That can be expressed as a policy. You don't have to say what networks, or necessarily what ports; those can be instantiated below for you by a higher-level API that really talks about policy. This is what we've been trying to develop in something called group-based policy abstractions. There's a blueprint, and a lot of contributors and companies have been contributing here; that's just a small number of them, and I urge you, if you're interested, to get involved in this discussion as we flesh out how the API should express itself. What do we mean by the policy? How do these constructs actually work? The URL is there; this will be posted later. It means that alongside the Neutron APIs, create network, create subnet, and everything else, you now have these other operations: create a policy that says this tier can talk to that tier, or here are the ACLs you want, or you want a firewall inserted in front of this. All can be policy statements.
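The group-based policy idea for the three-tier example can be rendered as a toy: instead of wiring subnets and security groups by hand, you declare which groups of endpoints may talk to which, and the platform is left to instantiate networks and rules underneath. The API names here are invented for illustration and are not the actual group-based policy blueprint.

```python
# Toy sketch of declarative, group-based policy for the three-tier app.
class PolicyEngine:
    def __init__(self):
        self.allowed = set()   # (source group, destination group) pairs

    def allow(self, src_group, dst_group):
        """Declare intent: traffic may flow from src_group to dst_group."""
        self.allowed.add((src_group, dst_group))

    def permits(self, src_group, dst_group):
        return (src_group, dst_group) in self.allowed


# internet -> web -> app -> db, and nothing else.
policy = PolicyEngine()
policy.allow("internet", "web")
policy.allow("web", "app")
policy.allow("app", "db")

assert policy.permits("app", "db")           # the app tier may reach the db
assert not policy.permits("internet", "db")  # the db is not internet-reachable
```

Notice that the policy never mentions networks, subnets, or port numbers; that is exactly the separation being argued for, where the higher-level API captures intent and the lower layers instantiate it.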
The other interesting thing about this is that it allows us to involve the people whose job it is to create policies, the system administrators and network administrators, to create those policies for the application people. They know what their corporate governance rules are, they understand the complexities of their data center, and they would like to be able to hand that to their application developers. Now it's not the application developer saying my database is not going to be routable from the internet; it's the policy that says that's the way a database should be isolated from the network. We can accommodate these different needs, and going even further, as I'm sure those of you who've seen the announcements out of Cisco know, we're expressing that in the actual networking fabric itself, in what's called ACI. We're making the fabric aware of these policies, so that they can be pushed all the way down into the networking infrastructure itself. Come back now to the notion of an application policy controller, which we call APIC, which can sit in between. We can have these policy controllers working with OpenStack, but also directing the policy of things outside of OpenStack, because OpenStack doesn't necessarily live all by itself. It usually exists within a larger data center, connected to other things that are not under the domain of OpenStack. Using this group policy kind of plugin, we can accommodate that. There are a number of talks going on Wednesday and Thursday; I invite you to attend, where we'll in fact be demoing some of these policy constructs with the kind of reference architecture we have going today. I think you'll be able to appreciate that. I did want to leave some time for questions.
So if you're thinking of a question, come up to the mic, and I'll go over a few closing thoughts. In many ways, I think the data center is vanishing. The landscape has really changed a lot, and the traditional notion of the data center has changed with it, particularly as we've made the transition from mainframes with dumb terminals to cloud-based applications. That's a huge, huge transition. We're seeing companies like Netflix that essentially don't have data centers; they run it all on Amazon, so the notion of a data center has become an entirely virtual concept for them. Comcast is moving an application you would think has nothing to do with cloud, Xfinity, a video streaming app, to a cloud, and to OpenStack in particular. They're running it in multiple data centers, tens of data centers, to be closer to their users. So cloud-native apps at scale can now span multiple availability zones, cells, and geographies, however we start to think of them, and I urge us to look there for some of the next things we attack in OpenStack. The reason is that we really want to be able to reach any app, anywhere, on any device. It sounds like the old Java days; we're still at this game of making that true. So the data center is vanishing in some regards, because it has become this multi-tenant platform, and dynamic provisioning and elasticity are the new norms. This is what applications must do today; designing an app that doesn't scale, or only scales so far, is just ludicrous. You need apps that are able to scale, and that are continuously deployed and released, which again raises the question: what is the app? Well, it's an app at a point in time.
And most of the large-scale web applications are doing multiple releases of features every day, every hour, which makes upgrade rather interesting, because there essentially isn't an upgrade. It's a continuous process, until all of a sudden you realize that what you've deployed right now is the upgrade of what you had a month ago; you just did it in these little pieces of new functionality. And DevOps is really what's turning infrastructure into code. So I advise you to really look at your whole process and understand how your developers are becoming part of everything you need to run your data centers. Cisco has also made announcements about Intercloud, and that is our notion, very much related to what might be called cloud federation, of taking a more holistic view and recognizing that applications now span multiple data centers. Those data centers may be from multiple different service providers, and we need to be able to create this kind of intercloud that can connect all of them, so that you can move services around or access services in different clouds at will. We all know that computing and storage are very much moving to the edge. And one of the questions is: well, where is the edge now? The edge seems to be almost everywhere. It's no longer in the core, that's for sure, and the edge is the most scalable part of an intercloud, because it's growing faster than ever. So how will this change the notion we have of traditional networks? I now want the networks that I create in Neutron and everything else to span multiple data centers and multiple providers. I want to create an overlay network that can be instantiated and brought up for a couple of hours a day, spanning multiple different clouds. And so the final question in that theme is: what do we mean by a cloud,
when everything becomes part of this larger intercloud? I think that's what we're trying to do here with OpenStack. You saw Troy talk about it this morning: I think of it as a global, planet-scale cloud, which has been our vision around OpenStack all along. As more and more OpenStack clouds get created, run by different providers, run on-premise as private clouds, run by service providers, what are the kinds of services we want to see developed so that we can start to look at this as a larger intercloud of OpenStack services running across each of these data centers? So with that I'd like to thank you, and see if there might be any questions from the audience; come up to the mic. There's gotta be one question. Let me ask you guys a question, then. We're always interested in understanding who our audience is and who's coming to these conferences. I'm not going to ask how many are here for the first time or anything, but if you could just shout out: what do you hope to get out of this? What are you looking to find out? You're looking to learn, you're part of the design summit? Just shout it out. I'd love to get much more of a feel for why you're here. The design summit? And what are you doing in this session? That's great. I want to be there too. Last question for you. Yes: is the group policy specific to Cisco? Actually, if you look at the blueprint for the group policy APIs, the model we're developing there is not Cisco-specific by any means. That's why in OpenStack we are developing the notion of what those APIs should be for expressing things as group policy. Then particular vendors will have different implementations, and yes, Cisco does have an implementation at the plugin layer, which is APIC. Yep. [Inaudible audience question.] Yeah, the question is, in the discussion around group policy APIs, where do the service providers fit in, or what is the notion of that?
I think that has a lot to do with, at this point, trying to get the foundational abstractions correct. Where I think it might come in is in actually integrating the policies represented by a different service provider. You particularly want that in an enterprise, for example, where we know we want policy involved. Is that getting at what you're talking about? [Audience follow-up.] Oh, I think this is not an enterprise or a service provider thing. I think this is an application developer's conceptualization of what we want to do with communication between the parts of an application. That's why we're really focused on the top layer: what are the right constructs a developer would use to express this kind of policy? The mechanisms, we think, will fall out from that. Okay, if there are no other questions... go ahead. Is OpenStack just one part of Cisco's strategy, among the different competition like CloudStack? The question is: is OpenStack just one part of Cisco's strategy? Absolutely. Cisco is a very large company; we have relationships with a lot of different companies and different open source efforts. For example, we're heavily involved in OpenDaylight at the controller level, and then OpenStack at the open source cloud platform layer. That's our primary focus in terms of the open source contributions that we're making. But we also support VMware, and we support Microsoft, where we have a very large install base of customers that have different things they're trying to accomplish. Okay, I won't keep you all. Have a great conference. Thank you.