I'm sure all of you have had that experience, and it's really thrilling because you don't have to worry about the infrastructure or any of the complexity in the lower levels. The question I wanted to ask, however, is: when we look at system administration tools, management tools, security tools, network system administration, why aren't we able to give the same kind of environment to those people? If you think about it, a lot of the customers I was talking to are talking about software-defined infrastructure. And what they mean by that is really the ability to have their entire infrastructure managed by software, so that they can get out of the manual steps required to go and deploy new systems, deploy new applications, and everything else associated with that. And it's puzzling to me, because it crosses a lot of these traditional boundaries between network system administration, managing your servers, managing your patches, and things like that. And yet we are sort of isolating those folks in an archaic way of developing those applications, because they can't do things such as quickly spin up a bunch of virtual machines to run a job, or scale out something to run, for example, Hadoop or any other kind of data analysis they might want to run at scale, or take advantage of key-value stores for storage. So when you're writing management applications, you still have to be in the old school of stove-piped applications: figuring out what your application is, getting it deployed somewhere, getting it connected into the network, figuring out what kind of database or storage system you're going to use for it. And that, I think, is a real impediment. So the fact that we now have a couple of these trends coming together makes it possible, I think, for us to start doing those kinds of applications on a cloud.
And one of the real breakthroughs in OpenStack, I think, has been the fact that we've constructed it as a set of loosely-coupled services: a service for managing virtual machines, Nova; a service for managing identity, Keystone; services for storage; and now also a service for networking. This may not be coming up. Not yet. Okay. So, when we look at the way we've constructed these services, I particularly wanted to focus on Quantum. That's a project I've been heavily involved in since the beginning. We got about 14 companies together, and we said, you know, no one's really built network-as-a-service yet. It's difficult because, unlike other open source projects where you have some existing application you're modeling, say Word or PowerPoint, which you can follow to make OpenOffice or LibreOffice, there wasn't a clear pattern for us to follow. So we made some very early design decisions that I think have served us very well. One, we got everybody's ideas together, and they varied widely in complexity. I could talk to that probably just as easily. But the challenge we had in networking was that, when networking was built inside of Nova, one of the real difficulties was that the rest of the networking industry isn't particularly standardized in how you do provisioning, how you configure the network. Every vendor seems to have variation; there's a lot of variation, and even though there have been attempts around NETCONF and YANG and other ways to bring that forward, it's been difficult. So when we started with Quantum, actually OpenStack Networking, we really designed it in such a way that it had pluggable interfaces at the bottom, and that the abstractions Quantum dealt with had to do with the developer's view of these networking constructs.
So from the developer's point of view, it's: create a network, perhaps create a port on that network, attach a virtual machine. We began with the simplest model of networking we could envision, knowing that over time we would incrementally add enhancements and slowly work our way up until we could do things such as what we're doing now in Grizzly, and looking forward to in Havana: integrating L3 services and a lot of these other networking constructs. The way we accomplished that was, again, to make the upper levels deal with the user abstractions, creating a representation of a network or a port or a virtual interface, and to let the bottom layers be pluggable by different vendors. That was not just to satisfy different vendors' needs; it was actually to reflect the fact that we've seen networking changing very fundamentally with the introduction of a lot of the concepts around software-defined networking. We were also concerned that we didn't want to stifle innovation. Those lower layers are changing a lot. So how do we make those things available to our users, to the developer on top? In addition to designing it so that we had a pluggable interface at the bottom, we also allowed API extensions to go back up to the developer. Now, developers have to be careful with that, because when you use one vendor's extensions, or something being tried out in the community, it may not be stable and it may not be portable. But it allows us to have that innovation grow up alongside, through API extensions such as quality of service. Or, better expressed in developer terms: I would like to create a network that's optimized for streaming media. Or I'd like to create a network that has a particular bandwidth guarantee, end to end. I would like to create a network that might span multiple data centers. So the extension mechanism is actually a very useful way for us as a community to explore some of these ideas.
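The developer-facing workflow described above (create a network, perhaps create a port on it, attach a virtual machine) maps onto a small set of REST calls. Here is a minimal sketch of what those requests look like against a Quantum/Neutron-style v2.0 API; the helper functions and the example identifiers are illustrative, not part of any official client:

```python
def create_network_request(name, tenant_id):
    """Build the Quantum/Neutron-style request to create a network."""
    return ("POST", "/v2.0/networks",
            {"network": {"name": name, "tenant_id": tenant_id}})

def create_port_request(network_id):
    """Build the request to create a port on an existing network."""
    return ("POST", "/v2.0/ports", {"port": {"network_id": network_id}})

def attach_interface_request(server_id, port_id):
    """Build the compute-side request that attaches the port to a VM."""
    return ("POST", f"/servers/{server_id}/os-interface",
            {"interfaceAttachment": {"port_id": port_id}})

if __name__ == "__main__":
    for req in (create_network_request("web-tier", "tenant-123"),
                create_port_request("net-1"),
                attach_interface_request("vm-9", "port-7")):
        print(*req)
```

The point of the abstraction is exactly this: three small, declarative requests, with everything below them left to the pluggable back ends.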
And once we see that something really is the right idea, we can move it from an extension to being part of the core API. So, software-defined networking. It's a big term, and I think at the end of the day it was originally associated with a lot of the work coming out of Stanford around OpenFlow, to really say that software can be much more involved with what's going on in the network. It's different, in fact, from what we've seen in the virtualization of compute. That was pretty straightforward to do: you could slice up a host and create virtual machines on top of it. The network is a shared resource across a lot of applications, across the entire data center. So we really want to have a programmatic way to get at things such as, with OpenFlow, where the traffic gets routed. But now we're expanding that to say, well, we really need to be programming more than just the traffic flow. We want to be able to do configuration that way. We want to be able to get information out of the network that way. So think about coupling what you can do at this cloud platform layer with a developer's view of the network. And those developers, as you know, we can think of as tenants. One of the things in Quantum that we really wanted to enforce was the notion of isolation of tenants, so that every tenant has their own view of their own virtual data center. What that also provides us, and we're just beginning to think about this now, is that the operator can be viewed as a tenant too, but one with system-level privileges. This operator, an application built that way, could actually use some of the same constructs. But when they say, show me the networks I have available, they can see all of the networks. They can see all of the tenants' connections into those networks.
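The isolation model just described can be made concrete: the same "list networks" call answers differently depending on whether the caller is an ordinary tenant or an operator-as-tenant with system-level privileges. A toy sketch; the in-memory data and the `is_admin` flag are invented for illustration, not Quantum's actual policy code:

```python
# A stand-in for the networks known to the service (illustrative data).
NETWORKS = [
    {"id": "n1", "tenant_id": "alice"},
    {"id": "n2", "tenant_id": "bob"},
    {"id": "n3", "tenant_id": "alice"},
]

def list_networks(caller_tenant, is_admin=False):
    """Tenants see only their own networks; an operator sees them all."""
    if is_admin:
        return list(NETWORKS)
    return [n for n in NETWORKS if n["tenant_id"] == caller_tenant]
```

Called as a tenant, `list_networks("alice")` returns only alice's two networks; called with system privileges, the same abstraction returns every tenant's networks.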
They could also, as we move forward, do things such as run large jobs for metering, for monitoring all of these devices, for putting agents out there; through Quantum we would be able to expose that and allow operators to develop those apps. When we talk about SDN, most recently you've probably seen announcements around Project OpenDaylight. And this is where it's very heartening, actually, to those of us in the OpenStack community to see another open source community being formed around the notion of what these SDN controllers are, how they're going to be used, and how we can work together as an industry to really get them defined in a way that makes them useful. And it's important to realize, I think, that the work going on there plugs beautifully into the notion we have for the OpenStack network service. So now Quantum, or, sorry, the OpenStack network service, can call into these controllers that are built at another layer. A system manager or operator will, again, have a management view through those controllers of a lot of those properties. But if we really want to make it possible for them to write a new class of applications, we should think about what the right abstractions are that the system administrators and management systems can start to project up through Quantum. Now, these will not be exposed to the tenant, but another interesting piece you can think about is that, provided we work correctly with the controller in terms of slicing the network and everything else, perhaps there will be management applications available to each of the tenants, because they're operating a virtual data center. Yep, you tried the old reboot thing, right? And the new Mac is only two days old. Okay, well, that's going in. So where was I? Okay, so OpenDaylight.
For those of you in the community who are also involved in this, I think we want to offer them a lot of the guidance that has made the OpenStack community so successful. As you may know, I'm also vice chairman of our board. And when we get together in those rooms, and we see it in the design summits and everything else, we bring the use cases that we have, but we're really attempting to leave our affiliations behind so that, as an industry, we can come up with the right platform, because we want to see OpenStack running everywhere in the world. There's another thread going on; many of you have probably read about the service providers now talking about network function virtualization. With network function virtualization, we've again built a lot of the networking infrastructure as these kinds of hardened appliances: large-scale firewalls, large-scale load balancers. And a lot of people I'm talking to now are saying it's again a headache for them when they start having all of these different systems and have to integrate them. So we're seeing those become virtualized as well. In fact, a lot of the work going on in Cisco today is taking each one of those things, because if you look at one of those appliances or one of the switches, there may be ASICs, forwarding-plane elements there, and then generally it's a Linux kernel. So we can take a lot of those things, such as IOS, IOS XR, Nexus, virtualize them, and now be able to run them as virtual machines. And now that you have them as virtual machines, how can you start to think about these firewalls differently? Well, you can start to think about firewall-as-a-service. We've already seen load-balancing-as-a-service come out of OpenStack, and we're looking at firewall-as-a-service. This is the direction that I think a lot of our customers want us to go.
And through the ability to do things such as bare-metal deployment, which we're talking about with Nova bare metal, we can talk about also configuring the physical devices themselves. So where we need performance, or we have certain security requirements where we don't necessarily want virtualization, these things can appear as if they're virtual machines but actually run alone on a host. So we've got these different trends. We've got cloud computing. We've got SDN at this other layer, and inside of SDN we have this notion of controllers. And I was going to try to show some examples of these controllers, where we're actually working with some of our service providers who are using OpenStack today, and their intent is not to provide a public cloud. Their intent is actually to use OpenStack to manage their infrastructure, to manage their networking systems that way. They are creating deployments of these things connected right into their core switches and everything else, with links going over to, for example, in our case, our UCS systems that are running these virtualized appliances. Those virtualized appliances can now scale up, they can scale down, they can be moved to other data centers, and you can route the traffic over to them. So I think it's a very interesting direction this community is going, and cloud computing as a model is going to make it easier for these people to do these kinds of operations. Another piece of this puzzle is, in fact, orchestration. We saw examples today around Heat. This is another, I think, spectacular project we're taking on, because in order to bring these systems up, one of the real problems we've had, for example, in virtualization on the compute side, has been VM sprawl.
All of these VMs being set up; we saw zombies actually arising. So you want to be able to monitor these applications when they come up, and many of these applications, which are complex, require orchestration. There's an order in which you have to bring these things up. You saw Mark's demonstration with Juju as an orchestration layer, where you can say this has to be brought up, then this other thing has to be brought up, a connection has to be made, and now you can go on to the next step. When you tie that kind of orchestration into things such as service assurance, you want to be able to insert the monitoring and metering capabilities there so you complete the cycle. One of the analogies I personally have to use for this notion of orchestration, just as a small side story here: my father was a big band leader in the 1930s, and it was a swing band, Tommy Tucker's Orchestra. So I grew up listening to the sweet sound, if you think back to the 1930s kind of songs. Here you have this orchestra that was making one sound. It was an integrated sound, and they could go up-tempo, they could go down-tempo, and you had this smooth, sweet sound, which is what they generally refer to the big band age as being able to produce. That worked because there was a conductor up there. Yes, they had a score they were following, but they were also following each other. They were listening, they were adapting, and they were changing to keep that synchrony going. And so when we're looking at deploying OpenStack, and deploying applications on top of OpenStack, that notion of orchestration becomes really important. And how do you orchestrate? Well, we're fortunate that we have things like Puppet and Chef, Juju, and now things like Heat, and Crowbar around installation, that allow us to do that orchestration of all of these components as we bring them up.
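The ordering problem behind "this has to be brought up, then this other thing" is a dependency graph, and resolving it is a topological sort, which is essentially what an orchestrator like Heat or Juju computes from a template's declared dependencies. A sketch using Python's standard library, with an invented three-tier application rather than any real template:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each service maps to the set of services that must be up before it
# can start (an invented example topology, not a real Heat template).
deps = {
    "database": set(),
    "app-server": {"database"},
    "load-balancer": {"app-server"},
    "monitoring-agent": {"app-server", "load-balancer"},
}

def boot_order(dependencies):
    """Return a start-up order that respects every declared dependency."""
    return list(TopologicalSorter(dependencies).static_order())

print(boot_order(deps))  # "database" always comes first
```

The same graph, walked in reverse, gives a safe decommissioning order, which is why declaring dependencies once pays off across the whole lifecycle.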
Because this is what I think our customers are looking for: this kind of software-defined infrastructure. They would like to be able to view their entire infrastructure as being driven by software, meaning that they can use orchestration tools. They can get out of manually provisioning things. They can count on software monitoring systems and fixing things without a 3 a.m. call back to your system administrator to come and intervene. So I think that's the world we're moving into. One of the bottom lines of this talk is to really urge everyone, all of us, to be looking at these orchestration systems we're beginning to bring together, such as Heat. It really follows Amazon's CloudFormation, where we want to now include things such as auto-scaling, be able to include monitoring in that, be able to include information that's being fed back into the applications. And again, this is made possible because these things are coming together at this time. One of the other things about programmable infrastructure is that you not only can project down to provision infrastructure, you want to get information back up. And that, I think, will be tremendously interesting, particularly as we move into this world where security becomes paramount. We have to really watch these systems. We have to be able to characterize normative behavior. If we start seeing a particular port being hit more frequently than we expected, there may be something going on. Consider the capability of designing, for example, systems that would protect you against DDoS. One of the real problems there is: how much do you provision for that? That's an event that may happen only once a week, once a month. And you have to over-provision for that one event. In cloud computing, we know that what we prefer to do is actually have available resources that you can elastically scale.
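The elastic alternative to over-provisioning, keep a small always-on baseline and grow the pool of traffic-scrubbing VMs only while an attack is underway, reduces to a simple sizing rule. A toy sketch; the per-VM capacity and baseline figures are invented for illustration:

```python
import math

def scrubber_count(traffic_mbps, per_vm_mbps=500, baseline=1):
    """Number of traffic-scrubbing VMs to run for the observed load.

    Keep a small always-on baseline, scale out in whole VMs
    (rounding up) as suspect traffic grows, and fall back to the
    baseline when the attack subsides.
    """
    return max(baseline, math.ceil(traffic_mbps / per_vm_mbps))
```

During a lull this returns just the baseline VM; during a 1.2 Gbps flood, with 500 Mbps handled per VM, it asks for three scrubbers, and releases them again when traffic drops, which is exactly the elasticity a static appliance cannot offer.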
So that if you start to see an attack happening, particularly on one of the switches or one of the ports, you can identify the agents involved and start feeding the traffic over. My picture is kind of a sponge of virtual machines that are now going to be looking at that traffic, since we can distribute that load across your entire data center. You can start to do much deeper analysis and filter, making sure the good traffic gets pushed back and can be processed. And the bad traffic might go over into a big Hadoop cluster where you're doing very deep analysis and characterization of it, so that you can actually create rules and everything else that would protect you in the future. And I think that can actually be done with almost no human intervention. We need to start designing systems that have self-healing capabilities, creating this entire feedback loop between detection, monitoring, provisioning, and feeding that back through analysis to make changes. So that's the topic, basically what I was going to be covering in slides. But I'd actually love to get a little bit of feedback from you all, because I bet many of you are trying to work out the same things. We've mostly separated out system administration and management simply because of the requirement that those are privileged applications. And if we can find a way to give privileged applications the same kind of environment as normal applications, I think we can really move forward to where we need to be with these kinds of systems. So does anybody have any questions, or is anybody thinking about this as well? We're seeing things such as, in OpenStack, OpenStack-on-OpenStack, the project called TripleO. And so between those folks and others, we're getting together and saying, okay, that's the way we can actually use Nova.
Maybe it's hard to tell from that picture, but it can actually be the beginning nucleus, the seed, that can do the entire deployment. Because why not start using our own systems to create these more complex things, such as the deployment of an OpenStack cloud? So it's becoming a little bit recursive, a little bit turtles all the way up and down. But what we have to aim for, instead of worrying so much about how you do the initial installation, is continuous operation. That's the point we should be optimizing for. What is it like to run these systems every day, to manage them, to deal with the failures, to deal with the constant provisioning of adding new servers or services, or decommissioning them as well? I think we're seeing a lot of progress there; I was very happy to see Mark Shuttleworth showing how you can actually upgrade the kernel itself without taking down the system. So, any questions, thoughts? Come on, this can't be so obvious. Yes. This is Jun from Blost. Thanks for your great talk, actually. It was amazing without even showing any slides. I will put this up on SlideShare, and you can see how faithful I was to them. Well, it turns out to be much better not showing any slides. Okay, I think compared to other cloud platforms, including CloudStack and all of those, I think OpenStack is probably a little bit behind in terms of abstracting the network layer. In terms of load balancers, we just discussed a lot, with the L2 agent and L3 agent, how we can achieve that higher-level functionality. But still, I think OpenStack is a little behind there. And it's not really truly API-driven design, as you emphasize, not yet, I think. So I think we probably need to get some more feedback or help from a real vendor like you, Cisco, and we'd like to know how much you are willing to help us with that, not only for your specific hardware.
Yeah, I think it's a great point. Whenever you have a new way of thinking, you have to say, well, how are you going to implement that? And the model that we've really chosen has been through APIs, because that's what you need for services. With APIs, we're lucky that we chose REST as a model, because in my mind REST is all about the representation. You have a model, a representation of a thing you want, be it a network or a load balancer or a router or a firewall, and you want to be able to instantiate that with the necessary properties, or the properties that you care about. There may be 200 properties behind that which you don't care about; let them default to the right thing. And that should be a read-write operation, so that when I look at how we can start to extend Quantum, or have other APIs, things such as topology: show me my networks. We have a view where a tenant can say, here are the networks I've created and here are my routers, and we're even doing that with Horizon now. Then you can make one tenant, which is your super tenant, and they can start to look at everything else. At a certain level, what I'm afraid we want to avoid is over-complicating OpenStack Networking. I firmly believe OpenStack Networking's primary concern should be those abstractions, meaning the APIs that you need, and that we now have lower-level systems that provide the real implementation, the translation into the physical resources. But you're right; that's where I would encourage us to really start working collectively on new blueprints about how to approach it. What are the new abstractions we need to accomplish this kind of vision? Obviously, I think we should be able to look at something, a router, for example. You should be able to see the ports on that router.
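The representation-centric view here, instantiate a resource from the few properties you care about, let the other 200 default, and treat the result as a read-write document, can be shown as a simple merge cycle. The default set below is invented for illustration:

```python
# Invented defaults standing in for the many properties a caller ignores.
DEFAULTS = {"admin_state_up": True, "shared": False, "mtu": 1500}

def create_resource(user_props):
    """Instantiate a representation: caller-supplied properties win,
    everything else falls back to a sensible default."""
    return {**DEFAULTS, **user_props}

def update_resource(current, changes):
    """Read-modify-write: apply edits to the representation before
    PUTting it back, leaving untouched properties as they were."""
    return {**current, **changes}

net = create_resource({"name": "streaming", "mtu": 9000})
net = update_resource(net, {"shared": True})
```

After the two calls, the representation carries the caller's name and MTU, the edited `shared` flag, and the untouched defaults, which is the read-write behavior a REST resource should have.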
You should be able to see the settings on that router that would make sense from a system administrator's point of view. That may be very different from what a developer wants to see. But we should be able to accomplish that goal using REST and these abstractions. And we are more than happy to work with you. We've got several customers of ours whose interest is only in virtualizing, like I say, their networking infrastructure, to be able to create this abstraction. So almost our entire working model within Cisco and OpenStack is that we simultaneously think through these things with our customers, make experiments, and push those changes upstream. Because at the end of the day, we want them to come back down into everybody's distributions so we all get the benefit. Next. So I was actually at the OpenDaylight launch yesterday. How was it? It was wonderful. They said nice things about you. Did it feel familiar? Yeah. So, the question I have is, in terms of the management applications that you're bringing up, when you get into virtual networking, it's sort of a virtualized data center. Where do you see it? Do you see users and applications sitting on top of OpenStack reaching down, or are they on top of OpenDaylight reaching sideways and up? I think we always solve these problems by layering of systems, layers of abstraction. And maybe we have to think of these things not so much as strict layers as networks of systems that can call each other. So, for example, I think there will certainly be applications that are written directly to OpenDaylight controllers. Absolutely.
What we want to be able to do now, when there's also an interface into those controllers from OpenStack, is make sure that we're completing the cycle, so that a management application written for OpenDaylight sees something, a UUID, that can be recognized as pertaining to a VRF that is owned by a tenant in OpenStack. And what I want to emphasize is that, even in network system administration, there's more to designing those applications. You have requirements for storage; you're going to store persistent state. You have requirements for scaling out virtual machines. And we can start to use the other services within OpenStack to do that. So whether these things are strictly on top, or they're basically companion services alongside, they sit at that layer that interfaces with the applications, and then they're talking perhaps to management systems, to OpenDaylight controllers, and things like that. And the fact that they're likely going to be completely different technology sets, they're likely going to go with Java, it seems; so be it. It's just that REST is the common API that the management applications sit on top of. Yeah, and I'm not even a REST bigot. What I'm looking for, and what I care most about, is actually what those APIs, those abstractions, are. And that's where I think it's very important that we as an industry get those things right. Traditionally that's been done by standards bodies, looking at the data models associated with things, trying to standardize. And that's all very good work, and we should benefit from some of it, because it is the representations that matter. And that then allows us, as vendors, to expose different implementations of those representations. Thank you. Hi, you mentioned quality of service. Do you think technologies like data center bridging have a role to play in this environment? Oh, like data center, what, sorry?
Data center bridging, like... Spanning multiple data centers? Is that... So the technologies developed in IEEE, like QCN, or bandwidth allocation, priority flow control. Do you think they have a role to play in this environment? Yeah, absolutely. And actually, on that note, in my own activities within Cisco, we're sort of split between looking at OpenStack for the data center, one data center, and OpenStack for the wider concerns of a WAN. And that's where we have to span multiple data centers and draw upon a lot of that work, in how you implement these things to have interoperability. First off, thank you for a good talk. And I think we can ask those people to stop fixing things, because the session's over. I just want to know, is it my brand-new Mac? I can't figure out what's going on with this thing. I know. So, 72 hours ago, I didn't know anything about OpenStack, and I purposely didn't start reading the website. Because I just... Wanted to experience it live. Right, exactly. I think I know a little bit now. But here's a little feedback and a question as well. As in the recent keynote, where the speaker said: initially it was developers, then it was community, and now it is production, enterprises, real deployments by PayPal. There are lots of case studies, which is great news, right? Awesome, very encouraging. Now people can do it. The second change I would like to see is to stop following AWS, because there are 26 services in AWS, and if we always start with the definition of a service by saying, oh, this is like that one, I think you will constrain the innovation that the community can actually bring. For example, Heat. I think Heat should be a vertical layer that cuts across everything and does the things it needs to do, whether that's orchestration or scaling or not.
But if we look at Heat as CloudFormation, we'll very quickly get siloed into the AWS garden. So that's my feedback and question. I think that's a great point. It is a very active topic among us in the community today: to what extent are we attempting to be, if not fully AWS-compatible, at least EC2- and S3-compatible? And I would suggest we should not ignore that market. Yeah, I think we purposely made a decision that we have an API, which is the OpenStack API, and we also have compatibility with some of the basic models of Amazon Web Services. Because I think that's actually where our real market is, for a lot of the applications that have been built on AWS today. I see that both with people who bring up a private cloud in the enterprise, who say they want to take back the applications that different groups within the company have put out on EC2, and we would like to make that transition back as easy as possible. Let me clarify what I said. AWS should fit into the provider model for OpenStack, under the hood, where someone can bring their existing AWS investment and get more flexibility. And that, I think, is an advantage. And I would personally like to see some innovation in that direction. And then the second point: within the industry, there is a very clear understanding of what your data path APIs are, what your control plane provisioning APIs are, and what your provider-level APIs are. I haven't seen that clarity within 72 hours. I could be wrong. Well, I think if you stick around and dig a little deeper into some of these things, you'll see that's exactly where we're going. We get to do it with second-mover advantage. We get to see how Amazon has developed, and I think they've actually given us a very good roadmap for a number of different services that are important to people who are writing applications.
But we're in charge of our own destiny. We decided to make the OpenStack network service, and that was a clear break in terms of the model we thought would be successful for us, because we're building our own cloud. And so we get to take it into places that might deviate from Amazon. I see people starting to leave, so one more question. Hi Lew, Pranthadas from AppDynamics, and a former Cisco AON guy. Ah, you see, they come back. The great precursor to SDN. Yes. The question I have is: how can the network vendors that are participating in OpenStack bring in some of the goodness of network architectures, like separation of data... So let me try to answer quickly, because I want to give people a chance to get to the next session. I think there's a real responsibility on the network vendors, and that's why we're encouraging every network vendor to get involved, so that we really can bring a lot of that value. And what we have to keep in mind is that our users are now application providers. Traditional application providers never talked directly to the networking; that was something for the people who set it up and put it into deployment. So we have to make sure we speak a new language as we create these new platforms that are actually designed for the application to take advantage of all of the value we have in the networking space. Okay, thank you very much, guys.