OK, welcome everyone. I'm going to get started because I'm between you and lunch. My name is Lew Tucker. I'm Vice President and CTO of Cloud Computing at Cisco Systems. Today I really want to talk about the future of where we're taking OpenStack, and about becoming much more application-centric moving forward. We've seen things such as Heat and other projects being discussed around orchestration, so I wanted to provide context for how this fits into what we're really trying to accomplish with OpenStack at the end of the day, which is that it's really all about the apps. As we saw in this morning's keynote, we're building this platform to make it easier to deploy applications, particularly at large scale, serving consumers who are on mobile devices, everything over the cloud. So the essential question is: what do we need to do as a community to make OpenStack easier and easier for application developers to get on with their business, which is developing and deploying the application itself? I think it's clear now that cloud computing is winning because it is absolutely the easiest way to develop and deploy applications. If we talk about the attributes of cloud computing, self-service, elastic scaling, abstracting away the infrastructure, making it an API-driven environment, this can dramatically lower the complexity of building large-scale applications in particular. I myself am just not interested in an app that runs on a single VM. My background is actually in high-performance computing, large-scale distributed computing, and large-scale website design, and the interesting question is: how do you make that kind of application, delivered as a service, easier? That's where cloud computing is really winning. And we're also now looking for portability across all of these different platforms.
So as we continue to grow the number of OpenStack deployments out there, that's a larger and larger platform base to attract application developers to build on top of. And of course, last on my list, it is the most cost-effective way of deploying things on infrastructure. But time and again I hear from customers that it's all about the agility. It's all about speeding time to market for deploying their application or their service. As we all know, in the old days, pre-cloud and pre-virtualization, applications were deployed directly on top of physical infrastructure. In traditional IT organizations this is still the way a lot of applications are done, and it's very time-consuming and very error-prone. You develop something in a dev environment, you hand it over to your deployment team, and they have to figure out how it fits in and how it has to change to run in production. There's a lot that can go wrong in that whole process. Many of us who work on a cloud actually develop directly on the cloud, on the production infrastructure, and then open that application up, so we avoid a lot of the steps that are traditional with that kind of infrastructure. That kind of infrastructure also requires an awful lot of knowledge and coordination across different groups: the networking engineers, the people responsible for the storage architecture, and everyone else. Getting all of that lined up is what takes months and months to deploy applications, and that just goes away on a cloud computing platform. So I think we're seeing that kind of change happen, and it happens because we've added this new layer. In many ways, I view the most important part of cloud computing as not simply deploying things on a cloud remote from your data center; we've been outsourcing IT operations for quite a while.
That is a business model for how we deploy infrastructure. But we've created this new platform, and a platform is that thing upon which applications run. That platform, we're now seeing, has created all of the advantages we see in developing applications. The other thing is that OpenStack, as you heard in this morning's talks, has grown in terms of the number of services. You may have heard me talk in the past about the original layering: infrastructure as a service, platform as a service, software as a service. We forget that the key element of all of this is that these platforms are made up of a set of services. That allows us to incrementally add new services to the platform, making it easier and easier for applications. The applications see the OpenStack cloud platform, but they see it as a set of services, which allows us to continue to innovate with more and more services. We originally started, as I mentioned this morning, with Nova and Swift. Now we have the Neutron networking service, an image service, an identity service, and moving forward we're getting into metering, monitoring, and orchestration. We've continued to grow the number of services, and I think we're going to see even more over time, as Mark mentioned this morning. These are a few of the new ones, so we have a whole new set of names and services to learn about: TripleO as a better way to install OpenStack, which works by bringing up a very small kernel of OpenStack so that you can start to deploy onto bare metal. Bare metal is also addressed by a project called Ironic, which allows you to provision physical servers just as you would a virtualized instance. And Savanna for Hadoop.
All of these things are coming out, and it's important as application developers that you take a look at them, because each provides a very specific kind of service that may make your job as an application developer easier over time. This is one of the very basic design principles we've had in OpenStack: a set of loosely coupled services that work together to provide this kind of platform for application development. We've used this same platform in private cloud settings and also with cloud service providers offering cloud services out to their customers. For each of these services, and I really urge you to get involved, you can apply your expertise in one of these areas to that specific service and make real contributions. And it's not just code that's being contributed. One of the things I really want to encourage people to do: where we really slow down in developing these services is in the number of people willing to review code and to be involved in that process. Often people ask me, how do I really get started with OpenStack? My answer is always the same. First, start reviewing other people's code. You will learn the code, you will add real value, and you will get to know the other people. Then when you make a submission, guess what? You're submitting code to people you've already helped out, because you've reviewed their code. So I think getting started in OpenStack really begins with the review process and getting involved with that. So, this poor guy, trying to figure out cloud computing. I'm sure you've had these conversations with some of your customers. I want to look at the cloud platform in an even broader sense. We all know clouds are great for developing web apps; as I mentioned, it's now clearly demonstrable that this is the easiest way to develop and deploy applications.
What about system admins? Why should they be stuck in that old world just because they actually have to touch physical hardware, or because they hold administrative rights and privileges? Why shouldn't they start to use the same platform? Some of these are becoming additional use cases we're seeing, aimed at the system admin. When you write new system administration applications in a DevOps kind of model, guess what? You can use a key-value store for storage. You don't have to spin up a database for your application; you can perhaps use database-as-a-service for where you persist information. So I urge people to start looking at this from more than the point of view of the web app, and increasingly as part of your DevOps environment, so that you can minimize the work it takes to actually operate infrastructure. And there's a lot of value here, because you can look at the same services and say: I have this key-value store and I want to use it as part of my application, and that can grow into a very, very large application. If you're collecting an awful lot of information, say for analytics, you can start to incorporate much more intelligence into the applications you run as a system administrator, just as web apps did when they collected massive amounts of information from their customer base to understand customer behavior. This would allow you to understand the behavior of your network and your infrastructure by doing that processing, and we're beginning to see that as people analyze very, very large sets of log files, running it as an administrative application for the environment instead of something that necessarily faces consumers.
So there's a broad range of use cases, and being at Cisco, it's interesting to see the different kinds of customers we're engaged with now. One, as was shown at the last summit, is Comcast. They're not offering a cloud service; what they're offering is Xfinity, an app that delivers their primary business service out to consumers in terms of streaming media and everything else. We're seeing a lot of that. We're obviously seeing large-scale e-commerce uses of this, and even in mobile networks we're beginning to see some of this work. So it comes back again: the cloud becomes this layer in the data center, and it's this layer that allows all these different use cases to be incorporated, using OpenStack as the underlying infrastructure. Some people are starting to talk about OpenStack as the operating system for the data center. Those of you who remember years ago, when app servers and middleware came into the stack and people started using J2EE across the entire data center, I view this in much the same way. This is now an important new layer across the entire data center, which is now that thing upon which applications are built. And when we look at it that way, we can start to go down and connect this into the physical infrastructure and have a view of that physical infrastructure from the application space itself. Here is one of the great quotes that I love. As you know, there were a couple of issues with banks in the United States, and we recognized that some banks were just too big to fail. Now we actually have a Citibank report describing OpenStack itself, as a community-driven effort, as something too big to fail.
That's no guarantee; it will still take work by all of us. But I think this is where the community we're developing around OpenStack is so important, and in the number of different players now involved we have almost the entire IT industry in OpenStack today. It's getting its initial traction, I think, in service providers, but we're seeing an awful lot in financial services as well. In the enterprise this is also starting to take off, as people recognize that the best way to provide their infrastructure services is to have a cloud platform as part of their data center. And it's certainly a viable alternative to things such as Amazon's AWS, which has been a real leader in showing the kinds of applications and the benefits of working on top of a cloud. I want to show some numbers. Cisco collects a lot of information every year and produces an Internet traffic report, and it shows the growth we're seeing in data center traffic: roughly tripling between 2012 and 2017, to about 7.7 zettabytes of traffic traveling within the data center, growing at a CAGR of about 25% per year. This is actually scaring the hell out of a lot of people running data centers, because this is the kind of data center growth they need to be able to accommodate. It's also interesting to look at the workload shift between running workloads on traditional infrastructure versus cloud infrastructure. It shows we're about to hit the break point, probably around 2014 or 2015, after which it will really shift, and cloud will be the dominant way of deploying applications in the data center. The other thing we look at is traffic in and out of the data center versus traffic within the data center.
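To make those growth numbers concrete, here is a small back-of-the-envelope check. The talk gives 7.7 zettabytes by 2017 and roughly 25% annual growth; the 2012 baseline of about 2.6 zettabytes is an assumption I'm adding for illustration, consistent with "tripling between 2012 and 2017."

```python
# Sketch: does a ~25% CAGR over five years really triple the traffic?
# The 2.6 ZB 2012 baseline is an assumed figure for illustration.

def project(start, cagr, years):
    """Compound a starting value by a constant annual growth rate."""
    return start * (1 + cagr) ** years

projected_2017 = project(2.6, 0.25, 5)      # zettabytes in 2017
growth_factor = projected_2017 / 2.6        # how many times 2012 traffic
```

A 25% CAGR compounds to just over a 3x increase in five years, which is where the "tripling" language comes from.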
When we look at this, if you take it from the bottom here, data center traffic serving a user over the Internet was only about 17% of the overall traffic. Most of the traffic is within the data center, which is kind of counterintuitive. We're seeing the explosive growth of the Internet and all of these mobile devices, but if you think about it for a minute, every time there's a request to refresh a page on a mobile device, you're probably talking to maybe 20 or 30 servers in the data center itself. They're all communicating, aggregating information, accessing databases and everything else to gather that information up and then shoot what is basically a screen back out to the user. So we're seeing tremendous growth and pressure on the data center from this change in traffic, and it's affecting the way people build out data centers as well. Traditionally, from a networking perspective, data centers have been built with an architecture that is really optimized for north-south traffic: your compute and storage, your aggregation layers, the services that attach to them, and then your edge routers and everything else. That's the way people have thought about data centers for quite a while. What most of the large data centers are being built with now is a spine-leaf architecture. In traditional HPC terms, this means much, much higher cross-sectional bandwidth within the data center, which allows much more traffic within the data center, matching the numbers I was talking about before. And there's a new edge. Now that we're running multiple workloads, multiple virtual machines, on each host, the new edge is actually in virtualized switches running inside of the host.
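The fan-out effect described above can be modeled with some simple arithmetic. The fan-out of 25 servers and the byte counts here are assumed figures for illustration only; the point is that even modest internal chatter per external request makes east-west traffic dominate.

```python
# Sketch: one user request fans out to many internal servers, so the
# share of traffic that stays inside the data center is large.
# All specific numbers below are illustrative assumptions.

def east_west_share(fanout, internal_bytes, external_bytes):
    """Fraction of total bytes that never leave the data center."""
    internal = fanout * internal_bytes      # server-to-server traffic
    return internal / (internal + external_bytes)

# 25 internal hops of 4 KB each, versus a 20 KB response to the user.
share = east_west_share(fanout=25, internal_bytes=4_000, external_bytes=20_000)
```

With these assumed numbers about five-sixths of the bytes stay inside the data center, in the same ballpark as the "only 17% external" split the talk cites.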
So a lot of what we've been doing inside of OpenStack and elsewhere is recognizing that that's where a lot of the activity is, but we need to start thinking about these spine-leaf architectures supporting the kind of intra-data-center growth and traffic we've been seeing. In addition, we've been talking about overlay and underlay networks. The underlay network is the actual transport, bits flowing over wires; but particularly in a multi-tenant environment, where each tenant wants their own view of a logical data center, they're creating overlays and other means of giving themselves a logical set of links and paths that their traffic traverses, isolated from everybody else's. So there are additional ramifications here, and there's a lot changing in the data center as we continue to move forward with OpenStack itself. In the telco area, it's interesting: in the last couple of years a real movement has started around network function virtualization. In the old way, you would bring in hardened devices and appliances to handle very important network functions for you, such as firewalls, load balancers, CDNs, and all of these kinds of things, and many of the large telcos are saying: no more. If they have to grow at that 25% per year, this is too slow. Instead, they're looking to virtualize a lot of that networking functionality and deploy it as virtual machines spread throughout the data center, so they can dynamically and elastically scale those services. So this is another example of the kind of change happening within the industry as we build the cloud platform on top of it. One of the themes of this talk is how OpenStack is now working with a lot of these changes. We've all heard of software-defined networking, I'm sure.
On network virtualization, Martin Casado gave a great talk yesterday about how the network is becoming virtualized. So these two big trends are happening at the same time: cloud computing, which is designed to make it easier for applications to create these virtualized environments, and fundamental changes redefining how infrastructure works, becoming much more driven by software and making software and orchestration part of the basic infrastructure. These two things come together in OpenStack. In fact, if you look at OpenStack and the Neutron networking project, I think there are on the order of 12 different plugins now, and I think eight of them are OpenFlow controllers. So we really are able to link these two worlds together through what we're doing in OpenStack itself. And the reason for that is that we want these layers to talk to each other. We really want to create a conversation between the kinds of capabilities we can now get from the underlying infrastructure itself, as it becomes much more driven by software, with controllers and APIs and everything else, and what we're doing at the application and platform level. There are tremendous advantages if we can do this, because the infrastructure can then respond much more intelligently than it has in the past, when it was stuck detecting these kinds of super flows after the fact and then trying to adapt, guessing what the application workloads are. With things like OpenStack, we can start to have that conversation.
So a lot of the work we've been doing in the Neutron networking project has been giving an application a way to explicitly say: I'm going to be creating a network, and I would like it optimized for a streaming media application, because that's the kind of application this is. They can make those assertions, and now we can have both of these layers working together to deliver that and get the real performance you need. I wanted to look backwards a bit, because not everybody was part of the community in 2011. This was the question I was facing: we had a compute service and we had a storage service. Inside of the compute service, Nova, there were basically hard-wired networking constructs. We recognized, with all of the changes going on in infrastructure, that we needed to refactor that out into another service. And that meant thinking about what we wanted that service to be. It's interesting: Dan and I, together with about 14 other companies, really approached this question of what we wanted. No one had built networking-as-a-service before, so we really had to innovate, and we had to go back to basics. The constructs ran along these lines: the basis of a compute service is essentially a virtual machine. So how do you start a virtual machine? How do you get the image you want loaded on the virtual machine? How can you then access storage? The chief abstraction there is a block of storage, or some blob in a key-value store, that you want to be able to access. So what are the key abstractions in networking that will make sense to a developer, a user using APIs, without them having to understand networking protocols and spanning trees and everything else that's going on in the infrastructure?
What are the things about networking that matter to a developer? So we came up with a project, at that time called Quantum, now called Neutron, to address these things. You'll keep hearing us talk about abstractions, because as developers that's what we really deal with. If you're an object-oriented programmer, in the typical case you're creating objects, for a wallet for example, so you have methods for how you operate on that wallet. And that can very much be a black box, because you now have an abstraction for the functionality: some people can write the back ends, and other people can use those methods to access it. The same kind of thing is going on here. The basic thing we were struggling with was: let's start with the most basic element, because we knew we had a long way to go. We could see that over a couple of years things would change a lot, and then we could add functionality, but let's get the basic functionality together first. The chief abstraction was this virtual, isolated, tenant-created network. A tenant can create a network, attach virtual machines to it, delete the virtual machines, and delete the network, and it all goes away. This is all done without any operator intervention. This is what really supports the speed of application development and deployment, because now the application developer themselves has the power to do that. They don't have to file a trouble ticket saying, please create this VLAN for my application; they can do it directly through the Neutron service. We also anticipated that down the road we would get into things such as IP address management, and into how you chain services together, so we wanted to start simple and, release after release, incrementally bring these additional constructs into the project.
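The lifecycle described above, create a network, attach VMs, tear it all down with no operator in the loop, can be sketched as a tiny in-memory model. This is an illustrative sketch of the abstraction, not the real Neutron API; all class and method names are invented for this example.

```python
# Minimal model of the tenant-created virtual network abstraction:
# the tenant owns the whole lifecycle, with no operator intervention.

class TenantNetwork:
    def __init__(self, name):
        self.name = name
        self.ports = {}                      # vm name -> port id

    def attach(self, vm_name):
        """Attach a VM by allocating it a port on this network."""
        port_id = f"port-{len(self.ports)}"
        self.ports[vm_name] = port_id
        return port_id

    def detach(self, vm_name):
        self.ports.pop(vm_name)

class Tenant:
    def __init__(self):
        self.networks = {}

    def create_network(self, name):
        net = TenantNetwork(name)
        self.networks[name] = net
        return net

    def delete_network(self, name):
        # A network can only be deleted once its VMs are detached.
        assert not self.networks[name].ports, "detach VMs first"
        del self.networks[name]

# The full lifecycle, driven entirely by the tenant:
tenant = Tenant()
net = tenant.create_network("app-tier")
net.attach("web-1")
net.attach("web-2")
net.detach("web-1")
net.detach("web-2")
tenant.delete_network("app-tier")
```

The point of the abstraction is that nothing here required filing a ticket: the same self-service loop is what Neutron exposes through its APIs.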
We were fortunate in that we had the foresight to realize this really depended on giving the service the right architecture. Essentially, we divided the problem into two pieces. The top piece was the APIs exposed to the end user, making them simple to consume: create a network, create a subnet, attach a virtual machine to the network, those kinds of things. But we also recognized that networking doesn't have a common interface below that; different vendors have different ways of configuring networking. So we created a plugin architecture. Our first version worked pretty directly: you would choose what kind of plugin you wanted. Some of those plugins were purely virtualized, OVS or GRE; others, for example the Cisco plugin, actually do VLAN allocation on Nexus 7Ks. So there are ways we can isolate that complexity from the end user; it doesn't matter to them, and from the standpoint of passing compatibility tests it doesn't matter, because those plugins have to meet those APIs. We also recognized we didn't want that to be a straitjacket on innovation. So we created the ability to have extensions associated with each plugin, so that some people could have an extension, say, around quality of service. You could say: create a network, dash, optimized for streaming. Create a network that might use DSCP markings for identification of traffic. Those can become vendor-plugin-specific extensions that coexist alongside all of the others. And that allows us, over time, to see that everybody has implemented a common extension in their plugins; then it's time to talk about making it part of the core, so that all the plugins support that API operation. And at the same time, as I mentioned, we've seen a growth of different plugins.
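The two-layer split described above, a common API contract plus optional vendor extensions, can be sketched like this. The class and method names are illustrative, not the actual Neutron plugin interface, and the "qos" extension is a made-up stand-in for the quality-of-service example in the talk.

```python
from abc import ABC, abstractmethod

# Sketch: every plugin must satisfy the common API; vendor plugins may
# additionally accept extensions such as a QoS hint.

class NetworkPlugin(ABC):
    @abstractmethod
    def create_network(self, name):
        """The common contract every backend must implement."""

class OVSPlugin(NetworkPlugin):
    def create_network(self, name):
        return {"name": name, "backend": "ovs"}

class VendorPlugin(NetworkPlugin):
    def create_network(self, name, qos=None):
        net = {"name": name, "backend": "vendor"}
        if qos:                      # vendor-specific extension
            net["qos"] = qos
        return net

# The API layer relies only on the common contract...
for plugin in (OVSPlugin(), VendorPlugin()):
    assert plugin.create_network("tenant-net")["name"] == "tenant-net"

# ...while extension-aware callers can pass the extra hint:
streaming = VendorPlugin().create_network("media", qos="streaming")
```

This mirrors how an extension can live alongside the core API until enough plugins support it to promote it into the core.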
You've probably also heard about OpenDaylight as one of the SDN controllers, with a number of companies getting involved. Well, there are now plugins for that as well, and a lot of the design sessions are about how we integrate these two different layers; this plugin architecture is the ideal way to do that. We most recently recognized that this was fine for plugging in one particular technology, but real environments are heterogeneous. How do you plug in multiple ones? So we now have an ML2 version of this architecture, which allows us to have multiple plugins. We're also recognizing that network services need to be inserted. So you're chaining services: I want a firewall, then followed by a load balancer, et cetera. So we have extensions, and the architecture now supports the kinds of service models we want to have inside this as well. I think we were lucky in many ways that we came up with an architecture that allows us to continue to expand the capabilities while maintaining the core concepts we had initially in this project. Here is an example of reflecting this through the GUI, in Horizon: the application developer can now see the networks that are created and which virtual machines are on which networks. In fact, we had four interns working with us at Cisco last year, from the University of Kent, and they developed a really nice visual model, which we open-sourced, so you can drag and drop virtual machines onto networks, create virtual routers, and connect it all up: a very simple drag-and-drop kind of deployment technology.
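The ML2 idea, many backends participating in the same operation rather than one monolithic plugin, can be sketched as a manager that invokes a list of drivers. This is an illustrative model of the concept, not the real ML2 mechanism-driver interface; the driver names are invented.

```python
# Sketch: an ML2-style manager fans each network operation out to
# every registered mechanism driver, so heterogeneous backends
# (a virtual switch and a hardware switch, say) coexist.

class ML2Manager:
    def __init__(self, drivers):
        self.drivers = drivers

    def create_network(self, name):
        # Every driver gets a chance to act on the same operation.
        return [driver(name) for driver in self.drivers]

def ovs_driver(name):
    return f"ovs:{name}"        # stand-in for a software-switch backend

def hardware_driver(name):
    return f"nexus:{name}"      # stand-in for a hardware backend

manager = ML2Manager([ovs_driver, hardware_driver])
results = manager.create_network("tenant-net")
```

One create-network call now configures both backends, which is the heterogeneity the talk says single-plugin Neutron couldn't express.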
Additionally, as some of the plugins come in, we've recently been integrating Cisco's Nexus 1000V virtual switch and bringing that into Horizon as well, so that we can talk about port profiles: on this particular port we want a certain quality of service, we want security, and other attributes, which the underlying technology can then support, exposed up to the end user. In many ways we've seen the same theme, that one level of indirection solves all problems, whenever we create new layers of abstraction. In the early days at Sun I was part of the early Java team, when it was still called Oak, and Eric Schmidt, who was our leader at that time, said something like this: when you create a new platform such as Java, which was supposed to allow any application to be written once and run anywhere, you're allowing innovation to happen both above and below the line. So it's no wonder we're seeing this happen here. OpenStack and the Neutron APIs and service now allow service providers to create very interesting new services above the line, on top of this cloud, and it can connect into and talk to a lot of the changes taking place in the underlying infrastructure. This is a great example of innovation happening above and below the line, with Neutron itself being the glue, the connecting piece, that allows these two layers to talk to each other. And one example of where this is going is what's coming out now in Havana: load balancing as a service, VPN as a service, firewall as a service.
From a tenant's point of view, they're creating these networking constructs, and they need to be able to have a load balancer, to assign a virtual IP address for a set of virtual machines, and, using Ceilometer, to have metering of those and health checks for when you scale up and when you scale down. All of this is happening now as these projects get integrated and work together. So in many ways I think OpenStack is evolving from a VM-centric view of the world to an application-centric view. The apps are what matter, the apps are complex, and we need to be able to talk about things such as application orchestration. How do you manage these collections of virtual machines? The individual VMs are almost disposable units: we can scale them up and we can scale them down, and we want these other systems to enable us to do this. So there's a lot of talk now, as you hear in the sessions, about orchestration. Orchestration is really making all of these components work together so that the outside world sees a single IP address, and behind it is a whole array of these different services. Large-scale services are also decomposed into other services. If you look at something like the Netflix architecture, that's how they approach this: a set of services in the application itself that draw upon a set of services in the underlying cloud platform. This makes it a much easier, less error-prone way to deploy your applications, because it's largely driven by a declarative model of what composes your application and how you want it to behave, and then we can let the system drive a lot of that. So orchestration is a key element of this.
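The load-balancing abstraction just described, one virtual IP fronting a pool of disposable members, with health checks deciding who receives traffic, can be sketched in a few lines. This is an illustrative model, not the Neutron LBaaS API; the names and addresses are invented for the example.

```python
# Sketch: a VIP fronts a pool of members; scaling up is just adding
# members, and health checks remove unhealthy ones from rotation.

class Pool:
    def __init__(self, vip):
        self.vip = vip
        self.members = {}                    # address -> healthy?

    def add_member(self, addr):
        self.members[addr] = True

    def mark_unhealthy(self, addr):
        self.members[addr] = False

    def active_members(self):
        """Only healthy members receive traffic behind the VIP."""
        return [a for a, ok in self.members.items() if ok]

pool = Pool(vip="203.0.113.10")
for addr in ("10.0.0.2", "10.0.0.3", "10.0.0.4"):
    pool.add_member(addr)                    # scale up: add members
pool.mark_unhealthy("10.0.0.3")              # a health check fails one
```

The outside world only ever sees the VIP; members come and go underneath it, which is exactly what makes the VMs "disposable units."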
I urge you to really take a look at Heat and the work being done there, which takes this template-driven approach to describing an application. In a declarative fashion, you can describe the components of the application and how they should behave, their scaling up and scaling down. I think we'll start to see people publishing different Heat templates: here's a template for WordPress that ties together WordPress and MySQL and other pieces, and people will pass these templates around and share, essentially, best practices in an architectural sense for how you construct these multi-virtual-machine applications. When you talk about real apps and all of the services involved, though, this is a very simplified view of what you have to pay attention to. Between the different services and firewalls and edge routers and security appliances and everything else, what we're trying to avoid is recreating all of that. That's what exists in real infrastructure today, and it would be a real mistake for us to propagate all of that back up to the poor user who just wants their application to run. So we're really making an effort to collapse a lot of this complexity into a model that's much easier for us going forward. And it comes back to something I've talked about before: apps usually start on a whiteboard. You get together with your engineers and you start scribbling boxes and arrows on the board, and you say, well, I've got a bunch of web servers up here, they'll be talking to your app servers, and you maybe need memcached in front of your databases, and so on. This is how developers think: in these kinds of simplified logical topologies for how an application should perform.
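To give a feel for the template-driven approach, here is a sketch of a WordPress-style two-tier description and the dependency walk an orchestrator performs over it. Real Heat templates are written in YAML with resource types like `OS::Nova::Server`; the dict layout, property names, and `deploy_order` function here are simplified illustrations, not the actual Heat format or engine.

```python
# Sketch: a declarative, Heat-style description of a two-tier app,
# expressed as a Python dict so the example stays self-contained.

wordpress_template = {
    "description": "Two-tier WordPress: web server plus MySQL",
    "resources": {
        "db": {
            "type": "OS::Nova::Server",
            "properties": {"image": "mysql-image", "flavor": "m1.small"},
        },
        "web": {
            "type": "OS::Nova::Server",
            "properties": {"image": "wordpress-image", "flavor": "m1.small"},
            "depends_on": ["db"],    # web tier waits for the database
        },
    },
}

def deploy_order(template):
    """Walk declared dependencies to find a valid bring-up order."""
    resources = template["resources"]
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        for dep in resources[name].get("depends_on", []):
            visit(dep)
        seen.add(name)
        ordered.append(name)

    for name in sorted(resources):
        visit(name)
    return ordered
```

The developer only declares what the application is made of and what depends on what; the system derives the sequencing, which is the core of the declarative model the talk describes.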
And we'd like to be able to go, sometime in the future, from this to a deployed app. If you look at Heat and other template-driven systems, they're largely following this kind of model: talking about what is talking to what, and how these things are isolated from each other. Because, in fact, when I deploy this application, I don't want people from the internet to access my database. That's why I have a topology here: I want the networking, the infrastructure, to provide that kind of isolation, whereas I obviously need my web servers talking to the internet itself. So here's a more schematic view of that kind of basic three-tier web application. You've got your web front ends sitting behind the load balancer, behind the firewall, and that tier has internet access: from the internet you can talk to these servers. On the back end, the database servers need to be isolated: the only resources that should actually be able to talk to the database are those coming from the app tier. So this is, again, a model for how people think about their application, and it's really about expressing this in a way that represents the logical view of these different tiers and how they talk to each other. Now, we can do this, and this is what we support today with Neutron. You can create, in Heat or just manually yourself, all of the different elements to do this. You can create Firewall as a Service, Load Balancing as a Service; you can have these isolated networks connecting tier to tier. Ultimately we will be able to do the service insertion and chaining that lets you put firewalls and load balancers even within the application itself. And we're using the same simple model, which actually came from Amazon, of a security group to represent what traffic is permitted between these different layers. The question to ask is: is that really the right way?
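The three-tier isolation described here can be modeled as simple security-group-style rules: each tier accepts traffic only from named sources. This is a conceptual sketch of the security-group model, not a real Neutron API; the tier names and rule table are illustrative assumptions.

```python
# Security-group-style policy for the basic three-tier web app:
# each destination tier lists the sources it will accept traffic from.
ALLOWED = {
    "web": {"internet"},   # web front ends are reachable from the internet
    "app": {"web"},        # app servers only accept traffic from the web tier
    "db":  {"app"},        # databases only accept traffic from the app tier
}

def is_permitted(source, destination):
    """Would this flow be allowed by the tier policy?"""
    return source in ALLOWED.get(destination, set())

print(is_permitted("internet", "web"))  # → True
print(is_permitted("internet", "db"))   # → False: the database stays isolated
print(is_permitted("app", "db"))        # → True
```

Notice that the policy reads exactly like the whiteboard drawing: the topology is the specification, and the infrastructure's job is to enforce it.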
We have the capability: Neutron and Nova allow us to create these constructs, so we have the basic mechanism. But is that necessarily the right way application developers want to think about it? All of a sudden they're getting involved in: okay, create this isolated network, create these ports on the network, connect this virtual machine to that port. Now if I need to scale, I have to delete that port or add another port onto that network. And what if I'm out of the range of IP addresses on my subnet? There's a lot of mechanism still required to do that, and instead you'd like it expressed essentially as policy. You'd like to be able to talk about groups of machines that serve a particular purpose and, in terms of their communication patterns with other groups of machines, express that as declarative policy. Then the system can go and implement those policies, creating all of those networks underneath for you, and manage the rest of that for you in OpenStack. So this is what we would like to have, and how we'd like to think about it, but we're currently lacking some of that connection with the bottom layer of this chart. At the same time, Cisco is announcing tomorrow morning, I believe in New York at 9 a.m., our approach to this application-centric infrastructure: making it so that the infrastructure itself understands these policies, understands these groups of virtual machines, so it can do all sorts of magic in terms of routing of traffic and so on according to the policy constraints. So that's really where we are: again, creating that conversation between the underlying infrastructure and the application on top. And we have, with several of the authors here, a blueprint for this called group-based policy abstractions for Neutron.
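The group-based policy idea can be sketched as follows: the developer names groups and the communication allowed between them, and a "renderer" expands that declaration into the low-level plumbing (networks, ports, rules) that would otherwise be created by hand. This is a hypothetical illustration of the concept, not the blueprint's actual API; the policy tuples and action strings are invented for the example.

```python
# Group-level declarative policy: (source group, destination group, rule).
policy = [
    ("web", "app", {"port": 8080}),   # web tier may call app tier on 8080
    ("app", "db",  {"port": 3306}),   # app tier may call the database on 3306
]

def render(policy):
    """Expand group-level policy into low-level constructs to provision.

    The system, not the developer, decides to create a network per group
    and a permit rule per declared communication pattern.
    """
    actions = []
    groups = {g for src, dst, _ in policy for g in (src, dst)}
    for group in sorted(groups):
        actions.append(f"create network for group '{group}'")
    for src, dst, rule in policy:
        actions.append(f"allow {src} -> {dst} on port {rule['port']}")
    return actions

for action in render(policy):
    print(action)
```

The point of the design is that scaling a group no longer touches the policy: adding a VM to "web" changes group membership, while the declared communication patterns, and the networks rendered from them, stay fixed.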
You should be able to find access to that, and we'll also link to a detailed spec, where we're working together, as we have in the past in the OpenStack community, with others such as IBM, Juniper, Red Hat, Nuage, and Plexxi, all coming up with the right way to describe this kind of application-centric view of how you deploy and manage applications. I urge you to watch that, and if you'd like to contribute, we would certainly welcome you. Where we're going to end up, I think, comes back again to this theme of orchestration. With the newer technologies being released, the infrastructure itself is becoming much more programmable and dynamic, so we've got orchestration taking place at the infrastructure level. We have a provisioning platform, in OpenStack, spinning up these resources, which may be either physical or virtual, and connecting them together. And now we have application orchestration, things such as Heat, at the top level that the developer interacts with. All of these things meet that original goal from the start of the conversation: creating a conversation between these two levels. Even within Cisco, when we talk about orchestration, we have to start thinking about it as orchestration at each of these levels, because these different levels are now working together and managing resources across them. So I'm going to end right there, since we've got just a minute or two more, but if you have a question, I'm not sure we have mics, but please just raise your hand, and then we can get on to lunch. Okay, no questions. I will be here for a couple of minutes, but enjoy your lunch. Thank you very much.