that I'm looking forward to. We have Cisco WebEx here to talk about the production workloads they are running on OpenStack. And to present that for us this morning, we have Reinhardt Quelli. Thank you.

Thank you. When everybody started asking if I was interested in doing a keynote this morning, well, actually it wasn't originally a keynote. It was supposed to be one of the small user sessions, where I'd have a much smaller group. So if I'm nervous, it's because there are a lot of you out there. It's very exciting.

So, we're WebEx. We actually attended the Essex design summit in October of last year, and my team was introduced to the community. We were inspired by stories from CERN and Mercado Libre and some of the other teams that spoke about their experiences, and we were pretty convinced that participating in this community was a way to deliver what we needed to support our business. So I was pretty excited to launch into that.

When you think about Cisco, let's go forward. That's the wrong slide; let's not repeat this. Most people, when you think about Cisco, think about this stuff: hardware, CRS routers, switches, UCS gear. All great stuff. But that's not what our team does. At WebEx, we actually deliver and manage SaaS applications. We deliver three primary applications from our platform: WebEx, which most of you are probably familiar with; Jabber, which is an instant messaging system; and Cisco Social. And there's a bunch of other stuff coming along. Ultimately, we are one of the largest business SaaS vendors in the market today, and we're serving a lot of users from this platform.

So we looked at our platform and its future. WebEx itself as a service has been around for over a decade, in fact, and the platform it was built on, while it has served its needs well, is not the fastest platform for innovation going forward.
And so ultimately, what do we need to do to get that platform to accelerate our development and move things forward?

We deliver these services from a global footprint. We've heard a lot today about global footprints; our services themselves run across multiple continents. We call ourselves Cloud Services because, to the outside world, to the outside customer, we at Cisco are a SaaS vendor. We deliver a SaaS application to end users directly, and we're responsible for delivering that platform. But inside Cisco, to our other product teams and the other groups within Cisco, we're effectively a cloud provider. We provide infrastructure, platform services, and something we call operations as a service, which basically means that we are responsible for the entire stack of the application, from the data center tiles all the way up through the running application, and we manage that. My boss, Raj Patel, has coined the term "tripod" to describe how we work with the other teams: product management, engineering, and operations working together to deliver the application.

At Cisco, we have multiple cloud teams; you can't throw a stick in Cisco without hitting cloud somewhere. The group you'll see most around the show here is Lew Tucker's OpenStack team. This is the group delivering OpenStack solutions for customers; you saw some of the announcements earlier, including the OpenStack Cisco Edition. We also have a group within our Network Management Technology Group that has Cisco Intelligent Automation for Cloud, an orchestration framework for the cloud. We have multiple network teams building Quantum plug-ins and products to work with Cisco. And then we also have internal teams using OpenStack internally, our CITEIS team, for example, delivering services to internal users.
So ultimately, within our group, we call it drinking our own champagne. We run Cisco on Cisco: all of our WebEx services are delivered from Cisco networking gear and Cisco hardware, with lots of Cisco software. There's, of course, another name for that, and that's dogfooding. So we get a lot of exposure to Cisco stuff, to the future, and to where we're moving from here.

One of the questions we had to answer, as we looked at our deployment platform, our software platform, is: why would we do a cloud? What's so interesting about cloud, and what are we trying to accomplish? And it's really these things. In fact, Mark Shuttleworth of Canonical said this yesterday, and I think he hit the nail on the head: ultimately it's about delivering agile operations. How quickly can I deliver applications from development to production in a consistent, dependable way? That's the number one task for us.

In doing this on top of a cloud platform, doing things in the same manner as our peers in the public cloud community, we get to reuse all the great tooling out there: tools like Chef and Puppet, which are commonly used, orchestration frameworks, logging as a service, metrics as a service, lots of additional things that are available to us to deliver our applications. We get to reuse all that tooling and not reinvent it ourselves, which is key.

We are also very interested in multi-tenancy in the delivery of our services. Even though we're a private cloud implementation, multi-tenancy is important to us for some of these reasons. We separate between products and between product lifecycle stages: QA, dev, production, et cetera. But we also have security isolation zones, our network termination zone, application zone, and data zones, and those are all separated into tenants within OpenStack, so we can deliver a secure service on top of this platform.
And then, of course, like any SaaS provider, like anybody delivering an application to the market, we have to pay very, very close attention to our cost of goods sold. We have a free tier of service in WebEx, and it costs money to run, so we have to keep those costs as low as possible.

Adrian Cockcroft has famously argued that the private cloud doesn't make any sense: if you're big enough to run a cloud, you ought to be selling cloud services; otherwise you should be running in a public cloud. We have a different opinion. The number one reason a private cloud is important to us is accountability, one throat to choke. Many of you are probably WebEx customers. When your meeting goes down, it's five minutes after the top of the hour and you're trying to start a meeting. If I were in a public cloud and had to pick up the phone and call some large cloud provider who's had a couple of outages last month, who do I go to? What's wrong with the network? When's it going to be up? How long do I have? We need to be able to answer those questions. That end-to-end accountability is very, very important to us and to our enterprise customers.

End-to-end visibility. Visibility is actually rather interesting. How many of you would love to be able to go to your cloud provider, as an application developer or deployer, and say: tell me exactly what's happening on those VMs? I know what my application is doing; I can see the metrics off my application. But I also want to see what the VM is doing. What's the I/O wait on that VM? What's the load average on that VM? Is it my application or is it the infrastructure? Well, we can answer that question. We can, in fact, expose it to our product teams.

Flexibility: we are deploying clusters in our data centers, and we intend to deploy them out into what we call our iPOPs, our edge locations.
So we can have a common deployment framework across those entire environments, and that flexibility across the whole thing.

And then finally, it's an analog world out there, and we connect to it. We have video transcoders, we have telephony bridges, we have all sorts of wacky stuff that has to connect to our services. If I run my own private cloud, it's very easy for me to make those connections. Well, not easy, but it's possible to make the connections.

So anyway, why OpenStack specifically? First of all, we're very comfortable with open source. Open source isn't necessarily the first thing you think of in the Cisco world, but actually, if you look at Jabber, it came from a Cisco division, a Cisco group. We contribute to the Apache Traffic Server and have one of the core contributors on our team. And then, of course, there's all of Lew's team and the things they're doing. It's a natural way for our organization to do business, so there's a level of comfort there. And open source, of course, has a lot of advantages we could talk about: leverage, community building, and everything else. Given that these things are open source, when we have special needs, we can meet those needs immediately. We can apply engineering resources and solve our problems, not wait for the next vendor release. The options for support are, I think, self-explanatory: we have a lot of places to go where we can work with the community or with outside vendors, and we keep our costs low.

So we went out and said: OK, we've made the decision to do a private cloud, and that's a no-brainer for us; probably everybody in the room understands that. Why OpenStack? OK, we know why that's important. But how do we do this? How do we get there? And where do we start?
Keeping in mind that we launched our private cloud efforts basically in October of last year, we looked out among the community and decided we didn't want to go it completely alone. We needed someone to help us jump-start. We turned to two organizations: one is Lew Tucker's OpenStack at Cisco group, and the second was Mirantis. We've been partnering with Mirantis on a lot of the things we've been doing; it's been a very tight partnership. One key commitment from everyone on this slide is that all of us are committed to keeping the work we're doing, the extensions we're making, open source. In fact, everything we're going to talk about today you either can get right now or will be able to get soon in the OpenStack Cisco Edition. Everything is going to be in the public.

So as we started, OK, I want to build a private cloud. Rather than look at some of our public cloud peers, look at their service offerings, and try to decide which parts of those offerings we wanted to bite off, we took a more pragmatic approach. We had a single product team within our group that was running in a public cloud at the time, and we implemented the bare minimum required to get them running. We brought the platform up to deliver that application, and now we're biting off the next and the next. It was a good approach.

One thing we got asked a lot is: are you going to support the EC2 API within your OpenStack environment? We made a quick call that it just wasn't necessary, wasn't interesting. The customer in this case ported from the EC2 API to our API in about two days. It just doesn't matter. It's not meaningful.

So remember, again, we started our OpenStack journey in October of last year.
By July 31, we had launched our alpha product on top of that. That has everything to do with the strength of the product and the community. Everybody wants to ask: is OpenStack ready to deliver applications? You hear talk here about all the problems we have in OpenStack, all the things we want to do. The reality is we took this thing effectively off the shelf, deployed it in our environment, connected it to our monitoring systems, and I'll talk about that in a minute, and launched an application on top of it. So absolutely, it's possible. And this is Essex, in this case. We've had 100% uptime on that platform since that launch. We are going beta with this particular set of services within a few weeks. I was hoping to announce it earlier, but there are always roadblocks.

So just a quick overview of what we are actually providing. The Nova implementation is mostly stock Essex in our environment, and this is a familiar slide to everybody: a controller node, with compute nodes underneath it. Our infrastructure is Ubuntu, we're running KVM on top of that, and standard Nova underneath there. We don't feel that's enough to run in a production environment, so one of the first things we did was add HA: active-passive controllers in our environment. If you look at the OpenStack at Cisco work, we're heading, screaming, towards an active-active controller node. We deployed this back in July, so this is the first generation, our active-passive environment. But we have done full failover testing and made sure that this does run.

One thing to note about us: instead of thousands or tens of thousands of customers, we have tens of customers, because we're delivering to product teams, effectively. So a lot of the scale questions don't necessarily apply to us immediately.

So when you deploy things like this, this is where you get, right?
You've got a running system. It's ready to go. You're plowing down the street. But you don't really know where you're going yet. So one of the first things we had to add to this environment is monitoring and service assurance layers. A lot of the work that our own team, internal to Cloud Services, has done is around monitoring, metrics collection, alerting, that type of stuff. So we have basically a full stack of monitoring and metrics collection: collectd and statsd for metrics collection, Flume for log forwarding into our central logging servers, and Monit and Imsen for local process health and alerting. All of these connect back to our existing infrastructure servers, MCT and EMS, which are internal alerting and health check systems, respectively.

We've also created a management console that percolates some of these things up into monitoring views, so you can get an at-a-glance health check. The status of the whole cluster is red, green, or blue, blue meaning in maintenance. And then, of course, metrics and monitoring. These metrics, through the work that, again, Lew's team has done, can be exported and brought up into the Horizon interface as well, if you choose not to use that single point of interface.

One point I want to make on these monitoring views is that we give our tenants, our customers, access to these as well. They can see our underlying platform as well as their applications, and we're piping the metrics from both into the same place, so we can look at the system holistically. One of the major advantages of private cloud, frankly.

We did basically the same thing for Swift. Swift is a big part of our environment, with the same metrics and management layers. The one thing slightly different about Swift is that we have added an orchestration layer, based on Salt in this case, that allows us to sequence things.
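The statsd wire format those collectors speak is simple enough to sketch. Here is a minimal gauge emitter in Python; the host, port, and metric names are placeholders for illustration, not our actual configuration:

```python
import socket

def statsd_gauge(name, value, host="127.0.0.1", port=8125):
    """Send a gauge metric to a statsd daemon over UDP.

    statsd's wire format is plain text lines of the form
    "<name>:<value>|<type>", where "g" marks a gauge.
    """
    payload = f"{name}:{value}|g".encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload  # returned so callers/tests can inspect what was sent

# e.g. report per-VM I/O wait alongside the application's own metrics,
# so platform and tenant data land in the same back end
statsd_gauge("cloud.compute01.vm42.iowait", 3.7)
```

Because the transport is fire-and-forget UDP, instrumenting both the platform and tenant applications this way adds essentially no coupling between them.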
I should have mentioned earlier that everything you're looking at here is automatically managed and deployed through Puppet. Systems are booted with Cobbler, configured with Puppet, and brought up. It's a hands-off deployment: you plug in a box, enter the MAC address in Cobbler, and you're done. The whole system comes up, all the way, including RAID configuration and everything else.

As you saw in a previous slide, we always run in multiple data centers. We live in California; the earth shakes out here regularly. We have data centers in Texas; the tornadoes go through there regularly. To provide service 24x7, 365 days a year, you have to have multiple data centers. Our approach on multiple data centers for Swift, effectively what we have today, is two Swift clusters with container-level replication between them. We're very interested in the multi-data-center work that Mirantis, SwiftStack, our team, and the community at large are working on. It's going to be very important for keeping costs down, because today this means three plus three copies of the data, and we need to solve that.

On the Nova side, however, we intentionally separate out those clusters. It's our belief that application availability is an application-layer problem, and that applications should be designed to understand it, just as when you deploy into a public cloud you're explicitly talking to an Amazon East endpoint and an Amazon West endpoint. That enables you, as an application developer, to make decisions about how your data is flowing, how your traffic is flowing, how your replication is flowing if you use replication. We need to expose that to our customer, which is our tenant, so they can make intelligent decisions about their application design. Because the network does not have zero latency, and it's not always up.
All of the fallacies: there was a Sun fellow who had a list of fallacies of distributed computing that you can live by, and you have to expose these things to the user. So in our Nova environment, we explicitly expose multiple endpoints. The only thing we replicate between the two at the infrastructure layer is, in fact, user and tenant credentials, so that people can use the same credentials across both.

Anyway, we deployed this originally in July and we're continuing forward. We are deployed using the standard VLAN networking model. This is a model that doesn't work particularly well for enormous public providers with thousands or tens of thousands of clients, but when I've got tens of clients, the limits on VLANs are not a problem; the number of VLANs available is manageable for us. It does give us an interesting capability that we've taken advantage of in our deployments thus far: the ability to drop a physical host into a customer's VLAN. We have certain servers, for example, that run best on hardware, and there's no reason I can't deploy those right alongside the virtual servers the customer is spinning up and give them direct access to each other. That's proved pretty valuable for us. We are, of course, very interested in Quantum and the things we can do with it, particularly as we try to extend this model across multiple data centers. But for now, this is working out very well for us.

So, to talk about that combined team and what we've done together: the first thing, again, is deployment and monitoring. If you can't deploy reliably, repeatably, and quickly, you don't have much of a cloud. You've got to be able to build that infrastructure underneath your growth reliably, and you've got to be able to monitor and watch it. So this is a big chunk of the work we've been doing.
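Since only credentials are replicated and the per-data-center endpoints are deliberately separate, it is the application that decides which endpoint to talk to. A hedged sketch of that client-side choice, with invented endpoint names and URLs:

```python
def pick_endpoint(endpoints, is_healthy):
    """Return the first healthy API endpoint, in preference order.

    `endpoints` is an ordered list of (name, url) pairs supplied by the
    application; `is_healthy` is whatever health probe the application
    trusts. Both are application-level concerns by design: the
    infrastructure does not hide the failure domains.
    """
    for name, url in endpoints:
        if is_healthy(url):
            return name, url
    raise RuntimeError("no healthy endpoint available")

# Hypothetical endpoints for two data centers; the same Keystone
# credentials work against both, since users/tenants are replicated.
endpoints = [
    ("us-tx", "https://nova-tx.example.com:8774/v2"),
    ("us-ca", "https://nova-west.example.com:8774/v2"),
]
```

In real use, `is_healthy` would be an actual probe (for example, a timed HTTP check); the point is only that the failover decision lives in the application, where latency and replication trade-offs are visible.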
Then the HA configuration around MySQL and RabbitMQ; all of our services, by the way, sit behind API endpoints. We, being Cisco, know a few things about load balancers. We use external load balancers to deliver traffic into all these systems, so we use that existing infrastructure rather than duplicating it inside the cloud.

On Swift, we've done a few API extensions that are interesting for our particular use cases. Specifically, secure token access: we don't have to hand out individual per-end-customer credentials to access data in Swift; we can hand out signed, timed tokens that grant access for a very brief period of time. Cryptographic hashes for some of the data we're putting in: MD5 hashes are not strong enough for our use case, so we're doing, I think, SHA-512 hashes, so we know what data we're putting in and can ensure it hasn't been tampered with along the way. We're doing ring management via Puppet, with Salt for remote execution. And multi-part upload, for handling large objects. Again, all of these are things you can expect to see in the OpenStack Cisco Edition.

With our partner Mirantis, we've extended the Tempest tests to run full validation. When we deploy a new data center, for example, and we're standing up a new Swift cluster, rather than just doing functional-level or ad hoc tests, we run Tempest tests that have been extended to do end-to-end checks: spin up VMs, ensure we can log in, tear them down, connect to the database, basically exercise the whole system. Those are intended to run in production as well, so we can continue to run them against a live system.

And then this last one I'll talk about in a little more detail: one of the interesting things about running a private cloud is that workload placement becomes very interesting.
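The signed, timed tokens follow the same pattern as Swift's TempURL middleware: an HMAC over the request method, an expiry timestamp, and the object path, appended to the URL as query parameters. A sketch of that pattern; the host name and secret key are placeholders, and WebEx's actual extension may differ in detail:

```python
import hashlib
import hmac
import time

def temp_url(path, key, method="GET", ttl=300, expires=None):
    """Build a time-limited signed URL in the style of Swift TempURL.

    The signature is HMAC-SHA1 over "method\\nexpires\\npath" using a
    shared secret `key`; Swift recomputes and compares it server-side,
    and rejects the request once `expires` has passed.
    """
    if expires is None:
        expires = int(time.time()) + ttl
    body = f"{method}\n{expires}\n{path}".encode("utf-8")
    sig = hmac.new(key, body, hashlib.sha1).hexdigest()
    return (f"https://swift.example.com{path}"
            f"?temp_url_sig={sig}&temp_url_expires={expires}")

# hand the end customer a link to a recording, valid for 60 seconds
url = temp_url("/v1/AUTH_webex/recordings/mtg42.mp4", b"s3cret", ttl=60)
```

The stronger integrity check mentioned above is even simpler: record `hashlib.sha512(data).hexdigest()` when the object is written and compare it on read.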
If you're deploying into a public cloud with tens of thousands of physical nodes, random placement works very well for dispersing your workloads, so all of your front-end nodes don't go down at once, for example. But if I've got handfuls of physical servers in some small data center or iPOP, I have to be very careful about where I place those workloads. So a lot of what we're looking at for the future is around the scheduler and how we manage workloads in this environment.

One of the first things we did was around Nova Volume. Nova Volume, as we know, is iSCSI-based; typically the Nova Volume server is a separate box exporting via iSCSI to the compute nodes. We thought: what if we could take Nova Volume and distribute it across all of our compute nodes, and then, when we create a group of volumes, use effectively anti-affinity to say, I want you to disperse yourselves across all the available servers? That works out pretty well, so you can create these distributed volumes. If you think about, for example, a Cassandra or a Voldemort or a Riak or whatever cluster, you want the disks in that cluster spread across multiple physical machines. The next step is a scheduler on the Nova side that's aware of where those volumes are and can place a VM as close to its volume as possible: when you spin up a VM, you say, I want you to get as adjacent to this particular volume ID as possible. It's kind of a cool way to get local performance while keeping the benefits of Nova Volume.

So anyway, changing gears just a little bit here. For anybody deploying a new application, a new software package, documentation is a big deal.
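The anti-affinity idea above boils down to a least-loaded spread across hosts. A toy version of that placement decision, mirroring the concept rather than Nova's actual scheduler code:

```python
def place_volumes(hosts, n):
    """Choose hosts for n new volumes with anti-affinity.

    `hosts` maps host name -> current volume count. Each new volume
    goes to the least-loaded host (ties broken alphabetically for
    determinism), so no host takes a second volume of the group
    before every host has taken one.
    """
    load = dict(hosts)          # don't mutate the caller's view
    placement = []
    for _ in range(n):
        target = min(sorted(load), key=lambda h: load[h])
        placement.append(target)
        load[target] += 1
    return placement

# spread four volumes of a (hypothetical) Cassandra cluster
# across three compute hosts, one of which already holds a volume
place_volumes({"c1": 0, "c2": 1, "c3": 0}, 4)
```

A volume-aware compute scheduler is the mirror image of this: given a volume ID, it would rank compute hosts by proximity to that volume instead of by load.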
Engineers, by and large, don't have documentation as the first thing on their minds when they're delivering features, so it's always a chore to manage. We've done quite a bit. This is a snapshot of our internal wiki page, actually a joint Cisco wiki across the multiple cloud teams doing this work. But it doesn't stop there. When we delivered our first application, I will admit that I didn't point our users to my own documentation. I pointed them to one of our public cloud competitors who's running OpenStack and said: use their libraries and their documentation; they're ahead of us. And that's what they did. They took our competitor's products and documentation, built their application using the APIs as documented, pointed it at our endpoint, and away it goes. There's a lot of material out there that our product teams can leverage without us having to write it or hire documentation people ourselves: the OpenStack Foundation, local meetup groups active in this stuff, the public clouds including our competitors, the various web forums, fellow users in various groups. It makes this stuff very, very approachable for customers.

So, as we look toward the future, here are some of the things we're going to be doing that are very interesting to us; there have been discussions around this summit, and we'll continue those conversations. First is at-rest encryption: we have a use case and a need for encrypting data in our data centers for our customers. Multi-data-center Swift, which I mentioned earlier. And we have an internal identity system used across our product lines; connecting our authorization to it is actually pretty straightforward with Keystone, right? Connecting that back end to an alternate authentication mechanism.
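Keystone's identity back end is pluggable, so wiring in an internal identity system mostly means answering the authenticate question against that store. A deliberately simplified sketch; the class and method names are illustrative and do not match Keystone's real driver interface:

```python
import hashlib
import hmac

class InternalIdentityBackend:
    """Hypothetical back end delegating authentication to an internal
    identity store (here, an in-memory dict standing in for the
    company-wide identity service)."""

    def __init__(self, user_db):
        # user_db: username -> (salt, hex SHA-256 of salt+password)
        self.user_db = user_db

    def authenticate(self, name, password):
        stored = self.user_db.get(name)
        if stored is None:
            return False
        salt, digest = stored
        candidate = hashlib.sha256(salt + password.encode()).hexdigest()
        # constant-time comparison to avoid timing side channels
        return hmac.compare_digest(candidate, digest)
```

In a real deployment the dict lookup would be a call to the internal identity service, and Keystone would keep issuing its own tokens on top; only credential verification is delegated.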
Of course, Quantum, as we extend our networking capabilities, particularly going multi-data-center; these are some of the areas. Metal as a Service is particularly interesting to us, and there have been a lot of conversations at this summit about different groups using the OpenStack API as a front end for delivering and managing physical servers. Ultimately, the goal for us is to give the application teams a common interface: they don't have to change their deployment methodology to specify a physical server versus a virtual one, and there are a lot of use cases for physical servers. Things like Hadoop, for example, work best on physical hardware; if we can manage them the same way we manage our virtual infrastructure, it's a win for everybody. And then, ongoing operability.

So, I'm just screaming along here, so I'll finish early, which you'll probably all appreciate. That was it. We are, of course, hiring. Is anybody not hiring? This is the general Cisco landing page for hiring, and if you stop by the Cisco booth in the back corner, there are cards, and you can talk to people about some of the stuff we have there. So, that's it. Thank you.

Thank you, Reinhardt. I think maybe we started something bad here with this hiring. I live in the country in East Texas, where there are lots of woods and farms and ranches, and a lot of times you'll drive by and see signs on the fences that say "Posted: No Poaching." We're probably going to start seeing some of those out in the sponsor areas. So, that wraps it up for our keynotes this morning. Thank you for coming. There are a lot of sessions throughout the day; check sched.org. We're going to be doing a breakdown of this room, so you can all leave now. Thank you very much.