We've got a quorum, so we'll go ahead and get started here. My name's Reinhardt Quelly — my Starbucks name is Ryan, for when I order food, but Reinhardt's not so bad. I'm with the Cisco WebEx group, and this is an update: a number of years ago, we gave our user story as we were just kicking off our OpenStack project, so we'll talk about where we are now, what we've done, and what we're working on these days.

We'll go through a little history of the sequence we went through over the years. I'll talk a little bit about our organization — we actually changed our organization up quite a bit in order to deliver in the manner I'll be talking about here, and that's a theme I run into a lot as I talk to my peers at this conference and at other meetups. I'll talk about my platform team, what that platform means, how we deliver what we do, and what its components are. I'll spend a couple of slides on our multi-cloud strategy — you'll see as we go through that we do deploy to multiple clouds, and we'll describe that. And I'll spend a little time on how we deploy applications, what our methodology is, and what tool sets we use. It'll necessarily be fairly brief and touch the high points, and I'm happy to talk over the next couple of days if anybody wants to look me up and dig into the details. So let's jump right into it.

We actually started our OpenStack journey in October of 2011, at the Essex Design Summit back in Boston, with my chief architect at the time. We had just acquired another company that had previously been deploying to Amazon, and the task at hand was to get some infrastructure up underneath them in our own WebEx data center so that they could continue delivering the way they were used to delivering in Amazon. So we kicked off our OpenStack project. By July of the following year, we had stood up an OpenStack environment underneath them, and we launched a private beta at that time. All this time, we weren't actually talking about the product we were launching on top of it — we were fairly cagey about that, because being a big public company, we don't like to pre-announce what we're doing. That'll come up a little bit later.

In spring of 2013, after an extended beta of that application, we launched two new initiatives within Cisco at about the same time. We had made the statement back in July of '12 that we were targeting all future development — all major net-new applications — at the OpenStack environment. So come spring, we did launch a new application project, and that was, as we had promised, targeted specifically at OpenStack. We also at that time kicked off a Cisco-wide OpenStack environment; a lot of the team you see here within Cisco is an internal team called Cloud Infrastructure Services, on a platform called CCS. And then in spring of this year, we finally launched the public release of the application we've been building on OpenStack. So today you can go to that URL and download and run Cisco Spark, which is a new collaboration application in the WebEx family. It is 100% hosted on OpenStack.
From media bridges to back-end data stores to monitoring and metrics to everything else, it runs 100% on OpenStack. So I finally get to talk about what we've actually been working to deploy all this time, which is fairly exciting.

This project was interesting. One of the challenges as we deployed OpenStack within WebEx was that there was a lot of inertia: a lot of existing processes, a lot of muscle memory about how you build applications, how you work with engineering, how you do change control — all of which made it difficult to iterate and move very quickly. So the management team — our new CTO and group GM at the time — had us all working together in a room through an architecture session. They called us into the room and fired us all: everybody walk out the door; OK, you're all rehired; let's start a new startup inside of Cisco concentrated on building this application. And so that project was launched. We moved an operations team, an application development team, and a client development team all into one organization and did a true DevOps-style startup, with those teams jointly deploying and running that application. This was a huge part of our ability to get this project out and running quickly and to build what we'll describe later as a cloud-native application. After incubation, after we went into beta, we started rejoining the WebEx mothership, and you'll find us combining operations and other things as we move forward and integrate with the rest of the WebEx properties.

So, my job in platform engineering: of those groups — cloud apps engineering, client engineering, and platform engineering — I'm responsible for the platform side. And I always joke that my job at the platform level is to be invisible. I shouldn't be seen — it's like the old line about kids, except better not seen or heard. In general, people just want me to be under the covers, with nobody noticing what's going on. So I strive to be boring. In fact, we run a Kanban-style board, and our top priority is always what we call the quest for boring. That's a theme here.

My team is responsible for the build and release environment — all of the build and test environment is managed by my team — and for running the deployment targets, where we actually deploy the applications to; I'll talk about that a lot in a couple slides. Also common services: message buses like RabbitMQ, and data services like Cassandra, Riak, Redis, Postgres, and a variety of other things. We're also responsible for the service assurance layer: how do I monitor, manage, and make sure all this stuff is still running? And as I mentioned, we are a joint engineering team — we fall under our engineering organization — so all of my operations people are, in fact, software engineers building operations as a software project. A lot of what you'll see revolves around this. I don't have a traditional operations team; it's all engineers, and the software those engineers write to deliver this stuff.

Another way of saying this: if the green stuff is what my cloud application teams are providing, and the gray stuff at the bottom — well, there are two grays — the gray at the bottom is what my infrastructure provider delivers to me, then my platform is that layer in between: all the things that support the applications that we run.
I mentioned most of those things. In particular, I'll talk a little later about how a significant number of our applications actually deploy to a PaaS — we use Cloud Foundry — and about when we deploy to the PaaS and when we don't.

We today deploy across five data centers and four data center providers, all of them OpenStack. You'll find that we run paired primary data centers, where data is actually replicated across those two data centers. We have particularly high-sensitivity things that run exclusively in Cisco data centers — things like the key management servers that manage the keys for the end-to-end encryption we use in our application. Those run 100% in Cisco data centers. We also run command and control through what we call a diode, for the electrical engineers in the room: everything in our environment — command and control, all of our key material, everything we're doing — is pushed into our cloud providers. We don't really trust our cloud providers. And that just makes a lot of conversations a lot easier with InfoSec teams, with corporate policy folks, et cetera. But on the right-hand side there, we do run on a number of public providers as well as the Cisco-operated internal OpenStack providers.

We have primary data centers where we run the full stack. Spark is primarily messaging — there's a rooms metaphor, kind of like chat rooms, with integrated video conferencing, file sharing, and a number of other things in there. All of the data services are distributed across those two primary data centers. But in addition to the key management servers I talked about, we have media bridges that consume a lot of bandwidth. We need a lot of them, and those in particular we split out to additional data centers beyond the primary ones. Those are the ones we spread around the world to get close to customers, effectively, and you'll find we have more of those in external data centers than in our own, simply for reach.

So, why an OpenStack cloud? I'm actually not going to talk about this — we're all here at an OpenStack conference; we understand why we develop for cloud and why OpenStack is a solid choice for doing that. So I'm going to assume you know why we're targeting cloud and why we're doing OpenStack.

It is interesting that we leverage public cloud even though we have our own Cisco-managed OpenStack providers. There are a couple of reasons for that. The first was pure timing. When we launched the project, the Cisco-wide internal OpenStack built for the Cisco SaaS properties was launched at the same time, so it simply wasn't ready for us. If we wanted to start our deployment pipelines and do continuous delivery — which we literally did within two weeks of launching the project; we started the delivery pipeline and haven't stopped since — that had to happen in a place where the cloud was already there. The internal stuff came a little bit later. So the first thing was to leverage a provider that was already available. The second thing is that we are mobile first: Cisco Spark is a mobile-first application, and the funny thing about these mobile devices is that they don't come to us over the corporate network most of the time. They come over the cell providers' networks.
So as we're doing development, we don't want to bring up a VPN every time we want to connect to those servers. For that reason, from day one, all of our developer systems had to be outside of the corporate firewall and accessible from the mobile devices that are the primary targets for our application. And the final thing was fast capacity on demand. Maybe eBay and Walmart are exceptions by the numbers now, but most private OpenStack environments are actually much smaller than what we can have available in the public cloud right now. Being able to leverage that capacity without the lead times and the capacity planning is very, very handy. In general, people talk about this as a form of cloud bursting: being able to use those resources wherever they are and whoever has them.

Now that we're in production, we continue to leverage both public and private. In general, carrying our base loads on our own infrastructure makes a lot of sense. We also have a number of specific network access requirements. Take my build environment: like most DevOps teams, you want your build and dev environments to look as much like production as possible, but I have things like source code, build systems, and secret stores that are very handy to keep within the corporate walls. Yet it's still OpenStack — it still looks like the rest of our deployments from a deployment management perspective. The WebEx team also continues to have an OpenStack environment, and as we do closer integration between Cisco Spark and WebEx, there are things I'd like to run in the WebEx data center adjacent to the things they need to talk to. So we'll continue to use environments that have that network access. Clear delineation of access is primarily a religious and political matter, but there are a lot of conversations you have when deploying applications to the cloud as a corporate entity, and being able to tell my security teams that the key material is always on Cisco property, or that the only thing ever in the public cloud is encrypted data, makes those conversations a whole lot easier. Being able to say I'm going to run these small-footprint things inside the Cisco-operated environments is a conversation I don't have to have with the InfoSec teams, and that's very convenient.

Public cloud does continue to have benefits. Time to market, as I mentioned before. Scale: your average public provider has a lot more capacity than your average private provider, so we can leverage that. Geographic reach: even Cisco, a large global company, doesn't have data centers in every region of the world, but there are OpenStack providers in nearly every region, and I can get at any of them to place these media bridges and other things. That becomes really important when we talk about specific privacy concerns in specific regions of the world — we call it the Snowden effect; nobody wants their data transiting the US for a lot of these applications — so being able to ensure those are out there is nice. And then there's diversity.
Diversity is really interesting and important. There are things my local providers don't have yet — metal as a service (Ironic), for example, or GPU instances. These things are available from some of these third parties, so we can ask questions like: what would my media bridge look like if I ran it on metal versus virtual? That's a big benefit.

To touch a little more on scale: having machines available is one aspect. The second one, which has really been hitting home over the last year, is minimizing the impact of failure. In an Amazon-scale data center — roughly 10,000 nodes per availability zone is what's been reported, four zones per region — if you have a physical failure, the chances of it actually affecting you are very small, and if it does affect you, it's going to affect one of your machines. But if my private provider has hundreds of physicals and I'm running hundreds of VMs on those physicals, a single node failure has a very high chance of affecting multiple of my nodes. So being able to spread my stuff across many more hosts is very valuable to me. The other interesting thing is that in a private cloud, we have to manage capacity more closely. We don't have infinite resources or an infinite number of machines, even at a company the size of Cisco, so I have to give my internal providers an idea of how much capacity I'm going to need in the next N months — and that's not an easy conversation to have when I've just released the product and have no idea what I'm going to need in a couple months. So having the scale and the ability to use resources elsewhere is a big deal for us as well.

If you get beyond the technical reasons, using multiple providers — multiple OpenStack instances — is also very useful. The first thing is the policy conversations I talked about before, where we can choose a provider with a particular set of policies or a particular deployment: the German provider who doesn't have data centers outside of Germany, so I can ensure German citizens' data never leaves Germany, or my key management servers run by a particular vendor, i.e., Cisco ourselves. That makes those policy decisions much easier. There have been a number of stories — certainly Code Spaces going out of business in 2014 when they lost control of their Amazon keys. There were ways they could have mitigated that, but the bottom line is, if all of your stuff is in one place and you have a set of powerful keys that control that one place, the blast radius is very high if you make a mistake. If I'm spread across multiple providers and multiple data centers, the blast radius for any particular mistake is contained to that little area. So we do like that separation. Then there are changes in commercial terms. I don't know if anybody's gotten a PagerDuty bill lately, but PagerDuty is about to jack up our bill — not an OpenStack provider, of course, but we've got a bunch of automation tied into PagerDuty, and renegotiating that contract is going to be painful because I can't easily move somewhere else. So having multiple providers for commercial reasons is very valuable too.
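To put rough numbers on the earlier failure-impact and blast-radius points, here is an illustrative back-of-envelope calculation — the fleet sizes are hypothetical stand-ins for the "huge public availability zone versus small private cloud" comparison above, not measurements:

```python
# Back-of-envelope math for the "impact of failure" point.
# Illustrative numbers: a huge public AZ vs. a small private cloud,
# with the same 200-VM footprint spread evenly across hosts.

def expected_vms_hit(total_hosts: int, my_vms: int) -> float:
    """Expected number of my VMs affected when one random host dies,
    assuming my VMs are spread evenly across all hosts."""
    return my_vms / total_hosts

public = expected_vms_hit(total_hosts=10_000, my_vms=200)   # ~0.02 VMs
private = expected_vms_hit(total_hosts=200, my_vms=200)     # ~1.0 VM

print(f"public AZ: {public:.3f} of my VMs hit per host failure")
print(f"private:   {private:.3f} of my VMs hit per host failure")
```

The same intuition drives the blast-radius argument: the smaller the shared pool, the larger the fraction of your footprint any single mistake or failure can take out.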
This diversity of implementations matters as well — I wrote this slide a while back. I don't know if you remember the XSA-108 Xen exploit, where people had to go reboot all of their hosts because of a hypervisor exploit in Xen. That was kind of a non-event for us, because our Cisco providers all use KVM and our public providers use Xen, so we just failed over to our Cisco data centers and slept through the weekend. Well, that was all fun until this weekend, when VENOM got both of them. So it doesn't always work that way, but we have a better chance of weathering through something like that.

There are disadvantages, of course. There's more complexity as we go across providers. Some of the obvious things: instance types are all named differently and sized differently, and we have to manage that. The public IP mechanisms differ from OpenStack provider to OpenStack provider — some do provider networks, some do floating IPs — and making our Puppet configuration handle multiple providers is a pain, a very large pain. Security group implementations can differ across providers, and are sometimes not provided at all — a sore point. Some of these differences do take work today; as the common-core work progresses and we get some rationalization about what an OpenStack provider provides, that will help minimize these things. It's also actually costly at Cisco to onboard a new vendor: there's a whole process — financials, internal business continuity plans, privacy agreements, all this stuff. It takes time, so it's not free to onboard another vendor, and sometimes that gets in our way. The providers' processes being different is also interesting: if you look at how our internal provider handled the VENOM announcement and reboots versus what our external providers did, it was quite different, and managing two vendors with two different processes can be challenging, because we're building operational processes around these things — when I get a notice that says X, I react this way — and getting different notices in different ways across different providers is hard. Some of this is where the Cisco Intercloud comes in, which I won't talk about at all here, but there are other sessions going on that you can see — a cloud of clouds with Cisco and partners that actually smooths over some of these things. You can follow up with the rest of the Cisco folks about that.

So let's go back to our deployment for a minute. Why we do cloud — well, we're all doing cloud today, so I didn't talk about that. Multi-provider — that was important to us. Now let's talk about what we're actually deploying, where we're deploying it, and how. Again, I'm responsible for that middle layer, and we're deploying these applications on top of it.
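Before going on — to make the provider-differences point above concrete, here is a minimal sketch (not our actual tooling; every name in it is made up) of what normalizing instance types and public-IP mechanisms per provider can look like:

```python
# A minimal sketch of papering over provider differences: flavor names
# and public-IP mechanisms vary per provider, so deployment tooling
# resolves a logical size and IP scheme per target. Hypothetical names.

PROVIDERS = {
    "cisco-dc1":  {"flavors": {"small": "GP2-Small", "large": "GP2-Large"},
                   "public_ip": "provider-network"},
    "public-dc3": {"flavors": {"small": "m1.small", "large": "m1.xlarge"},
                   "public_ip": "floating-ip"},
}

def resolve(provider: str, logical_size: str) -> dict:
    """Map a logical instance size onto a provider-specific flavor and
    note which public-IP mechanism the tooling must use there."""
    p = PROVIDERS[provider]
    return {"flavor": p["flavors"][logical_size], "public_ip": p["public_ip"]}

print(resolve("public-dc3", "large"))
# {'flavor': 'm1.xlarge', 'public_ip': 'floating-ip'}
```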
One thing I will double-click on briefly: deploying what we call a platform — the thing the applications we're building run on — means the obvious things like the database services and the PaaS itself that we deploy to, but we also have to provide the service assurance layer. In general, because we're going from cloud to cloud, we bring all of this stuff with us. We deploy our own logging infrastructure, our own metrics infrastructure, our own alerting infrastructure — all of it, so we can just pick it up, carry it, and drop it into whatever environment, and therefore have commonality across them. I generally don't consume those things from my providers. Now, there are a couple of services, like DNS, that are third-party external services I consume, and there are some things, like New Relic, which most people are probably familiar with, that I consume as a third-party service. The stuff in blue is what I consume from others, but most of the service assurance stack I carry with me. That's about all I'll say about that — I have gory-detail slides of how we do service assurance, but I won't bore you with them. Eric, I'll bore you with them later.

A big part of this is deciding what we're going to deploy where. If I look at this thing here, you'll see things deploying to the PaaS and things deploying straight to the IaaS, and the question is: how do we make that decision? The first thing is that, as a platform, I have to be able to support all of these, because I have applications at different parts of their life cycle and of different types — things with persistent data, things without. But in general, we target everything we possibly can at the PaaS. It deals with a bunch of the plumbing: deployment, scaling, HTTP routing — a bunch of things it will take care of for us if we just deploy the application into that environment. However, it's best for HTTP-routed, single-port, stateless applications. Things with persistent message stores, like my data services, don't fit very well in the PaaS, so we target those straight at the infrastructure.

The most notable example in our world is our media bridges. I can't just turn off a media bridge during a scaling event — I have to wait for the meetings on that bridge to bleed off before I can switch it out for another one, so managing the life cycle of that application is different. The media bridges also open up a thousand ports that they listen on, and that doesn't route through a Cloud Foundry router particularly well. Those types of applications we deploy straight onto the machines. And then of course there are our IaaS-hosted services — our Cassandras, our RabbitMQs, anything with persistent disk and long-running state, where I'm not shipping a new release every 20 minutes like I am with my applications. Those long-running things we again target straight at the IaaS. I don't actually have any metal in my world — everything is virtual and consumed through my OpenStack provider, with the exception of the metal as a service we'll be experimenting with for media bridges. So we choose the right deployment model for the right application; our platform supports all of them, and we steer each user to the one that makes sense. Now, if you go into that middle layer, the IaaS-hosted apps, there's a whole bunch of plumbing that the engineering team has to write for themselves: quiescing applications, start and stop, health checks — all of that has to be done by the application team. They can't rely on the PaaS to provide it for them.
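As an illustration of that life-cycle plumbing, here is a rough sketch of the drain-before-retire pattern described for the media bridges; the bridge client object and its methods are hypothetical, not a real API:

```python
# Sketch of the media-bridge life cycle above: you can't just kill a
# bridge during scale-down. Stop sending it new meetings, wait for the
# existing meetings to bleed off, then retire it.
import time

def drain_and_retire(bridge, poll_seconds: int = 30, timeout: int = 3600):
    bridge.disable_new_meetings()          # take it out of rotation
    deadline = time.time() + timeout
    while bridge.active_meeting_count() > 0:
        if time.time() > deadline:
            raise TimeoutError("meetings did not bleed off in time")
        time.sleep(poll_seconds)           # wait for meetings to end
    bridge.shutdown()                      # safe to replace it now
```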
So, a brief foray into our application. I mentioned this was a new project, a new application, and we're working with our engineering teams on building cloud-native applications — things built to run in the cloud as cloud applications. It's a microservices architecture; a couple of the top bullet points are here, but if you read the twelve-factor app piece that was mentioned earlier, there's a lot that goes into building a stateless, cloud-native application, and we push everything we can in that direction. It's also not just the things we build and run that should be cloud native: in every case we try to pick a cloud-native version of the data services we use. Cassandra is a great example — it's a scale-out, horizontal thing; you run a cluster, set your replication factor to three, and I don't really care if one node goes away. It does have persistent data, but it manages persistence and availability at the application layer, which is what we're doing in our application as well.

That brings us to the next slide: availability is, in every case in our world, an application-layer concern. I don't expect my infrastructure to provide any particular availability for me; it's an application problem to provide that availability. This is one of the key differences between your traditional enterprise app and your traditional cloud app — it's delegated to the applications, which means it's a partnership with the application teams building these things. We prefer lots of small nodes rather than a few big nodes. There are lots of good reasons for that. One is that what's generally available in cloud infrastructure is more small nodes. But also, in the case of my current Elasticsearch cluster in this one application, I've got 26 machines in the cluster. If I lose any one of those — meh, that's a 26th of my capacity, and I have three-way replication anyway. If that were a Postgres master-slave pair and I lost one of those nodes, I'd have just lost all my read slaves and be at 50% capacity, just like that. That's really evil. So lots of little things is just a happier place to run, let's say.

How am I doing on time? Wow, I need to talk faster — and you thought I was talking fast already. We do prefer local storage: these Cassandra nodes, for example, have local storage on them, and we tend to avoid block storage. It's more resilient, it's faster, and the applications take care of availability anyway. We are explicitly building our applications multi-data center. When the app starts up, it talks to a discovery service and figures out which data center to talk to, and that's how we switch the app from data center to data center. I don't do any fancy networking to deliver traffic from one data center to the other — that's an application concern. Much like mail, or SIP if you're in the SIP world, an understanding of multiple servers and server priorities is built into the applications. We do that in our applications.
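Schematically, that application-layer, discovery-driven failover might look something like the following — the discovery URL, endpoint format, and priorities are all hypothetical, in the spirit of MX records or SIP server priorities:

```python
# Illustrative only: availability handled in the application, not the
# network. The client asks a discovery service for a service's
# endpoints (with priorities) and fails over across data centers itself.
import requests

def call_service(service: str, path: str):
    # e.g. returns [{"url": "https://dc1.example.com", "priority": 1},
    #               {"url": "https://dc2.example.com", "priority": 2}]
    endpoints = requests.get(f"https://discovery.example.com/{service}").json()
    for ep in sorted(endpoints, key=lambda e: e["priority"]):
        try:
            return requests.get(ep["url"] + path, timeout=2)
        except requests.RequestException:
            continue   # this data center is down; try the next one
    raise RuntimeError(f"no data center reachable for {service}")
```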
The final thing I'll say on building these applications: because we have effectively infinite capacity available at any time, it changes the way we approach certain operations. We tend not to upgrade things in place; we side-grade. We stand up a new instance and flip traffic to it, whether to expand, to upgrade, or to replace a Cassandra node: I stand up another node, join it to the cluster, wait for the data to replicate, and turn off the old one. So I generally don't upgrade in place.

So how do I deploy all these machines? We follow very flexible, composable steps. We don't say the image is always the unit of deployment and everything has to be packaged into a VM. Instead, VMs are built in a particular way: we start with a JeOS ("just enough OS") image, we apply a Puppet config, and we configure the application largely through that config, though sometimes through a secondary process. I don't really care — and yes, there's a sentence fragment on the slide — whether it's bare metal, or a VM, or a Vagrant box in dev, or whatever else. The process is the same, and I use it across my whole fleet, even for building Docker images, which I'll talk about briefly in a minute.

We follow a masterless Puppet model. I talked about this at length at PuppetConf two years ago; it's all about timing — getting a particular set of configs onto a particular machine and maintaining that over time, because we're constantly churning our environment, and managing the sequence and life cycle of all this. I won't talk about that more; you can look it up on the web. We use Hiera for our configuration, pushing data in and out of Git. We're actually starting to push a bunch of that into MongoDB rather than the Git repos we're all in right now. The Git repos were actually great — they carried us two years, and git blame is great; you can always tell who changed what config where — but it's hard to manage across many people on a team, and getting the data into a state that's referenceable at any time by other people and by other automation tools is necessary. We do push orchestration: masterless Puppet, standing up these machines — we have to have something that goes out, takes template files, interprets them, and pushes them. Remember, our data centers don't have access back into command and control; everything is pushed from command and control into the data centers. So we use tools like Fabric and StackStorm to take our configurations and push them onto the running environments. I could talk about that at some length.

We do mostly immutable infrastructure — I talked about side-grades — but we're not pedantic about it. If I need to delete a user from a machine, I'm not going to rebuild the machine; I'm just going to set ensure => absent in Puppet and he's gone. It's much faster, and it's a lot less strain on logging and metrics and other systems to have those machines be a little more stable. One note — I added this bullet after watching one of the other presentations yesterday: the standard way of deploying Cloud Foundry is using BOSH, their automation system for deploying and managing it. Well, Cloud Foundry is one service of the 26 that I run. It has three server types; it's really simple, and I just use my standard deployment tooling to deploy it. That way it's the same as everything else in my environment. You'll hear me say a lot that it's the same in every environment — we strive hard for consistency here.
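As a condensed sketch of that push model — written in Fabric 1.x style, with every path and file name being illustrative rather than our real layout:

```python
# Command-and-control packages the Puppet modules and Hiera data,
# pushes them to the node, and runs masterless `puppet apply`.
from fabric.api import put, sudo, task

@task
def converge(role):
    put("build/puppet-modules.tar.gz", "/tmp/modules.tar.gz")
    put("build/hiera/%s.yaml" % role, "/tmp/hiera/node.yaml")
    sudo("tar -xzf /tmp/modules.tar.gz -C /etc/puppet/modules")
    # Masterless: no puppet master; the pushed config is authoritative.
    sudo("puppet apply --modulepath=/etc/puppet/modules "
         "/etc/puppet/manifests/site.pp")
```

The point of the push direction is the diode mentioned earlier: the data centers never reach back into command and control; the deploy hosts always reach out.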
This is what it looks like in general today. Our infrastructure is all defined as code. We have configuration manifests and Puppet manifests, which are compiled, actually: they're checked in, and when you change one of these things, Jenkins picks up the change, builds a new set of Puppet modules, and checks them into the YUM repos or Debian repos. We use both Debian and — I'm sorry, we use Ubuntu and CentOS. We have a definition language that looks a little bit like a Heat template. What's different from Heat is that ours runs across multiple providers in multiple environments — that's why we're not using Heat. We actually deploy across multiple OpenStack environments, local Vagrant, and, oh by the way, a large cloud provider in Redmond provides some of our infrastructure as well — things like DNS, for example, Route 53. Cool stuff. If you take the basics of our configuration — the Hiera data that says what a thing is supposed to look like, and the Puppet modules that say how that's supposed to be applied — we package that all up and push it out from the deploy hosts. We're using Fabric, primarily, to read these manifest files, instantiate a thing, and push that config onto that machine. It's Fabric that talks to the OpenStack APIs to say go build this particular instance.

We're right in the midst of the next step — if we weren't rebooting machines because of the VENOM bug, we'd be working on it — which is introducing a product called StackStorm. It's open source, and they're actually here this week; you can find them around. Basically, StackStorm is an execution environment for a bunch of the automation we've already built. It lets us take the tasks and tooling we've been building and running and make them callable from our other automation systems: have StackStorm watch for a database change and go execute something, or react to an alert, largely by calling the things we already have. A machine fails? Well, I already have the tooling to go deploy a new machine; I just needed someone to answer the alert and launch that task. A good way of describing this is closing the loop on our automation: the things we used to have humans instantiating, we're now using StackStorm to do. It's an iteration, and in fact that's a theme in a bunch of what we do: get something working, expand it, grow it, move on to the next thing, and sometimes come back and revisit. We're really big on iterations.

We do ChatOps, actually — it's the way we interact with a bunch of our systems, through a chat bot. This is actually a screenshot; I hope it doesn't say anything embarrassing in there. At the top, that's Nate typing "bang server type blue": he's issuing a command to the chat bot to go create a new bastion server in a particular data center — in this case our Cisco Allen data center — and the chat bot goes and does it. So anyway, pretty simple stuff. Well, not simple, but not hard either — just a lot of moving parts.
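Roughly what that ChatOps interaction amounts to — a hypothetical bot handler, not our actual one — is parsing a chat command into the same deployment task a human or StackStorm would run:

```python
# A chat message like "!server bastion blue cisco-allen" is parsed
# into a deployment task executed by the existing tooling.
import re

def deploy_server(server_type, name, dc):
    """Stand-in for the existing deployment tooling (Fabric tasks etc.)."""
    print(f"would deploy {server_type} '{name}' into {dc}")

COMMAND = re.compile(r"^!server\s+(?P<type>\w+)\s+(?P<name>\w+)\s+(?P<dc>[\w-]+)$")

def on_chat_message(text):
    m = COMMAND.match(text)
    if not m:
        return "usage: !server <type> <name> <data-center>"
    deploy_server(m["type"], m["name"], m["dc"])
    return f"deploying {m['type']} '{m['name']}' in {m['dc']}..."

print(on_chat_message("!server bastion blue cisco-allen"))
```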
There's a lot of buzz about containers, at this show and elsewhere. Yep, we run containers. In fact, since we're running Cloud Foundry, by definition everything we deploy there is in a container, because that's how Cloud Foundry does its work. And we're running Docker as well, right now primarily in our build environment. Some of the applications we're building have very intricate environmental requirements — specific C libraries on the systems, Python libraries, all things that need to be managed carefully for a particular build. So the first deployment of Docker for us was in that build environment, running on our Jenkins agents to control the build environment. Well, it's a very short step from that to saying docker push — here's this built thing, now go use it — and you'll see us, particularly on the media bridges, starting to use that for part of our deployment. And of course we're watching Diego and the other things that are happening.

So what's next for us? We have a number of other Cisco teams within the Collaboration Technology Group that are also moving to the cloud. Some of them are very like-minded, building similar application stacks, and they're throwing in with us: hey, let's use a common platform rather than building a new one. Other applications less so — applications with more of an installed base, or a different application architecture that doesn't lend itself as well to the stack and the things we do. In many cases they're picking up pieces of our platform — our logging platform, our metrics platform, our alerting system — but not Cloud Foundry and Cassandra and the other things in our platform. We're pretty pragmatic about what we share and who picks up what. But you'll find us deploying to a lot more data centers as we expand out into the world with the application — our own data centers, WebEx data centers, more Cisco data centers. By my count right now, there are about 30 different OpenStack data centers available to me as I need to deploy into different areas, so you'll find us spreading even more. And as we bring on these other teams, we'll be expanding our platform services — upgrading our logging infrastructure to use Kafka as the persistent ring buffer in there, for example, and adding Kafka as an application-layer service. You'll find us expanding those things.

So that's it. I'm happy to take any questions if we have time. I will ask that you use the mic, however. [An audience member starts to ask about strategy; the speaker takes another question first, asked off mic, about why Heat isn't used.] Well, you're kind of trapped, so I'll repeat for you. Heat, in its current incarnation, is executed in the context of a particular provider, right? Your Heat template is being interpreted by a particular OpenStack provider. A lot of our clusters, like our Cassandra clusters, have members in two different providers across two different environments. So if you look at a machine definition in my templates, it says data center one for these three machines, data center two for these three machines, and our orchestration is hitting two separate APIs, with two separate sets of credentials and everything, to deploy. That's the primary reason.
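A minimal sketch of what that multi-provider deployment amounts to — the manifest shape and client objects here are simplified stand-ins, not Heat and not the real definition language:

```python
# One logical Cassandra cluster whose members are pinned to different
# providers, so orchestration must talk to two separate OpenStack
# endpoints with separate credentials.
MANIFEST = {
    "cassandra": [
        {"name": "cass-1", "data_center": "dc1"},
        {"name": "cass-2", "data_center": "dc1"},
        {"name": "cass-3", "data_center": "dc2"},
        {"name": "cass-4", "data_center": "dc2"},
    ]
}

def deploy(manifest, clients):
    """clients: {"dc1": <OpenStack client>, "dc2": <OpenStack client>},
    each built from that provider's own auth URL and credentials."""
    for service, nodes in manifest.items():
        for node in nodes:
            clients[node["data_center"]].boot(node["name"], role=service)
```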
So I'm just curious: what's your scaling strategy in terms of handling more applications? For example, you're actually offering Cassandra as a service, Kafka as a service — they're all useful in their own way, but they also suck in their own way, and you have to have people who understand how each application works. If you want to offer a hundred different services, do you actually have a hundred people? What's your strategy to scale up?

Yeah, great question. The first thing is that the platform team does not expect to be expert at every one of these things. We assist with the deployment of the thing — Cassandra, for example — making sure we're deploying it and monitoring it, and that the basic operations are covered. But the people who are really expert at running Cassandra are in fact the application teams consuming it. In general — all the things I mentioned were mostly NoSQL-style solutions — we don't have a Cassandra team that knows everything about Cassandra. The application teams are much more closely tied to running Cassandra, doing the DB reviews for Cassandra, understanding how to use Cassandra. So the short answer is: the partnership with the engineering team that's using it. Now, we do tend to push people toward a particular thing. If I've already got Cassandra running and someone comes to me and says, hey, I'd really like you to deploy Riak, I'm going to ask them pretty detailed questions about why the heck they can't use Cassandra. And sometimes the answer is they need both.

Then the question is how I make Riak run multi-data center. Yes, Riak requires an enterprise license to run multi-data center, and yes, I pay for a multi-data center enterprise license for the limited application where we use Riak. That's a bit of a loaded question — yes, it's worked; we've run it as a metadata store for a specific application for a number of years. I'll say this with Basho in the room: Cassandra, fully open source, has been every bit as reliable for us, if not more so, without the licensing fees. By and large, as an organization, we prefer to use open source, and we're happy to pay a vendor for support. If you look, I've got third-party support for Cassandra, third-party support for StackStorm, third-party support for Rabbit, but we're using the open source versions of all of those, so we're not locked in. We don't have to get into a licensing discussion every time I open a new data center — and as you just heard, I open lots of data centers. So yeah, we'd prefer that, and it is a bit of a sore spot, to be honest. I would like that too. I'm paid up for another 18 months, so it's not like my hair's on fire — I don't have much hair left anyway. In general we are trying to move people away from that. Now, there are things Riak does well — the vector clocks, specifically, for managing consistency, are useful for specific applications, and that's why we're using it. Sometimes you bite the bullet, pay the money, solve the problem, and move on. At the end of the day, we've got to deliver our application.

Anything else? They're telling me I'm out of time, so thank you for coming, and I will be around. Thank you.