That's really pretty bright. OK. Ken, Duane, you guys win for attendance. Hi, everybody. My name is Gary Kevorkian. Welcome to the Cisco sponsored session number three today at the OpenStack Summit. How was lunch? What'd they have? I didn't make it. Seriously? Oh, man. OK, I've got to get off the event team. So again, welcome to our room. This is the third of our sponsored sessions today, and as I mentioned when we were passing out the cards, we'll be doing a small giveaway at the end of the session; we'll collect all the cards at the end of the talk. With that said, I'll quickly introduce our presenters for today. We've got a great duo for Scaling Containers in OpenStack: Duane DeCapite, our director of product management, and Ken Owens, our CTO for Cisco Cloud and the virtualization team. With that, I bring you Duane and Scaling Containers in OpenStack. So good afternoon, and welcome. I appreciate you attending — standing room only all the way in the back. Today we're going to be discussing two of my favorite topics: containers and OpenStack. We'll talk about how OpenStack can make your container deployments easier and how to scale your OpenStack and container projects. I'm Duane, director of OpenStack product management, and we're joined today by Ken, our CTO of cloud services. We're looking forward to a great conversation about containers and OpenStack. Containers are one of the main themes of this summit, and virtually top of mind with everyone in our industry. Quick show of hands: who here is in the process of creating a container strategy? Exactly. Who here is deploying containers in production currently? Excellent. Good. Everyone is thinking about containers. We're going to talk all things containers and OpenStack in this session today. We'll do a deep dive on some of the container-focused projects in the OpenStack Foundation, including Kolla and Magnum. 
Then we're going to talk about some of the recent product announcements from Cisco, including NFVI, Network Function Virtualization Infrastructure, which is built with containers and OpenStack, as well as Mantl and Shipped. Shipped, our new platform, is a platform as a service for containers and OpenStack. We'll also have a demo of Mantl and Shipped. Then we'll do a deep dive on two great plugins. One is Contiv, which is a plugin to ACI — who here saw the great presentation from Mike Cohen on ACI with SunGard earlier today? Great. So Contiv is a plugin to ACI; we'll talk about that and do a demo. The other is Calico, a really nice Layer 3 plugin that works with Docker networking, works with rkt networking, works with OpenStack Neutron, and other environments as well. Then we'll close with a summary and a Q&A. So that's our session today. We're going to make it interactive, have some fun, and we're looking forward to a great conversation on containers and OpenStack. Now, containers have been around for a long time, actually since UNIX in the '70s — namespaces, control groups. But it wasn't until a little company you may have heard of called Docker that containers really became a household name. They did some really good things: they defined a container image format, and they created a hub repository for container images. Docker got lots of people thinking about containers, which is awesome. rkt, part of CoreOS, is a really good container runtime with some nice security enhancements as well. Ubuntu has good container support with Nova and LXD — Linux containers, LXC, and LXD, the hypervisor for container management; really good Nova integration there. There's also OpenVZ, which has been around for a while, with good container and storage integration — this is Parallels Virtuozzo. People like containers, and they're top of mind: they're lightweight, they're fast, they share the kernel. But it doesn't have to be a Linux kernel. It can also be a Windows kernel. 
So Azure Container Service is now GA. Windows Server 2016 has both Windows Server containers as well as Hyper-V containers, which you see on the right-hand side — interesting, because it's a container in a Hyper-V VM. But we are an open source community here at the OpenStack Foundation, and we like open source. The Linux Foundation is working on two major initiatives for containers we're going to talk about today: OCI, the Open Container Initiative, and the CNCF, the Cloud Native Computing Foundation. So OCI was originally OCP, the Open Container Project, but that sounds a little too much like another project in the industry. So it's not OCP, it's not OCD, it's OCI, the Open Container Initiative. And we at Cisco are proud to be a part of it. The project is designed to create standards around container formats and runtime. The container runtime spec is out, and it's nice because it's not a monolithic standard — it's modular. The application spec is also out there, which is nice because it starts to add some flexibility in how to deploy containers in different environments. The CNCF, the Cloud Native Computing Foundation, which Cisco is also very proud to be a member of, is designed to create new common container technologies for internet-scale computing. You may have heard recently that the first hosted project was announced: something you may have heard of called Kubernetes. Also very top of mind — everyone's talking about Kubernetes — so it's very exciting that Kubernetes is now part of the CNCF. At the same time, they also announced their first Technical Oversight Committee. And whose picture is that popping up there? That's Ken. He's on the Technical Oversight Committee for the CNCF, and he's also on the governing board for OCI. So Ken, I heard you were the first person unanimously selected to the TOC. Is that true? It's awesome. 
So I voted for you, by the way. Very exciting. I'm also part of the marketing committee for the CNCF, as well as the product management committee, and we're planning lots of interesting things. There's Cisco Live this summer — who here is planning on going to Cisco Live this year? Awesome, thank you. There's also ContainerCon in August, which is going to be very interesting as well. So lots of good events happening this summer. Now let's take a deep dive into some of the container projects in the OpenStack Foundation. One is Kolla. Kolla is great; the project technical lead, Steven Dake, is with Cisco. The idea behind Kolla is to make a better OpenStack with containers — fundamentally, putting OpenStack services in Docker containers and managing them with Ansible playbooks. It's very powerful. It's designed to provide production-ready containers and deployment tools for operating OpenStack clouds. So this is the use of containers to operate OpenStack clouds. In the Liberty release, the focus was a little more on deploying OpenStack clouds. It's in the big tent and can deploy up to a hundred nodes. All the major services are there — Nova, Glance, Keystone, Ceph-backed storage — with a choice of distributions. But the focus was more on deployment than operations. With the Mitaka release, the focus shifted to the operational side: security enhancements were added, along with upgrade and reconfigure, and deployment time was reduced by 80%. New services around operations were added too, including Elasticsearch and Kibana. So with Mitaka, the focus is on operating OpenStack clouds at production scale. Very exciting — a lot of good work going on with Kolla. In addition to Kolla, we're also very excited about the Magnum project. So Magnum is essentially designed to make — that's a good sign, there we go — Magnum is designed to make containers a first-class resource within OpenStack. 
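(For reference, the Kolla workflow described above boils down to a short command sequence. This is a hedged sketch assuming a Mitaka-era Kolla-Ansible install; the inventory path and `globals.yml` option names may differ in your release.)

```shell
# Sketch of a Kolla-Ansible deployment (Mitaka era). File locations
# and option names are illustrative and may vary by release.

# The target cloud is described in /etc/kolla/globals.yml, e.g.:
#   kolla_base_distro: "centos"              # base image distribution
#   kolla_install_type: "binary"             # binary vs. source images
#   kolla_internal_vip_address: "10.10.10.254"

# Sanity-check the configuration and target hosts first.
kolla-ansible prechecks -i /usr/share/kolla/ansible/inventory/multinode

# Deploy every OpenStack service as a Docker container.
kolla-ansible deploy -i /usr/share/kolla/ansible/inventory/multinode

# The Mitaka operational additions: reconfigure and upgrade in place.
kolla-ansible reconfigure -i /usr/share/kolla/ansible/inventory/multinode
kolla-ansible upgrade -i /usr/share/kolla/ansible/inventory/multinode
```

The point of the talk — operating, not just deploying — is visible in the last two commands: the same playbooks that stood the cloud up also roll it forward.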
And it's the first project, if you will, that allows multi-tenant support for containers as a service. It does this by creating an asynchronous API around Heat templates. It's asynchronous, so it scales great, and it leverages Heat — it's essentially a wrapper around Heat — as well as the same identity mechanisms, Keystone. So it's a really nice way to make containers a first-class resource within OpenStack. The architecture is nice: it supports multiple container orchestration engines. Kubernetes is supported, Swarm is supported, Mesos is supported, and the default is Kubernetes. By default, Magnum is going to create a minimum of two VMs — one for the worker node and one for the master node in Kubernetes. Those are installed on a micro OS, whether that's Fedora Atomic or CoreOS, and it's all managed with Heat templates and all the OpenStack services: Nova, Neutron, Ironic. So it's a great way to provide containers as a service within OpenStack. In addition to Magnum, there are also some very interesting things going on in the Kuryr project. Kuryr essentially maps Docker networking — libnetwork — and converts it into Neutron APIs. There are lots of benefits in doing that. Now you can network your containers just like your VMs within OpenStack, but you can also use all the Neutron plugins: OVS Layer 2 plugins as well as the Calico plugin, which we'll be talking about a little later. So lots of interesting things happening with Kuryr in the OpenStack Foundation. Cisco recently announced some major new products. One was NFVI, Network Function Virtualization Infrastructure, announced at Mobile World Congress in February and built with containers and OpenStack. NFVI is very nice, very powerful, because it's the best of both worlds: you have the flexibility of network function virtualization, but it's all on turnkey infrastructure. 
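(To make the Magnum flow above concrete: with the Mitaka-era `magnum` CLI it is a two-step template-then-bay sequence. The names, image, and flavor below are placeholders for illustration.)

```shell
# Sketch: a Kubernetes bay with the Mitaka-era Magnum CLI.
# All names, images, and flavors here are illustrative.

# A baymodel is the template: which COE, which micro OS image, etc.
magnum baymodel-create --name k8s-baymodel \
  --image-id fedora-atomic-latest \
  --keypair-id my-keypair \
  --external-network-id public \
  --flavor-id m1.small \
  --coe kubernetes

# The bay itself: Magnum drives Heat to boot the master and worker VMs.
magnum bay-create --name k8s-bay \
  --baymodel k8s-baymodel \
  --node-count 2

# The call returns immediately (the API is asynchronous); poll status.
magnum bay-show k8s-bay
```

Because the API just hands Heat a stack and returns, many bays can be created in parallel, which is what the "asynchronous, so it scales" remark refers to.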
So it installs well, it configures well, it scales well, and it scales nicely with different networking plugins. It's built with our partners Red Hat and Intel — we're very excited to be working with them. So RHEL OSP and RHEL KVM, as well as Intel technologies like SR-IOV, single-root I/O virtualization, from their Enhanced Platform Awareness. But even though it's built in partnership with Red Hat and Intel, it's a Cisco product, so there's a single point of support. We're very excited. There are also demos of NFVI downstairs after the session. In addition to NFVI, another product announced at Cisco Live Berlin, also in February, was our new platform as a service, Mantl and Shipped, which supports both containers and OpenStack. And now I'd like to turn it over to Ken for a deep dive on Mantl and Shipped, as well as Contiv and Calico. Thanks, Gary. Thanks, Duane. So when I looked at how containers have been evolving over the last several years, it's kind of obvious that there are different use cases you want to address. The first one is looking at how we help what I like to call the IT administrators — the network administrators, the compute administrators, the infrastructure admins — how do we help them make managing and running containers as a service much easier and faster for their business. The other two groups are both developer personas, but they have different interests. One is more cloud-native development: they just want to write their code and push it out as fast as they can. The others are more on the data analytics, data scientist side, trying to figure out: how do I take business insights from the application data I'm creating, and how do I then automatically make enhancements to my application to drive further business improvement? 
For the experience, I want to paint an end-to-end view of what the user experience is trying to accomplish. If you think about this from a developer-centric viewpoint first, most developers just want to write their code the way they've always written their code. Most developers have what I call a religion around how they develop their code. But when they go to deploy it, they want to be able to deploy it wherever it makes sense from a business standpoint. They don't want to have to figure out how to write to these other infrastructure APIs, or how to manage some new API that's just been released by Amazon, for instance, and how that would impact their code deployment there. They want to basically write their code and then have a multitude of options to deploy to. What I like to say is: the value of cloud was being able to automate all this infrastructure for IT; the value of what we're trying to build is automating the entire software development lifecycle for developers. So give them a way to develop their code and deploy it to a multitude of options without having to understand the underlying physical APIs of those infrastructures. The path to do this goes through the entire SDLC process, where we have different tools that we've completely integrated, tested, and validated across the multiple infrastructures we can deploy into. I'll show a demo of this in a few minutes. From a container standpoint, we're leveraging Docker containers, using both Mesos for what I call the data scientist persona use case, and Kubernetes for more cloud-native development. Although you can use either one in either case — just because you're doing data science doesn't mean you're going to use only Mesos — most of the frameworks for data-type applications exist in Mesos. 
The other piece, then, is that as you deploy containers, you know it's pretty easy to get a Docker container deployed, and it's pretty easy to write your code to deploy in that container. But then if something goes wrong, it's not so easy to identify where in that whole process — Marathon or Kubernetes — something broke. So we spent a lot of time looking at how to enhance visibility into the container deployment scenario, as well as the application service dependencies that get created on the fly. When you scale up an application, there are dependencies that get deployed underneath that scale-up request, and we track those dependencies, make sure all the services are up, and monitor each of those services to see what their performance is like. Then we want to feed all of this back into the issue management system, so that if there is a problem in the code, the developer can track it through the processes they use today. The key to the deployment piece of this is under the covers, and we call it Mantl — the bedrock of microservices, if you will. We looked at it not just as container infrastructure, but as everything you need to manage and maintain an entire new infrastructure for the data center. So we have a similar model to what you'd expect: you have control nodes and resource nodes, and we also have edge nodes now to do things like load balancing and firewalling between different service boundaries. We solved the problem of deploying this to a single node or a single data center, but our main focus was really on how we deploy containers across multiple clouds. 
And so we spent most of our time developing a way to make sure that when you create your application and you want to use a service that's in Amazon and a service that's in Google, you can create one application that leverages those two different services without having to deploy your application to either of those clouds. We take care of all that complexity — the networking needed to do that, the service discovery that's required, and all the enhancements and routing to make sure you can route between these different clouds — all under the covers for the developers. They don't have to think or worry about that. The nice thing about Mantl, though, is that if you do care about that, we give you access directly into Mantl as well, so you can go in and modify, clean up, and pre-configure what you want to have happen. So as of Cisco Live a few months ago, this is where we're at today with Shipped and Mantl. We're using Terraform as the abstraction layer to the infrastructure. For OpenStack, as a good example, we have several packages that work with different OpenStack versions, and we're working on a Terraform package just for Magnum, for instance. But if you want to run across public clouds, we want to support those as well, so we have support and Terraform packages for all of them, including vSphere. And if you're running in a VMware environment and trying to make the transition to containers, we have a platform that can help you with that whole transition from VMware to containers. It's not all-or-nothing — I like to call it a hybrid DevOps model. You can continue to work in the environment you're working in today, and you can add the cloud-native capabilities both in the public domain and in your private domain. 
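(The Terraform abstraction Ken describes can be sketched as follows. The directory layout and variable file are hypothetical; the commands themselves are standard Terraform.)

```shell
# Sketch: Terraform as the per-cloud abstraction layer, as in Mantl.
# Directory names and the tfvars file are illustrative assumptions.

# Each target cloud is just a different set of Terraform configs.
cd deployments/openstack     # or deployments/aws, deployments/gce ...

# Preview what infrastructure would be created or changed.
terraform plan -var-file=cluster.tfvars

# Apply it: control, resource, and edge nodes come up on that cloud.
terraform apply -var-file=cluster.tfvars
```

Swapping clouds means swapping the config directory, not the application — which is the whole "deploy anywhere without learning each infrastructure API" pitch.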
And you can bridge those two over time in a way that makes sense for your business. So this is what the interface for Shipped looks like. It's at ciscoshipped.io, and it's very easy to use — all you need is a GitHub account. We're going to add additional identity providers as our enterprise customers ask for additional capabilities, but right now it's mostly just GitHub. The mantra we have is build, deploy, run, and you'll see me go through what I mean by that mantra in a few minutes. When you click sign in, if it's your first time signing in with your GitHub credentials, you can select private repos, public repos, or both. If you're using an enterprise GitHub account, we also let you connect into your enterprise GitHub account through the same process. Then you go into Build. For the build use case, like I mentioned before, most developers have a way they like to develop, so the goal is to make this as lightweight as possible and as easy as possible to fit what they're doing today. Using the twelve-factor process for developing, we create a project in GitHub when you create a new project. With that name, you then go in and compose your project, and as you can see, we have a bunch of different little starter packages you can use in your development. The smallest of these is probably the Golang one; the heaviest is probably the Python one. But there are different packages you can use to do your development, to help you with the buildpack and get started. We also give you access to Docker Hub — if you want to get a Docker Hub image, or you have your images in Docker Hub, we've connected into Docker Hub. 
So you can go into Docker Hub, search for an image you want to use, fill in whatever parameters that vendor says you have to fill in to use that image, and we deploy that as part of your development environment, which I'll show you in a minute as well. We've also added a few Cisco technologies: if you're using Tropo for collaboration, or using APIC-EM — and I'll show an example with Contiv a little later — we can integrate with Cisco public APIs as well, so you can leverage their API code within your development. Before I go on: when you go to deploy, you just open up a terminal on your laptop and run a git command — we give you a command to copy and paste. You paste it into your terminal, it pulls in the code, and we use Vagrant to stand up local VMs on your desktop. You can use it through Eclipse plugins as well — if you're using Eclipse or other development frameworks, it has integrations with pretty much all the popular development tools. So you just develop what you normally would develop, and then you do a git push to push your code back up to GitHub. If you use an enterprise GitHub, you do a git push into your enterprise GitHub account. Once you do that, it goes back to this screen, where it says you've built your project and now it's time to deploy it. And this is an example that shows, I think, the power of what we've tried to develop here: your building of your application today is very dependent on where you plan on deploying it in the future. So what we try to do here is say: you could have a staging environment that's maybe in Amazon, you could have a QA environment over in this private environment where you do your QA testing, and your production environment could be in Google. 
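(The local side of the Shipped loop described above is just git plus Vagrant; a sketch, with a hypothetical project name and remote, looks like this.)

```shell
# Sketch of the Shipped build loop on a developer laptop.
# The repository URL and project name are illustrative.

# Pull down the project Shipped created in your GitHub account.
git clone https://github.com/your-org/my-shipped-project.git
cd my-shipped-project

# Stand up the local development VM(s) on your desktop.
vagrant up

# Develop as you normally would, then push; the push is what
# kicks off the Shipped build/deploy pipeline upstream.
git add -A
git commit -m "Update service"
git push origin master
```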
We don't care where you have all your different environments, but most companies have different environments where they test, do QA, and deploy. Your code doesn't ever change; it's just where you deploy it that changes. There are ways to set up these environments, and you can define, if you want, how much CPU to give an environment, how much disk, and what kind of ports you want to use. You can do this on a per-container or per-service basis, if you will, and then we tell you where to deploy that environment. You can create new environments by just adding a new environment and entering your credentials — we're using an open source security tool called Vault, from HashiCorp, to securely keep your key pair in a vault, if you will. Then you go to Run: once you've deployed that code into an environment, we give you a single view of your code running in whichever environment you want to look at. So again, if this is a production environment and you're running across multiple different cloud environments, including private cloud, we give you a single view of that application and the CPU it's using across those environments, without you having to know exactly which environment it's running on. We give you a lot of detail into the actual cluster — this is a Mesos example — what the load is and what some of the busy components are within that cluster. And from a data analytics standpoint, we take this data out of the environment and feed it into a tool that lets you go in and create your pipeline, look at how the data is flowing, and decide what type of enhancements you want to make to it. And by this point you're probably wondering: OK, you've talked about something kind of cool, but what about networking? 
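(The Vault usage mentioned above follows Vault's generic key/value pattern; a sketch, with hypothetical secret paths and placeholder values, looks like this.)

```shell
# Sketch: keeping per-environment cloud credentials in HashiCorp
# Vault. The secret path and values are illustrative placeholders.

# Write a credential pair into Vault's key/value backend.
vault write secret/environments/staging-aws \
  access_key=EXAMPLEKEY secret_key=EXAMPLESECRET

# Read it back only when a deploy needs it; nothing sits in a
# plain-text config file on the laptop or in the repo.
vault read secret/environments/staging-aws
```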
We are, after all, a networking company, and we've been doing two things in the networking space. We've been working in user space to make container networking as fast as physical networking, with something called FD.io. We've also been working on policy, making sure that the storage and networking security concerns enterprises have are being addressed, with a project called Contiv. Both of these are open source projects. The main goal of the networking work we're doing is to make sure containers are treated the way enterprise network people would expect them to be treated. Again, if you're a developer, you probably don't care about that at all — for the next few minutes you might just tune out and say, yeah, whatever, la la la, I don't care. But most of you are probably network-minded — network administrators, network architects — and you probably care about how we do this. So we've done a lot of work to make sure we can do multicast. We support IPv6 in our containers. We can give you an IP per container, or we can give you a service IP, so you can have lots of containers sitting behind one IP address that's load-balanced by a load balancer. You have options in how you want to manage your networking at that point. We also provide service domain name resolution, so your containers actually have a human-readable name. If you create an app server, it'll say app server; if you want them numbered one through eight, you can number them one through eight. We let you do the things you'd expect when deploying a server into your production environment. We still show you the UUIDs — if you really like managing UUIDs, we won't make you stop — but we think it's easier to give you a human-readable name. 
So when you see a failure, you can tie it directly to a server you understand. And the policy piece, we feel, is really important, because a lot of visibility gets lost as you go through these different layers of abstraction. We want to make sure we expose everything from the most basic component — a port on a container — all the way up to the policy you define for which traffic can flow between which VLANs in your network. So to look at Contiv a little: the idea of Contiv is, again, mostly operational intent. We're working with Congress here in OpenStack to take application intent a little further and make it more application-developer-centric, but right now this is more of an operational intent model. It works across whatever orchestration framework you want to use for containers, so it's not tied to only Kubernetes or only Mesos. The whole goal of Contiv is to provide a framework that lets you extend the physical policies and ACLs you use in your physical environment to the container world, while also giving you the cloud-native capability of scaling your application up and down easily. So we've built in the concepts of scale and elasticity in the deployment. It's very scalable at this point — about ten times the scale of what we can do with VMs from a policy standpoint. And it leverages whatever underlying infrastructure you have; it doesn't have to be Cisco under the covers. So I'm going to try to — this video was giving me a hard time earlier — I'm going to try to switch over to this screen. You get a blank. That's why you don't do live demos, and then it turns out you wish you had done a live demo. 
So as I bring up this demo, what I'm going to try to show is the way Contiv takes the policy you define and gives you a nice interface to deploy it with. The key points of Contiv: it's open source, and it applies policy across network and storage. It uses what we call a blueprint — you can call it something else. You have your web tier, your app tier, and your Redis database, using the model on the right-hand side that you want to deploy into, and you can deploy firewall policies that can be elastic. If you look at the YAML file, it's a very basic file. We have a nice little web-based user interface you can log into, and you can see it's running VXLAN. In a deployment scenario, you want to deploy this policy into your development environment. Looking at the processes, it shows your web environment — web servers running, app servers running, your database running. You can then create a production instance as well, so you now have two environments running: your dev environment and your production environment. In the graphical interface, when you look at your applications, you have your dev group for web, app, and database, and you have policies defined that allow traffic from web to the private network and from database to the private network. Those are the dev ones; you also have production ones. Then we ask it to scale up — in a minute you'll see a screen where the number of services spins up from three to four. And you can stop it all and bring it back down, and when it stops, it shows that all of those application servers we just showed you are gone. 
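(The dev/production policy objects shown in the demo map onto Contiv's `netctl` CLI. This is a sketch from memory of early Contiv releases; flag names and argument order may differ in your version.)

```shell
# Sketch of Contiv policy objects via the netctl CLI.
# Network names, subnets, and ports are illustrative assumptions.

# A network for the application tier (VXLAN-backed, as in the demo).
netctl network create dev-net --subnet=10.1.1.0/24

# An operational-intent policy: only web traffic may reach the app tier.
netctl policy create app-policy
netctl policy rule-add app-policy 1 \
  --direction=in --protocol=tcp --port=8080 --action=allow

# Attach the policy to an endpoint group; containers that join the
# group inherit it, so it scales up and down with the service.
netctl group create dev-net app-group --policy=app-policy
```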
So that's pretty much the gist of the demo. At the booth downstairs, C11, tonight or more likely tomorrow, we can actually run through it live — we have a wired connection down there, so we won't have to worry about demos in the booth. So definitely stop by; I can show you Mantl, Shipped, and the Contiv stuff in the booth anytime this week, live, not recorded like that. The last thing I want to cover: Duane mentioned Calico a little earlier, and we also have live demos of Calico we can show you downstairs. The idea of Calico is that it's one of the first projects that supported networking for containers. It supports most of the major platforms, and it's this vRouter model, so it's not really an overlay. It connects with libnetwork and CNI, so it's pretty good from an OpenStack standpoint with Neutron. It worked really well with what we were trying to do early on, before we started doing more with FD.io and Contiv. We plan to keep supporting Calico, as well as other container networking standards, because — as Duane mentioned with OCI and the CNCF — our goal is to be open and to connect with multiple different projects, not to lock you into one Cisco way of doing things. With that, I'd like to invite Duane back up for any questions you may have. I'll let Duane do a quick summary. Thank you. Good job. So: OpenStack can make your container deployments easier, and projects like Kolla and Magnum are going to be key parts of that. As Ken mentioned, at the Linux Foundation there are lots of smart people doing lots of good things, so we're very excited about what OCI and the CNCF are going to do. And please go down to the Cisco booth to see the live demos, as well as some other goodies down there. We really appreciate your time and attention. 
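(For reference, the Calico model Ken described — one routed IP per container, no overlay — maps to plain Docker commands roughly like this; the network and container names are illustrative.)

```shell
# Sketch: Calico's libnetwork driver with plain Docker.
# Network and container names are illustrative.

# Create a Docker network backed by Calico's driver and IPAM.
docker network create --driver calico --ipam-driver calico-ipam app-net

# A container on that network gets a real, routable IP announced by
# the per-host vRouter over BGP — no encapsulated overlay address.
docker run --net app-net --name app1 -d nginx
```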
We appreciate you standing — packed in there, standing room only. And I believe we have time for a few questions. Yeah, we'll do the best we can. We've got a mic over there and a mic over here. We'll take a couple of questions, and then we'll just have to continue the conversation. Alicia, wherever you are, is going to start collecting all the raffle cards. So, any questions we can take? One thing: if you could pass all the cards to the center aisle, that'll make them easier to pick up. Thank you. So if you have Qs, they've got As. "Wow, you guys just nailed containers and OpenStack." OK, wait — mic's up. Question: how is Cisco going to monetize this and benefit? Do you want to start that one? So there are two ways. One is an OpenStack support model, as you'd expect. The other is more of a product model — we'll have kind of a Vblock, if you will, for containers. Those are the two models. We're also going to keep everything open source, so if you don't want either of those models, we're not going to force you out of the open source model. And Cisco is a leader in cloud infrastructure — compute, storage, and networking hardware, as well as software — so there are different ecosystems. Question: so these projects, you're going to push upstream in OpenStack? They already are. They're in the Linux Foundation today, Apache-licensed, and Kolla and Magnum are both OpenStack projects today. Question: there's been some discussion lately about deploying OpenStack itself as container services, and some work done on Kolla on Mesos. Do you have any view on Kubernetes as a deployment tool — how ready is Kubernetes to do something like the Kolla deployment workload? Yeah, from my experience, there are two pieces there. 
In the early stages, it's ready for testing, but it's not quite ready for production-grade workloads, mostly because of the monitoring and some of the health components. If I had to guess, I'd say within the next two or three releases of the project you'll see it become fully enterprise-grade. So it's on its way there, for sure, and it's moving much faster than I ever thought it would — it seems like every new release has a ton of new capabilities. Right. Anybody else, real quick? One more, and then we have to make room for the next company coming into the room. Question: how does this interact with ACI at all? So, the Contiv plugin interacts directly with ACI: you create your policy in ACI, and if it sees Contiv there, it deploys directly to Contiv. And with OpenStack Neutron, there are ACI plugins, group-based policy plugins, as well as Nexus VLAN-based plugins that include the 9000. All right, well, thank you, everybody. Thanks, everyone. OK, Duane or Ken, one of you is going to pick the lucky winner. All right. And I've got news for you: your odds are going up as everybody's leaving, because the winner must be present. OK, a couple more — you've got to mix them up again, all these late cards. Hold on, Duane. There we go. All right, that's it. There's always one in the crowd. Go ahead, Ken — I've got to be nice to him, he's my boss. And the winner is from CHT. I am. All right, congratulations — thank you for coming. OK, thank you. Thanks, everybody, for coming to the Cisco room. We'll see you tonight at the booth crawl starting at six o'clock, and tomorrow starting at 10:45 we'll get the demos up and running. 
You can see all the good stuff.