Good afternoon, everyone. Hey, can you hear me okay? Everyone got your coffee? No? Well, it's only the first day; it's going to be a long week, and there's a lot going on. Anyway, welcome, and thanks for joining us today. My name is Karthik Prabhakar. I'm at Red Hat working with OpenStack, and I've been part of the OpenStack community since about the Essex release. So I've seen it grow in leaps and bounds as the community has grown, the projects have grown, and the impact on customers has grown. What we're going to do today is focus on how Red Hat and Cisco are collaborating to help customers be more successful with OpenStack and to help OpenStack meet customers' business objectives. So the focus is on how we're collaborating in different realms to enable customer success. Joining me today, we have a number of folks from the Cisco side: Suhail Said, Hart Hoover, and Vish Chakka, who are product managers for different parts of the Cisco product portfolio. We're going to talk about the different collaborations that Red Hat and Cisco have in different realms. But before we get there, I'll give you a little bit of context as to why we're collaborating the way we are, and why the focus is on solutions. Within the OpenStack community, we very commonly use real-world analogies to explain why things work a certain way. A very common analogy in the open-source community is cars: open source is about driving a car with the hood open, while proprietary software is about driving a car with the hood welded shut. So in the OpenStack realm, an analogy I've used is this: imagine you're in the market to buy a car. What do you do? Do you go out, start looking at auto catalogs, pick the parts you want in your car, and then say, hey, I'm going to assemble this car myself, or have someone assemble it for me?
How many of you go about buying a car that way, picking your favorite catalogs from the major auto manufacturers and selecting parts? Or do you walk into a showroom, or go online, or go to your favorite auto dealer and say, hey, what cars do you have on the lot? I'm looking for this model of vehicle, at this price range, with this kind of feature set. You pick a car and drive it off the lot. Maybe you tweak it a little: change the color, change the tires, change a few things here and there with option packages. I suspect most of you fall into the latter category. If you step back and look at what's been happening in the OpenStack community over the last two or three years, for various reasons, when presented with hundreds or thousands of technology choices, we always tended to want to use every single one of them, or many of them, picking and choosing what we considered best of breed in each category. Now, that works really well if you're a hobbyist looking to understand the different technologies and really appreciate how things work from the inside. But step back: if your real focus is getting OpenStack deployed successfully, quickly, and stably, not just an initial deployment but keeping it deployed and running operationally over many cycles of OpenStack releases, then a model where you pick and choose individual components isn't always the best option. Not for you, the customer, and not for us, the vendors, because it makes it a lot more complex to troubleshoot and to decipher all the issues with every combination of components out there. So the approach that Red Hat and Cisco have taken together starts with looking at all of the different components that we have together, focused on OpenStack.
Cisco has numerous products that bring the value of its technology portfolio into OpenStack: the UCS platform, the Nexus platform, the different networking capabilities, the storage capabilities, the compute, the orchestration. Together, Red Hat and Cisco have collaborated on making sure that each of those products is enabled for OpenStack and has a lifecycle. We collaborate very closely on making sure that every one of these products, which typically has an independent lifecycle, is certified for OpenStack. And we combine in other complementary technologies; in the case of Red Hat, we have Red Hat's storage, our management and orchestration portfolio, and a number of other surrounding components, and we all collaborate on certifying them and making sure they work well together. But when it comes down to how we package them together and make it easier for a customer to succeed, we could just present this entire catalog to customers and say, hey, go off and pick your components, pick any random mix of third-party components, put it together, and we'll support you. Generally, if it's a certified component, the answer is yes, we will support you. But it's a lot more challenging to get that deployed, supported, and upgraded. Keep in mind that each of these individual products has an independent lifecycle and an independent product team, and sometimes these product teams struggle, especially across the broader ecosystem of vendors out there, to keep up with the rapid pace of change in OpenStack. So the approach we've taken instead is to focus on integrated solutions. From that perspective, we've looked at a number of the common customer deployment use cases.
These start with use cases such as building private cloud infrastructure, whether customer managed or a Cisco-managed on-premise solution. We've looked at telco and NFV infrastructure, where, as you heard this morning, there's been a marked shift over the last couple of years toward more and more telecom operators moving to OpenStack as the platform for NFV deployments. And even for public cloud deployments, we look at the common customer use cases, and Cisco and Red Hat have collaborated on packaging together a number of solutions for each of those use cases. The benefit to customers, first of all, is that when we cherry-pick the individual products we think are relevant for a given use case or a given workload, we do a lot of work on integration testing: making sure the products work well together, that we have common recipes for high availability, and that for the common customer deployment architectures, behavior when individual components break has been tested and validated. Secondly, we make sure the lifecycle is taken care of. These disparate products have independent lifecycles, so we make sure they converge before we recommend the solutions be deployed at the customer site, so customers don't have to go through the pain of dealing with multiple independent products. Upgrades are a huge issue: over the last two years the OpenStack community has made a lot of progress on in-place upgrades, but that still continues to be a major pain point within the community, and it's been another area of focus for us.
And finally, we're building an ecosystem: not just individuals within our respective companies, but an ecosystem of partners and an ecosystem within the community, to build knowledge and awareness of how you deploy these known solutions, these known validated configurations, and to have these pods of validated reference architectures available across the community, so that when customers need to deploy them, expertise is readily available. To draw the automobile analogy again: taking your own custom-built car to a random mechanic is very different from taking in a Ford or a BMW or a Mercedes; there's a lot more expertise around known quantities. That's our approach to developing solutions: it makes them easier for customers to deploy and for all of us to support. And the benefit from a customer perspective, besides the ones I've already mentioned, is that your time to business value is significantly improved. You're not spending weeks or months, or in some cases years, as I've experienced with some customers, trying to get some custom, unique configuration to work. This gets deployed very quickly, in a matter of days, and days even for fairly large deployments. It reduces the initial time to deployment and the time to business value, so you get back to showing value to your business fairly rapidly. That's a quick high-level summary, and now I'm going to pass it over to Vish to cover the first two pillars of this, which are UCS with OpenStack and FlexPod. Can you hear me? Thanks, Karthik. Let's talk about the solutions we have built with Red Hat for enterprise private cloud adoption. As Karthik mentioned, customers have tried OpenStack on their own with mixed results, and it's often a tough task for our enterprise customers to stand up their own private cloud.
Recognizing those factors that our customers have shared with us, we've built a few solutions that I'll talk about in the next few slides. The main goal of working with our partners to bring these solutions is to get our customers a faster time to value with their private clouds based on OpenStack. The joint solution is about providing a validated design, a validated architecture that has been tested and configured with best practices, which customers can adopt for the best business outcome in their environment. So the main benefits we bring to the table are faster and easier deployment of their private cloud and a reduced amount of risk compared with deploying a private cloud on their own. At the end of the day, we want to increase the return on their investment while reducing their time to value. Through the great partnership we have with Red Hat and others, we provide a fully validated solution that removes the guesswork and all the legwork they would have to do if they embarked on the journey on their own. There are no configuration questions, there are no sizing questions; we provide all this information for customers to leverage. At the end of the day, enterprise customers want to use something that is validated; they don't want to spend their valuable resources on something that can become a science project for a long period of time. It's all about getting something that is enterprise ready, running on a very stable and reliable platform. From an OpenStack private cloud perspective, the main use case, as you can imagine, is infrastructure as a service. We have a couple of... okay, it looks like the graphics aren't good there. Let me try again. Ah, okay, I don't know what's happening there. We have a couple of validated solutions out there for customers to adopt. At the end of the day, it's all about choice.
Our main goal is to provide options for our customers, be it options from an OpenStack version perspective or in the kind of storage and capabilities they want to use. Based on that, we have two things that I'll talk about in the next couple of slides. The focus here is to provide a validated infrastructure-as-a-service offering for standing up private clouds in your data centers. The way we do this is by following the rigorous approach that many of our customers are already familiar with: the Cisco Validated Design model. Here, four key elements come into play when building these solutions. It's driven by our customers, by the common use cases, common pain points, and common issues they're addressing; those things make up a big part of this whole process. That's the customer-selected engagements: we work with them, collect the requirements, and do product development by working with customers and partners to get the best out of the system. We provide thought leadership by working with the best of breed in terms of technologies and skill sets, and we deliver a tested and validated design with detailed reference architectures that capture best practices in configuration, sizing, and the use cases you can run on it. This provides a reliable model for our customers to leverage as they embark on the journey. So one solution we have, on Kilo, is Cisco UCS with Red Hat OpenStack Platform running Ceph storage on the backend. This is a production-ready cloud deployment solution. It's a highly available architecture and a scalable solution that customers can leverage whether they're starting fresh or deploying a large-scale setup. It's a completely integrated, co-engineered solution wherein we provide seamless end-to-end configuration and management capabilities for our customers.
One of the things Karthik mentioned is support. One of the big things we bring to the table for our customers is Cisco's solution support capability. We act as a single point of support, be it the physical layer, the virtual layer, or the applications running on top. So customers can call Cisco and be assured that they'll get the resolution they need, which reduces the downtime, if you will, if something were to happen. As for details on the individual components, I'll skip that for now. The second option we have is... okay, once again, sorry about the formatting there. The other option is for customers who are using a particular kind of storage, for example NetApp. FlexPod, as many of you might know, is a proven architecture from a Cisco integrated infrastructure perspective. So we have a solution out there with FlexPod and the Juno OpenStack version, and it's being refreshed for newer capabilities as well. Here it's Cisco UCS integrated infrastructure with servers and switching, NetApp on the storage side, and the Red Hat software components. Same capabilities: an enterprise, production-ready private cloud with the same scalability, availability, and support. So those are the two main offerings, but we have multiple options out there for our customers, depending on your preferences for versions or vendors. Quickly, on why Cisco for OpenStack-based clouds: one of the things we're known for is a reliable, leading integrated infrastructure platform. We're combining the capabilities of a proven architecture and a proven platform with the leading-edge capabilities offered by OpenStack. By combining the joint strengths of Cisco and Red Hat, we're able to provide a reliable solution for you.
We have application-based policy infrastructure. With OpenStack, it's all about automation, open APIs, and a policy-driven architecture, and combining Cisco and Red Hat provides that capability for you. I'm not going to go into the other capabilities, in terms of security and reliability, that Cisco is known for; we bring those to bear by leveraging the plugins we have co-engineered with Red Hat as part of these solutions. From a vertical perspective, we've talked about infrastructure as a service, but we have a few other solutions, either in the works or already available now. The next one, which came up during the keynote today, is big data and analytics. We're offering Hadoop as a service built on Red Hat OpenStack. Basically, by virtualizing Hadoop, we're able to support multi-tenancy. Some of the metrics you see there, I'm not going to go through line by line. Over a period of time, we have proven that Cisco UCS integrated infrastructure provides inherent advantages in terms of savings, performance, and economics when running big data applications on top. By combining those capabilities in the OpenStack context, we're helping our customers be more successful when adopting newer technologies. The other thing happening in the industry is software-defined networking. Cisco ACI is a leading software-defined networking technology, and we're bringing those capabilities together, combining ACI, OpenStack, and UCS in the same architecture, the same solution, that customers can leverage. Many of the customers we talk to are interested in SDN and interested in OpenStack, and we want to make sure we enable those customers to adopt these new capabilities and technologies without having to worry about going through all the pain in terms of design, architecture, and all that.
I'll skip this slide; I won't go into too much detail here. The point is that it's not just that we're trying to position ACI for our customers; it's that developers like the capabilities ACI brings to the table in terms of programmability, automation, and policy-driven architecture. So ACI, OpenStack, and UCS all play well together in an OpenStack context. Let me wrap up from a solution portfolio perspective: what we have today and what we have in the pipeline. Earlier I talked about infrastructure as a service. That's a big thing for our customers; even though OpenStack has been around for a long time now, many enterprise customers especially are in the process of evaluating and adopting OpenStack now, and we want to provide these validated solutions for them to adopt comfortably. The new thing on everybody's mind, which we talked about briefly during various breakout sessions today as well as during the keynote, is Bare Metal as a Service. It's a new capability many of our customers are looking for, and leveraging Ironic, we'll be able to provide it. This solution is being worked on with Red Hat, and our customers will be able to leverage it not just for bare metal as such, but also for specific applications that require bare-metal capabilities as part of an overall cloud. And then there's the OpenStack and SDN solution, the journey many of our customers are taking: with UCS and the ACI solution I talked about, we'll be able to provide that capability with OpenStack. With that, I will... So all the stuff I've talked about covers self-managed solutions that customers can stand up themselves. But many of our customers aren't necessarily interested in standing it up on their own; they'd like Cisco to help in that area. That's where we have a few solutions that Hart is going to talk about. Thank you very much. So hello, I'm Hart Hoover.
I'm with Cisco Metapod. Vish was just talking about customer-managed OpenStack solutions; I'm here representing the Cisco-managed OpenStack solution, called Metapod. I'll challenge you: when we dismiss here in a little while, go downstairs and try to find a company that is not hiring people with OpenStack expertise. They are very hard to find. You people out there, you're hard to find, believe me. So what companies are saying is: I know I need to get to the cloud, I know I want to use OpenStack, but I either don't want to pay a ton of money for OpenStack talent to run this for me, or I just don't have the expertise in house and it's not going to happen. So I want Cisco to manage that for me. When companies adopt Metapod, their users are delighted because they get what they want: a public cloud experience with private cloud delivery. They get self-service, and they get to integrate with their existing tools: DevOps automation tools, infrastructure automation tools, anything and everything that can integrate with OpenStack APIs can integrate with Cisco Metapod. We have open, well-documented APIs; these are OpenStack APIs, and there's really nothing super secret about them. We give users instant provisioning of compute, just like a public cloud: they log into a dashboard, they use APIs, and they instantly provision compute, storage, and networking. And finally, because they're using their own servers, they get consistent, reliable performance. They don't have to worry about noisy neighbors outside their company. If they have another team inside their company that's a noisy neighbor, that may be a problem, but as far as consistent, reliable performance goes, Metapod can give that to them. As I said, developer tools integrate with OpenStack, and therefore they integrate with Cisco Metapod. Like crazy. Here's a big slide with a bunch of logos on it. It all integrates with Metapod.
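Because Metapod speaks unmodified OpenStack APIs, that "instant provisioning" is just the standard Nova create-server call. As a rough illustration (the server name and placeholder IDs below are hypothetical, and nothing here is Metapod-specific), this is the shape of the JSON body that Nova's create-server endpoint expects:

```python
import json

def nova_boot_body(name, image_ref, flavor_ref, network_id=None):
    """Build the JSON body for Nova's create-server API call.

    Any OpenStack-compatible compute endpoint accepts this same structure.
    """
    server = {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}
    if network_id is not None:
        server["networks"] = [{"uuid": network_id}]
    return json.dumps({"server": server})

# Hypothetical IDs; in practice these come from the image, flavor,
# and network APIs (or the dashboard).
body = nova_boot_body("demo-vm", "image-uuid", "flavor-uuid", "net-uuid")
print(body)
```

A client would POST this to the compute endpoint with an auth token; the dashboards and DevOps tools on that logo slide are doing exactly this under the hood.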
Everybody get their picture? iPad guy in the back? Awesome. But not only does it provide what users want, it provides what administrators want. If you were here for the keynotes earlier this morning, you saw Boris and his bear trot out and talk about the VMware admins out there in cloud world who want to deploy OpenStack. Administrators get what they want because they get to manage and govern users, groups, and projects. They get full control of quotas, meaning how much each team gets within the Metapod space, which VM images are available, and which flavors are available, flavors being combinations of RAM, disk, and CPU. They get access to security policies and authentication: they control firewall rules, keys, et cetera. On top of that, they get high availability, they get monitoring and management from Cisco, and they get historical and real-time reporting. And they don't have to run OpenStack, because we're doing that. As far as what Cisco does for you when we manage your OpenStack cloud: we help you design and architect the solution, we deliver it to you as a service, we help you install it in your data center, and then we deploy it for you. We then monitor it for you and make sure all the OpenStack services are running; if they're not, we log in and fix them. If there's a problem, we'll solve it. We coordinate maintenance with your team, so if there's an upgrade or maintenance that needs to occur, we reach out to your team via email or support ticket and say, hey, we want to run an upgrade on your cloud; it will impact these services; is that cool? Here's when we want to do it. If you say, yeah, okay, that's cool, then we do it; if not, we schedule it around your schedule. And finally, we provide capacity-planning reports, so if your adoption rate is, oh my gosh, higher than expected, which it probably will be because cloud is amazing, you get a report saying, hey, you're almost out of space.
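The quota governance described above is essentially bookkeeping: each project has per-resource limits, and a request is admitted only if current usage plus the request stays under every limit. Here's a minimal sketch of that admission check; the numbers are made up for illustration, since real limits are whatever the administrator sets:

```python
def within_quota(quota, usage, request):
    """True if the request fits under every per-resource limit.

    Resources missing from `quota` are treated as unlimited.
    """
    return all(usage.get(r, 0) + n <= quota.get(r, float("inf"))
               for r, n in request.items())

# Hypothetical project quota and current usage:
quota = {"instances": 10, "cores": 20, "ram_mb": 51200}
usage = {"instances": 6, "cores": 16, "ram_mb": 32768}

# A 2-vCPU, 4 GB instance still fits under every limit...
print(within_quota(quota, usage, {"instances": 1, "cores": 2, "ram_mb": 4096}))
# ...but a 3-instance, 6-core batch would exceed the 20-core limit.
print(within_quota(quota, usage, {"instances": 3, "cores": 6, "ram_mb": 12288}))
```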
We should add more capacity, just to let you know. Most importantly, though, here's the big takeaway: Cisco Metapod delivers a public cloud experience to you, as a service. Thank you for your time. Now let's talk about network function virtualization; here's Sunil. Thank you. Hi, guys. Welcome to the last session, literally, of the first day. I know it's been a long day and you've heard a lot of talking points. Right now, I want to talk to you about a really exciting field that's developing and growing really rapidly, and that's the space of NFV, network function virtualization. There's a lot of action; it's just amazing, the velocity I've seen in the last nine to eleven months, the number of RFPs I've personally been responding to, and the number of service providers who have been adopting NFV and adopting OpenStack as the vehicle for launching their products. It's amazing. Today, in the next 10 to 15 minutes, I want to talk to you about NFV and NFVI: what we're doing at Cisco to address some of the problems this growing space is encountering, how we're fixing them together with Red Hat, and what pain points we're addressing. Really, guys, in NFV, as you virtualize applications and functions, performance really matters, and with Red Hat and with Intel, we can demonstrate to you what that means. When you look at NFV, it's about taking monolithic, appliance-based solutions and deploying them in a virtualized environment. What you see on the left is the monolithic environment; on the right is the new virtualized environment. Now, where I see customers today, they've just embarked on this journey, and 90% of them are essentially just virtualizing their appliance. It's still monolithic; it's still running as a VNF on top of infrastructure, and that's where most of our customers are.
But where we see them going, and where we're leading them, is that you also have to think about changing your applications: making them cloudy, highly available, modular, and able to stay resilient, independent of the underlying infrastructure as much as possible. This is a big challenge. It's a challenge that was encountered in the consumer space, and they're making the journey too; enterprises and service providers are just starting on it. It's scary. They don't have the expertise, as my colleagues and Karthik talked about; it's not there yet, and they're growing it. Depending on where they are, they're trying to get functions out into a marketplace in a way that extends their infrastructure investment and maintains their TCO, so it's about getting to market first, and then it's about, okay, let's do the esoteric things, let's make it even better, let's do best of breed. Many of my SPs who have embarked on the best-of-breed path, picking up different components like Karthik talked about, end up building that car that never gets built, because it's very difficult to get all of these components to work together. The upgrade and update process that Vish talked about is huge. If you want to take advantage of the innovation OpenStack provides, you have to be able to move as rapidly as the community is moving, because they're fixing bugs, a ton of bugs; my team fixed about 300 bugs in the Mitaka cycle. We're focusing on a pain point. So if you want to take advantage of that innovation, you have to be prepared to be agile. And this is another problem I see my SPs struggling with: they're still on a waterfall model, and they're not very comfortable with just-in-time, building it just in time. So we're helping them down that journey. So what really is NFVI? It's a curated set of hardware and software that together forms NFVI, the network functions virtualization infrastructure.
This is the base on which our SPs, and even enterprises, are looking to deploy their functions and the services running on top. What have our customers been telling us? These are the top five, after talking to about 20 or 30 of them. It's about carrier-class performance, and I'm not talking about five nines here, because five nines, frankly, at a system level, is impossible to get. What we're talking about is: is your solution resilient? When the cloud goes down, when portions of the cloud go down, can it self-heal? Even before it heals, can you as an operator observe the situation developing, see that your cloud is reaching its limits and is about to go down, and take remedial action? That's what they mean by carrier-class performance. They want to be use-case agnostic, which means: hey, I invested all this money in this infrastructure, millions and millions of dollars; I want to be able to take this infrastructure and run different workloads on it, and we'll show you what types of workloads we're building, developing, and deploying on our joint solution with Red Hat. It needs to be open standards, like I said: they want to take advantage of the innovation and agility of this huge, vibrant open-source community that all of you are part of, and to adapt to changes in technology as and when they happen. They want to use all of this with unified management: think of all those moving parts in the car, but controlled through a single pane of glass. And most importantly, when the cloud goes down at 2 a.m., they want to be able to pick up the phone and talk to one person, one company, that can bring them back from the downtime. I'm sharing with you some of the early adopters; all these names are public. What I want to point out is that even though the journey has just started, it's moving very rapidly.
A lot of SPs, a lot of big SPs, are deploying it. You heard AT&T talk about it in their keynote earlier in the summit. Seven percent of the infrastructure is going to touch OpenStack, so it's real; it's getting there. What SPs and enterprise customers are looking at are these four areas: capital efficiency, operational savings, service agility, and innovation and differentiation. That last part is what a lot of people forget, but it's really what's motivating our end customers: differentiating their services. It's not about, hey, let's just get everything together; that's the desire behind best of breed, but it's very difficult to achieve. They really want to differentiate and get to market fast, and getting to market on a very stable, repeatable, consistent infrastructure is itself a differentiator, which is very important. So how are they approaching this? In multiple ways. Some of them are use-case led, some are orchestration led, and a lot of them are actually going bottom up, from a hardware perspective. And we have different players here: the IT guys are getting involved now alongside the traditional networking guys. They come from different backgrounds, and they're all trying to approach the solution based on their historical expertise. In such an environment, having someone like Cisco and Red Hat to calm things down, provide the solution in terms they're used to, and help them on the journey is very important. And at Cisco, we pride ourselves on participating in a lot of these open-source projects, because we feel we have the right expertise on the networking side to help guide and shape the direction this growing industry needs to go in. So we participate heavily in OpenStack, OPNFV, and OpenDaylight.
And we partner very closely with Intel and Red Hat, ensuring that the innovation Intel is building and Red Hat is exposing actually gets used and consumed through the software platform we're building with both of them. Earlier we talked about NFVI: it's the physical hardware at the bottom with a virtualized software platform running on top of it. That is NFVI. Above that you have the different use cases; above that, the applications; and then the enablement: how do I automate it, how do I manage it, how do I orchestrate it, and what analytics can I run? How do I ensure that the solution I've built is actually performing well, that I can benchmark it, and that I can prove I'm delivering value and an SLA to the applications running on top? Clicking down one level: we run our set of tools and software packages around and on top of RHEL OSP and RHEL, and as RHEL OSP moves through its versions, we'll be upgrading and providing in-place updates going forward. What this means is that as you build your cloud, you'll be able to move and take advantage of innovation pretty seamlessly; you don't have to do a rip and replace. And, best of all, you'll be working on 100% upstream code. We're not modifying anything in the underlying Nova and Neutron areas; we're running on stock OpenStack. We're just providing our customers a set of tools and capabilities to deploy, monitor, and manage their cloud in a way they've never been able to before. And all of this is just getting it up and running. Making it run extremely well is another area we're focused on; making it run extremely securely is another. And we're working very closely with Red Hat and Intel to ensure that all the latest technologies being exposed by these companies are consumable by our end customers as we productize them.
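That deploy-monitor-manage loop can be pictured as a recurring health check over the standard OpenStack services. This is only an illustrative sketch, not Cisco's actual tooling: the service names are the usual upstream daemons, and the status values are hypothetical.

```python
def unhealthy(services):
    """Given a {service_name: state} map, return the services that are not
    reporting 'up' -- the candidates for automated or manual remediation."""
    return sorted(name for name, state in services.items() if state != "up")

# Hypothetical snapshot of a small control plane:
status = {
    "nova-api": "up",
    "nova-compute": "up",
    "neutron-server": "down",
    "cinder-volume": "up",
}
# A monitoring loop would alert on, or restart, whatever this returns.
print(unhealthy(status))
```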
So, like I said, simple access to support. Cisco will front-end this, and you'll get excellent TAC support. Behind the scenes, we work with Red Hat and Intel to ensure that any bugs found, down to the kernel and OS level, are seamlessly fixed. The other thing we want to do here is bring innovation into the solution much faster than waiting for upstream to release it. This way, we allow our customers to try it out in a real-world situation, get the telemetry from that experience, and feed it back to the community. So it's a virtuous cycle: when we upstream a fix or a blueprint, we're backed by real-world data, by real deployments. That virtuous cycle is what we're going to accomplish with this partnership, and make OpenStack productizable as well as usable. Use cases: like I said before, customers want to run multiple use cases. Enterprise is one; virtual managed services is an example of enterprise-based applications. Mobility is another: VPC, virtual packet core, and Gi-LAN, being able to deploy those. And finally, media: media workloads, being able to transcode either locally or in a cloud. Let me talk to you about performance. On the right, we've published benchmarks with EANTC, a third-party benchmarking company, earlier this year. You can see stock OVS performance: it's getting better, but it still has a ways to go. And what our SP customers have been telling us is: please show me a way to get line-rate performance, or close to line rate. So with our Vector Packet Processing, VPP, we're able to generate tremendous packet-forwarding performance. On the left-hand side, you can see the increase in performance across IPv4, across VM-to-VM. We get 10 gigabits per second with stock OVS, and 77% of line rate at 20K, all of this with zero packet drop.
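To put "line rate" in context: the theoretical maximum packet rate of an Ethernet link follows from the on-wire framing overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap, so 20 bytes beyond the frame itself). A minimal sketch of that arithmetic:

```python
def line_rate_pps(link_gbps, frame_bytes):
    """Theoretical maximum packets per second on an Ethernet link.

    Each frame occupies frame_bytes plus 20 bytes of on-wire overhead:
    7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
    """
    wire_bits_per_frame = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits_per_frame

# 64-byte frames on 10GbE: the classic 14.88 Mpps line-rate figure
print(round(line_rate_pps(10, 64) / 1e6, 2))  # → 14.88
```

This is why small-packet benchmarks with zero packet drop are the hard case the speakers emphasize: at 64-byte frames, forwarding 10 Gbps means handling nearly 15 million packets per second, whereas large frames need an order of magnitude fewer.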
In fact, at MWC earlier this year, we demonstrated, with three VMs, 40 gigs of bi-directional traffic at 127-byte packets with zero packet drop. And on bare metal, we were able to demonstrate 136 gigs of packet-forwarding performance, though that was at large packet sizes. So at both extremes, we're able to demonstrate achievable performance on OpenStack for NFVI. To wrap it all up: why Cisco NFVI? Because it's carrier-grade. It's open and elastic. It embraces all aspects of cloud networking. It's multi-domain, and we have rich partnerships with Intel and Red Hat, as an example. So with that, I'm going to stop. I want to open up the floor for questions; please ask us all your questions. Yeah, and I realize it's 20 minutes before the drinks open as well, so feel free to come up to the mics and ask questions until then. We'll be hanging out here, and we'll obviously all be accessible at the Red Hat booth or the Cisco booth as well. Go ahead. I have a question that kind of goes toward the NFVI, but I think everyone might be able to help answer it. It has to do with innovation being used as competitive differentiation. How do you position that with customers when they're saying the whole reason they want to go to OpenStack is no vendor lock-in? It seems like these are two opposing ideas. I'll go first, if you don't mind. Yeah, that's an excellent point. The way I look at competitive advantage: if you get to market first with your solution, that itself is a competitive advantage. And by taking advantage of the latest innovation coming in, being an early adopter, you're taking some risk, but you can differentiate there too. The flip side is you want to make sure that the chassis of the car you're driving is stable, because you want to focus on the engine. You want that engine to be the best damn engine possible.
But if the chassis does not work well and isn't able to deliver the drive to your tires, the engine can run as fast as it wants; it's not going to propel you fast enough. So we're ensuring the chassis is very stable and reliable, and you just get the engine running fast. That's what we're building. I just had one more point there, which is that, fundamentally, a principle Red Hat bases our company on is that we believe in the open-source community. We believe in making sure that everything we do is open source, and even beyond that, that the technologies we put into our products are available in the upstream community first. So all of the collaboration that Red Hat and Cisco have going on is based on having those technologies available in the upstream community. The intent is to make sure that what you're getting is a completely open solution. And if you have a functioning solution that works well and at some point you decide a component doesn't work well, can you replace components? Absolutely. We'll help you if you want to work with others; it's all based on upstream OpenStack components. Yeah, and the tools I spoke about for measuring performance and checking the cloud are all open-source projects. In fact, one of them is already in OpenStack, and at Cisco we're big proponents of making our innovations open source so customers can take advantage of them. Thank you. Okay. Can you talk about the strategic differentiation between Cisco's NFVI and NSO? Okay. So NSO is the orchestrator running on top of our NFVI. And it's a great orchestrator; it came from Tail-f. But we're not tied to that orchestrator, so customers do have the choice of their own flavor of orchestration. With NSO, you get unparalleled performance when it comes to lifecycle management of VNFs, onboarding them, and being able to manage and maintain them; you get a tremendous performance advantage. But it's up to customers.
Everything about NFVI is modular. You can choose your SDN controller, you can choose your VNFM, and you can choose your MANO as well. But we do provide our own solutions end-to-end in case you want that and don't want to do too much do-it-yourself. It's there. Any other concluding thoughts? Any other questions? Some of the takeaways I'd like to leave you with, to think about through the rest of the week: first of all, the innovation you can get with OpenStack and all of the component projects is incredible. What you can do with the technology is amazing. But at the same time, it's very easy to get distracted by all the shiny new objects out there, right? The collaboration between Red Hat and Cisco, and with our partners like Intel and NetApp and others in the ecosystem focused on these solutions, is around the real world: what do customers need? The basic building blocks to make sure they can walk before they run. So if you are at the stage of planning your OpenStack deployments, come talk to us. We have a bunch of best practices you can learn from us, and we can help seed some of these building blocks in a fashion that makes sense for your specific needs. And we are absolutely available and accessible; come talk to us at our booths. The slides will be available to you, and we have URLs where you can download some of the reference architectures. Any other parting thoughts, guys? Right. We're open to partnering with you, discussing with you, and providing information on the whole solution, as well as pilot efforts if you're about to embark on the journey, and we're open to any questions. Yeah. Cisco's booth is C11. There we go. Come see us. Thank you. Thank you.