Good morning, everybody. Everybody here good? Great. Welcome to session number two in Cisco's sponsor track sessions. My name is Gary. I'm going to be your host for the day. After this session we've got three more sessions running in the same room for the rest of the day, so come back and join us. As each of you came in and had your badge scanned, I think you were probably also given a little card. We are doing a drawing at the end of every session. We're giving away a very, very nice Philips Bluetooth speaker. So fill out the card; we'll collect them at the end of the presentation for our drawing. I have pens if you need one. You're either asking for your check, sir, or you need a pen to fill out the card. OK. So without any further ado, I'm going to jump right to our presenter, Norendra, one of our product managers at Cisco. We're going to be talking about accelerating NFV deployments on OpenStack. Norendra? Awesome. Thank you, Gary. Good morning, folks. Hope you're all doing well. Thank you for making it here today. OpenStack has been at the crux of what Cisco has been doing with NFV. Today, we're going to talk about how we are enabling NFV deployments with OpenStack, what we are doing to accelerate those deployments, and, at the end, a key case study about a true customer use case: where this is being deployed, how it is being used, what all the components of the solution are, and a lot more detail on that. So welcome to the presentation. First of all, in the service provider and telco arena, a lot of disruptions are happening, and some of them are continual. There are four key themes that we identify. The first is hyperconnectivity: all of us have iPhones, iPads, tablets, smartphones, so many things getting connected to the network. So from some 15 billion connected devices today to 50 billion in the next few years. And among this, about 40% of the connections are just going to be M2M traffic.
And that's huge: a lot of traffic through the networks. That puts a huge load, as well as a huge change, on a service provider's network. And speed of innovation: OpenStack itself is a solid, concrete example of that. Take the iPhone: the first version was launched in 2007, and we are on the seventh-plus version of the phone today. But the key part is that the iPhone and iTunes platform also enabled an economy of $100 billion just for app development. That's huge. And similarly with OpenStack, in terms of how we have innovated and delivered that innovation at speed in the form of a cloud operating system and more. OK, so those are all the good things: things are growing and need to be fast. At the same time, the costs and expenses need to go down; a lower cost of innovation is being asked for. Previously, you would hunt for millions of dollars of investment in a company or a startup just to get an idea going, to even test whether it's a good one or not. Today, you could go to GoFundMe or other portals and get your product going for less than $5,000 to $10,000. That barrier has come down rapidly over time. And at the same time, there's still a lot more money being invested in venture funding, so the volume of innovation also increases. So lots of things are coming at us and at the service providers. And what have we heard about how service providers and telcos are going to respond to those disruptions? Some big names here, and the essence is: I want a virtualized infrastructure platform on which I can innovate, deploy things faster, and change things faster. Things have to be modular enough, and generic enough, that I don't have to go through a 12 to 18 month cycle for every little service that I need to enable in my infrastructure. That's the key message that we are seeing. At the same time: be simple, be modular, and so on.
So how does that translate into what actually needs to happen in the new kind of infrastructure one needs to deliver for service providers and telcos? One of the key transformations is NFV, network functions virtualization: anything and everything that's running in big iron boxes today can be virtualized and simplified. Two aspects are already in motion, so to say: SDN and NFV. However, another big aspect of SP transformation that we have seen through the OpenStack summits as well as through customer interactions is the open source movement: I want to use open source assets that are available to bring SDN and NFV into my infrastructure. Fine. So what does that do for a service provider? It reduces the number of appliances, the silos, the dedicated big iron systems that customers need to buy and rack up on shelves. It automates a lot of service creation so that things are easier: with the click of a button, I need to be able to enable a service for a customer of mine, or a customer needs to be able to go to a portal and enable a service overnight, rather than over a long window. And also, highly available. This is the crux of everything, right? SPs are known very well for how highly available their infrastructure and systems need to be, so that downtimes are minimized or driven to zero and the service is always up. OK, so that's what we need. But there is also the question of how things are bought, procured, and evaluated when service providers need to move towards SDN and NFV. So how's that? First of all, there's an infrastructure that you need to have. Ultimately, this is what customers need to get to: a physical infrastructure, a virtualized infrastructure on top to provide that flexibility and modularity, and then all the VNFs and management systems, along with management and operations.
There are three main dynamics going on here, interestingly. One is use case-led. I want to be able to deploy a new mobile service in a virtualized environment, for example, or enable a virtual managed service on top of an infrastructure. That use case-driven requirement and conversation drives certain decisions, and those decisions are owned by specific groups within a company. In the use case-led case, it's the business unit or business vertical that is more influential in how the infrastructure and the application, the VNF, needs to be. The second is orchestration-led: a common management and operations solution for different use cases. This mainly revolves around the VNF manager and the NFV orchestrator, and the buying center here would be the NMS and OSS teams, very different from the use case-led one. Another important one, which is gaining a lot of popularity as well as mind share, is infrastructure-led. I want an infrastructure that is very generic and modular, so that I can onboard any VNFs, any workloads, at any time, because my needs may change rapidly and dynamically, and I want to be able to adapt to that. This is a bottom-up approach, and it is mainly driven by the network and DC transformation teams, or the infrastructure teams transforming towards NFV. It's basically a decision about what hardware to buy, what virtualization layer or infrastructure manager to buy, and what SDN controller can cater to the many different needs of my VNFs. We're going to mainly talk about the infrastructure-led one, the third one we discussed on the previous slide.
If we jump into the infrastructure requirements, the NFVI (Network Functions Virtualization Infrastructure) requirements, here are the six key elements that come out of every conversation we have had. Yes, I like the virtualization part, the modularity, the dynamic capabilities. At the same time, I would like carrier-class performance. What does that mean? Whatever performance, capabilities, and behavior you have been delivering to me in the physical world, with dedicated hardware, custom-made ASICs, et cetera, I'd like to see similar performance, capabilities, and behavior on a virtualized infrastructure. What else? Use case agnostic. We've talked about it through this conversation: I don't want to marry the system to one specific use case or one specific solution, because tomorrow my customer demands may change, and I don't want to wipe out this entire infrastructure and bring in something new. I want to be able to reuse that infrastructure, and I want to be able to expand it. At the same time, open standards have been at the crux of the conversations. Things have to be open, interoperable, and modular, so that I can expand on demand, as well as elastic, so that seasonal variations are also taken care of. You give me this fantastic system to work with; now I also want it to be easy to manage, easy to inject updates, upgrades, and changes into, hardware or software, whatever it may be. And above all else, one other key point: once you bring this humongous system together, you actually want to deal with a single owner, a single vendor, when you deploy this infrastructure out there. Why is that? Because support becomes easier for the customer.
The customer is at ease knowing there is a single number, a single team, to talk to for any issue at any level of the VNF deployment: be it a physical hardware issue, an infrastructure manager issue, a VNF issue, a MANO issue, a management issue, whatever it is, I need to be able to deal with a single entity. That essentially streamlines the operations as well as the management of the system. And last but not least, obviously, make it secure. It all looks great when things are put together, but how open is the system, such that people can hack into it at will? Or is there a way to secure these things to the extent the industry has achieved previously with custom-made infrastructure? So in terms of Cisco NFVI, those are the requirements, and here is how we have translated them into an infrastructure that Cisco can actually deliver and that caters to all the requirements we have discussed so far. On the left here is the ETSI framework for NFV, and we have mapped out how Cisco caters to it. The key point is that the entire ETSI framework is divided into two halves: the bottom half is the infrastructure part of the framework, and the top half is the VNF and MANO part. When we discuss Cisco NFVI, it is always about the bottom half of the ETSI framework. We'll go into the details of what is what, but essentially this is how it maps into an architecture. So here's the Cisco NFVI architecture, again divided into a bottom half and a top half. Let's start at the bottom, at the infrastructure layer. It consists of physical devices for compute, network, and storage. It's based on the Red Hat Enterprise Linux operating system with KVM, et cetera, plus a virtualized infrastructure manager.
And Cisco has chosen the VIM to be OpenStack; the Cisco virtualized infrastructure manager happens to be based on the Red Hat OpenStack Platform. Along with that, to satisfy the key management requirements, we have unified management as well as assurance components within the same infrastructure. That, in brief, is what Cisco NFVI comprises. We're going to dive a little deeper into some of these elements. To complete the architecture and the mapping to the ETSI framework: on the VNF manager side, we have Cisco Elastic Services Controller, plus any third-party interoperable VNF manager that can be integrated. On the network VIM side, essentially to satisfy the SDN requirements, we have Cisco VTS (Virtual Topology System), Cisco ACI, as well as any third-party integrable systems. At the NFVO orchestration and resource management layer, there is NSO, Network Services Orchestrator, powered by Tail-f. And of course, this being an open system, any third-party system could be integrated into the same infrastructure. In terms of the VNFs, there are lots of them: Cloud Services Router, virtual ASA, vEPC, vIMS, you name it, you have it, from routing to firewall to video optimization, et cetera, and again Cisco plus third party. Just double-clicking a little on the infrastructure side to close on this: when we say physical compute, storage, and networking, the compute is based on Cisco UCS, Unified Computing System. The network is based on Cisco Nexus 9000 switches. Storage runs on Cisco UCS servers. In terms of the software, the operating system is Red Hat Enterprise Linux. Storage is based on Red Hat Ceph, and, like I said, the Cisco VIM is based on Red Hat OSP. One of the key aspects we saw earlier in the requirements was: I need things to be interoperable, I need things to be open, and I need a single point of ownership.
We have tried to solve this problem by building a world-class partnership between three companies: Cisco, Red Hat, and Intel. We built this partnership to deliver Cisco NFVI and cater to the needs of our customers, keeping things open while at the same time being innovative at every level of the system. Here again, the bottom half of the slide is the Cisco NFVI infrastructure in terms of the components we discussed. How does it map to the partners? Cisco UCS compute as well as the storage is based on Intel CPUs. The software is based on Red Hat assets. And there are Cisco assets in there, in terms of Cisco UCS as well as the SDN controller, et cetera. There are a lot of joint efforts going on in parallel, not just one, between all three of us: integrated platform design, verification, and validation between Cisco, Intel, and Red Hat. Each of these vendors brings a component to the table and makes it better for the service providers and telcos. There's a joint engineering effort, be it on networking, containers, or storage, you name it; there is a topic being discussed almost every day. One of the key things, and the number one requirement customers specified while discussing NFVI, was carrier-class performance. Carrier-class performance cannot be had by default; we actually have to go into the systems and tweak them at multiple levels. Think about it. We can start at the physical layer: figure out how the cache needs to be organized, how it needs to be assigned, just as an example. Then you level up and ask, how do I optimize things at the operating system level, be it scheduling, buffers, or memory, you name it. How can we do that at the hypervisor level, at the KVM level?
Are there any real-time optimizations we can make? Then you go to the next level and ask, how do I do this at the VNF level? Can the VNF perform a bit better if we make certain optimizations? So you get the picture: at every layer, every level, there is a way to get to carrier-class performance, but that cannot happen on an individual basis. Those things have to be worked on together with the rest of the infrastructure players, and that is why these partnerships were formed: to coordinate, bring all of those things to bear, and satisfy those requirements, while at the same time making sure there is a single point of support and a single point of ownership for the solution. Cisco has taken the lead in saying: we will be the single point of support and single point of procurement for the customer, so that any requirement, any issue, any question comes through one vendor and the customer is at ease in deploying the solution. So what are the use cases that can run on this infrastructure? Managed services, in terms of virtual managed services: firewall, VPN, et cetera. Mobility: virtual packet core, 5G, Gi-LAN, and so on. As well as media: video processing, content storage, content management, all of these. As you can see, the use cases are very varied, and a single infrastructure is able to support the multiple needs of a customer. Let's dig into the use cases a little more. It's not just about what is delivered as an outcome to the end customer; it's also about the places in the infrastructure, the places in the network, where the use cases could reside. It could be at the edge or the endpoint of the customer site, at a provider edge, in a data center, and so on. There are many places for the NFVI, and the use cases, to reside.
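As one concrete illustration of those hypervisor-level knobs, here is a sketch of OpenStack Nova flavor extra specs commonly used for NFV workloads (CPU pinning, hugepage-backed memory, NUMA placement). The flavor name `vnf.large` is made up for the example, and the exact set of properties a given deployment needs will vary.

```python
# Sketch: Nova flavor extra specs commonly used to get carrier-class
# performance out of KVM guests (CPU pinning, hugepages, NUMA placement).
# The flavor name "vnf.large" is a hypothetical example.
extra_specs = {
    "hw:cpu_policy": "dedicated",      # pin each vCPU to a host pCPU
    "hw:cpu_thread_policy": "prefer",  # keep sibling hyperthreads together
    "hw:mem_page_size": "large",       # back guest RAM with hugepages
    "hw:numa_nodes": "1",              # keep the VM on one NUMA node
}

def to_cli_args(flavor, specs):
    """Render the specs as arguments for `openstack flavor set`."""
    props = " ".join(f"--property {k}={v}" for k, v in sorted(specs.items()))
    return f"openstack flavor set {props} {flavor}"

print(to_cli_args("vnf.large", extra_specs))
```

The scheduler then places instances of such a flavor only on hosts that can honor those constraints, which is one piece of how the layer-by-layer tuning becomes repeatable rather than hand-applied.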
And Cisco NFVI is able to cater to all of these needs, in many different fashions, to enable the multiple use cases that are important for the customer: customer premises, access, edge, large data centers, co-location, you name it. Fine. So those are all the use cases that can be satisfied. But how can we do that? We said things can come in different sizes and different forms. Essentially, for that, we need good packaging around the solution, so that things can be modular, expandable, and scalable. We have taken this through many different dimensions. The compute can scale by itself, in terms of how many compute nodes you want at any time; storage is another dimension along which things may change and should be expandable. At the same time, we stick to the core of the requirements: carrier grade, carrier class, validated, single point of ownership, and so on. One other key point: if you notice this packaging, it is also the evolutionary path a customer would take. For example, a customer could start small to try the environment, get their feet wet, onboard the VNFs, enable them, maybe go through a small service creation for a bit. And once things are proven, you want to expand. At that point, you would prefer not to do a rip-and-replace of the entire system and deploy a new one just because you wanted a bigger scale or a bigger number of sessions to be handled. That should be supported in a very modular and easy way, rather than changing everything over and starting from zero. That's what this enables you to do. Now, talking about the virtualized infrastructure manager, which is chosen to be OpenStack: let's talk about what is so special about it and what some of the challenges are that the VIM addresses.
First of all, OpenStack is awesome in terms of the number of projects we have and the speed at which innovation is being delivered to customers. You name it, and we have a project today in the Big Tent to enable specific use cases and specific customer needs. At the same time, it can be complex for a customer to bring all of this together and make it work in a seamless fashion over time. Not once, but over time, through the lifecycle of a project, through the lifecycle of a business. There is also the aspect of features and capabilities in the platform: how do I manage that platform? How do I operate it? How do I monitor it? How can I bring all those capabilities together? And as I said, it's not just about bringing it up once; it's about managing it through the lifecycle of the business. That means over a period of time I should be able to update my system, upgrade my system, change hardware, change software. How can I do this seamlessly, without starting over every time a new update comes in, a new change comes in, or some fault appears in the system, a power supply that needs changing or whatever it may be? How can I handle all those things? Cisco VIM is the answer for that. Cisco VIM stands for Cisco Virtualized Infrastructure Manager; the term VIM comes from the ETSI NFV framework, and it is named accordingly. Some of the key aspects of Cisco VIM: first of all, it is an installer and lifecycle manager. It can install the system, OpenStack, and many more components, the operating system and so on, which we'll talk about, and at the same time manage the lifecycle of the system: how do I increase my capacity, how do I change software, et cetera. Containerized deployment, and we'll talk about this in detail, in terms of what it does and why it's needed. HA verification.
Customers want highly available systems, because that's a core requirement for service providers. Fine. The infrastructure can be claimed to be highly available. It can be tested for availability, in all fairness, and it works. But how do I, over a period of time, at will, verify that the system is still highly available, that nothing has gone wrong in hardware, software, an integration, or an interoperation, and make sure things are still humming away on day one, day 20, day 200, day 2000? These are some of the capabilities we have built and integrated, and we will discuss them. Health checks. OK, so everything is API based, almost, and things should be integratable and interoperable. But how do I ensure those API endpoints are responsive? Let's leave out the word "always"; we could debate "always responsive" until the end of the summit. What capabilities can actually test that those API endpoints are available? Is there something that can probe my API endpoints periodically and tell me whether things are still OK, still humming away, or whether there is some maintenance operation I need to do for some issue? Security. At the crux of this conversation is how secure this system can be, and whether it is really secure. Logging and monitoring. When it comes to troubleshooting, when it comes to checking the health of the system, how do I make sure I can get to the root of an issue very quickly, rather than spending many days just finding the issue in the first place? Also, virtual machine throughput testing. When we say we have stood up an infrastructure, how can I tell what that infrastructure is capable of? Meaning, what sort of east-west traffic throughput should I expect, given the way the system is configured?
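The periodic API health check described above boils down to probing an endpoint and classifying the result. Here is a minimal sketch of that idea; the 2-second latency threshold and the category names are illustrative assumptions, not defaults of any Cisco tool.

```python
import time
import urllib.request

# Sketch of a periodic API endpoint health check: probe a URL and
# classify the outcome. The 2-second latency threshold is an
# illustrative assumption, not a product default.
def classify(status_code, latency_s, max_latency_s=2.0):
    if status_code is None:
        return "unreachable"   # connection failed or timed out
    if status_code >= 500:
        return "error"         # endpoint up but failing
    if latency_s > max_latency_s:
        return "degraded"      # responding, but too slowly
    return "healthy"

def probe(url, timeout=5.0):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status, time.monotonic() - start)
    except OSError:
        return classify(None, time.monotonic() - start)
```

Run on a schedule against each OpenStack service endpoint, even this trivial classification answers the "is it still humming away on day 200" question before a customer does.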
And if you want to dive deep into the technical aspects: have I enabled VLAN or VXLAN, have I enabled large MTUs or not, and so on; you name it, and there's a switch for it. So for the n configurations we have, what throughput can I expect when I'm actually pumping TCP traffic? What throughput should I expect with UDP, HTTP, or any other traffic through the system, east-west, north-south, you name it? And how do I make sure this is part of the infrastructure, something I can enable and test at any point of the day to ensure, again, that my system is humming away? I know that even if you take nothing else away from this presentation, you'll be saying in every session all day, "I am actually humming away today." So those are the key aspects of Cisco VIM. I would like to touch on the installer and the lifecycle manager a little. This was driven by the experience we gained over a number of years building an infrastructure platform; it's not something someone at Cisco randomly decided to do. These three points come from all that experience. In terms of deploying OpenStack, the innovation is there, but when I actually want to deploy, install, manage, and run the system, there were, and are, several installers out there, each with their own limitations. Some would just get OpenStack going so you could test and develop things, but not actually deploy at a carrier-class level with high availability, et cetera. Very few were verified, tested, and validated to run on many different types of hardware and software. Also, an installer would typically deploy only the OpenStack services, without handling bare metal provisioning, bare metal configuration, operating system install, lifecycle management, et cetera.
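Coming back to the VXLAN and MTU point for a moment: the throughput you should expect is bounded by encapsulation overhead, and that bound is easy to compute. A back-of-the-envelope sketch, ignoring Ethernet preamble, FCS, and TCP options:

```python
# Back-of-the-envelope: best-case TCP goodput fraction through a VXLAN
# overlay. VXLAN adds 50 bytes of outer headers (outer Ethernet 14 +
# outer IP 20 + UDP 8 + VXLAN 8); the inner frame carries its own
# 14-byte Ethernet header, and inner IP + TCP headers consume another
# 40 bytes of the MTU. Preamble, FCS, and TCP options are ignored.
VXLAN_OVERHEAD = 50
INNER_ETH = 14
IP_TCP_HEADERS = 40

def tcp_goodput_fraction(mtu):
    payload = mtu - IP_TCP_HEADERS              # TCP payload per packet
    on_wire = mtu + INNER_ETH + VXLAN_OVERHEAD  # bytes carried per packet
    return payload / on_wire

print(f"MTU 1500: {tcp_goodput_fraction(1500):.1%}")
print(f"MTU 9000: {tcp_goodput_fraction(9000):.1%}")
```

This is why "have I enabled large MTUs" matters: jumbo frames push the achievable fraction from roughly 93% to roughly 99% of line rate before any software inefficiency is even considered, and a tool like VMTP tells you how close the deployed system actually gets to that bound.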
So I'm going to run through this a bit quickly. Taking all the learnings from our experience with those installers, we came up with the Cisco Virtualized Infrastructure Manager, and these are all the things we expect an installer and lifecycle manager to do. Build the software, everything you want to build and deploy into the system. Deploy it, right from validating the input configuration filled in for the system, to bare metal provisioning, bringing up the operating system, enabling the essential services before OpenStack itself, deploying and setting up storage and getting it going without any human interaction, and then orchestrating the OpenStack services: which component of OpenStack needs to be enabled on which server, depending on its role. At the end, run the bunch of verifications we talked about on the previous slide: do I know that traffic can actually go east-west easily? Can it go north-south easily? What sort of throughput should I expect? It's all integrated into the installer as one of the steps. And in terms of operations: can I monitor the system, collect logs, update software, upgrade software, and so on? We'll touch on a couple of these quickly here. One of them is containerized install, update, and upgrade. This is another critical decision we made and have embraced through the development of Cisco VIM: essentially, we build a container repository that is pushed to the management node, and then the entire system is managed through the management node out to the control nodes, compute nodes, and storage nodes. And this is customer controlled, in terms of when the customer wants to deploy an update or an upgrade. Lifecycle management, right?
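The fan-out from the management node can be sketched roughly as an ordered update plan: controllers first, one at a time, so HA quorum is preserved, then storage, then computes. The node names, roles, and ordering policy below are illustrative assumptions, not Cisco VIM internals.

```python
# Sketch of a rolling-update ordering a containerized installer might
# follow after pushing new container images from the management node:
# update controllers first (one at a time, preserving HA quorum),
# then storage, then compute nodes. Names and roles are hypothetical.
def update_plan(nodes):
    """nodes: dict of node name -> role ("control", "storage", "compute").
    Returns node names in the order they would be updated."""
    order = {"control": 0, "storage": 1, "compute": 2}
    return sorted(nodes, key=lambda name: (order[nodes[name]], name))

nodes = {
    "compute-1": "compute", "compute-2": "compute",
    "control-1": "control", "control-2": "control", "control-3": "control",
    "storage-1": "storage",
}
print(update_plan(nodes))
```

The point of the ordering is operational: at no step is more than one member of the control plane out of service, so the cloud's APIs stay up while the software underneath them changes.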
In terms of being modular and elastic, you should be able to add new compute nodes on demand. You should be able to replace nodes on demand, because of faults or simply a hardware upgrade. You should also be able to replace storage nodes, because a disk went bad or you're doing maintenance, et cetera. The same goes for software updates and upgrades: you should be able to push an update, an upgrade, or a security patch, and you should be able to roll back a software update because things did not go according to your expectations. And all of this is done in an automated fashion through Cisco VIM. In terms of the integrated operational and validation tools we talked about, HA verification, virtual machine throughput testing, health checks, and so on, we have a bunch of these tools integrated into the system. I'll talk about the ELK stack in a moment. CloudPulse, VMTP, and Cloud99 are all open source projects in the OpenStack Big Tent, led by Cisco, and everybody is welcome to contribute. Essentially, these were all developed from what we have learned from customers about what a virtualized infrastructure needs in order to be manageable, monitorable, et cetera. CloudPulse is for health checks. VMTP, virtual machine throughput, is for throughput testing. Cloud99 is sort of a chaos monkey, if you will, to inject failures into the system while concurrently testing for high availability and validating it. KloudBuster is for large-scale virtual topology tests. Last but not least, monitoring tools are integrated to monitor every level of the system: containers, processes, physical assets, virtual assets, et cetera. So, the ELK stack. ELK stands for Elasticsearch, Logstash, and Kibana; it's an open source stack you should be aware of.
We essentially leverage the ELK stack so that log collection can be made very easy for the customer, so they can get to the root cause of an issue and troubleshoot it, rather than searching for the issue like a needle in a haystack. How is it done? We accumulate all the logs from all the nodes on the management node, and once we have that log repository, we use Elasticsearch for searching through the logs as well as for analytics, and the Kibana dashboard to intuitively look through an issue and get to the bottom of why it is happening. Imagine if you did not have this and you had a 20-node system: you would have to go to 20-plus nodes and look through the logs on each of them to figure out where and what the problem could have been. And all of this is powered by Cisco VIM, essentially. In terms of SDN, I want to quickly touch on Cisco VTS. This is the first SDN controller we are integrating into Cisco VIM, into Cisco NFVI, and Cisco ACI is also on the roadmap to be integrated with Cisco NFVI. Essentially, Cisco VTS can automate your network configuration, supports VXLAN extensibility with BGP EVPN, is open and programmable, essentially REST based, and can enable overlays, be they physical, virtual, or hybrid, all powered by the Nexus 9000 portfolio. There are more details on flexible overlays, but I'll skip that in the interest of time. I'm getting dinged here, because I want to get to the customer case study, the most important part. So, Gary, maybe I'll take two more minutes to highlight it. We've been working with one of the groups within the entity corporation to enable a virtual managed services solution for large-scale commercial deployment. This consists of the entire NFV stack we discussed earlier, and essentially it enables you to provide virtual managed services to every customer that intends to deploy over a period of time.
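Going back to the log aggregation piece for a moment: before a line is useful in Elasticsearch, it has to be broken into fields (timestamp, level, component). A rough sketch of that parsing step, with a fabricated sample line in the common oslo.log style; real pipelines would do this in Logstash, and the exact format varies by service:

```python
import re

# Sketch: parse an OpenStack-style (oslo.log) line into fields before
# indexing it. The sample line below is fabricated for illustration.
LOG_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<component>\S+) (?P<message>.*)"
)

def parse_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

sample = "2016-04-27 10:15:30.123 9841 ERROR nova.compute.manager Instance failed to spawn"
rec = parse_line(sample)
print(rec["level"], rec["component"])
```

Once every node's logs are fielded this way, "show me all ERROR lines from nova.compute across all 20 nodes in the last hour" is a single Kibana query instead of 20 SSH sessions.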
I'll skip the top half of this, where a customer goes to a portal and things trickle down to Network Services Orchestrator and then to Elastic Services Controller to deploy a service. What I want to focus on is that dotted line out there, as well as the physical CPEs. If you imagine this, what's happening here, through Cisco NFVI and OpenStack with Cisco VIM, is that we have an instance of the Cisco CSR 1000v enabling a virtual CPE use case. What this gives you is basically a service chain built between a physical and a virtual device. And why is that important? Imagine you have a physical device on the left-hand side here, at a customer site, and you're enabling a virtual VNF in your NFVI cloud. If the customer wants any sort of business service, say routing, a firewall, et cetera, it is enabled through the virtual instance in this setup. And tomorrow, say you want to push an update, or upgrade the virtual instance, because there is a software update, a bug fix, or a new business offer you want to enable. All you do is go and change the virtual instance, the configuration of that virtual instance, rather than going to every physical device at the customer premises to update it, or even swap out the hardware itself and do a truck roll. All of those are realized as cost savings, because all you need to do is go to your data center, go to your infrastructure, and change over a virtual machine: here's the virtual machine I want to run from the next moment on. This brings the speed, the agility, and the innovation the customer needs to adapt to the changes in the end customer's requirements, the changes in the end customer's business use case, et cetera.
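The economics of that point can be sketched in a few lines: with vCPE, a fleet-wide service change is a loop over API calls against virtual instances in the data center, rather than a visit to every premises. Everything below (class, site names, image tags) is hypothetical; in a real deployment the image change would be an orchestrator or VNF manager API call.

```python
# Sketch of why the vCPE model avoids truck rolls: the service each
# branch consumes lives in a virtual instance in the NFVI cloud, so
# upgrading the whole fleet is a loop over API calls. All names and
# image tags here are made up for illustration.
class VirtualCpe:
    def __init__(self, site, image):
        self.site = site
        self.image = image

def upgrade_fleet(fleet, new_image):
    """Point every site's vCPE at the new image; return sites touched."""
    for cpe in fleet:
        cpe.image = new_image  # in practice: a VNF manager API call
    return [cpe.site for cpe in fleet]

fleet = [VirtualCpe(s, "csr-old") for s in ("branch-a", "branch-b")]
print(upgrade_fleet(fleet, "csr-new"))
```

The physical box at the site only has to forward traffic into the chain; the feature set, the bug fixes, and the new business offers all live in the part you can change centrally.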
Now take this and expand it to every use case out there: virtual managed services, media, mobility, packet core, you name it. You can use the same sort of framework to deliver the capabilities, the speed, and the innovation that's required. On the SDN side, of course, it's VTS with a virtual forwarder embedded into the system. All right, any questions? We probably have time for a couple of questions. Anybody? There we go. Yes, thank you. One question regarding the virtual infrastructure: if you use Cisco UCS in combination, for example, with Cisco VICs, how do you take care that you have data plane acceleration functionality abstracted through the OpenStack APIs, like SR-IOV or DPDK? Yeah, very good question. Actually, we are trying to make this flexible and open, and staying true to it, not just saying it's flexible and open. So in that regard, what we are working on is also enabling other NICs that can provide those accelerations at the NIC level itself, for example NICs from Intel, et cetera, to enable SR-IOV, or maybe run a virtual forwarder within the NIC, a smart NIC, et cetera. Any other questions? Or are you all just humming away? Thank you, Niren. Thank you so much for the time. Appreciate it. We do want to try to squeeze in our drawing. If you filled out the little card you got when you came in, we'll take it up here. You can pass it to the end of your row, and we'll do it that way.