Okay, good morning. Okay, nice to know somebody's awake. Someone has been here long enough to adjust to the time change. Thank you for coming to the Cisco sponsored room. This is our second session of the day. I'll get a little housekeeping out of the way first, then a quick introduction, and then I'll turn it over to our presenters. As each of you came in the door, you were given a little business-card-sized card. We're doing a drawing at the end of today's session for an Apple Watch, so if you want to fill that out, I'll be collecting all the entries in the fishbowl at the end of the session. As you're circling the session numbers on the little card, this is session number two. With that out of the way, some quick introductions: Balaji, Karthik, and Mike. Balaji and Mike are from Cisco, Karthik is from Red Hat, and their titles are right there. I'll let them go into a little more detail on their backgrounds as the presentation goes on. Again, thank you for coming. We'll make sure we have some time for Q&A at the end of the session as well. With that, Balaji, all yours. Thank you. My name is Balaji Sivai. I manage the UCS-based open source solutions, and OpenStack solutions are one of the things I work on with our partner, Red Hat. Karthik is one of the key contributors from the Red Hat side in our joint solution. And Mike is the Director of Product Management for Cisco ACI solutions, which is basically our networking solution. Here's how our session will go: I'll talk about why we are doing this solution together and what benefits it provides to customers, and then Mike will cover ACI-specific content at the end of the session. One of the things I took away from this morning's keynote address was that you have to have a "super" in front of you, right? You have to be a super user or a super integrator or Superwoman, I guess.
If you're not super, then you're not cool. So one thing we want to do is provide a solution that makes OpenStack accessible to the common man. That's really our goal. Over the multiple OpenStack Summits I've been to since the Boston Summit — I think Mike and I were there four years ago — we've seen that the bigger companies, the Comcasts of the world, the Yahoos of the world, were able to put in enough resources to make OpenStack work. We know that. But typical enterprises, small and mid-sized enterprises, don't necessarily have the expertise or the time and effort to make OpenStack deployable in their data centers. That's the goal of the solution we're going to talk about today: essentially bringing enterprise-grade OpenStack online faster, while also being able to scale as you go along. One of the key things driving this need for speed is the IT transformation that's happening overall. At Cisco, we call it Fast IT. Traditionally, you're used to an IT organization whose primary goal is an efficient infrastructure that's reliable, stable, and works, and that's been the case for the last 20 years. But now, with the need for a DevOps model, everyone wants everything faster. Nobody wants a waterfall model for software development anymore, right? When I started at Cisco, it was more like: you submit a PRD, a product requirements document, and six months later something comes out. Those days are obviously over. Software development teams are moving to a more agile way of developing because it makes sense, but IT also needs to be ready to support some of these newer workflows.
That's what Gartner defines as Mode 2 IT, where you're able to support these developer infrastructures. That could mean containers, like we talked about in the keynote, or other fast ways to deploy IT infrastructure. One of the key things I see is that open source is a big part of this Fast IT. If you look at any solution available in the market, open source is a huge component of it — OpenStack, the container movement, a lot of it is open source. But adopting open source in an IT organization is not that easy. There are a lot of challenges in taking an open source product out there and adopting it. You really want speed, but it's not that easy to get speed quickly. There's a lot of integration of different open source components that you have to do yourself, and you have to put in the work to make it happen, so the speed of deployment is actually difficult to achieve. There's also risk involved. Things keep changing — in the case of OpenStack, for example, a release comes out every six months. How do you upgrade to the next level of software? How do you deal with anything that breaks? There's risk in the maturity of many of the projects; they keep evolving. I think Karthik will talk about how Liberty and the next release will change things one more time. So you can see there's a lot of risk when you want to be the IT organization supporting this solution for your internal customers. It's cool to go open source, it's cool to deploy OpenStack, but if you can't meet the demands of your internal customers, it doesn't really work. Another big piece is retaining flexibility, right?
Deployment is obviously a challenge, but you also don't want to lock yourself into something. I'll give you an example from the container world: there are so many startups evolving that keep changing how some of these components are used. What was cool six months ago isn't cool any longer, and you need the flexibility to avoid vendor lock-in as you adopt your open source projects. One other big thing is support for the solution itself. As you deploy the solution — in the case of OpenStack, for example — who are you going to call for support? The support has to encompass both the hardware and the software elements that come with it. And if you make any customizations, how do you get support for that specific customized product? You don't want lock-in in that sense either, because with one vendor, now you're locked into that vendor. Then there are the deployment complexities we talked about: deploying the open source software is the bigger challenge, but you're also deploying on specific hardware, and you need to worry about the plugins you need to enable that, plus ongoing maintenance of those different products. All of this adds up to complexity, and these are the challenges you see in adopting open source in your Fast IT transition. Speaking of OpenStack specifically, it's actually lagging behind in terms of promise versus fulfillment. This is research data I took from 451 Research, which polled a bunch of enterprise customers and asked their opinion on what the promise of OpenStack is and how well it has kept that promise so far. If you look at it, OpenStack is dead last compared with all the other solutions in the market.
If you double-click on exactly which areas people have given lower scores, it's all the things you'd expect. Support is a big part of it. There's the reliability piece, the security piece, the availability of experts to support your solution. Once you create this monster in-house, you're going to have to maintain that monster going forward, right? As you bring in different components and create the solution, you're going to have problems — that's what's being shown here. A great example is this quote from a financial-services customer: everybody talks about deploying OpenStack, but they're talking about small scale. Of course the large players are doing large scale, because they have the ability to do it. At small scale, everybody can do it — you can run 10 servers, 20 servers. But once you start pushing the scale, you really need the support system to get you not only through day one, but through day N. One of the things we think at Cisco is that, yes, we are fundamentally a provider of infrastructure, but we see an opportunity to guide and help the customer as they start deploying that infrastructure for a specific use case. Yes, I want to deploy OpenStack; now I've been struggling with it for three months, six months, however many months. We want to solve that for you, or at least be able to help. So we want to provide a couple of choices, because we know one size does not fit all for our customers. You can build on your own, which is typically what people do: they get the open source components and software and build and run it themselves. That's customer deployed and customer managed, which is most cases. There are also cases where a customer doesn't want to manage it at all.
Essentially, they want to use the product — the promise of OpenStack without the difficulty of actually enabling OpenStack. The other option, obviously, is to go to the cloud and just buy compute as a service. That's the AWS model, which people use all the time, and that's why AWS has good growth. But if you want to run it on your own private infrastructure, you have a choice: build and manage it yourself, or we can actually manage it for you. So Cisco has two options. One is what we're going to spend time on, called the UCS Integrated Infrastructure for OpenStack. This is a partner-based solution where we work with our partners to build a full stack that you can manage, and we'll provide you support. The other is Cisco Metapod, which is based on our Metacloud acquisition, where we run the infrastructure for you. You don't have to do anything except provide an IP address — and servers and switches, obviously — and we bring our own software and not only deploy it but run it for you 24x7. That's the model for people who want the easy button: use it like AWS, but still running in your own data center. So those are the two options, and today, like I said, we're going to spend time on helping with the first case — build and run on your own. One of the things we're using to give you a faster way to deploy your own self-managed cloud is the Cisco Validated Design. Cisco Validated Designs should not be news to Cisco customers, because we have been doing Validated Designs for quite some time for a lot of other solutions.
For example, we have solutions for VDI — VMware-based or Citrix-based VDI — SAP HANA, different kinds of solutions. We have 26 solutions where we take a specific use case, define the requirements by talking to customers, and go through end-to-end use case development including all the components. Some of them are Cisco components, some are partner components, but we do the validation, we do full system-level testing, and we provide a blueprint that you can go and deploy. The beauty of the blueprint is that if you want Cisco Advanced Services to come and deploy it for you, they will do that. Or if you want a partner to deploy it for you — a Cisco partner or some other partner — they know this blueprint works, because Cisco and the other partners are behind it and validating it, so they can come and deploy it for you. A key part of the overall Validated Design is the support that goes behind it. Once you've deployed it, we will provide new versions as you go along from version one to version two, and we will also support you through the Cisco support model — Cisco Solution Support — which is a pretty good way to feel comfortable deploying this blueprint. So this OpenStack solution runs on Cisco UCS. We call it Cisco UCS Integrated Infrastructure because it takes all the elements you need — compute, storage, and network — and provides them in one package. It uses Cisco UCS, which is the number one blade server in the US and number one or two in the world, and we also have the rack-mount servers, which put us in the top five in the worldwide server market.
One other innovation of UCS is the ability to integrate network, compute, and storage together in the same infrastructure and make it fully automatable. We have the ability to scale to a large number of compute nodes and manage them together: Cisco UCS Manager manages a cluster of compute nodes from a single entry point, and UCS Director scales even beyond that and provides additional infrastructure automation. Then there's Cisco Nexus, our Nexus switching infrastructure, with the Nexus 9000 being our premier switch in the market over the last couple of years, which Mike will talk about. We also introduced ACI, Application Centric Infrastructure, our SDN innovation, which Mike will touch on a little more. And a key part of this infrastructure is that we've built these solutions working with partners: Red Hat, NetApp, Intel, and others. So we have a couple of offers to talk about today, available either today or next week. One is Cisco UCS with Red Hat OSP Director and the Red Hat OpenStack Platform with Ceph storage — that's the UCS solution to your left there. That blueprint will be available this week or next week. The other solution takes OpenStack and runs it on FlexPod. FlexPod, as you know, is a joint Cisco-NetApp development: Cisco Nexus switches and Cisco UCS, backed by NetApp storage. We have validated OpenStack for our FlexPod customers. We have quite a few FlexPod and NetApp customers who want to continue using NetApp-backed storage, and we want to provide that solution for them. So we'll talk about those two solutions.
The key thing about both of them is that they are enterprise ready. We have tested the various use cases and made them available as a turnkey model, either already available or available shortly. Now I want to hand it over to Karthik. He's been working at Red Hat for four years, on OpenStack of course, and he has a different perspective that I want him to share with us. Go ahead. Thanks, Balaji. Before I get into OpenStack and all the fun stuff and how we've seen the market evolve over the last four years, let me start with a somewhat different example from a different industry. Look at the early days of commercial automobiles, late 19th and early 20th century, and at how the early manufacturers produced them: pretty much every car came out as a custom, unique design. Then Ford came along, and you had the concept of an assembly line and, most importantly, well-known building blocks that could be pieced together so you could produce commercial automobiles at scale. Today, when you go to buy a car, how many of you go to the manufacturer and start pulling together piecemeal parts to assemble your own car, versus just going to a showroom, buying a car, and maybe customizing it a little bit? The comparison I want to draw is with what's happening across the different generations of OpenStack deployments. In the first two or three years of OpenStack's evolution — going back three, four, five years — pretty much every major deployment was a custom deployment.
Different super users, if you will, different enterprise customers, were looking to tweak it to the nth degree. What's been happening over the last year to year and a half is a trend away from what I would call — in fact, what the community calls — custom boutique snowflakes, toward more standard solutions. The reason is that, increasingly, customers are not looking for a completely tweaked, unique custom snowflake. What they're trying to do with OpenStack is get it deployed, have it work reliably, and get to the business value they're trying to accomplish. What are the business objectives they're trying to accomplish with this infrastructure? How are they getting business value? And they want to get to that business value really quickly. OpenStack as an end in itself — we've passed that generation. The next generation of customer deployments, which we see very often at Red Hat because we've been talking to customers about OpenStack deployments quite a bit, is customers no longer looking at OpenStack as an end in itself, but at how they get value from the use cases. Are they deploying containers? Hadoop as a service? Other kinds of leading-edge workloads on top of OpenStack? So customers are coming to us now saying: how do I get to a stable OpenStack deployment, at scale, quickly? What we've been doing together between Red Hat and Cisco, and with partners like Intel and NetApp that are pretty common across all of these major OpenStack deployments, is asking: how do we simplify this? How do we turn it into foundational building blocks that can be connected to each other, so it can be deployed at scale, deployed quickly, deployed easily? And really, how do you support that in production as innovation continues to happen in OpenStack at a very rapid pace?
I'll come back and talk about what's happening in future releases, but the key problem we're trying to solve together with Cisco and partners like Intel and NetApp is how you take this set of foundational building blocks and get it deployed in a stable fashion. With that said, for the solutions Balaji talked about, we're vetting them for the common enterprise use cases we see: making sure the infrastructure has high availability; making sure that when customers need to upgrade between releases — maybe as they adopt continuous deployment and continuous upgrade processes — we have well-defined processes for taking them between releases; and building an ecosystem of partners that can stand behind these foundational building blocks and build bigger solutions that provide enterprise value. It's no longer about taking OpenStack and customizing every little widget; it's about taking these foundational building blocks and using them to get to business value quicker. That's the key target we're trying to accomplish. One quick plug I'll make here — and I'm saying this as a Red Hat person, not wearing a Cisco hat; Cisco hasn't paid me for this, although they do have me on stage — is that when you've deployed OpenStack at scale, our experience from having done this at Red Hat is that not all platforms are created equal. There are some unique benefits to the Cisco UCS and Nexus architectures that lend themselves to a better customer experience for OpenStack deployments. You can see this when you go back and track the kinds of challenges Red Hat customers have had deploying OpenStack at scale and the kinds of issues they encounter.
Very often, those issues fall into two buckets. The first is initial deployment: how do we get a deployment done in a stable fashion, as quickly as possible, without spending weeks or months troubleshooting when things go wrong? The second bucket is, once it's deployed, how do you keep it operationally stable in production — especially as you start upgrading software, as new operational teams come on board, and as you have to manage and update hardware, whether it's firmware, NIC cards, or other things that typically go wrong in a scale environment? Now step back and look at the UCS architecture. Quick show of hands: how many of you have actually deployed on UCS before? Cisco folks, don't raise your hands — I'm sure you have. And how many of you have deployed OpenStack at your companies? Okay, a few hands, so at least you're in the right room. Anyway, when you step back and look at the steps required to deploy OpenStack in production for the first time, the unique aspect of the UCS architecture that really provides value to an OpenStack deployment is the concept of service profiles. Bare-metal UCS hardware doesn't have any intelligence about how it's glued together. When you apply a UCS service profile to the hardware — and you can define service profiles from templates you create — you specify how that hardware gets bolted together: how the raw CPUs, memory, NIC cards, and storage are connected up. In effect, you give that brick a profile. And in an OpenStack context, the really powerful thing this brings is the ability to define templates for the different OpenStack roles.
When you have a scaled OpenStack deployment — say three, five, or 15 controllers, with anywhere from two to a couple of hundred compute nodes — you can define a service profile that says: for a controller, this is how it needs to be cabled up; these are the five or six different networks that need to connect to a controller, for storage connectivity, for tenant VLANs (multiple tenant VLANs are required), for external networks, and for internal OpenStack API traffic. You similarly define storage profiles that say exactly how the disks are laid out, and things like firmware — which version is going to run on a given node. You put all of that into a service profile template, apply the template to a node to put it into the controller role, have a completely different template for a compute node and another for a Ceph node, apply those templates, and then deploy OpenStack. That process of templated deployment eliminates so many of the real-world challenges customers hit deploying OpenStack at scale. If you go back and look at the history of all the customers we've dealt with over the last two or three years and the deployment challenges they've seen, the vast majority have to do with things like something miscabled somewhere, or the firmware on one server being a different version than the rest. The ability of UCS — and of Nexus as well, on the networking side — to define these service profiles and enforce them on a given node before you start the OpenStack deployment essentially eliminates the majority of these deployment challenges.
Similarly, once you're deployed in production, you can avoid nodes drifting from their initial profile. For example, as you add more compute nodes and controller nodes to scale your environment to a larger size, being able to apply those identical service profile templates to the new nodes really guarantees — gives you a lot more confidence — that a node added to the pool won't be accidentally misconfigured and cause the kind of operational issues you see in production with OpenStack today. Part of the challenge with OpenStack — it's a benefit too — is the amount of innovation happening. But that typically means the troubleshooting tools, the expertise, and the knowledge your teams need when things go wrong tend to lag the innovation by a few weeks to a few months. Having a consistent, well-defined, templated deployment, both for initial deployment and for ongoing operations, really helps address that key operational challenge most enterprise customers face. So that's the plug I want to give Cisco for what they bring to the table. The other part of this is how all of that innovation is brought to market in OpenStack.
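To make the templating and drift idea concrete, here is a minimal sketch in Python. The field names and values are hypothetical — this models the concept of role-based service profile templates, not the actual UCS Manager data model or API.

```python
# Sketch: role-based service-profile templates for OpenStack nodes.
# Hypothetical fields (firmware, vNIC-to-VLAN map, boot order) stand in
# for the real UCS service-profile contents; this is illustration only.
import copy

CONTROLLER = {
    "firmware": "2.2(3c)",
    "vnic_vlans": {"internal_api": 201, "tenant": 202, "storage": 203,
                   "storage_mgmt": 204, "external": 205},
    "boot_order": ["local-disk"],
}
COMPUTE = {
    "firmware": "2.2(3c)",
    "vnic_vlans": {"internal_api": 201, "tenant": 202, "storage": 203},
    "boot_order": ["local-disk"],
}
TEMPLATES = {"controller": CONTROLLER, "compute": COMPUTE}

def apply_template(role):
    """Stamp out a node profile from its role template (deep copy, so
    later changes to the node can be detected as drift)."""
    return copy.deepcopy(TEMPLATES[role])

def drifted(node_profile, role):
    """Return the set of fields where a node no longer matches its template."""
    template = TEMPLATES[role]
    return {k for k in template if node_profile.get(k) != template[k]}

if __name__ == "__main__":
    node = apply_template("compute")
    print(drifted(node, "compute"))   # fresh node matches: set()
    node["firmware"] = "2.1(1a)"      # someone flashed older firmware
    print(drifted(node, "compute"))   # {'firmware'}
```

The point of the sketch is the workflow Karthik describes: every node in a role is stamped from one template, so a mismatch (miscabled network, odd firmware) is detectable before OpenStack deployment starts rather than during troubleshooting.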
If all of this were constrained to the hardware, with OpenStack just an application infrastructure layer running on top, that would be one thing. But the way Cisco has brought this to OpenStack is that all of this hardware and platform innovation is being exposed through OpenStack plugins — whether it's the ML2 Neutron plugins on the networking side or, in the case of the UCS differentiation, drivers brought to bear on Ironic, Neutron, and other OpenStack modules. So someone deploying OpenStack at scale in an enterprise can benefit from all of this with a unified view from within OpenStack, rather than one set of management tools on the UCS and Nexus side and completely different tools on the OpenStack side. What we're doing together is taking the common plugins we see as relevant for enterprise customers. The ones we started with are the Nexus ML2 plugin for physical connectivity on the Nexus NX-OS side, and the Nexus 1000V for virtual switching, with all the benefits the Nexus 1000V brings to the table: better troubleshooting, performance, operations management, and the ability to tie into the existing processes you already use to manage your physical devices. Along with those is the UCS ML2 plugin, which orchestrates the configuration on the UCS infrastructure itself, such as the Fabric Interconnects — auto-configuring the VLAN information and related settings — and exposes that via OpenStack. This is all the more important moving forward because of what's happening upstream in OpenStack. First of all, there's a new governance model, the Big Tent — I'm sure you've all heard about it.
What that means is there will be a lot more projects coming into OpenStack, since new projects can self-classify as OpenStack. In addition, in some OpenStack projects like Neutron, the vendor plugins — including some of the ML2 plugins — are being moved out of tree. So it's doubly important to have a partnership like the one between Red Hat and Cisco, working with partners like Intel and NetApp and others, to pick the common pieces we think are extremely important to customers and collaborate on the customer experience. That's what we collaborate on together. I'll leave you with that and turn it back over to Balaji and Mike to take you to where we're going in the future as well. Thanks, Karthik. I want to talk about what's available and what's not, and give you a little more detail on the two stacks we have. Think of each as an integrated stack that's available and fully supported. The first is the FlexPod solution we talked about. The back-end storage is a NetApp FAS 8040 system, with E-Series storage backing Cinder and Swift in the NetApp model, and it uses the Juno release of OpenStack on FlexPod today. We'll obviously update to Liberty when we get to the OSP version that supports Liberty, but today it's Juno-based. We shipped this CVD at the beginning of September, and it's available and supported with NetApp partners. The second solution is the UCS solution with OSP 7, which is a Kilo-based deployment using Ceph back-end storage — Cisco UCS C-Series servers as the Ceph nodes — and that's available next week.
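As a rough picture of where the ML2 plugins mentioned earlier hook in, a Neutron ML2 deployment enables vendor mechanism drivers by name in its configuration. The snippet below generates an illustrative `ml2_conf.ini`-style file; the section layout follows the general ML2 shape, but the Cisco driver option names, switch address, and host-to-port mappings are placeholders, not a tested configuration for any specific release.

```python
# Sketch: generating an illustrative ML2 config that enables a vendor
# mechanism driver alongside Open vSwitch. Values are placeholders.
import configparser
import io

cfg = configparser.ConfigParser()
cfg["ml2"] = {
    "type_drivers": "vlan,vxlan",
    "tenant_network_types": "vlan",
    # Vendor driver listed next to the reference driver:
    "mechanism_drivers": "openvswitch,cisco_nexus",
}
cfg["ml2_type_vlan"] = {"network_vlan_ranges": "physnet1:200:299"}
# Hypothetical per-switch section: the host-to-port map the Nexus driver
# would use to program VLANs on the right switch ports automatically.
cfg["ml2_mech_cisco_nexus:192.0.2.10"] = {
    "compute-0": "1/10",
    "compute-1": "1/11",
}

buf = io.StringIO()
cfg.write(buf)
print(buf.getvalue())
```

The design point Karthik makes is visible here: the operator describes intent once in Neutron's configuration, and the plugin translates tenant network operations into switch configuration, instead of someone configuring VLANs by hand on every top-of-rack switch.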
The key thing I want to leave you with, besides the solutions and the blueprints — the solution CVDs — being available, is the support that goes with them. You can absolutely take a blueprint and deploy it yourself, and that saves you a lot of headache without ever calling Cisco or Red Hat. But you're going to need support: maybe day 0 support as you're deploying, and once you've deployed, support for adding more nodes, for other issues you run into, or for moving to the next release of software — how do you go from, in this case, Kilo onward? So we have a model where Cisco is the single number to call. With Cisco Solution Support, you call one number, and when we take the call, we do the triage for the solution overall. For OpenStack or Red Hat specific issues where we need Red Hat's help, we call Red Hat on the back end. From your perspective, you call Cisco, you get a single point of support, and we work out the logistics of making the solution support seamless for you. That's all I have on the overall solution, and I'll hand it to Mike, who's going to talk about ACI and the Nexus 9K. So I have the honor of standing between you and lunch, and I think I have about two minutes to get through everything, so I'm going to show you an abridged set of slides and hopefully finish in time for you to actually get your food. But I did want to spend a minute or two on the Nexus platforms: how they're highly flexible, how they fit as a component of these kinds of solutions, and how they also offer a platform for evolving the solutions toward more advanced SDN technologies.
The way we thought about the Nexus platform, and particularly the Nexus 9000, was a multi-deployment strategy. You can deploy the switch in what we call programmable network fashion, which is what you saw to some degree in the solutions we're now doing with UCS and Red Hat: it runs standard NX-OS and fits into the existing network topology you have today, with APIs available for things like OpenStack plugins to manage it. That's the bread and butter of our business, and that's where a lot of the deployments are happening today. We also have advanced capabilities we call the programmable fabric: still within NX-OS, we can use protocols like BGP EVPN to let you build highly scalable VXLAN fabrics out of our technology and manage them via third-party controllers, but also via the Cisco VTS controller, which works directly with the EVPN technology. The third option, again on the same hardware, is Cisco ACI. Cisco ACI is our enterprise-class SDN solution. It introduces the APIC as the source of policy automation for the entire network fabric, and we have a set of OpenStack plugins that can manage APIC directly and give you a merged overlay and underlay, with physical and virtual integration across the entire environment. That's the solution we'll be working on as the next generation of the work Balaji and Karthik were talking about with our UCS and Red Hat solutions: next we'll be tying them into ACI and offering the unified stacks with ACI as the networking foundation. On that note, I'll wrap up to make sure everyone has time for lunch, but feel free to come grab me after and we can talk more about the ACI solution. Any questions? Happy to take them.
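To make the "APIC as the source of policy" idea concrete, here is a toy model of ACI-style policy. This is my own simplification for illustration — not the real APIC object model — but it captures the whitelist idea: endpoints belong to endpoint groups (EPGs), and traffic between groups is denied unless a contract explicitly permits it.

```python
# Toy model of ACI-style policy: endpoint groups (EPGs) and contracts.
# Simplified for illustration; names and structure are hypothetical.

epgs = {
    "web": {"vm-web-1", "vm-web-2"},
    "db":  {"vm-db-1"},
}
# Each contract permits traffic from a consumer EPG to a provider EPG
# on a given port; anything not covered by a contract is dropped.
contracts = [
    {"consumer": "web", "provider": "db", "port": 3306},
]

def allowed(src_ep, dst_ep, port):
    """Whitelist model: allow only if a contract covers the EPG pair and port."""
    src_epg = next(name for name, eps in epgs.items() if src_ep in eps)
    dst_epg = next(name for name, eps in epgs.items() if dst_ep in eps)
    return any(c["consumer"] == src_epg and c["provider"] == dst_epg
               and c["port"] == port for c in contracts)

print(allowed("vm-web-1", "vm-db-1", 3306))  # True: contract exists
print(allowed("vm-db-1", "vm-web-1", 80))    # False: default deny
```

The design choice this illustrates is why a controller like APIC can merge overlay and underlay: policy is expressed in terms of groups and relationships rather than individual ports and VLANs, so the same intent can be enforced on both physical and virtual endpoints.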
Obviously, thank you to our presenters. Any questions from the audience for Mike, Karthik, or Balaji? They must have been very clear — you guys really must have nailed it. Well, thank you; that's the second time I've done that. Thank you to our presenters. We're now going to do our end-of-session drawing. I'm going to ask our guests to pick the names, and I'll come around and collect the little cards we gave you when you walked in. Can you win if you're with Cisco or Red Hat? No — you can't win, and you can't win. I'll put that limitation there. While we're waiting: I think the ACI integration into the UCS solution will be available in the Q1 time frame, hopefully with the Liberty update. And again, having worked on this over the last few months with the Cisco team, there is a lot of complexity that we're taking out of these building blocks for you. The number of issues we've troubleshot up front for you is not something you want to be encountering in a production deployment, so there's a ton of value hidden in the solution that we didn't get to talk about today. All right, last one. Last one. No, no, go ahead — there aren't that many in there; good odds. And the winner is... Patrick from HP. Oh, HP, wow. There you go. Congratulations. You can not only take the information but also take the Apple Watch with you. And can we interest you in some nice hardware? Again, a round of applause for our presenters. Thank you, Gary. Everybody go enjoy your lunch break. We have two more sessions this afternoon; hope you can come back and see us. Lou Tucker will be a part of both of them, so hope you can make it back here this afternoon.