Now we're gonna give you some extra time. All right, let's get the ball rolling. It's a slow start, that's mostly my fault. So, all right, whenever you're ready.

So, like I said, my name is Henrik Plix, one of the senior product managers, and I'm going to talk a bit about Ironic and how Ironic helps make OpenStack better for some of the traditional and cloud native workloads you might have running. So, enterprises want to reduce cost, right? They want to increase utilization, automate, standardize, and contain complexity. And virtualization was the way we've been trying to do that for the last 15 years: for the last 15 years, we've tried to squeeze as much as we possibly could into a VM. But what if you need direct access to the hardware? Maybe you're running a database, you want to run big data workloads, or maybe you're looking at this new thing called containers. How well does that work in a virtualized environment? That's where Ironic comes in.

So, Ironic is an OpenStack service. It was broken out of Nova, it was called the Nova bare metal service before, and it's now its own OpenStack service. It basically aims to let you provision bare metal just like you provision VMs. Ironic cuts out the middleman, so you don't have to worry about a hypervisor: there's no hypervisor performance tax, and you don't have to worry about expensive hypervisor licenses. So, what is Ironic? From a Nova standpoint, it looks like a hypervisor: a hypervisor API plus a set of drivers, either reference drivers or vendor drivers, that plug into Ironic to do bare metal provisioning.

So, here's the, you've got 10 more seconds to take a look at this picture. This is the Ironic state machine: the states a node typically goes through when you do provisioning. This is what Ironic abstracts, so the admins don't have to deal with it manually. And it's also easier for the tenants and the users: the commands and the way you do bare metal provisioning are similar to how you do a VM. You do a nova boot with a bare metal flavor and you get a bare metal machine (there's a small sketch of that flow below). And similarly for ops, it's fewer headaches: you don't have to worry about cleaning up nodes after the tenants are done messing them up. Tenants do their own provisioning, so no more OS installs for the admins or the operators. And to make it even easier and better for the administrator, HP has a tool called OneView, which is basically a tool that, through templates, helps you manage your hardware environment: servers, switches, and whatnot. In addition, there's an upstream OneView plugin that integrates with Ironic and automates even further, doing automated flavor creation and syncing the Ironic database with the OneView database.

But wait a minute. One of the things we said we wanted from virtualization was utilization. So, how do we do that when we're using bare metal? Do we just go back to where we were 15 years ago? Because it's hard to divvy up a bare metal machine. One thing you can do to overcome that is look at the low cost, high density servers like Moonshot, where you basically shove 45 cartridges into a chassis, and then you can do seven or eight chassis in a rack, so you're getting hundreds of servers in a rack.
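To make the "nova boot, get a bare metal machine" flow concrete, here's a minimal sketch using openstacksdk. The cloud, image, flavor, and network names are assumptions for illustration; the only real requirement is that the operator has enrolled Ironic nodes and exposed them behind a Nova flavor.

```python
# Minimal sketch: booting a bare metal machine the same way you'd boot a VM.
# Assumptions (illustrative, not from the talk): a clouds.yaml entry named
# "mycloud", an image "ubuntu-16.04", a network "tenant-net", and a flavor
# "baremetal" that the operator has mapped to enrolled Ironic nodes.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("baremetal")    # flavor backed by Ironic nodes
network = conn.network.find_network("tenant-net")

# Same call you'd make for a VM; Nova's Ironic driver picks a physical node
# and Ironic walks it through its state machine (deploying -> active).
server = conn.compute.create_server(
    name="db-on-metal",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the physical node is provisioned
```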
Or you can look at this new cool thing that everyone's buzzing about: containers. Containers love to run on bare metal. You don't have to worry about the performance hit of running in VMs, and you can get that nice divvying up of the machine that you get with VMs, but a lot more lightweight.

So, I'm gonna end by taking a quick look into the future and some of the cool things that are going on in the Ironic community in Newton and forward. One of the most asked for and highly anticipated features is multi-tenant networking; right now you can only do a single network. Once you get multi-tenancy, then there'll be rainbows and unicorns and Ironic will be awesome. Hopefully that will finally land in Newton. There's also work going on, I think the specs and the blueprints are being written right now, around booting from volume and attaching volumes, basically using Cinder together with Ironic, which you can't really do right now, but that's one of the key areas we're focusing on as well. But bare metal, it's available now. Go check it out. It's usable from Kilo onward, and it's also available in the new Helion OpenStack release that we announced here this week. So, go and check it out. Thanks.

Under five minutes. All right, don't walk away. Yeah, we're gonna ask some questions. Folks, questions? Go ahead.

The example you gave is mainly the Moonshot based on the ARM processor, meaning one chassis hosting many mini-servers inside, but I haven't seen any VNF or application that has rebased itself onto the ARM instruction set. So, do you see a demand for this, or do you just expect that it will boom? Thanks.

So, Moonshot has both Intel and ARM processors, right? Right now we don't support ARM yet, so that's something for the future, but there's definitely been an uptick in interest in Moonshot, and a lot of that has been driven by Ironic. There are service providers that want to do bare metal as a service, which it seems everyone wants to do now, either with or without containers, so Ironic is actually driving a lot of interest in Moonshot. Any questions on this side? No, okay.

All right, while we're setting up for the next presenter: again, the format is five minutes, 20 slides or approximately 20 slides. Anybody else interested in doing this next? A topic they wanna present, five minutes, no? Anybody want to help Gavin present his 20 slides? Okay, are you ready? Test, test, oh, there we go. Thank you very much. I do not have timed slides, but you're gonna time me collectively.

All right, I'm Dave Holly, Helion Product Manager, and I'm gonna be talking about OpenStack networking, in particular for enterprise applications. I'll give you a little tour of where we've been, where we're going, and what you can learn at this conference. So I'm gonna use the reference of enterprise applications in OpenStack, and I think this is a really cool topic area because it adds the right level of complexity to what we've been trying to do in networking and explains a lot of the projects we've been working on. I'm gonna start from the concept of a three-tier enterprise app: web, application tier, and database, and what's required to do that. First of all, you need local switching within the tiers; that's straightforward. If you have two VMs on the same compute node, you just have the local OVS switching that occurs between them, and of course if you're on an adjacent compute node, we just do the translation onto the local virtual network across from one compute node to the next. Straightforward, simple. Next step: going between the tiers.
Hold on a second, think about this. If you have multiple hundreds of applications running within a data center and every single transaction between tiers needs to go to a central network node, you have a serious bottleneck problem in your network. What did we do to solve that? This is where distributed virtual routing, or DVR, was brought into play, to address this need. What we do is add a local router for each tenant environment, in a local namespace, so we can do that routing and maintain the locality of transactions for east-west traffic, again, whether you're within a single node or going between adjacent nodes. We've also addressed the north-south connectivity: for that web tier, when you need to go outside, we can connect directly by having a local external bridge for the traffic that needs to go to the clients or the external environment. If we need to connect in to the network externally, we still use the central network node for the source NAT (SNAT) function. Again, this is not generally used with such frequency in terms of transactional behavior, but we have recently added HA capability in that area.

What we found, though, is a lot of enterprises have real heartburn associated with NAT in general, or floating IPs, because they have an operational model where they need to be able to access every single VM in their environment at a known IP address. So what's going on right now is work that addresses this through a learning process using BGP, so that the identity of the VM is associated with the compute node it's on, and an external router can access it directly while you use a static IP framework. Very important for a lot of enterprises.

At the app tier, going down one level, a lot of applications have been built using ESX as the virtualized environment. So something we've already done in the past is put Open vSwitch into an ESX environment, so we have all of the same concepts available to us, and within a single region we can associate your KVM, your ESX, and your bare metal (Ironic) environment. So let's go on to that: how do you do this switching if you have bare metal servers embedded? Well, we've created the ability to directly connect from one compute node to another at a switching level, and if you're using VXLAN, we've got a mechanism to do that VXLAN translation to a compute node in hardware. Once we add routing to it, things get more complicated, and again, this is an area you can learn more about here. We can actually leverage that routing transaction, just as we did in the virtual space, to talk to the compute node. The issue is when you come back in the other direction. So what our engineers have done, working with the community, is establish a DVR routing function that allows the bare metal server to talk back upstream into the virtual environment, completing the bare metal routing function.

So if you go from top to bottom again, we've now addressed the need for a web tier accessible through static IPs, an app tier that involves ESX, and a database tier that can support bare metal environments, all of which can route through the stack, top to bottom. We've transformed this concept of a KVM cloud-centric environment into an enterprise-class, end-to-end, multi-tier application supported consistently within a single region. (For a feel of what the DVR piece looks like from the API side, there's a small sketch below.)
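As a rough illustration of DVR from the tenant's point of view, here's a sketch using openstacksdk that asks Neutron for a distributed router. The network and subnet names are made-up placeholders, and this assumes the operator has already configured the L3 agents for DVR.

```python
# A rough sketch of requesting a distributed (DVR) router via openstacksdk.
# Assumptions: the cloud's L3 agents are set up for DVR, and the network and
# subnet names below exist; is_distributed maps to Neutron's "distributed"
# router flag.
import openstack

conn = openstack.connect(cloud="mycloud")

ext_net = conn.network.find_network("ext-net")      # provider/external network
subnet = conn.network.find_subnet("tenant-subnet")  # tenant subnet to attach

# With DVR, east-west routing between the app tiers happens locally on each
# compute node instead of hairpinning through a central network node; only
# SNAT still goes through the network node.
router = conn.network.create_router(
    name="app-router",
    is_distributed=True,
    external_gateway_info={"network_id": ext_net.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```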
If you want to go and learn more about these, there are a lot of sessions on Thursday. Some of the engineers are here, I recognize them, and you can learn more about the new topics that are coming in Newton and Ocata. So feel free to dive in on Thursday; it's gonna be a great day for these types of sessions. So thank you very much, five minutes, top to bottom.

Wow, that's right on the money. That's awesome, thank you so much. So, questions? Go ahead. Why don't I just leave the mic with you.

The solution you presented for bare metal connecting to the network is mainly for layer three, diverting toward the controller host where the layer three agent does the routing, right? So how about bare metal at layer two? Do you have any solution?

Oh, so I kind of whizzed past the layer two; I did have one slide talking about it. Right now, in the community, you have the flat VLAN-based networks, but we also have the work that went in, I think in Mitaka, if someone can correct me, that actually delivered the ability to have isolated tenant domains in the VLAN space within your bare metal Ironic provisioning. So that's actually complete as of Mitaka. The routing is what's coming in the next releases. Any other questions, anybody?

On the slide around fixed IPs, I'm not familiar with the new DVR functionality, so I'm not sure if this is something that's in OpenStack or whether it's just something you guys have built, but you had a box in there with BGP in it. Is that part of Mitaka now, or is that just something you guys have added in?

There's ongoing work in that area. The initial work started with the last release, but I don't think it's quite finished yet, so we'll see that complete, hopefully in the Newton timeframe. And Ryan Tidwell is the lead for that particular project, and he's presenting on Thursday. Cool, thank you. All right, thanks.

Hey, thanks again. That was beautiful. And was this your first Ignite? First Ignite. First Ignite, but he rocked it, like less than five minutes. Beautiful, thank you. I cheated, only by 18 seconds. That's okay, that's all good. Let's just make sure you don't have to. I think he built in timers, but. Is this mic working, or can we turn it on? Thanks.

Okay, so my name is Gavin Pratt, Senior Director of Product Management for Helion and OpenStack. Back before I was working on Helion OpenStack, which is HP's private cloud software offering, I was working in HP's public cloud. And it's interesting: part of why HP chose to pivot from public cloud to focus on selling private cloud software was that we had a lot of customers saying, it's hard to really beat the price point of an AWS or the like, but almost all of them wanted to run something on-premise that had similar agility to a public cloud. And they said, if you could find a way to package up all the things you're doing in your public cloud and help us solve the operational realities we're currently struggling with, especially with OpenStack, we'd pay you a lot of money. We heard this from customer after customer, and finally we said, maybe we should think about how to package up the technology and the learnings of running a public cloud at a pretty large scale. And package that, not quite as shrink-wrapped software, but pretty close. So that's what this talk is gonna be about. It's interesting, I wasn't supposed to present this, somebody else on my team was. So I haven't seen the slides before, but I do know the content.
And yet my biggest hurdle is I don't have a Mac, and I don't know how to advance the slides. So that's what we're solving right now, and that is our biggest... It's just ready to go, so whenever I'm... Like I said, this button? Yeah. Okay. Oh, and it's timing. Is this part of the learning? Wait, this is the big button. Wait, the other red button. And I'll just put this on the side. That's not a button. No, it doesn't work. Oh, here, that bottom left. Yeah, okay.

So: monitoring OpenStack at scale. You know, back in the public cloud days, we were originally using Ceilometer, and we had this really crazy backend that was trying to solve two problems. One was all the usage metering, so we could bill our customers. And the other was actually doing things like health checks and capacity checks, to know if the system was down. And the problem was that Ceilometer was really purpose-built for neither one nor the other; I guess if you had to pick, you'd say it was more for billing. Are we sure it's on auto? Okay. It's fine, I'm gonna ignore it then. But I'll also be under five minutes. Because it wasn't purpose-built for either one, we said, architecturally, especially as we're trying to go to scale, it made more sense to split these two things out. For example, we were running these crazy Hadoop jobs once a month to take all the Ceilometer data and package it up into pricing files that ended up being, I don't know, like 20 pages long. And I was like, why does it take a week on this crazy Hadoop job, when you have 20 servers to do it? There must be a better way. And so that's where Monasca was born.

And certainly, as I mentioned, as more and more companies are saying, hey, I'm not interested in cloud just in terms of a public cloud thing, I wanna run cloud on premise, all of a sudden all these operational realities became ones that individual enterprises needed help with. And many of these enterprises sometimes had maybe five or ten, sometimes even fewer, IT staff starting on their initial OpenStack deployments. So they couldn't invest in 30 DevOps engineers like we had, for example, standing up our public cloud, and Rackspace had a lot. So really, how do we make this OpenStack platform address production workloads, but also be easy enough to use for enterprises that may be more familiar with VMware, for example, or with AWS, but aren't, you know, Python ninjas or anything like that? How do they get into production? Sorry, it's just hard to see with the lighting. I'm gonna skip that slide.

And so there are a lot of problems we're trying to solve in the monitoring space. One is: is the system healthy? Certainly if a bunch of servers go down, in theory, in the cloud context, this is less critical than in the traditional IT world, where every server had a name, like servers named Gandalf and Icarus, and if Icarus went down, it was this, you know, national event. I was talking to a customer, I don't know if I can say the name, but I was talking to a customer last year and they were saying, you know, we're trying to move from pets to cattle, but we currently have pandas.
So basically they have these zoo animals where it's like an international incident if one catches a cold, and they're trying to get to pets. And when you have those kinds of issues, and then layer on top of that things like security and compliance, where if a server gets out of compliance, because maybe somebody with root access changed permissions or the like, you have to monitor all of these things. Especially for our financial services clients and our healthcare clients that have regulatory concerns, it's not just about running a company they feel proud of; they'll literally get taken to court if they get out of compliance on any of these things. It really is a mission critical issue for them.

And I think I might be running over time, despite the fact that... I was gonna say, in general, it's very easy to quickly generate tons and tons of data, and tons and tons of data doesn't necessarily drive insight. So one of the things we were struggling with initially when we were running the public cloud was: how do you generate a lot of data, but then either aggregate it up in a meaningful way, or, if you think of the needle-in-a-haystack analogy, how do I zoom in on the one piece of data that's important? That was one of the ways of thinking we wanted to make sure we baked into Monasca: making it easy to do things like graphing time series data and seeing trends, the ability to see things at a high level and quickly drill in to see more detail on specific areas you want to pinpoint. Next slide. It's not working. Yeah, exactly.

So, monitoring capacity is as important as monitoring performance. Back in our public cloud days, I don't know if I should admit this, and I mean, it's a nice problem to have, but the main problem that was keeping me up at night as a product manager was running out of capacity. Because on the one hand, the promise of the cloud is that you can get a server on demand, but the thing people forget is that a virtual server has a real server behind it, and those real servers cost money. Some of these servers can be very expensive; one can cost more than your car if it has a terabyte of RAM, all SSDs, and the max number of cores. So it's not as simple as saying, let's just always have excess capacity; that's a really quick way to have a horrible P&L as a public cloud, but also in your internal P&L if you're running an enterprise private cloud. So on the one hand, you don't want to over provision, but it can also get very tricky. If, for example, you have some servers running Windows licenses and some Linux: the way the Microsoft licenses work, for most enterprises, depending on the licensing scheme, is that the second you put a Microsoft license on a server, you're paying for all the cores on that server, even if you only have one tiny Windows VM. So right off the bat, we had to say, we're gonna have a certain number of servers allocated to Windows and certain ones allocated to Linux. When you start doing partitioning like that, and then saying, okay, these are my high memory servers and these are my all-SSD servers, and I don't want to provision workloads on an all-SSD server if they don't need the SSDs, all of a sudden you start getting a lot of fragmentation in your data center.
And even when it seems like you have a lot of servers, it's very easy to quickly run out of capacity for the particular flavor or type of server you need, if you don't have really good data: not only real-time data, but also the ability to see trends, things like charting time series, and beyond that to run regressions and say, am I on track to have enough capacity three weeks from now? Because remember, if I need 10 more servers or 100 more servers, you don't always get those servers instantaneously; you call a vendor, and at best they'll FedEx you servers in a couple of days. If you have an outage you have to solve immediately, that doesn't help. So capacity can be your number one source of outages if you're not careful. This is not advancing. 15 second rule. Okay, thank you.

So, I mean, I've been talking about this all along, but: Monasca. Monasca basically was a lot of custom code that we wrote in our public cloud to solve all of these real-world problems. Then we open sourced it, submitted it to the OpenStack Big Tent, and it was accepted, so it's officially part of the OpenStack community. But I believe HP is one of a small subset of companies actually offering it as part of our distro, even though it is part of the OpenStack community. So it's a great way to solve these real-world problems, and it's also not vendor lock-in. And because Monasca is very configurable, in general it ties into all the OpenStack services via RESTful APIs, so it sees all the services, and you can also configure and tune it to get more data on the services you care most about, the ones most often having issues or changing the most. And then you can have robust alarming to let you know when there's a problem (there's a small sketch of defining an alarm below), and you can of course tie this in via the RESTful APIs to something like PagerDuty or some other external service to let your IT admins know when there's a problem, if you want.

And it's important to remember: even though Monasca allowed us to address a lot of the monitoring issues that we used to use Ceilometer for, when it wasn't working well, Ceilometer hasn't gone away. Ceilometer is still very useful for things like showback and chargeback, and in general usage metering. So just remember that these two things work together, and they allow you to solve different real-world problems. It's also important to remember that, as I'm doing things like adding compute nodes or upgrading servers, I need to know: did something go wrong? Do I need to roll back to the previous kernel version on a compute node? Did a certain push of a Python package not work, so I need to push it again? Monasca will soon be able to let you see those lifecycle events of your cloud, as well as general things like health checks and capacity, and also log data, which is critical for things like compliance, among other things. And then on top of all that, you'd say, well, the IT admin, certainly in the VMware world, is used to seeing a nice GUI and good visualizations, and many of those same IT admins are now the ones running the OpenStack cloud, not always, but sometimes. HP has built graphical front ends so that you can see this Monasca data, things like time series visualizations, but also more granular data as well. And let's skip that slide, I think it was kind of redundant, to be honest. And that's redundant as well, to be honest.
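Since alarming came up above, here's a small sketch of what defining a Monasca alarm could look like with python-monascaclient. The client constructor has changed shape across releases, and the endpoint, token, and notification ID are placeholders, so treat this as illustrative rather than the exact API of any one version.

```python
# Illustrative sketch: defining a Monasca alarm with python-monascaclient.
# The constructor signature varies by release; endpoint, token, and the
# notification method ID below are placeholders, not real values.
from monascaclient import client

monasca = client.Client(
    "2_0",                                       # API version
    "http://monasca-api.example.com:8070/v2.0",  # hypothetical endpoint
    token="KEYSTONE_TOKEN",                      # auth token placeholder
)

# Fire when average user CPU stays above 85% for three consecutive
# evaluation periods; Monasca's expression grammar supports aggregation
# functions, dimensions, and "times N" repetition.
monasca.alarm_definitions.create(
    name="high-cpu",
    expression="avg(cpu.user_perc) > 85 times 3",
    severity="HIGH",
    alarm_actions=["NOTIFICATION_METHOD_ID"],  # e.g. a webhook or PagerDuty hook
)
```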
But in general, Ops Console is a great way to have a portal into this data. And I will stop there. Again, bear with me, these were not my slides, but I hope it was useful, given I hadn't seen the slides, and done in not quite five minutes, but close. Do you have questions about Monasca, monitoring at scale, lifecycle management, anything else OpenStack related? Questions about HP, working at HP, lifestyle in the valley, I don't know, whatever else. Any questions, guys? Okay, oh, there's a question. While you're answering, we're gonna have Daniel setting up.

I was just wondering, how stable is Monasca, and is it in deployment in production right now?

It is very stable. I don't know which of our customers are referenceable, but we have some very large customers, including some government agencies as well as some large telcos, that are using it in production today at pretty large scale. I mean, remember that this is using, I won't say the exact same, because we've obviously evolved it, but it started from the exact same technology we were running our public cloud on, which at the time was about 5,000 to 6,000 servers. And it's important to remember that although there are public clouds out there with more than 5,000 to 6,000 servers, we were running this as a single control plane; most of those other companies, when they hit larger scale, start using cells and other ways to fragment it. So the fact that we took technology that could run a single control plane at that scale, and have now been evolving it for the past five or so years: we proved it in our own public cloud, we're proving it with some pretty large customers now, and I would definitely recommend it. Okay, thanks. Any other questions?

Maybe just a very dumb question. This evolved from your public cloud experience, but right now, from HP's perspective, the business has evolved to be more about private or hybrid cloud, right? So can you maybe share whether there's any difference, from an operations perspective or a technology perspective, between how you treat private and hybrid versus the public cloud case?

Tell me if I'm not understanding the question right, but I think you're basically asking about private, public, and hybrid in the middle.

Yeah, maybe the behavior will be different and the monitoring metrics will be different. And because right now you're trying to evolve from metrics monitoring to lifecycle management, maybe the lifecycle management will be different for hybrid versus public, or there's some difference there. So how do you evolve this kind of tooling: will you leverage the components you learn from your hybrid cloud business and contribute those to OpenStack as well?

Yeah, I mean, first of all, when you say public, I assume you mean: as a developer, I want to create VMs in a public cloud and somebody else is running that for me. Assuming that's what you mean, a whole number of problems go away. I don't care about capacity anymore; I assume that AWS or Azure or HP is going to figure it out behind the scenes. So capacity management, health of the control plane, I don't care about that. In many ways, things become simpler as you think about public cloud, as long as you have a way to know: are my VMs healthy?
That's usually, when I talk to most customers in the context of public and also hybrid, the thing they care about. So in many ways, I would argue the private cloud is the most complex scenario, even though from a topology perspective it seems like the simplest. Did that answer your question? All right.

So, our very last one. Thank you. Nathaniel is going to rock the house and come back really strong after completing his Windows update over the last 40 minutes and scaring everybody, but he's ready to go.

And mine are timed. Do you guys want to switch? Okay, great. So my name is Nathaniel, I'm a security engineer with Hewlett Packard. And I just want to say thank you for being here; I'm kind of impressed that there are actually people in the room right before the party. My slides are timed, so hopefully they'll keep me honest and I'll get you out of here. So, I am a security engineer, and I think I personally am the least interesting part of this presentation. I don't like to talk about myself, so I'll jump into the technical details.

I sort of liken OpenStack to a bag of Legos: you can take all these different parts, customize them exactly the way you want, and build exactly what you want with it. There are some caveats that come along with that. Security is usually done per project, and it's up to the security experience within each project to handle it. So the OpenStack Security Project was created to sort of shore up some of these problems, and I am a member, but it again takes a specific, project-by-project view, adding to the assurance; it's not really a holistic view. So if you don't know what you're doing, it can be one of those times where you step on a Lego in the dark, and if you've ever done that, you're sort of hopping around wondering if there are any more, and it's just going to be bad news.

So one of the things I really like doing in my job is contributing upstream; it's one of the great things I love about HPE, that we get to do all of these things. So I'd like to go over a couple of projects that add to that assurance, which we created, contributed upstream, and are still driving. The first one is Leeson. This is a proof of concept for full disk encryption. It does some clever things with UUIDs, using the inherent ideas of TLS and user validation to transfer keys across the network and protect either bare metal or VMs. I personally am the lead on the OpenStack Security Guide, so I'm trying to get you the best information possible about security for your OpenStack cloud. I've written a couple of OpenStack Security Notes on the pragmatic use of security. And if you haven't seen it, we just got an OpenStack security blog; I highly recommend checking it out. Our British contingent (that's Big Ben on the slide) has created a tool called Anchor to handle the problems that come with handling PKI at scale, and some of the inherent problems with revocation; so we have this tool to do those things automatically. Our flagship is probably Bandit, which is in, I think, about 10 to 15 gates right now. It's a security tool, a static code analysis tool for Python, and it's useful inside OpenStack and outside of OpenStack (there's a tiny sketch of the kind of thing it flags below). So those are sort of what we do within the upstream community.
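As a tiny, illustrative example of what a static analysis pass like Bandit catches, consider the snippet below. Running bandit over this file should complain about the hardcoded password and the shell=True call; the exact test IDs depend on your Bandit version, so check its docs.

```python
# Illustration of the sort of Python that Bandit flags, with the usual fixes.
import os
import subprocess

PASSWORD = "hunter2"  # flagged: possible hardcoded password

def list_dir_bad(path):
    # flagged: building a shell command from input with shell=True
    # is a command injection risk
    return subprocess.check_output("ls " + path, shell=True)

def list_dir_good(path):
    # clean: argument vector, no shell interpolation
    return subprocess.check_output(["ls", "--", path])

def get_password():
    # clean: pull secrets from the environment (or a key manager
    # like Barbican) instead of the source code
    return os.environ["SERVICE_PASSWORD"]
```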
And for those non-Americans, this is the sales guy who says: but wait, there's more. One of the things I really like about working at Hewlett Packard, as I mentioned, is the upstream work, but we also have this entire portfolio of market leading products that we can leverage. So where we run into architectural difficulties, we're able in many cases to take best of breed products and put them in. The first one is TippingPoint. Because I'm here, I'm not at the session that talks about Tap-as-a-Service, but it's new: your IDS, your ability to monitor events at the edge, and the management to look for these types of messages is not quite there yet. And of course that all feeds into ArcSight. We have Monasca, it's fantastic, but log management is not security event management, and ArcSight can help you with the whole security response process. If you've never used this tool and you like Legos, it's a fingertip saver, I promise. Other than that, we have Fortify. I mentioned Bandit, and Bandit is fantastic for Python, we have it in the gate, but Fortify is gonna help you with everything else, like that app you're deploying on your OpenStack cloud. If you make tweaks to Horizon, Fortify is gonna be able to look through it and tell you some of the pain points there. I am from Seattle, so this is my personal favorite slide, the hipster barista: WebInspect is going to give you that outside view. It's going to scan your API endpoints, scan your application, tell you what your attack surface looks like from the outside, and give you some idea of where you can harden that infrastructure. And then finally we have data security and encryption: things like ESKM, Atalla, and Voltage. I mentioned Leeson, but it's a proof of concept at the moment; if you don't want to do those key management things yourself, we have some products that we're working on integrations with right now.

And so it all adds up to this hardened edge, this fortification that can be your cloud. It's so cool that a spaceship has to come and check it out. So, both contributing upstream and across the portfolio, we are working to make sure that everything is in fact awesome, with both our products and what we're contributing, and we want to continue to do that and continue to drive those various projects. And I'm sort of stalling for time because this is my thank you slide, but I actually have one more, because I didn't know you were gonna hard cut me off. So thank you very much for your time, I appreciate you being here, and I will take any questions and get you guys to the party.

And I know we're a bit over, but I really appreciate that comeback. Oh, there you go, very strong. Questions, guys? Anybody, for anybody else? If not, then have a great party. Thank you so much for staying for this forum, and hopefully you enjoyed it. We can't wait to see you next year.