All right, I guess we'll go ahead and get started. So my name's Bob Calloway. I'm a technical marketing engineer and reference architect. I have a variety of titles at NetApp, but essentially I'm focused on NetApp's integrations with OpenStack, our community participation, and making sure that our customers are having success with the joint integrations that we do. Today I'll be talking about how you can use a concept within Cinder called volume types to offer differentiated levels of service from the Block Storage project within OpenStack, known to the development community as Cinder. So first we'll quickly go over an agenda. I'll talk a little bit about why it's of interest to actually differentiate what you have around block storage within a cloud. I'll give you a little bit of background as to how Cinder works, because I think it's relevant to understanding the overall flow of how volume types are actually respected in the provisioning process. We'll talk about two unique features within volume types. One is called an extra spec. The second is called a QoS, or quality of service, specification. We'll give some examples of what those look like. And then I'll walk you through a workflow of what happens when you provision a volume of a particular type: how does that actually work together with the specifications that you've defined to produce the outcome that you're looking for? And then I'll have a recorded demo, given time. I'll see what we can get through. But essentially it'll show you how that end-to-end flow works, both from the view of Horizon as well as from the CLI. And then if there's time we'll do Q&A. So it looks like this isn't working. So why think about offering different types of storage capabilities from Cinder? I think the first thing that's probably fairly obvious to most folks is that workloads are not created equal. I mean, think about what a web server would need versus what a Hadoop deployment would need versus a more traditional SAP type of workload. The way that storage is consumed by the application layer is very different. So a one-size-fits-all approach may be very simple to get started with, but when you try to map the underlying capabilities that you have from a hardware perspective up into the application layer, oftentimes differentiation is where you're led. From a service provider perspective, this is relevant as well. They're obviously trying to sell product at a particular margin. Anything that they can do to make that deployment more efficient drives not only greater revenue but better margins for them. And as well, if they can show that they have better functionality that aligns toward big data workloads, if that's their specialization, then that's great. And the wonderful thing is that Cinder is an abstract block storage layer that can sit on top of a variety of different storage platforms, whether it's software-defined storage in the LVM or Ceph mode, or, on the other end of the spectrum, hardened storage appliances. NetApp is an obvious example that's near and dear to my heart, but there are other companies, EMC and Huawei, Hitachi, IBM, etc., that are all looking to plug their capabilities into a cloud infrastructure. And since Cinder sits as an abstract layer on top of that, you're able to access it from a user perspective in a consistent manner.
So it's not, I have to code to the EMC API for this, and oh, for my big data workload I have to code to NetApp for that. It's a single API, and this concept of volume types is what allows you to differentiate the underlying service that you have. So what do people typically differentiate on? I touched on performance; that's the first thing that comes to my mind whenever people talk about differentiated services. Going back to the network stack, there was a lot of push in the late 90s and early 2000s around looking at different tiers of network service based on latency or throughput requirements. Storage, in some senses, is not too dissimilar from that. You think about the media format: is it a spinning disk, is it a flash drive, what are the IOPS limits that you're looking to achieve? Is the underlying storage thin provisioned or thick provisioned? That has implications from a performance perspective. On the flip side, it depends on whether you're looking at a dev/test workload or the canonical SAP kind of mission-critical enterprise workload. For dev/test you may have basic requirements: if it's not mirrored and it's not snapshotted on a 30-minute cadence, that's fine. But for my near-and-dear ERP system that may be running on top of OpenStack, I need to make sure that there's a backup when I create a volume here, and I know where that's going and how often the snapshots or the backups are taken. And then finally, thinking about the specifics of an environmental environment, that's a great phrase. You can tell it's right after lunch, so I'm trying to wake up. I didn't find the coffee, I only found the water when I got into the room. But whether it's a protocol perspective, iSCSI versus NFS, or whether you have a heterogeneous environment with multiple vendors like I mentioned, all of these things are what a service provider might want to differentiate on and expose through Cinder. And just to give you some context, obviously Amazon is the elephant in the market today from an IaaS perspective. You look at what they do today: when you go to create a block storage volume through EBS, you get the choice of standard, or provisioned IOPS where I specify the number of IOPS that I want and I pay based on that. Standard volumes are a flat rate; provisioned IOPS is a multiplier based on the amount of IOPS I ask for. You look at Rackspace to take another example. They're differentiating based on media type, so SATA versus SSD, at a flat rate from a price perspective for both. That gives you some flexibility, but when you really think about the core workloads that have been running in the enterprise for a long time and are looking to perhaps migrate on top of OpenStack, having differentiation is really key to making sure that what you allocate in a self-service fashion aligns to those workloads, versus having a very static, heavy dependence on a storage administrator to come in and do the right thing. That just doesn't work in a cloud environment. It has to be very dynamic and user driven. And so Cinder, in providing block storage services to end users, this is what it was built for. It was originally part of the core Nova project. It was forked out of Nova, I guess a little over two years ago. It's a core project that really is about allowing users to provision a volume of a particular size and of a particular type.
While it's typically deployed as part of a larger OpenStack deployment, we have seen certain customers actually deploy it as a standalone management tool just for block storage within their infrastructure. If they're consuming OpenStack in a very piecemeal fashion, that's certainly a possibility with Cinder, but it's not the norm. And finally, one thing to mention is that Cinder is really the management or control plane. It's not sitting in the data path. It's a set of management processes that make sure that you have block volumes created. But once those are created and instantiated, they're connected to their endpoint, most typically a hypervisor, and Cinder steps out of the way and just periodically monitors usage and can handle administrative commands. It doesn't sit in the data path, so it doesn't serve as a bottleneck. So Cinder is really comprised of four core processes that are all interconnected through a messaging bus, and they rely on a SQL database. This is a fairly common paradigm within the OpenStack ecosystem; it's similar to how Nova works, how Glance works, and a handful of other services all have this type of layout. The important thing to call out here, the block in orange, is the Cinder scheduler. That's the brains, if you will, of Cinder. It decides, based on an incoming request, where should I actually provision the storage to fulfill that request? So it actually takes a look at the multiple green volume processes in this picture, each of which has a driver that corresponds to a particular back end. And the name driver is often kind of frowned upon because it connotes, in most people's minds, being part of the data path, as you'd think of a driver for a Fibre Channel card, or for a NIC, or something like that, so it's a bit of a misnomer. But essentially it contains all the logic of understanding how to talk to a NetApp box, how to talk to an EMC box, how to interact with Ceph. All of the logic that actually builds up the abstraction layer is implemented within the driver. But the scheduler is the one that's looking at the overall state of the system, understanding where there is available capacity, where there are different features that users might want to access, and then mapping that to the incoming request. So when I get told, I want a six gigabyte volume that's mirrored and put on a Ceph back end, if that's what you'd like, then the scheduler is what's going to go through all that state, make that determination, and say, I should put it on the middle one because that fulfills the requirements. So what is a volume type? To be explicit, it's an abstract collection of criteria that you can use to describe a particular level of service. The key things to think about here are, number one, a cloud administrator defines what a volume type means to their end users. This is not something that's baked into Cinder. You have flexibility here to define these volume types relevant to your use cases. But in the end, they're utilized by your end users. So your tenants that are going in and actually creating volumes, that's where they'll select the individual volume types. Your end users are not saying, I want a NetApp back end with mirroring, with a 30-minute snapshot schedule. They're not doing any of that. They're simply saying, give me gold, or give me the analytics type of block storage. And all of that is simply abstracted away from the end user.
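Just to ground that picture a bit: each of those green volume processes corresponds to a back-end stanza in cinder.conf. A minimal multi-back-end sketch might look like the following; the section names, back-end names, driver paths, and NetApp option values here are illustrative assumptions rather than taken from the demo environment, so check your vendor's section of the OpenStack configuration reference for the exact keys.

    # /etc/cinder/cinder.conf -- illustrative multi-back-end sketch
    [DEFAULT]
    enabled_backends = lvm-1,netapp-1

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

    [netapp-1]
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    volume_backend_name = cdotNFS
    netapp_storage_family = ontap_cluster    # clustered Data ONTAP
    netapp_storage_protocol = nfs

Each back end then advertises its volume_backend_name, along with capacity and capabilities, up to the scheduler, which is what the volume-type matching described next keys off of.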
One thing to note is that, in most cases, you can actually retype a volume. So if you started with bronze and wanted to move it into a gold environment, assuming that's a viable transition for the underlying storage, then you can actually do that retyping after the fact. So you're not locked into a decision in most cases. Now, as I mentioned, these are completely defined by a cloud administrator. The typical thought, going back to the networking analogy, is gold, silver, and bronze; those are your three tiers. And while you can certainly do that within Cinder, you can also go with a more use-case-driven approach, say archival, analytics, streaming, database, those types of notions. So it doesn't have to be tiers. Again, this is really a flexible and very administrator-friendly construct within Cinder, so it's not going to force you into a particular model that may not make sense for your use case. And for that matter, you can name them cats and dogs and ducks if you want. The name really has no meaning at the end of the day. What matters is which underlying functions within the storage you want to get at and expose to your users. So how is this all used? Your end user comes along, as I mentioned, maybe in Horizon or at the command line, and they're defining their topology. Maybe they're writing a Heat template and saying, I have this particular Cinder volume that will attach to this instance. But at the end of the day, they're saying, I want a 520 gigabyte silver volume. So how does this all work together? The first thing that happens is the Cinder scheduler goes back to the database and says, tell me what silver means. The user specified the type and the size. The scheduler goes and learns, OK, for this particular setup, silver means they want deduplication on a NetApp controller to be true. And the actual back ends are reporting their current state: not just how much available capacity do I have, but also, my back end has dedupe turned on. Maybe 30 minutes ago it wasn't turned on and an administrator came along and turned it on. Or you've got mirroring relationships that may be established and may get disconnected in the background. So if you wanted to look at the individual driver state and make decisions based on it, that's certainly a possibility. All of that information is collected from the back ends, from the database, and from the user request, and the scheduler then determines, OK, based on all of that, I should provision that 520 gigabyte silver volume, and it's going to end up on back end B. So the logic of how volume types are applied in the provisioning process is really done by the scheduler. Now, as you can imagine, each of the different vendors that have done integrations with Cinder have looked at surfacing features that are unique to them up into this framework so that their end users can get access to them. But there's kind of a challenge. This being an open ecosystem, when we were looking at this concept in its very early days, we wanted to have a standard set of capabilities that individual Cinder drivers would export. So if we could come to agreement on the definition of what dedupe meant across EMC and NetApp and IBM, and we could agree on what mirroring meant, and we could agree on these things, then we'd have standard extra specs that everyone would use.
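From the tenant's side, that whole flow is hidden behind a single request. As a rough sketch, assuming the Icehouse-era python-cinderclient syntax (later clients use --name rather than --display-name), it looks something like this:

    $ cinder type-list                                    # the admin-defined catalog the user picks from
    $ cinder create --volume-type silver --display-name app-data 520
    $ cinder show <volume-id>                             # volume_type should report "silver"

Everything else, looking up what silver means, matching it against what the back ends advertise, and picking back end B, happens inside the scheduler.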
The problem is that everyone has a slightly unique view and object model and abstraction, so it's really hard to get a standard definition for a lot of these advanced features baked into Cinder. So the approach that was taken was to start with a set of default capabilities that would be the basic things, and then a vendor could go back and advertise specific things, and those specific things are what are called extra specs. You can see here on the chart, basically the name of the back end, the vendor, the driver version, and the actual protocol that will be used on the back end. Those are all fairly consistent across the drivers and can be exported and used when you define a volume type. So what do extra specs look like? Like I said, these are very vendor specific. This list is not meant to be an exhaustive list of what NetApp can do or what any individual vendor can do, but just to give you a flavor of what this looks like, I brought forth the NetApp-specific examples. So we've got here: what's the RAID type underneath in the individual storage? Is it RAID 4, or is it RAID-DP, a unique scheme that NetApp has? What's the underlying disk type? Is it a flash drive? Is it spinning media? If so, is it SATA or is it Fibre Channel attached? Is the underlying volume thin provisioned or thick provisioned? Is dedupe flipped on? So it's booleans and strings that basically get expressed in key-value pairs that are hooked into a volume type. And when you specify what a duck volume type means to you, it's literally just a list of these key-value pairs. I've got the reference here to the OpenStack documentation page; the configuration reference guide for each of the vendors typically has a table where these extra specs are defined, so that you can figure out, I'm using HP storage or I'm using NetApp storage, what are the actual capabilities that I can connect to? Now, there's also a concept within Cinder called QoS specs, or quality of service specifications. They allow you to define more of the traditional data path requirements that you might have. So think of, I want to limit the overall IOPS for a particular volume to be at level x. And maybe that's because my storage network is shared across multiple tenants, and I want to make sure that the traffic from a gold user doesn't get too dominant over a bronze user. You may want the hypervisor to actually enforce those limits, and the QoS spec concept allows you to specify what those limits are and where you want them enforced. So you have the choice of doing it at the hypervisor, doing it at the storage subsystem, or having it done at both if that's of interest. For the front-end specifications, the standard fields that are there today, it's currently only supported with libvirt and KVM as the hypervisor. It makes use of a libvirt feature called iotune that was added, I think, a year and a half ago, so it's in most major distributions; I believe it came in around version 0.9.8 of libvirt. Essentially it allows you to limit on either throughput or IOPS, so you can specify an overall total of IOPS, or read- and write-specific limits. In one of the examples I've got later, if you think about an archival workload, maybe I want to allow you to write whatever you want, but reads need to be limited, because maybe it's going to tape or something on the back end.
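Mechanically, both kinds of specs are just key-value pairs attached to objects. Here is a minimal sketch of the CLI side; the NetApp key names are illustrative assumptions (the authoritative list is the vendor table in the configuration reference), and the standard front-end keys are the iotune-backed ones: total_iops_sec, read_iops_sec, write_iops_sec, and their bytes_sec counterparts.

    # extra specs: vendor key-value pairs hooked onto a volume type
    $ cinder type-key gold set netapp_disk_type=SSD netapp_thick_provisioned=true

    # QoS specs: data-path limits kept in a separate object
    $ cinder qos-create example-qos read_iops_sec=50 write_iops_sec=1000
    $ cinder qos-key <example-qos-id> set consumer=front-end   # enforce at the hypervisor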
For the limits that get defined and enforced at the actual storage subsystem, again, this is a vendor-specific field. We've got two vendors today, HP and SolidFire, that have gone and defined specific fields within the QoS spec structure; I've got them listed there. NetApp and Huawei have actually defined QoS extensions within their Cinder drivers, but they go through the extra specs framework. And this is more a consequence of OpenStack projects being living, breathing entities: depending on when you contribute code, you use whatever frameworks are there at the time. So it's not that these vendors don't have support for quality of service, it's just that they've chosen a different way to do it, and perhaps over time they'll look to move under the emerging standard framework. So what do these look like? I've talked about them a lot, so let me give you some concrete examples. Going to the canonical gold, silver, bronze, and taking performance as the reason why I would want to build up a differentiated set of services: let's say gold means I'm running on a flash drive and it's thick provisioned. So basically, let's remove all the performance bottlenecks that we can think of in this particular example and make gold screaming fast. Silver: I don't really care where it ends up, but I'm going to limit the overall IOPS, perhaps to 500 per second. And just to give you context, when you go to Amazon and you ask for a standard EBS volume, they're actually rate limiting your IOPS to about 100 per second. So we take that forward in the bronze example here, and let's say we're going to force that onto LVM, which is using direct-attached storage, and limit the IOPS, again, just like Amazon would do for you, at 100. So that's one way you could slice this up and define it. Going more toward the use-case-driven example, let's say again with archival: this is where I know the back end is mirrored, I know compression is on because the data is going to live for a long time, and I want to limit the amount of read workload that goes through it. It really comes down to your use case; what makes sense for Hadoop may not make sense for streaming versus database workloads. And this is really more for your storage admin, who knows your workload well, or who can instantiate it and look at the performance characteristics at the back end through a variety of different management tools, and figure out what the right mappings are for your particular environment. But the framework of extra specs and QoS specs allows you to mix and match. And when the scheduler gets a request for archival, it's literally going to go through the list of capabilities that the individual drivers are advertising and do a Boolean match to find the common set. So you can envision that if a request can't be fulfilled, because the requirements are such that there's no back end that has mirroring and compression turned on, then that request for that volume will simply be rejected: sorry, we can't give that to you. So, switching over to the demo here. Just give me one sec while I move over here. What this demo is going to show is: I've got an all-in-one DevStack VM running the latest from the Icehouse release, so pretty current. We're going to go through and create gold, silver, and bronze volume types. We're then going to associate particular extra specs and QoS specs with those different volume types.
And then we'll go and create volumes of each of the respective types. One of the types has a particular QoS limit on it, and we'll actually go into the logs and see that the iotune parameter that KVM uses to enforce the QoS spec was actually set. So let's kick this off here. Where is my, there we go. All right, so the first thing we're going to do is log in as an administrator, because you have to be an admin to set up volume types. So the first thing I'll do is create a gold one, then a silver one, and then a bronze one. And you'll notice, I'll pause it right here, you'll notice that I didn't actually define any of the QoS specs or extra specs at this point. Literally it's just: create a container with a name. It doesn't have any predefined meaning to Cinder. We'll then continue and say, OK, let me go ahead and create, there's a separate object within Cinder for the QoS specs. So I'll create an object called silver QoS specs, where I'll limit the overall IOPS to 500, again pulling from the prior example. In here you can see it's chosen a back-end consumer type. That back-end consumer means, again, that it's the storage subsystem that's expected to enforce this particular limit. In this case, we're actually going to switch that to be a front-end consumer, such that the hypervisor will enforce the limit. But you can see here, again, the key-value pair that I've specified is literally stored as JSON in the database, so it's very straightforward to understand what values are there and to edit them. There's not currently support in Horizon for editing QoS specs or extra specs, so that's why I'm showing it via the CLI. It's not just because I'm trying to hack this up; today that's the only way you can actually do this. There is a review that almost made it into Icehouse which was going to give GUI support to the extra specs framework, and I think I saw on Friday that it's hopefully going to go into Juno very soon. So we'll rename the first thing I created there and create a new bronze one, again just changing the IOPS limit down to 100 to match the example. And then we'll go and start setting extra specs. So the first thing we'll do is say, for gold, we want it to be provisioned onto a back end named cdot-nfs. cDOT is an acronym for clustered Data ONTAP, which is the operating system that NetApp uses on its FAS series. So if you think of the normal box that people associate with NetApp, that's going to be running the clustered Data ONTAP operating system. The next thing we'll do is set a second key, for the bronze type. We'll specify that I want bronze volumes to be sent to an LVM back end, the logical volume manager. So this is running on direct-attached storage; it's going to be created within the virtual machine, since this is a DevStack VM. The next thing we need to do is associate the QoS objects that we created here in this step with the IDs of the volume types that we got back when we created them originally. So we'll just issue this associate command. You can actually have a single quality of service spec associated with multiple volume types, and there's just a single associate command, and a disassociate command that's the negation of that. So we'll simply associate those with the volume types as soon as I can click play.
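Roughly, the admin-side commands in the recording boil down to a sketch like the one below; the QoS spec names and the back-end name values (cdotNFS, LVM_iSCSI) are illustrative stand-ins for whatever the back ends in the demo environment actually report, and the IDs are the ones returned by the create commands.

    $ cinder type-create gold
    $ cinder type-create silver
    $ cinder type-create bronze

    $ cinder qos-create silver-qos total_iops_sec=500
    $ cinder qos-create bronze-qos total_iops_sec=100

    # pin gold to the NetApp back end and bronze to LVM via the standard extra spec
    $ cinder type-key gold set volume_backend_name=cdotNFS
    $ cinder type-key bronze set volume_backend_name=LVM_iSCSI

    # link the QoS spec objects to the volume types by ID
    $ cinder qos-associate <silver-qos-id> <silver-type-id>
    $ cinder qos-associate <bronze-qos-id> <bronze-type-id>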
So we'll just copy the IDs for, again, the QoS spec object and the volume type object. We'll do the same again for the bronze QoS spec and the bronze volume type. The next thing we'll do is go back and look at the current list of extra specs that we've defined, as well as the current list of quality of service specs. So now that we've set it up, I've given you a pretty simple example here of just forcing volumes to be sent to a particular back end, and these are the IOPS limits that we defined here. We've got one final thing we need to do. We need to change the consumer here from back end to front end, so that the hypervisor is responsible versus the storage infrastructure. So the next command we'll issue is just to change that value, and the value there is actually set by a key called consumer. So a little quirky there, but that's how it's done. So we'll come in and we'll say set consumer equals back end for a particular QoS spec ID. Oh, I messed that up. This is why I'm not doing it live, by the way, because I know I got it right when I did it before. So now we've got both of those set, or we're still doing the second one here. But essentially, once we've set up everything from an administrative point of view, we're going to quickly turn to a tenant view and say, well, now imagine I'm somebody else on the team. I'm going to log into Horizon, ask for a volume to be created, and attach it to an instance. So I'll go to the Volumes tab, and there are no Cinder volumes there today, so I'll come in and say, let me create a gold volume. I select the volume type from a dropdown list. Actually, let me back that up so you guys can see. So you can see here there's a choice when you go to create a volume, this type field. That's what actually links back to the objects that we created. You specify the size here and you just create. And then it goes through that workflow process that I talked about before. Currently with Horizon today, when you go to boot a new instance, you get several choices as to whether you want to boot that instance from an ephemeral disk or whether you want to boot it from persistent storage as the root disk. So there are options like boot from image (create new volume) or boot from snapshot (create new volume), and in that case you're actually creating a Cinder volume that serves as the persistent disk. There's no support in Horizon today to actually specify the volume type in the web GUI as part of that, but that's something that we've looked at going back and contributing into Horizon. You could envision it really being just another dropdown that would appear when you made those selections. So we go through, say gold, and I only want a gig for this particular use case. It goes through the creation step. At this point it's going and talking to the NetApp controller and saying, create me a one gigabyte volume on that particular back end. So we'll just go through it. We'll create a silver volume, and we'll create a bronze volume as well. So again, your users are not doing anything different except for selecting the type of the volume as part of the operation. Once we've done that, we finally get back that everything's ready to go. So the next thing we'll do is look to actually attach that volume to an instance. I fired up an instance ahead of time; you can see it in the dropdown list there.
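Sketching that last admin step, plus the tenant-side equivalent of what the Horizon dialog does under the covers (again Icehouse-era client syntax; later releases rename --display-name to --name, and the instance and volume IDs are placeholders):

    # admin: flip enforcement from the storage back end to the hypervisor
    $ cinder qos-key <silver-qos-id> set consumer=front-end
    $ cinder qos-key <bronze-qos-id> set consumer=front-end
    $ cinder extra-specs-list       # volume types with their extra specs
    $ cinder qos-list               # QoS specs, now showing consumer=front-end

    # tenant: create a typed volume and attach it to a running instance
    $ cinder create --volume-type gold --display-name gold-vol 1
    $ nova volume-attach <instance-id> <volume-id> auto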
So it's actually going to go off and do the logic behind the scenes. In this case, I attached the bronze volume, so it's coming from LVM, and LVM is going to export it over iSCSI. So it's talking to the hypervisor, creating the target, setting up the initiator, and doing all of that so that the hypervisor can talk to this particular volume. And just to show that the quality of service specifications we had put in are respected, let me back it up here. The QoS specs that we had defined for bronze, if you remember, had a limit of 100 IOPS. So looking at the Nova logs here, you can actually see that there's an XML file that gets created, and the IOPS limit here is specified in this total_iops_sec setting. So you can see where I actually defined it in text on the CLI as an administrator, and now, as a user, when I've come in and selected to attach a bronze volume, it's enforcing the limit at the hypervisor when that attachment occurs. So I think I've got a few minutes left for questions if anybody has any. But that's the general concept there. Volume types give you a very easy way for end users to pick from, essentially, a catalog of services, and those services can be differentiated on a price basis if you hook in a metering and billing infrastructure. The volume type information is sent through metadata in Ceilometer, so it's available for you to query in that database. But it really gives you a flexibility that you're not going to get from an Amazon or a Rackspace today. So thinking about how you might leverage this in a private cloud environment, or even a public cloud environment, it's a pretty attractive extension to Cinder. So I'll pause there. Any questions from the crowd? When you say a user can access a particular volume: so that is not done under the scope of the volume type, because it's more a description at a generic level across a tenant. It doesn't have that scope of a per-tenant or even a per-user restriction within a project; I think that enforcement is done in a different place. Well, you do have scoping to individual tenants, so it's not like any volume can be attached by anything in the public cloud. There is actually checking to make sure that the user who is trying to attach a volume is under the same tenant scope and can actually facilitate that communication. But it's not under the volume type construct that that's done. So, yeah, we've got somebody here at the mic. Yeah. I had a question about defaults, right? So we're talking about enforcing, but as far as I know, there's no way to force people to use a type when they're creating a Cinder volume. So the default is essentially to have none, which could very well still create a volume. Sure. So it really comes down to how you allow users to interact with the cloud. If it's simply exposing Horizon, yeah, you're right that today there's nothing that limits someone from asking for a one gigabyte volume of no type. However, if you think more holistically and say, I define a Heat template for this particular workload that gets deployed, then I have a natural place to create that linkage. So perhaps in a very self-service dev/test type model there may not be an explicit way to do that, but for particular types of workloads where you want to define the topology and policies around that topology, this provides you a hook to do that. Thanks. Sure. Any other questions? Sure.
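If you want to verify that enforcement yourself on the compute node, one way (assuming a libvirt/KVM hypervisor, and an instance name you'd look up with virsh list) is to dump the domain XML and look for the iotune element, which should carry the limit from the QoS spec, along the lines of:

    $ virsh dumpxml <instance-name> | grep -A 2 '<iotune>'
          <iotune>
            <total_iops_sec>100</total_iops_sec>
          </iotune>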
So the extra specs, you said, were the vendor-specific capabilities? Correct. If what I gathered was correct, and those are simply exposing what the characteristics of the underlying storage are, then why wouldn't the storage vendors simply pass those variables forward? Pass those variables forward? If I've got my NetApp, if the volumes are being backed by NetApp, and those volumes are deduplicated, why wouldn't NetApp just pass that variable forward rather than you having to supply it yourself? So the NetApp storage actually does advertise that it has dedupe enabled on the actual controller. It's that an administrator has to create the linkage between gold, silver, bronze, or dog, cat, bird, and that capability. So it's not that you are having to go to your storage and, well, perhaps I'm not answering your question. Are you saying that by defining the extra spec to say I want dedupe, it should then go to the controller and flip on a dedupe switch, whether it's dedupe, the RAID type, whatever? Sure. So I've got to set it on the back end, bring that forward, and then let it flow into QoS and so forth. Yeah, I think it should be a fairly straightforward extension to do that. If you bring up the scheduler logs and you look at the periodic updates that come in, the different storage systems are advertising on a periodic basis what they can do. It shouldn't be a far stretch from that to actually pre-populate at least a set of choices of what's available, versus having to manually go through and do that. But that's not the way it works today. That's something that certainly could be done; all the fundamentals are there to enable it. All right, yeah, go ahead. Yeah, so what happens when you get a retype request is it'll say, I'm going from bronze to gold, and that means I'm changing a disk type. So at that point it will go back and consult the scheduler, almost as if a new provisioning request were coming in, and say, is there space on a system that has SSDs and has all the properties that gold is defined to be? If that's true, then Cinder will actually take care of doing that migration on a per-driver basis. Now, let's say both systems were NetApp back ends: if the environmentals were set up correctly, that could be as simple as cloning the file from one directory into another directory, and that could happen almost instantaneously. If you're going from Ceph onto EMC, there's no way for those two things to just magically know what their formats are, so it'll have to do a copy over the network in order to facilitate that. But that will happen as long as all of the scheduler heuristics line up; again, it's as if you're provisioning a new volume, it's just that the source for that volume is an existing volume rather than empty. So I think we've got time for one more question if there is one in the crowd. Yeah, there in the back. You may want to come up to the mic; I don't know if I can hear you. So, are volume types only set at the creation of the volume, or can they be set afterwards? And can we change the volume type? Yeah, so as I just mentioned, when you create a volume, you can specify no volume type or an explicit volume type. And then there's a retype command, I think it's currently only available in the CLI, that allows you to switch the volume type after the fact. If you later determine that the contents of this workload really need to be on a flash drive and they weren't to begin with, then you can use the retype function to achieve that.
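As a rough sketch of that retype path (the --migration-policy flag controls whether Cinder is allowed to migrate the data between back ends when the new type can't be satisfied in place; exact flags may vary by release):

    # ask Cinder to move an existing volume to the gold type,
    # migrating between back ends if the scheduler says that's required
    $ cinder retype --migration-policy on-demand <volume-id> gold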
All right, well with that, I think we're out of time. But feel free to catch up with me afterwards. But thanks for coming.