Good afternoon. Thank you for coming. If you are interested in storage in OpenStack, then you're in the right place, because we're talking about Cinder and Swift and how they work together. By way of introduction, my name is John Dickinson. I'm the PTL for Swift, and I work for a company called SwiftStack. When we were putting this together, preparing for the summit, John and I were talking and realized there's a lot of confusion about how the storage projects inside of OpenStack work together. So we really wanted to clarify a lot of that. So my name is John Griffith, the other John. I work at SolidFire, and I'm currently the PTL for the Cinder project. That's the block storage side. So as John said, we wanted to get together and talk about how the two different storage platforms work together. And then also, there's a lot of confusion amongst a lot of people about what the differences are, so we're going to talk about that as well. So the reality is we need something new. And we all kind of know this, which is why we're here and looking for some sort of storage system. Applications have really changed recently, and I mean recently in the past 10 or 15 years or so. We need to move beyond something that is just silos of data storage, and we need to have the flexibility and the scale that's demanded by the applications in use today. So if you need to go beyond the capabilities of a single hard drive and you want to get beyond the pain points caused by large RAID volumes, then the reality is you need something new. And we know that there are people writing new kinds of applications all over the world, building massive websites at a larger scale than ever before. We've got the proliferation of all these devices, mobile devices, laptops, all of this kind of stuff. And we need to make sure that our infrastructure changes to meet that.
And I think the fundamental promise of the cloud, one of them, is that you have the power to tailor your infrastructure exactly to your use case. We've seen this happen a lot with the compute side of the house, and I think we're also seeing it happen with the storage side of the house. The basic idea here is you need something to abstract your data away from the media upon which it is actually stored. And that's the fundamental thing that I think both Cinder and Swift bring. We have a system that is able to break that connection, which means that even if a particular piece of hardware fails, your data is still there. You're not intrinsically tied to what's going on. You know, in the past, people have always kind of assumed that if you can drop it on your toe, it's a thing. And if you've got these ideas, you put them on something that you can drop on your toe, like stone tablets or books or something like this. Think back a thousand years. Somebody wanted to kill an idea. What did they do? Well, they rode into town and they burned all the books. You destroy the media and then you have destroyed the data, the information, right? Well, what's really interesting now is that we're faced with a rapid proliferation of different types of media that come and go, and new protocols. You need to be able to swap those out and grow over time. So what we need is something fundamentally different from what traditional storage has been able to give us. And that's why we have systems like Cinder and Swift. And with that growth, the reality is that you have to solve this with distributed systems. Why? Because one storage system is not big enough to handle all of your storage needs. You're gonna run out of space if you put it on one hard drive, or even if you try to keep it on just one server. So you need something that goes bigger. How do you do that? You have to put it on more than one system.
Therefore you need something that's distributed. And in distributed systems, there's a principle called the CAP theorem, which stands for consistency, availability, and partition tolerance. What the CAP theorem says is that you can choose two of these. So let's go over them very briefly. Partition tolerance basically means that you can withstand something like a timeout or a network failure in your system: when two pieces of your distributed system are trying to talk to one another, they still work even when they can't reach each other. Consistency means making sure that all of the pieces of your distributed system have the same view of the world. And availability means that the system is always going to respond to requests when it gets them. So you can choose two things, and you have to choose partition tolerance. This puts you into two buckets. You can either have something that is eventually consistent, which means that even when you have a hardware failure, the system is still going to be able to respond to your requests. This is what we see a lot with object storage, and I gave a couple of other talks already this week about Swift and how that's put together, and we'll cover it a little bit here as well. Or you can have a strongly consistent system. This is what we're really used to for things like boot devices for your servers or building databases and things like this. You need to make sure that you have strongly consistent storage, generally block storage, to build out things that can't tolerate change happening underneath them. What you sacrifice in that case is that when half of your distributed system is unavailable, then the whole system is unavailable. So that's kind of where we fit into these two schemes. Swift is designed as an eventually consistent system.
Cinder is designed to provide you strongly consistent block storage for OpenStack Compute. So I want to go ahead and talk a little bit about Cinder, give you a little more background, and talk about how it came to be, what it replaces, and how things work. Basically everything in OpenStack, and we talk about this all the time, is a pool of resources, right? It's no longer stand up a disk array, or stand up a server, or stand up whatever this might be. It's: just provide a pool of resources for the end user to actually check out and utilize on their own. So the two important things here are the first two items on this list. We're talking about pools of resources and we're talking about self-service, and that self-service is from an end user's perspective. One of the biggest things about Cinder, as with most of OpenStack, is that we're abstracting out all of the hardware and everything else that's behind it. So that pool of resources could be a completely heterogeneous set of different back-end devices and it doesn't matter, because we're abstracting all of that and making it easier and simpler for the end user to use and to allocate on their own. So this is a quick comparison of the traditional model versus, let's call it, the OpenStack model using Cinder. You can see, traditionally it was always that you had some storage device that was dedicated to a specific server or maybe a specific user. For example, you had a server that was running a MySQL or an Oracle database. You had a certain block storage device with certain characteristics that you would deploy, set up, and connect to that server based on those needs and those requirements. The problem with that is that you end up underutilizing capacity, right?
Because what happens is you try and plan ahead, you try and build it for what you're going to need in the future and everything and it's pretty monolithic, it's pretty static, you set it all up and you're never really going to use it to its full potential. Either that or you go to the other end of the spectrum and you overutilize it and that's even worse because then you start to have performance issues and you have all kinds of things that you run into and you've got a serious heavy investment and heavy effort that you have to do to actually extend that or expand it or actually migrate it all off onto something, maybe a bigger device. So in the old days of disk arrays and things like that, that's the kind of thing that you had to worry about a lot. The other thing is when you start trying to do things like take a single backend device like a disk array in a traditional environment and share it across multiple users and multiple use cases, you start to run into all kinds of things in terms of your planning, how to deal with capacity, and most of all, as you get more and more workloads, trying to deal with performance, different applications, different workloads, they're going to have different performance requirements. And the other thing is some of those requirements or some of those workloads are actually going to take more performance away from your device than others and they may actually steal performance from other people that need them as well. So that creates a significant problem. And then one of the things I love to talk about all the time is the whole going back to the self-service thing, in the old days or in the old model, you had to go through and you had to actually do things like go through IT or go through a storage admin or go through a requisition process or whatever it might be. 
And you had to actually set things up in requests or do a purchase order or whatever it was to actually get this storage, have it set up by IT, have it configured, get access, and then have them set it up for you the way you want, and hopefully everything comes out right, so on and so forth. Either that or you just get what they give you. The bottom line is that in that model, everything is very static, very monolithic, it's not flexible, and it's just kind of a pain. So what we try and do on the OpenStack side of things, in Cinder, is make all of this more agile, more flexible, and also scalable. What you talk about now is a pool of devices, and anybody can use them. We do scheduling inside of Cinder so that you can say, hey, I need a volume that has these characteristics, I need storage with these particular characteristics, and it can go out to your pool of resources and figure out how to allocate a volume on the correct device so that you actually get what you need. You don't get more than what you need, you don't get less than what you need. So that's the idea. The architecture and the design are built completely for multi-user environments and multiple use cases. Like I was talking about before, you used to have to worry about, I can't mix this workload with that workload on this device. The idea is that by using a pool of resources and being able to split that out and select differently, you can eliminate that problem. And then of course, back to self-service. That's probably the biggest thing: letting the application developers, or whoever it might be, actually go ahead and make these provisioning requests and everything else on their own. So I wanted to talk a little bit, for those that aren't familiar, about how Cinder is made up and what some of the components are.
So the idea here: if you look at this and you're familiar with OpenStack, or if you've been in other talks this week and seen some of this stuff, you'll notice that this looks really familiar, right? Because we all try to follow a pretty similar design here. At the top level, we have an API. Basically all your requests come in, whether that's through the python-cinderclient, through the dashboard (Horizon), through curl requests, or something you write on your own. Those requests come into the API and get put out on the message bus; the default for that is RabbitMQ. The request gets picked up by the scheduler service, and the scheduler service will then send it down as an RPC call to one of the volume services. So this is what I was talking about before: we have a pool of resources now instead of just a static, monolithic device. You can have multiple backend devices all set up and configured inside of Cinder, and the scheduler is gonna figure out where to send that RPC call to get it to the right device. So that's the idea there. One of the things that I've realized is a lot of people have never actually looked at this from a user's perspective on the dashboard. So here are a couple of screen captures to show you what's available from the GUI, the web interface for OpenStack. This is pretty simple. This is the volumes page. You get a summary of what volumes exist, what their status is, their size, availability zone, and so on. You can see here we have what the type is and whether they're attached.
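To make the scheduling idea concrete, here is a toy sketch in Python of the filter-and-weigh approach Cinder's scheduler takes: filter the pool down to backends that can satisfy the request, then pick the best candidate. The backend names, capability fields, and weighing rule here are all made up for illustration; the real scheduler has pluggable filters and weighers.

```python
# Toy model of a filter scheduler: filter out backends that can't satisfy
# the request, then weigh the survivors and pick the best one.

def schedule(backends, size_gb, volume_type=None):
    """Return the name of the backend that should host the new volume."""
    # Filter: keep only backends with enough free space and a matching type.
    candidates = [
        b for b in backends
        if b["free_gb"] >= size_gb
        and (volume_type is None or volume_type in b["types"])
    ]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weigh: prefer the backend with the most free capacity.
    return max(candidates, key=lambda b: b["free_gb"])["name"]

# Hypothetical pool of two configured backends.
backends = [
    {"name": "lvm-1", "free_gb": 500, "types": ["standard"]},
    {"name": "ssd-1", "free_gb": 200, "types": ["standard", "high-iops"]},
]

print(schedule(backends, 50, "high-iops"))  # only ssd-1 offers high-iops
print(schedule(backends, 50))               # lvm-1 has the most free space
```

The point is that the user only states what they need, and the scheduler decides which device in the pool actually provides it.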
So the whole idea of this storage from Cinder's perspective is that we're creating storage that can be consumed by Nova and by compute instances, and that consumption can either be by attaching them as additional block storage devices, or by attaching them and using them as the root partition for the boot device on the instance. So go to the next slide. This is the create process. You can see it's fairly simple; there's not a whole lot to it, right? We go through and we give it a name, select a type, give it a size, and then there's the option for the volume source. There's two different things you can do here. You can say, I just want a raw volume with no data on it, without any information or anything like that. Or you can choose to create it from an image, and what that means is it's gonna actually go out to Glance, download that image, and make that volume bootable. So then you can have persistent instances. (From Swift.) From Swift, yes. So then you have persistent instances, the whole ephemeral versus persistent thing. And then one of the other options is from volume. The other thing that you can do is actually clone volumes inside of Cinder. So if you have a certain bootable image already written down to a volume, or you have a development environment, a database, whatever it is, and you wanna replicate it, you can go in and just say, okay, clone that volume and give me all of that same information again. And then of course we also have snapshots. Oh, wait, sorry. Over here, these are some of the extra things that you can do with a volume, and this is where things really start to diverge a little bit, right? Because when you look at the old way of doing things, trying to do things like extend a volume can be a little bit difficult on an old system. So we can let you do things like extend a volume.
We can let you, if it's bootable, go ahead and launch it from here, so then you have an instance running on that volume inside of Nova. The edit attachments option is for when you have existing instances already up and running: you can go ahead from here and just attach it, and you give it the mount point and attach it as /dev/vdb or whatever the path might be. And then of course we let you do snapshots. The snapshots currently are tied to the parent volume. You can't delete the volume if you currently have a snapshot, which causes some people some heartache. And you also can't actually use the snapshot as if it were a volume, so it's sort of a backup device. The reality is that what we use it for is a vehicle to get things done more efficiently. A snapshot is typically very fast; you get a consistent copy, and then you can do things like migrate or clone or duplicate or whatever you might wanna do. So to talk about some use cases: most people are pretty familiar and have a pretty good idea, but typically what you're looking at with Cinder, the buzzwords to keep you thinking here, are when I need additional persistent capacity for an instance that I've built. So I have an instance in Nova, I've got my ephemeral disk and everything else, but now I wanna attach something else and put a whole bunch of data on it, right? It's just like adding another drive to your instance. And you can do this dynamically, on the fly. Add them, take them away, move them, attach them to another instance, whatever you wanna do. You can also use it, of course, as I said, for the actual root storage. So you can put your root partition there and actually boot your instance off of it. Use it for file systems, databases, anywhere that you need a raw block device. And of course, like I said, you can attach multiple Cinder volumes to a single instance. Thanks, John.
So if you've got to put a database someplace, are you gonna use Cinder or Swift? Cinder, that's right, good. Just making sure you're paying attention and awake. So Cinder's really great for deploying your volumes and your block devices and making sure that those are dynamically attached, portable, snapshottable, and all the really cool stuff that you need directly attached to your compute. So why do we have Swift? If we've got all that kind of stuff, then why do we need Swift? Swift is still vitally important. And the reason is because you need to have application storage that is globally available. You need to be able to directly talk to your end user. You need the applications to be able to directly talk to the storage for their application assets. You want that storage system to offload some of those hard problems of storage. The people who are deploying the IT, like the IT group, your sysadmins who are out there building your storage clusters, like I said earlier, they're gonna want a lot of agility to be able to respond very quickly to the needs of their applications, but also have shared storage across different applications. And then finally, and this is one of my little soapboxes that I like to talk about a lot, and one of the reasons that I like OpenStack so much: open systems matter. You need to have ownership of your data, and you need to be able to know who and what is touching every part of your data, from the hardware to the storage engine all the way to the client tool chain. You can only do that with open systems, and that's why we're here with OpenStack. So I wanted to talk about a couple of really cool use cases, just three of them briefly, to maybe give you a real-world picture of where Swift is being used today. The Pac-12 was something that I talked about a little earlier this morning, and it's really kind of cool.
They were using a traditional SAN device for their application storage and they were running out of space, but they still needed to have highly available storage because they needed to be able to access this all the time. When they were running out of space, what they did is they decided to archive some of that to tape and then stick it on the shelf, which makes it pretty much not available. They needed to solve both of these problems, and so they built out their Swift cluster and they migrated their stuff from their tape archives and their SAN to their Swift cluster, and what that gives them is a lot of new power and flexibility in what they can do. So when the Pac-12 is recording video of 800 sporting events a year, they can actually respond very quickly to that and know that their storage is available, but they can even take that old stuff and introduce new revenue models and things like that, because it's now available instead of gathering dust on a shelf someplace. So if you wanna go see some old football game or something like that, then maybe it's possible now with available storage that you can just ask, and they don't have to worry about getting a tape robot to plug the tape in the right place and all this kind of stuff. That's not a scalable model, but something that's already storing it in highly available storage does allow you to do that. The next use case I wanna mention very briefly is something I just heard about this week, which was really kind of cool, and I've mentioned it to a few of you already this week. When the Malaysia Airlines flight went down, people were scouring the Indian Ocean for the plane, and one of the things they were trying to do is figure out, well, if a plane crashed, then where would the debris be?
Well, there's this site that's hosted in Australia called adrift.org.au, and they were discovered basically by a newspaper. The cool thing is you can go click on any point in the ocean and it just shows you what the dispersion pattern is for the next 10 years. Really kind of a cool thing; it's really fun to play with. But apparently a newspaper discovered it, and then they got overwhelmed by requests, and they decided they had a couple of options. One, they could continue to build out their web servers and just add more nginx nodes or something like that, and that's a traditional, fine way to do things, but it has extra operational cost, and it's just a little more complex to build that system out. It turns out that their data was already stored in Swift, and so what they were able to do, instead of having to build out their web servers, is just point the browsers directly at Swift, and it automatically worked, and boom, their scaling problems were gone. And this is a really great story. When I first heard about it, I said some stuff on Twitter about it, and I got a response from one of the guys who ran it, and he was saying that Swift has been a great way to help make their site scalable. So it was just really great affirmation. I love hearing that kind of story, and I think it's something that just about all of us can really make use of on almost an everyday basis. The last use case I want to show is a little different one, moving from the video streaming and active archiving sort of use case and the web content one, to this one from Fred Hutch Cancer Research; they were speaking earlier today.
They're in cancer and AIDS research, and as part of that they have to do lots of gene sequencing and store massive, massive data sets. They need to be able to share that internally, but they need to have something that they own, that is gonna last a long time, and that gives them relatively cheap storage across that much data. They were able to build out a Swift cluster and get really, really low price points for that. There were some numbers earlier this week: it's less than two cents a gig for their use case on their large-scale storage, which is really impressive. So how does Swift work? Well, I wanted to point out something very important; well, I'll get to the important part in just a second. The first important part is, let's talk about the API. All of the services in OpenStack use a REST-based API, and so that's a commonality between all of our stuff. In fact, this week we're here talking, and one of the conversations earlier today was how do we make sure we come together as a project and make really good unified decisions on that. This is what a URI looks like for Swift. You have an account, a container, and an object, and this is really just where you place your data throughout the system. An account keeps a list of containers, a container keeps a list of objects, and the objects are where you actually store your data. So let's say I have an account on a Swift cluster and I want to create my images container, and then I want to upload cat.jpeg, because the internet is for cat pictures. So if I upload cat.jpeg, is that going to be stored as an account, a container, or an object? The object, that's right. So this is where your data actually is, and if you're storing, you know, cancer gene sequences, whatever, that's going to be some object someplace. This is what it looks like just to add a new object and to get an object back: simple HTTP verbs and response codes.
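The account/container/object layout described here can be sketched with a toy path parser in Python. The `AUTH_john` account name is made up for illustration (real account names depend on the auth system), but the structure matches the URI scheme: version, account, container, then object, where the object name itself may contain slashes.

```python
# Toy parser for the Swift-style URI layout:
#   /v1/<account>/<container>/<object>
# Split at most 3 times so slashes inside the object name are preserved.

def parse_swift_path(path):
    parts = path.lstrip("/").split("/", 3)
    account = parts[1]
    container = parts[2] if len(parts) > 2 else None
    obj = parts[3] if len(parts) > 3 else None
    return {"account": account, "container": container, "object": obj}

print(parse_swift_path("/v1/AUTH_john/images/cat.jpeg"))
# → {'account': 'AUTH_john', 'container': 'images', 'object': 'cat.jpeg'}
```

A PUT to that full path creates the object, a GET retrieves it, and a GET on just the account or container path lists what it contains, which is all the API structure there is to remember.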
So it's very easy to integrate into existing applications. It speaks the native language of the web, so it works very, very well with existing web technologies like caching and CDNs and browsers and things like that. And this is the important point that I really wanted to drive home, and I think it's one of the biggest differences between something like Swift and something like Cinder. Swift implements an object storage engine. Swift is responsible for making sure your data is effectively stored across a large set of hard drives, and it keeps that data securely stored, durably stored, and highly available. Swift is not a provisioning system for other object storage systems. You do not ask Swift to give you a pool of object storage with vendor object storage system X. That's not the way Swift works. Swift's purpose is to abstract the storage volumes away from the application, such that you have your data and it is very durably stored and highly available. This is the design of Swift; this is how we implemented this object storage system. The client talks to what's called a proxy server. The proxy server implements most of the API and then passes the request on to the storage servers. The storage servers are responsible for abstracting away those storage volumes. So in this case, if a client, let's say a web browser, is trying to talk to a Swift cluster, does the web browser ever directly touch a hard drive? Yes or no? No, it does not. Because the purpose of Swift is to abstract away those hard drives so the client never has to worry about them. This way hard drives can come and go, intentionally or not, and the system still remains available. And the other purpose of the proxy server is to make sure that the storage node responses are correlated and the client gets the right response code back.
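That last step, correlating the storage node responses, can be sketched as a toy quorum check in Python. This is a simplified model, assuming a replica count of three and majority-based success; the real proxy logic handles more cases, but the shape is the same: the client gets one response code that summarizes what happened across several disks.

```python
# Toy model of a proxy correlating storage-node responses for a write:
# succeed only if a quorum (majority) of the replica writes succeeded.

def proxy_status(node_statuses, replicas=3):
    """Collapse per-node HTTP status codes into one client-facing code."""
    quorum = replicas // 2 + 1          # e.g. 2 of 3 replicas
    successes = sum(1 for s in node_statuses if 200 <= s < 300)
    return 201 if successes >= quorum else 503

print(proxy_status([201, 201, 500]))  # 2 of 3 writes landed: 201 Created
print(proxy_status([201, 500, 500]))  # only 1 of 3 landed: 503
```

This is also why a success response means the data is durably written right now: the proxy only says 201 once enough replicas have actually landed on disk.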
So when we're writing out data, we make sure it's durably persisted in several locations, and then you can get that information back. The client knows, as soon as it gets a response code that says this has been successfully written, that it is available for you to use. It's not that it's going to be written later; that's not what eventual consistency means within Swift. It's going to be written right now. It just means that when you have some failures, Swift will recover from them in the background. And so there's no single point of failure in this design: there's no centralized message queue, there's no centralized database, there's no centralized index of metadata or location placement or anything like that. So it's a fully distributed system that really just continues to get better the bigger you build it out. And the last point I want to make about Swift here, I think it's the last point, is that it's optimized for massive concurrency across the entire dataset. So in the example of the adrift.org.au site, it is designed not so that any one request is going to be as fast as possible, but so that it can handle all of those requests pouring into it from all around the world. That's what Swift is really good at. Another use case I really like to talk about sometimes, and I've mentioned it at previous summits, is the fact that Wikipedia is storing images in their own Swift cluster, because, you can think about it, they need that massive concurrency across all of their articles. So I want to talk briefly about data placement within Swift and how we do this, because as an implementation of an object storage system, that's kind of an important piece to understand. So this is a little more technical info, but we use something that is called consistent hashing.
It sounds like a complicated topic, and there is a lot that goes into it, but we're all really familiar with it. Imagine a set of encyclopedias. You know how to look up something in an encyclopedia: you find what letter it starts with, then you go to that volume of the encyclopedia, and that's how it works. That's just a basic, simple hashing algorithm. If you want to get a little more complicated, then you can use what's called a hash function that evenly places things throughout a single key space, and so you hash a value and you're given an output, and that output is just some really big number. Now, the reason consistent hashing is called a ring is because if you go to the biggest number you could possibly get and add one, well, then you just loop back around to the beginning. So basic consistent hashing means that you hash something and it gives you some point on this ring. Then, for example, you can walk clockwise around the ring, find the first storage node at or after that point, and that node is now responsible for dealing with that storage. We've optimized this a little bit inside of Swift to have a little bit better data placement, because you want something that's a little more reliable and robust against failure. You don't want all of the copies of your data to be stored on the same hard drive. You need to make sure that it is stored across different failure domains: drives, servers, racks of servers, even entire data centers. And so this is very important to understand: when Swift is going to place a piece of data, it's gonna store multiple copies of that data in the system, and it chooses where to do that based on this principle that we call "as unique as possible." What that means is that if you have more than one server, we're gonna make sure that the copies are stored across hard drives on those different servers.
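The basic walk-clockwise-around-the-ring lookup can be sketched in a few lines of Python. This is only the textbook form of consistent hashing; real Swift layers partitions, replicas, and the as-unique-as-possible placement on top of it, and the device names here are invented.

```python
import bisect
import hashlib

# Toy consistent-hashing ring: hash each name to a point on the ring,
# then walk clockwise to the first device at or after that point.

def ring_point(name):
    """Map a name to a (very large) integer point on the ring."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

def build_ring(devices):
    return sorted((ring_point(d), d) for d in devices)

def find_device(ring, obj_name):
    points = [p for p, _ in ring]
    # bisect finds the first device past our point; % wraps around the ring.
    i = bisect.bisect(points, ring_point(obj_name)) % len(ring)
    return ring[i][1]

ring = build_ring(["disk-a", "disk-b", "disk-c", "disk-d"])
print(find_device(ring, "images/cat.jpeg"))
```

The useful property is that the same object name always lands on the same device, and adding or removing one device only moves the objects adjacent to it on the ring rather than reshuffling everything.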
In fact, we're gonna make sure a copy is not stored on the same hard drive, and not stored on the same server if you have more than one server. In this example here, if you have just two racks, it's gonna make sure it uses both of them. And if you're just in one region, naturally everything's gonna be there. It will use as much as you have available, and in this way it means that you can lose, for example, an entire rack or even an entire data center and still have available, durable data. And this is something that I wanted to highlight a little bit, especially in relation to talking about Cinder and the other storage systems inside of OpenStack. OpenStack has several different pieces and components, and some of them are storage and there are other ones as well. And so it's very important to talk about the extensibility points within Swift. So as John was talking about and hinted at, Cinder is this provisioning layer for block storage that may call out to a particular storage implementation, a storage driver. And that's their primary way of being extensible: you're able to add in different implementations of block storage that you can then attach volumes from to your virtual machines. Well, Swift has a couple of important places to implement extensibility, and two of them that I really wanna talk about are the middleware and the volume extension. And remember, this is the design we had for Swift. So let's break this in two. If you take the front part, the communication between the client and the proxy server, there may be some API extensions you want to add into that. And so we support middleware that allows you to write your own custom code that runs on the proxy server, that could, for example, implement something like searching, or transcoding a video, or caching, or something like that.
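The middleware idea above can be sketched as a minimal WSGI wrapper in Python. Swift's proxy middleware follows this general pattern (a callable wrapping the next app, plus a paste.deploy filter factory in the real thing); the `HeaderStamp` class, header name, and the stand-in app here are all invented for illustration.

```python
# Minimal WSGI middleware sketch: wrap the app and modify responses
# on their way back to the client.

class HeaderStamp:
    """Adds a custom header to every response passing through."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def stamped_start(status, headers, exc_info=None):
            headers = list(headers) + [("X-Stamped-By", "header-stamp")]
            return start_response(status, headers, exc_info)
        return self.app(environ, stamped_start)

# A trivial app standing in for the rest of the proxy pipeline.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

wrapped = HeaderStamp(app)

# Drive the wrapped app directly, capturing what a server would send.
captured = {}
def fake_start(status, headers, exc_info=None):
    captured["status"], captured["headers"] = status, headers

body = wrapped({"REQUEST_METHOD": "GET", "PATH_INFO": "/v1/a/c/o"}, fake_start)
print(captured["status"], dict(captured["headers"])["X-Stamped-By"])
# → 200 OK header-stamp
```

Because each middleware just wraps the next callable, you can stack several of them in the proxy pipeline (auth, caching, your own extension) without touching Swift's core code.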
And at the other end, and this is where it gets really interesting, I think (we've had a lot of active development lately to improve this), on the backend: how does Swift actually talk to these hard drives? How does it talk to a storage volume? We've seen a lot of improvements in the flexibility of this part of Swift, and today I can think of at least four different volume implementations for Swift. We've got the default, which is just a local POSIX filesystem; we recommend XFS, but you can use just about anything. We also have people out there, Red Hat specifically, who have contributed to ensuring that Swift can run on top of Gluster volumes. We know the ZeroVM team has been exploring the exciting opportunities around keeping compute and storage tied together, and they've implemented their own ZeroVM volume abstraction that allows them to execute code securely right next to the data. And for the fourth, you could think about other abstractions that might do encryption or compression: as data is passed down onto the storage volume, you could compress it and get more efficient storage, or you could encrypt it for operational or procedural reasons.

So that basically sums up what I wanted to talk about. Cinder and Swift both have very important use cases within OpenStack and work well together as far as volume snapshotting, backups, loading from images through Glance, and things like that. I hope that helps you better understand where the two projects fit and why you would use one over the other. And to add to that, one of the things that hopefully makes sense now is that you actually want both Swift and Cinder, for their different use cases.
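The compress-on-the-way-down idea mentioned above can be sketched as a wrapper around any backend volume. The class names and the two-method put/get interface are invented for this example; Swift's real backend abstraction is richer than this:

```python
import zlib

# Toy sketch of a volume abstraction that compresses data on the way down.
# LocalVolume stands in for the plain POSIX-filesystem backend; both class
# names and the put/get interface are invented for illustration.

class LocalVolume:
    def __init__(self):
        self.blobs = {}

    def put(self, name, data):
        self.blobs[name] = data

    def get(self, name):
        return self.blobs[name]

class CompressingVolume:
    """Wraps any volume and compresses objects before they hit the media."""
    def __init__(self, inner):
        self.inner = inner

    def put(self, name, data):
        self.inner.put(name, zlib.compress(data))

    def get(self, name):
        return zlib.decompress(self.inner.get(name))

vol = CompressingVolume(LocalVolume())
vol.put('logs/today', b'GET /object 200\n' * 500)
```

An encrypting wrapper would look the same with the compress and decompress calls swapped for encrypt and decrypt, which is why these concerns layer so naturally beneath the rest of Swift.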
And one of the things I saved for the end, for the "better together" part, is that one of the great things about using Cinder and Swift together is that you can actually back up your Cinder volumes to a Swift object store, which is an extremely good target for that. So, does anybody have any questions? Can you hear over the air conditioner? It's hard to talk over the air conditioning, although I'm glad it came on. I probably couldn't even hear you in the front row. Can you go to the microphone, please?

With Cinder, can you over-provision, or are the volumes fully allocated when they're created?

Yeah, that's a really good question, and it depends on what backend driver you use. The reference implementation for Cinder right now uses LVM; it has a built-in LVM driver. With the base default LVM driver today, you cannot over-provision. However, you can configure the thin-provisioned LVM driver, and then, as the name would imply, there you go. Question over here?

My question is about Swift. I saw that you try to geographically disperse files. Let's say you're starting small and you have one geographic location, say West, and you've got a healthy Swift installation. Can you bring up another zone or geographic location and basically have everything replicate over in the background?

Oh, this is a new thing, okay. Yes, absolutely. That is one of the design characteristics of Swift that we've been very keen on keeping at the forefront with every piece of code that is contributed: you've got to be able to manage capacity changes on an existing cluster. That also includes being able to reboot or upgrade with no client downtime, and things like that. So yes, in that sense, you can weight each particular region or zone and then change that weight gradually, and Swift will automatically rebalance things in a smooth way that won't overwhelm your network.
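That gradual-weighting answer can be illustrated with a toy model: give each region virtual points on a ring in proportion to its weight, and the share of partitions it owns grows as you raise the weight. The function name, region names, and partition count are all made up for this sketch and do not reflect Swift's actual ring builder:

```python
import hashlib

# Toy model of gradual rebalancing by weight: each region gets virtual points
# on a ring in proportion to its weight, so raising a new region's weight
# slowly shifts more partitions onto it. Names and numbers are illustrative.

def partition_shares(weights, partitions=1024):
    points = []
    for region, weight in weights.items():
        # One virtual point per unit of weight.
        for i in range(weight):
            h = int(hashlib.md5(f'{region}-{i}'.encode()).hexdigest(), 16)
            points.append((h, region))
    points.sort()
    counts = {region: 0 for region in weights}
    for p in range(partitions):
        h = int(hashlib.md5(f'part-{p}'.encode()).hexdigest(), 16)
        # Walk clockwise to the first virtual point at or past the hash.
        owner = next((r for hp, r in points if hp >= h), points[0][1])
        counts[owner] += 1
    return counts

# A new 'east' region brought up gently at one tenth the weight of 'west':
light = partition_shares({'west': 100, 'east': 10})
equal = partition_shares({'west': 100, 'east': 100})
```

Stepping the weight up in small increments means only a small slice of partitions moves at each step, so the replication traffic stays manageable.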
This is a Cinder question. Can you take recurring incremental snapshots in Cinder?

You're going to have to be more specific with your definition. I have a feeling I know where you're going, but...

Well, I mean, can you make the snapshot just capture the incremental changes since the last time you took a snapshot?

Yeah, so with the reference LVM driver, it's just using LVM snapshots, and those are copy-on-write, so you're basically just creating a copy-on-write volume, and that's how it works. The thing about Cinder that's kind of interesting, and significantly different from Swift, is that the focus of Cinder is abstracting out block storage devices on the backend. So depending on which abstraction you choose to use and what device you implement on the backend, the answer to a question like "does it do over-provisioning?" is always "maybe, it depends." But with the reference implementations, in both of these cases, yes, you can do that.

Thank you. I don't see anybody else at the microphone, but John and I will probably be up at the front for a couple of minutes. Thank you very much for coming, and have a great week. Thank you, everyone.