Okay, ready? Thank you everyone for coming to my session. I thought I would probably be standing here alone, because everyone is already mentally on the way to the booth crawl and thinking about what to do with the evening, go to the bar or whatever. So I find it interesting, and gratifying, that storage does attract a crowd, that storage is seen as important, as the basis of what we are doing. Every time I've worked in a storage capacity at different companies, there are always those people who say: networking is interesting, compute is interesting, storage is just something that's around, and we shout at people when it doesn't work.

In OpenStack, of course, there are a number of competing companies and competing projects trying to get a foothold and make inroads on providing the storage infrastructure for OpenStack. Two of those projects I want to look at a little more closely today. One is Ceph. That's sort of a newcomer; it's not an OpenStack project, but it's very well integrated with OpenStack, and it has a very good reputation, deservedly so. And then there's Swift, the incumbent. It's been around for quite some time, and it's seen by a lot of people as boring and staid. Ceph, like a chameleon, can take on all kinds of different colors: it can become a file system, it can become block storage, or it can become object storage. Swift cannot do that, but Swift has a number of features, which we will see in this talk, that are important and that give it a deserved place in your OpenStack cloud.

So why is this even a question? Almost every time I go to a customer, they have a pretty firm idea that one of the two projects is cool and the other one is not. But this customer thinks it's Swift, and that customer thinks it's Ceph. So what do we do? I would like to talk about a customer's view, because my task at Mirantis is to go to customers, look at their infrastructure, listen to their business and use cases, talk to the people who are later going to deploy and run the cloud, and determine what we can do to make the cloud more useful for the enterprise. How do you go about choosing one? I'll talk about some core capabilities and drawbacks of each project and give some advice on what to look for when you're choosing a storage solution for a specific project. And finally, and this is a pet project of mine right now, I've been working on an idea of combining the capabilities of both, so you would not have to choose between them. This is especially important for larger environments, environments that span multiple data centers and have rather dissimilar hardware per data center, but we'll get to that later.

Why is this even a question? Very recently I went to a customer, and the customer told me: well, we are going to use Ceph. My first question in these cases is almost always: what drives that? Why are we doing that? What gave you the idea of using Ceph? I'm not saying it's wrong; I'm just saying we should look at where you came from, what your environment is, and how we can best support that. Starting by selecting one project over another, or one specific technology over another, probably gets you a beautiful cloud.
The problem with it is that it may not fit your business case, and if it doesn't, the people up there are going to be mad, and they are going to cut off your funding, or they are going to scream at you, or both. So we should ask ourselves: why are we doing this? What are we doing? Someone always declares one of the solutions a waste of time: "We have Swift. Swift is not cool." And more often than not, people say that Ceph is the better project because Ceph can do so many things. But do we really need all that? The answer is not yes or no; the answer is: look at your business cases, look at your use cases. What are you going to do with it? Is this feature in Ceph, for instance, actually necessary? Is it something we are going to use? Is it something that can be worked around? Do you have to work around problems in your chosen solution? That's another biggie, because working around problems is usually very expensive, it's very difficult to implement, and it often leads to contention.

Technical specs and use cases. Every time I go to a customer, the first step is back to the basics. What's driving us is the money. The money says: Mirantis is cool, Mirantis is helping us do this. But we need to look at what the money really wants, what the company really needs, to determine what approach we choose for this design. Any design that we build serves a purpose, and the purpose is not to be a nice toy for operators, not even to be the best storage money can buy. The purpose is that we want to do something specific with it, so our solution has to support that. And again, working around drawbacks is expensive; we can't say that often enough. Customers have tried it time and again: "we'll just script something." That is a sentence I hear frequently, and it usually leads to pretty big problems. Scripting is cool; scripting is useful in certain situations. For instance, you have something that you simply cannot do with either of the current solutions, so you choose the solution that fits best and work around its drawbacks. But scripting just because you have chosen something, without knowing whether it's actually the best way to do things, approaching it from a "well, we can do it that way", is going to cause pretty big problems down the road.

So what do we do with storage? This is a point that probably most of you are familiar with. Of course, we support the cloud functionality itself. We have block storage, we have image storage. Who here is working extensively with Glance and Cinder? So, what backend do you choose for Glance, for Cinder? Why do you choose that? Oh, correct. So you're probably just here to figure out whether what I'm saying is nonsense, so you can say, hey, this guy doesn't have it down.

Okay, but it's important to think about that. For instance, I had a customer pretty recently, and, actually, let's see, we have a customer's perspective here. This is a fictional setup, but it is fairly close to what a specific customer told me. Let's say our company, Megacorp, has three main data centers, big data centers. They have a lot of storage there, a lot of compute nodes, all kinds of neat technology; they have networking, everything down. And one of the important things is that they have multiple storage installations, and those need to be in sync: I put something in on one side, and it needs to show up on the other two sides.
If you do that the wrong way, it is going to cause a pretty big amount of problems. So in this case the consumption and the supply of data have to be looked at, and we have to figure out how to get the data we are supplying in one place to the consumer on the other side. We also need an object store for applications. Maybe we need a file system store; it really depends on the application, but let's say we only need to support our cloud plus object storage.

And then they have satellite data centers. These are small data centers: a few compute nodes, a few storage nodes, and the storage there is not high-performance storage. The network link between the main data centers and the satellite data centers may also be very slow. And of course, with projects like that, cost is the driving factor; it can't be too expensive. So no going to NetApp and having them cart in a couple of FAS8080s with a number of big shelves, with the plan of: the network is very slow between here and there, so let's just copy everything down and provide it locally. If you try that, there's going to be a number on the bill from NetApp, and that number is going to have a lot of zeros in the wrong place. And then you're going to talk to management, and management is going to say: hey, I didn't tell you to buy something for that much money. So you have to find an open source project that you can actually substitute for the big expensive storage. By the way, I'm not saying that big expensive storage is worse than an open source approach; I'm just saying that it, too, has its place in storage, and it is something that we need to take into account.

So this is our Megacorp cloud project. Here are the big data centers with the big storage, and we have our little satellite data centers with the little storage buckets all around, and we have no way to combine them in any way. So what would you think if you had a setup like this? How should we go about it? Any ideas? How do we select the project that will support a setup like this? We want to have all our data in these three data centers in perfect sync, and we want to have the necessary data in these little satellite data centers off to the side.

What you could do is the Swift approach: you have storage everywhere, and you have to have the same amount of storage everywhere, at least enough to store your complete data set. This means that your satellite data centers are going to balloon out, and you are also going to have a lot of network traffic going from everywhere to everywhere. And what you're doing there is, let's say, rather useless, because you have a requirement for only a very small subset, a few percent of your total data set, and this subset may be different from that subset and that subset. What you want to achieve is to have only the data there that you actually need for your local use case.

Another approach, and this is the one that I have seen customers take time and again, is: okay, we'll just copy this more or less manually over there. It certainly works if the data set is small enough. You can script it somehow, put it together, and make it work for you somehow.
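Just to make concrete what such a script usually looks like, here is a minimal sketch using python-swiftclient. The endpoints, credentials, and container names are made up for illustration; it would also work against a RADOS Gateway, since RGW can speak the Swift API. This is exactly the kind of thing whose tribal knowledge evaporates when its author leaves.

```python
# Naive "we'll just script something" sync: copy anything missing
# or changed from the central cluster to a satellite. No deletes,
# no retries, no pagination, no partial-transfer handling -- the
# gaps that bite operators three nights in a row later on.
import swiftclient

central = swiftclient.client.Connection(
    authurl='https://central.example.com/auth/v1.0',
    user='sync', key='secret')
satellite = swiftclient.client.Connection(
    authurl='https://satellite1.example.com/auth/v1.0',
    user='sync', key='secret')

container = 'images'
_, central_objs = central.get_container(container)
_, satellite_objs = satellite.get_container(container)
have = {o['name']: o['hash'] for o in satellite_objs}

for obj in central_objs:
    if have.get(obj['name']) != obj['hash']:   # missing or changed
        _, body = central.get_object(container, obj['name'])
        satellite.put_object(container, obj['name'], contents=body)
```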
The downside to that is that it's not standardized, and if the guy who wrote the script leaves the company, which nowadays is something you have to take into account, with the job market the way it is and the scarcity of good IT engineers, then nobody will understand how it works anymore. It will work for another three or four or five months, and then it will break. And then there are going to be a number of very unhappy operators sitting there three nights in a row trying to figure out how to make this thing work again. And the developers are going to sit on the other end and say: we need this new image here, and it sucks that the image is not there. So they are going to yell at you. You don't want to be yelled at; at least I don't want to be yelled at, I don't know about you. So this is clearly not the solution that we are looking for.

Let's look at the Ceph approach. Ceph has two different ways to support Cinder and Glance. One of them is RADOS Gateway, meaning you have an object store again. The other one is RBD, which means you use a block store to provide your backend for Cinder and Glance. The block store does not really replicate at all: there is no built-in functionality that allows you to replicate an RBD store to another data center. With RADOS Gateway you can do that. The problem with RADOS Gateway is that you have a master and a slave. You cannot write into the slave; you have to write into a master. So if you want to write into each of your data centers, you have to have a master there that is replicated to a slave somewhere else, and also replicated to one of those little satellite data centers. And then RADOS Gateway has a couple of other disadvantages. One of them is that all the nice advantages you were hoping to draw from Ceph RBD, like copy-on-write, do not work with RADOS Gateway, for good reason. It is the same reason why you cannot do that with Swift, at least not at the moment, as far as I am aware. If I'm mistaken about that, then with the speed everything is moving at, it must be a very new development.

In Swift, we have multi-region capability. Maintenance, in my opinion, is less complicated than it is in Ceph; if you have ever tried to troubleshoot a Ceph cluster, it can be pretty grueling. Swift is simple to firewall off, because Swift, like RADOS Gateway, has proxy servers in front of it. So you can simply put a firewall in front of those and only let the protocols that you absolutely need, which is probably HTTPS, possibly HTTP, through that firewall, and everything else is blocked off. If you use Ceph with RBD, you cannot really do that, because the compute nodes have to talk to the OSDs directly, so you have to have a lot more ports open, even if you have a firewall between those two entities. On the flip side, of course, what does a proxy do? It passes traffic through; it comes in, it goes out again. So you are introducing an element of latency, and probably also a performance bottleneck if you have too much traffic going through there.

So, okay, this is actually an error on the slide; it is supposed to say RBD, I didn't proofread it right. And, of course, you have object storage with RADOS Gateway, you have block storage with RBD, and you have file storage with CephFS, and you also have the opportunity to write whatever tickles your fancy as an additional plugin.
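Since I mentioned firewalling: to make the difference concrete, here is a minimal sketch with the Ceph Python bindings of why an RBD client is hard to put behind a simple proxy firewall. The pool and image names are placeholders; it assumes a standard /etc/ceph/ceph.conf.

```python
# An RBD client does not go through a proxy: librados first
# contacts the monitors (port 6789 by default), pulls the cluster
# map, and then talks directly to whichever OSDs hold the data.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()                       # connects to the monitors
ioctx = cluster.open_ioctx('volumes')   # 'volumes' is a made-up pool
try:
    rbd.RBD().create(ioctx, 'demo', 1024 ** 3)   # 1 GiB image
    image = rbd.Image(ioctx, 'demo')
    image.write(b'hello', 0)            # this I/O goes straight to the OSDs
    image.close()
finally:
    ioctx.close()
    cluster.shutdown()
```

A Swift client, by contrast, only ever speaks HTTP(S) to the proxy layer, so a single open port suffices.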
RBD has a number of big advantages. One of them is copy-on-write, which is very useful if you like to spawn hundreds of instances from a single boot image: only the differences get stored, which is very useful for keeping your small data centers small. (I'll sketch what that looks like in a moment.) With Swift as the backend, you have to make a full copy for each of the instances, and so the thing balloons much earlier.

And then we have erasure coding. Erasure coding is an interesting way of avoiding replication and avoiding the disadvantages of keeping three copies of each piece of data in your data store. Normally, erasure coding costs somewhere between 1.2 and 1.4 times your total usable data; for example, a 10+4 scheme stores 14 fragments for every 10 data fragments, an overhead factor of 14/10 = 1.4. On the advantages side, it's about as safe as having three copies, without the storage cost. But even if you ask the Ceph people, erasure coding is not quite stable enough for full-on production use yet; at least that is the last status I have from about a month ago. So you don't want to risk that in production, though for a dev cluster it doesn't really matter all that much.

Ceph architecture overview. I think most of you have probably seen this already. librados is the basis of pretty much everything, and you have the RADOS Gateway on top of that. librados communicates with the OSDs and with the monitors (I'm fat-fingering the remote a lot today). So librados can talk directly to the OSDs and the monitors. The monitors are an interesting component of the Ceph cluster in that they form a quorum. In Swift, you do not have a quorum between your proxy servers; you can have as many or as few as you want, and they are independent of each other. They're also stateless, so you can add or remove proxy servers during operation. In Ceph, the monitors are stateful and they have a quorum. So if you have, for instance, three monitors and you lose two of them, you cannot access the cluster until you repair at least one of your monitors. You don't lose any data; you just have to take into account that you may incur downtime if you lose too many monitors. So it makes sense to spread them out and to design them in an HA way.

And of course you have your OSDs with storage devices. There's another component that's not in this graphic (and I have no idea why this comes out as this ugly brown; it's supposed to be more dark red than brown). In any case, these storage devices are usually spinning storage, and spinning storage is slow. So if you write to Ceph, you have to wait until a quorum of your write is acknowledged: if you write three copies, two of them have to be written to disk. So what a lot of people do is put another layer in between and use an SSD journaling device, so you can speed up write performance. And then you can have a separate cluster network where the replication traffic goes from one storage device to another.

Swift: on the external network you can have multiple regions, and this is a multi-region Swift cluster. You have a load balancer in there, you have a Swift proxy layer, and behind a physical or virtual storage network you have storage nodes. Ceph allows you to do caching, although that is also not quite production-grade, as I am told; Swift does not. So your speed is pretty much limited to what your storage devices can do. So what do we do to enhance speed? We add more storage devices, because the more devices your pieces are spread over, the less time you will spend reading an individual piece from an individual device.
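Coming back to RBD's copy-on-write for a moment, here is a minimal sketch with the RBD Python bindings: you snapshot a base image once, protect the snapshot, and every clone initially shares all of the parent's blocks, with only writes diverging. Pool and image names are made up, and it assumes a format-2 image with the layering feature.

```python
# Sketch: copy-on-write clones in RBD. Each clone costs almost no
# space until the instance actually writes to it.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')

base = rbd.Image(ioctx, 'ubuntu-base')
base.create_snap('golden')        # snapshot the boot image once
base.protect_snap('golden')       # cloning requires a protected snapshot
base.close()

r = rbd.RBD()
for i in range(100):
    r.clone(ioctx, 'ubuntu-base', 'golden', ioctx,
            'instance-%03d' % i, features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```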
How do they integrate with OpenStack? As a backend for Cinder and Glance, you can either have RBD or RADOS Gateway; most installations use RBD, at least in my experience. (I'll show roughly what that configuration looks like in a moment.) In Swift, you have an object store, and there's not much more to say about it: the object store is the backend, and Cinder and Glance are fairly well optimized to work with it. File storage does not exist in Swift. There are some projects trying to put file storage on top of a Swift cluster, but as far as I know, none of them is production-grade at the moment. For object storage for applications, we have RADOS Gateway and the Swift object store. And what's also missing in Swift: block storage devices for applications are, of course, only available with Ceph.

So if we look at our Megacorp again (that should say Megacorp up there), we have requirements. One of them, of course, is multi-region; we have the multi-region application, and this is something that management is not going to compromise on. We have a limited budget, especially for those storage pods. We have a slow network to some remote sites, and we have to take that into account. The local performance requirements are significant, so we want something that is not too slow. And then reliability and maintainability.

So my idea to address this is to combine both approaches to your advantage. You have Ceph, which is fast and very good at local storage, but not very good at multi-region. And you have Swift, which is slower and has its own set of operational disadvantages, but is very good at multi-region. And for your central store, you don't really need something that is super fast. You normally don't pull hundreds of thousands of images out of it every minute of every day; you just pull images out every once in a while. Some developer comes up with something new, and whether that takes a few seconds doesn't really matter all that much, as long as you can do it in an automated way. But one of the important pieces, of course, is resilience. You have to have a reliable infrastructure; otherwise you'll run into big problems, both with the developers you're talking to and with your operators.

And Ceph is almost the mirror image of that: it's fast, it is very good at local, and it's not good at global at all. But it can still provide a local object store and file store, so you do not have to run multiple clusters locally. Imagine you have something that is only an object store and something that is only a block store. With Ceph, you can combine the two, and you can combine them in an elastic way, which is an advantage of Ceph that I haven't mentioned yet: you can share one Ceph cluster between object store and block store, and you don't have to decide which part of the cluster is going to be object and which part is block. You have an elastic boundary between the two.
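For reference, since I said most installations use RBD as the backend for Cinder and Glance, this is roughly what that integration looked like in Kilo-era configuration files. The pool names, user names, and the secret UUID are placeholders.

```ini
# cinder.conf -- RBD backend for Cinder (values are placeholders)
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000

# glance-api.conf -- RBD backend for Glance
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```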
So this is the hybrid approach that I've been thinking about: you have a Swift cluster that exists only in the three major data centers and that is used to house the images, the snapshots, everything that you want to keep permanently and disseminate to your clients in a controlled way. And since Swift is not the highest-performance solution, even in the big data centers I would advise having a small Ceph cluster that is used as high-performance storage and that also draws from Swift, to improve the number of operations per second you can do, or the number of instances you can launch in a given amount of time. And of course we have Ceph clusters in those remote regions.

So the one thing that is missing from this picture is the glue, these blue lines. This is something that we do not really have yet. There are three different approaches that we should consider for the integrated design we have here.

The first is scripted replication. This is a quick and dirty hack, and for a dev cluster it would probably work just fine, but it is probably not very maintainable in the long run. The tribal knowledge that created the scripts will dissipate over time, and then you are going to be saddled with something that you cannot really control.

The second approach is orchestration. Let's imagine we have a local Ceph cluster and we have Glance, and Glance says to the local Ceph cluster: retrieve image X, and image X is not there. So we could have an orchestrator that knows to go to the Swift cluster, retrieve the image, insert it into the local Glance, and then launch the instances from there. The advantage is that Heat is an OpenStack-native project, and there are a couple of other good orchestrators out there. But the downside is that every task you write, every Heat stack you build, will have to know that it needs to fall back to the remote data center and the remote Swift cluster to get the data. So it's probably better than scripted, but it is not perfect yet.

So what I'm thinking about doing, and I'm hoping that I can get a little bit of feedback from people who have listened to this talk and have seen the problems that we have in real life, is a multi-backend Glance. Other projects can have multiple backends; Cinder can have multiple backends. Glance cannot at the moment, and in my opinion it should. We should be able to have Glance draw from a local store and have a fallback store to draw an image from if it is not there. So I am planning on creating a blueprint, and I'm really hoping that I can get feedback from as many of you as possible on whether this approach makes sense from your point of view and whether you would like to see it happen. I mean, this is what community is all about. We are not talking about something that I'm imagining and coming up with because it's cool and because I think I can have my name on a project. The idea is that it is useful for the community, so we build it; and if that's not a given, then the project is going to fail, as we are not going to pull in any developers, and it's just not going to work.
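To make the fallback idea concrete, here is a minimal sketch of what that glue logic would do today, outside of Glance, using python-glanceclient and python-swiftclient. All endpoints, credentials, and the container layout are hypothetical; a multi-backend Glance would fold this logic into Glance itself.

```python
# Sketch: look for an image in the local Glance (backed by the
# local Ceph cluster); on a miss, fetch it from the central Swift
# cluster and import it locally.
import swiftclient
from glanceclient import Client as GlanceClient

glance = GlanceClient('2', endpoint='http://glance.local:9292',
                      token='TOKEN')
swift = swiftclient.client.Connection(
    authurl='https://swift.central.example.com/auth/v1.0',
    user='images', key='secret')

def ensure_image(name):
    # Return the local image, pulling it from central Swift on a miss.
    for image in glance.images.list():
        if image['name'] == name:
            return image                      # local hit, fast path
    # Local miss: fall back to the central Swift cluster.
    _, body = swift.get_object('golden-images', name)
    image = glance.images.create(name=name, disk_format='qcow2',
                                 container_format='bare')
    glance.images.upload(image['id'], body)
    return image
```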
So, we have about 30 minutes left, and I want to leave a few minutes for questions and answers. What should we take from this talk? For what currently exists, analyze your requirements, come up with the solution that best fits those requirements, and then develop around the drawbacks of the solution you chose. And most importantly: religious wars only ever leave victims by the wayside. We have Ceph versus Swift, we have Ubuntu versus Red Hat, we have all kinds of religious wars, and I do not like to see any of them, because most of the projects we have in the open source space are very viable and very good, and working together we can achieve a lot more than butting heads. I mean, if you walk in the mountains and you see those bighorn sheep running at each other with their horns, that is somewhat what this reminds me of. And then, yes, please show me your solutions. Please let me know what you think of mine, and if somebody can come up with something better, I would be delighted to hear it. I'm thinking about this project because it addresses problems that I currently have and that I see in the field time and again, and if we can find a viable, tractable solution for all this, I think this would be a great thing to do.

And OpenStack, I love OpenStack. I like working with it. And I think that within the last couple of years that I have been in OpenStack, we have achieved absolutely amazing things. If I compare OpenStack Folsom with Kilo, or what's in store for Liberty, it is like seeing a number of little houses in a village being blown up into a megacity. And I would like to work with others and hear other opinions on that. Any questions or comments? Yes?

If Swift improved in performance, my hybrid graph would still look roughly like this. The reason is that at the moment, you cannot replicate Swift piece by piece; you cannot say, okay, I'm only going to replicate this little thing here and this little thing there. That, plus the performance improvements, would change the hybrid graph radically. The reason is that at the moment we have satellite data centers that have very little storage, which we want to use as efficiently as possible, without buying any additional storage for them. So these are the two things that are missing from my Swift picture. If we could get those going, that would of course also be a rather useful approach. And I thought about the possibility of having separate Swift clusters, but then you have the same problem again, especially if the data sets between here, here, and here are not very similar. You cannot say, okay, these belong together, and we'll just swing this piece from one Swift cluster into the other. The idea of Swift is still global replication of everything: I put something into it, and it shows up magically on the other side.

And this is another thing that I absolutely forgot to mention before: if you try the same thing with Ceph, theoretically you can. If you have very low latency and a very fast network, you can have OSDs in different geos, but I have seen people try that, and they invariably run into the same problem: it is just not fast enough; the latency is too high, and the latency fluctuates. With Swift, you can write locally. You can have write affinity and read affinity, so you get your read data from your local Swift cluster, and you write to the local Swift cluster into temporary locations, and after the fact, the data is replicated to the other geo. This is the reason why Swift is good at geo-replication and Ceph is not. They are just different approaches to the same problem.

Yes, that is something new; that came in with Kilo (erasure coding in Swift). I have not seen it yet, I have only heard about it, but I would love to test it. And that would certainly make a big difference in terms of how much storage you have to stuff into the thing.
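For reference, the read and write affinity I just described are Swift proxy-server settings. A sketch of the relevant part of proxy-server.conf, with the region numbers made up; here r1 is the proxy's local region:

```ini
# proxy-server.conf -- affinity settings (region numbers are examples)
[app:proxy-server]
use = egg:swift#proxy
sorting_method = affinity
# Prefer reads from region 1, then region 2 (lower value wins).
read_affinity = r1=100, r2=200
# Write to local region 1 nodes (and handoffs) first; replication
# moves the data to its real home in the other regions afterwards.
write_affinity = r1
write_affinity_node_count = 2 * replicas
```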
Question from the audience: "Not more expensive, but did you ever think that Swift really needs only two copies before you return the write acknowledgement? Did you think, for multiple data centers, instead of putting Ceph as the local write destination, of putting two copies in the same data center and making the third copy in the remote data center? In that case, your proxy servers will return the acknowledgement of writes or reads as soon as the first two copies are returned, and they will be local."

That approach is also something that I've thought about. The disadvantage is that it doesn't scale. If you have two copies here and one copy there: when you write into this one, you get a write acknowledgement really quickly, because you have the two local copies; but if you write into that one, you always have a remote copy to talk to. And this is where write affinity and read affinity come in. These are the parameters that say: okay, I'm writing here, I have a compute node here that's writing something into the Swift cluster, and the Swift cluster writes enough copies to generate a quorum. For instance, I have six copies total, because I have two copies here, two copies here, and two copies here, and I get an ack back as soon as the copies are written locally; that is, I would have four copies written locally, two of which do not belong there.

Yes, no, no. Yes, I think I abbreviated that a little too much before. The idea is that the Ceph data set is small and not all that dynamic, so the Ceph cluster is not hammering that Swift cluster. There is a slow link between them, but if you only pull data across at a relatively moderate pace, you will not see the problem. The actual high-speed access is handled by the Ceph cluster. For instance, you have an object that is pulled from the Swift cluster once, and then you need it 50 times to launch instances from an image. So the performance that you need between here and here is comparatively small next to what happens when all of my 50 instances try to boot at once. This is the reason why we cannot use the remote Swift cluster for anything like that.

Yes, okay, yes. "Thanks for bringing up the point about your blueprint, taking the hybrid approach; we look forward to that and to collaborating on it. One of the comments you made earlier was that building an elastic file system on Swift does not work well, and I agree with you. Do you think your hybrid approach could allow building an Amazon-like elastic file system, maybe using Ceph?" You mean replacing Swift with Ceph? "Well, basically building an elastic file system around this storage technology." Sorry, I'm drawing a blank; it's been a very long day. So you mean a file system that would be like Ceph? "Yeah, something like that." Right, there are ways to do that on Swift, but I think there is a better approach, especially if you have relatively small data sets. There is a relatively well-known paradigm in storage (and "paradigm" is a tired word, I apologize) that in this case, in my opinion, makes sense: you usually have a two or three percent active data set, a ton of dormant data that is essentially cold or warm storage, and then you have your little hotspots.
That's why it probably makes more sense to use local Ceph clusters to do the heavy lifting in terms of high performance. Making Swift really fast is doable, but it costs a lot of money. You have to have very fast proxy servers, reasonably fast storage servers, and you can infuse SSD technology, depending on what file system you're using. And this is another thing that I forgot to mention and wanted to: one way to make Swift significantly faster is to take care of the container database. The account database is not so important, but the container database is really important, because every time you do a read or a write, the container database is hit. So taking it off the regular ring, or rather detaching the container database ring from the object ring, and writing to a set of SSDs instead of a set of spinning drives, massively increases performance. Otherwise, for every access, you have a number of seeks on spinning drives, and you know how expensive that is. (I'll sketch the ring commands in a moment, after the next question.)

Question from the audience: "Just in the context of the question: given that when we keep three copies, or we do erasure coding, we're trying to protect against disk failure and controller failure, and given that disk failures are orders of magnitude more frequent than controller failures, I'm wondering: have you considered just traditional RAID storage instead of Ceph?"

Yes, and there's a good reason why we normally don't do it. Sorry, I didn't mean to cut you off. Okay, so: I have had a number of customers who have tried that approach, and the approach was essentially to say, instead of relying on Swift replicating three times, we are going to put a RAID behind it and replicate only two times or even once. The problem really is when that RAID dies, and it has happened that a RAID dies, and the reason is not only disk failure; it can also be that the whole storage node dies. Afterwards, you have to do replication work. If you lose a single disk, you have a moderate amount of replication work. If you lose a RAID of ten disks, you have ten times the replication traffic, and the replication traffic does not come from the same host; it comes from some other host, and in some cases, if you have a multi-region setup, the replication traffic may at least partly even come from remote and clog your lines. So in most cases, using RAID underneath a Swift setup is not an approach that has historically worked well. If you do it right, it may have an advantage, but it can really come back to bite you.

By the way, there is an interesting sidelight to that. I have not actually seen a cluster that works this way, but NetApp, for instance, has an option in their white paper, in the architectural design, where you can use NetApp E-Series SANs as backends for Swift, with potentially even only one copy. Currently, I have a customer who is trying just that. In my opinion, the risk is too high, because if you lose that thing even temporarily, your cluster is going to be down. So if I were going to do that, I would have one set of storage nodes that get mounts from two E-Series devices and use a replication factor of two, so every bit of data is replicated between those. It's not quite as cheap, but you're still saving, let's say, one third or so of storage capacity. You can have higher density, or may be able to have higher density, but I really want to see it work. I want to see the customer finish it, and maybe at the next summit I can report on how this works out. Technically it's possible, but personally, at the point where we are right now, I would at least not do it with standard disks.
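Coming back to the container ring for a moment: Swift already keeps separate rings for accounts, containers, and objects, so you can simply point the container ring at SSD-backed devices while the object ring stays on spinning drives. A rough sketch with swift-ring-builder; the addresses, ports, weights, and device names are made up.

```
# Build the container ring on SSD-backed devices only.
swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add r1z1-10.0.0.11:6001/ssd0 100
swift-ring-builder container.builder add r1z2-10.0.0.12:6001/ssd0 100
swift-ring-builder container.builder add r1z3-10.0.0.13:6001/ssd0 100
swift-ring-builder container.builder rebalance

# The object ring keeps pointing at the spinning drives.
swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add r1z1-10.0.0.11:6000/sda1 100
swift-ring-builder object.builder rebalance
```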
"The controller failure will cause data loss. So you're saying controller failure is what you want to protect against, right?" Yes. The more individual paths you have to the data items, the less likely you are to cause a replication storm, and the replication storm is what kills you. You can build a marvelous cluster, a marvelous Swift setup, a marvelous Ceph setup; what really breaks you is not standard operation, it is the inevitable case that something goes wrong, and it will happen at some point. You may be happy for a while, and it will happen at some point. So this is why I always advocate: plan for failure. Whatever design you have, sit down and think about what is going to happen to it if this fails, if that fails. And if you go through that exercise and you think, okay, this is the right approach, then that is what you should implement.

Okay, thank you very much. I appreciate it. Yes.