Can you hear me okay? Yes, perfectly, can you hear me? Yep, good morning everyone, or afternoon even. Looks like kind of a light agenda day, correct? It is a lighter agenda day, yes. Not a lot of people yet, we can wait for a few minutes. We'll start at five past. Sure, did the calendar invite go out, Eden? He sent out calendar invites maybe a couple of hours ago. Okay. I don't think it went to everybody. Okay, all right. I was curious because there are multiple invites out. Okay. Quinton, sorry, we can't hear you. Okay, maybe we should get started. Quinton, can you hear us now? Yeah, I managed to figure out my problem. I'm here, thank you. Awesome, all right. So we have only a few items on the agenda today, but I'm hoping we can get through it quite quickly, and there's a bunch of stuff we can formalize if we get this done. So I just wanted to, first of all, double check that we don't have any follow-ups on the Harbor and Rook project reviews. I don't believe we do, and I believe we've submitted all the information, just double checking with you, Amy, if there's anything outstanding as far as you're aware. Nothing from my end, and I just spun through the project board to see what's coming up. So I think you're okay, thank you. All right, awesome. I'm sorry, sorry to interrupt. On this side, was there any feedback from the TOC on the Harbor stuff? Yeah, so they've reviewed the feedback from the existing SIGs, including SIG Storage. Their next step is that there are other SIGs that they want feedback from. Nothing that they need from SIG Storage. Awesome, did you get any sense of what the posture was regarding the HA by default versus not that we brought up? It doesn't seem to be a concern for the TOC. I think it was brought up, discussed, and it wasn't seen as a major concern, basically. Interesting, okay, thanks. Okay, just a question on that. So as a general point of reference, graduated projects are supposed to be ready to go for production, and they're supposed to be easy to consume by end users and in an optimal state. Do we need to specifically highlight any of those HA or production-readiness criteria in the template so that we can make sure we tick that box going forward? Yeah, and I think we need to be clear about what exactly the concern is, because Harbor does support HA, it just does so by saying, hey, if you want it, you need to go and set it up yourself. And so if the deployment aspect of it is critical, we need to call that out. I think the TOC's position was that it's kind of unreasonable to ask Harbor to go and ensure that all of its dependencies are deployed in a specific way through its own default deployer. So yeah. Yeah, I have a personal opinion on this, and I think what you just mentioned, Saad, is kind of a reasonable approach, but we do repeatedly see these architectures which are based on essentially a single point of failure relational database, be it MySQL or Postgres or whatever. And to some extent the whole cloud native movement exists specifically to address that problem. And so I would personally say that something that relies on a single point of failure relational database is not cloud native, full stop. Yeah, I don't know what other people's opinions are on the matter, but it seems like a showstopper to me if there's no easy way to make something highly available, which is the case with relational databases. Specifically, traditional-style relational databases. I think that criticism is fair.
If you have something which is designed to scale and to be instantiated in Kubernetes, et cetera, but then has dependencies on something which is a single point of failure, that should be a cause for concern. I think if there are ways to work around that, that should be taken into consideration. I mean, specifically if, for example, Harbor has a dependency on a database, but its default deployer doesn't deploy the database in HA, maybe there is a set of exercises or a Helm chart or something you can use that allows you to deploy it in HA. Then I guess the next thing to look at would be what happens if that service has issues. So for example, with a database like MySQL or Postgres that's deployed with HA, how does Harbor cope with a failover? Does it need to be repointed? Do you have to have some sort of load balancer across a multi-master setup of databases or something? I mean, there are so many different options, and I think if that's not made clear in the project, it's incredibly easy for an end user to get this wrong. Yeah, the nice thing is that they did make it pretty clear in their documentation, which is why I was able to find it when I was doing the due diligence. I think the only difficult part is actually going in and setting it up, because you have to go to a third party to figure out how to do it. Gotcha. Yeah, I mean, over and above that, and please, I haven't set up HA databases for many, many years, so somebody feel free to climb in and correct me, but my understanding is that it's essentially well-nigh impossible, and that's why we have projects like Vitess, for example; it's a whole project designed to make MySQL highly available and scalable. And my understanding is that there isn't actually a simpler way of doing it. And so any notion that one can simply set up MySQL to be highly available and scalable is not true. I mean, correct me if I'm wrong. You have to have manual master-slave failover or something like that, because there isn't a way to have seamless automated failover, which is the essence of cloud native computing. Well, we have Sugu and Tom on the call. Maybe they can chime in. Yeah, what Quinton says is kind of true. The only alternative is to use mounted storage that is durable and take the hit on the HA. If the pod goes down, when it comes back up it performs recovery and continues. There's a possibility of a little bit of data loss. You may lose your last transaction or something, depending on how you set it up. So it's generally true, but it is also true that practical considerations can be brought in, given that we want to encourage more storage to move into Kubernetes. I don't know how strict we want to be about this. And I think we should differentiate between different levels of HA, right? So if we're talking about a single site and synchronous replication across multiple instances within that site, I think that's what we're talking about for Harbor, versus geographic HA, multi-site HA, which is, like Quinton mentioned, much more of a challenge. But just to be clear, I'm not talking about multi-site at all. I'm just talking about a data store which needs to be transactional. So if you upload your containers to the repository, you need to know that they're actually there.
And because the entire cluster depends on that version being there, if the repository becomes unavailable or gets corrupted, your entire cluster potentially fails. I don't mean to overstate the seriousness of it, but in reality, you know, you have perhaps a global outage going on, you fix the bug, you push it to your repository, and your repository loses that transaction because, you know, it's asynchronous replication or whatever. You now have this ongoing global outage, or at least a cluster-wide outage, and that's sort of inevitable. It's gonna happen. And that's what cloud computing and cloud native storage is designed to solve, so that you don't have those situations. So, yeah, please don't confuse it with multi-site replication or anything else. I'm just talking about the very basic use case of making sure that the repository is available when you need it and is transactionally sound. But wouldn't the same problem exist for something like etcd for Kubernetes? Or maybe I don't understand databases that well. So, for Kubernetes, I mean, etcd is fundamentally transactionally sound. You have three nodes, they agree on who the master is. They agree on all the transactions that are committed. In some cases you have five, but let's just use three. Any one of those nodes can go down. You don't lose any transactions. Everything is perfectly sound and nothing becomes unavailable. So that's essentially what I'm talking about: there's a fundamental difference between having an etcd-type system underlying your storage and having a single point of failure relational database, if you don't have something like Vitess on top of it. So by the way, people that go into production with Vitess don't even trust etcd. Some of them were actually asking about what happens if we lose all the etcd data. So one design principle of Vitess is that if the etcd data is lost, it can be manually reconstructed. So the entire cluster can be redeployed even if all the data is wiped. But I would trust etcd or ZooKeeper or Consul; Vitess can use all of them. But it did take a lot of effort to get to that point. So I don't know. Yeah, I mean, irrespective of what people believe, if you burn down an entire data center and all the hard drives in it, by definition all data is gone and there's nothing you can do about that unless you've synchronously replicated it somewhere else. So I don't think the expectation is to be unrealistic like that, but to be very clear about what failures you can tolerate and what failures you can't tolerate. Now, burning down an entire data center is not a failure that anybody claims to be able to tolerate without any outage. However, losing a single machine is. And with a relational database, I'm not aware of it being possible to handle the failure of that machine without losing some data or being unavailable for some period of time until some human intervenes, and you lose some transactions, as you pointed out. That's the essence of the problem.
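To make the quorum comparison above concrete, a toy Python sketch of majority commit follows. It is purely illustrative and is not how etcd is implemented, but it captures the acknowledgement rule being described: a write counts as committed only once a majority of replicas has stored it, so the loss of any single node does not lose a committed write.

```python
# Toy majority-quorum commit. Purely illustrative -- not etcd's implementation.

class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []       # ordered log of entries this replica has stored
        self.alive = True

    def append(self, entry):
        if not self.alive:
            return False    # a failed replica cannot acknowledge a write
        self.log.append(entry)
        return True


def quorum_write(replicas, entry):
    """Acknowledge the client only if a majority of replicas stored the entry."""
    acks = sum(r.append(entry) for r in replicas)
    return acks > len(replicas) // 2


nodes = [Replica("a"), Replica("b"), Replica("c")]

# Committed with 3/3 acknowledgements.
assert quorum_write(nodes, "image: registry/app:v2")

# Lose any single node: the committed entry survives on a majority,
# and new writes can still reach a quorum (2 of 3).
nodes[0].alive = False
survivors = [r for r in nodes if r.alive]
assert all("image: registry/app:v2" in r.log for r in survivors)
assert quorum_write(nodes, "image: registry/app:v3")
```

The same majority rule is why a five-node cluster can tolerate losing two nodes: the quorum threshold simply grows with the replica count.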
So the way you instantiate and configure those dependencies can have a huge impact on things like availability, because things like etcd, even though they lend themselves to that sort of technology, also have some quirks depending on how you deploy them. So for example, the default etcd operator keeps data in ephemeral storage, and if all three pods crash, you could actually just lose your data and have to restore a backup. But obviously that's based on how it's implemented. And I think the same could possibly be argued for databases like MySQL and Postgres: if you're using a master-slave relationship with some sort of asynchronous replication, you can almost certainly guarantee that there will be some sort of data loss. If on the other hand, for example, you're using a transaction log and you don't acknowledge transactions until they're committed to the transaction log, and the transaction log is synced to disk, and the disk is replicated somehow so that you can restart the pod somewhere else, and that replication is synchronous, et cetera, et cetera, then theoretically you can probably achieve strong consistency on those sorts of databases too. But it is very, very dependent on the deployment mechanism. And this is why I was specifically curious in asking Saad how well it is defined how you deploy those things in HA, because almost certainly a default thing in a Helm repo won't do the right thing for these sorts of requirements. Yeah, that's a good question. So in the documentation for Harbor, they've basically listed a requirement for Postgres and Redis, and they've said that by default their Helm chart is not going to deploy them in HA, and if that is important to you, you should set that up yourself and manually deploy those dependencies; that's kind of left as an exercise for the user. And speaking with the Harbor maintainers, they said that the default Postgres and Redis Helm charts do enable an HA deployment. So to Quinton's question, I don't know how well that works out of the box, but it is supposedly supported. So to the best of my knowledge, and please anybody feel free to correct me, it is not actually possible to deploy Postgres in an HA and scalable fashion, or even HA to the point that Sugu mentioned earlier. So to wave one's hands and say, well, if you want HA then do it yourself, is only fine if that's possible. As far as I'm aware, that's not possible. And that's why, as I said, we have Vitess and etcd and other things where it is possible. That's the essence of my concern. And to be clear, this is not pointing a finger at Harbor specifically; Harbor just happens to be an example of something that relies on a single point of failure relational database that has not been demonstrated to be deployable in an HA fashion, and on which the entire cluster depends. And there is a whole class of these applications; we've seen many of them being submitted to the CNCF. And I do think, without wanting to flog a dead horse, that if we're gonna call ourselves the Cloud Native Computing Foundation, and if we have as a principle that cloud native is highly available and scalable, we need to actually demonstrate that by not graduating projects that are demonstrably not highly available and not scalable. Yeah, I think that's fair feedback. Could you write that up quickly on the SIG Storage due diligence issue?
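A toy illustration of the synchronous-versus-asynchronous acknowledgement point made above, purely hypothetical and not modelled on MySQL, Postgres, Harbor, or any real replication protocol: with asynchronous replication the client is acknowledged before the replica has the write, so promoting the replica after a primary failure silently drops an acknowledged transaction; with synchronous acknowledgement the same failover loses nothing.

```python
# Toy contrast of asynchronous vs synchronous replication acknowledgement.
# Purely illustrative -- not modelled on MySQL, Postgres, or Harbor.

class Node:
    def __init__(self):
        self.data = []


def write_async(primary, replica, entry):
    """Primary acknowledges before the replica has copied the write."""
    primary.data.append(entry)
    return "ack"            # replication to the replica happens "later"


def write_sync(primary, replica, entry):
    """The write must reach the replica before the client sees an ack."""
    primary.data.append(entry)
    replica.data.append(entry)
    return "ack"


# Asynchronous case: an acknowledged write vanishes on failover.
primary, replica = Node(), Node()
write_async(primary, replica, "push registry/app:v2-hotfix")
promoted = replica          # primary dies before the replica catches up
assert "push registry/app:v2-hotfix" not in promoted.data

# Synchronous case: the same failover loses nothing.
primary, replica = Node(), Node()
write_sync(primary, replica, "push registry/app:v2-hotfix")
promoted = replica
assert "push registry/app:v2-hotfix" in promoted.data
```

This is the "you push the hotfix, the registry acknowledges it, and the failover loses it" scenario described earlier: the acknowledgement policy, rather than the database engine itself, is what decides whether that can happen.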
And I will take a note to bring this up to the TOC and surface it as, hey, we just discussed this in SIG Storage and this was the concern. I did that already; I wrote all this down previously in the issue and the due diligence. Okay, I'll look it up; if it's there, then that's fine. I'm happy to add to it if it's not clear, but I think I've already made that clear. If you look at the notes from, was it two or four weeks ago, I don't remember, the notes for this meeting, I put it in words there. Cool, as long as it's in writing somewhere, it'll be easy to point to. And this is from memory, so if it's not sufficiently detailed or specific, let me know and I can flesh it out some more. Sure, thanks. So the question is, the project owners are probably thinking that they have passed this point. Are they likely to be upset if we now go back and say that we are rethinking this, or is that okay? No, I think... Sorry, go ahead. I just wanted to clarify what you meant. Sorry, go ahead, go ahead. No, I was just asking. I mean, I agree with Quinton's point, what he says is actually true. So the only thing I was worried about is, the reason why I thought it was okay was because metadata not being highly available is treated by some people as acceptable, but if it is not reconstructible, then it's a huge problem. And I don't know if that is a property of Harbor. Like if you lost all your metadata, can you reconstruct it manually? We could let them off the hook if that was the case. But otherwise, yes, it is definitely a concern, and in principle everything that Quinton said is 100% true. But the bigger question for me was whether we had said, okay, we find that acceptable. If we have not said so, I think it's fine; we can go back and ask them to address it. To be clear, what I suggested we do, and I think what Saad did, was highlight the fact that this thing is not highly available and defer to the TOC to decide whether they would like to delay. So the project has already stated that they plan to support more highly available backends. Was that it? Yeah, I got it, yeah, now I understand. Yes, yeah, yeah. And so the question was just, do the TOC want to delay graduation until that work is finished or do they want to graduate it now? Got it. And so I agree with you. If it were my decision, I would delay graduation until that work's finished. They've decided not to, and that's the prerogative of the TOC, and that's reasonable. My concern is actually much more general than Harbor, and I'm not having a go at Harbor particularly. I just want us to come to a clear conclusion on this entire class of things that are based on single point of failure relational databases upon which an entire cluster depends. We need to make a blanket decision as to whether those projects can graduate before they provide highly available backends or not. And my point, fairly strongly, is that I do not think we should. I don't think it creates the right precedent for the CNCF if we do graduate such projects, forgetting about Harbor for the moment. I think you're perfectly right, and maybe we should just bring this up at the next TOC meeting. For what it's worth, after the last TOC meeting, Erin did email the TOC mailing list and said, based on these issues, we're not okay to recommend graduation as a SIG and we'd like further clarity on this.
And I just copied it into the chat window just to make sure everybody had seen that email. So I think it's perfectly fine for us to raise this at the next TOC meeting. I'll add it to the Storage SIG agenda or updates for next time. Thank you. And sorry for occupying so much time. I just wanted to clearly communicate what I was trying to say, because I think it was misunderstood a few times. I will now be quiet. Thank you. I think this is important. It was worth the discussion. Thanks. Okay, so I had two other things on the agenda and then I can open it up to the floor in case there's anything else that people want to cover. So let's discuss. I think for SIG Runtime, they actually added on the PR that they are recommending this one to move forward. Maybe not in that document, but on that pull request itself they said something like, we reviewed the DD in SIG Runtime and it looks very solid. It's on PR 331. Yeah. I think Harbor is an unusual project in that multiple SIGs are reviewing it. Yeah. Yeah. I think it's fine; I'm working with SIG Runtime at the moment to clarify all that. But interestingly, the email that you posted here said SIG Runtime has not provided a recommendation, but they actually have. So just that, yeah. Sure. Oh, I mean, that email was sent a couple of weeks ago, so. Oh, you sent this email? Okay. Yeah, yeah, it went to the TOC mailing list. Oh, I thought you were going to send it. Okay, got it. Thanks. I think the way things are moving at the moment, SIG Runtime is actually supposed to coordinate all the responses from all of the SIGs, which it hasn't done yet. And also there's a document that outlines how each SIG should describe what they looked at and give a summary of their findings. That hasn't been done yet, that's what the TOC noted, and that seems like what is happening now. And SIG Runtime is responsible for doing that. I'm actually one of the co-chairs there. I've been a bit slack about getting it together, but that's what we're busy with at the moment. Yeah, so. Awesome. Wow, so you get to have a double workload in this case. All right. So moving on to the next agenda item. So as discussed last time, I have created a copy of our Storage Landscape White Paper, given it the V2 title, and I've copied in the Database section that Sugu had worked on and that we had reviewed, and I've copied in the updated management and CSI section that Xing had put together and that we had reviewed and agreed. There is one outstanding small piece of work that I need to copy in, which is the database comparison table from the database doc, because I just didn't finish it in time. But once that is done, I'd like to suggest that we move forward with publishing the Landscape V2 White Paper, and I'll raise a service desk request with the CNCF, so maybe we can publish this on the website and maybe have a blog or some other marketing content around it to make sure it's visible to the community. Anybody have any discussion items, or would like to review it in a bit more detail, or any questions? Okay. Of course, the document link is included in the minutes. It would be really great if people could just scan through it and make sure I haven't messed anything up or made any mistakes. And obviously feel free to comment on anything that you think might need an update or whatever. Awesome. Thanks for all your hard work on that, Alex. Just one parting comment.
We had a bunch of things that we were targeting for KubeCon Europe, which is obviously now postponed. I would like to encourage us to just stick with our plans even though the actual KubeCon has been postponed. Let's try and get all of those to-do items done by KubeCon's original date, which I think we can do since we've got most of the work done, rather than let it slide. Because I'm pretty sure that we're gonna have another deluge of things that need to be done by the new KubeCon dates. There are gonna be a bunch of projects that arrive and want to join the sandbox and be graduated et cetera by the new date, whenever that turns out to be. And so let's not let the existing stuff slide beyond that. Hopefully we all have a little more time because we are no longer going to KubeCon. So you get a free week. Yeah. Yeah, no, that makes complete sense. So just to recap, the three things that we had wanted to do were the V2 of the storage landscape, the use case template and the performance doc. The performance doc has slowed a little bit. We have people working on it, but yeah, we need to speed that up a little bit. And the use case template was going to be the next thing on the agenda. So does everybody have the link from the meeting minutes? Otherwise, I'll paste it into the chat window. Actually, the meeting minutes, is there a link for that on the GitHub page? Because I thought I saw it last night. There is, yeah. Well, I can't see it now though. It should be on the GitHub page and it should also be in the meeting invite. Let me, one sec, I'll send you the link. I see it, it's the Bitly link. That's right, cool. Right, so the use case document, this is something that we've gone back and forth on a few times. I'm just gonna give a one-minute summary just to bring everybody back up to speed and remind everybody of the various discussions that have happened so far. So the use case document was something which we wanted to do as a follow-up to the landscape. And the idea was that the landscape described things like the storage attributes and the management interfaces and the different storage topologies and things like that. And what we wanted to do was then take the information from that content and apply it to some specific use cases. Initially, we had kind of discussed having specific use cases. After much debate, you know, especially around the CNCF and kingmaking and that kind of thing, we settled on having use case categories instead of specific use cases. And we've had a few discussions about what those categories should be. We put together the first five categories that we think need to be tackled, which were databases, object stores, message queues, instrumentation, so under instrumentation I'm thinking of things like Prometheus, for example, and KV stores. And the idea would be that we would have a use case document for each category in GitHub. And that use case document might then have one or more options to describe more specific examples of that category. So for example, the use case for databases might have two or three examples to discuss, you know, a single-instance database, a replicated database, a sharded database, that kind of thing. Similarly, you know, for KV stores and whatever else. So far so good. Any questions on that? So if we look at the use case template, we've taken the template that Louis had started working on and had circulated within that working group.
And I've taken that, put it into a Google document and added a few additional sections. So we start off with some simple goals and non-goals, which is more to describe what's in scope and what's out of scope and make that clear to the reader. We then have a storage attributes section, and this is based on the attributes from the landscape white paper. So when we talk about availability or scalability or performance or whatever else, what those different use cases are dependent on; we even have a section on consistency, which would be a great discussion point following on from the HA discussion we just had now, and then finally durability as well. We talk about the storage topology, and we're having here a tick-box kind of thing to make it easy for the end user to say these different topologies are recommended or not recommended for a particular use case category. We also then discuss, if we're talking about, say, block stores or distributed file systems, for example, or things like that, what we recommend or what we don't recommend for those particular categories. And then I've added in some sample text to cover things like deployment and instantiation options. So for example, I actually used the examples that Sugu had put together in the database document as a starter for ten, but we can update those. So what I'd like to do is, if we can get agreement on the template, perhaps we can have two or three people work together, pick a particular category and create an example based on the template, just to see if it works and to give an idea of what a completed template would look like. Comments, thoughts, queries? Yeah, I think it'd be great to get one or two canonical examples of the use of the template out so that we can set the tone and then hopefully the rest of the community picks up and replicates those for other areas. Okay. Is there anybody on the call that particularly wants to work on this? I'm happy to work on it with them, and obviously Louis, although he's not on the call, will be helping to drive this as well. I had nominated DT to come up with one for Vitess. I'll ping her again. Okay, cool. All right. So I'll do that, but I don't know if I can get his time. So that's fine. What I'll do is I'll set up a time with Louis and we'll work through the example. I'll send an email and a Slack message to the whole group, and anybody who wants to join can help iterate through one example, and then we'll share it out at the next Storage SIG meeting. I think if we can get Louis's PR in, then that'll be a good example, because that was the outstanding question last time, right? No, that's true. So Louis's PR, we had discussed it and we just need to refine it, and this was kind of the output of that. So I've already met with Louis and we discussed this, because when we discussed the PR, we got a lot of feedback around not wanting it to be specific to Minio; we didn't want it to be specific to a particular project, we wanted it to be more suited to a category. So we just need to refactor it to be category-driven rather than project-driven. Alex, it might be a good idea to speak to Xing, sorry, not the Xing we have here, sorry, Yan, and maybe get one for etcd and more broadly for the category of KV stores. Yes, yes, that's a very good idea.
I think he's particularly good at these, or could find somebody, for example, who knows about KV stores, to write a fairly authoritative guide. It would be great if we could pass it by some of the other KV stores; TiKV is one obvious example, and maybe Consul, and make sure that the thing represents, you know, a generic set of recommendations for deploying KV stores on cloud native infrastructure. Cool. Okay, yep, I'll do that. Good, cool. All right, so if there are no other discussion points on this, I didn't have anything else on the agenda. So are there any other business items or any other things that we need to cover today? Hey, I wanted to introduce myself. I'm Derek Moore. Derek? Yeah, well, I'm pleased to be here. Just been kind of listening in. I'm with Dell EMC, and we've been talking to the Linux Foundation about moving one of our projects over to be hosted with one of the Linux Foundation umbrellas. It's part of Dell EMC's streaming data platform product that we're working on. The heart of that product is an open source component called Pravega, which, you might think of it as a message queue, but it's really a stream store for the perpetual storage of unbounded streams. It's not quite a Kafka or Pulsar competitor, but that's one way to think about it. With its heritage at Dell EMC, it's predominantly a storage product or a storage platform. It uses BookKeeper and HDFS for tier one and tier two, or when used in conjunction with Dell's products it would use ECS or Isilon as tier two storage, or we have S3 connectors and so on. But we're built on top of ZooKeeper and BookKeeper among other things, and we have built our own Kubernetes operators for Pravega, for BookKeeper and for ZooKeeper, so some of those we will be contributing upstream. And we want to look at Rook to see if there would be advantages to Rook integrations. We may start in the LF AI umbrella, but I feel like there's a lot of synergy with the CNCF Storage Special Interest Group. So I just wanted to put that out there. Maybe I'll put a few links in the chat, but as we look to join the Linux Foundation, I think we'll at least be interacting if not working more closely. We've recently been added to the CNCF landscape as well as the LF AI landscape, and we're deciding which one of those we'll incubate in. I was looking through some of the documents previously, and also the version twos that were posted today with these use cases, and we would almost fit in a new segment, maybe similar to message queues but truly a stream store; streams can be of events or they can be true byte streams, they can be video byte streams and so on. So I just wanted to say hello and give you guys that background. So, welcome. It's always good to have new people attend. That's really great. We have got a fairly straightforward template and process that you can follow if you want to suggest a project for the sandbox. I think if you want to share some links or whatever to the project, we can certainly circulate them. I mean, certainly a first point might be to circulate some links to the project to the mailing list and see if there are any questions. But if you want to go ahead, I know you're trying to decide between the CNCF and LF AI, et cetera. If you do want to go for the CNCF, there is a really simple process to follow for the sandbox, and effectively the TOC will triage that request and then send it to the SIG for review. And you would present to the SIG and describe the project, and we could write up a quick recommendation.
Yeah, I would actually suggest, irrespective of whether you choose the CNCF or not and what the TOC decides, I think it'd be great to have a presentation anyway, just for the general awareness of the SIG of what you guys are doing. And then, you know, if things went that way, it would naturally lead into us being able to recommend or otherwise to the TOC. So I think, yeah, either way, it'd be great to have the presentation, would be my suggestion. And maybe we could do that in two weeks' time if that's long enough to prepare something. Yeah, that would be great, actually. I think that'd be beneficial, you know, no matter how this lands. And one of my concerns is, if we did land on the LF AI side, I think there would still be reason to interact with the Storage SIG. I was curious what the relationship is between your project and AI. It wasn't absolutely clear. Well, streaming analytics is one of the major use cases for this platform. And that fits in pretty well with LF AI and what they're doing. They're also a little easier to onboard into and a little less crowded. We're working with Chris and Ibrahim at the Linux Foundation level to decide where to orient this. Okay, but I think we almost want to span a little bit more regardless of where we home ourselves. Yeah, sounds like there's a lot of overlap with a lot of cloud native stuff in general. Yeah. All right, so why don't we do as Quinton suggested: let's get a presentation on the agenda for the next meeting in two weeks. Yeah, that'd be great. I'll get that together. Maybe that would be myself and a gentleman named Flavio. So we can take care of all that offline, I guess. Yeah, that's fine. Yeah, hop on the mailing list. Definitely, yeah, definitely hop onto the mailing list. Are you on the CNCF Slack, by any chance? No, but I can do that as well. All right, if you hop onto the mailing list and just ping us the details, I'll get it stuck on the agenda for sure. But in the meantime, if you want to email me directly or Slack me, that's cool too. Okay, great. Well, pleased to meet you guys. Thanks for hearing me out. Thanks for joining. It's always good to hear about new projects. Okay, I think we're nearly at time. Do we have anybody? Any other business to cover? All right. In that case, we can bring the meeting to a close. Thank you, everybody, and have a good rest of your day. Thanks, Alex. And everyone. Thank you, bye. Bye. Bye, everyone, thanks.