Hello, hello. Is my audio working? Hello, Philipp. This is Yiannis. Hello, Yiannis. Yes, Philipp. Are you presenting Piraeus? Yes. Is it named after the famous Greek port? I wanted to ask in the other presentation: is it the Greek port or not? Yeah, it is named after the port in Greece, yes. Right, right, right. I was sure, I was sure. I didn't want to jump in on the other meeting, but I was pretty sure that it was Piraeus. My team is from Piraeus. Oh, wow. I'm Greek; my parents are actually from Piraeus. From Piraeus, yeah. I was there only once in my life. Did you do the naming? Yeah, you know, we were looking for something with a Kubernetes context, and, you know, ships and containers, et cetera. Nice. So you came up with Piraeus. Nice, nice. And I think it also has a very long history, right? It was already there in antiquity. It's pretty ancient, the port, yes. So yeah, it has a history indeed. Good morning or good afternoon. Good afternoon, Alex. Hi, Alex. Hi, Philipp. Thank you for joining. We'll wait for a few more people to join the call and then we can start. Good morning or good afternoon. We're just waiting for a couple more people to join the call and then we'll start shortly. It's coming up to five minutes past, so I think we should start. Today we have two key things on the agenda: two project presentations, the follow-up presentation of the Piraeus Datastore project and a presentation from Yiannis on the Dataset Lifecycle Framework. So Philipp, do you want to go first? Yes, of course. Okay, then let me share my screen. Okay. So today I will try to outline what the scope of the Piraeus Datastore project really is. In very few words, it is an operator, a CSI driver, and, at a later point in time, a failover controller. That is what the rest of the slides are about. And just to remind you of the context, all this Piraeus work is about getting the LINSTOR storage system connected to Kubernetes. And LINSTOR itself relies for storage replication on a component called DRBD, which is a kernel component. Let's focus on this area. So the first thing, the operator: the Piraeus Kubernetes operator is in charge of installing all the components and also configuring them. For every LINSTOR cluster there is one LINSTOR controller needed; that is just one container running in a Deployment, and it gets installed and configured by the operator. Then there is a satellite that needs to run on all the nodes of the cluster, also installed and configured by the operator. Then the DRBD kernel module, which can be brought in by the Piraeus operator, but is in a way optional; no configuration needed. Then an etcd instance. Well, let me put it this way: the LINSTOR controller can use an etcd instance, and that makes a lot of sense because then all your metadata is stored within the Kubernetes cluster, so I would say this is the recommended way. If you insist on using an external SQL database, that is possible as well. Then the storage devices. Well, you have to install them; they need to be there. And if you wish, the operator will discover any unused block devices and add them to the LINSTOR system as available storage pools. The CSI driver is installed, of course. The snapshot controller is optionally installed, and also the Stork scheduler, that is, the Stork scheduler with the patches for LINSTOR. And by the way, I just learned that for the modifications to the Stork scheduler there is now a merge request open upstream.
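For orientation, a minimal StorageClass backed by the LINSTOR CSI driver might look something like the sketch below. The provisioner name linstor.csi.linbit.com is the driver's registered name as far as I recall; the parameter names (autoPlace, storagePool) and all values are illustrative and may differ between Piraeus versions, so treat this as a sketch rather than the project's actual manifest:

```yaml
# Illustrative sketch of a StorageClass using the LINSTOR CSI driver.
# Parameter names and values are indicative only; check the Piraeus docs for your version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated            # hypothetical name
provisioner: linstor.csi.linbit.com   # LINSTOR CSI driver
parameters:
  autoPlace: "2"                      # keep two DRBD replicas of each volume
  storagePool: "lvm-thin"             # a storage pool discovered/created by the operator
  csi.storage.k8s.io/fstype: xfs      # XFS is the default filesystem on top
```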
Okay, so that's the operator. Hey, quick question. The operator itself, is that using one of the operator frameworks or SDKs? I think so. Is he on the call? Yeah, I'm on the call. And yes, it's using the Operator SDK. Cool. And then how does an operator install kernel modules? So this is basically just an optional step: you configure the init containers for the satellites. The satellites are a DaemonSet, and optionally there are init containers which can pull in the kernel modules. Depending on how you configured it, it's either kernel modules that are already available on your system or ones that are brought in from our side, so pre-compiled, but this is only supported for a limited set of distributions. Oh, I see. So the Piraeus operator pulls down pre-compiled kernel modules. Are those in a repo somewhere, or how does that work? Let me jump in here. So the Piraeus operator also has the option to compile the kernel module on your box. That only works if the worker nodes have the kernel headers locally. It then compiles it within that init container and loads it into the kernel from the init container, which must run in privileged mode. Okay. And the pre-compiled variant works for defined distributions, so for, let's say, the RHEL kernel, et cetera, and then it comes with all these RPMs packaged into it. Okay, understood. And just one other question, not completely related to Piraeus, but I see that you are installing and managing etcd. Is that based on one of the current etcd operators, or is that your own implementation of an etcd operator? No, that's basically just a dependency on one of the already existing operators. We looked around a bit and used the one that fit our use case. It's basically, how should I say, to have the complete package, so that you only need the Piraeus operator to get started. That's why we included this etcd. Probably, later, in a big installation, you will already have etcd deployed yourself using an operator somewhere, so you can reuse that. Got it, got it. Okay, understood. That's helpful. Thank you. Okay, thanks for the questions, then let's move on. Yeah, so here we have a bit more description of the kernel modules. As mentioned, in many cases you want to get this DRBD module loaded, because that gives you storage replication. In some cases it just ensures that the NVMe over Fabrics drivers are loaded, if you choose a storage architecture based on that. Then the LINSTOR configuration data: I think I already touched on that, so it is either the etcd within the cluster, an external etcd, or an external SQL database; in that area we support Postgres and MariaDB. The snapshot controller might be provided with your Kubernetes distribution, and Stork we already touched on. It is very helpful if your LINSTOR is going to use DRBD, because then it can hint to Kubernetes where to place pods that require persistent volumes in an optimized way. Right. The CSI driver is also part of the Piraeus Datastore project, and currently it has these capabilities: provisioning, attaching, snapshotting, resizing. It is ReadWriteOnce in file IO mode. As for the storage stack, on top of the block devices it gives you an XFS filesystem by default, but if you wish you can have another filesystem, and if you wish you can modify the mount flags or the mkfs flags, et cetera.
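To picture the init-container mechanism described in the exchange above, here is a rough, generic sketch of the pattern: a satellite DaemonSet with a privileged init container that loads the DRBD module from the host. All names and images here are placeholders, not the actual manifests generated by the Piraeus operator:

```yaml
# Illustrative sketch only -- not the real Piraeus operator output.
# It shows the general pattern: a satellite DaemonSet on every node,
# with an optional privileged init container that loads the kernel module.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: linstor-satellite                  # hypothetical name
spec:
  selector:
    matchLabels:
      app: linstor-satellite
  template:
    metadata:
      labels:
        app: linstor-satellite
    spec:
      initContainers:
        - name: drbd-module-loader         # hypothetical name
          image: example.com/drbd-module-loader:latest   # placeholder image
          securityContext:
            privileged: true               # required to insert a kernel module
          volumeMounts:
            - name: lib-modules
              mountPath: /lib/modules      # host kernel modules / headers
      containers:
        - name: linstor-satellite
          image: example.com/linstor-satellite:latest    # placeholder image
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
```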
You can also get your persistent volumes in block IO mode, and that's especially interesting with KubeVirt, of course. And although our stuff is ReadWriteOnce, we allow access from two nodes for one very special case: live migration. During live migration, KVM has the block device open on the source node and already opens it on the target node before it closes it on the source node. So for that use case we have a special case, so that we can support live migration of VMs under KubeVirt. Yeah, interesting. Does that maintain IO consistency? Is there some sort of IO ordering that is maintained between the local and the remote? Is that a generic question or regarding the live migration? It regards the live migration: if a ReadWriteOnce block device is being accessed in two places at the same time, how do you make sure that you don't have out-of-order IO hitting the back end? Well, in the case of live migration, KVM just opens the device on the target node but doesn't access it. It stops accessing it on the source node but still has it open, then it migrates all the memory pages, and at some point it closes it on the source node. So KVM actually doesn't access the block device from both nodes concurrently, and that makes it easy for us. But the mechanism behind it is called dual-primary mode in DRBD, and we have a full paper on how that works. It deals with these parallel accesses to the same block, so the problem of concurrent write accesses to the same block, the same LBA, happening on two nodes. There's a full paper on it and lots of details, but nothing we want to introduce in normal workflows. Excellent. Thank you. Okay, then I'm moving on to my roadmap slide. I think I already had it on the first slide that we plan to do this HA controller for rapid failover of StatefulSets and their persistent volumes. That is still in the planning stage. We need to do some foundation work on the DRBD side, some plumbing work on the LINSTOR side, and then the final thing is the HA controller. So I expect that we will be there around, let's say, the September or October timeframe. And there is a GUI in the works, called LINview, done by a group outside of LINBIT. We need to look into that, and probably we will integrate deployment of that LINview GUI into the operator. So just to clarify, the project you're proposing to put in the sandbox is the Piraeus operator, right? Yes, it's the operator, the CSI driver, and in the future the HA controller, which doesn't exist as of today. Gotcha. Thank you. I have that here on this slide: the operator, the CSI driver, the HA controller. Whether the LINview GUI can also be part of that, I need to bring in that group; I cannot answer that off the top of my head right now. And then the third item I have here on the roadmap is ReadWriteMany, leveraging the Linux kernel NFS server and client. But that is, you know, a far-out goal, so maybe something we will work on in 2020, maybe it happens, maybe never; I don't know. Yeah, so that is what we plan for the future. Then, what is the current community, and how did all that come together? It literally came together with LINBIT and DaoCloud; I'm talking about the idea to bring LINSTOR closer to Kubernetes. And what we have created in the process so far: we have this landing page.
We have the operator and the CSI driver, which are all on GitHub. There's the Slack channel where we interact with the users of the stack. So that is all part of what we would want to give to the CNCF, either sandbox or even further. Yeah. And, right, that also touches on how all of that should go into the foundations around there. So a version of DRBD is already in the Linux kernel, but we need to update it; that has been ongoing for many years. We are right now talking about getting Piraeus into the CNCF. And then there are first talks of getting LINSTOR into the SODA Foundation, but there we are still in the process of learning the details, let me put it that way. Okay, that sounds really good. So, Philipp, I'm not sure if you're aware, but the CNCF has recently changed the sandbox application process. So if you want to proceed with this, there is a relatively simple online form you can fill in to provide details, and the TOC votes on that; I don't know if it's monthly or bi-monthly that they vote to approve those sandbox projects. So I guess that should be your next step now. Okay. And I will find that with Google in a few clicks, right? Yeah, yeah. Amye, did I represent that correctly? Maybe she's not there. Yeah, I can send you a link as well in an email, but it should be on the main CNCF website too. Okay, perfect. So yes, okay, then we will take that next step. My idea was to speak with the Storage SIG first, to understand if there is anything more, but yeah, then I'll go to the form as the next step. Perfect. Thank you. Thank you very much. Thank you. Anybody else have any other questions for Philipp before we move on to the next presentation? Hi Alex, this is Amye just stepping back in; I stepped away. I believe I answered the question in chat as far as where the form is, but happy to take this offline. Okay. Yes, thank you, I found the link. Lovely. Excellent. Okay, carry on. Hello. Hello. Yeah, we hear you. Oh, okay. This is Alex, from DaoCloud. I just want to mention that we were just informed that our topic about Piraeus got picked up for KubeCon China, which is being held virtually online on July 31. Cool. Cool. Thank you. Yeah, I just put a question there. Something to think about: we also have Rook, and Rook is an orchestrator of storage systems. So I'm wondering if we could think about this as an extension, or maybe they could think about how Rook would benefit them or not. You know, it's just another idea. Well, our impression here is that Rook is so strongly associated with Ceph that our feeling was it would be better to do something independent of Rook. Sure. Yeah, understood. So, incidentally, I think that was the case in the early days of Rook, but that's no longer the case, and they have support for several other storage back ends as far as I understand. So you might want to have another look at that if it's still interesting to you. It really depends on the storage system, but take a look at it if it helps; if not, then, you know, it doesn't hurt you in any way, just curious. Yeah. All right, if there are no more questions, we can move on to the next presentation. So, Yiannis, are you on the call? Yeah, yeah, yeah. Brilliant.
So the floor is yours to present the Dataset Lifecycle Framework. All right, very good. Thanks, Alex, and thanks for the presentation. So let me start; let me share my desktop. Okay. Can you see my screen? We can, thank you. Yeah. So, to briefly introduce ourselves, we are a small team of committers from IBM Research Europe. And, sorry, I don't know how to move the Zoom controls out of the way. Anyway. Okay, so we are trying to make it somewhat easier for data scientists and engineers to have access to remote data sources. Currently we have implemented connectors to S3 and NFS, but we're expanding. And from the side of the data provider, we're looking to bring, let's say, an easier way to expose datasets and provide access to their end users. And another bit that we're looking at is how to make sure they have governed access to these remote data sources. So one of the more technical objectives is that we are introducing the concept of a Dataset. It's a new custom resource definition, of course, that is essentially a pointer to remote S3 or NFS data sources. We've added the ability to look up datasets from remote catalogs like a Hive Metastore. And at the same time, we were looking to introduce minimal changes to the end-user workflow, so users shouldn't have to modify their workflows in order to leverage the datasets. Now, the bit that we just finished is the transparent data caching, which I will give some details on in the next slides. Basically, we want to bring a pluggable interface for caching frameworks to implement in order to be supported by the framework. The framework itself works without those plugins, but we have created a first plugin based on Ceph, and we leverage Rook for its deployment. We give instructions on how to implement your own caching plugin. And this would be an on-the-fly deployment of the cache, with the user remaining completely oblivious to the fact that this dataset is provided by a caching plugin. Also, if you are caching data on the local cluster, we can give hints, and we're looking into this problem, about workload scheduling: imagine knowing on which nodes of the Kubernetes cluster your datasets are cached; then it would be pretty straightforward to give hints to the scheduler to bring those pods closer to the cached data. And of course we're looking to integrate with Spark, Kubeflow, all the ML and deep learning frameworks. So this is the overall approach that we follow. On the one side, there is a user or a data provider that creates the Dataset CRD, saying: this is my dataset with this name. Then the operator takes care of this definition and provides the PVCs, the ConfigMaps, the Secrets, and so on and so forth. And for the pods to use this dataset, they just need to add a label, dataset.<index>.id, with the name of the dataset they created, and how they want to use it.
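To make this concrete, here is a minimal sketch of a Dataset object pointing at an S3 bucket and a pod that consumes it via labels. The apiVersion, the field names under spec.local, and the label keys follow the project's public examples as far as I recall them, so treat the exact names as indicative rather than authoritative:

```yaml
# Sketch of a Dataset pointing at an S3 bucket, and a pod that uses it by label.
# Field names follow the project's examples as far as recalled; verify against the repo.
apiVersion: com.ie.ibm.hpsys/v1alpha1
kind: Dataset
metadata:
  name: example-dataset
spec:
  local:
    type: "COS"                        # S3-compatible (cloud object storage) source
    accessKeyID: "ACCESS_KEY_ID"       # placeholder credentials
    secretAccessKey: "SECRET_ACCESS_KEY"
    endpoint: "https://s3.example.com" # placeholder endpoint
    bucket: "my-bucket"
    region: ""
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    dataset.0.id: "example-dataset"    # name of the Dataset to use
    dataset.0.useas: "mount"           # mount it, rather than injecting S3 credentials
spec:
  containers:
    - name: app
      image: nginx
```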
Pods can use the dataset either as a mount point or via environment variables; you know, if they're using the S3 API, for instance, they would get the credentials and the connection details there. Now, the other bit is, as I said, the caching plugins: imagine that we can install in parallel some caching plugins which provide the functionality of caching the datasets, and we have implemented a solution that works for S3 buckets. And in the end, as I said, the co-scheduling of the pods with the cached datasets. These are the components, and here is how it looks as an example flow. The user goes and creates a Dataset definition: this is my bucket, it is on IBM Cloud or AWS or whatever, and this is the username and password for the bucket. Then the dataset operator watches the creation of these Datasets and creates the necessary PVCs for the corresponding Dataset. Now, when they go and create their pods with the labels, we have created an admission controller that basically says: okay, you have annotated this pod as using a dataset; we're going to do a lookup, and then you will have your pod completely transparently using this dataset PVC, with only that addition, just adding a label on the pod. This is how the transparent caching works. Yeah, sorry to interrupt you, could we go back one slide? I'm missing a step: is there a data plane or some sort of tool that you're using to create a PVC from the S3 buckets? Yeah, so basically the dataset operator leverages the corresponding CSI plugins. The dataset operator looks for Datasets and reacts to the creation of Dataset objects. So this one is of type S3, so the dataset operator realizes it is S3-based and creates a CSI-backed PVC out of S3; it's the dataset operator that creates the, let's say, native Kubernetes components. As part of the framework we are installing, let's say, CSI S3 and CSI NFS, which do the heavy lifting of actually creating the PVCs; the dataset operator is just one level above, matching the dataset type to the correct storage class or CSI driver. Does that make sense? Okay. Yeah, the dataset operator is the orchestrator, let's say, in your question. So, this is the core framework, and now I'm going to show how the transparent caching works. The user goes and declares the Dataset; within the dataset operator, the Dataset controller makes a check and asks: is there any caching plugin available in the cluster? No. So basically it is going to create a shadow object that we call DatasetInternal, which has the same credentials and the same endpoint as the original Dataset. It goes to the DatasetInternal controller, which receives the definition and creates the native components: the PVCs, the ConfigMaps, the Secrets, based on the type of the Dataset. Now, sorry, I don't know how to move this bit, but in the case that there is a caching plugin available, the Dataset controller delegates the creation of the DatasetInternal to the plugin.
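To make the admission-controller step described a moment ago concrete, this is roughly what the pod ends up with after mutation — a sketch assuming the PVC created by the dataset operator carries the dataset's name and the /mnt/datasets/<name> mount convention shown later in the demo:

```yaml
# Sketch of the pod after the admission controller has injected the dataset volume.
# PVC name and mount path follow the conventions described in this presentation;
# treat them as assumptions, not guaranteed behavior.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    dataset.0.id: "example-dataset"
    dataset.0.useas: "mount"
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: example-dataset                    # injected by the admission controller
          mountPath: /mnt/datasets/example-dataset
  volumes:
    - name: example-dataset
      persistentVolumeClaim:
        claimName: example-dataset                 # PVC created by the dataset operator
```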
The core framework passes these details to the plugin and says: you should take care of the definition of the dataset, but in the end just give me the DatasetInternal that I should create. So in our case, the Ceph plugin provisions Ceph via Rook and creates a DatasetInternal at the end, and then it gets handled by the core framework again: it goes back to the DatasetInternal controller, which again creates the corresponding PVCs from S3 or NFS. So, yeah, we're integrating with various open source projects, as I said before. We have a public GitHub repo for you to have a look at. And if there are no questions, I can give a very short demo to show how this works, so tell me if we should go on with the demo, or does anyone have questions? Yeah, my question is that, you know, I've seen a lot of use cases, but this one was new to me. So maybe it would be nice, for me and maybe for others, to see more of the whys. There's a lot of assumption about why you need that information and that flow, and I'm not familiar enough with the use case to understand it. So maybe another time, I don't know, or maybe just an email, whatever, just a little blurb on why the pipeline works that way. So you mean some reasoning on this flow, or what the use case is in general? Yeah, what is the user expecting out of this, like the ingestion, the workflow, what is the expectation from the user. I'm not sure; maybe others are, but I've been around for a long time and I've never seen this type of workflow, so it's new to me and I would love to learn it. Okay, so basically the use case that we're trying to handle is this: in the current landscape, as far as we know, there is CSI, and there is the container object storage interface that we have seen as a proposal, and it is possible to create PVCs out of, you know, S3 or NFS or some other types of data sources. But our motivation for the work was that the user may just want to use some data sources in their pods without configuring everything from the very beginning, installing the plugin, installing the CSI drivers and all this stuff; they just want to have a pointer to a dataset to work with. This is what we're trying to tackle: a level a bit above CSI, to give some more abstraction to the end user. And as for the flow of the framework, it follows the Operator SDK pattern; as far as I know, almost all the frameworks based on the operator pattern are actually just pods on the Kubernetes cluster that react to the creation, deletion, and modification of a new custom resource definition they have introduced, or of some native components already present on Kubernetes. Thank you. Yeah, this is David. I'm working for the European Bioinformatics Institute. If I may confirm, from the user's perspective this is exactly what we are looking for. A little bit of background: the EBI, the European Bioinformatics Institute, is the data custodian for all the publicly available bioinformatics information; we have about 27 petabytes of data at this point, and it doubles every two years.
And one of the hurdles for us to move to the clouds, especially public clouds, is that we have big trouble moving the data over to the clouds, and it also does not make sense from an economic perspective to dump the data into either Google or Amazon or whoever; the cost to store a copy of the data in the cloud is really prohibitive. So what we want to do is have this kind of data pipeline to ingest the data as needed, depending on what the workload running in the cloud needs. So this is exactly what we are looking for. Thanks, thanks, David. So basically, that's very much on point: imagine that there is a provider that just gives you, you know, disks and the Kubernetes cluster, and there are users who want to use, as David said, the remote data sources, but at the same time we want to optimize that by trying to bring the data as close as possible to the pods by caching the remote data. Now, we're mainly working with S3, but we have support for NFS as well. So we try to load as much of the data as possible onto the local disks where the pods are actually running, and this is very common in, you know, deep learning workloads that keep reusing the same datasets. So we want to make it completely transparent for the end users, so that they don't have to deal with, you know, configuring, optimizing, mounting the datasets; it's all done for them. So, yeah, if I could just summarize, just to make sure I'm understanding this, because this is actually pretty interesting. So effectively you use the CRs as a catalog of datasets, and if a data scientist or somebody wants to run a workload in a cluster that utilizes that dataset, then you either orchestrate that the file system is available within that cluster, or you implement caching to make sure it's available in that cluster, because the dataset could be remote, presumably. So, this is spot on; my answer is both, because we tried to tackle two issues. One is usability: as you said, the data scientists would go and do a get on datasets in their namespace and see the available ones, so they just get a name. And imagine that there could be a case where there is another persona creating the datasets for them. We are power users of the framework because we developed it, and we create the datasets and the pods ourselves, but there is a case where the actor creating the datasets in the cluster could be different, the data provider: they have the credentials, they have the access, and they know what they want to provide to the end users, and the end users can just label their pods and use that dataset mounted inside their pods without doing anything else. So usability on one side. And what we added with the caching is that, while the framework on its own works and does exactly that, we bring the hooks for caching frameworks to transparently, with minimal effort, support caching in this pipeline without the user realizing it's happening at all. So the API for a caching plugin is that you are passed a Dataset, and you are responsible for creating a DatasetInternal.
So provision, you know, your services, provision your pods, provision whatever you think needs to be provisioned, but in the end give us the DatasetInternal, and all the other orchestration happens on our side. So the caching plugin won't have to do the mounts or implement, you know, s3fs on its own; that's already part of the core framework. So it tries to tackle both things, and the third step that we want to do now is the scheduling: if we know where the datasets are cached in the cluster, maybe we can direct the pod to be scheduled on the node that has the data cached, to achieve a bit more data locality in that case. Does that answer your question? Yes, very much so, thank you. That's really helpful. Thanks. So I think I might have interrupted you. No problem. You can go back to the previous diagram; I just have one question. When you create this PVC, is it an empty PVC without any data source, or is it already populated? Yeah, so among the cases we are handling, we do support writes: if you write back onto this PVC it will be synchronized to the cloud. But the case we're looking at more is when the data is pre-populated. Imagine that you have ImageNet on an S3 bucket: the PVC that we create contains this bucket mounted, so it reflects the contents of the remote S3 bucket. It's not empty, it has the content of the bucket mounted, but if within your pod you write stuff to it, it will be synchronized, as it is an s3fs mount. So yeah, it contains the data there. So when you provision the PVC, is it dynamically provisioned or statically? Dynamically, exactly; it's a dynamic PVC, and we rely on CSI S3. There is a CSI plugin for S3 that we have modified a bit to suit our needs, and yeah, it's a dynamic PVC. I can actually very quickly show you that. So this is an example Dataset, this is my ImageNet bucket, and I create it. Okay. And if you do a get on datasets, you will see that it's there, and the DatasetInternal as well, there you go. So we have this PVC, 'janis', which was just created 17 seconds ago, and if we now go and use that, we want to use the dataset 'janis' as mounted. This is optional, as I showed here, so you can mount it wherever you want, or you can leave it at the default. So we create a pod. If we go inside the pod now, give it a second to start. There you go, it started. So if we go inside the pod, under /mnt/datasets you will see the dataset that was created; it has the remote S3 bucket mounted, so it's the raw data. This is the convention that we use: inside the pod there is an /mnt/datasets directory. So yeah, this is how it would look. As I said, we're looking to optimize the flow for the end user as much as possible, so they won't have to deal with, you know, changing their workflow at all. The only thing they need to do, from the user's perspective, is to annotate their pods like this, with the two labels. You can also see the PVC definition; let me describe my PVC. Yeah, so basically it's using this dynamic PVC, it's a dynamic PVC. And the Dataset object itself looks like this: you create this as an end user, and you give your credentials, your endpoint, the bucket, the region.
It's just this new Kubernetes component that we're bringing, and then the orchestrator makes sure to create everything. I think the secrets should be there too. So it created a Secret with the credentials, and that was then passed to CSI S3 to provide the mount point. Okay. Right. Yeah, so we actually have the Kubernetes COSI subproject in SIG Storage now; I think there are a lot of similarities here. We should definitely talk and see how to collaborate. Okay, I think I'm scheduled for next week on the Kubernetes call. Yeah, we cancelled last week's meeting because of the holidays. We're looking to apply for sandbox as well with the CNCF, and as far as I know there is a pull request that we need to make, right, Alex? Please correct me if I'm wrong; I don't remember a form, I remember a request that we needed to open with a template. So, the sandbox project application process has changed recently, and now it's just an online form that you fill in. Okay, okay, so I need to look at the link in the chat that you sent. Yeah, I've just reposted the link to the form in the chat window. Okay. That's it from me. Please reach out if you have any more questions or a use case that you want to discuss, but yeah, that's our project. Thanks for listening. Thanks. That was a really interesting presentation, some new use cases, and well done for doing a live demo. Thanks, thank you. Okay, then. Does anybody have anything else to add, or any other items they'd like to cover? Alex, I was just curious whether anyone has anything planned for either of the upcoming KubeCons. For the SIG, we have an intro session scheduled for the upcoming KubeCon EU, and we've just got confirmation of a slot for the China session as well. Erin and I are trying to book the slots to pre-record that session. Awesome. So they're both not live? I think I need to read up on it; the way it seems to be working is that we're pre-recording the sessions and then there will be live Q&A. I think for the China virtual event you're not required to be there, everything's pre-recorded, but for the European one you are required to be there for the Q&A as well. Yeah, I guess the timing for China is probably tricky. I think the time is also European time, not US time. Anyway, let us know, Alex, if you need a hand preparing any of the material or doing anything else. Will do. Yeah, I'll share the deck for comments on the SIG mailing list. That's a good idea. Cool. All right. Thanks everyone, we've come up to time, so thanks everyone for the presentations, and I look forward to meeting in a couple of weeks. Thank you. Bye bye.