Hey folks, we'll get started in just another minute here. All right, good day everyone and welcome to today's LFN webinar. We are going to be discussing the evolution of the cloud infrastructure reference model and its applications. Our speakers today are Walter Kozlowski with Telstra and Tomas Fredberg with Ericsson. Both gentlemen are here representing the LFN community in their discussions today. Just a couple of housekeeping items before we jump into the webinar. All attendees will be muted during the discussion. We will reserve some time at the end for open Q&A, and if you have questions throughout the presentation, there is a Q&A window at the bottom of the screen: just click on the Q&A icon and you can type in your question at any time. We may answer some questions via typed response in real time; otherwise we will hold them until the end of the official presentation. Okay, well thank you all for joining us, and without further ado, I'm going to kick it over to Walter to get us started today.

Thanks Jill for your kind introduction. It's a pleasure to be here, and a very warm welcome to all participants. The title, as we heard, is the Cloud Infrastructure Reference Model, its applications and its evolution. So let's start with some background on the technology and industry evolution. In this diagram, if you look at the top, I'm trying to present, very simplistically perhaps, how the technology is evolving. Obviously it's not really linear and it's not really that simple, but I think it represents very well what's been happening. We started a few years ago trying to virtualize network functions, with the idea of NFV, the NFVI and VNFs, then moved to the containerized world where the workloads are represented in containers, and we are evolving towards cloud native and emerging technologies, some of which we know and will be touching upon, and some of which are still emerging. Against this background, the wheel here shows how all of this interacts and iterates through time.

Let us start from the industry challenges. There are plenty of them, but let me mention a few that are relevant to what we are doing. Evolution towards cloud native is one of them: we know from our practice that it is really very hard to produce cloud native network applications. Coexistence of several virtualization technologies is another, and we will be focusing very much on this, because it doesn't happen that one day you stop doing VNFs and start doing CNFs. In real life that is not what happens. They have to coexist; they have to migrate from one to the other. At the same time we don't want to create a hardware infrastructure for each one separately, we don't want so many silos in our environment, so we have to share the hardware infrastructure. A lot of today will be about how we evolve our reference model to look at it from the perspective of coexistence of several virtualization technologies sharing the same hardware. But we have to remember that this virtualization world in telco is actually mainstream now, not experimentation, so everybody expects us to provide telco-grade performance, and 5G is a good example. And in practical terms, speaking as someone who has been building, architecting, evolving and operating cloud infrastructure for networks myself,
we know that it is very hard to make it technically robust, at the same time viable from a commercial perspective, and as open as possible. It sometimes feels like building an airplane while flying it at high speed, because this is a very high speed and dynamic environment. We probably all know this. And a major question, which I didn't write here, is who can we trust, who can we actually ask, how can we make sure that the different layers and different components will work together. That is what the industry responded to, and Linux Foundation Networking is very prominent in this. About a year and a half ago CNTT was formed, which stands for the Cloud iNfrastructure Telco Taskforce, a taskforce within LFN with GSMA involved, and we will be talking about this later on. It works very closely with OPNFV and other projects like ODIM, the Open Distributed Infrastructure Management project, which we will also be talking about, with the idea that we have to join forces in order to help each other in this journey. We compete with each other, it's obviously a commercial environment, but at the same time we have to collaborate, we have to work together.

The idea behind those initiatives is that we want to show the way, we want to be ahead of the development curve. Many technology companies and their teams are developing things, and we want to make sure that we can align them around a generic, flexible model which can be implemented. And we have to find out what the gaps are in existing standards and existing models and somehow address them, working with other organizations. We develop reference architectures, implementations and compliance, and we'll be talking about this in a moment.

So, what is coming out of this? Real life deployments can benefit from it, for example in requests for quotation or requests for proposal, which are simplified by using the same language as the reference model, using the defined profiles and the relationships of how workloads should be mapped to infrastructure types. Real life implementations, think about 5G or the edge, which are happening now, can use the reference model and reference architectures to drive them. And the other thing is compliance, from which you can get confidence, and as we say, confidence creates trust.

Well, that was the mission of CNTT. If you look at this, I won't repeat it all; it was written in rather formal language, but it essentially says that we have to create this to reduce costs and time to market, everything I said a moment ago. That was the mission, and to work for the benefit of the community. And this is quite a large community, as you can see. The major sponsors, as we said, are GSMA and the Linux Foundation, but look at the logos, and I'm sure there are more by now: there are plenty of service providers, technology companies and open source organizations.

Introducing the reference model in this context: on the left hand side you can see CNTT and its major documentation, its major specifications and major outputs. This model should provide technology-agnostic direction which can then be implemented in reference architectures, which by implication are not really technology agnostic. At the moment we've got RA-1, which is basically OpenStack based,
and RA-2, which is in development and is for Kubernetes as a service. Because of this, the reference model should also provide some requirements for VNF or CNF vendors to guide them in their design, at least from the perspective of how the network workloads should interact with the infrastructure. GSMA is the ultimate owner of the CNTT reference model; I am actually the leader of the workstream in CNTT preparing the reference model, and next month we are handing it over to GSMA, which will be publishing it as a permanent reference document. We are very happy about this, because it means that this document will get a very large audience.

Basically, the goal is the normalization of integration points, establishing layers and defining how the layers of the architecture should work together, and we will show this in a moment in the diagram. The approach, to state it again, is to make sure that we can have different reference architectures simultaneously, so OpenStack, Kubernetes and others: multiple layers, multiple implementations using a shared hardware infrastructure layer. This is the main evolution. The first version of the reference model, as you may guess, was basically just OpenStack related, which means that when we started introducing the Kubernetes reference architecture we needed to evolve it into more generic language. Now we are trying to address the simultaneous usage of different reference architectures on the same hardware, and then we have another set of problems to address.

So, this is a symbolic but overall diagram of how it looks now. It consists of two major layers, which together we call the cloud infrastructure; basically it is the former NFVI. They are called the hardware infrastructure layer and the virtual infrastructure layer, and we will be talking a lot today about the details of this. What you are looking at on the left hand side are the resources, virtual resources and hardware resources, and we will give examples of them in a moment. On the right hand side we have the management of this, so we've got the virtual infrastructure manager, which can be the traditional VIM, but there are a few different possibilities as well. In this version of the model we introduce something new, the hardware infrastructure manager, a component which we define here, or are trying to define, and which wasn't really present in the original model. At the top we've got VNFs or CNFs, so basically the network workloads which consume those resources, and management clients. I would emphasize that we are talking about clients: there can be many different management clients which can use and work together with the infrastructure managers.
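To make the layering just described a little more concrete, here is a minimal, illustrative Python sketch of the structure: a shared hardware infrastructure layer managed by a hardware infrastructure manager, with one or more virtualization layers (each with its own manager, a VIM for IaaS or a CISM for containers) drawing on the same pool. All class and field names here are invented for illustration; they are not terms defined by the reference model document itself.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: the names are ours, not taken from the reference model text.

@dataclass
class HardwareResource:
    kind: str        # "compute", "storage" or "network"
    capacity: int    # abstract capacity units

@dataclass
class HardwareInfrastructureLayer:
    """Shared pool of physical resources, managed by a hardware infrastructure manager."""
    resources: List[HardwareResource] = field(default_factory=list)

@dataclass
class VirtualInfrastructureLayer:
    """One virtualization instance (IaaS or CaaS) with its own manager (VIM or CISM)."""
    name: str
    technology: str                      # "iaas" (VMs) or "caas" (containers)
    manager: str                         # e.g. "VIM" or "CISM"
    allocated: List[HardwareResource] = field(default_factory=list)

@dataclass
class CloudInfrastructure:
    """The two layers together: several virtual layers share one hardware layer."""
    hardware: HardwareInfrastructureLayer
    virtual_layers: List[VirtualInfrastructureLayer] = field(default_factory=list)

# Example: an OpenStack-based IaaS and a bare metal CaaS sharing the same hardware pool.
infra = CloudInfrastructure(
    hardware=HardwareInfrastructureLayer(
        resources=[HardwareResource("compute", 64), HardwareResource("storage", 500)]
    ),
    virtual_layers=[
        VirtualInfrastructureLayer("ra1-instance", "iaas", "VIM"),
        VirtualInfrastructureLayer("ra2-instance", "caas", "CISM"),
    ],
)
```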
So why we need those things is shown in the next diagram, which I call the reference model realization diagram. It gives a bit of a different view of this, with an example. We still have the same things here, the workloads and the management clients. And what are the aspects here? If you look at it symbolically, we've got three deployment types: one is the traditional VM on a hypervisor, the second is containers on top of a virtual machine, and the third is containers on bare metal. Those are the typical ones; you can see some other variants of this, and the model has to accommodate them. If you look at the management side, we've got the virtual infrastructure manager, the traditional VIM from the ETSI model, and the container infrastructure service manager, which will be managing the container infrastructure service instances. Those are the elements mentioned in ETSI IFA 029, so they are standard as well from this perspective. The resources are obviously compute, storage and network, and here is a new thing which we are trying to define: the hardware infrastructure manager. We need it in this context because if you have different types of implementation at the virtualization level, then you have to somehow talk in an abstracted way to these resources, to make sure that we can actually use the same hardware infrastructure for many different types of virtualization and containerization. There are some examples of this in the commercial world, and at the same time we expect that this part will also be developed by the other LFN project we mentioned, ODIM, Open Distributed Infrastructure Management, which is based on the DMTF Redfish model.

One other thing I wanted to mention, which is a big problem in many aspects of deployment and is not really covered in many models, is SDN. You can see that we've got several SDN controllers here symbolically, and we'll be talking about this in a moment. This relates to the fact that you may have different administrative domains which manage those different tenants in this multi-tenant organization and the different types of deployment. That means there may be different SDN controllers, SDN controllers for IaaS and SDN controllers for containers as a service, and many of them, and also the realization that there are elements of SDN in the network hardware infrastructure which can be managed, for example, directly from some controller, which could be one thing or another. Basically the idea of this model is to provide the flexibility to accommodate this coexistence, and migration from one deployment type to the other, while at the same time managing the shared hardware. At this point we want to go into more detail on how the reference architecture should look from a logical perspective, and I hand it over to Tomas. Tomas, it's all yours.

Thank you very much, Walter. So, I'm Tomas Fredberg and I'll be drilling down into the updated reference model in a short while. But before I do that, I'll stay on this page and say that one of the important things here is to enable the operators during the very long migration period of migrating from the current deployments of VNFs to the more cloud native ones, and by that also realizing that most operators will not build a set of new clouds for the cloud native applications and then gradually try to move over the hardware resources by just physically moving them.
So, there is a need for a cloud infrastructure that enables simultaneous infrastructure-as-a-service instances and containers-as-a-service instances, probably many of them, where the operators will likely have a number of different infrastructure management operations groups. Therefore the administrative domains that Walter talked about are important, to enable those groups to be kept separate. By keeping those operational groups to a limited-complexity view into either a single CaaS environment, a single IaaS environment, or a single hardware infrastructure environment, we limit the blast radius of errors and whatever faults humans can make, as well as potential software faults and malicious code.

When it comes to the servers, a server will normally be allocated to one virtualization instance, one CaaS or one IaaS, at a time. So they will just be managed like a normal OpenStack or Kubernetes or whatever is in those virtualization instances, so they are not really the problem here. But it is very, very impractical to keep a statically assigned and physically separated network interconnecting a small pool of servers, because whenever that small pool of servers runs out, you need to physically hook up more servers and reconfigure your physical network for them, and when you are migrating from one of those pools over to another one, you need to physically go and reconfigure your network. So it would lead to over-dimensioning of each pool, as well as complexity whenever you need to scale each and every one of those small physical pools. The preferred position is to have a larger, flexible server pool hosting multiple virtualization infrastructure instances, CaaS and IaaS, where the VNFs could be migrated into CNFs, or each and every one of the VNFs could be scaled, growing and shrinking on demand, with new hardware resources as needed. But that requires interconnect services from a data center Ethernet switch fabric, and that is exactly why we are looking into the networking first, because that is the problem area that needs to be solved. The shared Ethernet switch fabric cannot really be managed by any single one of those virtualization instances, because that would give a large fault domain, as well as create complex organizational relationships between those different administrative groups that you want to keep simple.

So CNTT has gone in and defined concepts and a layered model that supports shared networking in the reference model. I'll switch slides to the first of these concepts. The first concept is that of underlay and overlay network layers, where there is a shared underlay network that separates each of the virtualization infrastructure instances; you can see the shared underlay on this picture. The shared underlay network then offers services to one or more virtual infrastructure layers. The purpose of the shared underlay network is dual: one purpose is to ensure that each and every one of the virtual infrastructure layers gets interconnection through the virtual switch inside its virtual infrastructure, and the other is to separate the different virtual infrastructure layers from each other, to ensure the separation. One of the problem statements here, though, is that some of the more high performance VNFs or CNFs use methods to bypass the virtualization infrastructure layer's encapsulation, for instance by doing SR-IOV straight down into the network interface card.
And by that they will go directly onto the underlay network. Then there need to be methods to ensure that the underlay network can encapsulate those, so that they belong to the right virtual infrastructure instance.

The next concept is the concept of hardware and virtual infrastructure layers. Layering those says that there is management of a separate hardware infrastructure layer, and a virtualization layer on top where one virtualization instance can be managed separately from another. That is what enables the organizational separation, having one organization managing CaaS number one, another one CaaS number two, and so on and so forth, and possibly a third one that manages the single hardware infrastructure separately. We can see in the picture here three different deployment methods, where you could have VNFs straight onto infrastructure as a service, which is where we're coming from. In the first stage it's likely that we will have CNFs on a CaaS layer that might sit in a VM on the infrastructure-as-a-service layer; later on we will get more CNFs on bare metal CaaS, and that is when you start to need multiple virtualization instances on the same shared underlay.

The third concept we have defined is the concept of SDN control of the underlay as well as the overlay network. The SDN model here is intentionally made so that it can align with the administrative domains we talked about, so that we, on an operational basis, can assign a particular organization to care for its virtual infrastructure, including SDN control of its own virtual switch domain, while at the same time enabling the underlay to partition the underlay switching between the different virtualization instances, potentially one IaaS and one or more CaaS layers. So the shared underlay has the separation concern, in that it can separate the different virtualization instances, and one common way is depicted in this picture, where the VXLAN space is divided up into VNI ranges. In this picture each of the virtualization instances gets a certain VNI range, but that VNI range is also provisioned on the switching by the SDN underlay controller, from a hardware infrastructure orchestrator. By that it can ensure and enforce the separation on each and every port where there is a server belonging to a specific virtualization domain. This can also help in that an SDN overlay controller that knows there is an SR-IOV function bypassing the virtual switching can request a virtual termination endpoint (VTEP) to be installed in the hardware underlay, if that is within the authority of that virtualization instance.

The fourth concept is that of the programmable networking fabric. This is the emergence of a number of programmable data plane resident entities, for instance programmable switches or programmable SmartNICs, that, for instance through P4 programmability, could implement very complex functions or just simple VTEP functions. Here it's very important that the programmable networking fabric is part of the shared underlay switching, because otherwise it won't be able to ensure the separation and enforce that each and every one of the virtualization infrastructures, as well as the VNFs and CNFs, is not overstepping its authority.
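To illustrate the VNI-range idea in code, here is a small Python sketch of how a hardware infrastructure orchestrator and underlay SDN controller might carve a VXLAN VNI space into per-instance ranges and then check that a VTEP request from an overlay controller (for example on behalf of an SR-IOV workload that bypasses the virtual switch) stays within that instance's authority. The data structures and function names are assumptions made for this example; CNTT does not define such an API.

```python
# Illustrative sketch only: the data structures and checks are our own, not a CNTT-defined API.

VNI_RANGES = {
    # virtualization instance -> (first VNI, last VNI), provisioned by the underlay SDN controller
    "iaas-1": (10_000, 19_999),
    "caas-1": (20_000, 29_999),
    "caas-2": (30_000, 39_999),
}

def owns_vni(instance: str, vni: int) -> bool:
    """True if the given VNI falls inside the range assigned to this virtualization instance."""
    lo, hi = VNI_RANGES[instance]
    return lo <= vni <= hi

def request_vtep(instance: str, switch_port: str, vni: int) -> dict:
    """An overlay SDN controller asks for a VTEP on a fabric port, e.g. for an SR-IOV workload
    that bypasses the virtual switch. The underlay controller enforces the separation."""
    if not owns_vni(instance, vni):
        raise PermissionError(f"{instance} is not authorized to use VNI {vni}")
    # In a real system this would program the switch fabric; here we just return the intent.
    return {"port": switch_port, "vni": vni, "instance": instance, "action": "install-vtep"}

print(request_vtep("caas-1", "leaf1/eth7", 20_042))   # allowed
# request_vtep("caas-1", "leaf1/eth7", 10_001)        # would raise PermissionError
```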
So let's step into the logical architecture, the reference model itself. It is here mapped onto the ETSI NFV reference points, and it also shows a number of missing points, where we are trying to find a suitable home for what needs to be specified; those are shown here as CNTT reference points. There are also a number of reference points that we don't care about in CNTT because they will likely be application specific. If I go through them on a very, very high level: the ETSI reference points are just simple lines, for instance a VNF going onto a virtual machine, so we don't need to go through them. The areas where we are missing something and need to define it are, for instance, when it comes to going from CNFs into the container infrastructure. The actual container runtime is rather well specified, but there is a section of secondary networking that telecom is rather dependent on, because it normally can't live behind a NAT, or it needs multiple separate networks to separate the traffic out. Those are not uniformly specified, so this is an area where there is a lack of specification that CNTT is looking at. Another one is how we can do container management, that is, from some sort of container infrastructure service manager, to control mainly the networking of the container infrastructure service instance. There we have a multitude of different CNIs today that are managed differently, so there is no unification there at all, and we are also looking for ways to find unification there. The third place is how a virtual infrastructure manager, or some other type of entity, an SDN overlay controller or something like that, could request for instance a VTEP, or those programmable functions in a programmable fabric, over some sort of hardware status and provisioning interface. The reason for putting the words hardware status in here, and not networking, which is the main part of this talk, is that over time it will also need to handle provisioning and allocation of servers and accelerators into each and every virtualization instance. So we're shooting for what will become a hardware status and provisioning interface over time.

Then we have a couple of interfaces that CNTT doesn't really care too much about, because they are normally dependent on the deployment or implementation. One is how the hardware infrastructure manager talks to the pool of hardware resources. There is however LFN work going on in ODIM, which Walter mentioned, on specifying this interface: for instance, creating a switch fabric model of how automation can be done from the hardware infrastructure manager, requesting the switching fabric to be set up in a certain way, and exposing the hierarchies of the topologies and so on and so forth, and the statuses of the interfaces. That's a promising development, but we'll see how it goes.
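Since ODIM builds on the DMTF Redfish model, a small sketch may help show what a hardware-facing interface of this kind can look like. Here a hypothetical hardware infrastructure manager walks a Redfish service to inventory the servers it manages. The /redfish/v1 paths are standard DMTF Redfish resources; the host name, credentials and what is done with the results are assumptions for illustration only.

```python
# A minimal, hypothetical sketch of a hardware infrastructure manager walking a Redfish
# service to inventory servers. The /redfish/v1 paths are standard DMTF Redfish; the host,
# credentials and how the data is used are assumptions made for this example.
import requests

REDFISH_ROOT = "https://bmc.example.net"   # hypothetical management endpoint
AUTH = ("admin", "password")               # placeholder credentials

def list_systems() -> list[dict]:
    """Return basic facts about every computer system exposed by the Redfish service."""
    systems = []
    collection = requests.get(f"{REDFISH_ROOT}/redfish/v1/Systems",
                              auth=AUTH, verify=False, timeout=10).json()
    for member in collection.get("Members", []):
        system = requests.get(f"{REDFISH_ROOT}{member['@odata.id']}",
                              auth=AUTH, verify=False, timeout=10).json()
        systems.append({
            "id": system.get("Id"),
            "model": system.get("Model"),
            "power": system.get("PowerState"),
            "memory_gib": system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"),
        })
    return systems

if __name__ == "__main__":
    for s in list_systems():
        print(s)
```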
Over time we believe that the hardware equipment management, which today is often done through some sort of OSS system, will move into the hardware infrastructure manager, so the equipment management interface going up to the OSS is not something that we particularly care about in CNTT either. The same really goes for the interfaces of the hardware infrastructure management and the virtual infrastructure management. When it comes to the virtual infrastructure manager, there are rather few virtual infrastructure distributions out there in the world and they are rather tool specific, so they will probably keep on being specific. When it comes to the hardware infrastructure management, those are very often proprietary implementations that will likely stay that way, but hopefully complemented a little bit more with Redfish and potentially ODIM-specified interfaces over time.

So, wrapping that up into an example. Here we have a deployment example of the reference model, and from the beginning I would like to apologize to people who are colorblind, because I have color coded the possibilities for separate administrative domains. Each color represents a potential administrative domain that could be managed by a separate organization with a focus on managing only that layer. You can see in the color scheme here that the black is basically the single hardware infrastructure management domain. Then you can have a red IaaS instance running on that, which could provide virtual machines up to separate tenants with separate colors up here, but it could also, in one of the virtual machines, install a containers-as-a-service virtualization instance that in itself could offer containers to different tenants, if that CaaS virtualization instance is a multi-tenant one. That will be unusual in the beginning, because that support doesn't really exist in the community yet; that is also one of the reasons why we are expecting that there will be multiple CaaS instances on these systems, and therefore the hardware infrastructure layer is the one that will ensure the separation between those different instances. So I think that pretty much wraps up the deep dive into the CNTT reference model, and let's now go over to Walter, and myself as well, to try to wrap this up.

Okay. So one of the things which I think was clear from this is that we found that networking is a very important part of this coexistence model, and automation obviously, and it is an area of concern; that is a very common experience in the industry, so we need to focus on it in our evolution of the reference model. At this point there is a sort of call to action: we need more industry experts to get involved in these efforts, because we want to be ahead of the technology evolution curve in this normalization and alignment work, and for that we need contributions into CNTT and the other projects we mentioned. I think it is a win for all to work on the industry alignment through several organizations, and this includes a lot of the projects and bodies we talked about: GSMA, and we said this reference model will be published by GSMA; ETSI, obviously, we talked about the ETSI model, and we want to be aligned to it as much as we possibly can and complement it; and, for example, the programmability work which is happening in many other organizations. So how can you do this practically? From the CNTT perspective, on this page you can find out: you can go to these links if you're interested in the reference model, or working on a reference architecture, or implementation, or certification, which is called conformance now. You've got some links here, but even if you just Google CNTT you will get to the home page and can find out more. There is also a link here to the white paper which describes CNTT and its mission in more detail. Finally, and you can see our faces here, we are happy if you want to talk to us directly; we are happy to discuss many technical aspects and also how to interface into this community.
So once again, we need a lot of experience and different points of view, because out of such discussions a common good always comes. At this point I will say thank you very much. Tomas, if you have anything to add, please do, and then we will go back to our moderator.

I think you wrapped it up very nicely, Walter, so I don't really have anything to add. I'm just hoping for good questions and good participation over time here as well. And also, thank you all for your time.

Yes, thank you both. That was a great presentation. We do have a couple of questions, so I'll kick it off with the first one. Does the hardware infrastructure manager overlap with a hardware abstraction interface, for example SAI and SONiC?

Yeah, that's a great question from Ajit. I was trying to answer it in the chat window, but I can answer verbally as well. SAI and SONiC will highly likely be found inside some sort of switch fabric, and the switch fabric is not in the purview of the hardware infrastructure manager itself. The hardware infrastructure manager is highly likely to interact with the switch fabric on a high level. So there is no overlap as it is today; although on a hardware-to-software level they might be sort of the same layer, they are in different components here. On the picture we had earlier, if I try to go back to it: in this picture you would highly likely find SAI and SONiC inside this box of a network resource, creating an abstracted switch fabric, and then you would expose status and so on and so forth to the hardware infrastructure manager, and the hardware infrastructure manager can express the intent of what it wants to be set up in the switch fabric.

Great, thanks for that thorough response, both verbally and written. Another question we have is a little higher level: what's the best way for a newcomer to get the lay of the land, get comfortable with CNTT, and learn how and where to contribute?

So, as I said, probably the best way is just to join. It's an open community, which is evolving as well, so you don't have to be an official representative of an organization. There are some rules you have to follow; if you go to CNTT there is a website, and there is also an onboarding site there, I think, with onboarding instructions. That's from the formality perspective. And obviously you can talk to many people, like ourselves, and we can work with you to find out the best way to contribute. You can also go into the GitHub, as we said, because we do all our work in GitHub, so it's available to everybody. You have to be a member of the community to contribute, but that's very easy and we very much welcome everybody.

Great, thank you. Another question in from Bob Mugman: will these new proposed interfaces for CaaS you've discussed be proposed to be added to the RA-2 specs of CNTT?

That's a great question. It actually has a two-pronged answer. One part of the answer is that on the reference model layer, we will try to have discussions with our liaisons to ETSI, for instance around IFA 029 and a few other ongoing developments within ETSI. If they take some of these things onto their specification roadmap, they might go in there and then come into CNTT as interfaces we point to from the higher level of the reference model. But there will also be lower layers, for instance on the CNCF level, that are more prone to go into the RA-2 level of specifications. So I think both are valid: to have discussions in the RM about it, as well as to have discussions within the RA.
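As one concrete illustration of the CNCF-level secondary networking constructs being discussed, here is a sketch of a Multus-style NetworkAttachmentDefinition expressed in Python, which is one existing way to give a telecom CNF an extra, separated network interface. The interface name, subnet and namespace are assumptions, and nothing here implies that RA-2 has settled on this exact mechanism.

```python
# Illustrative sketch only: one existing way (Multus plus the macvlan CNI plugin) to give a CNF
# a secondary network interface. Interface names, subnets and the idea that RA-2 would adopt
# this exact mechanism are assumptions for the example, not CNTT specifications.
import json

signalling_net = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "signalling-net", "namespace": "telco"},
    "spec": {
        # The CNI configuration is carried as a JSON string inside the custom resource.
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "ens3f0",   # host NIC carrying the signalling VLAN (assumed name)
            "ipam": {"type": "host-local", "subnet": "192.0.2.0/24"},
        })
    },
}

# A pod then asks for the extra interface with an annotation, for example:
pod_annotations = {"k8s.v1.cni.cncf.io/networks": "telco/signalling-net"}

# With the official Kubernetes Python client this could be created roughly like:
#   from kubernetes import client, config
#   config.load_kube_config()
#   client.CustomObjectsApi().create_namespaced_custom_object(
#       group="k8s.cni.cncf.io", version="v1", namespace="telco",
#       plural="network-attachment-definitions", body=signalling_net)
print(json.dumps(signalling_net, indent=2))
```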
Okay, I wanted to add a comment on this as well. In general, CNTT is not a standards-building organization, it's not an SDO. So when we find a gap, as Tomas alluded to, we try to get what we call upstream projects, or other organizations we work with, to see whether they can actually fill the gap. So far we haven't had to invent anything new in terms of new standards, but we may actually have to fill gaps somewhere. I'm not sure whether, to your question, this will really end up in RA-2; maybe there will be something like an RA-3 where we will work on a way of having those different deployment types together, the coexistence, but that has to be discussed further. At the moment, in the reference model we've actually got a chapter related to gaps, and quite a few of them are around SDN and load balancing and other aspects which are currently missing. In the next release we will try to either find a way to resolve them or find some other organization who can help us do this.

Thank you. And I think we could add to this as well. I mean, Walter, one of the reasons for doing this type of webinar is to see if there are people out there in the communities around the world that have good propositions for interfaces in the places where we feel they are missing. If there are already defined open source initiatives or standardization going on that are working on these, we would love to get your feedback on those interfaces and see how well they fit into this type of model. So you're most welcome with those.

Yeah, great, thank you. One final call for questions; we've got one more that's come in, so if you do have one, please go ahead and type it in the Q&A window. We did get a question about what the future looks like at a high level. What's next for CNTT?

Well, first of all, let me just start. We are working very closely with OPNFV, and it is hard to describe, and not the purpose of this webinar to describe, all of that relationship. But as we said at the beginning, there is a conformance part and an implementation part. We really want to make sure that we've got these reference implementations, making them more complex and more realistic, so that they can be playgrounds where operators and vendors can try those solutions, and conformance, which can lead to some badging and, I think, confidence in this. So that is where we are going, as a natural progression. And since we are talking about the future here, that means we want to get to more realistic implementations of the reference model and the reference architectures, with those networking aspects. And we didn't mention the other part, which is storage, which as we know is very much related to networking as well. So there are a lot of technical things that we have to resolve. I think that programmability, like P4, and networking within the switches or within SmartNICs, is something which is really coming, and that is the future as well. And I have the impression that 5G, with its stringent requirements,
starting from the core, for example, which has to be containerized in the service based architecture, the SBA, will be driving a lot of what we are doing here. One other thing I wanted to mention: while we focused on networking, this hardware infrastructure manager will, I think, have some role for compute and storage as well, from the perspective of managing different types of, say, HP and Dell servers in some uniform way and presenting this to the virtual management layer, like we can see here. That will be really very important. So I'm sure there will be a lot of work and surprises on the way. Tomas, do you see something different in the future?

No, not different, but maybe I could add a slightly different dimension. If we dive into, for instance, what's in the hardware resource pool, one item here says compute resource, and people naturally think of a CPU. But we added another compute resource in here to, in some way, prepare the future for a couple of different accelerators like FPGAs, SmartNICs, GPUs and the like. The ability to manage those, the ability to program those, the ability to slice them up, to have virtualized slices of them in different shapes and forms: all of those things need to be incorporated over time. One of the hard things here is to ensure that we try to be ahead of the curve, so that we don't only have to document what others have implemented in a diversified way. For that we need the good suggestions, and the other communities coming in and helping us with how we do that in the best possible way. And we have to take our crystal ball and have a deep look into what things seem to be needed now, if we go into different areas; for instance, the O-RAN workgroup 6 has a similar target, and they work together with CNTT, for instance through the Edge workstream, and so on and so forth. So there are multiple things going on in the technical area here as well.

Yeah, and I would comment, as I think I alluded to at the beginning, that from my experience, and the experience shared with me by many different participants in this community, it looks like networking, storage and SDN are the major problems and complexities when you build these things and try to get them to work together. It's relatively easy, if I can use this word, to take, say, an OpenStack based cloud and put something on it; but to make it work in a real telco environment, where you've got so many different functions and different needs and different requirements and different standards, is really very hard. So we are trying to help make this closer to practical application, which was part of the topic of this webinar. That is probably the future.

Thank you both. So I think it's time to wrap up. I just want to thank all of our attendees, thanks folks for asking great questions, and a big thank you to both of our speakers, who are based in Australia and Sweden, so this is really an inconvenient time for them to be online; we really appreciate your flexibility there. If you did register for the webinar, you'll receive an email with a link to the recording, which will be available on demand for later viewing as well. And if you have any questions, please just send them to PR at lfnetworking.org and we can get you in touch with the appropriate spokespeople. All right, thank you everyone, have a good day, and we hope to see you on a future LF Networking webinar. Thank you.