All right, let's get started. Thanks for joining this morning's session on OpenStack Tacker. My name is Sridhar Ramaswamy. I'm a Principal Engineer at Brocade, and I'm also the PTL for this OpenStack project. With me are the rest of the core team members over there: Bob Haddleton from Nokia, Sripriya from Brocade, and Stephen Wong. So let's get started. Let's see if we can get this to work. Here is the agenda: we're going to look at a brief overview of the scope of the OpenStack Tacker project, go into its architecture and some of the features, and then spend a lot more time on the demo. We had a significant release in Mitaka, so we want to make sure to share with the community all the nice features we introduced in the last cycle; we'll spend maximum time there. We'll also look at what's coming ahead. The scope here is wide, so we're going to look at all the stuff that's on our radar and wrap up with a Q&A.

So the scope of Tacker is around NFV orchestration and VNF management. We started as a VNF management project based on standards-based architectures, and you probably recognize that's an ETSI diagram there. The two boxes out there represent the problem space that OpenStack Tacker is after. Even though its core competency is a generic VNF manager, we have to look at that box as a whole to provide the solutions required for that function. So it's both a generic VNF manager, and some features go into the NFV orchestration area. And it's an official OpenStack project; we'll talk more on that.

Continuing on, some history on the Tacker project, for the folks that have followed it for a while: it started out as a Neutron service VM project. (The slides are catching up, hang in there... I think they've finally caught up. Let's do it the old-school way.) So it started out as a service VM project. We moved into the scope of NFV orchestration in early 2015, and we announced this project at the Vancouver summit. We had a session there demonstrating what we had, which became the basis for our first release, with functions around basic lifecycle management. We got a significant amount of interest; we wanted to get a feel from the community for whether this was the right, relevant thing. We believed there was a gap in OpenStack at the time: a significant amount of effort had gone into the OpenStack Nova and Neutron projects around NFV, but there was nothing there to actually consume those features, or it was very difficult to consume them. And we believed the ETSI NFV blueprint gave us a framework to go do something. That's how we started this project.

So we had a good second release, Liberty, and again we demonstrated it in Tokyo. Liberty is significant in one way: we came together as a community. The core members fell into place; we kind of banded together. We also decided to make the project real, with proper functional gate tests, unit tests, and proper governance, all geared towards the next thing you see there: applying for the Big Tent. We applied for the Big Tent in February/March, and we got accepted. So now Tacker is an official OpenStack project, similar to all the other projects in that bucket. That's a significant milestone for us. And now we have released our third release as part of Mitaka.
So if you go to the OpenStack releases page, you will find Tacker. We are working with our distro partners to have Tacker bundled as part of future OpenStack distros. And now that it's an official OpenStack project, we are also working with the deployment projects like Puppet and Ansible to make Tacker easy to install for the folks who are interested.

Another thing, over all this journey: this project is slightly different in the sense that we have been actively working with the standards organizations and other organizations in this space, like OPNFV. OPNFV was significant; we have a very good working relationship with many different OPNFV projects, particularly SFC, Multisite, and various others around the forwarding graph, and we are interacting with more OPNFV projects to make Tacker relevant for them. The other significant thing is on the standards side. Again, it is important for telcos to have the solution adhere to standards, so we are closely working with folks like OASIS on the TOSCA standardization of the data models.

Like I mentioned, we came together as a community, and this is something I am really proud of, having led the project for the last three cycles: getting a diverse community around this problem space. We also tend to attract a lot of new OpenStackers. These are folks from the telcos; they know more about NFV than OpenStack. So the team here spent a significant amount of energy welcoming them and making them comfortable. This is something we are really proud of, and we are getting a lot more interest, so this pie chart will get more colorful.

Now let's look briefly into the architecture. For the folks who might have seen the architecture diagram before, this one is a little more refined. For those familiar with other OpenStack services, Tacker is designed as another OpenStack service, so you will find familiar interfaces on the northbound side: an API front end, a Horizon GUI, and CLIs using python-tackerclient. Below the API layer you see the three significant components we have in Tacker. The first is our catalog, where we collect all the assets related to VNFs; this is our repository of all the VNF templates, which are all TOSCA-based, and it is expected to grow into other areas. Again, our main focus is the VNF descriptor at this time. Then there are the two other components; I'd rather you focus on the actual features, but roughly, this is how we are structured as a framework inside the code: there is code facing downward towards the VNFs, around configuration, management, and self-healing; code facing towards the OpenStack VIM; and pieces that tie these things together.

So now we have a VNF: the VNFM component is capable of instantiating a VNF from the catalog. Now what do you do with it? The end goal here is to get a network service up and running quickly; that's the whole story here. So we need to wrap it up with other things, like multi-site. This is something we heard as very important from the folks who tried Tacker; you will hear more about this feature in the following slides. And we interact with Heat. We don't want to reinvent anything; we heavily leverage Heat.
And we have plans to integrate with a lot of other projects. We also integrate with the other work happening in the heat-translator and tosca-parser projects. For the rest of the features, I invite Stephen from our core team.

Thanks. So, as Sridhar said, prior to the Mitaka release we were very focused on building a VNF manager that is generic and actually useful. We first built a catalog, a repository of VNFDs. It's really primitive, actually; at this point it is basically just a database for you to fetch things from. And in Liberty, actually prior to that in the Kilo cycle, we made the back end Heat, which was a tremendously good decision. But what we didn't know at the time was that there was already a translator out there. So prior to Mitaka, the TOSCA-to-Heat translation was done directly inside Tacker, and Heat would take care of all the instantiation and termination of VNFs.

The thing Tacker really adds value to is configuration injection and configuration management. And again, this is per-VNF specific: you can write different management drivers for different configurations, because we firmly believe that different VNFs have different configuration models. The really cool thing is that when you restart your VNF, it will map the configuration back onto the VNF, and you can do dynamic updates of the configuration and push it down to the VNFs. Health monitoring is another thing Tacker adds as value, loadable per VNF: for each VNF you can set up different health-monitoring policies. In addition to that, if a VNF is declared dead (literally dead, actually), we provide policies inside the VNFD that allow it to self-heal. And then, as we said before, the configuration and everything is simply reapplied when that happens. So this is what we did leading up to Mitaka to make the VNF manager really solid. And for the Mitaka features, which we will go into in much more detail, I bring to you Bob.

Thanks, Stephen. So as Stephen and Sridhar both mentioned, prior to Mitaka Tacker was using a custom DSL. You could call it TOSCA-lite if you wanted to, but it really didn't have most of the functionality that you find in TOSCA. So one of the big features in Mitaka was integrating the existing tosca-parser and heat-translator libraries from the Heat project into Tacker, so that we can focus on the important things and get rid of our whole parsing section and our whole section that generates Heat templates, because that's not what we want to focus on; we want to take advantage of the existing libraries the community is already providing. We've been working closely with the TOSCA NFV working group. We started off with the CSD02 release, which was last November, I think, something like that. We started implementing that in the tosca-parser project and then integrated that support into Tacker. In the process of doing that, we found lots of issues with the CSD02 spec, fed those issues back into the working group, and came up with the CSD03 spec, which was just released, I think, last week or this week. That support is now in Tacker, and it will be pushed into the tosca-parser project during the next cycle.
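To make that translation concrete, here is a rough sketch of how a TOSCA NFV node from the Tacker catalog maps to a Heat resource once tosca-parser and heat-translator have processed it. This is illustrative only; the node type and property names are assumptions loosely modeled on the Mitaka-era sample templates.

```yaml
# TOSCA input: what the user onboards into the Tacker catalog
node_templates:
  VDU1:
    type: tosca.nodes.nfv.VDU.Tacker   # virtual deployment unit
    properties:
      image: cirros-0.3.4-x86_64-uec
      flavor: m1.tiny

# Translated HOT output: roughly what Tacker hands to Heat
resources:
  VDU1:
    type: OS::Nova::Server             # a VDU becomes a Nova server
    properties:
      image: cirros-0.3.4-x86_64-uec
      flavor: m1.tiny
```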
By pulling that support into Tacker, we've opened up the ability to support a lot of other features we really wanted. In the past, for any new feature we wanted to support, we had to come up with the DSL ourselves, figure out the mapping into Heat, and basically go through that whole process. By using the existing TOSCA DSL with the translator and parser, we don't have to do any of that; we can move much more quickly.

This is basically just a graphic that shows what I just talked about. We take the outputs of the OASIS working group for TOSCA NFV, implement them, and feed back into the working group so they can make the changes they need to make and we can make suggestions, and that iterates. Those changes also get pushed into tosca-parser, we contribute to both tosca-parser and heat-translator as needed, and then Tacker orchestrates that whole template down into consuming the resources in the VIM.

This is just an example of a TOSCA NFV profile template; it's essentially "hello world" in TOSCA. You'll notice it's quite a bit different from Heat. It's just a different mindset, describing exactly the same thing: one compute instance, one port in Neutron, and one network in Neutron, described from the TOSCA NFV perspective. TOSCA NFV uses VDUs, virtual deployment units, which map to Nova servers. It uses connection points, CPs, which map to Neutron ports. And it uses the term virtual link, which maps to a Neutron network; the way that's all bound together is through the port. This example also has some of the Tacker extensions to the NFV profile, things like the management driver, the monitoring policy, and the management property of the connection point. Those are all Tacker extensions to the existing NFV profile, and some of that has already been pushed back into the working group and has come out in the CSD03 release.

Enhanced Platform Awareness is one of the features made possible by the base TOSCA support. EPA is something that allows the person writing the template to specify advanced hardware features as requirements for their VNF: things like CPU pinning, huge pages, and SR-IOV. Those can now all be represented in the TOSCA template, and the orchestrator, Tacker, and Heat can work together to make sure those resources get applied to the VNF when they're needed. The caveat I always put on that: your hardware has to support it. There's no magic. If the underlying compute or networking hardware doesn't support CPU pinning or SR-IOV, this isn't going to make it possible. But assuming your underlying compute and network hardware support all those things, this is how TOSCA will represent them, and Heat can then orchestrate those requirements out to the VNFs.

Auto resource creation was another feature made possible by our use of tosca-parser and heat-translator. Basically, for specific resources, you can specify them in the TOSCA template, and if they don't already exist, the translation to Heat will create those resources in the Heat template, which will then create them in OpenStack, just like you can do with a normal Heat template today. You can do flavor creation with Heat templates today; you can do image creation; you can do network and subnet creation. This maps the same things out of TOSCA.
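For reference, a minimal "hello world" VNFD of the kind described above looks roughly like this. It's a sketch modeled on the Tacker sample templates; the exact type names and values are assumptions and evolved between releases. Note the nfv_compute properties: when no matching flavor exists, these are what the auto resource creation path turns into a new flavor.

```yaml
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Minimal VNFD sketch, one VDU, one CP, one VL

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker     # maps to a Nova server
      capabilities:
        nfv_compute:
          properties:                      # drives auto flavor creation
            num_cpus: 1
            mem_size: 512 MB
            disk_size: 1 GB
      properties:
        image: cirros-0.3.4-x86_64-uec
        mgmt_driver: noop                  # per-VNF management driver

    CP1:
      type: tosca.nodes.nfv.CP.Tacker      # maps to a Neutron port
      properties:
        management: true                   # Tacker extension: management port
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1                     # binds the port to the VDU

    VL1:
      type: tosca.nodes.nfv.VL             # maps to a Neutron network
      properties:
        network_name: net_mgmt
        vendor: Tacker
```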
And for the next major feature in Tacker for the Mitaka release, we'll head over to Sripriya.

Thanks, Bob. The final feature we worked on in the Mitaka release is multi-site VIM support. Pre-Mitaka, Tacker was able to deploy VNFs on the local OpenStack site where Tacker itself was installed and running. The requirement then came: can Tacker deploy VNFs in multiple OpenStack sites without the need to deploy or install Tacker in each of those sites? So in Mitaka we started working on multi-site VIM support, where Tacker as a single controller can deploy and manage VNFs in multiple OpenStack sites. This provides a unified view of VIM management for the operator to manage these VIMs and deploy VNFs in each of these sites. The feature also provides explicit region support: the operator can deploy a VNF in a specific region within an OpenStack site if regions are available there. The feature auto-discovers the regions and displays them to the operator, who can then specify the region in which to deploy the VNFs.

That brings us to the next slide. When there are multiple OpenStack sites running in a telco infrastructure, there will be multiple OpenStack versions running, going all the way from Kilo to the latest Mitaka release. Tacker's multi-site feature is able to register sites running versions starting from Kilo. When resource requests come through the Tacker server, they are gracefully downgraded or upgraded based on the OpenStack release and on the Keystone and Heat template versions running on that site. Tacker, as you may know, extensively uses the Keystone and Heat OpenStack services, so it automatically detects the versions available on the OpenStack site and deploys the VNF in a seamless manner. You can learn more about the multi-site feature in our multi-site session happening today in this same room at 11:40.

So that wraps up the Tacker Mitaka features; we can now get into the demo. We have split the demo into two parts. In the first part, we will showcase the multi-site VIM feature, the TOSCA template Bob talked about, and how we can perform VNF lifecycle management, enabled with the monitoring framework and the management configuration framework. In the second part, we will look into the EPA feature, exercise some of the EPA properties like CPU pinning and huge pages, and finally look into auto flavor creation. So let's get straight into the demo.

Throughout the demo we will be using the Horizon dashboard to demonstrate the features. We log into the Horizon dashboard and navigate to the NFV tab. Under the NFV tab, we browse to the VIM management page. Let me pause here. We have two VIMs already registered: one is the SFO site, which is running the OpenStack Kilo release, and we also have an OpenStack Liberty release running at the New York site. These VIMs are already registered in VIM management. So let's go ahead and click on the Register VIM button to see how we can register a new VIM in the VIM management dashboard. I click on the Register VIM button, the popup comes up, and you can feed in all the information to register the new site. Here we are registering a new site at Austin running the OpenStack Mitaka release.
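The same registration can be done from the CLI with a small config file. This is a hedged sketch: the field names follow the Mitaka-era python-tackerclient conventions as best we recall, and the URL and credentials are made up for illustration.

```yaml
# vim_config.yaml: the same site parameters the demo enters through Horizon
auth_url: 'http://austin.example.com:5000/v3'   # Keystone auth URL of the remote site
username: 'tacker_user'
password: 'devstack'
project_name: 'nfv'   # tenant on the remote site where VNFs will be deployed
```

With a file like this, something along the lines of `tacker vim-register --config-file vim_config.yaml --description "Austin site" VIM-Austin` registers the site, analogous to the Register VIM button shown here.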
So we feed in all the parameters, including the Keystone authorization URL, the username, and the project name, i.e. the tenant on the remote site where we want to deploy the VNF. Once we feed in these parameters, we click on the Register VIM button, and that goes ahead and successfully registers the VIM in the VIM management dashboard. Here you can see that the Austin site running the Mitaka release has been successfully registered. We can now onboard a VNF and deploy it in the Austin site.

So let's go to the VNF catalog page. We already see multiple VNF catalog entries from different vendors that have been onboarded. Let's click on the Onboard VNF button and onboard a new OpenWRT VNF. We select the TOSCA template for the OpenWRT VNF and click on the Onboard VNF button. Here we see that the OpenWRT VNF has been onboarded into the VNF catalog list. If we click on the OpenWRT catalog entry, we can look into the TOSCA template provided for OpenWRT. There are three resource types here: the first is VDU1, which is a node of type NFV VDU; then there is CP1, which is a node of type connection point; and then there is VL1, a virtual link, which basically specifies the management network the VDU needs to connect to. We also have other parameters provided in VDU1: we can see there is a management driver and a monitoring policy. We can exercise these features once we deploy the VNF, to see how these policies are triggered.

We can now go ahead and deploy the VNF we just onboarded. Here we have a few VNFs already running and active. We click on the Deploy VNF button, and a popup comes up where we provide the VNF details: the VIM name, which is the Austin site we just registered in VIM management, and the configuration file, like a sample firewall configuration that needs to be applied to the VNF. Once that information is provided, we see that the OpenWRT VNF has been deployed into the VNF manager with the status PENDING_CREATE while the background tasks are completing. Once the tasks are completed, we see that the VNF status has changed to ACTIVE. Now we can go to the Austin site dashboard to see if the VM was actually instantiated. Here in the Nova instances we can see the new instance that got created, and if we navigate into the instance console, we can dump the firewall configuration to confirm it was successfully applied. So here we just dumped the firewall configuration to see that the management framework in Tacker kicked in and configured the instance with the firewall configuration.

In the next part, we are going to show the monitoring policy. We saw in the template that we had provided a monitoring policy: whenever the VNF goes down, we want the monitoring failure policy to kick in, and based on the action you provide in the monitoring policy, that action is triggered in Tacker. Here we gave the failure action as respawn. So let's go ahead and bring down the network interface on the instance to see how the monitoring framework responds. Here we brought down the network interface on the VM, and we should see the status change on the OpenWRT VNF: it changes from ACTIVE to DEAD because the network was not reachable on the VNF instance.
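For reference, the monitoring section of the onboarded OpenWRT template looked roughly like this. It's a sketch modeled on the Tacker Mitaka sample templates; the parameter names are assumptions and may differ slightly between releases.

```yaml
VDU1:
  type: tosca.nodes.nfv.VDU.Tacker
  properties:
    image: OpenWRT                 # illustrative image name
    mgmt_driver: openwrt           # management driver that injects the firewall config
    monitoring_policy:
      name: ping                   # loadable monitoring driver (an HTTP check also exists)
      parameters:
        monitoring_delay: 45       # seconds to wait before the first health check
        count: 3                   # failed probes before the VNF is declared dead
        interval: 1
        timeout: 2
      actions:
        failure: respawn           # action Tacker triggers once the VNF is declared dead
```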
Once the status changes to DEAD, because of the respawn failure policy, the respawn logic kicks in and spins up a new instance for the OpenWRT VNF. We can see that the status has now changed back to ACTIVE. If we go into the Austin site dashboard again, we see that the old instance was deleted and a new instance was spun up. Here you can see the name marked as respawned, basically to tell the user that a new instance was created in the background for the VNF that went down.

This is the second part of the demo: we are going to demonstrate the Enhanced Platform Awareness feature and auto flavor creation. Again, we go back to the same workflow. In VIM management, let's register a new VIM site that is enabled with EPA-capable compute nodes. When we want to deploy a high-performance VNF, we want to select a site that has EPA compute nodes, so let's register a new VIM at the San Jose site, where we have an OpenStack instance running and configured with EPA compute nodes. Here we provide the VIM parameters, then the username and the project information where we want to deploy the VNF in the San Jose OpenStack instance. Once that is done, we see that the San Jose VIM has been successfully registered; it's running the OpenStack Mitaka release and is enabled with EPA compute nodes.

Now we can go back to the VNF catalog and onboard a new VNF. This time we are going to select the Cirros template, which has been configured with EPA properties. Once we onboard the VNF, we can take a look into the TOSCA template to see all the EPA parameters we have provided for it. We select the TOSCA EPA template and click on the Onboard VNF button, and we see that the Cirros EPA VNF has been onboarded. We can click on the Cirros EPA catalog entry to see the TOSCA template that was just onboarded. Here we see the VDU node type; it may be a bit hard for people in the back to see. In the VDU1 properties we have the memory page size and the CPU allocation. This basically means we want a VNF with huge pages and dedicated CPUs applied to the instance. These are a few of the EPA parameters we can provide; you can learn more about the EPA parameters that can go into the TOSCA template in the afternoon session happening in this same room, I think at 1:30. With these two EPA properties in the TOSCA template, we should be able to deploy this VNF at the San Jose site.

We can now navigate to the VNF manager and deploy the VNF we just onboarded. Let's click on Deploy VNF. The popup comes up, and we supply the VNF name and the Cirros EPA template we just onboarded. For the VIM name, we select the San Jose site, which has the EPA compute nodes. We see that the status of the VNF has gone into PENDING_CREATE; in the background this spins up an instance in the San Jose OpenStack instance and instantiates the VNF. And here we have the VNF changing status to ACTIVE. We can now go into the San Jose OpenStack dashboard to see the instance and also the stack that was just created for the VNF. Tacker heavily uses Heat in the background; we can see that a stack was created here, the first one in the list. If we click on the stack, we can see that there are three resources created for the VNF.
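Before walking through those three resources, here is roughly what the EPA portion of the Cirros template, and the flavor Heat synthesizes from it, look like side by side. This is a sketch based on the Tacker EPA samples; the exact property and extra-spec names are assumptions.

```yaml
# TOSCA side: EPA requirements declared on the VDU
VDU1:
  type: tosca.nodes.nfv.VDU.Tacker
  capabilities:
    nfv_compute:
      properties:
        num_cpus: 2
        mem_size: 4096 MB
        disk_size: 10 GB
        mem_page_size: large         # huge pages
        cpu_allocation:
          cpu_affinity: dedicated    # CPU pinning

# Heat side: Tacker auto-creates a flavor carrying the same
# requirements as Nova extra specs (visible in the stack below)
VDU1_flavor:
  type: OS::Nova::Flavor
  properties:
    vcpus: 2
    ram: 4096
    disk: 10
    extra_specs:
      hw:mem_page_size: large
      hw:cpu_policy: dedicated
```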
So here, if we click on the stack, we can see there is a Neutron port, there is a Nova server, and the third resource is the flavor. The flavor is the important thing to focus on: this demonstrates auto flavor creation. Because we provided the flavor requirements in the TOSCA template as EPA properties, Tacker went ahead and created a new VDU1 flavor carrying those EPA properties. We can also take a look at the Heat specification into which Tacker translated the TOSCA template. You can focus here on the extra_specs property in the VDU1 flavor: the CPU policy and the memory page size "large" refer to the two EPA properties, CPU pinning and huge pages, that we provided in the TOSCA template. We see that the extra specs have been created for the flavor resource. We can also navigate to the flavors list itself to confirm that the flavor was actually created for this particular VNF; if you click on Flavors, you can see that the last one in the list shows the flavor created with the EPA properties. And finally, we can confirm in the Nova instances that an instance was actually created with the new flavor and successfully spawned. So that completes EPA and auto flavor creation. And with this, we complete the demo of the Tacker Mitaka features and hand it back over to Sridhar.

So now we come to the roadmap of Tacker, in particular the upcoming Newton release. By far the most high-profile feature (I think that's the appropriate word) for the Newton release is the VNFFG, the VNF forwarding graph descriptor. In the interest of time I'm going to go through this very quickly; if you're interested, there is a design summit session this afternoon on the Newton plans. We decided to do this directly using VNFFG descriptor (VNFFGD) templates in TOSCA, leveraging all the information already in the catalog. The reason we want to build this in Tacker is that you can leverage all the connection points, which, as Bob said, are Neutron ports, and the virtual links, which are Neutron networks, extracted directly from the VNFD. If you think of it that way, you can now create a graph in the template. It's much easier for users to onboard, and you can reuse the template across many different deployments. We are integrating with the networking-sfc project. As I would imagine most of the audience knows, in the IETF SFC architecture the networking-sfc module in Neutron serves the SFP (service function path) purposes, and you can think of Tacker as serving the SFC purposes. Tacker members are also contributing to the OpenDaylight SFC driver in networking-sfc. So, very quickly: you onboard a VNFFGD, as well as the associated VNFDs, into the catalog, and then by deploying the VNFFG, Tacker could potentially spawn the VNFs by itself and chain them up using networking-sfc. It's very convenient, and it's actually the beginning of something like a network service descriptor.
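As a purely illustrative sketch of the direction just described (the VNFFGD design was still under review at the time, so every name below is hypothetical, loosely following the format that was being discussed): a forwarding path classifies traffic and lists the VNFs to chain, reusing the connection points already defined in the VNFDs.

```yaml
topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.Tacker      # hypothetical: forwarding path (SFP)
      properties:
        policy:
          type: ACL                        # classifier for traffic entering the chain
          criteria:
            - destination_port_range: 80-80
              ip_proto: 6
      requirements:
        - forwarder: VNF1                  # chain order: VNF1 then VNF2
        - forwarder: VNF2

  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG         # hypothetical group tying the graph together
      properties:
        vendor: tacker
        constituent_vnfs: [VNF1, VNF2]
      members: [Forwarding_path1]
```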
Thanks, Stephen. So this is a significant feature that we've been iterating on a few times outside Tacker, and now we are finally trying to get it in, in Newton. Beyond SFC, like Stephen mentioned, we are bringing in various other features that we have been talking about for a while now. There is enough interest in the community to introduce support for a network service descriptor (NSD), which essentially ties the other two things together: it ties a collection of VNFDs and the forwarding graph into one coherent network service. This is something that's been asked for multiple times. I think we waited until we had the right tosca-parser integration before doing this, so this is a good time to do it, in Newton. Another aspect that keeps coming up is scaling. Scaling is a huge subject, and it goes back to the monitoring aspects as well, but we want to get started in Newton with VDU scaling using some simple Ceilometer alarms. There is already a spec that landed for each of these, and in fact coding has started for the forwarding graph.

Beyond this, we have various other features, particularly focused on the VNFM. I will reiterate that the VNFM is a significant component of Tacker. There are features around, for example, VNFC: today a VDU maps to a full image of a compute instance, whereas with VNFC you can take a blank VM and install your VNFC software on top of it, and all of this can be described in the TOSCA template. There are various other usability enhancements we are planning, including some refactoring, which I think is required: after three cycles of development, we need to take some effort to reorient ourselves in the code base to go after the rest of the features, including things like notifications. There is also interest in evolving the catalog aspects of Tacker, now that we are going to have a lot more descriptors, like NSDs and forwarding graph descriptors, and there are other projects in the OpenStack community we can collaborate with to evolve this component.

So these are the things on our mind; we're going to discuss many of them, and many more. Even beyond Newton, there are various things on our bucket list to go after. Again, it depends on how many folks join; in each cycle we tend to get more contributors, and we welcome that, because there is a lot to do in Tacker. If you're in the NFV space, I would recommend you consider participating in the Tacker project; this is something we can take on together as a community and make things better.

One common question that comes up: there are a lot of projects in our ecosystem, in the industry today, addressing NFV, the NFVO, and the VNFM. Tacker's primary role is the VNFM, the generic VNFM, and we want to make that better. We have many features lined up in the VNFM track, and we will continue to do that. In that sense, we are excited about our integration with various other projects like OPEN-O, and even projects beyond OPEN-O that are interested in consuming Tacker as a VNFM; we will continue to support that, and we are quite excited about those integration opportunities. There is a portion of the community that also wants to explore some features in the NFVO, and many of them are dual-natured, frankly. For example, multi-site could even be considered part of the VNFM, for placing VDUs in different sites. And there are also forwarding graphs within a VNF itself: if you have a complex VNF, you might have VDUs that need to be chained. So we are exploring those features; even though we slot them under the NFVO part of Tacker, they are quite dual-natured in general. So I'd like to leave some time for Q&A.
So here are some pointers to follow up on. We have a wiki and docs; now that we are a proper official project, everything is hosted on OpenStack.org. We have a proper spec process, so you can come and write blueprints. And by the way, this demo video is available as well. And reach out to us: like I said, we welcome new contributors. We have experience welcoming brand-new stackers; we have made new stackers, and we are really proud of that. So do join us in this journey. There are two more sessions later today: one on Tacker multi-site, like Sripriya mentioned, and another around EPA, which is, again, a very significant thing we have done in this cycle. A lot of effort went into it, and it's a difficult thing to do. The effort that went into Nova is, I think, now easy to consume through Tacker; we are trying to finish the journey the folks in Nova started, running high-performance, low-latency workloads for NFV purposes. So that's it. Thank you. I think we might have a few minutes for questions and answers. If you have one, please line up at the mic. Yes?

"Are you using Ceilometer for monitoring VNFs today?"

So that's our intent: our first VNF scaling work will integrate with Ceilometer. There is a spec for auto-scaling, and it's actually written to use Ceilometer. But beyond scaling, we can also use it for things like respawn.

"In the demo you showed today, where the VNF dies after the network interface was brought down and it respawned: how do you detect the failure?"

So in this case, there was a monitoring driver, and it's a loadable driver. We just demonstrated it using ping, something that's easy to demonstrate, but you can write your own driver; there is one for an HTTP check, and you can write other things. That's the area we are expanding in the next cycle. Thank you.

"Hi. I think this is really great. It brings mainly three different communities together, right, OASIS and so on, and it establishes concepts like the VDU at a high level of abstraction. As you mentioned, it maps down to APIs like Nova, Neutron, and so on. Could you talk a little bit about this mapping and the traceability? You need, of course, to map things down, but do you coordinate all the different resources that get generated? Is there any trade-off in this regard?"

So in those areas we heavily leverage Heat, meaning HOT resources. There is an ask to expose the resources instantiated by Tacker one level up. Beyond that, and I'm not sure I understand the question correctly, there are also efforts around policy, as in where you want to place these resources. There are some efforts there; we discussed a few in the birds-of-a-feather session yesterday.

"I mean, when things are generated, does information get lost? Nova is not aware of a VDU, right? Do you keep this information? Are there trade-offs in this regard?"

No, we rely on the Heat stack. Again, we rely on Heat for some of these things: we don't maintain the resources created by Heat, but we indirectly map them to the stack that Heat creates.

"Thanks." Sure. Thank you. I think we are out of time; maybe we can take the rest of the questions in the hallway. Thank you so much.