Yeah, because I got to turn on the mic. Is he ready? All right. Welcome, everybody. So glad you could come join us here. The title of this talk is CERN and Science Clouds in Europe with TOSCA, OpenStack Heat, and Heat Translator. This is work that has involved different companies, organizations, and standards bodies, and, of course, open source technologies like OpenStack.

Let me quickly introduce my co-presenters: Ricardo Rocha from CERN; Matt Rutkowski, I think he's hiding, he's an IBMer; and does anybody know Sahdev from the Heat community? Sahdev Zala, from IBM as well? Not a single one. That's great.

So we actually got together at the last summit, and there were some interesting things going on: a big project involving CERN called Indigo, building a European infrastructure that was going to go across clouds, some of them OpenStack, some of them not. So now you're looking at heterogeneous cloud infrastructures and what can be done to exploit them. I won't forget the discussions, because it was at the OpenStack summit in Tokyo, and we were late to get into lunch, so we had to get these box lunches that I really did not enjoy. It was all veggie. I don't know if anybody else remembers those. You took one bite and thought, I'll just go hungry. But anyway, we had a really great conversation about TOSCA, something Matt Rutkowski has been working on for years, as the specification language for orchestrating cloud infrastructure and applications across infrastructures: it works with OpenStack and with non-OpenStack infrastructures. So we looked at working together to use it across those heterogeneous cloud infrastructures, and at how you map things and make that work.

Well, it turns out we had the rabbit ready to pull out of the hat. We've been working for years on a couple of sub-projects that have made their way into OpenStack Heat, namely Heat Translator and TOSCA Parser. One does the parsing and one does the translating, to map the world to HOT templates. These were embraced for a couple of years with the dream of funneling more workloads to OpenStack: if we can find a way to translate something to HOT, guess what, we can run it on OpenStack.

So, what we're going to talk about in this presentation: Sahdev will cover the work we've done in those sub-projects of OpenStack Heat, the TOSCA Parser and the Heat Translator. Then Ricardo will talk about how they use this at scale with Project Indigo and the work CERN has been doing across these different heterogeneous cloud infrastructures, and he'll show some demos, available off YouTube so anybody can go see them. And then Matt will talk about the latest things going on with the TOSCA specification, which is what pulls this all together. At this point, I'll hand over to Sahdev.

Thanks, Brad. Hello, everyone. My name is Sahdev Zala. I'm a software engineer at IBM in Raleigh, North Carolina, and a technical lead for TOSCA Parser and Heat Translator; both are sub-projects of Heat. I'll describe briefly what they are, some of the significant enhancements we made in Mitaka, and the excellent momentum we have going, working with different projects in OpenStack. So, as you may know, there are two different TOSCA profiles.
There is the TOSCA Simple Profile in YAML, and there is the NFV profile. The Simple Profile is focused on general topology and orchestration specification, whereas the NFV profile extends the Simple Profile for NFV-specific needs. The parser supports both of those profiles. It reads the different TOSCA entities (node types, capabilities, interfaces, policies, groups, and any custom types) and produces an in-memory graph of the nodes and the relationships among them.

Thanks to the development team, we had two point releases during the Mitaka cycle, 0.3 and 0.4, with significant enhancements. The most notable one was support for NFV. TOSCA seems to be getting increasingly popular in the NFV community, and at the last summit in Tokyo we had a meeting with the Tacker project team (Tacker is an OpenStack mainstream project now) about extending TOSCA Parser to the NFV profile; originally, the parser covered only the Simple Profile. Out of that meeting came a collaborative effort between the Tacker developers and ourselves, and now we support NFV. Just real quick: tomorrow we are giving a brown-bag tech talk just on NFV. If you're interested to know more about the NFV architecture and how Tacker is using these projects, parser and translator, please join us; it's at 2:15 PM tomorrow in the brown-bag room.

Besides NFV, we continued developing against the TOSCA Simple Profile spec. We added new features to parse things like TOSCA groups, which is the notion in TOSCA of grouping different nodes so that common operations, and things like policies, can be applied to them as a group. We also added parsing support for TOSCA namespaces and nested properties. Working with the Project Indigo members, who bring a lot of good use cases and use a lot of custom types, we found certain issues and took them as bug fixes to strengthen how custom types relate to the normative types. Also, as of 0.3 we do full validation of TOSCA templates, so you now get compile-style errors when validating a template, and there's support for nested imports. TOSCA Parser is used as a library, but we also created a shell utility program just so folks can play with it; the parser was a new project, created a couple of months before the Mitaka summit, so it was during this cycle that we created the utility. We have a few other things available on master, like support for load balancers, the range type, et cetera.

Moving to Heat Translator. The translator project was created a couple of years back. The goal is to enable deployment of non-Heat workloads by means of translation to HOT. As you can see on the right side, there's a snippet showing a TOSCA hello-world template and how it's translated to HOT. Again, thanks to the team, we had two releases, 0.3 and 0.4, during the Mitaka cycle. We typically release in parallel with TOSCA Parser; Heat Translator depends on the parser, so there's usually a three-week gap between them to enable translation of the features added on the parser side. As for new features, similar to the parsing side, we added translation for NFV and for policies, and we completed work on the OpenStack client side with new test suites. I do want to mention that we run into challenges when we translate: there is no one-to-one mapping for everything.
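For readers following without the slide, here is a minimal sketch of that kind of hello-world translation. It is paraphrased, not the actual slide snippet, and the flavor and image names are illustrative; the exact output depends on the translator version.

    ---
    # TOSCA Simple Profile input: a paraphrased "hello world" with one compute node
    tosca_definitions_version: tosca_simple_yaml_1_0
    topology_template:
      node_templates:
        my_server:
          type: tosca.nodes.Compute
    ---
    # Approximate HOT output after translation
    heat_template_version: 2013-05-23
    resources:
      my_server:
        type: OS::Nova::Server
        properties:
          flavor: m1.small                        # illustrative selection
          image: ubuntu-software-config-os-init   # illustrative image name

Where the mapping is that direct, translation is mechanical; the cases below are where it is not.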
For example, TOSCA has constraint-based flavor and image selection, whereas Heat takes a flavor and an image by name. Or take the key_name property of Nova: it's commonly used in HOT templates, but there is no such concept in TOSCA. So we now let the user provide the key name as an argument to the translator, and we set it accordingly in the HOT output. Also, as of 0.3 we dynamically query Nova to get the flavor; I'll talk about that in a moment with an example.

We keep adding new command-line options to make the translator more user-friendly. We added a few during the Mitaka cycle, smaller ones but useful, like only validating the template: if someone doesn't want to translate but only validate, they can do that, and they can save the translator output to a preferred location. On master there's something we really wanted to do, which is deploying automatically from the translator. So for folks translating TOSCA to HOT, with the deploy option you can now actually deploy as well: underneath, we just invoke stack create and let Heat take it from there. Other things include support for Ansible and Puppet. The folks at CERN on Project Indigo use Ansible heavily, so thanks to them, support was added to set the right group for Ansible and Puppet software configurations, and we have tests making sure it all works well.

Real quick: we have excellent momentum going. We are working with different OpenStack projects, either using them for our needs or working with them to increase TOSCA adoption, and we're working with our stakeholders. For example, on the left, Nova and Glance: that's the snippet I mentioned, with constraint-based host and OS capabilities instead of a flavor and image. If the translator is invoked within an OpenStack environment, we query Nova for the best match among the available flavors for the constraints, and similarly we query Glance to find an image that best matches. If the translator is used outside an OpenStack environment, which is pretty common, then we have a predefined list of commonly used flavors and images, find the best match there, and set it in the template; it's the user's responsibility to make sure those flavors and images are available in the environment where they deploy.

On the adoption side, we now have a Murano TOSCA plugin available, we have an OpenStack client plugin out there, and just a few weeks back we completed our integration with the Community App Catalog, so all the TOSCA templates can be browsed from apps.openstack.org. We also have two important stakeholders, Tacker and Open Platform for NFV (OPNFV). They're both using TOSCA Parser and Heat Translator, and their developers are also contributing to both projects. And Heat is our destination: we translate to HOT and we deploy with Heat. That's pretty much it from me for now. Thanks, and I'll give it to Ricardo.
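Before Ricardo starts, a hedged sketch of the constraint-based flavor and image matching Sahdev described. The property values are illustrative; the comments describe the behavior as presented.

    # Host and OS requirements expressed as properties, not a concrete flavor/image
    topology_template:
      node_templates:
        server:
          type: tosca.nodes.Compute
          capabilities:
            host:
              properties:
                num_cpus: 2          # treated as constraints: inside OpenStack the
                mem_size: 4 GB       # translator queries Nova for the best-matching
                disk_size: 40 GB     # flavor; outside, it picks from a predefined list
            os:
              properties:
                type: linux
                distribution: ubuntu # matched against Glance image metadata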
So hi, everyone. My name is Ricardo. I work at CERN in the OpenStack team. A bit of background on what CERN is; some of you might have heard this in other talks at other OpenStack summits. CERN is funded by 21 member states, and many other states are also involved in the collaboration. The main machine we have right now is the Large Hadron Collider, a particle accelerator 27 kilometers long and 100 meters underground. That's the tool we use to accelerate protons and generate collisions. These collisions happen in specialized detectors around 600 million times a second, so it generates a lot of data that has to be analyzed, or archived for analysis later. We're generating around 30 petabytes of data, and there may be more data derived from this base data.

The CERN cloud right now runs OpenStack. We have around 6,000 hypervisors, 150,000 cores available, and some 2,000 active users of the cloud. At any moment we'll have around 16,000 VMs running, with a rate of about 200 creations and deletions per hour. So it's a pretty big cloud.

One of the other things we do at CERN, because we have all this data, is collaborate with a lot of institutions around the world. There are multiple projects we've tried in the past; grid computing is one that has been very successful in distributing all this data and getting it analyzed. But for many institutions we are now looking at other solutions. So the European Union, under the Horizon 2020 programme, funded this project called Indigo DataCloud. Its main goal is a software infrastructure that can be used easily for science by multiple communities: not only CERN, which is more high-energy physics, but also biomedical communities, Earth observation, and many other fields of science. There are many partners; I've named some here: INFN in Italy, CERN in Geneva, UPV in Valencia, DESY in Hamburg in Germany. Many different institutions, both from research and universities, plus partners that joined from industry. One of the main aspects is that many of them are using OpenStack, but a few are also using OpenNebula, so we had to take that into account when designing the system.

The main aims of this project are to provide support for different scientific communities and to build on available solutions; we don't want to develop anything new when there are plenty of solutions out there we can build on. It has to be quick, so people don't have a very long installation guide to follow; they just want to do their analysis. And it has to support a distributed, hybrid environment with both private and public clouds. I also want to give credit here to the people who helped a lot with the work we're presenting.

So, why TOSCA? This is a bit of our experience with TOSCA: why we considered it, how we started using it, and where we are now. The first step was to consider what options were out there. At CERN we are an OpenStack deployment, and many other partners have the same setup, so Heat was an obvious choice, but it's specific to OpenStack, and CloudFormation had the same limitation. So we started looking around, and through discussions with IBM and others we took TOSCA as a viable common denominator, both for the topology of the system, to decide what goes into each site and to orchestrate the whole thing, and also for people to start defining end-user applications in the same way, which was always a problem in the past. There was an existing code base, the TOSCA Parser and Heat Translator, so we could just start trying it. And one big point is that they are libraries that we can reuse in other contexts.
For example, the TOSCA Parser is being used by one of our partners in another system, and the same mechanism the Heat Translator uses to translate from TOSCA to HOT could be used to translate to other formats. And there's growing support in different communities.

So we started with very simple use cases. The first one was to deploy a single VM with a certain image and see if it worked. Then we moved on to our specific use cases. One of them is batch processing: we have a lot of data, we distribute it, and then we launch jobs that analyze it and give back results. We do this in batch mode, and also in an interactive mode for the physicists, but the reconstruction of the data is done in batch mode. There are many systems in use that the HEP and other science communities are used to; Torque, Slurm, and HTCondor are some of the solutions, and of course now a lot of people want to use newer systems like Mesos or other container-based tools. Then one use case came from the biomedical community: they have their own portal, with a batch cluster orchestrated behind it, where the scientists go to submit their analysis and say, I want to analyze this data with this algorithm, and please send my results to this place. And then there are the Indigo-specific infrastructure jobs; this is more about how to package things and how to keep the infrastructure running without much attention. These are all the use cases we considered from the start.

From here I'll go to some examples. I hope you can read them; the goal is just to explain a bit how it looks. If you're used to HOT and Heat, it's not that different. It looks very similar, it's YAML, but the constructs are a bit specific. In this example we have a simple node that we call indigo compute, we say we need a public IP available, and we want these properties for this node. One thing you will see is that this is not a basic construct that exists in TOSCA: we have these custom types imported here, and that's where we define the needs we have. So what is an Indigo compute node? If we look here, TOSCA has the ability to define derived types, so you can say an Indigo compute node is actually an Indigo monitored compute node, and this lets us define common things in one place, very much as you would with Heat. In this case we are saying: in addition to deploying a Nova server, as it would be in OpenStack, also configure monitoring with Zabbix; this is the server endpoint, and this is the extra metadata I want. And then for the installation part, as Sahdev mentioned, in Indigo we use Ansible playbooks to define the installation and its several steps. So here we are saying: on creation, run this specific playbook, and for the configuration and the start of the service, use these other ones. And you can reuse properties that were defined earlier in the TOSCA template.

So then we translate it. How does it look in HOT? The exact template I showed you translates to something that, if you're used to Heat, is very familiar. It translates to a Nova server, with the flavor the person specified, small, and the image they wanted; there's a SoftwareConfig defined for the installation; and each step in the TOSCA installation procedure just maps to a SoftwareConfig in Heat.
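As a rough sketch of the pattern just described, assuming illustrative type, property, and file names rather than the real Indigo definitions, a derived type plus Ansible lifecycle hooks might look like this:

    node_types:
      tosca.nodes.indigo.MonitoredCompute:   # name modeled on the talk; assumed
        derived_from: tosca.nodes.Compute
        properties:
          zabbix_server:                     # monitoring endpoint; assumed keyname
            type: string
            required: false

    topology_template:
      node_templates:
        my_server:
          type: tosca.nodes.indigo.MonitoredCompute
        my_service:
          type: tosca.nodes.SoftwareComponent
          requirements:
            - host: my_server
          interfaces:
            Standard:                        # normative TOSCA lifecycle interface
              create: playbooks/install.yml
              configure:
                implementation: playbooks/configure.yml
                inputs:
                  server_ip: { get_attribute: [ my_server, private_address ] }
              start: playbooks/start.yml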
So the translation is very direct; you won't see anything surprising there.

Then the Ansible integration was an important one. Internally at CERN we mostly use Puppet, but in Indigo the decision was to use Ansible for most of the software installation. So what does that look like? Just to give you some detail: the installation part simply says which packages to install (these are very small examples; there would be more complex ones), and the configuration part uses some of the predefined variables to fill in the template for the configuration file. Another thing we started doing, because once you define a lot of these bits and pieces it helps, is to use Ansible roles. We use the Galaxy portal for Ansible, so instead of defining all the steps in each template, you just point to an existing role and build on it, which makes things a bit easier. One good thing that came from the work we did with IBM is support for multiple deployment and configuration options in the Heat Translator: anything that looks like YAML is assumed to be Ansible, anything .pp is Puppet, because we're also interested in that, and anything else is treated as a script. So you can hook in Python, Bash, whatever you need to configure your system. That works for us, and if you need something else, have a look; it's not hard to integrate.

Then we started looking at more complex examples. For the batch cluster I'll take the biomedical case. The goal is to have a batch cluster with a portal in front, so the user doesn't see any of the complexity of the system; they just see a portal where they go and say, I want to analyze this data with my algorithm. Behind it there's a batch system, in this case Torque, with one entry point that takes the jobs and multiple worker nodes that consume the jobs, run them, and send back the results. The main thing to see here is that if we did this in Heat, we could deploy it on OpenStack easily; but since we have a diverse set of infrastructures, we want one entry point with worker nodes running in multiple clouds. They could run in our cloud at CERN in Geneva, in a cloud in Valencia, or in a cloud at one of the partners in Germany, and everything should just work; there shouldn't be any complication due to that. By defining it in TOSCA, we can convert to the local system at each site and reuse the same templates to deploy parts of the infrastructure. That's one of the main benefits.

Again, an example. It's a bit hard to put these templates on a slide because they can grow quite a lot, but the goal here is an elastic cluster, defined by our templates, with one front end and, in this case, one or more worker nodes. The front end's configuration is an Ansible playbook that we refer to, but the main thing is that it depends on an actual server: it requires a server with these properties, and the template defines the Galaxy roles that are needed. Same for the worker nodes: the host is required here, and then there's the description of what a worker node should be.
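A condensed sketch of how such a template hangs together; the node names, the scaling ceiling, and the input are illustrative, not the actual Indigo templates:

    topology_template:
      inputs:
        wn_num:
          type: integer
          default: 1                         # initial number of worker nodes
      node_templates:
        torque_front_end:                    # configured via an Ansible Galaxy role
          type: tosca.nodes.SoftwareComponent
          requirements:
            - host: front_end_server
        torque_wn:
          type: tosca.nodes.SoftwareComponent
          requirements:
            - host: wn_server
        front_end_server:
          type: tosca.nodes.Compute
        wn_server:
          type: tosca.nodes.Compute
          capabilities:
            scalable:                        # normative tosca.capabilities.Scalable
              properties:
                min_instances: 1
                max_instances: 100           # illustrative ceiling
                default_instances: { get_input: wn_num }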
This then translates locally to each cloud as needed, and this is where we stop: at defining the infrastructure, which makes it quite easy. There's a lot of work at the infrastructure level to make all these pieces work, but that's what we've been doing with IBM by contributing upstream.

So I'll just show a quick demo, if this works. There we go. This is a very simple example, but it gives a feeling for how it all works; I hope you can see it. It's like the batch example. As I mentioned, there's this Heat Translator tool that takes a TOSCA template and outputs whatever you need. In this case we're saying: take my elastic cluster template, generate HOT, and write it to this output file. Then we copy it over (we have a distributed file system inside), and once we have the HOT template, we can just submit it to Heat. In this case we're submitting to the cloud at CERN, and we're generating exactly what you'd get if you used Heat directly. That's all it took: we generated the HOT and we deployed the stack. The stack looks like this. It's building, so we'll wait a bit. All the resources are complete, so we do a nova list, and we see that in this case we had an initial set of one front-end server and one worker node, and they're both active. There's the IP we noted, and now we can try to access the service.

As I mentioned, this is a batch cluster, so the goal is to define a job and run a set of jobs. Torque is a PBS tool, so pbsnodes lists the available nodes, and we see one node, the one we deployed. Then we define our job. In this case it's a very simple one: a batch script that just echoes a nice sentence. We submit it to the batch cluster with qsub, the submission tool for this system, and we wait here for the status to change to complete. It's still executing... and now it's complete. Once it's complete, just to check what it did, we log into the box; we only had one worker node, so we log into that worker node's IP and check the execution of the job, and it just prints the message we defined at the beginning. So it's a very simple example, but you can see the potential for large deployments. We have other deployments where the exact same TOSCA template is being used at much larger scale: we deploy the same front-end node and then scale the worker nodes across many sites, to tens or hundreds of nodes as required. Right now we're ramping up slowly, so we already have some clusters with a few nodes, and eventually we'll scale to a lot more.

In terms of the Indigo project and the status of TOSCA usage, the first use cases are already covered, and the work we've done upstream with IBM has been very positive. On the TOSCA Parser, UPV in Valencia, and Miguel in particular, have been doing a lot of work, and at CERN we've done the integration with the Heat Translator for the requirements we had. One nice thing is that besides the Heat Translator CLI, there's also integration with the common openstack client tool.
So if you prefer that, the integration is there and you can just deploy using a TOSCA template. And there's integration at the API level as well, which in our case is very important, because we need to integrate with existing systems, and having a better interface than the CLI tool is good for that. Coming next, we'll expand this deployment to more and more of the sites that are part of the project, then start using TOSCA for end-user applications as well (this example was for infrastructure; we want the users' specific applications defined in the same templates), and we'll continue contributing upstream. And with that, I'll give it back to Sahdev for some of the plans.

Just real quick, so I make sure Matt has time to talk. On the Newton plan: we'll continue releasing point releases. As a matter of fact, next month we should have 0.5 out, and then at least two more releases throughout the Newton cycle. As we keep working with CERN and the other Indigo members, more needs will come up, and we're going to work on them. We'll continue working with the Tacker project on some of the more complex NFV use cases, and similarly with the Open Platform for NFV team. We have completed most of the work against the 1.0 spec, but there are still a few items to be worked on, like container support, some of the intrinsic functions, et cetera. And we're going to continue growing the ecosystem. For those last couple of things we already have some work in progress: based on stakeholder feedback, we're going to let users pass parameters as an input file, and lastly, a Horizon plugin to enable translation and deployment from the dashboard itself, to make it more user friendly. Thanks, guys. I'll hand it over to Matt for the spec.

So I'm here to tell you what's next and what cool things you should be looking for in TOSCA, the standard. First, metadata. We added metadata a while ago, but just to service templates, so you could track special values through your templates and through the system at the service-template level. Now we've added it to all top-level entities, so anytime you have a resource or a connection or anything like that, you can attach metadata to track those things. It's interesting to note (I'll talk about this later) that we also have a metadata standard we've adopted in OASIS called TAG, and one of the big things we're doing for conformance is using metadata to tag service templates for test cases. So you can embed metadata about which part of the spec you're testing, or, if you implement your own schema for TOSCA, you can embed a test standard if you want to.

Group types: as I mentioned, we're advancing the group type. It's beginning to look more and more like a node in TOSCA; it currently has everything but artifacts. So you can have a node abstraction that manages other resources as a group, and we're evolving that toward the cluster type you see two bullets down. Policy definitions are a big thing; I'll show that on the next slide. Policy is very important: the Senlin project, if you follow that in OpenStack, has blueprints open to use TOSCA for policy. They need some people to do the work; the blueprints are written and they're all approved. So it would be nice to have a standard like TOSCA used instead of an ad hoc format for policy in Senlin.
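Coming back to metadata for a second, a minimal sketch of what per-entity metadata might look like; the tag keys here are made up for illustration, not defined by the spec:

    topology_template:
      node_templates:
        web_server:
          type: tosca.nodes.Compute
          metadata:
            owner: indigo-testbed             # arbitrary string tag (illustrative)
            conformance.testcase: compute-01  # e.g., tagging for a test suite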
The cluster type I mentioned is about 75% complete. We've agreed on how to implement a standard representation of clusters for homogeneous deployments, which again matches Senlin, and we're debating how to handle heterogeneous cases: different types of composite applications that might scale or be clustered differently and have different considerations. We'll basically have node types able to describe load balancing, routing, and the basic functions you expect from any cluster, whether it's a Nova cluster, a bare-metal cluster, a Docker cluster, or, going forward, even a serverless cluster.

Workflows: this is a very interesting area. One of the big things we recognized about TOSCA is that it's typically all or nothing. With TOSCA or Heat, going declarative basically means: forget your imperative code, leave it behind, or squeeze it into fixed hooks, as in, I can only install with this script, I can only destroy with this script. Well, we're very close to finishing workflows, where you can say: I want to take over certain state changes in TOSCA. If I want to take over installation or configuration, I can declare a workflow where my Ansible or other scripts take over, run to completion, do the configuration and a bunch of other stuff, and then hand back and say where to re-enter the declarative workflow. It's a very cool thing you'll see more of in TOSCA going forward. Again, we want people to preserve the value they have in their existing scripting languages, where they have complex things they don't want to rewrite declaratively. This is one of the major goals for TOSCA.

Next slide: policy. I want to show this because we actually have a model. It's important not just to have policies in name, these abstract things; we have a concrete representation of policies based on an event-condition-action model. Basically, we're focusing on operational policies. There's a small number of TOSCA event types that can map to your target system; in OpenStack they can map to a Monasca event or a Ceilometer event of some type, an alarm. You have conditions that are evaluated against the state of the system and can fire an action, and those actions can be script actions or call-outs to webhooks or services. That's the concept, and again, if you look at Senlin, it matches well with Senlin's concepts. And if anybody's looking at serverless technologies such as AWS Lambda, or OpenWhisk, a new thing IBM announced that we're working on in the open community, those all follow an event-condition-action triggering model. Very cool stuff.

Going forward, you can see we're very busy in many areas. I'm very excited about interoperability, because everyone asks me: when you try to use TOSCA, how do I know, since it's a standard, whether somebody implements it properly? In OpenStack we make sure we follow the standard, and someone else may want to do TOSCA too. Well, we're going to create conformance test suites. We're creating a GitHub repo (we actually have one already), we're populating it with test cases, and we're going to announce it in May. Every part of the spec will have test-case coverage in the repo, and people will be able to contribute their own test cases as we go. So you'll be able to prove to yourself, if somebody comes along with a TOSCA tool, whether they're conformant, whether it's a parser or whatever.
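To make that policy model concrete before moving on, a rough sketch of the event-condition-action shape; the keynames here paraphrase the policy grammar and should be treated as assumptions, not normative syntax:

    policies:
      - scale_out_on_cpu:
          type: tosca.policies.Scaling        # normative abstract policy type
          targets: [ wn_server ]
          triggers:
            cpu_high:
              event_type: utilization.cpu     # assumed event name; would map to,
                                              # e.g., a Monasca or Ceilometer alarm
              condition:
                constraint: { greater_than: 80 }   # assumed condition shape
              action:
                add_worker:                   # assumed call-out to a script/webhook
                  implementation: playbooks/add_worker.yml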
Network functions were mentioned by Sahdev as well. The Tacker project is using TOSCA, and the entire ETSI community around network function virtualization needed a standard; TOSCA seems to be that standard. We can carry configuration data from YANG and other places, and we actually do that in Tacker. We're about ready to complete the final version of the NFV profile for TOSCA 1.0. It covers things like composition: how you define network functions, and network services from virtual network functions, which in turn can be composed of even smaller, more granular units. And it has forwarding paths and forwarding graphs, so you can actually describe your network flows with TOSCA. Very cool stuff.

Instance model: we have a work group talking about how you do management. After you have something running, how do you introspect it and get at the running model? The input is a template, but after it's been running for a period of time, with policies, scaling, failing over, doing whatever it's doing, you want to capture that state. Maybe you want to get the ID of something and manage it or introspect it. These are things we're doing, and we're going to define an API for instance-model manipulation.

Monitoring: there's ongoing work in the monitoring work group. You'll see some of it used by the clustering work and by the policy work to trigger events and evaluate rules that fire alarms if you so choose. We're taking a very high-level stance, a green/yellow/red approach: you're in the green, or you're in the yellow or the red and you need to pay attention. You can derive your own granularities of events if you want to, but from a portability standpoint we're defining a very high-level event monitoring type, a very flexible concept in TOSCA.

So those are the things we're doing, and I just want to let everyone know that the TOSCA 1.0 spec is done. We had our 60-day review in December, we had comments and we answered them, and we finished our 15-day review in March. The final standard is out there; it's just been published. We're going to take it to final OASIS standardization with a full OASIS-wide company vote, and hopefully that'll be done by the summer. So no matter where you're using TOSCA, whether in OpenStack or other places, you can write tooling against the standard. It's not going to change, we're going to track errata, and you can have complete faith that a group of companies is going to maintain that standard and not yank or change things out from underneath you. So that's all.