Well, we'll get started. Welcome to our discussion on upgrading. My name is Charles Bitter. I work for Comcast, and they've asked me to moderate this panel discussion. So we'll jump right in and do a quick introduction of each of our panel members. We'll start right here.

Hello. My name is Siddharth. I work for VMware, actually on the team implementing the upgrade for VIO, so I have a different perspective.

My name is Mark Voelker. I'm the OpenStack architect at VMware.

Hi. My name is Basil Baby. I'm with cloud engineering at Comcast.

Hello there, folks. My name is Stephen Dake. I work for Cisco Systems, where I'm bringing containers to OpenStack. I'm also the Kolla PTL, worked on Magnum briefly for about two cycles, and started the Heat project in the past.

Hi there. My name is Tom Howley. I work at Hewlett Packard Enterprise on the lifecycle management team, so that's primarily involved with the deployment and upgrade of OpenStack.

Hello. My name is Michal Jastrzebski. I work at Intel, I'm part of the Kolla core team, and I wrote most of the Kolla upgrade logic.

And I'm Jan Grant, Hewlett Packard Enterprise, also working on the HLM team, dealing with deployments and upgrades.

So that's our panel. Before we start, we'd like to get an idea about our audience, even though you're kind of dispersed. Who in the audience, and keep in mind we can barely see, actually operates an OpenStack cloud? And who has ever upgraded OpenStack? Who hated it? All right. Who wants to? Cool. All right. So my next question is for each panel member: take about two minutes and explain what kind of cloud you upgraded, what version you went from and to, and some basic considerations around that.

All right. So, well, essentially we have upgraded OpenStack clouds for some of our customers who have deployed it in production. We basically upgraded from Icehouse, in version one of our product, to Kilo, jumping a release. In terms of the challenge, for us one of the important parts was that our customers were in production, so we had to plan it out: we had to clearly lay out what additional requirements they needed before they even started the upgrade, because we wanted to make sure they succeeded the very first time they tried it and didn't have to go back and put out fires along the way. That was very important for us. And I think so far, with our product and our customers' experiences, it has been really very good. In fact, some of our customers actually went ahead and were brave enough to do it themselves without even telling us, and they were quite successful with that, which is actually very positive for us.

Yeah. So we did our upgrade from Havana to Icehouse. It was a full production environment, multi-region. On the challenges side, since it's production with a lot of customers, the workloads were really different; we have boxes connected directly to the VMs. So the planning part took some good time: plan the upgrade, see it in action, port some of the data we were seeing in production into the lab, do the same kind of deployment methodology in the lab, then upgrade and see how it works, collect all the data and what breaks, iterate through that, and do the upgrade again many times.
So the main challenge was that it's production, getting a lot of production traffic from diverse tenants. That was the main thing.

Yeah. Thank you. So I have personally upgraded from Liberty Kolla to Mitaka Kolla, with running VMs, without any interruption of the system. Kolla is a new project, so we're not upgrading from Icehouse or Kilo or older distributions of OpenStack. I would say the key point or message I would make to people trying to do upgrades is this: what often happens is folks deploy a system. They get five or ten folks together, they deploy an OpenStack system, and they say, oh, victory, we're all set with OpenStack. But they make a mess during the process in a lot of cases, and that mess is very difficult to clean up in an upgrade. So one of the key factors of Kolla, one of the reasons it makes its upgrades so fantastically good, is that it does not make a mess of the system. There's no mess to upgrade. And that's really the key value of upgrades in Kolla.

All right. So there are two parts to my experience with upgrades. Originally, we were running a public cloud. The first region we had deployed was Diablo, and we did an upgrade by jumping much later on to Grizzly in a separate region. Running that public cloud, we upgraded from Grizzly to Havana and through to Icehouse for most of the services, I believe. I think that fed into our product; we got a lot of experience from the problems we had, because it was early days for upgrades. And I completely agree with what Stephen said: you need to deal with and think about upgrade from the beginning, when you're doing your initial deployment. More recently, as we've moved to actually providing a product, we have an upcoming release which is basically going to be an upgrade from Kilo to Liberty, and we're currently working on Liberty to Mitaka. So those are the two aspects to our experience. One is running a public cloud, doing it possibly more manually and running into lots of problems with conflicting dependencies between packages and the various services. And I think we've dealt with some of those issues in the more recent upgrade mechanisms that we've implemented for the product.

So I've upgraded OpenStack since Havana, actually, on multiple occasions. I'm more a developer than an operator, so I was more playing with upgrades. All the experience I gathered since Havana became the base of the Kolla automatic upgrade script. So in Kolla, all that experience was gathered and brought to automated upgrades from Liberty to Mitaka, and from Liberty to Liberty security patches as well.

Yeah, obviously I work with Tom, so let me just add to that. With the initial upgrade in the public cloud, we had the fantastic luxury of opening up a new data center, which obviously, if you can manage that, would be great, and then playing a shell game with the original region as we got our cohorts stood up. That obviously is not on the cards for most people. And I echo what Stephen said: the more you can containerize the software you're running, in whatever fashion, the better. It puts you in a much better position.

So let's bounce off that. Let's talk about containerization and automation. I know that the Kolla project is all about containerization. And I'm getting the theme that you don't upgrade without automation, right?
So can we get a quick snapshot, just so everyone gets an idea of the automation tool sets that you used?

So let me start with this. The most important thing about why containerization matters from an upgrade perspective is the separation of dependencies. For example, the big issue with upgrades is: I want to upgrade my Nova, but Nova uses Oslo in version x.y, and I want my Neutron to be upgraded a day after, once we confirm that Nova is actually working. But Neutron also uses that older version of Oslo, which means I cannot run my Nova and Neutron at the same time in the middle of an upgrade. In essence, that turns into needing to upgrade every single service at the same time, which is disruptive, which is hard, and which is extremely hard if not impossible to roll back if something goes wrong. In Kolla, and really in any form of containerization, each service has its own set of dependencies, which means we can have multiple sets of dependencies coexisting on the same host. And that, in essence, gives you the ability to upgrade just Nova, for example, which means the upgrades are more atomic and less volatile. They're actually quicker, too, when you consider that you don't need to deal with the conflicts in the process.

The other nice thing about Kolla is that it is image-based. The specific thing about Docker is that when you have pre-built images, you can test them out prior to the deployment, so you can make sure that your upgrade works before you actually upgrade. And you don't download any packages in the meantime, so if you do the upgrade and the repository with your new version of packages dies, it will not affect the pre-built images, which is yet another way to make it less volatile.

So with Kolla, we have multiple ways to deploy, but our main one is Ansible. And with Ansible we have a playbook for upgrades, which assumes that you deployed Kolla with Ansible; you can just run the upgrade playbook. And because of the atomicity of the upgrade, because we can do one service at a time, we could do it automatically.

Yeah, we run into some of those same constraints, and we have a lot of the same philosophy, I think, about containing things and making sure that they work before we actually flip the switch and put them into production. We tend to think of upgrades as maybe not atomic, but more transactional. In most cases, when we're doing upgrades for customers, they have a set of things that they want to get upgraded. If that's one thing, that's one thing, right? It could also be twelve things. Maybe they want to upgrade every service from Kilo to Mitaka. So we tend to think of it as: let's test that entire system all together and make sure the whole system works before we turn the switch and put it into production. And because of that, we do a blue-green upgrade pattern, where we can actually get a control plane that we know works together before we flip the switch and put it into production. So we have a lot of the same philosophy there.

Well, it's essentially a staging environment. And however you do it, the best way you could do it is to upgrade one service, test it out, upgrade the second service, test it out, up to the point of upgrading everything. So you basically replicate what you will do normally on production.
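To make the per-service, image-based swap a little more concrete, here is a minimal sketch of the idea using the Docker SDK for Python. It is purely illustrative: Kolla actually drives this through its Ansible playbooks, and the container name, registry, tag, and bind mounts below are placeholders rather than Kolla's real ones.

```python
import docker
from docker.errors import NotFound

client = docker.from_env()

def upgrade_service(name, image, tag, volumes):
    """Swap one service's container for a new pre-built image, leaving every
    other service (and its own set of dependencies) untouched."""
    new_image = client.images.pull(image, tag=tag)  # staged before anything stops
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except NotFound:
        pass  # nothing to replace on a first deployment
    client.containers.run(
        new_image.id,
        name=name,
        detach=True,
        network_mode="host",
        restart_policy={"Name": "always"},
        volumes=volumes,
    )

# Hypothetical example: upgrade nova-api on this host, verify it, and only
# then move on to the next service.
upgrade_service(
    "nova_api",
    "registry.example.com/nova-api",
    "mitaka",
    {"/etc/kolla/nova-api": {"bind": "/etc/nova", "mode": "ro"}},
)
```

Rolling one service back is then just the same swap pointed at the previous tag, which is a large part of what makes the per-service approach easy to reason about.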
And thanks to the image-based deployment, you are guaranteed to have every single thing in production the same way you tested it on staging, unless you broke something yourself. However, I mean, I agree with you: you need to test the whole procedure to be sure.

Just to add, we completely agree with the need for isolation of dependencies. We basically achieve that using Python virtualenvs, so it's not quite going the full distance of containers, but it definitely solved a lot of the issues we had found in the public cloud experience in terms of conflicting dependencies between different services.

So as you said, it doesn't go to the length of the container, because there are C libraries. That is a significantly smaller problem than the Python libraries; there are hundreds of Python libraries in the requirements right now and only several C libraries. But to have the full separation, you need to go all the way, and it requires more from the deployer to go all the way from day one.

So we see a pattern, right? We need automation and we need isolation. We can isolate at the Python library level, at the Docker container level, at an entire image level. That isolation, I think, is super, super important for rollback. But how do you know to roll back? So tell me about how you test your upgrade. Well, I'll just go around and start right there. You have a mic.

OK. So we actually test upgrade both in CI and QA. We're in a slightly different position in that we're now preparing a product. Whereas with the public cloud we were looking at a very regular cadence of updates with small deltas, now we're looking at maybe one or two releases per cycle. So we have some luxury of being able to test and test and test again, and we've got a good idea of what's deployed currently. We've got quite a bit of flexibility in our deployable architectures. Fortunately, we have amazing QA people as well, and that really can't be stressed enough: they come back to us with all sorts of things and say, well, what about if such and such happens? We say, why on earth would anyone ever do that? But it's something that needs fixing and sorting out. The more flexibility you've got in your potential deployments, the more you have to cover; obviously, if you have one particular virtualization technology in mind, or one particular kind of backend storage for Cinder on your own site, you can scope your testing. But I think the real thing is, before you enter an upgrade live, you want to know damn well that it has a very high probability of passing. Whether or not you do that starting small with a virtualized setup, we use much the same tooling for our developer environment, using a Vagrant-based system, which is a reasonable-fidelity model. So our developers are using the same tooling right from the start as we then use on bare metal and so on. So we do it again and again and again to really find out where the weak points are, and those are the things you've got to focus on.

So, saying that we've finished, that the upgrade is actually done, is not a trivial thing to say, because we may have forgotten about upgrading this one particular package on one of the 100 nodes. And that's the reason we need automation. With Docker, it gets a bit easier, because to validate that we actually upgraded: we had this set of images on day one, and these are the images valid for Liberty; then in Mitaka, it's this set of images, and on every single node these images, the Mitaka images, are up and running.
And everything was running in the meantime. And if everything is obviously running after the upgrade, you're pretty much done with the upgrade. So the state of your cluster is very easy to figure out, which is not a trivial matter. As for how I tested upgrade, well, I've just done it probably dozens of times by now: deploy Liberty, deploy VMs, ping VMs, run the upgrade, see if everything's working, tear down, rinse and repeat.

Just to add to what Jan said. As he said, we have a Vagrant environment for developers to test deployment and upgrade, and we use that same tooling for CI. I think when we first introduced upgrade, and we did this early on, so with our first Ansible-based release we actually had upgrade pretty much implemented even though you don't really need it until the next release, it was a bit of a challenge introducing that across the various services and getting your first upgrade CI job to work. So we have a CI job that runs on many commits across the various repos. As a result, we're testing upgrade thousands of times by the time you get to release. Now, as Jan said, because we have flexibility in how you can define your cloud in our product, we can only test a certain set of patterns and try to be smart. And indeed, one of the issues you get with this is that it's great putting stuff into CI, but you only have a limited set of physical resources to test with, so you have to be smart about what you want to test. I think we're looking within ourselves to streamline some of the CI so we can actually test more intelligently across the various different services: you know, what are the issues with upgrading Cinder? And to be honest, the kind of problems we found in CI are not around upgrading the OpenStack services; it's actually around upgrading the infrastructure services, the Keepalived and HAProxy pieces and so on. You find some very interesting intermittent issues when you run upgrade thousands of times, or even hundreds. What's been very useful, together with an ELK stack in the background, is building up stats to figure out, OK, this issue actually looks like it's going to be important for us to address, as in it might happen on a customer site, because you can't address every single issue that CI throws up. So CI has been very useful as a kind of initial testing ground, and then obviously QA is invaluable after that for full testing on bare metal.

So I think one thing that's different about Kolla versus other deployment tools is that Kolla is meant to be an upstream for other downstream products. What that means is people take Kolla, maybe make some changes to it, and ship it as their product. As they do this, they qualify the upgrades. That said, we definitely want to qualify our own upgrades before that happens. The way we're doing this in the next cycle is we're adding co-gating jobs: for Glance, and we're going to add a co-gating job for Nova and a co-gating job for Neutron. They test the upgrade from one version of Glance to the next, or one version of Nova to the next, or one version of Neutron to the next, as well as a gating job for all of the deployments, all of the packages we deploy. So we will have those co-gating jobs. The co-gating jobs will detect failures in those services, and our regular gating job will detect failures in our infrastructure around upgrades.
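As a rough illustration of the kind of check such a gate job (or an operator) might run after each upgrade step, here is a small smoke test written against openstacksdk. The cloud name is an assumed clouds.yaml entry and the checks are generic; the actual Kolla and HPE gates use their own tooling, so treat this as a sketch of the idea rather than a description of what the panelists run.

```python
import sys
import openstack

def smoke_test(cloud_name="mycloud"):
    """Confirm the upgraded control plane answers and pre-existing VMs survived."""
    conn = openstack.connect(cloud=cloud_name)

    # Each listing exercises a different API service through the upgraded control plane.
    images = list(conn.image.images())
    networks = list(conn.network.networks())
    servers = list(conn.compute.servers())

    print("glance: %d images, neutron: %d networks, nova: %d servers"
          % (len(images), len(networks), len(servers)))

    # The data plane should be untouched: VMs that were healthy before the
    # upgrade should still be healthy afterwards.
    broken = [s.name for s in servers if s.status not in ("ACTIVE", "SHUTOFF")]
    if broken:
        print("servers in an unexpected state after upgrade:", broken)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())
```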
But our upgrades are so simple and so straightforward (they were hard to implement, but they're straightforward to look at now that they're done) that I don't expect a lot of problems. That said, there could be problems. And I think it's really up to the downstreams that use Kolla to make a decision as to how they want to test and qualify their products based upon Kolla.

Yeah, on the Comcast side, part of our infrastructure was built some time back. Per the survey released this cycle, I think 50% of users are still on Icehouse or behind, I believe. So we had the same thing: even though Kolla is good, all those good things came after, so we are stuck on the older versions for some of the infrastructure. For the testing part, the question you asked, we start with Vagrant, like I think all of you are doing. We start with Vagrant, and in our case we are using Puppet on this Icehouse deployment, and we orchestrate the Puppet runs through Ansible, because we already have the manifests and we don't want to rewrite them or make a big change there. We used Ansible to orchestrate the runs because Puppet is not that good at timing, at how to time the runs. And on the production side, we are using Tempest tests: after the controller upgrade, we run Tempest tests to make sure that the existing clients are good and we can create VMs or delete VMs, that all those operations are fully functional and the control plane is really good. Then we go to the computes, one at a time or in batches, based on the region and the kind of workload we are putting into the region, and we go in batches to do the computes. So Tempest was really helping us. We wrote some custom Tempest tests to fit our needs, and the rest of them we are using from the community itself, and that was really helping us.

So, kind of interestingly, being in the product space, we don't actually always get to define the acceptance criteria for an upgrade, because all of our customers wind up with different things that they consider to be working, and not all of them are even directly in OpenStack's control. For example, we might have customers who actually want to swap out hardware as part of an upgrade to a new version of OpenStack: add new storage, add new hypervisors, add whatever else. So in our case, in addition to some of the strategies that we've already talked about, one of the things we wanted to do was have an open window where the customers could actually go in and do the things that they consider to be tests for their particular use case, which again is one of the reasons we gravitated toward a blue-green upgrade. Once we stand up the new control plane, that gives them a chance to go in and break it however they want to, and either say, yeah, it's good, now actually flip over to it in production, or not. And that kind of dovetails into your original point about rollbacks being important. Because we don't always know those acceptance criteria ahead of time, and because they're so different from customer to customer, it's really important for us to be able to very quickly flip a switch, declare an upgrade failed, and get back to working very quickly. With a blue-green upgrade, we can actually do that, because we still have the old control plane.
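To make the "flip the switch" idea concrete: in a blue-green setup the cutover is typically just repointing the load balancer's backends from the old control plane to the new one. The sketch below is illustrative only; the panelists do not describe their actual tooling, and the config path, addresses, and reload command are all assumptions.

```python
import subprocess

HAPROXY_CFG = "/etc/haproxy/conf.d/openstack-api.cfg"  # assumed path

BACKEND_TEMPLATE = """\
backend keystone_api
    balance roundrobin
{servers}"""

CONTROL_PLANES = {
    # Hypothetical addresses for the two control planes.
    "blue":  ["10.0.1.10:5000", "10.0.1.11:5000", "10.0.1.12:5000"],
    "green": ["10.0.2.10:5000", "10.0.2.11:5000", "10.0.2.12:5000"],
}

def flip_to(color):
    """Point the API backend at one control plane and reload HAProxy.
    Rolling back after a failed upgrade is the same call with the other color."""
    servers = "".join(
        "    server %s-%d %s check\n" % (color, i, addr)
        for i, addr in enumerate(CONTROL_PLANES[color])
    )
    with open(HAPROXY_CFG, "w") as cfg:
        cfg.write(BACKEND_TEMPLATE.format(servers=servers))
    # A reload lets existing connections drain instead of dropping them.
    subprocess.run(["systemctl", "reload", "haproxy"], check=True)

flip_to("green")  # cut over; flip_to("blue") rolls back
```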
And the really nice part about it is that even after we do that flip back, we've still got the new one that they considered failed. It's hanging around, it's not being used in production, it's been cut off from the outside load balancers, but now they can actually go in and figure out why it failed, because they've still got it there to do forensic analysis on. So it failed like a boss, right? Right. Awesome.

So I've been leading this discussion for a while, and now it's your turn. If you have any questions, we'd ask that you use the microphones so we can record them.

Hi, my name is Aniket. I'm from Box. We're rolling out OpenStack Liberty at massive scale at Box. And what worries me is when I hear a lot of OpenStack users talking about still being on Icehouse or being on Juno, or even a few being on Kilo. Why do you think it is such a problem for customers to adopt near-current releases? Any thoughts?

So one reason, ironically, will be because upgrades are hard; they used to be hard and used to be horrible a couple of releases ago. Horrible to the point that you would lose your product, you would lose the connectivity to your workload during an upgrade. The early upgrades were pretty much setting up a new environment and telling your people to migrate on their own, and that's a very costly procedure, if not impossible. Later upgrades were sort of about live-migrating the virtual machines, and that may not be possible either. It was only in the last two releases that you could do the upgrade without significant, or any, effect at all on your running workloads. So cost of upgrade would be my best bet.

Yeah, I guess my perspective is a little bit different. For many of our customers, at least for a year or two now, upgrade has been a problem that there have been tractable strategies for. There's a lot of lore in OpenStack about how bad upgrades used to be, and I think you're right, it's improved dramatically in the last several releases. In our case, what we actually hear from a lot of customers is just: it's working really well for us, and we don't necessarily need the new stuff that's coming up, so we just don't have a good reason to change. So it's kind of "if it ain't broke, don't fix it more than, say, once a year."

Got it. And I have a follow-up question for you about the blue-green strategy you mentioned. That strategy essentially involves fencing off one cloud, right?

Not the entire cloud, just the control plane. In our case, that's a set of virtual machines. So it's half a dozen or so in some deployments, and up to about 13 or 14 in others.

I see, okay. So that way you minimize the infrastructure overhead of doing that.

Yeah, so we have a pretty decent decoupling of the data plane and the control plane, and the only thing that we need to do the duplication for is the control plane side. In most cases, that's a trivial amount of resources. I mean, if you're talking about a dozen VMs for a cloud, that's not really very much, in most cases, as it turns out. The biggest constraint we run into with customers is generally that they want a couple of extra IP addresses, and as long as they plan for that upfront, then it's fine.

All right, thank you.

Hi, thanks. I was wondering about database versioning and the upgrading of each service and its database version. What does that look like, and are there any pitfalls?

Yeah, okay, we can talk about that.
So, in terms of database versioning, are you talking about the schema migrations that happen between versions of OpenStack? Yeah. Okay. So in the blue-green world, that generally defines what our state-sync period is going to be. Basically, the simplest way to do that in a blue-green upgrade is that at some point you say, I'm ready to cut over to the new control plane. That's the point at which I maybe take a dump of one database, load it into the other, and then run all the migrations, right? Or even just do that in place if you've got a backup or something. That is generally the period where you don't want things to change. It is possible, and I've seen it done in other distributed systems, to actually have data coming into both the new and the old schemas. I don't really recommend it for a system like OpenStack, partly because we have a whole lot of different projects that are doing their own things independently of each other. It just becomes more complex than you would want it to be.

How might you stop the changes, the write requests coming to the database? Sorry, can you speak up a little bit? How would you stop the write requests from coming to the database? You said you'd stop things from changing. Oh, how do we stop the old instance? Yeah, how do you stop new requests from coming in?

Yeah, so generally we advise people to basically cut it off with a load balancer. For most everything, the public interface to OpenStack is the load balancer, and in our case all our internal requests, like Nova talking to Neutron, also go through an HAProxy pair. So we can generally cut it off there.

So let me add to that. Lately in OpenStack, again for about two or three releases, there's this idea of lockless upgrades, which Nova implemented and other services are about to implement. The idea is that you upgrade your database before you even touch the code, which means the new database is supposed to keep working with the old code, and obviously with the new code. That being said, I don't think any OpenStack service supports a rollback of the database during an upgrade, which means that if something goes wrong you're stuck, so I really suggest you snapshot your database before doing anything with it. In the ideal situation, which is actually pretty common, you just do the database migration prior to even touching the code; the old code keeps running, and once the migration is finished, you upgrade your code. And if that's not the case, then you need a maintenance window: turn off the APIs, stop people from making any changes, upgrade the database, upgrade the code, start the APIs.
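A minimal sketch of that advice (snapshot first, then run the schema migration while the old code is still serving) might look like the following. The database host, backup path, and credentials handling are assumptions, and only Nova's migration is shown; the panelists do not describe any specific script.

```python
import datetime
import subprocess

DB_HOST = "db.example.com"              # placeholder
BACKUP_DIR = "/var/backups/openstack"   # placeholder

def snapshot_databases():
    """Take a consistent dump so a failed migration can be undone by restoring it.
    Assumes MySQL credentials are available via ~/.my.cnf."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%S")
    dump_file = "%s/pre-upgrade-%s.sql" % (BACKUP_DIR, stamp)
    with open(dump_file, "wb") as out:
        subprocess.run(
            ["mysqldump", "--single-transaction", "--all-databases", "-h", DB_HOST],
            stdout=out, check=True)
    return dump_file

def migrate_nova():
    """Apply Nova's schema migration; with online migrations the old code can
    keep serving while this runs, and the code itself is upgraded afterwards."""
    subprocess.run(["nova-manage", "db", "sync"], check=True)

if __name__ == "__main__":
    print("snapshot written to", snapshot_databases())
    migrate_nova()
```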
Yeah, and that's a good point about the APIs. One of the things we've actually looked at doing, though we haven't personally implemented it yet, is that depending on how long that state transfer takes, you may be able to just hang the connections at the load balancer rather than cutting them off and returning error codes. Essentially then you've got requests that don't ever actually fail; they just take a little longer. There's a balance on that, though, that I'm not a big fan of. I mean, if you have a really massive amount of data to move and you're skipping 16 OpenStack versions and really want a whole bunch of migrations to run, that could be a really long period, and most of your clients would probably rather you just fail. So, but it's a thing to consider.

I think we have a kind of similar strategy around performing the migration before we switch over to the new service. And to Michal's point, until the support for old code working against a new database is implemented across all of the services, there's no way around actually stopping your services at some point. There's nothing we can do until that common mechanism, the one Nova has, is adopted across all the services, and I'm not sure what state that is in at the moment.

So the approach that Nova took is actually more policy than technical. If someone is interested, Dan Smith from the Nova core team wrote a blog post about this particular thing. It's about just not making a migration that will lock down your database; and if you actually need to delete something, you add one release of deprecation, which ends up with you not using a column, but that's better than stopping your service.

So obviously it depends on what release you're starting from and where you're upgrading to, but could some of you talk about the amount of time that you plan for an upgrade? Like, on your teams, does it take one month, or a couple of months, or a week? And could you discuss that process a little bit? The timeline.

So let me brag a little bit, because we tested an upgrade. Kolla makes things significantly quicker, and granted we didn't have a heavy database load, we had a pretty much empty database, but on a 60-or-so-node physical running cloud, it took 12 minutes to do the full upgrade. It will be more if you have more data, obviously, but basically, unless you are really, really heavy on the database, it should be less than two or three hours of total process. Image-based deployment really makes this faster.

On the non-Kolla world, I can give you some perspective. If it is not in the container world, things will be a little more difficult; the planning, you need more planning. That's what we faced in our upgrade. The planning and execution in total took a month plus. It was multi-region, and the workloads within the cloud were of different kinds, so we had to coordinate with the users, and because it's multi-region we had to do some sort of coordination across these upgrades. So it was a month plus.

Yeah, I'd just like to straighten something out. So Michal talked about the 25 minutes for deploy and the 12 minutes for upgrade with the database. I really don't think it would take two or three hours to upgrade a really heavy database, because the migrations are relatively quick, but it might take 45 minutes. The thing is, we don't have to do a lot of planning with our upgrades, because our upgrades are systematic and deterministic, idempotent, declarative; I can go on and on about how, technically, they basically are always repeatable and work the same way every single time. Because of that, there's no planning that's really necessary in a container world, at least with Kolla; maybe with other container deployment tools there is, but definitely not with Kolla.
Yeah, and to kind of tag onto that as well: when you think about how long it takes to do an upgrade, I would submit that in many cases what you actually want to think about is two things. One is how long it takes to do the full upgrade. The other is how much of that time is actually outage, because the two are not anywhere near the same. So when we're doing a blue-green upgrade, the time that it takes to stand up the green control plane is not outage time. Customers are still up and running on their old control plane; they can go test that new control plane. That's not outage time, because they're still running on the old one. It's only when we do that state transfer that there's actually any possibility of outage time. And since, in our case, we can't actually determine all the tests that they may want to run, that works out very well for us, because we can still bound the actual downtime, and customers can take as much or as little time as they want.

In terms of planning for upgrade, I think a useful comparison, similar to what's been said about Kolla, is that we have a standard mechanism that we use for upgrade that doesn't essentially change between releases. And this upgrade mechanism is not just for OpenStack release upgrades; it's also for interim security patches, minor updates, that kind of thing. I just happened to be talking to a developer who had also worked on the public cloud back in the day, and he reckoned it was probably a couple of months of planning, well, planning and execution, to upgrade a service he was interested in in the public cloud. So it's been a radical change. Once we had a mechanism that we'd worked out for the first upgrade, it's literally just changing the input data, in theory. Having said that, we will run into issues, but those are just issues that you need to solve at that point, whether or not they've been introduced by that release; the mechanism is essentially the same as the one we had before.

So I will add an additional caveat to this downtime. There are two types of downtime. One is for the control plane; the other is for the data plane, which is the actual VMs. As of now, for Liberty to Mitaka, we haven't seen any downtime caused to the VMs themselves, which means the VMs keep running; they work, the network is there during the whole process. As for the control plane downtime, some of the services, like Nova, are actually rolling-upgradable, which means there is no downtime during an upgrade for Nova. There are services that do cause downtime, mostly because of incompatibility between schemas. However, with Kolla it's pretty much determined by the length of the database migration; apart from that, it's minutes on top.

So if I... I'll give you 30 seconds, then.

The basic rule of thumb here is you want to be able to reason about the process. If you're installing a whole bunch of packages on the system, that is an extremely complex set of potential interactions. If you're talking about components of services, and you can reason about those in isolation, you're in a much better position to work out what's going on, to figure out what's going on, to plan, to understand, because it's all about comprehension, really. When you're in the hot seat, you've got to be able to know rapidly: is this good, is this bad? And decomposing things into the right level of granularity
is what makes all the difference.

Brilliant. So we're out of time; apologies if you still have questions. We can probably take them over here. And thanks for coming. Enjoy your summit.