Okay, thank you for having me. My name is Jamie Poole, and I'm the compute platform engineering manager at G-Research. I'm here today to talk to you about Armada, which is an application we've created at GR to enable running batch jobs at massive scale on Kubernetes. This is something we're currently using in production, running millions of jobs a day across tens of thousands of nodes. I'm going to talk a little bit about the application itself, the motivation for it and how we use it, its architecture, some lessons learned, some challenges we've had and also some successes along the way, a little bit about the roadmap, and then explain how you can use it.

First I'm going to quickly cover G-Research: who we are and what we do. G-Research is a fintech company based in London, England. That's our shiny new office in central London, which has just opened. We employ teams of quantitative researchers to look for patterns in noisy real-world datasets, financial data, ultimately to look for and create algorithms that can be deployed as trading strategies and monetized. As a company we've existed for about 20 years, and we've grown quite a lot in that time.

However, on to Armada. So what is Armada? Armada is a multi-Kubernetes-cluster batch scheduler.
I mentioned that we're about 20 years old as a company. When we started, everything was very Windows and .NET Framework based; anyone working in fintech will have experienced that. Over the last five to six years we've migrated heavily towards Linux, because that's where all the latest ML and AI action is, and along with that we've migrated to containerization by default, and Kubernetes became the de facto container orchestration platform in that time. We had a lot of success running our stateless services and other applications on Kubernetes, so we thought to ourselves it would be pretty cool if we could also run all of our batch workloads there.

In terms of what we do, the vast majority of our compute, all on-prem, is actually used for running batch jobs, because typically our researchers want to run some experiments, run some software to crunch some numbers, and then spit out an answer. Historically, doing all this on Windows, we were using HTCondor, and we saw the pivot to Linux and containers as an opportunity to work out whether we could do this on Kubernetes, because we'd had so much success with the services. We figured that if we could have Kubernetes as a common substrate for all of our compute, it would be really advantageous, for all of the reasons we've already heard about today; the same reasons everyone is trying to do this, I suppose.

And that's really where Armada was born. Armada was conceptually an application where we thought: if we could add the missing features on top of Kubernetes, effectively queuing, fair share and scaling, then we'd get all the ecosystem benefits of running on Kubernetes, plus the general benefit of having all of our compute on a common substrate. We started having this conversation back in 2019, I think at KubeCon in Barcelona, and there were other people who were interested, so we figured we'd open-source the project from the start. In fact, it's recently been accepted into the CNCF Sandbox.
So it's now a Sandbox project, which is pretty cool.

Okay, so now I'm going to talk a little bit about how we actually use Armada, and then I'm going to dive into the architecture and what's inside that big middle box. Fundamentally, what we have at GR is a large number of users and applications who want to submit jobs and get some answers. The big box in the middle is Armada, which I'm going to dive into, and it contains tens of thousands of nodes across many tens of clusters; there are some stats in the bottom right you can read. Typically the workflow, as I described, is that a user has a container image, either an existing one or one they've just created, and submits some jobs to Armada. They get scheduled, images get pulled, containers start up, they access lots and lots of data from storage platforms or other application services, crunch some numbers, do some maths, and then write out a result somewhere. Now, this picture here is really one Armada environment. We have many environments within G-Research. When I say environment, I mean something like development, staging and production, and within production we have multiple environments as well, in different data centers; this is what one particular data center might look like.

There are some core concepts which are important for understanding the system. These will be familiar to anyone who's experienced in HPC, probably all of you, but I'll go through them anyway. First we have a job. In the Armada sense, this is a group of related Kubernetes resources, but fundamentally it's mostly a pod spec, and it's the thing a user wants to be created. Then a job set, which is purely a group of related jobs that you want to manage as a unit.
So: submit together, watch progress together, cancel together, and so on. And then a queue, which is a fairly standard queue of jobs, where queues can have a priority relative to each other, and jobs within a queue can have a priority relative to each other as well. It's these two dimensions that we mostly use to implement our fair-share algorithm, which is pretty similar to what you'll find in Condor. We have a simple gRPC API that users and applications use to talk to Armada, and users ultimately subscribe to events through the API to track the progress of their jobs or job sets. So they'll see a job go from queued to pending to running, and then ultimately, hopefully, succeed, or sometimes fail for whatever reason.

Now I'm going to talk a bit about how users access it, before getting into the actual nuts and bolts of how it works. On the left here we've got a bit of YAML, which is the simplest possible Armada job specification. Fundamentally we've got a little bit of Armada metadata at the top, around the queue, the job set name, and the Kubernetes namespace you want it to run in, and then underneath, a pod spec. This is just a raw Kubernetes pod spec: anything that a pod supports can go in here, which means anything you can do with a pod in Kubernetes, you can do through Armada. Most of it is just passed straight through, but certain fields, such as node selectors and tolerations, are used for scheduling decisions.

We've then built a simple CLI called armadactl, similar to how you would use something like kubectl. This is meant for interactive human use, really, and it allows you to submit jobs or sets of jobs and then do things like watch their progress. You get a simple bit of output there, which is probably too small to read, but it moves after you press enter, and you can see the state transitions.
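To make that concrete, here's a sketch of what such a job specification looks like. This is illustrative only: the field names follow Armada's submit format as I understand it, and the queue name, job set name and container are made up, so check the project documentation for the exact schema.

```yaml
# Hedged sketch of an Armada job submission file: a little Armada metadata
# (queue, job set, namespace), then a raw Kubernetes pod spec underneath.
queue: example-queue
jobSetId: example-job-set
jobs:
  - namespace: example
    priority: 0
    podSpec:
      containers:
        - name: main
          image: busybox:latest
          command: ["sh", "-c", "echo crunching numbers"]
          resources:
            requests: { cpu: 1, memory: 1Gi }
            limits: { cpu: 1, memory: 1Gi }
```

You would then submit a file like this with something along the lines of `armadactl submit job.yaml` and follow the job set's state transitions with the watch subcommand, as described above.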
In the bottom right here, we've got a screenshot of our UI. We built a UI for the system, which we call Lookout; it's just a simple React UI. This screenshot is actually of a prototype for a new UI which we're going to be creating in the coming months. The one we have at the moment is similar but a lot more basic, and we want to put a lot of time into investing in this UI, making it much easier for people to use the system and reason about what's going on. It allows you to do all the things you'd expect from a user perspective: see the status of jobs, track progress, find out why they failed, find out why they're not scheduled yet. We also want to flip it around and make it really useful for administrators of the platform, to reason about how many nodes are in the system, how much compute we have, and so forth.

So now I'll get into the actual architecture of the system and how it's built. On this diagram, anything in a light blue box is a Kubernetes cluster, and anything in a light yellow box is a Kubernetes namespace. First I'm going to talk about the left-hand side of this picture. This is what we call the server side of Armada. By convention (it could be anything) we always put things in an armada namespace, and here we have a couple of applications.
We have the API and the UI, which are the applications we've built, and then some other components we've chosen to use as the backing stores for our system. We use a combination of Pulsar and Redis for events, and job specs are stored in queues. We also make heavy use of Prometheus for monitoring, as you'd expect. And then there's the slightly random elephant on the outside there, for the Postgres database which we use for the system. We're actually probably at a point of peak complexity at the moment, I would say, because we've been on a bit of a transition through the architecture. We're probably going to move away from Redis and just use Postgres, but at the moment we have a few of these components mixed around. This is all running on a single Kubernetes cluster, in this case. I've put a note there as well: we use Flatcar as our operating system, but that's not particularly relevant.

If we just had the cluster on the left-hand side, this wouldn't really do anything. It presents our API and UI and would allow users to submit jobs and watch things, but nothing would actually happen. The clusters on the right-hand side are where the actual action happens. These are what we call our executor clusters. You'll notice there are multiple, and I'll come to that in a second. If we just look at one of them, what we basically have is a namespace called armada, and a simple component deployed into it called the executor. This component is responsible for sitting there, looking at how much free resource there is in the cluster, talking back to the server and saying: hey, I've got this much compute, give me some jobs. If there's anything queued, it will lease those jobs and then spawn them in the relevant namespaces. I've actually labelled them as jobs here; however, they are just plain Kubernetes pods, so it ties in quite neatly.
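The executor cycle just described (report spare capacity, lease jobs that fit, spawn plain pods) can be sketched roughly like this. This is a hedged illustration in Python, not Armada's real code, which is a Go service talking gRPC to the server; every function and field name here is invented.

```python
# Illustrative sketch of the executor cycle described in the talk.
# All names are made up for illustration; this is not Armada's API.

def free_resources(nodes):
    """Spare capacity summed across the cluster's batch nodes."""
    return {
        "cpu": sum(n["cpu_total"] - n["cpu_used"] for n in nodes),
        "memory_gib": sum(n["mem_total"] - n["mem_used"] for n in nodes),
    }

def lease_jobs(queue, spare):
    """Server side: lease queued jobs until the reported capacity is used up."""
    leased = []
    for job in list(queue):
        if job["cpu"] <= spare["cpu"] and job["memory_gib"] <= spare["memory_gib"]:
            spare["cpu"] -= job["cpu"]
            spare["memory_gib"] -= job["memory_gib"]
            queue.remove(job)
            leased.append(job)
    return leased

def executor_cycle(server_queue, nodes, create_pod):
    """One pass: report capacity, lease what fits, spawn a plain pod per job."""
    spare = free_resources(nodes)
    for job in lease_jobs(server_queue, spare):
        create_pod(job["namespace"], job["pod_spec"])
```

In the real system the executor also tracks the pod state transitions and reports them back to the server, which is where the queued, pending and running events users see come from.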
That fits with everything else we've been describing this morning and this afternoon. Possibly in the future, as the Job API evolves, we could imagine actually just using that, but we use plain pods because the furniture around the Job API is, at the moment, kind of redundant for us. The executor's job is really just to sit there, lease new jobs, schedule pods onto the cluster, and then track their progress and report status back to the server side, which users then experience through the UI or the API.

Now, these clusters on the right-hand side tend to be quite large. We have a large amount of on-prem compute, and what we've found is that we want to be able to scale to all of our computers, use all of them to run our jobs, and scale effectively indefinitely. We know that you can scale a given Kubernetes cluster to many thousands of nodes (look at what OpenAI have done: fantastic work there, scaling to seven and a half thousand nodes and beyond), but that's actually quite a lot of work. So what we decided to do was devise a system whereby we didn't have to push Kubernetes to its absolute limits: if we can just deploy multiple of these clusters, we can scale horizontally that way, effectively indefinitely. So what we tend to do is run these clusters at up to about a thousand nodes at a time, and then just have multiple of them.

Now I'm going to dive into the actual anatomy of one of these executor clusters and the sorts of design considerations we've made around them, because this matters most for how many jobs we can schedule. We have the standard Kubernetes control plane you would expect: the head nodes running the regular control plane components, the API server, the controller manager and so on. We have three of what we call system nodes. These run cluster-wide resources for things we've decided in GR that we want to use, such as cert-manager, Dragonfly (which I'll come to), Open Policy Agent and other things.
And then we just have N of what we call batch nodes, and these are the nodes we actually run the jobs on. We want to minimize the amount of resources the cluster requires on these nodes, so that we can maximize the amount available for jobs. So we pay very close attention to the DaemonSets we run there, the resources they request, their limits and so forth. On these nodes we run the absolute bare minimum: things like the CNI, kube-proxy, storage integration pods, Dragonfly as a caching layer, and then as many jobs as possible.

We've made some key choices along the way for scaling Kubernetes itself. We consider a thousand nodes a large cluster, although I know it's possible to go bigger. We've decided to use bare metal for all of our estate, including the master nodes and system nodes, but we keep the etcd nodes virtual, because with this Armada architecture we don't actually store a huge amount of data in etcd itself: almost all of the metadata is stored in Armada's storage components. We've still scaled up the control plane components within Kubernetes and made the etcd nodes as big as they needed to be, and there's been a lot of work around tuning Prometheus and the CNI. But one thing we found, which I think is remarkable and worth noting, is that we haven't had to do a huge amount to Kubernetes itself to make it work this well. Most of the interesting scaling work was actually in the dependencies. Silly things: we use Terraform to build our clusters, and the first time we went to a thousand nodes we found that, maybe unsurprisingly, our plan and apply times went to hell and were really terrible. We did a small amount of refactoring and found we could 10x the plan and apply performance in Terraform just by rethinking the way we represent those resources. Dragonfly has been massively beneficial for us.
If you suddenly scale to thousands of nodes, you're going to need to pull lots of images, and that's a very quick way to destroy your container registry, whatever it is, or get locked out of Docker Hub, say. Dragonfly is an open-source tool you can deploy on top of Kubernetes which acts as a caching layer for container images, with its own peer-to-peer network. So if you have a thousand nodes all pulling a new image, it goes through this Dragonfly component inside the cluster: there's a single pull from the upstream registry, and then it's all distributed across the nodes over the peer-to-peer network. And then there's been a lot of work to scale our storage platforms.

On the security front, we're a very security-conscious organization, and this is a multi-tenanted environment, with users all sharing the same platform, so we need a lot of focus on security. We don't want any user or administrator to be able to access anyone else's data, accidentally or on purpose. So we've got the standard sort of security rules you'd expect around user workloads: things like good RBAC and namespaces, the principle of least privilege, and then all the standard stuff: no root, no privileged containers, no host networking or host storage access, no extra Linux capabilities. We've implemented most of this using built-ins in Kubernetes, such as RBAC, plus a couple of extra tools. Open Policy Agent has been really beneficial for us; it's a great way of ensuring that these things are true and can't be violated. We also make heavy use of Pod Security Policy at the moment, although we know that's deprecated and will go away, so we're going to replace that soon enough.

So now I'm going to talk a little bit about some challenges we've had along the way. There are probably four categories of things that we've found really difficult.
The first is that scaling to this size, just running Kubernetes at this size, is very difficult operationally: you have to be very good at managing Kubernetes and rolling out changes reliably. The biggest thing is probably performance, and in fact not of the system itself. What you realize when you're suddenly running batch jobs at this scale is that you've effectively created a giant DDoS machine, and anything you point it at, if it hasn't been scaled properly, you can reliably destroy. We found that quite a lot, especially with our storage clusters: when we scaled to this size, possibly we didn't appropriately scale the storage, and we've had situations where users have suddenly launched a large number of jobs and destroyed performance for themselves and, unfortunately, for other people using the same shared resources. That's a constant challenge we need to keep improving on.

The next two are really integration-type problems. Along our particular journey, we've been doing all of this at the same time as moving from Windows to Linux, and I think we all definitely underestimated how much work that would actually be for everybody. It's not, as you can imagine, just a case of doing some find-and-replace of backslashes for forward slashes. We've slowly realized quite how entrenched Windows behavior is within our software, and it's been a lot of work to help people move away from that: silly things like using DFS as a pattern for accessing storage. And the last thing is really a side effect of all of these other challenges: because all this stuff takes up our time, we haven't found enough time to reinvest in the tooling and make the experience of using the platform as good as it can be, which can be a bit frustrating.
I feel like we're starting to turn the corner on that now, and having put some of these previous things to bed, we can focus on improving the tooling for our users as much as possible.

And now, successes. We're one of those organizations always striving for continuous improvement, and I think, like a lot of us, that can make us focus on the negatives a lot of the time; we need to take the time to stop, acknowledge successes and celebrate them. For me, across this whole project, the things that have been really great are these. We've proven that Kubernetes and Armada scale really well and don't seem to be at any kind of limit. When we started doing this, I think people were queuing up at my desk to explain why Kubernetes wasn't designed to run batch jobs and couldn't ever possibly work. And they're right, it wasn't designed for it, but we've proven that, actually, without a huge amount of effort, it can be made to do these things. Really good-quality distributed metrics have been a big success for us; I definitely recommend that, because you need to be able to reason about the platform and see what's going on.

After that point, a lot of the Kubernetes wins really started to pay off. Because we're using Kubernetes, we get all these ancillary benefits around making configuration changes really easy: if we need to suddenly change something across our whole estate, it's a couple of pull requests and a run of our automation pipelines, and the changes just go out, which is fantastic. Furthermore, because we're on Kubernetes, we get integration for free with all the other tools we might want to use. Dragonfly, again, is a great example of this: we realized when scaling that we'd need to do something to protect our upstream registry. If we weren't on Kubernetes, I don't know exactly what we would have done; we'd probably have had to hope something already existed for our platform, or build
something ourselves, whereas because we're running on Kubernetes, we just Google it, and oh, there it is: you deploy it and it mostly works. And then finally, the modular cluster design of Armada itself has been a massive win for us. It's taken the stress out of applying configuration changes and things like Kubernetes upgrades. We can stage them. Obviously we test things in dev and staging, but even when it gets to production, as we all know, with the best will in the world, that's often where you find the problems for the first time. With this design we can choose certain clusters as canary clusters and upgrade those first, sit back and observe, and either go "oh, actually, everything is okay", or "oh crap, something has gone wrong that we didn't spot" and then roll back or fix it. It's greatly preferable to lose one, I don't know, twentieth of the calc farm as opposed to all of it.

I'm going to briefly touch on the roadmap here for 2023. There are three categories of things on here. There's a lot more going on within G-Research, but this is the stuff specific to the platform itself, not specific to GR. Firstly, the observability piece. We want to put a lot of work into this UI so that people, and our administrators, can better understand what's going on. We spend a lot of time answering user questions like: hey, why isn't my job running? We can work it all out through Grafana and other things, but it's much better if they have a UI that explains it: yes, it's queued because you've already hit your cap on the amount of resources you're allowed on our compute, or because you've asked for something which can't be scheduled at the moment. The second category is around smarter scheduling. It's already possible to do things like basic preemption and basic gang scheduling through Armada, all of these sorts of things.
You can kind of prefix all of those with "basic". What we want is to do all of these things in a smarter and more native way, so that it's possible for us to really easily offer these features to people. These are the next big enabling things after the basics of fair share and queuing and so forth. And then the last thing, which I've put in Q4, but which I'm actually having a lot of conversations about just this week and want to try to bring forward, is more native Kubernetes integration. You'll probably have seen from the design that, in a way, we've been keeping Kubernetes at arm's length. When we first started designing the system we were hedging our bets, because we weren't sure if we actually wanted to use Kubernetes as the substrate for this, and we thought it would be nice to be a bit optional about it and maybe use Armada as a system on top of another platform. To be honest, that ship has now sailed, and Kubernetes is well embedded as the standard platform for running containers. So we'd like to make Armada a bit more accessible to people through Kubernetes, so it's easier for people to reason about: things such as using the Kubernetes API directly, or having an API that looks exactly like it, maybe a couple of simple CRDs, and so forth.

Now, just a slide on how you can use it. We've got our own channel in the CNCF Slack since being accepted into the Sandbox, so please use that: it's #armada. Come in there and ask questions; there are lots of friendly people. We have our GitHub page, of course, because it's all open source; the link is at the end of this presentation, so please take a look at that.
Of course, we've also got Alex's group, which was discussed just now, exactly for this kind of project, amongst other things. And I'll also do a shout-out for the CNCF Research User Group, which Ricardo and I run every other Wednesday. I can't remember the time now; it's 8am PST, isn't it? So do come along to that, where we talk about things like this and more. That's everything I have. Do we have any questions? Thank you.

Q: How is this different from the Kubernetes federation projects, like Karmada or the Open Cluster Management project?

A: So the question is: how does this differ from Kubernetes federation? I guess in a way those solve a similar kind of problem to the one we were trying to solve. I think federation is a bit more generic, in the sense that it's a way of federating all sorts of Kubernetes resources. We want to, in effect, federate jobs to multiple clusters, but very specifically those things. And as a design choice, we've decided to keep the storage of the state outside of Kubernetes itself, because there are limits to how much you can put in etcd. You can imagine as well that if you're going to overload the system, it's preferable to overload Armada itself, which has its own limits, in such a way that you don't overwhelm the cluster you're actually running on; you don't want to brick the whole thing. So I suppose that's how it's similar to, but different from, federation. At the moment we support a subset of things, so it's pod specs.
I think we actually also include services and so forth, so that you can run distributed jobs, but at the moment it's quite tightly bound to the resources we've found we've needed within our company.

Q: If you'd like to use Kubernetes ecosystem tools that run on top of Kubernetes, like Argo Workflows, the Spark operator or Kubeflow, they won't work with Armada, right? You'd need a dedicated integration layer for these tools at the moment?

A: Yes, and that's one of the main reasons for looking at moving a bit closer to the Kubernetes API: it would make integration with all these other tools much easier. At the moment, anything that talks to Armada needs to understand its API. You could access it through those tools, but you'd need to write some kind of layer to do that translation.

Q: You talked about users, and different users on the system. How does that flow through from Armada into Kubernetes? Do you create service accounts, or what do you use in the back end?

A: What we do in the back end is we tend to have a one-to-one mapping between queues and namespaces. We have our own automation which we use to apply definitions of both of those things (another thing I think we should open-source, actually). It's effectively a tool which takes a bunch of definitions: this is the queue, it's accessible by these users, which are users Kubernetes understands, it has these priorities and other properties, and that gets translated into namespaces within Kubernetes. Users themselves can then kubectl into their own namespaces and access their own jobs if they want to, but we try to encourage everyone to use the official tooling. So it gets translated that way.
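A toy sketch of that translation step might look like this. To be clear, this is hypothetical: the actual tool mentioned isn't open source, so the structure of the queue definition below is invented, and granting the built-in `edit` ClusterRole is just an assumed default (a real setup would likely be stricter).

```python
# Hypothetical sketch of translating a queue definition into Kubernetes
# objects: one namespace per queue, plus a RoleBinding granting the queue's
# users access. This is not Armada's actual (internal) tooling.

def queue_to_manifests(queue):
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": queue["name"]},
    }
    role_binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{queue['name']}-users", "namespace": queue["name"]},
        "subjects": [{"kind": "User", "name": u} for u in queue["users"]],
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "edit",  # assumed built-in role; real policy may differ
        },
    }
    return [namespace, role_binding]
```

Automation like this is what lets users kubectl into their own namespace while the queue remains the unit Armada schedules against.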
Q: I can ask one then, because we've discussed a lot about batch and other topics, but here you mentioned a lot about multi-cluster, and I think in the initial definition of the Batch Working Group it was explicitly stated that multi-cluster, or scheduling across clusters, wasn't something that would be focused on.

A: No.

Q: Okay, because we were just talking about instantiating resources across clusters and accessing the jobs. Do you access them through your Armada APIs, or can you actually talk directly to the clusters, things like this? Where do you see this fitting into Kubernetes and cloud native? Because we heard about the federation projects. How can we push this forward?

A: Yeah, so it's a really interesting problem. The multi-cluster approach we've taken is probably the USP for our application: I haven't seen anything else which supports a multi-cluster setup as well as this. However, it's something everyone wants to do, and in all of these discussions it's really interesting listening to everyone solving it in their own way. I kind of feel like we're in this Cambrian explosion of people trying to run batch on Kubernetes; we're all going to develop different ways of doing things, and eventually we're going to see some things win, some things lose, and, I guess, converge on the
I don't know what the answer is right now for multicluster, but it feels like something which should be a bit more somehow Available through Kubernetes in a not not in a sort of Federation kind of way but I ultimately I think what we'd like to see is as many of these sorts of Capabilities that we've developed here being pushed into the platform so that we can then opt out doing ourselves and use them And maybe eventually this all falls away I just I don't know but until that point we we need to solve our problems So so we do have the multi-cluster sake right and they did They have other like it's not only batch. There's stateful workloads as well, but how do you move storage like how do you migrate like you know? volumes What do you represent? How do you represent a workload? Yeah? Because a worker is not necessarily a part or even a job. It's a collection of things resources that you need stenchier Etc. Yeah, I feel this is much bigger than batch. I mean batch is one massive use case for it Yeah, there are many other use cases for multi-cluster Yeah, I might take on this and you probably talk about it in the panel discussion later is It's it's okay to have different solutions for things And I think sometimes we try too hard to try and come up with one one ring to rule them all and then fail because It's things end up just too generic and you suffer from this generic side concept Ultimately, we'll probably settle on I think a smaller number of patterns are running software whether it's services batch And a small ish number of other things and if we have different ways of solving Multi-cluster for that smaller set of things that at least is common within those sets. That's probably fine I think trying to be too grand about it and have Federation for all something might just be a bit too ambitious and that's why you just spin your wheels for it Probably Yeah, that's that's my view anyway Thanks. It was a great talk. Can you talk a little bit? 
I guess, about the general question of how you've done storage with this? You've got users running all their ML models; how are they getting their training data in there? How are you making that work with multi-tenancy especially? It seems like a really hard problem.

A: Yeah, sure. In our world we tend to have most of our storage on shared storage platforms, so we use things like Isilon, and we're actually moving quite heavily to using VAST as a storage platform; I saw that in some slides previously. We have a good multi-tenancy setup on those platforms already, where users or groups of users have areas they can access, a bit like a bucket in S3, I suppose, which they're already permissioned to use. Through convention, I suppose, we allow users to access those resources through Armada and Kubernetes, so it's easy for people to run a job and have a templated way of accessing their personal area, or some shared area, to load their data. A lot of the performance work we've been doing is about making that interaction as good as possible, because you can make people's lives a lot better or worse by making that integration work better or worse.

More questions?

Q: While I'm walking over: why do you use Postgres? Why don't you put everything on the API server, in etcd, everything in the Kubernetes API server? Why do you need extra storage?

A: Why do we need extra storage? Yeah, sure.
I guess that's one of the motivators for the architecture. Back-of-a-napkin calculations when we first started doing this told us that the amount of data we need to store (we have requirements to store millions of jobs, with a huge amount of throughput and churn through the system) meant that putting it all in etcd would definitely break it if we didn't do anything, and we'd have to tune it pretty hard and give it an awful lot of storage to even make it possible. And then there's the other thing I mentioned previously around failure domains: if you break etcd, you've completely broken the cluster that you're running the platform on, and then you can't access it. You've cut off the branch you're sitting on, kind of thing. So that was the motivation. However, if that could be solved, that would be a further simplification we could make. A lot of these parts of our design have been optimizations we've made along the way, but we're not too precious to say we won't ever change something, and if an optimization is no longer required then we should re-evaluate it.

You talked a lot about the infrastructure behind the system, and it looks interesting. In terms of the scheduling algorithms, are you dealing with lots of very large jobs and the placement of those jobs, maybe with fragmented placement of these large schedules?

That's a really good question. I actually skimmed over that a little bit; I didn't have a written note, but I wanted to talk about it. We have quite a large range of different types of jobs, everything from jobs which run for a few seconds up to possibly even a few weeks, so we want to be as clever as possible about how we schedule these things. And then there are the headaches around maintenance: if suddenly every node in your cluster is running a job that runs for two weeks, how on earth do you patch everything?
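Picking up the earlier back-of-a-napkin point about Postgres versus etcd, the rough sum is easy to reproduce. The figures below are illustrative assumptions (the job count and per-job record size are guesses, not measured Armada numbers), set against etcd's documented suggestion of keeping the database under roughly 8 GiB.

```python
# Rough capacity estimate: could a day's job state live in etcd?
# All figures are illustrative assumptions, not measured Armada numbers.

JOBS_PER_DAY = 5_000_000             # "millions of jobs a day"
BYTES_PER_JOB = 10 * 1024            # assume ~10 KiB of spec + status per job
ETCD_SUGGESTED_MAX = 8 * 1024**3     # etcd docs suggest staying under ~8 GiB

daily_bytes = JOBS_PER_DAY * BYTES_PER_JOB
print(f"job state per day: {daily_bytes / 1024**3:.1f} GiB")
print(f"fraction of one day etcd could hold: {ETCD_SUGGESTED_MAX / daily_bytes:.2f}")
```

Under these assumptions, a single day of churn is several times etcd's comfortable ceiling, before even accounting for watch and compaction overhead, which is why an external store such as Postgres is the less fragile choice.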
So at the moment we're in that situation, but we're working towards improving our scheduling components to pack things together a bit more smartly. I'll point to Albin Severinson over there; he's doing a lot of the work to make that a little bit cleverer. Ultimately, within GR anyway, we're trying to work towards a world where all users' workloads are completely preemptible, because then a lot of these sorts of problems get a little easier to deal with: you can be a little less precious about making sure things are scheduled in exactly the right places.

We have time for one more question.

What do you find your users are most comfortable with when trying to observe their jobs? Is it explicitly looking at metrics, or do they look at statuses, or do they use kubectl or some similar tool?

What are users most comfortable with? I think, frustratingly for us, they always want a UI. That's because our users in particular are not experts in Kubernetes, containers, or even Linux, some of them; their day job is trying to do ML, and they're basically mathematicians. So typically people want tools that are easy to understand, and that's either UIs that we build, or meta-applications on top of Armada which other bits of our engineering organization have produced. But we have a massive range: some users are effectively power users who are perfectly happy running kubectl or armadactl or whatever, and other people just don't want to know and only want to use a UI. So we kind of have to support everything, which is one of the challenges.

One last comment: you took a picture, but how do you prove to your wife that you were in there? You should have taken a selfie.

I should have done a selfie! I will do that. I'm being told to stop, so if I can have a selfie with "told to stop" in the background as well, I think that would be best, wouldn't it? Is it going to work?
Alright, you can all tell me to go away. Three, two, one. Yeah, thank you. All right, thank you everyone. Thank you. We have a coffee break; we'll be back at 2:50.