All right, good morning, everyone. Welcome, thanks for attending. My name is Eric Malm. I'm the product manager for the Cloud Foundry Diego team. And today I'd like to give you an overview of what Diego is in the context of Cloud Foundry, along with some of the improvements and milestones that we've achieved over the past year or so. The most important of those milestones is that Diego has finally arrived as the official container runtime inside of Cloud Foundry. This past November, we finally cut our 1.0 release after achieving some significant scale milestones. And just last month, we officially deprecated and removed the DEAs from Cloud Foundry. So it's all Diego if you're up to date on Cloud Foundry. Today I'd like to give you an overview of the important components in Diego and their responsibilities inside of Cloud Foundry, which are primarily concerned with placing and running the application instances and tasks that you rely on Cloud Foundry to run. I'd also like to cover some of the updates that we've had recently: new features that we've enabled on the platform, as well as validating the scale at which we can run and improving the stability of the platform itself. I'd like to give you a little introduction to some tooling that we've been building that we hope is gonna be useful for operators. We call it cfdot, the CF Diego Operator Toolkit. And last, I'd like to point out a few directions we might be working on in the coming year or so. Okay, so let's get started. If you've operated or used Cloud Foundry, you're likely familiar with some of the more externally facing components, such as the Cloud Controller or the Gorouter that handles HTTP traffic. And if you've used Cloud Foundry at all, I hope you've experienced the magic of running cf push and very quickly seeing your application instances, whether they be buildpack applications or now Docker image-based applications or Windows apps, running on the cloud. But let's look under the hood a little bit and see how this all fits together, how the system orchestrates to keep these applications running. So all these application instances are actually containers running in Cloud Foundry's homegrown container engine, Garden. Garden is responsible for knowing all those details about how to execute containers, how to make a container, how to run your processes in it. But Garden itself doesn't know anything about the rest of Cloud Foundry. It doesn't know the distributed context in which it's operating. And that's where Diego comes in. The core responsibilities of Diego are to handle placement of those containers across dozens or hundreds or even thousands of these container execution sites and to keep those application instances up and running even if they crash or even if the VMs that they're on crash. Diego itself has a few dependencies that it relies on to perform those functions. The most important of those is that it needs a consistent data store to manage this distributed system. And nowadays that comes in the form of a SQL database such as MySQL or Postgres. Additionally, these Diego components need to coordinate amongst themselves and discover each other, and for that we rely on Consul. Finally, Diego brings in some other systems that enable core features of the platform. One of those, for example, is the SSH system that allows interactive access to containers.
So now those front layers of Cloud Foundry are talking to Diego on the back end: the Cloud Controller submits work to the Diego core to get it to run, and the Gorouters integrate with the information that it provides about the running instances to maintain their routing tables. Okay, so I'd like to dive in a little bit and give you a better picture of how these Diego and Garden components are organized in a typical deployment and what the responsibilities are there. So let's start fleshing out a typical deployment of Cloud Foundry, but just the Diego and Garden parts. The Diego VMs that you're most likely familiar with are what we call the cells, and these are where we're gonna actually run those containerized app instances. So naturally each cell is gonna have its own copy of Garden, and that may be a Garden implementation that's suitable for the particular operating system that that cell is running, whether that be Linux or Windows. And as I mentioned, Garden knows all those details about how to run containers. So it can create containers, run processes in them, destroy containers, hook up networking for them. But it doesn't know anything else about this distributed system that it's running in. And that's where the first of the three Diego core components comes in: the cell rep. So the rep is responsible for advertising or broadcasting the presence of the cell to the rest of the system. It also controls that local Garden and tells it what containers to create on behalf of Cloud Foundry. And then it has some other duties in terms of managing assets that it's downloaded for the various Cloud Foundry applications that it's running, or other administrative assets to do its job. But when a client such as the Cloud Controller interacts with Diego, it doesn't talk directly to these cells. Instead it talks to the second of the core Diego components, and that's the BBS, or the Bulletin Board System. So that's typically running on a separate VM dedicated to presenting the public API for Diego. Clients come in, they talk to the BBS's API to describe the work that they want Diego to run across this distributed system. And then the BBS knows how to enforce the particular characteristics of the lifecycles for that work. So we'll illustrate what those lifecycles are like in just a little bit, but the BBS is in charge of governing that. So when the BBS gets new work, it doesn't immediately start running it on the cells. It instead delegates that responsibility to the third core component of Diego, which we call the auctioneer. And again, that's typically running for isolation on a separate VM in a deployment. If you look at the CF deployment manifest, this is how these things are split out. And that auctioneer is responsible for communicating with all the cells individually, getting a fresh picture of their state, and then making optimal placement decisions based on the information that it gets back. So finally, in this complex distributed system, all kinds of things can go wrong. VMs can disappear, requests can fail. And so another important responsibility of the Diego system as a whole is to make sure that it's eventually consistent. That is the other core responsibility of the BBS component: to periodically check that the desired state that has come in from clients matches the actual state that's running on the cells, and if it doesn't, to apply corrective action.
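To make that convergence responsibility concrete, here's a minimal, illustrative Go sketch of the reconcile step: compare desired instance counts against actual ones and work out what to start and what to stop. The map-based shapes and names here are assumptions for the example, not the actual BBS code.

```go
package main

import "fmt"

// desired maps a process GUID to how many instances the client asked for;
// actual maps a process GUID to how many instances are currently running.
func converge(desired, actual map[string]int) (toStart, toStop map[string]int) {
	toStart = map[string]int{}
	toStop = map[string]int{}
	for guid, want := range desired {
		if have := actual[guid]; have < want {
			toStart[guid] = want - have // missing instances: ask the auctioneer to place them
		}
	}
	for guid, have := range actual {
		if want := desired[guid]; have > want {
			toStop[guid] = have - want // extra instances: tell the owning cells to stop them
		}
	}
	return toStart, toStop
}

func main() {
	desired := map[string]int{"ruby-app": 3, "worker": 1}
	actual := map[string]int{"ruby-app": 2, "old-app": 1}
	start, stop := converge(desired, actual)
	fmt.Println("start:", start) // start: map[ruby-app:1 worker:1]
	fmt.Println("stop:", stop)   // stop: map[old-app:1]
}
```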
Okay, so I mentioned that there are different lifecycle policies associated with the work that Diego can run. And in fact, there are really two main flavors of that work. The first of those is what we call a long-running process, or an LRP for short. And this has three important characteristics. The first is that Diego assumes that this work is intended to be running continually. So even if it terminates for any reason, even if it exits with status code zero, Diego interprets that as a failure and tries to restart that work elsewhere. It also intends this work to be scalable. So you can start with one instance of this workload and seamlessly scale it out to five or a dozen or a hundred or a thousand instances just by changing a parameter on it. And then the final characteristic of this work is that Diego makes some assumptions about how flexible this work is, in part because it assumes that it's scalable. It assumes that under the hood it can, for a short period of time, run duplicate instances of this workload in order to provide seamless availability as platform updates are going on in the background, as those cell VMs are being individually updated by, say, BOSH. So all of these characteristics are things that we abstracted out of the needs of the Cloud Controller running application processes, like a web server or a worker process, on the cloud. And these are also characteristics that are very common to the twelve-factor app model that Cloud Foundry supports so well. So the other type of work that Diego knows how to run is pretty much the complete opposite. These are tasks. These are pieces of work that are assumed to terminate at some point, and Diego can distinguish between a success and a failure condition for that work and report it back to the client. These are also strictly one-offs. They're a single unit of work. And so if the client wants to schedule multiple units of that work, then they're responsible for scheduling individual tasks. And finally, Diego tries to be much more consistent about how it schedules these. It's not as flexible in terms of potentially running multiple copies and eventually converging to one. These are gonna be at-most-once work. And again, the client is responsible for determining the success or failure of that and rescheduling it if necessary. So these are all characteristics that we extracted from the needs of running staging tasks on the platform: the work that actually compiles and vendors dependencies for buildpack applications and extracts metadata from Docker images to run them successfully as app processes. But we found a second immediate application for them in terms of application tasks. There's lots of one-off work that you often wanna run in the context of your already staged application, like doing a database migration, and these are a very good fit for that as well. Okay, so I'd like to give you a little illustration of how these Diego core components coordinate to schedule and run instances of one of these long-running processes. So to walk through that, here's a typical small Diego deployment. There's the BBS, the auctioneer, and we've got three cells that are already running some work. So let's say the Cloud Controller comes in, and it's gonna talk to the BBS and tell it: please run three instances of this Ruby application that's coming from a buildpack.
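As a rough mental model of the specification the Cloud Controller hands over, here's an illustrative Go sketch of the shape of those two kinds of work, an LRP and a task. The type and field names are assumptions for this example, not the actual BBS API types.

```go
package main

import "fmt"

// DesiredLRP describes a long-running process: restart on any exit,
// scale by changing Instances, and tolerate brief duplicate instances
// during platform updates.
type DesiredLRP struct {
	ProcessGUID string
	Instances   int      // scale out by changing this number
	RootFS      string   // e.g. a buildpack rootfs or a Docker image reference
	Command     []string // the process to keep running
	MemoryMB    int
}

// Task describes a one-off unit of work: run at most once, then report
// success or failure back to the client, which decides whether to retry.
type Task struct {
	TaskGUID string
	RootFS   string
	Command  []string
	MemoryMB int
}

func main() {
	web := DesiredLRP{ProcessGUID: "ruby-web", Instances: 3, RootFS: "cflinuxfs2",
		Command: []string{"bundle", "exec", "rackup"}, MemoryMB: 256}
	migrate := Task{TaskGUID: "db-migrate-1", RootFS: "cflinuxfs2",
		Command: []string{"rake", "db:migrate"}, MemoryMB: 256}
	fmt.Printf("LRP %s x%d, Task %s\n", web.ProcessGUID, web.Instances, migrate.TaskGUID)
}
```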
And part of that specification is a full level of detail about what assets to download or to use locally for that application, what command to run, and what constraints to put on it as containerized work. So when the BBS accepts that request, it saves it off in its data store. But it's also going to create three additional records to track those three desired instances that it's supposed to run. Those are labeled zero, one, and two to correspond to the indices of that work. So as I mentioned, the BBS doesn't talk to the cells directly to place that. Instead, it sends that work to the auctioneer. It says: please run these LRP instances at these indices. So the auctioneer batches up that work. It might be in the middle of placing some other work. And when it's ready, it'll contact all the cells that it can find in the deployment and get a snapshot of their current state. It keeps that in memory and makes placement decisions based on that, updating it as it goes. So the first instance here that it's going to place is at index zero. And looking at the cells, there's a natural candidate in terms of which one has the least amount of resource usage, that second cell in the middle there. So just in its head, it's going to assign that index-zero instance to that cell. Okay, so now it's got pretty even utilization across those cells. So for the next instance, it decides to place it on that first cell that it found out about. All right, but now it's in a little bit of a quandary for this last instance. It would like to place it on that last cell to even out the instances as much as possible, but it turns out it's incompatible because it's running Windows instead of Linux. So at this point, it's really forced to put it on that second cell if it's still trying to even out resource utilization. So it's now finished making placement decisions in its head, and it communicates back out to the cells to tell them what work to run. So the cells accept that work; they actually have the capability of rejecting it if something else has changed locally that would mean they couldn't take it on. And then the next thing that the cells do is they check in with the state machine that the BBS is managing to make sure that nobody else started to run that work in the meantime. So for example, the second cell is going to report that it wants to claim the index-zero and index-two instances of this app. And at this point, nothing has claimed them so far, so the BBS allows it. And then cell one also reports in that it's claiming index one. Okay, so at that point, the cells start creating those containers and start actually running the application processes in those. Now cell number one has a little bit less to do, so maybe it finishes early. So it reports back to the BBS that it's now successfully run that application, and the BBS records that in its state machine. Well, let's say that cell number two is not so lucky, and that second instance that it scheduled actually crashes on startup or doesn't start up in time. So the cell also reports that back to the BBS and relinquishes its claim on it. It says: I don't own this anymore, and I'm gonna clean up that container because it crashed. So at this point, the BBS is then going to reschedule that work through the auctioneer and try to get it running elsewhere. If it keeps crashing, it's gonna back off exponentially. Okay, so that's a brief overview of how all these Diego components coordinate to run workload on Cloud Foundry across potentially very large deployments.
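To summarize the auctioneer's behavior in that walkthrough, here's a minimal Go sketch of the idea: filter cells by compatibility, then greedily assign each instance to the least-loaded eligible cell, updating an in-memory view as it goes. It's a simplified illustration under those assumptions; the real auctioneer's scoring and batching are more involved.

```go
package main

import "fmt"

type Cell struct {
	ID        string
	Stack     string // e.g. "cflinuxfs2" or a Windows stack
	FreeMemMB int
}

// place assigns each instance to the compatible cell with the most free
// memory, decrementing that cell's capacity as it goes. It returns cell IDs
// indexed by instance index, or "" where nothing fit.
func place(cells []Cell, instances int, stack string, memMB int) []string {
	assignments := make([]string, instances)
	for i := 0; i < instances; i++ {
		best := -1
		for j, c := range cells {
			if c.Stack != stack || c.FreeMemMB < memMB {
				continue // incompatible or full: not eligible for this instance
			}
			if best == -1 || c.FreeMemMB > cells[best].FreeMemMB {
				best = j
			}
		}
		if best == -1 {
			continue // leave unplaced; the BBS would retry or report a placement error
		}
		cells[best].FreeMemMB -= memMB
		assignments[i] = cells[best].ID
	}
	return assignments
}

func main() {
	cells := []Cell{
		{ID: "cell-1", Stack: "cflinuxfs2", FreeMemMB: 2560},
		{ID: "cell-2", Stack: "cflinuxfs2", FreeMemMB: 3072},
		{ID: "cell-3", Stack: "windows2012R2", FreeMemMB: 4096},
	}
	fmt.Println(place(cells, 3, "cflinuxfs2", 512))
	// [cell-2 cell-1 cell-2] — the Windows cell is never eligible
}
```

The design point mirrored here is that placement decisions are made against an in-memory snapshot and only then communicated back out to the cells, which is why the cells keep the right to reject work if their local state has changed in the meantime.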
So now I'd like to tell you about some of the updates and milestones that we've achieved over the past year or so. And one place we can really see that is in how the services deployed even on a single Diego cell have evolved over the past year. So in the beginning on a Linux cell, things were so simple. We just had the Diego rep and we had Garden Linux, the original Linux implementation of Garden. Well, over the past year or so, there have been a lot more standardization efforts coming out of the broader container community. And Cloud Foundry has been very committed to integrating with those and providing interoperation. So the first of those to really emerge was from the Open Container Initiative, and that's the notion of an OCI bundle, which gives a more abstract specification of how to run a container that's separate from all the different implementations of how you might run it. But the OCI also provides its own reference implementation of how to run a bundle, and that's called runC. So the Garden team spent a lot of effort in 2016 to reshape Garden Linux into something that they call Guardian, which knows how to delegate a lot of that work to runC itself. So that's lightened a lot of the burden on the Garden team and has allowed us to take advantage of all the security and performance and standardization benefits that runC provides. So other container standards have been emerging as well. I should mention that together Guardian and runC form the core of the Garden-runC release. So that has fully replaced Garden Linux on the platform. Other places where we've seen these standards emerging have been with respect to volume or storage attachments. And so the CF Persistence team has been working over the past year or so to help the Diego system integrate with those and to provide various volume services to containerized work running on Cloud Foundry. Likewise, there have been emerging standards around container networking interfaces, and the CF Networking team has also been doing a tremendous amount of work to integrate with those standards and to provide a really functional, batteries-included container networking solution for Cloud Foundry. In fact, they just cut version 1.0 of their release a few days ago. So congratulations to them. And then finally, the second standard that's been emerging from the OCI effort has been around the format of images. And so there's been a sister team to Garden in London that has been working on tooling for that called grootfs. So that's intended to replace all of the image management that's still in Guardian around how to manage image layers and image downloads for these containers. Okay, so there's been this explosion of activity and complexity and extension points even just on the Diego cells themselves. It's super exciting. But here's the more exciting part. This is also coming soon to Windows. So the Garden Windows team, they're fairly early in this effort, but they've been working on Windows Server 2016 to integrate with all of these standards and to provide their own Windows-specific implementation of running containers there. So it's not ready yet, but we hope it'll be production ready very soon. Okay, so I'd like now to talk about some of the scale milestones that we've achieved over the past year or so. We set a very lofty goal for cutting version 1.0 of Diego.
In particular, we intended to be able to support 250,000 application instances running across at least 1,000 Diego cells before we were willing to cut version 1.0 and say this is ready for even the largest Cloud Foundry installations. So as you may know, our previous consistent data store in Diego was etcd. And this is a really fascinating, great key-value store. But we were finding that with how we were managing our data in it, we could really only scale up to about 50,000 application instances and about 300 cells. And then just the constraints of how it forced us to manage the data inside of it were causing us to hit various limits. So we stepped back on the team and said, okay, how can we overcome these barriers? Can we stick with etcd, which really has a great promise as this next generation of persistence on the cloud? And all the solutions that we were coming up with started to look really relational in nature. So we said, why don't we try switching the BBS's persistence backend to a SQL database? So we implemented that and we did some validation on it. Things looked promising enough that we set up a massive CF deployment on GCP and succeeded in running 250,000 application instances, real CF apps that were generating traffic and doing logging on the platform, across over 1,200 cells. So this gave us the confidence to say, we're ready to support even the largest Cloud Foundry deployments that are out there. I should mention that we validated this against both of the databases that we support, both a highly available multi-node CF-MySQL cluster and a Postgres deployment. Those have traditionally been the databases that Cloud Foundry has supported, and we wanted to maintain support for those even as we changed the persistence layer of the Diego backend. So obviously this has been an expensive experiment and validation to run. It took us about a month on the team just to get the environment up and running and instrumented correctly. And we wanna make sure that we're not regressing on our support for this kind of scale. So we actually have a benchmark test suite that runs continually in the Diego team CI pipelines, which exercises the bottlenecks we identified in this deployment with that amount of data load to ensure that we're still supporting that kind of scale. Okay, next I'd like to talk about some of the improvements that we've made to stability within the Diego deployment itself. So again, here's a slightly more sophisticated picture of a typical Diego deployment. And on the left here you can see some of those services that have really important global responsibilities. So I've mentioned the BBS and the auctioneer already, responsible for maintaining eventual consistency in the deployment and for taking care of global placement decisions. One that I haven't mentioned so far but is no less important is called the route-emitter. This is responsible for collecting the information about the routability of all of these application instances across all of the cells and broadcasting that to the routing tiers, such as the Gorouter, so that they can maintain their routing tables. Okay, so we have these components with global responsibilities, and we certainly want to have multiple copies of them deployed so that if one of them fails, another one can immediately take over. So in order to achieve that, we have all of these components contend in Consul for a distributed lock record. And that has worked great; it's a very common pattern in distributed systems.
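The lock pattern described here is roughly: acquire a named distributed lock, do the component's global work only while holding it, and shut down for safety if the lock is lost. Here's a minimal Go sketch of that pattern using a hypothetical Locker interface; it's not the actual Consul (or Locket) integration.

```go
package main

import (
	"fmt"
	"time"
)

// Locker is a hypothetical stand-in for a distributed lock (a Consul session,
// a Locket record, ...). Lock blocks until the lock is held and returns a
// channel that is closed if the lock is later lost.
type Locker interface {
	Lock(key string) (lost <-chan struct{}, err error)
}

// runWhileHoldingLock acquires the lock, runs the work, and relies on the work
// stopping itself if the lock is lost — the shut-down-for-safety behavior
// described in the talk.
func runWhileHoldingLock(l Locker, key string, work func(stop <-chan struct{})) error {
	lost, err := l.Lock(key)
	if err != nil {
		return fmt.Errorf("could not acquire %q: %w", key, err)
	}
	work(lost) // work must watch the channel and stop promptly when it closes
	return nil
}

// fakeLocker grants the lock immediately and never loses it; just for the demo.
type fakeLocker struct{}

func (fakeLocker) Lock(string) (<-chan struct{}, error) { return make(chan struct{}), nil }

func main() {
	_ = runWhileHoldingLock(fakeLocker{}, "diego/auctioneer_lock", func(stop <-chan struct{}) {
		select {
		case <-stop:
			fmt.Println("lock lost, shutting down for safety")
		case <-time.After(100 * time.Millisecond):
			fmt.Println("still holding the lock, doing global work")
		}
	})
}
```

That shut-down-for-safety behavior is exactly what made the Consul instability described next so painful, and it motivated both the local route-emitters and Locket.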
But as Nima and Adrian will get into in their talk next, we've observed some problems trying to operate Consul with BOSH. And so sometimes Consul explodes when we're doing those operations on it, and that's bad news for all of these services that hold locks. If they can't be sure that they're holding a lock, then they shut down for safety's sake. And even though the application instances are still running, for the route-emitter this is particularly problematic, because then the Gorouter prunes those application routes and we can't route traffic to the applications. So we observed this being an occasional stability problem in various environments, and we said, okay, we've got to make some changes to fix this. So the first thing we did was to look for places where we could eliminate those locks entirely, and we identified the route-emitter as a prime candidate for this, because all of those routes are really associated with application instances that are running on those individual cells. So we said, well, let's give each one of those cells its own copy of a route-emitter, but let's tell it that it's running in this local mode and that it represents only work on this particular cell. So this first cell, its route-emitter is gonna be responsible for broadcasting only the route for this Windows application here. And the second cell is gonna broadcast the route only for that Ruby application. So now we've effectively sharded the responsibility for those route registrations across the execution sites for those app instances. So at this point we're now totally covered in terms of route registrations, and we can just get rid of the global route-emitter entirely. So we spent a few months doing this work to have a seamless transition over to these local route-emitters. Obviously you wanna deploy the local ones first and then get rid of the global ones. And a few months ago we declared that we have enough confidence in this that you can switch to this model. So at this point we've effectively deprecated the global mode of the route-emitter, and we'll be removing that operational mode at some point when we next cut a major version of Diego. Okay, so for some of these other services, though, it's still very important that they have this global responsibility and that there be only one active at a time. So we still wanna stick with this distributed lock pattern. But we wanted to get out of this occasionally fragile dependence on Consul. So the Diego team has implemented our own locking service that we call Locket. And it uses the other consistent data store that we have available in Cloud Foundry, namely the SQL database that the BBS already uses. So that's sufficient for us to maintain a consistent picture of locks, especially when we use a clustered deployment of a database like the HA CF-MySQL deployment. So now you can configure the BBS and the auctioneer to talk to Locket to contend over those locks instead of talking to Consul. Or, if you're transitioning from a Consul-based deployment to one using Locket, you can configure them in a joint mode where they will correctly contend over those locks and still be safe in terms of not deadlocking themselves. Okay, so lastly I'd like to give you a brief introduction to some tooling that we've been building that we hope will be useful in terms of inspecting the Diego system that's now backing your up-to-date Cloud Foundry deployment. And that's what we call cfdot, the CF Diego Operator Toolkit.
So this is effectively a command-line tool for Diego that lets you inspect various layers of the system and either get information about it or manipulate it as you see fit. So we've finished implementing support for all of the commands and endpoints on the BBS API, or at least the ones that are exposed to external clients such as the Cloud Controller. These let you create and read and manipulate long-running processes and inspect the instances of those. They also let you run tasks and inspect their state, and find out what cells are registered with the system. And as we've been building up this new Locket component, we've been introducing commands to let you inspect it through its API. Well, you might ask, why can't you just use curl for these kinds of things? These are all talking over HTTP; that should be suitable. Well, it turns out these APIs are not very friendly to just ad hoc querying with curl. In particular, all of their payloads are protobuf-encoded, so they're just binary data. It's not something that you can just read out on the command line and have be both machine-readable and human-readable. And we've done that for various efficiency reasons within the deployment. So cfdot, for these APIs, gives you something of a translation tool to something that's more readable. And in particular, we've been very consistent about the output of this tool. It always emits a stream of JSON objects on standard out. So you can feed that into tools such as jq, or even classic command-line tools such as sort and grep, to slice and dice the data coming out of it as you see fit and do ad hoc querying. In fact, we even have a BOSH job that deploys this on Diego VMs and hooks it up with its environment so it has the configuration to talk to these APIs. It even deploys jq alongside it, because we know that'll be useful for manipulating this data. And it goes so far as to put it on the path for you. So you just bosh ssh onto your cell and run cfdot, whatever. So let me give you a couple of examples of some ad hoc querying that you can do with this tool. One example: let's say you're inspecting a deployment and you want a quick count of how many app instances are in various states according to Diego; which ones are successfully running, which ones are crashed and in this kind of back-off state, which ones are claimed and in the process of starting up on a cell, and which ones haven't been claimed by any cell. So you can dump out all of the information about those actual LRP instances from the cfdot tool as JSON and then slice and dice it with jq. In this case, you can do some lightweight aggregation and print out a little report of those statuses. So this is actually data from the Release Integration team's A1 environment yesterday. They've got about 300 running instances, about 40 that are in some crashed back-off state, and three that haven't been claimed. They might have memory limits that are too large for any single Diego cell, so they can't be scheduled. Here's another example where it's more about querying through this list to find some particular app instance. Maybe you're looking at the routing table in the Gorouter and you see an IP address and a port that you don't expect to be there. Well, if you wanna track down the app and the index associated with that, you can again try dumping out all of the actual LRP instances that Diego knows about and sift through it to find that IP address and port, and then print out the app GUID and the index.
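To make that first example concrete without depending on exact cfdot flags, here's a small Go sketch that does the same lightweight aggregation: it reads a stream of JSON objects on standard input, the kind of stream cfdot emits, and counts instances by state. The lowercase "state" field name is an assumption for this illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// Reads a stream of JSON objects from standard input — e.g. the output of a
// cfdot command that dumps actual LRP instances — and tallies them by their
// "state" field (RUNNING, CLAIMED, UNCLAIMED, CRASHED, ...).
func main() {
	dec := json.NewDecoder(os.Stdin)
	counts := map[string]int{}
	for {
		var instance struct {
			State string `json:"state"`
		}
		if err := dec.Decode(&instance); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode error:", err)
			os.Exit(1)
		}
		counts[instance.State]++
	}
	for state, n := range counts {
		fmt.Printf("%-10s %d\n", state, n)
	}
}
```

In practice jq gets you the same report in one pipeline, but the point is the same either way: a consistent JSON stream on standard out composes with whatever ad hoc tooling you already have.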
Okay, so finally, I'd like to mention a few things that we're considering doing over the next year to improve Diego. Some of these involve different changes to how we manage the lifecycle of app instances. So one thing that we're looking forward to working with the CAPI team on very soon is how to implement zero-downtime updates: having a native notion on the platform of how to do a rolling deploy safely through your app instances without having to stop all of them and then start them, or resort to some sort of blue-green deployment strategy externally. We're also looking at where it makes sense to add support for more consistent scheduling of LRPs. I mentioned that we've been fairly flexible in terms of running multiple instances of those, but you might have other software, such as legacy software, that doesn't tolerate that very well. Maybe it only wants to run as a singleton instance. And so tweaking our scheduling to respect that kind of contract with the platform is also something that we're considering doing. And then there's a bunch of improvements that we'd like to investigate making to how we're doing placement on the platform: making sure that if there's a cell that's having a really bad time, we stop giving it work, even though it looks like it has capacity, or how to do automatic throttling and rebalancing of app instances on the platform to accommodate different workloads. And then as always, we're committed to improving the stability and security and performance of the core system as a whole, making sure that we're providing the best container runtime system that we can for Cloud Foundry. One place where we're looking to make our next wave of improvements along those lines: the BOSH team has been working on providing their own DNS system for discovery of application components, and we wanna make sure that we're compatible with that as we look to transition to it as an alternate option for component discovery inside of the CF runtime. So I think we've got a minute or two for questions. So there's a question: Windows 2016 support is coming soon, but what's the difference between what's coming in 2016 versus what's available in 2012? Oh yeah, sure. So right now today, you can deploy Windows 2016 and 2012 cells with BOSH; that's now fully supported and generally available. And that will deploy the current implementation of Garden Windows along with a Windows-compiled version of the cell rep. But it doesn't have the integration with all of those other container ecosystem interfaces yet. So that's what the Garden Windows team is working on as their next wave of effort. I think there's a talk, maybe on day two, about the efforts that they're going to there. Yeah, so BOSH-deployed Windows 2012 works today, and we're just looking at the frontier of matching the state of the art on Linux cells and having parity there. All right, any other questions? Yeah, Adrian. Oh, right. So the cells, they're not just advertising that they exist; they're advertising a whole amount of metadata about their capabilities. So in particular, those Linux cells are advertising a notion of what we think of as the stack, like cflinuxfs2, or potentially that Windows flavor of stack. And the auctioneer does a bunch of predicate matching in terms of its placement to winnow the field of eligible cells down to the ones that can successfully run that workload.
So I believe there's a talk on isolation segments later, and that exact same mechanism is used to have the Diego auctioneer and BBS match workload that's tagged for a certain isolation segment with the cells that are designated for that. So there's a question about whether you can do more sophisticated tagging of those cells. I think in that case, isolation segments are a good solution right now. And if there are more specific needs that make sense to build into the placement algorithm for Diego, I'm not sure that we support them now, other than the support for various OSes or the support for running Docker images that we have now. But that's one potential avenue that we can look at as that need arises on the platform. I think we haven't seen the need to make the placement algorithm for the auctioneer pluggable. You could, I mean, that's one potential extension point that we could build in. But we really want to do the work that makes sense for Cloud Foundry. And so if this placement policy works to run CF app instances, then that's great. Yeah, so we don't have any plans to open it up, but if it becomes important to, we might do it. Yeah, so I'll make this available as a PDF and Keynote presentation, certainly through the schedule website. And then there's also a few other sources of Diego documentation on GitHub. So if you go to the Diego BOSH release repository, it has pointers to all of those. And I'll have a link to the slides and eventually the video for this talk there. There are also slides and videos from previous summits, so you can see exactly what slides were used. Yeah, sure, so the question was about how we do those seamless updates without disrupting app instances too much. So this is a feature that I didn't touch on called evacuation. So when you're in a BOSH-managed environment with Diego and BOSH runs the drain script for the cell rep, it puts the cell into this evacuation mode. So the cell stops taking on new work, and it signals to the BBS through that state machine that all of this work should get rescheduled elsewhere. So it kind of shunts it over into this evacuating state and then creates new replacement instances to get scheduled. And so part of the state machine that the BBS manages is to get those placed through the normal auctioneer route, and then once they have successfully come up running on other cells that are not themselves evacuating, the cell that is evacuating detects that and shuts down its instances very quickly. And there are various timeouts that apply there too. The cells don't have an infinite amount of time. Eventually, the operator can configure a cutoff timeout to say, well, you've been trying to evacuate this work for 10 minutes, it didn't work out, so just shut down and move on. We've got to get the deployment updated. Yeah, I think depending on your tolerance for downtime for your application, you can get away with just running that single instance. Obviously you won't be resilient to other kinds of failures on the platform. If the cell VM that it's on goes away or crashes, then you'll lose that instance, and it's gonna take the Diego system a short amount of time to react and reschedule it elsewhere. But if you're okay with that operational contract, compared to the amount of resource usage that you're willing to dedicate to it, then that can work fairly well for you even as the platform underneath undergoes those upgrades.
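To make the evacuation answer a bit more concrete, here's a tiny, illustrative Go sketch of that wait-for-replacements-or-cutoff behavior; the channel, function, and timeout value are hypothetical stand-ins, not Diego's actual evacuation code.

```go
package main

import (
	"fmt"
	"time"
)

// waitForReplacementsOrCutoff illustrates the evacuation behavior described
// above: the evacuating cell waits for its replacement instances to start
// running elsewhere, but only up to an operator-configured cutoff timeout.
func waitForReplacementsOrCutoff(replacementsRunning <-chan struct{}, cutoff time.Duration) string {
	select {
	case <-replacementsRunning:
		return "replacements are running elsewhere: shutting down local instances"
	case <-time.After(cutoff):
		return "evacuation cutoff hit: shutting down anyway so the deploy can proceed"
	}
}

func main() {
	replacements := make(chan struct{})
	go func() {
		time.Sleep(50 * time.Millisecond) // pretend the replacements come up quickly
		close(replacements)
	}()
	fmt.Println(waitForReplacementsOrCutoff(replacements, 10*time.Minute))
}
```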
So one of the main reasons for the split between the BBS and the auctioneer, in some sense it's a little bit historical or archaeological. The BBS is actually the most recent core component in the Diego runtime. All of these components used to coordinate directly through a BBS library that talked to etcd. So there was just an independent auctioneer, and there was no central server for the API in that case. And when we looked at centralizing that responsibility into the BBS component, we certainly could have moved the responsibility for the auctioneer into that component as well. But we've occasionally found it's useful to have that separation in terms of being able to mitigate problems on the platform. It's sometimes been useful to just shut down the auctioneer and stop placement to stabilize the platform if there is some sort of catastrophe going on. And we've also been able to keep that separation of concerns between the BBS managing the lifecycle of those things and the auctioneer having a more focused responsibility around polling the cells, making placement decisions, and communicating those out. All right, well, thanks very much for attending. I hope you enjoy the rest of the track and some of the excellent talks that I'm certainly looking forward to. Thank you.