I guess it's good afternoon now. Thanks for joining us. My name's Melissa Chapman. I'm the product owner for Cloud Foundry at T-Mobile. Today, Brendan and I are going to talk a little bit about our journey from deployment to our first production app, and we've got 30 minutes, so we'll try to get through the slides as fast as we can. We thought we had more time.

My name's Brendan Aye. I'm one of our Cloud Foundry platform architects at T-Mobile. I've been working with Cloud Foundry for about five years or so, though only a little over a year in production. Previously open source, but we switched over to Pivotal in the last couple of years. So we'll kick it off.

T-Mobile had some issues with our deployments. I'm sure some of you have seen long deployments: we were taking seven months and 72 steps to get new code out the door. That was everything from development to getting servers, networking, maybe storage, firewall rules, load balancing, DNS. All of these things together resulted in a very long time to actually get code to production.

And it was the same problem again when we tried to scale an application up. A few times a year we had big promos: Black Friday, the whole holiday retail season, and then things like Father's Day. Trying to scale up to meet demand for those meant we had to start planning pretty much as soon as the previous one was over. So we were always trying to scale up infrastructure, always trying to expand our applications, and then we couldn't scale down quickly enough.

Finally, environment configuration and stability. Some people have a great story around config management; we don't have a great story around it for applications. It resulted in development, test, and production servers that were all very different from each other, and that made deployments very complex sometimes, because what worked in development did not work in production. So we'd have to make changes at the last minute, shoot from the hip, and that results in further configuration drift that's harder to account for later on.

So we found Cloud Foundry. We had a development VP who sat in a room with Pivotal for a good eight hours or so, which is a pretty good show of faith in someone's product, that you're willing to spend that much time as a VP. He was immediately sold on the concept of microservices, and he wanted to go down that route, since previously we had a lot of large Java monoliths.

Along with that came shifting over to DevOps. We talk a lot about DevOps, but we hadn't really been practicing it very much yet; we were all very much separate development, test, and production teams. With Cloud Foundry we've been working to move towards a more formalized DevOps model, where we actually put these teams together and they are wholly responsible for owning, developing, and operating the services they build.

Consistency is a no-brainer: it's very easy for an ephemeral container to be consistent with another ephemeral container that you push the same way. Then there's the concept of fail fast, fix fast. It's hard to take on risk if it takes seven months to get something out the door. By allowing us to move more quickly, we're able to try things we might not try otherwise, and we're able to recover from that risk just as quickly. And of course, the elasticity we need, the scaling. One of the core principles of the cloud is rapid elasticity.
And so being able to get code out the door more quickly, and being able to scale these applications up and down to meet demand, sometimes even daily, which we're doing with some of our apps, has been a huge help to us with Cloud Foundry.

We talked about open source versus Pivotal. T-Mobile likes to have a neck to choke, and we have some Pivotal guys in the room here; you are all our necks that we can choke. Just having that expert guidance and support from Pivotal, where we can call somebody on the phone, was really a big benefit to us. In addition, we are primarily a platform operations team coming from a UNIX ops background historically. We're not developers; we can't begin to suggest the best ways to architect your apps to be cloud native. Having Pivotal with us gives us application architects we can bring in to talk to our app teams and help them build better applications. And finally, Cloud Foundry was sort of an experiment when we first started out; it wasn't guaranteed to be the standard going forward. Hiring people who are genuinely experienced in developing the open source components is a big effort, whereas if we bring in a vendor and we don't like them after a couple of years, it's much easier to part with those resources than with full-time employees.

We did face some problems. Every company has people, process, and technology making up the core of its IT, and we'll focus on each one of these.

Technology was one of our biggest ones to begin with. We were really built around stateful applications, stateful servers, large Java monoliths, and all of our infrastructure and setup was designed around this. Because of that, we had some challenges to overcome. The first one was our networking layout. We have a situation where a layer two network cannot exist on more than one switch pair. This means our fault domain is basically a single rack, a single switch pair, which for Cloud Foundry was not great. So what we have is a separate network for each one of our AZs, and this caused some problems for PCF, which was not set up to support this multi-subnet networking from the start. We had to figure out ways to work around that.

We also had no on-premises S3 object storage, and we weren't allowed to use the cloud for various political reasons, so we had to find something on-prem that would work for us. There was an option to use a built-in NFS server from PCF, but that was a single instance, not highly available, so it immediately did not meet our needs.

We wanted to run RabbitMQ and MySQL, but the tiles were single multi-tenant clusters, which for a large production environment is not great. If you have a bad neighbor who's producing a lot more messages than they're consuming, they can take down the whole cluster. Databases are the same way: someone with bad queries, or who floods the DB with things they really shouldn't, can destroy the whole cluster.

And finally, we have multiple data centers and multiple foundations, but as a platform team we don't have control over the global load balancing we need in order to distribute load across them. Without being able to control and automate that, we had to figure out solutions there as well.

So we knew we needed MySQL and RabbitMQ. We did this to support Spring Cloud Services, which was a requirement for our platform. But these tiles did not support the multi-subnet networks we had.
We had to actually crack open the tiles, take the BOSH releases from them, and deploy them manually with BOSH, which of course supports multiple subnets. PCF has since been updated to support all of this, but at the time we had to deploy these things manually, really just to support Spring Cloud Services.

For on-prem S3 object storage, we used LeoFS, an open source tool. We've since moved on from it, but at the time we created a BOSH release for LeoFS, got it running across all three AZs, and got highly available S3 object storage on-prem that we could use as our blob store for PCF.

We did not offer RabbitMQ and MySQL at launch for actual production workloads. They were used for Spring Cloud Services, for Hystrix and things like that, but not for production messaging or production databases. We told teams: you're on your own for databases. You have legacy teams you can use for databases or messaging, so you'll have to use those external services for now, until we can figure out a good solution.

And for cross-region load balancing, we said: you're on your own there too. You can push your apps to both locations and request your own global load balancing. We can't do any kind of wildcard for this, because you won't have the health checks you need. So really, you're on your own for that portion of it, and we'll work on solutions later on. I'll also mention these slides are all available on the schedule, so don't worry about taking pictures of them if you want to see them or follow along.

Process was one of the biggest challenges we had to face. Every company has a lot of process; the older the company, the more process. It grows organically over the course of the company's life, and it never really starts out the way it ends up. Getting back to business requirements was something we focused on pretty heavily.

Compliance, and we probably have some compliance guys in the room, was a big hurdle for us. This did not fit into any of our existing paradigms, so we had to really work with those teams on all of it. Initially we had trouble getting enough traction with them; they basically said, nope, it doesn't fit, we're not going to do this. By getting cross-organizational sponsorship from security, from development, and from operations, we were able to break through those walls and say: it doesn't matter that this doesn't fit right now. This is a requirement. It's going to happen. All of our leadership wants this to happen. So let's figure out a way to make it work that we're all happy with. There was compromise on both sides, but it was really the executive sponsorship that made people come to the table and work with us to fix these problems.

Wildcard certificates were another one. I've talked to a few other big enterprise shops about this; it's a problem a lot of people don't want to deal with. We said: wildcard certs are perfectly safe, you can use these things safely. Google has one on *.google.com. If they're able to do that safely, why can't we do it safely for something much further down the line from t-mobile.com? We had a lot of pushback on this, a lot of conversations back and forth. Eventually we just said: this is a hard requirement, let's figure out what's going to make it work. We cannot function without this wildcard cert. They wanted us to have app-specific certs or environment-specific certs.
And it was really just a non-starter; we could not move quickly enough if we had to get a cert for every app. It would result in: yes, you're on the platform, but you can't use a cert for another four to six weeks while we get the cert, install it on our load balancers, and configure our Gorouters properly. It was just a total non-starter for us.

And then finally, there are resources people need: pipelines, access to source control, access to an artifact repository. We don't control these resources, or we didn't at the start, and not being able to get those things when somebody needs them resulted in a kind of slow start for our customers. We'd bring people in and say, you should pipeline your deployments. And they'd say, how do I pipeline my deployments? And we'd have to say, oh, go talk to that team over there; hopefully they'll help you out in a reasonable amount of time. Those teams have since come to the table as well and worked with us on how we can control those resources, which we figured out is the only solution to this problem. If you want your customers to have a good experience and someone else is not providing that, you have to take control of it. You have to own the whole customer experience, from your platform to your data services to the add-on services like source control and pipelines.

Finally, we had people. This is always a fun one at companies. You have some people who are eager and happy to get involved, and some people who are a little slower to adapt and think the future is very bleak and they're not going to have jobs.

The biggest issue we saw initially was the whole DevOps culture not being practiced. We had some app teams and developers where we said, let's start this process, let's do a dojo with Pivotal, and you'll sit with Pivotal for four to six weeks while you work on your app. And they all said, we can't do that. We don't have enough time. We have other priorities, other things to do. Furthermore, they'd say: all right, we'll develop our app, you guys test it, and you guys over there run it. And we had to say, that's not really the goal here. You all own this together. The first dojo really set the stage for all of that and allowed people to understand the point of this: why it will make their lives better overall, and how they aren't dependent on someone external to them to test or run their application. It's total ownership. They're all responsible for the code they develop, how they test it, and how they run it. Making them responsible for all of that was the best way to get them to understand what DevOps truly means, and that it's not just putting a few ops guys on a dev team to tell the developers what not to do.

Around automation, we had people say, you can't possibly automate my job, or, my job's too complex, or, what would I do then? We faced a lot of that. Our traditional infrastructure teams focused on load balancing requests or cert requests or DNS, and we told them: your job for this platform is done from the start. You give us a wildcard cert, a wildcard DNS record, and a very simple load balancer pool, and after that you don't do anything else. Some people fear that and say, how am I going to function if I don't have these jobs anymore?
And we just basically said: this frees you up from the mundane, repetitive day-to-day work and lets you focus on higher-level architecture, on working with us to automate these things even further so you never have to do these tasks again. A lot of people got on board with that very quickly. We were paying some of our database administrators to install Oracle or SQL Server by hand through a GUI, and for someone making six figures, it just doesn't make sense to pay them to do that. They should be optimizing queries and actually doing the things they became a DBA for, the reasons they became a DBA. So we got them to shift their mindset: your job security is not your current job functions, it's your ability to do these higher-level functions and to maintain automation, instead of just focusing on what you do right now.

The whole cloud native thing was a problem for some of our developers. They were developing very stateful apps on stateful servers, so we'd get questions like, how do I share files between instances, or, how do I recover my logs after my instance crashes? We had to teach them: this is not stuff you should care about anymore. Develop your apps to be cloud native, and the platform will take you a long way toward these things. It's been a bit of a challenge. We still have some developers who ask things like, can you please mount some NFS points in all my containers inside PCF? And we're like, no, that's not what you should be doing at all. There are different ways to do this; you can look at external object storage, for example. It's a different way of doing things, and it's been a little slow to start.

What we've been doing is hosting PCF 101 sessions. This is basically a full-day session where we do lectures in the morning: we talk about cloud native apps, what that means and what it is, how PCF, or CF in general, is built, and how all the components work together. Then the afternoon is all lab sessions. People get their hands on the platform; they get to push apps, scale apps, bind services, and pull logs (the lab flow is sketched below). And they see pretty quickly that the stuff they used to care about isn't a concern for them anymore. Every time, it starts out with a few people in the room asking, well, how do I control my load balancers? How do I change DNS records? How do I get sudo access to the server? And we say, this is not anything you should worry about anymore, here's why, and you'll see it later today. It's an eye-opening experience. Some of our biggest critics at the start become our biggest proponents later on; they see the light and start telling other people how great this stuff is and how much easier it has made their lives.

Office politics, political bullshit, whatever you want to call it. We probably should have changed that text on the slide, but honestly, everyone faces it. Everyone's got to deal with it. People don't want to do what you think is their job; they don't think it's their job; it's just back and forth. We kept going back to that sponsorship. We'd say: whatever you think, whatever I think, our executives, our VPs, our senior VPs, our C-levels want this to happen. They said it's a priority. So let's figure out how we can make it work instead of trying to be a roadblock to it. Melissa will talk about our first application launch.
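A minimal sketch of that PCF 101 lab flow, wrapped in Python only so it's self-contained and repeatable. In the session itself attendees type the cf commands by hand; the API endpoint, app name, and service name here are hypothetical.

```python
#!/usr/bin/env python3
"""Rough sketch of the PCF 101 lab flow, driving the cf CLI from a script.

The endpoint, app, and service names are made up for illustration; in the
actual lab, attendees run these commands at a terminal themselves.
"""
import subprocess

def cf(*args):
    """Run a cf CLI command and fail loudly if it errors."""
    subprocess.run(["cf", *args], check=True)

# Log in and target the org/space handed out for the 101 session.
cf("login", "-a", "https://api.example.com", "--sso")  # endpoint is hypothetical
cf("target", "-o", "pcf-101", "-s", "lab")

# Push a sample app with 2 instances and a 512 MB memory limit.
cf("push", "sample-app", "-i", "2", "-m", "512M")

# Scale out, bind a service instance, and pull recent logs:
# the things attendees discover they no longer need tickets for.
cf("scale", "sample-app", "-i", "4")
cf("bind-service", "sample-app", "my-mysql")
cf("restage", "sample-app")
cf("logs", "sample-app", "--recent")
```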
Yeah, so as Brendan mentioned, kudos to Pivotal and their dojos. I don't know if other companies offer them, but hands down, we would not be where we are today without them, so I'll give them some good plugs.

We mentioned earlier that we had this huge monolithic application, and when Pivotal came in, we were trying to find an application: which one do we build? We happened to find a very, very good dev team that was motivated, knew Cloud Foundry, knew what microservices were, so it helped that we started with a very smart team. The second piece was that it was a huge application; we call it the middleware for our middleware. It was very convoluted, had thousands of calls, and it was kind of crazy. So we said, okay, we're going to pick one call, something low impact but with big volume. It ran anywhere between two and a half and 12 million transactions a day, roughly.

I'm going off the slide here. The call was get-usage. Whatever you do on your mobile phone, if you want to check how much data you've used, through your phone, through the website, and I believe through our IVR system, you went through this call. So it was low impact: if it didn't work for a few seconds, you would probably hit refresh and it would be okay. If they had to do this previously on our traditional infrastructure, it would literally have taken them probably seven months, and they would have ended up buying 10, 15, 20 VMs just for a promotion, and it would take forever to get them. With this, it was a day or so. It was nothing for them. The plan was to rebuild it on Spring. We talked a lot about refactoring versus just lift and shift, and I'll say we did a combination of both. For the most part it was a lift and shift; they just had to refactor a handful of pieces to make it work.

So, big bugs happen, right? Outages happen, and we're very shy, especially in the operations world, about telling people that things like this happen. This app had been running for probably a week and a half, two weeks, and they had auto scaling turned on, and they kept noticing that instances were auto scaling for no reason. It would crash and spin up, crash and spin up. It was no big deal: there was no outage, there was no impact, no customers noticed it. The architect happened to be in there looking at logs and went, I think we have a bug. What's going on here? He literally started digging through the code; they found the bug, fixed it, and rolled it through all of their environments in one day, with zero impact. Their VP actually sent a mail up to our CIO and said, hey, just so you know, we know we don't celebrate outages, but look what we just did. That was a huge turning point for us. Since that day, our CIO pretty much went all in and said, this is where we're going, this is our future. We now have pretty much 100% of that traffic in PCF, taking about 40 million calls a day. That's pretty fantastic.

And then, like I said: happy, happy, happy. T-Mobile used to have, and still does for our traditional apps, huge release weekends. People come in on Friday night, we get pizza and dinner and the whole nine yards, and they're pretty much there all weekend. Well, not us. We release on Tuesday, we release on Wednesday.
You name it. They typically don't do releases on Saturdays because nobody's there. And we have a test store now that we can test with, which is actually only open Monday through Friday, so it's even better. So it's been good.

We started our platform dojo around May of last year, just before last year's CF Summit. We finished up and launched our platform to production around early July, around July 1st. By August, around August 5th or so, actually during the SpringOne Platform conference last year, they had shifted 100% of the traffic over to PCF. So within four weeks of us going live, they had already moved their entire app over and were taking all the traffic inside PCF. And they did it from SpringOne: we were all down there, and they launched their product on our platform while we were all at a conference. Very good velocity from them. They've since been moving as fast as they possibly can, taking all the apps they can, either doing a lift and shift or splitting them into microservices, and it's worked very well for them.

So, looking at our platform adoption: August is when we started getting good metrics around our number of application instances (AIs). By February we had started to ramp up quite a bit, with about 1,000 AIs running on our platform. Our CIO said this is the way forward: we're going to move to microservices for anything new, and for any apps we're going to keep for a long time, let's start refactoring those, breaking them into microservices, and running them on PCF. That so-called microservices mandate really sped up our adoption. Just about two months after that, we had doubled our AIs to 2,000.

Due to some compliance concerns, we also set up automated provisioning of permissions to our platform; handing out OrgManager rights was a security risk in the eyes of our corporate security team. What we actually did is use a tool called cf-mgmt, written by one of the Pivotal guys, which lets you use a Git repo as the source for the permissions on your orgs. By doing this, you can leverage everything you already have around source control, like pull requests and repo permissions: it's very easy to see who approved something, when they approved it, what they changed, and who actually changed it. This then gets applied through a pipeline to the foundations themselves (there's a sketch of the pattern below). That let people move very quickly; instead of us having to help them with permissions, they do it all themselves through PRs or direct commits to source control.

Within about a month after that, we were already at 3,000 AIs, and at that point we had our users themselves selling the platform for us. They were telling other users about the platform, helping them with problems, helping them onboard. Really, all the teams involved started moving this ahead for us. I think we're over 4,500 now, just about a month later, and growing very quickly. We've got nine foundations right now: two of which are for sensitive data, a PCI/SOX environment, one for non-production, and the rest all production in one sense or another. So it's been moving very quickly and has really been very rapid adoption for us.

So, in terms of what mattered to us: the biggest one was executive sponsorship, which for us was absolutely critical. Without that, we could not have gotten anywhere that we needed to be.
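A quick sketch of that Git-driven permissions idea. cf-mgmt itself is a Go tool with its own configuration format, so this is not its actual layout; it's just a hedged Python illustration of the pattern, a Git-managed file applied through the cf CLI by a pipeline job, with the file format entirely made up.

```python
#!/usr/bin/env python3
"""Illustration of Git-driven org permissions (NOT cf-mgmt's real format).

A YAML file lives in a Git repo and is only changed via pull requests; a
pipeline job runs this after each merge, so nobody hands out OrgManager
by hand. The orgs.yml layout below is invented for this sketch.
"""
import subprocess
import yaml  # PyYAML

def cf(*args):
    subprocess.run(["cf", *args], check=True)

# orgs.yml, reviewed via pull request before the pipeline ever sees it:
#
# orgs:
#   - name: team-billing
#     managers: [alice@example.com]
#     auditors: [bob@example.com]
with open("orgs.yml") as f:
    config = yaml.safe_load(f)

for org in config["orgs"]:
    cf("create-org", org["name"])  # succeeds harmlessly if it already exists
    for user in org.get("managers", []):
        cf("set-org-role", user, org["name"], "OrgManager")
    for user in org.get("auditors", []):
        cf("set-org-role", user, org["name"], "OrgAuditor")
```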
Without that sponsorship, we would not have gotten the collaboration from other teams, and we would not have been able to break down the walls between organizations that typically prevent us from doing things like this.

Fight for what matters. The wildcard certs, for example: we probably could have made it work without them, figured out app-specific certs or something else like that, but it would not have been the right decision. Picking your battles, figuring out what truly matters and what you need to ensure your success, was very important for us. Sometimes you just have to put your foot down and say: we can't do this any other way, we have to make this happen, so let's figure out how we can compromise and make sure we're all happy with the approach. You have to pay attention to that stuff.

Branding and sales: Melissa's our product owner and also our number one salesperson. She'll find anyone who has an application and get them excited about PCF. By having a consistent message that she delivers, we find people who like it and we say, please talk to Melissa; she can help get you onboarded, help get you started. She gets them rolled into our PCF 101 sessions so they're on the same page as we are, and that's been very important for us. A lot of big enterprises will say the same thing: a consistent message to all of your users, and figuring out how to address them all the same way, really goes a long way.

Slack has been invaluable for us. It's the way we communicate with all of our customers. People ask, do you have a DL you can use to email all your users? We say, no, just get on Slack and join our Cloud Foundry channel. We have 300 or 400 users in there from T-Mobile who are all actively using our platform, so we communicate with them that way. We talk about any outages or upgrades we might be facing; people post questions, and either we answer or somebody else answers. We have a chatbot running there that will take in tickets, help with org requests, things like that. It's been kind of our central clearinghouse for all of our customers. We also use it ourselves for our stand-ups, and we use it for all of our monitoring. We use Datadog for monitoring, so all of our alerts go directly to Slack and we can see a history of everything going on there. It's been great for us.

Control your own destiny. Our comms guy picked the picture, but it seems pretty fitting. Like I said, take control of the stuff you need to be successful; for us, that was control of and access to Git. We had an incident caused by a Config Server repo running on a Git server that was not highly available. The Git server went down, Config Server couldn't pull its properties, applications couldn't restart or scale or stage, and it caused application impact for those teams. So we decided right then and there that we couldn't trust someone external to our team to run this, because historically they were a development team; they weren't running that Git server for production. We had to work with them to figure out the best architecture to use for Git, because we saw it as totally critical path for us: if we use it for Config Server and it's down, we're going to have issues.

And finally, monitor everything you care about. Anything I can get access to that involves our platform, I will monitor: switches, load balancers, the infrastructure itself, obviously, and then every component of the platform.
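All of that alerting ends up in Datadog, and, as comes up in a moment, some small Python scripts let those alerts drive a Cachet status page. Here's a minimal sketch of that kind of glue, not the production script: the component names and IDs, environment variables, and webhook payload fields are all assumptions (Datadog lets you template the webhook body yourself, so we assume it was set up to send a component name and a transition).

```python
#!/usr/bin/env python3
"""Sketch of Datadog-webhook-to-Cachet glue. Illustrative only.

Assumes a Datadog webhook integration whose body was templated to send
{"component": "<name>", "transition": "Triggered" | "Recovered"}.
"""
import os
import requests
from flask import Flask, request

app = Flask(__name__)

CACHET_URL = os.environ["CACHET_URL"]      # e.g. https://status.example.com (hypothetical)
CACHET_TOKEN = os.environ["CACHET_TOKEN"]  # API token for a Cachet user

# Map component names from our Datadog payload to Cachet component IDs (made up).
COMPONENT_IDS = {"gorouter": 1, "diego-cells": 2, "mysql": 3}

# Cachet status codes: 1 = Operational, 4 = Major Outage (2 and 3 are partial states).
STATUS = {"Triggered": 4, "Recovered": 1}

@app.route("/datadog", methods=["POST"])
def datadog_webhook():
    payload = request.get_json(force=True)
    component_id = COMPONENT_IDS[payload["component"]]
    # Update the component's status via Cachet's components API.
    resp = requests.put(
        f"{CACHET_URL}/api/v1/components/{component_id}",
        headers={"X-Cachet-Token": CACHET_TOKEN},
        json={"status": STATUS[payload["transition"]]},
        timeout=10,
    )
    resp.raise_for_status()
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```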
We send it all off to Datadog, and then we do whatever we have to do there to configure monitors and thresholds and send alerts to ourselves. We even use it to drive a status page built with Cachet, where our Datadog alerts set the component statuses in Cachet through some Python scripts. By looking at every single component and making sure we know what's going on, it helps us figure out what went wrong later on, or what's going to go wrong in the future, even for the stuff we don't manage directly, like the infrastructure. So monitor the stuff you care about; we tell our app teams the same thing. If you depend on your Config Server for your properties, monitor your Config Server. Make sure it's up before you do anything. You can check its health; it's very easy to tell at a glance whether your Config Server is working, it's already built in. I would just add to that, though: if you have a non-technical member on your team and you send them alerts, make sure you tell them that the world isn't ending. Yes, very true.

That's about it for the slides. Any questions?

The question was about which cloud provider we're using. We're actually running entirely on-prem with vSphere. We have that in two data centers right now for PCF, and we're looking to deploy to AWS in the coming weeks. We don't have a clear goal for AWS yet, but we expect it to be for burstability, for big promos coming up and things like that.

For redundancy, do you keep them in sync, or is it just two parallel environments with load balancing? It's two entirely separate foundations. It's up to our customers right now to push things and to load balance themselves across them, which some of our customers are doing. For data gravity reasons, some of them don't run in our traditional DR data center, so we're working on deploying a second production foundation in our primary DC so people can run in separate foundations there, kind of like AZs inside an Amazon region.

Yes, in the back. The question was about the last year and what our plans are going forward. We're continuing to expand our offerings. Like I mentioned, we had some manual deployments of MySQL and RabbitMQ that got us into a not-great state with PCF, where we couldn't really upgrade our foundations, at least not cleanly. So we're deploying second foundations on 1.9 and 1.10, where we have the multi-subnet support, to replace those existing foundations, shoring that up and getting ourselves into a better state. Beyond that, it's really starting to work on the load balancing story, to allow our customers to push an app one time and have it run across all of our foundations that fit that context; that's one of our biggest goals. We're also starting to offer those data services, bringing in a partner to help us with on-demand service brokers for MySQL, RabbitMQ, and Kafka, to allow our customers to have dedicated clusters instead of the shared multi-tenant clusters. Those are the current plans, at least.

Yes, in the back. That one was about how our PCF 101 sessions are run. I would say they're fairly formal. They were initially all driven by Pivotal, and now it's much more collaborative between us and Pivotal. We've got one of our local guys who comes in, and he's got good slide decks, good labs.
They usually build on the same material we've done previously: one slide deck on architecture, one on Cloud Foundry more generally, and then labs focusing on different areas, some Spring Cloud Services, MySQL, pipelines, things like that. So it's been fairly formal and consistent, which gets people on the same page with us and makes sure they've all seen the same experience. I will say that in the afternoon lab sessions we often break out, where people say, oh, I've got an app, can I try running that? And we're always a little nervous about that: sure, we can try running it, it might work, it might not. But we try to keep the sessions pretty consistent and on the same message.

We generally try to do a PCF 101 every month or so, wherever we have growing interest among different groups. We'll say we're planning one for next month in Bellevue, or in Bothell, in the Seattle area, and we'll get those people involved. From there we say: you have this org and space that we set up for the 101, and you'll have it for 90 days. If you'd like to start running anything beyond just a POC, let us know and we can set up an org and space for you pretty much the same day. Our goal is to get people moving quickly if they're interested and actually want to be involved. We get them on very quickly, set up an org for their team, and they can start selling it to their own people. But I will say we ran our first PCF 101 probably in February of the year we did the deployment, so we did all this great training for several months and had nowhere for people to go and play. If you're looking at building something like this, make sure you have an environment ready for them right after, because some of them adopt quickly and some take months to start playing. So just be prepared for that.

Yes. That one was about how our team is staffed. We have a separate infrastructure team supporting the on-prem IaaS; we support primarily the platform itself. We started out with two guys and grew to three, and now we have basically three guys who are more the platform architects slash operators, and then one or two guys who are more day-to-day operations, answering customer questions and things like that as teams get onboarded. We have one developer who builds custom tooling for us. So it's primarily three or four guys supporting the platform. Issues generally start with our team, and if it's not something we can answer as non-developers, we'll loop in our Pivotal guys. We've got two great resources from Pivotal who are helping us be successful, and they'll dig in with the app teams, even come on site, to make sure the teams are able to resolve the problems they have. And it's mostly been just moral support: when teams have issues, we'll hop on calls with them, but for, I would say, 99% of all the issues we've had in the last year, it's been their code.

Yes. The question was whether most of our workload is non-production. Yes, it is right now; we'd estimate at least 65% is non-production. Initially we had a single foundation for production and non-production, which was not a great idea for several reasons. We're splitting that apart, so we'll have a better picture in the coming months, but we estimate that the majority of our traffic is non-production right now. We've also found that a lot of our application instances are Spring Cloud Services instances.
So we'll have app teams that run a single app in a space, and for their microservices they'll have Config Server, Hystrix, and Turbine instances. So three app instances might have three SCS instances alongside them. We've seen a lot of additional swell from that.

Yes. Yeah, so generally for orgs and spaces, we'll onboard people with probably a 50 gigabyte org. It's up to them if they want to further subdivide it into their own spaces; we give them that capability. We're not very restrictive in terms of how much they scale up. If we see someone say, I've got 50 gigs, can you please give me a terabyte, we'll want to know what they're actually doing. But for anything under 200, 250 gigs or so, we're pretty free about giving that out. We don't worry too much about it.

Yes. So we're running our middleware on our platform, so it's all Java based. It's TIBCO, their Container Edition, and I think each service runs anywhere between one and two gigs at any given time. Right now I think they're actually at 650, maybe 700 gigs of memory. It's a big app. We also tell them, for availability, please run at least three instances of your app; that will make sure you're spread across all three AZs, so as we do rolling upgrades, or if we lose a whole AZ or even two, you won't have any actual application impact.

We add nodes in very deliberately. We keep a close eye on our overall Diego cell memory availability, how much we have left, and when that gets below a certain threshold, we look at our IaaS and figure out whether we have enough capacity there. The IaaS we have for this is dedicated: separate nodes, separate racks, separate clusters, a separate vCenter, just for this purpose. So we know we can use as much of it as we need and be okay, and we work very closely with our IaaS team to make sure we have enough capacity there and scale up the platform as needed to meet demand.

Yep, I want to be sensitive to the time; I'm sure we have someone else presenting pretty shortly. Any more questions? Well, thank you. Have a good rest of your afternoon. Thank you.