All right, good afternoon, everyone. I hope you're all loaded up with caffeine and nobody falls asleep; it's that difficult time of day. The intent here is to show you what we have done from a PayPal perspective, to share our learnings, and hopefully to learn from the experiences of the rest of the community that the foundation is helping coordinate. I think we've done some fairly interesting stuff, and some things we only found out after we fell five times; hopefully you won't have to fall five times. That's the intent. We'll try to show as much detail as we can. There are some details we won't be able to cover here, but afterwards, online or maybe over drinks, we can definitely talk. We actually have a member of the extended team here, so we can get down to a fairly detailed level.

First, a little about PayPal. Hopefully all of you have heard of PayPal, but just to give you the business perspective: we have 123 million users, we process $300,000 in payments every minute, we are in 190 markets and 25 currencies, and PayPal is the world's most widely used digital wallet. That's the business we are in. So, moving forward: where did we start?
Some of you who have been in operations can maybe relate to this: not everything is cloudified, and the day-to-day stuff is painful. I wanted to put down some of our own experience, and some of this is dirty laundry, but we wanted to share the dirty laundry. For us to launch a small service that my team was trying to deploy, it took a lot of tickets, a lot of meetings, and a pile of different design documents. It's not that the teams were not trying to do the right thing; they all had different work on their plates, and they'd say, "I'll get to it, just submit a ticket." The teams were siloed. They all wanted to get the work done, but sometimes you gave all of the information to one team and the other team didn't have it. This was our own internal reality, and it gave us a sense of what our development teams were going through, because they were giving us information and hearing back, "I didn't get it, give it to me again," or "This is not right, that's not right." So this was the world we were living in day to day, and I think those of you supporting major sites, or doing day-to-day infrastructure engineering work, can maybe relate. Maybe not, but this was our story.

Now, talking to our development teams: they are under pressure to launch things quickly. There were three things our developers wanted all of the time, because that is the business expectation. The three most important things to them were agility, agility, and agility. And they said, "But we want it at scale, without compromising availability." That's what they wanted. They said, "Guys, I know my code is ready. I want to be able to go from here."
"I want some sort of CI/CD: a gate, then QA, then another gate, then pre-live, and then I'm live. Why does it take you so long to do this basic stuff, and why do I have to submit so many tickets just to launch my product? That's what the business is asking me for." The reason there were so many challenges from our perspective is that our QA and staging environments were a lot different from production. Some things that worked in QA and dev did not work in production because we had different configs: our firewalls were different, our environments were different, our network topologies were different. So our developers were going through a lot of pain, and sometimes they'd say, "We just want to test something. It might or might not work, but we want to be able to roll it rather than waiting a long time." We talk quite a bit about DevOps, but at the end of the day there are some real challenges. Developers just want to launch something in production, and from an operations perspective we say, "You have to go through this block and that block," and their answer is, "Then automate it. I don't want to have to deal with what's underneath." Underneath, we can call it PaaS, we can call it IaaS, we can call it OpenStack; I don't think it matters to them. They want to deliver something to production.
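To make that gated pipeline idea concrete, here is a minimal sketch. All of the stage names and gate checks are illustrative stand-ins, not our actual pipeline:

```python
# Minimal sketch of a gated CI/CD pipeline: a build moves dev -> QA -> pre-live
# -> live, and each promotion must first pass that stage's gate checks.
# Stage names and checks are illustrative only.

STAGES = ["dev", "qa", "pre-live", "live"]

def run_gate(stage, checks):
    """A gate passes only if every check registered for the stage passes."""
    return all(check() for check in checks.get(stage, []))

def promote(build, checks):
    """Walk the build through every stage, stopping at the first failed gate."""
    reached = []
    for stage in STAGES:
        if not run_gate(stage, checks):
            return reached  # gate failed; the build stays where it is
        reached.append(stage)
    return reached

# Example: unit tests gate QA, smoke tests gate pre-live, sign-off gates live.
checks = {
    "qa": [lambda: True],        # unit tests passed
    "pre-live": [lambda: True],  # smoke tests passed
    "live": [lambda: False],     # sign-off still pending
}
print(promote("build-42", checks))  # -> ['dev', 'qa', 'pre-live']
```

The point of the gates is exactly what the developers were asking for: the promotion logic is automated, and a build that fails a gate simply stops, with no tickets involved.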
This is not real; these are our aspirations. So from here: okay, we know what we want to do. But before we started building something, we wanted to have a concept of what our guiding principles were. A lot of different teams, even internally within the Inc, were doing different things, some of it really interesting. But since we were starting with a new approach, we said: let us come up with some guiding principles we can talk about.

First, we said we would adopt open-source solutions wherever possible. I don't think open source means bad quality; some people have that idea, and I truly believe it's the opposite. Second, we did not want any vendor lock-in. That does not mean we don't work with vendors; we work with a lot of vendors. We just want an abstraction layer that the entire industry supports, where everyone knows what the APIs are and we don't have to explain something to somebody. Then of course industry best practices, and also leveraging the industry and eBay Inc. We had a lot of different engineering teams, from our Marketplaces team to our X.commerce team, all starting to think about whether OpenStack makes sense, and our goal was: whatever we do, let's make sure we can all leverage one internal open-source model wherever we can.

On functionality, we said we want to think from a developer's perspective. That's the concept behind a self-service tool for application lifecycle management. Some people call it PaaS, some people call it something different, but from our perspective we were asking: what is our development team looking for, and what can we do from their perspective from the get-go?
The other thing was making sure we had some automation and orchestration. Some existed at the time we started, but it all started with one team and ended in that team. The systems team had an amazing tool; the networking tool had no idea it existed. They didn't know how to talk to each other. What we were really going after was an operating system for our entire infrastructure. That's the level of agility we were thinking about, because at the end of the day, from a developer's perspective, they are looking for compute, storage, and whatever other infrastructure comes with it, so they can deploy their service. They don't care that "my network ticket is done but my systems ticket is not done." That is internal complexity we needed to get over.

The third principle: sometimes there's a huge demand for a certain service, and we wanted to be able to fulfill that demand in a matter of clicks, rather than going through multiple teams and all the time that would take. We are in a business where sometimes things don't pan out and sometimes things grow huge. So this was our "on-demand capacity fulfillment." Maybe it's a marketing buzzword, but we said, you know what, we'll use it. Those are the basic guiding principles we were going after.

Based on these guiding principles, this is what we came up with, and that's the reason we picked OpenStack. We picked OpenStack because it matched our guiding principles. At that point we didn't know whether it was going to gain significant momentum or not; that was a secondary thing for us.
The primary thing was that it worked for us, because it met the guiding principles we had come up with, which were unique to our business challenges and to the business needs we were getting paid to satisfy. Our goal here was to have a common infrastructure as a service that is completely identical in our staging environments, where we test our code; in M&A, so when we potentially buy companies they don't have to worry about it, it's the same thing; and in production. Underneath, it provides the common things: compute, storage, network, load balancers, firewalls, DNS, basically the things you need to construct an infrastructure layer you can roll your code onto. That's where we thought OpenStack was going to help us. OpenStack is a foundation for us, a common and consistent foundation where we can expose our APIs; on top of that we were going to add another abstraction layer, and on top of that sit our business units. We have different business units, and they can all come in and consume our infrastructure in one consistent manner. The key thing was that OpenStack met our guiding principles, and that's the reason we picked it.

Expanding a little on that: this is primarily what our product development teams were screaming at us about. They said, "It's great that you can have whatever complicated infrastructure you want. Basically, if I have an idea, I want to be able to change my code, which I'm fairly intimate with, deploy it, and be done. Underneath it is all of the complexity you're telling me about: I can't do this because it's related to compliance, it's related to security, I didn't put this ticket in right."
"I didn't put that ticket in right. We get it, but that is something for you to manage. These are the things we care about." That is the feedback we got once we spoke to several teams. So we said, okay, we get it. We picked an IaaS platform and we know the things you want to do, but we're not going to be able to do everything. And they said, "No, no, whatever. If you can make our life a little better, we'll take it. If you can actually give us infrastructure on demand, it will be an amazing accomplishment for us." So those are some of the things we were thinking about from a North Star perspective.

OpenStack is a beginning; it's the journey we are on. If somebody asks me when we're going to be done, hopefully never, because we want to keep making things better. When we talked about going from developer desktop to production, we said an hour. Now, as things are coming along, some of our executives are saying, "Isn't an hour too long?" We're like, we're not even there yet, but people are realizing the value of it and asking what else we can do. The engineers are really excited about OpenStack.

So this is some of our technology stack. Nothing fancy, to be honest; fairly generic compared to the rest of the OpenStack deployments out there. Basically, we have our operations portal.
We also have our PD deployment portal. The operations portal is primarily geared towards our cloud administrators and infrastructure engineering folks. The PD deployment portal is where we want our product development counterparts to come in and manipulate things. We do not want to give them access to the IaaS APIs directly; we wanted some sort of layer in between, and the orchestration for both portals is done by Heat. The other components are listed here. The only PayPal-specific things we did were related to load balancer as a service, DNS as a service, and firewall as a service. The rest, from a hardware perspective, is x86 compute, storage, network, and load balancers. The key was our APIs up top. That's when we could tell our vendors: you want to come play with us? Great, help us. Because when we are managing infrastructure at our scale, we want common tools and common APIs, which are open, and we are not the only ones saying that; the rest of their customers are all saying the same thing.

So that is the basic stuff we've done. On load balancer as a service and DNS as a service, our counterparts on the Marketplaces side had already done it, so we said: if you've done it, we want to copy it. Our goal was not to reinvent something; our goal was to put these things together and see whether it makes sense from a business perspective, and for the business value we are trying to provide to our engineering partners on the product development side.

It took us a little bit of time. What I'll do is talk a little about this step right here, from idea to reality. It took us approximately six weeks, and we had two engineers working on it.
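As a rough illustration of the idea of one self-service call fanning out to compute, load balancer, and DNS behind a single abstraction layer, here is a simplified sketch. The service classes, names, and addresses are invented stand-ins, not the actual Heat templates or service APIs:

```python
# Illustrative sketch: a single self-service "provision" call that fans out to
# compute, load balancer, and DNS behind one abstraction layer. These classes
# are stand-ins for the real services, not actual OpenStack or PayPal APIs.

class Compute:
    def boot(self, name, flavor):
        return {"node": name, "flavor": flavor, "ip": "10.0.0.10"}

class LoadBalancer:
    def add_member(self, pool, ip):
        return {"pool": pool, "member": ip}

class Dns:
    def bind(self, hostname, ip):
        return {"record": hostname, "ip": ip}

def provision(app, flavor):
    """One call a developer portal could expose: boot the node, register it
    with the load balancer, and create its DNS record."""
    node = Compute().boot(f"{app}-001", flavor)
    member = LoadBalancer().add_member(f"{app}-pool", node["ip"])
    record = Dns().bind(f"{app}-001.example.internal", node["ip"])
    return {"compute": node, "lb": member, "dns": record}

result = provision("checkout", "medium")
print(result["dns"]["record"])  # -> checkout-001.example.internal
```

The point of the sketch is the shape, not the details: the developer never talks to the compute, load balancer, or DNS teams individually; one orchestrated call does all three steps.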
We picked two specific applications, not the entire application portfolio. We wanted a fairly narrow use case that we could take from top to bottom, so we could at least validate that for these types of applications it works. It seemed like everything the team's initial investigation had found was actually possible. Some people were also saying, "It works, because I ran it on my laptop." That's a common thing we hear from engineers, but we wanted to make sure it runs at the scale we need. There's an internal joke: one of our architects says everything is running because he has it running on his laptop, and he asks, "When are you going to have Grizzly installed in production? Because I have it running on my laptop."

So we said, let's put our milestones together. These are our actual sprints: when we started and when we ended. I had to take a few things out, but basically this is the same material we use internally. If you look at it, it's fairly basic stuff. We started out in our lab, because I wanted to make sure we didn't blow anything up. We had a DNS-as-a-service implementation from our Marketplaces counterparts; we copied it, and we copied with pride. We figured: they developed it, why should we waste our time?
There were some specific use cases we could modify it for, because we're part of the Inc family. The next items were making sure we had the portal set up, load balancer as a service, and the lab setup; the pool view is just our internal application view at a high level, and we were able to provision. Those were the first three sprints, and by the fourth sprint we were fairly comfortable, because in the fourth sprint we were no longer doing this in the lab; we were doing it in an isolated instance in production, for two specific applications. That was the key decision: we did not want to say we are going to do this for our entire stack. If you come from running a website or a product that has been running for the last ten years, there are some things you cannot move, because there are so many skeletons in the closet. That's why we picked just two applications, and these two applications were the ones where our application teams told us, "This is our pattern, our container, for the future." That's why we picked them: to ensure anything new we build gets the agility.

Our business, our product, and our landscape were changing so rapidly that what used to be acceptable at two or three months, the teams now wanted done in 15 to 30 minutes. The reason is that the industry was already there. Developers could go somewhere else, and if they had working code they could launch something in production, and their question to us was, "Why can't I do that internally? These are my expectations." So we wanted to stay fairly focused. This was our concept, fairly straightforward, and we learned a lot of things.
It was a fairly aggressive schedule, and the reason it was aggressive is that we wanted to make sure we could do it, and if we failed, we wanted to fail fast. That was another piece of feedback from the culture within PayPal: "We want you to take chances, but if you're going to fail at something, fail fast and learn from it." That was the idea.

All great stuff, but how do I go tell our business counterparts that we've done something phenomenal? Our SLA, where we had none before, is now 1 to 15 nodes in less than or equal to 30 minutes. Before that, as you can see, it would take three weeks even when escalated, because of cumbersome processes and other internal process challenges. What did we do? We did our DNS bindings, our load balancer provisioning, and we incorporated our existing production monitoring for those two specific applications. The thing we did not do, because we didn't think we could in the time frame, was firewall rules. When we spoke to the team responsible for the firewall, given the lack of APIs for the device, we just said: it's not in, we are not going to do it. The other thing was code deployment. The reason we did not do code deployment:
We have a lot of business policies that dictate when we deploy certain bits. It might be related to an announcement, a product commitment to our external consumers, or other dependencies that the specific application relies on. There's a lot of business logic there, so we said we're not going to be able to do that, but we could do all of the other things.

We were ready around the October 13th time frame, and after that, by the end of Thanksgiving, we go into a bit of an internal freeze, just to make sure we are not making many changes because of the heavy traffic. I think some of you in retail or at other major companies do the same: no huge changes during the Christmas time frame. That's our busy season, and some of our staff is off. So that was then.

We were fairly impressed. We did run into issues, but not as many as we expected, and it gave us confidence that we were running OpenStack, the bits we were not as comfortable with because we were not sure, inside our production environment. One of our engineers ran into an issue, with a 45-minute window for us to make a change. He put it on one of the chat boards and got seven or eight different answers in ten minutes. That proved something to us: there are a lot of smart people outside our company, they all have a passion for making this open source work, and we were able to use that for our business benefit. We're also finding ways to contribute back, whether the use cases or anything else. That gave us the level of confidence to move forward.
By the April time frame, we had it deployed in our web tier, where the majority of our customer-facing applications are. We are rapidly expanding to our mid tier, where some of our business services are, and also to our management tier, where we do the majority of the infrastructure management for the different compute we have in the enterprise. We also have fairly aggressive plans, with milestones and dates, to expand it to dev and QA and to our mergers and acquisitions. If you remember our initial picture, we wanted a common IaaS layer, a common data center operating system that manipulates compute, storage, and network. We proved it on a fairly aggressive schedule of six weeks, which gave us confidence, and now the team is moving on and making further investments to help our business with the agility we are looking for. The good thing is it's not just those two engineers anymore; we have a little more help, so we can move at a fairly rapid speed.

This is a little of what we are running under the hood. It's a basic setup, like any other major enterprise: commodity hardware, high-density racks with top-of-rack switches. Internally we also have two different fault zones in how things are wired to the network, so if something happens to one, we can easily migrate services to the other without impact. That comes from managing our availability requirements: any time a service is down, as you know, it's a big deal, because we are losing money and everybody's on the bridge. So high availability is very critical for us. We have a standard hardware profile across our development and staging environments, and we copied some things from the industry.
We have small, medium, and large. I think one of the teams internally wanted to feel important, so they wanted an extra-large profile; we'll work with that. The goal here is that, where before different business units would say, "I want this hardware, I want that type of network access, or these types of requirements," now we can at least say: the industry has changed, and so has the way we deliver service. It comes down to common building blocks that we can repurpose. For example, our aspiration is that we can take QA and make it production, and vice versa, in a matter of minutes. Maybe initially it takes a couple of days, but the building blocks are all the same. We also have an engineer who is responsible for talking to the different PD engineering teams about their needs; he then comes up with a profile: a small for this application, a medium for that one. We're also trying to have better documentation on which application matches which compute needs.

The other one is an actual picture. The engineer I'm talking about has a lot of pictures; he definitely makes fascinating stuff. We wanted to put it here to show that the things we're talking about are things we actually have, so we can compare with what others in the foundation or in the OpenStack community are doing and learn some things.

These are some of the technical challenges and lessons we learned, from our perspective. It does not mean they apply to everybody.
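The standard profile idea above can be sketched very simply. The sizes and resource numbers here are made up for illustration, not our actual profiles:

```python
# Sketch of standard hardware profiles: a small catalog of named sizes, plus a
# documented lookup from application pattern to profile, instead of every
# business unit requesting bespoke hardware. All numbers are illustrative.

PROFILES = {
    "small":  {"vcpus": 2,  "ram_gb": 8,  "disk_gb": 100},
    "medium": {"vcpus": 4,  "ram_gb": 16, "disk_gb": 200},
    "large":  {"vcpus": 8,  "ram_gb": 32, "disk_gb": 400},
    "xlarge": {"vcpus": 16, "ram_gb": 64, "disk_gb": 800},  # for the team that wanted to feel important
}

# Hypothetical mapping of application pattern to profile.
APP_TO_PROFILE = {
    "web": "small",
    "mid-tier-service": "medium",
    "batch": "large",
}

def profile_for(app_type):
    """Return the resource profile an application pattern maps to."""
    return PROFILES[APP_TO_PROFILE[app_type]]

print(profile_for("web")["vcpus"])  # -> 2
```

The value is less in the code than in the constraint: a handful of published building blocks that any environment, QA or production, can be assembled from.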
This just means these are the things we hit. I'm not going to go through all of them in detail. Some of them were actually fairly simple, but we spent 12 to 18 hours on them before finding out the root cause was fairly simple. Those are some of the pains we had. One time we were running into an issue and told the site services guys to reboot some nodes. They rebooted other nodes, ones that were live in production, and those came up read-only. It became a big thing, "OpenStack is not stable," and we all panicked. We spent a lot of time on it, and then we started doing the analysis and realized: okay, we shot ourselves in the foot. The amazing thing was that our monitoring did not pick it up. Those are some of the lessons.

It's not only the technology perspective; there are also process lessons. We had to recognize that some of our processes were not going to scale when we are talking about manipulating and managing this many nodes in a software-defined world. Some of the other items are compute tuning and other tuning; the good thing is we have a good ecosystem of partners and vendors we were able to rely on. And the good thing about all of this work is that we feel the dividends are tremendous, because this is a pattern we are creating. Before, we had different complexities for different business units, because one wanted a specific type of hardware, somebody else wanted a different type, and then the third guy says, "I want the same hardware as the other guy, because it sounds cool." So we are trying to eliminate all of that complexity: based on the profile of your application and your application needs, we will publish and give you the appropriate compute nodes. Those are some of the things; I'm not going to get into a lot of detail.

The other topic is the cultural aspect, which some of you might find a little insightful. Sometimes the challenges are not related to technology; they're related to culture, and those cultural challenges can be a little difficult to overcome. For example, the concept of agile sometimes does not translate completely into the way operations teams are set up. It was a big thing for us, and we would have missed some deadlines if we had not realized it was a cultural issue. We had to switch some things, go talk to those teams, and figure out: forget about agile, this is what we are trying to do, what can we do together? So something for you to keep in mind: when you're taking on an endeavor to change a technology, there's also a cultural aspect to it, and as technologists we sometimes forget that. That was one thing we realized. So, this is all good stuff, but at the end of the day the business asks: okay, are we on target?
Yes, we were on target. These are some of the things we did to meet our production guidelines, based on our challenges and the things we were trying to do. Some things we did not change completely, because we wanted to accommodate the existing monitoring the various teams were used to. The other thing was making sure we had central monitoring: we wanted to feed the same pane of glass that the rest of the team, our NOC team, is looking at, rather than saying, "No, no, don't look at that." Underneath the covers we made some changes, but with so many teams involved, going completely to the left side or completely to the right side was not going to be feasible for us. These are some of the things we did. For example, on cluster deployment, we are separating out our physical clusters, just to make sure that if there are any issues, the impact is contained; we at least have some physical isolation. Another thing is Puppet modules for deployment.
I think you know either puppet chef whatever works Whatever works for anybody the other internal things We actually had a unique guest name naming across our data center And that's where some of the monitoring and some of the other business logic relied on You actually had to meet a certain sort of an internal Guest naming policy that we had not a challenge if you're actually starting brand new But these are some of the things that we had to account for and these are the things if you're thinking about Deploying this in enterprise some of the things you might want to keep in mind because these are some of the things that we did not account for Because we didn't really foresee this The other thing just wanted to see from a some of the highlight highlights perspective What have we been able to do to at least say hey this has been a Challenging but also some of the things we've actually done is actually given a level of confidence even to our business To stay we're on the right track And when I say business this is just to our executive leadership team to say you know Hey in addition to some of the technical KPIs that we are actually monitoring We're also paying attention to some of our other business level KPIs Some of these things are actually related as I actually mentioned before not completely to technology There are also some process improvements For example, we actually looked across to say every quarter actually has a certain level of demand And we worked with our capacity team And that's the one thing I could not say capacity team, you know our knock teams all of these teams They actually showed an amazing appetite to actually do this cultural change within PayPal There are some things that they would not budge on rightfully so, but it was actually an amazing Team effort and collaboration because we actually walked up to capacity team We're like you know what we want to buy everything that we need for this quarter up front And we're like do you have this 
data? They're like well not completely. They're like they actually asked us Why do you need it? We're like our goal is to make sure we always have a cash that we can always give it to our business demands So rather than actually spending a lot of money. They actually helped us Give us a good financial guidance of the things that we're needing So we did not have a unused inventory just sitting there forever But also at the same time we gave an allusion to our PD counterparts that every time that you come in You're gonna have capacity available So so those are the other things just wanted to share some of the process improvements also Sometimes in an enterprise Sometimes some things become a status quo It does not change year after year. So we actually eliminated some of the handoffs From a technology perspective But we wanted to make sure that they were actually in sync and they were proposing the things that we wanted to do But the end result was for certain application use cases We were able to cut down on our infrastructure provisioning significantly It was actually a amazing thing and now the funny thing is even internally everybody's like you know I want to move to open stack, but I'm like well. There's some other process challenges. They're like no No, I don't care. 
I want to move to OpenStack. So I think it's actually a good thing, but there is other work that we need to do, because from an overall process perspective there are a lot of other process things to get right. And that is the one thing, something for you guys to keep in mind: technology is something that can be solved with the right folks sitting together, but process is the other aspect, where the relationship and the cultural aspects come in, and you want to pay close attention to that. Because unless you do, your project completion, or a successful project completion, might be at significant risk.

Other things we have going on: the network validation tool that we built was actually just internal to our application needs, what are the connections that a certain application needs, so we can at least do that validation. Other things: the smarter CapEx buys; we actually talked about working with our capacity team and the countless other teams that helped us. We also have SDN, software-defined networking, we have that pilot going on, and we have one-click application lifecycle management, or in other words, we are thinking about what to do for PaaS. We actually started with OpenStack as a foundation; if we did not have a foundation, we could not build something on top. And that's where some of the agility that our business and our PD counterparts are looking for comes from. We really needed OpenStack, and I think it's a critical foundation for us. So those are some of the other things that we have going on. Our goal is actually to share a lot more.
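The network validation tool mentioned above, checking that an application's required connections actually work, could be sketched roughly like this. The per-application dependency list is a hypothetical input; the real tool is internal to PayPal:

```python
import socket

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def validate_app_connections(required: list[tuple[str, int]]) -> dict:
    """Map each required (host, port) dependency to whether it is reachable.

    'required' is a hypothetical per-application dependency list, e.g. the
    databases and internal services a newly provisioned guest must reach.
    """
    return {(host, port): check_endpoint(host, port) for (host, port) in required}
```

Run right after provisioning, a report like this catches firewall or routing gaps before the application team discovers them in production.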
We want to be, and our intent is actually fairly selfish: we want to share, and we want to copy some of the things that you guys are doing. And if we are doing something that is of interest to you, we want to just make it available for you guys to take a look at. So from that perspective, this is our commitment to the community. I think the OpenStack Foundation has been an amazing partner for us, and these are the things that we are trying to do. And again, what we are trying to do is just make it available, share our data points on what it takes for one of the secure enterprises running a highly available infrastructure: what are the lessons that we learned, and what are the potential things for you guys to consider? We want to make sure that any of the changes, any of the tuning, whether from a compute perspective, a storage perspective, anything that we do, we make available, so you guys have the option of taking a look at it and using it if it makes sense. That's something we want to do. And any changes that we make, whether related to Puppet modules or anything else, we want to make available for anybody to consume. The goal here is to hopefully develop an ecosystem where users can look at things, an ecosystem that flourishes via users like us and vendors like, you know, some of you folks. And I think it's that ecosystem we are trying to help flourish, because we will gain, selfishly, from it, and we want to contribute back also. Some of the architecture designs and blueprints, you know, load balancer as a service from our perspective, what we think are the high-level use cases for load balancer as a service; firewall as a service, as you realized, we did not tackle that, but we are planning to tackle it, and hopefully by the end of H1
we actually have that tackled. So those are the things. Before that, we've been a little bit not as good at sharing some of this, but hopefully our lesson moving forward is that we want to share more as we're accelerating in this journey. And as we are building this muscle, we want to learn from the rest of the community, because the majority of the brain power is actually outside of our company. Same thing with the learnings and the engineering that have helped us: we want to make those available for the rest of you guys, same concept, learn and share. Anything that we are doing from a reference-architecture perspective, working with our vendors, we'll share that and see if you guys can poke holes in it or make it better. And the other one, the last one, I think we put it there as a marketing thing, but we are not good at "user community and foundation growth," I don't know what this means, but basically it's for us to just come out and talk about the things we are doing. So that, in a sense, is our journey.

And this is the team that actually made it happen, and I'm just the one speaking on their behalf, but the real credit goes to the guys here. You know, some of them are actually not pictured; it's really the entire broader organization that came together. We started on this journey, we proved it in six weeks, it gave us a lot of confidence, and now we are actually running. So that's our story, just for us to share and then learn from you guys.

Yeah, hopefully you were able to see, and if you didn't see, maybe it was actually my... We are trying to make a data center operating system, agnostic of compute, storage, network, hypervisor. Our goal is to make a platform that enables agility, agility, agility without compromising our availability.
Yeah, I'm sorry, come again? Yeah, yeah. We should actually have that done; I was told either in this sprint or the next sprint, as part of Ceilometer, as a way of doing a showback, if that's what your question was. We've actually started, but we have not completed it yet.

Yeah, it's not really encryption as a service. It's basically just saying any of the message queues that we have, any of those messages that we are sending, are all encrypted; nothing from an encryption-as-a-service perspective.

Yeah, I think that's something we pay significant attention to, and as of now we don't, but we have an active track where we always look proactively: are there any things, you know, that are going to be impactful, to make sure that we are ahead of them? As of now, we don't see anything from our perspective.

Yeah, so our strategy there is that we wanted to certify it on the application platform of the future. On application migration: there's a separate track as part of the application movement where everything net new will be built on this new platform, and everything else, slash legacy, either will die or, in our case, sometimes it doesn't happen as gracefully, so it just stays on a little longer than we would like.

Yeah, so the specific numbers are something we don't share, as part of our policy. One thing I can definitely say is that the numbers we have are something that has given meaningful insight into the overall OpenStack platform from a scalability perspective. Cool. Thank you guys. I appreciate it.