Thank you for joining us today for a session on the application-level optimizations that we've been working on with Vijay and his team from American Airlines. First of all, I'm super excited to be here. I'm from the Bay Area, and I thought the traffic was bad in the Bay Area, but after going through Paris traffic this morning, I'm very excited that I've made it. My name is Marcus Flewell. I lead Intel Developer Cloud as well as Intel Cloud Services, which also includes Intel Granulate, an acquisition we made a couple of years ago, and we'll talk about some of the work that we've done. A little bit about myself: I'm based in the Bay Area, used to work for NVIDIA, and came over to Intel a couple of years ago. At NVIDIA I had implemented NVIDIA's GPU Cloud infrastructure, and coming over to Intel, the focus has been on Intel Cloud Services. With me, let me introduce Vijay Premkumar, my good friend, who we've been working with for many, many years. Vijay, do you want to introduce yourself?

Thank you, Marcus. It's truly my pleasure to be speaking with you about the Kubernetes journey we are on. It's a story worth telling. It's been a game changer for American Airlines, and we have been very successful with the journey. And to tell the story with Intel makes it even better. The work you and your team at Intel are doing in the software space has been simply impressive, and more importantly, we've been pleasantly surprised.

Thanks, Vijay. And thanks for joining us. So why is Intel talking about application performance? A lot of you know us as the people who are building the chips and maybe a little bit of firmware on top of that. But over the last few years, we've dramatically increased our investment in software development and in building various cloud solutions on top of it. Because we realize that at the end of the day, what matters is the value that you're getting out of our hardware, and whether that comes from the lower levels of the chip, by adding more instructions, more offloads, or more cores, or from a cloud service, at the end of the day what matters is how much application performance you're getting out of it. There are a number of different areas where we're investing. At the end of the day, what we're trying to do is solve your higher-level business problems. At a high level, one of the problems we constantly hear from customers is that they are not succeeding in implementing their IT projects. That's an area where, by building out our Intel Developer Cloud, we're trying to give you much earlier access to our hardware and software so you can de-risk things and find these issues earlier. The next area of investment is security, where we're seeing that data breaches are becoming much more prevalent and much more costly. And with all the work going on in AI, we expect that threat vector is only going to get worse, and new threat vectors are going to emerge. And last but not least, a lot of you have quickly moved into the cloud. You're very happy about the speed at which you're moving, the elasticity, and all the other benefits of cloud. But at the same time, your CFO or your finance people are chasing you, upset about the money you're spending on cloud.
So these are some of the areas we're trying to address, in addition to the low-level work we're doing on the hardware, the fabs, and everything else we're doing as a company. The first area, as I mentioned, is the Intel Developer Cloud. This is what I built up after coming in from NVIDIA. The idea is that we're building out the latest and greatest, not just CPUs, but increasingly with a big focus on our accelerators. And we're making that available not just after the hardware ships, but oftentimes six or twelve months before the hardware ships. That means you can come into the Developer Cloud today and get access to our next-generation CPUs, our next-generation accelerators, and all the software stack running on top of them. You bring in your workloads; ISVs come in and test their workloads. We also work with cloud providers, who in a lot of cases have PaaS and SaaS services that they offer, and they can test those six or twelve months before we actually ship the hardware. And even after we ship the hardware, it often takes another six or twelve months for the CSPs to pick it up. So you essentially get a sneak preview into the future, and you can start optimizing much earlier than you're used to. The other area of investment is security, as I mentioned, with our Intel Trust Authority, and I'll talk about that in a little bit. And the third area, which is what we're going to focus on today, is Granulate, our autonomous performance optimization service. This is where we brought in Granulate a couple of years ago, an Israeli startup; Asaf, who's sitting here in the front, is the founder and will come up on stage later. It's an area that is super critical for us, because I think we can add a lot of value to our customers by providing these services. Essentially, the way I think about it is that we as a company have thousands of engineers optimizing Kubernetes, contributing to the Linux kernel, and contributing to PyTorch, TensorFlow, and a lot of other open source frameworks. With Granulate, we have an outlet where we can make those optimizations available to our customers very quickly. That doesn't mean we'll stop upstreaming; we're going to continue to make those optimizations available upstream. But in addition, we can give those optimizations to our customers early and help optimize their workloads years before some of these changes are actually available in the next generation of the open source projects. So I'm super excited about this area, and I think it's super critical for a lot of our customers. A little more detail on the Developer Cloud before we jump into the details of Granulate: as I said, we've built it out and we're making tremendous investments, especially in these days of AI. As we're building out these accelerators, it's not just about one virtual machine and a few CPU cores. Of course we make that available, and it helps our customers come in and test things out. But when it comes to some of these AI workloads, if you want to run a large training job, you need to run it on a large training cluster. That means we're building out large training clusters with thousands of our accelerators, and you can bring in your workloads and test those out early.
We have, of course, a number of free options, but there are also commercial options where you can come in and rent an entire cluster. And if you like what you're seeing, it's not only a sandbox; we also allow you to come in, deploy your workloads, and run them there in production as well.

With Intel Trust Authority, the main threat vector we're trying to address is supply chain attacks. What we do is let you use our Trust Domain Extensions with a trusted execution environment. The fundamental problem we're trying to solve is that we want you to be able to run your workloads with attestation: the assurance that the hardware you're running on is the hardware you think you're running on, and that the software components running on top of it are really the components that should be there. In other words, that nowhere in your supply chain has anybody swapped out the hardware or snuck in malicious code that you don't expect and that could harm you. That's the problem we're trying to solve, and this is available today. We already have a set of customers using it, and as we build out more of our newer-generation hardware, we can provide attestation for all of the workloads running out there.

So with that, let's focus now on what we're doing specifically with Granulate and specifically with American Airlines. Let me just start by asking you, how has it been going working with Intel?

It's been a fantastic journey, and it's been about the partnership. Intel and AA have been in this partnership mode for about 25-plus years, and it's mutually beneficial to both organizations. The way the partnership works is that we, as an IT community, work with Intel's account team to tap into their extensive knowledge of software and hardware and help us overcome some of the business challenges we face. So it's a big benefit for our IT community to learn from one of the best, overcome those challenges, and deliver value to our application teams.

Okay. Obviously we're in the high-tech industry and we've had lots of ups and downs over the last decades, but compared to the travel industry it feels like we're in a cocoon here. What are the kinds of problems that we've been able to address with you?

Definitely, Marcus. The last 25 years have been extremely challenging. We've seen lots of ups and downs, especially in the travel industry, whether it's COVID, 9/11, or the Great Recession; those scenarios had a great impact on us. As a traditional company, we have compute investments across multiple data centers, and we really rely on an amazing partnership like the one with Intel to optimize our application runtime; we have been doing this with Intel for the last 20-plus years. By optimizing the application runtime, we're able to deploy our workloads more efficiently, so we get the best value for the investments we've made in the data center. Especially during the difficult years, those investments have paid off very nicely.

What are some of the key activities you're currently pursuing? Yeah, Marcus, over the last few years the partnership has focused heavily on two aspects. One, we continue to partner on runtime optimization,
whether it's data center applications or cloud native. The second is cost optimization. So both cost optimization and application optimization have been fundamental to all aspects of our relationship. Beyond that, we're now trying to build the relationship on the next frontier. For example, we all know AI has a great role to play at American Airlines and in the travel industry, and we see a tremendous opportunity in front of us to tap into what AI could do for us, whether it's enhancing the customer experience or driving operational excellence; the possibilities are endless. So we have been partnering with Intel's AI data scientist, Dr. Mellon, and his team. He has been visiting our office frequently, once every couple of months, to help onboard our AI engineers the right way, so we're building the code and building the right, solid business cases. That partnership is ongoing. And you also talked about IDC; it's a pretty interesting concept. As we look at building the right architecture principles so our workloads can be deployed the right way, IDC provides an opportunity for a company like AA to test out best-in-class software and hardware with our workloads, benchmark them, and prove them ready to deploy in our cloud CSP. That enables us to bring solutions to our business community in a quicker, faster, easier way.

Great, thanks, Vijay. Do you want to talk a bit more about your architectural principles, in terms of how you approached your cloud environment build-out?

Definitely. Before I talk through our cloud environment, I want to level-set on the journey behind it. Until about four years ago, our workloads were predominantly in the data center. That meant we had five-plus data centers with complex big data solutions hosted there. As we continued our cloud journey, which started about three or four years ago, we continued to see data silos, which meant we weren't able to process data efficiently enough to serve our business community. That's when, about two years ago, we started an organization called Cloud Engineering Platform under our leader, Rasika. She kept the focus on four important aspects. One, we set up the cloud center of excellence and the community of practice. Two, we defined cloud platform innovation as an independent unit. Three, we set up strong governance policies in the cloud. And four, of course, we focused on cost optimization. Keeping those as the focal points, our organization and our team were able to go after the four major goals we were targeting. One, putting the CCOE framework in place and bringing in the COP enabled collaboration and knowledge sharing between our application teams. Two, setting up cloud platform innovation opened up an engineering-excellence mindset in our engineers so they could go after new-age solutions, just like Granulate, which we were able to onboard. Three, putting the governance principles in place helped us set resilient, reliable, secure policies from a core perspective. And four, we want to make sure that for every dollar we invest in our CSP or data center, we get the most outcome out of it. So having the focus areas and goals clearly defined helps us and our engineering team deliver value for our business community.
So now I want to speak about the two major areas where our growth has been over the last two years. As I mentioned, most of our workloads have been in the data center for big data, and over the last two years there has been tremendous growth into the cloud. We all know big data solutions and data lakes are very expensive, extensive, and complex. So while that continued to grow and our business was getting the outcomes, we were facing challenges, and the challenges came from the cost perspective. The second fastest-growing environment is the Kubernetes platform we built. Kubernetes, as we all know, is complex; it provides value, and it can grow exponentially as well. So both the big data and data lake solution and Kubernetes, our two fastest-growing environments, are also the most expensive and were putting cost pressure on us. That was the challenge we were facing. While facing those challenges, we reached out to Intel's account team to see if there was a solution we could go after, and at the same time we were looking at other optimization solutions in the market. At that time, about two years ago, Intel suggested we look into a company called Granulate. A little background story here: Granulate at that point was an independent company, and Intel referred them to us in more of a partnership mode, knowing Intel would not make any revenue out of it. Eventually Granulate was acquired by Intel, which is a story of its own, a few months later. So my team, Vamshi, Shravan, Nishan, and many more, worked with Intel's Granulate team to test out the Granulate software in our big data solution. They did that testing along with comparisons against other solutions, the whole nine yards, and they chose Granulate to be implemented in our big data environment to begin with. We started the implementation in non-prod about 12 to 14 months ago. Today it's at 100% in our production and non-prod environments, about 1,300-plus clusters, successfully implemented. As you can see in the graph, we're using about 37% fewer resources than before, and that results in about 23% cost savings in our data lake solution. It's been a fantastic journey. While we achieved that in the data lake and big data solution, our journey continued with Granulate in our Kubernetes environment. The team has spent the last few months benchmarking Kubernetes workloads and comparing Granulate with other optimization solutions in the market, and the non-prod testing so far has been very interesting; the outcome has been very satisfying for American Airlines.

Just curious, Vijay, how did you pick Granulate versus other solutions?

As I said, cost is important; that was the starting point. We were facing extremely high cost pressure, so the reason we went to the market to look for a solution was to solve the cost problem. While many companies, like Granulate, provided cost solutions, the additional benefit we got from Granulate was the runtime optimization and performance improvement, which was unexpected for us. And second, the benchmarking we did with other solutions did not provide the same outcome. Going after cost was one thing, but getting two additional benefits was a huge win for us.
One other question here. A lot of customers say, great, you're getting all these performance enhancements, but how much effort is it to get there, and how does it fit into my operations environment? What's the pain I have to go through in order to get all these optimizations?

Good question, Marcus. Any of this requires some commitment from the engineering team, for sure. In our case, our engineering team was committed to partnering with the Granulate team. From a Granulate perspective, it takes about two weeks of discovery to provide an outcome showing what the potential savings could be. It was easy plug-and-play, the portal was very easy to navigate, and we were able to get the data. Then it's up to us: based on the outcome we see in the portal, we choose to optimize, and that's when the optimization delivers the outcome. So the implementation was pretty straightforward, and Granulate was easy to partner with to get it going.

In addition to your data lake, I think you've done a lot of other optimizations as well, with Kubernetes, I believe.

Yes, correct. We started the Kubernetes journey in non-prod, and just to give some background on Kubernetes: as I mentioned, it's our second fastest-growing environment at American Airlines. We want to ensure we remain focused on the three major principles we committed to our application teams and our leadership. One, of course, we want to provide a 99.99% SLA to our application community. Two, the growth we're seeing: this year we project to grow 3x, which means a tremendous number of applications leaning on our shared clusters to onboard. And third, we want to continue to provide 100% self-service, meaning it's easy for application teams to deploy their workloads and get their outcomes on our shared clusters. Keeping those as the priorities and focus areas, we ensure that any tooling we bring in doesn't compromise the quality and objectives we laid out. Just to give a landscape of our Kubernetes tech stack: like many of yours, it's complex and unique, and our engineers have spent years building it. We're primarily an Azure AKS shop for Kubernetes, we use Argo CD for the CI/CD pipeline, we have the Kuma service mesh, we use a Rancher control plane, we have a homegrown UI we call Runway, which is the self-service portal, and we use custom controllers to integrate across the solution. With all of this and more in place, we were able to plug in the Granulate solution without disrupting the ecosystem. Again, cost savings is one objective, but not at the cost of the SLA, not at the cost of delaying our application teams' self-service or impacting the outcomes we committed to them. Given all of that, Granulate was able to plug in without any disturbance to our ecosystem, which was a huge expectation of ours. And now, over the last few months of installing the Granulate agent in our non-prod Kubernetes clusters, this is the outcome we have seen: about 40% cost savings. And the best part is the additional benefits: a 30% job-time reduction and a 20% throughput increase. Those second and third benefits are things we never expected from other solutions and weren't getting from them.
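To make the "plugs in without disturbance" point concrete: node-level agents of this kind are commonly rolled out as a Kubernetes DaemonSet, so one agent pod runs on each node alongside the existing workloads rather than inside them. The sketch below is a generic illustration using the official Kubernetes Python client; it is not Granulate's actual installer, and the image name, namespace, and labels are placeholder assumptions.

```python
# Generic sketch: deploy a node-level optimization/profiling agent as a DaemonSet.
# NOT Granulate's installer; image, namespace, and labels are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

agent = client.V1Container(
    name="perf-agent",
    image="registry.example.com/perf-agent:latest",  # placeholder image
    security_context=client.V1SecurityContext(privileged=True),  # node-level visibility
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)

daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="perf-agent", namespace="kube-system"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "perf-agent"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "perf-agent"}),
            spec=client.V1PodSpec(
                containers=[agent],
                host_pid=True,  # lets the agent observe host processes without touching app pods
            ),
        ),
    ),
)

apps.create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
print("DaemonSet created; one agent pod is scheduled per node.")
```

Because the agent sits at the node level, application deployments, the CI/CD pipeline, and the service mesh are left untouched, which is consistent with the zero-code-change point made next.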
And most importantly, we could get all of those things with zero code change. Isn't that a beautiful thing?

It is; sign me up. And just to add to this, we also use Granulate extensively inside Intel, and we're seeing very similar results, both in our data lake environment and in our Kubernetes environment. And of course there are a lot of other customers seeing similar results, so it's not that Vijay is a complete outlier and that's why he's here; this is a very typical improvement that we see. And the thing to note here: this is not a case of customers who haven't done any optimization at all and are just picking the low-hanging fruit. These are additional optimizations they're getting after they've already applied a lot of the other tools, because it's a different dimension. CloudWatch and a lot of other tools let you address the low-hanging fruit, like optimizing instance types within your CSP or detecting idle workloads. What we're doing here is really about minimizing the footprint of your application. That's why it's so counterintuitive to say: I'm going to get more performance, but I'm going to spend less money on it. How is that possible? Most of the time it's the opposite: I add more resources, more overhead, more headroom, and that's what guarantees my application performance. Here it's the reverse: we're shrinking the footprint and making it fundamentally more efficient. That means, yes, you're getting more performance, but it also means you're spending less money on it. I think that's a very compelling message. And as I mentioned at the beginning, this is just the beginning of the journey; it's only been two years since Asaf and his team at Granulate came to us. Going forward, across all these different areas, we have thousands of engineers optimizing different parts of the stack, and a lot of those optimizations are going to be expressed early through Granulate. So we expect that over time, the magnitude of the optimizations we can deliver through Granulate will actually increase.

So just to summarize what we're able to address: Vijay talked about the data lake he's been able to optimize. That's an area where we really shine; typically within a week or two we're able to demonstrate what kind of performance enhancements we can achieve. And of course, as Vijay said, just like at Intel there are often security reviews and other steps, but typically those improvements show up very quickly, and your next cloud bill can already reflect them. The second area, as Vijay mentioned, is Kubernetes. That's another key area of focus, and again one where we as Intel have a lot of investment and a lot of optimizations on the Kubernetes side that will be available through Granulate going forward. And the third is the runtime optimizations themselves. That means your Java workloads, or it could be a custom workload.
It's not just off-the-shelf third-party software that you're buying; we can also optimize your own custom workloads with Granulate, and we're continuing to invest in that as well. In terms of what we're doing here: with Kubernetes, it's really about optimizing the footprint and eliminating waste. In this particular example, a customer was initially running about 5,000 cores, with a lot of extra compute power essentially dedicated just to overhead, and with Granulate we were able to shrink that down significantly without impacting the SLA. Then there's the runtime optimization, meaning we can optimize everything from the application layer, through Kubernetes, all the way down into the infrastructure layer. Those optimizations are all additive, and that's where the additional gains come from. In addition to the optimizations, the other thing that's important, of course, is observability. People need to understand what's going on in their environment; these are production workloads, super critical. Vijay, do you want to talk a little bit about how you're leveraging Granulate for that?

Definitely. What's on the screen is actually the output from our own implementation. It provides full customization and visibility into the clusters; as a customer, I can pick and choose what we want to enable and what kind of configuration we need. Being empowered to achieve the outcomes we expected from the partnership has been a beautiful thing.

Okay, thanks, Vijay. And last but not least, efficiency is really about making sure the SLAs are still being met. In a lot of cases we're seeing that even though we're driving efficiency and squeezing out cost, the SLAs typically end up getting better, and are easier to meet than before applying Granulate. So before we jump into Q&A, a quick summary of what we're doing as a company. As I mentioned earlier, through the Developer Cloud we're providing early access to our hardware and software, and not just for our CPUs but across the different product lines, including the accelerators, that you can go in and test out. The same goes for the software components, which you can test-drive early on. And it's not just you as an end customer: OS vendors and hypervisor folks can come in and work at the bare metal layer, other people want a bare metal system, some just want a virtual machine, and we have managed Kubernetes as well. And with cnvrg.io we also have an MLOps solution you can test-drive. So you can come in at different layers of the stack and run across different types of hardware. Right now the Developer Cloud is predominantly located in North America; we're in the process of expanding into Asia and Europe, and we expect to do that throughout this year. With Intel Trust Authority, we're addressing some critical security threat vectors, and with Granulate, we're addressing the performance challenges.
So with that, before we open up for Q&A, maybe just one last question for Vijay: what is next for you in this journey with us?

The journey will continue and continue. We got our outcomes from big data and Kubernetes, as we spoke about; that's almost done. Next, we're going to continue this engagement on our data center workloads. We have several thousand VMs in the data center, and we want to partner to get those optimized as well. We also have workloads in the CSP, in the cloud provider, that are not Databricks-related, and we want to optimize those too. While all this is going on, we're going to tap into IDC to get the architectural principles done right, so that from day zero we get the most benefit. And I'll continue to challenge Marcus and Asaf and team to bring their A-plus team to AA to deliver the outcomes we've set out. We truly appreciate the partnership. Thank you, Marcus, Asaf, and team. Appreciate it.

Thank you, Vijay. All right, let's ask the team to come up here. We'd like to invite Asaf, Vamshi, Shravan, and Nishan. Please. The way we do things is that we get all the easy questions; the tough questions go to these guys. Just to be clear on that. Any questions?

I'll ask Vijay a question. We work for really large companies. How do you get the right support, the right approvals, to work with so many startups like you do?

Thank you, Asaf. Just to put it in perspective: like I talked about, in the vision and strategy our leadership laid out, we clearly marked cloud platform innovation as an independent unit. I was fortunate to lead that unit, and it gave me and my team the authority to go after solutions that were never possible before at a traditional company like American Airlines. That enabled me to go after startups, in this case Granulate in partnership with Intel, and many more. And just for perspective, we have been dealing with 40-plus different startups and have onboarded about eight of them into production. Thanks to my partner Vamshi, who has been an architect alongside me on this journey, along with Shravan and Nishan and many more. So it's the leadership vision of having a dedicated, independent unit that enabled me to go after those. Thank you.

Hi there. The Granulate platform sounds really good, but what is it missing at the moment that you would like to see it have going forward? I think that's one for AA, please.

Sorry, we didn't catch that; do you mind repeating it? Sure. The Granulate platform today sounds very good and useful for American Airlines, but what capabilities would you like to see it have going forward that it may be missing? Are there any enhancements you're looking for?

I would say, like I mentioned, our team went through extensive analysis of several platforms and tools in the market. In fact, we're very well known in the startup community, and Granulate provided more than we expected. If you ask me what else, I don't have an answer at this point, but Asaf knows me very well, and so does Marcus: I continue to challenge them. They've gotten used to the way we operate, and they usually come with solutions before we've even asked the questions. Shravan, do you want to add something here?
For the last few weeks we've been working with Granulate extensively. Labels weren't easily available at first, but they were able to add them so that we could select workloads, meaning the Granulate optimization can be applied to specific workloads within all the Kubernetes clusters, and we can isolate which workloads we need to optimize and which not.

And just to add to what Shravan mentioned: I joined the team much later, and these are the pioneers, but working with them has been wonderful. With Granulate, from working closely with the team implementing this, at least in our non-prod, the best thing is that it doesn't disturb any existing running workloads and services. And as Marcus mentioned, it basically shrinks the footprint without hurting our SLA, which is the most important thing for us. The implementation is so easy and the results come so fast, as we just presented. It's the best optimization tool we have seen so far. Thank you. I'd just recommend trying it out yourself.

Can you guys hear me okay? Yes. Excellent. A question a bit to one side: I noticed we have some similarities in our organizational goals. Can you describe briefly what your organizational structure is for your cloud teams and your infrastructure teams, what that looks like, what challenges you have, and how you deal with them?

Yeah, it's a very good question, especially for a traditional organization like ours that continues to have workloads in the data center while aspiring to ramp up in the cloud. It's a challenge, and there's no one-size-fits-all or single right size we can define. But I'll go back to the slide where I spoke about exactly the challenge this question is about. We took a very difficult, hard stance: we were going to carve out clear focus areas for what we want to do in the cloud while respecting the application workloads in the data center. Like we talked about, we set up the cloud center of excellence and the community of practice. There's a huge benefit to setting that up; it can be a small team, but by setting it up, you're empowering and enabling a larger engineering community across your organization. Then we were very clear: we're not going to solve every problem we may have, but we're going after the most important one, cost. Like I mentioned, go after the cost aspect first; if you solve the cost, you get the money in, and you can do more things. In our case, we solved the cost in some very complex areas. We had a choice: we could have taken the easiest, simplest application to implement Granulate, but we took a different stance and went after the most complex, most used, most difficult big data solutions. We committed to that, and it helped us, and now we're going after the second most complex. Generally it would be the reverse; you'd go after the simplest things first. So the intention, vision, and strategy, from our leadership down to our engineering folks, has been singularly focused on going after the most complex problems. The size of the team can vary; the intention and the vision are what matter.

I would like to add to that. Within the cloud engineering platform itself, every product is a platform.
So we have platform as a product, and the innovation effort was started under Vijay, who you see here as the innovation leader. For every product: innovate like a startup, scale like an enterprise. That's the motto he has been setting into the products for all the cloud teams. There are also architecture teams, which we're part of, that span across the products. And the cloud community of practice is a virtual team that goes across the cloud teams and the products. So that's how the structure has been laid out, if that answers the question.

Yeah, that's great. Thanks.