Hi. Good morning, good evening, good day, everyone. My name is Alan Ren. I'm the general manager for VMware China R&D, and I'm also the co-founder of our open source community, the ACE Co-Innovation Ecosystem. Today I'd like to share with you the topic "From Multi-Cloud to Edge Native: Distributed, Intelligent and Secure."

First, a few quick points about VMware. Obviously, we started with virtualization, moved into the software-defined data center, and now we're a leading provider of multi-cloud software services for all applications. We provide a digital foundation for enterprise business innovation and transformation, and we provide freedom and flexibility for developers, DevOps and operations teams in IT. A few quick highlights: we had close to 12 billion dollars in revenue last year, 500,000 customers worldwide and 36,000 employees in 50 countries around the world.

We are no stranger to open source. Here is a family of open source projects that benefit the communities, starting with Cloud Foundry for Platform as a Service; Greenplum Database, one of the leading open source MPP (Massively Parallel Processing) databases in the world; Spring, the world's leading Java framework; as well as a couple of Apache projects like Tomcat. More recently, we have contributed to cloud native computing, such as Project Harbor, which was created and incubated at VMware China and contributed to by the China cloud native open source community, and which was also the first open source project from VMware worldwide to enter CNCF. Since then, we've also contributed Contour, Buildpacks, Antrea for software-defined networking, as well as Velero for data management and protection. Our view of the cloud and edge era starts with some of the drivers for multi-cloud.
We're seeing an explosion of demand for compute, ranging from traditional, cloud native and SaaS applications to mobile and distributed end users, to the world of industrial IoT and smart sensors, and, on top of that, artificial intelligence and AI/ML workloads. With that explosion in demand, we see a drive towards the multi-cloud era, including public cloud, telco cloud, edge cloud, private cloud and traditional data centers. And there we see very demanding requirements on compute elasticity, on latency in terms of response time, as well as on privacy for data protection.

At the same time, we are seeing massive application migration, as well as a new generation of applications, going from the centralized cloud to edge native. We're also seeing the cloud native computing platform extending to provide lightweight support on the distributed edge. Adding to that, as I mentioned before, Industry 4.0 comes with not just cloud computing and edge computing, but also modern connectivity to various parts of the world, next-generation endpoint devices, and a new generation of edge and IoT applications.

On edge compute, one data point from WWT is that 75% of enterprise data will be created at the edge, outside the central cloud, by 2025. And we see various new types of edge-related apps, such as latency-sensitive apps, mission-critical apps, and telemetry and monitoring apps, typically requiring response times on the order of tens of milliseconds, compared to traditional centralized compute, which is on the order of hundreds of milliseconds. However, multi-cloud edge collaboration and connectivity right now is quite diverse, with a lot of siloed environments growing more complex as we speak: we have distributed users, edges, various types of applications, as well as multiple clouds and infrastructures.
So what are we seeing as the new generation of requirements supporting multi-cloud and the distributed edge? On the developer side, consistent experience and productivity; on the operations side, support for a new generation of edge native applications. We also see the need to manage application performance, cost and lifecycles across various clouds, and for consistent security and networking that can span from multi-cloud to the distributed edge. And then, of course, we have the cloud-agnostic edge native solutions that we will need to deploy and manage.

From VMware's perspective, we look at the cloud and edge categories as follows. The far edge is at enterprise locations, where edge native applications run on edge compute stacks; this includes running on bare metal and VMs, as well as on a new generation of container orchestration platforms based on Kubernetes, including VMware's container orchestration platform, Tanzu Kubernetes Grid (TKG). The near edge is where things happen between customer premises, private data centers and centralized clouds to deliver edge native applications as a service; here we see VMware PoPs, telco core and radio access networks running on top of virtualized environments, container environments, as well as telco cloud platforms.

Now, looking into the details of the edge supporting multi-cloud, there are roughly three layers. The top layer, of course, is the applications themselves, for multiple industries. Then come the overlay edge services that support SASE (Secure Access Service Edge) as well as edge compute. And then there are the underlay edge services provided by telco operators, supporting radio access networks as well as private 5G. So what are the VMware solutions in these various layers? First, we have the VMware Tanzu platform running at the edge. As I mentioned, it supports Tanzu Build, Run and Manage across the different lifecycle stages of application development and operation.
It really has the capability to run just-enough, lightweight Kubernetes from the corporate data center or centralized clouds, with fleet management at scale; to deploy and manage application lifecycles at edge locations, whether that's a remote or branch office (ROBO) data center or closer to the sensors and gateways of the industrial IoT world; and to centralize visibility, monitoring and observability in a SaaS control plane. Of course, part of this is built with Kubernetes and with a container registry like Harbor, drawing on a lot of the open source contributions from VMware.

Next, I would like to introduce a nice project we call CAPBYOH. It's a Cluster API provider for "bring your own host," and it supports Kubernetes-native manifests and APIs. It also supports a local control plane of single or multiple nodes, which can run on either physical or virtual machines running Linux. We also have a nice demo of Tanzu Community Edition (TCE) with bring-your-own-host on both x86 and Arm mixed hosts; check out our demo via the YouTube link on the slide. And of course, it is open source under vmware-tanzu on GitHub.

Speaking of edge, I want to mention the EdgeX Foundry China project, which VMware and Intel China co-maintain. We contributed code ourselves in terms of UI and Kubernetes support, and we also facilitate and enable other startups and enterprise partners to contribute to the community, including the eKuiper rules engine from EMQ, device services, and docs and code samples from Thundersoft. In fact, China is the number one country in terms of visitors and downloads for the EdgeX project globally, and we've recruited 20-plus contributors and partners, co-innovating and accelerating with a very vibrant community.
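To make the bring-your-own-host model above concrete, here is an illustrative sketch of the kind of Kubernetes-native manifests such a provider consumes. This is a hedged sketch, not taken from the talk: the kinds and API group follow the upstream Cluster API BYOH provider at the time of writing, but the exact fields may differ across versions and are indicative only.

```yaml
# Illustrative only: Cluster API "bring your own host" objects.
# Hosts run an agent that registers them as ByoHost resources;
# the cluster then claims those hosts for its control plane and workers.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: ByoCluster
metadata:
  name: edge-site-1
  namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: ByoMachineTemplate
metadata:
  name: edge-site-1-workers
  namespace: default
spec:
  template:
    spec: {}  # host selection and bootstrap details omitted in this sketch
```

The point of the design is that no cloud provider creates the machines: the hosts already exist at the edge site (physical or virtual, running Linux), and the agent-based registration lets the management cluster treat them as ordinary Cluster API infrastructure.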
And in our special interest group, hosted on the popular social network platform WeChat, we have 1,300 members: contributors, users and partners. Building on our work for EdgeX Foundry, we've also partnered with the leading hyperscalers in China to extend Kubernetes with CRDs to support EdgeX. This is an example reference implementation for OpenYurt, which is one of the cloud native open source projects in CNCF. Here you can see that we can operate control nodes from multi-cloud, creating device CRDs and node shadows, and we can run worker nodes on the edge, adding devices and tunnel agents to support the various EdgeX services. So check out our implementation on GitHub; it is in Alibaba Cloud's OpenYurt open source project.

Speaking of edge computing, one thing to note is the requirement for privacy computing: supporting data protection while getting smart data models from this new generation of AI/ML workloads. OMLP is an omnipresent machine learning platform solution that VMware is developing to provide a multi-cloud and distributed-edge federated learning service. It supports multiple organizations and geo-locations, and it can support multi-cloud or distributed edge deployments, running on container and Kubernetes control and runtime planes. And here is, again, a reference architecture of OMLP, using FATE, another open source project, contributed by WeBank to the Linux Foundation, as the framework runtime. Here VMware contributed the federation manager for data and model management, as well as FML lifecycle management for participant lifecycle management across the various clouds and distributed edges.
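The federated learning flow that OMLP builds on can be illustrated with a minimal federated-averaging sketch in plain Python. This is not the FATE or OMLP API; it is a toy model (fitting a single scalar) that only shows the pattern: each participant trains on its local data, shares only model weights with a coordinator, and the coordinator aggregates them, so raw data never leaves the edge site. A real deployment would also encrypt the exchanged updates, which is omitted here.

```python
# Minimal federated-averaging (FedAvg) sketch. Illustrative only;
# not the FATE/OMLP API. The "model" is a single scalar fit to the
# global mean, so convergence is easy to see.
from typing import List


def local_train(weights: List[float], data: List[float], lr: float = 0.1) -> List[float]:
    """One toy gradient step on a participant's private local data."""
    w = weights[0]
    grad = sum(w - x for x in data) / len(data)  # d/dw of mean squared error
    return [w - lr * grad]


def aggregate(updates: List[List[float]]) -> List[float]:
    """Coordinator: average the participants' model weights (FedAvg)."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]


# Three edge sites, each holding a private local dataset.
sites = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
global_model = [0.0]
for _ in range(200):  # federated rounds: local training, then aggregation
    updates = [local_train(global_model, data) for data in sites]
    global_model = aggregate(updates)

print(round(global_model[0], 2))  # converges toward the global mean 3.5
```

Only the weight lists cross site boundaries; the `sites` datasets stay local, which is the privacy property the talk describes (and which encryption of the updates would further strengthen).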
Here, you can use local encryption to train the model, send and receive the model via the coordinators, and aggregate the encrypted models for shared use, while protecting local data and providing privacy computing for business intelligence.

So with all these various projects on the multi-cloud side as well as the edge side, we've moved one step further by creating an open source developer community called ACE, which on the technology side stands for artificial intelligence, cloud native and edge computing, but on the community side, with a little twist of recursive naming, stands for ACE Co-Innovation Ecosystem. It was launched by VMware in China together with other leading open source project leaders, such as Intel, PingCAP and Kyligence on the data side, EMQ on the edge, as well as Alauda for cloud native. We aspire to be co-enablers that serve and empower innovative organizations, enterprise or startup, and to create a win-win, organic and symbiotic ecosystem, from project to product to profitable business. We also want to use this technology to drive innovation for good, for the environment, society and governance.

So, at the end, a quick snapshot of the ACE Co-Innovation Ecosystem. We cover various tech areas and industries in SaaS, and our core directions and technology interests are MLOps (AI for the software development lifecycle, and the software development lifecycle for AI), privacy computing, which I mentioned, general cloud native management platforms, as well as SmartNICs and software-defined networking for cloud native. On the edge side, I mentioned EdgeX Foundry for device management, and we're also looking at better support for edge native applications. And we really work with startups, industry leaders, as well as academia and NGOs on their technical requirements.
Our co-enablers are partners including venture capital, on the corporate side as well as the financial side. We work with various incubators, as well as the world's leading open source foundations and associations, such as the Linux Foundation and its sub-foundations for edge and for AI and data, as well as CNCF. And we partner with all our co-enablers on various developer and accelerator events, summits and forums, and we co-produce industry reports, media coverage, community metrics and collaboration. We also want to issue a call to action for the audience out there to join us, whether as users, contributors or partners, and to move the co-innovation ecosystem ahead, particularly in multi-cloud and the distributed edge.

To summarize some of the values we cherish for our shared mission: diversity, whether it's open source users or contributors, hyperscalers, startups or global enterprises. We want to drive from community to ecosystem and from project to product, really making it organic and symbiotic to reach a win-win for the ecosystem and also for our users and customers. And last but not least is the force for good: we would really like to harness open source and open cloud technology to be a driving force for good, for our environment, for our society and for social governance. So with that, thank you for listening, and we are happy to take a few Q&As.

Very good, very good. Thank you very much, Alan. If you can stop sharing, we can be on the screen together. There you go. All right, very good. So first of all, thank you for covering so many topics, and we really appreciate the work you're doing on the community side. I also want to point out that it's fascinating to see the WeChat group and the EdgeX Foundry downloads and things like that. I know we are a little bit over time. I think there's a question on open collaboration outside China.
I would probably say that that's already happening in local meetups and things like that, led by multiple organizations. So that's a very good question as well as a comment: it is happening. But with that, I think we have a 10-minute break, because I think we are almost four minutes over, if that's okay with you.

Okay, sure. And just a quick note on the question from Will: we are actually opening up. Although it started in China, the ACE Co-Innovation Ecosystem and all the open source projects are really for the world. So we'll have some English editions and time-zone-friendly events for the EU and the US.

Beautiful. Thank you very much. So we are in for a bio break. Thank you very much.