Test, test. OK, I think it's time to start. Hello, everybody, good morning, and welcome to my talk on this last day of Open Source Summit. Today I will talk about an open and neutral edge computing architecture: how to build it, and why. For my background, you can find the information on the website, so I won't repeat it here.

A little background about our team: we are part of the Office of the CTO at VMware, based in Beijing. Our team is called the Edge Computing Lab. Our mission is to investigate potentially disruptive technology challenges and business opportunities in edge computing and industrial IoT, and to create unique value and new product solutions for enterprise customers. As you may know, VMware is a leading vendor for private and hybrid cloud and enterprise mobility management, and we are also looking into this new area. Our focus falls into two directions. One is horizontal: we want to create a general multi-layer platform with an open ecosystem. The other is vertical, and relates to VMware's core business of data center management: we want to find new approaches to data center management using edge computing and IoT technologies. The focus here is on innovation projects, not existing products. Some examples I'll talk about today are virtualized edge devices, edge-native applications, and machine learning inference accelerators; there are some others, like robotics and smart detection, that I won't cover much today.

As you know, VMware is a software company. We do a lot of software, and most teams work on software only. But our team in Beijing is a bit special: we have a physical lab for the edge. You can see a lot of devices on the screen, which we acquired ourselves or received from partners, along with some components for machine learning accelerators.
I will mention those a little more later. We also have some robotics-based equipment. So let's start on the main topic, with some background first.

As you may know, in recent years the Internet of Things and edge computing have become quite popular, and they are mentioned often by companies globally, across industries: telecom, enterprise, consumer, internet companies. According to the industry landscape I've posted here, you can see that globally there are a great many vendor names and the market is quite crowded: 2016 is on the left, 2018 on the right. So you get the impression that more and more companies want to get into this industry and this market. And that leads to an interesting situation: we see a lot of chimney systems, a lot of silos deployed on customers' premises, and in the big picture most of these systems are not well integrated with one another. Another number, from a third-party consulting firm in 2018: there are more than 2,000 global vendors in this industry, with the US, China, and Germany the top three.

So why is that? That's an interesting question we have discussed over the last two years. The overall impression is that all these vendors and all these deployed systems are at a pretty early stage. That is a good thing for the Internet of Things and edge computing, but it also creates issues for customers, because they want to control cost and get more value, and it is challenging for them to get everything consolidated.

OK, so from VMware's perspective, what is edge computing, and what is the big-picture architecture? This is a reference architecture. From the top down, we see that people generally have some cloud in place, either on-premises or hybrid with a public cloud.
And these are somehow connected to the edge locations. In some cases there is a compute edge — some call it the infrastructure edge, or the edge cloud. These are basically rackable servers; they may sit in a network closet or a small half-height rack in a remote location, and they can run relatively complex jobs distributed from the central cloud. And if the use case is around IoT, there are things — devices, endpoints — connected back to the cloud, so there will generally be an edge device layer in the overall architecture. That is not universal, but it holds in most cases, especially for enterprises, because they want to ensure that enough — and just enough — data is collected, passed to the cloud, and processed. So we see some pretty interesting things happening at the edge device layer, and that is also the major topic and the major focus for our team within the company.

OK, so what does that mean? We all understand that in the cloud, all kinds of applications run on all kinds of frameworks — cloud-native, traditional, different architectures. On the edge layer, we see similar things happening gradually. Maybe a dozen years ago, edge locations were not well connected to the cloud yet; there were old-fashioned applications running on pretty old operating systems, or even on no general-purpose OS at all. That was a totally different world. But now, with all the devices and endpoints connected back to the cloud, we see more and more requirements to manage applications well on the edge, especially at the edge device layer. How do you manage all these applications? How do you make sure the data on these edge devices is well monitored, and that the applications' life cycle, interaction, and isolation are properly handled?
We see a clear requirement emerging: customers probably need a central portal to manage applications and data remotely, handle life-cycle management, integrate with an application marketplace, and distribute applications and data to the edge dynamically. They could push updates to the edge, apply policy-based management mechanisms, and support different packaging methods — VM-based as well as container-based applications. That is the general requirement.

One major challenge in this whole picture is that enterprise customers often have a concern, when they deploy an edge computing or industrial IoT system, about how they can control their own data on the edge. As I mentioned at the very beginning, most systems in this field are siloed from one another. They are all vertical systems, where the vendor may control every part — not only the infrastructure but also the applications and business data. So how can the customer really control their data? The architecture proposal here is that we see a requirement to decouple the infrastructure layer from the application layer, to manage the infrastructure separately from the applications, and to give the customer full control of all their data and the related applications, so they can send data wherever they want. That is the big picture, the major direction we want to explore, and the context for all our related projects.

Talking about the multi-layer infrastructure: this is the technical stack our team is contributing to right now. From the bottom up, there are many different devices. In the middle column there are traditional hardware components like CPU, RAM, and disk, and more and more components are appearing to support machine learning inference on edge devices.
These include embedded GPUs, FPGA components, and various xPUs — TPUs, VPUs, and other ASIC components. On top of that, we have a project called Asteroid to virtualize edge devices with a type-1 hypervisor. I will also refer to some other open source projects here. Because the topic today is mostly architecture, I won't go into too much detail about how each project is implemented, but maybe a little bit. Above that, at the application platform and framework layer, we build on EdgeX Foundry, an open source project under the Linux Foundation's LF Edge umbrella program. The marketplace I mentioned earlier is our project Nebula. And our project Supernova is a machine learning inference service, also related to EdgeX Foundry. These three blue blocks are the major projects I will mention today. The green ones mostly relate to smart data center management, so I will leave those for later; some of those projects will also be presented at Open Source Summit Europe.

OK, about the infrastructure. So why a hypervisor at the device edge? As I mentioned, in past decades, before traditional industrial systems were connected to the internet, people had limited or almost no concern about how these devices should be managed remotely or connected back, because there were no such connections — they had air-gap isolation, in effect. But with more and more IoT and edge computing systems in place on-premises, potential challenges and risks appear gradually. More and more systems may be placed in one location — for example, in smart manufacturing, on the same production line there are more and more boxes in place, reading all the sensors on different machines and collecting the data.
The boxes may come from different vendors, run totally different applications on top of different operating systems, and produce different data. Companies have to collect all this data and manage all these different boxes in different ways, because they are siloed systems. And with more and more requirements to place computing tasks on the edge and on edge devices, the demand for computing resources on these boxes keeps growing. We see a trend: the computing resources in these small boxes are becoming more and more powerful, gradually moving from traditional MCUs to more powerful CPU models, from ARM to x86 platforms. And people want to consolidate all of this together — to simplify management, improve automation, and let more operations happen on the edge. That, generally, is the requirement for a hypervisor here.

As a reference, there is the open source project I mentioned earlier called the Edge Virtualization Engine (EVE), part of the LF Edge umbrella. The overall idea is similar — we share a similar vision for this problem on edge devices. For on-prem deployments, there are diversity and complexity issues to be resolved; users want to avoid vendor lock-in, use a common API to operate all this hardware, and enable flexible control of all the applications on edge devices. There is a more detailed architecture of the EVE project, but I won't say much about it — I'm not an EVE expert. You can go to the lfedge.org website to check out the details of the architecture, and they share the source code there; the reference implementation is based on Xen technology.

OK, so how about us? The project Asteroid is what we are working on for the infrastructure layer. You will find this is a pretty typical three-tier architecture, from left to right: the things, the edge devices, and the cloud.
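As an aside, the consolidation argument above — several siloed physical boxes becoming VMs on one more capable edge node — can be sketched as a toy capacity check. Every name and number below is made up for illustration; it is not from any VMware product:

```python
# Toy sketch of edge-device consolidation: workloads that used to run on
# separate physical boxes become VMs on one virtualized edge node, provided
# the node has the CPU and RAM headroom. All figures are illustrative.

EDGE_NODE = {"cpus": 6, "ram_gb": 16}   # a hypothetical mid-sized edge box

workloads = [
    {"name": "edgex-foundry",    "cpus": 2, "ram_gb": 4},
    {"name": "vision-analytics", "cpus": 2, "ram_gb": 6},
    {"name": "legacy-scada",     "cpus": 1, "ram_gb": 2},
]

def fits(node, vms):
    """True if the VMs' combined demand fits within the node's capacity."""
    return (sum(v["cpus"] for v in vms) <= node["cpus"]
            and sum(v["ram_gb"] for v in vms) <= node["ram_gb"])

print(fits(EDGE_NODE, workloads))   # True: three silos consolidate onto one box
print(fits(EDGE_NODE, workloads + [{"name": "extra", "cpus": 4, "ram_gb": 8}]))  # False
```

In practice the hypervisor's scheduler handles this, and the "three to five VMs per mid-sized box" figure mentioned later in the talk is exactly this kind of headroom calculation.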
I put the edge servers — the infrastructure edge layer — into the cloud tier, because generally their operational model is pretty similar to what we do today in private or hybrid clouds, whether on-prem or in telecom cases. But edge devices have pretty limited resources. I'm sorry — I originally planned to run a live demo here, but I found the power cable isn't long enough. This is just an example of a small edge device, similar in size to a Raspberry Pi, but actually x86 CPU-based. It can run general-purpose workloads, and we can run the hypervisor on this box. In real cases we find the edge device is usually a little bigger — maybe smaller or bigger than a laptop — but that is good enough for this technology to work.

In the middle layer we have the hardware, and on top of it we put a type-1 hypervisor, with security hardening and support for TPM, possibly exposing a vTPM to the VM layer. Then we can run multiple VMs on top — multiple generally means not too many, maybe three to five for a mid-sized box. And within the different VMs — this is the key part — the customer can run different embedded operating systems or different IoT OSes, and inside them run different application frameworks for edge computing and IoT. For example, we can run EdgeX Foundry, or other open source frameworks. These connect to the external endpoints and things through the IoT interfaces on the box, either wireless or wired. And then we need to ensure the edge devices are well connected with the cloud — because a pretty common situation is that the edge devices connect to the cloud over the internet, and the connection is generally not mutually IP-reachable.
The edge device can see and connect to the cloud, but the cloud services cannot reach out to the device first. So we leverage a pretty common "phone home" mechanism to register the virtualized box and manage it remotely from the cloud, in much the same way that IT administrators manage hosts in data centers. That removes a lot of concerns from the IT perspective, because for enterprises, with all these devices connecting into the data center, the IT team has serious concerns: how is this stuff built, what is the security handling, and how can we ensure compliance with all the standard enterprise security and networking policies? Without answers to those questions, it is quite challenging for the OT team to convince the IT team to permit all these connections. So we try to address this issue.

Another thing I mentioned: with consolidation, we can put multiple applications that would ordinarily run on different physical devices onto the same one. That saves space, saves power, and simplifies management. And with software-defined networking functions, we can secure and isolate all these applications, even though they run on different operating systems in different VMs. Going a step further, it is also possible to enable multi-tenancy and micro-segmentation, much as in a data center. You can imagine that in the future, for end-to-end IoT systems — from the applications in the data center down to the things, the sensors — all these vertical systems could be isolated completely, end to end, from the cloud to the things.
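The outbound-only "phone home" pattern described above can be sketched in a few lines. This is a minimal, hypothetical model — the `CloudEndpoint` and `EdgeAgent` names are invented, and a real implementation would keep a persistent outbound channel (e.g. MQTT or WebSocket) rather than polling an in-process queue — but it shows the key property: every request originates from the device, so no inbound firewall rule is needed.

```python
import queue

class CloudEndpoint:
    """Stand-in for the management service: it can only queue commands
    and wait for the device to pull them — it never dials the device."""
    def __init__(self):
        self._commands = queue.Queue()
        self.reports = []

    def enqueue(self, command):
        self._commands.put(command)

    def poll(self):  # called BY the device, never the reverse
        try:
            return self._commands.get_nowait()
        except queue.Empty:
            return None

    def report(self, result):
        self.reports.append(result)

class EdgeAgent:
    """Device-side agent: polls outbound, applies commands, reports back."""
    def __init__(self, cloud):
        self.cloud = cloud
        self.apps = {}

    def run_once(self):
        cmd = self.cloud.poll()          # outbound request from the device
        if cmd is None:
            return
        if cmd["op"] == "deploy":
            self.apps[cmd["app"]] = "running"
        elif cmd["op"] == "remove":
            self.apps.pop(cmd["app"], None)
        self.cloud.report({"app": cmd["app"],
                           "state": self.apps.get(cmd["app"], "removed")})

cloud = CloudEndpoint()
agent = EdgeAgent(cloud)
cloud.enqueue({"op": "deploy", "app": "edgex-core"})
agent.run_once()
print(agent.apps)   # {'edgex-core': 'running'} — deployed with no inbound connection
```

The same loop is how the talk's remote life-cycle management (push updates, decommission apps) would reach devices sitting behind NAT or an enterprise firewall.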
With all of this together, as I mentioned, multiple application frameworks are supported, which gives application developers wide flexibility to keep their current applications running without much migration effort, and protects the enterprise's previous investment. That is the whole idea for Asteroid so far.

OK, that was the first project, on the infrastructure layer. The second project is Nebula and the marketplace. As I mentioned, there are many different edge computing frameworks in the industry today, and EdgeX Foundry is one pretty good example, hosted under the Linux Foundation's LF Edge umbrella program. We joined that effort more than two years ago, in early 2017. For a little background: EdgeX Foundry has a microservice architecture, and all the services are packed into small containers, so they can run in the same box; technically, they can also be distributed across different physical boxes. The one potential constraint we find with the framework as it officially ships from the community is that it provides just the basic services in the architecture. Users have the flexibility to build their own applications as long as they are compatible with the framework's APIs, but it is not very convenient for them to manage the life cycle of all these third-party applications together with EdgeX Foundry. EdgeX Foundry is a base framework; your third-party application can run there — but how do you manage it? How do you manage its life cycle, and how do you discover applications created by other vendors? That creates some limitations when we try to expand the community and the ecosystem. So the idea here is to build a marketplace for third-party EdgeX Foundry applications.
With that, enterprise users have the flexibility to manage their third-party applications remotely. The marketplace could be deployed on-prem in their own cloud, or set up in a public cloud with services open to other vendors, so vendors can develop their own applications and upload them there, and users can download applications and push them to their own edge devices. The mechanism is quite similar to the Android app store, if you can imagine: the edge device is like an Android phone, and the user can download whatever they want. Going further, this also opens a commercial opportunity for ISVs, SIs, and device vendors to monetize their own — possibly closed-source — applications within this framework and infrastructure. So we see this as a pretty interesting project.

For Nebula, we have actually finished most of the code already; it is in our internal open-sourcing process right now. When I submitted this talk to Open Source Summit a few months ago, I assumed the open-sourcing process would be finished in time, but unfortunately it is not. So I can only talk about the architecture — you may check out the demo video on my YouTube channel — and hopefully it will be open-sourced and published very soon. That's the full idea.

For this project, we also implemented some reference applications to show the capabilities. One example is a local UI, implemented on top of the EdgeX Foundry architecture. Another example is a video streaming service: we can take the streaming video from a local IP-based camera connected to the device, show the video, and pass it back to the cloud and to other services. Other examples include custom cloud connections, as I mentioned earlier — people generally want custom data consumption and analytics services for their different use cases.
And that is a pretty good feature of EdgeX Foundry: it supports multiple custom cloud connections and is not bundled with any specific data and analytics cloud, so that is something we can leverage. Of course, if you want to download and install third-party applications, you need an application installer; that is also there, in the yellow box, working in the background to achieve that goal. That is the current status of the EdgeX Foundry marketplace.

Going a step further, if we think bigger — not only about EdgeX Foundry but about edge computing architectures and services provided by different companies in general — this is a conceptual vision for the future. If we step back, we can see that enterprise customers have multiple choices when they want to deploy edge computing systems; they can choose options from different vendors — there are a few logos here. But the issue is that after they make a decision, they have to deploy it on their devices in the field, and if they are not happy with any vendor on the list — or off the list — it is quite inconvenient for them to switch to another vendor, and quite inconvenient to integrate one IoT system with another. So we are thinking about whether we can expand the scope of Nebula a little: not only EdgeX Foundry but other edge computing frameworks too. We could deploy the edge computing frameworks themselves directly to the boxes and help enterprise users manage the framework life cycle. And once they find a framework isn't working well, or their requirements change and they want to switch, we have the capability to decommission the existing edge computing framework and download another one — whatever they choose. For example, they might switch from EdgeX Foundry to another framework, and maybe later switch back.
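The framework-swap idea can be sketched as a tiny catalog plus a device-side manager. This is purely illustrative — `Marketplace` and `DeviceManager` are invented names, and real packages would be container images or VM templates rather than dictionary entries — but it shows the decoupling being argued for: the layer below survives any framework change above it.

```python
# Hypothetical sketch of a marketplace catalog and a device-side manager
# that installs, decommissions, and swaps edge computing frameworks.
# All names, versions, and image strings are made up for illustration.

class Marketplace:
    def __init__(self):
        self._catalog = {}

    def publish(self, name, version, image):
        self._catalog[name] = {"version": version, "image": image}

    def fetch(self, name):
        return self._catalog[name]

class DeviceManager:
    def __init__(self, marketplace):
        self.marketplace = marketplace
        self.installed = {}

    def install(self, name):
        self.installed[name] = self.marketplace.fetch(name)

    def decommission(self, name):
        self.installed.pop(name, None)

    def switch_framework(self, old, new):
        # The decoupling the talk argues for: swap one edge framework for
        # another without touching the hypervisor/OS layer underneath.
        self.decommission(old)
        self.install(new)

store = Marketplace()
store.publish("edgex-foundry", "1.0", "example/edgex:1.0")
store.publish("other-framework", "2.1", "example/other:2.1")

dev = DeviceManager(store)
dev.install("edgex-foundry")
dev.switch_framework("edgex-foundry", "other-framework")
print(sorted(dev.installed))   # ['other-framework']
```

The same two operations — decommission and install — cover the "switch back later" case as well.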
That is also possible. In that case, we decouple the hardware — actually, from the operating system level down, we completely decouple the operating system and all the resources below it from the edge computing frameworks and applications on top. People then have much more flexibility, and their cost and their risk of lock-in are much lower. That is a future direction we also want to explore.

OK. So, the third project: Supernova. As I mentioned earlier, in recent years there has been a pretty strong trend around leveraging machine learning technologies in edge computing and IoT systems. You may have attended some workshops a few days ago at Open Source Summit where vendors showed how to run visual analytics and inference on the edge, on a small device with a camera, and make it all work. These are pretty cool technologies, and quite a few vendors globally provide similar things. The common approach is to deploy hardware accelerators in the box — whether a commodity box or a specially manufactured one. With these accelerators, you can run specialized SDKs, also generally created by the vendor, and perform machine learning inference quite efficiently on the box. In this mode, generally, the data is pushed to the cloud for training; with the big data volume all together, you get a large, accurate model. Then you may optimize that neural network model and push it down to the devices. Sometimes the models are compiled into native code that runs on the edge devices, to get fast, accurate-enough results and take follow-up actions — operating things, making decisions, recognizing objects, things like that. And the issue here is that the different vendors' architectures are heterogeneous.
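The common-layer idea behind Supernova can be sketched as a registry of interchangeable inference backends behind one interface. This is an assumption about the general shape of such a layer, not Supernova's actual code: the classes below are stubs standing in for real SDK bindings, and the model path is hypothetical.

```python
# Sketch of a common inference layer over heterogeneous accelerator SDKs.
# Each backend would wrap one vendor SDK (OpenVINO, TensorRT, ...) behind
# the same two-method interface; the service picks the first available
# backend from a preference list. All classes here are illustrative stubs.

class InferenceBackend:
    name = "base"

    def load_model(self, model_path):
        self.model_path = model_path

    def infer(self, inputs):
        raise NotImplementedError

class CpuBackend(InferenceBackend):
    name = "cpu"

    def infer(self, inputs):
        # Stand-in for a real framework call; just echoes a label.
        return {"backend": self.name, "n_inputs": len(inputs)}

class InferenceService:
    def __init__(self):
        self._backends = {}

    def register(self, backend_cls):
        self._backends[backend_cls.name] = backend_cls

    def create(self, preferred):
        """Instantiate the first registered backend in `preferred` order."""
        for name in preferred:
            if name in self._backends:
                return self._backends[name]()
        raise RuntimeError("no suitable inference backend registered")

service = InferenceService()
service.register(CpuBackend)   # on real hardware: TensorRT, OpenVINO, ...

# Application code never names a vendor SDK; it only states a preference.
backend = service.create(preferred=["tensorrt", "openvino", "cpu"])
backend.load_model("detector.onnx")
print(backend.infer([b"frame-0", b"frame-1"]))
```

The point of the indirection is exactly the talk's goal: the application chooses a model, not an accelerator, and the common layer resolves whatever SDK the box actually has.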
Here are some examples, from companies like Intel, NVIDIA, AMD, Google, and Xilinx — those are the top vendors, and there are others, like Qualcomm. And there are SDKs like OpenVINO, TensorRT, TensorFlow Lite, Qualcomm's SDK, and the DNNDK from Xilinx, among many others. From the API perspective they are all different; from the hardware perspective the architectures are heterogeneous too. And on the cloud side we also see very different machine learning training frameworks. How can we simplify this process? In the current situation, customers have to choose a model first, then make code changes to support a specific accelerator and its SDK — and they don't even know whether it will work before it is deployed.

So with Supernova, we want to create a common layer for machine learning inference, integrated with EdgeX Foundry, that supports heterogeneous hardware accelerators and their SDKs underneath and connects to multiple machine learning training frameworks in the cloud. In this way, we want to decouple the edge from the cloud, and the hardware layer from the software on the edge devices generally, to simplify the overall operations and management. Here again we refer to a similar architecture — you may notice the yellow box on the right, the machine learning inference server. That is the overall architecture. This project is also in the open-sourcing process, so we cannot show the code right now, but hopefully it will be released to the VMware GitHub account pretty soon.

OK, so we have just five minutes — maybe just the last slide. Besides all these projects, as I mentioned, we advocate a lot for EdgeX Foundry and other edge projects in China. Besides being official members, we have also built some community discussion groups in China and attracted a lot of users there.
So if you are not part of the community yet, or haven't joined the discussion, I encourage you to join — whether you are in the US or wherever you come from. It's a pretty good framework, and we want to advocate for it and expand its usage.

So, some key takeaways. First, the edge should be loosely coupled with, yet integrate well with, the cloud. An open and neutral architecture is very important to ensure that end users have a lower TCO and a higher ROI. And a multi-layer architecture on the edge is the long-term architecture we want to pursue over the next few years. For the big picture, please refer to the success story in the cloud. That's all the information I want to share today. I think we also have some minutes for questions. Any questions?

[Audience question, inaudible.]

Good question. Yeah. Akraino is an edge project; its major use cases are in telecom clouds — how to access the applications there. Actually, Tina here is the expert on the Akraino project, so you may ask her about Akraino directly. I can only say that EdgeX Foundry integrates well with it: there are use cases where Akraino deploys EdgeX Foundry, the framework, onto the edge device layer, and from there it can connect to the surrounding sensors. So that is a kind of top-down, cloud-to-edge integration mechanism. Akraino itself does not connect to the sensors directly; they sit at two different layers between cloud and edge. You can think of it as going from the cloud to the edge servers at the base station, then to the edge devices in the field, and then to the sensors in the factories.

Any other questions? OK, thank you. I will hang around in the hallway after the session, so feel free to ask me any questions you want. Thank you very much.