Okay, hi there. Welcome to this session. My name is Guo Zhengao from Huawei. Today I'm going to talk about the 5G open-source MEC platform, EdgeGallery. I have three topics: first, an introduction to our platform; second, the feature details and the deployment method of EdgeGallery; and third, a demo of EdgeGallery. So let's move on. Here are some pictures of the 5G scenario. The construction of 5G networks is everywhere now, covering more and more places. So how can we effectively and efficiently unleash the productive forces of 5G? This requires us to build new relations of production. For a long production chain like the one in the ICT industry, there must be new relations of production based on networks and ecosystems to effectively unleash the productive forces of 5G. The pictures here show an intelligent factory, an intelligent campus, an intelligent hospital, an intelligent harbor, and V2X, which means vehicle-to-everything. The whole ecosystem needs a platform that handles service governance, traffic offloading, and capability openness. That is where EdgeGallery comes from: telcos need an open-source platform that covers the whole system. In the past few decades the communication industry has undergone three transformations: IP-based transformation, network/IT separation, and then digitalization. In this process the network infrastructure supports more and more diversified services; however, the structural value transfer also reduced the telecom share of the industry chain. So how do we find new growth opportunities on the new network infrastructure?
This is the key to reversing the current situation, and edge computing provides a feasible solution: EdgeGallery. Some people even think that although we missed the data center market, worth trillions of dollars, there is an even larger enterprise market worth trillions waiting for us. So edge computing is a huge opportunity for the telecom and ICT industry, and right now we cannot afford to miss it. Here is the industry, and here is the opportunity. Let me talk about the architecture of the platform. Here is the developer, a coder like me. We can develop a software project on our platform; after developing, they need to test and package the whole project, and here is the tool chain for that. Then they need the MEP platform, where MEP means Multi-access Edge Platform, and an MEP simulator to run their project. After developing, testing, and packaging, they can publish the project to the App Store, which is like a marketplace where users download the apps. The operators, just like the operators of a marketplace, need to manage the whole store, so all the apps in the App Store sit on top of the MEP platform. In this architecture MEP is the bottom layer, and the operator can manage the dozens of apps running on it; this is the production environment. So let's move on. That is EdgeGallery, and that is why it exists: EdgeGallery builds a technology-led edge platform and ecosystem.
Our mission is to create an open MEP architecture and open standards, and we want to build a 2B (to-business) ecosystem. We are simplifying development so that common developers can develop their projects faster. Here is the scope. The architecture of EdgeGallery is divided into four parts. The first part is the MEP platform. It provides the basic service governance framework and some gradually improved network capabilities, such as location and QoS (quality of service), to enable the MEP basic platform layer. At the same time, the basic platform layer provides simplified MEP management for operator and enterprise self-management: it implements the customer self-service portal, automatic service orchestration and management, and the capabilities of application release, synchronization, and security authentication. That is MEP. The second part is MECM, the MEC management layer, which provides unified application lifecycle management and monitoring; it is the manager of the whole system. The third part is the App Store, the unified application store: after the developers, I mean the coders, publish their applications to our platform, end users can download them from the App Store. The fourth part is the MEC developer portal, for developers and common coders, with IDE plugins and testing tools. Based on the application repository, the applications of each carrier can easily achieve a closed business loop.
That's what we want: to facilitate application development for developers and partners from all industries. We also provide a developer platform and tool chain to support application migration across multiple platforms, for example migration between x86 and ARM. Here is the architecture of our platform. EdgeGallery uses a cloud connector to implement the interconnection between the open-source edge platform and the public cloud ecosystem, so applications on the cloud can be deployed at the edge. Currently, EdgeGallery plans to share applications with our public cloud via some agent plugins, and in the future we will gradually share applications with other public clouds. Secondly, I will give you some feature details and the deployment method of our platform. Here is a quick view of the platform. This is the EdgeGallery developer GUI, which is for common coders to develop, test, and publish their apps. Here is the App Store, where end users can download apps like these. And the third one is the MEC manager, for the administrator, who can manage the whole system here. Our platform is developer-oriented; it is really friendly for coders. On the developer platform, a common developer can choose a platform to test and publish their app; as I said, they need to choose MEP capabilities, and then they choose a tool to install their project. Here is the GUI where you can code and publish your project.
After the development preparation, they can develop and test their project on our system; here is some of the interface every coder knows. After testing they package their app and run an integration test, and after that they can publish the project on our platform. After it is published and we have tested it for safety, it appears in the App Store, where end users can download it. For the administrator, here is the MEC management view: they can manage the whole system, the resource situation and the edge situation, and see every edge node and every IP node, very simply. Now I will give you a short introduction to how to deploy our system. Here is the repository of our project, on Gitee under the EdgeGallery organization, and here is our official website. Everyone can download our project very simply: if you just want the edge part, the edge management, download this package; for the controller, I mean the management side, download this one; and if you want to deploy everything, both the edge and the manager, download this one here. It's really easy. Okay, after the introduction to the deployment method and the feature details, I'm going to show you some demos of our platform.
As I said, this demo is V2X, vehicle-to-everything. There is a unified coordinate system for the roadside and the vehicle side: perception information from both sides, and a secondary fusion of information such as the car's position, the car's speed, and the state of detected objects. An object might be a pedestrian, a walker, or something else in front of the car. So we have roadside perception, vehicle-side perception, and a unified coordinate system; the perception information forms a local closed loop of road network information. Based on intelligent calculation, the sensor information is fused to form real-time data, which is transmitted from the roadside to the vehicle side to improve the confidence of autonomous-driving perception. Here is the architecture: the camera is for roadside perception; the vehicle runs ROS, the Robot Operating System; and here is the RSU, the roadside unit. The logical connection is that the car's information goes to the roadside unit, the camera data is uploaded, and the system uses this information for the calculation. When the car finds some object, like a pedestrian in front of it, the car will stop. It's an AI application for driving automation. Now I'm going to show you how to deploy it on EdgeGallery. Give me one second. Here is the developer view.
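Before the portal walkthrough, the roadside-to-vehicle fusion in a unified coordinate system described above can be sketched in a toy form. This is only an illustration of the idea; the sensor poses, confidence weighting, and numbers are made up, not taken from the actual V2X application.

```python
import math

def to_world(x, y, sensor_pose):
    """Rotate/translate a detection from a sensor's local frame
    into the unified world coordinate system."""
    px, py, yaw = sensor_pose
    wx = px + x * math.cos(yaw) - y * math.sin(yaw)
    wy = py + x * math.sin(yaw) + y * math.cos(yaw)
    return wx, wy

def fuse(roadside_det, vehicle_det):
    """Confidence-weighted fusion of two observations of one object."""
    (x1, y1, c1), (x2, y2, c2) = roadside_det, vehicle_det
    w = c1 + c2
    return ((x1 * c1 + x2 * c2) / w, (y1 * c1 + y2 * c2) / w, max(c1, c2))

# A roadside camera at (10, 0), facing yaw = pi/2, sees a pedestrian 5 m
# ahead; the vehicle sees the same pedestrian with lower confidence.
rx, ry = to_world(5.0, 0.0, (10.0, 0.0, math.pi / 2))
fused = fuse((rx, ry, 0.9), (10.2, 4.8, 0.6))
```

The fused estimate leans toward the higher-confidence roadside observation, which is the "improved confidence" the talk describes being sent back to the vehicle.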
This is the brand-new developer view. On this page we create the application: click start, and the application name is, say, "V2X road app". We need an icon here; I already have one. Then a description; this is for test. Confirm. Now the new project is added successfully. Next we need to upload some files: we upload a YAML file to create the environment. I already have it here, so just push next. We need an environment to test our app, and here I choose the sandbox environment, and we start the deployment. This is the default deployment, so it's very simple: after creating the deployment file, we assign test nodes and instantiate the application, and then we watch the deployment status. It needs a little while. See, the brand-new system is very beautiful; purple is our official color. Now the deployment has finished successfully. Next, we release the resources of the test environment. Then the next step: it says "please click application certification to test before application release", so we click application certification to test the safety of this application package. You can see it is testing right now: there is a common sandbox test, which is for the environment, and a common security test. All passed; it's all done here. You can also upload your own self-test report, but we don't need it right now.
Next step: our V2X app has been uploaded and deployed successfully, so we can publish this application to the App Store. After publishing, you can find your application here in the App Store. This is the App Store for the users; the end user can download any of the apps here. After the developer, I mean the common coder, uploads their application package on the developer system and publishes it, the end user or the administrator can find the app here. That is the whole routine of developing an app on the platform. Okay, let me see. After this V2X app has been published, you can see a demo: this little car can drive on the road (the white thing is the road) smoothly, smartly, and intelligently, by itself. That's artificial intelligence; that's driving automation. What's done is done here. Thank you all for listening and watching. Here is the official website of EdgeGallery, and our repository is on Gitee.
I wish you guys would follow us. Our vision is to build a collaborative, win-win relation of production for edge computing. Our platform aims to build the standards for 5G edge computing architecture and capability openness in the open-source mode, reducing the threshold for bringing enterprises and applications on board the industrial platform for 5G edge computing. In this way, a large-scale 5G application business ecosystem is formed through the new collaboration mode, development mode, ecosystem construction mode, and transaction incubation mode. EdgeGallery hopes to enable the new relation of production and unleash the productive forces of 5G. If you want to go fast, go alone; but if you want to go far, please come with us and go together. Thank you.

Hello everyone, this is Victor Ko from the EdgeGallery community. Today we are going to present a tutorial about how IoT meets MEC: the convergence of KubeEdge and EdgeGallery. Three people will present the slides, and at the end we have a demo showing how KubeEdge and EdgeGallery really integrate to give us an open framework for industrial IoT applications. EdgeGallery is an open-source MEC platform that provides 5G open capabilities, a developer tool chain, and a federation repository for industry applications. We also have the orchestrator for the carriers, which can orchestrate all the applications and resources across different locations and different edge nodes. KubeEdge is an open-source edge computing framework that extends the power of Kubernetes from the central cloud to the edge. So today we are going to present the integration of KubeEdge and EdgeGallery. For today's contents, we have four topics.
First we will introduce EdgeGallery and how it works; secondly we will introduce KubeEdge, giving a brief introduction to these two projects; then we will give an overall view of the convergence of KubeEdge and EdgeGallery; and at last we have a demo from Kumar showing how KubeEdge and EdgeGallery work together. First, I want to talk about EdgeGallery's positioning and scope. Our scope is building a unified MEC ecosystem and accelerating the commercial use of MEC. As you can see on the right side, EdgeGallery has four parts. The first part is MEP, known as the MEC platform defined in the ETSI MEC ISG. It leverages software capabilities such as AI, the UPF, and other 5G capabilities that come from the carrier network; we also provide a gateway and tools, so applications can use such capabilities through the unified gateway we call MEP. The second part is MEC application orchestration and management, known as MEO and MEPM. MEO and MEPM provide unified application lifecycle management plus resource and application monitoring; as you can see, we currently already support OpenStack and Kubernetes for application lifecycle management and resource monitoring. The third part is the application federation: a unified application repository, mostly for interconnection with commercial application markets. It makes applications easy to deliver from the application store to the carrier's MEO/MEPM system, and lets different application stores communicate with each other, so that we can share the application ecosystem across all carriers and application developers. The fourth part is the developer tool chain, which provides code integration with standard MEP APIs.
It also provides rich capabilities, coming from the hardware or from third-party applications, that developers can easily use, and we package and test the applications for the end users. Our position is a carrier-led edge computing architecture with capability openness and de facto standards, which lowers the threshold for enterprise application deployment and builds a 2B business ecosystem at scale. Our goal is to build an edge computing project that is most compatible with "connection plus computing" in the telecom industry. Here is the EdgeGallery edge-native architecture for the latest release. We propose the concept we call the edge-native architecture, which captures many of the concepts a whole edge-native architecture should have. First, let me introduce the EdgeGallery architecture. We have separated EdgeGallery into three phases: design time, distribution time, and runtime. At design time, as we said before, we have the developer portal that provides the developer tool chain for application developers: they can use an SDK, API, or plugin to access the 5G capabilities. We also have the capability-exposure design that application developers can use, and we already support both container and virtual machine packaging, on both OpenStack and Kubernetes. We also have the online debugger and monitoring we call the sandbox; the sandbox can include different capabilities such as 5G, or hardware capabilities such as GPU and CPU. And we have the application test platform, which application developers can use to test their applications.
That is, before you go to the application repository, which we call the app store, you must pass these tests on the platform; then you can upload the application to the repository. On the application test platform you can use the existing test cases, and you can also upload test cases that come from different customers. For example, carriers have their own test cases; you can take them and we can use the application test platform to run them together. We also have a design portal to design the applications, and we can also design the EdgeGallery on-demand installation. After you have passed all the tests and integrated the application on our platform, you can go to the application repository. The application repository can distribute applications between different application stores, and between the application store and the MEO. We also have the online experience and the catalog of applications. After we distribute the application to the MEO, the MEO does the lifecycle management. For example, we also have rule management to configure the UPF rules, DNS rules, and other things; we have resource management and pluggable cloud management for different private or public clouds; and we have dashboards where you can monitor the resources and get the state of the applications. The last phase is the runtime. The runtime is about two things. The first is our MEP, which includes the API gateway, the capabilities, and the data plane. For the data plane, we extend Kubernetes with a multi-network plane, using Multus to add networks beyond the default Kubernetes container network capabilities.
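A minimal sketch of how such an extra network is declared, assuming the multi-network plugin in question is Multus CNI (the transcript's "MetaLas" is unclear): a `NetworkAttachmentDefinition` describes the secondary network, and a pod opts into it with an annotation while keeping the default CNI network. The resource names (`mep-data-net`, the bridge, the subnet) are made up for illustration.

```python
import json

def make_net_attach_def(name, bridge, subnet):
    """Build a Multus NetworkAttachmentDefinition manifest as a dict."""
    cni_config = {
        "cniVersion": "0.3.1",
        "type": "bridge",
        "bridge": bridge,
        "ipam": {"type": "host-local", "subnet": subnet},
    }
    return {
        "apiVersion": "k8s.cni.cncf.io/v1",
        "kind": "NetworkAttachmentDefinition",
        "metadata": {"name": name},
        # Multus stores the delegate CNI config as a JSON string
        "spec": {"config": json.dumps(cni_config)},
    }

# The application pod requests the extra (data-plane) network via an
# annotation; the default CNI network stays for management traffic.
pod_annotations = {"k8s.v1.cni.cncf.io/networks": "mep-data-net"}
nad = make_net_attach_def("mep-data-net", "br-data", "192.168.100.0/24")
```

This is exactly the separation the talk motivates next: one network attachment for application traffic, the default network for management and API calls.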
But actually, in the telco world we must separate the application network and the management network. So we use the Kubernetes controller and informer mechanism to give developers the capability of multiple networks for their applications, one for management and one for API calls. In addition, we add a controller so that the API server can reach the multiple networks; you do not need to use the IP address directly, you can use a domain name or other names. Another part is application LCM (lifecycle management) and application rule management. If you deploy the application LCM on the edge side, you get edge autonomy: even after a disconnection between the central cloud and the edge cloud, you can use the application LCM to control your edge node. We support container-based and virtual-machine-based applications, and we have the MEP agent to support them. The MEP agent is something like a sidecar for the applications: it helps the application do registration and subscription with the MEP, and it also acts as a gateway that routes to the API gateway to call the open capabilities. As I mentioned before, we have the top ten edge-native technologies enabling digital transformation in the 5G network industry. This is a mapping from EdgeGallery to those edge-native technologies: edge infrastructure, edge network, edge orchestrator, edge cooperation, edge AI, edge security, edge mesh, edge storage, and edge framework, each with different capabilities.
Edge infrastructure provides capabilities that can be used on different hardware, and edge security protects the edge node in different locations, on different hardware, and for different usages of the edge. We have the edge framework so that developers can develop conveniently. For DevSecOps we have the tool chains; for the edge orchestrator we have declarative resource orchestration; for edge collaboration we make the network user end, the 5G network, and the public cloud work together. MEP is easy to understand, and we also have edge AI, the data mesh, and the edge network paired with the 5G network. Now, the architecture view of the KubeEdge and EdgeGallery integration. As you can see, on the southbound side, in the green block, we collect streams from devices using different protocols. We leverage the KubeEdge protocol stack; for example, we use the KubeEdge OPC UA mapper to get data from the devices. After the data comes in from the devices, we have data analytics tools, for example Kuiper or Flink, and we can configure everything together automatically: the Kuiper rules, the OPC UA mapper, the devices, and the profile management can all be configured together. Everything is done automatically, and the end user just uses one profile. A profile here can be seen as a protocol profile, or as a scenario for a particular industry application. These profiles go to the profile management, and the profile management exposes the different APIs.
I mean a unified API for the profile management, through which we can get the data from the profile management; for example, we can output such data to upper-layer systems or the public cloud. As you can see on the right side, applications or the public cloud can also use EdgeGallery to empower themselves with EdgeGallery's capabilities. All right. As for KubeEdge, the project was started in 2018 and donated to the CNCF as a sandbox project in March 2019, and now KubeEdge is the only incubation-level edge computing project in the CNCF. Until now we have received more than 4500 GitHub stars and 1300 GitHub forks, and we have more than 800 contributors, including more than 200 code submitters, from over 60 organizations around the world. In the KubeEdge community we have a lot of special interest groups and working groups to discuss multiple technical areas and better achieve the edge-cloud collaboration architecture. The AI SIG focuses on simplifying AI workloads running on the edge and automatically enabling the collaboration between cloud and edge with federated learning, incremental training, lifelong learning, and joint inference. The IoT special interest group focuses on simplifying device integration, no matter what protocols the devices are using. The MEC SIG focuses on providing a reference architecture for running an MEC platform on top of KubeEdge. And the wireless working group discusses how we can handle edge computing deployments in which the edge nodes keep changing their positions.
Recently there was a discussion about establishing a robotics special interest group to simplify robotics development on the edge and collaboration with services in the cloud. You can check out the KubeEdge community governance documents for more details. From a technical perspective, KubeEdge is based on Kubernetes, so we provide Kubernetes-native application support on the edge. Basically, if you have any applications that depend on Kubernetes, or any plugins or operators that rely on the Kubernetes APIs, you can easily run them on the edge with the Kubernetes-native support provided by KubeEdge, without any further refactoring. To deal with low-quality network environments between the edge and the cloud, KubeEdge provides a seamless cloud-edge collaboration mechanism, which makes life much easier for the whole system working over a low-quality network. The edge nodes can be located in any subnet or private network behind a firewall. Even when the edge is disconnected from the cloud, with the edge-autonomy functionality provided by KubeEdge, the node and the applications on the edge can keep working autonomously, with all the relevant information persisted on the edge node. We have also optimized a lot of the underlying system overhead to make the whole system easier to run on low-resource devices and low-resource edge servers: KubeEdge itself takes only around 17 megabytes, which is quite a small footprint. And to simplify device integration, KubeEdge has a device mapper framework, which provides the extensibility to easily integrate with any device protocols.
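The edge-autonomy idea described above, desired state persisted on the edge node so the node can keep serving it while disconnected, can be illustrated with a toy local metadata store. This is a simplified sketch, not KubeEdge's actual implementation (which persists metadata in a local store inside EdgeCore); the key and spec values are made up.

```python
import json
import sqlite3

class EdgeMetaCache:
    """Toy local store standing in for the edge node's metadata cache."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)"
        )

    def sync_from_cloud(self, key, obj):
        # Called while connected: every update from the cloud is persisted
        # locally before (or alongside) being acted on.
        self.db.execute(
            "INSERT OR REPLACE INTO meta VALUES (?, ?)", (key, json.dumps(obj))
        )
        self.db.commit()

    def get(self, key):
        # Called at any time, including while disconnected from the cloud.
        row = self.db.execute(
            "SELECT value FROM meta WHERE key = ?", (key,)
        ).fetchone()
        return json.loads(row[0]) if row else None

cache = EdgeMetaCache()
cache.sync_from_cloud("pod/default/sensor-app", {"image": "sensor:1.0", "replicas": 1})
# ...cloud connection lost; the node still answers from the local store:
spec = cache.get("pod/default/sensor-app")
```

Because the answer comes from local storage, a node reboot or a long disconnection does not lose the desired state of the applications on that node.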
KubeEdge also provides a cloud view of the metrics data from the edge. Okay, so this is the KubeEdge architecture. In the cloud it runs together with vanilla Kubernetes; with this, the user is able to manage both nodes in the cloud and nodes on the edge. For any updates relevant to the nodes on the edge, for example a new pod or a new application scheduled to an edge node, CloudCore will synchronize the updates, the application definitions, and the containers to the corresponding edge node. EdgeCore contains a lightweight component that will spin up the right container to serve the applications, and it supports any OCI-conformant container runtime, including Docker, containerd, CRI-O, and Kata Containers. Besides the standard CNI plugins, KubeEdge provides the EdgeMesh framework to simplify service communication between different subnets and private networks: it automatically establishes P2P connections between different subnets, and with that, the containers and applications on the edge can easily talk to each other with exactly the same experience as using a Kubernetes service in the data center. For storage on the edge we also support the standard CSI plugins. For the connection between the cloud and the edge we use WebSocket as the underlying protocol by default, with a messaging mechanism provided by KubeEdge on top of that. We also implemented application-layer message persistency, which ensures your applications on the edge reach the desired state defined in Kubernetes.
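The application-layer message persistency just mentioned can be sketched as a channel that keeps every message until the other side acknowledges it, and replays the unacknowledged ones after a reconnect. This is a simplified illustration of the pattern, not KubeEdge's actual CloudHub/EdgeHub code; the message payloads are made up.

```python
from collections import OrderedDict

class ReliableChannel:
    """Toy persistent cloud-to-edge channel: keep-until-ACKed, replay on reconnect."""

    def __init__(self):
        self.pending = OrderedDict()  # msg_id -> payload, kept until ACKed
        self.next_id = 0

    def send(self, payload):
        msg_id = self.next_id
        self.next_id += 1
        self.pending[msg_id] = payload  # persist before transmitting
        return msg_id

    def ack(self, msg_id):
        # Edge confirmed receipt; the message no longer needs to be kept.
        self.pending.pop(msg_id, None)

    def resend_queue(self):
        # On reconnect: everything not yet ACKed goes out again, in order.
        return list(self.pending.values())

ch = ReliableChannel()
a = ch.send({"op": "update", "pod": "sensor-app"})
b = ch.send({"op": "delete", "pod": "old-app"})
ch.ack(a)
# After a disconnect/reconnect, only the unACKed message is replayed.
```

This is what lets the edge converge to the desired state even when the link drops mid-update: nothing is considered delivered until the receiver says so.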
And to simplify device integration, KubeEdge has a device mapper framework, so any device with its own protocol can be easily integrated with KubeEdge. For example, say we have some devices that use the Modbus protocol. KubeEdge provides a Modbus device mapper that converts the data from Modbus into standard MQTT-based messages. With that, the applications on the edge don't need to worry about the real protocol the devices are using. They just need to know the message format defined by KubeEdge and subscribe to the standard message topics defined by KubeEdge, and then they can easily talk with the devices. KubeEdge also makes it quite easy for users to define their own device mappers to integrate with any third-party devices and device protocols.

Now for the demo. There are thousands of OPC-UA-based IoT devices in a factory. Here, KubeEdge is used to collect data from these devices, which are sending 1,000 readings per second, and this data goes to EdgeGallery. EdgeGallery runs the rule engine (eKuiper), a lightweight data store (TDengine), and a visualization application (Grafana).

First we'll create a project and give our application a name. Here we can upload our deployment file. Now we can run a sandbox test for our application. Success: sandbox testing is successful. Next step, our deployment is successful. Then we provide some information about our application, and here in this file, the configuration. Go to the next step, testing. A certain set of test cases is performed on this application. All tests have passed on the package, so we can go to the next step. First we distribute this application to the selected node; I'm going to deploy on edge node one. The distribution is successful, and now we can deploy the application.
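The mapper idea from the first half of this section is essentially a small translation layer: take a protocol-specific reading and republish it as a protocol-neutral MQTT message on a well-known topic. The sketch below shows that shape in Python; the topic layout, device ID, and payload fields are illustrative stand-ins, not KubeEdge's exact wire format.

```python
import json

def modbus_to_mqtt(device_id, register_values):
    """Translate a raw Modbus-style register read into a
    protocol-neutral (topic, JSON payload) MQTT message."""
    # Illustrative topic layout; the real KubeEdge topics differ in detail.
    topic = f"devices/{device_id}/twin/update"
    payload = {
        "device": device_id,
        "properties": dict(register_values),  # e.g. {"temperature": 23.5}
    }
    return topic, json.dumps(payload)

# An edge application subscribes to the topic and never sees Modbus at all.
topic, msg = modbus_to_mqtt("pump-1", {"temperature": 23.5, "humidity": 61.0})
print(topic)  # devices/pump-1/twin/update
print(msg)
```

Because every mapper emits the same message shape, swapping Modbus for OPC-UA (as in the demo that follows) changes nothing on the application side.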
It can take some time; our services are getting ready. Yeah, our services have now started successfully. We're using TDengine, so I will select it and we can see our dashboard. Currently there is no data coming to the dashboard, because so far we have not enabled any device, so no data is available in this dashboard.

Now we'll look at eKuiper. This is the topic from KubeEdge, and here we're using a wildcard for the device, so whatever device data KubeEdge is collecting will all be picked up by eKuiper. Then we have set rules in eKuiper, so we are transforming and filtering the IoT data. For example, we transform the timestamp in the readings from the devices, and in the readings the temperature comes under a raw field name, which we convert to a proper name like "temperature". The timestamp values come as strings, which we convert before storing them in our DB.

Okay, so now we will deploy the OPC-UA server, which is collecting data from 1,000 OPC-UA clients, and KubeEdge will get data from this OPC-UA server. This is the OPC-UA server; I will start it now. Now we can see the logs of this OPC-UA server: it has started and is listening on this URL. Now I will start the clients; this will start 1,000 clients.

So this is our KubeEdge setup; I will show you. This setup has one KubeEdge master and one KubeEdge node. Now we will check whether any device is configured in KubeEdge. Currently no device is configured. I have already created a device model file and device instance files, so I will quickly show them. We have the models here; I will show one model. A device model is like a device template, and a KubeEdge device instance uses this device model as a template to create a device instance in KubeEdge. Our device model has two properties: one is temperature and the other is humidity. I will show you the device instance configuration files; we have already generated 1,000 device files.
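The rule described above (rename the raw field, cast string readings to numbers, and make the timestamp storable) can be expressed as a small pure function. This is a Python equivalent of what the eKuiper rule does, not the rule's actual SQL; the raw field names `ts` and `temp` are assumptions about the incoming payload.

```python
from datetime import datetime, timezone

def transform_reading(raw):
    """Normalize one device reading: rename the raw temperature field,
    cast string values to floats, and convert the millisecond epoch
    timestamp into a readable UTC time for the data store."""
    ts = datetime.fromtimestamp(raw["ts"] / 1000, tz=timezone.utc)
    return {
        "time": ts.isoformat(),
        "temperature": float(raw["temp"]),     # raw name "temp" is assumed
        "humidity": float(raw["humidity"]),    # arrives as a string
    }

row = transform_reading({"ts": 1700000000000, "temp": "23.5", "humidity": "61"})
print(row["time"], row["temperature"], row["humidity"])
```

Doing this normalization at the edge, before the write to TDengine, is what lets the dashboard queries stay simple: every row already has numeric values and one canonical time column.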
Each file has one device configuration; I will show you one device. This is the kind of device in KubeEdge where we can define our device instance. We can give it a name, so we gave "lamp one", and this is the model from which this device instance will be created. Next is the protocol: this device is using the OPC-UA protocol, and this is the server URL from where it will get the readings, the server we already started. Next are the property visitors, which describe what properties this device has. It has two properties, temperature and humidity, and this is the OPC-UA node ID from where it will get the temperature. The temperature node ID is something like namespace 2, identifier type string, identifier "sensor one". Each device is named lamp one, lamp two, lamp three, up to 1,000, and here the details of the properties are provided: here is temperature, and here we can see the corresponding sensor ID.

Now I can show you that KubeEdge uses the OPC-UA mapper for this protocol. We have two models in this path, and these two models have been created. Now I will create the devices. Okay, we will configure 1,000 devices. In this path we have 1,000 configuration files for devices, so this will create and configure 1,000 device instances. So in KubeEdge, we have configured 1,000 OPC-UA devices. Okay, it has created them; you can check whether the OPC-UA service has started or not. It is running, it is started. We can see our OPC-UA service has started and is getting data from the OPC-UA server. These are the readings it is getting from the OPC-UA server: temperature and humidity, with the values coming as strings, and the timestamp as a number, which is what the rule will transform.

Okay, so we will look at the DB. In TDengine, we have a database, IoT DB, for collecting and storing the IoT data. It is storing data for almost 1,000 devices, per-device readings with a timestamp. For our visualization application, we are using Grafana.
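Since the 1,000 instance files shown above all follow one template (name `lampN`, a shared model, the OPC-UA server URL, and a per-device node ID), they are naturally generated by a loop. The sketch below produces such definitions as plain dicts; the field names, model name, server URL, and node-ID pattern are simplified stand-ins for the real KubeEdge CRD schema, not its exact fields.

```python
def make_device_instance(i):
    """Build one device-instance definition in the spirit of the
    KubeEdge files shown in the demo (hypothetical schema)."""
    return {
        "name": f"lamp{i}",
        "model": "sensor-model",                        # shared template
        "protocol": {
            "opcua": {"url": "opc.tcp://server:4840"},  # assumed server URL
        },
        "propertyVisitors": [
            # ns=2;s=... mirrors the OPC-UA string node IDs from the demo.
            {"property": "temperature", "nodeID": f"ns=2;s=sensor{i}"},
            {"property": "humidity",    "nodeID": f"ns=2;s=sensor{i}-hum"},
        ],
    }

devices = [make_device_instance(i) for i in range(1, 1001)]
print(len(devices))                              # 1000
print(devices[0]["name"], devices[-1]["name"])   # lamp1 lamp1000
```

In practice each dict would be serialized to its own YAML file and applied to the cluster, which is exactly the batch of 1,000 files the demo applies in one step.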
So here we have configured some random devices that the user is interested in, like LAMP 1, LAMP 250, LAMP 700, and LAMP 999, to show data from different devices. Since we just started, the data may not be coming in yet. Different lamps have different readings at different points in time, as you can see here. And this graph shows the total message count per second: we have attached about 1,000 devices, and each device is sending one reading per second.