Thank you so much, Arpit. Let me just share my screen here. One second, putting this on a slideshow. OK. All right, thank you so much, Arpit. Thank you, everyone. I'm Sam Armani, SVP of Business Development at Mimic. It's a pleasure to be here today at the Open Networking & Edge Executive Forum. In my presentation, I'm going to go over how we can enable smart devices to act as cloud servers, which is something we at Mimic have been working on for the past decade, and which is commercially available today for the developer community and enterprises.

So let's start by reviewing the current state of data connectivity and processing. Today, applications are architected across all of these connected edge nodes, and by edge, I'm referring to all the devices that are running applications and generating data. The majority of this data, over 90% of it, goes through a combination of public and private clouds to be processed, stored, and streamed among various devices and systems. And with the increasing amount of data being generated by these devices on the edge, there is an increasing demand for high bandwidth and heavy compute power, which is met by adding servers in the data centers.

Now, there are a few challenges we're facing as a result of this centralized model. One is latency in communication, because everything has to go through a third-party node to get from point A to point B, and that leads to latency issues, which is especially important for mission-critical applications. Then there are issues around data privacy, for both user data and enterprise data, again because you have to send all the data to third-party servers to do anything. More data in motion also means more vulnerability to security hacks. And there's energy consumption: for any application you launch, you basically have to add more and more servers to support it.
There are also bandwidth congestion issues, because everything needs to go through the network, even though capable hardware is already available on the edge. So let's look at the evolution of these edge devices I'm referring to. When you look at this graph, you see that year over year, these devices are becoming more and more powerful in terms of memory, processing power, and storage. Take the iPhone 13 or a Samsung Galaxy as an example: it's pretty much a server in your hand, with more compute power than a server room had just a few years ago. And when you look at headless IoT devices, you see the same trend happening; the MCUs and MPUs in these devices are orders of magnitude more powerful than they were just a few years ago. So the trend is that these devices are becoming more and more capable in terms of processing power.

Now, let's look at the evolution of the application architecture on these devices. In the world of legacy apps, which still make up the majority of applications today, you had a light client or agent running on the front-end device. Remember, these devices didn't used to be as powerful in the old days. And then you had a monolithic backend containing the entire business logic, or intelligence, behind the application. In modern applications, even though the devices have become more powerful, the front end is still a light client or agent, but the backend is now decomposed into an API gateway and microservices. So the backend has evolved from a monolith to a microservice-based architecture. What we are promoting and enabling here is a more balanced approach: putting microservices in the front-end application on the devices as well as in the backend, or anywhere in between.
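To make the "balanced" placement idea concrete, here is a minimal sketch, not the Mimic API, of a hypothetical placement policy: each microservice-style function is tagged with its constraints, and a small dispatcher decides whether it should run on the device, on an on-premise gateway, or in the cloud. All names and fields here are illustrative assumptions.

```python
# Hypothetical sketch (not the Mimic API): deciding per function whether a
# microservice runs on-device, on a local gateway, or in the cloud.
from dataclasses import dataclass


@dataclass
class Function:
    name: str
    handles_personal_data: bool   # privacy-sensitive data should stay local
    needs_global_state: bool      # e.g. cross-account aggregation


def placement(fn: Function, gateway_available: bool) -> str:
    """Return where a function should be deployed under this toy policy."""
    if fn.handles_personal_data:
        return "device"           # the data never leaves the edge node
    if fn.needs_global_state:
        return "cloud"            # global functions still go through the cloud
    return "gateway" if gateway_available else "device"


# A face-detection function keeps images on-device, while a fleet-wide
# analytics function runs in the cloud.
print(placement(Function("detect_face", True, False), True))   # device
print(placement(Function("fleet_stats", False, True), True))   # cloud
```

The point of the sketch is that placement becomes an application-level decision the developer makes per function, rather than a property of the whole app.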
So basically, we are enabling the application developer to decide which functions, in the form of microservices, make sense to run on the front end, which make sense to run in the backend on the cloud, and which to run on any hardware in between. For instance, if you have a gateway on premises, you can decide to put some of the functions there. This way, you're leveraging the power of all the hardware available to you, and you get to do some of the processing right at the data source, where the application resides. By doing this, we're pretty much turning every device into a micro cloud server, and we are moving from a centralized world to a hybrid one. Instead of everything communicating through a central node, every device now has cloud capability at the application level. Devices can form ad hoc service-mesh clusters at the application level based on policy, and they can communicate in a peer-to-peer fashion. A cluster can be based on account, network, or proximity, and devices can still go through the cloud when it makes sense, for instance for running global functions. This way, you get a much cheaper, faster, more resilient, more private, and greener solution for the applications of the future.

As for the benefits: one is lower cloud hosting costs. Of course, this varies depending on your application and how many of its functions you put on the front end, but you can reduce those costs by up to 90%. You get optimal latency: for functions that run locally, latency is pretty much minimized, and when multiple devices are involved, since in some cases they can now communicate directly, latency is minimized there as well. We're enabling 100% GDPR compliance by function: the developer decides what data remains on the device and what goes through the cloud or other available hardware, and for the data that remains on the device, they decide how it's accessed and shared among other devices.
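The ad hoc clustering described above can be sketched in a few lines. This is an illustrative toy model under my own assumptions, not Mimic's implementation: each node carries an account and a network identity, and a policy scope ("account" or "network") determines which peers it may talk to directly.

```python
# Hypothetical sketch (not the Mimic API): forming an ad hoc application-level
# cluster of edge nodes based on a policy scope.
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    node_id: str
    account: str
    network: str        # e.g. a Wi-Fi SSID, standing in for proximity


def cluster(nodes: list[Node], me: Node, scope: str) -> list[Node]:
    """Select the peers this node may talk to directly, per the app's policy."""
    if scope == "account":
        peers = [n for n in nodes if n.account == me.account]
    elif scope == "network":
        peers = [n for n in nodes if n.network == me.network]
    else:
        raise ValueError(f"unknown scope: {scope}")
    return [n for n in peers if n.node_id != me.node_id]


phone = Node("phone-1", "alice", "home-wifi")
tablet = Node("tablet-1", "alice", "home-wifi")
tv = Node("tv-9", "bob", "home-wifi")

# Account-scoped mesh: only Alice's devices pair up.
print([n.node_id for n in cluster([phone, tablet, tv], phone, "account")])
# Network-scoped mesh: every device on the same Wi-Fi, regardless of account.
print([n.node_id for n in cluster([phone, tablet, tv], phone, "network")])
```

Once a cluster is formed this way, member nodes can exchange traffic peer to peer and fall back to the cloud only for the global functions mentioned above.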
We have military-grade security built into our edge engine, our core platform, and of course you can add more security at the application level. You get optimal resiliency: in a hyper-local environment, if you lose your internet connection for a period of time, your operation can continue. And you reduce data transfer to the cloud, which leads to a lower carbon footprint, so overall this is a much greener approach to application development.

Now, at Mimic, our philosophy has always been to align with and contribute to the open tech community. One way we do that is by enabling microservices and open APIs everywhere, even on the front-end application side. We are an open platform in the sense that we enable cross-OS, cross-device, cross-cloud, and cross-network communication: one device on Android can be connected to Wi-Fi and communicate with an iOS device connected to LTE. We let developers use any tools of their choice, any programming language, any cloud deployment model; we are agnostic to all of that and work with everyone. For software vendors and enterprises, this means your solution is not tied to one vendor. You can switch vendors and your system will keep working; your architecture doesn't have to change. It also means lower development costs. Why? Because you build your edge microservices once, for one platform, and reuse them as many times as you like on other platforms; you don't have to rewrite everything from scratch. And we're accelerating innovation by combining the capabilities of various vendors with the developer and user community. So basically, we're giving developers more choice: they're not locked in, and they have more control.
And last but not least, we recently integrated the Mimic edge engine with Open Horizon, which extends the reach of Open Horizon to smart devices like iPhones and Android phones and also enables the ad hoc deployment model on the edge. We've open-sourced that integration code and submitted it to LF Edge, so it's now available for developers to use.

Just quickly, let's look at how the pandemic has affected everything. According to Deloitte, COVID-19 has accelerated the demand for digital transformation and edge computing by at least six years. The reasons: first, contactless and automated services are now pretty much required everywhere. The economic downturn has forced enterprises to become more cost-conscious. The high volume of data now going through networks has shown that many existing solutions are not as scalable as previously thought, hence the accelerated adoption of API-first, microservice-based, edge-enabled applications. There are more rigid security requirements, especially for contactless approaches. Overburdened systems have shown that the traditional paradigm is not sustainable or scalable. And the pandemic has forced the development of new tools enabled by edge computing for things like remote monitoring, remote management, provisioning, and repair.

Looking at current business and technology priorities, it appears we will have yet another record year of growth in IoT applications and platforms. We'll see an increase in AI- and ML-related applications for things like fraud and anomaly detection, automated inputs, facial recognition to improve workplace access, and image-recognition-based identification. And pretty much in every segment, health care and automated manufacturing use cases will be on the rise.
In almost every segment, for instance smart city infrastructure and robotics, there will be a lot of investment. But when you look horizontally across segments, some of the key focus areas for improvement are still going to be security, privacy, and enabling edge computing, including edge AI, in building applications.

I encourage everyone to go to developer.mimic.com. You can download Edge Engine for the operating system of your choice for free, and there's a lot of documentation and training there. You can use it to build your application, start developing your own edge microservices and applications in whatever language you want, and then publish your app. And of course, we're here to help you. We also make available to developers a set of hybrid, edge-optimized, domain-specific software-as-a-service components for things like identity management and managing edge-based microservices, plus things like analytics for enterprises. And we're really looking forward to working with the community to build a microservice marketplace as well.

That wraps up my presentation. You can find me on LinkedIn or email me with any questions at any time; I'm happy to get on a call to discuss any of these topics with you. But for the bit of time remaining, I'm here to answer questions. Also, don't forget to check out the Mimic Open Horizon integration that I mentioned. Thank you so much. Thank you.

Thank you, Sam. If you can stop the presentation. Perfect, thank you. Yeah, that's great: a good overview of the edge compute market and how COVID has accelerated the transformation. To your last point, there's a question asking how much of the work you have been doing is either open source or built on open source. I know you talked a little bit about the Open Horizon integration; can you elaborate on that?
Yeah, so the edge engine itself is not open source, although it's available for developers to use for free for development. But the whole integration of the edge engine with Open Horizon, that is open source, and that's what we recently submitted to the LF Edge Open Horizon project.

OK, OK. So you're utilizing the power of the community to enable this. It's really fascinating to see things move and the whole edge compute market grow significantly. There is a question, and I'm not sure if it's completely related, on how this relates to SaaS and service mesh. I don't know if that's relevant, so I think we might skip that one. The last question, and I think we'll be running out of time, is: how do you manage microservices deployed over smart devices? I think it's more at the device-to-edge level.

Right. So first of all, back to one thing I wanted to mention on the previous question: we also have a bunch of edge microservices that we've open-sourced, so that's also available on the open source side of things. Then on the second question: we basically create an ad hoc edge service mesh among devices at the application and microservice layer, so that applications on the devices, based on the application's policy, can communicate in a peer-to-peer fashion. And on the deployment side, we use Open Horizon across all of these devices.

Very good. OK, excellent. Thank you for answering the questions and for providing the insight. Appreciate it.

Thank you. Thank you so much, Arpit. Thank you. My pleasure.