So, my name is Antonio Puliafito, from the University of Messina, and this is my colleague Giovanni Merlino. We belong to the University of Messina, but also to CINI, which I will explain in a moment, and to the company SmartMe.io. The University of Messina is located in Italy, in the southern part, in Sicily, where we teach computer engineering, specifically in the Mobile and Distributed Systems Lab, where we work and operate. CINI is an Italian inter-university consortium that gathers almost all the Italian universities focused on computer engineering and informatics. It is organized in several labs; one of these is the Smart Cities and Smart Communities Lab, which I am leading at the moment and which involves several hundred researchers distributed all over Italy. Finally, SmartMe.io is a company that was originally created as a spin-off of the University of Messina; now it is a company on the market. We collaborate with this company, and it supports us in a series of developments related to hardware and software, and in bringing specific applications onto the market. This is the location of the company, the new building the people moved into about six months ago. Okay, so the outline of this presentation is the following. We will discuss edge computing, specifically some hardware named Arancino, and I will try to give you some indications about this hardware. Then we will go into the details of the software part, which is the Stack4Things framework, strictly related to OpenStack. And we will also give you some examples of applications we are working on at the moment, in smart cities and industry. So, the hardware. The problem we try to solve is how to deal with a huge number of sensors and actuators distributed around, and also mobile devices that have some computing and storage capability, and how to consider them part of the infrastructure.
We are firmly convinced that boards, cameras, sensors, and actuators should be considered from the very beginning as part of the infrastructure we are going to create. So the infrastructure will involve computing resources, storage devices, and networking, but also IoT, because nowadays sensors and actuators are part of the system we have to set up before we develop services on top of it. So the problem is how to deal with these devices. Cooperating with this company, the family of Arancino boards has been developed, the one you see there; there are several versions, and we will show you the real objects we have with us. The idea behind this board is to try to emulate the way in which our brain works. On the same board you have two hemispheres, two parts. One is based on a microcontroller that is able to interact in real time with the external world, meaning the sensors and the actuators. The other side is based on a microprocessor, in this case the Raspberry Pi that you see here, but there are also other, more powerful possibilities, and it is able to work with this data, to store it, to keep memory of the past, and, according to the past, to try to predict what is going to happen in the future. So these are the two hemispheres represented on this kind of board, which also has some other interesting capabilities, like these expansion slots that conform to the Click standard; there are hundreds of possible expansions available on the market, and others can be created. So this is the basic board, and then you can specialize it by adding dedicated hardware. This board is used in products like an environmental monitoring station, a LoRa gateway, a smart camera, a noise detector, and more.
At the same time you can have different communication channels here, like UMTS, LoRa, Sigfox, LTE, and more, all simultaneously able to interact with the board itself. So in some sense we try to replicate the way in which the human brain is structured. On the other side, we have to control these boards. So the problem is: with thousands of these devices distributed around, how do you interact with them? They create a sort of fleet of devices, and you want to talk to them, to specialize them, to inject code, to receive data, to group them according to the specific problem you are trying to solve. Typically, the available approach is the following: you have the device, the device generates data, the data is stored in the cloud, and then I go and look at this data. This creates serious problems with regard to latency, for example, so in several applications it is not convenient, or even possible, to use this approach. In some cases there are APIs available that can be used to interact with the device, but it depends on the producer or developer of the board whether they make them available, so only in some limited cases is this possible. As I said at the beginning, what we want to do is bring the edge devices inside the cloud and consider them at the same level as the computing, storage, and networking resources used to build the infrastructure we are interested in. So what we want to do, and what we are able to do, is to have edge devices physically connected to different local area networks that interact with the cloud, so that I can mix devices from one network with devices from another one, with a virtual machine, with storage resources somewhere. I create my infrastructure and then I deploy the services on top of it. This is what we are able to do, and we will explain how it works. So this is our stack, our reference architecture, which is somehow similar to the LOKI architecture we were discussing yesterday.
So we go from the lower level, where you have the devices, then the operating system on top of them, then some components that are part of our extension of OpenStack: the Stack4Things Lightning Rod component, which runs on the devices, on the boards, and the IoTronic service, which runs in the cloud. And then you have the different drivers that allow you to interact with the physical environment. So this is, very quickly, the architecture, and what we try to do is to arrive at this sort of vision that we call a software-defined city, or software-defined industry. It means a full abstraction that starts from the physical layer, where you have the sensors and actuators that collect data from the city or from the industry where you are working; they generate data that is managed by boards like this one. They have a counterpart, a virtualized representation inside the cloud, where they mix with the other computing and storage resources and provide services and applications to the final user. To do this, OpenStack has been extended with a fourth pillar that we named Stack4Things. This is where you can find it, where it is available; it adds the capability to interact with edge devices. What we will try to do now is give you more details on the way we implemented this interaction with OpenStack, and for that I will leave the stage to Giovanni. Thanks, Antonio. So, first of all, a comment about the naming: it's important to settle on the naming, because we have both the IoTronic name and Stack4Things. The reason is that, as you can imagine, especially on the industry side, in our spin-off company, we have a bigger, deeper stack which includes lots of application-level logic. So Stack4Things is a kind of umbrella name; we try to keep it going, also because it's still fully open source software, but it's somehow a larger umbrella project.
IoTronic is what we are focusing on here right now, because it's really the subsystem for IoT, as written there. It is meant to treat IoT devices as, again, OpenStack-compliant resources to manage. So, okay, this is a very high-level overview of the architecture, just to know which is what. We have the IoTronic main components, which stay in the cloud, in the data center, let's say. Then we have, of course, a number of interfaces: a CLI, a custom panel for Horizon, all the things that we know are needed in the case of OpenStack subsystems. But there's something special, which is the so-called Lightning Rod: it's the name we have given to the device-side, board-hosted agent. And you see a number of arrows depicting the fact that we have a number of elements at play, as we'll see in a moment, for the communication between the cloud and the far-edge constrained node. So, this is a depiction of the IoTronic architecture on the cloud side. As we said, it's a very OpenStack-compliant subsystem. Actually, to be honest, even the name can bring back some memories: it all started as a kind of fork, an official fork, say, of Ironic. Why? Because the idea was to really treat all the boards as bare metal first, and then enable all other kinds of workloads, for instance containers, later, as we'll see in a moment. So, on the device side there's Lightning Rod, and Lightning Rod interacts with the cloud mostly, and foremost, through the WAMP protocol. WAMP is not the Windows/Apache/MySQL/PHP kind of acronym; it's actually the Web Application Messaging Protocol. It's a sub-protocol of WebSockets. It's somehow standardized, in the sense that there's a draft RFC, and it's interesting for a number of reasons.
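As a rough illustration of what "sub-protocol of WebSockets" means here, the following is a minimal sketch in Python of how a WAMP message is framed: in the JSON serialization, each message is just an array whose first element is a numeric type code. The type codes are taken from the WAMP Basic Profile draft; the procedure name below is made up for illustration.

```python
import json

# Message type codes from the WAMP Basic Profile (draft spec):
CALL = 48     # caller -> router: invoke a remote procedure
RESULT = 50   # router -> caller: the outcome of a CALL
PUBLISH = 16  # publisher -> router: emit an event on a topic

def encode_call(request_id, procedure, args):
    """Frame a WAMP CALL as a JSON array:
    [CALL, Request|id, Options|dict, Procedure|uri, Arguments|list]"""
    return json.dumps([CALL, request_id, {}, procedure, args])

def decode(frame):
    """Parse a frame back into (type_code, payload) for dispatching."""
    msg = json.loads(frame)
    return msg[0], msg[1:]
```

On the wire, such frames travel inside WebSocket messages, which is what lets a board keep a single outbound connection open through firewalls and NATs while still doing both RPC and publish/subscribe over it.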
There's no time here to discuss that, but WAMP can be thought of as a protocol akin to MQTT, which most of you know is quite popular in the IoT space, but it's a more advanced, more featureful protocol, and, bonus point, it's based on WebSockets. That's very important because, as I guess we'll see later, but you can probably guess already, we based this communication on WebSockets: the idea is that the board should be able to call home, to call the IoTronic cloud, anytime, anywhere, behind any kind of constrained, very corporate-like network. So, to be clear, as you'll see later when Antonio shows you some use cases, we talk about situations where most other systems that try to control and interact with boards break, and are not able to do what they need to do. So it's kind of battle-tested. Okay, there are a number of functionalities, but we'll get to that later. So, for instance, yeah, plugins. Why plugin injection? First of all, what is a plugin? It's just our name for an amount of code that we can inject at runtime on the board. Why is it important? Because this notion of plugin injection has been the first way for us to customize the business logic, let's say the inner workings, but even, say, lower-level software running on the boards when deployed in the field. And in the field may mean in the middle of the sea; that's the point. There are two kinds of plugins, synchronous and asynchronous, as you can expect. The idea is that you can either make a kind of RPC-style call, or you can instead just poll and check what the status of the run is. And by the way, this duality, synchronous and asynchronous, is really enabled by the WAMP protocol: WAMP has so-called routed RPCs and routed Pub/Sub. That's why I said it's a kind of superset of MQTT.
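The synchronous/asynchronous duality of plugins can be sketched with a toy model in Python. The class and method names here are hypothetical, not the real IoTronic API: a synchronous call blocks and returns the plugin's result RPC-style, while an asynchronous run returns a run identifier immediately and the caller polls for its status.

```python
import threading
import uuid

class PluginRunner:
    """Toy model of injected-plugin execution: call (sync) vs run + poll (async)."""

    def __init__(self):
        self._runs = {}  # run_id -> {"status": ..., "result": ...}

    def call(self, plugin, *args):
        """Synchronous, RPC-style: block until the plugin returns its result."""
        return plugin(*args)

    def run(self, plugin, *args):
        """Asynchronous: start the plugin in the background, hand back a run id."""
        run_id = str(uuid.uuid4())
        self._runs[run_id] = {"status": "running", "result": None}

        def worker():
            result = plugin(*args)
            self._runs[run_id] = {"status": "done", "result": result}

        threading.Thread(target=worker, daemon=True).start()
        return run_id

    def status(self, run_id):
        """Poll the status (and, once done, the result) of an async run."""
        return self._runs[run_id]
```

The mapping onto WAMP is natural: `call` corresponds to a routed RPC, while `run` plus `status` corresponds to firing off work and observing it, which can equally be done by publishing status updates on a topic.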
Okay, about tunneling and forwarding, which is, again, not the only, but one of the foremost features of IoTronic. It starts by asking what we want to do with the boards. In the most general terms, we expect to be able to create virtual networks, spanning administrative domains, spanning geographic distances, and connecting IoT devices. And when we say IoT devices, just like what we showed, we are thinking about things we can exemplify as single-board computers, but not only those; we have done some work also with mobiles, for instance Android phones, and so on. So the idea is that we can enable a number of applications. Just one example is being able to support some, let's say, not really legacy protocols, because that's not the right name, but protocols which have some limitations. For instance, some time ago we built an application based on AllJoyn, because AllJoyn is a kind of software bus, but its service discovery worked only across a real layer-2 network, so the only way to make it work across sites was to deploy our solution. Okay, this is a very quick overview of how you can break down the networking part. We didn't want, and that's our approach in general, to put the so-called kitchen sink into IoTronic; we thought that everything should stay where it belongs. So, of course, this means involving Neutron, because Neutron has lots of functionality in this sense. We have also been careful not to overload the devices. For instance, the footprint of our solution, based on IoTronic plus Neutron, is very lightweight, because the device doesn't know anything about the existence of Neutron: Neutron does everything on the cloud side, and we just get the wire, really the tunnel, back to the cloud to instantiate interfaces and then assign IPs and so on. Okay, the use cases, of course, are many.
Segregating devices on the same physical LAN, or some kind of local network, or grouping dispersed devices in the same overlay; creating networks that combine devices on one end and virtual machines, or bare metal, whatever, in the data center; and even putting together very heterogeneous devices, boards on one side and, let's say, mobiles on the other. So, the forwarding of services was another key point: not only always being able to reach the device and send commands, as we said, we can send RPCs, so not just sensors but actuators are totally part of the picture, as Antonio told you before, the concept of a cyber-physical system in full, but also exposing services through service forwarding. It works through a system that is complex on one end but simple on the other, where simple matters: on a constrained device, simple matters, in terms of having composable, very Unix-style tools. This is based on a tool we forked and modified for our own usage, called wstun, a tunneling system based on WebSockets. Our innovation has been creating a reverse tunneling mode: the idea is that the device calls home and then gets a number of tunnels instantiated back, whatever the situation, firewalls, NATs, middleboxes of every kind. It works by piping TCP or UDP connections, also using other small Unix-style tools; for instance, we have used socat here, which you may know as a tool that can also instantiate virtual interfaces. Now, discussing forwarding is just a kind of appetizer for a huge use case, which is the Web of Things. Some people call it WebThings. You may already know that Mozilla was behind this initiative some time ago; at a certain point it transitioned to a third party, but it is still an open source and open community framework.
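The piping of TCP connections that the forwarding relies on can be sketched in a few lines of Python. This is a toy relay, not the actual wstun code: two threads copy bytes in opposite directions between a pair of sockets until either side closes, which is the core of splicing an incoming connection onto a tunnel leg.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way, src -> dst, until the source signals EOF."""
    try:
        while True:
            data = src.recv(4096)
            if not data:  # peer closed its write side
                break
            dst.sendall(data)
    finally:
        try:
            # Propagate the EOF downstream without killing the reverse direction.
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(conn_a, conn_b):
    """Splice two TCP connections together, full duplex."""
    t1 = threading.Thread(target=pipe, args=(conn_a, conn_b))
    t2 = threading.Thread(target=pipe, args=(conn_b, conn_a))
    t1.start(); t2.start()
    t1.join(); t2.join()
```

In the reverse-tunneling arrangement described above, the device-side end of such a splice is carried inside an outbound WebSocket connection, which is why firewalls and NATs in front of the board are not a problem.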
The idea, of course, is putting together things that talk very different protocols, and we'll get back to that in a moment, but also being able, again, to reach things and expose resources the way the Mozilla guys prefer: as pure, say, web-based interactions. And, of course, we like it too. In fact, we have enabled that on top of these other implementations, adding some other elements. In particular, we have used NGINX, which you may know as a reverse proxy, on both sides of the connection, so on the cloud side and on the board. And for SSL and encrypted communication we have used Certbot which, again, is a very nice open source tool, and lots of people are, say, investing in this kind of technology; in our case it means having a kind of PKI for free. The use case can be exemplified very quickly like this. I can have a service name, like wot, for Web of Things, let's say; a generic domain, example.com, say; a subdomain, which is board-a, because the subdomain here will really be the name of the board, or whatever name I want to give to the board; and a certain port that I need to reach inside. Then this is the full URL. And this is what happens, very quickly: I get those parameters, I get IoTronic to interact, check, and eventually ask Designate, another subsystem in the open infrastructure community. And then, of course, it means that we get to our real endpoint, which is meant to be the reverse proxy, NGINX. And there you see the green arrow, which is the WebSocket-based tunnel. And in the end, this is the whole picture; this is an example of usage by a client: DNS resolution, stepwise, of course, and then we get the forwarding, and I'll show you with a very, very quick demo what it looks like. This is a web page. You are not seeing it. How can I show you? Maybe stopping here. Now you see it. Okay.
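The URL convention just described can be sketched as a small parser in Python. The exact label ordering used here, board name as the left-most label followed by the service name, is an assumption for illustration, not necessarily IoTronic's real scheme, and the hostname is made up.

```python
from urllib.parse import urlparse

def parse_wot_url(url):
    """Split a Web-of-Things-style URL such as
    https://board-a.wot.example.com:8443/red_led
    into board name, service, base domain, port, and resource path."""
    parts = urlparse(url)
    labels = parts.hostname.split(".")   # ["board-a", "wot", "example", "com"]
    return {
        "board": labels[0],              # which device to tunnel to
        "service": labels[1],            # which forwarding service handles it
        "domain": ".".join(labels[2:]),  # the zone managed via Designate
        "port": parts.port,              # the port to reach on the board
        "path": parts.path,              # the sensor/actuator endpoint
    }
```

On the cloud side, a resolver like this is what lets a single wildcard DNS record and one reverse proxy dispatch each request down the right WebSocket tunnel.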
This is just a simple web page, but what's interesting is that it's compliant with this vision, because we have, you see, some graphics for temperature, humidity, the red LED, the green LED, so sensing and actuation. You can see the temperature and humidity in real time. Okay, just a moment, I'll refresh the page. You should see here. Yeah. Unfortunately, with Italy there are nine hours of time zone difference, but you can see that I'm turning on the LEDs. Okay. Now, of course, this is a very simple example, but what we want to show is that this works by really engaging the endpoints, which are just URLs of sensors and actuators: slash green LED, slash red LED. We want you to play with it as well; if you want, you can just check it yourself. You can also check some other information about IoTronic, because there's lots to unwrap, but we presented it both at the OpenInfra Days in Italy in 2019, and another time as well, and also in Vancouver five years ago, right here. So, if you want to know more about virtual networks and plugins, you can check the Vancouver talk, and the one in Italy for containers and function-as-a-service, which is, I guess, very interesting for all of us, but there is no time and space to talk about that today. I just wanted also to show you that these are all subsystems, components, and hardware in use here. And there's something on our radar: we haven't worked yet on top of Kubernetes, Kata Containers, and StarlingX, and we are very, very interested in collaborating. And speaking about collaborations, by the way, I'll hand back to Antonio. Okay. We selected some real use case applications of this technology, but we cannot present all of them. This is a company that produces these big trucks that go around Europe, and all these trucks are equipped with a station of these boards; there are three of these boards on each truck, and lots of sensors.
We collect in real time about 60 different parameters that are used to monitor the vehicle itself and also to do some preventive maintenance, so they recall the truck if they realize that there is something strange. And these are some of the things you can do, apart from operating all the different openings and closings of this big truck. It's amazing, because with a cell phone you can move this truck and do lots of things, and on the other side you have the telemetry for the different parameters, and of course location, position, acceleration, when you brake, how you use the truck itself. At the moment we are monitoring roughly 300 of these vehicles going around Europe. This is instead a different application, because that one was related to industry, but we also have some use cases with Stellantis, with Ferrari, where we use this technology to monitor some production chains. This, instead, is an example of a smart city: this is an area of Milan named Lorenteggio, and we are using, it's already deployed and running, this technology to integrate 13 different subsets of sensing devices produced by different producers. So there are noise detectors, smart parking, vehicle flow detection, environmental monitoring stations, and many other services, all integrated in one single dashboard where the employees of the municipality can keep a big area of Milan under control. Just to conclude, collaboration: we are cooperating a lot with Inria in Paris, where there are some researchers working on this technology. Together we have the MES project, which is a sort of integration of different kinds of communication protocols under the same umbrella, so it's an extension of what we are explaining here. And these are two projects recently approved.
One is a European project named SLICES-PP, which involves almost all the European countries, and the idea is to create a federation of research infrastructures that will be used by researchers on one side and by industry on the other. The other one, the SoBigData project, is instead an Italian project related to SLICES: SoBigData brings the money to create the infrastructure in Italy, and this is why they are related. And inside SoBigData we are developing this virtual lab on pervasive intelligence in cyber-physical systems for future society, which will be the infrastructure in Italy that will be federated with the other ones in the other countries. So these are long-term projects: SLICES will be over in 2040, so it's a long-vision project, while SoBigData is a four-year project. Under SLICES there are several kinds of specific projects that will be activated year by year to create the big picture of SLICES. Okay, future work. This book has been approved, so we are writing it; we hope to complete it by next year, 2024. The title will be "Assembling Smart Cyber-Physical Systems: Heterogeneous, Diffuse, Green Technological Infrastructure for Cities and Industries", where we will try to put together, in a comprehensive way, all the different technologies that we could only introduce today in a condensed way. So this is something we are working on. So thank you very much, and if there are questions, of course, we will try to answer. Thank you. Okay, so thank you again. Thank you.