Hello everyone and welcome to my presentation. I'm Frédéric Desbiens, Program Manager for IoT and Edge Computing at the Eclipse Foundation. Today, it is my pleasure to introduce you to Zenoh, a next-generation protocol for IoT and edge computing. The structure of this presentation is quite simple. First, I will explain why you need Zenoh. There are plenty of protocols around, and Zenoh addresses very specific concerns, so we'll take the time to review what's on the market right now and why Zenoh is, on certain aspects, better. Then I will give you a run-through of the basic concepts of Zenoh. As you will see, the protocol is quite flexible in its approach. After that, we'll cover a step-by-step example: we'll take the Musée du Louvre in Paris and see how Zenoh could help you gather data from its various rooms. We'll review a few performance metrics; the Zenoh team ran some benchmarks last summer and wanted me to share those results with you. Finally, I will let you know how to get started, because I hope you will find Zenoh awesome, so I will explain how to set up your development environment for Linux and Zephyr. With that out of the way, let's get started. When you look at the IoT market, and typically when you start working on an IoT project, one of the choices you will have to make is which protocol, or which protocols plural, depending on what you have to work with, you will use. There are several mature and high-quality options on the market, and I put a few of them on the slide. Each of those protocols has its own approach. Some of them are request-response; some of them are publish-subscribe, which is typically highly useful in scenarios where you report sensor data from the edge all the way to the enterprise. And there are options even about the topology: some of them are client-server.
Some of them are peer-to-peer, and MQTT obviously is known for its brokered approach to communications. One exception here is OPC UA. I wrote "it's complicated" here because it can be client-server and it can be pub/sub, but overall, given the complexity of OPC UA, it's hard to come up with a simple answer to that particular question. Anyway, with OPC UA you certainly have options and plenty of device support. Each of the protocols on this slide is a genuinely useful option that you should consider when starting a new project. Which one you will pick depends on a variety of factors: for example, the kind of device you have in mind (constrained or not), the type of topology, power consumption, et cetera, and obviously the overall familiarity you may have with one or more of the options on the table. So all of them are great options, and all of them have open-source implementations at the Eclipse Foundation. In our case, for CoAP we've got Eclipse Californium. For DDS, we've got Eclipse Cyclone DDS, which is now the default middleware in ROS 2, the robot operating system. For Lightweight M2M, we've got Leshan, which provides both server and client implementations, and then we've got Wakaama, which is purely a client library written in C. For OPC UA, we've got Eclipse Milo. And in the MQTT space, we have the very popular MQTT broker Mosquitto and the Paho set of client libraries, in several languages that you can pick from. A new player, which doesn't even have a logo at this point, is Eclipse Amlen. Eclipse Amlen is another MQTT broker that we've got at the Eclipse Foundation. It's been contributed by IBM, and essentially it's a clusterable, highly available broker. Mosquitto is very lightweight, fantastic for embedded deployments and things like that, and Amlen is really the thing you will want to run in your data center or in the cloud. And in fact, this is what IBM is doing.
They are running several cloud services offering MQTT using the Amlen code base, and now they are contributing that to the Eclipse Foundation. Now, the whole point of this presentation is to tell you about Zenoh, and I want to emphasize that while Zenoh is certainly a great option, we've got support for every mainstream protocol you can think of at the Eclipse Foundation, and each and every one of those options is certainly a valid choice even in today's market. Digging a bit deeper, each of the protocols I'm mentioning has limitations, or you can criticize them for something or another. For example, in the case of CoAP, there's been some research done, and even though it's based on UDP, the transmission times are maybe a bit higher than you would like. And obviously the security model is built around DTLS, and DTLS has specific limitations when working with certificates and things like that. Then in the case of DDS: DDS is an open standard from the Object Management Group, and it's got its own foundation in the OMG galaxy, the DDS Foundation, and all of that is great, but real-world users will tell you that more often than not, DDS implementations, especially proprietary ones, are not compatible with each other. The other problem with DDS is that it's a fantastic protocol for local communication, but routing DDS traffic over the public internet is tricky. And you may need to do that, for example, if you have deployments in several factories all over the country or all over the world, and you want to bring that traffic together in a single, central location for monitoring. You need to use the public internet for that, and routing DDS traffic in a scenario like that can be very painful.
In the case of Lightweight M2M, one of the common criticisms is that it's tied to CoAP, and CoAP is tied to UDP. So obviously, if you don't like UDP or if you don't like CoAP, Lightweight M2M is not necessarily an option for you. In the case of OPC UA, once again, its main Achilles' heel is really its complexity. The spec is several thousand pages long; it's got six distinct transports and more than 200 facets. So just saying in a generic fashion, "hey, this product or this device is OPC UA compatible," doesn't tell you much. You have to dig a bit deeper and see if it implements the facets that you need, and given all of the possible combinations of transports and facets, sometimes interoperability can be a challenge. And then finally MQTT: MQTT is tied to TCP. Now there's MQTT-SN, which runs on top of UDP, so a different approach there, but MQTT-SN is literally a different protocol than MQTT, and that means that typically the mainstream MQTT brokers will not support MQTT-SN out of the box, so you have to think about a whole different infrastructure if you want to run both MQTT and MQTT-SN. This may change in future years, obviously, but each of the criticisms on this slide is probably valid, and these are things that the Zenoh design team wanted to address in one way or another. Now, if we take a step back from all the technical details and think about the journey of data: you capture it at the edge on constrained devices and you bring it all the way to the cloud, and here that's private cloud, hybrid cloud, public cloud, whatever.
When you look at this journey of data, obviously, the protocols that I told you about up to now don't concern themselves with the capture. That's literally where your own code on the constrained device plays a role: you have sensors on a microcontroller or something, and you gather the data. Then, in step two, you will use one of the protocols, whether MQTT, CoAP, whatever, in order to transmit the data from the edge to its final destination, or even to intermediary destinations. So most of the protocols I talked about up to now focus squarely on step number two, transmission. Now, when you think about the full journey of data, there are other steps after transmission. You want to compute over the data in some cases; you want to store it as-is in some cases, for telemetry or things like that, in a time-series database, whatever. Data is stored as-is or after computation, so those two steps are really linked to one another. And finally, once you have stored the data, at some point you will want to retrieve it. Maybe you just want to go through the data set to check some values, or maybe you need to retrieve it for further processing, to feed an AI model, for example. Anyway, what happened is that the Zenoh team saw an opportunity there, in the sense that existing protocols don't care about computation, storage, and retrieval. So they wanted to, yes, have a protocol which is very efficient at transmission, but which would also provide primitives for computation, storage, and retrieval. But before we get there, there's the whole edge computing concept. Zenoh is a fantastic option for IoT deployments, and at the same time, it's been designed from the ground up to address edge computing.
When we say edge computing, the first thing you should realize... well, first, the definition of edge computing is literally that you bring compute, storage, and networking capabilities as close to the source of the data as possible. That's my own little definition there. It's a bit simplistic, but if you challenge it, you will probably see that it holds up quite well. The problem when we say edge computing is that the edge is a fuzzy concept. Depending on your application, depending on what you're trying to do, depending on what role you play, whether you are the end user, the solution provider, or the telco, the literal definition of edge will vary. So if you look at the diagram on my slide, on the right you've got the constrained devices, the things that are in the field, the machines, and even user terminals, things like that. And obviously there is some kind of edge infrastructure over there. Then you have 5G, or you have LTE, or you have something that will provide communication support, and your communication provider will have multi-access edge computing infrastructure there. And then there's the core communications network of that provider, and eventually you get all the way to the cloud. So where's the edge in this diagram? The answer may vary. And this means that if you have a very rigid definition of the edge, and if you are using a protocol that has a very rigid vision of this, you won't be able to address the multitude of potential deployments, topologies, and use cases. So the fact that the edge is a fuzzy concept certainly complicates things a bit. The other thing is that obviously there's a lot of semantic confusion. Several years ago already, five, six, seven years ago, we started talking about fog computing, and now edge computing is rather the dominant term. And there are many, many interpretations of what edge means and what fog means.
Some people will distinguish between close edge, far edge, telco edge... I mean, there's a whole semantic mess currently, I would say. But whatever your definition of edge is, and whether you prefer fog to edge or whatever, Zenoh is a good solution, and we will see why in a short bit. Our vision in this debate at the Eclipse Foundation, and specifically in our Edge Native working group, is quite simple. IoT solutions, or pure edge computing solutions that have nothing to do with IoT, like gaming edge computing infrastructure or things like that, always leverage a continuum of compute, storage, and communication resources spanning literally from the very microcontroller that your sensors are connected to all the way to the cloud. And the cloud here is private, hybrid, public, whatever cloud deployment model works for your organization. The components for the various planes in your solution, whether it's the data plane, the management plane, or the control plane, can be spread over a variety of physical locations across the edge-to-cloud continuum. This means that a true edge computing platform, a protocol truly designed from the ground up for the edge, must have the flexibility to be deployed in all of those potential locations across the continuum as you design your solution. And this brings us to Zenoh, because Zenoh is literally the answer to all of the concerns I raised about currently popular protocols, and it really addresses the concerns of edge computing. So what is Zenoh? Zenoh is a protocol that unifies data in motion, data in use, data at rest, and computations. It has a pub/sub model, but it blends that with distributed queries. And it's got built-in support for geographically distributed storage and distributed computations, which matters because a growing concern in IoT and edge computing is data sovereignty.
You want the data not to leave a specific physical location, for example, or you don't want it to leave a specific country. And this is really important in highly regulated industries like healthcare, or in industries with, I would say, a high potential for disruption, like defense. You don't want enemy forces to be able to penetrate the communication infrastructure for your autonomous drones, for example. That's why the built-in support for geographically distributed storage in Zenoh is so innovative and so important. On the slide you see adopters of the technology. Chief among them is ADLINK; ADLINK are the people who created Zenoh and contributed it to the Eclipse Foundation. Among the others are early adopters and people who are working with the protocol on a variety of use cases, including my colleagues in the Eclipse OpenADx working group. They work on toolchains for autonomous driving, and Zenoh is certainly a very important part of what they are doing for automotive. So robotics, automotive, and IoT are pretty strong use cases for Zenoh at this point in time, and the other partners around it are certainly working on other use cases as well. So what is Zenoh exactly? Zenoh is the sum of two specific APIs, you could say, or two specific layers. There's the lower-level zenoh-net layer, and then the Zenoh layer on top, as you see on the diagram. What's really interesting is that those two are decoupled, in the sense that you can work just at the zenoh-net level, or you can use the higher-level Zenoh API, independently of each other. What zenoh-net implements is a networking layer capable of running above a data link, network, or transport layer, which essentially means, if I'm simplifying, that it can run over UDP, over TCP, or over QUIC.
So you really have the flexibility to pick whatever option works well for your specific use cases, or you can even run segments of your Zenoh infrastructure over different transports, and this will work quite well. zenoh-net also provides primitives for efficient pub/sub, distributed queries, and things like that, and supports fragmentation and ordered, reliable delivery. So you've got a configurable quality-of-service level, so to speak, that you can specify when you are working with the zenoh-net API. Then there's the higher-level Zenoh API. Zenoh is a high-level API for pub/sub and distributed queries. It supports data transcoding, and it implements geographically distributed storage and distributed computations and things like that. The two together work quite well. Zenoh is written in Rust, so from a security perspective, you won't suffer from buffer overflows or things like that. This, I think, makes the protocol even more attractive, and it's got a number of language bindings, as we will see later. Zenoh supports multiple interaction modes, and one of them is the peer-to-peer mode. You can establish a mesh or a full clique, where each of the peers is linked to every other. And what really makes this mode interesting is that the peers can do scouting, through multicast or just through gossip: the chatter between the nodes will help other nodes that are joining the mesh or the clique figure out whatever peers are available around them. This flexibility in scouting is really important, because obviously not every network will enable you to work with multicast, so the gossip option there is certainly very attractive. Another interesting thing is that while peers are important in Zenoh's architecture, you also have the option of having routers around, and those routers enable you to have clients that won't be peers.
In this case, typically, clients are smaller, very constrained devices that cannot implement the full feature set. So they implement just a subset, the client subset of Zenoh, and thus they can be more lightweight. And then obviously the routers are the key component that enables you to bridge various deployments over the public internet. The peers can talk to each other on the local network, along with the clients and all of that, and then you can have routers in several places in your infrastructure, and those routers can communicate with each other over the public internet in a secure way. A few other highlights about Zenoh. It's been designed from the ground up to minimize bandwidth usage, to optimize power consumption, and to optimize memory usage as well, with special attention paid to extremely constrained targets. As I mentioned, it supports pub/sub, but you can implement it in two different ways. There's the traditional push pub/sub, where essentially the subscribers receive the data in real time as it is made available, but then you have the option of pull pub/sub: a subscriber will wake from sleep, maybe to minimize power consumption, pull the data updates from the closest peer, and then go back to sleep. So both subscriber modes are possible. The resource keys, the keys to the data that you store on the Zenoh infrastructure, are represented as integers, and those are local to a session. This minimizes the traffic on the wire. As I mentioned, Zenoh supports peer-to-peer and routed communication, so it's very flexible: it can be exactly like DDS, it can be exactly like MQTT, or a mix of both, depending once again on your use case. Zenoh supports zero-copy, which certainly makes it very efficient. Reliable delivery and fragmentation: we covered that. What's really interesting about Zenoh is that there's really minimal overhead for user data when it is transmitted.
Typically it's around five bytes, which is quite on the low side as far as protocols are concerned. Some other protocols are a bit smaller, but given the functional richness of Zenoh, delivering all of that in five bytes is certainly quite an achievement. All right, so now let's have a little tour of the primitives inside Zenoh. First, how do you name data elements in Zenoh? Following the tradition of named data networking protocols, data is always identified by a sequence of byte arrays, and in this case that's called a key. The keys are always hierarchical. Here we have the example of a home: we see home/kitchen/sensors/temp, so there's a clear hierarchy there. And then you can see home/kitchen/sensors/co2, which is our carbon dioxide sensor there. And when you express an interest in data, when you make a query or when you subscribe to something, you can use wildcards or regular expressions. For example, if I want to retrieve the current values for all temperature sensors in my home, I can specify home/*/sensors/temp, and independently of the room, I will get the data for all of my temperature sensors. And by using two stars, like in the second example, I will retrieve the current values for all the CO2 sensors, whatever the rest of the key is. So you can use those expressions to be very flexible in the way you are making those queries. Then there are selectors to define the data sets when you are making queries and things like that. A selector is always composed of a key expression and, optionally, a predicate, a projection, and a set of properties. We've got some examples on the slide.
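To make the wildcard semantics concrete, here is a small pure-Python sketch of how such key expressions could be matched against concrete keys. This is only an illustration of the rules described above, not Zenoh's actual implementation; in particular, it simplifies `**` to "one or more segments", whereas the real protocol is more nuanced.

```python
import re

def key_expr_to_regex(expr: str) -> "re.Pattern":
    """Translate a Zenoh-style key expression into a regular expression.

    Simplification for illustration: '*' matches exactly one path segment,
    and '**' matches one or more segments.
    """
    parts = []
    for chunk in expr.split("/"):
        if chunk == "**":
            parts.append(".+")        # one or more segments, slashes allowed
        elif chunk == "*":
            parts.append("[^/]+")     # exactly one segment, no slash
        else:
            parts.append(re.escape(chunk))
    return re.compile("^" + "/".join(parts) + "$")

def matches(expr: str, key: str) -> bool:
    return key_expr_to_regex(expr).match(key) is not None

# All temperature sensors, whatever the room:
assert matches("home/*/sensors/temp", "home/kitchen/sensors/temp")
assert not matches("home/*/sensors/temp", "home/kitchen/sensors/co2")
# All CO2 sensors, however deep they sit in the hierarchy:
assert matches("home/**/co2", "home/kitchen/sensors/co2")
```

The point to take away is that a single expression can address a whole family of keys, which is what makes subscriptions and queries so flexible.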
For example, I could retrieve the current values for every temperature sensor, whatever room they are in, but only if the value is over 25 degrees Celsius, which would give me the list of rooms in my home where the temperature is too high. And then there's a second example about a connected car, on the second line. But really, the key expressions are used to route the query; that's what they are there for. The predicate and projection are interpreted only by the entity that executes the query. And as you will see, in some cases there can be several entities in the infrastructure that can answer a specific query. The fact that the key expression is used for routing, and the fact that the rest is interpreted by the target of the query, means that you've got a very efficient way to route queries, and at the same time every entity can interpret the query in its own way at the other end. Zenoh also provides various policies to control the consolidation of query results, and obviously to calculate quorums and things like that, depending on the number of nodes you have, et cetera. Certainly, in that perspective, Zenoh is very, very robust. Now let's have a closer look at the primitives in Zenoh, and we start with a set of entities. First, there's the concept of a resource. A resource in Zenoh is always a named data item, so that's a key-value combination. For example, home/kitchen/sensors/temp is my key, and the value currently would be 21.5 degrees Celsius, so it's a bit on the colder side in the kitchen right now. And if I have a humidity sensor, you'll see the key for that as well, and the value would be 67 percent, so that's a bit high; maybe I need a dehumidifier or something in my kitchen. Anyway, when I create publishers and subscribers, I will use literally the same kind of key expressions that I use in resources.
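The split between routing and interpretation can be sketched in a few lines of Python. The `?` separator and the `value>25` predicate syntax shown here are illustrative assumptions, as are the room names; the sketch only shows the division of labor: the key expression routes the query, while the predicate is evaluated by whoever answers it.

```python
def parse_selector(selector: str):
    """Split a selector into its key expression and an optional predicate.

    Simplified, illustrative syntax; projections and properties are left out.
    """
    key_expr, _, predicate = selector.partition("?")
    return key_expr, (predicate or None)

def hot_rooms(readings: dict, threshold: float = 25.0) -> dict:
    """Apply a 'value > threshold' predicate, the way the entity executing
    the query (not the router) would."""
    return {k: v for k, v in readings.items() if v > threshold}

key_expr, predicate = parse_selector("home/*/sensors/temp?value>25")
assert key_expr == "home/*/sensors/temp"   # used for routing
assert predicate == "value>25"             # interpreted only at the target

# Hypothetical current values held by a storage:
readings = {
    "home/kitchen/sensors/temp": 21.5,
    "home/livingroom/sensors/temp": 27.0,
}
assert hot_rooms(readings) == {"home/livingroom/sensors/temp": 27.0}
```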
A publisher is a stream of values for a key expression. It can be very specific, so we see home/kitchen/sensors/temp, or it can be a bit more generic: all of the sensors in the kitchen would be represented by the second example, which is home/kitchen/sensors/*. And the same goes for subscribers. So you can be very flexible in declaring both publishers and subscribers, depending on the context. Then there's the notion of a queryable. Essentially, once again, you provide a key expression for a specific query, so that would be the equivalent of a named query in a SQL database, for example. In this case, home/** will return a wealth of information about all of the sensors in my connected home. There are a few interesting operations defined in Zenoh. First is the scout operation. This will explicitly look for Zenoh entities on the network. The type of node that you are looking for can be specified through a bitmask, so you can look just for peers, just for routers, a mix of the few, et cetera. That's really flexible and really interesting: depending on the specific use case you have, you can tweak or shape the scouting in a way that makes sense. Obviously, the open and close primitives simply open and close zenoh-net sessions. And then declare and undeclare are the primitives that are used for resources, publishers, subscribers, and queryables. In the case of subscribers and queryables, when you declare them, you have to provide a callback that will be triggered when data is available or when a query needs to be answered. So certainly, Zenoh fully supports asynchronous programming, as it should in a world where, in IoT, you don't necessarily get data all the time. A few other interesting primitive operations in Zenoh: write will write the data for a key expression; pull is the one you use if you have a pull subscriber, to simply pull the data from the node.
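The declare-with-a-callback pattern described above can be modeled in a few lines of plain Python. This is not the real Zenoh API, just an in-process toy (`ToySession` and its methods are invented for illustration, and wildcard support is limited to single-segment `*`): you declare a subscriber with a key expression and a callback, and every matching write triggers the callback.

```python
class ToySession:
    """In-process model of the declare/write pattern. Illustrative only."""

    def __init__(self):
        self._subscribers = []  # list of (key_expr, callback)

    def declare_subscriber(self, key_expr, callback):
        self._subscribers.append((key_expr, callback))

    def write(self, key, value):
        # Deliver the sample to every subscriber whose expression matches.
        for expr, callback in self._subscribers:
            if self._match(expr, key):
                callback(key, value)

    @staticmethod
    def _match(expr, key):
        e, k = expr.split("/"), key.split("/")
        return len(e) == len(k) and all(a in ("*", b) for a, b in zip(e, k))

received = []
session = ToySession()
session.declare_subscriber("home/*/sensors/temp",
                           lambda k, v: received.append((k, v)))
session.write("home/kitchen/sensors/temp", 21.5)   # matches, callback fires
session.write("home/kitchen/sensors/hum", 67)      # no match, ignored
assert received == [("home/kitchen/sensors/temp", 21.5)]
```

The callback never blocks waiting for data, which is the asynchronous style the talk emphasizes.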
Then query enables you to run a distributed query, and there the target of the query and the consolidation of the results will depend on the policies that are set on the Zenoh infrastructure. OK, focusing now on the concept of storage, and this is really something innovative that Zenoh provides. Like publishers and subscribers, storages in Zenoh are defined by a selector, plus a backend. The selector is like any other key expression, so here we see that myhome/status/* could be a selector. And the backend for the selector can be a database engine. The currently supported options include the filesystem, InfluxDB, an in-memory hash map, RocksDB, and a variety of SQL databases. Support for those backends is implemented through plugins, so if your favorite option is missing, you can obviously write a plugin to add support for it. And those storage backends can be loaded dynamically at runtime, so you can add them as you go on a node, depending on your needs. The storage selector here is really interesting. It can obviously be bound to its own little standalone database, which could be created on demand, but it can also be bound to an existing database that you are using for other purposes in your infrastructure. So you have both options there, and that's certainly a good level of flexibility to have. Now let's talk about evals. An eval is, once again, defined by a selector, so that's the set of keys that will trigger that particular computation in the infrastructure, plus an implementation, which is the user code that you write in order to perform the computation. The implementation can be written in any language for which Zenoh has a language binding. So Rust and Go are certainly supported options; there's a Java API, C and C++, and Python. So plenty of variety there, and once again, if an option is missing, you have the possibility of implementing your own. And all of it is open source as well.
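A storage, as described above, is essentially a subscriber plus a queryable bound to a backend. The following toy class (invented for illustration, not the real Zenoh plugin API) models that with the in-memory hash-map backend: it stores the latest value for every key matching its selector and can then answer queries over the stored values. Wildcard support is again simplified to single-segment `*`.

```python
class ToyStorage:
    """Toy model of a Zenoh storage: subscriber + queryable over a
    selector, with an in-memory hash map as the backend. Illustrative only."""

    def __init__(self, selector):
        self.selector = selector
        self.data = {}  # in-memory backend, latest value per key

    def on_sample(self, key, value):
        # Subscriber side: keep samples whose key matches the selector.
        if self._match(self.selector, key):
            self.data[key] = value

    def query(self, key_expr):
        # Queryable side: answer queries over the stored values.
        return {k: v for k, v in self.data.items() if self._match(key_expr, k)}

    @staticmethod
    def _match(expr, key):
        e, k = expr.split("/"), key.split("/")
        return len(e) == len(k) and all(a in ("*", b) for a, b in zip(e, k))

storage = ToyStorage("home/*/sensors/*")
storage.on_sample("home/kitchen/sensors/temp", 21.5)
storage.on_sample("home/kitchen/sensors/hum", 67)
storage.on_sample("garage/door/state", "closed")  # outside the selector
assert storage.query("home/*/sensors/temp") == {"home/kitchen/sensors/temp": 21.5}
```

Swapping `self.data` for a real database engine is conceptually what the backend plugins do.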
This diagram really shows you how everything fits together in Zenoh. There you see both the zenoh-net layer and the higher-level Zenoh layer. So zenoh-net has the write operation, whereas you do a put in the higher-level Zenoh API; the notion of a queryable is obviously tied to the storages you see there, and you have the notions of subscriber and eval as well. You can use all of those things depending on your use case or the level of granularity that you want: you can go the lower-level route directly with zenoh-net, or you can use the higher-level Zenoh API, depending on what you're trying to achieve. Whatever you choose to do, both options are fully supported and fully documented by the Zenoh team. OK, so now let's focus on a specific example. This is a fictitious deployment in the Musée du Louvre in Paris, where essentially I have a hierarchy of sensors on various floors and in various rooms. For example, if you look at the top left of the graph, you will see that we've got publishers under the keys louvre/1/42/sensors/temp and louvre/2/42/sensors/temp. We start with louvre, which is our museum here; the first integer is the floor, so first floor, second floor; and the third value is the number of the room. So we have room 42 on floor one, room 42 on floor two, et cetera, and in those rooms we have a variety of sensors. The publishers in the top left corner are simply clients: they publish specific values under a specific Zenoh key. We also see that in our infrastructure we've got a number of storages. louvre/1/** is one of those storages, and louvre/2/** is another. So all of the data for the first floor goes to the storage in the top right corner, whereas all of the data for the floor-two rooms goes to another storage. And then we have a number of subscribers at the bottom of the screen. One of them is a pull subscriber, at the bottom left.
And it's listening just for louvre/2/42/sensors/temp, so a very specific value. The two other subscribers are push subscribers, and one of them is listening to every value for the temperature sensors. So you have this level of flexibility and variety in the protocol, and all of it is well supported. Now, if we go ahead, you will see that my storages define both queryables, so you can query the database, and subscribers, obviously. This is really important: if they were just subscribers, the data would simply be put in the database, and I wouldn't be able to query the database in order to retrieve the stored values. So I need to define queryables in order to query the database as well, and that's an important distinction. Now, what happens, let's say, when my first publisher publishes a value to the Zenoh network? Well, in this case, I get the temperature value for louvre/1/42, and this is publish-subscribe, so the publisher just sends out the value to the nearest node. Then the value propagates to the storage which is interested in everything that covers Louvre floor one, at the top right. And you see that my two push subscribers get the value in real time, since one of them is literally listening to that particular temperature sensor, and the other one is subscribed to every temperature sensor in the museum. And obviously, my pull subscriber doesn't get the value, because it cares only about the temperature sensor in room 42 on the second floor. Now, another example: what happens when the louvre/2/42 sensor publishes its value? Well, then, obviously, the Zenoh nodes will route that value to the second storage that we've got on the left, since it's recording everything for the second floor. And then the value will be sent to the node nearest to our pull subscriber, but currently that subscriber is sleeping. So the value is kept there, waiting, since this is a pull subscriber.
Then at some point, that particular application will wake up on the phone, and what happens is that it will do a pull on the node. The Zenoh infrastructure will then return the value to the subscriber, and the subscriber, since it's a pull subscriber, will go back to sleep. So all of those interaction modes are completely supported in Zenoh. Now let's look at a different interaction: what happens when I run a query? In this case, it's really interesting. My querier here, the phone that we see on the very right with the green text, runs the query louvre/*/42/sensors/temp. Essentially, this is a query for the data that has been stored for every temperature sensor in a room number 42, independently of the floor. You see that the query propagates, and then the data for the floor-one room comes from the first storage, whereas the data for the floor-two room comes from the other storage. All of that is consolidated and returned to our querier on the right. So those are the types of interactions that you get in Zenoh, and what really shines in this example is how flexible the protocol is. Note what's missing here: a router. All of the infrastructure that you see is really self-contained. But if I were managing all of the great museums in the world, then I would have a router in the Louvre and a router in each of my other museums, and all of those Zenoh infrastructures could communicate with each other over the public internet. Now, specifically for clients, there's something else we have in Zenoh, if you have a very constrained device and you want it to be as efficient as possible: the Zenoh team developed Zenoh-Pico, which is really targeted at constrained devices and simply offers an API for pure clients. So it's just in C, and just the client mode of Zenoh; there's no support in there for peer-to-peer communication.
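The pull-subscriber interaction just described, where the infrastructure holds a value while the device sleeps and hands it over on demand, can be sketched as a tiny buffering model. This is an invented toy for illustration, not the real Zenoh API.

```python
from collections import deque

class ToyPullSubscriber:
    """Toy model of a pull subscriber: samples arriving while the device
    sleeps are buffered by the nearest node, and handed over on pull().
    Illustrative sketch only."""

    def __init__(self):
        self._pending = deque()

    def on_sample(self, key, value):
        # Called by the network side; the device may be asleep.
        self._pending.append((key, value))

    def pull(self):
        # Called when the device wakes up; drains the buffer.
        out = list(self._pending)
        self._pending.clear()
        return out

sub = ToyPullSubscriber()
sub.on_sample("louvre/2/42/sensors/temp", 19.0)  # arrives while asleep
assert sub.pull() == [("louvre/2/42/sensors/temp", 19.0)]
assert sub.pull() == []                          # buffer drained, back to sleep
```

The energy saving comes from the device controlling when it does radio and CPU work, instead of being woken up by every sample.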
And this is what we will be using when I show you the Zephyr support in Zenoh a bit later in the talk. Also, a new thing that has been added recently to Zenoh is Zenoh-Flow. So essentially, when you have a data flow, you have sources that produce data, operators that compute on the data, and sinks that consume that computed data. So Zenoh-Flow is a programming framework that enables you to define data flows like that. And really, they can span from the cloud to the device. So Zenoh-Flow offers automated deployment and management for data flows like that. And it's a new feature in the Zenoh family, so it would be worthy of its own presentation. I'm just mentioning it in passing today, but please have a look if you are interested in complex data flows implemented on top of the Zenoh protocol. Now, let's talk a bit about performance. So the Zenoh team, back in July 2021, ran a number of tests on a single machine with a powerful processor and plenty of RAM, just to see the kind of throughput we could get out of the protocol. And you see that for payloads as substantial as 4 or even 8 kilobytes, that single machine was able to support millions of messages per second. And that's just on a single machine. So you can imagine that with an even beefier one, with multiple network interfaces and all of that, the numbers would be quite a bit higher. So the throughput in messages per second is quite astonishing there. And then if you think about the number of gigabytes per second transmitted, then once again, very, very good throughput. And certainly, you see that the difference between the Zenoh API and the lower-level zenoh-net API is not that great. So both of them are quite efficient and effective. And then in terms of latency, we are comparing Zenoh and zenoh-net to ping. Obviously, the latency is a bit higher when you have fewer messages.
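The source → operator → sink shape that Zenoh-Flow manages can be pictured with a plain Python pipeline. This is purely illustrative (Zenoh-Flow itself describes such graphs declaratively and deploys the stages across nodes; the readings and threshold here are made up):

```python
def source():
    """Source: produces data (pretend sensor readings)."""
    yield from (18.5, 19.0, 35.2)

def operator(readings):
    """Operator: computes on the data, here flagging readings above 30 degrees."""
    for t in readings:
        if t > 30.0:
            yield ("alert", t)

def sink(events):
    """Sink: consumes the computed data, here just collecting it."""
    return list(events)

# Wire the three stages together; Zenoh-Flow would place each stage
# wherever it fits best, from the device to the cloud.
result = sink(operator(source()))
print(result)  # [('alert', 35.2)]
```

The point of Zenoh-Flow is that you describe this graph once, and the framework takes care of deploying and connecting the stages over Zenoh.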
But as you can see, starting around 1,000 messages per second, the latency is quite low, 60 microseconds, which is very, very small. And that holds up quite well. And obviously, ping will be even lower, since it doesn't actually transmit useful data. But the fact that we are so close shows how efficient and low-latency Zenoh is. And in this case, yeah, I'm just providing the graphs. But if you want to learn more about the testing environment, the kinds of tests that the Zenoh team ran, and all of that, please get in touch with them. They will provide you the full details. OK, now the good stuff. So I've convinced you that Zenoh is interesting; how do we get started on Linux? In this case, it's fairly simple. If you are using Debian or one of its derivatives, then you simply add the proper repo to your sources, do an apt update, and install the Zenoh library. And you can start a local router by running the zenohd command. If you prefer to run the router in a container, then you have the Docker command to do that right there. Obviously, please tweak your ports according to your local environment. And let's say you want to code in Python, like in the examples I will be showing in this talk. Then you do pip install eclipse-zenoh, and you will get everything you need to get started. OK, so that said, once you have this environment running, you can use the REST API of the router to test your environment. So there are a number of useful initial tests you can do: retrieve info on the local router, list the current backends, list the current storages. So just cut and paste that in your terminal, and you can kick the tires on your local install to ensure that everything is running, which is something that I will do right away. So let me switch to the command line. So in this case, I am running in the Windows Subsystem for Linux on my Windows machine, because it was more convenient for me to use that machine for recording purposes.
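Condensed, the Linux setup and smoke-test steps just described look roughly like this. The repository URL, image name, and REST paths follow the Zenoh project's published instructions of that period, so double-check them against the current documentation before use:

```shell
# Add the Eclipse Zenoh Debian repository and install the router
echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" \
  | sudo tee /etc/apt/sources.list.d/zenoh.list
sudo apt update && sudo apt install zenoh

# Start a local router...
zenohd

# ...or run it in a container instead (tweak the ports to your environment)
docker run --init -p 7447:7447/tcp -p 8000:8000/tcp eclipse/zenoh

# Python API
pip install eclipse-zenoh

# Kick the tires through the router's REST plugin:
curl http://localhost:8000/@/router/local                 # info on the local router
```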
I'm running the long-term support Ubuntu version, and here I just start the Zenoh daemon. There's no output; this is expected. And I'm just running a few of my curl queries there to show what's happening. So I run this query to get the status of my local router. I get the version of the code and a few other properties about it. So everything is running correctly. And now, if I retrieve the backends for it, you will see that, since I didn't pass any parameters to the router, it's running the default in-memory backend. So everything will be a hash map in memory, a key-value hash map, to store the information. And this means, in turn, that there are no storages defined, so the list of storages is empty. Now, there is a very extensive set of samples that have been provided by the Zenoh team. So let's say I start here a little subscriber. So let me get there. So I start this Python program to subscribe to values, and you see the output of the program in there. Obviously, that piece of code is supposed to print the data received. I don't have any data here because I'm not publishing anything, so let me get to this second window and publish something. And you will see here we start publishing some sample data. And in the other window, it's been received. And obviously, I have a single publisher there and a single subscriber. But if I start another subscriber here, then obviously the data will be displayed here as well. And you see that it is starting at 28, 29, and so on. So obviously, we missed the first messages, but it's getting everything that the other subscriber has been getting: 42, 43, 44. So we are in sync there. So this is just me running the samples that are provided with the Python API; each API has its own set of samples that you can compile and run. And obviously, it works quite well. OK, so let me now get back to my slides.
OK, so I showed the API, I showed the samples. The samples, by the way, are completely open source as well, and they ship with each and every API. So if you want to have a closer look, open your favorite editor and have a look at the sample code. OK, so in this case, I'm showing the zenoh-net samples for the Python API in my editor. And this editor, by the way, is Eclipse Theia Blueprint. So Eclipse Theia is an editor based on VS Code. But if you don't like VS Code because it's a single-vendor open source project, and you don't like the fact that the marketplace is owned by Microsoft, Eclipse Theia is the solution for you. Essentially, it is managed in a vendor-neutral fashion, and the marketplace is open to all and not controlled by any single entity. So really, please have a look at Theia Blueprint if you're interested in that. And so every implementation of the zenoh-net API has got its full set of samples. OK, so getting back to the slides, how do you get started on Zephyr? In this case, you download the DEB, RPM, or TGZ from the Eclipse servers, or you can build it from source, and I put links to both of those options on my slides. Up to now, Zephyr support is nascent in zenoh-pico. So essentially, the team has tested it successfully so far on the Reel board and on a specific Nucleo board, but you can expect this list to be expanded over time. And the team recommends you work with PlatformIO for your Zephyr applications with Zenoh. So the typical structure for a PlatformIO project is that you have a lib folder, where you will put your various libraries, and then in src you've got your main, and you can add your C files over there. So to get started, you just create a project directory, and you run platformio init with -b and the identifier for the board you're targeting. If you're using a different board, you can list the supported boards by running the platformio boards command. And then you do a platformio run.
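The PlatformIO steps condense to a few commands. The board identifier `reel_board` is just an example here, and exact command spellings may vary between PlatformIO versions, so treat this as a sketch:

```shell
# Create and initialize a PlatformIO project for your target board
# (run `platformio boards zephyr` to list the Zephyr-capable targets)
mkdir zenoh-pico-demo && cd zenoh-pico-demo
platformio init -b reel_board

# Build, then build-and-flash once your sources are in place
platformio run
platformio run -t upload
```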
And to be fully ready to work with zenoh-pico, you then need to copy a few files to the correct locations. And then you add, obviously, your code to the main file, plus whatever other files your application requires. And then you do platformio run, and platformio run -t upload in order to flash the board and run the code on it. OK, so I included a few code snippets, and they are for illustration purposes only. Please refer to the actual versions on GitHub, or read them using your favorite editor. In this case, I removed the includes and definitions and things like that, just so that the code fits on the slide. So this is the basic Python code for subscribing. At the top, we've got our callback, and then we simply initiate the logging, open a Zenoh session, specify the reliability mode and the fact that we are a push subscriber, and then simply declare the subscriber, and we're done. So as long as this program is running, and this is literally the sample I ran a few minutes ago, you will get the data. Then this is the same in zenoh-pico, so in C. As you can see, well, a bit less readable, since this is C, but quite simple there. You will just need to add some kind of loop where I put that comment, since, essentially, if you run it as is without a loop, then you will just declare a subscriber and then immediately undeclare it and close the session. So in the official sample, they are just waiting for a keyboard input there. And then the data will be displayed as it is received. And this is the basic code for publishing. So once again, quite simple in Python. And the same in zenoh-pico. So once again, nothing too complicated there. And you will notice how consistent the API is from one implementation to another. So I used the full-fat Zenoh Python API, and I used, obviously, zenoh-pico for C in the case of Zephyr. And everything is consistent and well documented. So kudos to the team.
OK, all this work on Zenoh and many other projects is happening at the Eclipse Foundation, in our Edge Native Working Group. And the focus of the Edge Native Working Group is really to foster the evolution of Zenoh and many other projects. We are code first, and we care about EdgeOps. And EdgeOps is simply the recognition that if you do pure DevOps at the edge, you will run into trouble, because you don't patch a smart road infrastructure in the middle of the day when everyone is on the road. So you need to tweak DevOps, and EdgeOps is our vision for that at the Edge Native Working Group of the Eclipse Foundation. Obviously, everything that we do is embedded in a wider IoT architecture where edge computing is in the middle. And as you can see, we have projects, 50-plus of them, for IoT and edge computing at the Eclipse Foundation, plus a slew of development tools. The traditional Eclipse IDE is still going strong, but we've got Eclipse Che and Eclipse Theia, which are browser-based, as well. So lots of tools to pick from in order to build IoT and edge computing solutions. So I put links to a few resources in there. Please feel free to visit the Gitter channel for the Zenoh team to interact directly with the developers. And please visit the website for the Edge Native Working Group if you are intrigued by the prospect of this working group and want to learn more about the projects, or even join the community. So thank you so much for watching my presentation. I'm Frédéric Desbiens from the Eclipse Foundation. I can be found on Twitter as BlueberryCoder. It's been a pleasure to introduce you to Zenoh today, and I hope you will try it. So thank you for watching, and see you around.