Hi, I'm Frédéric Desbiens, Program Manager for IoT and Edge Computing at the Eclipse Foundation. Welcome to Game of Protocols, the presentation where protocols will fight for your hearts and minds. Alright, so what do we have on our plate today? Really simply three things. First, I will explain why you need to pick a protocol. This seems trivial, but some people think there's a one-size-fits-all approach in the market, and I want to show that that's really not the case. Then we'll review a few established players in the market: options that are mature, that you can use right away, and that are widely supported. And then we'll have a glimpse at a few emerging challengers: protocols that are maybe a bit less mature, but show a great deal of promise for the future. Alright, so let's get this show started. So why exactly do you need to choose? I think the best way to illustrate this is with quotes from real developers, real people that I met at conferences or interacted with over the internet. One common thing I hear is: "Well, I don't need to choose. EdgeX Foundry" — or whatever product you want to put in there, fill in the blank — "supports multiple protocols, so I don't need to pick one." And okay, maybe at the technical level that's true, but the problem with that approach is that you're not taking your specific use case into account. You're just picking random software stacks and components for your project, and you're not getting the best fit for your specific use case. The second objection would be something along the lines of: "Oh, REST is simple, I can do everything with it, it's widely supported, so why would I need anything else?" The problem is that yes, REST is widely supported, but it's not necessarily tailored for the world of edge computing or even IoT.
If you have a tiny, tiny MCU and you need to make the battery last for five years or more, then REST is probably not the best option, even if it's simple to use and widespread in its adoption. And finally, another thing that I hear a lot is: "Well, plain TCP is good enough for me. I can work with that, it's literally everywhere, and I can use straight UDP if TCP is a no-go for my project." And okay, if you want to go that route, it's up to you, but the problem with that approach is that you're operating at a very low level, and this has consequences. So the reason why you need the right protocol for the right job really comes down to three things. First, your choice of protocol will impact many, many things about your project. Performance and throughput, for one: depending on the protocol you use, the specific amount of data you need to shuffle, and the conditions of the network, some choices are better than others. Then there's battery life; in IoT, that's a fundamental concern, the number one constraint you need to think about. Developer productivity is another thing. Maybe plain TCP, or plain UDP, is an option that you will get nearly everywhere, but then you need to parse all of those packets on your own — and what about security, and what about encryption? You're using the lowest common denominator if you do that, and you won't be productive. And then there's the whole question of security. Implementing security on your own is one of the biggest mistakes you could make. Maybe you're a better developer than me, fine, but writing security code is really something that not everyone can do. So why not use mature solutions that integrate it for you?
Okay, then when we think about REST: as I mentioned, HTTP as a protocol is fine, and REST as an approach is fine, but it lacks many of the features that IoT-specific protocols have — quality of service, reliability, and many, many others. You can assume that most of the features of the protocols I'm talking about today are features HTTP won't have. And then there's the whole question, once again, of productivity: low-level protocols will slow you down. I need to re-emphasize that point. Especially if you're working in C — say, writing a Zephyr application — do you really want to write all of those lines of code to allocate memory, deallocate memory, create buffers, parse payloads, and that kind of stuff? That's not solving the problem; you're just writing utility code or glue code. And we don't want you to do that. We want you to focus on your specific use case, which means that using a higher-level protocol is a good option. All right. So now let's review our main contenders for today's Game of Protocols: CoAP, DDS, Lightweight M2M, and MQTT. We'll review them in alphabetical order, although on my slide here they are not, and this is to emphasize that I have no favorite. It's really about picking the right protocol for the job. First, as a reminder — and I know that many of you at this conference really care about this — the Zephyr RTOS has built-in support for three of my four main contenders. I put links here, so if you download the presentation, you can access the samples for CoAP, Lightweight M2M, and MQTT directly. And as you can see on the diagram, this is really an integral part of the Zephyr RTOS.
And this really distinguishes Zephyr from competing options in the RTOS space, in the sense that you have all of that as open source, as part of the main tree. Those clients for the various protocols are high quality and tested alongside the OS, which is not something you will find in some competing options. Okay. Now, going forward: what is the Constrained Application Protocol, or CoAP? CoAP is a protocol that has been engineered from the ground up to target constrained devices. It's managed by the Internet Engineering Task Force (IETF) and documented in RFC 7252, and then there's a bunch of other RFCs that provide additional features on top of that. The obsession of the designers of CoAP was to have minimal overhead. CoAP can run on top of most devices that can support UDP, or it will use protocols that are roughly equivalent to UDP in terms of resource usage. The thing with CoAP is that it's really meant to run over the Internet. So you can use it to integrate devices that are on the same network, but you can also integrate those devices with general nodes over the Internet, or even join devices that live on distant constrained networks, with those networks bridged by the public Internet. All of those deployment models are possible with CoAP. CoAP has intentionally been designed to be really, really close to HTTP. This is a request and response protocol, and it follows the REST model: the GET, PUT, POST, and DELETE verbs are used, and it relies on URIs, response codes, and MIME types. So if you're familiar with HTTP, CoAP is fairly easy to pick up. And one thing that has been added is strong support for multicast, because more often than not, you will want to broadcast a bunch of data to a number of nodes. All right. So at a deeper level, what are CoAP's features?
Everything that CoAP does is asynchronous: you send a message and execution continues right away. Low overhead, because it's on top of UDP, is another big concern, and even in the design of the protocol, the fixed header is only four bytes long. That's fairly compact when you compare it to many other protocols used in the data center. All of the payloads and the messages themselves are really simple to parse, because they closely mirror what you will find in HTTP. And there are many HTTP-like techniques that you can use with CoAP: URIs, content type support, proxy caching — all of that is accessible when you work with CoAP. So very briefly, let's have a look at the stack for CoAP. An interesting thing is that CoAP is well supported over Bluetooth and 6LoWPAN, as is Lightweight M2M, as we will see a bit later. CoAP also relies on UDP and on something called DTLS — you can see it as a specialized version of TLS made to go along with UDP, a bit simpler and a bit lighter on resources. So essentially the whole point of CoAP, as you can see in the stack, is to send messages over this request and response pattern. All right. If you have to pick a CoAP stack: on device, if you work with Zephyr, you have built-in support, and there are many small libraries written in C that you can leverage. But on the server side, especially if you need to write a fairly ambitious application that leverages CoAP, the most popular option currently in the market, at least from my perspective, is Eclipse Californium. Eclipse Californium has a whole bunch of additional features over the simple CoAP implementations you will find elsewhere. For example, it implements Observe and Notify.
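To make that four-byte figure concrete, here is a minimal sketch in Python that packs a CoAP fixed header by hand, following the layout from RFC 7252 (version, type, token length, code, message ID). This is purely an illustration of the wire format, not a usable CoAP implementation.

```python
import struct

# CoAP fixed header (RFC 7252): 1 byte Ver/Type/TKL, 1 byte Code, 2 bytes Message ID.
COAP_VERSION = 1
TYPE_CONFIRMABLE = 0   # CON message: expects an acknowledgement
GET = 0x01             # code 0.01 = GET, mirroring the HTTP verb

def coap_header(msg_type: int, code: int, message_id: int, token_len: int = 0) -> bytes:
    """Pack the fixed CoAP header: it is exactly four bytes long."""
    first = (COAP_VERSION << 6) | (msg_type << 4) | token_len
    return struct.pack("!BBH", first, code, message_id)

header = coap_header(TYPE_CONFIRMABLE, GET, message_id=0x1234)
print(len(header))  # 4 bytes of overhead, versus hundreds of bytes of typical HTTP headers
```

Options and payload follow this header on the wire, but even a complete GET request routinely fits in a handful of bytes.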
By default, CoAP is request and response, but you can use it in a kind of publish and subscribe model through the Observe and Notify features of CoAP. Californium also supports block-wise transfers. It implements the latest version of DTLS. It even has an experimental implementation of CoAP over TCP, so if you want to benefit from some of the advantages of TCP, you will be able to leverage them. And there's something called OSCORE, a security model for RESTful environments; Californium has experimental support for that standard as well. Californium will also enable you to bridge CoAP and HTTP connections through cross-proxies. It comes with a scalable web resource framework, and you even have a kind of runtime container for JavaScript mashups. And you can leverage an OSGi wrapper if you are working on a managed server that supports that. So really, Californium is a very major project that we've got at the Eclipse Foundation. It's certainly well maintained and has tons of adopters in the market, and I invite you to have a look if you are interested in working with CoAP. Californium is both a client and a server implementation, so if you need to access CoAP resources from a Java application, you can also leverage Californium. OK, our next contender for your hearts and minds is the Data Distribution Service, or DDS. DDS is a different animal from CoAP. It's a publish and subscribe protocol optimized for machine-to-machine communication, and it's really focused on the decoupling of applications. The really fun thing about DDS is that any node can be a publisher, a subscriber, or both simultaneously. And the reason is that there's no central broker or anything like that in DDS. DDS is a fabric or mesh, and the nodes communicate with each other over that mesh.
DDS is already widely adopted in specific verticals. You see it a lot in aerospace, defense, air traffic control, and robotics. DDS is a specification of the DDS Foundation, which is one of the foundations related to the Object Management Group, or OMG. And OMG, if you are less familiar with them, are the godfathers of UML and many other interesting technologies. So there's a formal spec for DDS, and you can certify your projects and products against it. OK, so what are the main features of DDS? DDS at its core has this concept of a data space that enables you to decouple applications from one another. This decoupling is spatial, so the nodes can be on any network, anywhere, and temporal, in the sense that the communication does not necessarily happen in real time. As I mentioned, DDS is completely decentralized, which means there's no single point of failure. Maybe that's an interesting feature for your use case. And DDS supports quality of service policies, and those policies express specific constraints about the timeliness and availability of the data. That's really something nice about it. On top of that, DDS has mechanisms for built-in dynamic discovery: essentially, out of the box, DDS nodes can discover other nodes around them and establish communication with them. So this is our protocol stack for DDS, and in this case there's an interesting distinction. DDS can run out of the box over TCP or UDP, and it has a wire protocol for interoperability — quite a mouthful — called DDSI-RTPS. On top of that, you have DCPS, which is literally the publish and subscribe model of DDS. Those two things are documented in distinct specifications. OK, now, if you want an implementation of DDS, one of the leading ones is certainly Eclipse Cyclone DDS.
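To illustrate the data-space idea described above, here is a toy in-memory model in Python. It is emphatically not DDS — there is no network, no DDSI-RTPS, no real QoS engine — but it shows the two kinds of decoupling: publishers and subscribers only share a topic name, and a late-joining subscriber can still receive the last published sample, loosely imitating what a durability QoS policy gives you in real DDS.

```python
from collections import defaultdict
from typing import Any, Callable

class DataSpace:
    """Toy model of the DDS global data space: publishers and subscribers
    never reference each other directly, only a shared topic name."""
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)
        self._last_value: dict = {}  # crude stand-in for a durability QoS policy

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)
        if topic in self._last_value:          # late joiners still see the last sample
            callback(self._last_value[topic])

    def publish(self, topic: str, sample: Any) -> None:
        self._last_value[topic] = sample
        for cb in self._subscribers[topic]:
            cb(sample)

space = DataSpace()
received = []
space.publish("sensors/temperature", 21.5)        # published before anyone subscribes
space.subscribe("sensors/temperature", received.append)
space.publish("sensors/temperature", 22.0)
print(received)  # [21.5, 22.0] -- temporal decoupling in action
```

The topic name "sensors/temperature" is made up for the example; in real DDS you would also define a typed topic and let discovery match readers and writers automatically.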
Cyclone DDS is a pure C implementation of the protocol, and it's got a really tiny set of runtime dependencies, so typically it will compile very well in a variety of environments. It's really compact, so if you strip a few features, it will fit in as little as about half a megabyte of memory at runtime. Out of the box it supports multiple platforms — it's tested on Linux, macOS, and Windows — but many people have run it in a variety of other environments. The great thing about Cyclone DDS is that it's now a tier one middleware in ROS 2, the Robot Operating System. ROS 2, very popular in robotics as the name suggests, has Cyclone DDS as a first-class citizen. So it's certainly an interesting option if you are invested in that particular ecosystem. OK, continuing now with contender number three: Lightweight M2M. So what is it exactly? Lightweight M2M has a really tight focus on lightweight and low-power devices. Lightweight M2M requires CoAP — it runs on top of CoAP — but as you will see, it brings a whole lot of value-added features on top of it. One interesting twist of Lightweight M2M is that it defines an extensible resource and data model. You can work out of the box with what's there, but you can also extend it in a standard way. Let's say you produce a device that supports Lightweight M2M, and people don't have the documentation for it, or are not familiar with the specific resources and data you are exposing: there are standard ways to discover all of that using Lightweight M2M and then to leverage it. This extensibility makes it really attractive. The specification for Lightweight M2M is owned by a nonprofit called OMA SpecWorks, and you've got the link to their website on the slide. OK, so what's so interesting about Lightweight M2M?
Lightweight M2M, as you will see, really focuses on the management of devices. It offers you, for example, bootstrapping mechanisms. Say you produce devices and ship them from your factory with specific keys and encryption certificates. Your customers can then deploy a bootstrap server, and your devices, out of the box, will connect to that bootstrap server. After that, they will be able to retrieve their production certificates and encryption parameters from the bootstrap server. This means the devices you ship from your factory don't need to ship with certificates that belong to the certificate authority of your customer; you just bundle your own, and the bootstrap server makes the substitution in your customer's environment. This is a really powerful mechanism in Lightweight M2M. Lightweight M2M will take care of device configuration, fault management, control, and reporting, but one particularly interesting thing it does is support firmware updates. With Lightweight M2M, a device connects to a server, and the server can start the process of updating the firmware. Typically, this involves passing a specific URL to the device so that it knows where to download the new firmware from. The device will do this, automatically deploy the new firmware, reboot, and so on. All of that is really well defined in Lightweight M2M, and it's an interesting option if this is the kind of scenario you want to implement in your device. The Lightweight M2M protocol stack is interesting in the sense that it supports TCP and UDP, with or without TLS and DTLS, but it will also run over SMS. So you can send SMS messages over a cellular network, and the Lightweight M2M stack will be able to understand that as well.
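Going back to the firmware update scenario for a moment, here is a sketch of the client-side state machine in Python. The numeric states follow the Lightweight M2M Firmware Update object (the server writes a Package URI, then executes an Update resource), but treat this as an illustration rather than a spec-accurate client — the transport, integrity checks, and reboot step are elided, and the URI is made up.

```python
# States reported by the Firmware Update object's "State" resource:
IDLE, DOWNLOADING, DOWNLOADED, UPDATING = 0, 1, 2, 3

class FirmwareUpdate:
    """Sketch of a Lightweight M2M firmware update state machine on the device."""
    def __init__(self) -> None:
        self.state = IDLE
        self.package_uri = None

    def write_package_uri(self, uri: str) -> None:
        # The server writes the Package URI resource; the client starts downloading.
        self.package_uri = uri
        self.state = DOWNLOADING
        self._download()

    def _download(self) -> None:
        # ... here a real client fetches the image, e.g. via CoAP block-wise transfer ...
        self.state = DOWNLOADED

    def execute_update(self) -> None:
        # The server executes the Update resource; only legal once an image is downloaded.
        if self.state != DOWNLOADED:
            raise RuntimeError("no firmware image downloaded yet")
        self.state = UPDATING
        # ... apply the image, reboot, then report the result and return to IDLE ...

dev = FirmwareUpdate()
dev.write_package_uri("coap://updates.example.com/fw/v2.bin")  # hypothetical URI
dev.execute_update()
print(dev.state)  # 3 (UPDATING)
```

The point of modelling it this way is that the server only ever manipulates standard resources; the device drives itself through the states and reports progress back.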
Obviously, the diagram here doesn't repeat everything about CoAP, but you can see that Lightweight M2M runs on top of CoAP, and then you have the specific objects in the extensible data model that are leveraged by your application. If you want to work with Lightweight M2M, there are projects at the Eclipse Foundation for that: Eclipse Leshan and Eclipse Wakaama. Leshan is not a complete server, but rather a library for implementing Lightweight M2M clients and servers. It's very mature and well supported. It's very simple: it doesn't use any frameworks of its own, and it has really few dependencies. It provides you a basic web UI to discover and test devices. And you just build the code with Maven install, so it's very straightforward to work with. It leverages Eclipse Californium under the hood — so we are drinking our own champagne here, for sure. Then on the device side, obviously, there is built-in support for Lightweight M2M in Zephyr. But if you are in another type of environment, Eclipse Wakaama is a C client implementation, fairly portable, I would say, that many people have adopted in the market. And our last contender, last but not least, is MQTT. Earlier in its life, MQTT was an acronym: it stood for MQ Telemetry Transport. The MQ was a reference from the time it came from IBM — maybe you've heard about MQSeries or WebSphere MQ from IBM — and MQ stands for message queuing. Anyway, nowadays, MQTT is just the name; it's not an acronym anymore. It's a protocol that targets constrained devices, but specifically constrained devices that operate over low-bandwidth networks. It runs over TCP, but there's a flavor of MQTT, so to speak, called MQTT-SN — SN stands for sensor network — that runs over UDP. But MQTT-SN is less mature, let's say, than plain MQTT.
The focus of MQTT, when it was created at IBM with the help of Eurotech — one of our members in Eclipse IoT — was really SCADA types of applications: industrial applications that need to control machines, or get data out of machines, and that kind of stuff. It uses a publish and subscribe architecture and is not meant for durable and persistent messages. So you don't run your whole business — say, banking transactions — over MQTT, but you shuffle data that eventually ends up in those business applications. MQTT is a specification owned by OASIS. All right, so what are the features of MQTT? MQTT defines three levels of quality of service. At most once (QoS 0): the message is delivered at most one time, with no guarantee it arrives at all. At least once (QoS 1): you have the guarantee that the message will be delivered, but duplicate copies are possible. And exactly once (QoS 2): you have a guarantee that each message will be delivered, and that only a single copy will be delivered. Obviously, as you go from 0 to 1 to 2, the throughput falls, because the servers and clients exchange more messages in order to implement the quality of service. A nice thing about MQTT is that it implements persistent sessions. If you don't use persistent sessions, every time a client connects, it must specify: I want to subscribe to this topic, and that topic, and that topic. Persistent sessions mean that a client is automatically reconnected to the same topics every time. This simplifies your code. MQTT also has a feature called retained messages. Normally, by default, you send a message, whoever is online as a subscriber gets the message at that time, and then the message is gone.
But MQTT has a feature where the last message sent on a topic can be retained, so that subscribers that were offline will get that message first thing when they establish their connection. There's also the notion in MQTT of the last will and testament. If a client is disconnected violently from the broker, the broker will publish a predefined last-will message on its behalf, announcing to the rest of the network that that specific device is gone. And then there's the notion of keep-alive in MQTT: the protocol keeps the connection open for you by exchanging periodic control packets. So this is the MQTT protocol stack, and here it's fairly standard: MQTT, the plain version so to speak, runs over TCP, and you've got MQTT-SN on top of UDP. OK, so now we have our four contenders — which one should you pick? Well, before I get to that — I got ahead of myself there — we've got implementations for MQTT, obviously, at the Eclipse Foundation. Eclipse Paho is a collection of MQTT clients in several languages, targeting multiple platforms. All of those clients have a wide assortment of features: automatic reconnect, offline buffering, et cetera, even high availability in some of them. And they can be either blocking or non-blocking in some cases, so depending on your style of programming and the environment you are targeting, there is certainly something for you. And then we've got the very popular Mosquitto broker. It's already part of most Linux distributions. Mosquitto is written in C and is certainly a very popular option as far as MQTT brokers are concerned. Okay, now, before we get to the choice, I must also cover very briefly a few emerging challengers, and here we will cover specifically Sparkplug and Zenoh.
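Before we move on, the retained-message and last-will behaviours described above can be modelled in a few lines of Python. This is a toy in-memory stand-in for a broker — not real MQTT, no network, no QoS — but it shows why the two features matter: a late subscriber still learns the device's state, and an ungraceful disconnect is announced automatically.

```python
from collections import defaultdict

class ToyBroker:
    """In-memory model of two MQTT broker behaviours: retained messages
    and the last will and testament. Not a real broker."""
    def __init__(self) -> None:
        self._subs = defaultdict(list)
        self._retained = {}
        self._wills = {}

    def connect(self, client_id, will_topic=None, will_payload=None):
        if will_topic is not None:
            self._wills[client_id] = (will_topic, will_payload)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)
        if topic in self._retained:      # new subscriber gets the retained message first
            callback(self._retained[topic])

    def publish(self, topic, payload, retain=False):
        if retain:
            self._retained[topic] = payload
        for cb in self._subs[topic]:
            cb(payload)

    def drop_connection(self, client_id):
        # Ungraceful disconnect: the broker publishes the stored last will.
        if client_id in self._wills:
            topic, payload = self._wills.pop(client_id)
            self.publish(topic, payload)

broker = ToyBroker()
broker.connect("sensor-1", will_topic="status/sensor-1", will_payload="offline")
broker.publish("status/sensor-1", "online", retain=True)

seen = []
broker.subscribe("status/sensor-1", seen.append)  # late joiner still sees "online"
broker.drop_connection("sensor-1")                # network failure -> last will fires
print(seen)  # ['online', 'offline']
```

The client ID and topic names are invented for the example; a real broker such as Mosquitto implements the same semantics as defined in the MQTT specification.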
So one of the great things about MQTT is that it doesn't say anything about the payloads. That's fantastic, because you've got flexibility. One of the bad things about MQTT is that it doesn't say anything about the payloads. When you deploy an MQTT-compatible solution — let's say you buy robots from one supplier, a software stack from another, and another machine from a third supplier — they all speak MQTT, but you will need, at a minimum, to configure them, to point them to the right topics, to parse the payloads, or to massage the payloads into a specific format, et cetera. And all of that is error-prone, time-consuming, and frustrating. So a few bright people at Cirrus Link and Inductive Automation, two Eclipse members, came together with other community members to start what we call the Eclipse Sparkplug working group. Sparkplug is both a specification at the Eclipse Foundation and an implementation, in the Eclipse Tahu project. What Sparkplug is about is essentially three things. First, it defines standard payloads on top of MQTT. Then it defines standard topic structures, and finally, stateful session management. All of that makes it possible, out of the box, for devices that support Sparkplug to speak to each other with a minimum of configuration. And this is really a game changer. As I mentioned, Sparkplug is both a spec and an implementation, and both the spec and the implementation in Eclipse Tahu are completely open source. This really solves a big problem in the market. MQTT is fantastic, it's here to stay, but by running Sparkplug on top of MQTT, you can really get to the next level. And then we have an intriguing new kid on the block, which is called Eclipse Zenoh. Zenoh is a protocol that comes from our efforts in edge computing.
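Going back to Sparkplug's standard topic structure for a moment: a Sparkplug B topic always follows the pattern `spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]`. Here is a small Python sketch that composes such topics; the group, node, and device names are invented for the example.

```python
# Sparkplug B topic namespace: spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
NAMESPACE = "spBv1.0"
MESSAGE_TYPES = {"NBIRTH", "NDEATH", "DBIRTH", "DDEATH", "NDATA", "DDATA", "NCMD", "DCMD"}

def sparkplug_topic(group_id: str, message_type: str, edge_node_id: str,
                    device_id: str = None) -> str:
    """Compose a Sparkplug B topic; device_id is only present for device-level messages."""
    if message_type not in MESSAGE_TYPES:
        raise ValueError(f"unknown Sparkplug message type: {message_type}")
    parts = [NAMESPACE, group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)
    return "/".join(parts)

# A device attached to edge node "press-7" in group "PlantA" reporting data:
topic = sparkplug_topic("PlantA", "DDATA", "press-7", "temp-sensor")
print(topic)  # spBv1.0/PlantA/DDATA/press-7/temp-sensor
```

Because every Sparkplug participant uses this same namespace (and the standard birth/death message types shown in `MESSAGE_TYPES`), a SCADA host can discover and interpret devices without per-vendor topic configuration — which is exactly the interoperability win described above.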
Zenoh is a pub/sub protocol that unifies storage, queries, and computations in a fabric. The great thing about Zenoh is that it unifies all of the data in the system — data in motion, data in use, data at rest, and even computations — in a single model. It blends the traditional pub/sub primitives that you will see in MQTT and DDS, for example, with the dimension of geographically distributed storage. This makes it very attractive for edge computing, because obviously, in edge computing, you want to have the data and compute as close to the source of the data as possible. And finally, Zenoh has been highly optimized for low latency and high throughput; that's really the obsession of this team. Zenoh is available in a variety of flavors, and the team is now working on zenoh-pico, a C implementation that should fit on most RTOSs. It's still an emerging project — it has been around for a while, but they are still working on many language bindings and things like that. But anyway, please have a look. I think it's certainly one of the most innovative projects that we've got at the Eclipse Foundation. So how would you pick any of those protocols for your specific project? There are three major things — there are many other considerations, but if I want to boil this down to the fundamentals, you'll see them on this slide. First, there's your use case. Are you collecting data or controlling devices? That's really the fundamental question. If you are collecting data, then a publish and subscribe protocol is probably best. If you are controlling devices, maybe a request and response protocol could be used. But there are many, many other considerations, and sometimes you will choose to use more than a single protocol in order to optimize the solution.
Then you need to think about your constraints: your bandwidth, your battery, the compute power of whatever MCU you are using. What are your constraints? Because some of those protocols will require a bit more resources than others. Some of them run only over TCP for the time being, and that's a bit more resource intensive, since you need to maintain the connection, et cetera. So there are many things to consider there, but you need to be aware of your constraints to make the pick correctly. And finally, there's the whole dimension of support. Does the protocol support the hardware and the OS that you have? Can you procure sensors and devices in the market that already support it, so that the time to market for your wider solution is faster? That's really an important dimension, and you need to take it into consideration, because once again, what you care about is delivering the solution — not writing low-level code and soldering components together in order to build your own sensors, right? All right, so as I said, there are other factors to consider, but those three are really the most important ones, in my opinion. So now, I told you a lot about the protocols and the various implementations we have for them in the Eclipse IoT community. What is the Eclipse IoT community? Well, essentially, it's 46 member organizations and now getting close to 375 contributors writing code in 45 open source projects and counting. Those numbers are never up to date; they always go up. And overall, when you consider all the projects we've got, that's roughly eight million lines of code that you can leverage in order to deliver solutions. This is our membership. We have three strategic members: Bosch, Eurotech, and Red Hat. Those are the ones setting the vision for our working group and really leading the others. But as you can see, we've got many, many members, and a shout-out to the Linux Foundation.
The Linux Foundation is a member of Eclipse IoT, since we're working together on promoting the Zephyr real-time operating system, trying to make sure that our components work well on Zephyr, and, at the same time, making sure our members are aware of what Zephyr has to offer in the embedded space. And obviously, Linux is another big environment for us as well, so that's another place where we certainly work together. All right, so my call to you, now that this presentation is nearly over: please join us. Please become a member of the Foundation, and please become a contributor to our strategic open source IoT projects. You can contribute to any of the projects I described today right now. You can submit PRs, and if you submit enough of them and they are of high enough quality, you can become a committer even without being an Eclipse member. But we would very much appreciate your support if your organization can join the Foundation. And in joining the Foundation, joining the Eclipse IoT working group would be the wise thing to do, because this is where things happen in embedded and IoT at the Foundation. So at this point, the only thing left to say is a big thank you. Thank you for listening to me. As I said before, I'm Frédéric Desbiens, Program Manager for IoT and Edge Computing at the Eclipse Foundation. You can reach me on Twitter as BlueberryCoder, and please visit our website at iot.eclipse.org. Thank you.