Thank you all for coming. We are talking about the IoTivity-Constrained project, an open source project that was released a few months ago. It runs on a range of constrained hardware and software environments and is designed for building IoT applications based on standards from the Open Connectivity Foundation. I will speak about some of the design choices we have made in the project, and hopefully by the end of the talk it will be apparent that similar design philosophies apply more generally to any standards-based IoT software. This is the outline of the talk: I will start by introducing the Open Connectivity Foundation and the IoTivity-Constrained project and briefly introduce the OCF standards. I will speak a little about the characteristics of constrained hardware and software environments, followed by a discussion of the IoTivity-Constrained architecture at some depth. I will provide some guidelines on porting IoTivity-Constrained to new environments and end with a discussion of how you build applications.
So, the Open Connectivity Foundation, OCF, was set up in an attempt to address the M2M fragmentation problem. It is an industry-run consortium which aims to collaboratively develop and publish a set of communication standards for IoT. The IoTivity-Constrained project is a small-footprint implementation of the OCF standards. It has a lightweight design and runs on constrained hardware and software environments: typically battery-powered, wireless devices like a smart door lock or a sensing platform, for instance. Such devices might themselves be connected to a constrained network, such as a low-power and lossy network, and they typically run small operating systems like Zephyr, RIOT OS, or Contiki. The architecture of the project enables it to be quite easily ported and customized to any platform. This is an essential attribute given the variety of available hardware and software options: a flexible design lets the project be used by a wider variety of users, which means more people looking at the core, potentially more contributors, and a more robust implementation. To that same point, the people who would benefit most from this project are embedded IoT developers or makers who would like to tweak their application and OS bundle at a very fine-grained level.

The OCF protocol is based on the REST architectural style, where all things are modeled as resources. CRUDN operations, which stands for Create, Retrieve, Update, Delete, and Notify, may be performed on these resources via the standard verbs GET, PUT, POST, and DELETE. Things communicate with each other by exchanging resource representations, and the schemas for all of these representations are completely specified by OCF. The observe/notify operation is a special case of the GET method which lets you subscribe to notifications from a resource when there is a change in its representation.

Resources are generally defined with a set of standard common properties. They have a URI, which identifies a particular physical object, and a resource type. Resources are tagged with one or more interfaces, where an interface describes the set of operations that may be performed on the resource and the nature of their representations; OCF completely specifies a set of standard resource types and interfaces that may be used. Resources may have one or more policies associated with them, such as whether they must be discoverable or observable, and any implementation acts upon these policies. Lastly, resources can also be assigned a friendly name.

The OCF standard also describes roles, which are embodied in applications. The server role describes an application that hosts a collection of resources and exposes them to the outside world. A client application accesses resources hosted on a server. You can also have an application that incorporates both roles; such an application might, for instance, run on a gateway and serve as an intermediary. The OCF standard also declares a few well-known resources which serve specific functions. The oic/res resource is used to perform resource discovery and takes discovery requests. Device and platform are two logical concepts in OCF, and their well-known resources expose metadata such as manufacturer, OS, and other firmware details. The oic/sec resources serve as interfaces to OCF's security model, and likewise there are other well-known resources which you can learn more about by looking at the OCF core specification.

As for the OCF protocol, the wire protocol is based on the Constrained Application Protocol, CoAP. Resource discovery, as I just mentioned, is performed with the aid of the well-known resource oic/res, and endpoint discovery over IP is usually performed using multicast requests. OCF data payloads are encoded using the Concise Binary Object Representation
(CBOR), a compact data format. The OCF standard also describes a security model, with mechanisms for authentication and encrypted communication over DTLS, and it uses access control lists to restrict access to resources; you can read more about it by looking at the security specification. The OCF standard primarily relies on UDP as a transport, but there has been some work to adapt it to Bluetooth via the Generic Attribute Profile, and there is an ongoing effort within OCF to refine and standardize that scheme.

I'll just go over a few simple request/response examples here. For resource discovery you have two devices: a smartphone, which probably runs a client application, and an OCF light bulb running a server. In order to discover resources, the client issues a multicast GET request to the oic/res resource to find all resources around. The light bulb, on receiving this request, sends back a unicast response whose representation mentions its URI, /a/light, the fact that it is a light, and that it is observable, meaning it supports the observe operation. The client is interested in operating this light bulb, so it now sends a unicast GET request directly to that server endpoint, addressing that particular resource, and the light bulb sends a unicast response with its current representation: state equal to 0 and dim equal to 0, its brightness level. The client wants to turn on the light bulb, so it sends a unicast PUT request including a payload of state equal to 1 and brightness equal to 50. The server receives this request, acts upon it by turning on the light bulb and setting the brightness level, and sends back a success status via a unicast response. The client now decides to perform the observe operation and subscribe to notifications from the light bulb, so it sends a unicast GET request with the observe option set. The server takes note of that subscription and sends a unicast response back with its current representation, which still remains state equal to 1, dim equal to 50. In this example the light bulb happens to be connected to a physical switch which somebody has turned off; the bulb's resource representation has changed, and so it sends a notification to the client indicating its current state of state equal to 0 and dim equal to 0.

I will now speak about a few characteristics of working in constrained environments and some of the challenges they pose. But first, to understand what it even means to be constrained, here is a classification from RFC 7228, which presents a constrained device classification. RAM and flash sizes were the key dimensions along which the authors of this RFC observed a clear clustering of commercially available MCUs, resulting in three classes. Class 0 devices are severely constrained; they could be something like a tiny sensor mote, and in the best case they might be connected to the internet via a gateway or proxy. Class 1 devices are less constrained than class 0 devices, but they still have limited code space, and as a result they might not be able to accommodate a complete protocol stack to talk to other internet-connected nodes, and they might also lack certain security features. They might still be able to accommodate a protocol stack built specifically for constrained environments, such as the Constrained Application Protocol, but even so it might not leave sufficient resources available to an application.
Class 2 devices are much less constrained than both class 0 and class 1 devices, but their resources can still be utilized more effectively by building software that is generally lightweight, leaving sufficient room for a more capable application. In general, though, the challenge is to accommodate the operating system, the network stack, any drivers, the IoTivity-Constrained framework, and the application all within these constraints. Based on our experience thus far with a few OSs, we have found that it is generally hard to overcome this challenge on a class 0 device. It might be possible to deploy an application on a class 1 device with a very careful selection of features, and we can fit very comfortably within a class 2 device.

Here are some typical characteristics of constrained hardware. I have already spoken about low RAM and flash sizes. What this means is not just that we need to build highly optimized software, but also that the RAM limitation tends to constrain certain runtime parameters of the application, such as the maximum number of buffers for requests and responses and the maximum payload sizes, and these directly impact the serviceable workload of that application; this is something we need to be aware of. It also pays to build software that is modular, so that you can include only those features your application needs and cleanly exclude everything else. When you are working with low-power CPUs at low clock rates, you need to build software that is not heavily over-engineered: code paths and hot spots should be kept lightweight, and we should do minimal copies. While working with battery-powered devices, we should ensure that our code does not do any sort of busy work; it should take advantage of idle periods and periods of inactivity to put the CPU to sleep, because this leads to better execution efficiency and hence power conservation.
However, to further address this need we are currently looking to extend the OCF standards to incorporate a specialized application profile for constrained devices that would limit the responsibilities of such devices from a networking perspective. To give you an example, we could expect a battery-powered device to periodically go to sleep, and so it should not be required to listen for multicast requests; if so, the profile would mandate that we deploy some richer intermediary device that serves as a reverse proxy for the constrained node. That is just one example, but this is the sort of thing we are currently trying to study, and we hope to incorporate it in the OCF standards moving forward.

These are some typical software characteristics. The operating system itself is usually small and lightweight, and it might lack the necessary capabilities on its own, leaving us to potentially rely on third-party or proprietary libraries. There is usually no support for dynamic memory allocation, which is something we take for granted on full-featured operating systems. We can statically allocate memory, but if we do, we should put a lot of thought upfront into the sizes of those allocations, because they directly impact the performance of the application. There is also considerable fragmentation in the APIs and programming models amongst OSs, as well as variations in the design of their execution contexts and the scheduling strategies they employ; they could be preemptive or cooperative. For instance, RIOT OS supports multi-threading, Contiki essentially uses cooperative multi-tasking, and Zephyr supports both preemptive and cooperative threads, which can serve different purposes.
But the point I am trying to drive at is that it is generally hard to write a piece of software against one set of OSs and libraries and have it easily port over to another, unless we specifically design it with that goal in mind. So, to summarize, after addressing a lot of the pain points I just discussed, here is a brief summary of the features we have been able to support in IoTivity-Constrained. It supports the OCF client and server roles, resource observations, separate responses, and resource collections. It has utility functions to encode and decode OCF's data model. It executes the OCF protocol, handling it all internally, and exposes a set of high-level APIs to applications. It also supports the CoAP blockwise transfers feature, which is RFC 7959. This is an important feature, especially in constrained deployments where there tend to be limitations in the layer-2 MTU sizes. Taking Bluetooth and 802.15.4 as examples, we could use L2CAP or the 6LoWPAN adaptation layer to do fragmentation and reassembly of application data. But if we rely on those layers for that purpose, it becomes a bit transactional and perhaps a bit error-prone, and the network stack incurs the additional cost of creating multiple buffers and maintaining state. This state can be managed better at the upper layer, closer to where the data originates, thereby eliminating the need to handle buffers and maintain state in the lower layers of the network stack. This is essentially the benefit of blockwise transfers, and an IoTivity-Constrained application can optionally take advantage of it by setting a prescribed MTU size at compile time.
An application need not do anything further to utilize this feature, because the framework automatically launches a blockwise transfer if an application data payload exceeds that prescribed size. IoTivity-Constrained also supports a minimally viable implementation of the OCF security model: it supports one of its onboarding methods, provisioning of credentials, and the access control mechanism. You can read more about it in the OCF security specification, which describes a range of security modes.

[Audience question] Of what? You mean when you're doing it blockwise? No, it doesn't; that wouldn't happen, because every block is acknowledged. So the question she asked was whether CoAP can handle out-of-order reassembly if you use a blockwise transfer. My answer was that since every block is acknowledged, and you are maintaining all of the state at the upper layer, which is one of the benefits, you are not going to send the subsequent block until the first block has been successfully transferred.

So I'll now discuss the IoTivity-Constrained architecture. Based on a lot of the concerns we discussed in the previous slides, these were the architectural goals we arrived at, all of which are fulfilled by IoTivity-Constrained. It consists of a core block which is built to be cross-platform and encompasses most of its features: the OCF protocol, the resource model, application-layer functionality, memory management, and execution. This core block interacts with platform-specific functionality via a set of abstract interfaces. These interfaces are defined in very generic terms and elicit a specific contract from their implementations. Since the core block is the sole consumer of these interfaces, their definitions are very simple, limited, and bounded. This lends itself to being easily implemented in any environment, so IoTivity-Constrained can be quickly ported to and deployed in those environments.
It statically allocates all memory, and the sizes of these allocations are specified at build time. Lastly, it is highly modular and configurable, so you can cleanly exclude features from your compilation without affecting the rest of the system. This is the architectural diagram of what I just spoke about. You have the core block there in the gray box, which talks to these abstract interfaces. The interfaces cover only the specific sets of lower-level functionality the core needs: access to a system clock, a pseudo-random number generator, the network stack for connectivity, and some form of persistent storage to persist and retrieve credentials in support of the OCF security model. A concrete implementation of all of these interfaces is what we call a port, and one can be built for any OS, network stack, or lower-level libraries. We currently have ports for Zephyr, RIOT, Contiki, Mynewt, and even Linux, and on the left you have an IoT application, running on any one of these OSs, that speaks to the APIs exposed by the core block. The next slide is a depiction of all the constituent pieces of the core block, zooming into the gray box from the previous slide. On the right you have blocks that implement OCF's resource model, protocol, and security flows, and all of those blocks interact uniformly with OS-specific functionality via the abstract interfaces I spoke about. The blocks on the left perform more horizontal functions, such as working with the memory pools and handling the internal execution of the framework. The framework executes in an event-driven fashion, where data is passed between internal modules via the propagation of events.
The core block internally maintains an event queue of a fixed size, which holds all the outstanding events posted by any of these modules, and events are processed by the receiving module in the order in which they were posted. So an application essentially needs to run an event loop in some background task to execute the framework. The code implementing the client and server roles is kept distinct, so that an application can choose to include either of them, or both, using compile-time switches. Lastly, the core block exposes a set of uniform APIs that an application may use.

This is a deeper, more internal look at how the event loop executes. You have the application, which runs the event loop: it continually calls the oc_main_poll function inside a loop. Every call to it processes all outstanding events at all of these internal blocks as of that time. The application also registers a set of callbacks with the core block. The connectivity block is anticipating incoming messages over the network, and when something arrives it passes that buffer up to either the security or messaging blocks, depending on whether the incoming message was encrypted. The security block passes a decrypted message up to the messaging block. The messaging block parses the CoAP packet and sends the result up to the resource layer, which maps it to OCF constructs and eventually calls back into the application, to either handle a response in the case of a client or handle a request in the case of a server. In addition, the application itself can post events into the framework for processing; an example of where this might happen is in response to a hardware interrupt from a sensor.
One thing to take note of with the event loop design is that it gives the application, or at least the task running the loop, the opportunity to enter a tickless idle mode during known periods of inactivity. The application registers a callback which can be made to wake up the task that is running the event loop. The code here illustrates a pattern by which this can be accomplished. At the top, you initialize a semaphore and launch the event loop. The oc_main_poll function returns a value which is an absolute timestamp: either the absolute time of the next scheduled event that is known, or 0 if there is none. In either case, the loop can subsequently wait on that semaphore, either until that known time of the next event or indefinitely, and the callback function can be made to signal that semaphore to resume execution of the task. The framework automatically invokes this callback when there is new work, either in the form of an incoming request or when an application posts an event to be handled. In this way, the event loop is not really running when there is absolutely no activity; it can just sleep, and the CPU can move into a deeper sleep state.

So I will now speak a little about porting IoTivity-Constrained to a new environment. As I mentioned earlier, the core block interacts with a set of abstract interfaces which need to be implemented for a specific port, and the whole task of porting the framework to a new environment boils down entirely to implementing these interfaces, as everything else is OS-agnostic.
To give you an example, here is the structure of the Zephyr adaptation, where the adaptation layer, that is, the Zephyr port, directly invokes APIs from Zephyr to access its clock, network stack, storage, a random number generator, and some other kernel APIs. Below it is a simplified stack diagram for the Zephyr OS: you have the hardware platform, above which you have the kernel, drivers, and the BSP for that platform. The drivers interact with various features that we use, such as the random number generator or flash storage. Zephyr supports 802.15.4, Ethernet, and Bluetooth as layer-2 technologies, and on top you have support for UDP, IPv4, and IPv6, as well as other Bluetooth host functions. So with the existence of a Zephyr adaptation layer, you can write your application for Zephyr and compile the IoTivity-Constrained framework, the application, and the Zephyr OS all into a single binary, which you would then flash onto your device.

Getting to the interfaces themselves, here is the clock interface. To use it, you specify the resolution of the clock that you would like; IoTivity-Constrained tracks time via clock ticks, and obviously the higher the resolution, the more precise the timing you can achieve. The interface itself consists only of initializing the clock and obtaining the current time, which on Linux, for instance, you can implement using clock_gettime. The connectivity interface consists of implementing initialization, sending a buffer to a remote endpoint, sending a discovery request, and getting the assigned DTLS port; all the other details below these are implementation-dependent. The oc_message_t structure is again an abstract structure which tracks the remote endpoint information as well as the data buffer containing your request or response.
On the receive side, you could look for incoming network traffic either through polling or via a blocking wait in a separate task. In either case, when you get an incoming message, you construct an oc_message_t structure and pass it up to the framework via the oc_network_event call. However, the oc_network_event call needs to be synchronized with the execution of the event loop. So if you are calling it from a separate thread, for instance if you are blocking on a select over socket file descriptors waiting for incoming traffic, you presumably do that on a thread separate from the one running the event loop. In that case you would need to implement these functions using the synchronization primitives of the OS you are targeting, which internally synchronize the two threads and guarantee that synchronization.

This is the interface to the random number generator: you can choose to employ any seeding strategy as part of initialization, if necessary, and the framework calls oc_random_value to obtain an unsigned integer. This is the interface to any persistent storage that you want to support: oc_storage_config is meant to implement some sort of initialization, and it takes a parameter which could be some reference to an area of storage. oc_storage_read and oc_storage_write, as part of the contract defined by this interface, need to implement access to a key-value store, because the framework internally just reads from and writes to keys.

So I will move on to walk through a few application code samples to give you a sense of the APIs. An application in IoTivity-Constrained typically consists of implementing a series of callbacks. You have an initialization callback, which needs to be present in all applications, and a callback for defining and registering resources, which you would do in a server application.
In server applications you need to implement resource handlers for all supported methods on those resources. In client applications you need to implement response handlers for all issued requests, and you can also choose to define an entry point for issuing requests, which is invoked shortly after initialization. You have to, of course, run the main event loop in some background task in your application. You also need to configure a set of parameters to fit the needs of your application in the file config.h at build time.

[In answer to a question about required return values:] As far as the protocol is concerned, it does define some values that you need to return, but it doesn't really get into implementation details; anything you would consider an implementation detail is outside the scope of the spec. I think the spec really covers things more at the CoAP level.

So this is the background task in your application where you would initialize things; it could very well be your main function. You create an oc_handler_t structure and populate it with the initialization, resource registration, and event loop signaling callbacks. This structure is then passed to oc_main_init to do the initialization, and upon successful initialization you launch the event loop. This is an example of an initialization callback, where typically you would be expected to populate the platform and device resources, which, as I mentioned at the beginning of the talk, are two well-known resources that expose metadata such as the operating system and OCF spec versioning information. You could also use this callback to do any hardware-specific initialization, depending on your deployment. Here's the resource registration callback, where you can see that we're defining a new resource with URI /light and resource type core.light.
It supports the GET method and sets a resource handler, get_light, as the callback for the GET method. It's also marked as observable, so it supports the observe operation. This is an example of the get_light resource handler, which is called whenever the server application receives a GET request for that particular resource. The client is free to pass in query parameters along with the request, much like you would do with HTTP, and if the resource handler accepts queries, it can read them here using the API. But basically, the semantics of the GET method are that it has to return a representation of the resource, which is precisely what I'm showing here. In this case it consists of an object, a map with two properties, state and brightness level, and that is being encoded and returned in this function. IoTivity-Constrained has macros, which you see here, that you can use to encode your representations.

To do resource discovery on the client side, the line at the top shows an example of discovering all resources of type oic.r.light, and it sets a response handler, discovery, for the discovery request. At this point the framework sends off the multicast request with the appropriate serialization, and the discovery callback is invoked for every discovered resource across all responses that come back on the receiving end. Inside the discovery callback, you know it is called for resources of the type you asked for, so you make a copy of the server handle, which contains information about the remote endpoint hosting this resource as well as its URI, because you use this information subsequently to issue a request to that resource, which we do right here.
So now that we have the server handle and URI, we use the oc_do_get function to issue a request to the light resource. In this case we also pass in a query parameter, units, to select the desired units for the values in the representation it returns, and we set a response handler, get_light on the client side, to process the response from the server. In the get_light response handler, when the framework receives a response it parses the payload and creates an oc_rep_t structure, which gets handed to this function. This is a structure the application can simply walk through, in this fashion, to retrieve all of the key-value pairs contained in the representation, which is what you see here.

So, I spoke about the framework configuration. These are some of the parameters you would configure to suit your needs: the number of application resources you are supporting in your application; the number of request/response buffers, which impacts the number of requests you can handle concurrently; the payload sizes; the MTU size, if you want to support blockwise transfers; and DTLS-related parameters. This is something you specify at build time to match your specific application. Once you've written your application and set all of your parameters in the configuration, you need to build it using your target OS's build system. Taking Zephyr as an example, you would use Zephyr's build system, which is based on Kconfig: you can either specify a config file containing all of the OS configuration parameters, or use its menuconfig interface to do that.
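As an illustration, a config.h along the lines described might contain entries like the following. The macro names here are illustrative of the knobs the talk lists, not guaranteed to match the project's exact identifiers; check the sample config.h shipped with your version.

```c
/* config.h: build-time parameters (illustrative names). */

/* Number of resources the server application registers. */
#define MAX_APP_RESOURCES (1)

/* Concurrent request/response buffers: bounds the serviceable workload. */
#define MAX_NUM_CONCURRENT_REQUESTS (2)

/* Maximum size of an application data payload, in bytes. */
#define MAX_APP_DATA_SIZE (1024)

/* Prescribed MTU; larger payloads trigger a blockwise transfer. */
#define BLOCK_WISE_SET_MTU (700)

/* DTLS-related parameter, e.g. a handshake retry bound (hypothetical). */
#define DTLS_RETRY_TIMEOUT_SECS (5)
```

Because all allocations are static, these numbers translate directly into RAM usage, which is why the talk stresses sizing them to the application's actual workload.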
These are some Zephyr parameters you would certainly have to specify, at least the highlighted ones: the stack size of the main thread, some implementation of the random number generator, the number of network contexts (which is really Zephyr's implementation of a socket), the number of send and receive buffers and data sizes in the network stack, and the network transport and layer-2 options that you see. If you want to support the 6LoWPAN adaptation layer, as well as its header compression scheme, that's something you can choose to add, and if you are using Bluetooth you can select from among the host options, like the central or peripheral roles.

This is again something interesting that you can do with Zephyr which I thought I'd point out. Zephyr's network stack has an implementation of RFC 7668, which is transporting IPv6 traffic over Bluetooth Low Energy. The mechanism by which this is accomplished is fully documented by Bluetooth in the Internet Protocol Support Profile, IPSP. Since Zephyr supports this, you can build an IoTivity-Constrained server application as a 6LoWPAN node using these options, that is, 6LoWPAN IP header compression, the Bluetooth peripheral role, and L2CAP connection-oriented channels, which results in a 6LN-type device. Zephyr also includes a sample implementation of the IP Support Service, which is again something specified in the IPSP and something you need in order to establish a connection from the master, that is, the central. This is actually supported, and something you can try out today on the Arduino 101.

[In answer to a question:] Yes, I'm not sure how to quantify that; I've actually run the samples that we have. The question asked was: once we have all of this built up and running on an Arduino 101, how many resources do we have left over for the application?
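As a sketch, a Zephyr config fragment for such a build might include options along these lines. Kconfig option names vary across Zephyr versions, so treat these as illustrative rather than exact.

```conf
# Illustrative Zephyr Kconfig fragment (option names vary by version)
CONFIG_MAIN_STACK_SIZE=2048   # stack size of the main thread
CONFIG_ENTROPY_GENERATOR=y    # a random number generator implementation
CONFIG_NET_MAX_CONTEXTS=6     # network contexts (Zephyr's "sockets")
CONFIG_NET_BUF_RX_COUNT=16    # receive buffers in the network stack
CONFIG_NET_BUF_TX_COUNT=16    # send buffers in the network stack
CONFIG_NET_IPV6=y             # transport options
CONFIG_NET_UDP=y
CONFIG_NET_L2_BT=y            # IPv6 over BLE (RFC 7668) via IPSP
CONFIG_NET_6LO=y              # 6LoWPAN adaptation layer
CONFIG_NET_6LO_CONTEXT=y      # header compression contexts
CONFIG_BT_PERIPHERAL=y        # peripheral role, yielding a 6LN-type device
```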
I don't have the numbers with me right now, but based on what I remember I was approaching the RAM limits on the Arduino 101 at the time. But you can test this out. This works with Linux: you can use Linux as the central device, and you can use the Zephyr documentation for instructions on how to both set up your Arduino 101 and establish a connection on the Linux side. So, in conclusion, the project is progressing steadily. There is growing interest in the community as well as from prospective OCF vendors who are looking to use it. We are still hoping to achieve full OCF spec compliance; we are pretty close as it is, and we are looking to participate in an upcoming OCF plugfest to put it through a series of interop tests. Something that I mentioned earlier in the talk was defining a constrained device profile and extending the OCF standard to support that, and we are looking to use this implementation to run a series of experiments to aid in coming up with that definition. We hope to work with OCF's industrial and healthcare task groups to understand where they are heading in terms of direction, and to see if we could do any sort of prototyping with IoTivity Constrained to support those verticals. Lastly, we are also looking to add higher-level components, anything of utility, such as interacting with third-party services, and we could have them reside in the project as small libraries which an application could use. As always, we very much look forward to community involvement and more contributions, and I think there might be opportunities to further optimize some of the core blocks. So that was my talk. That's the pointer to the source and the mailing list, and you can contact me if you have any questions. Any questions?
Yeah, so for any operating system like that, for example one with this concept of packages, which is basically an external module, right. The Mynewt port specifically was contributed by runtime.io; they have a fork of the project in their repository and will be upstreaming it at some point soon. So I can't answer that specific question, but I don't see anything particularly limiting in the design of the framework itself that would prevent you from doing that, if you have the infrastructure to support what you're suggesting. So you're saying that if you were to issue a discovery request, you get back a series of responses. Well, I define at build time the number of resources that I expose as a server, but I don't define how many resources I receive, and as a client application I'm at liberty to decide how many resources I want to review. The discovery request itself has a lifetime of a few seconds, within which you receive all the responses that you receive. Right, on the client side, where you actually issue discovery requests and process these responses, you're at liberty to decide, so you wouldn't be processing any more than you've already accounted for up front. Yeah, exactly. So, as in that example, we ask for a specific resource type, and that's the way to actually do it. Any other questions? Yes, so over Bluetooth, at least for GATT, it essentially does a series of unicasts by connecting. So for GATT we've defined a GATT profile and a GATT service with a specific UUID for OCF. The server side is advertising, so when you want to discover, you establish that connection and then you transport CoAP directly over that LE link, and the server side responds if there's a matching resource. Yes, it takes care of that, exactly. Any other questions?