Today at the keynote we saw a nice example of what FBP is and how it's applied in a Node.js environment, with Node-RED. It's quite nice: everyone could understand what the flow is, what it's doing, and how easy it is. So here's my plan. I'll say who I am, then the challenge and motivation that led me to flow-based programming, what flow-based programming is, Soletta, the project that I lead, and the pros and cons. I'm a Brazilian guy, and I've been a software developer since I was nine years old. So I'm not that old, but I've been doing software for a long time, and I think in terms of software more than in English or Portuguese or whatever. I've worked with embedded since 2005. I started with Maemo devices, then I did Tizen for Samsung and Intel, and so on. I created my own software consultancy company called ProFUSION Embedded Systems; it was acquired by Intel in 2013, but this year I left to recreate my own company. I have years of experience with event-loop-based programming, and if you don't know what that is, I will explain. And I am the Soletta architect and lead developer. Okay, so the IoT challenge. If you look at IoT, there is a big difference compared to traditional embedded systems. That was the topic of my research as a manager inside Intel: I had to investigate what is different and why. Basically, the difference is the speed of development. Particularly at this point in time, when people still don't know for sure what consumers will buy as IoT, they must experiment. And experimenting means faster time to market, faster delivery, faster everything. It's much faster than phones. When we think about traditional embedded systems, we think about 15 years or more of deployment time. Costs must be reduced, but I have years to develop my product, because it should last forever. IoT is different: I'm creating a Nest now that should last for two or three years, unlike most thermostats, which were built to last forever.
Okay, so people need speed and people need ease. We need people from the web and from phone applications to migrate to this new development environment. This is a problem, because a web developer doesn't necessarily know what an ISR is, how to handle an interrupt service routine, what can be done there and what can't. And the solutions that exist are each focused on a single problem. One is just for hardware access: Intel had a nice library called MRAA to access GPIO and I2C. Then they created IoTivity, which is nice for networking. But how do I mix IoTivity and MRAA with an HTTP library, or MQTT, if I want more than one protocol in my software? We also need a scalable solution. In the previous talk in this room we had IoTivity Constrained: there is one API that works on microcontrollers, then you need to learn another API that runs on Linux. That's not that good, because you need to learn two APIs and develop two code paths. It would be nice to have something well integrated where I could reuse my knowledge, right? And that's Soletta. It's a uniform API that abstracts platforms, so you know how to upgrade, how to store your files, how to access your sensors, and also how to do networking. This is the Soletta project, the base of this talk, and why I came to this. We were asked to create an easy-to-use API. To create an easy-to-use API, you must understand your users; otherwise your users won't understand your API. So we started from the basics: how did we learn to program? When we did Programming 101, what was that? Then look at the IoT workflow: does the IoT lifetime match Programming 101? This is how we start: we read input, we process that data, we report, and we finish, right? This is how almost everyone here started to program. Hardly anyone started with a different kind of programming instruction: it's just read data, process it, print, and that's it. Finish your application, right?
This is procedural batch programming. It's a single workflow, and it barely handles interruptions or errors. You run it, it passes the test, there is no memory to free, no allocations, nothing. It's not that good for real life, right? Because when we look at an IoT device, what is its lifetime? We expect it to continuously serve multiple inputs at the same time, okay? From the network, from sensors, from users, from timers. You may have a coffee machine that is brewing coffee, reading sensors to decide if it needs to make the water hotter, while the user may press the cancel button, there may be some network maintenance check, and there are timers, like cleanup timers. Do you want to fire the cleanup timer while you're brewing the coffee? No. So it is hard to make this work with Programming 101, right? You go to novice developers, freshmen who just came out of school and want to work with you, and you ask: hey man, how do we adapt this to continuously serve multiple requests? "Easy, I do a while (1). It's continuously running, never finishing. There, I fixed it." Okay, but what about multiple inputs? You're only reading from one input. "Easy, sir. We add lots of functions, one below the other: we read from the network, then we read from the sensors." But what if there is sensor data to be read and no message from the network? You never get to read the sensors. "Easy, sir. You go and check first if there is data." Then you continuously spin, your battery drains, and your SoC will die sooner, okay? But it's easy to understand, and that's why people arrive at this conclusion. Also, if process_network_data is reading from TCP and the other side sent you a huge batch of data, you'll be stuck there, stuck inside process_network_data or read_network_input. Meanwhile you had a blinking LED, or an animation in your UI, and it stops. And the user thinks everything is scratched or broken.
They will kick the machine, they will complain, saying your product is bad, okay? So there's the last try: we create threads, because threads solve everything. Threads are simple: you basically put the second solution in a thread, right? I create three threads and it's all nice. However, what to do about shared resources? For example, I have a UI and I have a sensor. They are on different threads, and the sensor updated, so I must update the UI. If you do that from the sensor thread, you break everything, because the UI toolkit is usually not thread-aware. So you run into problems, right? So you stop, you go research, and you find event-driven programming, also known as the main loop, which is pretty common on servers and in UIs. If you've used a UI toolkit, you've likely done event-driven programming. Okay? It is super simple, because it takes the structure of that third try, but instead of while (1) it uses a while (wait_event). That is where the magic happens. For the naive developer it's very simple: you just do a batch of ifs, all in a single thread. You may even use a dispatch table, where everything is explicit: if my current event is network, I do this; if my current event is sensor, I do that. Okay? You may convert this to what UI toolkits like GTK, Qt, or EFL do: register an event handler. When there is a network event, call that function; when there is a sensor event, call that other function. It's very easy to understand, and people like it, okay? Because it's very similar to 101, which they are used to. The wait_event may handle multiple inputs by using threads or different primitives from the OS, but from the outside, from the user's perspective, you are looking at a simple single-threaded application. It's very easy to implement with select, poll, or epoll. A timeout is also an event, so it's very easy to match and add more event sources, like network or timers.
And it suggests what we call an idler. This is a concept where you say: run this function when there is nothing else to run. It's a kind of cooperative task. Instead of having two threads, where one thread is stopped at some random point so the other one can run, you have a small function that yields its time to another function, and eventually it gets executed again. In more detail: at the top you can see process_network_data, the traditional Programming 101 version. While process_network_data is executing, let's say it takes four seconds because the network input is big, your LED, your light, is stuck in the on state. For four seconds the LED is not blinking anymore; it's just on. Then when the function ends and the timer kicks, the LED goes off and on again. If double the data comes from the network, your LED will be on for double the time, like eight seconds. That's a pretty bad experience for the user. However, if you break your function into smaller functions, you can interleave it with other tasks, for example turning the LED on and off. Let's say you segment each function into slices of a third of a second, so you get many small blocks, right? They execute the same thing, but usually it takes more total time. As you can see in red, it's more responsive, so it feels faster to the user, but in reality it's taking more time. Take an example: you are painting a line on the screen, drawing pixels. If you segment your work per byte, it will feel responsive, but it will take forever, because you're throwing away all the cache lines and everything. So you need some balance, so the overhead is not too much but the latency of each slice is not too big, okay? As you can see, there is a space between these two boxes: that is the time used to execute the other tasks. So there is no magic. Idlers, if you're not familiar with them, are not that hard to do.
If your original code has a for loop, you move its inner block into another function, right? You make the original function just a starter that registers the process-data idler to be executed when nothing else is running. It's like a background task, okay? And when it executes, you have the context you saved, and you call the actual function, the yellow one, process_item. The time of process_item is fixed, or known to be short. However, in the original code, the count, the number of times that fixed amount of time will be executed, is unknown, because you are getting it from the network, from some sensor, from a configuration file. So that is the trade-off. Then there are pros and cons. Pros: no real concurrency, no locks, no deadlocks, okay? It's very easy, and it works everywhere. You may have the crappiest OS on Earth, or bare metal, and it will work there, because you don't need any special help from the OS. And it's very lean on memory. Compared to a thread, your context is not a full 4K stack; it's like one pointer, one integer, and another pointer. It's small, so it's very lean on memory. However, it requires you to manually analyze your code, and most people get that wrong, or they are afraid of doing it, because you are taking an algorithm apart. When it's a simple for loop, it's very easy, right? But for more complex stuff it gets out of hand, and most people have trouble doing it. You may need to restructure your algorithm. It requires callbacks and extra context, so there is more to remember, more to manage, more to leak, more to crash. And cancellation and error handling must stop the idler and free the context. "Gustavo, this is easy. Like you did here, you call idler_stop, and that's it, right?"
But consider when you read data from the network, then decode that data, then process it, and in the middle of that you get a cancellation. You must know where you are. And you may not have just one cancellation; you may have multiple requests at the same time, like three network requests in parallel. So instead of a single pointer you are keeping a list, you must find what to cancel, and it gets out of hand, right? It's painful. So, the Soletta project and our initial choices. We had to focus on scalability, so we had to run on very small systems, like microcontrollers. We had lots of previous experience with object orientation in C, and lots of experience with main-loop, event-based programming. I'm an EFL developer myself, so I did that for most of my programming life, so it was very natural to me, right? And we did networking, sensors, and actuators on top of that. However, as expected, the same design led to the same problems. We had to test whether our API was easy enough with random people, not just our own folks, but random students, hackerspaces and so on, and we came to the same conclusion: most people don't get callbacks. They get them wrong, okay? It leads to segmentation faults, to leaks, and to the boring pattern: on event, get some data, do something. It is error-prone, okay? We could call them idiots and tell them to learn how to write code, but we couldn't, so we had to research a real solution. And investigating, we found flow-based programming. I wasn't familiar with it. Who here was familiar with flow-based programming? From where? LabVIEW? Okay, that's a good one. People from circuits, electrical engineers, are usually more familiar with it. People from programming, who started with COBOL or whatever, learned to program in the traditional batch-oriented way. But if you look at this, it's easy, right?
In this example, I have an action that should be executed on a timely basis, configured either by a dial, where you turn to 5, 10, whatever, or by an HTTP server. If you look at it here, it's very easy to understand. If the dial outputs a value, it will be persisted. The persisted data comes back to the dial, so it's updated, and to the server, so it's also updated. You feed back the information, and you also change the timer, so it uses the new value and the action ticks, okay? Mostly everybody gets that, and it's easy to explain even to non-programmers, like UI designers or your manager. Flow-based programming was created by J. Paul Morrison in the 70s, so it's very old. He says some banks in Canada and the like use this system, so it's reliable. The idea is that you base your code, your architecture, on black boxes. You don't care what is inside; you just care what comes out of it and what it takes in. Those are ports, input and output ports, and they produce or consume information packets. That's the data, right? The sensor is on or off; there's a string, a color, a temperature, things like that. It started to gain traction on the web recently, with NoFlo, a very nice project written in Node.js. But the big guys are also doing it: Facebook Flux, Microsoft Azure Event Hubs, Google TensorFlow. And also on embedded systems: if you've ever heard of ROS, the Robot Operating System, it's all about boxes. This is the stabilization box, this is the image-tracking box, this is the GPS; you link them all and the robot works. There is a nice one called MicroFlo for microcontrollers, and Node-RED, which you saw today at the keynote. What is the problem with these? ROS is kind of big.
MicroFlo was kind of limited, and Node-RED is based on Node.js, so it won't run on a microcontroller, because it's JavaScript and requires the full Node stack. Multimedia people also do this: Video4Linux is flow-based programming, GStreamer is like that, and Apple's Quartz Composer. So, concepts and terms, and our translations to Soletta. You have the domain-specific language at the top, where you write node1. This is your variable, your instance, right? Officially it's called the process. And you have the type, which is your class, or node type as we call it. And you have the port: an output port that is connected to an input port of another node of another type. It's a text description, pretty close to the actual graphical description, and it's even easy to convert between the two, so it's very easy to read and understand. Here, the focus is on the IP, the information packet, or just packet. You don't care what is inside node1 or node2. All that matters is that I expect an integer to come out of it when it changes. Nodes are black boxes with a very simple interface. There is almost no coupling, and there should be no coupling, between the components. One box doesn't share state with another; all the sharing is done, or should be done, through connections and information packets. That allows you to just replace things. In this example, you could replace the dial with something else. In Soletta, we allow you to replace it with a GTK button: you could use a GTK spin button, and then you ask your peer, hey, now implement something that delivers the same data based on hardware. So they choose C plus GPIO plus whatever. It's very easy to replace. It also allows you to optimize code, because each port, if it's not used, can be removed. You can do that with pure C or pure C++, but it's harder. With this kind of analysis it's easy: you take an FBP, you see which ports are being used, and you can drop all the other functions much more easily.
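The textual form described above might look like the following sketch. The syntax is modeled on the FBP DSL as described in this talk, and the node type names and options are illustrative, not guaranteed Soletta types:

```fbp
# Hypothetical sketch of the DSL; node types and options are illustrative.
timer(timer:interval=1000)        # instance "timer" of node type "timer"
toggle(boolean/toggle)            # flips its state on every packet
led(gpio/writer:pin=13)           # black box wrapping a GPIO pin

timer OUT -> IN toggle            # output port OUT feeds input port IN
toggle OUT -> IN led              # only typed packets cross the connections
```

Each line of the connection section names an instance, an output port, and the input port it feeds, which maps one-to-one onto the boxes and arrows of the graphical description.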
It allows for parallelization. You can easily move each box to its own process or its own thread. What does it take? Just work in the core; it takes no effort from the user, because each box is a black box. If I'm sending a packet to a box in a different thread or a different process, it shouldn't matter. We know reality is a bit harder on us: we have shared resources, like underlying libraries. You may have an HTTP node that shares the actual server and socket. So there are some constraints in real life, but the idea is that you could potentially move a node to a different process without the user having to worry about it. And also isolation. Say you take a third-party module and you're worried about it: you don't want that module to phone home or take too much memory. What do you do? You easily move it to a different process and put an IPC in place. All you care about is the data, so the IPC is very simple: IPC over Unix sockets for the basic data types. This can increase security: you can apply SELinux or Smack labels to that process, reduce capabilities, reduce quotas like CPU quota and disk quota, even run it as a different user. And internally, as the node is a black box, you are allowed to use whatever you need. You can use threads; the only constraint is that you need to cooperate with the packets and deliver them from the main thread. And one nice thing: if an FBP program crashes, it's guaranteed to be the node's fault, not the author of the FBP. On the other hand, you can see this from a different perspective: it's impossible for the user to crash your node type if you did proper testing. A friend over there and I do lots of EFL development, where we call back into user code. What happens when the user double-frees from the callback? It crashes. And he sends you the backtrace saying, Gustavo, this is your fault, because the trace shows EFL here, and EFL is your code, so it's your fault.
But EFL is just the main loop; it's calling you back, so it will be in all backtraces. The users don't get that. They take time to understand that they did the double-free, and finding it is hard. But here there is no way to double-free or to leak memory. The only thing users can do is connect where the information goes. So if the information provider or receiver is well done, well tested, then it will never break. It's like the kernel: if you break the kernel, oops, it's the kernel's fault. It doesn't matter what you did; even if the mistake was in user space, it's always the kernel's fault. And it's all about information packets: what goes where. And there is clear data ownership. With a traditional API, when I do a get, you provide a getter for some data, like a getter for a string. Who owns that string? Maybe you can annotate it and have documentation saying don't free that memory, it's an internal reference. But users make mistakes. Here there is no mistake: you create one packet, you put it in the queue, and it's not yours anymore. It's owned by the core. The core will deliver it and then free it: no leaks. Memory management is hidden in the core, and the core is free to queue or reuse packets. Callbacks are also hidden in the core. If you look at our implementation, it's full of callbacks, but the user won't see them. When you send a packet, you are basically calling the process callback of the receiving node. That's a callback, but the user never sees it. And packet delivery can be delayed: you can put the CPU to sleep or queue packets for a while, so you don't busy-loop if you don't want to. Packet memory can be recycled, so you can have a fixed pool of packets, and you can even delay the caller if your queue is almost full, so you don't overrun memory. Connections can be typed, as they are in Soletta, for example. This allows you to pre-validate whether it's going to work or not.
So you don't need to flash the device and get a runtime error; you can catch it at compile time as well. You may say, but that's also the case with C. Usually, though, if a C API may take different kinds of values, it goes with a void pointer plus a size or a type. The user may not check it, and there will be a problem, because the compiler won't tell you: you are sending an integer, I'm expecting a boolean, and it will crash, right? So, for Soletta FBP, what is specific to it and why did we do it like that? Basically, we had to focus on scalability, from microcontrollers on up. We had to make it extensible: MicroFlo was nice, but it was kind of fixed. And we had to focus on configuration, because as people try to produce more hardware products, they keep changing the hardware. Product A is almost like product B, but it changes the sensors: it's still the same binary, but here it's GPIO and there it's SPI. So we had to be careful with these configurations and allow easy configuration. For more details, you can look at the in-depth study of flow-based programming on our wiki. Going to the things that matter: we are statically typed. Why? Because we want more information to optimize on small MCUs. We still don't do all the optimizations that we could, but we run very low on memory. I can share that we have an OIC client with Zephyr running in 100 KB of flash and 32 KB of RAM, total. The user didn't write a single line of C, just an FBP: an OIC client, GPIO for an LED, and a button, and it works without that much overhead, meaning the IPv6, UDP, OIC, CoAP overhead that the networking technology brings. We type-check at both compile and run time: we have the FBP generator and the FBP runner, which I will show you how to use, and we check in both.
We have a series of predefined packet types: boolean, integer, string, vector, color. We also allow you to pack them into a single unit: if you have three integers and one string, that's okay, you can pack them together into composite packets. And of course you can provide domain-specific packets: I have an image, I have a sound, I have MIDI, things like that. You can create your own packet; you don't need to serialize everything into a series of basic packets. Packets are immutable. Because they are read-only and you cannot change them, we can deliver the same packet to multiple connections. That's an extension we created as well: an input port may be connected from multiple ports, and an output port may connect to multiple input ports. Another nice thing we do: on send, the flow core owns the packet. It's not yours anymore; you don't need to care about memory management, we do it for you. Each delivery happens on a separate main loop iteration. So if you have a feedback loop, like this one, from the dial to persistence and back, and you did this naively in C, it would trigger infinite recursion and blow your stack. We don't have that problem, because each function is called from the main loop, so the stack stays shallow and you can build these feedback loops very easily. As I said, we also allow multiple connections to and from ports, and ports can know who is connected by providing connect and disconnect methods, or callbacks. That allows for things like: I have 32 ports and the user connected only 2. Should I wait for all 32?
No, I need to wait just for the 2 connected ports to provide the result at the end. So you can use connect and disconnect to do things like this. And of course, if you are receiving packets, you must provide the process function; that is your entry point. This is per port, not per node type. Some solutions queue all the packets for you: they put them in an array, call process, and you pick from each box. In our solution, you are called for each packet that comes in. You are free to queue and reuse internally if you want, but this is how we do it. Now the usage workflow. You may write your own C code and compile it: that's the canonical execution. Or you create a source in flow-based programming, the DSL, together with a per-board JSON configuration file, which I'll show you. From that you have two paths. One, you can generate code; the generated file, just like this one, goes into the final binary. Or you can execute it directly, like JavaScript or Python. The bottom path is only available on powerful systems like Linux: you need more memory, introspection, and things like that. At the top, we just generate what we call the information: there is no JSON parsing in the generated source. When you see "OUT" in this source, in the binary there is no "OUT" written anywhere; we resolve that to a port index, so the dispatching is pretty fast. At the end we have two arrays. One array is the node specifications: these are my objects, you create them like this, these are the options, and this is the type. Then you have a connection table, like a telephone directory, where you say object 0 port 0 goes to object 2 port 1. That's what you get, so it's very simple and very fast to execute. And the configuration is a unique feature of ours. The idea is that you can generate configuration files per application, per board, and per application and board. You also have a fallback, so you can provide things to test on the desktop. For example, this is the
case when you have a board where the GPIO is a given number, a second board where it's a different GPIO number, another board where it's not even a GPIO anymore but a keyboard, and then you want to test on your PC, where you want a GTK button you can press that simulates it for you. With this configuration file it's very simple to do, almost automatic: you just create a JSON file, give the name, give the properties, and that's it. You can run it, or you can use that configuration file with the FBP generator, which is useful for cross-compiling. If I'm compiling for a board running Zephyr but generating from Linux, I don't want it to use the Linux configuration, so you can specify it manually. It's even auto-detected: if you're running on Linux, it uses a series of regular expressions from a JSON file and detects, this is a BeagleBone Black, so these are the settings I should use; this is an Intel board, so these are those settings. It's very neat. Our node types are components, what goes in here, in blue. They are very simple: a C structure, or a pointer to a C structure, with an open and a close, basically the constructor and destructor, and an array of ports. That's it. You can reuse these objects even outside of Soletta, via libsoletta.so. You can have, say, a timer: it's a built-in node type inside libsoletta. Or you can create an external .so that Soletta finds automatically based on the name, or you can put it into your application, so you don't need to install it on the system, or, for statically linked applications, you use that option. The descriptions are meta-information: what is the port name, what is the port type. This is all compiled, and you can compile it out, so on microcontrollers you are not carrying that information just to throw it away. You use the FBP generator to resolve all this into C code and then you just compile it, or you use the FBP runner, which is very easy to use: you can try it live, like you
do in Python, Lua, or Node.js. You can even create types by using meta-types. We have a meta-type that allows you to create a type based on FBP, so you use one FBP as a box inside another FBP; that makes it easy to split and organize your code. You can create meta-types for composite types, defining how to split and how to merge them, and even JavaScript: you can create a node type written in JavaScript. We also have node type options. Traditional flavors of FBP do something like initial packets: you have a node here, like a GPIO, and you want to set the pin number to 1 and active-low to true. We convert that into two options. The first is to specify the options on the node itself; the second is to create a configuration file, as I told you before, like my_gpio. When you run it, it asks, what's my_gpio? I don't know, so it looks into the configuration file: the environment variable matches my_board_1, or the regular expressions pick my_board_1, and it looks up, oh, my_gpio is a GPIO reader with pin set to 1 and active-low set to true. So you don't need to change your FBP at all; you just reuse it with different configuration files, including on the desktop, where you may use GTK, or keyboard and console, to make it easier. This external configuration is something we like very much, because we had to deal with Intel Quark SE boards running Zephyr, and we had to manage MinnowBoard, Galileo, BeagleBone Black, all kinds of Linux boards, and they were all different in pin numbers and everything. Now the cons. Paradigm shift: it is very hard on experienced developers. If you are used to thinking in procedural programming or regular object-oriented programming, you are going to have problems. I won't lie to you: we all did when we started. It was new to all the developers, so we had to learn it, and we made lots of mistakes. The biggest mistake was trying to synchronize things that
are not synchronous. For example, we have a node here and a node there exchanging information. The information inside one pipe, inside one connection, is ordered. However, if I have two objects providing data to the same object, and this data was generated before that data, it doesn't mean it arrives first. Packets may be reordered across channels, across connections; only inside one connection is delivery guaranteed to be linear. Why was that a problem? Because people started to do hacks: this node, then another, then another. Each extra node adds a hop that delays packet delivery, right? While this other path was a straight line, so its packet arrived earlier. So what did people start doing? Let me add a hop here just to delay it and synchronize. And then, boom, because that delay may pass through a bigger node that delays for three iterations, not just one. That was us getting it wrong and learning from the mistakes. So the paradigm shift is not that easy on experienced developers. However, if you get the naive developer, the freshman, people from non-computer areas, kids, they get it right, because they are not stuck in the paradigm that we are. And it requires bindings: if you want to use another library, you have to integrate it. It's not automatic; it's not like including a C file and that's it. You need to create a node type, and you do that in C, of course, and you need to expose it manually. It's like writing Python bindings or Java bindings or Lua bindings. We do add some overhead; there is no magic, there is no free lunch. But we believe the overhead is not that much: it already allows us to run on very small boards. And you need a balance between what you are going to write in FBP and what you are going to write in C. A clear example: we started with the basic node types, this is an addition, this is a multiply, this is a comparison, this is an AND, and that's it. Okay, so let's write Fibonacci, the canonical example. It should be very
easy, but it's not: it's very easy to get wrong. You start to look, and there are lots of mistakes, lots of people trying. First, it's not simple, and it's not efficient, because what should be a straight algorithm in C, using all the registers, is now creating packets, sending packets, queuing, going back to the main loop. It is easy to get this wrong and to do things in FBP that you should be doing in C, inside the node type. Sometimes that's valid, because it's just a proof of concept, you are just experimenting; then you can do that. But you need a careful balance between what you provide as node types and what you express in FBP.

The pros are much bigger, in our understanding, and that's why I'm here. We have no leaks, no segmentation faults, reduced blaming; that by itself is worth it. But it also makes it easier to collaborate across teams. If I need to create an API and hand it to you to implement, that's not easy, right? People get it wrong: you are dictating my design, and so on. Here, the interface is just: this is the data that comes in, this is the data that goes out, that's it. In the case of the dialer example, I may use an HTTP server, or a keyboard, or something else, and I hand it to you and say: go implement that using SPI, I2C, GPIO, whatever, I don't care. This is my contract: an integer in, an integer out, that's it. So it is very easy to collaborate with different teams.

It's also very easy to read, write, and visualize. We even have a tool to convert to Graphviz, so you can get a plot. We don't have a visual editor like the Node-RED guys do; we would love one, and some people even started it, but they abandoned it. Anyway, it's very fast to prototype and to test, so as developers we started to use FBP more and more to test our code. FBP-first: the inner code must be asynchronous, and when you design an API you sometimes miss that, so you have to review in
the next iteration. If you design the binding for FBP first, you get it right, and then you get the benefit: it's much easier to test. So we have lots of tests in the Soletta code, and lots of samples you can use as a base to learn Soletta.

And that's it, thank you. You can check our OIC tutorial on GitHub; it explains how to very simply create a switch and a lamp using OIC with Soletta. Zero C code: it's just putting together some FBP drawings, with explanations even on how to read errors. If you made a mistake, if you made a typo, it explains how to read the error messages. We also have, not in this talk but in a previous one, a tool where you have a Node.js application running on the board; you open the browser, type the FBP, and you can run it, or our samples, without having to install Soletta on your PC or cross-compile for your board.

[Audience question about how concurrency is managed, whether the implementation is thread-safe.]

OK, so, mapping that to flow: the flow, or at least our implementation of flow, is not meant to be doing real-time tasks. What Soletta does, and Soletta is a project, not just the FBP part, is use a main loop internally. All the interrupts are queued to the main loop and dispatched later, so everything is dispatched to the user from there. Even if you say "I don't want to use FBP, I think it's crap", you still have a nice abstraction layer, like a GLib or Qt for IoT, so you can run code on Contiki, RIOT, Zephyr, and Linux. You are never called from an ISR; we queue that, so it's delayed. What we recommend is that drivers are done in the kernel if it's Linux; other OSes like Zephyr are taking the same approach, they have a sensor framework, like a sensor subsystem, where you can use that, and
you can provide stuff in there, like a thermostat and things like that; you do it there, and what you expose here is high level. If you think about a company, you have people creating the business logic, people creating the drivers, people creating their own subsystems. They provide libraries to each other: the driver people provide libraries to the people writing the node types, and I'm just linking the boxes together in a way a manager or a designer can understand. It's all about macro blocks. You are not writing the driver in FBP itself; you can try, but there are no real-time guarantees, so you may run into problems. However, for quick tasks it may be doable, so we don't block you from doing it; it's just not the recommended way. Does that answer your question? Another question, someone? Nope? OK, another one.

The way we do it is: you need explicit connections, and we have an array that we walk from the main loop. It's like this: I'm at index 0, and index 0 is your node, so I get the packets that node sent and put them in my sending queue. I check: this packet must be sent to this node and this node; this one is you. So I say, hey, it's your time, and I call your process function. You process the packet; you are not allowed to keep a reference to it or modify it, but you may clone it, you may copy it. When you return, I go back to the main loop, so if there are some events, like network stuff, I handle them, and then I go to the next node. If you send a packet, it is queued; I'm not looping from you back to you again, I give time to the other nodes and then I come back to you. This is how it's done. I have another Soletta presentation, which will also be online, with more details, including an explanation of the FBP syntax in more depth and the size figures, like the size overhead. It's not just our overhead, because this includes the OS, with uIP, UDP,
6LoWPAN, the radio drivers, the GPIO driver and everything, CBOR, CoAP. You can see it can be very small and efficient: we used 32K, and this hardware had 80K of RAM, hardware where nobody could run JavaScript in it. We really think JavaScript is another great solution, which is why we want to let you write some node types in JavaScript, particularly for prototyping. As I said, I can define a node type written in JavaScript, and the experts can go and optimize it later; that may take months, but I only need two or three days for my prototype, and then you take months to optimize it. So it is a good approach. And I believe we're out of time, so we can go drink some beer. Thank you, guys.
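The external-configuration idea described above, where the same FBP file resolves "my GPIO" differently per board, can be sketched roughly as follows. This is an illustration only: the dictionary format, the `BOARD` environment variable, and the node names are hypothetical stand-ins, not Soletta's actual configuration file format.

```python
import os
import re

# Hypothetical board configuration: each entry maps an abstract node
# name ("my_gpio") to a concrete node type plus its options.
CONFIGS = {
    "my-board-1": {"my_gpio": {"type": "gpio/reader", "pin": 1, "active_low": True}},
    "desktop":    {"my_gpio": {"type": "keyboard/boolean", "key": "space"}},
}

def resolve(name, board=None):
    """Pick a board config (from an env var here, as a stand-in) and
    resolve an abstract node name to a concrete type and options."""
    board = board or os.environ.get("BOARD", "desktop")
    for key, conf in CONFIGS.items():
        # A regular expression can match a whole family of board names.
        if re.fullmatch(key, board) and name in conf:
            return conf[name]
    raise KeyError(name)

# The same FBP file runs unchanged; only the configuration differs.
print(resolve("my_gpio", board="my-board-1"))
# {'type': 'gpio/reader', 'pin': 1, 'active_low': True}
```

On the desktop the very same lookup would return the keyboard node, which is the point: the graph never changes, only the configuration file does.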
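The packet-reordering pitfall discussed above, where order is guaranteed only inside a single connection and an extra hop delays delivery, can be simulated with a toy queue-based scheduler. This is purely illustrative and not Soletta's implementation: every send is queued and delivered on a later pass, so a packet routed through a pass-through node arrives after one sent directly, even if it was emitted first.

```python
from collections import deque

class Flow:
    """Toy flow runtime: sends are queued FIFO, never delivered recursively."""
    def __init__(self):
        self.queue = deque()     # pending (destination, packet) pairs
        self.arrivals = []

    def send(self, dst, packet):
        self.queue.append((dst, packet))

    def run(self):
        while self.queue:
            dst, packet = self.queue.popleft()
            dst(self, packet)

flow = Flow()

def sink(flow, packet):          # the shared destination node
    flow.arrivals.append(packet)

def hop(flow, packet):           # pass-through node: adds one delivery round
    flow.send(sink, packet)

# "a" is emitted first but routed through an extra hop; "b" goes direct.
flow.send(hop, "a")
flow.send(sink, "b")
flow.run()
print(flow.arrivals)             # ['b', 'a'] -- cross-connection order flips
```

Each connection on its own stays FIFO; it is only the relative order between the two connections that the hop changes, which is exactly why "add a hop to synchronize" is a hack that breaks.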
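The Fibonacci argument above, that a tight loop belongs in a node type while a naive flow version pays per-packet overhead, can be made concrete with a small sketch. The flow version here is a deliberately simplified stand-in (one packet per iteration through an imagined "add" node), not how anyone would actually wire it in Soletta.

```python
from collections import deque

# Direct iterative Fibonacci: the tight, register-friendly loop that
# belongs inside a node type (it would be C in Soletta).
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Naive flow version: every iteration becomes a packet dispatched by a
# scheduler, so fib(n) costs O(n) packet deliveries instead of one loop.
def fib_flow(n):
    queue = deque([(0, 0, 1)])            # packets: (step, a, b)
    deliveries = 0
    while queue:
        step, a, b = queue.popleft()      # scheduler dispatches one packet
        deliveries += 1
        if step == n:
            return a, deliveries
        queue.append((step + 1, b, a + b))  # "add" node emits the next packet

print(fib(10))        # 55
print(fib_flow(10))   # (55, 11): same answer, but 11 packet deliveries
```

Both produce the same value; the flow version just spends scheduler round-trips on what a node type would do in a handful of instructions, which is the balance the talk warns about.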
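The "you are never called from an ISR" rule described in the concurrency answer can be sketched like this: the interrupt handler only enqueues an event, and user callbacks run later from the main loop. All names here are hypothetical; a thread stands in for the hardware interrupt source.

```python
import queue
import threading

events = queue.Queue()    # thread-safe handoff from "interrupt" context
handled = []

def isr(pin, value):
    # Interrupt context: do the minimum, just queue the event and return.
    events.put((pin, value))

def main_loop_iteration():
    # Main-loop context: drain the queue and dispatch to user code.
    while True:
        try:
            pin, value = events.get_nowait()
        except queue.Empty:
            break
        handled.append((pin, value))    # the user callback would run here

# A device thread stands in for the hardware interrupt.
t = threading.Thread(target=isr, args=(13, 1))
t.start()
t.join()
main_loop_iteration()
print(handled)    # [(13, 1)] -- dispatched from the main loop, not the ISR
```

Because user code only ever runs inside `main_loop_iteration`, it needs no locking of its own, which is the single-threaded programming model the talk describes.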
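The dispatch loop described in the final answer, walking an array of nodes, calling each process function, copying packets rather than sharing references, and queuing sends for a later pass, can be sketched as follows. This is an illustration of the idea only, not Soletta's code, and all names are invented.

```python
from collections import deque
import copy

class Node:
    def __init__(self, name, process):
        self.name = name
        self.process = process       # signature: process(node, packet, send)
        self.inbox = deque()

def run(nodes, rounds=10):
    sent = deque()                   # packets produced during this pass

    def send(dst, packet):
        sent.append((dst, packet))   # queued, never delivered recursively

    for _ in range(rounds):
        for node in nodes:           # walk the array from index 0 upward
            while node.inbox:
                packet = node.inbox.popleft()
                # Hand the process function its own copy: it must not
                # keep a reference to or modify the runtime's packet.
                node.process(node, copy.copy(packet), send)
        while sent:                  # distribute packets for the next pass
            dst, packet = sent.popleft()
            dst.inbox.append(packet)

results = []
doubler = Node("doubler", lambda n, p, send: send(printer, p * 2))
printer = Node("printer", lambda n, p, send: results.append(p))

doubler.inbox.append(21)
run([doubler, printer])
print(results)                       # [42]
```

Sending never re-enters the graph directly: the packet waits for the next pass, so every node gets its share of the main loop before any node runs twice, matching the round-robin behavior described above.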