Hello, thank you for joining us today. We appreciate everyone taking the time to listen, and hopefully everyone is healthy and staying safe. I am Va Barbosa, developer advocate at IBM's Center for Open Source Data and AI Technologies. And presenting with me today is Yee Hong Wang, software engineer in the IBM Cognitive Open Technologies Group. Today we'll be discussing machine learning on edge devices, and more specifically how you can integrate Node-RED and TensorFlow.js to make it easier to incorporate some machine learning into your IoT device. Now it's no big secret that edge computing is on the rise and the Internet of Things is a hot topic. The growth of connected devices is staggering. You have home automation systems, smart cars and appliances, mobile phones and even personal drones. Data from IoT Analytics shows that there are about 7 billion IoT devices worldwide, and it is predicted that by the end of 2020 there will be more IoT devices than laptops, desktops and phones. And by 2025, Gartner projects the number of IoT connected devices will surpass 21 billion. Given the sheer number and variety of these devices and sensors, getting started with IoT can pose many challenges. First, each device has its own set of requirements and restrictions around its interface, protocol and so on. Also, trying to set up these devices to communicate with each other, or even just trying to get messages off of one of these devices, can prove to be time consuming and is often non-trivial. So solutions require pulling together different devices, APIs, services and sometimes protocols, and then trying to get them to interact and work together. Tools are needed that make all this easier and allow you to bring all these various pieces of hardware and software together in a manner that is approachable. Enter Node-RED. Node-RED is a flow-based programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.
It is a visual programming tool with a browser-based editor that makes it easy to wire together an IoT flow and deploy it to your device with just a single click. The Node-RED runtime is lightweight and built on top of Node.js, taking full advantage of the event-driven, non-blocking model. Node-RED makes it easier for more people to get started quickly without having to immediately dig into code. So rather than having to write tons of code, you instead just drag and drop nodes into a workspace and connect them to each other to build your solution. Because of this low-code approach, it has become an ideal tool for low-cost hardware such as the Raspberry Pi, but you can also run it from the cloud or locally on your laptop or desktop. Through its drag-and-drop user interface, you can develop and deploy powerful applications with minimal coding. So let's have a look. This is the Node-RED editor, which you access through the browser after you've installed Node-RED. On the left-hand side, you'll notice the palette area, which consists of a large set of pre-installed nodes, and you have nodes for all sorts of functionality, services and devices. Now along with the set of pre-installed nodes that comes with Node-RED, you can easily add more nodes, which you can find in the Node-RED library. So for example, you can just search for a particular area or a particular service and find nodes related to that. You can also find nodes on GitHub as well as npm. But along with these nodes that you can find out there in the community, you can also create your own custom node and install it for use in your flows. And on the right side, you'll see we have the tools sidebar, which provides a number of resources to assist you in working with your flow.
You have the info section, where you can find information like help and documentation for a particular node. You also have the debug section, which is where you'd see your debug messages and your logging when you're running and testing your flows. And there are other tabs in the right sidebar as well, such as the dashboard and configuration information. The middle area is your workspace, and that's where you go about wiring together your flows. Now to create a flow, it's as straightforward as just finding a node and dragging it into the workspace area. Then you can go ahead and add multiple nodes, connect the nodes, and there we go: we have a very simple Node-RED flow. And now if we go to the info section, you can see that if I click on one of these nodes, it gives me the help information about that particular node. So now that I have this flow, I can go ahead and deploy it. And since I have Node-RED installed locally, it's just being deployed locally. But for example, if I had it installed on a device, it would be deployed to that device. And if I just trigger this node, we can see the inject node sent a timestamp to the debug node, and the debug node logged that information. So that's typically how you'd go about wiring your flow and testing it. But let's go ahead and make something a little bit more interesting. So let me go ahead and find the HTTP request node and put that there. And then I'm going to wire this up to the HTTP request node, and the HTTP request node to the debug node. When I double-click on one of these nodes, it brings up the edit panel. This edit panel is where you can configure the particular node, if the node accepts any configuration. So I'm going to change this from timestamp to number, and then come up with a number, so let's say 19. So what's going to happen when I click on this is that it's going to inject the number 19.
Next, let's go to the HTTP request node, and what I'm going to do is point it at the endpoint that I want the request node to hit. In this case, I'm just going to use the Numbers API endpoint, which is a REST endpoint that you can send a number, and it'll give you a random fact about that number. So I have my flow all set and I can go ahead and deploy it. And now that it's successfully deployed, I'll go ahead and run it, and we can see what it does. You see it sent the number 19, and we get back that 19 is the number of years in 235 lunations. Let's try it one more time and see if it gives us something different. No, I guess there's not much for... oh, there we go: 19 is the final year of a person as a teenager. So this is basically the general idea around creating your flow in Node-RED. And what you can also do with these flows is export them as a JSON file. As you can see here, the flow is just a JSON file, so you can easily share your flows with others as well. So that's all good and great. But what if you want to incorporate machine learning into your flow, so that just as easily as I dragged and dropped nodes here, you can drag and drop a machine learning node to perform some AI task for you? That is where TensorFlow.js comes in. TensorFlow.js is an open source JavaScript library to build, train, and run machine learning models in JavaScript environments, such as the browser and Node.js. It's TensorFlow rewritten for the JavaScript ecosystem. It includes a low-level API that allows for linear algebra and complex matrix math to be done all in JavaScript, but it also includes a high-level API that closely follows the Keras API for constructing machine learning models. So let's go ahead and take a look at a TensorFlow.js example. The best place to start with TensorFlow.js is actually their website.
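Going back to the flow export mentioned a moment ago: an exported flow is just a JSON array of node objects with their configuration and wiring. A trimmed-down, hypothetical sketch of the inject, HTTP request, and debug flow (node ids and most properties are simplified for illustration, not copied from the actual export) might look like:

```json
[
  { "id": "inject1", "type": "inject", "payload": "19", "payloadType": "num", "wires": [["request1"]] },
  { "id": "request1", "type": "http request", "method": "GET", "url": "http://numbersapi.com/{{payload}}", "wires": [["debug1"]] },
  { "id": "debug1", "type": "debug", "wires": [] }
]
```

Importing a file like this through the editor's import menu recreates the same three wired nodes, which is what makes sharing flows so easy.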
On their website, you'll find demos as well as tutorials and also their API documentation, and they have very good API documentation. So let's go ahead and take a look at that. Now, quickly looking through this, you can see there are a lot of functions available to you. You have tensor creation and transformation functions. You also have model functions for working with models, and you have the layers functions for defining the layers of your model. There are also operations for doing the linear algebra and matrix math work, and much, much more. Now since we're talking about JavaScript, and JavaScript runs in the browser, you can easily run and try a lot of this functionality right here in the docs. So, for example, I can just go ahead and run this, and this right here just defines a tensor and then prints it out. Once you import TensorFlow.js, you have this tf variable, which you can use for all your calls. So let me go ahead and edit this and try something a little different. All I'm doing here is defining a two-dimensional tensor with these values in it. Then I'm going to take that tensor, get the log of it, then square that, and then print out the value. So I'll go ahead and run this, and we get the results. In addition to all these API functions that you see here, there are additional functions that are specific to the environment you're running in. For example, we have the Node.js API, which has some additional functions for Node.js. Then you have the React Native API for working in React Native. And you have tfjs-vis, which is for visualizing what the model is doing: creating bar charts and heat maps and things of that nature. So now let's go ahead and take a look at a more in-depth example. Here we have example code showing how to run inferencing with TensorFlow.js. As you can see, the first thing we do is load the model.
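The docs demo described above chains tf.tensor2d(...).log().square().print(), and both operations apply elementwise. As a self-contained sketch that doesn't require the library, the same math in plain JavaScript (with illustrative values, not the ones from the talk) is:

```javascript
// Elementwise log-then-square, mirroring what
// tf.tensor2d(values).log().square().print() computes.
const values = [
  [1, 2],
  [3, 4],
];

const logSquared = values.map(row =>
  row.map(x => Math.log(x) ** 2)
);

console.log(logSquared);
```

With the real library, the tf variable gives you the same result as a tensor, plus GPU acceleration in the browser via WebGL.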
In this case, we're loading the model from a URL on TensorFlow Hub. With the model loaded, you would pre-process the input. In this case, the input is going to be an image, so we're going to take that image and convert it to a tensor. Once we have the input tensor, we can go ahead and run the model. And once we have the output from the model, we would go ahead and post-process it, turning it into something that's a little bit more human readable or consumable further on down the flow. So the entire flow would basically be: load the model, pre-process the input, run the model, and then take the prediction and process the output. That's the basic flow for running inferencing with TensorFlow.js. Now what if you wanted to create your own model? This example here shows how you could go about building a model with TensorFlow.js. In this case, we're building a sequential model and adding a number of layers to it. We add a couple of convolutional layers and a couple of max pooling layers, where we specify the activation and other properties that the layer takes. After that, we go ahead and flatten it. And finally, we provide a dense layer with a softmax activation. Once we have all the layers in place, we can go ahead and compile the model with the appropriate optimizer and loss function. So since both Node-RED and TensorFlow.js run in Node.js, combining the two is inevitable. The goal is to get to a point where you can just launch Node-RED, import a TensorFlow.js model node, and then drag the node into your flow and wire it to your device. Much like you saw me do earlier with the default Node-RED nodes, you should be able to do the same thing with custom TensorFlow.js models in Node-RED. But why combine Node-RED and TensorFlow.js? As you saw, Node-RED makes it simple to wire together devices as well as APIs.
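The load, pre-process, run, post-process sequence described above can be sketched as follows. To keep the sketch self-contained and runnable, the model loading and predict call are stand-ins (with the real library you would use something like tf.loadGraphModel and model.predict); the URL, labels, and scores are all made up for illustration:

```javascript
// Inference pipeline sketch: load model, pre-process input, run, post-process.

// Stand-in for tf.loadGraphModel(url): returns an object with a predict method.
async function loadModel(url) {
  return {
    predict(inputTensor) {
      // A real model returns tensors; here we return fake class scores.
      return [0.1, 0.7, 0.2];
    },
  };
}

// Stand-in for image-to-tensor conversion (e.g. decode + resize + expandDims).
function preProcess(image) {
  return { data: image, shape: [1, 224, 224, 3] };
}

// Post-process: turn raw scores into something human readable.
function postProcess(scores, labels) {
  const best = scores.indexOf(Math.max(...scores));
  return { label: labels[best], score: scores[best] };
}

async function run(image) {
  const model = await loadModel('https://example.com/model.json'); // hypothetical URL
  const input = preProcess(image);
  const scores = model.predict(input);
  return postProcess(scores, ['cat', 'dog', 'bird']); // hypothetical labels
}

run('fake-image-bytes').then(result => console.log(result));
// → { label: 'dog', score: 0.7 }
```

Whatever the model, the shape of the pipeline stays the same; only the pre- and post-processing steps change with the use case.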
And TensorFlow.js makes it possible to build and deploy machine learning models in JavaScript. So the two together make it easier for developers and IoT enthusiasts to incorporate machine learning into their devices. If you bring TensorFlow.js models into the Node-RED platform, there is a lower barrier to entry into machine learning, which the visual programming environment of Node-RED helps facilitate. You also get an increase in privacy and data security that comes with being able to perform predictions directly on the device collecting the data, without having to send the data across the network or have the data leave the device. And keeping it all on the device makes it possible to perform inferencing in remote locations or in areas with unreliable or no network connectivity. But to get to that point, first we need to have these custom nodes. Luckily, there are already a number of TensorFlow.js nodes that you can find in the Node-RED community library as well as on GitHub. These custom nodes help you quickly get started with adding machine learning tasks to your IoT flow. You can find some general nodes for loading and running TensorFlow.js models, as well as nodes for specific use-case models like the BERT tokenizer or object detection. But what about when you can't find an existing TensorFlow.js node for your use case? Well, for these scenarios, Node-RED is highly extensible, and you can create your own custom node. The first thing to understand about creating Node-RED modules is: what are the pieces that make up a Node-RED node? A Node-RED node consists of three main files. You have the JavaScript file that defines what the node does. You have the HTML file that defines the node's properties, the edit dialog, as well as the help text for the node. And then you have the package.json file, which is used to package it all together as an npm module. So let's go ahead and quickly look at an example custom Node-RED node.
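Of those three files, the package.json is the smallest piece: it is a normal npm manifest with an added node-red section that maps node type names to their JavaScript files. A minimal hypothetical example (the package and node names here are made up) could look like:

```json
{
  "name": "node-red-contrib-my-model",
  "version": "0.1.0",
  "description": "A custom TensorFlow.js node for Node-RED",
  "node-red": {
    "nodes": {
      "my-model": "my-model.js"
    }
  }
}
```

The node-red section is what lets the Node-RED runtime discover the node when the package is installed.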
So we'll first look at the package.json, which is similar to the package.json of any npm module, with the slight difference of having the node-red section. This section right here just defines the Node-RED nodes that are going to be available in this package. Next, if we look at the HTML file, it is made up of three script tags. The first one defines the edit dialog. This is what's going to show up when the user double-clicks on the node: they get presented with an edit dialog where they can go ahead and edit any of the configuration the node may accept. Then we have the JavaScript script tag. What this does is register the node with Node-RED, and it's also where you can set the default values for any of the node's settings. And lastly, we have the script tag for the help section. When the user clicks on a node and goes to the info sidebar, this is the information that will be presented to them. Finally, we have the JavaScript file, which actually defines the behavior of the node. The JavaScript file just exports a single function, and this function has to register for an input event, so that it gets alerted whenever a message comes into the node's input. Once that message comes in, it can go ahead and take the appropriate action. So if we look here in this example, when the message comes in, we're going to go ahead and pre-process the input. Then we're going to take that pre-processed input tensor and run a prediction on it against the model. And once we have the prediction, we're going to process the output. This is similar to the flow you saw in the basic TensorFlow.js example: an input comes in, we pre-process it, we run inferencing, and then we process the output to get it into a nice JSON format.
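A skeleton of that JavaScript behavior file might look like the following. The RED object is normally supplied by the Node-RED runtime; a tiny mock of it is included at the bottom so the sketch runs on its own, and the node name, pre-processing, and prediction logic are made-up stand-ins for illustration:

```javascript
// Skeleton of a custom node's JavaScript file: export a function that
// registers the node type and handles 'input' events.
function createNode(RED) {
  function MyModelNode(config) {
    RED.nodes.createNode(this, config);
    const node = this;
    node.on('input', (msg) => {
      // Pre-process, predict, post-process, as in the TensorFlow.js example;
      // the model prediction step is stubbed out here.
      const input = String(msg.payload).toLowerCase(); // stand-in pre-processing
      const prediction = { length: input.length };     // stand-in for model.predict
      msg.payload = { input, prediction };             // post-process into JSON
      node.send(msg);
    });
  }
  RED.nodes.registerType('my-model', MyModelNode);
}

module.exports = createNode;

// --- tiny mock of the Node-RED runtime so the skeleton runs standalone ---
const registry = {};
const RED = {
  nodes: {
    createNode(node) {
      const handlers = {};
      node.on = (ev, fn) => { handlers[ev] = fn; };
      node.emit = (ev, msg) => handlers[ev](msg);
      node.send = (msg) => { node.lastSent = msg; };
    },
    registerType(name, ctor) { registry[name] = ctor; },
  },
};

createNode(RED);
const instance = {};
registry['my-model'].call(instance, {}); // instantiate the node
instance.emit('input', { payload: 'Hello' });
console.log(instance.lastSent.payload);
// → { input: 'hello', prediction: { length: 5 } }
```

In a real node, the input handler would run the loaded TensorFlow.js model instead of the stub, but the shape of the file, one exported registration function plus an input handler, stays the same.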
Then we can go ahead and send it out to the next node in the flow. And then we have the same stuff we had before, such as loading the model here. So these are the basic building blocks of a particular Node-RED node. With this, we can now go ahead and see this node in action. Here we have a Node-RED flow using that custom TensorFlow.js node we just went over. The way this flow is going to work is, once I trigger this inject node, this image will be sent over to the custom node. The node will pre-process the image and turn it into a tensor, run inference on that tensor, process the prediction, and then send it out to the debug node. I also have here an image preview node, just so that we can see what image is being sent to the custom node. Let's go ahead and deploy this flow. It is now successfully deployed, and we can go ahead and run it. When we run it, we get a preview of the image that was used. And looking at the debug, we can examine the prediction that was returned. You'll notice it is an array with objects corresponding to what was detected in the image. In this case, we have three objects, each one being a person. Along with that, we get the confidence score, as well as the bounding box, which outlines where in the image the specific object was detected. And just like that, we have a custom TensorFlow.js node that allows us to add machine learning capabilities to our IoT devices. That is great. We can now create Node-RED nodes with TensorFlow.js that anyone can just add into their flow and effortlessly deploy to their device. As straightforward as this may appear, there can still be challenges and things to keep in mind when you bring TensorFlow.js models to Node-RED. For starters, we need to think about the model, such as how to store it. Should it be packaged with the Node.js module, or do you prefer to serve it from an external URL or CDN? How well would the model perform on edge devices?
Not all models are optimized, or can easily be optimized, for running on low-compute devices. Model optimization can be a whole talk in itself. You also have to think about how best to run the model. Should it be kept in the main thread or moved to a worker thread? How should loading and caching be handled? Where in the life cycle is it best to load the model? Should it be added to the global context, the flow context, or just left within the node context? And as far as data goes, you can have at your disposal audio inputs, video inputs, and all kinds of sensor input data. How much of the model input/output should be processed by the Node-RED node versus letting the users of your node handle that themselves in their flow? A lot of these questions will be answered by the model you're working with and the use case you're trying to solve. And with that, I'll pass it on to Yee Hong to take you through additional example flows and to talk in more detail about an interesting solution he was recently playing around with. Thanks, Va. Hello, everyone. My name is Yee Hong Wang, and I'm with the Cognitive Open Technologies Group at IBM. So now that you know a thing or two about Node-RED and TensorFlow.js and the way we combine them together, let's look at some simple flows to help showcase the technology in action and hopefully inspire some of your own ideas. So Va just showed a basic object detection flow using a simple custom node, which would look something like this. The flow is very simple. There's a camera as the input node, and here is the custom node, and the object detection happens inside here. The main logic is inside the custom node. It uses a TensorFlow.js models npm package, which has a nice API and hides all the pre- and post-processing. In many cases, this is great. Using the pre-packaged model API is easy and works for many cases. However, you are limited to just the use cases and models that are supported by the package. Here is another example.
In this flow, I used some of the custom TensorFlow.js nodes Va mentioned earlier, which extract the functionality of that single node into several individual nodes: here, here, and here. The flow itself is very simple. We just take the input from the camera, do some pre-processing, and then pass the data into a model node to run the model inference. Then we pass it to a post-processing node to do some post-processing on the output of the model node. And finally, we pass the data to a bounding box node, which draws the bounding boxes as well as the labels onto the image. So without further ado, let's try it out. You can see it actually detects one person and a cell phone. So let's go into the details of the flow. The second node here is actually a tf function node doing the pre-processing logic. The logic here is very simple: you just call the TensorFlow.js API to decode the image and then convert the data into a tensor. Inside the tf function node, you can access all of the TensorFlow.js API under the tf variable. At the end of the code, we compose a message object containing the image tensor; that's the format the model node needs. Next is the model node, a tf model node. In the tf model node, you supply your model URL. This is a pre-trained object detection model. When you start the flow, it will retrieve the model from this URL. You can point it to a remote URL, or you can also point it to your local file system. If it is a remote URL, it will retrieve the model, store it in your local file system, and cache it; it will also maintain the cache. When you kick off the flow and the data flows through this node, it will run the model inference and then pass the model inference result to the next node. The next node is the post-processing, because the result of the model inference for the object detection model is actually a tensor or an array of tensors.
So we need the post-processing node to convert it into a more friendly format. As you can see here, after the post-processing, the data becomes an array of objects, and these are the objects detected in the image, with information such as the class name and the coordinates of each detected object. Then in the flow, we combine the result from the post-processing with the original image from the camera, and send these two pieces of data to the bounding box node. In the bounding box node, you don't need to provide any configuration. It will automatically use the information you pass in, including the image as well as the bounding box information, and draw it onto the picture, as you can see here. So you can see the nodes here and here; those are the custom TensorFlow.js nodes. The only things you need to provide to them are the model URL and the class definition JSON file, and that's it. I think the only code I had to program is the pre-processing, and even here the logic is very simple. So you can see that by leveraging those custom nodes, you can compose your model inference flow very easily. In this case, we used object detection. If, for example, you train another object detection model with your own classes, the only things you need to change are this URL, to point to your model, and, in the pre-processing node, the class URL, to point to your class definition file. Then you can run the flow. So it's very easy. There is quite a bit you can do with object detection. As for the Node-RED part, I also found something quite interesting: because Node-RED runs on Node.js, the benefit is that you can run it in many types of environments, from your laptop to a cloud platform. So let's try running something on an edge device, the low-cost hardware such as a Raspberry Pi or a Jetson Nano.
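The post-processing step described above, converting raw model output into an array of labeled objects with coordinates, might look roughly like this pure-JavaScript sketch. The class list, threshold, and raw numbers below are made up for illustration; a real object detection model would emit these values as tensors that you would first read out with something like dataSync():

```javascript
// Convert raw detection output (parallel arrays of boxes, scores, class ids)
// into an array of { className, score, bbox } objects, dropping weak detections.
const CLASSES = ['person', 'bicycle', 'car', 'cell phone']; // illustrative subset

function postProcess(boxes, scores, classIds, threshold = 0.5) {
  const results = [];
  for (let i = 0; i < scores.length; i++) {
    if (scores[i] >= threshold) {
      results.push({
        className: CLASSES[classIds[i]],
        score: scores[i],
        bbox: boxes[i], // [x, y, width, height]
      });
    }
  }
  return results;
}

// Example: two confident detections and one weak one that gets filtered out.
const detections = postProcess(
  [[10, 20, 100, 200], [150, 40, 60, 90], [0, 0, 5, 5]],
  [0.92, 0.81, 0.12],
  [0, 3, 2]
);
console.log(detections.map(d => d.className));
// → [ 'person', 'cell phone' ]
```

This is the "friendly format" that the bounding box node can then consume alongside the original image.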
Luckily, I do have a Jetson Nano on hand, and in the next flow I'd like to show you an auto garage door flow. In this flow, I use several devices. The first one is the Jetson Nano. It's a small and inexpensive single-board computer, and it also has a GPU on it, so running model inference is quite smooth. You can see I also attached a Wi-Fi USB adapter to it. It allows me to control my IP camera here: I can take a snapshot from this IP camera and retrieve the image. Then lastly, I use a garage door opener hub to control my garage door opener, which is also very cheap. Let's look at the flow on my desktop first; later I will deploy this flow to the Jetson Nano. But before we look at the flow, let's look at the model that I'm going to use. I found this automatic license plate recognition model from this repository. It already provides a pre-trained model, and the accuracy is pretty good, so I directly converted this pre-trained model into a TensorFlow.js web-friendly format. The repository provides two approaches for using this model. The first approach is that you send the image into the model, and it will recognize the license plate as well as the numbers and characters on it. But when I ran this approach on my Jetson Nano, it took about 50 seconds, which I think is too long. So I used the second approach, which actually has two steps. The first step is the same: you send the image into the model, but it will only return where the license plate is found in the image. You then use that information to do the image cropping, cropping out the license plate. Then you send the image of only the license plate into the model again, and it will return the numbers and characters on the license plate. Using the second approach, it took about five seconds on the Jetson Nano, which I think is pretty good. So let's look at the details of the flow. The first thing is that I trigger the shutter on the camera, so it takes a picture.
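The cropping step in that second approach takes the plate's bounding box from the first pass and cuts that region out of the original image before the second pass. The coordinate math is simple; here is a sketch assuming a detection box normalized to the 0..1 range (the box values and frame size below are made up):

```javascript
// Given a detection box normalized to [0, 1], compute the pixel rectangle
// to crop out of the original image before the second model pass.
function cropRect(box, imageWidth, imageHeight) {
  const [xMin, yMin, xMax, yMax] = box;
  return {
    left: Math.round(xMin * imageWidth),
    top: Math.round(yMin * imageHeight),
    width: Math.round((xMax - xMin) * imageWidth),
    height: Math.round((yMax - yMin) * imageHeight),
  };
}

// Hypothetical plate detection in a 1280x720 frame.
const rect = cropRect([0.40, 0.60, 0.60, 0.70], 1280, 720);
console.log(rect);
// → { left: 512, top: 432, width: 256, height: 72 }
```

The resulting rectangle is what you would hand to an image-cropping call, so the second model pass sees only the plate, which is why this approach is so much faster on the Jetson Nano.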
The second step is retrieving that picture. Because it's an IP camera, I used API calls to do that. Then after I retrieve the image, I do the image pre-processing, and then I send it to the license plate recognition model. Here, you see that I store the model on my local file system. Then, like I mentioned earlier, it will return the license plate with its coordinates if it finds any license plate inside the image. Then I do the cropping. I then need to do the pre-processing on the license plate image again, and send it to the model a second time. The second time, it will return the characters detected on the license plate, and here I output that to the debug message. You can see I also attached a camera input node here, because I wanted to try it on my desktop first and make sure everything works smoothly before deploying it onto the Jetson Nano. On the Jetson Nano, of course, we will go with this flow here. Let's try this flow on my desktop first. You can see I'm holding the license plate, and then I trigger the flow using the front-facing camera on my desktop. You can see it actually successfully detects all the characters and numbers on my license plate. But you'll notice it's quite slow. The reason is that I display the image, so the image gets sent back and forth from the device. When you deploy the flow on the device, you can disable the image viewer and improve the performance. So let's look at the flow on the Jetson Nano. The last node of the flow on the Jetson Nano is a little bit different: I switched this node from debug to a garage opener node. In this node, I call the garage door opener hub to open the garage door if the license plate detected in the image matches my license plate. This is my IP camera, which I attached on top of my garage door outside, and this is the garage door opener hub.
I can send a signal to this hub, and it will send the open or close signal to the garage door opener. I recorded a video yesterday that will show you how this flow works. As you can see, I'm pretty satisfied with the result I got. Making these types of object detection flows is very easy and quick. However, you are not limited to just image-based apps, as you can load and run any TensorFlow.js model. We even have a flow using a BERT model for sentiment analysis on things like YouTube comments and tweets. So there are many things you can do. To recap: with the vast amount of sensor data provided by a variety of IoT devices, innovating on this data and creating applications for it is made easier with Node-RED and TensorFlow.js. As you saw, Node-RED provides a platform that is super flexible and extensible. If some functionality that you want doesn't exist yet, you can easily create a Node-RED node, potentially using some of the hundreds of thousands of npm packages that are available. There are also a lot of TensorFlow models that you can use. TensorFlow.js even has several model APIs already packaged as npm modules, like the object detection model that Va showed earlier. The fact that these two technologies both live in the Node.js ecosystem makes integration relatively seamless. With TensorFlow.js, since model inference is done locally, data privacy shouldn't be a concern, and you also don't have to rely on Internet connectivity for your model to work. Node-RED and TensorFlow.js can be used to rapidly develop AI-enabled IoT applications. The speed of realizing this class of applications is a strength of these technologies: AI is democratized and made even more accessible. And with that, we provide some links where you can learn more, and that concludes this session. Thank you so much. Hey, hello. Thank you everyone for listening to the talk. If you have any questions, feel free to reach out to Yee Hong or myself, or ask in the chat.
And earlier in the chat, a couple of people were asking about accessing the video and the slides. The Open Source Summit team will be making all the content available, but in addition, you can actually find our video and the slides if you go to ibm.biz/tfjs-nodered-oss2020. So now let's see if we have any questions. I don't see any questions at the moment, so we'll just give people a moment to see if they have any questions to ask. Yee Hong, I have a question for you. How long would you say it took you to get that flow together and get it working with your garage door opener? I guess, if you include the device setup, where I needed to mount the IP camera on top of the garage door, I think it took me about two hours. The tricky part is that you need to find an angle that can capture your license plate. Also, the benefit of using that IP camera is that it's wide angle, so I needed to face the camera down a little bit and focus on the front end of the vehicle only; I don't have to shoot the whole vehicle as it's coming up my driveway. And about the coding part: I think it took me maybe less than one hour to do the coding, because of those pre-existing TensorFlow.js custom node packages, so those nodes are easy. But I did need to write some code, for example, to crop the image to the license plate. That's the first part. And the second part is that once I get the output from the model telling me the license plate characters and digits, I need to do a comparison. That's the part I needed to program. Right now, I hard-code my license plate in it, so it can only recognize my vehicle. And, okay, so someone has a question, and they said the camera node was already available, correct? So was that just the default camera node available in the Node-RED library, or was it a special camera node that you used? Let me answer the question properly.
If you are talking about the camera node, that camera node in Node-RED is actually using the built-in camera on your desktop. It's not using the IP camera API. So that camera node is actually linked to your browser's built-in camera, your front-facing camera. The way I control the IP camera is through API calls, because it is an IP camera; I call the remote API to take a snapshot and then retrieve the image. But I believe nowadays those IP cameras actually provide a lot of features. For example, they have motion sensors, so you can have some sort of webhook: when the camera detects motion, it will notify you. In that sense, when the motion sensor detects motion, it can trigger your flow to start. In the demo in my video, I actually triggered that flow manually, because I don't have the motion sensor for my IP camera yet. But I believe those IP cameras with a sensor are very common right now. So, yeah, thanks. Any other questions? Is there any other AI library that can be used with Node-RED? I think for our talk we tried to focus on integrating TensorFlow.js, and TensorFlow.js itself is actually an AI library, a machine learning library. So like Va mentioned earlier, you can not only do inference with it, you can actually also run training with it. Most of the time you will use a pre-trained model. There are other AI libraries that exist, but I think TensorFlow.js's unique part is that it also supports training. I think most other AI libraries only support inference in the browser or maybe on the Node.js side; mostly those AI libraries do the training in Python. Somebody was asking for the link again to the high-res video and the slides. That's ibm.biz/tfjs... Va, maybe you can share that. It's ibm.biz/tfjs-nodered-oss2020. I think we have another minute or two if there are any other questions.
All right, everyone. Thank you for joining us. And again, the Open Source Summit team will be making all this content available, so you can get it through them, or you can get it through the link I mentioned earlier, which is ibm.biz/tfjs-nodered-oss2020. All right, and with that, thank you very much.