Hi, my name's Nick O'Leary. I'm an open source developer at IBM and the project lead of Node-RED. In this talk, I'm going to cover low-code development and why it's interesting for JavaScript and Node developers alike. I'm going to use Node-RED as an example of a low-code development tool and show how it can be easily extended by adding new nodes to its palette of capabilities. I'll also talk about how lessons from developing modules for a low-code development environment can be applied when creating regular Node.js modules. So to start, let's talk about low-code development. To help put it into context, let's just step back and think about how coding has evolved. In the beginning, there was machine code, the raw binary used by the computer to control its circuits. As a developer, you had to speak the computer's native language. The second generation of languages were assembly languages. They took the first steps towards accommodating a programmer's needs rather than those of the computer. We then have the third generation of languages, which is what many of us think of when we talk about programming languages today: C, Java, Python, JavaScript, and so on. They are much more independent of the machine they run on. They are far more portable, some more than others. They provide structured data types rather than simple bits and bytes. They provide easier ways to organize and structure code. And then we come to the fourth generation of languages. They tend to be more domain-specific tools, with much less of a focus on bits and bytes and more interest in the higher-level data they work with. And the key thing to recognize is the increased layer of abstraction as you go through these generations of language. Each abstraction moves you away from having to understand the machine code, from having to deal with low-level memory management, and from having to decide how best to represent your data in whatever binary format. As developers, we accept those abstractions for what they are.
They are a convenience to make us more productive. And those abstractions do more than just that. A modern compiler can generate machine code that's highly optimized, probably far more so than if you tried to write it by hand, because it can fold in the decades of experience and knowledge that exist. Low-code programming is a style of fourth-generation programming. It typically involves some sort of graphically driven development, where the developer is assembling and configuring components, rather than hand coding discrete parts of an application. Their graphical nature provides that increased abstraction over screens full of code. Now, before getting too deep, I want to briefly mention the other type of programming that often gets bundled together with low-code programming, and that's no-code programming. Now, there's an important distinction. No-code programming means just that. The user, who may well not consider themselves a developer in the slightest, does not need to touch anything that looks remotely like code. A great example that many of you will be familiar with is If This Then That (IFTTT). As a web service, it allows you to create workflows in your browser purely through clicking options. Whilst it's very versatile, it does keep you somewhat constrained to the types of integrations they support. There's nothing wrong with that, except when you want to do that extra little thing that's slightly off the rails they keep you on. Low-code, by contrast, still allows the user to create workflows through visual tools, but it does give the user some more leeway to insert bits of code when it's needed. Different low-code tools provide different degrees of flexibility in that regard. The use of abstraction also helps to make it more accessible to a wider range of users. By hiding the individual lines of code, a user is able to focus on solving whatever problem they're trying to address.
They can also be tailored to particular problem domains, so they're presented in terms that a domain expert, who may not be a developer, is able to relate to more closely. And making it easier for a wider range of users to be able to create applications is a good thing. How does this relate to Node.js and JavaScript? Well, with Node, you see a similar pattern of abstraction. Node core provides a set of standard libraries as an abstraction over the V8 engine. You have things like the HTTP module, but more often than not, you'll use a module that wraps the low-level HTTP API into something more consumable: Axios, Got, or whichever is the flavour of the month. These modules don't stop you as a developer from using the lower-level APIs, but it's often more comfortable to use the higher-level abstractions. And it goes in the other direction as well. For those tasks that need the performance of native code, you can choose to create a compiled add-on and integrate it relatively seamlessly with your JavaScript code. So that now brings us to Node-RED, a low-code development tool built on top of Node.js. If you haven't seen Node-RED before, this is what it looks like. You get a browser-based editor with a palette of nodes down the left-hand side and a workspace in the middle where you drag on the nodes to create your flows. Each node represents a discrete piece of functionality, and it acts as a black box. It receives a message, does something with that message, and then passes it on to whatever other nodes it's wired to. So in this environment, you're drawing the logical flow of information. Here I have a flow that defines an HTTP endpoint. When an HTTP request arrives at Node-RED, this node is triggered. The message it passes on contains information about the incoming request. We pass that to a template node that generates HTML content using the data provided, and then to a response node that responds to the request.
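To make that flow a little more concrete, here's roughly the shape it could take when exported in Node-RED's flow JSON format. This is a sketch: the ids, the URL path and the template text are illustrative, not taken from the talk's demo.

```json
[
    { "id": "n1", "type": "http in", "url": "/hello", "method": "get", "wires": [["n2"]] },
    { "id": "n2", "type": "template", "template": "<h1>Hello, {{payload.name}}</h1>", "wires": [["n3"]] },
    { "id": "n3", "type": "http response", "wires": [] }
]
```

Each node declares which nodes it's wired to, which is exactly the logical flow of information you draw in the editor.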
And we can also add in debug nodes to examine the messages passing through a flow. So when I now load this URL in my browser, I get an HTML page, and in the Node-RED editor, we see the message in the debug sidebar. And that's what Node-RED is about: building applications by drawing the logical flow of events, with the nodes representing each step the event or message passes through. Now Node-RED comes with 30 or so core nodes, covering a wide set of basic building blocks: nodes to handle setting message properties, branching the flow based on logic, handling HTTP, TCP and UDP connections, splitting and joining messages, all sorts of things. And as a platform, the main way its capability can be extended is by adding extra nodes to this palette. As it stands, there are over 2,500 contributed nodes available, covering a huge range of topics, such as talking to hardware devices, databases, web APIs, other pieces of pre-canned logic and so on. And there's always room for more. So let's talk a bit about what a node actually is under the covers. A node is implemented in two parts. A Node.js module provides its runtime functionality, defining what the node actually does in a flow. And an HTML file is used by the editor to define its appearance, help text and its edit dialog. Those two files are then packaged as a regular node module by adding a package.json file. That contains a custom node-red section that identifies the files in the module containing nodes. Let's dive into a node's code to see how it's structured. A node's module has to export a single function. This function is called when the module is being loaded into the Node-RED runtime. It gets passed an object that's the node's handle into the runtime API. Next, the code defines a constructor function for the node. This gets called whenever the runtime needs to create a new instance of that node. That happens whenever the user makes some changes in the editor and clicks the deploy button.
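As a sketch of that packaging, the package.json for a contributed node might look like this; the module and file names here are illustrative, not a real published module:

```json
{
    "name": "node-red-contrib-lower-case",
    "version": "1.0.0",
    "description": "A Node-RED node that lower-cases message payloads",
    "node-red": {
        "nodes": {
            "lower-case": "lower-case.js"
        }
    }
}
```

The node-red section is what identifies this as a module containing nodes, and points the runtime at the file to load.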
The editor packages up the configuration in a JSON format and sends it back to the runtime. The runtime then loops through all the nodes in the flow configuration and creates instances of each one, passing in the configuration for that particular instance. So within the constructor function, the first thing it has to do is call the utility function called createNode. That turns this object into a proper Node-RED node. It can then do whatever work it needs to get ready to start handling messages. For example, it may need to validate its configuration, or establish a connection to a remote system, whatever it needs to do. So at this point, we've defined the code for creating the node. Next, we need to define how the node will handle messages that the runtime passes it. This is done by adding a handler for the input event. The event is triggered whenever there's a message for the node. One point to note here is that the events are handled completely asynchronously from when they're emitted, which is unlike the core Node EventEmitter object, which is fully synchronous. The event handler is passed the message being received, a send function and a done function. The handler can do whatever work it needs to in response to that message. If it wants to send a message on within the flow, it can pass the new message to the send function. Once it's finished its work for that message, it calls the done function. Now, this send/done pair of functions was a new addition in Node-RED's 1.0 release late last year. Before that, nodes would use the node object's own send function. The reason for using the passed-in function, though, is that it allows the runtime to correlate the call to send with the message that was received, allowing for better traceability of your flows, something we need to make easier to do within the project generally. Now, finally, the node may also register a handler for the close event.
This gets called when the node is being stopped by the runtime, either because a new set of flows is being deployed or because the runtime itself is being stopped. This allows the node to clean up any internal state, such as database connections or whatever resources it's created. And that's largely it in terms of the framework for creating a node. There are a bunch of other APIs for logging errors, updating node status and other features that we don't need to get into for this talk. So once we've created this constructor function, the very final thing is to register it with the runtime, using a call to registerType. Switching to the HTML side of the node, there are three things: HTML content for the node's edit dialog, help text, and the JavaScript code used to register the node with the editor, which defines various aspects of its appearance and lists out the properties of the node that can be configured by the user. The definition also includes optional JavaScript functions that will be run when the edit dialog for a node is being opened or closed. This allows the edit dialog to provide a much richer user experience than just a plain HTML form. The editor bundles jQuery, but doesn't otherwise use any JavaScript toolkit for its UI generation. We occasionally get asked about Angular, React or one of the other various JavaScript frameworks, but we've generally tried not to tie ourselves to one framework or another. The editor does provide a number of useful widgets that nodes can use to build their UIs, such as the typed input, which allows the user to specify the type of a property as well as its value. This makes for a much more consistent user experience than having each node create its own way to do it. So given the time available today, I'm not going to delve any deeper into how to create nodes. Instead, I wanted to talk a bit more about the general design principles that apply when creating them.
When designing a node for Node-RED, the first thing to consider is who the audience is. Who are the people that will install the module, add it to the palette and drag it into their workspace? With low-code platforms like Node-RED, you have to remember the users come from a much broader range of experience. They may already be deeply familiar with whatever functionality your node provides, or they may have no knowledge of the underlying details and just know they need the functionality the node provides. They may have a very detailed set of requirements for the node, or they may just want it to work and not worry about those details. So here are some of the principles to apply when designing a node. The first is making it intuitive. A node's configuration is stored as a set of JSON key-value pairs. Exposing that as a plain list of text boxes and labels assumes the user will know what's expected in each box. For a simple node that expects perhaps just a URL, username and password, that may well be good enough. But when there start to be lots of other options, a different approach might be needed. One of the approaches we took with the core trigger node, for example, was to build the UI in such a way that the user could read how it was configured almost as an English sentence: send hello, then wait for five seconds, then send goodbye. It isn't an approach that would work in all cases, but it's certainly helped in this one. Every feature and option you add to the node increases the cognitive burden of using it. For feature-rich nodes, it can be quite hard to design a UI that provides all of those features without also making it much harder to find the particular option a user needs. Providing a user with a clear hierarchy of choices can also help them reach their desired configuration. By that I mean only presenting options that are relevant to the choices the user has made up to that point. There's certainly a balance to be made here.
Having options appear and disappear, if overdone, can also be confusing to the user. The key is finding the right balance. The next principle is about having sensible defaults, and this very much follows on from making the node intuitive to use. It should come with a sensible set of default configuration values. Don't force the user to have to set everything the first time they want to use the node. A good example of this is the split node. This node can be used to turn a single message into a stream of multiple messages. By default, if you give it a string payload, it'll send a message for each line of text in that string. If you give it an array, it sends a message for each element of the array, and if you give it an object, it'll send a message for each key-value pair. You don't have to configure it to do any of that, but if you do want to change its behavior, you can very easily do so within the node. Its partner node, the join node, will, by default, attempt to join a stream of messages back together into a single message, essentially reversing whatever the split node did. But again, you can easily give the join node a manual configuration if you want it to do something different. The third principle is about choosing what can be controlled dynamically. Nodes expose two different API surfaces to the user: the configuration properties set in the edit dialog, and the properties of the individual messages passed to the node. Both are means to control the node's behavior, but that doesn't mean everything should be exposed in both ways. The HTTP request node allows you to set the URL of the request in the node itself. That's useful if you're sending the request to a fixed endpoint where the URL never changes. But the node also allows you to pass in msg.url to set the URL on a per-message basis.
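That "configuration wins" behaviour can be sketched with a tiny helper inside a node's input handler. The function and property names here are illustrative, not Node-RED's actual implementation:

```javascript
// A sketch of the "configuration wins" rule: a message property is only
// consulted when the user left the corresponding field blank in the edit dialog.
function resolveUrl(configuredUrl, msg) {
    // If the user set a URL in the edit dialog, msg.url is ignored
    if (configuredUrl && configuredUrl.length > 0) {
        return configuredUrl;
    }
    return msg.url;
}
```

The same pattern applies to any property a node exposes both in its edit dialog and on the message.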
The node also lets you set custom HTTP headers for the request by setting msg.headers, but that's an example of an option that's not exposed in the node's edit dialog. There's an important design principle here as well: a message property should not override a property the user has explicitly set in the node's edit dialog. For example, in the request node's case, if the user has provided a URL in the edit dialog, then the node will ignore msg.url. This was a deliberate choice, to avoid the cases where a node's behavior changes unexpectedly because the user didn't realize a property happened to still be set on a message in the flow. The fourth principle is about being forgiving. Nodes should be forgiving in what they receive and be kind when things go wrong. A good example is the MQTT node. An MQTT message payload is just a collection of bytes, but the node will accept any JavaScript type and try to do the right thing with it. Strings, numbers and booleans all get handled as we think the user would expect them to be. Passing in an array or object will lead to it being stringified to JSON automatically before it gets published. And I'm sure we've all tripped over the handling of falsy values in JavaScript. If a node accepts a true/false flag as a message property, take care to consider how it handles non-boolean values: do undefined, null, zero or an empty string behave as the user would expect? The fifth principle is handling errors with grace. One of the most important things is ensuring nodes have proper error handling to avoid the dreaded uncaught exception. An uncaught exception will cause the runtime to shut down. In the immortal words of the Node.js documentation, it's simply not safe to resume normal operation after an uncaught exception. Attempting to resume normally after an uncaught exception can be similar to pulling out the power cord when upgrading a computer.
Nine out of ten times nothing happens, but the tenth time, the system becomes corrupted. Restarting is the only option we have. The sixth principle is about being consistent. With such a large ecosystem of nodes available, there are a whole bunch of well-established patterns for how nodes look and behave. Whilst it's good to challenge those norms, it's less good to reinvent the wheel for the sake of it. Use the patterns that exist. Use the common UI widgets Node-RED provides. Follow the style guide for the help text. It all makes for a better user experience. Ultimately, the key is to have empathy for the user of the node. The goal isn't to outwit them. The goal is to empower them to achieve what they're trying to do, with as good a user experience as we can manage. And these are lessons that apply just as well when designing a regular Node.js module. APIs are all about user experience. Even though you're exposing your module via an exported code API, rather than the visual wrappings of a low-code environment, you still want to ensure the user experience means it's a joy to use. A well-defined API that puts the user first, above the internal implementation details, will always provide a better end result. Get the externals right first, as they are so much harder to change down the line compared to the internals. Now, if you maintain a Node.js module that you think would be interesting to expose in a low-code programming environment, I hope I've given you a bit of a flavor and some inspiration to try it out with Node-RED. If you're interested in learning more, then please do get in touch via the project Slack or Discourse forum. I've been Nick O'Leary from the Node-RED project. Thanks for listening.