Well, hello and welcome to another DevNation Live. I'm super excited to be here today to bring you some real presentation content around Node.js, so you're going to enjoy the session we have right now. We're going to spend about 30 minutes with Lance Ball. He's going to dive into enterprise Node.js and show us what we can do in the context of OpenShift and Kubernetes. So if you have any questions, feel free to hit us up on the Q&A tab or on the chat tab. I'll be monitoring that live during the session. But at this point, I'd like to turn it over to Lance. Hey, Burr. Thanks for the introduction. Yeah, I'm really excited to be here to talk about this stuff. I've been working on Node.js at Red Hat for a couple of years. Let me just pull up my slides here so we can get started. I'm eager to share with everyone some of the stuff that we've been working on, because I think it's really cool. Just very briefly about myself: my name is Lance. Lance Ball. I work in the middleware division in a group called Project Dodd. We're tasked with looking at new and emerging technologies and figuring out how those technologies can best fit within the context of our best-of-breed Java-based middleware, as well as our OpenShift platform. So that said, Node.js is now a thing at Red Hat. And I guess probably a lot of you are thinking about this, because we have been known for a long time for our Java-based middleware. But the world is changing, there are a lot of new technologies out there, and Node.js is the one that we're embracing right now. So what do I mean by that? Well, you may or may not be aware, but last month, in March, we released full product support for RHOAR (Red Hat OpenShift Application Runtimes) for Node.js 8.x LTS. 8.x is the most recent of the long-term support Node.js release lines. Node.js 10.0 is coming out very soon, within a week or so. That doesn't mean it's coming to RHOAR within a week or so.
We should have the community version of it available within a week of 10.0 coming out. So we do have product support for Node.js on OpenShift now, which is really exciting. What does it take to do that? We have a couple of different technologies that we have built in order to make this happen. First of all, we have a Node.js RPM that is built directly from the upstream Node core source, and right now it's running Node 8. As well as some runtime containers for your Node.js applications. These are S2I (source-to-image) builder containers that allow you to basically take your application code, overlay it onto a Node.js container, and then deploy it into OpenShift or Kubernetes, or you can just run it straight-up Docker style. So those are the main technologies that we're supplying as part of the Node.js product. How do we make that happen? A number of ways. We're really fortunate to have a couple of Node.js core committers on the team who commit to upstream. I'm one of them, and Daniel Bevenius out of Sweden is another. He's a fantastic fellow, a really great engineer, and he is also on the Node.js Technical Steering Committee. The great thing about that is that it gives us some insight into critical CVEs, things that might not necessarily be public yet, so that when we do our next release we're right on top of those critical CVEs and that sort of thing. So those things combined gave us what we felt was a solid footing to move forward with Node.js at Red Hat. But why? Why did we choose Node.js? There are lots of different frameworks, languages, and platforms out there that you can build applications with. Why was Node.js the choice that we made? Well, I think it's pretty obvious if you look at the language rankings online and that sort of thing that a lot of people are hiring Node developers.
It's something that a lot of engineers are interested in these days, a lot of companies are interested in these days for a number of different reasons. But before we get into the why, I want to talk a little bit about how we got here. In the beginning there was the monolith. I'm sure all of you are familiar with the concept of the monolithic application, the idea that you kind of shove everything that you need into this one process and that one process handles service discovery and authentication and authorization and all the different pieces that make an enterprise application an application. They're all shoved into that one thing. And that application could be a Java EE container, but it could also just be like a PHP container. It's just a thing that you deploy. And then when you need to scale it up, you might put a load balancer in front of it and deploy another one. And there were some problems with that. I don't want to get into all of them. We all probably know some of those by now because microservices have been around for a while. But what resulted from sort of examining what those problems were was this concept of deploying our applications as small, independently deployable, discrete pieces of functionality. Typically, that functionality is expressed as an HTTP-based REST API. So now our applications look something like this, where we've got this application blob, but it really consists of half a dozen or more different services that are all communicating with each other. And these services typically are pretty small, pretty lightweight, start up very quickly, do one thing, do it well. And that's what Node.js does really, really well. And the nice thing about that is that it can make your deployment time quicker. It can make your development time quicker. It can make your onboarding with new developers quicker because the application itself has become very, very simple. But actually, I shouldn't even say that. 
Because the complexity does not depart your application. Just because we moved to microservices doesn't mean that your application all of a sudden doesn't have the complexity that it had before. It just means that the complexity that was built into that big monolith has now started to shift out into other places. For example, the platform, the runtime where you're running your application. So we've got Kubernetes and OpenShift and Istio, which I don't even want to get into now, but Istio is a nice service mesh that lays on top of all of this. The platform that you run your application on now handles a lot of the complexity that was built into your code before: service discovery, authorization and authentication, resilience, tracing, and all of that kind of stuff comes along for the ride in these Kubernetes-like environments. So your application code now becomes very simple. With that in mind, I want to move to a very quick little demo to show you how easy it is to deploy a Node.js application, any Node.js application, into your OpenShift environment. So you see here, we've got a few commands to set up MiniShift. I haven't talked about MiniShift yet, but I want to take a moment to mention that project because I think it's fantastic, especially for developers. It allows you to run a single-node OpenShift/Kubernetes cluster on your laptop or your desktop, your development environment. That's what I work with on a day-to-day basis. I tend to set up these profiles and some configuration for each project that I have, and I've done that here. I'm not going to go through all these steps in the demo because I've done them already, and it takes a little time to get MiniShift started. But one thing I do want to point out is this eval (minishift oc-env) command right here. I'm using the Fish shell; if you're using Bash, there would be a dollar sign right there before the first parenthesis.
This is important. MiniShift comes with the OpenShift command line client, oc, built into it. When you eval the oc-env output provided by MiniShift, the oc command line client becomes part of your path. So then we can create a new project called DevNation. As I said, I've done all those things already. Let's go take a look at it and make sure everything is running. I'm going to log in, and I'm on the DevNation project. So the next thing that we want to do is create and deploy an application. And it only takes, right here, five command lines to get an application deployed into OpenShift. We're going to walk through this step by step. I'm going to move away from the slide here and go back to my terminal. The first thing I'm going to do is make a directory for my application and cd into it. And there's a tool called the Express generator. First of all, Express.js: I'm sure many of you are familiar with it. It is kind of the de facto web framework that people use in the Node.js world. There are a lot of others out there, some really good, some not so good. Express is the one that we're going to use for this demonstration because it's so widely used and has some nice tools. One of those tools is the Express generator. You would run npm install -g express-generator; the -g says, hey, install this thing globally so it's available not just for my given project, but on the command line for me always. I've already installed it, so I'm going to Ctrl-C out of that. But I'm going to build my Express app by just running express and specifying the current directory as the place where we want our app to be built. You can see that Express then sets up some static files, some routes, some templates, and then we've got our entry point into the application here with bin/www. And it tells us the next thing we need to do is install our dependencies by running npm install.
And that's pretty straightforward. And then we can start it. But there's nothing really interesting about starting it, because right now all I've shown you is what you can see on the Express.js website, right? You just run express and then npm start and it's all good. But we want to get this thing into OpenShift. And the tool that we're going to use to get it into OpenShift is a tool called NodeShift, which we've developed on our team. Its purpose is to make it very easy to deploy an application into the OpenShift environment. NodeShift by default uses port 8080 to expose your application, and Express by default uses port 3000. So we're going to make a quick change to package.json, which, if you're not a Node developer, is basically metadata about your application. One of the things that it has in there is how to start your application. What is the script to start it? And it is: run node on this bin/www file. But we want to set an environment variable to say we actually want to run it on port 8080. That way, when we deploy it to OpenShift, we don't have to make any changes within the OpenShift context; it will just do what it's supposed to do. npx is the next command that I'm going to run. This comes as part of npm as of version 5.2, and it's a really nice little tool that basically allows you to run any npm package that has a command line component, even if it's not installed. I could have used npx earlier with the Express generator. So npx nodeshift is the package that we're going to use to deploy our application. We use the deploy command. Because we're running on MiniShift and not on OpenShift, I haven't set up things like an SSL certificate, so I use the --strictSSL=false flag so that we don't get complaints about that. And then the --expose flag tells NodeShift: not only do I want you to deploy this application, but I would like you to create a URL and expose this application to the world.
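The package.json change described here is just a tweak to the start script. A minimal sketch of what the scripts section ends up looking like, assuming the express-generator defaults and a POSIX shell for the PORT=8080 prefix (the generated bin/www reads process.env.PORT and falls back to port 3000):

```json
{
  "scripts": {
    "start": "PORT=8080 node ./bin/www"
  }
}
```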
So let's run that and see what happens. It won't take very long, and I'll walk through what's happening as it happens. The first thing it does is look at your package.json for metadata about your application. It examines the directory that your application lives in to see what JavaScript is there, figures out what the entry point to your application is, tars everything up, and sends it up to your OpenShift instance, whether that's MiniShift, OpenShift Online, or on-premises OpenShift. It creates a new deployment configuration, a new service, a new route, and then potentially exposes that route. And we can see right here that the route was exposed at this URL, which is internal to my laptop. I happen to have the MiniShift console open already, and you can see that the console has already been updated with my deployment. This happened while I was speaking. So we can see here that we've got the image, devnation/myapp. That's the container image that your application is running as. And you could actually connect to the Docker registry that is part of OpenShift or MiniShift and run that container directly in Docker if you'd like to. We can see that it's created a service called myapp that's routing port 8080. And we can see that we've got a build called myapp. And then if we finally look over here at the routes, we can see that the route is going to this URL. And when I click on it, I can see that we have an Express app. There's nothing very special about this application; it's just the default Express app that gets installed when you run that generator. So that's kind of cool. Really quick and easy to get an application running in OpenShift. But now what? I mean, that's not very enterprise-y, right? And the promise of this talk was to introduce you to ways that you can get Node.js happening in your enterprise environment.
Well, when we talk about enterprise, and especially when we talk about microservice-based enterprise applications, there are some problems that we face by moving to this kind of architecture. One of those problems is the concept of cascading failures. So imagine that this is our application now. It's composed of all of these different services. And Service-G, somewhere deep down in there, has started to fail. Well, because Service-G is failing, Service-D and Service-E might start to have problems because they depend on Service-G. So they're going to start failing. And because they failed, Service-B is probably going to start failing. And then Service-A is probably going to start failing. And if Service-A is the entry point to your application, well, then forget it. It's dead in the water at this point, right? So there are tools and techniques that we can use to overcome these kinds of problems. One of the techniques you can use for this specific problem is something called a circuit breaker. And I want to introduce you to that concept by showing you a little bit of RHOAR, the new product that I mentioned at the beginning of the talk. So let's go back over to the browser. I've got this URL up. This is launch.openshift.io, and if you want to use RHOAR, this is where you're going to go. I'm going to reload the page just to make sure it hasn't logged me out, because that does happen sometimes. And it does want me to log in. So yeah, if you're going to build a RHOAR application or use RHOAR, this is where you start. And you can see that we've got several runtimes that we support. I'm going to talk about Node.js today because that's what this demonstration is about. The first thing you do is click Launch My Project, and you have two choices.
You can choose whether you want your project deployed directly into your OpenShift instance at that moment, or you can have your project created as a zip file that you can download and develop locally on your machine. We're going to do that for a couple of reasons. Number one, it's a demo. And number two, as a developer, this is typically what I'm going to do, right? I'm not going to just take the framework that I'm provided as part of the booster and have that be my application. I'm going to work on it. I'm going to develop it a little bit. Okay, so the next step is to choose what mission we want to create. The missions are the boosters that I mentioned earlier: basically preconfigured, functioning applications that show you some fundamental aspect of modern application development, not specifically, but typically, tied to a microservices type of deployment. And because we were talking about the circuit breaker pattern earlier, I want to show that one to you. It's available as a mission here within RHOAR. So I choose that and click Next. And here's where we get to choose what technology we're going to run our application on, because all of these boosters can be run on any of these runtimes, for the most part, with a few exceptions. So we're going to choose Node.js and click Next. We then get to choose what version of the runtime we'd like to use. RHOAR is the productized version. Slightly ahead of that in patch-release versions is the community version, 8.x. And because I'm currently not running the licensed version on this computer, I'm just going to go with the community version. The defaults for the name and the version number are fine. So I click Next, we get a little summary, and then I get to download the zip file. It tells us what the next steps are. It's probably pretty small text for you to see here. But the next steps are: unzip that booster and then read the readme.
I'm not going to read the readme because I know what to do; I've done this before. Okay, so I'm going to move this zip file from my downloads folder into this DevNation folder that I have, and I'm going to unzip it. If we cd into that booster folder, we can see that there are two different services: a greeting service and a name service. The greeting service exposes a front-end web UI, as well as a back-end JSON-based REST API that will return a greeting when requested over HTTP or Ajax. The greeting service wants to know who it should greet, so it calls the name service, and the name service returns the name that should receive the greeting. The scripts don't come executable by default, so I'm going to run that and change the executable bit there. We have a script called start-openshift. Let's look at that. It's very simple. All it does is check to see if we are logged in, if we have a user for the OpenShift instance. And if we are, it goes into the greeting service and runs npm install and runs the OpenShift deployment, and then the same thing for the name service. So I'm just going to go ahead and do that. We'll see a lot of the stuff that we saw earlier when I was deploying the little Express application, although here we're seeing some installation happen first. As this is happening, I want to bring your attention to a couple of the other tools that we have developed on my team, in addition to NodeShift, that are kind of enterprise-y. One of them is this license reporter. As this stuff scrolls by, you can see that the license reporter runs on the application and will generate a report of all of your dependencies and what their licenses are, which is very nice, especially if you're in a corporate environment where you need to get these things approved by legal or something like that.
Or if you just have policies within your development group that you will only use certain types of licenses, you can know pretty quickly and easily what licenses all of your dependencies are using. So we can see that the greeting service was deployed here, and then the name service, same thing: we get the license report, it tars things up, and it's uploading it right now; we'll see that complete momentarily. Let's go back over here to the browser and look at what's happening on the web console. You can see on the overview that the circuit breaker greeting service has appeared, and in just a few moments the name service will show up, and there it is. The script by default just opens up the URL for this thing in my browser. Let's go back here to the web console real quick. We can see all of these things were created for the greeting service and the name service in the same way that my app was. There are a couple of additional things I want to bring to your attention. My app, straight out of the box from Express, doesn't have some of the things that we like to have within our deployments. Let's look at the deployment for the greeting circuit breaker. We have two things on here that I want to draw your attention to. Well, one thing really, because they're both kind of similar: a readiness probe and a liveness probe. You can put these into your routes, into your application. These tell MiniShift or OpenShift, number one, is your app ready to receive requests? And number two, once it starts receiving requests, how lively is it? Is it responding quickly and well enough? You can specify a one-second timeout, meaning that if a response takes longer than one second, it's too slow. So we want OpenShift to be aware of the health of our application over its lifetime. And this kind of thing is built into all of these boosters.
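For concreteness, probes of the kind just described look roughly like this in a deployment configuration. This is only a sketch: the paths, port, and timing values here are illustrative assumptions, not the booster's exact settings.

```yaml
# Illustrative readiness/liveness probes (paths and timings are assumptions,
# not the booster's exact values).
readinessProbe:
  httpGet:
    path: /api/health/readiness   # is the app ready to receive requests?
    port: 8080
  initialDelaySeconds: 10
livenessProbe:
  httpGet:
    path: /api/health/liveness    # is the running app still responding?
    port: 8080
  periodSeconds: 30
  timeoutSeconds: 1               # the one-second "too slow" cutoff mentioned above
```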
But let's go back and look at this circuit breaker demo real quick, and I'm going to shift-reload on this. Okay, so the way this works, as I mentioned, is we've got a greeting service. The greeting service is called by the front-end UI via Ajax, and that greeting service then calls the name service. So if I invoke the greeting service, we can see that happen. We can see the JSON come back that says Hello, World with the timestamp. I'm going to make this a little bit bigger so it's easier to see. And for the name service, we can see that the operational state is okay. If I invoke this over and over and over again, we can see that the greeting service is invoking the name service, and the name service is responding with World. But I can toggle the name service and turn it off, just for the purposes of demonstrating how a circuit breaker works. And I can invoke this again, and I can see that it failed. We can see on the right-hand side here that it failed. And instead of just failing, the greeting service has a fallback mechanism. Because I've been talking so long, the circuit breaker has gone back into the closed state. So let's invoke it again and get it to open. Now that it's open, I can keep invoking it over and over and over again, and it has decided it's not going to continue to call the name service. It's going to wait, let the name service recover itself or let the sysadmin recover the system, and then try again in a little while. And if the name service is still failing, well, then it'll open itself up again. Since the name service is still failing here, the circuit breaker goes right back to open. But let's toggle the name service back on; now the sysadmin has rebooted the server, whatever he or she does. And we can invoke it again. Because the circuit breaker has closed, it makes its call directly to the name service, and we get the behavior that we expect.
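The toggle-and-fallback behavior in this demo is the heart of the pattern, and it can be sketched in a few dozen lines of plain Node.js. This is a hand-rolled illustration, not the booster's actual code (the booster uses the opossum library); the class, thresholds, and fallback value here are all made up for the example:

```javascript
// A hand-rolled circuit breaker sketch (illustrative only; the booster uses opossum).
// closed: calls flow through.  open: calls short-circuit to the fallback.
// half-open: after resetTimeoutMs, one trial call decides open vs. closed.
class CircuitBreaker {
  constructor(action, { failureThreshold = 3, resetTimeoutMs = 10000 } = {}) {
    this.action = action;                 // the protected call, e.g. the name service
    this.failureThreshold = failureThreshold;
    this.resetTimeoutMs = resetTimeoutMs;
    this.failures = 0;
    this.state = 'closed';
    this.openedAt = 0;
    this.fallback = () => 'Fallback';     // stand-in for the demo's fallback greeting
  }

  fire(...args) {
    if (this.state === 'open') {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        return this.fallback();           // short-circuit: don't hit the failing service
      }
      this.state = 'half-open';           // timeout elapsed: allow one trial call
    }
    try {
      const result = this.action(...args);
      this.failures = 0;
      this.state = 'closed';              // success closes the breaker again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
        this.state = 'open';              // too many failures: trip the breaker
        this.openedAt = Date.now();
      }
      return this.fallback();
    }
  }
}
```

With opossum the shape is similar: you wrap the call with new CircuitBreaker(fn, options) using settings like errorThresholdPercentage and resetTimeout, register a fallback with breaker.fallback(...), and invoke the service through breaker.fire().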
That was about a 30-second introduction to circuit breakers. You should understand that there's a lot more to circuit breakers than that; it's potentially a demo unto itself. But we don't have time to get into that today. It's a 30-minute presentation, and we're, I believe, at about 25 minutes, and I want to leave a little bit of time for questions and answers. So thank you for giving me this opportunity to give a whirlwind tour of what's going on with Node.js at Red Hat. I've got some links here on the slides if you're interested in the material that I presented today. launch.openshift.io is where you can go to check out the RHOAR product that we released in March. There's a blog post on the developers.redhat.com website that shows you basically how to do that Express.js demonstration in three commands, which is kind of neat. So check that out. And then I've got links to the GitHub URLs for the two technology pieces that I mentioned: the container builder image, the CentOS S2I Node.js builder image, and the Node.js RPM. And I guess that does it for me, so I'll turn it back to Burr. [Burr relays an audience question about WebAssembly.] No, I mean, in Node.js, WebAssembly would be what you would use to package up your application for deploying to the web. And I believe its purpose is really to potentially transpile things from TypeScript and that sort of thing into JavaScript. Yeah, it's just kind of an unusual question. Another good question that came in is about the health check APIs. I really appreciate that you showed those health check APIs. The question is, are they automated or do you have to manually check them? I think there's just some confusion as to how they're used inside Kubernetes. Okay, I didn't show some of the metadata that comes along as part of the boosters. The boosters have a .nodeshift directory, and in there is some YAML or JSON; I think OpenShift can work with either one.
Those files specify what the readiness and liveness endpoints are. They can be whatever you want them to be, whatever path is appropriate for your application. But once you have that in the YAML, NodeShift itself knows what to do with it and then informs OpenShift or Kubernetes what those URLs are and what the parameters around them are, like the timeouts. All right, and the question about the name of the tool: NodeShift is the name of the tool. There's a question on whether the circuit breaker is implemented via Istio or in a native Node.js way. In that example, the circuit breaker is implemented using a circuit breaker implementation called Opossum, and yes, it's written in Node.js. We're exploring ways to use Istio circuit breakers, but we're not there yet. Istio is definitely on my radar, and it's a path I'm walking down right now; I'm just not ready to present anything on it. Okay, yeah, very good. I was trying to see if I could find that package in the npm registry rather quickly, but npmjs.com is not loading very fast for me here. But yeah, if you have a link to that, that would be good too. Okay, let me double-check a couple of other questions here. I think you were all provided the links already. There was a discussion around Visual Studio Code that was interesting; I think people enjoyed your use of Visual Studio Code there. I personally like Visual Studio Code a lot. And overall, I think it was just a great presentation. So thank you so much for giving us that live demo and walking us through the use of launch.openshift.io, how to use NodeShift, and how to get up and running super easily with a Node.js Express application inside of OpenShift and Kubernetes. I totally love that. And the last question I see here is: is the tooling that you showed related to Fabric8 in any way, form, or fashion? Well, kind of. I mean, it's maybe a poor stepchild of Fabric8.
Fabric8 was written before NodeShift, and we used it as inspiration. Obviously, Fabric8 does a lot of different things. It works with Java projects, and it's written in Java. It's a different beast altogether. But we used Fabric8 as a model for what we wanted to do with NodeShift. Okay. And actually, we're out of time. So thank you all so much for attending today. We had a lot of you on the line. If we didn't get to your question, feel free to email me or find Lance on Twitter to chase us down. And Lance, thank you so much; that was a fantastic presentation. Thanks. I was really happy to be here.