Okay, so I think it's 11 o'clock and it looks like the flow of people coming in has slowed down, so I'll get started. Welcome to my talk. I'm Michael Dawson, and today I'm going to be talking to you about Node.js in a Kubernetes world. A little bit about myself before I get started: I'm IBM's community lead for Node.js. What that means is I get to spend a lot of time working in the community; I'm on the Technical Steering Committee and active in a lot of the different working groups and teams. It also means that I get to work with some really good teams within IBM, doing some cool things like making sure Node.js runs on our platforms (s390 and Power), and working with our cloud teams to make sure they're a great place to deploy Node.js, as well as some other things which I'll get into. Before I get started, a quick outline of what I'm going to talk about: I'll start with IBM's JavaScript involvement and a little bit on our strategy. I'll talk about cloud native development and some of the key tasks that I think you need to be aware of as a Node.js developer coming to the cloud native, Kubernetes world. I'll talk a little bit about separating concerns; you'll see as we go through this that there are a lot of things that, at least from my perspective, I don't necessarily want to have to worry about every time I'm developing an application, and separating those concerns can help. And finally I'm going to touch on some available tools that you might be interested in taking a look at and learning more about. To start with, IBM is involved in Node.js in a number of different ways.
We're of course very active in the community. We have 11 core collaborators and people on the Technical Steering Committee, and we really try to invest in the community to make sure Node is a great platform to develop on. As I mentioned before, we work hard to make sure you get good Node support on our platforms. We also want to make sure that when you deploy to the cloud, either public or private, it's a great environment for those Node applications. We also work on developer enablement, making sure that when you're developing your applications you're efficient, and that when you deploy to production you're successful. And finally we offer commercial support, so when you're in production we can help if you run into problems. Today I'm really going to be talking about the developer enablement front and some of the work we're doing there to help you be efficient and be successful in production. Two key things in that respect are rapid development, meaning you want to be able to create the first version of your application quickly, and it's very important to be able to iterate so that you can deliver more and more value incrementally, without being slowed down by the cycle it takes to get through development, test, and so forth; and then of course it's very important that you're successful in production, and today that likely means deploying in Docker and then running those Docker images at scale in a Kubernetes environment. Now, I'm not a Kubernetes expert by a long shot; I'm more of a Node developer coming to the cloud native and Kubernetes world. I just want to give you an idea of what I see as the things you should be aware of and thinking about when you're developing applications that will likely be deployed in that environment in the end. When you think about cloud native development, one of the things you'll often hear
about is twelve-factor applications. You can go read more about the different aspects of that, but it's basically a set of tenets you want to follow in developing a modern application. Some of them that are specifically relevant to what I want to talk about today are: keeping development, staging, and production as close as possible (you want to test where you're going to be deploying), treating logs simply as streams, and exporting services via ports. You'll see some of that come back through the talk in terms of the key tasks. I'm going to talk about these in terms of what you, as a Node.js developer new to this world, need to think about. One is: I'm going to have to build Docker images, so what do I need to know about that? I'm then going to have to deploy those Docker images, both for my own testing and in production, and I'm going to need to figure out how to test them in those environments. Kubernetes also brings with it some key qualities of service that you'll want to do some work in your application to support: things like health checking, logs, and metrics. And finally, before I get into tooling, I also want us to think a little bit about upgrades, because what happens when I need to upgrade my application ends up being, as you become more and more successful, a bigger and bigger part of the work you actually have to do. I also have a reference here to CloudNativeJS (cloudnativejs.io); that's a site with best practices, tools, and techniques for writing JavaScript applications that are going to be deployed in a cloud native environment, and I'm going to be touching on some of those tools, so I just have the reference there. So the very basic step, the first difference, is that I'm likely going to have to get my Node application into a Docker image. What does that mean?
Well, luckily the community actually builds some base Docker images that already include Node for you. These are done by one of the community teams, and fairly quickly after a Node.js release you'll see new Docker images published to Docker Hub. They're official images, they're cross-platform (they support x86, s390, and Power, so you can use them on any of those platforms), and they're named based on the release, so for example there are images for 12 and 13, and they bundle the Node.js binary into a base operating system. There are a few variants. There's the base one, which is just node: it takes a Debian image and bundles Node into it with most of the things you're going to need to build and run your Node.js application, the compilers and so forth. The node:slim version is the minimal package you need to run, so it's a smaller image, but it may not have everything you need to build your application if you have native modules. And finally there's an alpine version, for when you're really focused on container size: Alpine is a distribution with a smaller footprint, so we bundle Node into that. A little bit of a caveat there is that Alpine uses a different base C library (musl rather than glibc), so it's maybe not tested quite as well in the overall Node infrastructure as the others. There are of course others in addition to the official repository; for example, Red Hat ships a couple of UBI images. Those are images you can use freely because they're open source, but if you want support you can actually run them in a way where you're supported as well. So once you have the base image you're going to build on top of, how do you actually get your application into the image?
What you're going to need is something called a Dockerfile, and in the picture here I have the very simplest Dockerfile, which just says FROM this base image (the one the community puts out), tells you what port it's going to use (that's informational), and in this case just copies a server.js into the container at the root directory, and then gives it the command that says: okay, start up the server with Node.js. Very simple. It does get a lot more complicated than that, of course, in that if you just follow the naive approach to building your Docker containers, you can end up with containers which are much larger and contain a lot more than you want in them. So there is a set of best practices, and I have the link here, on how to build your images in multiple stages to end up with a small image. In the simplest case you're just running docker build -t test, but you're going to need to think about and understand multi-stage builds. I don't really have time to get into that, but I just wanted to make you aware that it is something you're going to need to think about and understand if you want to have good images. Finally, you're going to push that image to a registry, and then you can run and test under Docker or Kubernetes from the image in the registry. In terms of deploying: how do I now take that Docker image I built, with my application bundled into it by the Dockerfile, and deploy it so that I can actually run and test my application? Deploying into Kubernetes ends up being, at least to start with, writing some YAML files.
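A Dockerfile along the lines of the one on the slide might look like this (a minimal sketch; the file name server.js, the node:12 tag, and port 3000 are assumptions for illustration):

```dockerfile
# Use the community-built base image that bundles Node.js
FROM node:12

# Informational: the port the application listens on
EXPOSE 3000

# Copy the application into the container at the root directory
COPY server.js /

# Start the server
CMD ["node", "/server.js"]
```

You'd build it with something like docker build -t test . from the directory containing the Dockerfile.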
So you have to write some files that say: here's the image I want to use, here's how many copies I want to run, and here's how I want it deployed. At the very top on the right-hand side there I have a deployment YAML, basically a simple deployment that says: use test1 as my image name, pull it locally (so I'm not pulling it from a registry), and I think I said I want two copies of that to run. So you have to start to learn how to write this YAML, what the language is, to be able to say: I want a deployment. Unfortunately it's not quite that simple, in that okay, now you have some containers running, but you have to have a way to tell people how to get to them. For that there's something called a Service, so you'll write a service YAML which says: here's how you connect and get to those pods, and it will load-balance across the number of instances you have. That still doesn't let you get at your application from the outside world yet; you also need something called an Ingress, which says: I'm going to come from the internet and get you to your services, which will then get me to my pods. So you can end up writing quite a bit of YAML to get there, and that's something, again, you're probably going to have to learn and understand. Luckily there's something called Helm charts. Helm charts bundle those files together into something which is more easily deployable. So having written a deployment, a service, and an ingress, Helm charts give you a very specific structure.
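As an illustrative sketch of the deployment and service described above (the test1 image name, two replicas, and port 3000 are assumptions standing in for the slide's values):

```yaml
# Deployment: run two copies of the locally built test1 image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
        - name: test1
          image: test1:v1
          imagePullPolicy: Never   # use the local image, don't pull from a registry
          ports:
            - containerPort: 3000
---
# Service: load-balance traffic across the pods
apiVersion: v1
kind: Service
metadata:
  name: test1-service
spec:
  selector:
    app: test1
  ports:
    - port: 80
      targetPort: 3000
```

An Ingress resource would then route outside traffic to that service.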
You can run a command (helm create) and it will give you the structure that you see on the right. In the templates directory there's actually a templated implementation, so you can use the values file to configure how your application starts. You can also do what I did for my simple example, which was just to copy my existing deployment, service, and ingress files into that structure, and then you can deploy everything easily, as opposed to having to individually start the deployment, start the service, and start the ingress. Helm has also built itself up as kind of the package registry for Kubernetes: you can go there and install a particular service. Say you want to start Grafana: you can run helm install grafana, it will pull down what you need and start it all up, and you've got all these things running in Kubernetes. More recently, if you've been looking at Kubernetes, you've probably heard about operators. Operators give you more programmatic control than simply having some YAML that describes how you want to put your application together. Many applications now come with operators, and you can write your own code that uses the Kubernetes APIs to spin up pods, monitor them, and figure out what's going on. Again, I just mention that as another thing you're probably going to want to understand. At this point let's step back and think about testing. We've said, based on the twelve factors, that we want to keep our development, staging, and production as close as possible. In particular, if we're developing on, say, macOS or Windows, which a lot of our workstations run, but we're going to deploy on Docker, which is basically Linux, it's important not to test on Windows or macOS and then deploy directly to Linux. So we actually want to test in Docker, or ideally in Kubernetes. But it strikes me that there's going to be a lot of overhead there, in terms of: I've got to
rebuild my Docker image, I've got to start my Docker image, I've got to start my testing. Do I really need to do that on every single cycle? That could be a lot of overhead. I'll just leave you thinking about that, as we'll come back to it later on. Now I'll get into a few of the qualities of service that you don't quite get for free in Kubernetes, but where, as a Node application developer, you can do a little bit of work and it will make your application a better fit for cloud native development. The first two are liveness and readiness probes. By adding an endpoint into your application, Kubernetes can automatically figure out, from the information you're giving it, when it should restart your container: something's gone wrong in the container, you're no longer responding, and it will help figure out when to restart it. Similarly, readiness lets it figure out when it can actually start sending you traffic. If you think of the case where I'm going to migrate from one version to another, you don't necessarily want to spin up your new versions and instantly redirect your traffic to them, because they may still be connecting to the database and doing whatever initialization work they need to do the first time. If you start directing traffic to them right away, those transactions are going to wait and be slower, so you might as well continue to serve existing transactions until the new pods are ready; the readiness endpoint lets you tell Kubernetes when your application is actually ready to accept traffic. For liveness there are three probe types: you can basically shell into the container and run some command,
you can do an HTTP probe, or you can do a TCP probe. For Node.js, the HTTP probe is typically the most common: you quite often already have an HTTP server running for your API or your web application. Basically, the probe considers the check okay if the response status is 200 or greater and less than 400. The key thing is that you shouldn't just point it at one of your existing endpoints and say that if it responds, that's good enough. You really want to think about what, in your application, you should be checking to make sure you're really up: not just "hey, I'll respond with the login page or something", but, say, checking whether the database and the other resources you depend on are really up. So you want to build something that's probably more than just a simple check; you want to think about what in your application you should check to say whether you're really up or not. On the slide here I show, again in the YAML we talked about, that I can easily add a liveness probe that says: it's an HTTP probe, the path is the /live endpoint on port 3000, don't start checking until five seconds after the pod spins up, and check every ten seconds. In real life you'd likely have a longer period than that, maybe a minute or whatever, and as I said, you should really check more than just that your web server is up. In terms of readiness, you get the same options and config as a liveness probe, but instead of restarting your pod automatically, Kubernetes just holds off routing traffic to it. If you've ever used Kubernetes to start up a pod, you've probably seen something like this: if you use kubectl to get the status, you can see here that it says zero out of one are ready. Basically, if you have a readiness endpoint, then until you respond positively the pod will stay like that: not ready.
Traffic won't be routed to it until then. And again, on the right-hand side I show that I can have a liveness probe and a readiness probe with the same kind of configuration; you can even make them the same endpoint if you want. That's how you would add them in. As I mentioned, you probably want to do something that's more than just one check, so we do have a module on CloudNativeJS called cloud-health. It provides an infrastructure for registering the things you want to check: you can write those checks independently and register each of them, and it aggregates the responses and provides an endpoint that says yes, I'm live, or not live; I'm ready, or not ready. The next thing to think about is logging, which may be a little bit different in the Kubernetes environment. Containers are ephemeral: the twelve-factor approach says you're not going to have state in your container, and you should make sure you can kill them; they can go away at any time. So unlike other environments, you're not going to be writing your logs directly to the file system in most cases. You might be able to do that (you can mount shared volumes into pods), but if you think about a large deployment across a whole bunch of nodes and a whole bunch of pods, you can see that's going to get pretty hard to manage in terms of keeping those shares across machines and nodes and everything. So generally that's not what's done.
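What is done instead is logging structured data to stdout. A minimal sketch of the idea (the makeLogger helper and field names here are hypothetical, shown in the spirit of a structured logger like Pino, which adds levels, child loggers, and fast serialization):

```javascript
// Structured (JSON) logging to stdout; the platform collects stdout for you.
function makeLogger(base = {}) {
  const emit = (level) => (msg, fields = {}) => {
    const line = JSON.stringify({ level, time: Date.now(), msg, ...base, ...fields });
    console.log(line);
    return line;
  };
  return { info: emit('info'), warn: emit('warn'), error: emit('error') };
}

const logger = makeLogger({ app: 'my-service' });
logger.info('server started', { port: 3000 });
```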
You just end up logging to stdout and letting the infrastructure handle it, which as a developer is actually a little bit nicer: you just need a simple logger that logs to stdout. The trend seems to be going towards structured logging, with log data in JSON format, and here's an example I put together using Pino, which is a nice logger; it's very simple to log at the different levels. From Kubernetes you can then say kubectl logs to get the logs coming out of a particular pod, and then there are things like LogDNA and other tools in cloud environments that will pull all this logging together and give you a view across all of the pods for your application. So the nice news on logging is that it's maybe a little simpler than what you'd have to worry about in other environments. The next thing is metrics. Again, you have a container, and it's not necessarily easy to get into the container to find out what's going on; maybe you've got 60 or 100 containers running, so actually going into each container to get data can be a challenge. So the next thing you want to think about is providing a metrics endpoint. Luckily, there's kind of a standard people are moving towards, which is Prometheus. It defines a set of base metrics that you should provide, and there are good Node clients you can go and get, like prom-client, that will instrument your application and give you a metrics endpoint pretty easily. You get the default metrics almost for free, because those modules will extract them and give them to you, but the other thing to think about is: what else does your application do, for business reasons or SRE reasons, that you should be measuring?
You should be exposing those too, so think about what other metrics you can add; again, prom-client provides a nice API for saying "here are my additional metrics", and they'll end up in the data. In the Kubernetes environment there are already tools that will scrape those endpoints and collect all the data. It took me an hour or two, helm-installing Grafana and some other pieces, to get nice dashboards where I could create graphs and gauges of all this kind of data. So as a Node developer, if we do this little bit of work, it helps on the other side when people actually want to see what's going on. At this point I want to step back and think about upgrades, because I find, at least in real life, that the more successful you are and the longer you've had to support your product, the more upgrading becomes a big piece of your work. So what does that mean if I'm developing my Node.js application for a cloud native environment? Kubernetes actually does a great job of handling updates in the sense of migrating from one version to another: I've already got my Docker images and I'm going to move from version one to version two, and I don't really have to worry about that. But it doesn't provide anything for how I upgrade what's inside my Docker containers. Inside my containers I have modules like Express,
I have a logger, and probably a bunch of my own application code. I probably have a number of applications, and in a microservices world maybe a whole bunch of microservices, even for my one application. So actually figuring out which versions I need to update, if I want to move to a new version of Express, or if there's a security vulnerability and we want to upgrade, can end up being a lot: x applications times y containers means a lot of containers to upgrade, and that's a lot of work. Which ones need to be upgraded? What versions do I have in each one? That becomes a lot to manage. Then also imagine moving to another team. We've talked about these endpoints, about building Dockerfiles, about writing YAML files, and there's not necessarily one standard way to do any of those things. So even if you learned it all once, when you move to another team it's quite possible you'll have to figure out how people have decided to do it in that particular instance: which endpoints are they using, which logger, how are the deployment artifacts organized? I do think consistency through documentation can help, but I still see that as a challenge in terms of how we make it all work. So at this point, after learning all this stuff, I'm kind of asking: do I really need to learn and understand all of it? Because really I want to focus on the application code; that's the value I'm delivering to my company, not learning YAML, not learning how to deploy to Kubernetes. Now, in a small company, maybe you're the one person doing everything, but in a lot of cases you want to focus on your application, and having to figure out all this other stuff makes you wonder: is this really where we're going?
So from my perspective, we want to figure out how to separate the concerns. There are some things that, as a developer, I want to be very actively involved in and that change every time I build a new application, and then there's really a common stack below me where it's not going to make my application any better to choose something different. So I think we need to get to the point where I've got this Docker image, I can build it efficiently and easily, and embedded in it is a common stack: something that has the pieces we've all agreed on, the developers, the architects, the operators, because for your production application probably everybody's involved in trying to make it a success. We've all come together and agreed that that set makes sense for us, and then when I build my specific applications I can just focus on my application code on top of it. Now, I do want to say that we want consistency where it makes sense, but not uniformity. I'm not saying we're going to have one stack and always use that one stack. We want to be consistent where the choice doesn't really matter to the particular implementation, but not uniform: if something really matters, let's push it into the part above the stack, and in fact we'd quite often expect to have multiple stacks for a particular organization, because there are probably different contexts that make sense. So at this point I'm hoping there are some tools, and the good news is that there are a few open-source projects, some of which IBM is contributing to, that aim to address this particular challenge. The first one is called Appsody. What it does is help you build those Docker images in an efficient way. The first part is to say: for my local development I need to iterate very quickly, and I want to test in Docker, but I don't want to have to rebuild my Docker image
every time and restart Docker to be able to test it. So what it does is take your application code and map it into a Docker image, and then there's code that watches for changes, so that while you're developing your application locally on your machine, it's mapped into a Docker container. You're testing in Docker, but it can automatically pick up your changes, and you don't actually have to restart Docker itself. The other thing it does is use the stack model: we're going to have a number of stacks, and your application's components, for example your Express components, get layered into a stack, a Docker image we can use to do that testing. Once you've gone through the rapid development cycle, you can then say "build", which builds you a Docker image; again, the advantage is that you don't have to understand multi-stage Docker builds and all the best practices, because that's bundled into the stack for your organization. And then you can even test it in Kubernetes just by saying appsody deploy; again, without having to know all that YAML and such, you can get it up and running and accessible. Of course we've thought about Node.js, so there are a number of pre-made stacks available: Node.js, Node.js with Express, Node.js with LoopBack, and a functions one, which I'm not going to talk about, but we do have a talk in this same room following this one where Chris Bailey is going to go into a lot more detail on how you can do serverless-style programming using Appsody, some of the benefits, and so forth.
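The workflow just described boils down to a handful of Appsody CLI commands (a sketch; exact stack names and options may vary by version):

```shell
appsody init nodejs-express   # create a project from the Node.js Express stack
appsody run                   # run locally in Docker, watching for code changes
appsody build                 # multi-stage build of a deployable image
appsody deploy                # push the image and deploy it to Kubernetes
```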
So stick around if you want to know a little bit more about that. At this point I just want to drop down to a very quick demo, if I can get out of the slideshow. I don't have a ton of time, but I just want to show you how easy it is. The first thing I'm doing is saying I want to use the Node.js Express stack, which brings me the standard Docker image with Express already installed. (Okay, sorry; it doesn't matter what it's called.) So we're just going to create that project, and you can see what I end up with: this is app.js, and if I look at it, it's just the very simplest Express application, one that says hello world. Since I don't have a lot of time, I'm not going to go into the individual steps; I'm now going to say appsody deploy, and if you watch everything that's going by, you'll see a lot of the components I talked about: it's using Docker to do a multi-stage build to efficiently build a Docker image, it's then going to push that container to the right place, and then create all the YAML you need to deploy the application through Kubernetes. So if I say kubectl get all, we can see that I now have a fully deployed application, which is running. If I go to the endpoint (and I do need to look back at my terminal to see the port; sorry, what was it again? 30882), I get my hello from Appsody. So that was my application, but because the stack includes support for liveness,
I automatically get a liveness endpoint which is responding for me, I get a readiness endpoint, and I get a metrics endpoint. So basically, by using that stack, a lot of those things I talked about us needing to worry about as Node.js developers are built into the base stack, and you don't have to worry about them when you're writing your individual application. So let me just switch back here. That's Appsody. The next thing I want to mention is another open-source project called Codewind. It brings a nice UI on top of Appsody, as well as some other things, like integration with Tekton pipelines and some performance monitoring. It's bring-your-own-editor: you can use Visual Studio Code or another editor, and it kind of provides that next layer if you like to work with user interfaces as opposed to the command line. Kabanero is another open-source project; what it does is bundle the different pieces together along with some curated stacks. We talked about stacks you'd potentially want to put in place for your organization; this gives you some curated stacks which we expect you'd start with and then customize for your use. And I'll just end with: if you're interested, I don't have time to go into the details, but of course there's the talk immediately following this one. You can also come and see us at the IBM booth; we have some quick labs, one of which includes getting hands-on with Appsody, and you can of course come talk to us. We also have a nice online lab, about 75 minutes, so if you really want to dive into a bit more detail on using the different tools, you can go through that lab. And then of course you can also go to CloudNativeJS for a little bit more information on some of those components, like the health-checking module. So at this point, thank you very much for coming. I'm not sure, but I think we're right on time.
So that's probably the end.