Hello, welcome to the session on cloud native for Node.js developers. My name is Upkar Lidder and I'm a developer advocate at IBM. This is a beginner session on cloud native, curated for Node.js developers.

Before we dive into what cloud native is and what it means to you as a Node developer, you may have some questions. You may be asking what makes a cloud native application cloud native. If you're using microservices, you may already be doing some parts of cloud native. Is cloud native just a buzzword, and why should you care about it? We'll hopefully be able to answer some of these questions by the end of this talk.

A cloud native application consists of discrete, reusable components known as microservices that are designed to integrate into any cloud environment. That's the definition from IBM. I've highlighted some of the keywords I want you to pay attention to: discrete, reusable, microservices, and integration into any cloud environment. I'd like to point out that these words and concepts are not new. As developers, we like to write our code in smaller, reusable chunks that can be tested independently.

So that was a very high-level definition of cloud native. Let's dive a little deeper. All cloud native applications are built with the cloud in mind. This means that from the time you start gathering requirements and creating user stories to architecting the application, you're already thinking about how the different pieces will fit together in the cloud native world. Cloud deployment in a cloud native application is not an afterthought.

Microservices is not a new concept, and a cloud native application uses a microservices architecture; with cloud native, it is essentially the default way of doing things. Breaking your app into microservices is a natural way to architect your application for the cloud native world. Since the application is built from microservices, it is highly scalable and dynamic. Think of it as a plug-and-play model where different components can be scaled up and down and also replaced with other components. It's a little more involved than that, but that's the idea. With the developing standards in the cloud native landscape, applications generally use managed and hosted services on the cloud.

Another key characteristic of a cloud native application is that DevOps is built in from the ground up. Developers and operations teams work together to set up the environment. These microservices are generally built on containers, and containerization helps realize the various benefits of cloud native application development.

So now that you have a little more understanding of cloud native, let's talk about microservices. If you're creating a microservices-based architecture, you're most likely splitting your application into multiple services. Each service is loosely coupled with the others but remains independent and isolated, and together they make up the complete application. In the cloud native world, these microservices are packaged into what we call containers. Containers are packages of your software that include everything needed to run it: code, dependencies, binaries, and so on.

So why do we want to use a microservices architecture in the cloud native world? What benefits do we get out of it? With each microservice small and independent, developers are able to focus on their area of code instead of having to manage all parts of the application. This results in increased developer productivity.
Secondly, with services isolated from each other, failures and bugs can be easier to debug in some cases. However, as the number of microservices grows, so does the complexity; a microservices architecture not done correctly can lead to a lot more problems than a well-managed monolith. Microservices together with containers provide maximum flexibility and portability in a multi-cloud environment, where you can pick up your service from one cloud environment and port it to another easily. Finally, as a result of breaking the application into multiple services, developers are able to update or replace individual services with minimal downtime for the application.

So just to summarize: microservices are loosely coupled but remain independent, allowing incremental and automated improvement of an application without causing downtime.

Now you might ask, what are some of the downsides of the microservices architecture? And if it's so cool, why isn't everybody doing it? While it's definitely more suited for cloud native applications, microservices also create the necessity of managing more elements. Rather than one large application, it now becomes necessary to manage far more small, discrete services. A microservices architecture needs different tools in order to monitor all of these services, and that presents a learning curve. Finally, while microservices provide a path to rapid development and flexibility, they do require a change in culture and mindset. It is not uncommon to build and release multiple times a week, or even multiple times a day, in the microservices world.

All right, let's talk a little bit about something called the twelve-factor application, because I think it's important to know about, not only in the cloud native space but in general as well. The twelve-factor application is a methodology for building software-as-a-service (SaaS) applications. It encompasses best practices for building a complex SaaS application. All of these factors apply to cloud native application development, and there are a lot of tools in the cloud native world that make some of this easier. By the way, you can get more information at 12factor.net.

Let's look at some of these factors. The first factor is codebase, which simply states that all code should be tracked in a version control system such as Git or Subversion.

The next one is dependencies. The dependencies factor states that an application must declare all dependencies completely and exactly, via some sort of declarative manifest file. Furthermore, the full and explicit dependency specification should be applied uniformly to all environments: production, test, integration, development, and so on.

Next, the config factor states that all configuration must be stored outside of code, with a strict separation of concerns between code and configuration. The reason is that your code should stay the same between different environments, whereas your configuration might vary between test and development, for example.
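As a concrete illustration of the config factor, here's a minimal sketch of reading configuration from environment variables in a Node.js app; the variable names and fallback values are just assumptions for the example:

```js
// config.js -- hypothetical twelve-factor config: everything
// environment-specific comes from the environment, nothing is
// hard-coded in the source.
const config = {
  port: process.env.PORT || 3000,          // varies per environment
  databaseUrl: process.env.DATABASE_URL,   // e.g. injected by the platform
  logLevel: process.env.LOG_LEVEL || 'info'
};

module.exports = config;
```

The same build can then run unchanged in test and production, with only the environment differing.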
The build-release-run factor states that a twelve-factor app uses a strict separation between the build, release, and run stages. Containers facilitate keeping these stages separate: when you're building your image, that's the build stage; when you're deploying, that's the release stage; and then something like Kubernetes, which we'll get to in a minute, separates the run stage from the release stage as well.

Okay, I'm going to skip a couple of these. Let's talk about dev/prod parity. Dev/prod parity reduces the gap between development and production environments. A developer may write code and have it deployed within hours or even minutes. Parity means you don't have to worry that something works on your development box but then doesn't work in production, because you keep the two environments as similar as possible.

And finally, looking at logs: the logs factor says to treat logs as event streams. Logs have no beginning or end; they flow continuously as a stream, and there is a mechanism in the cloud native world, when you're using something like Kubernetes, to capture all of these streaming logs. In that way, the platform your application gets deployed on is responsible for aggregating and storing logs, not your codebase. We'll come back to logging at the end of the presentation.

So the next question might be: how do you convert your existing applications to be more cloud native? There are a few different tasks involved. It all starts with deployment. When you're building applications for cloud native, you need to package your code and its dependencies in a way that lets you run on any cloud platform; if you remember, that was part of our definition of cloud native, and container technology makes that easy. Secondly, in the cloud native world there's a concept of self-healing, or checking the health of an application. If you're writing a Node.js app, it is good practice to implement health checks in your application, and we'll look at what these health checks are; two common examples are liveness and readiness. Then we come down to monitoring: you need to be able to collect metrics from your applications so that the platform can check on your application over time. And finally, we'll talk a little bit about logging as well. As you can see, you may already be doing some of this in your application development, but these are some of the key tenets of the cloud native world.

On the left-hand side of the slide is something that should be familiar to you if you're a Node.js developer: a package.json file, the kind found in typical Express apps. We talked about using containers to package your code, libraries, and any dependencies. Docker is a popular tool for creating and running Linux containers. The Dockerfile on the right-hand side has a series of steps describing how to build an image for that Express application and run a container from that image.

Let's go through this file. First, I define a base image from which I'm creating the new image for my application. There are a number of benefits to using a base image like this. First, assuming the base image comes from a credible source, the developer doesn't have to worry too much about the base OS security. Second, the base image is usually optimized and minimized, so it takes less space and storage and is faster to run. Next, we define the working directory for the following instructions in the file. We then copy the dependency manifests into the image and run npm install to install the dependencies. After this, we copy the server and public code into the image and set some environment variables, including the production environment and the port we want the app to run on. Finally, after exposing port 3000, we issue the npm start command, which, as you can see, comes directly from the package.json file, to start the application. Now, this is one way to write a Dockerfile; there are a number of other ways I could have done it, but it's a good start for us.
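Since the slide isn't reproduced here, a sketch of the kind of Dockerfile being described might look like this; the exact base image tag and directory names are assumptions:

```dockerfile
# Base image from a credible source; the specific tag is an assumption
FROM node:18-alpine

# Working directory for the instructions that follow
WORKDIR /usr/src/app

# Copy the dependency manifests and install dependencies
COPY package*.json ./
RUN npm install

# Copy the application source into the image
COPY server ./server
COPY public ./public

# Set the environment and the port the app should run on
ENV NODE_ENV=production
ENV PORT=3000

# Expose the port and start the app via the package.json start script
EXPOSE 3000
CMD ["npm", "start"]
```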
All right, speaking of deployment a little bit more. Now that you have a Docker image you can deploy, where does it actually live? You have to run it somewhere, and this is where Kubernetes comes in. Kubernetes is an open source container orchestration platform that automates the deployment, management, and scaling of applications.

Clusters are the building blocks of the Kubernetes architecture. Clusters are made up of nodes, each of which represents a single host. Each cluster may consist of multiple worker nodes that deploy, run, and manage your containerized applications. If you have read about Kubernetes, you may have read about pods. Pods are groups of one or more containers that share the same compute resources, network, and memory. They are also the smallest unit of scalability in Kubernetes: if a container in a pod is getting more traffic than it can handle, Kubernetes will replicate the pod to other nodes in the cluster, if you ask it to do so. From the pod, the next level up is the deployment. A deployment provides declarative updates for pods.

There are two code snippets on the slide. The one on the left is the declaration of a deployment. You can see the kind is Deployment, and then we label our deployment. Replicas of one just means that I want one instance of the application running in my cluster. The containers section in the spec tells Kubernetes where to get the container image from; in this case, you can see the image is coming from us.icr.io, the IBM Cloud Container Registry, with a namespace and then the name and tag of the image. Then there's something called an image pull secret: if your container registry is private, which ICR is, you have to tell Kubernetes how to get access to that registry, and you do that through an image pull secret. So that's what a typical deployment looks like.

On the right-hand side, we have something called a service. A service is how deployments are exposed outside the cluster. If you do not have a service, you're not able to reach your deployment from outside the cluster. In this case, you can see I'm defining a service that exposes port 4000 outside the cluster.
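Here's a minimal sketch of what those two snippets might look like; the application name, namespace, image tag, ports, and secret name are all assumptions for illustration:

```yaml
# Deployment: run one replica of the app, pulling from a private registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
  labels:
    app: my-node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: us.icr.io/my-namespace/my-node-app:1.0.0
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: my-registry-secret   # grants access to the private registry
---
# Service: exposes the deployment on port 4000 outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: my-node-app-service
spec:
  type: NodePort
  selector:
    app: my-node-app
  ports:
    - port: 4000
      targetPort: 3000
```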
Now, there are three kinds of services, and I won't go through them in detail here: NodePort, LoadBalancer, and ClusterIP. NodePort is sometimes used for development; that's where you can directly hit the service through the opened port. It is not very secure and not recommended for production. LoadBalancer is another one, where every service gets its own load balancer in front of your cluster; while it's easy to use, it can get expensive to put a load balancer in front of each and every service. The third one is ClusterIP. That's the one I've seen used most in the work that I've done, and that's where you have something like an Ingress gateway in front of your cluster that routes traffic to the right service based on some rules. A ClusterIP service is only accessible from inside the cluster, through the Ingress gateway.

As you can tell, there's tons to learn about in Kubernetes, but I don't want to get into the weeds in this talk. I want to keep it high level and give you a flavor of what you might run into when you want to go cloud native with your application. And by the way, all of this code is written in a language called YAML. You either love it or you hate it, but it's what we get to use, at least in Kubernetes.

All right, let's talk about health checks. So you have Dockerized, or containerized, your Node.js application; you've got a YAML file that specifies a deployment; and you've got another YAML file that defines the service. Now that your app is up and running, how do you make use of the self-healing features in Kubernetes? Health checks are a way for Kubernetes to check whether your application is ready and healthy; those mean two different things. When a container fails, Kubernetes can restart it or replace it automatically. It can also take down containers that don't meet your health check requirements.

There are three types of health checks. The first one is running a command. This is useful if your application runs for a long period of time in the background, like a batch job, and doesn't provide an endpoint to the world outside your cluster; Kubernetes can run this command and look at the result to decide whether your service is alive or not. The second is an HTTP probe, which is useful if your application can expose an endpoint that accepts an HTTP GET request. Since this talk is for the Node.js ecosystem and its developers, I would guess this is the more popular type of health check. The last one is a TCP probe, which uses a TCP socket to check the health of the system.

Now, how are these used? There are two major use cases. The first one is liveness. Liveness tells Kubernetes to restart the container when the liveness endpoint does not respond, because that indicates the application is down, so Kubernetes is going to try a restart. The second one is readiness, which basically says: do not send any traffic to this application until it is ready. This is especially important when we scale up or down, as we don't want to route traffic to a pod which is not completely ready.

So what do these look like? Here's what a typical YAML snippet for each of them might look like. Let's focus on the HTTP GET on the right-hand side. You can see I'm defining my liveness probe as an HTTP GET against the endpoint /health on port 8080, with initialDelaySeconds of three and periodSeconds of three. initialDelaySeconds controls how soon to start checking, because you want to give your application enough time to come up, and periodSeconds controls how often to check. So you're saying: wait for three seconds, then check every three seconds to see if the application is up. There are other settings as well that we haven't defined here; for example, failureThreshold controls how many times the probe needs to fail before the container is restarted. And then on the code side, you would define the endpoint inside your Node.js app; if you're using Express, for example, you'd have a /health route that can return anything with a status of 200. A lot of the time people return status okay, just a simple JSON.
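A sketch of what those probe definitions might look like inside a container spec, matching the values mentioned above (the readiness endpoint name is a hypothetical convention):

```yaml
# Snippet from a container spec in a deployment
livenessProbe:
  httpGet:
    path: /health           # endpoint the app must implement
    port: 8080
  initialDelaySeconds: 3    # give the app time to come up
  periodSeconds: 3          # check every three seconds
  # failureThreshold: 3     # optional: failures allowed before a restart
readinessProbe:
  httpGet:
    path: /ready            # hypothetical readiness endpoint
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
```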
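And on the Node.js side, a minimal sketch of the matching Express route; the route name and response body are conventions, not requirements:

```js
const express = require('express');
const app = express();

// Liveness endpoint: if this responds with a 200, Kubernetes
// considers the container alive and leaves it running.
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

app.listen(process.env.PORT || 8080);
```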
Let's talk about monitoring. It is important to export the key metrics from your container that you need in order to understand and track the health of your application, and those metrics might look different for different apps. But a really good place to start a monitoring solution is by implementing Google's four golden signals; there's a nice write-up on them in Google's SRE book if you want to go read it. The golden signals are latency, traffic, errors, and saturation. Latency is the time it takes to service a request; that's essentially your response time. Traffic is the amount of activity in the application, and you definitely want to keep an eye on that as well. Errors are the rate at which requests are failing; that's the obvious one. And saturation is how full your service is. That one is kind of hard to measure; you can use, for example, CPU capacity, memory, or disk I/O to measure saturation. For metrics on something like Kubernetes, Prometheus is the de facto tool and is very popular. It defines a set of metrics you should export and gives you the ability to add additional metrics that are specific to your application and important to your business. The slide shows an example of what Prometheus gives you out of the box: a number of stats on memory and CPU. And it's easy enough to install in your cluster.

Let's talk about logging. In container development, writing logs out to disk generally does not make sense, and it's not a good idea because of the extra steps needed to make the logs available outside the container. Remember, containers are ephemeral: if, for whatever reason, your pod dies, because of some exception in the application or otherwise, all of your logs will be lost. And remember we also talked about how the twelve-factor app specifies that logs should be streaming, with no beginning or end. So a couple of tips here. First, always log to standard output and standard error; Kubernetes as a platform understands those and knows how to pick up logs from there. Second, never store logs in your container; as we said, containers are temporary. And then my personal tip is to use Pino for structured JSON logging; there's an example at the bottom of the slide, so you get nicely formatted logs coming out of your system.
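Since the slide example isn't reproduced here, a minimal sketch of structured logging with Pino might look like this (the messages and fields are made up):

```js
const pino = require('pino');
const logger = pino();

// Each log line comes out as a single JSON object on stdout,
// which the platform can pick up as part of the event stream.
logger.info({ route: '/health', status: 200 }, 'health check served');
logger.error(new Error('database unreachable'), 'request failed');
```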
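And circling back to metrics for a moment: in the Node.js world, the prom-client package is a common way to expose Prometheus metrics from an Express app. A minimal sketch, with the metric name and endpoints as assumptions:

```js
const express = require('express');
const client = require('prom-client');

const app = express();

// Collect the default Node.js process metrics (memory, CPU, event loop, ...)
client.collectDefaultMetrics();

// A custom counter specific to the application
const requestCount = new client.Counter({
  name: 'app_requests_total',
  help: 'Total number of requests served',
});

app.get('/', (req, res) => {
  requestCount.inc();
  res.send('hello');
});

// Prometheus scrapes this endpoint for the current metric values
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(process.env.PORT || 3000);
```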
And by the way, there's an article on developer.ibm.com that Michael Dawson wrote; he's with Red Hat now. That article describes some other characteristics of cloud native applications that are important to understand as a Node.js developer.

All right. Here are a couple of tools I would suggest looking at when you're starting off on your cloud native journey with Node.js. The first is the IBM and Red Hat Node.js reference architecture. The reference architecture talks about a number of different components, including which frameworks to use, template engines, what to use for internationalization, authorization, and authentication, and some of the best practices and tools that the developers at Red Hat and IBM use in their daily work. It also talks about development and operations; there's a big section on health checks, as well as on metrics, if you want more information. And there's a workshop you can go through to try out some of these things. I should also mention that at this conference they are running a workshop, which I hope you attend, that goes through converting a simple Node.js Express application into something that can run in a cloud native landscape on the cloud. So it's a really good resource.

The other one is Project odo. odo is a CLI tool for developers who write, build, and deploy applications on Kubernetes and OpenShift. It gives you a lot of scaffolding out of the box that you might not know about when you're starting off, so it's nice to have that help and those guardrails as you begin your journey in cloud native. I highly recommend this tool as well.

IBM Code Engine is a new service from IBM. A little bit of a shameless plug here, but I think the team did a wonderful job. Although Code Engine is built on Kubernetes and things like Knative, Tekton, and Istio, it hides and abstracts all of that away from the developer. You as a developer give your image, container, or source code to Code Engine, and it takes care of the rest: scaling, SSL/TLS security, rolling updates, all of those things are handled for you. And it gives you a very simple UI and CLI to go cloud native with your application.

I personally really enjoy these two tools. The first one is called Stern, and I use it for logging on my local system: when I'm testing something on Minikube or Minishift, I have Stern running to see what logs are flowing through. And Skaffold is the one I use to quickly deploy an application to a Kubernetes cluster. I have a test cluster where I deploy applications just to test things out or do POCs, and Skaffold gives me a really quick way to go from something like VS Code to a deployed application; it gives you a development environment for Kubernetes.

All right, great. That was all for me. I hope you enjoyed the talk. If you have any questions on cloud native and how to move forward on that journey, please do connect on LinkedIn or Twitter; I've got both written out on this slide. I've also got a course on Coursera called Introduction to Containers with Docker, Kubernetes, and OpenShift. If you're interested in learning more, do take that course; you can audit it for free, and as you can see, people have liked it. So let's stay in touch, and thank you very much for listening to my talk. Bye-bye.