Welcome to my Deploy Microservices on OpenShift 3 with Red Hat Developer Studio presentation. My name is Fred Bricon. I've been working for the past five years at Red Hat on the developer tools and Dev Studio team. I'm currently based in Canada. If you've heard of it, maybe you know me as a committer on the Maven integration projects at Eclipse, m2e and m2e-wtp. So if you've used Maven, or Maven and Java EE, and you had bugs, that's on me. Currently I'm working on OpenShift tooling for Eclipse as part of the developer tools and Dev Studio development. So today this talk is designed to teach you how to deploy and scale Docker-based microservices on the OpenShift Container Platform, running on the Red Hat Container Development Kit and using Red Hat Developer Studio. A lot of the concepts I'm going to talk about have already been introduced by Pradeep in the talk before me, so I'm just going to make a quick overview of the products I'm going to demonstrate. First of all, the OpenShift Container Platform is Red Hat's platform-as-a-service product. It's deployable on-premise, locally on your machine, or on a hybrid or public cloud. You can apply for the online preview if you want to try it for yourself. It's based on open source projects: the OpenShift Origin project, which itself is based on the Kubernetes container management project, as well as Docker for lightweight containers. The Container Development Kit is a container development environment provided by Red Hat. It's based on an upstream project called the Atomic Developer Bundle. It requires a $0 subscription; it's free for development use. So that means you go to the Red Hat website (there's a link over there) and you download a couple of files: a zip file containing a Vagrantfile and Vagrant plugins, as well as a virtual machine image that matches your preferred virtualization engine, whether it's VirtualBox, libvirt, or whatever.
So as I said, it's a virtual machine based on Red Hat Enterprise Linux, and it runs with Vagrant. It's interesting for us because it contains the OpenShift Container Platform I mentioned earlier, as well as a Docker registry that we're going to use in the demonstration. If you go to the CDK overview website, you can get started easily: download the zip file I mentioned and the virtual machine image that fits your setup. The next product we're going to use for our demonstration is Red Hat Developer Studio. It's an Eclipse-based distribution, built on both Eclipse projects and JBoss Tools. It's mostly focused on Java EE development, as well as Docker, JavaScript, and OpenShift tooling. You can download it from the tools.jboss.org website, and it's also available as an entry in the Eclipse Marketplace. Now, about the demo: we're going to deploy a Hello World microservices architecture project, which is based on a tutorial available on GitHub. I think there's a screenshot of the front end here. Basically, it's an over-engineered Hello World application where we have multiple services talking to each other, and all these services are deployed on the local OpenShift instance I have running on my machine. This tutorial mainly acts as a way to educate you about all the different ways for microservices to communicate with one another; we'll see that in a moment. All these services are written in different technologies: we have Vert.x, which is an event-driven application framework, WildFly Swarm, Node.js, and Spring Boot. But this demonstration will mostly focus on the tools I'm going to show, not particularly the code or the architecture itself, even though we're going to skim through it.
And the goal is to demonstrate the deployment and scaling capabilities that OpenShift provides and how Red Hat Developer Studio helps you with them. So we need to get familiar with some key OpenShift and Kubernetes concepts, really oversimplified here. In the OpenShift world, a project is roughly a Kubernetes namespace; it allows you to manage different kinds of resources under the same namespace. An image stream monitors Docker image deployments. A pod is something that manages Docker containers. A service is a load balancer for pods. And finally, a route is the entry point to a service. So, typically, the URL you enter in your browser points to a route, the route points to a service, and the service then load-balances the query to any of the pods that are running. And if we deploy a new image, the image stream will notice the deployment and a new pod will be spun up with the new image. So, without further ado, let's see my workspace. I have the projects listed in the tutorial on GitHub already imported in Eclipse. What I want to do first is start an OpenShift instance directly from Eclipse. How do I do that? I can go to the Quick Access text box here and launch the container development environment. Here, using the credentials from my Red Hat account, I select the location of the Vagrantfile I've unzipped, and click Finish. A server adapter is then created in the Servers view. This server adapter basically calls the vagrant up command on the Vagrantfile, and Vagrant itself spins up an OpenShift instance. The CDK server adapter will also create, for me, a connection to that OpenShift instance, and a Docker connection that I can see in the Docker Explorer here. So, if I look at my OpenShift instance, I can see one project; it's called greeter.
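The route, service, and pod relationship described above can be sketched as a tiny round-robin dispatcher. This is a minimal illustration, not OpenShift code; all names here are hypothetical.

```javascript
// Hypothetical sketch of how a Kubernetes-style service load-balances
// requests across its pods using simple round-robin scheduling.
class Service {
  constructor(pods) {
    this.pods = pods; // pod names, e.g. ["bonjour-1", "bonjour-2"]
    this.next = 0;    // index of the pod that serves the next request
  }
  // Pick the next pod in round-robin order and "handle" the request.
  route(request) {
    const pod = this.pods[this.next % this.pods.length];
    this.next++;
    return `${pod} handled ${request}`;
  }
}

const svc = new Service(["bonjour-1", "bonjour-2", "bonjour-3"]);
console.log(svc.route("GET /api/bonjour")); // bonjour-1 handled GET /api/bonjour
console.log(svc.route("GET /api/bonjour")); // bonjour-2 handled GET /api/bonjour
```

Scaling a deployment just means adding or removing entries from that pod list; the round-robin behavior is what we'll observe later in the demo when the host name changes on each refresh.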
My project contains several services that are currently deployed. What I want to do is look at the front end, which is the most important one, and I will show it in the web browser. Here, on the first tab, the browser acts as a client to all the different services. Each service, as you can see, displays hello in a different language: hola in Spanish, olá in Portuguese, aloha in Hawaiian. The bonjour service is currently not running, so we're displaying a fallback string. There are different ways for microservices to communicate with one another. One possible architecture is for the browser to talk to one single endpoint, the API gateway, and that API gateway service itself talks to my four other microservices. Again, I can see the responses from the different services, and the bonjour service is not responding for the moment. This tutorial is pretty nice in the sense that it shows you how to degrade gracefully when your microservices are not running. I won't go into too much detail in the code, but if you really want to learn about how microservices behave, this is a really good tutorial. Finally, we have the service chaining tab, where the browser talks to only one service, which itself chains the call to another one, then another one, then the last one, the bonjour service, which is still not responding. So, it's about time we do something about it. Let's see how we can deploy our service using Docker. The bonjour application is a Node.js app, so we have a package.json file, which I just changed to run the nodemon process instead of node. The nodemon process basically listens for changes to the bonjour file, and if it detects a change in that file, it will kill the process and restart it. Before I deploy it, let's look at the Dockerfile. The Dockerfile, as you can see here, is very simple.
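The graceful degradation I just mentioned can be sketched in a few lines: the gateway calls each service and substitutes a fallback string when one of them is down, instead of failing the whole page. The service names and fallback wording here are illustrative, not the tutorial's exact code.

```javascript
// Hypothetical sketch of the graceful-degradation pattern: each backend
// is modeled as a function that either returns a greeting or throws.
const services = {
  hola: () => "Hola",
  ola: () => "Olá",
  aloha: () => "Aloha",
  bonjour: () => { throw new Error("connection refused"); }, // not deployed yet
};

// The gateway aggregates all responses, catching failures per service
// so one missing microservice never breaks the aggregated result.
function gateway() {
  const result = {};
  for (const [name, call] of Object.entries(services)) {
    try {
      result[name] = call();
    } catch (e) {
      result[name] = `${name} service is down`; // fallback string
    }
  }
  return result;
}

console.log(gateway().bonjour); // bonjour service is down
```

Once the bonjour service is actually deployed, its entry simply starts returning "Bonjour" and the fallback branch is never taken.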
It takes everything in the current directory and copies it under /opt/app-root/src. Then it exposes port 8080 and issues the npm start command. Before I can deploy my Docker image, I need to build my app, so I right-click on the package.json file and do Run As > npm install. I made it pretty quick: I only removed one node_modules folder here so that it builds fast for the purposes of this demo. But basically, right now I have everything in place, so I can take my Dockerfile and create a Docker image. I do Run As > Docker Image Build, connect to the container development environment connection, and use the greeter/bonjour name. The Docker tooling allows me to build Docker images directly from within Eclipse; I don't need to go through the command line, which is pretty nice. It takes a few seconds to build the image, and then we see the Docker image here, in the list of images in that container development environment. The thing is, at that point, the image only lives in the Docker daemon. For OpenShift to see it, we need to push it to the Docker registry living on the CDK. If we go back to the connection and edit it, you'll see that the CDK configured that connection and exposed the Docker registry under this URL. That's convenient for us. So what I'm going to do now is right-click on my image and do Deploy to OpenShift. I want to deploy my Docker image to the greeter project on OpenShift; I could create a new one if I wanted. This is the image name that I'm going to use, and I get auto-completion here if I want to choose something else. The resource name used by convention is bonjour. I want to push it to the registry. Next, here I get a list of deployment environment variables I can modify if I want. I will use one replica; the replica count is the number of pod instances that are running at any given time.
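The Dockerfile I just described might look roughly like this. This is a sketch reconstructed from the description above; the base image name is an assumption, and the tutorial's actual file may differ in detail.

```dockerfile
# Sketch of the bonjour service's Dockerfile (base image is an assumption)
FROM node:4

# Copy everything in the current directory into the image
COPY . /opt/app-root/src
WORKDIR /opt/app-root/src

# The service listens on port 8080
EXPOSE 8080

# package.json's start script runs the app (here via nodemon for hot reload)
CMD ["npm", "start"]
```

The only non-obvious piece is that npm start resolves to whatever the scripts.start entry in package.json says, which is why swapping node for nodemon there changes the runtime behavior without touching the Dockerfile.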
I have one port that is exposed and one route that will be created. I can add labels if I want, and then I click Finish. What happens now is that the Docker image is pushed to the Docker registry on the CDK. Once that's done, an image stream is created on OpenShift, and after that a couple of other resources are created in OpenShift: a service, a route, and a deployment config. The deployment config basically says: I'm going to listen for changes to this image stream. So, it's going to take a few more seconds; it can be quite long sometimes. Any questions so far? Everything is crystal clear? In the meantime, I can do some nasty things. I didn't show you the pods here, but every service currently has one pod running. I can go to, say, the front end and right-click on the Pod Log menu, and it will display the current logs for that pod. I can see the exact same thing if I go to the running containers. Oh, my deployment is finished. So, I can see my running containers, the front end container here, and I can display its log as well. And the nasty thing I was talking about is that I can execute shell commands. With a shell session into the container, I can do stuff like take the node frontend.js process and kill it. So, what happens if I kill the process? Currently nothing: the container I killed was almost immediately replaced by a new one. OpenShift monitors the app for you, and if something goes wrong, it spins up a new instance. My application is still running, which is awesome. And as you can see, the bonjour service is now up and running. The API gateway works, and service chaining works as well. At that point, we can see that pushing a Docker image takes quite some time, so let's see if there's another way to make it quicker to update our stuff. Let's go back to the OpenShift Explorer.
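The relationship between the image stream and the deployment config can be sketched as a simple publish/subscribe pair: the deployment config subscribes to the image stream, and a push of a new image tag triggers a rollout. All class and tag names here are hypothetical, just to make the wiring concrete.

```javascript
// Hypothetical sketch: a deployment config "listens for changes to this
// image stream", so pushing a new image tag triggers a new rollout.
class ImageStream {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  // Pushing an image notifies every subscriber with the new tag.
  push(tag) { this.subscribers.forEach((fn) => fn(tag)); }
}

const rollouts = [];
const stream = new ImageStream();

// The deployment config reacts to image pushes by rolling out new pods
// (recorded here as strings instead of real deployments).
stream.subscribe((tag) => rollouts.push(`rolling out bonjour:${tag}`));

stream.push("latest");
console.log(rollouts); // [ 'rolling out bonjour:latest' ]
```

This is also why the initial push feels slow: the whole image has to reach the registry before the stream notices it and the rollout starts.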
And for the bonjour app, we're going to create a server adapter, on the bonjour project here. Basically, we're going to map the sources in our workspace to a process that, on any change in my workspace, will deploy those changes directly to the running container. And because I set up the nodemon process instead of node, if I change something in the bonjour file, then hopefully I should see the result immediately on my service. So, let's change it to Namaste. Save it, refresh. There you go. That's pretty cool. Another feature that's pretty nice is that we can scale pods as we want from Eclipse. So, let's scale up our pod. We've got two pods running now. If I go back to my browser and refresh the results, I can see the name of the host changing, because my service acts as a load balancer and round-robins all the requests across the different pods. I can add another one; there you go, three pods running. I can decide to scale down to zero if I want, that's possible, and then I should not get any more responses. The last thing that's probably interesting to see now is the link to the web console, which is pretty nice. Just right-click on the menu to show the web console, and you have the nice UI directly in the browser. So, I can go back and spin up my bonjour service directly from the console, and if I go back to Eclipse, I will see my pod appear directly. No refresh necessary; everything is synchronized automatically. So, I think that's pretty much the extent of what I wanted to demonstrate today. If you have any questions, now is the time. Yes, everything is running on my machine. Yes. So, basically, one or multiple pods can run on one node. I have one node, and a node is a machine, whether it's a virtual machine or a physical machine. And if you're an OpenShift administrator, you can decide how to set up your nodes on different machines. It's your cloud; you do it however you want.
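Both behaviors shown in the demo, the killed container being replaced and the pod count following the scale slider, come from the same idea: OpenShift continuously reconciles the observed number of pods against the desired replica count. Here is a minimal sketch of that reconciliation loop; the class and pod names are hypothetical.

```javascript
// Hypothetical sketch of replica reconciliation: keep the number of
// running pods equal to the desired replica count at all times.
class ReplicaSet {
  constructor(name, replicas) {
    this.name = name;
    this.counter = 0; // used to give each new pod a unique name
    this.pods = [];
    this.scale(replicas);
  }
  // Spin pods up or tear them down until observed == desired.
  scale(replicas) {
    this.replicas = replicas;
    while (this.pods.length < this.replicas) {
      this.pods.push(`${this.name}-${++this.counter}`);
    }
    this.pods.length = Math.min(this.pods.length, this.replicas);
  }
  // Killing a pod immediately triggers reconciliation, so a
  // replacement pod appears with a fresh name.
  kill(pod) {
    this.pods = this.pods.filter((p) => p !== pod);
    this.scale(this.replicas);
  }
}

const rs = new ReplicaSet("bonjour", 1);
rs.kill("bonjour-1");
console.log(rs.pods);        // [ 'bonjour-2' ]  (the killed pod was replaced)
rs.scale(3);
console.log(rs.pods.length); // 3
rs.scale(0);
console.log(rs.pods);        // []  (scaled down to zero, no pods serve requests)
```

Scaling to zero in the sketch leaves no pods at all, which matches the demo: with zero replicas the service has nothing to route to and stops responding.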
For development purposes, the CDK helped me set up the OpenShift environment on my machine. And because it's running in a virtual machine, it works on Windows, on Linux, on Mac; it's really multi-platform, which is pretty nice. All right. So, here are a few links that might be interesting for you. If you try the Eclipse tooling and have any feedback, I suggest you open bugs or enhancement requests on the JBoss Tools JIRA project. That's it. Thank you.