Good morning everyone, thanks for coming to this session. Thanks for choosing this session among a dozen other interesting sessions. It's called "Benefits of Eclipse Che when developing multi-container applications." My name is Eugene Ivantsov and I'm an engineer with the Red Hat developer group, namely the people who work on Eclipse Che, a Kubernetes-native cloud IDE. Today, the agenda will go as follows. I will give a brief introduction of Eclipse Che: what it is, some history facts. Then we'll go through cloud-native development challenges and what Eclipse Che is actually offering to solve them. And then, away with the boring stuff and slides, we'll actually launch Eclipse Che, deploy a multi-container application into my local Kubernetes cluster, and play with the stuff for a little while. All right, so what's Eclipse Che, and why are we hearing about it at this very conference? What does Red Hat have to do with Eclipse Che? Eclipse Che is an open-source project, and it has been with the Eclipse Foundation since 2016. It's not the next version of the traditional desktop Eclipse. It's a different concept, an entirely different concept, which we'll talk about later. The project has got 5k stars, if that matters. There are over 100 releases, even more at this point, I think. There are almost 100 contributors to the project; the biggest of those are SAP and Red Hat. And the project has been forked 800 times, something like that. So what's Eclipse Che? It's a workspace server, a workspace orchestrator, and a cloud IDE, a web-based JavaScript IDE that you get in your browser. So by having Eclipse Che, you do not only have the web editor, but you actually get an environment that your project needs to be built, run, and debugged. At this moment, Eclipse Che supports three infrastructures: Docker, OpenShift, and Kubernetes. And by saying it supports an infrastructure, I mean that Eclipse Che both runs on that infrastructure and uses that infrastructure as a runtime to create cloud workspaces.
And the cloud IDE is what you get in your browser. So once Eclipse Che is deployed, all you need is a browser. You can have a Chromebook or a very weak laptop and still have a powerful 16-gig cloud workspace with all of the tooling, all of the compilers, all of the runtimes that your particular huge enterprise project needs. One of the biggest problems that Eclipse Che is trying to solve is this. I think every one of you sitting here has said it multiple times: to your colleagues, to your managers, to newcomers in your team, to whoever is saying that the readme of your project doesn't really work for Fedora 27 and this OpenJDK version and that Maven version and stuff like that. So it always works on our machines. If it doesn't work on my machine, then probably I'm not doing a very good job as an engineer. But if I give you some random project and a very good readme and you try to follow it, I'm sure 30% of the people sitting here will say, no, it doesn't work. Just because of differences in operating systems, differences in versions, even minor versions of the software you use. A different version of npm can cause huge troubles, or my big legacy project requires Maven 3.1 and not the newer one. So Eclipse Che is trying to solve this problem: by having an Eclipse Che workspace, you can guarantee that it works on every machine. The next problem is that we've seen this trend of moving from big monolithic apps to microservices. It turns out that if you deploy your application as many small components, it's easier to maintain, easier to upgrade, easier to solve problems, easier to debug.
And the real challenge is that when developing an application that will run out there in the cloud as a set of Kubernetes containers, developing it locally doesn't guarantee that the work you've done and tested locally will behave exactly the same way when your DevOps people and your admins trigger some CI job and this stuff goes to staging and then eventually to production. So Eclipse Che is trying to replicate your production environment and run your production in a dev mode. That's what we love containers for, and why Kubernetes is so hugely popular: a container is something that behaves exactly the same way no matter where it's run. So if you put your app into a container, you can guarantee that the behavior will be the same as long as it runs in a Kubernetes cluster. Containers are predictable, and they run exactly the same way anywhere they run. Now, localhost development. When you start working on a project, the first thing you need to take care of is your environment. All of us are localhost ninjas: you know exactly how to pull this database, you know exactly what IDE and what settings you need for your project, and you're forced to do so to be successful. But a localhost environment is always different, right? It never replicates your production or staging environment, never replicates it entirely. Thus, there are no guarantees that the behavior will be the same. Eclipse Che is providing a solution for that. So instead of being a localhost ninja, instead of taking care of all of the dependencies, all of the build and runtime tools, Eclipse Che is offering the following solution. You grab your production application definition as a Kubernetes YAML, and you can convert it into an Eclipse Che workspace. And Eclipse Che will add all of the developer tooling on top of it, or not even on top of it, but as a sidecar.
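To make that concrete, the kind of Kubernetes YAML that can serve as a workspace recipe is just an ordinary resource definition. This is a rough sketch, with all names and images invented for illustration:

```yaml
# Hypothetical application recipe: a plain Kubernetes Deployment
# for one of the app's services. A recipe like this can be converted
# into a Che workspace, with developer tooling layered on as sidecars.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: users-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users-api
  template:
    metadata:
      labels:
        app: users-api
    spec:
      containers:
        - name: users-api
          image: example/users-api:latest   # same image staging/production uses
          ports:
            - containerPort: 8080
```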
So in the end, you will have your application runtime, which you will own and be able to modify, and you will get all of the developer tooling that you need to be productive as a developer: the IDE, the actual build and runtime containers, and other stuff we'll take a look at a bit later. So how does it work? We put your application recipe in the center, because it's your application, and you know what images and what tools it requires. Then you can choose the IDE, which will be a web-based IDE. You will choose the tooling. So if this is a Java, Golang, and Node.js application, using all those stacks as microservices, you can choose the tooling in Eclipse Che, and that tooling will be launched as containers alongside your application recipe. You can also choose build containers, which may be using the same images that, for example, your CI uses. So that when you build a project in Eclipse Che, you can guarantee that you're using the same build environment, the same build settings, as your CI does. Because oftentimes a successful build on your laptop doesn't mean a successful build on the CI system when you push the code. All of the containers of an Eclipse Che environment will be using shared volumes, so they will all have access to the source code that gets cloned. It will use Kubernetes recipes and Kubernetes services to communicate with each other, so internal endpoints. As said, you can have different build and run containers. And finally, you can use a remote debugger. So all of that, just having Eclipse Che in your browser. So what's a workspace environment? It consists of several containers. A pod may consist of several containers, and you may have an environment that has several pods. For example, if your complex application requires a database or some proxy, some auxiliary containers, you can use all that in your Che workspace to replicate the environment that will be used when your application is running in your staging, for example.
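To picture the sidecar idea, here is a hedged sketch of what a workspace pod could look like: the application container from the recipe plus an editor container, sharing one volume for the sources. Container names, images, and paths are illustrative, not Che's actual generated spec:

```yaml
# Illustrative workspace pod: application runtime plus tooling sidecar.
kind: Pod
apiVersion: v1
metadata:
  name: workspace-pod
spec:
  volumes:
    - name: projects            # shared volume: every container sees the same sources
      emptyDir: {}
  containers:
    - name: users-api           # application runtime container from the recipe
      image: example/users-api-dev:latest
      volumeMounts:
        - name: projects
          mountPath: /projects
    - name: theia-editor        # developer tooling added as a sidecar
      image: example/theia:latest
      ports:
        - containerPort: 3100
      volumeMounts:
        - name: projects
          mountPath: /projects
```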
So this environment is defined by a Kubernetes recipe. We also support Docker images and Compose files, but the next version of Eclipse Che will support only OpenShift and Kubernetes. Shared volumes, as said: all containers have shared volumes. So you can run a build command in your build container, and the build artifact will be available in the run container, and you can launch your application and test it. And servers to expose services: for example, if you need a preview URL to your application, to your web app, or you want to share this URL with someone to check out the application that is running in your workspace, you can do so by creating OpenShift routes or Kubernetes ingresses. The IDE: it's a fast, modern JavaScript IDE that has a client-server architecture. It's Eclipse Theia, and there is a new pluggability model in the IDE that is used and that will be demoed today. The Eclipse Che 7 release will include the pluggability model that will make it possible to run VS Code extensions with Eclipse Che. So you can grab any extension from the VS Code marketplace and run it in a web-based IDE, which is Eclipse Theia. There is a plugin registry where, apart from the VS Code marketplace, you can add your own plugins, and you can run your own copy of the plugin registry in your particular cluster. You can have a custom IDE per workspace. So if this is a Python workspace, the workspace configuration will include all of the tooling required for Python. If this is a Java workspace, your IDE will look different because it will include some Java tooling, new menus, new side panels, and stuff like that. And you can use different IDEs as well. Currently Theia is the one that is included by default. There is also a legacy GWT IDE, but there are experiments with including other IDEs, like Jupyter and stuff like that. Tooling: as I said, Eclipse Che will try to make you as productive as a developer as possible. And most of the tooling is backed by language servers.
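On plain Kubernetes, a preview URL boils down to an ordinary Ingress in front of the workspace service. A minimal sketch, where the host, service name, and port are all made up:

```yaml
# Hypothetical Ingress exposing a workspace's frontend as a preview URL.
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: frontend-preview
spec:
  rules:
    - host: frontend-workspace.example.com   # invented preview hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend               # workspace service to expose
                port:
                  number: 8080
```

On OpenShift the same thing is usually done with a Route instead of an Ingress.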
So Eclipse Theia implements the language server protocol. And all of the auto-completion, refactoring, go-to-definition, all of the things you expect in order to navigate code and to debug, are backed by the language server protocol and the debug adapter protocol. Language servers are used by the respective VS Code extensions. So when you install the Java support extension in VS Code, it actually starts a Java language server that the client communicates with. And the language server then gives all of the project details and auto-completion prompts, navigates you through the code, et cetera. Build containers: as said above, it may not always matter, but oftentimes it is important to use exactly the same build environment as, for example, your CI has. And it's important to decouple the build process from the runtime. It also makes it possible to use smaller images for the runtime that don't include, for example, Maven and stuff, just a bare JDK. Build containers have shared volumes, so build artifacts are then available in other containers. And you can manage resources: since it's your application, you know exactly how much RAM you need to run mvn clean install and that sort of thing, so you can allocate as much memory as the application may require. So the process and the demo that I will run will include deployment of the app to Kubernetes. Then you grab the workspace recipe and convert it to an Eclipse Che workspace. And once done with the developer work in your Eclipse Che workspace, you have multiple options, like pushing to CI, and that will trigger a rebuild of your images. Or you can interact with your Kubernetes cluster directly from the IDE. The Kubernetes and OpenShift plugins for Theia, which are actually the VS Code extensions for Kubernetes and OpenShift, are almost there and will be included in the Che 7 release. So you'll be able to deploy an application right from your IDE, being in the same cluster but a different namespace. All right, those were the slides.
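Allocating memory to a build container is plain Kubernetes resource management. A hedged fragment, with the image and the numbers picked arbitrarily:

```yaml
# Illustrative build container with explicit memory allocation.
containers:
  - name: maven-build                  # build container, decoupled from the runtime
    image: example/maven:3.6-jdk-8     # hypothetical image; ideally the same one CI uses
    resources:
      requests:
        memory: "1Gi"                  # headroom for mvn clean install
      limits:
        memory: "2Gi"
    volumeMounts:
      - name: projects                 # shared volume: artifacts visible to run containers
        mountPath: /projects
```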
I'll start with this demo and a little bit of information about what you're going to see. We'll deploy a microservices app to a Kubernetes cluster. It'll be my local Minishift. It's already deployed there, so nothing special about it: oc apply and a huge YAML with the definition of my application. Then we'll develop, we'll try to play with the code of my application in an Eclipse Che workspace. Then we'll push to GitHub, and then we'll see what this push may do. It's one of the scenarios of what happens when you commit and push the code, and how this can be integrated into CI/CD processes. All right. So the application that I'm using for this demo is a rather simple microservices application. It's a to-do app, a web application where you add or remove to-dos. It has an authentication API and a users API. There is some very simple front end. And there is a to-dos API and a Redis database to store the information. The authentication API is written in Golang. The users API is a Spring Boot application. The front end is some Vue.js stuff. And the to-dos API is a Node.js app. I'm not sure that this combination happens frequently in real life; it is there to demonstrate the polyglot, multi-language IDE. OK. Let's get started with that. So I have deployed this application in my OpenShift cluster. You can see several pods. And one of the services is exposed so that we can hit this route. And it will ask me to log in, which I will do. Oops, something went wrong. It doesn't work. Is it the demo effect or not? It is not. Let's take a look. Maybe there are some issues related to that problem. OK. I was warned about the internet. Yeah, we're good. Almost. So for that particular problem, I registered a GitHub issue last year in October. Never fixed it ever since. So let's do it now. I think it's high time, because I run into it at every demo, which is not good. So, a GitHub issue with the description. It looks like a typical GitHub issue, but not quite.
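The deployment step itself is nothing Che-specific; on my cluster it is roughly the following, where the file name, namespace, and route are all made up for illustration:

```shell
# Deploy the whole multi-container app from one YAML, then check on it.
oc apply -f todo-app.yaml -n devconf    # hypothetical file and namespace
oc get pods -n devconf                  # wait until all pods are Running
oc get routes -n devconf                # find the exposed frontend URL
```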
There is this magic button which says, check it out in an Eclipse Che workspace. Let's click it. So what happens now, what that magic button does, is what we call an Eclipse Che factory. That's a link that actually calls the Che API with a request to create a workspace. There are some errors here and there; that's the consequence of grabbing the nightly build. So what happens there is that Che will create a workspace based on the application recipe that I have provided. We have the IDE and we have the source code. Nothing special so far; it's just a web-based editor. But let's go back to my OpenShift, to a different namespace where I actually deployed Che. So this pod is the Eclipse Che server and the IDE deployed on OpenShift. And this pod here is my workspace. So creating and starting a workspace in Eclipse Che is not just starting an instance of a web editor. It's starting the environment as you described it for your particular application. What I did for this workspace: I grabbed all those YAMLs, or actually it's just the one YAML that I used to deploy the application, the original one, the one that has the error that we are hopefully going to fix. And I converted it to an Eclipse Che workspace and chose some tooling based on what tools my application requires. And I know that this is Golang, this is Java, and this is Node.js. If we peek into the pod, you can see that this pod consists of different containers. It has a container for my authentication API with Golang and GOPATH and all of this stuff. It has got a container for the users API to be able to run and compile my Spring Boot application. And it has got other containers. One of those is Theia, so this is the actual IDE. This container, for example, is used for a plugin that provides shell access to all of the containers here from the IDE. So once again, starting an Eclipse Che workspace is virtually grabbing your production application definition and running it as a developer sandbox in your Kubernetes cluster.
Let's get back to the IDE. So now, what do I know? I know that I've got an application that is not working, and something went wrong. There's not a lot of information there for me to debug. But since I know what happens where, it's obvious that from the front end, the request goes first to the authentication API, and the authentication API calls the users API. So what I'll do is run this application in my developer workspace in Eclipse Theia. And to do so, I will use commands. So now I'm running an exec into my front-end container, and I will restart the server and get the URL to preview it. So I'm choosing the command, and I'm choosing which container I want to execute this command in, which container in my workspace I'm interested in right now. That will be the front end. I'm restarting the server. And now this is my container, this is my pod. It's not production. I can mess with it. I can kill it, then I can fix it. I'm not messing up the environment that is out there. All right. I can preview it right here. I will log in. And I have the same error. All right. So obviously I need to take a look at what's happening with one of the components in that application, namely the authentication API, which is the Golang component. I'll go to the auth API, go to main.go. You can see that I'm getting all sorts of help in here. That is the language server that is running in one of the containers that I've defined for my IDE, for my workspace. And we can see what's happening here: there is the users API address. So the application is then passing this request to the users service. You see, I can navigate through the code as well in here: user.go, username, last name, and stuff like that. So now I will rerun this service. In this pod, in these containers, I can stop services, rerun them, debug to find out what's going on. So I will just run another command. It will be the authentication API command, and the container is also authentication API. It will recompile my Golang.
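What those IDE commands do under the hood is essentially a Kubernetes exec into a specific container of the workspace pod. A rough command-line equivalent, with the pod, namespace, container, and restart command all invented:

```shell
# Restart the dev server inside the frontend container of the workspace pod.
kubectl exec -it workspace-pod -c frontend -n che-workspaces \
  -- sh -c "npm install && npm run serve"   # hypothetical restart command
```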
And I can do so because I have the right environment. I have the right GOPATH in there. I've got the right dependencies, either in GOPATH or in the vendor folder. So I can totally do so. And now I can log in again. And yeah, I'm taking a look at the error message, and it says here that the path users/admin is not found. So it's not my Golang component that is faulty; it's not the one to blame. That component is calling the users API. Let's take a look at the users API then. It's a Spring Boot application. There is a users controller, right? And by just looking at the REST methods, I can see that someone not smart enough has changed the endpoint. And there are no tests; that's why it compiles OK. But the endpoint is user, so that explains my 404. Yeah, a pretty stupid mistake, but very easy to demo though. Right, so I will fix it. Sometimes a fix is a very simple one-liner, but it takes a while to figure it out. Now, I have to recompile my Spring Boot application and rerun it in my workspace to verify that my fix works. Let's do so. This time, instead of a command, I will use a terminal. So I can open a terminal and run tasks, which are actually Kubernetes execs into any of the containers. So I can play with my environment the way I want. I have demoed commands, but you can totally go to the command line and run things there. So I am running this in a container which is different from the one where my users API runs. It's my build container; it has got Maven. And once the build is a success, the build artifact, the jar, will be available in my users API container. So this is what I'll do. And I will update my users API. So I am restarting my Spring Boot application with the fix, with the right endpoint of the API that the other component is calling. And there you go. So I have fixed it. Hopefully we'll do something else for future demos; we'll mess with another endpoint probably. Right.
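Thanks to the shared volume, this build/run split is just two execs into two different containers of the same pod. A sketch, with every name (pod, namespace, containers, paths) an assumption:

```shell
# Build in the container that has Maven...
kubectl exec workspace-pod -c maven-build -n che-workspaces \
  -- sh -c "cd /projects/users-api && mvn clean install"

# ...then run the jar in the runtime container, which only needs a JDK,
# picking the artifact up from the shared /projects volume.
kubectl exec workspace-pod -c users-api -n che-workspaces \
  -- java -jar /projects/users-api/target/users-api.jar
```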
So once I have it, I can do the same things with other components. Say I want to change something in my front end. My server is started there in watch mode, so I can do that sort of thing; well, it depends on the technology you're actually using. Now, I've got a fix. But that's half of the deal, right? So it works on my machine, kind of. I need to deliver that fix, or test and verify it against the environment that was used in my other namespace. OK, so I will go back to my devconf namespace and check that it's still not working on my production; let's call it my production. There are several options that I can choose now, and I will show two of them. I've got a working users API in my namespace here, in my Eclipse Che namespace, right? So why can't I point my production app, if I have access, and luckily I have access here, to the component that is working properly in my workspace? To do so, I will go to my workspace services, the internal endpoints, go to the users API, and create a route. So I have exposed this service. And then I go to my namespace here, devconf, the bad one, where the error is. I will go to the authentication API deployment and go to the environment. And you can see that my Golang component is reaching the users API using the users API service name, an internal DNS name that resolves to an internal IP in the Kubernetes cluster. But I can do this trick. You can see that the authentication API has been redeployed, so you can see a second deployment active in there. So now this component is actually using the fixed service that runs in my Che workspace. And now if I go in here, it still doesn't work. Interesting. No, the trick didn't do it. Let's check the logs. Let's go back here. Yeah, I think it's this one. All right, that didn't work; the route refuses to work properly. OK, it could have been a nice wow demo.
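The trick I attempted is ordinary OpenShift plumbing: expose the fixed service from the workspace namespace, then repoint the production deployment's environment variable at it. A hedged sketch, where the namespaces, the env var name, and the route hostname are all invented:

```shell
# Expose the working users-api from the workspace namespace...
oc expose service users-api -n che-workspaces

# ...and repoint the faulty auth component at it. The variable name is
# hypothetical; it is whatever the Golang service reads for its target.
oc set env deployment/auth-api -n devconf \
  USERS_API_ADDRESS=users-api-che-workspaces.apps.example.com
```

Changing the deployment's environment is what triggers the second rollout you see in the console.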
What I will do now is commit my changes to GitHub. And that should trigger a CI job. The Jenkins that is running in the same environment will rebuild the images for me and update the deployments of my faulty production environment. And I will push to master. If not here, then where? I can't do that at work, so it's nice to push to master. OK, changes are pushed. And in this devconf namespace, I've got a Jenkins instance with a Jenkins job that will receive a webhook. It will rebuild my images with the included fix, and it will update my deployment. So in a minute or so, yeah, you see it's happening right now. We've got all new deployments in place, except for the Redis database, because that's something static. And I will go in here. And if that doesn't work, yeah, that's a good idea, I need to do this. It will trigger yet another deployment in there. Yeah, so I am in. I'm not sure why it hasn't worked with the service. For some reason, the route that I exposed didn't work; it didn't route back to my pod, to my container with the correct users API. I swear it worked in the hotel. Maybe I need to run my demos in hotels only, who knows? So that's it for the demo. We can take a look at the workspace details in here. This is how my workspace looks. For every component, I can decide how much RAM I need. This is a tiny app, so I used half a gig each, and I used two gigs for the IDE container, which was my build container as well. So I allocated two gigabytes of RAM to run Maven builds without problems. There are servers, which are exposed services. So I have Theia as a web IDE, and I have the front end: I need a preview URL for my web application, and that is declared at the workspace configuration level. As I told you, there are shared volumes for all containers. So I can run a Maven build in container A, which has Maven, and then run the build artifact in container B, which is my Spring Boot application.
So this is the config, and this is how the YAML looks as a string, so not very interesting to see. This is the magic button that I clicked. Every time I click on that link, or that image with the link inside it, Che will create the exact same workspace. So if I had a public IP for this laptop and a little bit more RAM, we could have interacted: I would ask you to go to GitHub and click on that link, and Eclipse Che would create the same environment for you. All right. This is where you can interact with the Eclipse Che team: on GitHub, at eclipse/che, or theia-ide/theia. So Che is a workspace server, a workspace orchestrator. Eclipse Theia is a nice IDE in a browser, the one you have seen, that is polyglot, that is capable of running on top of Kubernetes or OpenShift, and that is capable of providing you shell access to containers and running commands and execs in containers. There are Eclipse Che docs with instructions on how to deploy your Eclipse Che to a local Minishift or to a Kubernetes cluster in the cloud. Or you can chat with us on Mattermost; it's quite active as well, so if you have questions, you will get answers there. You may try Che even without installing it: you go to che.openshift.io, you'll be asked to register with the Red Hat Developer Program, you get a login, and you can start a workspace of up to three gigabytes, I think. So it's a nice way to try and see what the service is capable of. And that's it for today. If you've got questions, you're certainly welcome. All right, then thanks everyone for coming. Now I have fixed that bug, and I need to think of a different one to keep demoing it. Thanks everyone.