development, but in this case applied to containers and the cloud. It's going to be really awesome. And we have the resident expert on Apache Camel, Mr. Claus Ibsen, coming up here in a second. But one thing I want to say is I really appreciate all of you folks who have come in from all over the world. I see Germany and Chicago, Barcelona, India, Calgary, all kinds of folks, right? Brazil, etc. Thank you so much for showing up. We have a great international crowd and a large number of you on the line with us today. We will run for about 25 to 28 minutes or so and try to get to questions at the end. Feel free to throw your questions into the Q&A tab in the user interface here and we'll try to get to them as quickly as possible. And one thing to tell your friends and neighbors in the chat: if they have trouble hearing or seeing, they probably just need to refresh the browser. That's often the case. So at this point, let me turn it over to Claus. Claus, are you ready to go?

Yes, Burr. So hello. Let me just find the button to share my screen so we can get started with these slides. Okay, I'm sharing and ready to present. So again, welcome to this talk about Camel riders in the cloud. As Burr said, I have been working with Camel for more than nine years and wrote a couple of books on Camel. I'm based in Denmark, in Europe. I threw in some contact details if you need to reach out to me.

So recently I was told that Apache Camel was not a household name. In other words, Camel was not so well known. I was a bit disappointed, so I want to spend a couple of minutes telling you about Camel. Camel is used for any kind of system integration. Camel is an integration framework; some people like to say Camel is an integration toolbox or toolkit. Here are the highlights of Camel. Camel allows you to build integrations using enterprise best practices. Those are the enterprise integration patterns, which we're going to see in a moment. To integrate with different systems, Camel comes with over 200 components. So you can integrate with legacy systems and mainframes. You can do batch-style integration with FTP and files. You can connect to messaging systems, streaming systems, web services, REST services, cloud providers, and so on. Camel also has capabilities for data transformation. You can transform data between well-known structures like XML and JSON, flat files, CSV, and so on. But there are also data transformations for industry standards in healthcare, finance, telco, etc. In Camel, you set up how systems are integrated using routes, Camel routes, and those routes are developed by developers in either Java or XML. Camel also comes out of the box with support for REST and APIs.

Now, Camel was actually inspired by the book with the title Enterprise Integration Patterns. This book was published about 13 years ago. In this book, we have industry standards for common integration problems and their solutions. So when you integrate systems, you can route messages depending on, for example, the content of the message. You can filter messages. You can route messages to dynamic destinations or to a number of recipients. You can split them and you can aggregate them, and so on. These are universal patterns that are really useful for developers building integrations, and developers use these patterns in Camel routes. And here is a very simple Camel route. It's just a straight integration between two systems.
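As a hedged illustration only (the inbox directory and the JMS component name are assumptions, not shown in the talk), the kind of two-line route he describes next might look like this in the Java DSL:

```java
// A minimal sketch: assumes a JMS component named "jms" is registered on the
// CamelContext (e.g. backed by a message broker) and that an "inbox" directory exists.
import org.apache.camel.builder.RouteBuilder;

public class FileToJmsRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Pick up files from the inbox directory and send each one
        // to a JMS queue named "order".
        from("file:inbox")
            .to("jms:queue:order");
    }
}
```

The equivalent XML DSL is a route element with a from uri="file:inbox" and a to uri="jms:queue:order", which is where the "four lines of XML" come from.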
It picks up files and sends them to a message queue called order on a JMS broker. And as you can see, you can do that in two lines of Java code or four lines of XML.

The overall architecture of Camel is basically like this. In the center, we have the CamelContext. That is the Camel runtime. Developers build Camel routes, which they add to the context. In those routes, you use the enterprise patterns where you can do routing, transformation, and so on. And then to speak to the outside world, as I like to put it, or to integrate with other systems, you use Camel components. That's basically the Camel architecture.

So then you may say, okay, where can I run Camel? Well, Camel is a very lightweight integration engine or framework, so you can actually run Camel everywhere. Now, Camel was created 11 years ago. That was before, you know, we had the cloud, or at least the public cloud, and that was also before Linux containers became as mainstream and popular as they are today. So in the earlier days, we were running our Camel applications in more traditional styles, using application servers or standalone and so on. That is still a perfectly valid way of running Camel today; it's still a good solution. But the topic of this webinar is more geared towards containers and Camel in the cloud.

Now, Camel can connect to a lot of different systems, and here on the slide I just put some of the different connectors we have. One thing that people are maybe not as aware of is that you can also use Camel for connecting to API gateways or to smaller devices, for example. So it's very flexible and versatile. If I were to summarize what Camel is in pictures, then I like to picture Camel as a toolbox that has integration tools that developers can use together with these enterprise patterns to build integration solutions defined as Camel routes, and then they use Camel components to connect to these different systems. All of that together is Apache Camel.

Okay, so what about Camel in the cloud? That's a very good question, and that is the topic of this webinar. To help explain running Camel in the cloud or in containers, I have borrowed some slides from our host, Burr Sutter, from his latest webinar where he talks about service mesh and Istio. There is a set of great slides there which I have borrowed, and I have to say that presentation from Burr was very good, so I encourage you to take a look at that one as well.

So when you start running applications in containers, we are moving from having a big monolithic application towards a more microservices style, where we split things up and have smaller pieces of applications or integrations. So over time, those bigger applications get split up into smaller pieces. And I think Burr was maybe a bit ambitious in his slides: this big monolithic application was distilled into, what is it, 20 or so individual microservices. That's quite a lot. And over time, these microservices become more independent, and they are distributed and networked. So now we have a different situation. We have distributed computing, and that's actually much more complicated than just one big monolithic application running on one host. Now we have a set of services that are running in containers, possibly on different hosts, and they communicate over the network, and so on.
So for example, before, with a big monolithic application, it may be easier to monitor and know the health status of that application. Was it up or down? And the services that are running inside that big monolithic application may communicate with each other using just local method calls inside the same JVM. Now, when you split it up and run it as microservices, where there are many different microservices and they are connected, it becomes much more complicated. So for example, service A may call a downstream service B that again calls another service C. Now, if something goes wrong, it's harder to troubleshoot and figure out where the problem is. Is it in service B or is it in service C, and so on?

So when we start to run applications in the cloud or in containers, we need a set of extra capabilities. And this is from Burr's slide. It's a shape with 10 sides; the Greek prefix for that is deca, so it's a decagon. You can see the different concepts here. I will not go too much into the details of each of them; Burr's webinar goes much more in depth.

So where are these facilities coming from? Where do we get them? Well, when you run on containers with Kubernetes, or if you're using Red Hat's version of Kubernetes, which is OpenShift, then many of these capabilities come from them. Kubernetes makes it easier for services and applications to discover each other, and also easier for services to call each other within the cluster. Kubernetes makes it easier to scale up your services. And Red Hat brings with OpenShift additional capabilities around CI/CD with Jenkins pipelines, centralized logging, help with monitoring, and so on. And in recent times, a concept called service mesh has become very popular, in particular one product called Istio, which was also the topic that Burr was talking about in his latest webinar. So again, a shout out to that one: go and watch it, it's really awesome. Istio also brings some of these capabilities to the table. And the last piece is that if you need some sort of solution to help govern your APIs, then Red Hat has a product called 3scale, but there are other products in the market.

But again, this is a Camel webinar, so where is Camel in all of this? There's no Camel logo here on this awesome slide. Camel is actually embedded in your services or in your applications. Now, Camel requires Java, so it can only be in the Java services. So on the slide here, we can only use Camel inside Vert.x, Spring Boot, or WildFly Swarm, for example. Frankly, I was recently told that one of the biggest negatives around Camel was in fact that it is Java. People would like to have Camel on other platforms like Go or .NET or Node, but Camel requires Java. To my knowledge, there's only Camel on Java.

So when we talk about microservices or running Camel in containers, it's not just about distributed computing, it's more about distributed integration. For example, we have three services here: A, B, and C. We could run Camel in service A and service C in the cluster. Where it makes sense to use Camel in containers could be, for example, if a service needs to integrate with some system that runs outside the cluster. It could be a legacy system, it could be an Oracle database, or it could be any other kind of system that you may not run inside your own cluster. And as you may know, Camel comes with a lot of components or connectors for all these different kinds of systems, so it makes it much easier for developers to pick up and use Camel to integrate with those systems and embed Camel directly inside the service. This is a powerful feature of Camel.
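To make that concrete, here is a hedged sketch (not from the talk) of a Camel route embedded in a Java service that reaches out to an external database; the data source name, table, and queue are hypothetical:

```java
// A sketch only: assumes a javax.sql.DataSource bean named "legacyDb" pointing at
// the external database, and a configured "jms" component, are registered with Camel.
import org.apache.camel.builder.RouteBuilder;

public class LegacyOrdersRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Poll the external system once a minute and publish each row
        // onto an in-cluster message queue for other services to consume.
        from("timer:legacy-orders?period=60000")
            .to("sql:select * from ORDERS where STATUS = 'NEW'?dataSource=#legacyDb")
            .split(body())                       // one message per row
            .convertBodyTo(String.class)
            .to("jms:queue:new-orders");
    }
}
```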
Now, another slide I really like. This is one I have borrowed from the Red Hat middleware team. When you talk about integration and containers, it is part of a bigger puzzle which has been coined agile integration, and that consists of three pillars. One is distributed integration. That's actually the bread and butter of Camel; this is where Camel lives. Camel is lightweight, you have the enterprise patterns and all the connectors, and so on. The middle pillar is around containers; that is what we use for running our services in a cloud-native, cloud-scale way. And then for APIs, we can use something like 3scale to help govern them.

So what about some of the good practices or best practices around running Camel in the cloud or in containers? I have a set of slides here, and I have to apologize up front: these slides are not as polished and pretty as I would like them to be, but anyway, here goes.

One of the good practices around running microservices, or Camel, is that they should be small in size. And Camel has always been that. It was very lightweight from the very beginning 11 years ago. You just pick camel-core and then you choose which Camel components you want to use and add them together. And Camel works beautifully in this single fat-jar kind of style, so you can easily embed Camel together with Spring Boot, WildFly Swarm, Vert.x, or whatever you want to use. That's an awesome combination.

Another good practice, when you build integrations as microservices, is to try to build your services as stateless. If you do need any kind of state, then Camel has great support for that with different components. There are components for in-memory data grids like Infinispan, Hazelcast, Apache Ignite, and so on. And of course, you can also store state in traditional databases or key-value stores. Kubernetes itself comes with a concept for stateful applications called StatefulSets.

Another practice is around configuration management. When you start to run your Camel microservices in containers, you can leverage the best practices around that from Kubernetes, for example. There's a concept called ConfigMaps. It allows you to externalize your configuration from your containers and your application and manage it in Kubernetes as ConfigMaps. Those are essentially just key-value pairs. When you boot up your application in Kubernetes, those ConfigMaps are injected using different means. One is using environment variables; another one is to mount additional files in your container so your application can read the information from those files, like properties files, etc. A variation of ConfigMaps is Secrets, for configuration that is more sensitive, like passwords or certificates, and those can also be injected and provided to you by Kubernetes.

Now, using ConfigMaps is easy when you use, for example, Spring Boot with Camel. In Spring Boot, you can refer to configuration values using the @Value annotation and this dollar-curly-brace placeholder syntax. And on the right-hand side, I have a little example with a very simple ConfigMap. It has one key called fallback, and the value of that is "I still got no response".
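A minimal sketch of what he shows next, assuming the ConfigMap entry is surfaced to the pod as a Spring property named fallback (for example via an environment variable or a mounted properties file); the class and endpoint names are hypothetical:

```java
// Spring Boot injects the ConfigMap-backed property; Camel then uses the value.
import org.apache.camel.builder.RouteBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class FallbackRoute extends RouteBuilder {

    // Injected from the ConfigMap entry: fallback = "I still got no response"
    @Value("${fallback}")
    private String fallbackMessage;

    @Override
    public void configure() throws Exception {
        from("direct:fallback")
            .transform().constant(fallbackMessage);  // reply with the configured text
    }
}
```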
And you can see here how I can easily refer to those values in Spring Boot and Camel.

Another very good pattern is around fault tolerance. When our applications are distributed, and the integration is distributed, we have a lot more communication over the network, and then a lot more can go wrong with that remote communication. Camel has sort of two answers to that. One is client-side retries, or Camel retries, and the other one is circuit breakers, implemented using Hystrix. On the slide here, we have a simple example using client-side retries in Camel, using the exception clause. So we can tell Camel: okay, if you try to call the service and it fails, retry up to ten times, and wait one second between each attempt.

Now, when you use client-side retries, there can be a problem known as the thundering herd problem, and this is a figure I have borrowed from Christian Posta. We have this service in the middle that is under stress; it has problems. So all the other services that are trying to call it will fail, and because we are using client-side retries, they just keep on retrying, calling the same service over and over again, but they keep failing because it's overloaded, for example. So it's like that picture of a herd of bulls or whatever that comes running and roaring towards you; it's immense, you can't stop it, right? In that case, there may be a different pattern you can use that is better, and we're going to see that in the demo: it's related to circuit breakers.

When you run Camel in containers, you also need to think a lot more about health checks and how to provide health checks to the containers. And thankfully you can get that easily out of the box. If you run Camel with, for example, Spring Boot, Camel has support for the mechanism in Spring Boot which is called the actuator. On the right-hand side, there's a screenshot of that. The health check from Spring Boot says it's up, and then there's additional fine-grained status from Camel, even for every Camel route, et cetera. WildFly Swarm has a similar concept; it's called monitor. Then, when you run your applications or microservices in Kubernetes, there are two concepts that are really important: readiness probes and liveness probes, which we're going to talk about in the demo as well.

So what about all these enterprise integration patterns? Are there any good practices around running those in the cloud? Is there any difference between using them in containers versus standalone? No, there's not. They work exactly the same. There's no difference. These patterns were actually created before we had containers and the cloud, et cetera, so they are universal. They work anywhere. There's nothing to worry about; just use them.

We do have a few additional cloud patterns, if you want to call them that. One is called the service call pattern. In this example, we have a very simple Camel route that has a timer, so I want to call a service in a scheduled fashion. The service is called hello-service. What it does is go to some sort of registry and do a lookup: which hosts have this service, the physical hosts that have the service. And then, if there are a number of hosts that have the service, it picks one of them. You can choose which algorithm to use. It could be round robin, it could be sticky, it could be random, et cetera, or you can plug in your own, as sketched below.
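A hedged sketch of that serviceCall route (the service name and timer period are assumptions, and it presumes a service registry such as the Kubernetes one is available to Camel at runtime):

```java
import org.apache.camel.builder.RouteBuilder;

public class HelloClientRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:client?period=1000")
            // Look up "hello-service" in the configured registry, pick an
            // instance (round robin by default), and call it over HTTP.
            .serviceCall("hello-service")
            .log("${body}");
    }
}
```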
And the service registry is also pluggable, so we have plugins for Kubernetes, Netflix Ribbon, Consul, ZooKeeper, and so on. Now, this pattern was created, you could say, around the same time as, or a bit before, Kubernetes existed. Kubernetes has made it much easier to call services, so the pattern might be less needed and not used as often when you're on Kubernetes. But if you're not, you can still use some of the other service registries. Off the top of my head, there might be a situation where you really need some sort of powerful client-side algorithm to decide which physical node to use for calling a service, and you cannot use the default from Kubernetes, or maybe the one that Istio might provide. Then you can use this service call pattern in Camel and plug in your own algorithm to decide what to do.

Another pattern, which you're going to see in the demo, is around circuit breakers: it's the Hystrix pattern. We do have a circuit breaker pattern that comes out of the box in camel-core; however, we have deprecated that in favor of Hystrix. Hystrix is a much more robust and powerful implementation of the circuit breaker. It's battle tested, it's used a lot, it's really awesome.

There are also a few other patterns related to distributed tracing. Camel has integrations with Zipkin and OpenTracing. I will say that Zipkin is the first one we did and it's a bit, let's say, outdated now; we have scheduled to update it for Camel 2.22. OpenTracing is more up to date. Depending on what kind of service you're calling and what Camel components you're using, those components are able to enrich the tracing data with additional metadata that you may not get out of the box. But I do think that Istio, for example, is something that is taking over this area.

So, going back to Burr's decagon with the 10 boxes, where are some of the Camel capabilities that you may want to use when you're running Camel in containers? Definitely around invocation. All the Camel components or connectors, you can use them to call any kind of external system. Not every service call is a straight HTTP REST call within your own cluster, so every time you need to call a service outside, there's most likely a Camel component for that. Camel also has quite good support for REST and REST APIs. And again, for fault tolerance, there are the client-side retries or the circuit breaker.

Now, it's demo time. Unfortunately, I put the demo at the end, sorry about that. This is a very basic, simple demo. We have a Spring Boot application that calls a hello service and gets a reply. We're going to run that and see a little bit of what Camel does here. So let me just go over here. I have already installed the application, so if I call the service from my web browser, I get "Swarm says hello from hello-swarm"; nothing of interest there. But here is the Spring Boot side of things. It's just a Spring Boot application. There is a timer that runs regularly, every second. We do a service call inside Hystrix, so we have a fallback in case something goes wrong. We call the service using its DNS name in Kubernetes, which makes it very easy: just the name of the service, colon, the port number. Up here, I have commented it out, but there is a variation in Camel: you can also call a service easily in Camel using the serviceCall EIP and then the name of the service. Then you don't need to remember the port number, for example.
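For context, here is a hedged sketch of the kind of route the demo uses (calling the service by its Kubernetes DNS name, wrapped in a Hystrix circuit breaker); the service name and fallback text come from the talk, but the port, path, and class name are assumptions:

```java
// Requires camel-hystrix and camel-http4 on the classpath.
import org.apache.camel.builder.RouteBuilder;

public class ClientRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:trigger?period=1000")
            .hystrix()                                  // wrap the remote call in a circuit breaker
                .to("http4://hello-swarm:8080/hello")   // plain Kubernetes DNS name + port
            .onFallback()                               // used when the call fails or the circuit is open
                .transform().constant("I still got no response")
            .end()
            .log("${body}");
    }
}
```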
So, let's try to run this guy. And in the interest of time, I already pre-deployed it on the Kubernetes cluster, so I'm just going to scale this guy up. It's scaling up, and I can go here and do a get pods, and this guy is starting to run. What I'm going to do is just follow the logs, and you can see Spring Boot is starting up and running this application; it has Camel inside the application I'm running. And if I go down here, now it's running and the route starts. You can see "Swarm says hello from..." from me calling that application.

And let me just say, if I delete the pod where the swarm application is, the circuit breaker inside Camel should help with fault tolerance and build a resilient application. You can see we get the fallback: I still got no response. And now the Kubernetes cluster is starting up another pod with hello-swarm. It should come back and then run again, hopefully pretty soon. And we can see here there's already one pod running, so it should start to get the response. Here it comes.

Now, that may not be very fault tolerant, because if you only have one pod running and it fails, then your service is failing. So let's try to scale it up to two, to have a more fault-tolerant application. There should be two instances of that, and you can see it should start to load balance between them. Oh no, you can see there are still some errors. We scaled up, so why is this not fault tolerant? This is actually a bit on purpose, to have a problem: when I scale up the hello-swarm application to two pods, there is a problem. If I can find it, under deployments, hello-swarm, OpenShift should be able to detect that as well. And it says here that the container does not have health checks. So the problem is that when you deploy your application in the cluster, you should include health checks, and there was no health check in the WildFly Swarm application. So the cluster thought it was healthy immediately and started sending traffic to it. But WildFly Swarm, and Java in general, is a bit slow to boot up, so it may take 10 seconds to be really ready. So you should include health checks, and if there are health checks, then Kubernetes is able to safely scale your applications up and down in a more fault-tolerant way. So always implement health checks in your applications. And also on the side where you call your services: even if they have health checks and everything, errors can still happen, so you have to build in fault tolerance using client-side retries or circuit breakers.

Also, when you run the client here, we can see we get the health check from Spring Boot. You can see Camel showing up here and saying everything is up, so that guy is good. When you use Hystrix, you can also get this Hystrix dashboard. These are live metrics, so when I kill things and whatnot, this one will show the problems.

We are running out of time, so let's quickly move out of the demo and go to the last couple of slides. Here are some links for more information. And just a shout out for my book, Camel in Action, second edition: there is a discount code that gives 40% off from Manning if you order from their website. The code is live for at least a month or so, so you have plenty of time. The link at the bottom is a free sample of the book that Red Hat has sponsored.
That's the first three chapters; they are free, and you can download them from there. And now we are going to Q&A, so I think our host will set it up, and I might need to stop sharing. Yeah, yeah.

So, one of the key questions, Claus, is: hey, how can I get the slides and the presentation? People definitely want to know more about that. So if you can publish the link on Twitter or send it to me, we can definitely get it to folks, or you can add it to the chat as well. One question I thought was a very good Camel question is specifically related to threading in Camel: is there a thread per request, or is there a thread per route? Good question. So Mr. Camel here can help answer that, right? So, no, yes, it depends. Basically, there's one thread per route in the sense that it all begins with the input, the incoming source, and it usually has one thread per message, but it can also be reactive in a non-blocking way and all that kind of thing. So it's a bit of a complicated answer. Okay, yeah, that makes sense to me.

And there's an interesting question about TLS ingress support. Have you ever encountered TLS ingress within the context of Camel, or does that normally happen outside of Camel, at the Tomcat level, the web server level? Yes, we tend to see that happening more on the outside. Okay, definitely. Excellent.

And then there are a lot of questions about cloud computing in general; I tried to answer those. A lot of questions on enterprise integration patterns; I tried to answer those. Have you tried, by the way, running Camel inside of Amazon Lambda? That question came up a lot; people are very interested in running Camel inside of Amazon Lambda. Well, no, we have not. But there is support for letting Camel call a Lambda function. I do think running Camel inside Amazon Lambda is probably too heavy, because it requires the entire JVM to bootstrap, and the footprint of Java itself, the JVM, is too heavy for that. So I don't think it's there yet. Okay, and that's a very fair response. I was thinking about the same kind of answer myself.

We're really out of time. Let me see if there are any other really urgent questions; I tried to answer a bunch of them. Oh, here's a great one: when can we get Camel 3.0? Oh yeah, okay. So yes, we will work on that after the summer break. Well, we have said that the last couple of years, but this time it is true. The first priority is to get support for Spring Boot 2 out, and then we have a clear path for working on Camel 3. But on the other hand, I also want to say it's a testament to the Camel project itself that it doesn't throw people under the bus by reinventing itself every two or three years, like some open source products do. We have a lot of respect for our user base. That's why we are a bit more cautious and conservative, maybe, because Camel is 11 years old, right? And this is only Camel 2, sort of like the second generation. And also, if you have my book, there is information about Camel 3.0 in the last chapter. That's all we have time for now. Thank you so much. That is an excellent plug; people may definitely want to check out that book.
And hopefully they saw the link I posted earlier for the free ebook excerpt from your book that is available at developers.redhat.com. Thank you so much, Claus. Perfect. Okay, thanks for having me.