Hi, everyone. Our talk is on application modernization with Camel, JavaScript, and OpenShift. My name is Wuxin Zhang. I am an Associate Consultant at Red Hat, and my co-presenter is Yipsan, an architect at Red Hat. So, application modernization. There are five parts to application modernization. For service endpoints, we migrate web services to APIs. For architecture modernization, we break monoliths down into standalone microservices, because they are easier to maintain and make it easier to share code. To modernize the development process, we move from a waterfall approach to CI/CD so that you can release on a daily basis, like an agile transformation. For deployment, we modernize from on-prem virtual machines to containerized images. And lastly, for infrastructure, we move from the data center to the cloud.

Integration points. Before, on the left-hand side, we have one big cluster of dependencies. It's hard to cut out a piece of the application to modernize, because in order to modernize the application, you also need to identify its dependencies and modernize them at the same time. The graph on the right-hand side shows that once you have an application to modernize, you can create clusters of dependencies, meaning we break the dependencies down into multiple smaller clusters. So instead of one giant cluster where everything depends on everything else, we have small clusters of dependencies. In doing so, we keep the coupling loose.

So, Camel. What is Apache Camel? Apache Camel is the upstream project that we use for integration technology at Red Hat. Apache Camel is Java-based and built around enterprise integration patterns; it started its life as an implementation of the Enterprise Integration Patterns book. It comes with over 300 components out of the box that you can use. Integrations can range from simple timer-to-log examples to complex processing workflows connecting several external systems.
Camel has built-in data transformation, intuitive routing, and native REST support.

Integration patterns. As developers, we know that the more a modern application is deconstructed into smaller pieces, the more you need good communication patterns for managing all the inherent complexity. Camel has been shaped around enterprise integration patterns since its inception, and its developers have created a DSL that often maps to the patterns in a one-to-one relationship. These patterns are agnostic of programming language, platform, and architecture, and provide a universal language and notation for the fundamentals of messaging and integration. Camel continues to evolve, adding new patterns from service-oriented architecture, microservices, cloud-native, and serverless, and has become a general pattern-based integration framework suitable for multiple architectures. I am not exaggerating if I state that the Camel DSL is the language of enterprise integration patterns. It is the language that best expresses most of the patterns from the original integration patterns book, along with other patterns that have been added by the community over the years. And the community keeps adding patterns and new components in every release. In this slide, you can see a split-orders pattern that splits items out of a larger order and sends each item to either the electronics area or other areas.

Apache Camel is a powerful integration library that provides lots of integration connectors. As you can see, there are hundreds of Java libraries exposed as Camel connectors using Camel endpoint notation. These URIs are also universal. Here are more examples of Camel components; you can Google "Camel components" and you will find a lot of them.

Camel routes. So like I said before, Camel has multiple domain-specific languages, or DSLs. It supports XML, Java, Groovy, Kotlin, and, of course, JavaScript. There are good reasons to use both the Java and XML DSLs.
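As a rough illustration of that split-orders idea, here is a plain JavaScript sketch of the splitter plus content-based-router logic. This is not the Camel DSL; the order shape, item names, and category rule are invented for the example.

```javascript
// Sketch of the splitter + content-based router EIPs in plain Node.js.
// This is NOT the Camel API; it only illustrates the message flow.
function splitOrder(order) {
  // Splitter: break one large order message into per-item messages.
  return order.items.map((item) => ({ orderId: order.id, item }));
}

function route(message) {
  // Content-based router: electronics go to one "queue", the rest elsewhere.
  return message.item.category === 'electronics' ? 'electronics' : 'other';
}

const order = {
  id: 42,
  items: [
    { name: 'laptop', category: 'electronics' },
    { name: 'desk', category: 'furniture' },
  ],
};

const destinations = splitOrder(order).map(route);
console.log(destinations); // prints: [ 'electronics', 'other' ]
```

In Camel, the same flow would be a `split()` followed by a `choice()` in the route, but the plain functions above show the shape of the pattern.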
A Camel route expresses the enterprise integration patterns. It gets developers thinking in terms of pipes and filters, for instance. The choice of DSL is a technicality that will not impact the success of the project; you can even mix and match. So in this slide, you see a one-to-one integration between a file and a JMS queue. At runtime, it does not matter to Camel whether you write it in XML or Java. More examples: here, from a file endpoint called inbox, we split the body by line. Each line is transformed into a custom XML, and each of the XMLs is sent to an ActiveMQ queue called line. Integrations are great for connecting systems and transforming data, as well as for creating new microservices.

REST DSL. Camel also offers a REST-styled DSL, which can be used with Java or XML. The point here is for end users to define REST services using a REST style with verbs like GET, POST, DELETE, and so on. The REST DSL supports the XML DSL using either Spring or Blueprint. Here, to define a "say" service, we set the base path in rest("say"), and then provide the URI template in the verbs. It also accepts data format settings.

OK, let's talk about Camel JavaScript. In this sample, we use a JavaScript function to create a predicate in a message filter. The message filter is an enterprise integration pattern; it allows you to filter messages, obviously. For example, if the predicate is true, the message will be routed from queue A to queue B. This route routes exchanges from an endpoint to a special queue. We can write this in the Spring DSL as well. Here's another example of Camel JavaScript. As you can see, an integration written in JavaScript is very similar to a Java one. Here, from a timer tick, a processor function prints "Hello Camel K" to the info log. To run it, you just have to execute kamel run and the name of the file. For JavaScript integrations, Camel K does not yet have an enhanced DSL, but you can access some global bound objects.
In this sample, we use context.getComponent to retrieve the log component, and then use its exchangeFormatter property to do something like this.

Camel ScriptContext. JSR 223 lets you use the power and flexibility of scripting languages like Ruby, Groovy, and Python on the Java platform. Camel supports a number of scripting languages, which can be used to create an expression or predicate via JSR 223, a standard part of Java 6. The Camel ScriptContext is extremely useful when you need to invoke some logic that is not in Java code, in languages such as JavaScript, Groovy, or others. properties is one of the attributes of the Camel ScriptContext. As you can see here, before Camel 2.9, if you needed to use the properties component from a script to look up property placeholders, it was a bit cumbersome to do so. Since Camel 2.9, however, you can use the properties function, and the example is much simpler. Here, in this example, the function with a resolve method makes it easier to use the Camel properties component from scripts. You can also load scripts from external resources by referring to external script files. For example, to load a Groovy script from the classpath, you prefix the value with resource:, as shown.

Camel dependencies. To use scripting languages in your Camel routes, you need to add a dependency on camel-script, which integrates the JSR 223 scripting engine. If you use Maven, you could just add the following to your pom.xml file, substituting the version number for the latest and greatest release; see the download page for the latest versions.

So with that introduction to Camel and how Camel supports JavaScript, let's talk about Camel K. Camel K is a deep Kubernetes integration for Camel. Camel K runs natively in the cloud on OpenShift and is designed for serverless and microservice architectures.
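Going back to the Maven dependency mentioned a moment ago: the pom.xml addition would look roughly like this (the version is a placeholder; check the Camel download page for the current release).

```xml
<!-- camel-script integrates the JSR 223 scripting engines into Camel.
     Replace x.x.x with the latest release from the download page. -->
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-script</artifactId>
  <version>x.x.x</version>
</dependency>
```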
For those who are not familiar with Camel K: Camel K is a sub-project of Apache Camel with the goal of building a lightweight runtime for running integration code directly on cloud platforms like Kubernetes and Red Hat OpenShift. So we learned that Camel is the Swiss Army knife of integration. Camel K is serverless Camel, for Kubernetes and Knative. We also have Camel Quarkus, which runs on top of Quarkus and enables developers to write small, fast Java applications. We also have Camel Karaf, Camel Spring Boot, and the Camel Kafka Connector. These are all Apache Camel 3 projects.

So, Apache Camel K configuration. In order to run Camel K, you need access to a Kubernetes or OpenShift environment. Camel K works best, however, when it runs natively on Knative. Knative provides simple pre-built components to publish and subscribe to an event mesh. Let's take a look at the performance of Camel K. The Camel K runtime provides significant performance optimizations. This graph shows the performance of Camel K without even utilizing Knative and serverless technologies. Compared to a binary source-to-image deployment, Camel K has lower deploy and redeploy times. If the binary build runs remotely, it is even slower. Notice that a redeploy with Camel K is almost instantaneous.

So how does Camel K work? Well, developers just want to deal with the business logic and not deal with the runtimes at all. They want to integrate systems and go serverless. What they can do is write Camel routes in a single file. For example, here we have a Camel route written in XML. And how does it work with the OpenShift and Kubernetes piece? Well, at this point with Camel K, you only have an integration file. This file says: from some sort of timer, every second, set the route ID, set a header, send it, and log it. So once we have a cluster prepared and the operator installed in the current namespace, we can use kamel run. Camel K comes with a command line tool called kamel, which is "camel" with a k.
So you run kamel run and then the name of the file. The CLI automates tasks on the developer's machine, such as observing code changes, streaming them to the Kubernetes cluster, printing the logs from the running pods, and so on. Note that you don't have to write any dependency specification in the folder alongside your integration.groovy file, because Camel K will figure the dependencies out for you and inject them during the build. So all you need to do is write the application; the kamel binary will push it to the cluster, and the operator will do all the tedious footwork for you. The first time you run your application, it might take up to two minutes to start, since it needs to pull and build the image for the first time, but the next build will only take a few seconds. Once it has started, you can find the pod running this application on OpenShift. Here's what the pod looks like on the OpenShift console; the pod runs in the cluster. This can also be done with the OpenShift CLI tools. Once you are logged into the OpenShift cluster, the kamel CLI will use that login to run the integration on the OpenShift cluster in this project and deploy it from there.

Next, let's talk about how to deploy JavaScript on OpenShift. Deployment on OpenShift is based on containerized images, so the first step is to identify and find the base image for your JavaScript application. You can find one on Docker Hub, Quay.io, or GCR.io; the image needs to match your specific JavaScript runtime version. Once you have the image, you can start working on your Dockerfile, and with the Dockerfile you can run the docker build command to build the application image. Basically, this is an S2I process that merges your code from your Git repository with the base image and generates a new application image for deployment. So this is the high-level architecture of how deployment works with a Camel K application.
On the left side, in the box, you have your local machine, your dev environment, your IDE. Once your coding is completed, you use the kamel CLI command to execute the program. What it does is a live update: it basically triggers and sends an update to the cloud on the right-hand side. It triggers a notification to the Camel K operator in OpenShift. The operator gets notified and then deploys the latest change to the pod, based on the integration definition.

A service-oriented architecture is autonomous and loosely coupled, right? These days, people talk more and more about microservices. The idea is that we want to make each microservice single-responsibility, single-purpose, and stateless, right? All the application-specific state is moved out of the microservices and kept in some sort of persistent volume claim or database. Each microservice is independently scalable, and they can also be independently automated. And then, on the far right-hand side, we have serverless architecture. Serverless is based on single actions, and these are also ephemeral.

So when we talk about the Camel K Knative profile, this is an example of a workflow. On the left-hand side, you have some sort of Camel definition. From the from attribute, you see that it gets information from a Knative channel, and the to attribute says that it sends the information to a specific HTTP my-host API path. This is a very simple Camel definition. You put this into a YAML file, right? This YAML file has Integration as its kind, and the apiVersion is camel.apache.org/v1alpha1. This is an example of a YAML file for deployment in OpenShift. So once you set up the YAML file, all you need to do is pass it along to the Camel K operator. When the Camel K operator gets notified, it will look at the file and make a decision.
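A sketch of what such a YAML file might look like, assuming a hypothetical channel name, host, and path (the exact schema varies across Camel K versions):

```yaml
# Hypothetical Camel K Integration resource: consume from a Knative
# channel and forward each message to an HTTP endpoint.
apiVersion: camel.apache.org/v1alpha1
kind: Integration
metadata:
  name: channel-to-http
spec:
  flows:
    - from:
        uri: knative:channel/my-channel
        steps:
          - to: http://my-host/api/path
```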
Is it a Knative profile? If yes, it is a Knative profile, then the operator generates a new YAML file, sets the kind to Service, and uses that new file for deployment. If it is not a Knative profile, in this case it is just a deployment object, then it generates a new YAML file of kind Deployment based on the information and uses that for deployment. So this is a high-level picture of how the Camel K operator works.

Serverless and Knative. Now it's really important to understand how Knative applies to serverless. Serverless is basically an execution model where code is executed by dynamically allocated resources. When code is ready to run, a notification is triggered and resources are spun up to execute that specific piece of code, right? Serverless removes the need for the traditional deployment model. You know, the traditional way is that you always need some sort of server component deployed to handle a specific workload, but serverless takes that concept away. Everything is on demand. Knative is an open source, Kubernetes-based platform that helps you deploy and manage serverless workloads. So when we talk about Knative, we need to define its different components, the building blocks of Knative for serverless applications. Knative contains two pieces: Serving and Eventing. Serving is based on your Service object. It can scale to zero when no one needs to use the service, or it can scale up to as many instances as needed when you have a peak in traffic. The Serving model is a request-driven compute model: when I need it, I ask for it; when I don't need it, it scales to zero. Eventing is based on event bindings, right? When a specific event comes in, a lot of the time it is coming from a message in a queue, right?
The message comes in from the queue and says, hey, you know, we need to do this specific operation, right? And when that event comes in, Knative will spin up the resources required to do the computation for the event. Each building block is a Kubernetes custom resource with a controller that manages its lifecycle. So in Knative Serving, for example, we have a Camel script coming in that does a REST call, a POST to a specific path, and then writes to system one and system two. Basically, when this script comes in, you create a Kubernetes namespace, and inside the namespace you have the Knative service that spins up, right? There is no container running if no one is using it; this is all on demand. This also helps you reduce operational cost, right? You get much faster deployment to market, it reduces packaging and deployment complexity, and in the end it is a flexible, scalable, on-demand solution.

And you can see this chart on the left and on the right. The left side shows how much a traditional IT organization was overpaying: the majority of the time, your capacity is the yellow line while your demand is the red line, but you need to keep roughly a 20 to 30 percent margin to make sure you cover the demand. And when there's a peak, say on the day of Thanksgiving, you are not able to meet the demand. So this is the traditional problem of predicting usage and computational resources that a lot of IT departments have had difficulty with.

Camel K sits between microservices and serverless, right? Underneath that, you have Quarkus; Quarkus is also an important component that sits somewhere between microservices and serverless. Knative is completely serverless, right? And then you have AMQ streams and messaging queues, which can serve both microservices and serverless. And underneath all of that, we have OpenShift.
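The scale-to-zero Serving behavior described above is driven by a Knative Service object; a minimal sketch, with a placeholder name and image, looks like this:

```yaml
# Minimal Knative Serving Service. Knative scales the pods behind this
# service down to zero when idle and back up when requests arrive.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:latest
```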
So the event-driven computation happens between the messaging queue, Knative, Camel, and Quarkus. For microservices, we want to ensure that the services have granularity and security. For distributed integration, right, we want to be able to set up different containers: when an available service is discovered, the container gets spun up, right? And at the same time, we have containers coming in on the other side of the picture, where you need to track API transaction management using sagas, right? You can see why this containerized application architecture can get very complicated: you can have containers on different sides of the architecture diagram, and in the end, they all need to communicate with some sort of centralized database on the right-hand side. So when this type of distributed application comes in, what is one of the best practices to handle it?

Here we talk about the saga pattern, which we use heavily for application modernization, right? The saga pattern works like this: you have multiple services, service one, service two, service three, and service four, right? Each service has its own compensation action associated with it. What happens is, service one makes a call to service two, service two calls back to service one, then you make a call to service three and come back, right? At each step, the compensation action gets notified, right? In that situation, based on the request and response, it can make a decision: should it continue making calls in the service orchestration, or should it return the call back to the caller and do something else, right? So this is a very useful pattern for handling service orchestration. You can see we can set up a set of integration points between the saga and Camel, right?
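To make the compensation flow concrete, here is a plain JavaScript sketch of the saga idea. This is not the Camel saga DSL; the step names and the deliberate failure are invented to show completed steps being compensated in reverse order when a later step fails.

```javascript
// Sketch of the saga pattern: each step has a compensation action.
// If a step fails, every previously completed step is compensated
// in reverse order. Plain Node.js, not the Camel saga EIP.
const log = [];

const steps = [
  { name: 'bookFlight', run: () => log.push('bookFlight'), compensate: () => log.push('cancelFlight') },
  { name: 'bookHotel',  run: () => log.push('bookHotel'),  compensate: () => log.push('cancelHotel') },
  // This step fails on purpose to trigger the compensations.
  { name: 'chargeCard', run: () => { throw new Error('payment declined'); }, compensate: () => log.push('refund') },
];

function runSaga(steps) {
  const completed = [];
  try {
    for (const step of steps) {
      step.run();
      completed.push(step);
    }
    return 'committed';
  } catch (err) {
    // Compensate completed steps in reverse order.
    for (const step of completed.reverse()) step.compensate();
    return 'compensated';
  }
}

const outcome = runSaga(steps);
console.log(outcome, log);
// prints: compensated [ 'bookFlight', 'bookHotel', 'cancelHotel', 'cancelFlight' ]
```

In Camel, the equivalent route would declare `.saga()` with a compensation endpoint per step, as the next slides show; the sketch above only illustrates the ordering guarantee.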
In the first saga, we're basically saying that I'm setting a propagation for the service call, and this is a MANDATORY propagation, right? Then in the compensation model, you can say direct:cancelBooking, right? This is my compensation endpoint, and then we specify where it goes: it goes to a specific SQL database and does an insert into the flight table with specific values, right? So you can see your saga can be set up in Camel in a way that lets you specify the source, the destination, the conditions, and what to do to handle a specific exception, right? The lower section of the saga is the same thing: we set up the header, this HTTP header says it is a POST, and then you post the request to an HTTP endpoint.

For source-to-image deployment, right, you usually get the code from the Git repository, and as soon as the code gets pushed, we merge the code with the base image and generate an application image for deployment. Configuration injection is a common pattern where we inject a specific configuration during the deployment. For example, in OpenShift we have the ConfigMap. A ConfigMap is an object that holds key-value pairs. When we deploy application one, it pulls the information from the ConfigMap and uses that information for its deployment, and similarly, application two follows the same design pattern.

The OpenShift operator catalog contains an entry for Red Hat OpenShift Serverless. Once Serverless is clicked on, you can go here and specify whether you want to do a basic install. If you want, you can specify a specific version; for example, I want version 4.6 for the OpenShift Serverless operator. Once it's installed, you can see the installed operator on your screen here.

In conclusion, Camel K is a lightweight integration framework that runs natively on OpenShift.
Camel K is also designed for serverless and microservices architectures. Knative adds the components for deploying, running, and managing serverless cloud-native applications on OpenShift. The serverless cloud computing model leads to increased developer productivity, more reliable cloud deployments, and reduced operational cost. Camel K and Knative provide a fast and scalable solution for application modernization architectures, and Camel integrates different technologies and different languages with reliable results. We come from the Red Hat Consulting team; if you have any specific questions, please reach out to Red Hat Consulting.