All right, well, everybody welcome to yet another OpenShift Commons briefing. As we like to do on Mondays, we take deep dives into new tech, and today is no different. We have a trio of folks here, and we're going to talk about integration in OpenShift, specifically integrations using Apache Camel and Apache Kafka. Zineb, Rachel and Maria are here, and I'm going to let them introduce themselves and tell us where they are in the world, and then there's going to be a whole lot of demoing going on today. So ask your questions in the chat wherever you are, whether you're on Twitch, Facebook, YouTube or here in BlueJeans with us, and we will relay the questions and try to get them all answered for you. But without any further ado, please take it away. We have a full hour, so let's get started. Thank you. Hi. So, yes, we are three software engineers who work on Apache Camel in some way or another. The three of us work for Red Hat Engineering, and we want to tell you how to do good integrations in OpenShift or any other Kubernetes-like cluster, including Knative. So let's start with what integration frameworks are and how to do integrations the proper way. Usually, when we talk about integration, we mean that you are building a software architecture with different components: maybe databases, maybe APIs, maybe you want to connect to some FTP server or some custom service. And you have to define in your architecture what the workflow of data between one component and the next is going to be. Maybe you have to connect to one component, go to another component and then go back; maybe it's a flow that is more linear. But in the end, you have to consider in your software architecture not only the specific logic of how the data is going to flow from one component to the other, but also how to connect to those components. 
For example, if you want to connect to Salesforce, you have to learn how the authentication works, how the API works, what the formats are, what the protocols are. Then maybe you have to go to a database, and you have to learn how its authentication works and how connection pools work. And all of this is a task that is repeated over and over again by many, many software engineers, many, many developers: we write and rewrite the same lines of code for connecting to one component or another. And yes, it's true that each type of component usually has its own client library you can use, but even then you have to learn how to use that client library. And you have to consider maintenance: you may have to upgrade that client library because the component's API changed, or because there is a security issue that forces you to upgrade. And then your architecture starts getting bigger and bigger, hard to maintain, and very coupled from one component to the next. So this is what integration frameworks are for. They are the glue between components, so you don't have to worry about all of these things. You can forget about how each component works, how to interact with each component, and any issues you have to consider — for example, when you connect to some database, any considerations around coding the authentication or handling security issues. And integration frameworks, or at least good integration frameworks, should not only help you connect to the components but also help you define the workflow. So, for example, you can define that first you go to component A, then you go to component B, and then you have a conditional: maybe you go to component C, maybe you go to component D. 
And this is what the enterprise integration patterns are: different ways of defining workflows. Maybe it's a loop, maybe it's a conditional, maybe it's a broadcast. There are many different patterns for communication, for creating workflows. And this is what a good integration framework should give you: a way to connect to components transparently, in a decoupled way, easily, of course, and also a good way of defining the logic of the workflow. We want to talk to you about Apache Camel, and this is roughly how Apache Camel works on the inside. You have different endpoints that are specialized in connecting to external systems. It may be an endpoint that connects to a database, Twitter, Facebook, LinkedIn, whatever. When this endpoint interacts with an external system, it generates a data object, which we call the exchange: that is the message. This message can carry all kinds of data, and it can have headers and attributes — not only the response from the external system but also attributes that give context. And this message goes to the router, which decides what the following step is going to be, and sends this exchange to the next endpoint, which will interact with an external system, generate a message, go again to the router, and the router decides which is the next step, and so on, until the flow finishes. Why do we like Apache Camel? Well, it's open source, which is always a good sign, but it's also very, very lightweight. It has more than 350 different types of connectors, which means it's difficult to find a use case that isn't already covered — and if you do find a use case that isn't covered, this is open source: you can create your own connector. And the idea is that Camel offers you a domain-specific language to define the workflows, which is very, very simple. 
Each step is usually just one line of code, and you can forget about most of the implementation details of how to connect to that component. Then you can easily replace one component with another. For example, at some point, instead of using an FTP server, you want to use an S3 storage system. And it's very easy to change, because Camel is designed to connect different steps easily. This is why Apache Camel often gets called the building blocks of software: you can easily define how to connect one component with the next, but you can also easily replace a component with another and maybe have different flows. This allows you to focus on your use case logic — on exactly what you want to do — not on how to connect to some external system and how that external system works. These are hello world examples of Camel. Camel has its DSL, but it allows you to use different languages. For example, here in green, we see how it looks in JavaScript: just a timer that prints a hello world to a log every second. In blue, we see the Java version, which is exactly the same as the JavaScript because it's the same DSL, but it has all the decoration — the public class extending RouteBuilder — that it needs to be interpreted as Java. And then in orange, we have a YAML version of the same thing: you also have a from with a timer, then a step that sets the body to hello world, and then a log at info level. So it's very, very easy to define any type of workflow in Camel, and you can choose the language of your preference, the one you are most comfortable working with. When should you use Apache Camel? I would advise you to use it always, unless you have a very, very contained application that doesn't interact with anything else. It really helps you keep the coupling very low, and it helps you keep your system very easy to maintain. 
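As a point of reference, the timer-to-log hello world described above looks roughly like this in the YAML flavor of the DSL (a minimal sketch; the endpoint URIs follow standard Camel component syntax):

```yaml
# The timer hello world in Camel's YAML DSL
- from:
    uri: "timer:hello?period=1000"   # fire once per second
    steps:
      - setBody:
          constant: "Hello World"
      - to: "log:info"               # print the body to the log
```

The Java and JavaScript variants express exactly the same three steps, just wrapped in the syntax of their language.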
But it is especially useful when you have very complex architectures with a lot of different components, so you can stop caring about how to interact with all of those components. And specifically, if you have a very dynamic architecture where you have to add and remove steps of your workflow — or you want to be able to add and remove steps easily, like replacing an FTP server with S3, or replacing a database with Elasticsearch, or whatever you can think of — seriously, using Apache Camel will help you a lot in making that very easy. And now, Zineb is going to talk to you about Camel Quarkus. Zineb? So you stop sharing so that I can share mine. Let me know if you see my screen, because I don't see it. I can see it. Oh, cool. So here comes my part. I'm going to introduce you to the Camel Quarkus project. It's an Apache Camel sub-project that brings all the awesome integration capabilities of Apache Camel, and all the components available in the Apache Camel project, to the Quarkus platform. To explain how awesome this project is, let's have a quick overview of what Quarkus is and see how Camel fits into this platform. So Quarkus is a Kubernetes-native Java stack tailored for GraalVM and OpenJDK HotSpot. Its main goal is to make Java run better in modern, cloud-native, microservices and serverless architectures. It really focuses on Kubernetes ecosystems and how things run in containers. We say that Quarkus is supersonic, subatomic Java because it addresses the two main problems that the Java language has in container-based architectures, which are the memory footprint and the startup time. So it's supersonic because it's way faster at startup than traditional Java projects. 
We can see here in the slide two examples comparing a traditional Java application, Quarkus on the JVM, and Quarkus in native mode, for the time to first response, which includes the boot of the app plus the time to the first response from a REST endpoint. Here we have an example of a plain REST endpoint, and here a REST endpoint that does something in a database. We can see that when we run Quarkus on the JVM, we already have a big difference in startup time, but when we run in native mode via GraalVM, the difference is really impressive. So in environments like Kubernetes and OpenShift, we can gain a lot in these styles of architecture, where we have to deploy our apps very frequently and we need to scale them up and down very quickly. And it's subatomic because of the lower memory footprint. Here again we have a comparison between Quarkus native, Quarkus on the JVM, and a traditional Java stack, and we can see that we really gain a lot in memory footprint. Another benefit is developer joy. The Quarkus ecosystem puts a lot of focus on the developer experience and on making things very easy. There's also this awesome live reload, which is game-changing from a Java developer's point of view: you can just run your code in dev mode, save the code you just edited, and it automatically refreshes without you doing anything. In a Java ecosystem, that is really something amazing. And it supports a very large set of standards and lots of libraries — all the well-known libraries, you can find them there, and you can really do everything you want in a Quarkus app. And of course there is our project, Camel Quarkus, which is already available on the platform, so you can have all the integration capabilities of Apache Camel on Quarkus. 
And you can do your integration with Quarkus, and your app with this integration will be well suited for a Kubernetes environment, taking advantage of all the performance that comes from Quarkus. So your Camel connectors will have a faster startup, faster scale up and down, and lower memory usage. And there are already lots of extensions. If we go to the Apache Camel website, in the Camel Quarkus section there is a page about all the extensions we have, and we can see that there are already more than 300 extensions, with all the information about every extension. So, like in Camel itself, there are 300-something, which means roughly everything that exists in Camel is already available to use on the Quarkus platform. And of course we benefit from the same developer experience as any Quarkus development. So now it's demo time. With Rachel and Maria we built a demo that we're going to assemble in different steps during this presentation. We're going to have three different connectors. One of them is going to pull data from Telegram and push it to a Kafka topic, another goes from Twitter to a Kafka topic, and then we will have another one that takes the aggregated data from Kafka to Elasticsearch for future data science usage. The idea of the whole demo is to show you different types of Camel connectors that we can build for OpenShift. For the first part, I'm going to show you Twitter to Kafka, and I'm going to use Camel Quarkus for this. This is going to be the part where I write some Java code, but if you are not a Java developer, don't worry: it's super accessible, and stay tuned for the rest of the presentation. So now let's go for some coding. 
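To give a feel for how small these connectors are, here is a rough sketch of the final Kafka-to-Elasticsearch leg in Camel's YAML DSL. The topic name, property placeholders, cluster name and index name are invented for illustration; the elasticsearch-rest component options follow Camel's documented URI syntax:

```yaml
# Consume the aggregated sightings from Kafka and index them in Elasticsearch
- from:
    uri: "kafka:{{kafka.topic}}?brokers={{kafka.brokers}}"
    steps:
      - to: "elasticsearch-rest:sightings-cluster?operation=Index&indexName=camel-sightings&hostAddresses={{es.hosts}}"
```

Each of the three connectors in the demo is this small; the only parts that change are the endpoint URIs at either end.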
So I have my application here, and what I wanted to show is that I created it from code.quarkus.io. I just selected the Camel extensions that I want, which are the Twitter one, the Kafka one, and, since I need some logs, the Camel Log one, and I just downloaded the zip, and it created an app for me. In the Maven POM file it already has all the dependencies that I need. It also has the build info, like the native profile to build my native app, and the Dockerfiles, so I don't have to take care of any of that. I can just pick up the app, and it comes with a first REST endpoint just to test, so I can run the code right away and have my first REST endpoint. I don't need that REST endpoint, though, so I'm just going to delete the class. What I want to do is build a Twitter route. I should mention — I hope I'm not going too fast — that I've already added some properties here. These are the keys that the Twitter component needs to access my account, my developer account at Twitter, so that I can go and do some searches on the tweets. So what I'm going to do is start with the Camel route that does a search on Twitter, and I'm going to run it in dev mode, so we're going to see the code refresh automatically. So I'm going to create my route, and here it's twitter-search. If I make some mistakes, let me know. What I'm going to search for is "Apache Camel", and as a first step I'm just going to log the result that I get from this consumer, so if it picks up something new, I will know it. Generally, the consumer takes the last five tweets that contain "Apache Camel", and then if there are new tweets, we will see some new tweets coming. So that I have a bigger terminal, I'm going to my terminal, and I'm just going to run the Maven compile command. 
With quarkus:dev it's building my app, and — yeah, that was quick — the app started. It picked up the environment variables I set; what I didn't say about the properties is that those values are already available as environment variables, so I just put the names and they get resolved from my environment. So the route twitter-search has already started, and here it logs the last tweets that contain Apache Camel. What I want for our demo is to know, on the Kafka topic, whether a message comes from Twitter or Telegram, so I'm just going to change the body of the message. I'm going to leave my app running here, go back to my route, and add a setBody step. I'm going to transform it — but here I'm just going to copy-paste so that I don't make any mistakes. What I'm doing is building a JSON message: I'm putting the body in the sighting property, and I'm telling it that it comes from Twitter. So now if I go back here, IntelliJ auto-saves my code, and we see that it restarted without me doing anything, and now my messages are in that JSON format. So now what I want to do is push this data to my Kafka topic. So I add a to(), which is my last part, and I'm going to say it's Kafka, and I just paste the name of the topic so that it works. If I go here, it actually reloaded, but I have an error, because I don't have access to my Kafka topic from my machine. So what I'm going to do is stop it, and this time build and deploy to my OpenShift project. Just to show you: I am already connected to my project here, so I can do mvn clean package and tell it that I want to deploy on Kubernetes, and I'm going to show you how this is magically done. What I've done is add two dependencies. 
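Putting the steps above together, the route built in the demo looks roughly like this — shown here in Camel's YAML DSL rather than the Java used on screen, with made-up property placeholders for the topic and brokers, and naive string-based JSON building kept for the sketch:

```yaml
# Search Twitter, wrap each tweet in a small JSON envelope, push to Kafka
- from:
    uri: "twitter-search:Apache Camel"
    steps:
      - log: "${body}"                   # first step of the demo: just log the result
      - setBody:
          simple: "{ \"sighting\": \"${body}\", \"source\": \"twitter\" }"
      - to: "kafka:{{kafka.topic}}?brokers={{kafka.brokers}}"
```

The Twitter credentials don't appear in the route at all; as described in the demo, the component reads them from configuration properties resolved from environment variables.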
One is the quarkus-openshift extension, which lets me deploy my app to OpenShift, and I also added the container-image-docker extension, because I personally want my app in a container before I push it. And here I have some configuration about how my image gets built, plus some Kubernetes settings like the name I want to give to my app. So it has already built my image and sent it to my OpenShift. Let's go and see. Here I am in my OpenShift view. In the Administrator perspective, I wanted to show you that we have already installed the Strimzi operator for Kafka. If we go here, we have a cluster, and if we look for the topics, here is the topic we're going to use. So I'm going to go to my Developer view — and my app is failing. I should say that I also put up a little application that just consumes from this topic, so that I can get its logs; its code is just a consumer that reads from the Kafka topic and logs, which is there for me to see whether everything is arriving from my other app. But if I go back to the Topology, my app doesn't want to start. If I go to the logs, it's a problem with the properties — the Twitter access token and everything else are not available. What I did is create a secret in advance with all the variables I had on my computer: the Kafka bootstrap broker URL and the four keys and secrets that I need for my Twitter account. So I'm just going to add it to my app and save it. I go back to the Topology, and here it's running. So if I go to the log this time, there is no error — I don't know if you can see it clearly — and I have five tweets from the search that I pushed to Kafka. There is something I wanted to highlight here: the app with the Twitter Camel route started in 435 milliseconds. I'm going to write that down so that we can compare later, when we do the native mode, which is going to start even quicker. 
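The two dependencies mentioned above would appear in the POM roughly like this — versions are managed by the Quarkus BOM, so none are listed here:

```xml
<!-- Deploy the app to OpenShift -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-openshift</artifactId>
</dependency>
<!-- Build a container image with Docker before pushing -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-container-image-docker</artifactId>
</dependency>
```

With these in place, a single Maven build can produce the image and create the OpenShift resources, which is the "magic" referred to in the demo.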
So I didn't expect the consumer to already show some Kafka messages, but that's why: because we are running. I don't know if it got some new tweets. Maybe I can tweet something. Yeah, I am tweeting in the background; that's why you see things there. So now I'm going to show the native build. I'm not going to run it, because it takes some time, but it's actually the same command; we just activate the native profile that we already have in our app. And it takes longer to build because, like we saw in the slides, native mode gives us a lower memory footprint and faster startup, so there is a whole analysis done in this phase, where the code is examined and only what the app needs is kept. But I have already created a Docker image for it, so I'm just going to stop this one, copy the image name, go here, and this time add it as a container image. It pulled it from Docker Hub, it's recognized as a Quarkus app, and I can create it. So this is the native app. It has the same problem we had with the secrets, so I'm going to add the secret to my new app and save, and the deployment is very quick. And as you can see here, it's the same app, the same Camel route, but instead of starting in 435 milliseconds, it started in only 7 milliseconds. And it's already getting some tweets. So this is it. In the meantime, if someone wants to tweet something, we'll see it live. I can stop sharing, and it's over to Rachel now. Okay, thank you. Okay, can you see my screen? Yes. Okay. Great. So just a quick recap. So far, we've learned about the benefits of using an integration framework, and we learned that Camel is the absolute best and most robust integration framework in the whole wide world. We also learned about writing crazy-fast Java applications using Quarkus, and how we can use Quarkus extensions to leverage all of the benefits of Camel. So where does that leave us? 
When you look at the big picture, mainly the development process, it's a lot. It's a lot to learn and a lot to do. Because if you think about it, the majority of the time is spent handling dependencies and doing things like preparing for deployment to OpenShift or Kubernetes: you have to configure Docker or S2I, create a container, build the image — all of that can get pretty daunting. So we wanted to create something specifically made for serverless that is also smart enough to do those kinds of repetitive and time-consuming tasks for us. At the same time, we also wanted it to work natively on Kubernetes, and, even more importantly, we wanted to lower the barrier to entry, eliminate a lot of the associated complexity, and make it easier for people to learn and pick up. So naturally, the thought process behind all of this was that the Camel project needed to evolve a bit to accommodate these requirements, mainly to be able to work with serverless and microservices architectures. The thing is, we didn't want to reinvent the wheel, because Camel already solved a lot of the problems that integration developers have been facing for years. So one of the thoughts was: how can we modernize it for these architectural trends and changes? And just like with the Quarkus project, a sub-project of Apache Camel was created so that you get the same benefits from it — except this one is native to Kubernetes as well, and made specifically for serverless. And the result is called Camel K. So what exactly is Camel K, and how does it work? Camel K runs on top of Quarkus, so first of all it enables developers to write very small, fast Java applications like you just saw. One of the biggest benefits, I think, is that Camel K handles Camel dependencies for you, which is a huge win. And of course it also removes the need to configure Docker or S2I before deploying to OpenShift or Kubernetes. 
That means you can continue to focus on writing integrations, using the already pretty simple Camel DSL, or domain-specific language, with no need to worry at all about, you know, how you're going to package it, redeploy it, and that kind of thing. So it's straightforward to make a Kubernetes-native integration application using something like Camel K. Now, operators, as probably everybody here knows, are commonly used to install and configure applications or platforms, whether on Kubernetes or OpenShift. They're kind of the digital version of the traditional human operator who used to do all of this manually: installing dependencies and everything for applications, whether in a legacy environment or elsewhere, and making sure everything is in place for the application to be able to run and do its job. It's the same in Camel K, except it was really taken to the next level, because the operator is quite intelligent: it knows what you want to run, and it can understand the Camel DSL. This list here is just to give you a general idea of all the things the Camel K operator does and how much time it will really save you. The main responsibility of the Camel K operator is to look for Camel K integrations and to build and deploy them as Kubernetes applications. And it's just as straightforward as that. All of that is possible because of the Operator SDK. The operator performs the operations on Kubernetes resources that are needed to run the Camel DSL script, and part of that is that it defines several new Kubernetes APIs — it extends the cluster with custom resources. In other words, the operator scans your application and creates the resources you need in the cluster automatically. As for the three main concepts of Camel K: well, we already mostly discussed the Camel K operator. 
It's basically the intelligence that coordinates all of the moving parts, where each custom resource has its own dedicated state machine that orchestrates its phases. There's also the runtime, which provides the functionality to actually run the integration. And then there are traits. Traits are a somewhat more advanced concept that I won't go too far into, but the general idea is that you can customize the behavior of the operator and the runtime. Typically, for most people, the defaults are sufficient, but it's good to know that it's possible for an experienced user to modify them. To get started with Camel K, first you need to be logged into a cluster you have access to, you have to install the kamel binary and put it into your system path, and you need to run kamel install. That will configure the cluster for you with the custom resource definitions and install the operator in the namespace. And that's pretty much it — bing bang boom, you're done in under five minutes. Actually, if you don't want to deal with the CLI at all, you can just use the Camel K operator from the operator catalog on OpenShift, through OperatorHub, and it can also be installed via Helm. Writing your first Camel K integration is incredibly simple. The first thing you do is create your integration file. Camel K currently supports a bunch of languages — just off the top of my head: Java, Kotlin, Groovy, XML, even JavaScript. And that's quite important to somebody like me, because I have to use JavaScript more often than not, and I wanted something that was going to be easy to work with; so this was a very low barrier to entry for myself as well. From there you just deploy your integration with a single, memorable line — it's quite remarkable. And then you can view the integration in the console, if you're using one at all, and you can check out the logs and monitor its health. 
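As a sketch of how little is involved: save a route like the one below to a file (the file name here is made up) and deploy it with a single command such as "kamel run hello.yaml --dev" — the operator takes care of dependencies, the image, and the Kubernetes resources.

```yaml
# hello.yaml — a minimal Camel K integration file (hypothetical name)
- from:
    uri: "timer:tick?period=3000"     # fire every three seconds
    steps:
      - setBody:
          constant: "Hello from Camel K"
      - to: "log:info"
```

There is no project scaffolding, no POM, and no Dockerfile: the file above is the entire integration.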
But what's really cool about this, and I should probably say it, is that Camel K is able to materialize and start up integrations within just a few seconds. That helps a lot during the development phase, because you get immediate feedback on your code and can make changes right away. So you may be asking yourself: why serverless? What is the big deal with it? Well, I'm not here to convince you either way, but some of the touted benefits are listed here; mostly, nobody wants to have to predict their workloads. You can just scale up or down with a couple of commands or clicks of a button. And with the time you save, you're also able to get to market faster; if you need to respond to changes, you can do it more quickly. But let's see how Camel K handles this. More often than not, when you read about Camel K in an article or watch a video, you'll hear about it in the context of Knative, and that's because it works really well with it: Camel K provides a lot of features when it runs on Knative. If you're not familiar with Knative, I'm not going to go too in depth, but it basically gives you serverless capabilities on Kubernetes. There are three major areas in Knative. There's the build area, which provides you with custom resources, and the Knative Serving area, which is the part that helps you with auto-scaling and scale-to-zero, so that when there's no traffic, pods or containers can be reduced to zero replicas. And then there's the Knative Eventing area, which I think is most relevant to Camel K, where you subscribe to a channel and that channel pushes events towards your service. It gives you an easy way to trigger your functions and, at the same time, to orchestrate services. But I think the thing that really makes Camel K shine here is that your service just receives messages through incoming CloudEvents, which means you don't have to actively connect to the broker. 
So the service ends up being quite passive. And actually, the Knative trait automatically discovers the addresses of Knative resources and injects them into the running integration. And if you already have an existing Camel K integration, then it's possible to run it as a Knative serverless service. With serverless becoming a popular architectural style, you'll see many examples, but it's important, I think, to remember that you don't need to use Camel K only for serverless: using it on its own, or even just to deploy a Quarkus app, is a very common and useful thing to do. And don't get overwhelmed with all the technologies; just because they work really well together doesn't mean they're dependent on each other. Camel K also gives you the possibility to set up monitoring, and that can be done for both the integration and the operator. And I believe for the integration, if you have OpenShift, the Prometheus operator is already deployed as part of OpenShift monitoring; to monitor the operator, you would enable it at the moment you install Camel K. And then, of course, you can set up alerting and visualize the collected data using something like Grafana or some other API consumer. Quite important, too, is that Camel K helps with transformation. Adding a transformation is as simple as adding a one-liner to your Camel DSL, to your integration. Converting the outgoing body to uppercase would be an example: you just add it as a step, and you can have as many steps as you like. So I'll be doing just the teensiest demo, following the theme of adding camel sightings that will immediately end up in a Kafka topic. This time I'll be reporting my sightings through Telegram. I've already created the Telegram bot, but it's very easy to do — you can create one in under a minute or so. I'll leave the link in the slides. Okay. 
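For instance, the uppercase transformation mentioned above really is a single extra step in the DSL. A minimal sketch in the YAML flavor — the timer route around it is just filler for the example:

```yaml
- from:
    uri: "timer:tick?period=1000"
    steps:
      - setBody:
          constant: "a camel sighting"
      - setBody:
          simple: "${body.toUpperCase()}"   # the one-line transformation
      - to: "log:info"
```

Any number of such transformation steps can be chained between the from and the final to.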
So I've already got the integration loaded here. It's written in JavaScript, to change things up a bit, but don't get too jealous: they look almost exactly the same in Java. Okay, so this is just showing where we're starting from: the input is coming from a Telegram bot, with its authorization token, and then we set the body. And here we're marshalling it, converting it to JSON — but with Kafka, which is where we're sending it, that's not really necessary, because it does it automatically. Then over here is where we're piping it to the Kafka topic, setting the body again, and then we are sending back to Telegram: "Thank you for reporting your camel sighting." Okay. So it's as simple as running kamel run --dev telegram-sightings.js, if I spelled everything properly. The --dev flag basically just means that you're going to get tailed log output from the integration; it also synchronizes source changes and reloads the Camel context automatically, which I'll show you. It's doing a lot at this point, building an integration kit and so on. From here we go to the OpenShift console. We're in the Administrator view; let's go to the Developer view, the Topology view, and we can see here that the integration is running. Okay, and actually we have the logs right here. There's not much going on, so what we'll do is — we have Telegram open, right, and I've opened the chat with the sightings bot. And I'm just going to type something and say: oh my God, there's a camel in my house. Boom. "Thank you for reporting your camel sighting." I've contributed. So you can see here, in the log, that it does get sent right away. And if I have time, I just want to show you the Camel context being reloaded. So if I were to go back over here and change it to say "have a nice day"... Right. 
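The JavaScript integration walked through above can be sketched as follows — shown here in Camel's YAML DSL for readability, with a made-up topic name and property placeholders:

```yaml
# Receive a sighting from the Telegram bot, push it to Kafka, and reply
- from:
    uri: "telegram:bots?authorizationToken={{telegram.token}}"
    steps:
      - setBody:
          simple: "{ \"sighting\": \"${body}\", \"source\": \"telegram\" }"
      - to: "kafka:{{kafka.topic}}?brokers={{kafka.brokers}}"
      - setBody:
          constant: "Thank you for reporting your camel sighting"
      - to: "telegram:bots?authorizationToken={{telegram.token}}"
```

The final to on the same telegram endpoint is what sends the acknowledgement back to the chat.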
Of course, now it's complaining that the user cannot update. Right. That's okay, but you're going to have to take my word for it: it will update here and say "have a nice day". I will not stop this integration now just to show you that. Another thing is that you can also get the status at any point with `kamel get`; this will show you the running integrations. And of course I'm not going to do it right now, but you can also run `kamel delete` and it will delete whichever integration you specify. And with that, I will just point out that I've left a quick summary and some resources here, and I'll leave it to my colleague now to continue. Thank you. I'm going to share my screen again. I hope you're seeing my screen. So we have seen that if you want to build complex software architectures, you should use Apache Camel, because it makes it very easy to interconnect things. You have seen that, for example, creating a Telegram bot is really, really simple: in four lines of code you can create a Telegram bot, and that's it. You don't need anything else; you just need to add the code for your logic. Obviously, if you add commands to the Telegram bot, you will have to write the code that reacts to those commands. But the part about how to integrate with the Telegram bot, how the Telegram bot API works, we don't really need to know; we just rely on Camel. If Telegram updates their API, or how they interact with the bot, you don't care: you just upgrade Camel, Camel will take care of it, and it will work. So it really makes it very easy to develop things where different third-party components interact. Then we saw that Camel runs on Java, and Java is sometimes not the fastest, not the best for serverless. Don't worry, we have Camel Quarkus: we can run Camel over Quarkus, so for serverless it's fast, it's amazing.
And then Rachel told us that Camel K can help us with the DevOps side, and even the development side, so it's very easy to deploy on OpenShift or a Kubernetes-like cluster. So what else can we do to make our lives even easier? Well, that's where Kamelets appear. Kamelets are a concept that appeared, I think, in October of last year, so it's very, very new. The idea is that for creating a Camel workflow, a Camel integration, you usually use a lot of different pieces, and if you are focusing only on the logic of your use case, it may not be as nice as it could be. For example, if you are a scientist analyzing camel sightings all over the world through a Telegram bot and the Twitter API, you want to be able to integrate that with your machine learning platform, or whatever, easily. And you don't want to worry about how to interact with it. Maybe what you want is for some nice developer to create a Kamelet, a route snippet, that helps you create workflows faster. For example, imagine you have an API that your scientist is using to add new data or run analyses. But your scientist is not a developer, they're a scientist, so they don't really know how to call the API. Well, don't worry: you can create a Kamelet that, in a transparent way, greatly simplifies all these calls to the API, whatever the workflow is. Maybe it's not one step, maybe it's more than one step, but you can simplify it so your scientist can build their own workflows in a very easy way with not so many building blocks. So it's like a meta-block. This is similar to a Knative concept, I think. Camel usually has this consumer/producer distinction, depending on whether the endpoint is reading data from the outside or writing data out, and Kamelets have a similar idea: you have source Kamelets, which take data from the outside, and sink Kamelets, which write data somewhere. So it's like two different types of steps.
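To make the source/sink idea concrete, here is a minimal, hypothetical source Kamelet. The name, the `message` property, and the timer endpoint are made up for illustration, but the overall shape (a `Kamelet` resource with a `definition` and a `flow` ending in `kamelet:sink`) follows the Kamelet resource as it existed at the time of this talk:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: example-timer-source              # hypothetical Kamelet name
  labels:
    camel.apache.org/kamelet.type: "source"
spec:
  definition:                             # JSON-schema description of user-facing knobs
    title: "Example Timer Source"
    properties:
      message:
        title: "Message"
        type: string
  flow:                                   # the route the Kamelet hides from its user
    from:
      uri: "timer:tick"
      steps:
        - setBody:
            constant: "{{message}}"
        - to: "kamelet:sink"              # hands data to whatever sink it is bound to
```

The end user only fills in `message`; the Camel endpoint URIs stay hidden inside the Kamelet.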
You usually create a source Kamelet snippet that reads the data and a sink Kamelet snippet that writes the data, then you put first a source, then a sink, and join them. So you usually have only two steps in your workflows: you are pairing a source with a sink. The idea is to simplify workflows even more. At the beginning I told you this is open source, so if some connector is not available, maybe you want to create your own connector, but that means you have to implement it in Java. Maybe you are not a Java developer; maybe you are a YAML DevOps person, maybe you are a JavaScript developer, and you don't want to work with Java. But if there is a way of defining how this connector would work (for example, Telegram uses HTTP APIs behind the scenes), then, given the HTTP Camel connector, you could create a Kamelet that wraps how the HTTP connector interacts with the Telegram server and say: this is the Kamelet to interact with Telegram. So it's like an abstraction layer, a snippet of code, a meta-block. And with that in mind, I'm going to show you how to define a Kamelet binding. As I said, it's just a YAML file, very, very simple. On the slide it's split into three columns, but it's one single file. You start with a header in the YAML file saying this is a KameletBinding: I'm going to bind a source and a sink. Then you define which is going to be the source, maybe with some properties, authentication properties, and what is going to be the sink, with some authentication too. So in our use case we already have the Telegram-to-Kafka and the Twitter-to-Kafka parts, and now we want to collect everything that is sent to Kafka through Telegram and Twitter and store it in an Elasticsearch. That's the part I'm going to do with Kamelets. The Elasticsearch Kamelet is not released yet; it's only in the snapshot version, and it's going to be in the next release because it's already committed.
So it's there. And this code you see at the left is everything you need to write in your file to connect from Kafka to Elasticsearch. First there is the header saying, okay, this is a KameletBinding, and the name is going to be kafka-to-es-binding. Then you define the source, which is going to be a Kafka source, where you define the broker URL and the topic; no username or password in this case. Then the sink, which is where I'm going to write the data, which is going to be Elasticsearch. Here I have to define the URL, the cluster name, the index name (which is how Elasticsearch organizes data in the cluster you want to write to), and then the username and password. And this is the file; I think it looks better here with more colors. I'm not going to show you my password, but what I'm going to do is use this file with the proper password and add it to the cluster. This is the current topology. So if I add this, it should appear here any moment now. Here we have the new Kafka-to-Elasticsearch binding, which is already running, and I can see the logs. This is not going to read from the beginning of the Kafka topic, so everything that has been sent up till now is lost. But what I can do is, for example, tell the Telegram bot: I see a camel in my desk. And here it is, almost instantaneously: "I see a camel in my desk." I added some logging here: connecting to the cluster. Can you increase the font? Better? Yeah. "I see a camel in my desk." It comes from Telegram; I connected to Elasticsearch and stored it with this ID. So if I, for example, retweet something with Apache Camel, I should see that here when the Twitter side queries it, which I'm not sure is going to... here it is. From Twitter, I have this tweet. So let's check that this is really in Elasticsearch. Well, I have here things from last week when we were testing, but if, for example, I search the latest Telegram entries: "I see a camel in my desk." And if I search for the latest Twitter entries...
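Reconstructing from the description above, the binding file would look something like this. The addresses, topic, index, and credentials are placeholders, and the exact Kamelet names (`kafka-source`, `elasticsearch-index-sink`) should be checked against the Kamelet catalog version you have installed:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-to-es-binding
spec:
  source:
    ref:
      apiVersion: camel.apache.org/v1alpha1
      kind: Kamelet
      name: kafka-source
    properties:
      bootstrapServers: "my-cluster-kafka-bootstrap:9092"  # placeholder broker URL
      topic: "camel-sightings"                             # placeholder topic
  sink:
    ref:
      apiVersion: camel.apache.org/v1alpha1
      kind: Kamelet
      name: elasticsearch-index-sink
    properties:
      hostAddresses: "elasticsearch-es-http:9200"  # placeholder URL
      clusterName: "elasticsearch"                 # placeholder cluster name
      indexName: "camel-sightings"                 # placeholder index
      user: "elastic"                              # placeholder username
      password: "CHANGE_ME"                        # never commit real credentials
```

Applying a file like this with `oc apply -f kafka-to-es-binding.yaml` (or `kubectl apply`) is enough for the Camel K operator to pick it up and run it, which is why it shows up in the topology view shortly afterwards.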
The one I just retweeted, with the pineapple. And I don't know if we have already been talking for an hour, so maybe it's time to open up for questions, or to chat a bit about this. About Camel, we could talk for hours and hours, but I think it's better to do it like we did: just review a bit the state of the art of Camel, starting with classic, traditional Camel, moving to Camel Quarkus, then Camel K, then Kamelets, and then see what people want to hear about. I think a recap of where it's going would be great. So go for it. Well, what I see right now is that we are pushing a lot on the serverless side, much more than in previous years, and Camel Quarkus is very active; I think Zineb can say more about how active Camel Quarkus is. I'm right now working on the Kamelet side, which, as I said, is a concept that was born at the end of last year. There are some very good articles from Nicola explaining where this comes from and where it's going. And we only have around 20 Kamelets, compared with the 350-something connectors in Camel, so this is a very small subset, but I think it's improving a lot. And it's going to be very visual with this topology view in OpenShift, so you can very, very easily connect different snippets of code. In my experience, what I feel is that some people still think Camel is not easy to start with, even if I find it very easy now. And the Kamelet idea is going to push a lot on making it even easier, because you can now separate your development team, which creates the Kamelets, from the people who are going to use those Kamelets: maybe scientists, maybe analysts, whoever needs to build integrations. That sounds like it's going to be a really interesting community to work in too, creating the new Kamelets.
Where should people come to find a space to collaborate on creating new Kamelets? That might be a good thing to tweet or put into the chat. Yeah, of course, everything is open source, so we have the Apache Kamelets repository. Let's put it in the chat so we can tweet it later. This is the community repository of all the Kamelets that are there. I see that many have been added in the last days, even hours ago, so this is picking up a lot of speed, adding more Kamelets every day. And, of course, the Apache Camel website is the place you should go first for any kind of documentation. We have different doc spaces for Camel Quarkus, Kamelets and Camel K, and even the Camel Kafka Connector, which is something that also merges Kafka and Camel; it's there and has its uses, but we preferred not to introduce it in this talk because it would be just too much. Yeah, well, I think that means we're going to have to have you back to do another one. This has been amazing: the demos, and shining a light on these future integrations and what's available now, is pretty amazing. And you know, you shouldn't hesitate to get involved in the Camel and Quarkus universes; I think their time has arrived. Did you want to add any final words about where you see things going these days? What's next for your adventures? For me, I'm on the Camel Quarkus side. For now there is this Quarkus 2.x work that we were doing, and of course the whole team did an amazing job to have so many extensions there. There's still some work, because some of them are not yet ported to native, and there's always so much work. If you want to get involved in the community, just come and see and talk to us on the mailing list or the Zulip chat. And yeah, there's lots of work to do. There always is. And how about you, Rachel? Any final words that you want to slide in here to get people more involved or inspire them? Thanks.
I think, well, 1.4 was just released recently, and a lot of the focus has been around Kamelets, actually. So a lot of what Maria has said is where Camel K is moving towards. We just added the `kamel bind` subcommand as well, which helps you use Kamelets directly whenever you need them. And we're also exploring the user interface side of things, seeing what kind of tooling can be beneficial to people who, maybe it's not that they're less tech-savvy, but maybe they don't want to code; maybe they want a more visual way of working with integrations. That's about it. Well, as the Kamelet world spins and develops, we'll definitely have you back, and maybe a walkthrough of creating and contributing a Kamelet would be a great future topic to have you back on. I'm thrilled with the depth of the content and the demos; this is really good. It's one of the best overviews I've seen explaining the whole Camel universe, so thank you very much for this. I'm not seeing questions in the chat; I'm just going to pause and see if Chris has found any in all the other places where we're streaming. None yet. So you have answered all the questions, or you've left them with just enough mystery that they'll go off and explore for themselves. So thank you very much for taking the time. I know you're in London and Spain and France, and time zones are always a fun thing, but we totally appreciate you coming, and we'll definitely have you back. Yeah, this was a big demo, but it's great to break Camel down into these pieces and parts, very digestible. So thanks, and we'll talk to you all again soon. Thanks, everybody, for coming. Thanks for having me.