All right, well, everybody, welcome to yet another OpenShift Commons briefing. Today, as we like to do on Mondays, we take deep dives into new tech, and today is no different than most Mondays. We have a trio of folks here, and we're going to talk about integration in OpenShift, specifically integrations using Apache Camel and Apache Kafka. Zineb, Rachel, and Maria are here. I'm going to let them introduce themselves and tell us where they are in the world, and then there's going to be a whole lot of demoing going on today. So ask your questions in the chat wherever you are, whether you're on Twitch, Facebook, YouTube, or here in BlueJeans with us, and we will relay the questions and try to get them all answered for you. But without any further ado, please take it away. We have a full hour, so let's get started. Thanks, guys. Thank you. Hi. So, yes, we are three software engineers who work on Apache Camel in one way or another. We three work for Red Hat Engineering, and we want to tell you about how to do good integrations in OpenShift or any other cloud-native, Kubernetes-like cluster. So let's start with what integration frameworks are and how to do integrations the proper way. Usually when we talk about integrations, what we are talking about is when you are building a software architecture and you have different components: maybe databases, maybe APIs, maybe you want to connect to some FTP server or some custom service. And you have to define in your architecture what the workflow of data is going to be from one component to the next. Maybe you have to connect to some component, go to another component, and then go back; maybe it's a flow that is more linear.
But in the end, you have to define or consider for your software architecture not only the specific logic of how the data or the flow is going to go from one component to the other, but also how to connect to these components. If, for example, you want to connect to Salesforce, you have to learn how the authentication works, how the API works, what the formats are, what the protocols are. And then maybe you have to go to a database, and then you have to learn how the authentication works there, how connection pools work. And all this is usually a task that is repeated over and over again by many, many software engineers, by many, many developers. We write and rewrite the same lines of code for how to connect to one component, how to connect to another. And yes, it's true that usually each type of component has its own client library you can use. But even then, you have to learn how to use that client library, and you have to keep an eye on whether you have to upgrade it, maybe because the API of the component changed or maybe because there is some security issue that forces you to upgrade the library. And then your architecture starts getting bigger and bigger, hard to maintain, and very coupled, one component to the next. So this is what integration frameworks are for. They are the things in between components, so you don't have to worry about all of these things. You can forget about how each component works, how to interact with each component, or any issues you have to consider: for example, when you connect to some database, any considerations about encoding, authentication, or security. And integration frameworks, or at least good integration frameworks, should not only help you connect to the components; they should also help you define the workflow.
So for example, you can define that first you go to the A component, then you go to the B component, and now you have a conditional: maybe you go to the C component, maybe you go to the D component, it depends. And this is what the enterprise integration patterns are: different ways of defining workflows. Maybe it's a loop, maybe it's a conditional, maybe it's a broadcast. There are many different patterns for communication, for creating workflows. And this is what a good integration framework should give you: a way to connect to components transparently, in a decoupled way, easily, of course, and also a way to define the logic of the workflow cleanly. We want to talk to you about Apache Camel, and this is roughly how Apache Camel works on the inside. You have different endpoints that are specialized in connecting to external systems. It may be an endpoint that connects to a database, Twitter, Facebook, LinkedIn, whatever. This endpoint, when interacting with an external system, generates a datagram, which we call the exchange, which is the message. This message can have all kinds of data inside, and it can have headers and attributes: not only the response from the external system, but also some attributes to give context. And this message goes to the router, which decides what the following step is going to be and sends this message, this exchange, to the next endpoint, which will interact with an external system, return, generate a message, go again to the router, and the router decides what the next step is, et cetera, et cetera, until the flow finishes. Why do we like Apache Camel? Well, it's open source, which is always a good sign in software, but it's also very, very lightweight. It has more than 350 different types of connectors, which means it's difficult to find a use case that is not already covered. And if you find a use case that is not already covered, this is open source: you can create your own connector.
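As a rough illustration of the conditional routing described above, a content-based router in Camel's Java DSL might look like this. This is only a sketch: the endpoint URIs ("direct:orders", the JMS queues) and the "amount" header are hypothetical placeholders, not anything from the talk.

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of a content-based router, one of the enterprise integration
// patterns mentioned above. Endpoint URIs and the "amount" header are
// hypothetical placeholders.
public class OrderRouter extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:orders")
            .choice()
                .when(header("amount").isGreaterThan(1000))
                    .to("jms:queue:bigOrders")     // large orders go one way
                .otherwise()
                    .to("jms:queue:normalOrders"); // everything else goes another
    }
}
```

Each exchange arriving on the input endpoint is inspected by the router, which picks the next endpoint, exactly the loop described above.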
And the idea is that Camel offers you a domain-specific language to define the workflows, which is very, very simple. Each step is usually just one line of code, and you can forget about most of the implementation details of how to connect to that component. You can even easily replace one component with another. For example, at some point, instead of using an FTP server, you want to use an S3 storage system. It's very easy to change, because Camel is designed to make connecting different steps easy. This is why Apache Camel often gets called the building blocks of software: you can easily define how to connect one component with the next, you can connect them easily, but you can also replace one easily with another component and maybe have different flows. This allows you to focus on your use case logic, exactly on what you want to do, not on how to connect to some external system and learn how that external system works. These are hello world examples of Camel. Camel has its DSL, but it allows you to use different languages. For example, here in green, we see how it would look in JavaScript, which is just a timer that every second prints a hello world in a log. In blue, we see the Java version, which is exactly the same as the JavaScript, because it's the same DSL, but it has all the decoration of public class extends RouteBuilder that it needs to be interpreted as Java. And then in orange, we have a YAML version of the same thing. You can see you also have a from with a timer, a step that is setting a body to hello world, and then a log at info level. So it's very, very easy to define any type of workflow in Camel, and you can choose the language of your preference, the one that you are most comfortable working with.
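For reference, the Java flavor of that hello-world slide might look roughly like this. It's a reconstruction from the description, not the actual slide code; the timer options may differ slightly.

```java
import org.apache.camel.builder.RouteBuilder;

// Hello-world route as described on the slide: a timer fires every
// second, the body is set to "Hello World", and it goes to the log.
public class HelloWorldRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:hello?period=1000")
            .setBody(constant("Hello World"))
            .to("log:info");
    }
}
```

The JavaScript and YAML versions on the slide express the same three steps (from a timer, set body, log) in their own syntax.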
When to use Apache Camel? I would advise you to use it always, because unless you have a very, very contained application that doesn't interact with anything else, it really helps you keep the coupling very low, and it helps you keep your system very easy to maintain. But it is especially useful when you have very complex architectures with a lot of different components, so you can stop caring about how to interact with all these components. And specifically, if you have a very dynamic architecture, where you have to add and remove steps of your workflow, or you want to be able to add and remove steps easily, like replacing an FTP server with S3 or replacing a database with Elasticsearch or whatever you can think of, seriously, using Apache Camel will help you a lot in doing this very easily. And now, Zineb is going to talk to you about Camel Quarkus. Zineb? So you stop sharing so that I can share mine. Let me know if you see my screen, because I don't see it. I can see it. Oh, cool. So here comes my part. I'm going to introduce you to the Camel Quarkus project. It's an Apache Camel sub-project that brings all the awesome integration capabilities of Apache Camel, and all the components that are available in the Apache Camel project, to the Quarkus platform. So to explain how awesome this project is, let's have a quick overview of what Quarkus is and see how Camel fits into this platform. Quarkus is a Kubernetes-native Java stack tailored for GraalVM and OpenJDK HotSpot, and its main goal is to make Java run better in modern cloud-native microservices and serverless architectures. It really focuses on Kubernetes ecosystems and how things run in containers. We say that Quarkus is supersonic subatomic Java because it addresses the two main problems that the Java language has in those container-based architectures, which are the memory footprint and the startup time.
So it's supersonic because it's way faster at startup than traditional Java projects. We can see here in the slide two examples comparing a traditional Java application, Quarkus on the JVM, and Quarkus in native mode for the time to first response, which actually includes the boot of the app plus the first response time of the REST endpoint. So here we have an example of a REST endpoint, and here a REST endpoint that does something in a database. We can see that when we run Quarkus on the JVM, we already have a big difference in startup time. But when we are in native mode via GraalVM, the difference is really impressive. So in environments like Kubernetes and OpenShift, we can really gain a lot in those styles of architecture, where we have to deploy our apps very frequently and we also need to scale our apps up and down very quickly. And it's subatomic because of the lower memory footprint. Here again, we have a comparison between Quarkus native, Quarkus on the JVM, and a traditional Java stack, and we can see that we really gain a lot in memory footprint. Another benefit is developer joy. In the Quarkus ecosystem, they put a lot of focus on the developer experience and on making things very easy. There's also this awesome live reload that is game-changing from a Java developer's point of view: you can just run your code in developer mode, save the code you just edited, and it automatically refreshes without you doing anything. Again, in a Java ecosystem, it's really something amazing. And it has a very large set of standards and lots of libraries, all the well-known libraries; you can find them there, and you can really find your joy building a Quarkus app. You can do everything you want. And of course, there is our project, Camel Quarkus, which is already available in the platform.
You can have all the integration capabilities of Apache Camel that are available already in the Quarkus platform, and you can do your integration with Quarkus. Your app with this integration will be well suited for a Kubernetes environment, and it will take advantage of all the performance benefits that come from Quarkus. So you will have your Camel connectors with faster startup, faster scale up and down, and lower memory usage. And there are already lots of extensions. If we go to the Apache Camel website, in the Camel Quarkus section there is a page about all the extensions that we have. You can see here that we have already more than 300 extensions, and you have all the information about every extension. So, like in Camel, there are 300-something extensions already available to use in the Quarkus platform. And of course, we benefit from the same developer experience as Quarkus development. So now it's demo time. With Rachel and Maria, we built a demo that we're going to assemble in different steps during this presentation. What we're going to do is have three different connectors. One of them is going to pull data from Telegram and push it to a Kafka topic, and another one from Twitter to a Kafka topic. And then we will have another one that will take the aggregated data from Kafka to Elasticsearch for future data science usage. The idea of the whole demo is to show you different types of Camel connectors that we can build for OpenShift. So for the first part, I'm going to show you the Twitter to Kafka connector, and I'm going to use Camel Quarkus for this. This is going to be the part where I do some Java code, but if you are not a Java developer, don't worry. It's super accessible, and stay tuned for the rest of the presentation. So now let's go for some coding. So I have my application here.
And what I wanted to show is that I created it from code.quarkus.io. I just selected the Camel extensions that I want, which are the Twitter and the Kafka ones, and I need some logs, so I also took the Camel log extension. And I just downloaded the zip, and it created an app for me. The app already has everything in the Maven pom file: all the dependencies that I need, the build info, like the native profile to build my native app, and it also has the Dockerfiles. So I don't have to care about any of that; I can just pick up the app. And I have a first REST endpoint just to test, and I can actually just run the code and I will have my first REST endpoint. But I don't need this REST endpoint, so I'm just going to delete this class. What I want to do is to build a Twitter route. I just have to mention, and I hope I don't go too fast, that I have already added some properties here. Here I have the keys the Twitter component needs to access my account, my developer account at Twitter, so that I can go and do some searches on the tweets. So what I'm going to do is start the Camel route that is going to do a search on Twitter, and I'm going to do it in dev mode, so we're going to see that the code refreshes automatically. So I'm going to create my route, and here it's a Twitter search. If I make some mistakes, let me know. What I'm going to search for is "Apache Camel", and for a first step, I'm just going to log the result that I get from this consumer. So if it gets something new, I will know it. Generally, the consumer takes the five last tweets that contain "Apache Camel", and then if there are new tweets, we will see them coming. So that I have a bigger terminal, I'm going to my terminal, and I'm just going to run the Maven command, mvn compile quarkus:dev.
And it's building my app. And now, so here, yeah, it was quick. The app started and it picked up the environment variables that I set, because actually, what I didn't say is that the properties here are already available as environment variables; I just put the names, and I have them available in my environment. And so the twitter-search route already started, and here it logs the last tweets that have Apache Camel in them. What I want to do for our demo, so that we know on the Kafka topic whether a message comes from Twitter or Telegram, is to change the body of the message. I'm going to leave my app running here, go back to my route, and add a step to transform the body. Here, I'm just going to copy-paste so that I don't make any mistakes. What I'm doing is building a JSON message: I'm putting the body in a property, and I'm saying that it comes from Twitter. So now, if I go back here, IntelliJ will autosave my code, and we saw that it restarted without me doing anything, and now my messages are in that JSON format. So now, what I want to do is to push this data to my Kafka topic. So here, I'm going to add a to, which is my last part, and I'm going to say that it's Kafka, and I just pass the name of the topic so that it works. If I go here, it actually reloaded, but I have an error because I don't have access to my Kafka topic here on my machine. So what I'm going to do is stop it, and this time I'm going to build and deploy to my OpenShift project. Just to show you, I am already connected to my project here, so I can do an mvn clean package and tell it that I want to deploy on Kubernetes. And I'm going to tell you how this magic is done. What I've done is that I've added two dependencies.
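Putting the steps of this walkthrough together, the finished route might look roughly like the sketch below. The topic name "sightings", the property placeholder, and the exact JSON shape are assumptions, not the demo's actual code.

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of the demo route: search Twitter, wrap each tweet in a small
// JSON envelope tagged with its source, and push it to a Kafka topic.
// Topic name and property placeholder are assumptions.
public class TwitterToKafkaRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("twitter-search:Apache Camel")
            .setBody(simple("{\"text\": \"${body}\", \"source\": \"twitter\"}"))
            .to("kafka:sightings?brokers={{kafka.bootstrap.servers}}")
            .log("${body}");
    }
}
```

The credentials for the Twitter component come from application properties backed by environment variables, as described above, so nothing sensitive lives in the route itself.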
One is the Quarkus OpenShift extension that lets me deploy my app to OpenShift, and I also added the container-image-docker extension, because I personally want my app to be in a container before I push it. And here, I have some configuration about how I'm going to build my image, and some Kubernetes things like the name that I want to give to my app. Oh, so it has actually already built my image and sent it to my OpenShift. So let's go and see. So here, I am in my OpenShift view, and in the administrator view, I wanted to show you that we have already installed the Strimzi operator for Kafka. If we go here, we have a cluster, and we're looking for the topic; here is the topic that we're going to use. So I'm going to go to my developer view, and my app is failing. Here, I have put an application that is just consuming from this topic so that I can see the logs there. The code is here, actually: I just wrote a consumer that consumes from the Kafka topic and logs. This is just for me, to see if everything is coming through from my other app. And if I go back to the topology, my app doesn't want to start. If I go to the logs, it's actually a problem with the properties, like the Twitter access token and everything, that are not available. So what I did is that I already created a secret with all the variables that I had on my computer: the Kafka bootstrap broker URL and the four keys and secrets that I need for my Twitter account. So I'm just going to add it to my deployment, and I go back to the topology, and here it's running. So if I go to the log, this time there is no error, I don't know if you can see it clearly, and I have the five tweets from the search that I pushed to Kafka. There is something I wanted to show here: the app with the Twitter Camel route started in 435 milliseconds.
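The two extensions mentioned translate into a pom fragment roughly like this; versions are typically managed by the Quarkus BOM, so they are omitted here.

```xml
<!-- Sketch: the two dependencies described above, for deploying to
     OpenShift with a Docker container image build -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-openshift</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-docker</artifactId>
</dependency>
```

With these in place, a single Maven package run can build the image and push the deployment to the connected cluster, which is the "magic" being described.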
I'm just going to write that down so that we can compare later, when we do the native mode that is going to start quicker. So I didn't expect that the consumer already has some Kafka messages, but this is why. So I don't know if it got some new tweets; maybe I can tweet something. Yeah, I am tweeting in the background, so that's why you have things there. So I'm going to show the native build. I'm not going to run it, because it's going to take some time, but actually, it's the same command, and what we do is pass the native profile that we already have in our app. It's going to take more time to build because, like we saw in the slides, when we are in native mode we have a lower memory footprint and faster startup, so there is a whole lot of work done in this phase to analyze the code and keep just what the app needs. But I have already created a Docker image for it, so I'm just going to stop this one, maybe, and copy the name. I'm going to go here, and this time, I'm going to add it as a container image, so it gets it from Docker Hub, and it is a Quarkus app. I can create it. So this is the native app. It has the same problem that we had with the secrets, so I'm going to add the secret to my new app and save. And the deployment was very quick. And as you can see here, it's the same app, the same Camel routes, but instead of starting in 435 milliseconds, it started in only 7 milliseconds. And it's already getting some tweets. So this is it. In the meantime, if someone wants to tweet something, we'll see it live. I can stop sharing, and it's Rachel's turn now. OK, thank you, Zineb. OK, can you see my screen? Yes. OK, great. So just a quick recap. So far, we've learned about the benefits of using an integration framework. We learned that Camel is the absolute best and most robust integration framework in the whole wide world.
We also learned about writing crazy fast Java applications using Quarkus, and how we can use Quarkus extensions to leverage all of the benefits of Camel. So where does that leave us? Well, when you look at the big picture, mainly the development processes, it's a lot. It's a lot to learn, it's a lot to do. Because if you think about it, the majority of the time is spent handling dependencies and doing things like preparing for deployment to OpenShift or Kubernetes. You have to configure Docker or S2I, you have to create a container, build the image. All of that can get pretty daunting. So we wanted to create something specifically made for serverless that is also smart enough to do those repetitive and time-consuming tasks for us. At the same time, we also wanted it to work natively on Kubernetes. And even more importantly, we wanted to lower the barrier to entry, to eliminate a lot of the associated complexity, and to make it easier for people to learn and pick up. So naturally, the thought process behind all of this was that the Apache Camel project needed to evolve a bit to accommodate these requirements, mainly to be able to work with serverless and microservices architectures. The thing is that we didn't want to reinvent the wheel, because Camel had already solved a lot of the problems that integration developers have been facing for years. So one of the thoughts was, well, how can we modernize it for these architectural trends and changes? And just like with the Quarkus project, a sub-project of Apache Camel was created so that you get the same benefits from it, except this one is native to Kubernetes as well, and specifically for serverless. As a result, it's called Camel K. So what exactly is Camel K and how does it work? Camel K runs on top of Quarkus. First of all, it enables developers to write very small, fast Java applications like you just saw.
One of the biggest benefits, I think, is that Camel K handles Camel dependencies for you, which is a huge win. And of course, it also removes the need to configure Docker or S2I before deploying to OpenShift or Kubernetes. That means you can then continue to focus on writing integrations, just using the already really simple Camel DSL, or domain-specific language. No need to worry at all about, you know, how am I going to package it, redeploy it, and that kind of thing. So it's straightforward to make a Kubernetes-native integration application using something like Camel K. So operators, as probably everybody here knows, are commonly used to install and configure applications or platforms, whether on Kubernetes or OpenShift. They're kind of the digital version of the traditional human operator who used to do all of this manually: they would install dependencies and everything for applications, whether in a legacy environment or elsewhere, just making sure everything is in place for the application to be able to run and do its job. So it's the same in Camel K, except it was really taken to the next level, because the operator is quite intelligent and knows what you want to run. It can understand the Camel DSL. This list here is just to give you a general idea of all of the things that the Camel K operator does and how much time it will really save you. The main responsibility of the Camel K operator is to look for Camel K integrations deployed using Camel, and to build and deploy them as Kubernetes applications. And all of that is really possible because of the Operator SDK. So it basically performs the operations on the Kubernetes resources that are needed to run the Camel DSL script, and part of that is that it defines several new Kubernetes APIs that extend the cluster with custom resources.
So in other words, the operator scans your application and automatically creates the resources that you need in the cluster. The three main concepts of Camel K: well, we already discussed the Camel K operator; it's basically the intelligence that coordinates all of the moving parts, where each custom resource has its own dedicated state machine that orchestrates its phases. There's also the runtime, which provides the functionality to actually run the integration. And then there are traits. I won't go too far into traits, as they're a more advanced concept, but the general idea is that you can customize the behavior of the operator and the runtime. Typically, for most people, the defaults are sufficient, but it's good to know it's possible for an experienced user to modify them. So to get started with Camel K, first you need to be logged into a cluster you have access to, you have to install the kamel binary and put it in your system path, and you need to run kamel install. That will configure the cluster for you with custom resource definitions and install the operator in the namespace. And that's pretty much it: bing bang boom, you're done in under five minutes. And actually, if you don't want to deal with the CLI at all, you can just use the Camel K operator from the operator catalog on OpenShift, through OperatorHub, or it can be installed via Helm. So writing your first Camel K integration is incredibly simple. The first thing you do is just create your integration file. Camel K currently supports a bunch of languages, just off the top of my head: Java, Kotlin, Groovy, XML, even JavaScript. And that's quite important to somebody like me, because I have to use JavaScript more often than not, and I wanted something that was going to be easy to work with and that wasn't like a joke. So this was a very low barrier to entry for myself as well.
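The getting-started flow just described can be sketched as a few CLI commands. Only commands mentioned in the talk are shown; the integration file name and the resulting integration name are placeholders.

```shell
# install the CRDs and the Camel K operator into the current namespace
# (assumes you are already logged in to a cluster)
kamel install

# deploy an integration file with a single command
kamel run Hello.java

# during development, --dev tails the pod logs and live-reloads on save
kamel run --dev Hello.java

# check the status of running integrations, or remove one
kamel get
kamel delete hello
```

The operator picks the file up, builds it, and deploys it as a Kubernetes application, so there is no Docker or S2I configuration on the developer's side.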
So from there, you just deploy your integration with a single, memorable line of code, quite remarkable. And then from there you can view the integration in the console, if you're using one at all, and you can just check out the logs and monitor it. But what's really cool about this, and I should probably say it, is that it's able to materialize and start up the integrations within just a few seconds. That helps a lot during the development phase, because you get immediate feedback on code and you can make changes right away. So you may be asking yourself, why serverless? What is the big deal with it? Well, I'm not here to convince you yes or no, but some of the benefits are listed here. Mostly, nobody wants to have to predict their workloads; you can just scale up or down with a couple of commands or clicks of a button. With the time that you save, you're also able to get to market faster, and if you need to respond to changes, you can do it more quickly. But let's see how Camel K handles this. So more often than not, when you read about Camel K in an article or you watch a video, you'll hear about it within the context of Knative, and that's because it works really well with it. Camel K provides a lot of features when it's run on Knative. If you're not familiar with Knative, I'm not going to go too in depth, but it basically gives you serverless capabilities on Kubernetes. There are three major areas in Knative. There's the build area, which provides you with custom resources for builds. There's the Knative Serving area, which is the part that helps you with auto-scaling and scale to zero, so that when there's no traffic, pods or containers can reduce to zero replicas. And then there's the Knative Eventing area, which I think is more specific to Camel K, where you subscribe to a channel and that channel pushes events towards your service. It gives you an easy way to trigger your functions and to orchestrate services.
But I think the thing that really makes Camel K shine here is that your service also just receives the messages through incoming CloudEvents, which means that you don't have to actively connect to the broker, so the service ends up being quite passive. And actually, the Knative trait automatically discovers the addresses of Knative resources and injects them into the running integration. And if you already have an existing Camel K integration, it's possible to run it as a Knative serverless service. So with serverless becoming a popular architectural style, you'll see many examples, but it's important to remember that you don't need to use Camel K only for serverless. Using it alone, or even just to deploy a Quarkus app, is a very common and useful thing to do. And don't get overwhelmed with all the technologies: just because they work really well together doesn't mean that they're dependent on each other. Camel K also gives you the possibility to set up monitoring, and that can be done for both the integration and the operator. I believe for the integration, if you have OpenShift already, the Prometheus operator is already deployed as part of the OpenShift monitoring. To monitor the operator, you would just set it up at the moment that you're installing Camel K. And then of course, you can set up alerting, and you can visualize the collected data using something like Grafana or some other API consumer. Quite important is also that Camel K helps with transformation. Adding a transformation is as simple as adding a line or two to your Camel DSL, to your integration. Converting the outgoing body to uppercase would be an example: you would just add it as a step, and you can have as many steps as you like. So what I'll be doing in just the teensiest demo is following the theme of adding camel sightings that will end up in a Kafka topic. Not eventually, immediately end up in a Kafka topic.
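As a sketch of that uppercase transformation, here is how it might look as one extra step in a route. The timer endpoint and the body text are placeholders for illustration, not from the demo.

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: converting the outgoing body to uppercase is a single extra
// line in the route. Endpoint URI and body text are placeholders.
public class UppercaseRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:tick?period=1000")
            .setBody(constant("hello from camel k"))
            .setBody(simple("${body.toUpperCase()}")) // the transformation step
            .log("${body}");
    }
}
```

You can chain as many such steps as you like; each one is just another line in the DSL.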
This time, I'll be reporting my sightings through Telegram. I've already created the Telegram bot, but it's very easy to do; you can create one in under a minute or so, and I'll leave the link in the slides. So I've already loaded the integration here. It's written in JavaScript, to change things up, but don't get too jealous: it looks almost exactly the same in Java. So here, this is just showing where we're starting from. The input is coming from a Telegram bot, with our authorization token here. We set the body, and here we're marshalling it, converting it to JSON, because Kafka is where we're sending it; this is not really necessary, because it does it automatically. But then over here, this is where we're piping it to the Kafka topic, setting the body, and then sending back to Telegram: thank you for reporting your camel sighting. So to run it, we would just do a kamel run --dev telegram-sightings.js, if I spelled everything properly. The dev flag basically means that you're going to get tailed log output from the integration's Kubernetes pod. It also synchronizes the source changes and reloads the Camel context automatically, which I'll show you. It's doing a lot at this point, building an integration kit and so on. So from here, we go to the OpenShift console; we're in the administrator view, let's go to the developer topology view, and we can see here that the integration is running. OK, and actually, we have the logs here; there's not much going on, so what we'll do is, we have Telegram open, and I've opened the chat with the sightings bot, and I'm just going to type something and say: oh my god, there's a camel in my house. Boom, thank you for reporting your camel sighting. I've contributed.
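Since the speaker notes the Java version looks almost the same as the JavaScript one, here is a hedged Java sketch of that telegram-sightings integration. The token placeholder, topic name, and JSON envelope are assumptions, not the demo's actual code.

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of the telegram-sightings integration: read messages from the
// bot, wrap them in a JSON envelope tagged "telegram", push them to the
// Kafka topic, then reply to the reporter on Telegram.
// Token placeholder and topic name are assumptions.
public class TelegramSightings extends RouteBuilder {
    @Override
    public void configure() {
        from("telegram:bots?authorizationToken={{telegram.token}}")
            .setBody(simple("{\"text\": \"${body}\", \"source\": \"telegram\"}"))
            .to("kafka:sightings")
            .setBody(constant("Thank you for reporting your camel sighting!"))
            .to("telegram:bots?authorizationToken={{telegram.token}}");
    }
}
```

Note how the route both consumes from and replies to Telegram; the exchange carries the chat context, so the reply goes back to the user who reported the sighting.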
So you can see here in the log that it does get sent right away. And if I have time, I just want to show you the Camel context being reloaded. So if I go back over here and change it to say "Have a nice day"... right, of course, it's not updating, a "user cannot update" error. All right, that's okay; you're going to have to take my word for it, it will update here and say "Have a nice day." I won't stop this integration now just to show you that. But another thing is that you can also get the status at any point with `kamel get`, which will show you the running integrations, and of course I'm not going to do it right now, but you can also run `kamel delete` and it will delete whichever integration you specify. And with that I will leave you here, just pointing out that I've left a quick summary and some resources, and I'll hand over to my colleague now to continue. Thank you.

I'm going to share my screen again; I hope you're seeing it. So we have seen that if you want to build complex software architectures you should use Apache Camel, because it makes it very easy to interconnect things. You have seen, for example, that creating a Telegram bot is ridiculously simple: in four lines of code you can create a Telegram bot, and that's it, you don't need anything else. Well, you need to add the code for your own logic, obviously; if you add commands to the Telegram bot, you will have to write the code that reacts to those commands. But the part about how to integrate with the Telegram bot, what the Telegram bot API looks like, we don't really need to know; we just rely on Camel. If Telegram updates their API, or how they interact with the bot, you don't care: you just upgrade Camel, Camel will take care of it, and it will work. So it really makes it very easy to develop things where different third-party components interact. Then we saw that, well, Camel runs on Java, and Java sometimes isn't the fastest, not the best fit for serverless. Don't worry, we have Camel Quarkus: we can run Camel on Quarkus, so it's serverless, it's fast, it's amazing. And then Rachel showed us that Camel K can help us with all the DevOps side, and even the development side, so it's very easy to deploy on OpenShift or any Kubernetes-like cluster. So what else can we do to make our lives even easier? Well, that's where Kamelets appear. Kamelets are a concept that appeared, I think, in October last year, so it's very, very new. The idea is that for creating a Camel workflow or integration you usually use a lot of different pieces, and if you are focusing only on the logic of your use case, that may not be as nice as it could be. For example, if you are a scientist analyzing camel sightings all over the world through a Telegram bot and the Twitter API, you want to be able to integrate that with your machine learning platform, or whatever, easily, and you don't want to worry about how to interact with it. Maybe what you want is for some nice developer to create a Kamelet, a route snippet that helps you build workflows faster. For example, imagine you have an API that your scientist is using to add new data or run analyses. Your scientist is not a developer, they're a scientist, so they don't really know how to call the API. Well, don't worry: you can create a Kamelet that, in a transparent way, greatly simplifies all these calls to the API, or whatever the workflow is; maybe it's not one step, maybe it's more than one step, but you can simplify it so your scientist can build their own workflows in a very easy way, with not so many building blocks. So it's like a meta-block. This is a very Knative-like concept, I think. Camel usually has this consumer/producer distinction, depending on whether the endpoint is reading data from the outside or writing data, and Kamelets have a similar idea: you have source Kamelets, which take
data from the outside, and sink Kamelets, which write data somewhere. So it's like two different types of steps: you usually create a source Kamelet snippet that reads the data and a sink Kamelet snippet that writes the data, you put first a source, then a sink, and join them, so you usually have only two steps in your workflow; you are pairing a source with a sink. The idea is to simplify workflows even more. At the beginning I told you this is all open source, and if some connector is not available, maybe you want to create your own connector; but that means you have to implement it in Java. Maybe you are not a Java developer; maybe you are a YAML-writing DevOps person, maybe you are a JavaScript developer, and you don't want to work with Java. But if there is a way of defining how that connector would work... for example, Telegram uses HTTP APIs behind the scenes, so with the HTTP Camel connector you could create a Kamelet that wraps how the HTTP connector interacts with the Telegram server and say: this is the Kamelet to interact with Telegram. So it's like an abstraction layer, a snippet of code, a meta-block. And with that in mind, I'm going to show you how to define a Kamelet binding. As I said, it's just a YAML file, very, very simple. On the slide it's split into three columns, but it's one single file. You start with a header declaring "this is a KameletBinding, I'm going to bind a source and a sink," then you define what the source is going to be, with maybe some properties, authentication properties, and what the sink is going to be, with some authentication too. In our use case we already have Telegram to Kafka and Twitter to Kafka, and now we want to collect everything that is sent to Kafka through Telegram and Twitter and store it in Elasticsearch, and that's the part I'm going to do with Kamelets. The Elasticsearch Kamelet is not released yet; it's only in the snapshot version, but it's going to be in the next release, because it's already committed, so it's there. And this code you see on the left is everything you need to write in your file to connect from Kafka to Elasticsearch. First is the header saying, okay, this is a KameletBinding and the name is going to be kafka-to-es-binding. Then you define the source, which is going to be a Kafka source, where you define the URL and the topic, with no username or password in this case. And then the sink, where I'm going to write the data, which is going to be Elasticsearch, and here I have to define the URL, the cluster name, the index name, which is how Elasticsearch identifies the index within the cluster you want to write to, and then the username and password. And this is the file; I think it's seen better here, with more colors. Of course I'm not going to show you my password, but what I'm going to do is just take this file, with the proper password, and add it to the cluster. This is the current topology, so if I add this, it should appear here any moment now... Here we have the new Kafka-to-Elasticsearch binding, which is already running, and I can see the logs. This is not going to read from the beginning of the Kafka topic, so everything that has been sent up till now is lost. But what I can do is, for example, tell the Telegram bot "I see a camel in my desk," and here, almost instantaneously: "I see a camel in my desk." I added some logging here: connecting to the cluster... (Can you increase the font? Better? Yeah.) "I see a camel in my desk," it comes from Telegram, I connected to Elasticsearch, and I stored it with this ID. So if I, for example, retweet something with Apache Camel, I should see that here when the Twitter source queries it, which I'm not sure... here it is: from Twitter, I have this tweet. So let's check that this is really in Elasticsearch. Well, I have here things from last week, when we were testing, but if, for example, I search for the latest Telegram entry: "I see a camel in my desk." And if I search for
the latest Twitter entry: the one I just retweeted, with the pineapple. And I don't know if we have already been talking for an hour, so maybe it's time to open it up for questions, or to chat a bit about this. We could talk for hours and hours about this topic, but I think it's better if we just review the state of the art of Camel: starting with trusty, classic, traditional Camel, moving to Camel Quarkus, then Camel K, then Kamelets, and then let's see what people want to hear about.

I think a recap of where it's all going would be great, so go for it.

Well, what I see right now is that we are pushing a lot on the serverless side, much more than in previous years. Camel Quarkus is very active; I think Zanib can say more about how active Camel Quarkus is. I'm currently working on the Kamelets side, which, as I said, is a concept that was born at the end of last year, and there are some very good articles from Nicola explaining where this comes from and where it's going. We only have around 20 Kamelets, compared with the 350-something connectors in Camel, so it's a very small subset, but it's improving a lot, and it's going to be very visual with this topology view in OpenShift, so you can very easily connect different snippets of code. And, I don't know, in my experience, or what I feel, is that some people still think that Camel is not that easy to start with, even if I find it very easy now. But okay, maybe it's not that easy to start with, and the Kamelet idea is going to push a lot on making it even easier, because you can now separate your development team, which creates the Kamelets, from the people who are going to use those Kamelets: maybe scientists, maybe analysts, whoever needs to build integrations.

That sounds like it's going to be a really interesting community to work in, too: creating the new Kamelets. Where should people come to find a space to collaborate on creating new Kamelets? That might be a good thing to tweet or put into the chat.

Yes, of course, everything is open source, so we have Apache Kamelets; let's put the link in the chat so we can tweet it later. This is the community repository of all the Kamelets that exist. I see that many have been added in the last days, even hours ago, so this is picking up a lot of speed, with more Kamelets added every day. And of course the Apache Camel website is the place to go first for any kind of documentation; we have separate subprojects there for Camel Quarkus, Kamelets, and Camel K, and even Camel Kafka Connector, which is something that also merges Kafka and Camel and has its uses, but we preferred not to introduce it in this talk because it would be just too much.

Well, I think that means we're going to have to have you back to do another one, because this has been amazing: the demos, and shining a light on these future integrations and what's available now, is pretty amazing. And you know, you shouldn't hesitate to get involved in the Camel and Quarkus universes; I think their time has arrived. Zanib, did you want to add any final words about where you see things going these days? What's next for your adventures?

For me, I'm on the Camel Quarkus side for now. There is this Quarkus 2.x that we were working on, and of course the whole team did an amazing job to get so many extensions in there, and there's still some work, because some of them are not yet ported to native. There's always so much work, and if you want to get involved in the community, just come and talk to us on the mailing list or on the Zulip chat; there's lots of work to do.

There always is. And how about you, Rachel, any final words that you want to slide in here to get people more involved, or to inspire them?

Thanks. I think, well, 1.4 was just released recently, and a lot of the focus has been around Kamelets, actually, so a lot of what Maria has described is where Camel K is moving towards. We just added the `bind` subcommand as well, which helps you use Kamelets directly whenever you need them. And we're also exploring the user interface side of things, seeing what kind of tooling can be beneficial to people who, maybe it's not that they're less tech-savvy, but maybe they don't want to code and just want a visual way of working with the integrations. That's about it.

Well, as the Kamelet world spins and develops, we'll definitely have you back, and maybe even a walkthrough of creating and contributing a Kamelet would be a great future topic to have you back for. I'm thrilled with the depth of the content and the demos; this is really good, one of the best overviews I've seen explaining the whole Camel universe, so thank you very much for this. I'm not seeing questions in the chat; I'm just going to pause and see if Chris has found any in all the other places where we're streaming. None yet, so you have answered all the questions, or you've left them with just enough mystery that they'll go off and explore for themselves. So thank you very much for taking the time. I know you're in London and Spain and France, and time zones are always a fun thing, but we totally appreciate you coming, and we'll definitely have you back. It was a big demo, but it's a great thing to try to break Camel down into these digestible pieces and parts. So thanks, and we'll talk to you all again soon.

Thanks, everybody, for coming. Thanks for having us.