All right, everybody, and welcome to another Monday morning OpenShift Commons briefing. As we like to do on Mondays, we talk about upstream projects and how they interact with things like OpenShift. Today I'm really pleased to have Hugo Guerrero with me from Red Hat. He's going to walk us through tuning Kafka to speak with almost everything (I like that title) and talk about the Camel connectors for Apache Kafka. I'm going to let Hugo introduce himself and do his demos. Please ask questions in the chat; we will relay them to him and save some time at the end for live Q&A. All right, take it away, Hugo.

Awesome, thank you very much for hosting me. I'm really excited to share how we've been working with these two amazing Apache projects to build a better ecosystem. A little bit about me for people who don't know me: I'm Hugo, a Mexican currently based in the Massachusetts area in the U.S., and I'm a specialist in APIs and event-driven architectures at Red Hat, currently working as a developer advocate for Red Hat Integration. Over the next several minutes we're going to cover three or four specific topics. First we'll do a quick recap of what Apache Kafka is, what the challenges are, and a little bit about how integration is done with Apache Kafka. Then we're going to shift into Apache Camel. So we'll talk about Apache Kafka, we'll talk about Camel, and we'll talk about the Camel sub-projects that allow us to extend the Kafka ecosystem to other systems: the Camel Kafka Connector and the new Kamelets that are coming as part of Camel K.

So let's do a quick recap of what Apache Kafka is and why it's so important. Apache Kafka is a project that was created at LinkedIn around 2010. It was mainly focused on tracking website activity within the LinkedIn portal: all the clicks, who you are checking out, what jobs people are looking for, and so on. They noticed that their messaging systems were not keeping up with the volume of transactions they were handling on the site, so they decided to refactor and rethink, outside the box, the way to handle events happening in a web system. That's how they created Apache Kafka. It now has many uses. It's mainly positioned as a publish-subscribe messaging system, but it's also a data streaming platform, because it has grown some additional APIs. At its core, though, it is a distributed commit log: a very well-tuned broker that delegates most of the intelligence and activity to the producer and consumer clients and specializes only in receiving events and persisting them. It has also outgrown the core Apache project, and there are different components around it that have created a broader ecosystem. We have components for doing replication, like MirrorMaker; we have Kafka Connect, which we'll be talking about; we have the APIs; and also third-party tooling like HTTP bridges and so on. And the important thing here is that Apache Kafka by itself is not the goal. You are not alone; it's not just Kafka by itself. You won't really get any benefit just by having Kafka deployed; you need to go beyond that. You cannot be one single Jedi trying to fight and defeat the galaxy; you need some help for that. So let's talk about how we can get a little bit of help as Jedi.
So the first thing I mentioned is Kafka Connect. Kafka Connect was the first attempt from the Apache community to increase the scope and simplify the way to connect to Apache Kafka: to bring data, events, and messages in from other systems and also take those messages out. It basically wraps around the producer and consumer APIs, because that was the original way to connect to Kafka: you had to create your own application, code it yourself, and handle all these different things like offsets, how to commit, and so on. The idea was to create a reusable framework that easily handles the most common concerns of Kafka integration, like data conversion, with connectors to plug into other systems. They defined a well-structured API with a sink, the place where you're dumping data, and a source, where you connect to that faucet and get your events.

The problem was that, yes, they had the framework and the APIs, but you didn't have people actually writing connectors. Even within the Kafka ecosystem and the Apache project there were only the file sink and the file source, the reference connectors. So people had to develop and deploy their own plugins, their own connectors, to use this service. Once deployed, you get all the benefits of the framework: you can run it as a standalone service, or you can create a Kafka Connect cluster that manages the lifecycle of those connectors. You can also add some integration pieces, like the single message transformations that let you do some message handling, such as changing formats or manipulating some of your fields. So that's a very good use case. And remember, Apache Kafka was created around 2010, so Kafka Connect was focused heavily on virtual machines, servers, bare metal, and so on. So let's look at how we can move it into a Kubernetes space.

Remember, you're a Jedi, you're not alone, you need to talk to other Jedi. So you go out into the galaxy, gather some friends, and you quickly realize that you can assemble a team, but they are from different species and speak different languages. That's where the title of this session comes from: how can we tune Kafka to speak those languages? If you remember Star Wars, there's one way everybody can communicate with the others, and that's through a specialized device, like C-3PO, who is fluent in over six million forms of communication, so you can communicate however you need. That's what Apache Camel becomes: the adapter that helps us speak with all the different systems, either as a sink or as a source for our Apache Kafka cluster.

So let's go a little deeper into what Apache Camel is. Apache Camel is a project that has been built since the beginning to implement enterprise integration patterns.
It has been part of the Apache Foundation for several years and has grown to be considered the Swiss Army knife of integration frameworks, because it has a really good set of components for integrating with other systems. There are more than 350 different connectors, covering everything from Slack to Telegram to Twitter to AWS services and so on. It also lets you handle different data formats and protocols, like JSON, Avro if you want to serialize, WebSocket, HTTP, and so on. That's why the title says "almost everything": if you need to integrate with some system, it has almost certainly been addressed by Camel already, or if not, it's in the works, because it's something somebody doing integration has faced in the past, and most of those people end up at Apache Camel. It's an open source project, so you can also contribute to those connectors; it's a growing ecosystem all the time.

You have the connectors, the data formats, and the protocols, and then you can implement enterprise integration patterns. With the regular framework you can do complex things like content enrichers, splitters, aggregators, and so on. It's a very active project and community, one of the most active projects in the Apache Foundation in terms of commits. So it's a very lively project.

Camel takes these connectors and can now run in different environments. You can run on a traditional VM, it has integration with Kubernetes and OpenShift of course, and more recently with Knative for serverless integration. The idea is to have these pipelines, or flows, or routes, as we call them in Apache Camel, that take data from one place to different destinations, for example from a Kafka topic to an RPC endpoint. And we have different ways to implement those routes. There are what they call domain-specific languages, or DSLs, that let you implement these flows or routes using, for example, the Java DSL, which is very simple Java code that you can write. Or, if you don't want to go that way, you can use the XML DSL. Or, more recently, we even support things like the YAML DSL, where you can define your flows using YAML.

Going a little more in-depth on the Camel architecture (and please don't get too hung up on this slide; it's like looking at C-3PO without the shell), there are different pieces: the context, the routes, the filters, and the processors. But the important part is really the lower layer, where you have all the components. These components are what allow us to plug into all these systems, Slack, Telegram, and so on, either as a sink or a source; the rest of the machinery in the Camel framework then does the processing and the communication. However, we won't go that deep here. That level is for people who are building connectors or creating these integrations, which most of the time is the Camel team, but you can do the exact same thing. If you feel brave enough and you think it will benefit the community, you can go in-depth and start working with the internals. But to avoid going too deep into the details, what the Camel team did was create sub-projects that specialize the Camel framework into more reusable components. Today we will be covering two of those components.
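Before getting to those sub-projects, a quick aside from the editor: the YAML DSL mentioned a moment ago looks roughly like the sketch below. This is not from the talk; it assumes the camel-kafka and camel-http components, and the topic and service names are made up.

```yaml
# Sketch only: consume from a Kafka topic and forward each record to an HTTP service.
# "orders", the bootstrap address, and the service URL are placeholder names.
- from:
    uri: "kafka:orders?brokers=my-cluster-kafka-bootstrap:9092"
    steps:
      - log: "received: ${body}"                 # log every record body as it arrives
      - to: "http://inventory-service/orders"    # hand the payload to another system
```

The same flow could be written with the Java or XML DSLs; the YAML form is simply the one that Kamelets and Camel K lean on later in the talk.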
There are more than six sub-projects, but the ones we're really targeting for this integration with Kafka are two. The first one is the Camel Kafka Connector project, and the other one is Camel K and the Kamelets that come with it. They give us two different visions of how to connect to Kafka using Apache Camel and get the most out of this framework for doing your integration. What Camel Kafka Connector and Camel K did is basically take all the content of the framework, all the tooling and all the components, and optimize it to be easier to use without any real Java coding experience, adding layers on top of the Camel framework and tuning it to make it really smooth to work with Apache Kafka.

So let's get started with the first one: the Camel Kafka Connector sub-project. This sub-project is mainly focused on running with the same Kafka Connect APIs, the way we usually address Kafka connectivity in VM and bare metal deployments, where you get all the benefits of running a Kafka Connect cluster and delegate the whole lifecycle of your connector to it. For this, the Camel Kafka Connector project created a pool of Kafka connectors built on top of the Apache Camel framework; it basically reuses those Camel components, in a very simple way, as Kafka sinks and Kafka sources to be deployed on top of Kafka Connect. There's a thin layer that makes them easy to plug in, deploy, and manage with a Kafka Connect cluster. There's a list of all the available connector types, and at the end I will share some of these links with you; it's a project under the Apache Camel umbrella.

This looks very simple (it may be tiny on the slides), but it reuses the exact same type of configuration that you usually find in the Kafka Connect ecosystem. You define the name of your connector; you obviously need to have already downloaded the connector, so you download the jar file, deploy it to your Kafka Connect cluster, and then start configuring it. You can either deploy it as a standalone configuration using this kind of approach, or you can use the REST API when deploying in distributed mode on the Kafka Connect cluster. Then you define your key and value converters (not serializers, converters) to handle the different types of serialization for your data. And then you configure the specifics of the component: the topics you're going to be reading from or writing to, and the settings of each component. For example, for Amazon S3 you need to configure the access key, the secret key, and the region you're going to be using, and the same for other Amazon services; for the timer or other components you'll need to check the documentation for the options you can configure.

The good thing, as you can see here, is that just by defining the configuration we can start using all these components with no code, just simple configuration. This is very useful when you just need to get on board and reuse very well-tested connectors within your Kafka ecosystem. And it works very well because it really leverages the benefits of the Kafka Connect cluster and APIs.
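As a rough illustration of the configuration style being described (this is an editor's sketch, not the slide): if the Kafka Connect cluster itself is managed by Strimzi, the same Camel S3 source connector settings can be expressed declaratively through Strimzi's KafkaConnector resource. The connector class and the camel.* property names below are illustrative and vary by connector version, so check the camel-kafka-connector documentation before reusing them.

```yaml
apiVersion: kafka.strimzi.io/v1beta2            # assumed Strimzi API version
kind: KafkaConnector
metadata:
  name: s3-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster      # the Kafka Connect cluster that should run it
spec:
  class: org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector  # illustrative class name
  tasksMax: 1
  config:
    topics: s3-events                           # Kafka topic the records are written to
    key.converter: org.apache.kafka.connect.storage.StringConverter
    value.converter: org.apache.kafka.connect.storage.StringConverter
    # AWS-specific options; the exact keys differ between connector versions:
    camel.source.path.bucketNameOrArn: my-bucket
    camel.component.aws-s3.accessKey: "<access-key>"
    camel.component.aws-s3.secretKey: "<secret-key>"
    camel.component.aws-s3.region: us-east-1
```

The standalone configuration mentioned above carries essentially the same keys, just as a flat properties file (or a JSON payload posted to the Kafka Connect REST API) instead of a custom resource.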
If you're running a Kafka Connect cluster, then this is going to be really, really easy. However, when we move into more cloud-native or Kubernetes-native deployments, having the Kafka Connect cluster as well as the Kubernetes API becomes a little redundant: you have two abstractions for deploying connectors. What the Camel team figured out is that it was easier to remove the Kafka Connect layer and delegate the management of the connectors to Kubernetes and to the Camel K operator. That's why we're going to talk about Camel K and the more recently added feature called Kamelets. So let's look at how Camel K and Kamelets make Camel cool for Kafka.

A little bit about Camel K. As we mentioned, Camel is this umbrella project with different sub-projects handling specific things. The core obviously focuses on the framework itself; we talked about Camel Kafka Connector, which focuses on making these connectors work with Kafka Connect; and Camel K is the platform specialized in running Camel integrations on OpenShift and on Kubernetes in general. Basically it takes the declarative approach that Kubernetes offers and lets us define these components, these connectors, these integrations, these transformations as custom resources. It's backed by the Camel K operator. And even though the main use case was first targeting serverless workloads, serverless integrations with Knative and then OpenShift Serverless, we realized there are also other kinds of approaches, like what we're doing now with Kafka to simplify the integrations we want. It's also a community sub-project, and it started around three years ago, so it's now on version 1.4 and already maturing.

The idea is that Camel K uses the operator pattern to manage integrations. The interesting thing about Camel K is that it has different capabilities, from taking integrations defined as custom resources to supporting the creation of functions using Camel routes, and so on. It's very versatile. That's why we've mostly been focusing on the concept of Kamelets. The idea of Kamelets (basically a contraction of Camel route snippets: Kamelets, or "Camelettes" as somebody called them) is that Camel K can now handle pre-configured, pre-built connectors as well as transformations. In the exact same way that Kafka Connect has been in charge of handling connectors, their lifecycle, and exposing a REST API to start them, stop them, and configure them, Camel K takes the same approach with Kamelets and the same responsibilities: how to start a connector, how to create one, how to deploy those connectors, where to get them from, and how to deploy those sources and sinks within the OpenShift cluster. The interesting thing is that now you can either go all the way into the Camel core, create your own routes, and just let Camel K run them, or you can reuse already-created integration connectors available as Kamelets and tell Camel K to deploy those connectors into your OpenShift environment.
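To give a feel for that declarative approach, here is a minimal editor's sketch (not shown in the talk) of a Camel K Integration custom resource carrying a small YAML DSL route; in practice the kamel CLI usually generates this for you when you run a route file. The topic name and bootstrap address are placeholders.

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: hello-kafka
spec:
  sources:
    - name: hello.yaml                 # a route written in the Camel YAML DSL
      content: |-
        - from:
            uri: "timer:tick?period=1000"
            steps:
              - setBody:
                  constant: "Hello from Camel K"
              - to: "kafka:greetings?brokers=my-cluster-kafka-bootstrap:9092"
```

Once applied with oc or kubectl, the Camel K operator builds and runs a pod for it, and deleting the resource tears the integration down; that is the lifecycle behavior the Kamelets discussed next piggyback on.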
And obviously the interesting thing here is that those connectors are then managed by OpenShift and the Camel K operator: they can be restarted, you can check their status, and they can also query the Kubernetes API to get access to other custom resources available in that same cluster. As we're going to see in the demo, if we have a little bit of time, you are able to reference other components that are available in your cluster. For example, if you are running Kafka on Kubernetes using Strimzi, Strimzi also uses the declarative approach to define your Kafka resources: you define your Kafka cluster as a custom resource and your Kafka topics as custom resources. The benefit of having, say, the definition of a Kafka topic as a custom resource is that Camel K can reference that resource, link to all the information attached to it, read its status and its specification, and then reuse that information within the context of the component or connector to deploy and configure these integrations and routes. So it's very, very interesting.

And for that, as I was mentioning, the important thing about Kamelets is that we now have a Kamelet catalog that makes all these components, or connectors, available. Some of the components we mentioned from Camel Kafka Connector are also available as Kamelets in the Kamelet catalog, where you can get all these connectors and simply reuse them within your cluster: install them and start using them. The Apache Camel website has this catalog where you can sort and search the different connectors. There are connectors and also transformations, and we will keep adding more and more of both as part of the project so that you can reuse most of them. For example, when you use Kafka Connect, you have not only the connectors but also the transformations: say I want to add a field, extract some value from the payload, or change the key of my Kafka record. Those are the kinds of things you can do with the single message transformations I mentioned at the beginning, and you can find those transformations in the Kamelet catalog as well.

So how do Kamelets look? What you will really be working with most of the time are Kamelet Bindings. As I mentioned, you can create your own Kamelets if you want to code and work on the flow, but most of the time you will be using Kamelets that have already been created as part of the catalog, or created by your integration team, for example. If you're a Kafka user, most of the time you'll use a KameletBinding, which lets you take a Kamelet and configure it to deploy your connector or your integration. In this KameletBinding example, what we're taking is the timer-source Kamelet, referenced as the source.
And then, as I was mentioning before, you can use a reference to an already existing OpenShift resource, in this case an in-memory channel from Knative for your serverless integrations, along with the name of the channel. With this information, you just say that you're taking events from the timer and delivering each event, or record, into that specific channel. You can also replace that with things other than Knative channels; you can use, for example, Kafka topics, or you can use other Kamelets as the destination for your binding. So you can start to build your own data pipelines using this type of KameletBinding, reusing some of the Kamelets already available to create more complex flows. In this case it's very simple, just a timer producing events, but you can then build something more focused on complex integrations.

So that's what I wanted to show you now. Let's see a demo and some more information about this. As I mentioned, this is how a KameletBinding looks. I'm using VS Code just to show you the YAML file: it's declarative YAML with the information about the KameletBinding, the name of the binding, and the source, which is a reference to a Kamelet that has already been deployed on my cluster with this name, in this case timer-source. The property for it is just going to be a hello world message. And for the sink (source and sink, the same approach Kafka Connect uses), it's going to be a Kafka topic called camel. As you can see, I'm using Strimzi in my cluster to define that specific resource. You could also define the destination directly, but if you're already running and leveraging all the benefits of the operators, I would recommend this same approach.

Now, as I was mentioning, we do have Kamelets, and this is a Kamelet example for a transformation; these are the components that are part of the Kamelet catalog. If you want, you can create, configure, and develop your own Kamelets. You just need to follow the Kamelet development conventions, where you define the specification: the title, the description, and the required fields that need to be configured for this integration to work. Most of the time you don't want to hard-code information there, so you usually receive those values as arguments to your Kamelet, plus some dependencies. And then you define your flow. In this case we're using the YAML DSL, where we take the source coming into this Kamelet and do simple transformations depending on the configuration, setting properties and doing some object processing to change, in this case, the host field in the payload.

And like this, there are different types of Kamelets. Another example is the Telegram sink. If you want to send Kafka messages to Telegram, you can check the description of this one and the different properties it requires, like the authorization token for your Telegram bot and the chat ID where you want to deliver those messages, and then the different flows. And here we're using this YAML approach because it makes it very easy.
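Following the development conventions just described (a title and description, declared properties, dependencies, and a flow written in the YAML DSL), a minimal custom Kamelet might look roughly like the sketch below. The name and the exact API version are assumptions for illustration; the real conventions are in the Kamelet developer guide linked at the end.

```yaml
apiVersion: camel.apache.org/v1alpha1          # assumed Kamelet API version for this era of Camel K
kind: Kamelet
metadata:
  name: greeting-source                        # hypothetical custom Kamelet
  labels:
    camel.apache.org/kamelet.type: source
spec:
  definition:
    title: Greeting Source
    description: Emits a configurable greeting on a fixed schedule
    required:
      - greeting
    properties:
      greeting:
        title: Greeting
        description: The text to emit
        type: string
      period:
        title: Period
        description: Milliseconds between events
        type: integer
        default: 5000
  dependencies:
    - camel:timer
    - camel:core
  flow:
    from:
      uri: timer:greeting
      parameters:
        period: "{{period}}"                   # filled in from the binding's properties
      steps:
        - setBody:
            constant: "{{greeting}}"
        - to: kamelet:sink                     # hand the event to whatever sink the binding wires up
```

A sink or action Kamelet follows the same shape, except its flow starts from kamelet:source instead of producing its own events.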
As you can see, you're just reusing the DSL, no need to code anything; it's mostly configuration and the declarative approach. But this one is a little more complex, and that's the benefit you get with the Camel framework: you can go from something very easy to something really, really complex if you need to implement your own specific processes, something very particular, but most of the time you will find what you need already available.

So then, how do these look when you bind them? For the Telegram one, for example: we already saw the Kamelet, and now we use the KameletBinding. We reference, for example, the Kafka topic that we're going to take the messages from, and this example shows how to build the pipelines I mentioned earlier. You can have different steps, or phases, in your data pipeline: the first one just extracts some information (you can apply the extract field action), then you can apply some patterns, for example calling an API that does enrichment, until you get to the end, where you have your sink, your final target system or destination, and you send the information there. So this is how the different objects relate to each other.

And then, how do you work with them? If we go to the terminal, here we are, in the same project. What we recommend using is the kamel CLI, the Camel K command line tool, which helps you manage your Camel K resources easily. In this case we only have some Kamelet bindings so far; there's one timer that is already working, but we can add more Kamelets. As you can see, I'm using oc, the OpenShift command line (the kubectl equivalent), to query the OpenShift API, so we can get all these custom resources from the OpenShift cluster. If I run oc get kamelets, you see that there are no Kamelets here yet, but we can then apply the Kamelets, and that installs all of them. In this case I have these Kamelets locally on my machine, but you can also use the ones that are already published on the site.

If you go to the Kamelet catalog page, the catalog I was showing you in the slides, all the different connectors are available there. The one I'll be using here is the timer-source. You can see the information about its configuration: the message is the property you need to provide, and then there's all the other information, like the content type, the period, and so on, but those are optional. The only mandatory one is the message. There are also examples, like the one I shared with you. If you want to avoid copying them manually, you can reference them directly from the site: go to the GitHub repo, where you can see the different Kamelets, the timer-source Kamelet for example, and if you want to install one you can just go to the raw file and apply it to your cluster. So you can install them directly.
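Before following the demo into the cluster, here is a rough reconstruction of the Kafka-to-Telegram binding just described: a Strimzi-managed topic feeding an action step and then the Telegram sink. The apiVersions, the action name, and the property names are taken from the public Kamelet catalog as I understand it, so treat them as assumptions and check the catalog entries before reusing them.

```yaml
# The Strimzi-managed topic the binding reads from ("demo" is the cluster name used in the demo).
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: camel
  labels:
    strimzi.io/cluster: demo
spec:
  partitions: 3
  replicas: 3
---
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-to-telegram
spec:
  source:
    ref:                                  # reference the KafkaTopic custom resource directly
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta2
      name: camel
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: extract-field-action        # pull one field out of the payload before sending it on
      properties:
        field: message
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-sink
    properties:
      authorizationToken: "<bot-token>"   # placeholders; supply your own Telegram credentials
      chatId: "<chat-id>"
```

Applying it with oc apply -f kafka-to-telegram.yaml hands it to the Camel K operator, which builds and runs a pod for the integration, the same flow the demo walks through next with the simpler timer-to-kafka binding.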
So what is actually running when we use this component? If you look at my OpenShift cluster, I have a project called kafka. In this kafka project I have the Strimzi operator installed, which manages my Kafka resources. Let's wait for this to load. Under Installed Operators I have the AMQ Streams operator, which is the Red Hat version of the Strimzi operator, and also the Camel K operator, version 1.4. As I mentioned, you can also use Serverless, or Knative, if you want the extensions for scale-to-zero and the serverless components. And as I mentioned, in this project I have a Kafka cluster called demo. Remember, because we are using the operator, we define everything as a custom resource. This is a simple Kafka cluster with three Kafka brokers and three ZooKeeper nodes, exposing internal endpoints and a route for access from the outside. We can also define other resources, like a Kafka Bridge, or, in my case, what I was looking for: a Kafka topic. Yes, KafkaTopic, that's the one. In my case, as I mentioned, we need to create the camel Kafka topic, and because we create it as a custom resource, we are able to reuse it in the example I was showing.

When we create this, what we see is the pods running for my three Kafka brokers and my three ZooKeepers, and the operator is available here as well. And then I have this pod already running with the integration. It was created because the integration is already running, but I can delete it, as I was mentioning: oc delete kameletbinding timer-to-kafka will delete that integration, the pod that runs the actual connection between the timer and my Kafka cluster, and remove that pod. If I apply the timer-to-kafka YAML file again, the Camel K operator takes that definition, created as a custom resource, deploys my pod, and starts it running with the connection.

If you notice, I didn't have to configure my Kafka bootstrap server. I just added the reference to the Kafka topic, which is obviously linked to the Kafka cluster where it was created using the Strimzi operator. With Camel K, I just make the reference to the Kafka topic and it automatically wires up all the information to connect to my Kafka cluster. That's one of the interesting things about using Kamelets and Camel K: you don't need to configure everything, so it can easily connect and plug into the cluster. So it creates the configuration, starts to run my integration, my Kamelet, and starts sending events to the Kafka cluster.

If everything goes correctly, we should be able to see the integration running here. Let's check. In my case, I'm just going to run the console consumer shell script from Kafka, which is available inside one of my Kafka cluster brokers, and let's see whether we get some information from the topic. If everything works, then we should see, yes, our hello world message being received every second, as we configured in our integration. If I want to update this message, it's very straightforward: I just go to my timer-to-kafka configuration, and instead of saying hello world we're going to put hello OpenShift. Save that, and then apply it again. Okay, this should update our pod. Yes, we see that the old pod is being terminated and the new pod with the new configuration is coming up.
And yes, here we have hello OpenShift. Very easy, very straightforward. You can go from something simple to something complicated, depending on what your integration is, and we define it all right here. The good thing about this is that you don't need Kafka Connect anymore: you can rely on all the benefits of the Kubernetes API to handle the lifecycle of your pods, starting, scaling, and so on. So it's very interesting to have this.

Okay, let's get back to the slides after seeing a little bit of this demo, and let's wrap up. As I mentioned, we have these Camel connectors for Kafka. They're part of the sub-projects under the Camel umbrella project. They let you combine the power of these two big projects, Kafka and Camel, to work together, broaden the ecosystem, and simplify the way you build your data pipelines with Kafka. You get the maturity of the Camel project, with all these enterprise integration patterns and all these connectors, and you can also get all the benefits of the simplicity of Kafka Connect if you're deploying on VMs, or leverage all the benefits of the operator pattern in your OpenShift or Kubernetes cluster. If you're a Kafka user, you now get this big set of components and transformations available to you. You can create your own integrations without having to code at all, no Java, as you can see in the examples. You just define your flows: how your information is handled, how your transformations are managed, make decisions, implement your patterns, all without coding, and you get your custom integrations. And for existing Camel users, you can easily reuse the skills you already have as a Camel developer or user and get into the Kafka world very easily, without having to relearn APIs or the way to connect using the Kafka component. So it's a very easy way to transfer your skills.

There are some useful links if you want to learn more, including some of the ones I showed you. You can go to the GitHub repo for the Camel Kafka connectors. You can also go to the Camel K sub-project and read the documentation, and check the developer guide for creating Kamelets. Please also join the Camel Zulip chat so you can be part of the community and share what you think about Camel K. And obviously you can follow them on Twitter, and star the GitHub repos; I encourage you to do that so you can participate and give us your feedback. And I think that's all on my side. I don't know if there are any questions or comments, anything you want to go over.

Thank you so much for walking through all of this. I'm really intrigued, actually, personally, by the Kamelets and the catalog. If you could go back and show us what's in there besides the one that you demoed, I'd be really interested in your perspective on what's in there and what's coming. And if someone wanted to create a custom Kamelet, how would they go about doing that? Maybe there's a little bit in the documentation around creating your own, because I can imagine people wanting to write their own enterprise-specific Kamelet that lives behind their firewall or whatever, and that seems to be one of the cool bits of this whole process.
Great, yeah. As I was mentioning, you can go, if you navigate directly to the Camel site... yeah, wait a second. It just froze, as you would expect for this kind of thing. You go through the whole demo and nothing breaks, and then you try to open the documentation and it freezes up. Oh well. Yeah, there we go. Let me check if I can open another tab. I think, yeah, it just broke.

So, in the documentation there's a link to the developer guide, where you can find the information on how to develop your own components. And as I mentioned at the beginning, you can either reuse the ones from the community or, if you develop locally, as I showed you with the ones I had in my own local file system and added to my cluster, those are the ways to go. You can have your own repo where you keep them private, or you can publish them. If you go to the Kamelet catalog, you have this list on the left side; you can see we have a lot for AWS services, Azure services. Those are the ones the community has created for you, and they're available for free; you can just go to the repo and reuse them. If you want to write your own, there's the Kamelet developer guide. It covers the basics: what a source is, what a sink is, the single YAML file you need to work on to create a simple Kamelet. You can go through this guide and it helps you define the different pieces you need in order to have your own Kamelets. Once you have the final YAML file, you can either install it in your cluster or, please do this if possible, share it with the Camel community so we can publish it in the Kamelet catalog. But basically most of the information is right here.

Yeah, awesome. So I'm looking forward to an explosion of Kamelets in the community. Hopefully people will find this topic interesting, figure out different ways of leveraging Kamelets, share them with the community, and come to the community chat if they're looking for a way to connect. Do you have community meetings, Hugo, for the Camel K community?

There are. I don't have the schedule on hand, but certainly we can share that with your audience later.

We'll definitely dig that up, because I think the Apache Camel Zulip channel is probably a really good place to go to ask questions and find things out, but also, if you're interested in joining the community, this would be a great time and a great way to get started, and easy to join.

Yeah, I mean, you can join the Camel GitHub if you have issues and so on, and you can follow the discussions there, but I think most of the activity really happens in the Zulip channel. That's where you will certainly find everyone, and that's where the agendas and the meetings are published. It's a very vibrant community, so I do recommend you reach them over there, and you will certainly get pointers on how to work with them.

Awesome. I'm going to have to check out Zulip and see if I can find that as an alternative to Slack. I think I'm getting burnt out on Slack because of all the Kubernetes stuff that we're doing. Yeah. The CNCF uses it, but it would be nice to have some alternatives here, so I'll have to check it out.

Yeah, there are a couple, and the community decided to use Zulip. I know they're doing well with that, and yeah, I'm getting buried by Slack channels too.
That's something I hear from developers more and more these days. Yeah. The Slack channels for CNCF, for Kubernetes, and for OpenShift Commons are pretty active, and we monitor them as well, but I'm always looking for a good new option with better archiving of threads and things. I suspect anything has got to be better than Slack for archiving conversations. So, thank you very much, Hugo, for coming. I'm going to get Hugo to share his slide deck with me, and I'm going to post this up on YouTube later today with a link to the slides. I am loving that you kicked off the Monday with so many Star Wars references; it just makes my week. The only thing you missed was a baby Yoda reference, so I don't know, maybe that's what the Kamelets are: lots of baby Yodas out there.

Yeah, well, it's the new canon; let's see how it develops. It's a very interesting thing, let's leave it to grow.

There you go. All right, well, Hugo, thank you very much again. We'll put this all together in a blog post on openshift.com or on developers.redhat.com very shortly, get it out there, and circulate it, and you can always reach Hugo on Twitter or hanging around in that Zulip channel, I'm sure. So thank you again, and thanks to our producer, Chris Short, for hanging in there with us today and making this happen. Take care, all. Thank you, everyone.