So, hello, I hope you can hear me well. My name is Roland Huss, I'm working for Red Hat as a software engineer. I'm one of the leads of the Knative client working group, and until the end of the month I'm still a TOC member. Okay, let's get started. Today we are talking about Knative and Kamelets. First, let's start with this classic picture; I think we have already seen it quite a lot today. This is how you set up an eventing topology with Knative: you have a broker in the middle, and on the left-hand side you have something that creates cloud events, or rather an adapter that transforms external events into cloud events and pushes them to the broker, and the broker then calls out to so-called sinks. In this talk we are going to focus on the left-hand side, that is, how we can easily create many sources with a technology called Kamelets; we will see that in a second. But we will also talk about sinks, because Kamelets provide out-of-the-box sinks that you can use directly. First of all, let's have a look at the existing sources. We have four sources that come out of the box with every Knative installation: the PingSource and the APIServerSource, where the first is a scheduling source and the second connects to the Kubernetes API server, and then two more general-purpose sources, the ContainerSource and the SinkBinding, with which you can connect your own deployment directly to Knative eventing. Then there is a handful of additionally maintained sources under the Knative umbrella, like the CouchDB source, which you find in the Knative repositories. And of course there are also vendors like TriggerMesh that offer sources out of the box, for example for cloud connections, and we have already seen other sources in the talks before.
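As a reminder of what one of those out-of-the-box sources looks like on the wire, a PingSource pointed at a broker can be declared roughly like this (a sketch; the name, schedule, and payload are made up for illustration):

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: heartbeat
spec:
  # Fire a cloud event every minute (standard cron syntax)
  schedule: "* * * * *"
  contentType: application/json
  data: '{"msg": "hello"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```

The typed spec is exactly the advantage mentioned later: the CRD tells you which fields a source expects, at the price of having to install that CRD cluster-wide.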
But I would say that there are roughly 50 to 60 sources out there. So what is the problem with sources, or what am I trying to tell you today? First of all, sources are the entry point for cloud events. They are not difficult to implement: you have a controller and a custom resource definition, maybe a single controller for multiple custom resource definitions. But you still have to create the CRDs. You can use the general-purpose sources like the ContainerSource or the SinkBinding, but then you lose some advantages, like the typed approach that you get with a CRD, and you still have to create a container image, of course. The other problem is that discovery is difficult. There is the event registry, but to be honest, I'm not sure how many people really use it. The event registry means that a source can register an event type, so that a user can discover that event type and find the source. A bigger problem is that in restricted environments it is not easy for a regular user to deal with cluster-wide resources. It's not easy to install CRDs on your own, so if you want to install your own source with a CRD, you typically have a problem. Even more, you often can't even read CRDs. There are locked-down environments, OpenShift being one of them, where as a regular user you can't even list cluster-wide resources, so you can't find out what resources are available. That's an issue. So the question is: are there alternatives? How can we avoid these issues? This picture shows the systems I would like to have available as Knative eventing sources, so that as a user you can directly connect to all of them. Let's have a look at Camel and how we can get to many more sources. So, a few words about it first.
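For reference, the event registry mentioned above is backed by an EventType custom resource; a source or an administrator registers one roughly like this (a sketch; the API version and example values are assumptions and may differ between Knative releases):

```yaml
apiVersion: eventing.knative.dev/v1beta1
kind: EventType
metadata:
  name: twitter-search
spec:
  type: dev.knative.twitter.search      # the cloud event "type" attribute
  source: https://twitter.com/search    # the cloud event "source" attribute
  broker: default                       # broker this event type is available on
  description: "Tweets matching a search query"
```

A consumer can then list EventTypes in their namespace to see which event types flow through a broker, without needing cluster-wide permissions.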
So, Camel, you have probably heard about it. Camel is an implementation of the well-known "Enterprise Integration Patterns" book, a very famous book with very great stuff in it. It describes all kinds of patterns that you use in integration scenarios, and Camel is more or less an implementation of these patterns in code. It comes as a library. It's already older than ten years and still one of the most active Apache open source projects. It's Java-based and typically used as a library: you have a certain DSL in which you describe your integration route, and then you compile it into some runtime, typically Spring Boot or Quarkus. This runtime then needs to be operated, either directly on your server or via an orchestration platform like Kubernetes. The benefit here is, of course, that over those ten years the community has created more than 340 components. These are connectors to external systems, for incoming and for outgoing connections. And you see we also have a big Camel fan over there; that was good for some fun on Twitter at some point. Okay, so this was Apache Camel as it started. Then, in 2018, the Camel community decided to modernize its stack and started a new sub-project called Camel K, K for Kubernetes. Camel K uses a CRD called Integration: you put your Camel DSL into the spec of this Integration custom resource and just deploy it, and everything else, so creating the runtime, building the image, pushing it to some registry, executing it, and so on, is all taken care of by Camel K itself. It really gives you a much easier way to use Camel, and you don't even have to be a Java developer, for example. And then the next step in the evolution is Kamelets.
And this is what I'm going to talk about today. Kamelets are really predefined route snippets, something like what you will see in a second on the next slide, which you package into a CRD called Kamelet. So it's a kind of type, a high-level custom resource definition. You can deploy many Kamelets, and then you can create an instance of a Kamelet with a KameletBinding. These are all user-manageable resources, so you don't have to be a cluster administrator for that. Ideally, of course, you would have all 340 components available as Kamelets. At the moment there are around 70-plus Kamelets that you can use directly, but this list is constantly growing by quite some amount. So let's have a quick look at how this looks in code. This is a typical Camel example. By the way, who of you already knows Apache Camel? One, two... okay, I would say maybe half of the audience, that's great. Then you probably recognize this kind of definition. It's written in the Java DSL, as a builder pattern: you put this into a class and create a route. In this case you have an incoming endpoint, a Twitter search component; behind this URL scheme there is a registered handler that does the actual connection, and then you have parameters that you add as query parameters. And finally you can do all the pattern stuff: you can transform events, enrich them, split them, send them out to something else. Here we are sending them out directly to an eventing broker. Super simple, but you still have to write quite some boilerplate. In Camel K this would look like the following: an Integration custom resource, of which I'm showing only a fragment of the specification.
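Reconstructed from the description, such an Integration custom resource might look roughly like this (a sketch; the credentials, parameter names, and endpoint URIs are placeholders, not the exact ones from the slide):

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: twitter-to-broker
spec:
  flows:
    - from:
        # Incoming endpoint: the Twitter search component
        uri: "twitter-search:{{keywords}}"
        parameters:
          accessToken: "..."
      steps:
        # Send every result to the Knative eventing broker
        - to: "knative:event/tweet?brokerName=default"
```

The Camel K operator picks this up, generates the runtime, builds and pushes the image, and runs it, so no local Java toolchain is required.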
So you have the configuration here with the concrete values, and then you have the route, which is more or less the same as what we saw on the slide before. And now, finally, we have Kamelets. A Kamelet just splits the integration up into a template part and an instantiation part. The template part is on the left-hand side; this is the actual Kamelet. You see that in the template specification you have placeholders, like the access token and the keywords. These are just parameters that you can define. So there is a Kamelet author who knows Camel very well and creates these objects and puts them on the cluster, and then another user can instantiate the Kamelet by providing a KameletBinding with the missing parameters. In the background you get an Integration object like the one from the slide before, and the Camel K operator then creates the runtime. Okay, so much for the slides; now let's go to the demo. That's good, I have 15 minutes. We will try something from scratch; hopefully it will work. We start with an incoming Twitter search. This is a Kamelet that searches Twitter for a certain keyword, in this case the conference hashtag. It then creates a cloud event and sends it over to the Knative broker, and the broker moves it on to a Knative function, which we will also create from scratch. The function itself talks to the Google Cloud Translation API to translate the tweet into a random language, then creates another cloud event and sends it back to the broker. Finally it lands in a so-called Slack sink; this is the opposite of a source, and it posts the translated tweet to a channel on the Knative Slack, okay? So let's get started, we don't have much time. And yeah, they always want me to make demos with public services, so please be nice to me.
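Before diving into the demo, the template/instantiation split just described can be sketched roughly like this (all names, API versions, and property keys here are illustrative, not the exact catalog contents):

```yaml
# The template: written once by someone who knows Camel well
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: twitter-search-source
spec:
  definition:
    title: Twitter Search Source
    properties:          # the placeholders a user must fill in
      keywords:
        type: string
      accessToken:
        type: string
  template:
    from:
      uri: "twitter-search:{{keywords}}"
      parameters:
        accessToken: "{{accessToken}}"
    steps:
      - to: kamelet:sink   # wherever the binding routes the events
---
# The instantiation: created by any regular user, no CRD install needed
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: twitter-search-binding
spec:
  source:
    ref:
      apiVersion: camel.apache.org/v1alpha1
      kind: Kamelet
      name: twitter-search-source
    properties:
      keywords: "#knativecon"
      accessToken: "..."
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```

Both resources are namespaced, which is exactly why this works in locked-down clusters where regular users cannot install or even list CRDs.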
Okay, so we have here an empty Minikube cluster. You see at the top some watches on certain resources: pods, Knative services, and the KameletBindings. The first thing we have to do, of course, is create a broker. Sorry, I have to take off my glasses. Okay, that was easy. Now let's start with the Twitter search component. Before I do that, I'm just showing you all the Kamelets that are available on the cluster. We also have a kn plugin for Kamelets; you can do everything I'm doing here with YAML files as well, of course, but here we are just listing all the available Kamelets. It's over 70, as I said, really for all kinds of external systems. One of my favorites, which we won't see today but which you can try out, is the Chuck Norris source, which gives you a random Chuck Norris quote. Very nice, very useful. But we are looking for the Twitter search source here; this is the source we are going to instantiate now, also via the kn source kamelet plugin. For that, we need to create a binding. So what you see here: we create a binding called twitter-search-source, we connect it to the default broker, and we set some properties, key-value pairs, namely the keywords we are searching for, our conference hashtag, and the access tokens for the Twitter API, which are stored in some files below this directory. Let's create that. And let's have a quick look at what happens behind the scenes. What I have here is a log tail on the Camel K operator: it detects the KameletBinding, creates the Integration object, and then compiles the Java code behind the scenes and builds an image that gets deployed to our cluster. This might take a little bit; hopefully I've already pre-warmed all the caches.
Normally, if you start it for the first time, it's a little bit slower, but you see it's already running. And that's it: we now have a connection to Twitter and already get the tweets into our broker. To see that this really works, let's create a simple event-display service, the standard service we have already seen today, which just logs cloud events to the console. Then of course we create a trigger for the event-display, like that, and we watch the event-display as well. Okay, you might want to try that too. Let me switch over to my browser and my Twitter client. "Hello, KnativeCon", and don't forget the hashtag. We tweet that and head over to our console. This might take a little bit, because this is a polling approach: the Twitter search source polls every 10 to 15 seconds. You see now our event-display is already starting; we were a little bit too slow, so let's hope it comes up here. Now it's processing something, and we get our cloud event. You see that the Twitter search source creates a quite rich cloud event with all the metadata that you get back from Twitter, and here we have the test text. Okay, it works. The next step is the Slack sink. We have the Twitter source; now let's go to the Slack sink. Unfortunately, kn doesn't have support for Kamelet sinks yet, so let me show you the YAML file for such a binding. This is the Slack sink here; it's also quite simple. You give it a name, and you have two parts. The source is what you want to connect to; in our case the source is the broker, because we receive the event from the broker, and we also filter on the tweet-translated event type. The other part is the sink, which is just the Slack sink Kamelet, configured with its properties. I have not shown the URL fully here.
It also contains the complete authentication against the Slack channel. And yes, that's it, that's all you need. So let's try that as well: I do an apply -f with the sink demo file, which includes the proper credentials, and you will see, let me make it a little bit larger, sorry, the binding as well. Oh, it already comes up here; you see the Slack sink. The sink itself is implemented as a Knative service, because it can also scale down; it's stateless. So we now have a Slack service here, and of course we can try that out as well. Let's use the kn event plugin for that. Okay, we have already seen that in action today: we send to the broker, we set the type to tweet-translated, and we add some random body. If I do that, one second, going over here, I have my Slack here; this is the channel I'm going to post to, on the Knative Slack. What this plugin actually does is create a container within the cluster that sends the cloud event directly to the broker, because the broker, of course, is not exposed to the outside. And so the event now goes directly to the Slack channel. Okay, proven: the Slack sink is working. Now let's do our final thing, which is creating the function; there is still some time for that. So what we do now is create a function, using Node as our runtime and the cloud events template, which already gives us a nice signature for cloud events, and we call it translate-tweet. As I mentioned, we want to talk to the Google Translation API, and for that we of course need to authenticate ourselves. The Google Cloud APIs typically work by reading a certain environment variable that points to an authentication file. For that, we first have to create a secret. Sorry, kubectl create secret; I have it here, taken from that file.
It contains my Google service account, and from it I create a secret called google-sa, which we will expose to the function via an environment variable. Okay, this is created. Now let's go into our translate-tweet function. In order to talk to the Google Cloud Translation API, we need to install a dependency that brings the Google Cloud client into the function itself. We use npm for that, just as for a regular Node project; hopefully this is done quickly. Okay, and now, as I mentioned, we need to configure our function. Luckily, there is a feature called func config: we say we want to add a volume, and we want to add it from a secret. So this adds something to your function that is mounted from a secret. We take the google-sa secret I just created in my namespace, and then I point to the directory I want to mount it to; I'm using /opt/gce in this example. Finally, we need to add some environment variables. We add GOOGLE_APPLICATION_CREDENTIALS, I had to look that up, which is the variable picked up by the Google Cloud Translate dependency, and we set it to our mount directory plus the credentials file. That one, okay, looks good. And we add one final variable, the Google project ID, which is our KnativeCon demo project. Okay, now we have set up our environment, and we can start coding. So I start my editor, I hope that works, right here. Okay, this is the code that has been generated for me by func create. We remove some boilerplate that we do not need; we don't even need the sample code. Okay, now let's get started. First of all, I import my dependency on Google Cloud Translate.
And then, as I mentioned, we want to translate into a random language, so I have here an array with different languages; they're all from my fellow team members. Okay, that's here. Now we extract the tweet data from the event. The incoming event is based on the schema of the Twitter tweet, so we pick up this tweet text. We create a random number here, which just picks a random entry from this array. Then we do the translation: we have a translate object, provided by the Google Cloud Translation API, and we call translate; we could also call detect or another function exposed by this API. We pass in the text and the language code we want to translate to, which is the one picked randomly. Then we have this await here, because the Google API is asynchronous, which means we need to make our function async as well. Okay, and now we have the translated text; let's create the actual text we want to post. I have to make it a little bit nicer here. You see, I'm using the original tweet text, then I add the translation, and I put a nice flag in front of it, which is also picked from the initial array. Finally, we return the result, which means I set the type to tweet-translated and the source to tweet-translator. Okay, this is the code you need. You see, this is really the kind of glue code that you typically also see with lambda functions, where you connect two different services together and add some extra functionality, here the translation. Of course, you could also call out to any other service. Okay, let's go back. And of course, we first want to try it out locally; when you develop against the cloud, it's often easier to test locally first.
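The glue code walked through above might look roughly like the following sketch. All names here are assumptions based on the description; the real function would call the @google-cloud/translate client, which is stubbed out below so that only the surrounding logic is visible:

```javascript
// Sketch of the translate-tweet function (hypothetical names throughout).
// Target languages, each with a flag to prefix the translation with.
const languages = [
  { code: 'pl', flag: '🇵🇱' },
  { code: 'de', flag: '🇩🇪' },
  { code: 'fr', flag: '🇫🇷' },
];

// Stand-in for translate.translate(text, code) from @google-cloud/translate;
// the real client returns an array whose first element is the translation.
async function translateText(text, code) {
  return [`<${code}>${text}</${code}>`];
}

function pickRandom(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
}

// The cloud event handler: extract the tweet text, translate it into a
// random language, and return a new event for the broker.
async function handle(event) {
  const tweet = event.data.text;
  const lang = pickRandom(languages);
  const [translation] = await translateText(tweet, lang.code);
  return {
    type: 'tweet.translated',
    source: 'tweet-translator',
    data: `${tweet}\n${lang.flag} ${translation}`,
  };
}

module.exports = { handle, pickRandom, languages };
```

Because the translation client is async, the handler itself has to be async, which is exactly the point made in the demo.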
To run it locally, I just have to source the environment variables in this shell as well; I have them in a shell file. Then I can use npm start, and the function is now running locally in the background. Then I can again leverage my event plugin, in this case sending directly to a URL, localhost:8080. I jump to the top so it's easier to see. I'm just adding some fake data, only the fields my function needs, so you don't have to provide a full-blown tweet cloud event. And if I send that in... I have something wrong. Let me check here... "no such file". Oops, Google... but haven't I set the... maybe it just hasn't been picked up. Echo GOOGLE_APPLICATION_CREDENTIALS: looks good. So I'm not sure why it doesn't work; I can check that later. But as you see, it already points to the path that is inside the built image. So let's keep fingers crossed and deploy it as a function now. Just to be sure it really works... send... very strange. Okay, let me try to deploy it. You have already seen how to deploy such a function: func deploy. This builds the function locally with my local Docker daemon, pushes it to docker.io, and then deploys the image as a Knative service. And yeah, this might take a little bit. Okay, now it has been built, now it's pushed to the registry, and then the Knative service is created; you will see it at the top as well. You also see that the Twitter source is a real deployment, because it has to poll regularly, so it cannot be scaled down to zero.
In the future, there are plans to combine KEDA with Kamelets for certain sources, so that, for example, for a Kafka Kamelet source you can also scale that deployment down. We see now that our translate-tweet function is running; luckily, it's ready here. The final missing step is, of course, that we still need to create a trigger: a trigger for translate-tweet. We filter on the type org.apache.camel.event, which is the standard type that a Kamelet source emits; you can override this, but it's good enough for us now. And then we point it to our function. So we now actually connect the Twitter source with our function, like that. And yeah, that's basically it. Now let's check whether this really works as expected. Sorry, I have to go over to that one, and to Slack here, and let's see if I have some text here. And now let's tweet. Oops, I already tweeted exactly that before; not the first duplicate, and for sure not the last. Now let's wait and see what happens. I can go back here and see that the container for the Slack sink is being created, and then we should see the translated text here immediately; there it is, in Polish, by chance. So this concludes the demo. Let me jump back; sorry that the local experience was not working as expected, but I hope I could show you that it's really super easy to create fancy integrations with tons of sources, combined with functions. This is really kind of a lambda experience, and I think you will see more and more of this in the future. Okay, last slide, a quick outlook. As I mentioned, we are working on sink support for the kn Kamelet plugin. There will also be a feature that gives you typed options on the command line, so that you get auto-completion directly for the properties you can choose from.
At the moment you have general-purpose key-value properties, but since the schema is also exposed by the Kamelet, you can easily create dynamic CLI options that honor that schema. Also planned is direct support for secrets and config maps; at the moment the client includes the secrets literally, and while Kamelets themselves already support secrets and config maps, the client does not yet. And of course, we want more and more Kamelet sources and sinks; creating such a Kamelet is a super easy process, you just need to do it. Finally, as I already mentioned, there will be at some point an integration with KEDA, so you can also scale down Kamelet sources, of course only those sources for which a KEDA scaler is available. Okay, that's it, and thank you very much. I think we have time for one question, if anyone has any. Thanks, Roland, that was great. The challenge, I think, that we've seen is the payload schema of the events and the event types accepted by the sinks. If you're a developer trying to do this, how would you go about finding the schema of, like, the tweet text, and then the tweet-translated event to talk to Slack? It's not easy, right? So, there's not really a schema registry for that. The Kamelets more or less hand over directly everything they get from the upstream source, for example the Twitter API. What I recommend is the workflow I've just shown: you examine an event from the source, look at it, pick up what you're interested in, and then build your function around that. So at the moment it's really still kind of an exploratory thing, where you have to try things out and test; there's no documentation for that. But apart from that, it's super easy.
To be honest, 80% of the demo preparation was finding out how to authenticate against Twitter and Slack, because Twitter requires a developer account, and you need to increase your developer level and whatnot. The rest was the easier part. Thanks for the question. Awesome, thank you, Roland. Okay, thank you. Give a big round of applause to Roland. Thanks.