Hello and welcome to this Camel K demo. Today we want to show an example use case implemented with this technology. In this scenario, we'd like to see an automated way of having some cross-functional collaboration between various teams. In this particular case, a strategy team has some questions and concerns that they need answered, and all the teams in the company can help them, so that they can then make decisions for the future of the company. Typically, this team would get together over a video link, open a Google Sheets document, type in all those questions and concerns, and assign the departments they believe can help them. So what we want from Camel K is to build a platform that is able to automatically collect all the feedback from all the teams in the organization and make automatic updates on that Google Sheet.

First, we need some preparations to get this working. Of course, we need a Kubernetes environment ready, and then we want to deploy a Camel K platform, which typically involves deploying the Camel K operator, which will do all the work. We have deployed a Kafka platform as well, which will essentially carry all the information and all the interactions between the different teams. We also need to prepare API access to Google Sheets so that we can grab all the data from the spreadsheet and make the updates. And we have prepared a mail server, with which we will simulate the interactions between the teams, as well as access to Google Drive, because we want to upload some reports, as we will see in a moment.

So this is the overview of the platform. At the center we can see the Kafka platform, with a couple of Kafka topics that will essentially carry all that information. All around we see the Camel K pieces that will be streaming in and out of the Kafka platform: taking the questions from Google Sheets to the different teams, collecting all their responses, and then making the updates. Simultaneously, we have a fifth Camel K implementation that will replay those streams to produce a report reflecting all those cross-functional interactions happening in the organization, rendered in a PDF document that will get uploaded to Google Drive, so that perhaps top management can inspect how well the departments are working together.

So let's get started with all this. In our first stage of the entire data flow, our mission is to use a Camel K building block: a Kamelet binding (a KameletBinding resource) that will capture the data in Google Sheets and send it over Kafka into a topic. So let's look at that. First, of course, we'd like to pretend that we are one of the team members in the strategy team, and this is the spreadsheet we want to work with. Some of our colleagues have already entered some questions; we can see here they say: after our recent company acquisition, how will we integrate their systems with ours? We'd like, for example, to enter our own question here, so we say: are we still having problems hiring developers? And we can assign it to the development team.

All right, so the next thing is that we want to automate this, and so we define a Kamelet binding. A Kamelet binding is a resource that Camel K understands, where we can simply configure a data source and a target. As our data source we indicate that we want to use the Google Sheets source Kamelet, with the sensitive fields anonymized, and as the target a Kafka topic where we are going to place all those questions. A sketch of what such a binding could look like follows below.
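This is a minimal sketch, assuming the google-sheets-source Kamelet from the catalog and a Strimzi-managed topic named questions; the resource name, namespace, and credential values are illustrative, not the demo's actual file:

```yaml
# Sketch of the stage-1 binding: Google Sheets -> Kafka "questions" topic.
# Names and credential values are illustrative placeholders.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: stage-1-sheets-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: google-sheets-source
    properties:
      spreadsheetId: "<anonymized>"
      clientId: "<anonymized>"
      clientSecret: "<anonymized>"
      accessToken: "<anonymized>"
      refreshToken: "<anonymized>"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta2
      name: questions
      namespace: kafka
```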
So now we can look at our platform, and we see that we have a Kafka cluster running and our Camel K operator ready to observe resources and create integrations when needed. We can jump to our command line and simply create that Kamelet binding: we run oc apply -f with the stage-one sheets-to-Kafka binding. We hit enter, and immediately we can see that a pod has been spawned. This is the integration, and now it's up and running, so it has probably already captured the data in Sheets and put it into Kafka.

So now we can jump to the next stage. We need to define a second integration that will consume the Kafka events from the topic and send them by email to the different teams, to distribute the questions. If we look at the source code of that definition, it just requires one single file, and we can see that we define a source with the Kafka connector, specifying the topic that we want to consume. Then comes the routing logic: basically, we extract the right piece of information to check which team this event has to go to, and depending on the team, we use one email address or another. Eventually we send that via email, using the SMTP protocol configured here to point at our platform's mail server. (A sketch of this route follows below.)

Right, so with that, we can look at our mail client, which is connected to the server, and we see that we have three different inboxes, one per team, and at the moment they're empty. So let's try to execute that. We have our first stage already running in OpenShift, and from the command line we can say kamel run with stage two; we just need to specify a dependency for the JSON manipulation, camel-jackson, and hit enter. When we hit enter, we see that the operator has instantly spawned the integration and the pod is now running. If we check our inboxes, we should soon get our emails, and effectively we get the alert: each team has received one email, so that's one question per team. There you go: we delivered the Kafka events with the questions to the different teams. With that we have done this stage, and we can jump to the following one.

Next, we want to build a third stage with Camel K that will implement the collection of all the email responses from the different departments and stream them into a second topic in Kafka, called answers. This building block does not require any implementation; it can also be a Kamelet binding, and therefore pure configuration. So let's look at that. This is the platform where we already have stage one and stage two running, and what we want to do is deploy stage three. Our Kamelet binding is here: we pick the mail source as the source Kamelet, so that it will be collecting all those responses by email. We define an in-between action that will transform the mail response into a JSON structure, and then we use the Kafka definition to place that email into the answers topic. (A sketch of this binding also follows below.) So let's upload that resource into the environment, and the operator should pick it up immediately: we run oc apply -f with the stage-three Kamelet binding, we hit enter, and the operator... let's see... okay, it has now created the third element, which is now ready, running, and waiting for email responses.
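The full stage-two listing isn't shown on screen, so here is a minimal sketch of what that routing logic could look like in the Java DSL; the broker address, JSON field names (team, row, question), mailbox addresses, and SMTP settings are illustrative assumptions, not the demo's real values:

```java
// Stage 2 sketch: route questions from Kafka to each team's inbox.
import org.apache.camel.builder.RouteBuilder;

public class Stage2QuestionsToMail extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:questions?brokers=my-cluster-kafka-bootstrap:9092")
            // Events are JSON; camel-jackson turns them into a Map.
            .unmarshal().json()
            // Pick the mailbox based on the team the question is assigned to.
            .choice()
                .when(simple("${body[team]} == 'development'"))
                    .setHeader("To", constant("development@demo.local"))
                .when(simple("${body[team]} == 'architecture'"))
                    .setHeader("To", constant("architecture@demo.local"))
                .otherwise()
                    .setHeader("To", constant("operations@demo.local"))
            .end()
            // Keep the spreadsheet row in the subject so replies can be traced back.
            .setHeader("Subject", simple("Question ${body[row]} from strategy"))
            .setBody(simple("${body[question]}"))
            // Hand the message to the platform's SMTP server.
            .to("smtp://mail-server:25?username=demo&password=secret");
    }
}
```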
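And a sketch of the stage-three binding; the Kamelet names and properties here are illustrative and would need to match the Kamelets available in your catalog, with the in-between step standing in for whatever action transforms the mail exchange into JSON:

```yaml
# Sketch of the stage-3 binding: mail server -> Kafka "answers" topic.
# Kamelet names, properties, and credentials are illustrative placeholders.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: stage-3-mail-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mail-imap-source
    properties:
      connectionHost: mail-server
      connectionPort: "993"
      username: strategy
      password: "<anonymized>"
  steps:
    # In-between action that turns the mail response into a JSON structure.
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-serialize-action
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta2
      name: answers
      namespace: kafka
```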
And what I'd like to do as well is deploy our fourth stage, which will consume the events we just placed in the topic, extract some routing information, and with that information know where in the spreadsheet to place those responses. In this case we need to implement some code, so we will define a Camel K integration file. If we look at that definition, this is stage four; one source file is enough, and all we need to do is define a couple of Camel routes. In the first one, as you can see, we are using the Kafka component from Camel and we are consuming events from answers. We extract from the subject of the email the information that tells us the row that has to be updated in the spreadsheet. Then we do some cleaning on the body of the reply, just to eliminate some verbose information, and then we make the Google API invocation. I have anonymized the access token there, and basically all we need to do is comply with the API specification from Google; this time we use an HTTP call to do the update. (A sketch of this route appears at the end of this stage.)

If we go now again to the environment, we use the kamel client and say kamel run with stage four, passing the camel-jackson dependency for the JSON manipulation. When we hit enter, the operator immediately reacts again, and we already have our integration running there. Okay, so if we look back at the flow, we have now enabled the integration that will pick up all the responses from the different departments and update the Google Sheets document. So all that is left for us to do is to see that in action.

We can go to our spreadsheet and see that the answer cells are empty. And if we go to our inboxes, we see that we have the three questions in the three different department inboxes. We will pretend that we are someone from architecture who picks up the email that says: after our recent company acquisition, how will we integrate their systems with ours? So someone can go there and reply: we will be using Camel K to build all our integrations, and send that as a reply. If our data flows are all working, we should see our spreadsheet being updated immediately, and it has just happened on screen: our first response is in.

Now we can pretend we are someone from development who again picks up the message and sees that strategy is asking: are we still having problems hiring developers? Because typically those skills are difficult to find. So we say: this is less of a problem with Camel K, because with Camel K we can build Kamelet bindings that just require pure configuration, and any Kubernetes user can do this without any Camel knowledge. We send that response, and again, in no time it should be updated in our spreadsheet, and we see it there in the third cell.

Finally, as someone from operations, we can reply to this question from strategy: have we figured out how we will improve our long-running batch processes at night? And we can say: our plan is to use Kafka Streams to build event-driven applications, and send that as a reply. And there you go: we have all our responses, fully automated, in our spreadsheet, which is the goal we set at the beginning. So if we look back, we have completed our four main stages, which fully automate this system and allow this automatic interaction between the different functional teams in the organization.
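Here is a minimal sketch of what the stage-four route could look like in the Java DSL; the JSON field names, the subject format, the cell range, and the anonymized identifiers are illustrative assumptions, not the demo's actual file:

```java
// Stage 4 sketch: consume replies from the answers topic and update the
// sheet via the Google Sheets REST API (values.update).
import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class Stage4AnswersToSheet extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:answers?brokers=my-cluster-kafka-bootstrap:9092")
            .unmarshal().json()
            .process(exchange -> {
                Map<?, ?> mail = exchange.getIn().getBody(Map.class);
                // The reply subject still carries the row, e.g. "Re: Question 2 ...".
                String row = mail.get("subject").toString().replaceAll("\\D+", "");
                // Heuristic cleanup: keep only the reply text, dropping the
                // quoted original message below it.
                String answer = mail.get("content").toString().split("\n>")[0].trim();
                exchange.getIn().setHeader("row", row);
                exchange.getIn().setBody(
                    "{\"values\": [[\"" + answer.replace("\"", "'") + "\"]]}");
            })
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            .setHeader(Exchange.HTTP_METHOD, constant("PUT"))
            // PUT the answer into the cell for that row, per the Sheets API spec.
            .toD("https://sheets.googleapis.com/v4/spreadsheets/<anonymized>"
               + "/values/C${header.row}?valueInputOption=RAW"
               + "&access_token=<anonymized>");
    }
}
```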
And then we have a fifth and final integration flow for the video today, as we can see on screen. We want to take advantage of Kafka's ability to replay streams. In this case, what we want is for Camel K to consume from both streams that contain all those interactions between the different teams, crunch that data, and produce a report reflecting all those cross-functional interactions in the organization, rendered into a PDF document that we will upload to Google Drive. There is some more logic than usual to do here, so this would be more appropriate for an experienced Camel developer.

If we look at the source of that, this is stage five. It is still a single-file definition, but we have a small Java helper in there as well. In essence, we can see that we have one consumer from the Kafka questions topic and a second consumer from the Kafka answers topic. We extract the necessary information from there, and then we implement what we call in Camel a couple of aggregators. The aggregators basically correlate the events to produce a combination of them, which is exactly what we need. Out of that, we have all the info we need, and we just render it as a PDF with the Java helper. Then we upload it to Google Drive using the Camel component for Google Drive. These parameters here are anonymized once more, and all that is left to do is to go and upload this to Google Drive. (A sketch of what this flow could look like follows at the end.)

This is the folder where it will actually end up, and at the moment all that we have there is the spreadsheet. But if we go and launch this integration, we say kamel run with the name of the stage-five integration, passing the Java helper, the definition that we have just seen, and some of the dependencies. We launch that, and it says the integration was created. That is going to do the job, and if everything goes according to plan, it gets all the events from Kafka, produces the report, connects to the Google API, and uploads the report in here. And as we can see, it just did: we have the report here as a PDF document on screen. We can pretend that we are someone interested in that document, and when we open it we can see all the interactions we have seen during the demo, with the questions from strategy and the different answers from the different teams, as we replied to them from the mail client.

And well, this is it, basically. This is all I wanted to show today in the demo. So thank you very much for your patience in listening to this. Bye.
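For reference, a minimal sketch of the stage-five correlation in the Java DSL, assuming both event types carry a row field to correlate on; the topic names, the PdfReportHelper stub, and the Google Drive call are illustrative stand-ins for the demo's actual helper and anonymized parameters:

```java
// Stage 5 sketch: replay both topics, correlate question and answer by row,
// render the pairs as a report, and upload it to Google Drive.
import org.apache.camel.builder.AggregationStrategies;
import org.apache.camel.builder.RouteBuilder;

public class Stage5Report extends RouteBuilder {
    @Override
    public void configure() {
        // Replay the full history of both topics from the earliest offset.
        from("kafka:questions?brokers=my-cluster-kafka-bootstrap:9092"
           + "&autoOffsetReset=earliest")
            .unmarshal().json()
            .setHeader("row", simple("${body[row]}"))
            .to("direct:correlate");

        from("kafka:answers?brokers=my-cluster-kafka-bootstrap:9092"
           + "&autoOffsetReset=earliest")
            .unmarshal().json()
            .setHeader("row", simple("${body[row]}"))
            .to("direct:correlate");

        // Aggregator: correlate events by spreadsheet row; once a question
        // has met its answer (two events per row), pass the pair on.
        from("direct:correlate")
            .aggregate(header("row"), AggregationStrategies.groupedBody())
                .completionSize(2)
            // Stand-in for the demo's Java helper that renders the PDF.
            .bean(PdfReportHelper.class, "render")
            // Upload with camel-google-drive; credentials and the File
            // metadata the component expects are omitted here, as they
            // were anonymized in the demo too.
            .to("google-drive:drive-files/insert");
    }

    public static class PdfReportHelper {
        // Placeholder: a real implementation would lay out the correlated
        // question/answer pairs and return the PDF bytes.
        public byte[] render(java.util.List<?> pairs) {
            return pairs.toString().getBytes();
        }
    }
}
```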