All right, everybody, welcome back to another OpenShift Commons briefing. Today we have Ricardo Zanini, a senior software engineer from Red Hat. We're going to cover a topic I don't know a lot about, which is why we invited him here: event-driven applications using Kogito serverless workflows with Knative. We've had a lot of talks about Knative, but Kogito, if I'm pronouncing the name right, is new to us, so we'll let Ricardo tell us a lot more about it, and then we'll have live Q&A after the presentation and the demo he's going to give. So take it away, Ricardo: introduce yourself, tell us what you do at Red Hat, and tell us all about this.

Sure. Thanks for having me. My name is Ricardo, like Diana said. Today we're going to talk about event-driven applications and how we can create those business applications using the Serverless Workflow specification and the Knative platform. I'm a software engineer; I work on the Kogito project, and I also help on the CNCF Serverless Workflow specification project. So we work on both ends: on the specification side, specifying the workflow DSL, and on the implementation side, in the Kogito engine.

Let's get started. Today I'm going to introduce the Serverless Workflow specification and what we are doing there, just a brief introduction to the project. I'll try not to talk too much about it, because there are a lot of subtleties; you should take a look at the specification and figure out all the details yourself. I'll also talk about the implementation we are working on right now, the Kogito project, which is a project for designing and creating business applications. After that, I'll introduce a small use case that everyone should feel comfortable with: an online store doing order processing. It's a super simple use case that I hope everyone is familiar with. And we are going to run a short demo as well, showing how the workflow reacts to events and what it does on a Knative platform.

About the CNCF Serverless Workflow specification: we say CNCF because Serverless Workflow is a sandbox project in the CNCF. You can go to the sandbox projects and you'll see the Serverless Workflow specification project there. We are accepting contributions; it is open source, Apache 2 licensed. You can go there, figure things out, ask questions, and maybe propose new features you'd like. Or if you are keen to implement the specification yourself, we can help: we have SDKs and a lot of tools around the specification that can help you understand it, implement it, and maybe use it in your company as well.

So what is the Serverless Workflow specification targeting? It is a declarative workflow language designed specifically for the serverless computing domain. That means consuming events, producing events, correlating them, and calling functions outside the scope of the workflow. The platform can auto-scale those functions up and down and do all the things you already know about serverless computing, but with the workflow language having those capabilities in mind as well.
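To give a feel for how declarative that is, here is a minimal workflow definition in YAML, close to the specification's own hello-world example (a sketch for illustration, not something shown in the talk):

```yaml
# Minimal, illustrative Serverless Workflow definition.
id: helloworld
version: "1.0"
name: Hello World Workflow
start: Hello State
states:
  - name: Hello State
    type: inject            # an inject state merges static data into the workflow data
    data:
      result: Hello, Serverless Workflow!
    end: true               # the workflow ends after this state
```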
I brought this quote from the website, so you can go there, visit it, and understand all this in more detail. Why did we start this crazy project, and why is it important for us? Well, first, we believe that workflows can capture and organize business requirements. Instead of grabbing your Java, Go, or Python code, bringing it to your business analyst, and explaining, "Hey, here's what we are doing in this service, look at these conditionals, we are doing this and that," you can just bring the workflow to your business analyst, to your business person, and explain what is happening there, and they can understand it. They can even help you design the workflow, or create a workflow themselves and hand it to you to run on a runtime. That's one of the ideas behind it.

The specification also aims to be a vendor-neutral, platform-independent workflow language. That means we do not tie the specification to a specific vendor; it is not code, it is just a specification. Anyone out there can take the specification and build a workflow runtime based on it. It is platform independent because we are not tied to a specific technology; we do not say, "Hey, the workflow must be implemented in Java," for instance. Anyone can take the specification and use it as they please in their implementations. And imagine that, having this common way of describing a workflow, we can create common libraries to be shared among all the runtimes, all the implementations. We can create tooling and infrastructure around workflows, and having a common way to describe them lets everyone build those things together. That's the nice part of being an open source specification.

Like I said, being vendor neutral, we improve productivity and the learning curve. You don't have to learn every workflow language out there, because if you follow the IT news, you see a new workflow engine every week. Instead of that, we are proposing a specification that can run on lots of runtimes. Those runtimes can be offered on Kubernetes and OpenShift; Google Cloud could have a runtime for the specification, as could AWS, Azure, whatever cloud provider is out there; they can all offer their own workflow service based on the specification. That's the win-win situation we believe in: everyone implements the same DSL for specifying workflows, and users gain a lot from that.

The specification is based on standards. We looked around and saw lots of standards out there. For instance, to define an event, like "I want to consume this event to start my workflow," or "I want to produce an event at the end of my workflow, or if I go through this branch," we use the CloudEvents specification to define those events in the workflow definition itself. You can also call external RESTful services using the workflow specification, and we use OpenAPI to declare how you make those calls from the workflow.
So when you receive an order, for instance, and you want to call an external service to validate it, and it is a RESTful endpoint, you can express that call using the OpenAPI specification. You bring the OpenAPI interface into the workflow, and you specify which operation defined in the OpenAPI document you wish to call, using the workflow DSL.

It is also, of course, based on workflow patterns: execution order, error handling, data management, data transformation. There are lots of patterns we view as useful to have in the DSL, in the specification itself. All that information is in this link here, which goes to the specification, where you can see all the features we have.

The workflow itself is based on a state machine diagram. I say "based on" because we have start and end conditions in our workflow, and you go from state to state in a fluent manner. Each state is responsible for doing something: for instance, producing an event, calling an external function, deciding which of two branches to take, or calling another subflow. There are a lot of state types in the specification; as I said, you can go to the specification to see every one we support in detail. If you feel we're missing something that might be interesting, you can propose it, send a PR, or reach out to us and say, "Hey, this would be nice to have in the specification," and we can talk it through and figure it out together.

For the data processing itself, you enter with data and you go out with data. In between, you have this pipeline of states going one after the other, so you get data processing over the data your workflow receives.

The specification has three main parts; this is the backbone of a workflow, let's say: functions, events, and of course the control logic. The first thing is functions. A function definition is, for instance, a call to a RESTful service or to a gRPC API. So you define, say, the OpenAPI spec file and the ID of the operation you'd like to call, in this section of the workflow. In the events section, you declare how you are going to consume and produce events within your workflow. For instance, you are listening for an order event and producing shipping events or "order approved" events; or, in a scenario like a bidding service, you are listening for bid events and producing notifications for the users watching that bid. You can go crazy and declare whatever events your business needs in this section of the workflow.
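Put together, those two declaration sections might look roughly like this (a hedged sketch; the file name, event types, and operation ID are illustrative, not from the talk):

```yaml
# Illustrative functions and events sections (hypothetical names and files).
functions:
  - name: validateOrder
    # OpenAPI document path, '#', then the operationId of the REST call
    operation: specs/orders.yaml#validateOrder
events:
  - name: OrderEvent            # consumed to start the workflow
    type: orders.new            # CloudEvents 'type' attribute
    source: /online-store/checkout
    kind: consumed
  - name: OrderApprovedEvent    # produced by the workflow
    type: orders.approved
    kind: produced
```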
And states, last but not least, are the control logic part of the workflow. That is where you describe what the workflow is doing: the data processing, how things flow, calling the functions you declared, producing or consuming the events you declared, branching the run of the workflow into subflows, or whatever control logic you come up with or your business needs. As always, I've linked here to the specific section in the specification, so you can go there, see all the details of each section for yourself, and understand in more depth what we are trying to express here. I'll try to be brief, because the presentation could get long if I dwell on every detail.

Every time we present the specification and talk about it, people ask about implementations. Well, it is nice to have a specification, but it is also nice to have an implementation, right? That's the point of a specification: you don't create a specification just to sit there, you create it to be implemented. So, now talking about the Kogito project: we took the specification and said, hey, there's an opportunity here where we can implement this specification and offer our users a way to declare workflows using JSON or YAML files in a declarative way. And if there are other vendors out there implementing the specification, users can migrate to Kogito easily, using the same DSL they learned with those other implementations.

It is a work in progress; we are in tech preview right now, community-wise of course. There are some limitations in the implementation that we are working through, but the basics are there, and I'm going to show you a little bit of Kogito itself. I like to call it a cloud-native solution; it is a project with lots of components, a cloud-native solution for you to build business applications, applications that solve business problems. What I mean by that: we have an engine capable of processing rules in a domain-specific DSL. Imagine you're an insurance company and you want to define how you process claims: lots of details, computations, and calculations that are cumbersome to keep in your code base. So you externalize the rules, define everything in them, and hand them to the engine to run. We also have an engine capable of running complex decisions based on the DMN standard. We have, of course, the serverless orchestration feature, which is brand new, something we are working on now: the implementation of the CNCF Serverless Workflow specification. We also have an engine capable of running BPMN-based files. And we offer some tools focused on the cloud, so you can bring the business applications you created with Kogito to the cloud; we have specific tooling for that.
We have editors: since it is a business-application-driven project, of course we have editors for you to design and create your BPMN processes, your DMN decisions, and your rules, and you can download them from the VS Code marketplace. We have an online editor as well: if you go to bpmn.new or dmn.new, you'll see our online editor there, and you can start sketching your process or your DMN right away. You can also download the editors from our website.

Our engines are, like I said, based on Drools, jBPM, and OptaPlanner. If you're a long-term Java developer, you've probably heard about those during your career; Drools and jBPM are super popular, rules with Drools and processes based on BPMN with jBPM. We are also working on supporting services for your business application. For instance, if you run a process and you'd like to see what is happening inside it, we have supporting services, capable of running on the cloud, that can give you a glimpse of the process and what's going on. Or take a user task in a given process: you can go into a supporting service, see the tasks, approve a task, move the process forward, and so on; there are a lot of features in those supporting services as well. And, like I said, some cloud tools: we have a Kubernetes operator for deploying Kogito applications on OpenShift or Kubernetes, and I'm going to use the operator in the demonstration. We bring and pack all of this together, and that's what we call the Kogito project: a handful of components to help you design and create your business applications and deploy them on the cloud.

Let's go straight to the use case. Imagine you have this online store, and for every new order you create an event in your architecture. That event is consumed by an order approval service; the service runs some sort of rules, some business requirements, and decides whether the order is approved or not. That's the state of the architecture right now. Then John Doe, the CEO, comes to you and says: hey, before approving the order, I'd like to go through shipping and fraud verification first. So whenever a new order comes in, before approving it with whatever you have now, do a shipping and a fraud verification for me, please. And I'd like to run those in parallel, because we can then concentrate the approval at the end of this order processing.

So you come up with this: what about creating a pre-processing workflow? When a new order event comes in, we split the processing by producing two events from within this workflow: a shipping handling event and a fraud evaluation event. Any component in the architecture can listen to those events, react upon them, and do whatever it wants. In our example, we have the International Shipping Validation Service and the Domestic Shipping Validation Service listening for the shipping events, and we have the Fraud Evaluation Service listening for the fraud evaluation events.
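To make that concrete, the "new order" event driving all of this could be a CloudEvent along these lines (shown in YAML for readability; on the wire it would typically be JSON, and the type, source, and data values here are illustrative, based on the demo later in the talk):

```yaml
# Hypothetical CloudEvent for a new order (illustrative attribute values).
specversion: "1.0"
type: orders.new                 # the event type the workflow listens for
source: /online-store/checkout
id: order-12345
datacontenttype: application/json
data:
  orderId: "12345"
  item: iPhone 12
  total: 1001
  country: US
```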
Also, when we finish the shipping validation, we emit a new event saying "shipping verified" or something like that. And after the Fraud Evaluation Service finishes, it can say "this looks like fraud" or "this does not look like fraud," it can be anything. At the end, we have our order approval service again, and it can correlate all those events, for instance by the order ID: "I received the shipping approval, I received the fraud evaluation, and I received the new order event, so I can correlate the three of them and proceed." So this is a small excerpt of an order processing use case for our example.

I have five services running around, and none of them knows about the others; they just listen for events. That's an event-driven architecture: event-driven applications just react to events and emit new ones, and you can add new components to your architecture. You have this flexibility of changing things around your architecture, removing a service, adding another. For instance, if tomorrow you stop shipping internationally, you just remove the International Shipping Validation Service; there's no use for it anymore, right? You don't have to code anything new; you just remove a service from your architecture and stop doing that thing. Or, if you want to add new capabilities, you add a new service that listens to an event and does whatever it needs. This is more or less what the benefits of an event-driven architecture can give you: async processing, this kind of flexibility, composition and orchestration of events. That's why we are talking about workflows today.

So let's zoom into the first service, the order pre-processing workflow, and see how it does its thing. I drew this workflow to pre-process my order: I receive the new order event, I process the order, and I split into two parallel states, handling fraud in one state and shipping in the other, so I can figure out which kind of events I need to produce for my architecture. That's what happens inside that pre-processing workflow service, and this is more or less what my workflow looks like.

Let's see how this workflow is described in YAML format using the Serverless Workflow specification. I broke it into three workflows: the main workflow, the order workflow, which has one state here that we call the parallel state, calling two subflows, the fraud handling workflow and the shipping handling workflow. Let's take a look. The first part of the workflow is what I call the header, where we give some information about the workflow: the name of the workflow, its identification, the version (so you can control workflow versioning), and a description. And I say which state I start my workflow in; in this case, Receive Order. In the second part, I have the events definition, where I define how I consume events. Let's say that in this particular use case I have the order event, a CloudEvents type: some service out there is producing this order CloudEvent for me, and I am capable of listening to it.
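Assembled from the header, the event definition, and the two states he walks through next, the main workflow might look roughly like this (a sketch with assumed names, not the exact demo file; branch syntax varied a bit across early spec versions):

```yaml
# Illustrative main order workflow (assumed names, not the exact demo source).
id: orderworkflow
version: "1.0"
name: Order Pre-processing Workflow
start: Receive Order
events:
  - name: OrderEvent
    type: orders.new              # CloudEvents type produced by the store
    source: /online-store/checkout
    kind: consumed
states:
  - name: Receive Order
    type: event                   # waits for the consumed order event
    onEvents:
      - eventRefs: [OrderEvent]
    transition: Process Order
  - name: Process Order
    type: parallel                # runs both branches at the same time
    branches:
      - name: Handle Fraud
        workflowId: fraudhandling       # subflow invocation
      - name: Handle Shipping
        workflowId: shippinghandling
    end: true
```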
So the first state, Receive Order, is capable of listening to this event, the order event, which I reference by its name, the order event name. When I receive this event, I transition to Process Order. And the Process Order state, which has type parallel, creates the branches: fraud handling and shipping handling. My engine will call those two subflows at the same time, in parallel, and once they finish, my workflow ends.

To create these workflows, I'm using the Serverless Workflow plugin, which you can download from the VS Code marketplace. It is maintained by the Serverless Workflow community, the specification community; you can go there, see the source code, do whatever you want, download it, and please give the plugin a five-star rating. It has this nice feature where you can generate a diagram from any workflow file; it uses a simplified form of the state diagram from UML. So we have a start, we receive the order in an event state, we process the order in a parallel state, and we create two branches.

Let's take a look at the first one, the fraud handling workflow. Same thing: I have the name, the state I start in, the version, and the events I'm going to produce. So I'm going to produce an event of type orders.fraudevaluation. And let's look at the states. We have a data condition here: when we start this workflow, we look at the input data of my workflow and evaluate a JSONPath expression against it. If the total of my order is greater than 1,000, I say we need fraud verification; if it is under 1,000, we don't. Of course, this is a super lame example; in a real use case, you wouldn't do it like this. You could create a state before this one and call a service to evaluate some aspects, taxes or the behavior of the customer, whatever you want, and then tell the workflow, "hey, this order needs to be validated for fraud" or something like that. But to keep the conditions simple for this case, we have this switch state type with these conditions here, very lame, but it works nicely. Then I inject a new attribute into my workflow data, fraudEvaluation equals true, and I produce the fraud evaluation event. That event carries in its data attribute everything I created here in the workflow, so it goes out with fraudEvaluation equals true and the order information as well. Anyone interested in the orders.fraudevaluation event can do something with it.

All right, let's do the same thing we did before and generate the diagram using the plugin. As you see, we have the fraud handling switch state, and depending on the condition, we either produce the fraud evaluation event or otherwise just end the workflow. Very simple; hopefully you can follow what is happening there.

The other workflow is the shipping handling workflow, which is executed at the same time as this one. More or less the same thing, but we now have two types of events: an international shipping order event and a domestic shipping order event. So depending on the context of my order, I produce either an international shipping order event or a domestic shipping order event.
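A sketch of the fraud handling subflow described above, with assumed names and expression syntax (the condition syntax changed across early spec versions; this uses a JSONPath-style filter):

```yaml
# Illustrative fraud handling subflow (assumed names and expressions).
id: fraudhandling
version: "1.0"
name: Fraud Handling Workflow
start: Fraud Handling
events:
  - name: FraudEvaluationEvent
    type: orders.fraudevaluation
    kind: produced
states:
  - name: Fraud Handling
    type: switch                             # data-based switch state
    dataConditions:
      - condition: "$[?(@.total > 1000)]"    # JSONPath over the order data
        transition: Flag For Evaluation
    default:
      end: true                              # under the threshold: nothing to do
  - name: Flag For Evaluation
    type: inject                             # merge a new attribute into the data
    data:
      fraudEvaluation: true
    end:
      produceEvents:
        - eventRef: FraudEvaluationEvent     # event data carries the workflow data
```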
And it's more or less the same thing we did there: we also have a switch state here. The first condition is: if the order is within the US, it's domestic shipping; if not, it's international shipping. Again, a super lame example; in a real use case, you might call a shipping service, or maybe a geolocation service to resolve the correct address, figure some other things out, and enrich your order data with the output of that service, for instance. But in this case, if we see that this is a US order, we transition to domestic shipping; if it is international, outside the US, we handle it as international shipping. For domestic shipping, I add a new attribute to my data, shipping: domestic; otherwise I add shipping: international. And then I produce either an international shipping order event or a domestic shipping order event.

Again, super simple, but you can see the power of it. Imagine you're talking with a business person: you can bring this DSL to them and explain, "hey, this is what is happening inside this microservice; these are the events we are consuming and producing and what we are doing with them." In a real use case, you could say, "hey, I'm calling this Google service, we receive this kind of information, and we process it like this," all expressed in the workflow. Let's see what the generated diagram looks like: we have the shipping handling state, a switch state, and there you go, it can be a domestic shipping or an international shipping. As you see, at the end we produce an event, international or domestic shipping, and then we end.

Well, that's what the workflow looks like from this perspective, when you are designing and creating it. When it comes to deploying it to a Kubernetes or OpenShift cluster, for instance, or to the cloud, you can use the capabilities of Kogito. We have a CLI, so in this case you can deploy the workflow using this command line: "kogito deploy", which is the action, the verb of what we are doing, then "order-serverless-workflow", the name I gave the service, and then you add all those YAML files to the CLI. The CLI will push everything, in this case to an OpenShift cluster, and there we generate code based on those files, create a Kogito runtime image in the internal image registry, and the Kogito operator creates the Kubernetes resources and the Knative resources to deploy and manage this workflow running on the platform.

And of course, for this use case, specifically because we're working with events and Knative, we need the serverless platform. This also works on plain Kubernetes, but there you can't build within the cluster: you have to build the image yourself and push it to the cluster, and the Kogito operator will take your image and perform the same steps, creating the Kubernetes resources and the Knative resources for you, so your service gets connected with the Knative broker.

I see we have some chat going on there. Would you like me to stop here, or should we answer the questions after? Let's answer the questions at the end.
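For reference, the shipping handling subflow he walked through would look similar to the fraud one (again, assumed names and expressions):

```yaml
# Illustrative shipping handling subflow (assumed names and expressions).
id: shippinghandling
version: "1.0"
name: Shipping Handling Workflow
start: Shipping Handling
events:
  - name: DomesticShippingEvent
    type: orders.shipping.domestic
    kind: produced
  - name: InternationalShippingEvent
    type: orders.shipping.international
    kind: produced
states:
  - name: Shipping Handling
    type: switch
    dataConditions:
      - condition: "$[?(@.country == 'US')]"   # domestic if shipped within the US
        transition: Domestic Shipping
    default:
      transition: International Shipping
  - name: Domestic Shipping
    type: inject
    data:
      shipping: domestic
    end:
      produceEvents:
        - eventRef: DomesticShippingEvent
  - name: International Shipping
    type: inject
    data:
      shipping: international
    end:
      produceEvents:
        - eventRef: InternationalShippingEvent
```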
And most of the chat is me trying to get people to ask questions. So this is a good point: if people have questions, type them into the chat and we'll make it happen at the end. Thanks.

All right, no problem. Let me know if I'm rushing or not. So, what does that service look like deployed on OpenShift? Sorry about all the technical details, but this is how it looks from the Kogito operator's perspective when we deploy the service there. Our service acts like an event sink and source; the event source could be anything else, too. In this case, we have the serverless workflow here, and we are listening for the new order events coming from the Knative broker, consuming them through Knative Eventing triggers; a trigger delivers the event for us to listen to. So the order event comes into the broker, we listen to it, and we start our workflow. Our workflow produces events to the broker using a SinkBinding. So we produce events to the Knative broker, and any other service out there, a Knative service, any deployable pod, anything in the Knative serverless platform, can be listening to those events and react to them. That's what we are going to see in this short demonstration.

Let me see, let me try to zoom in. Can you guys see this? Yep, looks great.

All right. So I'm producing an event to the platform: the country is indeed US, the total value is 1,001, and it's an iPhone 12 order. I'm going to push this event to the platform and you'll see what happens. In this terminal here, I have three listeners for the events our workflow produces. At the top, we have the fraud evaluation service, so we're listening for the fraud evaluation events; we have international shipping here on the left and domestic shipping here on the right. When an event reaches the fraud evaluation service or the international or domestic shipping services, you'll see the produced event here. Let me push the event; it's a fire-and-forget kind of event. It's taking some time because my internet connection is not that good. So I pushed the event, and you can see that we didn't have any pods listening beforehand, because those are Knative services. And we receive a fraud evaluation, because the value is above $1,000, and we receive a domestic shipping event, because the country we're shipping our iPhone to is within the US. That's why it is a domestic shipping, and you can see it here.

And if we go to the OpenShift cluster, you can see, hopefully, if my internet connection helps... it is a live demo, damn. You already saw those; I'm searching for the pods of the international shipping service, and you'll see there are no resources there. And yeah, I don't know what is happening, maybe I need to refresh this, but whatever. You can see this event coming in here and this event coming in here. There are some extensions as well, with some information about the process itself. Here you can see that the fraud evaluation service received fraudEvaluation equals true and the domestic shipping service received shipping: domestic. And the shipping: domestic attribute is in here, but the fraud evaluation is not, because we ran one branch after the other.
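Stepping back to the wiring he describes: the trigger feeding order events to the workflow and the SinkBinding pointing its outgoing events at the broker might look something like this (hypothetical names; in practice the Kogito operator generates the real resources):

```yaml
# Hypothetical Knative Eventing resources for the workflow service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-workflow-trigger
spec:
  broker: default
  filter:
    attributes:
      type: orders.new                # only deliver new-order CloudEvents
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-serverless-workflow
---
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: order-workflow-binding
spec:
  subject:                            # the workload whose outgoing events we bind
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: order-serverless-workflow
  sink:
    ref:                              # produced events go back to the broker
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```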
Well, now we can see the pods here, and because we were talking, Knative destroyed the pods, since they were not being used; that's why we see them terminating here. So let's say we have, I don't know, a new order from Italy this time, also above $1,000, and I'm going to send it quickly this time because the pod is already there. The international shipping pod is created in order to listen to this event and react upon it. And same thing as before: you won't see fraud or domestic here, because we went down the branch for international shipping.

Now try to imagine this in a real use case where you have to optimize your resources; you have to save resources in order to be a more profitable company. In this case, let's say that you receive, I don't know, maybe 90% of your orders as domestic. Then you don't have to keep the international shipping handling services up all the time. You can have the workflow control all the logic, and you can break your domain into more specific categories and be flexible enough to distribute your services around your architecture. And you won't waste resources on something you won't need for some time. So this is a nice use case for you to maybe reevaluate some of the things you are doing. Normally, people create huge microservices that do lots of things, and sometimes it is nicer to break them into specific parts and run them in this fashion. If I get lots of domestic orders and few international ones, that part of my system isn't consuming resources or producing events, so I can reduce my cost, and sometimes that helps a lot.

Well, let's get back. There are many more things I'd like to discuss; there are lots of details and strategies around this. I know it was super quick and brief, and I introduced lots of concepts: the Serverless Workflow specification, the Kogito project, how to deploy, Knative. There's a lot out there, and also lots of resources. You can go to our websites and see what we are doing and the use cases where we are capable of helping you solve your business problems. Reach out to me as well: I have a Twitter account, you can send me a DM, or send an email, or reach out to me on GitHub, or whatever; I'm up there, as you can see. I'd also like to ask you to take a look at the blog. We are trying to update the blog twice a week with new content, to give people an understanding of what we are doing in Kogito and in business applications in general, and how you can solve your business problems. And if you're curious about the demonstration, the example itself, you can go to this repository here and see all the source code. You'll see the workflows; I created some scripts there for you to deploy it to your Kubernetes instance. We have the image already there... no, sorry, the Dockerfile is already there, so you can download it, build and run the image locally, and play around with this demonstration however you want. And I'd like to invite you to see the presentation Tihomir and I did last year; Tihomir is much better than me at explaining the Serverless Workflow specification.
We go into more detail about the specification there, so if you're curious about it, just go watch it. And of course, on the serverlessworkflow.io website you have all the links you need to know about: our Slack channel, our mailing list, our weekly meeting, and everything else; you'll find all the resources there, okay? Well, thank you, that's it. I don't know if we have any questions or not.

We've got some time for questions, and there are a couple of them in the chat. I'm going to take them in reverse order, because I think you've already answered some of the earlier ones. Eric is asking: why was Knative chosen to increase the workload capabilities, and not an autoscaler?

Well, just for demonstration purposes, actually; we could use an autoscaler, that's not a restriction whatsoever. I just found it simpler to show users what we can do, and in the example you'll see all the Knative resources so you can replicate the exact same thing in your environment as well. So no, no particular reason. But for the Knative Eventing part, it is because the Kogito engine has these connectors, and the Kogito operator is coded with support for the Knative Eventing platform, so we create the specific Knative Eventing resources; for the Knative Eventing part, that's why. But for the Knative Serving part, no, it can be anything: an autoscaler, regular deployments, whatever; no particular reason.

Cool, all right. And there's one other question, from Ilyan: is it possible to create custom code?

Oh yeah, of course, because this is a Java project, a straightforward Maven Java project. In the example itself, I have a README file where I explain how you can create this serverless workflow Kogito project from scratch using Java. So you have to know Java, but yes, you can add custom code and call it from the workflow. In the Kogito implementation, we have specific functions you can declare here: you give a name, and then in the metadata section there is what we call a service, an internal call, so you can call any Java bean from the workflow. So yes, you can do it. And from the specification point of view, we have the jq capabilities, so you can create a function of type expression and externalize your logic that way: if your custom code is just related to data handling over the input, you can define the expression as a function here in the workflow itself, and that will work.

Yes, and he's assuming you can call out to other services during execution; Ilyan would like to be able to call out to a fraud prevention API that they have.

Yeah, absolutely. Let me show you the specification itself. You can call any functions; we have three types in the specification: RESTful service invocation, gRPC service invocation, and also expressions, like I said before. For the RESTful type, you can define a function like this, say an order confirmation, and the operation of this function is an OpenAPI definition file; after the hash here, you have the operation ID within that particular OpenAPI document. For example, let me see if I can find it... well, the famous pet store. Yeah, yeah.
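What he pulls up amounts to something like this: a REST function bound to an OpenAPI operation, plus, for contrast, an expression function of the kind he just mentioned (the file path, names, and expression are assumptions for illustration):

```yaml
# Illustrative function definitions (file path and names are assumptions).
functions:
  - name: listPets
    type: rest                                # generates a REST client for the call
    operation: specs/petstore.yaml#listPets   # OpenAPI file, '#', operationId
  - name: isHighValue
    type: expression                          # pure data-handling expression function
    operation: ".total > 1000"                # e.g. a jq expression over workflow data
states:
  - name: Fetch Pets
    type: operation
    actions:
      - functionRef: listPets                 # invoke the generated REST call
    end: true
```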
So imagine you have this OpenAPI spec here, the pet store, and you'd like to call the listPets operation. You just say: hey, this is my file and this is my operation ID. The Kogito engine will read the specification and generate client code to call this REST service. So yes, you can call a third-party API and do whatever you want within your workflow; after receiving the payload of the listPets response, you can do whatever you want with it in your workflow. Yeah, that's correct.

Okay, so someone's now asking: where are correlation keys and state stored, if used?

Well, for correlation, we are working on the stateful implementation, so we don't have it yet; I'll get to that. But in the specification, we have the part where we define it; let me see if I can find it real quickly. Oh, we're at the top of the hour. But, yeah, here: in the events, you declare the correlation. In the use case I brought, at the end, the final order processing, if that service were a workflow service as well (I don't know what it is; it's just an example I gave), you could create the correlation here in the workflow and say, for the event, hey, the attribute name is the order ID, for instance. And then, when you receive everything else, you do whatever you want. In Kogito, we don't have the stateful part of serverless workflows yet, but we are working on it. In this case, we handle that in memory: we receive the events, correlate them at the same time, and do all the other stuff. But we are looking forward to making this stateful.

Cool. All right, there's one earlier question; I think you might have answered it, but someone's asking how this relates to, or differs from, Camel K.

Well, Camel is, I'd say, for integration most of the time, implementations of integrations, and we do not target integrations. So you cannot FTP a file or send an email or things like that; that's not what we are discussing here. And of course, you can implement a pipeline in Camel K; that's not the idea here. From our perspective, what we are implementing here is more of a declarative workflow feature; it's not about dealing with the integration side of things. I understand there are things that interconnect, and you could implement an integration here: for instance, you can call a Twitter API using the REST function, of course, and in Camel K you can create a route that goes through every step, as we created here. But each tool has its particular strengths, and you should leverage them. From the workflow side, I'd say that for this kind of declarative workflow, for the business applications we have, this is the suitable tool for this kind of use case. That's my point there.

Okay, cool. All right. We are almost at the top of the hour, and the very first question was: where did the name come from?

Well, that's something people ask sometimes, indeed. You know, I can't explain exactly why the founders came up with this name, but everything we do in Kogito is related to knowledge; that's where the name comes from. "The knowledge is everything," that's our motto, because all these rules, decisions, and processes, everything in business applications, is knowledge-based.
And I believe it's based on, of course the founders can explain it much better than I can, the quote "Cogito, ergo sum" from René Descartes; you can read up on it. And the K is because of Kubernetes, you know, it being a cloud-native platform. So that's why it's called Kogito. I don't know if I answered your question. Hopefully.

I think that's great. You know what the best part is? It's not a Greek or a navigation term, so that's really the most wonderful thing. It's like we have completely run out of K words in the Kubernetes cloud-native space, so you've done well. So, we've got like four minutes left: are there some new features, or what's on the roadmap, what's coming up next that we should be watching out for?

Well, like I said, Kogito's implementation of the Serverless Workflow specification is a work in progress. The workflow specification is going super well in the community; we are creating great things there, and big companies are interested in the specification, which is a very nice thing from the community side. We are about to release 1.0 of the specification, so this is a nice milestone; hopefully we will see more people trying to implement it. And on the Kogito side, we are keen to implement everything that is in the specification, to be 100% compatible with it. We are about to bring jq to the engine, so you can write jq expressions in your workflow to transform or do whatever you want with the data within the workflow. And of course state: stateful execution is a very important thing we are looking to implement in the engine, as well as error handling, compensation, and all of those workflow patterns that we don't have yet from the Serverless Workflow perspective. So yeah, there's a lot of work to do there, on both the specification and the implementation side.

Yeah, so can you share that resources page again? Because I think that's a great way to end, and if people want to get involved, they can follow the links.

Oh, please, yeah. On the front page of the specification, if you go here, you'll see our GitHub page, and you can see all the information you need, mainly how to engage with the community. We have a Slack channel under the CNCF Slack, so please go there and say hello. And if you have any other questions, just reach out to us there; we are keen to help, and we're looking for contributors as well.

Well, thank you for making this a really interesting and engaging talk, Ricardo. We'll definitely have you back, maybe to celebrate the 1.0 release; just keep us in mind. There were plenty of great questions today, so thank you everybody for tuning in, and Chris Short for producing in the background there. Really pleased to have you here, Ricardo; definitely come back and keep us posted and up to date. And everybody, join us on the community Slack channel; if you have questions he didn't answer today, we'll try to get them answered there. All right, well, thank you everybody. And again, Ricardo, thank you. I love the hand waving while you're talking; the most expressive virtual event so far.

Yeah, I'm trying hard. All right, well, take care, and thanks everybody. Thank you guys. Bye.