All right, everybody, welcome back to another OpenShift Commons briefing. Today we have Ricardo Zanini, a senior software engineer from Red Hat. And we're going to have a topic that I don't really know a lot about, because I haven't heard a lot about it, which is why we invited him here: event-driven applications using Kogito serverless workflows and Knative. We've had a lot of talks about Knative, but Kogito — I think I'm mangling the name — we'll let Ricardo tell us a lot more about this, and then we'll have live Q&A after the presentation and the demo that he's going to give. So take it away, Ricardo: introduce yourself, tell us what you do at Red Hat and tell us all about this.

Sure, yeah, thanks for having me, guys. Well, my name is Ricardo, like Diana said. Today we're going to talk about event-driven applications and how we can create those business applications using the Serverless Workflow specification and the Knative platform. I'm a software engineer. I work on the Knative project, actually, and I also help in the CNCF Serverless Workflow specification project. So we work on both ends: on the specification side, specifying the workflow DSL, and also on the implementation, in the Kogito engine.

Well, let's get started. Let me see if I can. All right. In the agenda today, I'm going to introduce the Serverless Workflow specification and what we are doing there — just a brief introduction to the project. I will try not to talk too much about it, because there's a lot of material in the specification that you should take a look at yourself for all the other details. Then I'll talk about the implementation itself that we are working on right now, which is the project we call Kogito. It is a project for designing and creating business applications.
After that, I will introduce a small use case, so everyone can feel comfortable: an online store order processing. It is a super simple use case everyone is familiar with, I hope. And we are going to run a super short demo as well, showing how the workflow reacts upon events and what it is doing on the Knative platform.

About the CNCF Serverless Workflow specification: we say CNCF because the Serverless Workflow project is a sandbox project in the CNCF. So you can go there, to the sandbox projects, and you'll see the Serverless Workflow specification project there. We are accepting contributions, and it is open source, Apache 2 licensed, so you can go there, figure things out, ask questions and maybe propose some new features that you'd like. Or, if you're keen to implement the specification as well, we can help. We have SDKs, we have a lot of tools around the specification that can help you understand, implement, or maybe use the specification in your company as well.

So what is it? With the Serverless Workflow specification, what we are targeting is to create a declarative workflow language, based on and targeted specifically at the serverless computing technology domain. That means reading events, producing events, consuming events, correlating them, calling functions outside the scope — having all those things run serverless, so we can autoscale the functions up and down and do all those things that you already know about serverless computing, but having this in mind in the workflow language as well. This quote I brought from the website, so you can go there and visit the website to understand in more detail what it is about.

Why did we start this crazy project? Why is it important for us? Well, first, because we believe that workflows can capture and organize business requirements.
So instead of grabbing your Java, Go or Python code and bringing it to your business analysts and explaining to them — hey, here's what we are doing in this service, here's what is going on there, look at these conditionals, we're doing this and that — you can just bring the workflow to your business person and explain to them what is happening there, and they can understand it. They can even help you design the workflow, or they can even create a workflow themselves and hand it to you in order to have it run in a runtime, maybe. So that's one of the ideas behind it.

The specification also targets being a vendor-neutral, platform-independent workflow language. What this means is that we do not tie the specification to a specific vendor. It is not code, it's nothing like that — it is just the specification. Anyone out there can take the specification, implement it in their runtime and have a workflow runtime running based on the specification. It is platform independent because we're not tied to a specific technology. We do not say, hey, the workflow must be implemented in Java, for instance — this is not the case. So anyone can just take the specification and use it as they please in their implementations.

So imagine that, having this common way of describing a workflow, we can potentially create some common libraries to be shared among all the runtimes, the implementations themselves. We can create tooling and infrastructure around the workflow, and have all of them share a common way to describe something and create something together. That's the nice thing about being an open source specification. And like I said, being vendor neutral, we increase productivity and flatten the learning curve.
You don't have to learn all of those workflow languages that are out there, because nowadays, if you follow the IT news, you see that every week we have a new workflow engine out there. Instead of that, we are proposing one specification that can run on lots of runtimes, and those runtimes can be offered anywhere: you can run on Kubernetes and OpenShift, Google Cloud can have a runtime for the specification as well, AWS, Azure — whatever cloud provider out there, they can offer their own workflow service based on the specification. That's a win-win situation, we believe: everyone implements the same DSL for specifying workflows, so the users gain a lot from that.

The specification is based on standards. We looked around and saw lots of standards out there. So for instance, to define an event — like, I want to consume this event in order to start my workflow, or I want to produce an event at the end of my workflow, or I want to produce an event if I go through this branch of my workflow — we use the CloudEvents specification to define that event in the workflow definition file, in the workflow definition itself. You can also call external RESTful services using the workflow specification, and we use OpenAPI to declare how you make those calls from the workflow itself. So when, for instance, you receive an order and you want to call an external service to validate your order or do something else, and it is a RESTful endpoint, you can express that call using the OpenAPI specification. You bring the OpenAPI interface into the workflow, and you specify which function, defined in the OpenAPI document, you wish to call using the workflow DSL. Also, it is of course based on workflow patterns: execution control, error handling, data management and data transformation.
So there are lots of patterns that we envision as useful to have in the DSL, in the specification itself. You can see all of that information at this link here, which goes to the specification, and you'll see all of the features we have in there.

Well, the workflow itself is based on a state machine diagram. I say "based" because we have start and end conditions in our workflow. You start and you go from state to state in a fluent manner, and each state is responsible for doing something. It is responsible, for instance, for producing an event, or calling an external function, or deciding to split between two branches — going to one branch or to the other. It can call another sub-flow, for instance, and there are lots of types of states in the specification. As I said, you can go to the specification to see in detail every one that we support. And if you feel that we don't have something that might be interesting, you can propose a PR, or open an issue and say, hey, this would be nice to have in the specification, and we can talk together and figure that out.

For the data processing itself, you enter with data and you go out with data. In between, you have this pipeline of states, going through one after the other, and you have data processing on the data that you're expecting to receive in your workflow.

The specification has three main parts — this is the backbone of a workflow, let's say: we have functions, events, and of course the control logic of the workflow. The first thing is the functions. We define a function as, for instance, a call to a RESTful service or a call to a gRPC API. So you define, in the functions section of the workflow, the OpenAPI spec file, for instance, and the ID of the operation that you'd like to call.
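As a sketch of what the functions section described above can look like, here is a minimal example. The file path and operation ID are hypothetical, and the exact keys depend on the spec version; this roughly follows the early syntax current at the time of the talk:

```yaml
# Hypothetical "functions" section of a workflow definition.
# "specs/orders.yaml" is a made-up OpenAPI file path, and
# "validateOrder" a made-up operationId inside it.
functions:
  - name: validateOrderFunction
    operation: specs/orders.yaml#validateOrder
    type: rest
```

A state in the workflow can then reference `validateOrderFunction` by name when it wants to invoke that REST operation.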
In the events section, you declare how you're going to consume and produce events within your workflow. So for instance, you are listening for an order event and you're producing maybe shipping events or approval events; or, in a scenario like a bid service, you're listening for bid events and producing notifications for users that are watching that bid, for instance. So you can go crazy and declare whatever events you please, as your business needs, in this section of the workflow.

And the states — not the least important part of the workflow — are the control logic. That is where you describe the workflow and what it is doing: the data processing and how things are going to go, like calling the functions you declared, or producing or consuming the events you declared, or branching the run of the workflow into more sub-flows, or whatever control logic you come up with or your business needs, in this section of the workflow.

As always, all these slides link to a specific section in the specification. So you can go there and see for yourself all the details of each section and understand in more detail what we are trying to express here. I'll try to be brief, because the presentation could get long if I dwell on one detail or another.

Well, every time we present the specification and talk about it, people ask about implementations. And well, it is nice to have a specification, and it is also nice to have an implementation, right? Because that's the point of having a specification. We don't just create a specification to sit there — you create a specification to be implemented. So we took the specification.
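An events section for the order use case later in this talk might look roughly like this. The CloudEvents `type` and `source` values here are illustrative assumptions, not values shown in the talk:

```yaml
# Hypothetical "events" section: one consumed event that can start
# the workflow, and one produced event emitted by it. Each entry maps
# to CloudEvents attributes (type, source).
events:
  - name: OrderEvent
    type: orders              # illustrative CloudEvents "type"
    source: /online-store     # illustrative CloudEvents "source"
    kind: consumed
  - name: FraudEvaluationEvent
    type: orders.fraud.evaluation
    kind: produced
```

States then refer to these events by their `name`, so the workflow logic stays decoupled from the concrete CloudEvents attributes.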
Now, talking about the Kogito project: we took the specification and said, hey, there's an opportunity here where we can actually implement the specification and offer our users a way to declare the workflow using JSON or YAML files, in a declarative way. And also, if there are other vendors out there implementing the specification, they can migrate to Kogito easily, using the same things they learned with this DSL in other implementations. It's a work in progress — we are in tech preview right now, community-wise, of course. There are some limitations in the implementation that we are working to address, but the basic things are there, and I'm going to show you.

A little bit about Kogito itself. I like to call it a cloud-native solution — it is a project with lots of components — for you to build business applications: applications that solve business problems. And what do I mean by that? We have an engine that is capable of processing rules in a specific domain-driven DSL. Imagine that you're an insurance company and you want to define in detail how you're going to process things: lots of computations and calculations that are more or less cumbersome to keep in your code base. So you externalize the rules, you have all of them defined as rules, and you can give them to the engine to run. We also have an engine capable of running complex decisions based on the DMN standard. We have, of course, the serverless orchestration feature that is super brand new, something we are working on now, which is the implementation of the CNCF Serverless Workflow specification. We also have an engine capable of running BPMN-based files, and we offer some tools focused on the cloud, so you can bring all of the business applications that you created with Kogito to the cloud.
We have specific tooling for that. We have editors — it is a business-application-driven project, of course — editors for you to design and create your BPMN processes, and to create and design your DMN decisions and your rules, and you can download them from the VS Code marketplace. We have an online editor as well: if you go to bpmn.new or dmn.new, you'll see our online editor there, and you can start sketching your process or your DMN right away. You can also download the editors from our website.

The engine, like I said, is based on Drools, jBPM and OptaPlanner. If you're a long-term Java developer, you should have heard about those during your career. Drools and jBPM are super famous — yeah, popular, that's the correct word. They're super popular frameworks out there: Drools for rules and jBPM for processes based on BPMN.

We are also working on supporting services capable of giving support to your business application. For instance, if you run a process and you'd like to see what is happening inside that process, we have supporting services, capable of running on the cloud, that can give you a glimpse of the process itself and what's going on there; or even a user task in a given process — you can go into this supporting service, see the tasks, approve a task, move the process forward, et cetera. So there are a lot of features in those supporting services as well.

And like I said, some cloud tools. We have a specific Kubernetes operator for deploying Kogito applications on OpenShift or on Kubernetes, and I'm going to use the operator in the demonstration. So we bring and pack this all together, and that's what we call the Kogito project: a handful of components to help you design and create your business applications and deploy them on the cloud. Let's go straight to the use case.
Imagine that you have this online store, right? And on new orders, you create an event in your architecture. Every new order creates a new event, and this event is being consumed by an order approval service. The service has some sort of rules, some sort of business requirements, that say whether the order is approved or not. That's the state of the architecture right now. And John Doe, the CEO, comes to you and says, hey, before approving the order, I'd like to go through a shipping and fraud verification first. So whenever a new order comes, before approving it with whatever features you have now, do a shipping and a fraud verification for me, please. And I'd like to run that in parallel, because we can then concentrate the approval at the end of this order processing.

So you come up with: okay, what about creating a pre-processing workflow? When a new order event comes, we split this process: within this workflow, we create two events, a shipping handling event and a fraud evaluation event. Any component in the architecture can listen to these events, react upon them and do whatever they want with them. In our example, we have the international shipping validation service and the domestic shipping validation service, which are listening for the shipping events, and we have the fraud evaluation service, which is listening for the fraud evaluation event. When we finish the shipping validation, we emit a new event saying shipping verified, or something like that. And after the fraud evaluation service finishes, it can say, hey, this looks like a fraud, or this does not look like a fraud — it can be anything.
And in the end, we have our order approval service again, and it can correlate all those events by, for instance, the order ID, and say, hey, I received the shipping approval, I received the fraud evaluation, and I received the new order event. So it can correlate the three of them and then proceed with whatever it needs to do. This is a small excerpt of an order processing use case for our example. I have five services running around; none of them knows about the others — they just listen for events.

So this can be an event-driven architecture, event-driven applications. They are just reacting upon events and emitting new events, and you can add new components to your architecture. You have this flexibility of creating things around your architecture, removing a service, adding another. For instance, tomorrow you won't ship internationally anymore, so you just remove the international shipping validation service — there's no use for it anymore, right? You don't have to code something new or anything like that. You just remove a service from your architecture and you stop doing that thing. Or: well, I wish to add some new capabilities, so I add a new service to listen to this event and do whatever it wants. This is more or less what the benefits of an event-driven architecture can give you: async processing, this kind of flexibility, composition and orchestration of events. That's why we are talking today about workflows, for instance.

So let's zoom in on this first service, the order preprocessing workflow, and see how it does its thing. I drew this workflow to pre-process my order: I receive the new order event and I process the order. I split into two parallel states, saying I will handle fraud in one state and I will handle shipping in another state.
So I can figure out which kind of events I need to produce for my architecture. This is what's happening inside that preprocessing workflow service, and this is more or less what my workflow looks like. Let's see how this workflow works, and how I'm describing it in YAML format using the Serverless Workflow specification.

I broke the workflow into three workflows. I have the main workflow, the order workflow, and I have one state here that we call the parallel state, calling two sub-flows: the fraud handling workflow and the shipping handling workflow. So let's take a look at it. The first part of the workflow is what I call the headers, where we give some information about the workflow: the name of the workflow, the identification of the workflow, the version (for you to control versioning of the workflow), some description for it. And I say in which state I will start my workflow — in this case, receive order. In the second part, I have the events definition, where I define how I will consume the event. Let's say that in this particular use case, I have the order event type, a CloudEvents type. So some service out there is producing this order CloudEvent for me, and I'm capable of listening to it. The first state, receive order, is capable of listening to this event — the order event that we tie by its name, the order event name. And when I receive this event, I say: transition to process order. And this state, process order, which is of type parallel, will create the branches fraud handling and shipping handling. My engine will call those two sub-flows at the same time, in parallel, and as they finish, I end my workflow.
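Putting the pieces just described together, a sketch of the main workflow could look like this. The IDs, names and event types are illustrative, and the exact keys depend on the spec version current at the time of the talk:

```yaml
# Hypothetical main order workflow: an event state waits for the order
# CloudEvent, then a parallel state runs the two sub-flows.
id: orderworkflow
name: Order Preprocessing Workflow
version: "1.0"
description: Pre-processes new orders for fraud and shipping handling
start: ReceiveOrder
events:
  - name: OrderEvent
    type: orders            # illustrative CloudEvents "type"
    source: /online-store   # illustrative CloudEvents "source"
    kind: consumed
states:
  - name: ReceiveOrder
    type: event
    onEvents:
      - eventRefs:
          - OrderEvent
    transition: ProcessOrder
  - name: ProcessOrder
    type: parallel
    branches:
      - name: HandleFraud
        workflowId: fraudhandling      # sub-flow, defined in its own file
      - name: HandleShipping
        workflowId: shippinghandling   # sub-flow, defined in its own file
    end: true
```

Both branches receive the order data as input, and the workflow ends once the two sub-flows complete.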
To create those workflows, I'm using the Serverless Workflow plugin, and you can download it from the VS Code marketplace. It is maintained by the Serverless Workflow specification community, and you can go there, see the source code, do whatever you want, download it — and please give the plugin five stars. And it has this nice feature where you can generate a diagram from any workflow file. It uses a simplification of the state diagram from UML. So we have a start, we receive the order — it is an event state — we process the order in a parallel state and we create two branches.

Let's take a look at the first one, the fraud handling workflow. Same thing: I have the name, the state I will start in, the version, and the events that I'm going to produce. So I'm going to produce an event of type orders.fraud.evaluation. And let's take a look at the states. We have a data condition here: when we start this workflow, we analyze the data in the input of my workflow, and we perform a JsonPath expression on this data. If the total of my order is greater than 1000, I say that we need fraud verification; if it is under 1000, I won't need fraud verification. Of course, this is a super lame example — in a real use case, you won't do things like that. You could create a state before that: you could call a service to evaluate some aspects of taxes or the behavior of the customer, whatever you want, and then say to the workflow, hey, this order looks like it needs to be validated for fraud, or something like that. But in this case, to simplify the conditions, we have this state of type switch, and I have these conditions in here — very lame, but it is super nice. And, using an inject state, I inject a new attribute into my workflow data: fraud evaluation equals true. And then I produce the fraud evaluation event.
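A sketch of that fraud handling sub-flow, under the same illustrative naming and spec-version assumptions as before (the JsonPath condition syntax in particular varied between early spec versions):

```yaml
# Hypothetical fraud handling sub-flow: a switch state with a JsonPath
# data condition, an inject state, and a produced event at the end.
id: fraudhandling
name: Fraud Handling Workflow
version: "1.0"
start: HandleFraud
events:
  - name: FraudEvaluationEvent
    type: orders.fraud.evaluation   # illustrative CloudEvents type
    kind: produced
states:
  - name: HandleFraud
    type: switch
    dataConditions:
      # Orders over 1000 need fraud verification (JsonPath, illustrative form)
      - condition: "{{ $.[?(@.total > 1000)] }}"
        transition: EvaluateFraud
    default:
      end: true                     # under 1000: just end, no event
  - name: EvaluateFraud
    type: inject
    data:
      fraudEvaluation: true         # attribute merged into the workflow data
    end:
      produceEvents:
        - eventRef: FraudEvaluationEvent
```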
And this event contains in its data attribute everything that I created here in the workflow. So it goes out with fraud evaluation equals true and the order information as well, and anyone interested in the orders.fraud.evaluation event can do something with it. All right, so let's do the same thing we did before and generate a diagram using the plugin. As you see, we have a fraud handling switch state, and depending on the condition, we produce a fraud evaluation event; otherwise, we just end our workflow. Very simple, and hopefully you could understand what is happening there.

The other workflow is the shipping handling workflow, which is executed at the same time as the other one. More or less the same thing — we now have two types of events: the international shipping order event and the domestic shipping order event. So depending on the context of my order, I will produce either an international shipping order event or a domestic shipping order event. And more or less the same thing as before: we have a switch state here as well, and the first condition is, if it is within the US, it is a domestic shipping; if it is not, it is an international shipping. So again, a super lame example — in a real use case, you could call a shipping service, maybe a Google localization service, to find the correct address, figure some other things out, and enhance your order data with the output of that service, for instance. But in this case, if we see that this is a US order, we transition to domestic shipping; if it is outside the US, we handle it as international shipping.
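The shipping handling sub-flow can be sketched the same way, again with illustrative names, event types and condition syntax:

```yaml
# Hypothetical shipping handling sub-flow: switch on the order's
# country, inject a shipping attribute, produce the matching event.
id: shippinghandling
name: Shipping Handling Workflow
version: "1.0"
start: HandleShipping
events:
  - name: DomesticShippingEvent
    type: orders.shipping.domestic        # illustrative type
    kind: produced
  - name: InternationalShippingEvent
    type: orders.shipping.international   # illustrative type
    kind: produced
states:
  - name: HandleShipping
    type: switch
    dataConditions:
      # US orders go to domestic shipping (JsonPath, illustrative form)
      - condition: "{{ $.[?(@.country == 'US')] }}"
        transition: DomesticShipping
    default:
      transition: InternationalShipping
  - name: DomesticShipping
    type: inject
    data:
      shipping: domestic
    end:
      produceEvents:
        - eventRef: DomesticShippingEvent
  - name: InternationalShipping
    type: inject
    data:
      shipping: international
    end:
      produceEvents:
        - eventRef: InternationalShippingEvent
```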
So for the domestic shipping, I'm going to add a new attribute to my data, shipping domestic; otherwise, I'm adding shipping international to my data, and I will produce an international shipping order event or a domestic shipping order event accordingly. Again, super simple, but you can see the power of this. Imagine that you're talking with a business person and you can bring the DSL to them and explain: hey, this is what is happening inside this microservice, these are the kinds of events we are creating and consuming, and this is what we are doing with them. For instance, in a real use case, you could say, hey, I'm calling this Google service, we receive this kind of information, and we process this information as we please using the workflow.

Let's see what it looks like in the generated diagram here. We have the shipping handling state — it is a switch state — and there you go: it can be a domestic shipping or it can be an international shipping. As you see, in the end, we are going to produce an event, be it international or domestic shipping, and we are going to end. Well, that's what the workflow looks like from this perspective.

And of course, this is when you are designing and creating your workflow. When it's time to deploy it to a Kubernetes or an OpenShift cluster, for instance, or to the cloud, you can use the capabilities of Kogito. We have a CLI. In this case, you can deploy the workflow using this command line: kogito deploy — that is the action, the verb of what we are doing — then serverless-workflow, which is the name I gave to the service, and you add all those YAML files to the CLI, and the CLI will push, in this case, to an OpenShift cluster.
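Based on the description above, the deploy command looks roughly like this. The service name and file names are the ones used in this demo, so treat the exact invocation as a sketch rather than authoritative CLI documentation:

```shell
# Push the workflow files to the cluster; the Kogito operator then
# builds the image and creates the Kubernetes/Knative resources.
kogito deploy serverless-workflow \
  order-workflow.sw.yaml \
  fraud-handling.sw.yaml \
  shipping-handling.sw.yaml
```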
And in there, we are going to generate code based on those files, and we will create an image — a Kogito runtime image — in your internal image registry, and the Kogito operator will create the Kubernetes resources and the Kogito resources to deploy and manage this workflow running within the platform itself. And of course, for this use case specifically — working with events with Kogito — we need the serverless platform. This also works on plain Kubernetes, but on Kubernetes you can't build within the cluster: you have to build the image yourself and then push it to the cluster, and the Kogito operator will take your image and perform the same steps. It creates the Kubernetes resources for you and connects your service with the Knative broker in this case.

Then I see that we have some chat going on there. Would you like me to stop here, or shall we answer the questions after the... Let's answer the questions at the end — most of the chat is me trying to get people to ask questions. So this is a good point: if people have questions, type them into chat and we'll make it happen at the end. Thanks. All right, no problem. Let me know if I'm rushing or not, all right?

Now, when we have the service deployed on OpenShift — sorry about all the technical details, but this is how it looks from the Kogito operator perspective, how we deploy the service in there. Our service will be like an event source (the event source can be any other thing; in this case, we have the serverless workflow here), and we are listening for the new order events coming from the Knative broker, consuming them using Knative Eventing triggers — a trigger delivers the event for us to listen to. So the order event comes here into the broker, we listen to it and we start our workflow. Our workflow will produce events to the broker using SinkBindings.
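As a sketch, the Knative Eventing wiring just described could look like this. The resource names and the event type filter are hypothetical; only the Trigger/SinkBinding pattern itself is what the talk describes:

```yaml
# Hypothetical Trigger: routes "orders" CloudEvents from the broker
# to the workflow service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-workflow-trigger
spec:
  broker: default
  filter:
    attributes:
      type: orders
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-preprocessing-workflow
---
# Hypothetical SinkBinding: injects the broker address (as K_SINK) into
# the workflow service so it can produce events back to the broker.
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: order-workflow-binding
spec:
  subject:
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: order-preprocessing-workflow
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```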
So we produce events to the Knative broker, and any other service out there — it can be a Knative service, it can be any deployable pod or anything else in the spectrum of the Knative serverless platform — can be listening for this event and react to it. And this is what we are going to see in this short demonstration. Let me see. I think this is not — let me try to zoom that. Well, can you guys see this? Yep, looks great. All right.

So I'm producing an event to the platform, and indeed the country is US, the total value is 1,001 and it's an iPhone 12 order. I'm going to push this event to the platform and you'll see what happens. In this terminal here, I have three listeners for the events our workflow produces. At the top, we have the fraud evaluation service, so we are listening for the events going to the fraud evaluation service. We have the international shipping here on the left and the domestic shipping here on the right. So once an event goes to the fraud evaluation or to the international or domestic shipping services, you will see the produced event here. Let me push the event. It is a fire-and-forget type of event. It's taking some time because my internet connection is not that good. So I push the event, and you can see that we didn't have any pods listening for it, because those are Knative services. And we receive a fraud evaluation, because the value is above $1,000. And we receive a domestic shipping, because the country we ship our iPhone to is within the US — it is an order from the US. That's why it is a domestic shipping, and you can see it there. And if we go to the OpenShift cluster, you can see — hopefully, if my internet connection helps; it is a live demo, after all. Yeah, you already saw those: I'm fetching the pods of the international shipping service, and you'll see there are no resources in there.
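An event like the one pushed in the demo could be sent as a plain HTTP request in CloudEvents binary mode, roughly like this. The broker URL, event type and payload field names are assumptions for illustration; the `Ce-*` header names follow the CloudEvents HTTP binding:

```shell
# Hypothetical CloudEvent POST to a Knative broker's ingress
# (namespace "demo", broker "default").
curl -X POST \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: orders" \
  -H "Ce-Source: /online-store" \
  -H "Ce-Id: order-12345" \
  -H "Content-Type: application/json" \
  -d '{"id": "order-12345", "country": "US", "total": 1001, "item": "iPhone 12"}' \
  http://broker-ingress.knative-eventing.svc.cluster.local/demo/default
```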
And, yeah, I don't know what is happening — I don't know if I need to refresh this, but whatever. You'll see this event coming in here and this event coming in here. There are some extensions as well, so we have some information about the process itself. And here you can see that the fraud evaluation service received fraud evaluation equals true, and the domestic shipping service received shipping domestic. And the shipping domestic is in here, but the fraud evaluation is not, because we run one after the other. Well — now we can see the pods in here, and because we were talking, Knative will destroy the pods, because they are not being used. That's why we are seeing them terminating here.

So let's say that we have, I don't know, maybe a new order from Italy, above $1,000 as well, and I'm going to send it — quickly this time, because the pods are right there. I'm creating the international shipping pod in order to listen to this event and react upon it. And same thing: you won't see fraud nor domestic here, because we entered the branch where we just emit the international shipping event.

Now try to imagine this in a real use case, where you have to optimize your resources — you have to save resources in order to have a more profitable company. In this case, let's say that maybe 90% of your orders are domestic. Then you don't have to have the international shipping handling services up there all the time. You can have the workflow control all the logic, and you can break your domain into categories, into more specific things, and be flexible enough to distribute your services around your architecture. And you won't waste resources on something that you won't need some of the time. So this is a nice example, a nice use case for you to maybe reevaluate some of those things that we are doing.
Because normally people create huge microservices that do lots of things, and maybe sometimes it is nice to just break them into specific parts and make them run in this fashion. Like, I can have lots of domestic orders or lots of international orders, and have this part of my system not running services, not producing events, not doing anything. So I can reduce my cost, and sometimes that will help a lot. Well, let's get back. There are many more things that I'd like to discuss; there are lots of details and strategies about it. I know that this was super quick and brief. I introduced lots of concepts, like CloudEvents, sorry, the serverless workflow specification, the Kogito project, how to deploy it, Knative. There are lots of things out there, and also lots of resources. You can go to our websites and see what we are doing and the use cases where we can help you solve your business problems. So reach out to me as well. I have a Twitter account, you can send me a DM, or send an email, or reach out on GitHub or whatever. I'm out there, as you can see. I would also like to ask you guys to take a look at the blog. We are trying to update the blog twice a week with new content, to give people an understanding of what we are doing in Kogito and in business applications in general, and how you can solve your business problems. And if you're curious about the demonstration, the example itself, you can go to this repository here and see all the source code. You'll see the workflow; I created some scripts in there for you to deploy it on your Kubernetes instance. We have the image, no, the Dockerfile, sorry, already in there. So you can go download the Dockerfile, build the image, and run the image locally. You can play around with this demonstration however you want. And I would like to invite you to see the presentation from last year that Tihomir and I did.
Tihomir is much better than me at explaining the serverless workflow specification. In there we go through more details about the specification, so if you're curious about it, just go watch it. And of course, on the serverlessworkflow.io website you have all the links that you need: our Slack channel, our mailing list, our weekly meeting, and everything else you see in the resources, okay? Well, thank you. That's it. I don't know if we have any questions or not. We've got some time for questions, and there are a couple of them in the chat. I'm going to do them in reverse order, because I think you answered some of the earlier questions already. Eric is asking: why was Knative chosen to increase the workload capabilities, and not an autoscaler? Well, just for the demonstration purpose, actually. We can use an autoscaler; that's not a restriction whatsoever. I just found it simpler to show users what we can do. And in the example you will see all the Knative resources, for you to replicate the exact same thing in your environment as well. So no, no particular reason. But for the Knative Eventing part, it is because the Kogito engine actually has these connectors, and the Kogito operator as well, and we have support for the Knative Eventing platform. So we create the specific Knative Eventing resources for it. So for the Knative Eventing part, that's why. But for the Knative Serving part, no, it can be whatever: the autoscaler, or regular deployments, whatever. No particular reason. Cool. All right. And there's one other question, from Ilian: is it possible to create custom code? Oh yeah, of course, because this is a Java project. It's a straightforward Maven Java project.
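For context on the serving side, a Knative Service scales to zero by default, which is what made the shipping pods appear and terminate in the demo. A minimal manifest (shown here as JSON to match the other snippets; the name and image are made up) might look like:

```json
{
  "apiVersion": "serving.knative.dev/v1",
  "kind": "Service",
  "metadata": {
    "name": "shipping-international"
  },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          { "image": "quay.io/example/shipping-international:latest" }
        ]
      }
    }
  }
}
```

`kubectl apply -f` accepts JSON as well as YAML; a Trigger resource would then subscribe this service to the broker's international-shipping events.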
In the example itself, I have a README file here where I explain how you can create this serverless workflow Kogito project from scratch using Java. So you have to know Java, but yeah, you can add custom code and you can call this custom code from a workflow. In the Kogito implementation, we have specific functions that you can declare in here: you give it a name, and then in the metadata section we have what we call a service internal call. So you can call any Java bean from the workflow. So yeah, you can do it. And from the specification point of view, we have the jq capabilities. So you can create a function expression and externalize this: we can define here a function whose type is expression. And if your custom code is only related to handling the input data, you can just create the expression here in the workflow itself, and that will work. Yes. So, he's assuming you can call out to other services during execution, and Ilian would like to be able to call out to a fraud prevention API that they have. Yeah, absolutely. Let me share, let me show you the specification itself. You can call any functions. We have three kinds in the specification: we expect RESTful service invocation, RPC service invocation, and also expressions, like I said before. For the RESTful kind, you can define a function like this. Let's say it's "send order confirmation", and the operation of this function is an OpenAPI definition file; after the hash here, you have the operation ID within that particular OpenAPI document. So, for example, let me see if I can find it. Oh, the famous pet store. Yeah.
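The function kinds mentioned here might be declared like this; the names are invented, and the metadata keys for the Kogito-specific Java service call are a rough sketch that should be checked against the Kogito documentation:

```json
{
  "functions": [
    {
      "name": "sendOrderConfirmation",
      "type": "rest",
      "operation": "specs/orders.yaml#sendConfirmation"
    },
    {
      "name": "applyDiscount",
      "type": "expression",
      "operation": ".total * 0.9"
    },
    {
      "name": "confirmOrderService",
      "metadata": {
        "interface": "org.acme.OrderService",
        "operation": "confirm"
      }
    }
  ]
}
```

The first is a standard REST function pointing at an OpenAPI file and operation ID; the second is a pure jq expression over the workflow data; the third is the Kogito extension for invoking a Java bean directly.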
So imagine that you have this OpenAPI spec here, and you would like to call the listPets operation. You just say: hey, this is my file and this is my operation ID. The Kogito engine will read this specification and will generate client code to call this REST service. So yeah, you can call a third-party API and do whatever you want within your workflow. After receiving the payload of the listPets response, you can do whatever you want with it in your workflow. Yeah. That's correct. Okay. And someone's now asking: where are correlation keys and state stored, if used? Well, the correlation ID, we are working on the stateful implementation, so we don't have it yet on the Kogito side. But in the specification we have the part where we define it, let me see if I can find it real quickly. Oh, we're very near the top of the hour. But let me, yeah, here. In the events, you declare the correlation in here. So in this use case that I brought, in the final order process, if that process were a workflow service as well, because I don't know what it is, it is just an example that I gave, but if it were a workflow as well, you could create the correlation here in the workflow and say, in the event: hey, the attribute name is orderId, for instance. And then when you receive everything else, you do whatever you want. In Kogito we don't have stateful support yet for the serverless workflow, but we are working on it. In this case, we are going to handle that in memory: we receive the events at the same time, we correlate, and we do all the other stuff. But we are looking forward to making it stateful. Cool. All right. Well, there was one earlier question, I think you might have answered it, but someone's asking for an understanding of how this relates or differs from Camel K.
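The correlation definition being shown could look like this inside an event declaration; per the specification, each correlation entry names a CloudEvent context attribute to match on (the event and attribute names here are illustrative):

```json
{
  "events": [
    {
      "name": "OrderProcessed",
      "type": "orderProcessed",
      "source": "/orders",
      "correlation": [
        { "contextAttributeName": "orderid" }
      ]
    }
  ]
}
```

A workflow instance waiting on this event would then only consume incoming CloudEvents whose `orderid` attribute matches its own.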
Well, Camel is for integration, I'd say, most of the time implementations of integrations. And we do not target integrations. So you cannot FTP a file or send an email or things like that; that is not what we are discussing in here. And of course you can implement a pipeline in Camel K, but that is not the idea. From our perspective, it is more of a declarative workflow feature that we are implementing in here; it is not about the integration side of things. You know, I understand that there are some places where they can interconnect, or we could implement an integration in here. For instance, you can call a Twitter API using the REST function, of course; or in Camel K you can create a route that goes through every step, as we created in here. But I understand that each tool has its particular strengths, and you should leverage that. So from the workflow side, I'd say that for this kind of declarative workflow, for the business applications that we have, this is the suitable tool for this kind of use case. That's my point there. Okay, cool. All right. And so we are almost at the top of the hour, and the very first question was: where did the name come from? Well, that's something that people ask sometimes, indeed. I can't explain exactly why the founders came up with this name. But everything that we do in Kogito is related to knowledge. We call ourselves the KIE group: Knowledge Is Everything, that's our motto. Because, you know, all these rules, decisions, processes, everything is related to business applications and knowledge. And I believe it's based on, of course, the founders can explain it much better than I can, but on the quote "Cogito, ergo sum" from René Descartes; you can read the wiki. And the K is because of Kubernetes, you know, it is a cloud-based platform.
So that's why it's called Kogito. I don't know if I answered your question, hopefully. I think that's great. You know what the best part is? It's not a Greek or nautical term. So that's really the most wonderful thing. It's like we have completely run out of K words in the Kubernetes cloud-native space. So you've done well. So we've got like four minutes left. Are there some new features, or what's on the roadmap, what's coming up next that we should be watching out for? Well, like I said, Kogito is a work-in-progress implementation of the serverless workflow specification. The workflow specification is going great in the community; we are creating great things in there, and big companies are interested in the specification. This is a very nice thing from the community side. We are about to release 1.0 of the specification. So this is a nice thing; hopefully we will see more people trying to implement it. And from the Kogito side, we are keen to implement everything that is there in the specification, to be 100% compatible with it. We are about to bring jq to the engine, so you can create jq expressions in your workflow, and you can transform or do whatever you want with the data within the workflow. And of course state, this is a very important thing that we are looking to implement in the engine as well. As well as handling compensation and all of those workflow patterns that we don't have yet from the serverless workflow perspective. But yeah, there's a lot of work to do in there, on both the specification and the implementation side. So can you share again that resources page? Because I think that's a great way to end, and if people want to get involved, they can follow the link. Oh, please. Yeah. On the front page of the specification, if you go here, you'll see our GitHub page, and you can see all the information that you need.
Mainly how to engage with the community. We have a Slack channel under the CNCF Slack, so please go there and say hello. And if you have any other questions, just reach out to us there. We are keen to help, and we're looking for contributors as well. Well, thank you for making this a really interesting and engaging talk, Ricardo. We'll definitely have you back, maybe to celebrate the 1.0 release. And just keep us in mind; there were plenty of great questions today. So thank you, everybody, for tuning in, and Chris Short for producing in the background there. Really pleased to have you here, Ricardo. Definitely come back and keep us posted and up to date. And everybody, join us on the community Slack channel if you have questions that he didn't answer today; we'll try to get them answered there. All right. Well, thank you, everybody. And again, Ricardo, thank you. I love the hand-waving while you're talking, the most aggressive virtual event so far. Yeah, I'm trying hard. All right, well, take care, and thanks, everybody. Thank you, guys. Bye.