Hi, welcome everyone. My name is Tihomir and I have Ricardo here with me. Hello guys. In this talk we will present the Serverless Workflow project. We will give an introduction to the project, look at some of its use cases, and then Ricardo is going to present a really cool demo of how Serverless Workflow fits within a container orchestration environment using Kubernetes and Knative. All right, so Serverless Workflow is a Cloud Native Computing Foundation (CNCF) sandbox project, part of the CNCF Serverless Working Group. It is open source and Apache 2.0 licensed. It is a community project, and here you can find the information for our GitHub repository, website, community chat, and meeting schedule. All right, Serverless Workflow defines a declarative, domain-specific workflow language. Declarative because it is not expressed in low-level code; instead it defines an abstraction language that can be written in both JSON and YAML formats. Domain-specific because it specifically targets the domain of orchestration of event-driven, distributed services. To give an example of that, here on the left-hand side we have two simple requirements written in natural language. The first one: when a patient has, for example, a bladder infection, we want to notify urologists, the doctors dealing with that type of issue. The second one is similar: when a patient has an irregular heartbeat, we want to notify cardiologists for that type of issue. On the right-hand side, we see that Serverless Workflow does not express its language in code with if-else statements and things like that, and neither does it use terminology that does not fit the targeted domain. On the bottom right-hand side, you can see that these types of requirements can be translated directly into events and services.
So, for example, a patient having a bladder infection or an irregular heartbeat can be translated into events that could be produced by, in this case, different hospital systems, and notifying a doctor, in this case the particular doctor that deals with the patient's issue, can be translated into the invocation of distributed services. All right, Serverless Workflow is based on standards. For event definitions, we use the CloudEvents specification to define events, events that can be produced or consumed, and also to define correlations between the many different events that could be happening in your systems. We use the OpenAPI specification to define the operations and services that need to be invoked during workflow execution. The Serverless Workflow specification then defines different workflow control patterns, which define execution order, error handling, and data management, and those are all based on widely known and used workflow patterns. The overall goals of the Serverless Workflow project are to define our language, which again can be expressed in both JSON and YAML format, and to focus on portability and vendor neutrality. So we want to define a language that you can then execute on many different runtimes, and those runtimes can be deployed in many different environments, including container and cloud platforms. So, in order to start using events within your workflows, you first have to define them. As we said, events can be either consumed or produced during workflow execution. With Serverless Workflow, you can see here that we have a direct one-to-one mapping between how events are expressed in the CloudEvents specification format and how you define them within your workflows. You can also see that for event correlation we again use the CloudEvents format, specifically its context attributes.
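To make that one-to-one mapping concrete, here is a small sketch of two event definitions in the general shape the specification uses, written as Python dicts so the structure is easy to inspect. The event names, types, and sources are invented for this hospital example, and the exact schema depends on the specification version you target; `type` and `source` mirror the CloudEvents attributes of the same name, and `correlation` points at a CloudEvents context attribute.

```python
# Sketch of Serverless Workflow event definitions (all identifiers are
# illustrative, not taken from the talk's slides).
# "type" and "source" map one-to-one to the CloudEvents attributes of the
# same name; "correlation" names a CloudEvents context attribute so
# related events from different hospital systems can be matched up.
event_definitions = [
    {
        "name": "BladderInfectionEvent",          # hypothetical event name
        "type": "org.hospital.events.condition.detected",
        "source": "urology-monitoring-system",
        "correlation": [
            {"contextAttributeName": "patientId"}  # match events per patient
        ],
    },
    {
        "name": "IrregularHeartbeatEvent",        # hypothetical event name
        "type": "org.hospital.events.condition.detected",
        "source": "cardiology-monitoring-system",
        "correlation": [
            {"contextAttributeName": "patientId"}
        ],
    },
]

# Both events share a CloudEvents type but differ by source, and both
# correlate on the same "patientId" context attribute.
for event in event_definitions:
    assert event["correlation"][0]["contextAttributeName"] == "patientId"
```

The correlation attribute is what lets a runtime decide that two events produced by different hospital systems refer to the same patient.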
All right, so now that you have defined your events, how can you interact with them? As we said, events can start workflow execution, they can continue workflow execution at certain waypoints, they can be consumed or produced, and they can also be used to make logical decisions. On the right-hand side, we see a very simple end definition in the Serverless Workflow language, which says: at this point we're going to end the workflow execution, but before we end it, we're going to produce an event of type "workflow completed". That event can then be consumed by other services in your systems, for example other workflows, or pretty much anything else that is listening for that event type. Similar to events, we have a way to define services and the operations on those distributed services that we want to invoke during workflow execution, and as we said, for this the Serverless Workflow project utilizes the OpenAPI specification. In the box on the left-hand side, we see a simple OpenAPI definition, in this case written in YAML, and it shows one particular operation of the service that we want to invoke during workflow execution. On the right-hand side, we see that there is again a one-to-one mapping. In order to define the particular operation and service you want to invoke, you basically have an operation parameter, which is a combination of the path or URI to the OpenAPI definition of the service and the unique operation ID. That gives you a unique one-to-one mapping, so your runtime knows exactly what operation needs to be executed on the service whenever the workflow requests it. Now that we have defined the services, we want to be able to invoke them, and we understand that there are many different types of services you might want to invoke during workflow execution.
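As a sketch of the two mappings just described, here is roughly what a function definition pointing at an OpenAPI operation via `<uri>#<operationId>` and an end definition that produces a "workflow completed" event look like. The URI, function name, and event name are illustrative, not from the talk's slides.

```python
# A function definition ties a workflow-local name to one OpenAPI
# operation: the URI of the OpenAPI document, then '#', then the unique
# operationId inside that document (identifiers below are illustrative).
notify_function = {
    "name": "notifyDoctor",
    "operation": "https://example.com/hospital/api/openapi.json#notifyDoctor",
}

# An end definition can produce an event before terminating, so other
# workflows or services listening for "WorkflowCompletedEvent" can react.
end_definition = {
    "terminate": True,
    "produceEvents": [{"eventRef": "WorkflowCompletedEvent"}],
}

# A runtime can split the operation string to locate both the service
# definition and the exact operation to invoke on it.
uri, operation_id = notify_function["operation"].split("#")
assert operation_id == "notifyDoctor"
```

Because `operationId` values must be unique within an OpenAPI document, this single string is enough for the runtime to resolve the call unambiguously.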
With Serverless Workflow, you have the ability to define and invoke RESTful services, and we see an example of that on the left-hand side. But you also have the ability to define the invocation of services that are not RESTful, that are probably not exposed at some endpoint, but that are instead triggered by events. So that is also possible. The last part of defining the workflow is the actual control-flow logic. Here we want to define states and the order in which they are executed. With Serverless Workflow, states are kind of like a black box that performs some particular type of control-flow logic. States can receive either data inputs or events, they perform the particular type of control-flow logic they are supposed to do, and then they produce data output, or events that can be consumed by other states within the control flow. Serverless Workflow specifies explicit control-flow logic, which means we want to clearly allow you to define what you want to build. A lot of times, control-flow logic can be very granular, which means that at some point it becomes ambiguous, and what we wanted to do is try to eliminate that. It is often very hard to see in a control flow, even visually, which parts fit together and which parts in combination define a control-flow block that makes sense on a domain-specific or logical level. On the bottom, we see that each state within Serverless Workflow has a specific type, and these types are fairly clear. For example, we have an event type, which means that at this point we are dealing with control-flow logic that has to do with events. The same with, for example, the switch type: it is clear that this particular state is going to deal with logical decisions based on either the data input or event payloads. So that is what we mean by explicit control logic. I'm not saying that one approach is better than the other.
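To illustrate what an explicit state type buys you, here is a sketch of a switch state making a decision based on data input. The condition syntax follows the spec's general jq-expression style, but the patient fields and target state names are invented for this example.

```python
# A switch state makes a logical decision; its type alone already tells
# the reader this block is about branching, not invocation or events.
switch_state = {
    "name": "RouteByCondition",
    "type": "switch",
    "dataConditions": [
        {
            # jq-style expression over the state's data input (illustrative)
            "condition": '${ .condition == "bladder-infection" }',
            "transition": "NotifyUrologist",
        },
        {
            "condition": '${ .condition == "irregular-heartbeat" }',
            "transition": "NotifyCardiologist",
        },
    ],
    # Taken when no data condition matches.
    "defaultCondition": {"transition": "ManualTriage"},
}

# Each branch names the next state, so the control flow stays readable
# at the domain level: condition in, named transition out.
targets = [c["transition"] for c in switch_state["dataConditions"]]
```

A reader scanning the workflow sees `"type": "switch"` and immediately knows this block's job, which is the explicitness the talk is describing.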
However, this is the approach that we have taken within the Serverless Workflow language. Next. You can express many different types of control-flow patterns with Serverless Workflow. You can define sequences of execution, you can do looping and parallel execution, you can make decisions, like we said, based on either data or events, and you can deal with error handling, for example defining retries or how errors are handled during your workflow execution. In addition to that, Serverless Workflow also allows you to deal with control flow that involves human interactions, which is sometimes very important during execution of your workflows. And there are some other things, and they are all specified within our specification documents. Next. Now let's take a look at the overall project components, or what is included within the Serverless Workflow project. So far we have been talking about the Serverless Workflow language, which is described by a JSON schema, and this JSON schema really defines all the rules and patterns that you can use when defining your workflows. In addition to that, the project also defines a set of language extensions. These extensions do not control execution or the control-flow logic, that is, what happens when the workflow is executed, but provide more information about the workflow that you write. That information can be consumed by different runtime systems in order to improve the overall performance of your workflows, in terms of cost and things like that. Some of the language extensions that we provide are KPI, or key performance indicators, and extensions for tracing and simulation, and we are adding more as we go. Another part of the Serverless Workflow project is the software development kits. We currently have them for both the Java and Go languages.
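As one example of the error-handling pattern mentioned above, here is a sketch of a named retry policy and a state whose action opts into it. The policy values, error name, and service names are made up, and the exact property names vary between spec versions, so treat this as the general shape rather than a definitive schema.

```python
# A named retry definition: how long to wait between attempts (ISO 8601
# duration) and how many attempts to make (values are illustrative).
retry_definitions = [
    {"name": "ServiceNotAvailableRetry", "delay": "PT5S", "maxAttempts": 3}
]

# An operation state whose action references that retry policy by name,
# and which routes unrecoverable errors to another named state instead
# of failing the whole workflow.
notify_state = {
    "name": "NotifyDoctor",
    "type": "operation",
    "actions": [
        {"functionRef": "notifyDoctor", "retryRef": "ServiceNotAvailableRetry"}
    ],
    "onErrors": [
        {"errorRef": "ServiceNotAvailableError", "transition": "EscalateManually"}
    ],
    "end": True,
}
```

Keeping retry policies named and separate from the states lets several states share one policy, which is the kind of declarative error handling the specification aims for.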
There is also a testing TCK, a technology compatibility kit for runtime implementations, where they can compare their implementation against the requirements of the Serverless Workflow specification. In addition, we provide a set of plugins for widely used IDEs. Let's take a look at one of the language extensions that Serverless Workflow provides, in this case the KPI extension. This particular language extension allows you to compare expected versus actual data of your runtime, that is, information produced while your workflows are running. It really helps you improve your workflows in terms of performance, cost, and effect. On the right-hand side, you can see a small example of a KPI extension definition. Again, all the extensions, just like the Serverless Workflow language itself, can be expressed in both JSON and YAML, so you have that choice. It basically allows you to define some expected metrics: what you expect to happen, how many times you think certain services should be invoked, what overall cost you expect from running your workflows during a certain period of time. You can then compare those with the actual results and see whether the metrics are met or not. The next thing we want to take a look at is the Java SDK. This particular SDK provides features like parsing of JSON or YAML, which runtime implementations can easily use so they don't have to deal with that themselves. It provides a fluent API, so it allows you to build your workflow definitions using the programming language rather than dealing with JSON or YAML directly. It provides validation against the Serverless Workflow specification itself, and it also provides diagram generation. So as you're defining your workflow models, you can go from either the parsed JSON or YAML, or the workflow models defined using the fluent API, to diagrams that you can then visualize.
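The Java SDK itself is out of scope here, but to illustrate the kind of structural validation such an SDK performs, here is a minimal hand-rolled check. This is not the SDK's API, just a sketch that tests a workflow definition against a few plausible required top-level fields; real schema validation covers far more.

```python
def validate_workflow(definition: dict) -> list:
    """Return a list of problems; an empty list means these checks pass.

    Mimics only a tiny slice of real JSON-schema validation: required
    top-level keys, plus 'name'/'type' on every state.
    """
    problems = []
    for key in ("id", "version", "states"):
        if key not in definition:
            problems.append(f"missing required field: {key}")
    for i, state in enumerate(definition.get("states", [])):
        for key in ("name", "type"):
            if key not in state:
                problems.append(f"state {i} missing: {key}")
    return problems


# A well-formed (illustrative) definition passes; a bad one reports
# each missing field.
workflow = {
    "id": "greeting",
    "version": "1.0",
    "states": [{"name": "Greet", "type": "operation"}],
}
assert validate_workflow(workflow) == []
assert validate_workflow({"id": "x"}) == [
    "missing required field: version",
    "missing required field: states",
]
```

In practice you would let the SDK (or any JSON-schema validator pointed at the published schema) do this, rather than writing checks by hand.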
One more thing we're going to take a look at here is the Serverless Workflow Visual Studio Code plugin. This is a plugin that's available on the Visual Studio Code marketplace; you can download it now and start using it. It provides code hints and code snippets for both JSON and YAML files, validation against the Serverless Workflow JSON schema, and, at the same time, diagram generation. So as you're modeling your workflows in Visual Studio Code, you can easily preview the diagram of your particular workflow and make sure it also makes sense visually. All right, so that's it for me. Now we get to the cool part: Ricardo is going to take over and do his demo. Yeah, let's see what we have prepared for you today. The first thing is, when we were thinking about the demo, we started thinking about how we could fit a particular use case into a Serverless Workflow scenario and how that could help us. As developers, we have this very particular use case: I wish to open a PR against a GitHub project, I wish to have my PR labeled with exactly what it is supposed to be labeled with, and I wish to have my pull request reviewed by someone. So it would be nice to have a bot that could label my pull request and also add the correct reviewer for it. With this in mind, we drew a proposed workflow. Imagine that we receive a change event, like "a PR has been opened or changed", on our system. Based on that event, we can analyze the context of the pull request and understand what has been changed; we can understand the context from the files that have been changed. And based on that, we can call the GitHub API to apply the correct labels and also to add the required reviewers.
And at the end of the workflow, we can create and publish a new event saying to the platform, "hey, the PR has been verified", and you can do whatever you want with that event. With this workflow in mind, we prepared an implementation with some technology around Kubernetes, Knative, Serverless Workflow, and a runtime implementation of Serverless Workflow. The first thing is, once a pull request is opened, we'd like to receive this event on a broker, in this case a Knative broker. This broker will then broadcast the event to anyone interested. In this case, we have our Serverless Workflow runtime running on the platform, and it will listen for this pull request event. It's going to do everything we explained on the last slide: analyzing the pull request and what's going on with it, and one of the steps is to query the GitHub API for the files that have been changed in this PR, and also to apply the labels and the reviewers. For that, we need these GitHub API functions also deployed on our Kubernetes platform; we're going to call them the GitHub API functions. We have all those Knative serving functions deployed there so that our workflow engine can consume them. And when the workflow is finished, we will publish a new event to the broker, and the broker can broadcast this event to any interested party as well. In this case, the "pull request verified" event will be consumed by our notification service. This notification service can be anything: it can notify via mail, via Slack, Telegram, or whatever other channels you have in your company. So that's the main architecture of the implementation. We basically have the broker implemented with the Knative Eventing system, and Knative Eventing will take care of broadcasting events around the Kubernetes namespace.
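Putting the pieces of that architecture together, a sketch of the PR-checking workflow in the specification's general shape could look roughly like this. The event types, function names, and IDs are invented for illustration; the real demo files are linked at the end of the talk.

```python
# Illustrative sketch of the demo's PR-checking workflow: wake up on the
# CloudEvent the Knative broker delivers, call the GitHub functions,
# then announce completion as a new event.
pr_check_workflow = {
    "id": "prchecker",          # hypothetical workflow id
    "version": "1.0",
    "states": [
        {
            "name": "HandlePROpened",
            "type": "event",
            "onEvents": [
                {
                    # The "PR opened/changed" event from the broker.
                    "eventRefs": ["PROpenedEvent"],
                    "actions": [
                        # Knative-served GitHub API functions (names
                        # illustrative, not from the demo repo).
                        {"functionRef": "fetchChangedFiles"},
                        {"functionRef": "addLabels"},
                        {"functionRef": "addReviewers"},
                    ],
                }
            ],
            # Publish "PR verified" so e.g. the notification service reacts.
            "end": {
                "terminate": True,
                "produceEvents": [{"eventRef": "PRVerifiedEvent"}],
            },
        }
    ],
}
```

Note how the workflow itself never addresses the notification service directly; it only produces the "PR verified" event, and the broker fans it out to whoever subscribes.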
We have the workflow runtime implemented with our own runtime implementation of Serverless Workflow, and we have the notification service, which is aware of the events published by our workflow runtime and does the notification based on them. As for the technology we use: like I said, we have the Serverless Workflow implementation that we are working on, which is called the Kogito project. We have Knative to serve as the platform, the serverless infrastructure that handles CloudEvents for us in the Kubernetes namespace and handles our functions there. The Java functions that you saw, the GitHub functions, are all implemented in Java with Quarkus. They could be implemented in any language, actually; we used Java because we are more used to it. And we have the Camel framework, which is an integration framework, to communicate with the Slack API and send a nice notification to a given channel. So I'm going to switch my screen now to the fun part, like Tihomir said. First of all, we're going to just watch the pods in our namespace. For now, everything that we have here in the application namespace is the workflow engine running and our Kogito operator running as well. This operator deploys and controls the service, how things should be and what the state of the workflow should be; that's why there's a pod for the operator in there. And here at the bottom, I'm in my project. I have this Kogito Serverless Workflow demo project in GitHub, so I'm going to start by creating a new branch, let's say "kubecon", and quickly create a test file here with some content like "hello world" and whatever else you might have. You can see that we have this file; we're going to add it and commit it, with "new test file, hello demo" as the commit message. Okay, so I'm going to push that to my project.
Okay, I finished pushing my branch. GitHub is nice and says, "hey, you pushed a new branch here, would you like to open a pull request?" Of course I want to open a pull request. So this is my new test file with "hello demo", on the "kubecon" branch. Once I open the pull request, a new event will come into my Kubernetes namespace, Knative will handle it, and the overall operation inside the namespace will start. The broker will take this event and broadcast it to our Kogito workflow, our workflow runtime engine. So let's see how it goes. After opening the pull request, you can see at the top of my screen that we are receiving the event. We have this GitHub event listener, which is a Knative source kind of component. You also see the GitHub service that has all the functions required to interact with the GitHub API, and its pod is scaling up to one, because we are using a serverless platform, so it's supposed to do this kind of thing. You can see that the workflow added the correct label to the pull request and then the correct person to review our PR. As well, the notification service is now waking up to send the notifications. So you'll see that we received some notifications about the changes that we made in the PR. So that's it for the demo. Let's go back to the presentation. Do you have anything else to say, Tihomir? No, this was really cool. Thanks for doing this, it was great. Okay, let's continue then. Okay, so I guess that's it, right, Tihomir? We finished our presentation. Yeah. And I don't know, do you wish to share some more information about the Serverless Workflow project? No, we're good. I think here you can find more information, definitely on our website at serverlessworkflow.io.
Again, Ricardo and I are community members of the project, and we would like to invite everybody that's watching to join. Like we said, we have bi-weekly community meetings; you can just join them and see how it goes. Here you can also find our GitHub repository with the specification project, which has all the details, information, and documentation, including examples, use cases, and things like that, so you can go ahead and check it out and learn more about the project. Yeah, and everything about the demo you can find at this URL here: all the Serverless Workflow files that we used in this demo, how you can create your own namespace on Kubernetes, and all the scripts, the applications, and the services. Everything is at this address, and you will be able to run this exact demo on your laptop as well. I think that's it. Yeah. All right. Thanks everybody.