All right, hi everybody, and welcome to the Serverless Working Group update. Tihomir and I are going to be talking about two different projects today: CloudEvents, as well as the Serverless Workflow specification. So Tihomir, can you go ahead and share your screen? I think we're going to vanish now. All right, let's start the presentation. I'm going to be up first, so let's go to the next slide. As I mentioned, we're going to talk about CloudEvents and give you a quick update, and then the bulk of the talk will be Tihomir talking about the Serverless Workflow spec.

All right, so first, CloudEvents. Just a quick recap: we delivered the latest version of the spec, version 1.0, back in December of last year, with the same deliverables as before. I'm not going to go into what the spec is; I'm assuming everybody understands that, or you can go find it. This isn't the kind of call where we're going to deep dive. But just so you understand what the deliverables were: the specification itself, the transport bindings, the encoding formats, a primer to give you guidance on why we did what we did and how to use the spec, and a whole bunch of SDKs to help you get started with CloudEvents.

Now, in terms of what's coming up next for the CloudEvents spec itself, to be honest, not a whole lot. We are doing some minor bug fixes as people find them, but for the most part we're finding the spec is pretty much okay, and we're just waiting for more community feedback to see if there are things we need to work on in the future. But we're not just twiddling our thumbs; we're actually looking at the next set of pain points, and as you can see here, we're going to start looking at what we call the lifecycle of event delivery, from beginning to end. Now, CloudEvents helps you deliver the event from the producer to the consumer itself, but in order for that to actually get started, the consumer first has to
discover what event producers are out there and what events they actually produce. So what we're going to be doing is working on a Discovery API spec, and that's going to allow producers to advertise that they're out there and what events people can subscribe to. A consumer will then hit a discovery endpoint to find out what events are out there, who produces them, how they can be delivered over the different transfer protocols, what the different subscription options are, whether they support filtering, stuff like that, and then of course how to actually subscribe.

After that comes the actual subscription itself. Now, if the protocol they chose does not have a native subscription mechanism built into it (HTTP, for example, does not), then we will, as part of the Subscription API spec, define how the subscription should happen. So for HTTP we'll define the REST operation to do the subscribe. Inside there you're going to specify all the things you might expect: which events you want to get, how to specify the filters if there's filtering, where to deliver the events to, the format of the messages, stuff like that, as well as some APIs to actually manage the subscription: how do you make updates to it, if you can, and how do you delete the subscription. Very obvious types of things.

And then, at the end of the lifecycle, you have the CloudEvents spec itself, which is used to annotate your events so that proper routing of each event to the proper destination can happen; that extra metadata is part of the CloudEvents spec. So that's what we're going to be looking at next in terms of lifecycle specifications, to complete the picture around event delivery. So, one last little spec.
I want to talk about it on the next slide, and that's the schema registry spec. One of the little bits of metadata inside the CloudEvents spec is a URL to the schema of the event being transported, and people can host that pretty much wherever they want. But we realized that there really wasn't a standardization effort around the APIs of a schema registry itself, meaning: how can producers talk to more than one schema registry to actually upload their schemas and manage them, version them, and stuff like that? So we thought, okay, let's see if we can help there.

So we're defining a set of APIs, mainly for producers, for talking to schema registries to upload their schemas. Very simple little CRUD-type APIs, nothing earth-shattering there, but it will have automatic versioning built in, so that you don't have to worry about version numbers in the URLs used to access different versions of the schema; it will help you a little bit there.

In terms of actually using the schema, it's pretty much what you'd expect. Producers will use the schema registry to advertise schemas for consumers to use, and the schema itself will be used in the serialization of the event to make sure it adheres to that particular format. Consumers can then access the schema registry and use the schema for validation, and to make sure they understand how to deserialize the event to the right format, and stuff like that. It's the same thing you'd expect even without a schema registry, but the schema registry helps get interop around the publishing and sharing of the schemas themselves. And that's the last bit of the new things we're working on.

Okay, so that's pretty much it for the CloudEvents side of the serverless working group. So let me now hand it over to Tihomir to talk about the workflow stuff.

All right, thanks Doug. So Serverless Workflow is basically a
workflow language that allows you to orchestrate microservices and events. We typically categorize workflow languages into different buckets, going from flowchart-based workflow languages, to form-based or code-based ones where the actual workflow language is the underlying programming language of choice. Serverless Workflow falls into the declarative workflow language territory. With Serverless Workflow being declarative, we focus on the what to do, and not the how; the language itself depends on runtime implementations to interpret the what and actually execute the instructions that we put in our workflow definitions. With Serverless Workflow you can define workflows using JSON or YAML, and the language itself is completely defined using JSON Schema.

Now, many different workflow languages exist out there, and we typically have to look at the target domains of a workflow language to pick which one we want to choose. For Serverless Workflow we of course focus on the business domain, meaning that you can express your workflow definitions with the terms and knowledge that you have in your business domain. But at the same time, Serverless Workflow targets the serverless technology domain, meaning we can also express things like functions, events, retries, and things like that which are common in serverless and microservices applications as well. So Serverless Workflow tries to merge the two and allow you to write workflows using both your domain or business logic as well as the domain of serverless technology.

Now, one advantage of Serverless Workflow is that it is completely agnostic, meaning, as we mentioned earlier, the workflow definitions (the JSON or YAML that you write to express your workflow logic) are not programming-language dependent. At the same time, the workflow data that is being executed, and that you can update and work with during workflow execution, is also expressed in JSON format. And as such, Serverless Workflow is a domain-specific, declarative
workflow language which can be executed on many different runtimes in different programming languages. At the same time, you can take a particular runtime, in any language, and deploy it on multiple cloud platforms. So it's really agnostic of whatever underlying technology is executing the actual workflows.

When we talk about Serverless Workflow, we want to compare it with what's out there already, and the typical comparison right now is with things like AWS Step Functions or Google Cloud Workflows; there are even a lot of comparisons towards BPMN as well. This table shows some of the features of these languages and how they compare. There are of course a lot more features that exist, but these are the core ones we looked at when we think about orchestrating microservices and events. Serverless Workflow currently is somewhat of a superset of the other declarative languages, being Step Functions and Cloud Workflows, but is a subset of BPMN2. Again, it focuses on the feature set specific to its target domains, being orchestration of microservices and events.

Now, one thing about Serverless Workflow is that we really try to focus on standards, and you can see here some of the standards and technologies that we're really enforcing within our workflows, one of course being CloudEvents. This has two advantages. On the one hand, when you're writing your workflows you can utilize the standards to describe, again, the what of your execution orchestration, rather than some proprietary, hard-coded, or predefined functionality that you might only have available in one place. At the same time, it also pushes the runtime implementations to start thinking about standards, and allows you to really be comfortable with the runtime implementations of Serverless Workflow in terms of longevity and things
like open source and specification conformance, and it kind of enforces that side as well.

Now, besides the standards side of things, with Serverless Workflow we also took a look at the logical structuring of the workflow definition itself. The workflow definition is composed of three parts. The first one is your control-flow logic block, which includes the language structures such as sync/async invocations, looping, branching, parallel execution: the typical stuff that you think of when you think of a workflow language and using workflows in general.

But at the same time, we also have event definitions, and event definitions are completely reusable pieces. You can define the events that are supposed to be consumed or produced by the workflow either inside your workflow definition, or you can completely expose them separately or write them somewhere else, so multiple workflows can actually access and reuse those definitions. We enforce the CloudEvents specification here, of course, but the sources, the actual producers of those events, could be many different things in the end.

The third part of the workflow definition is function definitions, and those, again just like event definitions, are completely reusable between workflows. They can be defined inline, as we'll see in the demo later, or you can expose them in a separate file and allow them to be used by multiple workflow definitions if you wish. Function definitions define some invocation of a service or some evaluation of a particular expression. So through function definitions you can easily utilize things like OpenAPI or gRPC, or you can even define a container image that you want to execute during workflow execution.

Serverless Workflow also defines some of the ecosystem. Not only do we define the core language and the structure of the language, we have documentation and things like that describing every little piece of the
workflow language that you can use. We also include things like a VS Code extension that helps developers get started, and a set of SDKs, which currently are there to parse workflow definitions, validate them, and things like that. We currently have them in Go and Java; we just added the .NET SDK, and we're currently also working on a TypeScript SDK. In addition to that, again just to help you get started, on the website we have an online editor, where you don't really need VS Code or an IDE or some sort of programming editor to start defining your workflow definitions, and also to visualize them via simple UML diagrams as well.

Now, even though Serverless Workflow is fairly new, we have received a lot of interest from different people in the community, and we currently have a small but very healthy community. We have a couple of partner projects, as you can see here, within CNCF, and it's very important to us to grow those relationships and integrations with them. We do have open source collaborations with multiple companies, as you can see, and there are already a couple of (well, Java-based right now) open source implementations of the Serverless Workflow specification, and more are coming in the future in different languages. Like I said, currently the Java folks are predominant in this space, it seems.

Now, as far as the project release roadmap: we just released version 0.6 this month. We're planning the 0.7 release this summer, and once we do that, hopefully we can start targeting version 1.0 of the specification, at least release candidate one, by November of this year. That's currently our plan, and of course it also highly depends on community effort and contributions.

All right. So now that we've talked about Serverless Workflow, let's do a little demo.
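To make the three-part structure described above a bit more concrete, here is a rough sketch of reusable event and function definition blocks. All names, types, sources, and file paths here are made up for illustration, and the exact field shapes depend on which version of the Serverless Workflow JSON Schema you target:

```json
{
  "events": [
    {
      "name": "OrderReceivedEvent",
      "source": "/orders/service",
      "type": "com.example.orders.received"
    }
  ],
  "functions": [
    {
      "name": "processOrderFunction",
      "operation": "orderservice.json#processOrder"
    }
  ]
}
```

Because these blocks are plain JSON, they can live inside the workflow file itself or in separate files that several workflow definitions reference.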
It's just going to be very simple, but I think it will show how to get started and what workflows, and Serverless Workflow in general, can bring to you. So for that, I want to start off by saying the demo is going to be three very simple services written in different programming languages.

The first service here is going to be a Node.js service. Let's get it started; for that we can say npm run local, so we will run this on localhost. However, you can easily imagine this being deployed on any container or cloud platform; for the sake of the demo we will run it locally so we can actually curl the service. Oh, and actually let me pipe it through jq so we get pretty output. And you can see our Node.js service just gives us a little piece of JSON saying it invoked the Node.js function, and that's it.

Our second service that we want to run is a Go service, which we have bundled up, so we're just going to get that started, and this service is going to run on port 8081. Whoops, sorry. Let's test if it's up and running; we can issue a POST to it, and you can see it also returns a little piece of JSON saying it invoked the Go function. That's really all.

The third service that we have is our little Java service. Currently this is running on Quarkus, but you can run it on Spring Boot or any of the Java runtimes or environments that you want. Let's see, this is already started; our Java service runs on port 8082 and it has a little endpoint of greet, I think, and here we see we have invoked it via curl and it says it invoked our Java function.

Now, the idea behind this is really to show that we want to, with Serverless Workflow, orchestrate microservices, and here we have three different functions.
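For reference, each of the three services returns a small JSON payload when curled, roughly along these lines. The exact field names in the demo may differ; the three responses are shown together here for compactness, but each service returns only its own object:

```json
{
  "node": { "result": "Invoked Node.js function" },
  "go":   { "result": "Invoked Go function" },
  "java": { "result": "Invoked Java function" }
}
```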
They're written in three different languages, and they can really be deployed anywhere you want to live, not on localhost of course, and now our assignment is to orchestrate them in some way. For that we can start using our Serverless Workflow. For this we're going to use Red Hat's Kogito, a current open source implementation; like we saw, there are others. But let's go ahead and look at the little project I've created here. Let me zoom in so you can actually see things. Whoops, sorry.

Now, this is a very simple Java Maven project, and as you can see, there are no Java sources, no particular Java-based objects. A lot of the runtimes that you see out there, especially workflow runtimes, kind of force you to use a particular programming language, especially for the data model. With Serverless Workflow you're using JSON, so even though this is a Java project with a Java runtime, the way we express orchestration is completely agnostic to any programming language.

Another thing we see here: I don't have a workflow yet; we will define it together. However, one thing I have defined is the little OpenAPI definitions, and if you see here in our VS Code, we can look at the OpenAPI definition for, in this case, our Go service, which we just invoked via curl: it has one endpoint, just slash, and we can invoke it as well.
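The OpenAPI document for the Go service would look roughly like this. This is a hand-written sketch, not the demo's actual file; only the operationId "go" and the single root path come from the walkthrough, everything else is illustrative:

```json
{
  "openapi": "3.0.3",
  "info": { "title": "Go service", "version": "1.0.0" },
  "paths": {
    "/": {
      "post": {
        "operationId": "go",
        "responses": {
          "200": { "description": "JSON result of the function invocation" }
        }
      }
    }
  }
}
```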
The same thing we have defined here for the Java service and our Node service that we just talked about. So when I write my workflows, or when you write yours, you don't have to know much about where a service is, or whether it needs certain authentication; you don't have to define that inside your workflow definition. You use OpenAPI, which is a standard for that, and you let the runtime engine figure out how to actually invoke those services during workflow execution, the way you define them with a standard definition rather than something proprietary or custom.

So now what we have left to do is really start writing our workflow. For this I'm going to create a new file; let's say, my workflow. Like we said, we can define workflows in JSON or YAML; in this case let's go ahead and define it in JSON. Now, before I get started, I want to just show real quick that inside VS Code I have the Serverless Workflow plugin installed already. If you work with VS Code, it certainly makes a lot of sense to install this little extension as well, the reason being that it gives you things like code completion and code hints to let you more easily define your workflow using the Serverless Workflow language.

So let's go ahead. The id of the workflow is a unique identifier; let's call it just simple workflow, and we'll see later why it's important to define the id. Let's give it some sort of version, 1.0, and let's give it a descriptive name, our simple orchestration. All right, so far so good, right? So now we have to tell the runtime engine, through the workflow definition, where to actually start when you start this workflow. Let's call our starting point, say, Start; so this is our starting state. The next thing we want to do is actually define the "what"
part of the workflow, which is the control-flow logic. For this, Serverless Workflow gives you an array called states. Each state is a single building block that defines what the workflow should actually do. So in our case, let's define a state, and we give it a name, which kind of has to match here: Start matching this state means that this particular workflow state is going to be the first one executed when workflow execution happens.

Now, another thing with Serverless Workflow is that we have different types of states, and currently we have nine of them. The one that we want to use, for basically just invoking functions or microservices, is called an operation state, and an operation state has actions. Actions define invocations of the services that you want to invoke. So in our case, let's give our first action a name, and let's call it invoke go function, and we give it a functionRef; let's just call that go function, and that's it. And of course we have to define two more of those, because we want to invoke three services. Now, the default execution of the services in actions is sequential, but there is an actionMode parameter where you can also say that these invocations should be completed in parallel. So let's see, we want node and java.

And that's it. At the end, what we have to do is tell our workflow engine that once it has executed those functions... "Yeah, the last one: you said node function, it should be java function, right?" "Oh, sorry. Yep, thank you so much. All right, thank you." Basically, I want to tell the runtime that once this state is executed, once our three functions are invoked, we want to end workflow execution. And as you can see, so far this is all domain specific: we didn't use any programming language, we didn't use any proprietary ways.
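The states array being described here might look roughly like this. The state and action names are approximated from the spoken walkthrough, and the exact field shapes (for example whether end is a boolean or an object) vary between specification versions:

```json
{
  "states": [
    {
      "name": "Start",
      "type": "operation",
      "actionMode": "sequential",
      "actions": [
        { "name": "invoke go function",   "functionRef": { "refName": "go function" } },
        { "name": "invoke node function", "functionRef": { "refName": "node function" } },
        { "name": "invoke java function", "functionRef": { "refName": "java function" } }
      ],
      "end": true
    }
  ]
}
```

Switching actionMode to "parallel" would ask the runtime to complete the three invocations concurrently instead of one after another.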
Basically, this is human readable, and when you read it you can actually figure out what it does; you can of course use your own domain-specific terms to define everything within workflow execution. Now that we have defined our state, we do have to tell the runtime a little bit more about the functions that we want to actually invoke. For that, Serverless Workflow gives you a little functions array. Now, the name of our function definition has to match what the functionRef parameter references, so let's call this one go function. And then its operation: at this point, a lot of workflow languages give you custom parameters, proprietary things, a predefined set of things that you can do. With Serverless Workflow, you basically use OpenAPI. So for us this means that we want to reference the OpenAPI definition for our Go service, so we say the go service JSON, and within it we want to use the unique operationId for the operation that we want to invoke, and this operationId here is called go. So that's it: with this we have told the runtime to use the OpenAPI definition to learn how to invoke the service, and when it invokes it, to use this particular unique operationId, telling it exactly what to invoke during workflow execution.

The same thing we want to do for the other two functions. So we reference our node service JSON here, and the unique operationId is node. (Where is my workflow? All right.) And for the java function we have our Java service OpenAPI definition here, and our operationId is simply just java, so let's go with that. And this is it.
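Assembled, the functions array from the walkthrough would look roughly like this (file names and function names approximated from what was said on screen):

```json
{
  "functions": [
    { "name": "go function",   "operation": "goservice.json#go" },
    { "name": "node function", "operation": "nodeservice.json#node" },
    { "name": "java function", "operation": "javaservice.json#java" }
  ]
}
```

The part of each operation value before the # points the runtime at the OpenAPI document; the part after it is the operationId to invoke.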
This is our workflow: it's going to first invoke our Go function, then our Node.js function, and then our Java function. One thing that's also important here: if you look at, for example, our Node service, for this particular path we use GET for the invocation. You can change this to POST without ever affecting your workflow definition. So having these OpenAPI definitions (we also support gRPC, and you can invoke certain reusable expressions), having them defined like this really allows you to change things around dynamically. The services, or even the servers that might host the OpenAPI definition, can change, and you don't really have to change your underlying workflow definition itself.

All right, so that's it; let's go ahead and start our little workflow application. What this particular Java runtime is going to do is read our workflow definition and our OpenAPI definitions and start generating things for us. The first thing it's going to generate is the actual code to invoke the services described in the OpenAPI definitions. Then it's going to parse our workflow definition and generate the code that actually exposes our workflow as a service. The particular endpoint of the service that it exposes for us is determined by the id parameter. So for example, the service that we have running right now (and this is running on port 8083, just to be different from the other ones) is going to be at localhost:8083, and in this case slash our workflow id. All right, so let's go ahead and run this. Let me see if I actually have the curl for that. Nope, I guess I don't. Let's try it, one second.
I have a curl prepared for this, KubeCon EU 2021, so I don't have to mess it up. There it is, and I might have to change it a little bit depending on our definition. So let's go ahead and paste this. Now let's see if our endpoint is still right: this says slash simple, but for us the id is simple workflow, so let's change that around.

All right, so when I do this, we should be getting an instance of a workflow, and our workflow is going to execute our three little functions in this particular order. So let's go ahead and do that. And here is our output, which is the output of the workflow: it includes the three outputs of our services. The result of each service invocation gets merged with the workflow state data, and after the three results are merged into the state data, the workflow produces this particular result.

So that's it for the little demo. I hope it really helps you understand how easy it is to get started and to orchestrate services, no matter where they live and how they're defined, using standards like OpenAPI and gRPC. That's it for the demo, and I'm going to share our presentation back.

All right, so thank you all for joining our talk and being here. Doug and I would really like to thank you, and here you can find some more information about both CloudEvents and Serverless Workflow. CloudEvents has weekly calls on Thursdays at 12 p.m. Eastern time, and you can find all the information you need to join those calls on the CloudEvents GitHub repository. Same thing for Serverless Workflow: it has calls on Mondays at 1 p.m. Eastern, and you can also find that information on GitHub as well. On behalf of Doug and myself, I would like to thank you all for joining, and we will stay around for any questions. Thank you very much.