Hello, how is everyone today? I'm great, Brian, how about yourself? Oh, it's a busy Wednesday, but I am doing well. Great. So today is April 15th, 2020, and let me bring up the agenda here real quick. Before we get started, just keep in mind that this meeting is recorded and will be available for the whole world to see, so don't say anything you wouldn't want the whole world to see.

Today our agenda is actually not very long. There are some items going on in the background right now, namely around SIG App Delivery versus the Serverless Working Group, and we are currently working those issues out. But first up is a presentation from... is it Tihomir? Yeah. All right, I'll let you take over right now. Thank you.

So let me stop sharing. Can you guys see my screen? Yes. Great. Let me just pull up the presentation. First of all, thank you for having us today, and thanks for allowing us to present. This is the presentation for our sandbox proposal for the Serverless Workflow specification. My name is Tihomir Surdilovic; I'm a developer at Red Hat, I've been there for 11 years, and I work on business automation there. As far as CNCF goes, I've been contributing to the Serverless Workflow spec for close to a year now, and I've also made a couple of contributions to the CloudEvents specification, just so you know who I am.

The agenda for today's talk: we're going to give an introduction to the Serverless Workflow specification and the motivations behind it, look at a couple of key features and use cases so everybody understands what this project is about, then go into the project information, then do a quick demo, and of course take any questions you might have.
If we can summarize the Serverless Workflow specification in one sentence: we want to be a vendor-neutral specification for defining workflow models. These workflow models are responsible for orchestrating microservices, and when we talk about orchestrating, we mean the coordination and management of both services (loosely coupled, distributed microservices) and the events that can trigger those services.

This slide gets right into what we are and what we are not. In the top-left box you see what the Serverless Workflow specification provides, or is striving to provide. The core of the specification is the JSON schema; this is something we have been working on, and we recently released version 0.1 of the schema, alongside the specification document, examples, use-case documents, and everything else. This JSON schema describes the workflow model you can use to define serverless workflows. On top of this JSON schema we then strive to provide, hopefully in the near future, a lot of different things like SPIs, APIs, and a TCK for conformance purposes in many different languages. We will contribute a Java SPI, but we have to wait until we have a proper GitHub structure in place for that.

Then, in the community, there have to be implementers. These are basically runtime implementations: they can use the provided SPIs and APIs to create runtime implementations for serverless workflows, or create their own, and in order to conform to the specification they have to pass the given tests in the TCK.
What we're actually trying to achieve, the main goal, is to be able to write JSON- or YAML-based workflows (the specification supports both formats) which can execute on many different runtimes and, by that, be deployed on many different cloud providers. That's the core idea behind the specification and what we're striving to do.

Now, as far as the motivation goes: why are we even attempting this, and why is this project interesting or important? If you look at this slide, there is a whole bunch of serverless workflow implementations already in place, and these are just some of them; new ones seem to be popping up every month. Of course you've heard about AWS and Microsoft Azure, and there are a bunch of other ones available. All of the cloud providers have realized that workflows are an integral part of their serverless offering, complementing the development and deployment of serverless applications.

But that alone doesn't mean we need a specification. So why do we actually need one? The current situation for serverless workflows, as we've seen with all those implementations, is that once you choose a cloud provider that offers a serverless workflow implementation, you run into big vendor lock-in on the workflow model level: you are basically stuck with a proprietary (in most cases) workflow definition, and you are also stuck with the workflow notation, the visual representation of the workflow itself. So it's currently very hard, if not impossible, to take your workflow definitions and move from one cloud provider to the next. This creates the situation where we need a portable and vendor-neutral specification.
Now, the Serverless Workflow specification really focuses on the workflow model; we're not currently looking at notation. We have attempted that a little bit here and there, but we are waiting to grow and get a community in place before we actually deal with the notation part. We are definitely focusing on the workflow model. So that is the motivation behind the specification, and this is why we believe it is much needed, given what's going on out there with serverless workflows and cloud providers.

Let's take a look at some of the key features of the Serverless Workflow specification. They all fall into two buckets, and for everything we're adding to the specification, we look at these two buckets and see how we can improve on them. The first one is something workflows have been doing for a very long time and are very good at: a clear separation of concerns. Workflows are used to let developers build their business logic, their functions, their services, and focus on that business logic, while the workflow offloads a lot of cross-cutting concerns such as parallel execution and data management. All of the orchestration logic is what workflows are responsible for.

At the same time, we have to look at workflows running in a serverless environment, and especially at cost. The cost of running in serverless environments can be quite different from other types of environments: workflows are often either charged per transition (if your workflow goes from one state to the next, you get charged a certain amount of money), or, in some cases, you're charged for the execution time of the workflow.
In both cases, for all the control-flow logic and the things we're adding to the specification, we're looking at how we can structure and build them to lower the overall execution cost of the services running.

Now, the core of Serverless Workflow is the model definition. Like I said, we support both JSON and YAML formats, so the model is machine readable, understandable, and embeddable. In the core model definition, each workflow has a unique id and can have a name, a version, and a description. The first core part of the model is the function definitions. These are reusable: you define a function once (we'll look at specific examples) and then reference it throughout your workflow states, which are the building blocks that execute the control-flow logic and can call different services.

Workflows can both react to events, being instantiated by the arrival of events, and produce events. So the second bucket of the model definition is the reusable event definitions. These conform to the CloudEvents specification: all of the event types the workflows can act upon have to be in the CloudEvents version 1.0 format. The third bucket is the workflow control-flow logic, the building blocks we currently call states. Those are really what allow you to do things like gateways, where you can split your workflow execution into different branches, parallel execution, and so on; we will take a look at those as well.

This slide shows the definition of functions within the Serverless Workflow specification. As you can see, this is very vendor neutral: each function can have a name, a resource, and a type. So from the Serverless Workflow definition you can use a whole bunch of different ways of accessing and defining the services that should be invoked during workflow execution.
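As a rough sketch of what such a model might look like, here is a minimal workflow in YAML. The field names (id, name, version, and functions with name, resource, and type) follow the description in the talk, but the exact layout of the 0.1 schema may differ, and the function resource shown is hypothetical:

```yaml
# Illustrative sketch only; not verified against the official 0.1 JSON schema.
id: applicationworkflow            # unique workflow id
name: Application Workflow
version: '1.0'
description: Minimal workflow with one reusable function definition
functions:
  - name: sendConfirmationFunction # defined once, referenced from states
    resource: /api/confirmations   # hypothetical service resource to invoke
    type: rest                     # invocation type
states:
  - name: SendConfirmation
    type: operation                # operation states perform actions
    start:
      kind: default
    actions:
      - functionRef:
          refName: sendConfirmationFunction
    end:
      kind: default
```

The point of the split is reuse: many states can reference the same function definition while passing it different inputs.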
Now, of course, functions for services need input parameters, and those are defined in the states that reference them, because multiple states can execute the same function with different parameter inputs.

Events: like we said, workflows can react to events and be instantiated by them, and workflows can also produce certain types of events, which can then be consumed by other services or even other workflows. We use the CloudEvents specification 1.0. Events have a name, a type (which has to match the CloudEvents type parameter), and a source. We also use correlation, because during workflow instantiation you often have to correlate different events to the same workflow instance, for example to resume execution of a workflow instance or to start a new one. For that we use a correlation token, which is just a parameter within the CloudEvents specification. In this example we use an applicant ID, because we want to correlate all the events (application submitted, scores received, and so on) to the same applicant, and most likely to the same workflow instance that handles this application process and consumes these events.

The core building blocks that allow you to do all kinds of different things are the states. Each state has a unique name, and we have a whole bunch of different types; each type represents a state that can perform a certain function. Each state of a serverless workflow can define the start of the workflow instance; that's where the start parameter comes in. We have different start types currently defined, and we are expanding on them; the default one just means the workflow instance is started.
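A hedged sketch of what such reusable event definitions might look like, using the applicant-correlation idea from the talk. The event names and the exact shape of the correlation field are assumptions, not quotes from the schema:

```yaml
# Illustrative event definitions; the correlation layout is an assumption.
events:
  - name: ApplicationSubmittedEvent
    type: org.example.application.submitted # must match the CloudEvents "type"
    source: applicationService              # matches the CloudEvents "source"
    correlation:
      - contextAttributeName: applicantid   # CloudEvents attribute used as token
  - name: ScoresReceivedEvent
    type: org.example.application.scores
    source: scoringService
    correlation:
      - contextAttributeName: applicantid   # same token routes events to the
                                            # same workflow instance
```

Two different event types carrying the same `applicantid` value would thus be delivered to the one workflow instance handling that applicant.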
But there can also be, for example, a scheduled start, which we currently have: you can say, I want to schedule the execution of this workflow in this particular time frame. Next come the state-specific parameters; each state has different parameters depending on what it does. Each state can also have an end definition, which denotes the end of the workflow execution. It can mean different things: it can terminate workflow execution, meaning stop it, or, as in this example, it can be an end of type event, where, when the workflow instance is completed, we produce a CloudEvent. That event can then be consumed by services, data indexing, or whatever you want, to say "hey, this workflow is completed", or it can trigger other workflow instances as well. If a state is not an end state, the workflow has to transition from that state to the next. This is denoted with the transition parameter, in which we define the unique name of the state we want to transition to.

State types (and I'm sorry if this is hard to read): we currently define nine types of states. We feel these are the core states, especially for the version 0.1 that we have released; we're looking into adding more states and refining the ones we currently have. We deal with events: for example the event state, which has the ability to consume events. The operation state allows us to perform actions, and actions can reference function executions. The switch state gives us support for gateways, which are data-based: given some information or data, provided either when the workflow instance starts or from the events that were consumed, the event data can trigger different actions or paths through the workflow as it's being executed.
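The start, transition, and end semantics described above could be sketched like this. Field names such as `kind`, `nextState`, `schedule`, and `produceEvent` are assumptions based on the talk, and the schedule expression is hypothetical:

```yaml
# Illustrative states showing start, transition, and end (assumed field names).
states:
  - name: CheckApplication
    type: operation
    start:
      kind: scheduled              # scheduled start: run in a given time frame
      schedule: "R/PT2H"           # hypothetical schedule expression
    actions:
      - functionRef:
          refName: checkApplicationFunction
    transition:
      nextState: NotifyApplicant   # non-end states must name the next state
  - name: NotifyApplicant
    type: operation
    actions:
      - functionRef:
          refName: notifyFunction
    end:
      kind: event                  # on completion, produce a CloudEvent that
      produceEvent:                # other services or workflows can consume
        eventRef: ApplicationProcessedEvent
```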
Of course, the parallel state allows parallel execution. And the callback state, which is the newest one we added, really allows us to integrate with different services and also with user tasks: being able to wait for a user decision is very important at times during workflow execution, when human decision making matters. So that's the core of the states currently available in the specification.

Now, as far as use cases go, there are a bunch available; in our GitHub repo we currently have a use-cases document that defines, I think, five or six different ones. I just wanted to show one here: an online vehicle auction. (Yes? Any question? Okay, I'll go ahead.) In this case we have an online vehicle auction where users on different types of devices (mobile, web, you name it) can bid on vehicles, and a serverless workflow can be used very nicely to orchestrate the different services, in this case an authentication service, a bidding service, and an inventory service, while also storing something in a common data store. So a workflow can do many different things, and this is one use case where it can be used.

Now, as far as project information goes: we are currently a subgroup of the CNCF Serverless Working Group. For communication we have monthly Zoom calls, which happen the first Monday of every month; this is the document where you can track our meetings. We also meet almost weekly to work on the specification primer document, which is still ongoing and should be done in the near future. As far as GitHub goes, we're currently under the CNCF wg-serverless repository, in the workflow directory. Here is also a link to our 0.1 release, which happened about two weeks ago. As far as governance goes, we're consensus and community driven.
We're still a very small project as far as numbers and community go, but we're growing fairly fast, and we're hoping that with inclusion as a CNCF sandbox project we can really grow this community. As far as owners go (and I hate the word "owners"), these are currently the companies that have the decision-making power: currently Red Hat, Nokia, Camunda, and Huawei, and we're looking at expanding that as well. As far as the license goes, we're currently Apache version 2.0. I know Brendan Burns mentioned in the TOC PR that this might not be the perfect license for a specification, so we will hopefully work with him on defining that in the near future to make sure the licensing is correct; either way, it should be completely open source.

As far as the community goes, we currently have a mailing list, which again is the Serverless Working Group mailing list. We have a Slack channel that we use for chat, and we also started a blog where we post information and community posts about the specification itself.

As for TOC sponsors: I know from the PR we created that Brendan Burns and Liz Rice have raised their hands to sponsor our project. I don't know if we need three sponsors or how that works; maybe somebody can explain that to me.

All right, so do we have time still? Yeah, we do, right? So let's take a look at a quick demo. We're doing a specification, but at Red Hat we have also built, and will contribute back to CNCF, an API and a runtime implementation. The runtime is of course Red Hat specific in this case, but it's a first implementation of the Serverless Workflow specification. It is not fully compliant with the entire specification version 0.1 yet, but we're getting there.
For the sake of this example I wanted to show you this demo, so let me go into presentation mode. We have this particular workflow called jsonservicecall. When we start the workflow, we want to access an existing service, in this case a REST service, the freely available restcountries.eu, and specifically its name endpoint. So this is our service execution: get the information for the country that is our input at that point.

Let me go back to this. It's a simple workflow. This is the operation state, which has an action that gets the country information. We reference the countryInfoFunction; that function will then call the service I just showed to get the country information. After that, very simply, we have a switch state, which you can look at as a gateway, with two choices, two paths out of the gateway, based on the population. If, for example (and this is just for the test), the population size we get back from the service is less than 20 million, we classify the population size of this country as small or medium; if it's greater, we classify it as large. You can play with the numbers, it doesn't matter, it's just for the test. The classification again calls functions, services which store this information, and they're defined up here as well. So basically it's a simple workflow: we receive the country information, and we classify it. That's it.

And we can run this locally; I'm going to show you that, and I'm going to skip the tests so it runs faster. This is running on Quarkus locally. What our implementation does, and this is something we also want the specification to show, is that workflows can be exposed as services as well.
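The demo workflow just described might be sketched as follows. The ids, function resources, and field names are reconstructed from the talk, not copied from the actual demo source, so treat this as an approximation:

```yaml
# Sketch of the jsonservicecall demo workflow (reconstructed, not verbatim).
id: jsonservicecall
name: Country Classification Workflow
version: '1.0'
functions:
  - name: countryInfoFunction
    resource: https://restcountries.eu/rest/v2/name  # name endpoint from demo
    type: rest
  - name: storeSmallMediumFunction
    resource: /classify/smallmedium                  # assumed storage endpoint
    type: rest
  - name: storeLargeFunction
    resource: /classify/large                        # assumed storage endpoint
    type: rest
states:
  - name: GetCountryInfo
    type: operation
    start:
      kind: default
    actions:
      - functionRef:
          refName: countryInfoFunction
    transition:
      nextState: ClassifyByPopulation
  - name: ClassifyByPopulation
    type: switch                     # data-based gateway with two paths
    dataConditions:
      - condition: "population < 20000000"
        transition:
          nextState: ClassifySmallMedium
      - condition: "population >= 20000000"
        transition:
          nextState: ClassifyLarge
  - name: ClassifySmallMedium
    type: operation
    actions:
      - functionRef:
          refName: storeSmallMediumFunction
    end:
      kind: default
  - name: ClassifyLarge
    type: operation
    actions:
      - functionRef:
          refName: storeLargeFunction
    end:
      kind: default
```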
What really happens is that our workflow is exposed on a REST endpoint, so calling that endpoint and giving it some JSON data will trigger the execution of the workflow. I go here to localhost, where we created a small web page, just an HTML page, which makes the HTTP call for jsonservicecall. And "jsonservicecall", as you see, is the ID of our workflow: the workflow's unique ID is actually the same name as the REST endpoint the workflow is exposed on. Yeah, go ahead.

(Interrupting the demo here, which is always awkward, because there are a number of questions and we have a couple of other items.) Yeah, I'll just finish here in a minute. So basically clicking "classify" will trigger the workflow execution, passing in the name of the country, Germany. We get the information, classify the population size as large, and display it; we can do the same, for example, for a small/medium one. And just to show you one more thing we have done: yes, you can deploy the application locally, but it's also deployable, in this case on OpenShift. We have deployed this application on OpenShift, and you can see the same type of information; you can type in Switzerland, for example, and classify it there as well. So that's it for the demo, and yes, of course, go ahead with any questions.

I'll let Harry jump in afterwards, but I have a number of questions noted down that I'd like to ask first. The first one, which you partially answered: is there an open source reference implementation? Yes, everything that I've shown so far is open source. Actually, there are two things I've shown; one is the API, which is the actual ability to parse the JSON and the YAML of the workflow specification.
Yes, that is completely open source, it's Java, and it's something we will contribute back once we have, like I said, a proper GitHub structure in place. We hope the community will then pick it up and help us, and not only that, but also contribute APIs in other languages as well. But the execution engine, is that open source? It is also completely open source; yes, everything that we do is open source. I'm not promoting the runtime implementation here, because it has nothing to do with the specification; it was just an example to show you something running. And we also hope that more and more companies, as they get involved with the specification, will create open source runtimes for it as well.

Did you already talk to other projects? There are a couple of workflow projects; you mentioned Argo, obviously, but there is also Brigade as a CNCF project. Are you already collaborating with these projects actively? No; our collaboration is really mostly with the Serverless Working Group and the folks from CloudEvents. That is definitely something we would like help and collaboration on in the future, but so far we haven't had the chance to do that yet.

The reason why I'm asking is actually two things. Specification projects are a bit different from implementation projects, in that somebody needs to implement the specification, which requires them to be handled a bit differently, and the goal is interoperability. I think it's a good thing to do, but I would really make it almost a prerequisite for your project that you actively engage with the Argo and Brigade folks, also in your own interest, because otherwise you will build a specification that the other projects in the CNCF are not actively using.
So, as a pointer, talking to the Argo and Brigade folks about how you can work together more jointly would be a great idea here. Yeah, I completely agree, and I think that's why we're trying to get our project, which is currently small, into the CNCF sandbox: it will really give us exposure not only to the community but also to the other teams within the CNCF, so the collaboration would be much easier for us in that case.

I would actually propose doing it the other way around, honestly: reach out to those projects and engage them early on. Obviously CNCF might help you, but you're at a very early stage with the 0.1, and I've done a lot of standardization work myself; I really recommend getting them on board and agreeing on the common problem you want to solve, because just being in the CNCF won't resolve this issue for you. If they're interested, if they see value in doing this, they should see that value right away. I know it's not required by the official CNCF sandbox documents, but I would strongly encourage you to do this beforehand, because there is no silver bullet; it's not that just by being in the CNCF people will magically show up and come to you. So that would be my proposal: actively reach out to them. Okay, that's a good idea.

I mean, one thing about sandbox projects is that they can just more or less die if they don't become active, so it's better to do it right up front. Definitely, yeah. But one thing is, yes, we're currently still small as far as community goes, but we do have a lot of big companies in place that have been around workflows and business automation for years, so we have that advantage. But yes, definitely, as you said, we will engage and see where it leads us. Definitely. The reference-implementation question we already asked.
I wonder what, of the pieces you have shown right now, is really specific to serverless, because it feels like a general-purpose workflow definition language. Yeah, I didn't understand the question. So, I wonder what's really so specific to serverless workflows here, because basically it's a workflow definition language in JSON and YAML, as far as I understand it; I don't see the really serverless part in there.

Yeah, that's a good question. "Serverless" is really kind of a buzzword anyway. We use the word serverless because the niche of this specification is to orchestrate serverless functions, and that is the way serverless is used. Yes, there has been some question in the community in the past: maybe we can call it "cloud workflow", maybe we can change the name. But I feel serverless fits this niche nicely, because as long as that buzzword is used for functions and microservices deployed in cloud environments, I think we should stick with that word. We are not looking for a general replacement of existing business automation or business process management workflows; we're looking at a very specific niche, doing what we do best: orchestrating microservices which are loosely deployed on cloud provider environments.

My next question moves on from here: is there also a part of the specification that defines how I should write a service that works with the specification, on an API level? How you should write what, sorry? A service. If I look at Lambda, for example (just picking one, there are lots of other serverless implementations out there)...
...what payloads can I expect if I'm writing a service, a step in your workflow process? Are you planning to standardize this as well, the data structure that services get? I mean, CloudEvents defines part of it, but it more or less defines the envelope, not the actual payloads, and you have some payload components in there. So is there a plan to define this more on the protocol and payload level for the services I want to write using this language as well?

All right, it's a good question, I'll try to answer it. Currently there are two kinds of restrictions the Serverless Workflow specification defines. One, as you mentioned, on the event side, is the format of CloudEvents; the CloudEvents payload, in its data parameter, can be a JSON structure, an XML structure, or even base64, as they define it in their specification. As I mentioned before, workflows want to offload the orchestration part from you, and part of that is not only orchestrating events and services but also the data management: for example, the input to a particular function call, as you mentioned, getting its result and then passing that to the next function call. The restriction we currently have is that the data structures within serverless workflows are JSON based. So right now, as far as payloads go, in order for them to be merged into, for example, the workflow context or the state data context, they have to be in JSON format. That doesn't mean the data within this JSON structure cannot be anything (base64-encoded images, URLs, you name it), but the overall structure of the data has to be JSON.

Yeah, I'm just thinking of it from an interoperability perspective, and you might want to talk to those technical people as well.
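Purely as an illustration of the JSON-only data restriction just described (this example is invented, not taken from the spec), the data flow around one function call might look like:

```yaml
# Illustrative only: how JSON state data might flow through one function call.
before:             # state data context before the function call
  country: Germany
functionResult:     # JSON returned by the invoked service
  name: Germany
  population: 83000000
merged:             # context after merging, passed to the next state
  country: Germany
  name: Germany
  population: 83000000
```

The leaf values can carry anything (base64-encoded blobs, URLs), but the containers around them have to be JSON objects and arrays so the runtime can merge them.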
And I don't know if you're aware of that part of CDF, but right now they are discussing exactly these details in an interoperability working group: how to name things, how to pass things around, carrying baggage along. There is also work in a totally different field, in the OpenTelemetry project, where they do some work on baggage transportation, these kinds of things. So if we're really talking about a specification, I would see these things falling into it as well, honestly. Okay. And I can help introduce you to the right people if you need any pointers there, definitely.

Harry, any questions from your side? No, I have no further questions. I think your discussion is very meaningful, because if you are trying to talk about a specification, then you really need to care about whether it can fit different implementations, instead of being just a specification. I think further discussion around this topic with the right people makes sense, and I also think you need to involve the Serverless Working Group in the discussion to see their point of view.

What do you specifically mean? We have been working very closely with the Serverless Working Group. I mean input from the Serverless Working Group community. Sorry, I don't really understand what that means; the users or the implementations? I assume there will be users or implementers in the current Serverless Working Group who are interested in workflow implementations, so I think they may have some valuable input regarding the project itself. Correct. Yeah, we give status updates in the Serverless Working Group meetings, and we're part of the group, so we have received some valuable information, and so far they have also been helping us.
They have helped some with comments on the primer documentation that we're working on, so yeah, we have been working closely with them for close to a year now. They also helped us some with the correlation, with input from the CloudEvents specification. So we're working closely and will continue working with them, as we're currently a subgroup under the Serverless Working Group. So there is communication, and we're definitely looking for more advice from whoever is interested in giving it.

Okay, then it's good to move on to some other topics on the agenda. The next one is a quick one: for those of you who have been with us for a while, we have been working on a logo for the SIG App Delivery working group. There is a second round, and there's an issue linked in the agenda. If you haven't shared your ideas on where you want this to go, please provide your input, ideally by the end of the week. Well, this is a short week already, so let's say by the middle of next week, Wednesday, so we can then eventually move this forward. I know that Diane left comments in the issue, and we'll work on wrapping this up. I'll do that, get it in there.

I also wanted to provide some quick updates on project proposals. (Ours is already in there, so it should be ready for the review process?) Yes, it actually is, and it's on us, the chairs, that we did not find the time yet. The document is fine; expect to hear from us this week. I had it on my plan to have a look after the talk today, but today turned out to be a bit shorter than expected, so definitely expect something from us in the coming days; we did not forget about you, we're just a bit short on this. I also don't want to forget the Operator Framework and OperatorHub proposal, which is still with the TOC, same as for KUDO.
Same story there: we are still waiting for feedback from the TOC, as there were some questions from the TOC, I think, for KUDO. In case you got... sorry, go ahead. Okay, yeah, I also have an update regarding the Buildpacks proposal: I'm reading the documentation for Buildpacks containers, and I will do the review this week. I can be the contact person for the Buildpacks review process. Yeah, for KUDO and Operator Framework I would ask you to maybe ping the TOC mailing list directly regarding updates; that might be the quickest way for you to get them. For KUDO we are still waiting for the final decision on how to proceed, and for Operator Framework and OperatorHub the review has been done by the working group and is now with the TOC; the TOC has to decide on next steps. Yes, and Keptn has done its review; this needs to be forwarded to the TOC. Any other projects we forgot about? No, I think there are no others in review right now, but obviously we've not yet started with Serverless Workflow.

I want to take the last couple of minutes for the air gap working group to provide a quick update, and also for the operator working group, whoever wants to talk. Since we have nobody from the air gap group, I'll just give them a minute. In case you did not have a look (I will post the link in the agenda), the air gap working group started to work on a best-practices document on how to do air-gapped deployments, and I think right now they already have one customer example in there; I think it's from [inaudible]. In case you are running an air-gapped environment or are interested, you should have a look at it. They are also meeting on Friday, for those of you who are interested and want to join; I'm just finding the document, give me a second. Oh, that's weird. Amy, is it possible that we have the meeting notes? Yeah, I'll figure out where the document is.
But in case you're running air-gapped environments, have a look at it and also try to join the Friday meeting. On the operator side, we have Mark here. Yes. Now we can hear you. I am here. Yeah, we don't really have anything; we've had a couple of recent meetings in the operator working group, and we're still working through the definition of an operator. There are some interesting conversations that happen every couple of weeks, so I'd encourage anybody who has an opinion about what defines an operator to get involved; we're still at the phase of just trying to define what an operator is. I believe we have our next meeting next week, and I encourage anybody to come, join in, and participate. Nothing to report yet, but hopefully soon.

I think that's pretty much it for the agenda today, and we are finishing on time, perfectly on time. Yeah, I'll just post the link to the work done by the air gap group in a couple of minutes here. And I think some of you will reach out to some of the projects offline in the next couple of days, and then we'll see each other again. Thanks everyone.