Hello, hello. Hey, seems like it's just the two of us, at least for now. Falko was asking if I had sent out an invite for today's meeting; I just sent him the link to the archive. Oh, there's one more, hello. Hi. Hey, how are you doing? I'm doing good, thank you. I think if Falko wants to join, he will. And, Tihomir, since you're about to leave early, let's get started. So, not much has changed on the document, I'm afraid, and I'm sorry for that. Given the current situation there's a lot going on at work, and I haven't had time to work on the document. But after our call I noted down a few things in the section. As my action item, I applied roughly the structure of the primer from CloudEvents: starting with the history, then the design goals section, then a part on processing that is specific to the workflow specification design, and then a glossary, a definition of terms. We had that in the last version as well. I closed a few of the discussion points and also added a little bit of suggested text, but we still have a lot open, so maybe let's touch on one or two points now and then try to fill the document with more text. Can you see my screen? Yeah, I can. Cool, great. So we had one question from Scott Nichols, who was asking: why serverless? What makes the workflow serverless? And I really wanted to touch on this one because I had a few doubts myself. I know we want to use the workflow specification language to orchestrate serverless functions, for example Lambdas, or serverless microservices that scale to zero and are known maybe by a callback endpoint.
And then there is, of course, the Functions-as-a-Service concept on its own, where you just identify the function and leave the rest to the platform: from pulling the source and building the binary, or loading it at runtime with a just-in-time compiler, all the way to executing the actual hook. An early version of the design specification said that function invocation was not in scope for the workflow specification group, maybe because there are so many different ways of doing it. But at some point, Tihomir, in the blog post that you shared in the serverless working group, you also mentioned that we are orchestrating event-driven functions, and I wonder if that is something specific to serverless workflows. What would you say makes the workflows we specify serverless? Because we adopt CloudEvents, maybe? I think this is something we could put in, but then we already got pushback from Scott saying, hey, CloudEvents is not about serverless, it's CloudEvents. Yeah, CloudEvents is just a format, that's it. The specification is small, and compared to what we're doing it's actually tiny; it just specifies a unified format for events. As for what makes what we're doing serverless — well, it depends. What is serverless? You deploy microservices, or services, that are loosely coupled and driven by events. So we have event-driven applications, and that is kind of the core of serverless computing itself. Can you do event-driven applications outside of a cloud environment? Yes, so that doesn't make it any different. We cannot control where the services we're orchestrating are deployed, or how. We're defining a way to orchestrate their invocation. Functions by themselves can be event-driven, or an event can trigger multiple invocations of some sort of services, and that's exactly what we're orchestrating. We're offloading the event definitions, serverless or not.
From event-driven services, yeah. I would just argue that one aspect of serverless, to me at least, is of course not having to manage servers. But then I find that to be true of all workflow engines I've come across. Yeah, and you can manage events outside of serverless; that's something workflow engines have been doing for over ten years. Events can be anything — Kafka events, anything — this is really about deployment. And what makes us serverless is more than just what the format does. We could argue that the JSON and YAML formats are more suited to serverless, but then again, even SAP is starting to use BPMN 2 for serverless orchestration, and BPMN 2 itself has absolutely nothing to do with serverless, you know? So as far as why we call ourselves Serverless Workflow: we are assuming that the majority, or default, definition of a serverless application is event-driven, loosely coupled microservices, and we structure our workflows to be able to act upon events. And we simply do not define the execution; in a lot of workflow engines you define a lot about the execution, and we don't. We basically say: here is your action, here is a function that can live anywhere, you have a URL to it. So it's a little different from the traditional workflow approach. So how about those function invocations? I think what we are adopting here, at least in the examples, is a mention of a URN — is it an ARN in that example, or something similar? Yeah, it's an ARN. That's the Amazon Resource Name, right? So it's a virtual endpoint; it's not a URL, not a location. It's location-transparent, that's okay. Yet what it looks like is a synchronous invocation — or, okay, if "synchronous" is something transport- and process-specific, then let's say it's a request-response model. So it's not really event-driven.
Or could I send an event to a function and pick up the result from a bucket? I think there is no such thing — this would be specific to the invocation, right? Our actions have an actionMode parameter, which is either sequential or parallel, I believe. So we do support both sync and async execution of the invocations of the remote services, yeah. Okay, but it's not like sending out events and collecting the results later in the workflow; it's really the execution of a single — okay, that's why we call it a state, right? I don't understand. I mean, you cannot mix functions and events on this point. Events are events, and functions are invocations of some services. Maybe I don't understand, sorry. Okay, but if serverless functions are serverless, and the core of serverless is event-driven, then the way to invoke functions would be events — like you would bind a function to a storage trigger, right? Yeah, okay, I understand. The way we handle this: we said from the beginning that the use of workflows in serverless computing is to offload the orchestration logic from the business logic, which in this case means our workflow offloads the event collection and the event triggering from the function itself. Where before, in the function, you had to define your trigger events and what happens when those events occur, you now have a workflow which consumes these events and then triggers the function for you. The functions themselves, or the services, yes, they have to be exposed one way or another, and that is typically done via a URL — whether that be a REST endpoint or whatever it might be — or some identifier that the system knows to associate with that service invocation. So that doesn't change. Even if you define events inside of these services, they have to be exposed somehow, you know?
But could a Kafka topic, for example, be the endpoint of the invocation? Yes, of course it could. There has to be some service then that listens to the Kafka endpoint, takes whatever event type the Kafka event is, and converts it into a CloudEvent. We're not concerned with that part; how to invoke the functions themselves is up to the serverless provider infrastructure. But yeah, of course, we currently also have business processes listening to a Kafka topic. Okay, so are we missing something in how an action or function is defined? We have the functionRef, and we have the functions definition at the beginning of the workflow spec. Again, that doesn't say anything about the transport. I'm just wondering: if I went ahead and provided a workflow engine, and somebody has Lambdas in Amazon, or Azure Functions, a cloud one, whatever — there is a way these serverless functions would be invoked, but I'm not sure how to tell the engine how I want them invoked, you know what I mean? I don't want the engine to download and run the function itself; I just want it to invoke it on Amazon. Yeah, you can implement it in many ways; you can define the resource and the types. For implementations we provided the type parameter, which we left open-ended, so an implementation can say: for one function, the resource might be the name of a Kafka topic and the type might be Kafka; for another function, the resource might be a URL and the type might be REST. So we're kind of open-ended on that. There could be some improvements, but I think it's kind of— No, I mean, it's good; there's a lot of flexibility on this end. So one could go ahead and say the type is JavaScript and then just refer to a JavaScript URL. Yeah, exactly. I guess what we are— Hi, Falko.
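The open-ended type/resource idea discussed here could look roughly like the following sketch of a workflow's functions section. This is illustrative only: the field names follow early spec drafts, and all concrete values (URLs, topic names, the ARN) are hypothetical.

```yaml
# Hypothetical "functions" section of a serverless workflow definition.
# "type" is left open-ended so the implementation decides how to invoke
# each function; "resource" is interpreted relative to that type.
functions:
  - name: processOrder
    resource: "https://example.com/orders/process"   # a REST endpoint
    type: rest
  - name: publishResult
    resource: "order-results"                        # a Kafka topic name
    type: kafka
  - name: scoreRisk
    resource: "arn:aws:lambda:us-east-1:123456789012:function:scoreRisk"
    type: awslambda
```

The workflow states would then reference these functions by name only, keeping the transport details out of the orchestration logic.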
Yeah, hi guys, sorry for being late. What we could say, as a smallest common denominator, is that in our world a function invocation is something identified by a URI — even if it's not a location but just a name, it's still a URI, some identifier. That's what we have there. And from our perspective, it receives and returns JSON. Whether that is later translated to something else might be implementation-specific, but in our language, the data model we speak is JSON, right? Well, yeah, the parameters to an action have to be defined as JSON. Well, that's just encoding, isn't it? It's a tree-based encoding format; I could encode the same data structure as XML or YAML. Yeah, but in order to define our niche, we could say JSON is what we have, at least as a programming model, so to speak. If people implement that internally in some different way, that's fine, but the model of thinking, and what is discussed, is always JSON. Am I right, or am I missing something — is JSON left open as well? Can I use XML with this language too? I think, because this is not specified and it's left open, it's not really an issue; the specification just doesn't go any further there. And it's in line with the original idea that the workflow specification only specifies the business logic and, yes, offloads these specifics to the platform that implements the workflow language.
And it's also up to the platform how it then implements it. Again, my example: if you had a platform that only deals with JavaScript functions, pulls them into a Node VM, and executes them, that's also a Function-as-a-Service of sorts, and you could build workflows for that. Or you could have Knative services being invoked via Istio — with the Serving component of Knative, that would be your HTTP callback, request-response pattern. Or if you employ Knative Eventing, and those guys are currently working on the SinkBinding and so on, then this would be purely event-driven. And in that case, I don't have to tell you how many different adapters there are for simple event messages. We had this example with Kafka: the type of the function would simply be Kafka, and there wouldn't be a URI but probably a Kafka topic name. And maybe in that special case there would also need to be extensions that specify the Kafka broker and so on. A little bit like all the protocol adaptations of CloudEvents, this would be an adaptation of the workflow specification to different function invocation formats, sort of. But I just wasn't clear on what to write here, because we also got this comment from Scott about what makes the serverless workflow serverless. Yeah, for Azure Logic Apps, for example, it would just be the name of an action, you know, so it would be just a string. So depending on the implementation, it could be anything. But wait a second, before we continue, can I ask — because I haven't seen Mona around before — can you introduce yourself, just so we know who you are and can greet you properly? Yeah, sure. I'm a software architect at Accenture with 12 years of experience. My background is mainly in the Java sphere, but since 2016 I've been interested in containers and cloud-native stuff.
And I started last year with Kubernetes, and I'm interested in getting into open source, because it's really interesting to be involved in the community. That's why I'm here, just to take a look at what's going on; serverless especially interests me the most. Welcome! And can I ask how you found us — did you get to us from the Slack channel, or how did you find us? Because we're trying to grow a community, so this is great. Yeah, I was browsing GitHub, actually, and I found your serverless working group, because I'm really interested in getting into open source and getting involved in the community. So I found the working group and the links to the meetings, and I just subscribed. Well, thank you. And yeah, feel free to contribute, hang around, and join every meeting; that would be great. Yeah, sure, my pleasure. Yeah, welcome from me as well. Um, Falko, sorry, I thought again about your common-data-format comment, and I think there's something to it, with all the data bindings possible in different encodings. Although everything can possibly be mapped to everything else, it's good to have one key format from which everything else is derived. And JSON is probably not the most— But aren't we saying at the very beginning that what starts our workflow is a single JSON document? Maybe that has changed through all the pull requests flying by. Well, it's a couple of things. The workflow has a data input and, yes, it is currently defined to be JSON. A workflow can also be started with a start event state, via events. And for the CloudEvents data section, we also have to state — and we haven't yet — that we cannot consume every type of data from CloudEvents. That's something I've been trying to work on in my free time, because the CloudEvents format defines a datacontenttype and also has two data members in its JSON format.
One is data, the other is data_base64. We have to be clear that, for the workflow, we can currently only consume CloudEvents whose data is of type JSON. Okay, so that sounds like we are defining a proper niche where things don't contradict each other. And that will then also hold for function invocations — unless we had a function invocation with a CloudEvent as input or output, but that again would require JSON as the data format. Yeah, we can say JSON; however, there is a nuance. The data of the CloudEvent has to be in JSON format, but it can, for example, have a parameter that holds a binary string, right? So we're not restricting ourselves to 100% pure JSON; we're just saying the content type of the CloudEvent's data has to be JSON for us to be able to merge it with the state data, or the workflow data, which is JSON itself. So there is a restriction, but not a total one. And the only thing I want is for these assumptions to be written down clearly for the readers. Yeah, definitely. It's a benefit, and it's definitely going to be the wrapping format. I mean, we're not going to define a workflow where we pass around JPEGs. Well, we could — no, no, the JPEG would still be Base64-encoded in a parameter in our JSON. Yeah, or a URL to a JPEG; that works as well. The only restriction we can state is: look, the data format of the internal data structure of a workflow definition is JSON. So whatever wants to be consumed — merged into the data of the state, the input, the output — and to be filtered, has to conform to that same format. Other than that, we can deal with it. And that's it. So also, I think, our filters — everywhere we extract paths, or select paths for merging data.
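The consumption rule being discussed — only CloudEvents whose data is JSON-typed can be merged into the (JSON) workflow state — can be sketched in a few lines. The member names (`datacontenttype`, `data`, `data_base64`) come from the CloudEvents JSON event format; the merge policy itself is our assumption, not something the spec text has fixed yet.

```python
import base64
import json

def extract_mergeable_data(event):
    """Return the event's data if it can be merged into the workflow's
    JSON state data, else None.

    Per the CloudEvents JSON format, an absent datacontenttype implies
    application/json; binary-carried data arrives in data_base64.
    """
    ctype = event.get("datacontenttype", "application/json")
    if "json" not in ctype:
        return None  # e.g. an image/jpeg payload cannot be merged
    if "data_base64" in event:
        # JSON-typed data carried as Base64: decode, then parse.
        return json.loads(base64.b64decode(event["data_base64"]))
    return event.get("data")

# A JSON-typed event is accepted for merging...
ok = extract_mergeable_data({
    "specversion": "1.0", "type": "order.created",
    "datacontenttype": "application/json",
    "data": {"orderId": 42},
})

# ...while a non-JSON payload is rejected.
rejected = extract_mergeable_data({
    "specversion": "1.0", "type": "image.uploaded",
    "datacontenttype": "image/jpeg", "data_base64": "/9j/4A==",
})
```

This is only a sketch of the restriction; how a real engine merges the accepted data into state data (filters, paths) is a separate discussion.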
And so everything is JSONPath, right? Yeah, and Falko last time mentioned that that's probably not a good idea, but we'll take— I have been looking into it, Falko, I can tell you: using a single expression language is a very good idea of yours. And I've been talking to Edson and those guys here about using FEEL. They all say that FEEL currently has some restrictions as far as what it can and cannot do. So maybe in a separate meeting you and I can talk about whether we can go ahead and use that as the single expression language that we enforce, kind of, in order to promote vendor neutrality, or whatever we want to call it. But yeah, definitely — that's a separate discussion. Currently, in your primer section about expression languages, we state that we don't define one, but we also don't restrict to a single one. And that kills any interoperability. With DMN, we had big success in achieving real interoperability of DMN models because we do have a standardized expression language. Maybe some things are still missing — for our use case some things could be improved — but the foundation is at least a lot better than that blog post that introduced JSONPath. It leaves any more complex operation to some underlying scripting language, which is nowhere close to specified. Yeah, so definitely, let's take that as an issue item and work on improving it. But blog post — do you mean my blog post? No, JSONPath was— Oh, that one, yeah: JSONPath was introduced by a blog post. If you search for JSONPath, it's the first hit. And I'm not sure there was ever an attempt to standardize it, but in its current form, any implementation is somewhat proprietary. There are some common parts that you can probably do similarly in every implementation, but those common parts are relatively thin.
It's mostly about data access — accessing properties and doing some path magic — but sometimes you need functions in order to work on more complex structures like lists, and then it gets tricky. Well, you can blow up an engine if you allow one function too many. So how about conditional expressions — do you know if we can use JSONPath for all of our expressions? Yeah, that's exactly the point, right? In theory you can, but for anything more complex, JSONPath just says that more complex functions could be provided by some other scripting language that the implementation is based on. I think the original vision of JSONPath was that it would be implemented on top of a scripting language, like JavaScript, and then you could leverage any functions that scripting language provides. Okay. But I guess we wouldn't want a hard binding to something like JavaScript. Even though it's a language that is available on many platforms, the problem is that it's Turing-complete, and you don't want full programming-language capabilities inside your expression language. Yeah — that's what the services and the functions are for. So, just to make a note to myself: your suggested language was FEEL, right? Yeah, FEEL, F-E-E-L, like the word "feel". It's part of the DMN specification by the OMG. Yeah, it's something to work with. And I guess this whole discussion should definitely be recorded and put into the document; maybe I can take that as an action item. Speaking of which, how can we download recordings — do we need to contact Doug for this? Is this room being recorded? I'm guessing it's using cloud recording, but that's available only to the account owner. Yeah. And, well — there was also some stuff appearing on YouTube.
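The limitation described here — plain path access is easy and portable, but anything beyond it needs functions from a host scripting language — can be illustrated with a few lines of plain Python standing in for that host language. The data shape is made up for the example.

```python
import json

# Hypothetical workflow state data.
state_data = json.loads("""{
  "order": {"id": 7, "items": [
    {"sku": "A", "price": 10.0},
    {"sku": "B", "price": 2.5}
  ]}
}""")

# A JSONPath like $.order.items[*].price is pure data access:
# easy to support consistently across implementations.
prices = [item["price"] for item in state_data["order"]["items"]]

# But a workflow condition like "total price exceeds 10" already
# needs a function (sum) that JSONPath itself does not define.
# Implementations fall back to their host scripting language here,
# which is exactly where portability breaks down.
condition = sum(prices) > 10
```

A standardized expression language (FEEL is the candidate named above) would define such functions in the language itself rather than delegating to whatever the engine happens to embed.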
Not sure if that's automatic or if that is something Doug does after the meetings. Yeah, probably after the meetings. There is a recording button, but it says "please request recording permission from the meeting host". Yeah. Best is to talk to Doug, because it's his Zoom meeting. He should receive an email, or maybe he has something automated to catch the recordings and push them to YouTube. So, Tihomir, I don't know how we are on time; you mentioned that you have something to attend to. Yeah, I was going to leave, but school is out, so I've got all the time you need. Well, okay — they closed all the schools. Oh well, I wasn't following it that closely; for the US, same here. Okay. So, stateful — I also had this other one, stateful versus stateless. I think we touched on it a little last time. Yeah, maybe let's go through this slowly if we have a little more time. So, for the workflow concepts: would you agree that the quintessence of what we've been discussing is that the Serverless Workflow language is serverless because we use formats that are common in the serverless space? We adopt the CloudEvents specification; we define our triggers with it. And last but not least, something about the function invocation, but I'm not conclusive on that one. Yeah, I also have a feeling that some stuff is still fuzzy there. So maybe, in summary, one could argue that we are using the commonly used objects of serverless frameworks, and that somehow comes down to functions identified by some kind of name, URI, something like that. I guess the really common ground is just a string, but maybe we could at least give some examples, as you did here. And drilling down, the three things are probably events, functions, and JSON as a programming model. Yeah, okay, this is the programming part.
So we're at least in the same declaration-language space. But what about the workflow context? By context, sorry, I mean: yes, a JSON data structure starts the workflow execution, and in between we're also working with a JSON data structure. So this is the context attribute. I think that's okay. For the function invocation, I'll just leave it open. If anybody has ideas — I mean, I have plenty of ideas of how to interpret serverless, but I think that's not at the root of this. So you could have the entire workflow execution be serverless, but the language makes no assumptions about that, right? But I mean, I don't think we have to focus on this word "serverless" — who cares? Nobody else does. If you look at any other serverless, quote-unquote, workflow implementation out there, nobody describes why; it just is. Even, like I said earlier — and Falko, you'll be happy about this — SAP now uses BPMN 2 as well. What's serverless about that? Nothing, you know what I mean? And as for us, the only thing we have to describe, like I said, is that we're orchestrating event-driven, loosely coupled applications. That's all that matters, honestly. Can you use Serverless Workflow outside of serverless? Yes, of course. And we allow other states, not just event states, to be starting states of the workflow, so you can use our definitions to describe workflows that don't even deal with event-driven architecture. So we're not forced. Our name is what it is because what we're trying to do with this specification is unify the workflow models currently running in serverless, whatever they might be. So that's kind of our business: how these serverless event-driven architectures are defined. What kind of services providers offer — we cannot go there, you know?
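The point that event states are only one possible starting state could be sketched like this: a workflow whose start is triggered by a CloudEvent. The structure and field names are illustrative only, loosely following early spec drafts; none of it is normative.

```yaml
# Hypothetical workflow started by a CloudEvent (an event state as the
# starting state). All names and values are made up for illustration.
id: handleOrder
name: Handle Order
states:
  - name: orderReceived
    type: event
    start: true
    events:
      - eventRefs: ["order-created"]     # the CloudEvents trigger
        actions:
          - functionRef: processOrder    # business logic lives elsewhere
events:
  - name: order-created
    type: com.example.order.created      # CloudEvents "type" attribute
    source: /orders                      # CloudEvents "source" attribute
```

Replacing the event state with, say, an operation state as the start would give a workflow with no event-driven trigger at all, which is the flexibility being claimed above.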
Then I already pulled part of what we discussed into function orchestration, because I think the details of how we invoke functions are really about what the workflow language is supposed to orchestrate. I know that we can emit events and consume events, and we can also be triggered by events. But I think the core of all this is to orchestrate processing that goes on elsewhere and is somewhat serverless. I wouldn't create a serverless workflow just to do some data manipulation without any invocation — I mean, I could do that without invoking any other function, but I think the core of this is to orchestrate these function invocations. Would you agree? Yeah, I mean, the core of what, sorry? Of the workflow language — that it is there to orchestrate function invocations. Is that a sentence you would subscribe to? I mean, yeah, although defining what orchestration is would also be a nice thing to have. And that goes to: why do we need workflows in the first place? The main reason is separation, right? The separation of business logic, which our functions or services need to focus on, versus the, quote-unquote, orchestration, which is everything else: control-flow logic, data management, event management, execution semantics or execution definitions. So we're offloading. You can write all this stuff without workflows, and people have been doing it. The reason we care about workflows is this separation of concerns. What we take care of is a repeatable, reusable, graphable structure for these workflows, where we offload from the actual business logic — our services — which then only need to focus on specific, business-oriented things: the actual business problem. Okay. By graphing, you mean the notation, right?
Well, yeah — whether you use tooling or not, these workflows are one way or another presentable in a readable and understandable structure, whether by reading the JSON from top to bottom or, where the markup is unreadable, via some sort of graphical model. So, control flow, I get it. Data management — not so sure about the data management. Event management, yeah: defining the triggers or emitting events. Execution semantics — that I'll adopt. Data management is what goes into the context and what is used to invoke a function. Data management, as far as workflow orchestration goes: when you write a single function, which is supposed to solve a single business requirement, then without data management you would have to know the data inputs, the data outputs, everything about all the other services it might trigger afterwards. You would have to know all of that in order to solve one particular business problem. Workflows offload that. In workflows, you define the parameters to your functions, and you define how the results of those functions are handled, right? So as far as your function code in your service goes, you don't have to worry about any of that; you just focus on what data you need and what you do with that data. You don't have to worry about the big picture, which is offloaded to the workflow. So this is somewhat holistic, yeah. That's — okay, cool. I think that's something we can use to explain the function orchestration that is addressed by the Serverless Workflow language. And then portability — I actually don't want to go down that route right now; I just copy-pasted it because it's one of the goals. Since the workflow specification doesn't make many assumptions about the actual implementation of functions, or of the engine, I think it's really hard to claim portability.
But that's just an opinion right now, so maybe we can pick up on that later. What do you mean by portability? Can you go to point six? Oh yeah, good question. "This facilitates serverless workflow portability" is one of the goals of the Serverless Workflow specification, right? So it's to make the workflow description portable between engines — where one engine could be entirely designed to invoke ARNs and another entirely designed to communicate with Knative services through Kafka topics. Yeah, I mean, we're only looking at portability on the model level. No matter what, there are going to be differences between implementations, but what we're trying to do is minimize those, right? What you're describing, I don't think is feasible in the real world, because if I port my serverless workflows from AWS to Microsoft, I'm going to have to change things. But the difference is changing a few strings rather than having to change my whole workflow definition, right? So, at the current stage, we could say: portability to the extent possible for the orchestration part. The known limitations with what we have right now are the expression language and the concrete function binding, where vendors will have to extend the language to get something working. If we fix the expression language, we could get to a stage where, if you have functions with the same signatures, you could in theory take a model and run it on a different platform, given that you provide those same functions with the same signatures. Maybe they still have different names that need to be mapped, but yeah, that's realistic. And then of course, for the CloudEvents side, there is a similar thing: we have the data model in there, which we fix to JSON, but wherever these events come from, the signatures, or data structures, have to match what the process expects.
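The "changing a few strings" argument could be pictured like this: the same hypothetical function binding on two providers, with the workflow states untouched. Both documents are invented examples, separated as two YAML documents.

```yaml
# Hypothetical binding on AWS:
functions:
  - name: scoreRisk
    resource: "arn:aws:lambda:us-east-1:123456789012:function:scoreRisk"
    type: awslambda
---
# The same workflow ported to Azure: only the binding strings change,
# assuming a function with the same signature exists on the other side.
functions:
  - name: scoreRisk
    resource: "https://myapp.azurewebsites.net/api/scoreRisk"
    type: azurefunctions
```

Every state that calls `scoreRisk` by name stays identical across the two platforms, which is the model-level portability being claimed.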
And I think CloudEvents is going in the right direction with discovery and subscription. Yeah, that could be an additional help here. Because if you have an event type, I think there is a known format — I don't know if they allow specifying the content beyond the encoding — so that, along with the event registry, you would also register which fields in the body of the event are mandatory or optional. I don't think that's currently happening, but I'm not sure. Registration is something ongoing in the CloudEvents group. Okay, I think it's fine to explain these two goals — what is meant and, actually, what is not meant. So: not touching on function bindings, and leaving the expression language open for now, although the latter is more of an open question, right? Because this is work in progress, it's neither a goal nor a non-goal to define a common expression language; we just haven't decided yet. But for the function bindings, at least for now — because it was also in the original design document — this is something the specification would not cover, and that is good, because it would be exhaustive otherwise. Then let's make that clear here: the expression language could be something that gets scoped more narrowly later, while for function bindings we all seem to agree this needs to be left open for vendor extensions. Yeah. And, well, on the expression language there are pros and cons: on the one hand we'd be narrowing the possible applications, but we would also increase portability. There was a similar discussion in DMN — while the specification leaves it open, most vendors then settled on FEEL as the expression language. I think we at Camunda were one of the only ones supporting a whole portfolio of different expression and scripting languages, but we're slowly also circling in on FEEL, because it gives us a fixed expression language to work with. But yeah, different discussion.
Maybe we should put an agenda item here, or a table of contents item, on the expression language discussion. It should probably be a chapter in this document, to have this debate and explain the current outcome. Would you rather see it here? Because this, I think, is the concepts section, and to me the expression language discussion — giving specific examples, naming existing languages and so on — is more about the actual realization. So something in the specification design, right? So maybe under specification design, should we have an expression language sub-point? I think that could make sense. Do you have some copy-paste content that you could provide for that? Because we recently went through that exercise of discussing which expression language to use in our engine. It's a little bit about the data; not sure if data transformation is maybe part of the expression language topic. I mean, yeah, we would have to assess FEEL. I wouldn't put any specific name of any expression language down now, because we still have to evaluate whether we can use FEEL or not. It has a lot of restrictions compared to some other ones out there. But I would definitely say a single expression language should be enforced, for portability. I mean, nothing wrong with naming it as a candidate, but still making clear that this decision is not final yet, or that it's not a decision at all — it's just a candidate that we could look at. Yeah. Just to show that we are not blind on this topic, but have done some initial research of what the world looks like. I like this term that you used the other day, Manuel, that we are measuring the world — or that the CloudEvents team measured the world before they started their specification. And I would assume that if the TOC wants us to adopt this, they want us to have a certain measurement of the world done as well. I said we need to show a little bit of activism, so let's make a measure of the world, actually.
But since we got that comment, we should also cover related tech — what AWS Lambda offers, and Netflix Conductor. I think you may have suggested we cover this. We just need to show something. I think the primer shouldn't be all self-concerned; it should make these connections to different projects and specifications, right? So that's, yeah, in some way. Okay, last one. We're almost at the top of the hour, unless I invited you for an hour and a half — but I don't think so. No, it's just one hour. Stateful versus stateless. Or maybe we can leave it for next time, because we only have seven minutes and maybe we should wrap up. But let's give it a try. So stateful versus stateless, to me, doesn't make any sense, because the workflow has state, and it's passed between what we call states. Every workflow has — you'd probably call it workflow data — what to me is the state of the workflow. The naming already indicates that we are talking about some kind of state. There might be workflows that run very quickly and don't need to persist the state for a longer time, but still they will have a state, in my opinion. How would we support stateless serverless workflow implementations? I wouldn't even know how. If I support something stateless, then maybe I make the persistence of the state transparent, so that the system just takes care of it. But this is a system design and implementation aspect, so it's not something that the specification can ensure. This is where we differentiate ourselves from AWS in a big way. The Amazon States Language is a stateless language. What does that mean? It doesn't provide any means for scaling, and scaling to zero especially. Just by using CloudEvents and a correlation token, we allow for scaling to zero and restarting the workflow.
For example, when an event actually arrives, the event states that we define — for example, as a starting state — can even start the workflow instance, right? This is where we are actually a superset of the Amazon States Language, okay? Is that a property of the language, or just a property of Amazon's implementation? Well, the whole thing we're talking about is just the model, right? Our model allows for stateful orchestration, because we define a model through which users can model stateful workflow execution. If we use just the Amazon States Language, there is no way to even model something like what we can, because there is no event-driven state in the Amazon States Language. So just by the means of our workflow definition being able to use an event state, or this callback state, or whatever other states we have, we support both. So it's just a feature of the language — the model itself — rather than anything else. Okay, so the summary would be: the language has wait states. Yeah, exactly. I mean, what else can you say? But then you have to define a wait state, yes. But in short: we know that in serverless environments two things matter, right? A different pricing model, and scaling is very important. And our language allows users to write workflow definitions that can scale. But I'd argue that Amazon's Step Functions are scalable as well — you can have multiple invocations of your workflow and it would scale just like— Yeah, yeah, but that is done via other Amazon services, right? That is not shown in the model itself. I suggest we don't argue too much with Amazon here, but rather say something like: the language supports asynchronous communication, and therefore it's clear that we have to wait for an asynchronous response to arrive, and we have different language elements that allow that. Yeah, and in addition, we also have correlation, right?
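The scale-to-zero argument above can be sketched as code. This is an assumed engine design for illustration, not anything normative from the spec: while a workflow sits in an event state, the engine persists its state keyed by a correlation token and runs nothing; when an event carrying that token arrives, the engine correlates it back to the suspended instance and resumes it. All class and field names here are invented.

```python
# Hypothetical engine sketch: event states allow scaling to zero because
# nothing needs to run while waiting; a correlation token on the incoming
# event identifies which suspended instance to resume.

class Engine:
    def __init__(self):
        self.persisted = {}   # correlation token -> saved workflow state
        self.running = 0      # nothing runs while waiting: "scaled to zero"

    def suspend(self, token, state):
        # Persist the instance and release all compute.
        self.persisted[token] = state

    def on_event(self, event):
        # Correlate the event to a suspended instance and resume it,
        # or start a fresh instance if nothing matches.
        token = event["correlationid"]
        state = self.persisted.pop(token, {"status": "new"})
        self.running += 1
        state["status"] = "resumed"
        return state

engine = Engine()
engine.suspend("order-42", {"status": "waiting"})
# No compute consumed between suspend and the event's arrival.
resumed = engine.on_event({"correlationid": "order-42"})
```

The point is that the wait is expressed in the model (the event state), so the scaling behavior follows from the language rather than from platform-specific services.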
Correlation is just an inherent capability that you need for asynchronous communication. Yeah. We could mention that here, yeah? Sure. I was thinking the other day whether re-entrant is the right term, but it's probably not. Re-entrant is something where you can interrupt, right? It's kind of related, but not quite. Asynchronous continuation is a term that jBPM and some of its offsprings are using, but that's actually used for a different mechanic. So what was the word? Asynchronous? Continuation. Continuation, yeah. But I think the bottom line is really asynchronous communication: the language has elements that can wait for an asynchronous response or an asynchronous event to come back. That could be, of course, cloud event topics, but I don't know, do we allow something else? I mean, we have that callback state now, right? So that could be an argument here. Not sure if a normal function invocation qualifies as that; ideally it should, because from a business point of view, it happens frequently that business users don't really want to see too many things. It would be good, from my experience, to have a way to hide asynchronous communication behind a single element. Sure, that's a convenience, but sometimes you want to get down to it. From a business modeling perspective, that's a requirement. From a technical standpoint, you could say, who cares, it's maybe just convenience — syntactic sugar, if you want. But from a business point of view, that's a requirement. We have had that in every engine that we ever built. Okay, I think that finally clarifies what stateful versus stateless means, and what the goal "support both stateless and stateful serverless workflow implementations" defines. Sorry, I... Good. So a stateless workflow would then be a workflow that just limits itself to certain elements that are basically just synchronous function calls.
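The callback idea discussed above — one element hiding the asynchronous round trip — can be sketched like this. The helper names are invented for illustration; the actual callback state semantics are defined by the specification, not by this toy.

```python
# Hypothetical sketch of a "callback" element: fire an asynchronous
# request, then wait for the reply event with the matching correlation
# id before moving on. One element, two underlying interactions.

def callback_state(invoke, wait_for_event, correlation_id):
    invoke(correlation_id)                 # fire the async request
    return wait_for_event(correlation_id)  # resume on the reply event

# Toy wiring to demonstrate the flow:
sent = []
inbox = {"req-1": {"result": "approved"}}  # reply event already "arrived"

reply = callback_state(
    invoke=lambda cid: sent.append(cid),
    wait_for_event=lambda cid: inbox[cid],
    correlation_id="req-1",
)
```

From the modeler's point of view there is a single step; the request/response pair and the correlation are hidden inside it — convenience technically, a requirement for business modeling.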
You could potentially have something like an initial event that kicks off the workflow, but from then on, you don't wait — you just straight-through process everything in one go. The funny thing about that — and this is why many people end up needing stateful workflows — is: what if a function fails? What if you want things like retrying, or stopping the workflow right there and continuing later once you fix the problem? The saga pattern comes into play here, and then obviously things like compensation. What if you really have a failure that you need to handle? No, but that's still stateless to me, by the definition that we just had. If you define your workflow with retries upfront, there is no interaction with it; it just retries the action several times. Yeah, that's true, until the retries are finished. What if you need human intervention once all your retries have failed, and later on you want to continue the workflow based on some manual fix that you did in your infrastructure? That is a stateful workflow. Okay, okay, I got misled when you mentioned retries. Automatic retries, yes, that is something you can build stateless; but stateful error handling basically means keeping the business transaction in flight, not forcing the users to restart from the beginning of the workflow, so you can continue later. All right, that's cool. Thanks a lot. I think we've touched on something. Even though I'm not a good example myself: please, whenever you feel like it or have time, look at a section, write something. And yeah, shall we schedule a call for next week? Yes, please, let's do that. Let's keep up this discussion. Yeah, and just an update — I told Manja, we got a TOC sponsor, and it's not a small one. It's actually the biggest one there is: Microsoft. We need two more, but I don't see any problems with that.
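The retry distinction above can be made concrete. This is a sketch with invented names, not spec behavior: automatic retries alone stay within the stateless definition, but once retries are exhausted the instance must be kept in flight — suspended for human intervention — instead of failing outright, which is what makes the error handling stateful.

```python
# Sketch (invented names): automatic retries can run statelessly, but
# exhausting them suspends the instance for manual intervention rather
# than forcing a restart from the beginning of the workflow.

def run_with_retries(action, max_retries):
    for _attempt in range(max_retries + 1):
        try:
            return ("completed", action())
        except RuntimeError:
            continue
    # Retries exhausted: keep the business transaction in flight so a
    # human can fix the infrastructure and resume it later.
    return ("suspended_for_human", None)

calls = {"n": 0}
def flaky():
    # Stand-in for a function whose downstream dependency is broken.
    calls["n"] += 1
    raise RuntimeError("downstream unavailable")

status, _result = run_with_retries(flaky, max_retries=2)
```

A purely stateless engine could only report failure at this point; returning a suspended instance that survives until someone continues it requires persisted state.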
We're currently under a SIG review — just to kind of explain, because I was a complete noob about it too until recently. CNCF has these SIGs, special interest groups, right? Which every proposal has to fall under. The problem with us is that there is no serverless SIG. And when CloudEvents was actually proposed to Sandbox, there were no SIGs, so they just kind of reviewed it themselves. But right now there is some discussion about whether serverless should be a SIG on its own or not; the TOC doesn't want to do it because it's extra work, blah, blah, blah. I thought it already was a SIG. No, no — so we're going to fall under the runtime SIG at CNCF, and they're currently going to review our pull request. And who's the sponsor? The co-founder of Kubernetes actually, working at Microsoft, named Brendan Burns. Yup, he has raised his hand to sponsor this specification. What does a sponsor do? It means absolutely nothing — they just say they support it. And we need support from two more; we'll figure out how to get them, and then we will have to present this somehow to the TOC for some sort of review. I don't know when that's going to happen; as soon as I know, you will know as well. So it's moving forward, and we're making some noise. We're making actually a lot of noise, and let's keep making noise. That's the only thing we can do, and we'll see where it takes us. That's awesome. Speaking in terms of noise: could you give us a little bit more time to review pull requests? No, I'm just kidding — yes. With the things that were flying by today, you saw yourself, there was an error in there, and this pace is too fast. I admire your energy, but if you don't want this to be a one-man specification, then please let's have some time. Oh man, you're starting to sound like somebody else. Yes, of course. I'm sorry. Yes, I agree with that. And yeah, so far we did not let our new person say anything. Are you still with us? Yeah. Yeah, yeah, I'm still here. Yeah, thank you.
Yeah, thanks for monitoring our call. That was really interesting. So, I am not sure how far you are up to date with what we're actually doing here. This call was specifically about a primer. There is also a monthly serverless workflow working group meeting — that is every first Monday of the month. I think there is a standing invite; you would find it on the email archive of the CNCF serverless list. There is also a document with links to our workflow meeting minutes. The next event will be on April the 6th. And just now we planned a follow-up to discuss the primer document that we've been discussing for the entire time. It should, yeah, give an overview of the specification without going into the normative references and stuff like that. So next week, I'm a little bit more flexible — or less flexible, considering that the kids are at home too. Schools are closed; everything is going to be in lockdown soon, I suspect. So what about a time next week? Any preferences? I'm open myself. I'm in the same boat — I mean, I'm German actually, so I am facing the same stuff you are facing. So I'm flexible as well. Well, we are going to have a majority here then. Let me just wake up; let's do it after nine o'clock here. So five, I guess. But I wanted to say there's still an open PR to, finally, add Manuel as an owner. I think that's very important. Another question for Falko, maybe offline — I don't see Mauricio, maybe he's on vacation, or maybe he is looking into other adventures now. So, do you want to be replaced as an owner from Camunda? I don't know — you can just let me know. We're flexible, but we need more owners. We need more contributors. We need more people to just look and tell us how crazy we are. And that's it. I mean, without that — I don't know where Cathy is; I don't control that.
But right now, as you can see, out of the three owners that we have, I'm the only one that's actually active. And that's a problem. So let's fix it, you know? Yeah, I will take this offline with Mauricio and see how we can proceed here. I guess the goal is to have one owner per company, right? Yeah, I mean, that would be nice, of course. But the exact problem right now is that we have some rules and regulations — the stupid governance document — and we have nobody to enforce it. So if you guys are looking into things like, you know, longer pull request review windows, review-only meetings, the stuff that other specifications might be doing — we gotta get there. And for that, you know, we gotta change our structure and position ourselves in a way that makes us look like a good team, you know? And the best way is to have the active people actually be owners of the specification. Okay, regarding the time, I looked it up. So, I tell you what — you're in Atlanta, right? It looks like we could actually have a meeting at 3 p.m. European time. In San Francisco — just keeping it open for Casey — that would be 7 a.m. I don't know if they start working that early in San Francisco, but at least we could schedule our next call at that time. So that would be next Monday, the 23rd; I'll send out invites — 3 p.m. Germany, 10 a.m. Atlanta time. And the week after we anyway have the regular one. Oh no, no, there's one more Monday in March — sorry about that. Okay, and one last thing, everybody. We can meet next week, but I wanted to ask one more question: I missed the beginning of the serverless call — have you been there? Or Falko, have you been there? The serverless workgroup meeting. No, I couldn't make it this week. Or last week, rather. Yeah. Ah, I didn't join it either.
Lots of European folks missed it, because Pacific time switched to Pacific daylight time — and I miss it every year. Yeah, Casey was there, and there was an agenda item that she would give a readout from the workflow subgroup, which is a standing item, by the way. So I think if we are to give a readout to the serverless working group, maybe we should join. And yeah, I missed it, so I was just wondering what Casey had to say about what we are doing. I think I had an item on my to-do list to review the call recording and see what the response from the rest of the working group was, right? Especially since we submitted a TOC pull request that should probably gain some interest over there. Yeah, yeah, good point. There are recordings, so maybe we can find one. Okay, thanks everybody. That was really cool. At least it clarified a few questions in my mind, and let's continue next week. Sounds like a plan. All right, bye everybody. Bye. Bye.