OK, I think we can start. So Doug, you mentioned that someone from IBM is going to... I mean, all of you? Yes, all of you, yes. OK, fine. So let's go through the agenda. Today we're going to do the presentations first, and then we'll go through the review comments. I see that our hero wrote some review comments which we can discuss. OK, so let's start with the presentation. I'm going to talk first about the function graph model. Let me share my screen. Here are the meeting minutes. I'm going to write the meeting minutes, but if someone else can help with that, that would be good. Let me start the slide show. Can you see my slide? Yep. OK. So in the function graph model, a function workflow is naturally modeled as a state machine. There are three elements in a function graph. The first is a list of event triggers — CloudEvents, for example a storage event, an HTTP event, a media streaming event, a database access event, an email event, a code repo update event. These are just examples. Besides the event triggers, there is a list of states. We have five states: the event state, operation state, switch state, delay state, and end state. I'll go through them in more detail in later slides. And then there are functions, or you can call them actions, associated with each state. For example, in the event state, when an event comes, it triggers some functions. There are primitives that say whether these functions execute in parallel, in sequence, or with branching. There can be a retry mechanism if a function fails, and information passing between the functions, from the previous function to the next.
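The three elements just described — event triggers, a list of states, and the actions attached to each state — can be sketched as a plain object. This is an illustrative sketch only; every field name here (triggers, states, actionMode, and so on) is hypothetical, not the normative spec wording, and the trigger/function names are made up.

```javascript
// Illustrative sketch of the three elements of a function graph.
// All field names and values are hypothetical, not the normative spec.
const workflow = {
  // 1. list of event triggers (CloudEvents sources)
  triggers: [
    { name: "motion-event", source: "storage", eventId: "evt-1", filter: "$.data" }
  ],
  // 2. list of states (event, operation, switch, delay, end)
  states: [
    {
      name: "wait-for-motion",
      type: "EVENT",
      start: true,                  // this is the start state
      events: "motion-event",       // Boolean expression over trigger names
      actionMode: "SEQUENTIAL",     // run the actions in sequence
      // 3. functions/actions associated with the state, with retry info
      actions: [
        { function: "analyzeFrame", timeout: 30, retry: { interval: 5, max: 3 } }
      ],
      nextState: "done"
    },
    { name: "done", type: "END", status: "SUCCESS" }
  ]
};
```

The point of the sketch is only to show how the three element lists relate: a state references a trigger by name, and carries its own array of actions.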
So these are the three key elements of the workflow, and on the right side is just an example diagram of a workflow. First, let's look at the list of states. As mentioned, we have five states. The event state is used to wait for events from event sources — CloudEvents sources, as mentioned before. When the event comes, it invokes one or more functions to run either in sequence or in parallel. And it can be multiple events, not just one — an AND or OR relationship over the events. So it could say: when event one comes OR event two comes, these functions are executed. Or it could be an AND, so a specific function is triggered only when both events have come. That's the event state. Then there's the operation state. The difference between the event state and the operation state is that the operation state does not need to be triggered by any event. When the workflow reaches that step, one or more functions just run, either in sequence or in parallel, without waiting for any event. The other parts are the same as the event state; the only difference is that the functions are not triggered by an event. Then there's the switch state. This state permits transitions to other states based on conditions. For example, different function execution results in the previous state can trigger branching to different next states, or it could be some other condition. And then there's the delay state. This simply causes the workflow execution to delay for a specific duration, or until a specific time or date. Very simple, the delay state.
And the end state terminates the workflow with either a failure or a success status. When the workflow ends, it goes to the end state, either with successful execution of the workflow or with failed execution. So those are the states. Then there are the event triggers. An event trigger has three parameters. One is the event source — it can be any event source, the same as a CloudEvents source. Then there's an event ID. And then there's a filter applied to the cloud event to extract the required information, which will be passed to the next states, that is, passed to the functions. The filter can apply to the payload or to the other metadata information on the cloud event. After the filter, the information is passed to the function, or on to the next state. The trigger name is the name of the event, and this name is used by the event expression in the event state. So in the event state, when we say "event one" — or, say, "motion event" — that's the event name, and that name is defined here as the trigger name. The other details are defined in the trigger itself, so the event state does not deal with the details of the event; it only references the event name. The filter I already mentioned, and the event ID is the CloudEvents ID. Kathy, just a quick question. The filter — does it operate just on the cloud event properties, or can it also filter information within the data? It could be both. Okay. So now I'm going to go through each state definition. For the event state, there's the state name, and there's a type.
For the event state, the type is "event". For an operation state the type is "operation" — let me go to the next one, you see this parameter — and for the switch state the type is "switch". So this type just says what kind of state it is; we have five different types of state. The "start" field says whether this is a start state or an intermediate state; if it's a start state, it's set to true. The workflow can start with an event state or with an operation state — it can start with any state except the end state. For the event state, the key part is defining which events trigger which functions at that step. So there's an event expression, which is a Boolean expression over the events. Then there's the action mode, which defines how the functions run — synchronous or asynchronous. The actions here mean the functions, and it can be an array of functions; if there is more than one function, whether they execute in sequence or in parallel is defined in the action mode. And of course there's the next state: after this event state's function execution completes, which state to transition to. That's the event state. Next, the operation state. The only difference from the event state is that it doesn't need an event trigger, so there's no definition of events. Here the type is "operation". Again, there's "start" — it could be the start state.
If it's true, this is a start state; otherwise it's not. That's the same as before. There's the action mode — synchronous or asynchronous — and then the actions, a list of functions that will run in this state, and then the next state. And remember, in the previous two states — the event state and the operation state — we have this action definition. This is the function definition. There's the action mode, synchronous or asynchronous, which defines how two or more functions execute, and then the action definition defines the detailed actions. Here is the action definition: we have the function name that's specified. The action definition can be an array of actions, and each action specifies the function name, the timeout value, and the retry mechanism: you match on the result value — when the function returns something, you match that result — and then there's the retry interval, the maximum number of retries, and the next state to go to for each retry. So that's about it. Then the switch state — it's like a switch. Again, the type is "switch", and it could be the start state too. The choices specify what to do: a choice can depend on different event data or function result information, so we need to apply a filter. The path is basically a payload path — which field will be used to match against the value. There's a value here, because the event payload can contain a lot of information, or there can be multiple pieces of event metadata.
So the path specifies which field we are going to use for the matching, and the value specifies what value to match. The choices are a list, an array: match different values and you switch to different next states. There's also a comparison operator — for example, this value is equal to that value, or greater than, or less than. And if nothing matches, there's a default state to transition to. So as the right side shows, there's a match, then you go to the next state, and in the next state different functions execute. That's about it — it's just a way to model the workflow. Different types of workflows can be modeled using these different states and events. One point which I haven't had time to add yet is correlation. For example, here there are multiple events; when an event comes, how do we correlate this event-one instance and this event-two instance so they go to the same workflow instance? We need a correlation, and that can be defined at the beginning of the workflow. I didn't have time to put that in the slides yet, but I can add it later. That's about it. Any questions — anything which you think would not be able to support the workflows we identified in the use cases, or the workflow functionality we identified in the spec? This is Rachel. It seems like a pretty well-thought-out statement of what we need to be able to support. And this seems very generalizable — it's a specific use case, but everyone will need to do these kinds of things.
So I appreciate you walking us through this. Okay, great, thank you for the comment — I'm just writing it down. Any other comments? Okay. I'm going to put the slides somewhere so you can go through them again, and I'm going to put this inside the document too, so feel free to post comments there. We have gone through this multiple times; I think it's generic enough to support this functionality. Of course, there are still some details which are hard to really dive into in this short time period, but the high-level idea is here. Okay, if there are no questions, I'm going to hand it over to Olivia. Are you going to share something, Olivia? Yeah, a few slides as well. Okay, let me stop sharing. I can share, yes. Then I can make that bigger. Can you see the full-screen slide at this point? Yes. And can you hear me okay? Yeah. Okay. Thanks. So hi, I'm Olivia, I'm with IBM Research and I'm an OpenWhisk contributor. I'd like to tell you a little bit about what we're doing with workflows in OpenWhisk — you'll notice we call them compositions, and you'll see there are some differences that will hopefully be clear after this talk. Because our approach is a bit different, before telling you what we do, I'd like to tell you a little bit about why we do things the way we do them. So here's a list of goals, things we're trying to accomplish. Of course, we're trying to build workflows, compositions of functions. And there are several important things in our view.
When we do that — starting from the non-controversial goals and moving to the more controversial ones. First, we want things to be polycloud. We want functions to be expressible in many different programming languages, and compositions as well. We care a lot about cost, about billing. What that means is we want an architecture, a system, and a programming model where the cost of a composition is pretty much the cost of the things you compose. Of course, a composition makes some choices and decisions, but these are typically very cheap computations, so that should be reflected in the fact that we can run a composition at essentially the cost of its components. One thing that's really important to us is what I call substitution: functions and compositions should be totally interchangeable. For instance, if I have a client application that calls into a function, and I later decide that this function is just too simple and I want to replace it with a composition, I can do that in my library — the client doesn't have to change. From the point of view of the client, it's the exact same API to call a function or a composition. In particular, in OpenWhisk we can call functions in a synchronous, blocking manner — call the function, wait for the result, and get the result — so that should also work with compositions. That means, for instance, that doing only event-driven, asynchronous composition doesn't quite cut it for us. Related to that: as a function runs in our system, it doesn't have to know whether it's called directly by a client or by an enclosing composition. Nothing changes from the point of view of the function.
Maybe the function might want to know and adapt accordingly, but it doesn't have to; in principle, it just works. Now, the big thing about our approach is that we very much believe in structured programming. Describing workflows or compositions as graphs — vertices and edges — is great for discussing them, great for displaying them, great for documenting them, but not necessarily great for actually programming complex behaviors. In the same way, we think YAML may be a great configuration language, but it's not a programming language. So what we want is something much closer to the traditional programming experience of composing functions outside of the cloud. That means we want things like sequences, conditionals, loops. We want nested compositions — compositions that go into compositions that go into compositions. We want structured error handling. I'm putting parallelism and concurrency here as well because it's something we're working on — we have not released it in open source, so I'm not going to talk about it much, but it's definitely part of the goals. While we're very opinionated about what we think is the best way to express compositions, we think the runtime, on the other end, should not be opinionated at all. It should be completely open-minded; it should contain as little as possible of the choices we made in the programming model. It should be as flexible as possible: it should support running compositions in a very, very program-model-agnostic manner. I'll show you that in a few slides. Before I go on, I'd also like to add two things here.
They're important — again, I'm not going to talk much about them because we're still working on them, but they're in the back of our minds and they dictate some of the decisions we've already made. The first is that we don't think the end goal is to compose functions. We believe the end goal is to compose cloud services — databases, AI services, et cetera. Of course, functions have a place in there because they're great for bridging gaps between services, but at the end of the day, the workflow, the composition, is a few functions and lots of other things, and that's what we eventually want to support. The second is that we believe events are really, really important, whether it's correlation or streaming — and I think that's been discussed quite a bit on this call and previous calls. The tricky question is: as we try to express more things, we also make things more complex, and it's hard to know exactly where to stop. That's something we're working on. Okay. So, as I said before, we want to start with a pretty high-level language and go down to a pretty simple runtime infrastructure — you can think of it as a RISC processor. How do we do that? The way we traditionally do it: we start with high-level code, run it through something that looks like a compiler, compile it to an intermediate representation, and that's what we run. That means there are two components to what we've done so far. One is a Node.js library which you use to program compositions — we also have a Python library, but it's not open source yet — and I'll show you the Node.js library in the next slide. The second is an extension to the OpenWhisk runtime that essentially does two things. The first is that it knows how to build dynamic chains of functions — we had sequence actions in OpenWhisk before, but you had to define the sequence beforehand.
So here's the key difference: as you run things, you can decide what to run next, and what to run next after that, and when to stop. The second thing is that we believe state is important, and the runtime has to do some state management, at least a little bit of it. So here's what Composer looks like. On the left, you have a very simple example of a composition described in the Composer programming language. In this example, we're building a translator from an unknown language to English, and we do that by composing two functions. The first is language ID: it tries to guess the language of a fragment of text. Then there's a translator function that translates from a given input language to a given output language. As you can see, this composition is already mixing things. It has two things in particular. One is a sequence — not just the sequence of the two functions, but also an inline function in between that takes the output of the first function and puts it in shape to be the input of the next function. And it's also wrapped in a big try-catch block, because language ID might fail to detect the language, and in that case we want to produce a human-readable error message. What you see on the right is something our tooling can build — a graphical representation of the composition that can be used for debugging, monitoring, et cetera — but the ground truth is the code on the left. This is what you write, what you version-control, what you edit in your editor. We have a bunch of constructs like that. There's a longer list, but this is the basic stuff: you can have functions, sequences, conditionals, loops, try-catch kinds of things. You can declare variables, and eventually parallelism and those kinds of things.
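The translator example just described — a sequence of two functions with an inline reshaping step, wrapped in a try-catch — can be sketched in plain JavaScript. This is not the actual Composer API; it's a minimal sketch of the described semantics, and `languageId` and `translator` are hypothetical stand-ins for the two cloud functions in the example.

```javascript
// Hypothetical stand-in: guesses the language of a text fragment.
const languageId = text => {
  if (typeof text !== "string" || text.length === 0) {
    throw new Error("could not detect language");
  }
  return { language: "fr", payload: text }; // pretend detection for the demo
};

// Hypothetical stand-in: translates from a given language to English.
const translator = ({ language, payload }) =>
  ({ translation: `[${language}->en] ${payload}` });

// sequence: the output of each step becomes the input of the next
const sequence = (...steps) => input =>
  steps.reduce((value, step) => step(value), input);

// try/catch wrapper: on failure, produce a human-readable error message
const tryCatch = (body, handler) => input => {
  try { return body(input); } catch (err) { return handler(err); }
};

const translate = tryCatch(
  sequence(
    languageId,
    ({ language, payload }) => ({ language, payload }), // inline reshaping step
    translator
  ),
  err => ({ error: `Sorry, we could not translate your text: ${err.message}` })
);
```

The design point this mirrors is that the composition is ordinary code — the combinators are just higher-order functions — so it can be version-controlled and edited like any other program.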
So that's pretty much what I wanted to tell you about the programming language. You've seen the example. And again, if you look at this slide, it's pretty clear that you could do the same thing in any programming language — the inline function syntax will change from one language to another, but essentially everything else can be the same. One thing I'd like to say: the way this works is as an extension of the notion of dataflow in sequences. In principle, when one function executes after another, the output of the previous function becomes the input of the next function. That's the basic flow. But on top of that, there's a notion of state in the composition, which is composed of a few things. The first is where you are in the composition — think of it as a program counter, or, if you think in terms of finite state machines, the state you're at in the composition. But there's more than that, because we're doing structured programming: we have things like registered exception handlers and variable declarations, and these need to be carried through by our runtime implementation. Additionally, the user can add their own notion of state — for instance, a callback to invoke at the very end of the execution of the composition. We can support those kinds of things. What's really key about this state is that what really needs to be managed is the state ID; the state itself can be either inside the runtime or outside of it, it doesn't matter. What we believe is the right thing for serverless is to keep state inside the runtime if it's small enough — if we can manage it cheaply. If you really start having a lot of state, then the user has to help: you have to provide a storage solution for your state, and then we only manage the keys to your state. Okay, so how is this implemented?
It's implemented using a mechanism called conductor actions, which is the extension we made to the OpenWhisk runtime. The idea is that a function — again, in OpenWhisk we call functions actions, but same thing — instead of returning a final result, can return a triplet consisting of a pointer to an action, parameters, and state. That means that rather than this being the end of the execution, the runtime system should now take the specified action, invoke it on the specified parameters, and make that the next step of your composition. When this next step of the composition is done, we go back to the conductor action — which you can think of as the scheduler in all this — and re-invoke it with the combination of the result obtained from executing the component action and the state that needed to be preserved while the action was running. At the bottom you have a hand-written conductor action — we're not encouraging you to do this, but you can write one by hand. It's a kind of finite state machine. What it does is build a sequence with two steps: in the first, we call a function called triple; in the second, we call a function called increment. At runtime, that means we start by running this conductor action, this scheduler, and it returns that we should run triple first. Triple runs, the conductor is re-invoked, and now it's at state one, so it decides that the next step is increment. Increment runs, the conductor returns again, and there's no more action to run — so that's the end of the execution, the end of the pipeline. Putting all of this together: we have a library that generates a JSON representation of the composition. It's not a finite state machine.
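The conductor protocol just described — a conductor returns either a final result or a triplet of action pointer, parameters, and state, and the runtime loops until there's nothing left to run — can be sketched as follows. This is an illustration of the mechanism, not the actual OpenWhisk implementation; the `triple`/`increment` names come from the example, and the result/triplet shapes are assumptions.

```javascript
// Two component actions (names taken from the example above).
const actions = {
  triple: ({ n }) => ({ n: n * 3 }),
  increment: ({ n }) => ({ n: n + 1 })
};

// Hand-written conductor: a tiny two-step finite state machine.
// Returns either { action, params, state } (run this next) or
// { result } (the execution is done).
const conductor = (params, state = { step: 0 }) => {
  switch (state.step) {
    case 0: return { action: "triple", params, state: { step: 1 } };
    case 1: return { action: "increment", params, state: { step: 2 } };
    default: return { result: params }; // no more actions: end of pipeline
  }
};

// Sketch of the runtime's scheduling loop: invoke the component action,
// then re-invoke the conductor with the result and the preserved state.
const run = (conduct, input) => {
  let step = conduct(input);
  while (!("result" in step)) {
    const output = actions[step.action](step.params);
    step = conduct(output, step.state);
  }
  return step.result;
};
```

Here `run(conductor, { n: 5 })` drives triple (15) and then increment (16), matching the sequence the hand-written conductor encodes.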
It's something — I'll show you an example in the next slide — much closer to an abstract syntax tree, very close to the syntax of the composition in the first place. Then we take this JSON composition and combine it with a piece of code that is essentially a scheduler, which becomes the body of the action. It's the same for all of our compositions; it doesn't change — only the JSON changes. This produces a conductor action, and that's what the runtime system executes. The way it's executed, if you want the details, is that it first translates this abstract syntax tree into something like a finite state machine and then interprets that finite state machine. But one big difference from the things we've discussed so far is that this finite state machine is an internal representation of the runtime system — it can change, and it's definitely not the representation the user typically programs with to describe a composition. So this is the JSON. Maybe at this level it doesn't look that different, but this is the JSON for the example code I showed before. The root node has type "try"; it has a body and a handler. The body itself is a sequence with a few components, an array of things — some of them are named actions, some are inline functions, et cetera. So that's about it. Just going back to the goals, a couple of comments. I missed the previous two calls and I hope I'll be able to join more of them, but I listened to the recordings, and I understand from Kathy that the goal of the working group is primarily to define the user-facing representation — how is the user going to program the workflow? And I feel that may be a bit premature.
In my view, what the CloudEvents specification is doing is much more bottom-up: it's trying to tackle and decide the kind of infrastructure we can have in common in our runtime systems so that they can collaborate and work together much more effectively. The reason I think this is a bit premature is that there are a lot of options, a lot of different ways to do these workflows, and I'm not sure we know yet which are the good ones, the better ones. On the other hand, from the runtime perspective, we're starting to understand the kinds of additions we need to make to run any kind of workflow, any kind of programming model — whether you like finite state machines, structured programming, or declarative programming. I think we should be able to agree on some of the key components of the runtime system we need to support that. That's it for me. Any questions? So I'm curious — if you think that defining the user-facing model, for lack of a better phrase, is a little premature, what would be your recommendation for the first baby step toward defining the workflow? So, having said that, I think the working group is doing the right thing, which is to look at use cases, to understand exactly what primitive capabilities we need. Again, in this presentation I didn't really talk about parallelism and concurrency, or very much about events. These are key things to understand, and the way to understand them is to go through the use cases. Using a finite state machine for that makes perfect sense. But then, rather than shooting for a holistic definition — what are all the states we can have in the finite state machine — maybe it should be more: what are the different kinds of building blocks? Like, we need fork, maybe; we need join.
I really like to think of this as defining the RISC processor for the cloud, where we define the minimal instruction set. I feel like the approach of defining this finite state machine is trying to do two things at the same time: understand what the backend infrastructure has to be — the primitive, RISC-like infrastructure — while also trying to be the user-facing programming model. In my experience, it makes a lot of sense not to confuse the user-facing programming model with the system representation, the system abstraction, because — yes, at the end of the day each is supposed to be able to execute everything — they have very different goals. The goal of the user-facing one is to be user-friendly: easy to understand, easy to refactor, lots of different things. The goal of the system-level one is, in particular, to be minimal. Yeah, that helps. Thank you. So I agree it's important to separate the user's specification of the workflow from how the backend is going to implement and support that workflow — I agree with your point on that. That's why I think we shouldn't dive into how the serverless platform should schedule containers or do the runtime work to support the workflow; it's important to provide a way for the user to design the application workflow. So let's go back to this workflow spec. We discussed these use cases in the last meeting, and when you look at these use cases from the user's point of view, it is kind of like a state-machine-based workflow.
So the user doesn't need to know the details of the runtime, or how the resources will be scheduled to support the functions. The user just needs to pass their desired application specification to the serverless platform: at this step, this event triggers my function — or there's no event trigger and the function just starts to run — and then whether it branches out, and so on. So from a user's perspective, I think a state-machine-like representation of the application is a good fit, rather than something very complicated. I understand, and I think it makes sense, but I think it's also a very opinionated view of how to program. In my experience, for instance, finite state machines, and things very directly inspired by finite state machines, are very familiar to hardware programmers and hardware designers — they use them a lot, they like them a lot. I think that's much less the case for traditional application programmers. Yeah. Okay, so that's a different opinion. But the model I presented — I don't think it's a real state machine. Strictly speaking, it's not really a finite state machine. Yeah — when I say finite state machine, I also mean something that vaguely resembles finite state machines.
For instance, if you look at finite state machines, and in particular at the people who build graphical representations or graphical design languages for them, they often fail to address the problem of hierarchical composition: how you build larger automata from smaller ones. What is the semantics for composing them? In particular, what is the error semantics when you start composing these things? What does it mean to abort one of them? Finite state machines make a lot of sense as long as they fit on your screen; once the complexity goes beyond that, they become much, much harder to manipulate.

Maybe you can comment in detail on which part or which functionality it cannot support. I think that would be more useful.

Sure. What I'm trying to say is: maybe we can get to a finite state machine specification that covers all the bases we want to cover. But in that case, I think you could have a much, much simpler specification, reduced to fewer primitives. Let me pick on AWS rather than Huawei: the Amazon States Language defines lots and lots of different ways of doing retries and lots of ways to handle errors. That makes sense, because from a user's perspective, one of the values of Amazon Step Functions is having lots of policies by which you can decide what to do when something goes wrong; that may even have been the driver behind Step Functions in the first place. So that's really important for the user. But when you think about it, many of these mechanisms have a lot in common.
And the primitives you need to implement these mechanisms are things like try/catch, looping, and counting, plus delays, as you pointed out in your presentation. So if you can support counting, looping, catching errors, and delays, then you can express all of these mechanisms: exponential backoff with retries, retry for one minute, and so on. All of them can be expressed as derived constructs on top of those primitives. Maybe an exhaustive list is what we end up with, but I would be very reluctant to try to enumerate all the possible ways to recover from an error, because I think that's wrong.

Thanks very much for the demo and example. The difference between the language representation, where you have Python and JavaScript right now, and the underlying representation of how to process these things reminds me a lot of how, internally at Google, Borg has a very complex protocol buffer that expresses how to run things, but there are higher-level tools that compile down to it. Everyone uses the higher-level tools, and no one actually goes and wrestles with the underlying protocol buffers. I was wondering: is that underlying representation in Composer well-defined or standardized somewhere?

Yeah. So again, there are two aspects to it. The first is the JSON representation, the one that essentially gets deployed to OpenWhisk or to the cloud. That is well-defined. The documentation is a work in progress, but the intent is for it to be specified, well-defined, and fixed, or at least monotone: we may add more capabilities, but backward-compatibly.
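The claim that retry policies can be derived from a handful of primitives can be sketched in a few lines of Python. This is only an illustration of the argument, not the spec's actual mechanism: with looping, counting, a delay, and try/except, policies like "retry n times with exponential backoff" fall out as parameterizations of one derived construct.

```python
import time

def retry(fn, attempts=3, base_delay=0.0, backoff=1.0):
    """Derived retry construct built from four primitives:
    a loop, a counter, a delay, and try/except (catch)."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return fn()                    # normal result: done
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of attempts: propagate error
            if delay:
                time.sleep(delay)          # delay primitive
            delay *= backoff               # exponential growth when backoff > 1

# A function that fails twice, then succeeds, to exercise the policy:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

# "retry up to 5 times with exponential backoff" is just a parameterization:
result = retry(flaky, attempts=5, base_delay=0.01, backoff=2.0)
print(result)  # -> ok (succeeds on the third call)
```

Fixed-interval retry, retry-once, and "retry for roughly one minute" are all reachable by choosing `attempts`, `base_delay`, and `backoff`, which is the sense in which an exhaustive list of policies in the spec would be redundant.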
Now, at runtime, we build a finite state machine out of that representation, and that, on the other hand, is not intended to be a stable representation. It is something we continue to optimize, so we don't want to commit to a specific representation, because we change it as we make it more efficient.

So the contract is that JSON.

The contract is that JSON, yes. And you can write any tool; if you want to make a community contribution that generates this JSON from something else, you can.

So I'd like to pull it back. If there are no other questions, I'd like to go to the comments on the spec. Can you see the document?

No, you're not sharing.

Okay, let me see. Oh, share screen, sorry. Can you see it now?

Yep, now we can.

Okay, good. So let's go through the comments. I think we addressed some comments in the last meeting, so let me first go through the meeting minutes. Naohiro, I think you have a comment saying that, in order to express a sequence of function executions for each parallel branch, a two-dimensional actions field would be more powerful than a single-dimensional array. Could you clarify that a little, Naohiro?

Yes. I made a similar comment before, and this is just one of them. After the last meeting, Ruiz-san commented to me that a two-dimensional array is not enough; there is a case for which it isn't, and I agree with that.

Can you expand the comment on the right-hand side?

The right-hand side? This one?
Your comment, yes. Maybe above that.

Here? Oh, okay. I see.

Basically, I didn't fully understand how to use the array of actions. First of all, I felt there was an inconsistency between sequential and parallel: in the parallel case, there seems to be no way to express a sequence within a branch of the parallel. That is why I commented last time, but Ruiz-san said there is another case: for instance, a parallel branch can have a parallel action inside it. After that exchange, I concluded that we need a way to create nested states. That is the last part of my comment: a two-dimensional array is not enough; we should allow nested states.

Okay, so your suggestion is that we need nested states to express complicated function combinations, something like that?

Yes.

Okay, we can think about this.

This is Louis. You saw my comment there. I think we have various options for representing a combination of parallel and sequential actions, so we can certainly explore those.

Okay. Any other comments on this? So in your example, we have a parallel state with A, B, C, D, meaning four functions executed in parallel, and then after those four functions complete, another two functions execute in parallel. Is that what you mean?

Yes, though this example was actually provided by Louis, so it's better to ask him to explain it.

I think this is also one of the places where it's easy to define a few constructs and later realize they are not flexible enough. For the parallel composition, for instance, we definitely need things like timeouts.
We definitely see cases in our use cases where, in a parallel composition, some branches are musts: you're trying to do different things in parallel, some of them have to succeed before you can continue, and others you're willing to sacrifice if they don't finish in time. That's one example. We also have maps, where the parallel tasks are the same but the data is different, as opposed to the dual, where the data is the same but the tasks are different. So there are lots of ways to build parallel constructs, and I think it's important to try to extract what the primitive operations are: the forking semantics and the joining semantics, for instance, might be one way to look at it. We also see in some of our most advanced use cases, and maybe I'll add this to the document, that sometimes a flow of execution forks into two separate tasks, and one of those tasks further forks into two, but the order in which you want to join the tasks is not the order in which you forked them. If you only have constructs with matching beginnings and ends of a parallel composition, you cannot express that. So that's one place where I think there's a gap.

Maybe you can post what you just said in the document, so we can see whether the existing spec covers it or we need to expand it. For example, here we have different match criteria and retries for different functions: based on a different function result, this array lets each result have its own separate, independent retry mechanism.

Yeah, I've seen that. I don't think it's exactly the same thing, but let me make sure there's been no change to what I've read before I claim that.
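The "must vs. sacrificeable branch" semantics described in this exchange can be pictured with ordinary Python threads. This is purely illustrative; the spec defines no such API, and the function and branch names are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutTimeout

def parallel(branches, required, timeout):
    """Run branches in parallel. Required branches block until they finish;
    optional branches are given `timeout` seconds and otherwise recorded as
    sacrificed (None). Note: a running thread cannot actually be killed here,
    so the pool still waits for it on exit; the point is the join semantics."""
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in branches.items()}
        for name, fut in futures.items():
            if name in required:
                results[name] = fut.result()            # must succeed
            else:
                try:
                    results[name] = fut.result(timeout=timeout)
                except FutTimeout:
                    results[name] = None                # sacrificed branch
    return results

branches = {
    "charge-card": lambda: "charged",                      # must succeed
    "send-promo-email": lambda: (time.sleep(0.5), "sent")[1],  # optional, slow
}
out = parallel(branches, required={"charge-card"}, timeout=0.1)
print(out)  # -> {'charge-card': 'charged', 'send-promo-email': None}
```

The map variant mentioned in the discussion (same task, different data) would instead submit one function over a list of inputs; the out-of-order join problem is exactly what this matched fork/join structure cannot express.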
Okay, since we have limited time, it would be good if you could post comments on which cases you think this can or cannot handle. Or, if people would like to discuss it here, I'm okay with that too. Do you have a specific example in mind that you think this cannot handle?

Yes, I'll add it to the document offline.

Okay, that would be good. Does anyone else have comments on this?

Can I move to another topic? I'd like to hear other people's opinions on the section at the bottom, 1.1.6.

Oh, I see: input and output processing.

Yes. It seems to me the current draft is missing how to compose the previous function's output into the current function's input. I think we need to specify that, like the Amazon States Language does. Not that the Amazon States Language is necessarily a good model; it's just an example.

I agree, this is an important question. The way we've done it is to permit writing small pieces of script that run inline, without having to be declared as functions, between the function invocations. An interesting question is which language to write these in. Amazon has a specific way to do it with JSONPath. In general, we could have something like jq for specifying these things. What we do for now is use JavaScript or Python.
But there are drawbacks to making these input and output adapters too flexible: if they are very flexible, running them safely requires a lot of sandboxing and runtime resources, which can be avoided with a more specialized language.

So here, do you mean how the information is passed from one function to the next function? Okay, let me try to see whether I can clarify. That's in here; let me find the filter. Where is that action? Not this one. State... let me see.

Kathy? While you're looking for it, I just have a question. This does seem like a pretty notable problem; I hadn't thought about it before, sorry for not reading all the way to the bottom of the doc. I'm wondering: is this something the spec needs to handle, or something every implementation can handle? What do people think?

You mean the information passing from one function to the next?

And how we make sure that we get the type we expect from the previous function.

Okay, I think this is an important point: information passing from one function to another function, and also from the event, which could include the payload and metadata, to the function. We need to address both. I thought I put it here, but maybe I didn't; I'll put it in, and people can see later whether it works. Is that the problem you have in mind?

Yeah, and in addition to event-to-function and function-to-function, we also need a way to reconcile state to state, I think.

Okay, so let me write it down: information passing from event to function and between functions. Need to address. Here.
Information passing from event to function and between functions, right?

Yeah, between functions and between states.

Between states, very good. I can post that; maybe I forgot to post how we do this information passing from event to function, between functions, and between states.

Okay. And specifically, what do we do if it's not the thing that we expect to get?

Olivier, are you still here? How does OpenWhisk handle that, if you are composing things and you get something you don't expect?

So we follow the OpenWhisk convention. In OpenWhisk, every function produces a JSON dictionary. Either this dictionary has a field named error or it doesn't. If it doesn't, that's considered a normal result; if it does, that's considered an abnormal result. So in the example I was showing earlier, if you have a normal result, you can execute the next step in the sequence, with the output of the previous function being the input of the next. If the output object contains a field called error, then we jump to the error handler, which is the thing we introduce with the try/catch constructs.

And the error handler is written by the developers themselves? So they could do any state changes they need?

Well, we have a growing standard library where you can have predefined behaviors; not so much for error handling, but for retries, for instance. In the standard library there's a retry-n-times construct that says: if I get an error, I'm going to retry the call one more time, with the exact same parameters as the first time. And there are more things of that kind.

Okay, let me address this quickly.
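The OpenWhisk convention just described, where a result dictionary is abnormal if and only if it contains an `error` field, fits in a few lines of Python. This is a sketch of the convention, not OpenWhisk or Composer code:

```python
def step(result, on_success, on_error):
    """Dispatch on the OpenWhisk-style result convention: a dict containing
    an 'error' field is abnormal and goes to the error handler (the try/catch
    branch); anything else continues the sequence as the next input."""
    if isinstance(result, dict) and "error" in result:
        return on_error(result)          # jump to the error handler
    return on_success(result)            # output becomes the next input

ok  = step({"value": 41},     lambda r: r["value"] + 1, lambda r: "handled")
bad = step({"error": "boom"}, lambda r: r["value"] + 1, lambda r: "handled")
print(ok, bad)  # -> 42 handled
```

A standard-library retry-once behavior, as mentioned above, would simply be an `on_error` handler that re-invokes the failed call with its original parameters.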
So for errors, if the function has an error, we have it here: a retry mechanism that says, if the function result matches a given value, then you do the retry.

For information passing, the user just says: I want information passed from this function to the next function, or from an event to a function, or between states. There is a filter mechanism that selects which part of the information gets passed, whether it's a function result, information from the previous state, or information from the event. We first apply a filter, because you might not want to pass all the information; you might want to pass only a specific part, a specific field. If there is no filter, all the information is passed to the next state or to the next function. After applying the filter, the information is passed to the next function or the next state. That's how the user specifies it. Maybe I forgot to put that information here, but it can definitely be addressed. I'll post it to the document next, and then you can take a look and see whether it makes sense.

Go ahead, Rachel.

Oh, just that that sounds great. Happy to chime in on that.

Sure, thanks. Any other comments? We're running out of time; we have one more minute.

I'd like to say one thing. I made several comments in the doc, and some people commented back to me.
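The filter mechanism described above, pass everything when no filter is given, otherwise pass only the selected part of the payload, might look roughly like this (the field names and the list-of-keys filter shape are hypothetical, chosen only to illustrate the idea):

```python
def apply_filter(payload, keys=None):
    """Filter applied before passing information to the next function/state.
    No filter: the whole payload flows through. With a filter: only the
    selected fields are passed on."""
    if keys is None:
        return payload                    # no filter: pass everything
    return {k: payload[k] for k in keys if k in payload}

# Example: a storage event carries more metadata than the next function needs.
event = {"bucket": "photos", "object": "cat.jpg", "size": 123456, "etag": "ab12"}
filtered = apply_filter(event, keys=["bucket", "object"])
print(filtered)  # -> {'bucket': 'photos', 'object': 'cat.jpg'}
```

The same mechanism would apply uniformly to event-to-function, function-to-function, and state-to-state passing, which is the unification the discussion asks the document to spell out.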
So if you comment back to me, or if someone else has a comment, that's very welcome.

Okay. Go ahead.

Rachel, you're breaking up; your Wi-Fi connection seems broken.

Oh, okay. I might not have a way around that right now, but I'll chime in on those things.

Great. I think that's all; please feel free to post comments. That will help us make this better. In the next meeting, we are going to concentrate mostly on addressing all these comments and see how we can improve this document.

Sounds good.

Okay, great. And with that, I think we're done for this meeting. Thanks, everyone.