Hello, yeah, okay. So is this your first call? Do you want to be associated with some company? I'm from Oracle, I'm just looking into this. Do you know if Ricardo wants to join today? No, I don't know, I'll go ping him right now and see if he's available. Since the channel switched to password authentication, maybe we give people a little bit more time. Yeah, I think last time it also took about 10 minutes for everybody to get on board.

Hello. Is this your first call on Serverless Workflow? Yes, yes. So I was going to join with a few of my teammates, Jorgen and Olivier. Then welcome — I'm from Oracle as well. Just out of curiosity, are you also with Oracle Cloud Infrastructure, as K is, or are you from a different department? Yeah, we are from Oracle Cloud Infrastructure. Yeah, and by the way, it's — sorry, sorry.

Man, no response from Ricardo yet, but I'll let you know if he says anything. Hey, there's K — hi, K.

Okay, it's almost five minutes in, so let's start. Community question time — does anybody have a question? No? Then let's get to our first agenda point: we have a new logo. After the CNCF design team proposed several options, and we had a coloring discussion last meeting two weeks ago, we finally settled on the Slack channel on this logo. That is our new project logo. If I understand correctly, we are waiting for the artwork team to come up with the different formats to upload. Do they upload it to the landscape, or would they deliver it to us? I think we will get a link where we will get this logo with text, without text, black and white, white only — all kinds of different options, so we can pick and choose. But where the link is — I was just told the artwork repo, so where that is I don't know yet, sorry, I'm new to this as well.

Okay, and we have a few spec updates. Do you want to say something about the updates to the subflow state spec?
Yeah, definitely. In the last two weeks we've had a couple of updates. The biggest one is the one I hope we will discuss today, and I have a little presentation for it as well, so we can all look into it. But the updates mentioned there: we updated the subflow state specification document because, via the community, it wasn't very clear how function and event definitions get propagated to subflow states. Again, for people that are new: a subflow state is a state that allows you to have reusable workflows, which can be used in several other workflows and solve a particular business problem. So a subflow state allows you to point to one, and that workflow gets embedded and executed at that point during the workflow execution.

So there was a question: does the subflow state inherit function and event definitions? There was an ongoing discussion in the PRs, and at the end we decided that no, each subflow state has to define its own functions — the services that it wants to invoke — and the events that need to be either consumed or produced during its own execution. That was mainly the decision because this is a specification: it is better for runtimes if we are very clear and specific about what we want. And at the same time, each workflow, regardless of whether it is a subflow or a parent flow, should be able to be validated on its own, rather than depending on another workflow's definition. So we decided on that.

The second one kind of goes in line with this, because we now require that each workflow defines its own function and event definitions.
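As a rough sketch of what is being described, a subflow state in Serverless Workflow YAML might look something like this (the ids and names here are illustrative, not from the meeting):

```yaml
# Parent workflow: delegates part of its logic to a reusable child workflow.
id: parentworkflow
name: Parent Workflow
version: '1.0'
states:
  - name: RunChildWorkflow
    type: subflow
    # References the id of a separately defined, independently validatable
    # workflow. Per the decision above, the child's function and event
    # definitions are NOT inherited from this parent; the child defines
    # its own so it can be validated on its own.
    workflowId: childworkflow
    end: true
```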
There are definitely cases where we want to reuse those. So we now allow functions, and events too, not only to be defined inline, but also to reference an existing JSON or YAML file which includes them. So basically you can define a JSON or YAML file which includes your function definitions or your event definitions, and you can reference it in your workflow, and that should be embedded. Multiple workflows can reuse them, rather than having to inline them in every single workflow definition that you have. So that was an update.

The third one is the one that we're going to actually talk about today. It's just in PR form currently, but that's something that we all really need to decide on, and I'm just trying to make a case for it today and see what everybody thinks. We'll talk about it in detail now, so I don't want to waste any time here.

As far as issues go, if you have time, please look at the two issues that are linked here. They have to do with retries, and one of our community members makes some really good points on what we can do to improve especially our retry definitions in the current specification. So having more people look at them and chime in would really be helpful, especially because right now I am rewriting, or trying to rewrite, the error handling, timeout, and retry sections of the specification, and this has to do with that. So any input would be much appreciated.

Yeah, if I get it right, those are all contributions by Jürgen, so thanks a lot for all of the issues and the pull requests that came out of it. Thank you very much for the nice discussion we had on the PRs and the issues. I have two questions on the remaining issues on retries. The documentation mentioned there — could this be clarified?
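The external-file referencing described above might look roughly like this (the file paths and function name are illustrative assumptions, not taken from the meeting):

```yaml
# Instead of inlining definitions, point to shared JSON/YAML files
# so multiple workflows can reuse the same definitions.
id: sampleworkflow
name: Sample Workflow
version: '1.0'
# URI to a file containing a reusable list of function definitions.
functions: file://common/functiondefs.yaml
# URI to a file containing reusable event definitions.
events: file://common/eventdefs.yaml
states:
  - name: DoWork
    type: operation
    actions:
      # refName must match a function name defined in the referenced file.
      - functionRef:
          refName: sharedFunction
    end: true
```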
So, documentation only, or would we have to restrict the use of the interval versus max attempts, or how we specify intervals? I think there are definitely going to be changes to the schema as well, so we might need to relabel those issues. Okay, so I'd put the spec label on it.

Yeah, I think that's it for the agenda today; I didn't add any other topics. So, Tim, do you want to do the deep dive right away, or should we first conclude and ask for any other questions, and then jump into the deep dive? Yeah, definitely — whichever way you think is best. Okay, then let me ask if there are more questions. I would have one about the use of OpenAPI, but maybe I can take this offline in favor of time. Let me do a final roll call. Michael — hi, I noticed you joining, thanks. Yes, I'm here.

So, yeah, that's it. Then let's go into our first deep dive, about the function definitions. Let's see how I can share my screen — oh, I think you have to stop sharing, Manu. Just to recap: last meeting we said that it would be beneficial for everybody if we started taking our time during these meetings and discussing certain parts of the specification. And honestly, I'd love for this to be an open discussion, not a monologue. This one, I must say, I think will be a little bit spicy, so I'm hoping that everybody participates, because it introduces a change to our specification that I think we should look at and discuss. So let me start sharing the screen. Sorry, my presentation is not very pretty, but I started it this morning. Can you guys see my screen? Yep, okay, great.
So last meeting we said that the first topic of discussion in these deep dives, as we want to call these sessions, is function definitions. I just wanted to start off by saying what function definitions are in Serverless Workflow. They're used to describe what services need to be invoked, and how to invoke them. They're typically external services that need to be invoked during workflow execution, as part of the orchestration of services and everything else that you're defining with our Serverless Workflow markup. And again, everything that we're defining has to solve a particular part of the business problems within your organization, or the problems you are trying to solve. Another thing about function definitions is that they should really provide the runtimes all the information needed in order to invoke this particular external service. I want to say that up front.

But there are a lot of parts to invoking services: there is authentication, callbacks — especially with webhooks — and a lot of different parts that go into actually invoking a service or a function. So we'll get to that as well. As far as our specification is concerned, since we're not doing an in-house project or a proprietary type of markup, we have to be aware of portability. We have to understand that, for whatever markup we define, if users choose to use the Serverless Workflow specification, portability should be — sorry about my dog — a very important part of what we're doing. So in order to see where we are proposing to take function definitions, let's take a look at how this currently looks in our specification. I did this little example here.
This is a whole workflow definition in YAML. Basically, if you look at the top, after id, name, and version, you'll see functions, which defines the function definitions array in Serverless Workflow. Instead of inlining function definitions inside of states, or steps, or those parts that are really concerned with execution or logic, we define them up front. So we have our functions array. Each one has a name parameter, which is a unique identifier of this particular function definition. This is a workflow-unique identifier, not a unique identifier within the service that we're trying to invoke — it's domain-specific to the workflow markup itself.

The second parameter is called resource, and this defines the endpoint location of this particular service that is exposed to the public. And then we have a proprietary, string-based parameter called type, which we initially thought, when we did this, would allow runtime implementations to give further information about the type of service. We kind of left it open-ended — currently type is a string — so users can give some more information that is, again, domain-specific to them about this particular service.

So here you'll see two functions defined: one is the get current time function definition and one is the read wikipedia function definition. Within the states, then, different states have actions — for example, the operation state, callback state, and event state can define actions. Within actions, we can reference those functions; referencing a function within an action means that at this point the actual service should be executed during workflow execution. So we have a functionRef, and the second parameter down here on line 19 is refName, and at this point we say we reference our function definition and we want to execute the get current time function. The same thing starting on line 20.
You can see it's the same thing, but we also allow parameters — which are basically JSON objects, currently — to be passed as the payload for the service that needs to be executed. So, any questions so far? This is what we currently have.

I have a question. You've defined some of these parameters that, I understand, are getting passed through — for example, the Wikipedia example. Why is there no definition of what parameters are available on the read wikipedia function in this particular case?

Yeah, and we're actually going to get to that — this is a very good question. One problem, as you will see with other currently used or popular workflow markups out there, is that you don't have that. As a workflow developer, currently, you not only have to know what you want to write as far as your orchestration goes, to solve your orchestration business problem, but in a way you have to be an admin as well, to understand the API and all of the operations of all the services that you want to invoke. Which definitely makes it hard for modelers. So one part of where I'm going with this is a step-by-step approach to get to where I want to get at the end. All right, I'll wait for it. No problem.

So this is just a little bit of iteration on how we currently do things. We have name, resource, and type parameters. Function definitions also, similar to states, have a metadata definition. This is our free-form extension object, which modelers can use to add non-executable parameters and information to their workflow models.
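Pulling the pieces just described together, the current-style definition might look roughly like this (function names, URL, and the expression syntax are illustrative assumptions):

```yaml
id: timeandwiki
name: Time And Wiki Workflow
version: '1.0'
functions:
  # 'name' is unique within this workflow; 'resource' is the endpoint;
  # 'type' is the open-ended, string-based hint discussed above.
  - name: getCurrentTime
    resource: https://example.com/api/time
    type: rest
  - name: readWikipedia
    resource: https://example.com/api/wiki
    type: rest
    # Free-form, non-executable extension information.
    metadata:
      owner: platform-team
states:
  - name: GetTimeAndSearch
    type: operation
    actions:
      - functionRef:
          refName: getCurrentTime
      - functionRef:
          refName: readWikipedia
          # Free-form JSON object passed as the invocation payload.
          parameters:
            query: "{{ $.time.dayOfWeek }}"
    end: true
```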
So yeah, metadata is also available for function definitions. As far as functionRef goes, again, we have a refName which references the unique name parameter of the function definition, and, as we have seen in the example, parameters, which is a free-form JSON object where you can add the data that needs to be passed to the particular service that we want to invoke during workflow execution.

Now, in order for us to see where we are and how to improve, I think it's a good idea to compare it to other ones. I picked Google Cloud Workflows — not because I'm picking on Google or anything like that; it's just kind of new, and I wanted to see what they're doing. This is from their documentation. Instead of states, they have steps, and similar to, for example, AWS, which we'll see, they define their service execution inside of their steps — where we do it a little differently: we define them up front and reference them, and can also reference files, JSON or YAML, for the reusability we talked about earlier.

On top is their definition: they have a call parameter where you can have an enumeration of different types of HTTP calls to the services; you can have arguments, URLs, methods, you can set headers and the body, and so on. Authentication information is right there as well. And then you have things like timeouts, and the result — the data result of the service, and how it is placed within the workflow data as the state execution continues. Underneath is the same example that I showed on the earlier slide, using the Serverless Workflow specification. Their workflows are only YAML-based; this is what it looks like, basically, where we show the JSON.
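Based on Google's public documentation, a step of the kind being described might look roughly like this (the URL is illustrative):

```yaml
# Google Cloud Workflows defines the call inline, inside the step itself,
# rather than referencing a function defined up front.
- getCurrentTime:
    call: http.get          # enumerated call type (http.get, http.post, ...)
    args:
      url: https://example.com/api/time
      headers:
        Accept: application/json
    result: currentTime     # where the response lands in workflow data
```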
We also show the YAML in this case. So that's kind of what it looks like; as you can see, it's another approach, similar to ours at this point.

Another, of course very popular, workflow language out there is AWS. I don't have a full example — I've been rushing to do this — but it's a very similar thing: they define a resource, which in this case is an ARN, though in a way that's a URI in the end, if you look at it. And they also have a JSON object type called Parameters, where you can basically put in anything you wish, in order to provide all the data and information for this particular service to execute.

So, looking at these three — and of course there are many others that I haven't looked at, and I hope you can maybe help me with looking at even more approaches — they're very similar, right? So let's look at some notes I made; again, this is just my opinion, and you can tell me yours — I would love to hear it. Some notes about all of those mentioned, including ours: they try to be neutral, but in a way every single definition that we have seen so far is proprietary, in the sense that it allows you to do only what it allows you to do. It's very specific to the container or cloud platforms you run on, for example. Just letting you know, from the information that I read — and if you work at Google, or work with Google Cloud Workflows, please correct me — they currently, at this stage, only allow HTTP executions to be made. AWS likewise has the services that they have exposed in their system, and they allow you to call those. So it's a very closed-box type of approach, where we as a specification have to look at a much broader scope at this time. So no matter what we do, or what these different approaches are trying to do, they are never going to fit the requirements of everybody — and we can't do that either.
However, there are always going to be changes, updates, and improvements needed, and they're all based on, again, a proprietary or in-house definition of how you invoke a function.

Now, the second set of notes I put on this slide is specific to our function definitions. What I mean is, we're in the same boat: we're also trying to define how services are invoked during workflow execution. But we are a specification, so we have to look at this differently — we have to distinguish ourselves. We can keep what we're doing, refining and updating it as new consumers of the Serverless Workflow markup come in with requirements, but again, it will be some sort of custom-based definition, right?

And my idea, or the idea that I think we should look at going forward, is to rely on other specifications. The same way our specification relies on the CloudEvents specification for the event format — well, functions, or executing a service, is a very similar thing. We should really rely on existing specifications that do things a hundred times better than anything that we can create and maintain specifically for function or service definitions. And at the same time, we need to focus on portability: the more information — such as authentication, usernames, passwords, headers, things like that — that we stick in our markup itself, the more we are going to limit our portability in the future across containers and cloud platforms, or even if you're just doing a normal localhost type of project, which our specification is also there for.

So what does this really mean, long term, for function definitions? I think we should start relying on the OpenAPI specification. And why? Well, for many reasons.
But let's first look at where that takes us. Services that workflows need to invoke during their executions would have to provide, or have available, an OpenAPI description. OpenAPI is a specification — a huge, widely used one; you can look at the docs and read everything — and the description is basically in JSON or YAML format. OpenAPI covers almost all use cases for invoking RESTful HTTP services, including authentication, callbacks for webhooks, et cetera. So what it can provide, and already does, is already there, and we don't have to duplicate it; we don't have to create a subset of it and keep improving it. It's there, and it's widely used.

It is very good for runtime implementations of our specification, because our runtimes get all the information they need, and tooling already exists in multiple different languages to read an OpenAPI definition and know exactly how to actually make the call. Remember also that a single call to a REST service can actually mean multiple calls — it can mean getting a JWT token, doing basic authentication, or even a lot more than that. OpenAPI can already describe all of this for us.

Also, another thing is tooling. This is where I come to the question that was asked before. With allowing OpenAPI definitions, if there is ever tooling for Serverless Workflow — which I hope there will be, beyond just the Visual Studio Code plugin that we provide — some visual tooling, for example, will be able to read the referenced OpenAPI definition and help the users with the actual services or operations provided by the service they need to execute. So it completely offloads that part from the workflow modelers, which I think will help a lot. Any questions so far?
Yeah, sorry, I have connection problems in between, but — OpenAPI: I do like it probably the most for anything that is REST-based, because it's really complete, as you said; it allows everything. For Amazon Web Services, you've given the example, and they are using ARN resources — it's a URI format to describe their endpoints, but there you don't even get to choose the transport method. So whether that is HTTP, or whether they are using Java RMI internally, you don't get to know. And the Google example is a very generic HTTP kit.

So first, if we stick with HTTP — do you happen to know if we might run into compatibility issues with OpenAPI? Because if you want to call something that you could easily call with an HTTP request — and you can specify all the authentication, bearer tokens, and whatever you need in the request — but you don't have an API schema for it, you'd have to build it yourself, right?

Yeah, and I thought about that a lot, actually. The way I see it, OpenAPI — or Swagger, as it's also called — already has a lot of tooling.
And they make it very simple to build one if one doesn't exist — that's number one. Number two: I made sure of this before I did this, and you can really do your research yourself, and I would love to get everybody's feedback — everybody's doing OpenAPI now. If you look at AWS itself, they allow you to upload Swagger definitions, OpenAPI definitions, and they also allow you to build OpenAPI definitions from their existing services. Same thing with Google Cloud. Same thing with OpenShift, for example — the stuff I'm working in, so I know that. I also just talked to, for example, Scott Nickles about Knative, whether that would fit within this too, and he said yes: you can also define Knative services using an OpenAPI definition.

So I do understand that there might be use cases where users say, hey, I don't want to do this, or I cannot do this. But that is the trade-off that we're going to have to deal with as a specification. I also described in the PR that we still have the metadata section. So users that simply do not want to use OpenAPI — as I will show in the next slide with the proposed change to the function definition — can still use metadata to describe how to invoke their service, with the caveat, or the denotation, that we as a specification cannot assure that those types of workflow definitions are portable across multiple containers or cloud environments. But it's still possible to do, using metadata.

Okay, so we would default the type to OpenAPI — I'm guessing the latest version? Yeah — you'll see, type is completely removed. You'll see on the next slide, and then if you want to — yeah, yeah, I noticed, but okay. So anything else could use the extension mechanism to describe a different invocation method?
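The metadata fallback mentioned here might look roughly like this — note that every key under metadata below is a hypothetical, free-form example; the spec deliberately does not prescribe them, and such a definition would not be portable across runtimes:

```yaml
functions:
  # Non-portable invocation: the details live in the free-form metadata
  # object instead of a referenced OpenAPI document, so only runtimes
  # that understand these custom keys can execute it.
  - name: customInvoke
    metadata:
      invocationProtocol: grpc
      endpoint: mygrpcservice.internal:50051
      method: Publish
```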
Yes, definitely — but we still have to understand that we cannot please everybody, no matter what we do. However, if we're going to do this, the amount of benefit this has for the bigger public has to outweigh that. So that's why the discussion.

No, I love it, I love it. I really want to make this the standard or default way for method invocation. The only thing is, I don't want to close any doors by removing the type field. So I was wondering if we should default it to openapi, or if we really want to remove it completely. We could reintroduce it later if we wanted to. Definitely, definitely.

So I think this is the last slide — no, there is one more after this — but I did an example. It's the same example as we saw before, but with the proposed function definition, where we still need a unique name. Again, the name is domain-specific to the workflow definition itself, as far as referencing it in actions goes. But you see, rather than having resource, type, and anything else, you have a single parameter called operation. The operation parameter is a string with two parts, divided by the little hashtag. The first part is a URI — so it doesn't have to be HTTP; it can be classpath, file path, whatever, as long as it fits the URI specification — to the actual JSON or YAML file where your OpenAPI service definition is present. The part after the hashtag is the operation id. If you look at the OpenAPI specification, there's a widely used parameter called operationId, which is again a domain-specific parameter, and that ties in well to our own domain-specific workflow language. It is a unique mapping in the OpenAPI definition to a particular endpoint, or operation, that your service provides.

This is very ugly — the hashtag is the URI delimiter used to specify a fragment.
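Putting the proposed shape next to a matching OpenAPI document, it might look roughly like this (URLs, paths, and ids here are illustrative assumptions, not the actual PR content):

```yaml
# Proposed function definition: 'operation' replaces 'resource' and 'type'.
# Format: <URI of the OpenAPI JSON/YAML document>#<operationId within it>
functions:
  - name: getCurrentTime
    operation: https://example.com/openapi.yaml#getDaytime
  - name: readWikipedia
    operation: file://definitions/wikiapi.yaml#searchArticles
```

And the OpenAPI document the first entry points at would contain the matching operationId:

```yaml
# A minimal sketch of https://example.com/openapi.yaml
openapi: 3.0.0
info:
  title: Time Service
  version: '1.0'
paths:
  /daytime:
    get:
      # This id is what the part after '#' in the workflow's
      # 'operation' string resolves to.
      operationId: getDaytime
      responses:
        '200':
          description: The current day and time
          content:
            application/json:
              schema:
                type: object
                properties:
                  dayOfWeek:
                    type: string
```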
So daytime here would be a fragment, but OpenAPI itself uses object references, where the hashtag separates, I think, the specific paths to the object, right?

There are several options. We have to identify uniqueness: in an OpenAPI definition, how do we uniquely identify a certain operation of the service that it provides? A couple of things. You cannot use the path name itself — it's not unique, because it can use variables. You could maybe use name and path. But one of the standards that's really used across the board right now is to use the operationId, which is a string that has to be unique within the OpenAPI document and maps one-to-one to a certain operation of the service.

And the reason — I'm just letting you know, we can change this format — but take a look, for example, at Apache Camel: recently they also added OpenAPI support, and they use the same type of string. So I didn't steal it from them, but looking at this stuff, the approach is something that others are looking at as well, and that's why I think it is useful. But if we find some better approach or idea, let's use that. We just need some way to say: here is the YAML or JSON file which has the OpenAPI definition, so the runtime can read it, and the tooling can read it and see all the different operations that the service provides; and then we need the unique identifier which represents the single operation of that service that we actually want to invoke. Whether we use this type of string format or not, or two parameters versus one — I am okay with all of that.

One comment I would make: I think the query on line 22 shouldn't be required in this case, because the action and the search should come from the OpenAPI spec, right? That's a good question.
However, one of the things — take day of the week, for example, if you look at that: the OpenAPI specification itself does not know about, and is not involved in, the actual execution of the workflow. What happens after the execution of the first function, in this case get current time, is that the results of that function are merged with the state data — in this case the get today's wikipedia articles operation state — and then the second function gets invoked. So the workflow has to pass this data, which was the result of the first function invocation, as the body or the parameter to the second function for execution.

So we still have to define that. However, what OpenAPI makes a lot easier now is that, up front, at compile time, we can say: hey, as parameters you pass a query parameter, but the OpenAPI definition says it should really be called something else; or you're passing in one parameter, but the OpenAPI definition of the service endpoint requires two parameters, for example.

What I mean is that it's not necessarily a query parameter — it could be a POST body, it could be whatever, depending on what the API spec says. Exactly — that's why I'm saying that query maybe is not needed there, because whatever form those parameters take should be defined as part of the OpenAPI spec. Definitely, yeah. That's a one-to-one mapping that OpenAPI even helps us with, to know exactly the structure of the parameters that we have to pass in. OpenAPI even has a schema definition for the object type, so workflow developers will know exactly what the structure of the parameters needs to be in order to invoke this service. Does that help? All right — so do you agree that the query on line 22 is not required, or do you think it is required? I think you could just bump action and search up one level. Yeah, I think so too.
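The "bump it up one level" suggestion might look roughly like this, before and after (the expression syntax is an illustrative assumption; how each parameter is sent comes from the referenced OpenAPI document):

```yaml
# Before: nested under 'query', duplicating what the OpenAPI spec
# already knows about the operation's parameters.
- functionRef:
    refName: readWikipedia
    parameters:
      query:
        action: opensearch
        search: "{{ $.time.dayOfWeek }}"

# After: flat parameters; the OpenAPI operation decides whether each
# one becomes a query parameter, a path parameter, or part of the body.
- functionRef:
    refName: readWikipedia
    parameters:
      action: opensearch
      search: "{{ $.time.dayOfWeek }}"
```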
Yeah, that's fine, I agree.

Hey, hi Tihomir, can you hear me? Yes. All right, hi guys, I'm sorry I'm late.

So, regarding these parameters: this will fit very well with that issue that I opened about giving a reasonable meaning to the parameters — whether they are path parameters, query parameters, header parameters, or body parameters. We don't need to know now, because of the OpenAPI thing. And yeah, I guess we won't need query; maybe just parameters — maybe, I don't know, name and data, and in the data we fetch from the variable name or from the body of the state, or anything like that — because we have this context information coming from the OpenAPI.

And one other question that I have: regarding all that you have there in the operation — for implementations, let's say they might have the OpenAPI JSON file within their own context. If I'm writing a Java implementation, I can have that YAML or JSON file on my classpath. Or if I'm writing a Go application, I can have it packaged in my binary as well, or I can have it external, as a file, or anything like that. So is this a valid way?

Right, so yeah — it can be HTTP, or I can fetch this file from whatever file path; it just has to follow the URI specification. Oh yeah, then let's try to, I don't know, maybe document that somehow, or write that down. Because maybe people will ask, like, oh, if I can't fetch the resource, my workflow won't work, because I won't have this operation. So yeah, we can actually note that. Yeah — I think in the PR,
I define it, and some of the examples even use file:// rather than HTTP, just to show that you can really define where it is yourself, rather than it always having to be available on some public endpoint. Oh, that's nice. Yeah — sorry. No, no problem.

So, just the last slide ahead — sorry it takes so long. Here is the link to the PR; I think it's currently the only open PR right now. So the changes are: name is still the unique identifier for the function definition. Operation is the new parameter, which, as we said, has two parts divided by the hashtag — the first one being a URI, as Ricardo said, and the second part being the operationId. In the image below, I created a little OpenAPI definition using the Swagger editor. It is not completely correct, but the most important part there shows how the operationId for /daytime maps to what is actually in our definition of this string. And you can see that in OpenAPI you can define the response codes, what kind of return message it gets, the parameters, and things like that. So it does what the specification is intended to do, and it does it very well. So yeah, that's all I had — if you have any questions or concerns, or if you want to know more about function definitions or anything, please speak up.

I have a question about serverless functions. If you go back to the AWS Step Functions example, where I think it was showing an ARN for a function — okay, in this case it's SNS publish — if we instead have an example where the state is making a call to a Lambda function, how would that work? What would that look like with this?
It would look the same. The first part of the operation string is again a URI to your OpenAPI JSON or YAML definition, and the second part would be your operation ID, for example "publish". You would need to either have an OpenAPI definition available or create one yourself, in JSON or YAML, with a path, which could really be this path right here, and an operation ID of "publish", so the runtime can map the operation ID in the function definition to the one defined in your OpenAPI specification.

So somebody running this on AWS would need to upload an OpenAPI spec somewhere that points to their Lambda function?

Yes, but from what I've seen, and this is why I feel confident about this, and where I want your input, because you may be experts in domains where you know this much better than I ever could: even on AWS the OpenAPI definitions for the exposed services can be generated for you. So you don't always have to write your own and upload it somewhere; in a lot of cases the different cloud providers will do this for you. Same thing on OpenShift: OpenShift can generate the OpenAPI definitions for your services already.

Gotcha. I think that makes sense for examples like calling SQS or some large, established AWS service, but I'm struggling to figure out how it works with something like my own custom Lambda function that I've uploaded and want to invoke. I guess what it sounds like is, in that case, I'm not sure whether I'd be using an OpenAPI spec for the Lambda service and passing in the parameters that tell it to invoke my function, or whether I'd need to provide an OpenAPI spec describing the API that my Lambda function exposes.

Yeah, the funny thing about Lambda functions, as far as
I know, and I'm not overly familiar with Amazon's services: as far as I know, in order to make a RESTful call to a Lambda function that can also return a result, you'd have to put an API gateway in front, where you define the path yourself, and then you just refer to the ARN as the function to be executed. In this scenario I'm not even sure how the Lambda would produce the result. But that's the only way to make it RESTful, and once you have an API gateway on Amazon, you can come up with a Swagger definition for it.

This example uses SNS publish, and it's unidirectional; it's really more of a cloud-event interaction. CloudEvents is of course not exactly supported on Amazon, but the transport here is a message broker: you have to publish to a topic in order to invoke the function, and I think the Lambda here is bound to that topic. There you only have a message structure, one way. So I assume OpenAPI cannot define such APIs, because it is really meant for RESTful APIs, that is, HTTP- and HTTPS-based request-response clients of a protocol. If there is something else, and that's why I mentioned maybe we want to retain the type field for function invocations: I'm not assuming anybody wants to do old-fashioned CORBA calls or some ASN.1 encoding, but if there are other invocation protocols not based on HTTP, then retaining the type field would at least be an option to extend the specification and write a proprietary extension.

The other thing I have in mind is that since this is unidirectional, it's not really waiting for a result. We could somehow formulate such a message as an event, a produced event that is emitted by the workflow.
Right, and that's a good point, Manuel, you mentioned this: we have two ways of invoking functions within the Serverless Workflow specification. One is via the actual function definition, which in our case is meant more for synchronous HTTP calls. We also have the ability to invoke functions in actions via events, and that is already in the specification itself. So you can already cover a lot of the cases where functions are not exposed via HTTP, or not exposed at all, but can be triggered via events, in different containers for example. In that case we can describe a trigger event and a result event in actions to invoke those types of services as well. Those fit more of an async scenario: if you fire and then wait, you will most likely use event-based invocation of your services anyway, or specifically the callback state if you want to. So the function definition is more for the synchronous invocation scenario, in my opinion.

One thing on this, thinking about it from a user-experience point of view: in Step Functions, like this example we're looking at, I know it's SNS, but the goal when they launched Step Functions was to orchestrate functions, right? And we can look at other examples where they have the ability to trigger functions directly. If our specification needs some similar way to trigger functions, then asking the function author to also define a YAML spec and have it uploaded and available somewhere is going to add a lot of development cost, which might not be what the function author was looking for.
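A sketch of the two invocation styles described here, with hypothetical state, function, and event names; the exact property names (functionRef, and eventRef with trigger and result event references) follow the specification drafts of that time and may differ in later versions:

```yaml
states:
  - name: ProcessOrder
    type: operation
    actions:
      # synchronous invocation via a function definition (HTTP/OpenAPI)
      - functionRef:
          refName: validateOrder
      # async invocation: produce a trigger event, wait for the result event
      - eventRef:
          triggerEventRef: OrderSubmitted
          resultEventRef: OrderProcessed
    end: true
```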
Definitely, and this is exactly the type of discussion I wanted to have. There is a definite trade-off, and we have to decide what we want to do as a group. There is definitely a little bit of up-front development with this. The way I look at it, and you can tell me if I'm wrong, is yes, there is, but you have to understand that the people developing the workflow model may not understand the actual service definitions they want to invoke at all. As a business analyst, or as a user solving a particular business problem via orchestration of services, I know what type of service I need to invoke, but I do not need to know every single operation, or whether I first have to get a JWT, or do basic authentication, or even what my username and password are, up front. OpenAPI is much better suited for that.

So yes, you can say this requirement increases development time, but at the same time, having an OpenAPI definition for your services will allow you to port them tomorrow to a different cloud provider, or to a different container, for example. So I don't see it as a particularly bad thing to do this work anyway. And given the OpenAPI tooling, it seriously takes minimal time. There is also some discovery: if you already have services defined inside runtime containers such as Quarkus or Spring Boot, it is basically one line of application properties you have to set, and it will generate the JSON and YAML for you. OpenAPI is very unique in that a lot of tooling already exists for it, which saves you most of that development effort. But what it buys us as a specification is really this: would you rather have a workflow definition that is not portable,
or a workflow definition that is? I think that's what we need to decide, because using OpenAPI allows us as a specification to say we are indeed portable in our service definitions and invocations. That's the trade-off I want to make a decision on with you, if possible.

Can we think of it from a market perspective, like what the 80/20 on this is? As a workflow developer and as a function developer, is it that 80% of the time I'm going to need this? OpenAPI seems to me like an advanced use case. I don't know if that's true, but do we want to force that on everybody, or can we keep something like the current method as a quick start that solves most function-invocation use cases? Or do we think that for every function invocation we need to ask the function or workflow developer to write an OpenAPI spec as well? I don't know.

My take, and sorry, if anybody wants to speak up please do, is again that we are working on a specification, not an in-house project. The problem we're going to run into is this: tomorrow, say, somebody wants to use the specification, and Google, or anybody out there really, adds a parameter they specifically need that we can only add in the next version, and then they cannot use our markup. The same thing will keep happening: somebody will have something we don't, and it all becomes proprietary, in-house definitions again. The point of my talk here is: let's use specification-based approaches, because we can never replicate ourselves what OpenAPI does. This is really why we use CloudEvents; we don't want to create another event format.
So we said: let's use the specifications. Same thing with function definitions and service definitions: let's use specifications for those, because that really allows us to grow a lot more than markups that rely on in-house or proprietary definitions.

It might be worthwhile having an inline version of the definition too, and there might be another use for this type field: to say, okay, ideally you should have an existing OpenAPI document you can fetch, or maybe you host a copy yourself if one's not available, but an inline format might also make for nice, compact examples that people could use.

If you are developing a runtime where you simply say, no, we don't want to use this, you can still use the metadata section of the function definition to put in anything you possibly want. That is fair game, and you can adjust your runtime accordingly. I think we're actually going to do a little bit of that within Red Hat as well. Ricardo, tell me if I'm wrong, but we are also looking at metadata to inject further information that is specific to our runtime. But at the specification level itself, we cannot consider those types of workflow definitions portable, right? Am I wrong? You tell me.

I really like the OpenAPI spec and first-class support for it. I guess what I'm worried about is removing the type field, which seems to require the use of an OpenAPI spec for all functions. I'm worried there are use cases where the interaction isn't RESTful, and a required OpenAPI spec would be confusing to users of the definition.

All right, so that's two people; I think Manuel also said the same thing. So one thing, maybe, to move toward the implementations:
let's put back the type parameter, which, again for everybody, is a string that runtimes can use for their own identification or further description of the type of service they want to invoke, in whatever way makes sense for their runtime. Would that be right?

Yeah, I agree with this as well, because let's say they wish to invoke a SOAP service, for instance. How would they do that? They might use WSDL; they're very old school. I guess it makes sense to have the type parameter, and we can even assume a default type of an OpenAPI definition, something like that, and then if you want to change it, you set your own type and interpret it in whatever way lets you do your thing. But again, that won't be portable, because we are looking for portability as well, so you can port your workflow from one runtime to another, and if you use a proprietary type you won't be able to do that. That could be reinforced in the specification as well.

Regarding HTTP: I think with the generic HTTP method we ran into issues of how to encode the query parameters into a URL, the authorization header fields or potentially additional headers to be sent, and the structure of the body; the encoding of the body wasn't clear from the function call. Following the Google example, we could have a type "http" that would still allow a generic adaptation, right? So, would you rather switch to OpenAPI completely and use the tooling to generate specs during function development? I know you work a lot in Java; it's probably like back in the WSDL times, when you wouldn't write the full specification by hand. Eventually you end up annotating your code, and the whole specification is created through tooling.
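A sketch of what the restored type parameter could look like, assuming "openapi" as the implied default; the function names, URIs, and the "soap" value are illustrative only:

```yaml
functions:
  - name: lookupCustomer
    # type defaults to 'openapi' when omitted
    operation: https://example.com/crm/openapi.yaml#findCustomer
  - name: legacyBilling
    # proprietary type: only runtimes that understand it can execute this,
    # so the workflow is no longer portable
    type: soap
    operation: https://example.com/legacy/billing.wsdl#ChargeAccount
```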
You could do a similar thing with OpenAPI. But for somebody who is just stepping in, I agree, it would be a lot of effort to describe the API with an OpenAPI spec first, just to be able to use any workflow engine to make some calls. So something that just generically calls an HTTP endpoint still makes sense to me. I know a lot of people who still implement services that way because they want to type up something quick. So if we had a type "http", probably with some predefined metadata for the method, headers, body, and whatever else, like in the Google example, would you support adding it to the specification, or is that a no-go because there is not enough validation in it?

Hmm, this is hard to answer. From a runtime-implementation perspective, I'd say it doesn't matter whether it comes from an OpenAPI document or directly from the spec; the code would be the same. But I have mixed feelings about this, because we could do all of that using only OpenAPI, and leave every detail of how to call an HTTP RESTful service to OpenAPI, and that's it, because that is the standard today. Everyone is doing that.
There's a lot of tooling, like Tihomir said. To work with Swagger, for instance, any Java application can generate that quite quickly, and for Go it's basically the same thing. But at the same time, I understand that lots of people will come to the specification just to call a simple service, and they won't have an OpenAPI definition. For those cases I'd say they should use their own metadata and do their thing there, and the runtime can choose to support that, because it would be hell to maintain the specification if we tried to keep it aligned with OpenAPI ourselves. The issue I opened about classifying the parameters when calling an HTTP service, REST or not, is basically us doing the same job OpenAPI already does. So in my opinion, a generic HTTP type should be a no-go. I'd say we should only have the type "openapi", defined as the default: if you do not define a type, it will be "openapi", because it is widely used by everyone in the industry and we can rely on the OpenAPI standards for REST and for calling these kinds of services. For plain HTTP, I'd say the implementations should decide whether they support that or not.

Yeah, I still have to learn a bit more about OpenAPI, thanks. I had one more question, because Tihomir, you mentioned once or twice that OpenAPI would also give us a specification of how to authenticate with a service. I was thinking about some messy, proprietary authentication messages I've come across, and how they would be supported, or whether they could be supported, when generating an OpenAPI spec. So, how versatile is it? Say I want arbitrary headers added to the request?

The actual headers, you can define them all in your OpenAPI document.
So again, that would offload that from us as well. But you can also use metadata. Metadata works here in a way, because it doesn't really provide execution semantics; you can add all the extra headers you want there. But again, you have to be cautious: is this still portable? In 99% of the cases, I understand, probably for everybody here too, portability is not really a big issue; you see more work created by this than saved. But we have to understand that we're working on the specification itself, and one of the most important things we're trying to distinguish ourselves with, compared to all the other workflow markups out there, is this portability and vendor neutrality. I think that has to be embedded in everything we do, and I have a hard time with it as well, to look at things from that perspective more than anything else.

I fully understand that doing this might limit the actual number of users, because, like you all said, it might be much faster and easier to just hard-code the HTTP URL in there. But then you run into the same problems that we're trying to solve as a specification, right? It's the same with event definitions: nothing prevents users from using a proprietary event format on the runtime either, but we recommend using the CloudEvents format. I think that distinguishes us from the other workflow markups out there; they seem to be popping up every week, and I think that's a positive. I'm really a fan of OpenAPI. And if there is a better option: the reason we picked it is, as Ricardo said, that it is a standard. If there are other standards for non-HTTP-based invocation, let's use those. One of the things I think is important is for us to be specification-based, right?
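As a sketch of the metadata escape hatch mentioned above, a runtime could agree to read extra headers from the function definition's metadata section; the key names here are hypothetical, and a workflow relying on them is not portable:

```yaml
functions:
  - name: notifyInventory
    operation: https://example.com/inventory/openapi.json#notify
    metadata:
      # runtime-specific, no execution semantics in the specification
      headers:
        X-Tenant-Id: acme
        X-Api-Version: "2"
```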
And let's just pick the best one; I think I've brought that up several times.

The only thing I wasn't sure about is whether it fully covers the features of a generic HTTP request, so the security information and so on; would that also go into the parameters then?

Yes, and it also solves, for example, the cases we discussed with Argo recently about webhooks; it deals with callbacks as well. So there are a lot of use cases that we really don't have to specify on our own and work through in order to make users happy. In a way it creates more work for our users, which might limit adoption, but at the same time, especially for the people writing the runtimes, I think this is a huge step up.

I wonder, and I know we're going a bit over time now, about the example you gave before with the Wikipedia API, where you have to pass this opensearch action parameter every time. The way we previously described it, you'd have to put that inside every function invocation definition. It may make sense to have some kind of mapping on top of the OpenAPI to say: I'm not just calling the generic Wikipedia API and passing through my parameters; for this function definition I always want to do the search, and then there's only one parameter that people pass to that function. If you're following me, that's a nice sort of...

Yeah, to have pre-customized function calls, pre-filled headers. In some cases you wouldn't want to have to repeat that everywhere. I'll open an issue for that.
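The pre-customized function call idea might look something like this: a function definition that pins Wikipedia's action=opensearch parameter so callers only supply the search term. How fixed arguments would be expressed is exactly what the new issue is about; this shape is purely illustrative:

```yaml
functions:
  - name: wikipediaSearch
    # hypothetical URI and operation ID
    operation: https://en.wikipedia.org/api/openapi.json#query
    # hypothetical: arguments pre-bound for every invocation of this function
    arguments:
      action: opensearch
```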
I would love to work with you on that as well. So, what did you think overall? This was our first deep dive, and it was a pretty hard one because we introduced a big change; the way I think these were intended, the next ones will just cover what we already have. Two things, if you don't mind, either in chat this week or over the next two weeks in our team chat: please write down what you would like to see discussed next, so we can pick a topic for our next deep dive, or anything else you want to talk about. And the same thing: if any of you would be willing to lead this type of discussion next time, please step up. It would be really nice if I weren't always the one talking about this stuff. Of course I can, if you want me to, but if any of you would like to take a section of the specification, talk everybody through it, and lead a discussion, that would be great too.

Hey, thanks, Tihomir, for leading the discussion today.

Same here. This was really useful, thank you. We should be doing more of these deep dives.

Yeah, I really enjoyed it as well. I left some comments in the PR about this function definition, so we can talk there in the PR itself. Since we're at the top of the hour, I don't know if we have time to discuss anything more.

Great, thank you all so much for your comments and your time, for joining and listening to all this. Hope to see you again next time. Thank you. Cheers. Bye.