Hi everyone. Hi John. Hi Timmy. Hi Alex. Just going to see if Barla or Ed are joining us. Hello everybody. John, are you hosting today, or who's hosting?

Yeah. Manu usually moderates our meetings and he's really good at it. So I picked up moderating our community meetings, did some prep work where I sent out the invites for the call, and I also reached out to Alex to set up today's call. So we're welcoming Alex Collins from the Argo project, which I think has most recently joined the CNCF. We are the Serverless Workflow group, a subgroup of the Serverless Working Group, which has set out to do standardization work for serverless, along with a couple of other tasks that came out of the initial serverless whitepaper work this working group has done. So, are we waiting for anyone you have invited, Alex, or should we start?

No, I was hoping that Barla would join, and he has just joined today, so that's great.

Yes, I'm already in, Alex. I'm Barla from the Argo team.

So, just to introduce myself and Barla: Barla is a long-standing engineer on the Argo Workflows project. He'll introduce himself in a second; I think he's been working on it for about a year. I'm the principal engineer. I previously worked on Argo CD, but I've been working on Argo Workflows since December. I typically index more on community aspects, and also strongly on internal delivery at Intuit. At Intuit we really run Argo Workflows internally for a number of different reasons: we run it as part of our machine learning platform, as part of our data processing platform, for a certain amount of CI, for performance testing, basically as a platform for executors, and we also use it as part of the automation within our developer platform.
That's about half the work we do, and about half is related to the open source community; we leverage the open source community to help drive the direction of Argo Workflows. When it was first developed, I think about three years ago, Argo Workflows was only really intended as a very generic cloud native, i.e. Kubernetes, workflow system. Obviously things develop over time, and you realize you sit within a wider context and are often suited to particular workloads. Argo Workflows is particularly popular amongst the machine learning community, so we have a lot of users from different companies that use it particularly for that. We also have some interesting CI use cases and other general use cases. We're particularly interested in scaling to very large workloads; a workflow with several thousand steps is pretty common. In terms of running on different platforms, I had a very interesting conversation with some people from Cray back in November about what they were doing, and we're also used at places like CERN for their platforms. So that's just a bit of introduction. Have you used or played around with Argo Workflows in any kind of depth or breadth?

We have looked at Argo. Specifically, we created the markup comparison documentation, and I've actually run Hello World locally on my machine as well. This is Tihomir speaking, by the way. So yes, there is some familiarity. I can't say I'm an expert in Argo by any means, but I have played with it.

Shall I do a little demo in that case? Would that be a good idea? I'm going to see if I can get it up and running on my machine.

Okay, yeah. I have to admit, personally I only know Argo from the videos, so I haven't got it up and running on my system.
I'm mostly organizing the meetings and a little bit of the standards discussion, so I'm not that much into engineering or operating. But I have to say I'm pretty impressed. How this reach-out came about was actually a comment from Alois of SIG App Delivery, who recently reviewed our work on workflows and recommended that we reach out to more projects in the CNCF ecosystem, and this is how we got to Argo.

Okay, good. Let me share my screen; I usually make a mistake when I do this. Do you see the user interface?

Yes.

Okay, so this is the user interface. There are basically, correction, five ways to interact with Argo Workflows: two common ways that people use, and three other ways. The first common way is of course the user interface, and this is it as of version 2.8. It's a relatively straightforward user interface. Once you've logged in, you can submit a workflow, and workflows are defined as Kubernetes YAML. You can see this is standard YAML; I've given it a name here and an entry point into a specification. The specification defines the workflow, and most workflows reduce to a directed acyclic graph; the most basic workflows are directed acyclic graphs which execute pods as each node. The spec is usually made up of a sequence of templates that are connected together by dependencies. In this example, I've got a container here which runs an image called argosay, which prints the words "hello argo" to the console. So I submit that, and then what we do is figure out what the graph is for your workflow, determine the first steps to execute, execute those steps, and when they complete, your workflow is complete.
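The hello-world workflow just described can be sketched roughly as follows; the image name and argosay arguments are taken from the demo as heard, so treat the exact values as illustrative:

```yaml
# Minimal sketch of the demo workflow: one template, one container,
# prints "hello argo" to the console.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix to the name
spec:
  entrypoint: main             # the template to start from
  templates:
  - name: main
    container:
      image: argoproj/argosay:v2
      args: ["echo", "hello argo"]
```

Submitting this YAML (through the UI or CLI) creates a single-node graph whose only node is the pod running the container.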
Let me show you a more advanced example by uploading one called coin-flip-recursive. This is a more complicated workflow that simulates flipping a coin. The coin flip is basically a random number; you can see on this line here that if the random number is zero it's heads, otherwise it's tails. It prints that result and flips the coin repeatedly until it comes to a conclusion. So I just submit that workflow, and you can see it then shows you a graph of the steps of the workflow flipping the coins; each step is an execution of a pod. You can see these ones on the right-hand side of the graph ended up skipped, because they didn't get the outcome we were waiting for, which is to flip heads; then finally we get heads, and the workflow is complete. So I can use the user interface to do that. Some other features of the user interface are things like workflow templates, if I want reusable templates I can use across workflows, and cron workflows, which are workflows that execute on a schedule: once every minute, once every hour, once a day. And then the other big feature here is a thing called the workflow archive. To keep down the number of workflows in your running set, which costs money, you can archive them to a database for doing data analytics afterwards. There are none in here; normally they get automatically archived, so I've probably got it turned off. And obviously we've got some searching.

We also have a CLI tool called argo, and you can brew install argo to get it running. It has very standard Kubernetes-style argo commands; some really key commands are, for example, submit to submit a new workflow, or list to list workflows.
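The coin-flip-recursive idea can be sketched as a steps template that calls itself until heads comes up; this is condensed from the demo and the well-known Argo examples, so details like the script body are illustrative:

```yaml
# Sketch of a recursive coin flip: the "tails" branch re-invokes the
# coinflip template, so the workflow loops until the flip returns heads.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: coinflip-recursive-
spec:
  entrypoint: coinflip
  templates:
  - name: coinflip
    steps:
    - - name: flip-coin
        template: flip-coin
    - - name: heads                 # runs only if the flip was heads
        template: heads
        when: "{{steps.flip-coin.outputs.result}} == heads"
      - name: tails                 # otherwise recurse and flip again
        template: coinflip
        when: "{{steps.flip-coin.outputs.result}} == tails"
  - name: flip-coin
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        # zero means heads, anything else is tails
        print("heads" if random.randint(0, 1) == 0 else "tails")
  - name: heads
    container:
      image: argoproj/argosay:v2
      args: ["echo", "it was heads"]
```

The skipped nodes on the right-hand side of the graph in the demo are the `when:` branches whose condition evaluated to false on that iteration.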
It'll show me a list of my workflows, and then I can get a workflow using argo get, and I can ask for the output as YAML, and it'll show me the YAML of the workflow as well. Now, those of you familiar with cloud native will probably recognize the standard format of a Kubernetes manifest: you've got some metadata, such as the name and namespace; you've got a specification block, which defines what the workflow is; and finally a status block, which explains the current status of the workflow and how it's been executing. These can also be managed using kubectl commands; this is the third way you can get access to your data, so you don't actually need to install an argo binary if you've already got the kubectl binary and access to the cluster.

I did say five ways, and now I'm trying to remember what the remaining ways are. The fourth is programmatic access. Argo Workflows is intended to be embeddable, and one of the use cases is being embedded in other platforms for executing their workflows; Kubeflow Pipelines embeds Argo Workflows, for example. This includes programmatic APIs: a Python API contributed by the open source community, and also a Java API that we've done for our internal users who are using Java. The fifth way is probably just using the API itself: we have an OpenAPI interface you can use, and there is a standard Swagger JSON document that explains how to use it; it's actually not too dissimilar to the Kubernetes API. I'm going to pause there in case people want to ask any questions about the things I've gone over, and I'll give you a bit of a pointer on how to find out more about Argo Workflows if you want to dig further.
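The three blocks just described, metadata, spec, and status, give a workflow manifest roughly this shape when fetched back as YAML; the field values here are illustrative, with only the metadata/spec/status split taken from the discussion:

```yaml
# Shape of a workflow manifest as returned by "argo get -o yaml" or kubectl.
# The spec is what the author wrote; the status block is written back by
# the workflow controller as execution proceeds.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-world-abc12      # name and namespace live in metadata
  namespace: argo
spec:
  entrypoint: main
  templates: []                # templates as authored (elided here)
status:
  phase: Succeeded             # current state of the whole workflow
  startedAt: "2020-06-01T00:00:00Z"
  finishedAt: "2020-06-01T00:01:00Z"
```

Because this is an ordinary Kubernetes custom resource, kubectl can create, get, and delete workflows without the argo binary installed.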
Thank you very much. I actually have plenty of questions in my head; I wonder which are most pressing for this meeting. You mentioned Kubeflow Pipelines, and I've heard about that before; I'm also aware of the Kubeflow DSL. How does the Python DSL you have relate to the Kubeflow DSL? Is it more or less the same code base?

They use Argo Workflows as the component within their software to execute the workflows, and they actually have their own Python language that they transform into YAML. We also have a separate, independent SDK that people use. As I mentioned, we're very popular in the machine learning community, and a lot of people in that community like to use Python, so these kind of naturally evolved as a result.

I also learned that Argo has a larger ecosystem: you have Argo Events, which uses gateways and sensors and also speaks CloudEvents. With all the extra tooling, Argo CD especially, where you have experiments, workloads and so on, does it eventually boil down to the workflow and the workflow controller resources, or are these separate projects?

They're separate projects. Argo CD is a bit off to one side; its job is basically to get the contents of your Git repository into your cluster, and it roughly does that by doing a series of kubectl apply commands under the hood, though obviously it's more complicated than that when you actually dig into the details. Argo Events is quite closely related to Argo Workflows, because Argo Workflows only provides the execution component of workflows, plus some very basic cron triggers, which are quite recent. If you want to do things like trigger a workflow from a GitHub event, or, you know, a drop into an S3 bucket, then Argo Workflows doesn't have any kind of native support for that, and Argo Events is the sister solution to use alongside it.
I'm going to pause. I know that Ed has joined.

Hey, this is Ed, also from Intuit. Just to expand on what Alex was saying: Workflows and Events are particularly closely tied, and then Argo CD and Rollouts are also kind of closely tied. Obviously Argo CD is for deploying and managing applications, whereas Workflows and Events are more about triggering asynchronous processing or batch jobs on Kubernetes. However, they are related in the sense that a lot of people will create pipelines which basically consist of events triggering workflows, that kind of thing, together with Argo CD, for example. So one way to view it is that Argo CD handles the management of your typical services-based computing model: you have services running, listening for requests, doing things. Workflows is more your batch style of processing model, and Events is your event-driven or event-based processing model. In many complex applications, you'll actually use all three forms of processing: you'll have a web server or something listening for requests, but you'll also have asynchronous processing, or you'll queue things for data processing, with things triggering off the back end. The overall idea of the Argo project is that we're building a toolkit that can encompass all of these modes of computing so that you can put it all together. Does that make sense?

Yes, perfectly. So Argo CD is more of the integrated solution where you would automatically allow callbacks from GitHub to trigger your pipelines for continuous integration, and Workflows and Events together can achieve a similar thing, but they form more of the really event-driven processing model, as you explained.
Events and Workflows overlap with CD only in the sense that, I mean, you could use Workflows for CI as well, of course; with Events you could trigger off of GitHub events and do CI type of stuff. The CD part mainly just deploys things into the cluster. You could use something like Events and Workflows, or CD, to do what we call auto-sync: that is, when something changes in a GitHub repo, you automatically sync it, although Argo CD provides a separate mechanism for that. CD also has a very full-featured UI specific to deployment, so when you deploy something you can see what's running, you get health status, you can diff what's running in the cluster. Basically, it serves some of the same purposes as the Kubernetes dashboard, except the Argo CD dashboard is tailored much more to the applications running on the system than to arbitrary resources, although you can see any resource deployed to Kubernetes through it. A lot of our Argo CD users are, you know, an operations team that wants to provide a solution for their developers to deploy and manage applications in Kubernetes, and instead of giving each of them namespace access, basically a kubeconfig, they just give them access to an Argo CD pipeline and they do all of their work through Git. As a result, the developers are actually using Argo CD as the main UI to Kubernetes.

So we are targeting two kinds of users. One is ML and data processing users: they use Argo Workflows with Argo Events to do any kind of data processing, moving from tools like Airflow, and doing MLOps. The other side is application developers: they use Argo CD to sync their clusters with whatever is defined in Git.
Now, Argo CD can work with any pipeline: Jenkins, Argo Workflows, Tekton, any pipeline. Argo CD does the sync part and shows the applications, integrated with logs, troubleshooting, all of that; but to drive a change across different environments using Argo CD, you can either do auto-sync or drive it through a pipeline, and that pipeline can be Argo Workflows, Jenkins, or anything else.

Yes, thank you very much. So Argo CD is the much more integrated system, tailored to continuous delivery. The notion of workflows formed from events, making up the event-driven processing model, is actually something the Serverless Working Group concluded towards the end of the whitepaper. Having looked at the serverless landscape, I think one conclusion is that events would trigger functions; I think "function" is also the main term used in the Serverless Working Group. Functions could then emit other events to trigger other functions, and this represents, or makes up, a workflow. So it's kind of decomposing the application workflows into events and functions. And while CloudEvents did a very good job at standardizing the event format, reaching 1.0 and very good adoption among public cloud providers, the workflow task still remained, and this is where our subgroup comes in. In the Serverless Workflow subgroup, the initial work started with adopting a language that is a little bit similar to Amazon States Language, if you're familiar with that, or FunctionStage by Huawei, if you know it. It has evolved a little further, with most of the work done by Tihomir. We have also recently applied to become a sandbox project, so as to host the work done within the Serverless Workflow subgroup and to get a little more presence among the CNCF projects ourselves.
And this is where we had the review meeting with SIG App Delivery. Maybe, Tihomir, if you want to pull up the slides, or I can do it, to introduce where we stand with the serverless workflow language so far.

Yeah, definitely. Thanks, Manu. Well, first, hello everybody. I'm really happy about this meeting, and the Argo presentation just now was really nice; I really liked it. My name is Tihomir, and I just want to introduce myself for a minute, if you guys don't mind. I work at Red Hat, and I've been around workflows for years, so even though we have a small community, we have people on board who have been around for many, many years. The reason I want to introduce myself: I go all the way back to the times when we had very custom markup formats to describe workflows. Over time, and especially at Red Hat, I'm kind of in your guys' boat: everything I've been doing for the last decade or more is open source. The reason we got involved with the CNCF, especially the workflow group, and have invested a lot of people and time into it, just like you guys have from a community perspective, is that we really need specifications. At Red Hat I've been using BPMN2, DMN, CMMN, and in my opinion it's very important for any open source project to really utilize specifications. And that's where we come in, especially now that we're actually writing a complete runtime implementation on our end for the Serverless Workflow specification, so it is implementable. If you guys have any questions regarding that, I'll be more than happy to answer; but that is the point where we come in, and I think this is the integration point. I think Argo is amazing as a runtime and in everything it can do.
But we are more focused here on the specification, on the model for representing what you guys, and us as well, of course, call workflows. Can I share the screen, Manu? Is that okay?

Yeah.

I don't want to go through the whole presentation that I did with the SIG, but can you guys see my screen now?

Yes, yes.

I just want to go through a couple of slides; our time is limited and I don't want to bore anybody, but I think two of our slides represent the specification overall. This slide shows the state of the workflow world now. We are moving away from the BPMN2-driven world, which really put the runtime implementation, the entire workflow work, under vendor lock-in because of specific tooling as well as runtimes. BPMN2 is a huge specification, but it has its issues; this is not the place to discuss them. The point is, the world is moving into what you guys are also doing: JSON and YAML markup work best for serverless. Why? Because you are simply orchestrating event-based workflows in the cloud, and the number of different JSON/YAML-based markups is growing every day. There is a need for standardizing this, I think in your case too. You just said, okay, we can work with Tekton Pipelines, but can you really take the workflow notation that you currently have and port it to the different kinds of not only cloud platforms but also runtimes that exist? That's where specifications are important. The second slide I wanted to show, and I'll make it bigger, is what the Serverless Workflow specification is trying to do.
Again, we're not providing runtimes. Similar to your Python definition of your workflow, we are providing a JSON Schema that is the core of our model definition. Out of this JSON Schema we can create APIs and SPIs, and we are also thinking about providing a TCK in the future, because every specification definitely needs that. What implementations provide is the runtime. The goal of the Serverless Workflow specification, and both JSON and YAML formats are supported, is to be able to execute on different runtimes and different cloud platforms. That's where we are at: we are working as a team on defining this JSON Schema model so it can be used across different runtimes. This is the problem we see currently; I don't know how it is with Argo, you can tell me. When you start to use a workflow solution today, you're kind of vendor-locked, especially on the workflow model and the workflow notation, because if you look at AWS, Microsoft, and so on, everybody has a different model for representing workflows. Our specification doesn't go into runtimes, maybe not yet; we really focus just on the model. So, from the integration perspective between the two groups, I think we have to talk about the notation: what can be expressed with Argo Workflows, and can the same thing be expressed with our current JSON Schema? Now, whether there is interest in that on your end, I don't know, but at least with this meeting I think we can start some discussion on that. What do you guys think?

Next question: is there any other implementation of this specification yet? Anybody else?

As far as we know, no. We have an implementation at Red Hat, and I can do a demo of it if you guys want; it's a fully event-based triggered workflow.
If you guys are interested in that, I'd be more than happy to do it now or whenever.

Yeah, I would be interested in seeing it. I guess if you're going to do this kind of stuff, you do need some kind of reference implementation. Just like you guys have a runtime engine which takes your custom Kubernetes resource, the YAML that you produce, and converts it, probably into an internal object structure; that's the kind of thing any runtime will probably do, right? You need to represent the JSON and YAML in some sort of internal object model that can be executed.

Correct.

Are you also looking at specifying the events specifically?

Oh, yes. I want to show this slide, which covers what Serverless Workflow is. We focus on a language that allows workflows to orchestrate microservices. What does that mean? Event-triggered workflows over microservices that can, of course, be repeatable. There are three parts of the Serverless Workflow specification. The first is function definitions. We of course do not care how these functions are written; they can be polyglot, and we're not prescriptive about where you define them, but we define how these functions can be invoked; that's under the functions section, and we can look into the example. Then we have events. Events are a core type of structure in the specification: events can start workflow execution, events can be produced when the workflow execution ends, and events can also be produced during the execution of workflows. As far as parameters go, they're passed to functions as JSON, so they can be events as well. So the CloudEvents format, which represents events in JSON, can be used pretty much throughout the workflow.
Now, the third part is what you guys call steps and we currently call states: the building blocks, the control-flow logic blocks, that allow you to do things like what you can already do, for example parallel execution, a split/join type of situation, and so on. Let me get to this slide real quick.

Is this something end users write directly, or something that other workflow tools will kind of compile down to? Excuse me, I didn't hear the question very well.

So, this JSON syntax and everything: is the idea that end users, you know, ML engineers, data scientists, are actually going to write their workflows in this spec, or more that they will be using something else, but it will translate or compile down to this?

The idea is similar to Argo, which you just showed in your really nice demo: just like you allow users to enter YAML code, users are supposed to write their JSON or YAML code conforming to the JSON Schema. So yes, this spec is intended for use by end users.

So you said something about model and notation, and then you said you're focusing on the model.

Definitely.

So this is the model. I didn't understand the difference you meant between model and notation, because if this is the spec that the user is supposed to write, isn't this the notation?

No, this is the model; the notation would be the graphical or visual representation of your workflow.

Right, okay.

We do not tackle that part. I don't think Argo at this time does either; maybe I'm incorrect.

So do users write JSON or YAML, or do they create the workflows visually, or both?
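Putting the three parts together, functions, events, and states, a workflow in the spec's YAML form might look roughly like this; the field names follow the structure described above and early drafts of the specification, so treat them as an illustrative sketch rather than the exact schema:

```yaml
# Illustrative sketch of a Serverless Workflow definition: one function
# (how it is invoked, not how it is written), one starting CloudEvent,
# and one explicitly typed state. Field names are assumptions.
id: greetOnOrder
name: Greet On Order
version: '1.0'
functions:
- name: greetFunction
  type: rest                   # user-defined runtime type
events:
- name: orderReceived          # a CloudEvent that can start the workflow
  type: com.example.order
  source: orders/
states:
- name: Greet
  type: operation              # states carry explicit types for tooling
  start:
    kind: event                # triggered by the orderReceived event
  actions:
  - functionRef:
      refName: greetFunction   # invoke the function defined above
  end:
    kind: default
```

The explicit `type:` on each state is what the spec relies on for tooling and readability, in contrast to Argo's container-centric templates.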
No, for this specification, we only focus on the markup, the model, which is the JSON or the YAML.

But you expect end users to write that directly, right?

Yes, with help of course, and this is where I wanted to go. Just one second.

Are there implementations of this spec today?

Currently just one.

Okay, and how many users are using it today?

As far as end users go: the community around the Red Hat Kogito project, which is adopting it right now. This has fairly recently started; the specification has been evolving for over a year and a half now, and we started the implementation over six months ago, which has been completed, and now we're adopting it. Speaking just for myself, it will also be pushed toward Red Hat products, so you will also have community users from our community.

Do you mostly use it inside Red Hat, is that what you're saying?

Yeah. We have an open source project which used to be jBPM and Drools, and we have evolved that into a new project called Kogito, which now also includes support for BPMN2 and DMN, and now the Serverless Workflow specification as well. We see it as one of many formats, and we're also targeting Kubernetes and things like that on our end, but that has nothing to do with the specification; I don't want to waste anybody's time. The nice thing about YAML, which is an extension of JSON, and the JSON format, is that you can write plugins. For example, I wrote a VS Code plugin for IDE support. One of the things that both Argo and anybody adopting this can do is write very simple IDE support to help users write the JSON or the YAML. I think that's nice.
A couple of points I want to highlight; one of them will sound like a low-level technical detail. Basically, all our workflows boil down to directed acyclic graphs, in the same way that if your programming language is Turing complete, you can do anything any other programming language can do. If you can boil your workflow down into a directed acyclic graph, where each node is a function invocation, then you can basically model every workflow you need; anything else you do on top of that is a bonus, or could be syntactic sugar, and we have plenty of effectively syntactic sugar in our spec. And that SDK aspect is really important for uptake. I can see that defining a workflow specification allows you to really decouple those two aspects: have one organization build various SDKs in different languages, and another organization build the actual workflow executor, quite separately. And I wouldn't underestimate the importance of that kind of tooling to users. Our users don't really like using YAML at all. They tolerate it, but it doesn't have anything like the tooling they're used to when writing code, such as auto-completion or sophisticated syntax highlighting; YAML doesn't give you any of that. You effectively have to code your YAML, submit it to be executed, and if it gets rejected by the executor, then you know it's syntactically invalid. It's much better to catch that in the IDE.

Yeah, definitely.
The only thing we have done: we personally had conversations with IBM about a tool called Node-RED, and they're building a palette for it, but that's separate, especially because in the specification, like I said, we haven't tackled the notation part yet. It might come later, and maybe it's a collaboration between our teams, or whoever community-wise might be interested, but that's a big task. You already have something amazing in BPMN; I talk about it a lot, and the notation part is really useful and nice. Whether we do something different or reuse something that already exists in that regard is a decision that still has to be made. But I did want to show, from our end, just quickly, and you guys can go through it afterwards: we did create examples. I took these from your examples, I hope you guys don't mind, and we have written a comparable side-by-side comparison between Argo and Serverless Workflow. I think it shows a lot of different things. Number one, functionality-wise, so far, and of course your examples might not cover all the functionality Argo has, so maybe we can work on this, they're fairly comparable, from what I've seen so far. There are some things that Argo does that are really different, that we currently do not support in the Serverless Workflow specification, but we are not set in stone, and we can collaborate on how we can possibly match up functionality-wise.

Do you have an example of a DAG?

Yes, I think so; I took the DAG example from Argo, actually.

By the way, I like your comparison here.

Yes, and I think it's really important to have this in order to actually say, hey, can we even integrate this, or is this even useful?
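For reference, the DAG style being compared can be sketched as an Argo dag template with explicit dependencies; the diamond shape and task names here are illustrative, in the style of Argo's well-known examples:

```yaml
# Sketch of an Argo DAG template: a diamond of tasks where B and C both
# depend on A, and D waits for both B and C to complete.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
      - name: B
        dependencies: [A]      # runs after A
        template: echo
      - name: C
        dependencies: [A]      # runs after A, in parallel with B
        template: echo
      - name: D
        dependencies: [B, C]   # runs only once both B and C succeed
        template: echo
  - name: echo
    container:
      image: argoproj/argosay:v2
      args: ["echo", "hello"]
```

A side-by-side comparison then asks whether the same diamond, the parallel fan-out and the join, can be expressed with the spec's typed states.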
And I hope to add more examples, and hopefully to get help from you guys on which examples would be good to add; that's maybe something we can talk about together. So what I've seen so far, my understanding is that the Serverless Workflow specification is more verbose. In most of the examples you will see that the amount of YAML you need is typically more than what you guys currently have, so we can work on reducing that; I think that's one of the tasks we got out of this. You have to repeat things over and over, right, like the operation defaults. Yes, definitely, that's something we can work on. On the other side, I think the one thing the Serverless Workflow specification has right now is that it defines a more, I don't want to say readable, but a somewhat more structured format than what Argo has, and I think that's maybe a trade-off that we can look into and compare and contrast and work on together, if you guys are of course interested. I think the Serverless Workflow specification defines more concrete states with types that can be more easily translated for tooling, and gives more readability, if you can even get that without tooling. But that's the comparison. There are some things that Argo does, functionality-wise, that we do differently. For example, timeouts. In one of the examples you guys have a workflow with a timeout, where you start a workflow and say, okay, if it doesn't complete within a minute, just exit. In the Serverless Workflow specification it's done a little differently: for example, a timeout on the events that start the workflow instance, or a timeout on actually executing a function, which in your example is a function defined in a pod. So that's one of the differences.
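To make the timeout difference concrete, here is a hedged sketch of the two styles. The Argo half uses its documented workflow-level `activeDeadlineSeconds` field; the Serverless Workflow half is only illustrative of the per-invocation timeouts described above, and its field names are assumptions that vary between versions of the specification:

```yaml
# Argo: one deadline for the whole workflow — everything stops after 60s.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: timeout-
spec:
  activeDeadlineSeconds: 60     # workflow-level deadline
  entrypoint: sleep
  templates:
  - name: sleep
    container:
      image: alpine:3.7
      command: [sleep, "120"]
---
# Serverless Workflow (illustrative sketch): timeouts attach to the pieces,
# e.g. to a single function invocation, rather than to the workflow as a
# whole. "timeout" here is an assumed field name, not a definitive one.
id: timeoutexample
name: Timeout Example
version: '1.0'
states:
- name: InvokeService
  type: operation
  actions:
  - functionRef:
      refName: myFunction
    timeout: PT1M               # per-invocation timeout (ISO 8601 duration)
  end: true
```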
Another difference is that, in order for me to translate these examples, we had to use metadata. For example, that's one thing that is different: you guys have the container definitions, right, where you explicitly say my function that is exposed in Kubernetes is in a container, whereas the Serverless Workflow specification abstracts that into functions. So where do you map that to a container? Is it a container? Because we don't only work at the Kubernetes level. No, no, but if it is implemented as a container on Kubernetes, where do you... Well, the specification defines two things. One is a type, which is a user-defined type, so you can use the type parameter to specify your runtime: this is a REST API, or this is something that runs in a container, or this is a Kafka event, or this is a Java or a Python interface, right. And we also have metadata extension points at both the state level and the function definition level, and also at the whole workflow level, so you can implement things like logging or tracing through extension points if you want to. I guess one big difference is that this is not only for Kubernetes, or even containers. Yeah, as a specification we cannot focus on one thing. So hopefully we can figure out whether, as far as functionality goes, it can be used or not. Yeah, whereas Argo Workflows is only for Kubernetes. So, as I see it, it would be more like a layer on top of Argo, because somebody needs to generate the Kubernetes manifests; Argo workflows are all Kubernetes manifests which can be applied directly to Kubernetes.
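The `type` and `metadata` extension points being described can be sketched like this. Everything below is a hypothetical illustration: the endpoint, image, and metadata keys are assumptions about how a Kubernetes-based runtime such as Argo might bind a function to a container, not fields the specification mandates:

```yaml
# Serverless Workflow abstracts "what runs" into a function definition.
functions:
- name: processOrder
  operation: http://example.com/orders.json#process  # hypothetical endpoint
  type: rest          # user-defined runtime type: rest, container, kafka, ...
- name: resizeImage
  type: container     # a runtime that understands containers could
  metadata:           # read its binding from metadata (hypothetical keys)
    image: example/resize:1.0
    command: "/app/resize"
```

A mapping layer of the kind discussed in the call would read `type` and `metadata` and emit the corresponding Argo `container` template, while a non-Kubernetes runtime could bind the same function to a REST call instead.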
So somebody, either the Argo team or somebody else, has to write a mapping layer on top of the Serverless Workflow specs to convert them to the Argo Kubernetes manifests. Yes, assuming that the languages reach feature parity, so that we could represent everything in Serverless Workflow that Argo can currently express with the Workflow CRD. Then yes, this would simply be a layer on top. If there was translation to be done, I think that's manageable; if there was, however, any mismatch, like the workflow timeout that we have already identified, then there's probably more work to be done on the language specification. The thing is with adopters: Argo is a big project that has lots of users, is field-proven, and is a production-ready implementation. At the Serverless Workflow subgroup, as you've already seen, we're not covering the function binding. So the binding to the Kubernetes platform or to a container environment remains unspecified, but there is the common concept of having pieces of work expressed or modularized in functions, and then giving some control structure to the execution of this work. So at this level the workflow language tries to express the control logic. I'm with Nokia Bell Labs, and we are looking at this to find a way to have a common description language; we execute things completely differently, and I think Kogito does too. Kogito, if I understand correctly, is very much tailored to Java functions, and through other function bindings can invoke a lot of different workloads. We, for example, would compile the entire workflow into a single container runtime.
So it's a completely different execution model underneath, but what we would like to have is a commonly adopted or accepted workflow language, and maybe also to reach consensus on the terminology across several projects, because, as we've already figured out, Amazon States Language calls these individual steps states, and I don't think States Language has only one forward pass, since a state machine would not need to be acyclic. So there is a lot of alignment to work through, which eventually benefits the user, right: the user has only this one learning curve, adopts this one terminology, and then knows how to operate in different environments. This is, I think, what we want to get to with the standardization of the workflow language. So do you have any interest from Amazon or Microsoft in also adopting this, for like persistent cloud functions or stuff like that? So, I know there are several larger parties involved in the CloudEvents specification and in the serverless working group. Their interest in this subgroup's task has been, I don't know, maybe sidetracked a lot because of the CloudEvents specification, but they're also a bit hesitant to jump to a workflow specification. We had ideas that maybe it is too early to talk about a workflow specification, and that rather the statefulness of serverless executions needs to be discussed first; that would be your artifact layer. So maybe it is too early yet for them. Maybe we just need to get a little bit further, like a chicken-and-egg problem, to have a little bit more meat before we can actually have them participate in the discussion. And could this also model something like Airflow? Should a user use Airflow through this specification, or is that not in your scope? So, I'm not that aware of Apache Airflow. What we're trying to do, number one, is vendor neutrality, and we want to be portable.
I think, given the current situation with all these different YAML- and JSON-based workflow notations, there has to be some standardization, and there will be: if it's not this specification, somebody else will create one. And Argo is a project, and you guys can prove me wrong, but I've been working in open source long enough to know that writing your own markup for something has its limits and its life expectancy. If it's not this specification with the CNCF, like I said, a specification for this will be created, and it really depends on who adopts it. Maybe the big guns adopt it, like Microsoft and AWS; Amazon probably not, but they might in the future. But as far as smaller open-source projects go, it's very important for survival, especially of your notation. And you guys know this better than me. We had things before, like custom markups, even before BPMN, and it always fell short. I think other smaller open-source companies can also speak to how specifications help you in many ways. We're not perfect; there are a lot of things that we want to change, and we would like community input to make our stuff better, but at the same time we want to work with you guys. So that's the type of thing. Like Manu said, we're in a chicken-and-egg situation: we're a specification at the CNCF that is looking for adoption, but at the same time we need adoption in order to be allowed to grow. This is where we would like to find some sort of community and interest within other projects. We're not saying do one or the other. But we would also be willing, of course, to help projects that say, okay, we have some interest here, and we would like some sort of adoption in the future; we'd help out there as well, as far as pull requests go, because in the end we're all CNCF and we're all open source under a permissive license, right?
So that's the kind of thing we're looking for right now. What do you see as the main goal? Is it that this is glue that can tie together workflows that run on containers, Kubernetes, Lambda, whatever? Or are you targeting a particular application? What are you focusing on in creating this spec? The focus is very similar to what the majority of the current JSON- and YAML-based workflow specifications do, which is workflow orchestration. Essentially, especially in the serverless community, you have many, many different microservices deployed; some might run on Kubernetes and some might not. That's not the exact question. For example, how important is it that your spec be able to encompass both FaaS as well as Argo-style Kubernetes workflows, container-based workflows? I think it is important to do both, because at the end of the day we strive to be able to model workflows in both worlds. Is there such a big difference between those? Yes, I believe there is, in the execution environment, but we'd also like to distinguish between serverless in terms of the programming model, basically how developers write their code, versus the execution model using containers. In terms of the programming model, developers at Intuit are not really interested right now in writing applications in a FaaS style. They actually find it much more difficult to do it that way. However, there is a lot of interest in using events to trigger workflows and doing more coarse-grained async processing. So I don't know if that distinction makes sense.
So, unifying very disparate workflow models, like FaaS or an event-based model, I mean you could think of it as workflows or as event-based processing, or even what FaaS is, but unifying very disparate models like FaaS versus batch versus Lambda, that kind of thing, may be very hard to do while also targeting a particular community. So that's just my random thought. Since we are almost over time, I had one question: does Red Hat or any of the members of this work group have any resource who can map the two projects' specs? Like, do you guys have anybody who would be interested in doing it? Yeah, I think we should definitely contribute the conversion, because, as I said, it has to be a layer above, because Argo is very Kubernetes; it is Kubernetes CRDs and manifests. How is the work split up today in the working group? Who's working on what? First of all, I find this very interesting. Yes, specifications help, totally agree. I'd definitely like to have some more conversations. Personally, I think it would help us map out the space and at least figure out what part Argo Workflows fits into. I mean, we have our idea of where it fits in, but we're not so familiar with things outside of that domain. And of course, you're also representing communities of users, either inside your company or your customers. So I think that engagement would be very useful. But how is the work for the working group currently split up? Like, who is working on what? Well, we're currently a very small team, right? We have, of course, Manu with Nokia, we have Red Hat, we have Camunda on board, and Huawei. And we have a small community of people who join the meetings and are involved. As far as work goes, we have some growth needs, of course, on the community side. We do all kinds of things. We have a roadmap where we add what we are planning to do.
And basically, we're doing it as time allows, like other CNCF projects do as well. Yeah, I'm sure this is all public information, but if you would just send us a link to your roadmap, that would help us. Yeah, definitely. So, like Red Hat, for example, I'm assuming everyone is contributing to the spec and interested in it, but Red Hat also has the implementation that they're working on. Yeah. I'll do the demo in the next meeting, because of time. Absolutely. Yeah, but I do want to say there is interest. Even though we work for different companies, since this is a specification, we're not pushing any single vendor's interest into this at all. We are looking at the entire community's needs, and having the Argo community involved, just allowing us in, would be a huge thing for us. I don't think you guys understand how big this would be for us, to have such a community with you guys that has not only an implementation, but also a much larger community and exposure and everything, to help us out. And at the same time, of course, we would be involved in helping with the integration as well, so it's not like we don't have hands. Absolutely. I mean, we already work with several groups at Red Hat, actually, with Argo CD as well. Definitely. So that would be nice: if you guys see interest, and I do see the need, Argo would be nice. For us, of course, we're within the CNCF; we could use a little bit of help to push us forward as far as infrastructure goes, and growth within the CNCF ecosystem, because with the CNCF it's easy to do projects, but it's harder to do specifications without some sort of adoption. So we're kind of here saying, okay, we're here, we have something I think is useful. It is, of course, not perfect and needs community work and community love, basically.
However, I think there is something here, and if we can possibly work with other CNCF projects like you guys, that would be big. Currently, we have monthly community meetings, the next one being the first Monday in June. And also for next week Monday we have a primer call scheduled, I think at about the same time as today's call, in which we discuss the base concepts of the language and whether we took the right turns, and try to summarize this into a sort of conceptual, motivational primer paper. Personally, I can say I'd be very happy to welcome you to the calls. For me at Nokia Bell Labs, I have a similar situation of whether I should root for adopting it or not, because we've recently launched canyx micro functions, an edge serverless platform, and I'm in the same boat of whether we should make the effort and implement this workflow language specification, or try to shape it a little bit further before we do so. Yeah, I mean, I guess being too early is just as bad as being too late. So we'll go back and look at the comparison, those examples that are on your GitHub repo, right? Yes. Okay. What is the name of the Red Hat project? I don't know if there's any documentation. Kogito, K-O-G-I-T-O. Yeah, there is, sorry, a website and all kinds of stuff, and I'll send you in chat all kinds of links to examples and everything you want. But we're not promoting that in the specification or anything, because in the end it's just a runtime implementation. Yeah, what we find is that, I'm not sure about specs, but for actual tools that you want people to use, it's obviously very important to decide which community you're targeting, because you can't go after, like, three different communities at once. So I don't know if you have broad agreement there or whether...
Some people interested in a spec right now are more into that type of model; others are more into ML-processing types of things, but they are very different communities. And if it's too early, then it's kind of hard to create a spec that will be adopted by multiple communities, so those are some of our concerns; I'm sure you have some too. And I think you're very correct in that the people we have involved in the specification have been around business process modeling for many, many years, so yes, currently the specification is tending in that direction. But I think that's where a talk like this, for example, is big, because then we can reach out to you guys for help on all the other environments as well, and get help and input on how to improve. Yeah, when we were starting the Argo Workflows project, we obviously looked at all the different workflow engines that were available, and there were many, even at that time. So we tried to pick an area we felt didn't have too many existing solutions targeting that particular use case or community, and that we felt was likely to grow rapidly; that's how we picked the ML and data processing area. And we'd love, of course, to understand better what the other members of the working group are thinking in terms of use cases or communities, or what they personally want to use it for. So this is great. Yeah, let's set up a follow-up meeting to get further into this. Yes, perfect. Would you like to join our next community call, or should we make it a separate series? If it's not too busy, joining would be great; if you have a packed agenda, then it would probably be better to have a separate one. No, I think we're good; we separated out all the primer work. Okay. Yes, please forward us the invitations. Yes, I will. Thanks. Thank you so much. It's nice to meet you. Nice to meet you too.
You guys know you can just reach out to us on Slack if you want to ask any more questions. Yes, we will. Yeah. Thank you. Thank you. Thanks all. Bye. Thank you guys.