Hello, Lewis. Thank you for all that MQTT feedback.

Yeah, you're welcome.

It was very helpful. You caught all my sins. You can tell that I wrote the MQPP spec and then converted that to MQTT. You caught me in a few places there.

Yeah, that's the way to do it. That's what I do all the time.

Hey, guys. Good morning.

Hello. You sound very happy and cheery today.

Good morning. Hey, Lewis.

Good morning. Hey, Austin. You know, one of these days I'm going to figure out, and then remember, how to actually spell Huawei. I can never remember that. It's one of those words.

H-U-A-W-E-I.

Okay. H-U-A-W-E-I. Thank you. All right. Got it. So, Austin, did you have a nice little vacation?

Hi, Doug.

Hey, everyone. Yeah, I went to Sedona in Arizona. It was beautiful. It's one of those places that people say, oh, it's so beautiful, so beautiful, and then you get there and you're like, wow, it really is. It really is so beautiful. I spent a few days hiking and just being outside and away from the screen.

Cool. Hey, Aaron.

Hi. I brought a guest with me, Aaron.

Hey. Hello. Am I sharing the right window? All right. Thank you, Mark.

I think we made this time for Rob to join. I forgot all about that. It'd be funny if he didn't join. Remind him. All right. You want to get started, Mark? Or do you want to wait?

I don't think he's going to join. Sure, let's get started. I believe that from last week, we were curious what Austin has come up with. It's one of the key items. I think there's an issue that you put in that we didn't get to discuss on Thursday. So Austin, would you like to start by discussing it?

Sure thing. Let's see. I outlined the goals of my presentation at the beginning of this issue. I want to introduce CloudEvents and announce the initial version that we finished; make sure we communicate the why: why this is important, why it's going to be a big deal now and in the future; and communicate use cases of CloudEvents.
I want to paint a picture of possibility, of what CloudEvents will enable in the future, and then also make it real for the audience by showing off just a taste of this in a real demo. And lastly, of course, tell our story of collaboration and celebrate the people who've been working on this.

Anyway, I only have, I think, 30 to 35 minutes in the talk. At the same time, there are a lot of people who've expressed interest in participating in this demo, and there's a lot of stuff we could do. So it's a bit of a challenge here. For inspiration, I've outlined a handful of use cases as discussion topics, things that we could illustrate. And then I followed up with a comment on something I think we might be able to pull off, something that may be able to accommodate all the people who want to participate. This comment is a proposal for an event-driven e-commerce store. I'd simply walk through it during my presentation: I'd pretend to be a user and do a couple of actions in this e-commerce application. I'd register for the application, I'd view a couple of products, I'd encounter an error, and I'd also purchase a product. These actions are going to be expressed as events, and these events are going to be CloudEvents, of course. I think I might just publish them directly from the client side of the web application.

I outlined a few events in this flow. There's "user profile created", which happens when the user registers. A "storage object created", which happens when we fetch the user's profile image. An "error received", which happens when the user encounters an error on the client side and gets some type of 500-series status code. A "user viewed" event, which happens when the user is simply viewing a product on the website. And a "user purchased" event, which of course happens when I pretend to go and purchase a product.
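To make the event list above concrete, here is a minimal sketch of what the "user purchased" event might look like as a CloudEvents envelope, using the early v0.1 attribute names (cloudEventsVersion, eventType, eventID, eventTime, source, contentType, data). The event type string, source path, and data fields are illustrative assumptions, not agreed-upon schemas from the working group:

```javascript
// Hypothetical "user purchased" event published from the demo store's
// client side, shaped as a CloudEvents v0.1-style envelope.
// All concrete values (type name, source, data fields) are made up
// for illustration.
const userPurchasedEvent = {
  cloudEventsVersion: "0.1",
  eventType: "com.example.store.user.purchased", // assumed type name
  eventID: "a1b2c3d4",
  eventTime: "2018-04-20T12:00:00Z",
  source: "/demo-store/web-client",
  contentType: "application/json",
  data: {
    userId: "user-42",
    productId: "sku-1001",
    price: 19.99
  }
};

console.log(userPurchasedEvent.eventType);
```

The other four events in the flow would differ only in `eventType` and `data`; the envelope attributes stay the same, which is the whole point of the spec.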
The way I think this could work is if we incorporate our company's project called the Event Gateway. The whole thing was designed as an event router for the serverless era, and anyone could hook up their FaaS product to it directly. I could basically rig up this whole demo to publish events from the client side, through the Event Gateway, through to the various companies' FaaS products. In their respective platforms, they could also do some CloudEvents handling. I'm not sure how far they want to take it; it's really up to them. But this is the general story I was thinking. The question is, can we pull it off? Because there's a lot in here. And how does everyone want to integrate? My general thought was we'd publish these events and people can pick what they want to do with them.

I think the best is to try and show something which is a really mixed environment, like maybe events from a few cloud providers, maybe more than one router, and then multiple functions consuming them, to try and make it messy.

Yes, we can do all that. It'll be a complicated architecture and difficult to rig up. The other question is observability. My goal here is to make it real for people. How do we actually show them what's going on, across all these integrations? CloudEvents can solve this problem, because we finally have consistent metadata and we can put tracing information in there. In the long term, we're going to be able to nail that and provide a great event-driven observability story. But for the demo, which is coming up very soon, I'm not sure how much I'm going to be able to pull off; that's contingent on the complexity of the demo itself.

That's actually something I was trying to wrap my head around, because I understand everything you said there in terms of the demo.
You've got a client generating events; it goes through the gateway, which spits out events as function calls to all the various platforms, and each platform then receives it. The next step that I'm trying to wrap my head around is how you envisioned presenting the output from all our various companies receiving these things in your demo environment. Were you thinking something as simple as having a whole bunch of mini windows, you know, here's the Microsoft one, here's the IBM one, here's the VMware one, and look, they all received events and you can see they all got it? How are you envisioning this playing out or being displayed to people?

Great question. It depends, of course, on what providers want to do with this. Because we have a centralized piece of middleware which all this stuff will be routing through, we can always put in something to add tags and use that to create some type of visualization afterward. I don't know how much of that we're going to be able to rig up before the demo, so I might just have to kind of fake it for the demo.

Yeah, and I have another idea: Twitter, like a Twitter account, or an object bucket, where every function, after it gets the event, generates a tweet or writes a file or something like that. Then you can open the Twitter feed or the object bucket and show all those responses. You know, "got it from OpenWhisk"; everyone will sort of say "got it" on Twitter.

So you're thinking of each of our companies' functions basically generating a tweet, and then Austin can just monitor that tweet stream on the screen. You can see which company actually received it and tweeted about it.

Yes, and maybe even print something from the content to make it a little interesting.

Interesting.
It can generate a message that says something like "Hello, KubeCon", and it gets routed to all the functions, and every function will say "got this".

Interesting. Yeah, the challenge here is to define a simple story that everybody can integrate into, and that Twitter example is actually a pretty good one. I guess I also wanted to try and make sure that we're showing off the value of CloudEvents. What can we do to show off the value of CloudEvents in that Twitter example, do you think, Yaron?

Well, I would wave my hand and say at least that the events can be unified from multiple sources. I think just solving the shared problem of multiple frameworks receiving the same events from different sources; that's not trivial today.

Demoing TCP/IP is not attractive. I realize that; I've been doing this for the last 25 years. But we can do a fancy scenario and with that very effectively show off our software, or we can do a more modest scenario but show a kind of end-to-end interop between all the different platforms. And I'm more leaning towards the latter, I have to say.

What do you think that looks like, Clemens?

Well, there are function-as-a-service endpoints and there are publishers. I think if we can show that the function-as-a-service endpoints can receive and dissect CloudEvents from all kinds of different publishers, if we can basically just show that. And that might be such that, you know, you send an event and the code does nothing but dump the contents of that event into a log, and it does so on OpenWhisk, and it does so on our platform, and on some other platforms, and then you can go and publish to that from different providers and you basically wire them together. That's already a win, and for that we don't need to have a fancy scenario.
And the fancy scenario is harder and more work, I guess, than just taking existing event publishers and changing them to publish, or making up an event publisher that publishes an event you can go and generically parse.

I think that opening multiple consoles seems a little complex. That's why having something like throwing an object in a bucket, or a Twitter feed, or anything that can show all the messages arriving from all the different FaaS platforms, will make it simpler. I do agree that having more than one publisher allows us to make a stronger point versus having a single one: maybe even one event from Google, one event from Azure, et cetera.

Yeah, and so unifying all the kinds of events that we all have, because what happens here with the events is that there are some semantic alignments that I don't think we're ready for. Like the "storage object created" thing: there is no such event that all of us publish, right?

Yeah, that's what I mean.

You would have to normalize twice. You normalize from A, whatever current event format exists, to the shared event format that we have agreed to, and then you're doing a second transformation where you need to have the storage event and normalize that across platforms. And that's just not something we've done in this forum.

Right, but I don't think the example is that you get a notification about an object bucket change in S3 and in Azure and then you create a thumbnail, because that's really not something that we've standardized. It's just at least printing out, you know, "got an event from Azure" or "got an event from Google".

I agree. The visualization that you got the event, whether you want to do this via Twitter or via some other platform, that's fine.
I mean, you show the mechanics; you just want to surface the fact that someone has gotten a call, and you want to make that visible somewhere. That's reasonable. But I think fundamentally, what I see from Austin's proposal is that he's wanting to show a real-world use case. And I think what you're saying is, perhaps we don't need to show, or we can't show, a real-world use case, and we should just show interop. Is that the distinction I'm hearing?

Yes, that's my point. Showing interop is going to be easier than showing a full scenario, because the full scenario really means that we have to figure out how to align the semantics.

Yeah, to clarify, this demo tries to go for interop and a real-world use case.

So Austin, one thing I'm not really clear on. Let's say I have an S3 event notification that generates an HTTP request. That could go directly into the function. So what would be the role of the event router in that case?

Could you clarify that question, Yaron?

For example, I want to handle an S3 bucket event. I go to SNS, I say, here's my HTTP endpoint, send a message to it. And the HTTP endpoint, through an API gateway, will get to the function. So what is the role of the event router in that scenario? Is it that instead of registering the function, you register a generic location and it would route based on some policy? Or is it to register things that don't have a webhook?

If you wanted to create a scenario where you're reacting to an event from AWS S3, we could have that S3 event be published. It could be sent via SNS over to a Lambda function. The Lambda function could normalize it, and the Lambda function could send it anywhere, to be honest. Or it could go through the event router. To be honest, I think our event router is a very convenient way to rig up this demo. At the same time, I don't want the demo to be centered on our event router.
I am trying to show off the other mechanism too. I mean, thanks to CloudEvents, you could just send these off anywhere without an event router. I think the event router can be a value add.

Right, but if we can show something interesting about the event router, I think it's a plus. I'm just trying to see what that interesting thing is. Because if you're just forwarding an HTTP request, people say, okay, why do I need it? But if you're showing something more interesting, maybe transforming an event or something, then it shows value.

To me, I thought the point of the event router was to sort of, what's the word I'm looking for, interject the interoperability, in the sense that, sure, you could take an event producer, hook it up to an event consumer, and that could work just fine. But you could also very well just set yourself up to be locked into one or the other. Sticking the event router in the middle, where all five or however many platforms are hooked into it, allows us to easily have a single event producer broadcast this thing out to a whole bunch of different people. And while it's not necessarily ensuring interoperability just by doing the broadcasting for us, one, the producing side doesn't need to know how to do the registration with all these various guys; it goes through an intermediary. And two, because we have multiple consumers on the other side, they're all going to receive, in essence, the exact same event, and they're guaranteeing the interoperability because they're all processing it correctly. So I look at the gateway, or the router, as being the middleware that's helping guarantee that everybody can talk the same thing without being hard-coded to each other.

Yes, absolutely. And plus, there's going to be a lot of value in the future, because it's the centralized pipe where you can add tracing information.
You can do transformations, you can do validation. So you could do a lot there. But I just want to back up a bit. It seemed like there was some consensus earlier in this call that if we simply publish the same event to multiple FaaS providers, that would be enough to show off right now. Is that what people were communicating earlier? And is this what we think will be enough to win the hearts and minds of developers?

I think not just the same event. We can publish arbitrary events from arbitrary sources if they comply with our spec, right? So I posted this earlier; the email unfortunately didn't go out. I have this generic function, an Azure Function, which takes any Event Grid event and turns it into a CloudEvent. So anything that's in our platform, we can now spit out as a CloudEvent and then send it to everybody. I think what we need, Austin, is one, to show more than one event, maybe even from two separate cloud providers, just for the sake of showing off the interoperability. And two, we need to make it visual, with either the Twitter idea or any other idea where people can relate to the fact that it actually arrived.

Okay. I like this, Yaron. I think using Twitter to make it visual is great, because it potentially has virality built into it. People can look at it and interact with it, and I think that's a great solution. But what is the premise? What is the scenario, and how do we get more than one event from each provider and integrate Clemens's functions and Azure events into this, as well as Google's, as well as other people's? What is that? And also, can we get that close to a real-world use case at the same time?

You know, in some ways, being able to show the same function running on all the platforms, or something very close to the same, would be the most interesting to me.
In other words, converging the interoperability and portability of functions. And that's why I did this in Node.js, which is not my happiest place to be. So effectively, I sent the link in the chat window for that thing.

Actually, I'd like to push back a little on that, only because if you run the exact same code everywhere, I don't think it tells as much of an interop story. Our FaaS platforms are incompatible enough for that not to be possible; we'd have to write a standard signature.

But I think the core code can be just good enough. Since you're sharing, go and take a look at the CloudEvents handler piece. By the way, if everyone writes a shim to some common thing that we define, you can have the same function be executed from everywhere, just with a tiny wrapper that maps it to the signature of each platform.

No, I actually meant that code that you just looked at.

So this is, you wanted this? What this does is basically just print out the body of a CloudEvent as it comes in. So we can make that function just speak to Twitter and take the event, or parts of the CloudEvent, and as you deploy it in your FaaS platform with different scaffolding around it, it's going to go and tweet. And then the other handler, so this is the CloudEvents handler that's able to parse the CloudEvent per se. The other one is the Event Grid handler; it uses our native binding to Event Grid, knows how to turn our events into CloudEvents effectively, and then goes and posts them out. So I have both ends of this. This is hooking into our platform, and I have a hardwired URL here, so basically you would deploy your bridge to go somewhere else. I didn't want to make this too fancy.
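The bridge being described can be sketched as a small mapping function from the Event Grid envelope (id, eventType, eventTime, subject, topic, data) onto CloudEvents v0.1 attribute names. The exact field mapping here, especially how `source` is composed, is an assumption for illustration; the HTTP posting to the hardwired URL is omitted:

```javascript
// Sketch of the Event Grid -> CloudEvents bridge described above.
// Field names on the input follow the documented Event Grid schema;
// the source composition (topic + "#" + subject) is an assumption.
function eventGridToCloudEvent(egEvent) {
  return {
    cloudEventsVersion: "0.1",
    eventType: egEvent.eventType,
    eventID: egEvent.id,
    eventTime: egEvent.eventTime,
    source: egEvent.topic + "#" + egEvent.subject,
    contentType: "application/json",
    data: egEvent.data
  };
}

// Example blob-created event with made-up values.
const bridged = eventGridToCloudEvent({
  id: "9999",
  eventType: "Microsoft.Storage.BlobCreated",
  eventTime: "2018-04-20T12:00:00Z",
  subject: "/blobServices/default/containers/pics/blobs/a320.jpg",
  topic: "/subscriptions/xxx/resourceGroups/demo",
  data: { url: "https://example.blob.core.windows.net/pics/a320.jpg" }
});
console.log(bridged.eventType);
```

A real deployment would then POST `bridged` as JSON to each subscriber URL, which is the part the hardwired URL in the actual function covers.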
And then on the other side, you have your tweeting thing. So this is the Microsoft proprietary, if you will, bridge that translates from our stuff into generic CloudEvents stuff, and on the other side, that function is able to tweet it out. I think we need to build a little bit of a scenario around it, but already, if I can push, say, four deployments of this function to OpenWhisk and over to AWS and somewhere else, and I can get tweets from all of those different implementations, that's already a win.

I agree with that, actually. And I think the story could also be: here's something coming from Azure to Google, here's something coming from Google to Azure, here's something coming from AWS to Azure, and here's something coming from AWS that will now be reacted to by Azure, Google, IBM, multiple ones. I think there's a good story to tell. And not just cloud, but also open source, which is a nice thing.

Yeah, I mean, you can write this mapping function such that it knows five URLs of all the different targets, rolls the dice, and then the event goes this way or that way, and they all surface in the Twitter feed together.

Yeah. You're saying each of us becomes a producer, and we randomly send out an event to one of the other guys, and then that person just tweets about it.

Yeah, they tweet about it. And we can make it a single tweet stream, or we can all have our own Twitter accounts for it; that's up to how we want to do it and how we think it's better. I think everybody having their own Twitter account for this purpose, and then having a Twitter feed that subscribes to all of those, is maybe the fanciest option.

Okay, I like it simple. And then we show this in a tweet deck so that it actually streams.
Sorry, I just want to clarify: a platform publishes an event, which gets transformed into a CloudEvent and sent out to another platform, which reacts to it and posts to Twitter.

Yeah, pretty simple.

And what's the mechanism by which we all find out about each other's target endpoints? Is it just offline, we just tell each other?

Yeah. I think we'll just swap URLs and that's it.

Okay, easy enough.

And then it's really up to the respective platform how you trigger your events. So in my case it might be based on blobs, and I might just throw random Flickr pictures into an account and then let you all know about it.

I like this. So again, you're saying every function will do something else?

The events could be anything that happens on your respective platform. You emit it, and then the subscribing function on the other platform is just going to post something to Twitter. But we'll have to figure out what those scenarios are. So I will tell you the events I raise. Let's say I'm building a thing that uploads Flickr pictures into a storage account. Those events will then contain links to the pictures, because I'm going to make that account public. And if you understand my events, which is easy for you to tell because with a CloudEvent you have an event type, then you can actually dig into the details, because you know what it's all about, and then you could actually go and tweet that picture.

Okay. Right. Let's write it down and then let's do it.

Yeah. We want a few of those things, like every function doing something else. So that's one use case.

Yeah, maybe we'll do something simple like sentiment analysis on a text or something.

I love that idea, because at IBM we have a Watson service that does tone analysis. So yeah, a similar type of thing. That'd be cool.

Yeah, bring in Watson. Of course. It already won Jeopardy. Come on, it can do this easy. Yes.
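The consumer half of the flow just clarified (receive a CloudEvent, pick out a few attributes, post a tweet) can be sketched as a small composer function. Actually calling the Twitter API is left out; the `data.url` field is an assumption matching the image-upload scenario discussed here:

```javascript
// Sketch of the subscribing function's tweet composition: pull a few
// attributes out of the incoming CloudEvent and build the tweet text.
// Posting via a Twitter client library is intentionally stubbed out;
// the data.url field is a scenario-specific assumption.
function composeTweet(cloudEvent) {
  let text =
    "Got " + cloudEvent.eventType +
    " from " + cloudEvent.source +
    " #CloudEvents";
  if (cloudEvent.data && cloudEvent.data.url) {
    // Include the image link so the picture renders in the feed.
    text += " " + cloudEvent.data.url;
  }
  return text;
}

const tweet = composeTweet({
  eventType: "Microsoft.Storage.BlobCreated",
  source: "/azure/demo-storage",
  data: { url: "https://example.blob.core.windows.net/pics/a320.jpg" }
});
console.log(tweet);
```

Each platform would wrap this same logic in its own function scaffolding, which is exactly the "same code, different scaffolding" point made later in the call.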
It can also cure everything. It can also put tomatoes on the blockchain.

Exactly. We've got Watson on the brain over here, you know that. Come on.

All right, let me run this by you guys. I think we could do these one-to-one interoperability scenarios, rigged up via Twitter. One platform publishes an event, another platform reacts to it, and we can actually nail real-world use cases with each one of those, whether it's an image being uploaded, whether it's sentiment analysis, whether it's processing clickstream data or something. I think that'll be great. And then I think it'll be even more interesting if we could show one-to-many connections after that. Something happens and many platforms react, whether they're cloud platforms or an open source platform, and maybe there's a function at the edge that's also reacting to it. I think that'll be the cherry on top.

Yeah. So, to limit complexity: if we like that story, and I do think it's pretty compelling, we should nail down these use cases. We should nail down each example and get to work.

I agree. I already wrote an example for S3, so I can listen on it: you can throw something on S3, and then we trigger on that and send the information of the text file to something else.

So what's the exact use case, Yaron? A text file is uploaded to S3, and...

Yeah, and then I'm reading it and writing the sentiment analysis of that file. Or maybe we'll figure out something else to do with the text. We could have another example which is looking for social security numbers and credit cards, sensitive data in a file, kind of a PCI analysis. This way I don't need to write it, I already have it.

Yaron, if you could pick one. Right now I have you down for an AWS-to-Nuclio example, analyzing a text file.
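As a stand-in for the sentiment-analysis step in the text-file use case (the real demo would call out to a service like Watson Tone Analyzer rather than compute anything inline), a toy word-list scorer is enough to sketch what the reacting function does with the file's text. The word lists here are arbitrary assumptions:

```javascript
// Toy sentiment scorer standing in for a real analysis service.
// The word lists are arbitrary; a real function would call an external
// service like Watson Tone Analyzer, not score words inline.
const POSITIVE = ["great", "love", "beautiful", "good"];
const NEGATIVE = ["bad", "hate", "ugly", "fail"];

function sentimentScore(text) {
  const words = text.toLowerCase().split(/\W+/);
  let score = 0;
  for (const w of words) {
    if (POSITIVE.includes(w)) score += 1; // count positive hits
    if (NEGATIVE.includes(w)) score -= 1; // count negative hits
  }
  return score;
}

console.log(sentimentScore("Sedona is beautiful, I love it"));
```

The reacting function would run this over the uploaded file's contents and put the resulting score (or the service's response) into the tweet.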
If you could just pick one thing.

Sure.

That'd be great. Clemens, what do you think you'd like to do?

I will do the following thing. I will write a periodic agent that fetches a random picture from my wonderful aviation geek photo library, takes one of the thumbnail sizes, and throws it into an Azure storage account. Then I'm going to use this gateway effectively to sit between Event Grid and any number of arbitrary services, and I will basically use a random function to balance across them. So the image gets uploaded to Azure Storage, and I will give all of you a CloudEvent that is the Microsoft storage blob created event. I will tell you its data schema, and out of the data schema you can fish out the URL, and then you can tweet that you got this event, and you can also include the image URL, and that will actually show up in the Twitter feed.

Just for the daisy chain, you know what, maybe we listen on an S3 event, convert it to a CloudEvent, and forward it to someone else to generate a thumbnail, so we can show the daisy chain.

Since you're going to get an event that has a picture on it, because I'm generating that already, you can do that. Actually, since Watson IoT, for instance, has all kinds of fancy image classification logic in it, you could classify that picture and figure out what airplane it is.

By the way, we have an example using the Azure face recognition API.

Yeah. One thing I might say is we should be careful about having a carefully constructed chain operating, because then if any one site fails, we all fail. Having the broadcast, and having us do things randomly, sounds like a better bet.

Yes, because something is always going to work.
Keeping it flat was also one of my ideas here, in terms of just making sure that it all works, because if you have a chain, then one piece failing makes the whole thing fail. So that's a good point. We want to show success across the board, but I'm also cautious with demos.

Oh yeah.

All right, so I'm still trying to map out the story here. We've got a clear AWS-to-Nuclio example. We have an Azure image-uploaded-to-Azure-Storage event that's going to be published. Is there something we want to do specifically with that?

Yeah, but the question is, to show a CloudEvent: let's assume we got an S3 event, we need to generate some CloudEvent to something else.

Yeah, and that's your example right now.

Right. We could do something uploaded to S3 and react to it with a Nuclio function.

Okay. But who's going to convert the S3 event to a CloudEvent? Is that going to be a separate function, or maybe the event router or something?

That's going to have to be a separate function.

Okay. One thing you could do there is just run that as a Lambda.

Yep. Just like what Clemens did on Azure.

Yeah, that's the exact code I'm showing here; it's taking our proprietary event format and turning it into a CloudEvent.

Oh, so you're saying we'll do the same and convert it to...

Yeah.

Okay, so a function converts the S3 event to a CloudEvent, and then something else processes the event. Like two functions: one converting the S3 event to a CloudEvent, the second one taking the CloudEvent and doing the text processing.

Yeah. And that's my question to Clemens right now. Clemens, if there was another platform receiving this event, do you have any ideas as to what it should do? Do you have a function on another platform?
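The separate converter function just discussed, an S3 notification normalized into a CloudEvent before forwarding, can be sketched like this. The input record shape follows the documented S3 notification structure; the event type naming convention and the choice of fields carried into `data` are assumptions for illustration:

```javascript
// Sketch of the Lambda-style converter: normalize one S3 notification
// record into a CloudEvents v0.1-style envelope before forwarding it.
// The "aws.s3." type prefix and the data fields are assumptions.
function s3RecordToCloudEvent(record) {
  return {
    cloudEventsVersion: "0.1",
    eventType: "aws.s3." + record.eventName, // assumed naming convention
    eventID: record.eventTime + "/" + record.s3.object.key, // synthetic ID
    eventTime: record.eventTime,
    source: "arn:aws:s3:::" + record.s3.bucket.name,
    contentType: "application/json",
    data: {
      bucket: record.s3.bucket.name,
      key: record.s3.object.key,
      size: record.s3.object.size
    }
  };
}

// Sample record with made-up values, shaped like one entry of the
// Records array in an S3 notification.
const s3ce = s3RecordToCloudEvent({
  eventName: "ObjectCreated:Put",
  eventTime: "2018-04-20T12:00:00Z",
  s3: {
    bucket: { name: "demo-texts" },
    object: { key: "review.txt", size: 1024 }
  }
});
console.log(s3ce.eventType);
```

The second function in the pair would then receive this envelope over HTTP and do the text processing, without ever seeing the S3-specific format.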
Yeah, so since we had this Twitter idea: for my storage event, I would propose that, if we're doing this Twitter thing, everybody writes to a Twitter feed, and the way they write to the Twitter feed is they pick up some attributes from the CloudEvent and say, this is what I got. And in the image case, since everybody understands that, they can then take the image URL and include it in the tweet as the image element, and then the image will actually show in the Twitter feed, pointing back to our storage.

Mm-hmm. Sounds like this could be a candidate for the one-to-many connection.

Yeah, that's how I think about it. Everybody who writes this in their platform, which will probably be the same code if we're doing this in Node, just with different scaffolding around it, will give me a URI. Then I will make a list of URIs in this function, in my deployment, and choose a random number for every call I get, so I randomly spray across them.

Oh, cool. So can you send us, like, a Postman example of the message that you're going to send?

Yeah, exactly. That's what I'm going to do. Now, it's going to be difficult to make promises because I'm traveling to Hannover Messe tomorrow morning, but as soon as I can, within the next two days, I will give you a working example of all that.

Mm-hmm. Is it going to be stored somewhere without the password? Or with a password that we know?

Yes, yes, yes. I will find a way to make it all known.

All right. It sounds like we have two pretty cool examples. We have AWS to Nuclio: something happens on S3, Nuclio reacts to it, doing maybe sentiment analysis of a text file, something like that, and writes it to Twitter. That's a cool example because it's combined with an open source platform. I think that will be pretty compelling.
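The "randomly spray across them" dispatcher described above reduces to keeping a list of subscriber URIs and picking one per event. A minimal sketch, with placeholder URLs and the random source injectable so the behavior can be checked deterministically (the actual HTTP POST is omitted):

```javascript
// Sketch of the random fan-out dispatcher: one subscriber URI is chosen
// per incoming event. The URLs are placeholders, not real endpoints.
const targets = [
  "https://openwhisk.example/receive",
  "https://nuclio.example/receive",
  "https://azure.example/receive"
];

// rand is injectable (defaults to Math.random) so tests can pin the pick.
function pickTarget(uris, rand = Math.random) {
  return uris[Math.floor(rand() * uris.length)];
}

console.log(pickTarget(targets));
```

A variant for the one-to-many story would simply POST to every URI in the list instead of picking one; the data structure is the same either way.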
And then we have the Azure example to many FaaS providers. Whoever wants to integrate with that can write some functions to do so. Clemens is going to provide that information, and the use case is doing something with an image and also writing that to Twitter.

Yeah.

So Clemens, you know, we could certainly help out writing a function to react to the image and do something with it.

If you assume that you get a URL from me, it would be good if someone could pick up the work and do the tweeting thing in Node. There must be 400 different Node.js libraries, mostly abandoned, that all know the Twitter API.

Yeah. Actually, Clemens, before we get into divvying up work: we have two examples here. Do we feel like we need a third one?

We can do more.

So Austin, do you have time for a third one?

It depends on how compelling it is. I think AWS to an open source FaaS platform is very cool. I think Azure to many FaaS platforms is very cool. Are we missing anything?

Well, again, I think the real-world use case. Posting to Twitter isn't necessarily real-world, but in some ways I care more about the interop right now, and showing real-world later on.

In my mind, in these examples we're not showing off posting to Twitter. We're actually doing something, and then the result is posted to Twitter for visibility, and maybe to help this thing get some viral traction.

I like the way you think. It's all in the marketing.

You know what, let's just move forward with these two examples and see where we get with them. So there's the AWS-to-Nuclio example, Yaron. I think the ball's going to be in your court on that one. It sounds like you'd just run a Lambda function which transforms to a CloudEvent, which sends it over to your platform.
Since we're already doing an image example, I think the sentiment analysis of a text file, or some other type of data processing, would be compelling; that's my opinion, at least. And then, regarding the Azure-to-many scenario, our team can certainly write one or multiple functions to react to that image-uploaded event on Azure Storage. Clemens, if you could just give us an example of what that event looks like, we could get started. I'll start by outlining all this in the GitHub issue just to clarify expectations, and we can keep the conversation going in there and then get to work on these two scenarios. Yeah, so I sent an email just before this meeting which shows effectively what the structure of that event looks like. Great. So, I apologize, I had to step away for a second for a phone call, but as I was coming back, Austin, you made some comment about one particular platform talking to another particular platform, and you called them out by name. Is it possible for us to write this up in such a way that anybody can choose to participate on either side, or at least on the receiving side? That's my next task. Maybe we can even do a few things: if it's a text file, everyone can choose a different type of manipulation on the file. Yeah, or something. So one can do sentiment, another does, you know, PCI compliance, a third does, I don't know, word count. Well, I think the key here is, let's get some things up and running, at least simple ones, and then we can expand the use cases if we have time.
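As a toy stand-in for the "different manipulations" idea, here is a minimal Node function reacting to a text-carrying cloud event with a naive word-list sentiment score. The word lists, event shape, and `data.text` field are purely illustrative assumptions; a real demo would call a hosted sentiment service instead.

```javascript
// Toy word-list sentiment scorer standing in for a real sentiment
// service; word lists and event shape are purely illustrative.
const POSITIVE = new Set(["good", "great", "love", "happy", "nice"]);
const NEGATIVE = new Set(["bad", "sad", "hate", "broken", "awful"]);

// Score text: > 0 leans positive, < 0 leans negative, 0 is neutral.
function sentiment(text) {
  let score = 0;
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (POSITIVE.has(word)) score += 1;
    if (NEGATIVE.has(word)) score -= 1;
  }
  return score;
}

// Function handler reacting to a text-carrying cloud event.
function handle(cloudEvent) {
  const score = sentiment((cloudEvent.data && cloudEvent.data.text) || "");
  return {
    id: cloudEvent.id,
    sentiment: score > 0 ? "positive" : score < 0 ? "negative" : "neutral",
  };
}
```

The word-count or PCI-compliance variants mentioned above would differ only in the body of `handle`; the event plumbing stays identical, which is the interop point of the demo.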
Yeah, that's right. I think keeping it simple, especially your example, Yaron, because that's how I think the story should start in the demo: here's a one-to-one connection across platforms, and then the next chapter in the story is, here's a one-to-many connection across platforms. I think that's fine. Let me try and write this up in a GitHub issue, and hopefully we'll get some feedback on Thursday, but this is pretty good in my opinion. Would it make sense for us to create a private Slack channel in order to share URLs and, you know, usernames, passwords, etc.? Can't hurt. Yeah, we could probably just use the CloudEvents channel, right? Okay, if you're okay with publicizing that to whoever is subscribed to it. Well, Doug, you actually said URLs, and I'm not that worried about those, but then you started talking about passwords and such. I was assuming, at least from my side, that I was not going to require authentication for you guys to send me stuff. Do we want to talk about that? Sorry, Doug, why is the authorization necessary? Well, I was going to say it's not, because I was going to have a function that lets anybody post to me, and I was just going to do something with it, but Mark had mentioned something about usernames and passwords. Are you guys assuming that your functions are going to require authentication? No, I don't think we should worry about that. Okay, I agree too, I just wanted to make sure. Yeah, fine for me. Okay, cool. Okay, Doug, I think I can write this down in a way where people can plug in. I think we should keep the first example simple, just showing that one-to-one example between AWS and Nuclio. The Azure-to-many example is going to be a good candidate for other people to plug into, and I'll write down some criteria for what that should look like if people want to integrate, and then maybe we can take it from there.
Yeah, I think both of them could include one-to-many; the first one doesn't have to be one-to-one. Okay, yeah, I agree. So one thing that might be required for the Azure-to-anybody case: I'm asking my engineering team what the whitelisting procedure looks like, but we have a handshake, which is kind of a parallel function, so you don't need to show it, and I can give you what the handshake looks like. We have this anti-abuse mechanism, which I'm trying to formalize in our webhook spec, but it's already in place, so for us to be able to actually push to you, you may need to opt into our abuse-protection mechanism; I'll give you an answer on whether you need to do this for us to be able to push to you. Clemens, can you just push to a single endpoint, like our event gateway? Because we can handle the one-to-many connection. Well, the point is that an Azure thing should be able to talk to all those things, right? So let me talk to our guys; no doubt we can talk to your gateway. Okay, so we'll figure this out. I'm just saying this is something that is in the picture; this abuse mechanism that I'm writing into the webhook spec is a real thing. Got it. Okay, just to clarify the use cases here: we have an image uploaded to Azure Storage, and people can react to that; and then we have sentiment analysis of a text file, so the text file is going to be uploaded to AWS and the image will be uploaded to Azure. That's right. Okay, the action item is for me to write this up; I should get that done by the end of today. Let's just keep chatting about it; if anyone has other suggestions, just add them to this issue, and hopefully it will be in a good place to present on Thursday. And I will say that we will need to check with our friends at Oracle and Google to see whether they want to participate as well. Yeah, I think the best way to do that is to write it down in a way
that's clear and that gives them a clear way to integrate. Yep. Yeah, this is Lily; we'll be an end consumer, so we'll take the event and put it on Twitter. Awesome. And I love the Twitter suggestion, Yaron. Nice one. Yeah, that's going to be fun, if the demo gods are with us. Yeah, as these things go, Twitter will most likely be down that day, but what a great solution to the observability story, right? Anyway, this is cool. Maybe we'll prepare a Slack message in case Twitter is down; we'll figure it out. Worst case scenario, I'll just record a video beforehand, so I can always fall back to that. The man has done it before. I mean, Austin's giving a keynote; of course Twitter will be up. Yes, yes. What do you think? Yeah, so I'm going to be as quick as I can in terms of coding. I will be at Hannover Messe for the next two days, and I will try to squeeze it into whatever time allows; I will do it as fast as I can. Yaron, I've got a quick question for you. The first example is now AWS to many, doing some type of sentiment analysis of a text file. Would you be okay if we rigged that example up through our event gateway? We could author the Lambda function that does the transform and just hand the event over to you. Okay, so you're essentially going to write the Lambda function, pass it over, and then we should expect a cloud event? That's right, and we'll post it via HTTP; we just need to know where to post it to. Sure. And can you send us, again, a Postman example of a dummy event? Absolutely, yeah, we can do that. We should also come up with what that text file is; we'll figure out how to take the free-text part of the message. Yeah, we're just going to put it in the request body. I think this is what Clemens refers to as a structured event. That's why I'm saying it's best if you send us an example; if we have any issue or need clarification, we can test it against the function. The email that I sent earlier to the list actually contains an example
already. What I still have to do is send you an example where the URL for the image blob actually resolves and you can get at it; for that I need to make the account public. We also need an example based on the S3 event, which has slightly different metadata. That is the truth, and good luck with that. Yeah, but that's Austin's problem. That's the fun part. The nice thing is the differences will now be minuscule, and we can tell, because we now have a standard. Yep, yep, absolutely. Cool. I will write this up in an issue. I think that's it, unless anyone has any further comments. Sounds like we're done here. Okay, all right. I like this. Thanks, everyone. Thank you.
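For readers following along, a structured-mode cloud event (all attributes carried in the JSON request body, as discussed above) might look like this. The attribute names follow the CloudEvents JSON format as later standardized; every value below is invented for illustration, not taken from the email mentioned in the discussion.

```javascript
// Invented example of a structured-mode cloud event: every attribute
// rides in the JSON request body, rather than in HTTP headers.
const exampleEvent = {
  specversion: "1.0",
  type: "com.example.storage.blob.created",   // hypothetical
  source: "https://storage.example.com/demo", // hypothetical
  id: "A234-1234-1234",
  time: "2018-04-20T12:00:00Z",
  datacontenttype: "application/json",
  data: { url: "https://storage.example.com/demo/cat.png" }, // hypothetical blob URL
};

// Sent as, for example:
//   POST /events HTTP/1.1
//   Content-Type: application/cloudevents+json
const requestBody = JSON.stringify(exampleEvent);
```

A receiving function only has to parse this one envelope regardless of which cloud produced it, which is exactly why the "differences will now be minuscule" once both the S3-flavored and Azure-flavored events are expressed this way.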