All right, so I'm Doug Davis from IBM. I'm the co-chair of the serverless working group, a working group under the CNCF. I'm also co-chair of the CloudEvents project, which we'll be talking about a little bit in a second. First of all, the agenda. First, talk a little bit about the serverless working group, how it got started, a little bit of an overview. Talk a little bit about CloudEvents, just a quick intro, because it came from the serverless working group. Talk a little bit about workflow, what's new for the serverless working group, and then a little bit of Birds of a Feather. That's the one you should be afraid of, because that's the one where I'm going to make you talk. In Barcelona, for KubeCon, we actually had an 80-minute-long session where we covered the first couple of things in like 15 minutes, and then we spent the rest of the time on that Birds of a Feather session. For me, anyway, it was really, really useful. Whether you guys thought so, I don't know, but I need to get input from you guys, so that's what the Birds of a Feather is about, so just a warning. Okay, so let's talk about the serverless working group. In mid-2017, the CNCF decided they wanted to take a look at what was going on with serverless. It was a relatively new technology, and they weren't sure what, if anything, the CNCF should do about it. So they started the serverless working group for the main purpose of doing that investigation. And what the serverless working group did was basically produce a white paper that gave an overview of the technology itself. So what is serverless? How does it compare to the other technologies out there, containers as a service, platform as a service? What's the difference between serverless and functions as a service, that kind of stuff. It also gave an overview of the technology in terms of what usually goes into a serverless platform, and talked about the state of the ecosystem.
What's going on there relative to the open source community and the proprietary stuff? We didn't make any recommendations in terms of what was good or bad. We just provided an overview of what's out there today. That overview was also manifested not just in the white paper, but also in the landscape document we produced, which is basically nothing more than a spreadsheet of all the various serverless platforms out there, what types of features each one had, stuff like that. It wasn't a judgment about which one was good or bad. It didn't try to compare them. It was just for someone trying to figure out what's going on in serverless: here's a list of things to choose from if you want to play with it. That's all it was, an overview of what's going on in the ecosystem. And then finally, in the white paper, we did a recommendations section, which basically gave recommendations to the technical oversight committee in terms of what they should do next relative to serverless. Most of the recommendations were around things like education: educate the community about serverless, where it's appropriate to use it, where it's not appropriate to use it, that kind of stuff. But one very important thing there was what the CNCF could do next relative to interoperability, or solving pain points for the community itself. And one of the things we noticed was that serverless in many cases is really about events. Events are flowing around, and they're being used to actually kick off invocations of serverless functions. So we started looking at what around events was possibly an interoperability problem, and we realized that while all these events are flowing around, there's no real standard way to annotate events just to help them get from point A to point B. And that's what CloudEvents was really all about. And that's what the serverless working group decided to start up:
A new working group focused on addressing that one little pain point: how to get an event from point A to point B without having to deal with some of the pain involved. So let's talk a little bit about what that pain actually is. Say you have events flowing around between clouds, between services, and every event is gonna be different in some way, right? Every business application is gonna send or receive events in some sort of proprietary format. And that's fine. We don't wanna actually change that, because that gets their job done. However, when you start talking about sending that event from one point to another, in order for those points in the middle, or even the final destination, to know what to do with it just to route it to the business processing logic, it usually has to understand the message itself. Meaning it has to understand the event and what's going on inside there to make that determination of how to do the proper routing, filtering, or whatever. Well, what CloudEvents is trying to do is come up with a standard way to represent the common metadata needed to do that routing for you. Now keep in mind, we're not trying to invent another common event format that everybody is supposed to use, where all events are supposed to look this way. That's not our point. We're just trying to define the bare minimum metadata needed to do the basic routing infrastructure we're talking about. So let's take a look at what that actually means in practice, because it's really not that complicated. So first, let's take this HTTP message, right? Very simple message, a couple of HTTP headers, and then in the body you have some JSON, nothing big. But in order for me to understand how to route this to the appropriate processor, I have to understand that JSON, right? I have to understand basically the business logic. And that's a lot to ask of me as a piece of middleware, right?
Think of the middleware as, let's say, a proxy trying to get the message from point A to point B, right? If it wants to route all new-item types of actions, it has to understand that the body is JSON, that there's a field called action, and that there's a value called newitem, in order to route it to the right place. That's actually a lot of work to ask of somebody when you're trying to make a piece of middleware as generic as possible. Well, what CloudEvents did is say, look, here are the four required attributes of a CloudEvent. Spec version: this is the version of the CloudEvents spec, very basic stuff. Type: what is the type of the event that's flowing through here? So if you look at what's happening here, it's saying, okay, this is a com.bigcode.newitem. So newitem is technically the action itself, or the type of event, but we prefix it with com.bigcode because it's from that source, right? Just so we get a little bit of uniqueness to it, because there could be countless event producers producing a newitem type of event. This just distinguishes it from those guys. Then you have the source. So who sent this event? In this particular case, bigcode.com/repo, right? And finally, just some unique identifier. Think of it as just a UUID, and that's it. There's not a whole lot here, and that's the point. It's a minimal amount of data, so that anybody who receives this can look at it and say, oh, okay, I know there's a string called type, and it's going to describe the type of event. So I can set up a very generic framework that says: route all events from com.bigcode.* to this processor over there, or all events whose type ends in newitem go over there, right? Some sort of basic logic that doesn't necessarily force you to understand what's in the body, but can do simple string matches on either the type or the source to do whatever routing you need, right? If all you need is that basic routing, right?
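That binary-mode routing idea can be sketched in a few lines of Python. This is just an illustration of the concept from the talk: the header values follow the slide example, while the handler names and matching rules are made up.

```python
# Generic middleware routing on CloudEvents metadata only (binary mode
# over HTTP): the ce-* headers carry the event attributes, and the
# business payload in the body is never parsed.

def route(headers: dict) -> str:
    """Pick a destination using only the ce-* metadata headers."""
    ce_type = headers.get("ce-type", "")
    # Route every event produced by that one source to one processor...
    if ce_type.startswith("com.bigcode."):
        return "bigcode-processor"
    # ...or route by the action suffix, regardless of producer.
    if ce_type.endswith(".newitem"):
        return "new-item-processor"
    return "default-processor"

# Binary mode: the original body stays untouched; the four required
# attributes ride along as HTTP headers.
incoming = {
    "ce-specversion": "0.3",
    "ce-type": "com.bigcode.newitem",
    "ce-source": "bigcode.com/repo",
    "ce-id": "A234-1234-1234",
    "content-type": "application/json",
}

print(route(incoming))  # decided without ever reading the JSON body
```

The point is that the router only does string matching on well-known metadata keys, exactly the way an HTTP proxy matches on headers.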
Without understanding the business logic, and that's all it's trying to do, right? So in a lot of ways, it's the same way the HTTP headers don't really tell you much about the message itself in terms of the business logic. They're just there to help you get the message from point A to point B, through some middleware that doesn't really understand what's going on but can understand the basic common HTTP headers. That's all we're doing here, right? A very simplistic thing, but with it you get a level of interoperability between the various platforms, right? Because, as I said, the middleware doesn't have to understand the business logic anymore just to do its job, and that's critical to what we're all about here. Now, this is the binary format, meaning there's an existing message and we're sprinkling some metadata in there, that's it. If for some reason, though, you want to actually stick that data inside the body as, say, JSON, we do tell you how that looks, just to get some interoperability, right? So notice, for example, that the content type changed from application/json to application/cloudevents+json. application/json now appears down here in a CloudEvents attribute we define called datacontenttype. So the HTTP body goes into a data attribute, but everything else is basically the same, just wrapped in JSON, right? So you could use this as your global event format if you really wanted to, but that's not the point, because we don't want to force it on users if they don't want it. If you have a format that works for you, great, use it. Just sprinkle it with a couple of metadata attributes, and that's it. So that's basically CloudEvents in a nutshell. We define some metadata, we define how it appears inside certain formats like JSON, or how it appears in certain transports like HTTP, and that's all there is to it.
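The binary-to-structured conversion just described can be sketched like this. It's a hand-rolled illustration, not real SDK code, and it assumes a JSON payload and lowercase header names.

```python
import json

def to_structured(headers: dict, body: bytes) -> tuple[dict, bytes]:
    """Sketch: turn a binary-mode CloudEvent (metadata in ce-* headers)
    into structured mode (everything wrapped in one JSON envelope)."""
    # Strip the ce- prefix: ce-type -> type, ce-source -> source, ...
    envelope = {k[3:]: v for k, v in headers.items() if k.startswith("ce-")}
    # The original content type moves into the datacontenttype attribute...
    envelope["datacontenttype"] = headers["content-type"]
    # ...and the original body becomes the "data" attribute.
    envelope["data"] = json.loads(body)
    # The HTTP content type now identifies the envelope itself.
    new_headers = {"content-type": "application/cloudevents+json"}
    return new_headers, json.dumps(envelope).encode()

headers = {
    "ce-specversion": "0.3",
    "ce-type": "com.bigcode.newitem",
    "ce-source": "bigcode.com/repo",
    "ce-id": "A234-1234-1234",
    "content-type": "application/json",
}
new_headers, new_body = to_structured(headers, b'{"action": "newitem"}')
print(new_body.decode())
```

Either shape carries the same four attributes; the choice is just where they ride.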
We have a couple of SDKs to help you do the generation of the messages, or the processing of the messages, to get them to and from those formats. The SDKs do that for you, and that's it, okay? A very simple idea that we're seeing start to get some real traction, because people are looking for this type of standardization for their middleware, okay? So in terms of the deliverables, we have the specification itself; we're at 0.3 right now, just released a couple weeks ago. We have a couple of formats, like JSON and protobuf, and we have some transports, some of the popular ones like HTTP. We're adding new ones all the time; Kafka is actually the next one that should be delivered very soon. We also have a primer that gives you some insight into some of the decisions we made and why we made them, because a lot of people look at this and say, well, it's interesting that you have these attributes, but why don't you have an attribute that tells you where the message is going, right? That's what the primer talks about. The primer will explain why we don't have a destination field in there, if you're interested in that kind of stuff. So, as I said, it gives you some insights. We have some SDKs for marshaling and unmarshaling the data in different programming languages, and then we have some extensions, okay? But that's basically it in a nutshell. Now, the reason I mention this is because it's the first concrete deliverable, aside from the white paper, from the serverless working group. And as I said, it's about trying to get interoperability, or address pain points for the community, okay? So this is at 0.3. Technically it's at 0.9 in terms of maturity level. We are actually hoping to go to 1.0 within a couple of weeks. And so then the question is, okay, what are we gonna do after that? It's the same people working in the serverless working group and on CloudEvents. We just put the serverless working group on hold while we finish this up.
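The receive side of what those SDKs do can be approximated in a few lines: accept an event in either content mode and normalize it into one dict your code can work with. Again, this is a hand-rolled sketch of the idea, not an actual SDK API.

```python
import json

def read_event(headers: dict, body: bytes) -> dict:
    """Sketch of an SDK-style receive path: accept a CloudEvent in
    either binary or structured mode, hand back one normalized dict."""
    ct = headers.get("content-type", "")
    if ct.startswith("application/cloudevents+json"):
        # Structured mode: everything lives in the JSON envelope.
        return json.loads(body)
    # Binary mode: attributes ride as ce-* headers, body is the data.
    event = {k[3:]: v for k, v in headers.items() if k.startswith("ce-")}
    event["datacontenttype"] = ct
    event["data"] = json.loads(body) if ct == "application/json" else body
    return event

# Both calls below yield the same event attributes.
binary = read_event(
    {"ce-specversion": "0.3", "ce-type": "com.bigcode.newitem",
     "ce-source": "bigcode.com/repo", "ce-id": "A234",
     "content-type": "application/json"},
    b'{"action": "newitem"}')
structured = read_event(
    {"content-type": "application/cloudevents+json"},
    json.dumps({"specversion": "0.3", "type": "com.bigcode.newitem",
                "source": "bigcode.com/repo", "id": "A234",
                "datacontenttype": "application/json",
                "data": {"action": "newitem"}}).encode())
print(binary["type"], structured["type"])
```

The real SDKs do exactly this kind of normalization, plus the reverse direction, so application code never touches the wire details.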
So, what's next for the serverless working group after CloudEvents 1.0, which hopefully comes in a couple of weeks? One of the things we decided to work on a long time ago was something called workflow. When you have these serverless functions out there, a lot of times we found that people want to sort of string them together, right? An event comes in, goes to one function, that produces another event, which goes to another function; maybe there's some conditional logic, it needs to branch off, do something. There's a whole workflow or pipeline type of structure people want to define. And there are a lot of different ways to do it. There's probably a technology out there that already does it. Of course, it's not interoperable, right? The way you define it in one system isn't necessarily gonna work in other systems. So we thought, well, the notion is out there; can we make life easier for people by trying to standardize, or come up with a common way, to define that workflow? And that's what this workflow project is gonna be working on. So, as I said, the goal here is to define the basic primitives for how to string together various function invocations, right? It's gonna include some basic primitives like states and steps, triggers, references to the functions that are gonna get invoked, possibly some filtering, that kind of stuff. The important thing to know here is that it talks about how to actually wire these things together, meaning what comes next and how to decide what comes next. It's not gonna talk about how to invoke the function itself, in particular function signatures. That's not a thing workflow gets into. It ignores that. That is one of the things we're possibly looking to standardize later from a serverless working group perspective, but workflow doesn't get into it. It doesn't talk about how to invoke the function; it just says what function to invoke next. Very important distinction.
It also doesn't talk about the event format or metadata; that's for someone else to solve. It's just basically putting the wires between the various pieces. And again, this is all about fostering portability and interoperability across platforms. This is the second step for the serverless working group, CloudEvents being the first. So let's take a look at what workflow actually means from a more concrete perspective. A workflow needs an execution flow, and really what we're talking about is a state machine, at least in this particular proposal that we have in front of us right now. So the state machine is sort of represented here; we'll talk about the various pieces in it. First are obviously events, because that's what we're talking about processing here. Events aren't just the thing that comes into the system. We obviously have an event coming into the workflow itself, the event trigger. But as you go from state to state, those are actually events as well. So the result of a function getting invoked is treated as another event that then moves on to the next state. Then the states talk about where you currently are in the system: on a failure, do you keep going to the next step? Is there a switch state in there, basically an if statement, right? After this one state, do you fan out to multiple functions or multiple other states? That kind of stuff. And then finally the actions, basically the functions themselves, right? So some of the states will actually invoke functions. Other states lead on to other states, like the switch state going on to a task state. And that's the basic idea of the current proposal for the workflow document. The current state of this is that it's only at a 0.1 stage. When we started working on this, we used to have weekly calls.
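The state-machine idea (an event trigger, a switch state, task states invoking functions, each result treated as a new event) can be illustrated with a toy interpreter. Every state name, primitive, and function here is invented for illustration; the actual workflow proposal defines its own vocabulary.

```python
# Toy workflow interpreter: an event enters, a switch state branches on
# it, task states invoke functions, and each function's result becomes
# the event that feeds the next state. All names are made up.

def validate(event):          # a "function" the workflow invokes
    return {"type": "validated", "item": event["item"]}

def store(event):             # another invoked function
    return {"type": "stored", "item": event["item"]}

WORKFLOW = {
    "start": {"kind": "switch",          # branch like an if statement
              "cases": {"newitem": "check", "default": "end"}},
    "check": {"kind": "task", "action": validate, "next": "save"},
    "save":  {"kind": "task", "action": store, "next": "end"},
}

def run(workflow, event, state="start"):
    trace = []
    while state != "end":
        node = workflow[state]
        if node["kind"] == "switch":
            state = node["cases"].get(event["type"], node["cases"]["default"])
        else:
            # Task: invoke the function; its result is the next event.
            event = node["action"](event)
            state = node["next"]
        trace.append(state)
    return event, trace

final, trace = run(WORKFLOW, {"type": "newitem", "item": 42})
print(final, trace)
```

Notice the interpreter never cares how a function is invoked or what its signature is; it only wires outputs to the next state, which is exactly the scope boundary described above.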
But then we realized that, because the same people are in both groups, leading both CloudEvents and the serverless work, they really don't have time to do both at the same time. So we kind of put this one on hold. As I said, after the CloudEvents stuff goes to 1.0, we'll probably circle back around and continue working on it. But if you have any interest in the workflow stuff, please look at the serverless working group. We'd love to get your feedback on it. It's still very, very rough, in the early stages, so there's lots of room for you guys to help shape it, if you actually want to. So, as I said, we are looking at what to do next in the serverless working group, with workflow being just one of the possible things. I listed some of the other ones up there. I mentioned things like function signatures. That's been a very popular one, in all honesty. Some of these things, I think, would be really, really good to standardize on. But there is some level of resistance in the group to doing too much too quickly. A lot of people worry that if you standardize something, it's going to limit people's creativity to expand it or to explore the possibilities within that particular domain. So we have to tread very, very lightly here, because a lot of people are very scared of standards, especially in the open source community. But if you're interested in any of these ideas, or other ones you think might be worthy, drop me a note or just come to one of the serverless working group calls. We'd love to hear what you guys think the pain points are for us to address. Now, that was actually very quick, 15 minutes. Now we get to the part that you're dreading. This is questions for you guys. Because what I want to do now is, okay, I told you about CloudEvents, I told you about the serverless working group, told you about some of the things we think are pain points for people that we want to consider working on. But the problem is, that's just our view.
In the CloudEvents working group and the serverless working group, at one point in time we've had, I think, over 70 different companies represented. That's a lot of people. But one thing I actually think we're missing (we do have some, but we need more) is input from the end-user community. We need people who are actually using this stuff on a daily basis, not the people producing the infrastructure, the cloud providers. We need end users basically giving us feedback: when you go to use Lambda, IBM Cloud Functions, Google Cloud Functions, whatever, what are the pain points you're experiencing? And that's what this next section is really about. This is the Birds of a Feather thing. So let's start off with some simple questions, even before that one. How many people here are using containers today in production? Really? Not everybody's hand went up. You guys, this is the CNCF. You should all be saying, yes, we love containers and we're all using them. No? So that's interesting. And actually, I'm not surprised. It's disappointing for me, because this is my job, right? But it doesn't surprise me, because I've found that there's a lot of, not resistance, but it's not growing as quickly as we would have hoped, right? For whatever reason, people aren't picking up on it. Maybe because it takes time to migrate the infrastructure, maybe they're still nervous about it. For whatever reason, I'm not that surprised. So, fewer hands are gonna go up, I'm sure: how many people are using serverless in production today? One, two, three, okay. Okay, so I'm gonna pick on you three a lot for the following questions. Okay, well, actually, first of all, for you guys who are actually using serverless in production, or just playing around with serverless: would you be comfortable telling me what kind of things you use it for? Actually, do we have a microphone? I'm gonna pick on you guys.
You raised your hand first. Actually, we only kind of play around with serverless right now. We're not in mass production; it's just a few use cases. What kind of use cases? What were the first types of use cases you found interesting to consider? For us, we're running on AWS, so we're using it for auto scaling. When we get notified that a spot server is going away, we use Lambda to drain everything on that server. And also, we're using it to collect the logs from EKS. That's it. And I assume that it's the auto scaling up, as well as the scaling down to zero, that interests you, right? Well, that's just our use case. And also, we're using Lambda to calculate some custom metrics for our scaling. Okay, only because you're closest, I'm gonna pick on you too. Thanks. Actually, I'm building functions as a service at my company, ByteDance. I'm from ByteDance. Do you know ByteDance? No? Okay. Our use case is about two scenarios. The first one is what we call light service, some Node.js services that we run in our company. The other use case is async events, such as Kafka events and object storage events. For example, if you have some event coming into Kafka, you can consume the Kafka event. Today I use CloudEvents to describe the event. Oh, you do? You're using it? Yeah, I use CloudEvents. In the user's function, I also send the event as a CloudEvent into the user's side of the function, and it will receive the event. The event is also a CloudEvent. So. Let me ask you a question. Yeah. Why are you using serverless, as opposed to just standing up a server that just waits all the time? Your question is about why we use serverless? Yeah, why use serverless instead of just a standalone server that's running all day long? Yeah, serverless has some advantages.
In our use case, if we maintained a standalone service in production, we would need to provision for the max capacity all the time, and it's hard for a user to decide what that capacity should be. So serverless's main advantage for us is saving resources. Yep, okay, the auto scaling side, yep. Yeah, right. Okay, cool, thank you. All right, so the next question on here, I believe, is for everybody else aside from those three: why aren't you considering serverless? The last time I asked this question, in Barcelona, I got some really interesting answers. Is anybody going to raise their hand and say why they're a little resistant to playing with serverless at this point in time? No one? There has to be some reason you're not playing with it. Is it just too new? Are you just not comfortable with it yet? Any answer is fine, I'm just curious. You guys, I was afraid it'd be a very quiet audience. You guys are gonna make me do all the talking. Okay, so maybe this will help. In Barcelona, some of the reasons I got were related to this: with containers, we're told, hey, take your monolithic application and split it up into microservices. And that's fine, conceptually. Sometimes it's a hard task, though, right? Where do you draw those lines? Where do you draw the boundaries to actually get the little microservices? Well, with functions or serverless, we're now saying, hey, great, now do it again. Take your microservices and split them up into tiny little, almost like utilities, right? And again, it's back to: where do you draw those lines? How do you split it up? How do you then manage not just five microservices, but now 50 functions, right? And people are a little nervous about that, because they're not sure whether the tooling is there yet, right? How to do the split? The education around it is a little bit nerve-wracking for people.
And that was actually one of the things we learned most from the audience in Barcelona: there's just some fear, and they're not ready for that progression yet. And they're also afraid that the tooling isn't there yet to help them. They feel like there's a challenge with just, for example, gathering logs from your five microservices, oh my gosh. Now these things are exploded out into so many pieces, and then they're gonna scale. How do you manage all the logs across that? And there are tools to help you, but they weren't sure whether those were mature enough, and that was part of the reason they weren't doing it. Anybody wanna say anything yet? No? Okay, fine, you guys are gonna make me talk. Yes? Actually, we are using serverless in our production, but the serverless we are using is not true serverless. It cannot scale to zero. Yes, so they provide us a sort of reserved mode. We give them some volume, then they scale to that volume to serve us. And what holds us back from using it, the first thing is the scalability, like the ability to scale to zero. And the other problem I want to mention is that... I'm sorry, I'm not sure: the scale to zero, are you saying that's a problem for you because you can't go to zero, because your infrastructure won't let you? The infrastructure does not provide the ability to scale to zero. So I did not raise my hand previously, because I thought that was not true serverless. Got it, okay. The second one is that we rely on a lot of microservices behind us, but they are not serverless. They are traditional microservices. And what's worse is that we need to use rich clients to call them. They provide JAR files, Java rich clients, to call them, and we are a Node.js application. It is hard for us to interoperate with them. There is a service mesh there, but it can only handle the common services.
But for some business services, it is hard for us to translate those rich clients into the mesh. So it's difficult for our business to turn fully serverless. Got it, okay. So, just for your reference, I'm not a purist. The fact that you don't scale down to zero, that's still serverless to me. I know a lot of people think serverless has to go to zero. For me, serverless is a little more abstract, and to me it's about the auto scaling and the function-level type stuff. But okay, thank you for providing that. That's useful. Thank you. Okay. So since most of you are either shy or not using serverless, this next question may not apply to you. When you guys are actually using serverless, though, what are the pain points for you? I mean, you guys were using CloudEvents, so hopefully that's helping you a little. But are you finding, for example, that you're tied down to one particular architecture, right? You said you're using AWS, I believe, right? Do you feel like you're locked in, or do you feel like you could take your Lambda to another platform and easily run it there, right? Is portability of functions something that worries you? Because the function signature, for example, will probably change when you go someplace else; Lambda has its own little function signature with a context being passed in, and other platforms have their own ways of doing it. Does that kind of stuff worry you at all in terms of portability? Or do you feel like, eh, it might be a little bit of work, but you could easily port it over to Google Cloud or IBM Cloud or something like that, to their function platforms? What kind of pain points do you guys have relative to portability, tooling, debugging, anything else? Anything you want to mention? There you go, ooh, thank you. Good to talk to you again. Yeah, for us, currently we're not worrying about portability, because we just migrated from an on-premises data center to AWS.
So right now, it's a phase where we're locked into AWS; we don't worry about that. But for me, from personal experience, I'm a little worried about how isolated the code feels across the different Lambdas. Because, well, a microservice is an independent piece of software that actually completes a meaningful function or feature. But serverless Lambda functions are so much smaller, to a point that maybe they're just not that meaningful sometimes. Is that a problem, or is it just a mental model you have to get used to? I think it's a mental model. Because sometimes I couldn't tell the upstream of a Lambda function, and I certainly could tell the downstream, but it made me worried: well, what if we have thousands of Lambdas and I don't know, or really don't know, how the data flows? So understanding how the data flows between the various Lambdas is a concern for you. Yeah. That helps. Okay, cool. Is there anybody else who'd like to volunteer pain points? Oh, here we are, cool. I think our one pain point is that all the developers are used to tooling for heavier programs, like traditional applications. And also, some applications were developed with .NET. It's hard to redevelop them as microservices, so it costs a lot of time to do that. Okay, thank you. That's consistent with what I heard in Barcelona, so thank you. Okay, we are running a little long on time here, but one last question for you guys; I actually have two asks, hopefully. Now, you've mentioned, actually it's interesting, you were just talking about how you have current workloads and it's a challenge to split them up and actually turn them into functions and stuff. Are there workloads that you'd like to see supported? So for example, one of the things I hear different opinions on is whether serverless should support streaming, right? Is that something you want?
Because a lot of people think of serverless as small little functions, short-lived. In fact, Lambda, for example, won't let you run functions for more than a certain number of minutes. But what if you wanna have a function that actually runs for a couple of hours because it's doing some stream processing, right? Does anybody have any use cases that push the boundaries of what you can do in current serverless platforms, that you wanna mention, that don't work well today? Okay, we're running out of time anyway. So, one other thing I wanted to find out about: you have functions as a service, you have platform as a service, you have containers as a service. In all three of these, you're basically running containers under the covers, right? At different container sizes, obviously; functions are probably smaller than containers as a service, and platform as a service may be bigger than both of them, depending on your point of view. To me, the split between those three worlds is almost artificial, right? Why is it that you only get auto scaling in serverless? Why don't you get auto scaling in platform as a service or containers as a service, right? And if you look at a project like Knative, it's almost trying to bring those three worlds together. For those of you who have played around with Knative, you know what I'm talking about. So I'm wondering whether you guys feel like you're being forced to choose artificially between the three platforms, or whether you would prefer a world where it's just: here's the container, here's how I want it to behave, and you don't care what it's called in terms of platform versus function versus containers as a service. Anybody have that sense, or is this all in my head? Just curious, thank you. Okay, anybody else have an opinion? Because to some people, as I said, I'm not a purist, right? But some people care about this stuff and have a very strict definition. You don't want to mix the worlds.
I'm just curious, does anybody want to voice an opinion one way or the other? Okay, we have four minutes left, so you're off the hook; no more making you guys speak. So, just in summary, thank you for the feedback. I know it wasn't 80 minutes long, but hopefully the bit I could give you was useful, so thank you very much. So, just some links for you: the serverless working group, a link to the workflow spec, and for the CloudEvents stuff, some links there to the org itself, the spec repo, and the SDKs. And like I said, if you're interested in the serverless working group, in terms of what we're going to work on next, whether it's workflow or you have an idea for something else after that, please let us know. Either drop me a note or join the serverless working group; we have mailing lists and stuff. Let us know where your pain points are, where you'd like to see us go next. And with that, I think we have three minutes left if there are any questions about anything I talked about. Anything? Excellent. About the agenda of the working group, right? It seems like the meeting time is not very friendly to Chinese users. Is there any way you can improve this? It's like 1 a.m. or something like that. Depending on where you are in China, yeah. We could; the problem is we've only had, I think, one person from China join on a regular basis. And for one person, it'd be very difficult to convince everybody else to switch. If we got an indication that there's more than one, we would definitely consider it. So voice your opinion; say on the mailing list that you want to join but it's a bad time, and we can do a trial run. Okay. But if we don't hear from you, we're not gonna think about it, to be honest. That's how it comes down, right? The majority wins in that case, unfortunately. We will shoot emails too. Yes, please, send an email to the group, not just to me, to the group, right? So they see that there's interest. Personally, I have no problem doing it.
I don't know about everybody else, but I'm okay with staying up till midnight occasionally and switching back and forth. I would do it. Thank you. Yep. All right, any other questions? Yeah, I think I can understand why we did the definition for CloudEvents, but for the workflow, the next step, I'm a little confused. Why should we define another workflow? You know, there are so many workflows. What's the benefit, what's the difference, of what you'd be doing with workflow? There's a wonderful comic strip about this: oh my gosh, there are 14 different ways of doing this, we're gonna create one unified one that rules them all, and now there are 15. That's always the concern here. I don't have a good answer for it, other than that the CNCF, I think, has done a really wonderful job of bringing the community together, right? For example, CloudEvents is not that exciting. It's relatively simple, right? But the fact that the entire community came together under the CNCF to do it means something, and so it's gonna increase the chances of people adopting it. With something like workflow, or anything else we do, I think it's gonna be the same thing. You run the risk of it being yet another whatever, but if we do it right, we're gonna have the big players in there, you know, IBM, Red Hat, Google, Microsoft, hopefully, and Amazon, hopefully we'll get them to participate. And if they participate in the definition of that thing, I think it increases the likelihood of it being adopted. Because I had the exact same feeling when CloudEvents first started, right? It's gonna be yet another sort of eventing type of thing, right? But we've heard, I can't mention their names, but we've heard that a lot of the big players are adopting it. And I think it's simply because they've been participating in it, and they've had a chance to voice their opinions and shape it into something that meets their needs. And we didn't try to go too far, right?
As I said, we weren't trying to define yet another common event format. We looked at one particular pain point: how do we get common metadata into an event, right? So if we can do something similar with workflow, and granted, workflow is a lot bigger than four little metadata attributes, but if we can do something similar there, if we can cover a certain percentage of the workflows that people are defining today in a relatively common format, and get the big players to agree to support it, then it can take off. But you're right: if we can't get the big players to adopt it, it's gonna go nowhere. And that's the risk we're taking. So that's why we have to go very slowly and carefully about it. But I do share your concerns. Thank you. And with that, we'll end it there. Thank you very much.