Good morning. Good morning. Morning. Karthik, are you all ready to go? Yep, all ready. Cool. I think we'll probably start at about 8:03. Do you want to try sharing your screen now, just to get that going? Yeah, give me one second. Are you guys able to see it? Yep, I can see it. All right, is that good? Yep, thank you. All right, perfect. Good morning, everybody. We'll give it just a couple of minutes for some more folks to join. All right, just one more minute and we'll get kicked off. All right, let's see. We don't have the normal load of people, but I think we should get going anyway. Good morning, everybody, thanks for joining. We've got a schedule in the Google Drive as usual, and it's packed with a couple of things. We're going to have a presentation from Yugabyte — Karthik is on the line here to talk about the scale-out database they've been creating; I think it's pretty cool stuff. And then we'll also have a spot for a little follow-up discussion about the KubeCon sessions. Before getting into that, one general note: Camille has been reaching out to some folks on the SWG to gather feedback. Please do spend some time with Camille and give her your feedback on the SWG — what it's doing, and what you think it should do. This is all part of making sure that as the TOC makes decisions, or gives charters to the storage working groups, they're informed by the perspectives of the people in those groups. So if you could take some time to give Camille some feedback, we'd all appreciate it. With that, let me hand it over to Karthik and we'll get going talking about Yugabyte. All right, thanks a lot. Hey guys, I'm Karthik, and I'm going to talk to you about Yugabyte.
It's a transactional, high-performance database for planet-scale applications, and we'll dive right into what that means in detail. A real quick intro about ourselves: three of us founded this — Kannan, Mikhail, and myself. I'm one of the founders and the CTO here, and all three of us, along with nine others, worked at Facebook on a variety of different applications in production. We worked on both Cassandra and HBase, putting them into production for use cases such as messaging inbox, messaging search, time series, spam detection, and so on. And yeah, let's jump right in. A quick word on the problem we're trying to solve: we saw this pattern repeated quite often at Facebook, and having been in the open-source community with HBase way back, we had seen a lot of companies repeating it across the web 2.0 tech-company sector. Now this pattern is becoming even more common in the enterprise, especially with the advent of the public cloud. So how do people build planet-scale apps? It's pretty clear that Docker, with Kubernetes as the orchestrator, is the favorite choice for running stateless applications, and that's pretty much going into production and becoming mainstream. But when it comes to data, that's where the challenge begins. Today's typical data architecture has a SQL master and slave — whether it's sharded or a single-node scale-up solution — plus one or more NoSQL solutions, because NoSQL databases provide certain advantages that really help. And the minute you put your data across multiple data stores, it becomes very expensive to recompose it. So people put the data they need to serve to the end user into a cache like Redis.
So immediately, with this sort of architectural setup — even if it's containerized — the issue becomes: you need to figure out which subset of the data goes into a transactional database like a SQL database, which subsets and access patterns are ideal for which of the NoSQL databases, and which subset of data is being actively accessed by users and therefore has to stay in a cache like Redis. And because multi-region is becoming the norm, and a lot of applications want to keep their data closer to the user for low-latency reads, you need to figure out how to replicate at pretty much every level, right? And if there's a failure in this sort of system that's been put together — the blueprint is similar, but the exact implementation varies, and maybe the choice of technology varies a little here and there — it inevitably takes a long time to figure out what went wrong. So the question we get asked is: suppose you go to a public cloud like AWS — how does that change this picture? Well, it makes it a little easier, for sure, but not a whole lot, because you replace the Redis machines with ElastiCache, which Amazon or another cloud provider manages for you; the SQL tier is replaced with something like Aurora or RDS; and the NoSQL tier is replaced with DynamoDB. So effectively the architecture is still predominantly the same. At Yugabyte, we asked: why is it not possible to converge all three? And this is based on a lot of work we did at Facebook, and on work we had seen sister teams do with projects like TAO. So what really is the characteristic of these databases that makes an app require multiple of them?
If we split it into three core requirements — pillars that a database should offer — you can think of it like this. SQL databases, including Aurora, offer you high performance and transactionality, but not planet scale, because it's difficult to get your data distributed and scaled out, adding machines as you want; all of that is manual. NoSQL databases — MongoDB on the open-source side, as just one example among many, or Azure Cosmos DB, a multi-model NoSQL database from Microsoft — offer high performance and planet scale, but don't offer transactions when you need them. I'm talking about transactions in both the single-row and multi-row sense; some of that is offered, some is not. On the other side, the tack that Google Spanner took was to go after planet scale and transactional workloads, but it's not ideal for high performance, because you're subject to the atomic-clock latency even for streaming types of workloads where you don't really need it. At Yugabyte, we're trying to bring all three pieces together. It's got to be high performance, so you can serve with low latency and it can just be a serving tier; it's got to be transactional when you need it, for the subset of workloads that need transactions; and it's got to be planet scale. Okay, so those are our design goals: transactional, high performance, planet scale, and of course cloud native. Really quickly, on the transactional side, we wanted the core data fabric to have distributed ACID transaction support for both single-row and multi-row operations, with a document-based storage engine at the core that can be exposed through a variety of different APIs people are used to. On the performance side, we wanted it to be really low latency.
Ideally, for the majority of workloads, people should not need to deploy a cache in front of the system, and it should be able to accommodate high throughput. We built it with planet scale in mind, so you can globally distribute data, and we offer tunable reads so that people in remote data centers can read from their nearest data center with some semblance of consistency. And finally, on the cloud-native side, the obvious requirements are being highly scalable and highly resilient: add nodes when you need to expand your storage footprint or need more serving or cache capacity, and tolerate node failures and most of the common cloud failures without any intervention. But more importantly, also make it really easy for the user — let them express an intent and have the database respect that intent — and give a seamless operator experience for day-two operations when you're trying to keep this running in production. We're going to look at a few of these things in detail, but at the core of the database, instead of being too purist about the exact query languages, we brought in the best features of the two sides of the house. On the SQL side, we bring in strong consistency, secondary indexes, ACID transactions — single-row and multi-row — and the expressiveness of the query language, where you have WHERE clauses and joins; some of that we'll continue to work toward and add, but that's the core philosophy. On the NoSQL side, we bring in tunable read latency — read from a follower or one of my async replicas in the nearest data center if you want low read latency but are okay with timeline consistency — optimization for large streaming writes, features like automatic expiry of data with a time-to-live (TTL), and the ability to scale out and be fault tolerant with your data, with primitives to answer questions like: how do you partition data?
How do you lay out the data? And so on and so forth. Okay, so if you take Azure Cosmos DB as the bleeding edge of NoSQL and Google Spanner as the bleeding edge of SQL in a cloud-like environment today, Yugabyte brings the best of the two worlds into a single database: we are multi-model and high performance just like Azure Cosmos DB, and ACID transactional and globally consistent like Spanner. Okay, so very briefly on the architecture. At the core it's a scale-out database — you're able to add machines in order to scale it out. Each node has what we internally call DocDB, a heavily customized version of RocksDB, and to replicate data with consistency across nodes we use Raft-based replication. We have a global transaction manager to do distributed transactions — as distinguished from single-row ACID — while still keeping things highly performant, and we do automatic sharding and load balancing across all the data, irrespective of how you access it. All of this is written in pure C++ — everything is put together from the ground up in C++ for high performance. Finally, we allow people to access the database through well-known languages as starting points. We offer CQL, the Cassandra query language; the Redis API; and we're working on Postgres as another API. So you'll be able to come in through any of these three APIs; each of them maps onto a table in the core data fabric and can be served. And in some of these languages we've actually added extensions as we saw fit, to support the use cases we want. For example, in Cassandra we added distributed transactions — so you can do BEGIN TRANSACTION and do some work inside the transaction — plus secondary indexes, JSON data support, and so on. So that's what Yugabyte is. It has no external dependencies, so it can run on-premise, on a cloud, on a VM, in a container — it can pretty much run anywhere, on any IaaS.
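To make the automatic-sharding idea concrete, here is a minimal sketch of hash partitioning — mapping row keys into a fixed set of tablets. This is purely illustrative Python under assumed parameters (the tablet count and the use of MD5 are mine); the real database is implemented in C++ and has its own hash scheme and tablet metadata.

```python
import hashlib

NUM_TABLETS = 16  # illustrative choice; the real system sizes tablets per table

def tablet_for_key(row_key: str) -> int:
    """Map a row key to a tablet by hashing it into a fixed bucket space."""
    digest = hashlib.md5(row_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_TABLETS

# Every key deterministically lands on exactly one tablet...
assert tablet_for_key("user:42") == tablet_for_key("user:42")

# ...and a batch of keys spreads across many tablets, which is what lets the
# database rebalance load as nodes are added or removed.
tablets_hit = {tablet_for_key(f"user:{i}") for i in range(1000)}
print(len(tablets_hit))
```

Because the key-to-tablet mapping is deterministic, any node can route a request without coordination; only tablet-to-node placement needs to be tracked centrally.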
All right, so that was a brief intro; now let me go into the current state of Yugabyte, and then we can jump into a demo — a shopping cart. On the current-state side, we're at 0.9.7, a publicly available beta, marching toward a 1.0 generally available version in the March-April time frame. We've tested it for high scalability: we've gone up to 50 nodes, and we can see that you scale linearly and get millions of read and write IOPS without really sacrificing latency. What you see at 50 nodes — and these are point key-value reads — is 2.6 million reads per second with 200-microsecond latencies, and 1.2 million writes per second at 3 milliseconds, but that's a three-way replicated, consistent write. And it's a highly performant database — that's another of our core pillars — so we tested it against some of the more performant NoSQL databases like Cassandra. This is a YCSB report comparing Yugabyte with Cassandra, showing the number of operations per second. We've put in a lot of effort, and a lot of learnings from running such systems in production at Facebook, to squeeze a lot of performance out of it — but performance is a continuum, it's never-ending, so we'll continue to improve it. We added distributed transactions, so you can create a Cassandra table — in the classic banking example you have an account name, account type, and balance — and shard your data by account name. Having sharded on account name and kept each account's rows together, you can perform cross-shard transactions, transferring money from one account to another account that could live on a different node, and we do all the clock tracking, clock-skew handling, et cetera.
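The all-or-nothing property of that cross-shard transfer can be sketched in a few lines of plain Python. To be clear, this is NOT YugabyteDB code — the real database does this with a distributed transaction manager, Raft replication, and clock-skew tracking across nodes, and is driven through CQL — this toy just shows the semantics the transaction must guarantee:

```python
# Toy in-memory illustration of an atomic transfer: either both the debit
# and the credit apply, or neither does (no partially applied state).

class InsufficientFunds(Exception):
    pass

def transfer(balances: dict, src: str, dst: str, amount: int) -> None:
    """Debit src and credit dst as one all-or-nothing operation."""
    if balances[src] < amount:
        raise InsufficientFunds(src)
    # Stage both writes on a copy, then apply them together, so a failure
    # anywhere before the final update leaves the original state untouched.
    staged = dict(balances)
    staged[src] -= amount
    staged[dst] += amount
    balances.update(staged)

accounts = {"alice:checking": 100, "bob:checking": 50}
transfer(accounts, "alice:checking", "bob:checking", 30)
print(accounts)  # {'alice:checking': 70, 'bob:checking': 80}
```

The hard part in the distributed setting is that `alice:checking` and `bob:checking` may live on different nodes, so "apply together" requires cross-node coordination — that is what the global transaction manager described above provides.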
This is an actual running system in one of our customers' environments. It's an example of a user login/password-style setup: two copies of the data in US West, two copies in US East, and one copy in Tokyo. The replication factor is five, which means you need a quorum of three replicas to complete a write successfully with consistency, and your reads can happen from any data center local to you. This setup can actually survive an entire region failure, and it gives you low read latency from any of the regions — so users logging in can log in very quickly, but users changing their password would see higher latency, because while the read latencies are in the 200-microsecond range, the write latencies are close to 200 milliseconds. That's an average across load testers running in all three geographic regions — it's because you have to reach quorum to establish consistency, and writes from Tokyo would invariably take longer. Yugabyte already works with multiple clouds — Amazon, Google, and on-premise are well tested — and Azure is something we're adding support for. But let's jump quickly into our demo; this is an all-Kubernetes demo. YugaStore is a sample app, an online e-commerce bookstore; you can find it on GitHub — it's an open-source project as well. The first thing I've done — because it's not terribly interesting to do live and wait for it to come up — is to bring up Yugabyte as a Kubernetes StatefulSet. It's a replication-factor-three setup, so the Yugabyte cluster is three-way replicated, it's got three nodes in it, and it can be scaled up or down on the fly. The second thing I did was bring up the YugaStore app. It's a Node.js, Express, and React based app that simulates a bookstore — a very simple e-commerce app that lists some books and lets you categorize them into some static groups.
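The quorum arithmetic behind that RF-5 deployment (two replicas in US West, two in US East, one in Tokyo) is simple to work out. A minimal sketch of the majority rule a Raft-style system uses:

```python
def quorum_size(replication_factor: int) -> int:
    """A Raft-style write commits once a majority of replicas acknowledge it."""
    return replication_factor // 2 + 1

def failures_tolerated(replication_factor: int) -> int:
    """How many replicas can be lost while a quorum can still form."""
    return replication_factor - quorum_size(replication_factor)

rf = 5
print(quorum_size(rf))         # 3 — matches the "quorum of three" in the talk
print(failures_tolerated(rf))  # 2 — losing one whole two-replica region is survivable
```

This is also why writes from Tokyo are slow in that setup: with only one local replica, two of the three acknowledgments must cross the Pacific before the write can commit, while reads can be served from whichever replica is nearby.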
So having done that, let me quickly jump into the actual application. Hopefully you're able to see the screen — it's the Kubernetes dashboard; please say something if you're not, otherwise I'm assuming it's all good. What you see here first: these three T-servers are the slaves — they're the ones that actually serve I/O. The three masters are background coordinators; there are as many masters as the replication factor. And the last deployment here is the stateless app deployment. I'm going to switch to the Yugabyte dashboard. This is actually running inside Kubernetes, and you can see that the different masters have talked to each other and, using Raft, elected one of themselves as leader, and this setup has replication factor three. It has one keyspace with one table in it, called products, and we're going to look at how that shows up in the UI. It's got three T-servers, and obviously that's scalable on the fly. So if I go to the — oops, one second. Give me a second here, sorry. Something's got to go wrong when you do a demo, right? Okay, we'll get to that in a second. Did you sacrifice anything to the demo gods this morning, Karthik? What's that? I forgot to start minikube. I'd stopped it because that thing hums and you guys wouldn't be able to hear me, but it's all good now. We're back in business, sorry — that thing really makes a noise on my machine. So these are the tablet servers. What you'll notice about this setup is that it's all running in a single cloud, single region, single zone — so it's not multi-anything — but it can very easily be deployed in a multi-region, multi-zone, or multi-cloud fashion; the database internally understands that. Now let's go to the React app. This is the app that shows you the list of products, or books, being listed, with some static categories.
So you can look at just the business books, cookbooks, mystery and suspense books, and so on — those are the static groupings. You can go into any one of these books and see some static content, like the title and the description, and some dynamic content, like the average rating — the number of stars people have given on average — and the total number of reviews, and so on. And you can sort by the dynamic attributes as well: you can see the books sorted by the total number of stars they got, or the total number of reviews they have, and so on, right? So that's the app. I'm still working on adding the checkout and shopping-cart side of things, which requires distributed transactions, but jumping back to our presentation: how does Yugabyte simplify this? Typically, for the less dynamic content like the title and the description, a SQL-like API — for example, Cassandra — is a great choice to store the data, because you can model most of the attributes you want and add the ever-growing attributes as JSON data types. Whereas for the highly dynamic content that changes all the time — for example, the average rating or the total number of reviews — Redis is a great fit. So in Yugabyte, you can model your products as a table and run a query such as the one shown on this slide, selecting some books from the business category, and at the bottom you can use Redis sorted sets — with the number of reviews as the score to find the most-reviewed books, or the number of stars as the score to find the highest-rated books. Now, let's actually do that, okay?
So I'm going to connect to T-server zero using the Cassandra shell, and we can do a SELECT and fetch the top two books in the business category. You can add any number of categories, alter the table online, upgrade the software online, and so on. You can actually reconfigure the database to run on a different set of nodes or regions without taking application downtime. Now I'm going to connect to Redis, and if you want the top 10 books by number of reviews, you can run that — that's the Redis sorted set. All of this data is stored persistently inside Yugabyte, so you don't need to supplement Redis with the data being present in another database; it's a single database dealing with everything. And finally, let's run the equivalent of a robot user — like the Bangladeshi click farms, to use a clichéd example — which just views products one after another. We can go into our UI here, refresh, and we should start seeing load getting pumped into the various machines. The point is that you can add nodes on the fly and the load will get evenly distributed, and you can change the setup of the system to run on a different cloud or region — all while the system is online. Okay, I'm going to spare my machine the trouble and go back to the presentation. Yugabyte Database is an Apache v2 project. We follow an open-core model: we have a CE edition, which is everything I showed you today in the demo, and an EE edition that has the UI-driven deployment, the deep integration into the clouds, built-in metrics and alerting, as well as some features that are more production day-two features, such as async replication to remote regions, or tiering of data to cheaper storage tiers when you have a lot of data. All of those are in the EE. You can check us out on GitHub.
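The sorted-set trick in that demo — score each book by its review count, then read the top N — can be sketched in plain Python. This is illustrative only: in the demo it's real Redis sorted-set commands (ZADD/ZREVRANGE) against Yugabyte's Redis API, and the book titles and counts below are placeholder data, not from the demo:

```python
# Emulate a sorted set: member -> score, ranked highest-score-first, the way
# ZADD followed by ZREVRANGE would surface the most-reviewed books.
reviews = {
    "Book A": 412,   # placeholder titles and review counts
    "Book B": 389,
    "Book C": 251,
    "Book D": 198,
}

def top_n(scores: dict, n: int) -> list:
    """Return the n members with the highest scores, best first."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(top_n(reviews, 2))  # ['Book A', 'Book B']
```

The point of backing this with Yugabyte rather than a standalone Redis is the one made in the talk: the scores are persistent and replicated in the same database as the product table, so there is no separate cache to keep in sync.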
We have great docs — you can get started in just a few minutes if you want to give it a spin on your laptop. The next steps in the Yugabyte Kubernetes journey — on our roadmap and in progress internally — are to build a Yugabyte operator, so that people running this in production can do so with great ease, and to do an OSB (Open Service Broker) integration, so that end users can consume it easily. The first is making it easier for the operator; the second is making it easier for the user. And as for Yugabyte itself, our aim is to make Yugabyte a CNCF sandbox project, because we really think we can simplify the way applications are being developed, especially on the stateful tier — we can simplify that quite a bit. And yeah, we'd love to be involved in figuring out how to achieve various things like cross-region deployments or local-disk pass-through, and so on. That's all I had. Please feel free to reach out to us, or to me directly — we'd love to hear from you. Karthik, excellent, thank you so much for the presentation, that was great. Thank you. We can leave it open for a few minutes for questions. Anybody out there have questions for Karthik? I'll kick one off: how long has the database been available on GitHub? Got it, great question. We've been on GitHub for about four months. We've been building the database for about two years, but we built it without thinking about how to monetize the project or go to market — we didn't want to focus on that. We just wanted to focus on the core problem, because it's a fairly hard problem to solve and it takes a lot of work to get there. More recently, we've tried to figure out what the company is going to look like and what we want to do.
It's been out on GitHub for about four months, and we're working with a community like Kubernetes, where the philosophy of what the CNCF does and what we want to achieve is fully aligned — so we want to figure out how to make it even more accessible to developers. Got it. Whatever you can share with us here would be great regarding customers — who's actually looking at this, what types of use cases, and why. Yeah, great question. We've installed Yugabyte at about 10 to 15 customers who are trying it out, and we have a couple of customers going into production this quarter. In fact, we're pretty much there, effectively, with the promise of keeping backward compatibility and all of that — but we're waiting for these customers to go into production and become referenceable around our 1.0 time, which is going to be the April timeframe, and we expect a few more to come on board and go into production soon after. We're being deployed on-premise, on Google Cloud, and on AWS — actually, I should reorder that: AWS, on-premise, then Google Cloud is the order by number of customers that we see. As for use cases, we're closest to production for single-row ACID use cases. These are in the fintech industry, where you have stock tickers and stock quotes and all of that; things like logistics and tracking, which is closer to real-time IoT, where you want to figure out where vehicles are and how to report on them; there are some e-commerce sites looking at us; and security and fraud is another area. So it's a variety of different verticals, because the database itself is pretty horizontal. But most of these applications require two or more of those three pillars: transactionality — whether single-row or multi-row, so data consistency is important.
Distribution across the world — sync, async, hybrid deployments, microservices architectures, that side of things — and good performance for a serving tier. All right, anybody else have questions out there? Quiet group today. All right, Karthik, thank you so much for presenting to us — I think that was really cool. Looking forward to working with you guys; please reach out to the storage working group if there's anything you need, and we're looking forward to collaborating with you in the future. Awesome, thanks a lot, thank you. All right, team, on to the next agenda item for the day. Last time we took the last half hour to talk a bit about our KubeCon EU presence, and I think what we decided was that everybody needed some more time to think about it. Just a reminder: we had three sessions slated at KubeCon. First, the private session — we're still trying to figure out who's going to be there and what it's going to be. I think the private one was going to involve possibly getting members from the TOC to come speak with the SWG about their thoughts on the working groups and what they'd like to see, and to try to get more of a charter from them, so that we could start tackling some of the important things they feel we should be doing. That one is still in discussion, and we'll report back on where it goes. There were two others. I think Saad mentioned that the intro session was overlapping with the Kubernetes intro session, and we're working with the program committee to get that moved right now — so I think that one's still a go, and I'll let you know when the time gets updated so it's not conflicting. And the second one was the deep dive. So the ask last time was for people to think about what kind of agenda, or what topics, we should be covering in these two sessions. And that's where I wanted to hand it back to the group to chat.
So, who's got some ideas or comments they want to share? Mr. Steve Watt, you out there? They're always chatty. Called out, huh? Yeah, I think, specific for me — Saad mentioned there was a Kubernetes meeting, so I thought maybe we needed just one CNCF meeting in the schedule for storage. My comment on that was: I know it's easy to propose a meet-and-greet in theory, but my experience trying to do that is that everyone turns up expecting to see a session. So it might be a good idea to do one more outside of the session, outside of the track — an actual, more open meet-and-greet kind of thing. So a total of three sessions: one, you've got Saad on his Kubernetes session; then a CNCF session — maybe just where we've been, where we're going, catching folks up on recent advancements, the different phases of project acceptance, where CNCF projects fit into that, and how to get involved — which is more of a presentation style; and then we could have more of a high-bandwidth, casual cheese-and-wine thing — maybe both interpretations of the whine — in another forum. That's just an idea, something you might want to consider. Okay, thanks for that. Anybody else have any thoughts? Saad's got a plus-one to Steve on that. Okay, cool. How about just in terms of people who are going to be present? We've got these sessions, and we can figure out exactly what they're going to be, but who's interested in being involved in the planning and possibly the delivery, and who's actually going to be at the conference? I'll be there and I can help out. Okay, Saad, cool. I will be there, but only on Thursday — is that all right? Yeah. Okay, that'll work. So we've got you for Thursday. Who else is going to be there who wants to help or participate? Mr.
Brad Childs, want to present? No, I'm not going to be present. Okay. Ben, are you going to be there as well? Oh yeah, I'll be there. Okay. Well — this is Steve — we've got some vacation scheduled for the Friday; it's just my wife going out of town, though, so if you guys get stuck I could look into it, especially if we were focusing on Thursday, and I could travel back Friday. So just let me know; I'm not opposed to it. I was kind of wanting to go in the first place — I just have to figure out the logistics. Got it. Okay, so for now it sounds like it's Saad, possibly Steve, Orit, Ben, and myself. Ben, what do you think — should we continue trying to chat about this in the SWG, or should we set up some separate calls to discuss as a smaller group? Well, to start, does everybody feel like we understand the time slots and how we want to use them? Like, we're not going to do the late-night one, right? That was kind of an up-in-the-air question. I mean, what we could use that for was to get some of the TOC members to just chat about things, but it's still up in the air whether that's going to happen. Yeah — and whether they're going to be able to join and not have dinners and whatever else. Exactly. Yeah, I mean, I love the idea of TOC members joining; I think you'd probably have Camille and Brian and some other folks who'd be willing to come by and just talk with the SWG about what we're trying to do. I think presentations like today's are great, and the SWG being a place where we can have them is great, but I think it would also be great if we tried to use the face-to-face time to decide what else, if anything, we want the SWG to do. I think we left the last face-to-face with some ambitious goals of defining some things around cloud-native storage and what it means to operate cloud-native storage.
You know, we're all busy, and I don't think anyone's really been able to take that on. But I think we should make it clear whether or not we want the SWG to have that as part of its mission. That, to me, would be a good use of the time. Sorry, Ben — what specifically as part of the mission? Well, it's unclear exactly what we want the SWG's output to be. After the last face-to-face, one of the outcomes — at least the way I interpreted it — was that we were going to try to define, in at least a looser sense, cloud-native storage from an operations perspective versus an application-consumption perspective. Yes, absolutely. Yeah, and I don't know that we've dug back into that, or that anyone's really had the time to. If we had done that, I think there would have been a clearer mission for the SWG — which is to produce, whether you want to call them white papers or definitions or whatever, artifacts along those lines — but that's not really something we've done. Which is okay. To me, though, that leaves the group with a somewhat less defined, less clear mission about what its output is and what role it's playing. Yeah. And to me, this face-to-face would be good for just settling that. Even if the role we land on is not as ambitious as defining all that other stuff, that's fine — it's good to have a clear understanding of who we want to be and who we don't, what we want to do and what we don't. 100% agree. I think the scope of the storage working group is in the critical path of making any forward progress; until we settle it, we're sort of in this paralysis phase. And personally, it's exactly what you described.
Like, is the CNCF storage working group (you can't see my air quotes, but "storage") like the Kubernetes storage SIG, which is basically the storage that supports the application platforms, or is it all application persistence? We've got to decide what we are. And I have an opinion on that, but I don't wanna hijack this meeting to jump into it. But I do think we need to get to the bottom of that. Yeah, so Clint, to answer your question, I would be happy with having one of the sessions just dedicated and devoted to figuring that out. And I think we can either try to get feedback from TOC members ahead of time, we can have them be present to also get their take and perspective on it, or we can brainstorm ourselves and then go back and say, hey, this is what we think we're doing, this is who we think we are. But it seems like a good use of time. Do you think that, I mean, we've got the three sessions, right? The one at eight o'clock is questionable as to who we can actually get there. Are you saying that maybe we take one of the general sessions and have that be like a round table format? Or are you saying the eight o'clock one is where we try to tackle that? I mean, I think it's gonna be whichever one we're gonna get critical mass at. So rather than picking the time, I think whichever one we feel like we can actually get a sufficient representation of the group and have sufficient coverage from the various perspectives and views. Do we think that the public audience is gonna benefit from seeing some of that? I wouldn't call it dirty laundry. I think it's just open source process at the end of the day, figuring out what we need to do or what we're gonna do. But is that something that we want to be a public session? I mean, I think it's perfectly fine if folks from the public want to come in. I don't think there needs to be any shame in us wanting to better define exactly how we want the group to run.
In fact, I think all groups should probably be doing this periodically as just a continued reflection on how things are working. Yeah, and I guess what I'm thinking about, though, is that it's in the catalog itself, so maybe we would just need to make sure it's well-defined, like what exactly the session's gonna be, so that people aren't disappointed, as Steve has said before. Yeah, I mean, I don't know if... Yeah, what else do we feel we have queued up to talk about, if not this? And I apologize, Steve, my phone seems to have disconnected right when you were speaking and then reconnected. And so I completely missed everything you said, and all I got back to was Clinton saying, okay, Saad plus-ones that, so. No idea. Clinton was plus one. You got all the action items back. No, I think I can summarize it quite quickly. I was just basically saying that for that session, we should probably have a CNCF presentation rather than a meet and greet in the track, because, just from personal experience, despite an organizer wanting to have a meet and greet, what tends to happen is people show up expecting to see a session, and they don't talk, and then you just stand up there looking weird. And then the third one in the evening was the casual meet and greet, you know, and if we can get TOC folks there, or if not, we just have a forum for high-bandwidth conversations, which we can always use. Because one thing I'd guess the TOC has observed is that it's taken a while to diffuse exactly how the CNCF works, you know, different aspects of the governance model. I know personally I'm being routinely educated as I ask more questions. So I think there's opportunity for more education there, and conversation around that is always good.
You know, the last time we talked about the sessions, I think there were two things that I wrote down from notes. One was that we could have a short presentation setting some context and then a panel discussion, so an open forum. And the second was that we'd have a review of what the SWG has been discussing as a landscape (this obviously hasn't been ratified and is still to be determined), but we'd at least express our point of view, and that would be more of a presentation where we put out a landscape and describe the different components and some of the projects that would fit inside that landscape. Yeah, I mean, I think that sounds great from the perspective of, if we wanna have a public session where we bring people in and let them interact with the storage working group and ask questions and learn more about storage, I think that's all great. And I think we can do that as perhaps one of the earlier sessions. To me, for the discussion with the TOC and sort of the internal discussion with the storage working group itself, the burning question is really more about how we wanna see the group function and operate going forward. And again, I mean, sessions like today, where we have great presentations and we can ask questions and educate folks, are a great opportunity to share and discover and talk about a lot of the interesting storage projects out there. It can be a completely acceptable decision that we make, which is that this is the extent of what we want the SWG to be. But we could also do a lot more, and I think it'd just be great if we had clarity for this group, for the TOC, and for everybody else about any of the other stuff that we're trying to do. Seems like it's a good opportunity to have these discussions face-to-face versus just online. But we could also do it in one of our future regular calls.
So I think I'll put it back on everybody else, which is: if everyone agrees that we wanna have those discussions, when do we wanna have them? Do we wanna have them face-to-face, or do we wanna have them on the calls and leave the face-to-face as more of a, hey, let's educate people out there about what's happening in storage land, let's talk about some of the projects that are presented, let's give perspectives? Or do we wanna use it as a working session for the storage working group itself? Can I say both? I mean, I think that at the conference, the public sessions, people are gonna expect, I mean, you're gonna have complete newbies to the area who are just really interested in storage, and I think they're probably gonna expect more canned and well-presented information so that they can quickly catch up. I think that we've got our needs as a group, which are somewhat separate, but I feel like we could accomplish both at the event. I think that we could take one session and make sure we just had a great intro presentation, landscape, open panel discussion, so at least we have a mix of intro and more advanced discussion going on there, and then we use that extra session as the face-to-face, which is the round table with TOC members to discuss what the SWG can be. Works for me. What do other folks think? Sounds good. So is that what we wanna do? Anybody have any objections, any other ideas? Sounds like then we wanna maybe prepare some of that canned content, Clint, which you and I can kick off, and then we can recruit others. Okay, excellent. Should we call it a meeting for the day then? All right, next week we have, I believe it's Dotmesh presenting, the next session is Dotmesh doing the first 30 minutes. If anybody has any other storage projects, please do reach out. We definitely wanna get that agenda filled.
I definitely enjoy hearing from all the different interesting storage projects out there in the ecosystem, like I was talking about. I think it helps educate me on what's going on. So I enjoy it. So if you guys have anything else you would like to have presented here, please do let me know so we can get it on the agenda. Yeah, just a plus one on the Dotmesh folks. For those of you that don't know, that's Luke and crew, who were the creators of Flocker, so pretty relevant to the space, and pretty smart. Original gangsters of container storage. Cool. All right, well, thank you everyone for your time, and we'll give you back 10 minutes in the day. All right, take care. All right, bye-bye.