Oh, sorry, I was on mute. Hey, Jimmy.

Yep, no worries.

I can't find the setting at all. I don't see a More button or anything like that.

Yeah, you only see it if you're the host. So you should see it down there now.

Oh. So was it because I was trying to look for it before the time slot?

If you log in using the password, then you get the host option. But if you log in without the password, you can use the host key and claim host.

I see. Okay. Sorry, we use Google Meet at work.

No worries. I'm sorry that it's been confusing.

No, but that session that you just rescued for me: 47 people.

Oh, awesome.

Yeah, it was bursting with life, so I was pretty thrilled. I think this next one will be smaller, but that was well worth it. One question we had: are these being recorded, or just streamed?

They are being recorded. We just have to do a little extra work to pull the recording from the streaming service, convert it back to MP4s, and then we'll post them.

Got it. People asked, and I wasn't sure whether they wanted it to be recorded or were concerned that it was being recorded, so I said to assume that it is. I'm glad to hear that's the case.

How are you doing personally?

Oh, pretty well. Just living this weird existence, trying to get my kid home-schooled.

My daughter's 17, so I think we have it easy: she does her school stuff and self-regulates. If your kids are younger, I imagine it's much harder.

Yeah, it's been an adjustment for sure. But I won't be unhappy when the summit is behind us.

When I saw the whole thing go sideways on the first day, I was thinking with my operator's hat on: someone's having a super bad day, and I've had many of those recently, with systems crashing and so on. So I was thinking, fingers crossed, guys, get it fixed, push the fix, whatever. And it's been perfectly good since the fix, right?

Yeah. Knock on wood.

All right, sir, I'm going to take off; I have another session. Thanks again.

All right, we'll see you soon. Bye.

Hello, everyone. Welcome. Thanks for joining. The interesting thing about these sessions is that I have no idea how many people are coming, so I'm going to give it a minute or two, because I think people are coming off other sessions, which may not end exactly on time. How is everyone this morning? Go ahead and speak up; I'm shy. I'll probably repeat this, but this is an interactive session: it only works if people speak. This is not a presentation. This is part of the forum portion of the summit, and the topic of this one is war stories, which means, very specifically, that we get together and share experience. I've put the link to the etherpad for this session in the chat. Please throw something up there. You don't need to disclose the meat of your story if you have one, but you can put up a title and your name, and then we'll go through them. I've thrown mine in at the bottom of the list, just to make it clear I'm not intending to make this a presentation.
I'm hoping that people will volunteer to share something they dealt with, whether it's OpenStack or just something from this kind of role. We have had people talk about things that aren't specifically OpenStack: data center infrastructure, network-type catastrophes. We're up to nine participants. The previous session, which some of you were at, had 45 participants by about five minutes in. I don't know how many this will be, but I think giving it a couple more minutes is probably worthwhile. Everyone is muted and off video. Interesting. This is my first time using Zoom; we use Google Meet, so I'm still learning how it works.

Okay, there are 11 of us here now, and still no war stories on the lightning talk etherpad. It's in the chat here. If you're not looking at the chat, please open that window, click the link, and bring up the etherpad. Again, this is not a presentation. This is not something where you sit back and other people just convey information to you. It's meant to be a working session: we talk to each other and share. If no one shares, it's going to be all me, and you don't want that. So please do think of something to share. Hey, Thierry. I'm just reminding everyone that this is a working session where people need to actually share, or it's not going to work; it's not a presentation from Chris Morgan here. And I'm sure that, with a topic like OpenStack operators, people have war stories that will make other people's hair stand on end. So think back. It's entirely up to you; it's very open-ended. The purpose is really to share experience, of course with an OpenStack or loosely OpenStack focus, but these have been fun in the past. In January, John Garbutt volunteered to be the master of ceremonies, and he actually invented a scoring scheme. He wanted the scores kept private, so no one knew until the end, and then we announced a winner and two runners-up. The prizes were just swag, you know, Bloomberg-branded umbrellas and things like that.

I would have to think about a war story. I haven't prepared anything, but if I come up with something...

Okay. So I'm willing to lead off if that will get things going; maybe everyone is just not sure how these things work. It sort of goes against the idea that I'm the moderator, not the presenter, but we're five minutes in, and I do respect the time of all the people who have turned up. So let's try to get things going. While I'm talking, please think of a story. It's not mandatory, by the way. I can't put you on the spot and say you must talk now; that would be very hostile. But I am asking people to be involved. The etherpad is linked in the chat. I'm going to start giving my story, and after that, if we don't have any more war stories to talk about, it'll be a very short meeting.

So, my war story. We launched our new OpenStack product last year, and it's all layer 3 networking, all super modern, no layer 2 networking at all. It's a routed layer 3 network with BGP, in a new data center, with new hardware vendors for the switch gear. And because the hardware is more powerful, with more bandwidth, people started piling into this new version of OpenStack very quickly.
Some of the people who are very automated hit us overnight: Kubernetes clusters just sprang up and started going at it really hard. So it was really disappointing that, in one of our data centers, the entire control plane started to collapse and had super, super bad problems.

By the way, one quick note about what we're doing here: this is recorded. So please don't say anything in this cozy little Zoom chat that you don't want to one day be quoted on, because someone saw it on YouTube or on the Foundation website. I want to make that clear, because previous OpenStack operators events were not recorded, and we were extremely frank.

Anyway, back to my story. We narrowed it down to two problems happening simultaneously, possibly three. Our software stack was incompatible with one of our switch vendors' BGP implementations: it would fail heartbeats, or keepalives, all the routing would get torn down, and the networking would disappear for all the VMs. For a control plane, obviously, it's extremely damaging to have machines go dark. In addition, we found that some of the machines were failing to keep accurate time; they were drifting. That turned out to be a platform issue: Skylake-X machines get the wrong TSC frequency, or something along those lines. So we had this rapidly growing platform, this massive investment in new hardware, and the damn thing wasn't stable, and we had to fix it in place.

So we put forward a plan. First of all, we identified the fixes, and unfortunately they required a new BIRD, the routing daemon that exchanges routes, and that's an incompatible upgrade: you have to actually remove BIRD 1 and install BIRD 2. But these are machines whose connectivity depends on it, so you have to tear down networking on a machine that you can only reach over the network. We also needed a new kernel for time accuracy. It was a tough mandate from management, and then they added: oh yes, and you need to do all the firmware patching on every host, immediately, and by the way, the firmware patching automation isn't here yet.

To cut a long story short, we put together a plan where we did a rolling evacuation of every single machine, did a software upgrade on the empty hypervisors, and then had each one jump into a kexec. I'm not a developer, so I may be loose on the details, but the kexec'd image downloaded all of the vendor binary blobs, patched the firmware, updated the host, and then rebooted. And how did we do this with a huge fleet of VMs? Well, we had a bit of good fortune: a whole bunch of new racks turned up. So we did a one-to-one mapping. For each rack that needed patching, every compute node was evacuated to its peer in a new rack of at least the same capacity, the same RAM or bigger. If you had compute nodes A, B, and C in a rack that needed patching, A went to A-prime, B went to B-prime, and C went to C-prime. And the thing that saved us is a feature we had just turned on, which I mentioned in a previous session: Nova's support for auto-converge, which is actually a libvirt feature. If a VM is too busy, dirtying memory faster than the migration can copy it, libvirt slows its vCPUs down, and then it will move.
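Roughly, the evacuation loop looks like the sketch below: a minimal illustration, assuming openstacksdk and admin credentials, with made-up host names and rack pairings and deliberately thin error handling. The auto-converge behavior itself is a compute-node setting in nova.conf, noted in the comments.

```python
# Minimal sketch of the rack-to-rack evacuation loop, using openstacksdk.
# Host names and the rack pairing are hypothetical; error handling is thin.
# Auto-converge itself is enabled on the compute nodes via nova.conf:
#   [libvirt] live_migration_permit_auto_converge = True
import time
import openstack

conn = openstack.connect(cloud="prod")  # admin credentials assumed

# One-to-one pairing: each compute node in the old rack maps to a peer
# of equal or greater capacity in the new rack.
RACK_PAIRS = [
    ("rack17-compute-a", "rack31-compute-a"),
    ("rack17-compute-b", "rack31-compute-b"),
    ("rack17-compute-c", "rack31-compute-c"),
]

def evacuate(src_host, dst_host):
    # List every instance on the source hypervisor (admin-only filter).
    for server in conn.compute.servers(all_projects=True, host=src_host):
        # Shared (Ceph) storage means no block migration is needed.
        conn.compute.live_migrate_server(server, host=dst_host,
                                         block_migration=False)
        # Poll until the instance lands on the target or errors out.
        while True:
            s = conn.compute.get_server(server.id)
            if s.compute_host == dst_host:
                break
            if s.status == "ERROR":
                raise RuntimeError(f"migration failed for {s.id}")
            time.sleep(10)

for src, dst in RACK_PAIRS:
    evacuate(src, dst)
    # src is now empty: upgrade BIRD, kexec into firmware patching, reboot.
```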
I don't want to go on too long, because this is meant to be a lightning talk, but to cut a long story short: we got each rack that needed patching 100% evacuated, with essentially perfect success, because the target was yet another whole rack that was as big or bigger. Most of the time we just ripped through: tearing out BIRD and replacing it in place, patching the firmware, and rebooting. In some cases the fixed rack would then become the target for another rack that needed fixing. We ripped through about 1,500 hypervisors and about 15,000 VMs in, I think, six to ten maintenance events. And by the way, all of these OpenStack clouds remained online the entire time. AZs went dark, but apps are supposed to be distributed.

So that's a little bit of war. It was quite a challenge, but the feature I came here to tell you about, auto-converge, has just been magic. Of course, being on shared storage is also the thing that enables live migration to work smoothly. We're all Ceph-backed, but it could be some other shared storage for you. So that's my rack-swinging mania. We'd never swung racks before; we'd swung hosts, moving the VMs from this host to that host. But on one Saturday we eventually did six racks simultaneously, and each rack was done with eight-way slicing, so we ran something like 48 simultaneous host-to-host live evacuations. We fairly lit the network up, but it all just worked. It was fantastic. If anyone has questions about what this platform is: it's Canonical OpenStack, we're using Calico layer 3 networking, and our project is on GitHub. Ask me if you have questions.

That was the worst of it. I don't know how much war you perceived there; our data center didn't switch off. But I will say that when someone said you have to do it without an outage, and you have to patch the kernel, and you have to update the firmware, I thought: this is going to take until Christmas. Fortunately, we didn't just start on one host at a time and take as long as we needed; we worked out a better plan.

Okay, so we have one more war story on the list. We really encourage you to share anything that would pass along information, or maybe just entertain. The person who posted this hasn't put their name, so I do ask for names. Is it Ivan Romenko? Is that right? I may have pronounced it badly. Is that you, Ivan? Looks like it. Okay, are you ready to share your war story with us?

Hi. Do you hear me?

I can hear you.

Cool. We've hosted a public cloud for a couple of years; to be honest, it's almost five years. And in all that time, we've kept encountering the same error that a young ops engineer makes, every couple of years. The first time we encountered it, it was a problem with filtering output, in the sense that you get a wider result than you might expect. It was about listing snapshots... sorry, not snapshots, volumes: the volumes of a server. We were about to introduce a Mistral workflow to delete an instance with all its volumes, if the customer wants that. And when we were testing it, in a test project with only one instance with a couple of volumes, you would expect that no matter what command-line options you provide, you'll get one instance with all its volumes. And the scoping looked fine.
So when we started the workflow, taking an instance as input, it successfully found all its volumes and deleted them. It looked great. But when we deployed the workflow in our staging environment, it turned out that when you make an error in the filter and ask the API with admin privileges, you can accidentally get all the volumes in the tenant, not only those attached to the particular instance. So when we tried to test the workflow, we accidentally deleted all the volumes in that project. It was surprisingly bad.

Then a couple of years later we faced almost the same issue again, with our new developers. It worked almost the same way, but in another part of OpenStack: Neutron. You may know that when you delete a project, it gets deleted only at the Keystone level, so if you have resources spread through Cinder, Glance, Nova, and the other projects, they keep running for some time. Every operator ends up growing tooling to clean these things up, and we do that as well. When we were refactoring those procedures this year, we accidentally gave Neutron wrong filter parameters for getting ports: the ports attached to instances that were due to be deleted. And it turned out that Neutron does not respond with an error when you give it a strange filter string; it just gives you all the ports it has. So if you attempt something like that with a wide scope, like admin privileges, you end up doing almost the same thing as with the volumes and the filter: you just delete all the ports, no matter what they are. Ports connected to instances, ports not connected, router ports, all sorts of ports. It was a nightmare to fix. Well, that's it.
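To make the failure mode concrete: several OpenStack APIs silently ignore unknown filter keys, so a typo plus admin scope turns "list this server's volumes" into "list every volume you can see." Below is a minimal defensive sketch, assuming openstacksdk; the point is to derive the resource list from the instance itself and to verify every result before deleting, not the specific calls.

```python
# Minimal sketch of the safe direction: derive the volume list from the
# server itself, never from an admin-scoped filtered listing (a typoed
# filter key can be silently ignored and return every volume in scope).
# Assumes openstacksdk; names are illustrative.
import openstack

conn = openstack.connect(cloud="prod")

def delete_server_with_volumes(server_id):
    server = conn.compute.get_server(server_id)

    # Ask the *server* which volumes it has, instead of asking Cinder
    # "list volumes attached to <id>" and trusting the filter.
    volume_ids = [att.volume_id
                  for att in conn.compute.volume_attachments(server)]

    conn.compute.delete_server(server)
    conn.compute.wait_for_delete(server)

    for vid in volume_ids:
        conn.block_storage.delete_volume(vid)

def delete_server_ports(server_id):
    # Same idea for Neutron: it returns *all* ports if the filter key is
    # misspelled, so verify each result before deleting it.
    for port in conn.network.ports(device_id=server_id):
        if port.device_id != server_id:
            raise RuntimeError("filter was ignored -- refusing to delete")
        conn.network.delete_port(port)
```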
Okay, very good. One of the things we innovated over the last couple of these sessions: after you've shared your story, it's nice to put some of the details in the etherpad, especially since, in your case, you found that brand-new admins keep making the same mistake and deleting volumes. Sharing that kind of thing is useful. It's entirely up to you, but if you want, share any lessons learned. I'm going to share something about my own story when I get a chance. But the well is running dry here; we don't have any other war stories for me to pick from the list. So someone needs to put their hand up and say they can share something, or we're out of luck. Thierry, you said you would have to think of something. There must be... I mean, in your time, being such a leader in OpenStack, there must have been some pretty dark days. The scope can be wider than "my OpenStack cluster did this." I'm not supposed to put people on the spot, but again, this is supposed to be a working session. You have to participate, or it doesn't work.

So, I can share a fun story.

Go ahead. Thank you.

When we started to add more automation to the OpenStack development infrastructure, we tracked all of the weird errors that would create false negatives on the tests, because those were very costly: they would reset the whole gate queue. And there was this one error that happened basically every day. You would see jobs failing every single day. That was weird, because there weren't that many occurrences, just a few jobs a day. But at one point I decided to investigate that one. The jobs were failing because they failed to install packages, every day at 2 a.m. And the reason turned out to be a cron job on those test servers that updated the apt database every single day at 2 a.m. During that window of maybe 30 seconds, if you tried to install packages, you would hit the dpkg lock. The thing was that we ran so many tests, at every second of the day, that we could still hit that very, very specific case where the cron job was updating the apt database.

The lesson I learned was that you don't think a cron job will make tests fail every day, but it can. If you run at a large enough scale, you will encounter all those corner cases and very unlikely events. In the same way, every time there was an outage somewhere on the internet, we were affected, and we had to put up proxies to make sure we were isolated from those outages and from maintenance windows everywhere; that cron job was just one more of those things. So it's not really an OpenStack story, and not really an operational nightmare, but it's a good lesson that at scale you will see new issues you weren't paying any attention to before. Nobody thinks about it: that cron job comes by default on Ubuntu Server. Maybe they've removed it since then, but it comes by default, and yet it's not seen as an operational problem anywhere. For us, at the scale at which we were running tests, it started becoming one. And the same thing with clock adjustments: if you're not using time synchronization, you can run into very weird issues when you jump one second. That, too, is something that once you reach a certain scale, you will encounter daily; corner cases all the time. So yeah, that's all I could think of in the five minutes you gave me.
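For anyone who hits the same window: the cron job only holds the dpkg lock briefly, so a bounded retry is usually enough. A minimal sketch, assuming a Debian-family host; the exact lock message varies a little between releases.

```python
# Minimal sketch: retry package installs that collide with the daily
# apt cron job's short-lived dpkg lock. Debian/Ubuntu assumed.
import subprocess
import time

def apt_install(packages, attempts=5, delay=15):
    cmd = ["sudo", "apt-get", "install", "-y", *packages]
    for _ in range(attempts):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return
        # "Could not get lock /var/lib/dpkg/lock..." is the transient case;
        # anything else is a real failure worth surfacing immediately.
        if "Could not get lock" not in result.stderr:
            raise RuntimeError(result.stderr)
        time.sleep(delay)  # wait out the ~30 s update window and retry
    raise RuntimeError(f"dpkg lock still held after {attempts} attempts")

apt_install(["tmux"])
```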
No, for sure. I think that's an eternal thing we encounter: problems you see at scale cannot be anticipated. And, I don't care who you are, you probably can't afford a perfectly safe non-production environment at the same scale as your production environment. Someone could almost write a proof of that in economics, or even in physics. So in some sense there are always things prod can do to surprise us that we cannot make our test clusters do. One of our test clusters is an entire rack of brand-new equipment; we never manage to break RabbitMQ there, but on the 20-rack production clusters, we certainly can. This is something I don't have the answer to, because prod can always break in ways that we can never rule out beforehand. The only way we protect against it is that we run one OpenStack cluster per data center, and we roll software out to one at a time.

So, I don't see any more war stories here. I hate to be the person who just keeps blabbering on. I could share another story that most of you won't have heard, but I'd much prefer it if someone else had a story to share with us. And I think Thierry showed that the scope can be almost anything we do in this movement, considered very grandly, right? His was development infrastructure. I spent many years leading the development team for a part of the Bloomberg terminal software: C++, Windows, completely different. I'm not going to get into war stories there, but I want to encourage you to think: what have you been through that you could share? Any lesson, or really just something to entertain your fellow attendees. No one's putting anything up. Any more war stories?

I can take a short one. Eric from City Network.

Perfect. Take it away.

We had a case, I think it was earlier this year or late last year, during a maintenance window when we planned to replace the hardware of all our network nodes in one region: basically the nodes where the L3 agents and so on run their virtual routers. We went to 16.04. The full reinstallation of the new nodes did not pin down specifics like the kernel version, and we managed to land on a kernel that had a very peculiar bug: a lot of fragmented packets being sent through the namespaces on a node would cause the whole node to freeze. So we ended up in a scenario where we migrated basically all the virtual routers and load balancers from the old nodes to the new ones, and then got hit by the problem: one node at a time, they started to freeze, effectively bringing down all L3 traffic for the whole region temporarily. So that was an interesting case.

What lessons did you learn that you could share? For example, did you decide to have some distro diversity, or to not change all your nodes in one operation? Tell us how you'd do better next time, if you would.

I mean, in this particular case, it boils down to trying to simulate at the package level as well, kernels included, in a pre-prod environment, and doing the tests there. Which is always tricky once you start talking about the package level: you should have an identical environment in test or dev and in production, depending on how you build your images. You don't want to blow up your nodes, of course.

I think that's very true. The difficulty is that things can still surprise you; as Thierry points out, at scale, extremely unlikely events become almost guaranteed to happen. So I think these stories intersect very well. Sorry, I cut you off. Carry on.

No, it's fine. In this particular case we were also kind of unlucky, because the problem didn't occur until we had basically moved everything over. We concluded it was just a couple of namespaces, a couple of routers at least, that caused the whole thing. And we run all the routers in HA mode, so what happened was that one node died, the failover occurred, keepalived kicked in and moved the VIP to the other node, and traffic continued, and that effectively brought that node down as well, and so on and so forth. It was an interesting case where you had a ping going to each of the nodes, and you lost them one by one.
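Eric's lesson, that pre-prod should match prod at the package level, kernel included, is easy to state and tedious to verify by hand. Below is one possible drift check: a sketch assuming working SSH to both hosts and Debian-family nodes, with illustrative host names.

```python
# Minimal sketch: diff kernel + package versions between a pre-prod node
# and a prod node over SSH. Debian-family hosts and working SSH assumed;
# host names are illustrative.
import subprocess

def manifest(host):
    # Kernel version plus "name version" for every installed package.
    out = subprocess.run(
        ["ssh", host, "uname -r; dpkg-query -W -f='${Package} ${Version}\\n'"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.splitlines())

preprod = manifest("netnode-staging-1")
prod = manifest("netnode-prod-1")

for line in sorted(prod - preprod):
    print(f"only in prod:    {line}")
for line in sorted(preprod - prod):
    print(f"only in staging: {line}")
```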
I feel that. As I said, we had a vendor, a great vendor, I'm not going to name names: cheaper, great features, everything's great. And then we only found out that it didn't actually play nice with our production OpenStack product after we had all the VMs already loaded and the control plane already running, but struggling. And no one had ever seen that in extensive testing before it went live.

I'm just looking to see if we have another story here. There are 18 of us now. I know that some of you have been through many things that would make good stories here, and I know it's not easy to speak up if you're not used to doing this, but we don't judge here. In fact, maybe by sharing the rookie mistakes we make, we demystify what's going on. In the previous session, not this lightning talk one, I mentioned that another thing we did on our new OpenStack cluster was somehow manage to run the MySQL nodes with the out-of-the-box defaults for months, and those defaults are terrible: tiny buffer sizes, flushing everything to disk all the time. We were just thrashing the root disks to death on all of those nodes. It's fixed now, but it had all been tuned in the old version; somehow the tuning just didn't come across to the new version of the software, and it was all thrown away.
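One cheap guard against shipping stock MySQL tuning again is a deploy-time assertion on the variables that matter. A minimal sketch, assuming PyMySQL; the threshold is illustrative, not a recommendation (MySQL's default innodb_buffer_pool_size is only 128 MiB).

```python
# Minimal sketch: fail a deploy if MySQL is still running stock tuning.
# Assumes PyMySQL; thresholds and credentials are illustrative.
import pymysql

MIN_BUFFER_POOL = 8 * 1024**3  # default is 128 MiB; we expect gigabytes

conn = pymysql.connect(host="db1", user="monitor", password="...")
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")
    _, value = cur.fetchone()
    assert int(value) >= MIN_BUFFER_POOL, (
        f"buffer pool is {value} bytes -- still at out-of-the-box defaults?"
    )
    # innodb_flush_log_at_trx_commit=1 flushes to disk on every commit;
    # relaxing it is a durability trade-off, so just report it here.
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit'")
    print(cur.fetchone())
```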
So, since I don't have another war story from the floor, I'm going to inflict another one of mine on you; I'm even reusing stories from previous events. One thing I wanted to mention: I did link the war stories from the past two in-person events, so you can go and drill into those if you're interested. And if we have any more volunteers for another story... I'm told I'm not allowed to put people on the spot. I did mention Thierry because I figured he had something. Okay, I'm going to do another story.

This one, and I'll fill in the etherpad later, is about recording who owns things in your OpenStack installation. It's important for things like billing and fair allocation of resources, but as you'll see, it's also very important when something goes wrong. I'm not sure how many people here have actually decommissioned a whole cloud. We've done that, but we also had this particular case where we decommissioned a flavor. Our flavors now are all durable storage: it's all Ceph-backed, all shared storage. You can live migrate these things; we never lose anyone's files. It's good, right? But for a while we had what we called ephemeral: storage on a local stripe set. I'm sure you're all familiar with RAID; a stripe set is a high-performance merging of disks with no data protection, so if a disk fails, you're dead. It was in the documentation; it was called ephemeral right in the flavor name. But we found that a lot of users picked it just because it sounded cool, or because it was the first item in the dropdown. They're doing their "my first VM": what kind of storage? Oh, ephemeral, that sounds great, I'll just pick the default. So we ended up with a lot of users on the ephemeral flavor type, which had better performance than Ceph did at the time, but the usage was inappropriate: they actually couldn't tolerate data loss.

So my story is the day we tried to retire that flavor. First we stopped it: we prevented it from being used to create new VMs, so the problem stopped getting worse. Then we sent automated mailers to the users. And how did we find the users? Because we had recorded a mapping from the instance ID to an identifier for the team, we could look up the owners and send them email. We were actually able to get most of those instances rebuilt on durable storage; everything that could be, was. So my boss and his boss said: okay, just delete the rest. They've been warned, it's far too late, they should know better. And I was very reluctant. My first job involved loading backup tapes at the end of the day after doing data entry; I don't like losing anyone's data. So I actually disobeyed. They said: just delete it, just tell them they're too late. Instead, because we had ownership information, I tracked down a person for every single VM. I'm not talking about a huge number, something like 20, actually fewer than 20 people, because quite a few were owned by a small number of teams.

So, the very, very last ephemeral-backed VM that we got rid of. I was weeks past the deadline where I'd been ordered to delete it. I found the guy and gave him a phone call: hey, you're past the deadline, I'm under instructions to delete this VM. What is this VM doing, and can I delete it? And the person says: no, don't do that, it's the CI/CD master for... and then some name that's relevant within Bloomberg. Basically, imagine the most lucrative single application within the Bloomberg terminal. Its CI/CD master controller was a single-instance VM on the inappropriate storage, which would be dead the moment any disk failed on that particular hypervisor. So did I delete it? No. I said: please rebuild it as soon as you possibly can on any other flavor, and we'll help you with whatever you need to get off this thing.

So I avoided breaking the most lucrative app at Bloomberg; that's the headline. But the real lesson learned is that when you build any new cloud, no matter what your purpose, you need a way to go from any VM and any IP address to the person whose job it is to do something when things are, quote unquote, bad. That's my other war story. It's a crisis-averted story rather than an actual crisis, but you can use your imagination: if the cloud team had deleted the CI/CD process for the app that makes the most money, there would have been some tough conversations. Part of this was actually inspired by David Medberry, who gave a great talk about retiring the cloud at Charter, the cable company, I think it was, because they switched platforms. And again, that point about ownership: it's all very well to put OpenStack on some machines and let people make VMs, but can you find the team three years later, when maybe no one on the team even built the thing? Can you find the person whose job it is to fix it? That, I think, is where you earn the big bucks. Mythically; I'm not saying we earn big bucks, but if we do a good job, that's part of it. It's a bit like the old saying: backups are easy, it's restores where the skill is.
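The mechanics of that ownership mapping don't have to be fancy. Nova stores arbitrary key/value metadata per instance, so one approach is to stamp an owner at create time and keep a reverse lookup handy. A minimal sketch, assuming openstacksdk; the "owner" and "contact" keys are a local convention, not anything built into Nova.

```python
# Minimal sketch: record and look up instance ownership via Nova metadata.
# Assumes openstacksdk; the "owner"/"contact" keys are a local convention.
# Nova just stores arbitrary key/value pairs per server.
import openstack

conn = openstack.connect(cloud="prod")

def tag_owner(server_id, team, contact):
    server = conn.compute.get_server(server_id)
    conn.compute.set_server_metadata(server, owner=team, contact=contact)

def who_owns(ip_address):
    # Walk every instance (admin scope) and match on any attached address.
    for server in conn.compute.servers(all_projects=True):
        for addrs in (server.addresses or {}).values():
            if any(a["addr"] == ip_address for a in addrs):
                md = server.metadata or {}
                return server.id, md.get("owner"), md.get("contact")
    return None

print(who_owns("10.20.30.40"))
```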
Okay, so that's two stories from me, which is not how I wanted to run this. I'm going to make a final plea: does anyone else have something they'd like to share? We've had good stories from Roman, Thierry, and Eric. Surely someone else can share something; the scope can be extremely wide, and it doesn't even need OpenStack in the title. I guess I can use the time, then.

So I think that's it for the war stories, but I will say that the session we did earlier this morning was extremely well attended: we had 47 people. And in addition to a great discussion about blockers around upgrading, scaling problems, testing, and tools, I asked the people assembled whether the ops meetup series we used to run would be better in a different form, given that we're all at home in front of the computer: we're not traveling, we're not getting together. And the suggestion is that instead of trying to arrange large events focused on OpenStack operators, which we used to do twice a year, roughly midway between the summits, we do something quite regular but only about an hour long. I was thinking of those talk radio shows that have a regular slot, a topic, and maybe a guest. So I called it Ops Radio. You can see the etherpad: I put up the two options, traditional events or Ops Radio, and we got a very, very clear mandate from all the plus-ones to try Ops Radio.

So that's something we're going to put together. The thing I want to leave you with is this: if you're interested in ops-related events where we get together and share information, not always war stories, but more often tools, upgrades, long-term maintenance, scaling, those kinds of things from an operator's point of view, where you're actually running stuff and have the real problems of upgrading elderly clusters on old versions of OpenStack, then the long and short of it is that we are going to have a go at an Ops Radio thing, possibly on the OpenStack Zoom platform. I haven't talked to them about it yet; it could also be on the OpenInfra Jitsi instance, which is called MeetPad, I think. Anyway, the one thing I want to share is: if you have any interest, now or in the future, please follow this Twitter feed, because we get more engagement from it than from every other method put together. Individual emails, mailing list emails, blog postings, all of them added together don't get the engagement that this Twitter feed does. So when we launch this, it'll probably be announced on the Twitter feed. And we'd love for people to be able to see: oh, an hour about RabbitMQ, I'm in. And then next week it might be: oh, an hour about OVS, that's not for me, I don't use it. So we are trying to adapt our thinking to the current circumstances.

That's really all I have for this session. I see some good discussion, finally, no offense, on the etherpad. I will fill out a bit more about my story, and Roman, Thierry, and Eric, if you could also fill yours out a bit: we bring these things forward to future events, as you've seen, and it's very useful to have examples of them for the future. The other thing I want to mention is that when the pandemic eases and we can get together again, the Ops Meetups team will of course attempt to get things back to the more normal format. But I will say that the Ops Meetups team right now, not me, but the other volunteers, are dealing with personal issues at the moment, so we are a little thin on manpower. The Ops Radio thing might actually be more achievable than the two-day events, with catering and a venue and all that, which would be a bit hard to do right now even without the pandemic. So that's all I have. We have five minutes left, officially.
I want to throw it open for any comments or feedback. I'd love to hear whatever you have to say, really. Again, I emphasize that these are forum sessions. They're meant to be working sessions; you're meant to participate. So in the future, if you go to other sessions expecting a presentation, you've kind of come to the wrong room. I'm not trying to be a jerk here, but the moderator is not supposed to just speak for 45 minutes; that's not what we're doing here. I see some great detail here about exactly which cron job could, surprisingly, trip things up. Eric, I wonder if you're still here, and whether you could be persuaded to write up a little about that one, the kernel upgrade on the network nodes, I think it was, where the virtual routers eventually locked up. That would be a good one to capture.

That's all I have. Any final comments? You guys are killing me. I don't know if you've ever moderated a room where no one speaks; it makes your eyeballs swivel. Thierry, do you have any final comments? I'm not trying to call you out, but you're a leader here.

It's much more difficult to get people to participate in a virtual session where you can't wave at them or anything. So you're right to call random people out; that way they might jump in and participate.

Well, I'm done for the day, so I'm good. But for future sessions you go to, which might have a brand-new moderator: have mercy, and at least speak up and say something, right? Otherwise, sometimes I'm left wondering: wait, is this Zoom even working? Can anyone hear me? So anyway, I'll thank you all for coming. Please do provide feedback in whatever form you like. I mentioned Twitter; you don't have to use Twitter. My email is on the sheet, so you can just email me. I'm still an active participant in the ops meetups. We want to do events that serve the community, and we want to mold them to make that work. In particular, in addition to being short but frequent, we want to vary the time zones: one slot good for North America, one good for Europe, one good for APAC, so that we maximise the utility of it. I'd love to hear your feedback. And I see a private thanks in the chat; thank you, you're welcome. I think we're done. I'll close out the chat now, but reach out if you have any feedback. Thanks, all. Goodbye.

Thank you, Chris. Thanks.