Hi, folks, welcome to another stream. This is going to be an open-source maintenance stream again. As I mentioned in the previous one, given that I have been moving for the past few months and haven't really been keeping up with my GitHub notifications, there are now a lot of them that I need to get to, just because I maintain a bunch of open-source things and I can't just leave them on the ground. If you remember, last time we started with 120-ish notifications and we got it down to, I think, 91. And now, a week later, we are back up to 126 unread. So we are back up to higher than we were when we started last time. Now, to be fair, the number is actually not that bad. It's not as though things get worse every week; at least that's not usually the case. You'll see on the left here, a lot of these notifications are from rust-lang/rust, for example, and the RFCs and guidelines repos. So these are usually issues where I'm subscribed because I'm just sort of following along, and it's not really work that's required of me. Before the previous stream, I had already sort of trimmed those out so that only the ones that had work remaining were in my notification list. So realistically, this is probably more something like 100 notifications, maybe, which means that we're a little up from last week but still down overall. So we did actually make progress, I believe. So I think it's not as bad as it might seem. We're just going to dive into it. For context here, if this is the first time you're joining one of these streams, or the first time you're watching one of these recordings, this is me just doing regular open-source maintenance work. We're going to be bouncing around between a lot of different projects. Basically, I'm going to be walking my GitHub notifications list from the oldest to the newest and we'll see how far we get.
I'll try to give a little bit of an intro to what each project is as we get into the first issue, but this is not intended to be a sort of slow guide to each of them. So I'm going to be moving fairly quickly, because otherwise we're not going to make enough progress on these. If you're curious about any of these projects, then feel free to ask in chat. Some other people might have context there. Some of them already have streams elsewhere that you can look up, and otherwise just browse the repo yourself. And if you want to contribute, then feel free to contribute. It might add to my list of notifications initially, but one of the reasons why I'm doing these streams is my hope that it might let more of you get some insight into the open-source maintenance side of things and maybe be interested in that yourself. And so while initially sending contributions to these repos gives me more work, over time I'm hoping that it'll let you learn a project well enough that I can also make you a maintainer, so that the workload on me goes down. So it's sort of: I expect more work in the short term, but hopefully less work in the long term. We'll see whether that pans out. Okay, let's just go through these one at a time. And you'll see I open these in batches. I don't really know why I do that. It's just something I've gotten used to. This one is actually not one I'm gonna do myself, because someone did this after last week's stream. So we found this one during last week's stream, and I mentioned that there are a couple of changes we might actually need to make to the code to make this land, because you see a bunch of the CI tests fail. So someone submitted a PR for this that technically is now at the front of my GitHub notifications, but I figured given that it's here, let's just eliminate it. Let's see, so this is about the Arbitrary implementation for usize. Okay, so this sounds like it's actually kind of unclear what the right fix is.
So there's a breaking change in QuickCheck, which is this property-testing library that we're using for atone. I think I'm gonna leave this conversation to just sort of hang for now. So we're not gonna deal with this one at the moment, because it seems like there are some unresolved issues in QuickCheck itself. Static unique domains. Okay, so this is in haphazard. This is our implementation of hazard pointers in Rust. So we ported the hazard pointer implementation from Facebook's C++ Folly library into Rust. "Makes it impossible to assign such a domain to a static variable." Okay, so the way hazard pointers work is that you have things that can be pointed to, and then you have hazard pointers, which are pointers with sort of guards to those objects. And one of the things we wanna guard against is someone using a hazard pointer value that's allocated in this one group of hazard pointers with a protected pointer from some other instance of hazard pointers, and then trying to guard this value with this pointer. That won't work, because all of the logic only considers the hazard pointers within the same instance. And so the way that we fix this in haphazard is we have this notion of domains, where basically there's a unique type for each instance of hazard pointers, so you can't accidentally mix them up with each other. And what this person is asking for is a way to declare a static domain, so that they can have a domain of hazard pointers they can use throughout their application. But the way that we did unique domains, I believe, was by using a lifetime. Lifetimes are always unique in Rust, but that means that it can't be a static. So what they're proposing is that instead of having this unique domain function, which is going to return a value of a unique type, we can instead have a macro that you pass the name of a static, and it will construct a unique type for you.
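That proposal could be sketched roughly like this. This is a hypothetical illustration, not haphazard's actual API: the `Domain` type, the macro name, and the identifiers are all made up here; the point is only the type-level uniqueness trick.

```rust
use std::marker::PhantomData;

// Hypothetical stand-in for a domain type; the real crate's API differs.
// The family type parameter is what keeps two domains from being mixed up.
struct Domain<F> {
    _family: PhantomData<F>,
}

impl<F> Domain<F> {
    const fn new() -> Self {
        Domain { _family: PhantomData }
    }
}

// Each invocation declares a brand-new marker type, so the static's
// domain type is unique to that invocation and can never unify with
// the type of any other domain.
macro_rules! static_domain {
    ($family:ident, $name:ident) => {
        struct $family;
        static $name: Domain<$family> = Domain::new();
    };
}

static_domain!(MyFamily, MY_DOMAIN);
```

Because `MyFamily` exists only for this one invocation, anything tied to `Domain<MyFamily>` can't be used with any other domain, which is the same guarantee the lifetime-based approach gives, but in a form that can live in a static.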
And this person is basically saying, would you accept a PR that implements something like this? Great idea. I think there are some details to be worked out, but probably easiest to do so via a PR so we can talk about the actual code. I do think we will probably want to have two different macros: one for producing a value and one for producing a type. But that we can also figure out in the PR. Do you think contributing to open-source software is a good way to learn Rust? Yeah, I think so. I think the trick, as always, is to just start writing Rust code. It doesn't really matter what project you're working on. It doesn't really matter whether it's open source, but open source is most accessible to you. Just work on something you think is interesting. Great, boom. This is to my dotfiles, like my config repo. "The behavior is likely different from what's intended: it copies the entire file to the system clipboard even if a selection is made." This is a part of my Vim config where I have a keyboard shortcut for pasting whatever is in the system clipboard into Vim, and a keyboard shortcut for copying the Vim selection into the system clipboard, because that integration isn't always great by default. And someone's commenting that this is sort of wrong. Okay, so the previous one that I had will copy the entire document. If I press leader-p, so in my case space-p, in Vim, then it will take everything that's in the clipboard and sort of dump it at the current location in the file. And if I press leader-c, so space-c, it will copy the entire file into the system clipboard. And what this change does is it makes it so that the thing that gets copied is whatever has been selected visually. So in Vim you can do a visual selection using the v key, and it will only copy the lines that have been selected, rather than necessarily the entire file. I'm not opposed to that.
The only challenge here is that this relies on Vim actually working correctly with the system clipboard, which usually works through this syntax here of double-quote plus. The double quote means choose which register to do the yank into, and plus is the system clipboard register. So this will yank into the system clipboard register. I'm not opposed to this, but this one I have to feel out a little bit, so I'm not gonna deal with it now, because this affects my personal editor config and I don't wanna test that right now. Formatting to format. Oh, all right, that seems fine, if that's how it's supposed to work. This has been broken for a long time, I believe. Yeah, a bunch of people make PRs or file issues for my dotfile configs, which is very handy. Bump prettier: this is for wewerewondering. So this is for the Q&A site that I built. It lets people ask questions for streams, both before and during. And this is just dependabot wanting to update the dependency. This bothers me so much. Now, this is not a dependabot issue, but the fact that in JavaScript the default behavior is that it actually bumps the lowest version here. I want it to always pick the maximal version, not the minimal version. Like, this bump should only really need to bump the lock file, but fine, I'm okay with this. "I think you need to explicitly set async equals false in the Vim LSP settings." Why? All right, next one. This is a change to icebreaker. Oh, I wrote this ages ago. This is a little web widget you can run that lets students ask questions anonymously during lectures. So the idea is that the TAs of the class have the admin interface of icebreaker open, the students can open the student URL and submit questions, and then the TAs see them and can ask the question on behalf of the student, because a lot of students are hesitant to raise their hand in class.
They don't wanna stick out, they don't wanna seem stupid, whatever it might be. It worked really well when we used it in a couple of classes at MIT, and it's handy because you can just spin it up for a class and then spin it down again. There's no persistent state to it. It really is for just any single lecture. This seems fine, I don't mind doing this update. This isn't something that I actively maintain, but it just kind of works. There's nothing to really maintain here. "Else the formatter might run while you edit text, which can lead to problems." Well, notice that this one only does formatting when you press a keyboard shortcut. Okay, there's one more here. "Currently cbindgen seems to include dev-dependencies." This is just something that I've been watching in the background. I don't think that matters. Okay, so we got through one little page. Let's go one page back and open a bunch more. openssh. "May be important to have..." Okay, so openssh is a binding to talk to an OpenSSH client in order to talk to a server. So it doesn't implement the SSH protocol; it lets OpenSSH handle that. It knows how to talk the protocol to a local instance of OpenSSH, which then connects to the server on your behalf, through something known as the mux client or the muxing protocol. This is one where NobodyXu has been basically the primary maintainer for a while now. "It may be important to have the SSH server perform DNS resolution, as a client often does not use the same DNS server as the server. We use openssh to set up an SSH tunnel via a bastion host. Our customers often want to establish a tunnel to connect to a host whose name is not resolvable on the client side of the tunnel." All right, so this has already been approved. Let me just look over it real quick. Okay, so this is basically making it so that when we set up forwarding, we no longer require that what's given to us is a SocketAddr, which is a resolved host name.
Like, it's just an IP and port, and instead it can be a string. Like, it can actually be a host name or a DNS name that then can be passed verbatim to the server, and then the server does the resolution. But this change isn't okay, because this changes the public API of Socket. All of the internal parts here are fine, but this is a problem: this changes the public API, and so it would end up requiring a major version bump. And I don't think we have one scheduled for openssh. Yeah, major version bump. Would you mind making this a separate constructor instead? We have to be a little careful about these kinds of changes. Maybe we should integrate... there's a crate called cargo-public-api. Is that the name of it? So cargo-public-api; I thought there was another one too, because there's cargo-public-api and then there's cargo-semver-checks. They both do sort of the same thing, which is check whether a PR introduces breaking changes, essentially, like whether the public API of the crate changed. Yeah, so let's see how this compares. cargo-public-api uses rustdoc, focuses more on API diffing and not API linting. Yeah, I mean, I don't mind too much. Maybe we should add a CI step that invokes one or the other of these to catch these things automatically, at least in most cases. These tools aren't perfect; they don't catch every single one. It is raining outside. Yeah, so this is something that we really need here. And this also means we can't do a release of the openssh crate until this is fixed. So I'll do mark as unread. Left-right. Okay, so this is in a... oh wait, there was a question. As an author and a maintainer for some of these repositories, how do you go about remembering parts of the code in the PRs? Because I struggle with the two or three repositories I maintain on my company's Git. I don't know... I think the way that I remember these is that I have worked on all of them so much.
But in some cases it's also pattern matching. It's not that I remember all the code of openssh. It's more that, in this case, this changed the type signature of a function that's marked pub on a type that's marked pub, and that just is a breaking change. So I don't need to know extensively how this API is structured. I can tell that that is a breaking change because it's almost the definition of a breaking change: the type signature of a public method on a public type changed. So it's not so much that I remember the entire code base. Has there ever been a GitHub review feature that you really missed? Yeah, GitHub is really bad once you get to reviews that take multiple iterations, where some comments from a previous iteration might not be resolved but the person still pushed a change to that line. Or, if I leave a comment, the contributor makes a change and thinks they have fixed the thing I asked for, so they mark my comment as resolved, but I disagree with their fix, then I won't know, because it won't be visible to me anymore. The comment will be collapsed. So I have to remember to go through and check all the comments that have been closed, whether I agree that they should be resolved. It's not that I don't trust the contributor; it's that I, as the maintainer, want final say as to whether something has been resolved or not. Okay. So yeah, this is in left-right. This is a sort of concurrency primitive that works a little bit like a reader-writer lock, except it's heavily optimized for cases where you have a lot of readers and where you're willing to duplicate the thing that's behind the lock in order to improve performance. Recoverable errors from fallible operations. "Currently there's no channel through which errors can be recovered that doesn't require a significant workaround. Knowledge of the success or failure of an operation is key to certain systems."
"It'd be nice to have a method on the write handle that would return the result of the operation instead of an exclusive reference to self. Such a method could be called try_append. Ideally this return type would be any type implementing the Try trait, and that type would just be returned by the methods. However, that trait is still nightly-only. The try_append method on the write handle would then only be available if the operation given to it implements the trait. Moreover, a blanket implementation for Absorb can be provided." Okay, so in this library, the way that you structure writes is you don't perform writes, you queue writes. You queue them as operations to the underlying data structure. And the reason you have to do that is: imagine you have a left-right over a hash map. You actually keep two copies of the hash map, one that the writer writes to and one that the readers read from. When I do a write, I apply the write to this map, but eventually, when the two get swapped, I still need to apply that write to the other map as well. And the way that works in practice is that you enqueue an operation into a log, and that operation is going to be applied to both maps. And so, for these methods that they refer to here: there's absorb_first and there's also absorb_second. absorb_first is for the first time the operation is absorbed, into the first map, and absorb_second is for when the operation is being absorbed into the second map. And what they're proposing here is that it would be nice if there was a way for operations to fail, and for that failure to be communicated back to the writer. So currently the idea is that if you stick an operation in the log, it's just gonna be applied to both things, the left and the right, and there's an assumption that that application cannot fail. Let's see. And so they're proposing try_absorb_first and sync_with. I'm not sure I follow, though, because how would you actually return this to the writer?
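The queue-and-apply-twice mechanic just described can be shown with a toy, single-threaded sketch. To be clear, this is an illustration and not the left-right crate itself: it has none of the real crate's concurrency, and the names here (ToyLeftRight, Op, apply) are made up. In the real crate, the absorb methods play the role that apply plays here.

```rust
use std::collections::HashMap;

// Made-up operation type for the illustration.
enum Op {
    Insert(String, i32),
}

// Toy model: two copies of the map, an index saying which copy readers
// see, and a log of operations not yet applied to both copies.
struct ToyLeftRight {
    copies: [HashMap<String, i32>; 2],
    read_idx: usize,
    log: Vec<Op>,
}

impl ToyLeftRight {
    fn new() -> Self {
        ToyLeftRight {
            copies: [HashMap::new(), HashMap::new()],
            read_idx: 0,
            log: Vec::new(),
        }
    }

    // append only extends the operational log; nothing is applied here.
    fn append(&mut self, op: Op) {
        self.log.push(op);
    }

    fn apply(map: &mut HashMap<String, i32>, op: &Op) {
        match op {
            Op::Insert(k, v) => {
                map.insert(k.clone(), *v);
            }
        }
    }

    // publish applies pending ops to the write copy, swaps which copy
    // readers see, then catches the other copy up too. (The real crate
    // defers that second application; this toy collapses it for brevity.)
    fn publish(&mut self) {
        let write_idx = 1 - self.read_idx;
        for op in &self.log {
            Self::apply(&mut self.copies[write_idx], op);
        }
        self.read_idx = write_idx;
        let stale = 1 - self.read_idx;
        for op in self.log.drain(..) {
            Self::apply(&mut self.copies[stale], &op);
        }
    }

    fn read(&self) -> &HashMap<String, i32> {
        &self.copies[self.read_idx]
    }
}
```

Note that in this model an error from applying an operation could only ever surface inside publish, since append never applies anything, which is exactly the wrinkle with the proposed try_append.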
So this API is the API for implementing the data structure, but it's not what the writer sees. So if we go to left-right, then WriteHandle. You see that the operations that you have here are publish, which is basically "swap the two maps so that readers get access to the more updated version", and flush, which essentially does the same thing. And then there's append, which is "append this operation to the log". And maybe I'm misremembering, but yeah, all append does is extend the operational log; it does nothing else. It doesn't apply the operation at all. Applying the operation doesn't happen until you call publish, I believe. Yeah, so it's only down here that we actually apply the operations, which is in publish. And I believe flush just calls publish, is that right? Yeah, flush is like "only publish if there are any pending operations". So it would have to be publish that's made fallible. So try_append wouldn't really work. I also don't think we would need try_drop. So if you look at the requirements here: Absorb is like, the data structure needs to be able to absorb the operations from the operational log, and this is the trait that makes the data structure claim that it can do so. It needs to be able to absorb the operation the first time: if it is the first time the operation is applied, it needs to support it. And it needs to be able to absorb it if it's the second time the operation is applied. The only difference between these is that the first time you're just given a mutable reference to the operation, and the second time you're given the operation itself, because you don't need to keep it in the log anymore. So the split of those is an optimization. And then sync_with is an optimization for the case where you do the first ever swap, where one map, for example, is empty and the other one has been filled with a bunch of things.
Then you don't actually need to apply the operational log; you can just clone the first one instead and save yourself potentially a bunch of expensive operations. So that's what sync_with does. And then drop_first and drop_second: these just drop the data structure, and I don't think those need to be fallible. But that's why they're proposing this fallible absorb here. Can you give a motivating use case for when operations might fail? I'm not opposed to this, but I want to make sure we land this complexity only if we think the solution actually addresses a real use case. Also, I think the fallibility needs to be exposed to writers in publish, i.e., try_publish, since append only appends to the log and does not, and cannot, apply the operation at all. At that point I wonder whether the error is even still useful, because it might happen for some operation that's in the middle of the operational log, which then leaves us in a weird intermediate state. More in the weeds: I don't think we need try_drop, since drop shouldn't be fallible. Comment. Yeah, left-right is kind of similar to RCU. It's a little bit different in that it only ever keeps two copies, and it always keeps exactly two copies, but it is similar. Okay, done. Testing of client- and server-side Rust interaction. This is in fantoccini, which is a browser orchestration library similar to Selenium. It uses WebDriver to do this automation. "I have a Tauri app where the front end is written using wasm-bindgen and built with wasm-pack. This means I have Rust both on the front end and on the back end. Best performance for transmission of large arrays." Yeah, I don't think this applies. So basically they're asking whether they can use fantoccini for communication between a web server written in Rust and a web front end written in Rust that communicate over WebAssembly. That's not really what this is for, though. fantoccini is for making browsers do things. Oh, I suppose if it's a Tauri app...
So it's like a web-based app, so the browser is also under your control, and you can use fantoccini to get that built-in or embedded browser to do things. And I guess what they're looking for is whether they can make the encoding of things more efficient if you call JavaScript via fantoccini. But I don't really think so. Steve Pryde is someone who maintains basically more user-friendly bindings on top of fantoccini. So fantoccini is like bindings to the low-level WebDriver protocol, and he has a lot of experience with basically this whole space of solutions too. For Tauri, it's connecting ChromeDriver to the WebView via the Chrome debugging port and then using fantoccini to send commands to ChromeDriver. Yeah, so you can make the embedded browser do things. It doesn't support trying to interact with the Rust parts of Tauri or the front-end code itself; fantoccini doesn't really do that. Yeah, so the back end: you could have your back end use fantoccini to tell the front end to do something, but it doesn't really enable you to do communication between the front end and the back end. At least, that's not what it's intended for. You could use it that way, you could orchestrate the browser in the sense of sending it some JavaScript to run that the back end dictated, but that feels very roundabout. Yeah, I think this is the right question to be asking, and then we'll see if we hear back from them. So I'm not gonna add anything to this issue. This is where it's really nice to have other people also contributing to these. Okay, this is in hdrhistogram. So HdrHistogram is an implementation of a pretty cool way to store histograms. The idea being that you're collecting lots and lots of data and you wanna keep a history. Let's say you're recording metrics, like the latency of requests, and you actually want to sort of store the full distribution of requests.
So you don't wanna store the average request time, like the average latency for processing a request. You actually wanna store, like, if one page load took a second and one page load took a millisecond, you kinda wanna record both. So you wanna keep a sort of histogram of how long it took to process requests. The naive way is to store every value, but that ends up taking a lot of space. HdrHistogram is this really neat way to compress the representation of that histogram: you store semi-accurate data, but you preserve a lot of the shape of that histogram for querying later. So it's a really neat way to do that, and this is the implementation of that in Rust. "Error when iterating over a histogram with only zeros: the recorded iterator returns an empty iterator. Looks like the problem is with the iterator, as count returns the right number." Okay, so you create a histogram, you record a zero, and then you wanna iterate over the recorded values. Oh, that's interesting. So this happens in core/mod... oops, lib. I think it's just lib.rs. The lib file is ginormous here. Actually, no, it's gonna be in the iterator implementation. And I think we have a shared implementation of this, but let's look at recorded. Yeah, so there are two parts to iterators here. All the iterators are implemented using this PickyIterator trait, the idea being that a picky iterator is one that might only walk a subset of the values of the histogram. You have to implement this pick method to say "do I want this bin to be counted or not?". And here it looks like we select bins where the count is not equal to zero and we haven't visited that index yet. visited starts as None, okay, so we should return Some already. So this feels like it would yield bucket zero if the count is non-zero for it. So that means it's probably this PickyIterator thing that's wrong. So it starts at the beginning of iteration. Total count to index is zero, counted index is zero, current index is zero. Okay, next.
ended starts as false. self.current_index equals the histogram's distinct values. So I would assume that distinct values here should be one, although let's go back and look. So basically it tries to detect whether it's already walked all the values. Distinct values. And distinct values is self.counts.len(). And this is the thing that actually records a value. mut_at value. mut_at, index_for value. index_for value. So that just gets you the index. So something's off here, because it doesn't look like this updates counts. That updates total count. self.counts. Set counted index. Resize. And so what's the code they give? They create a new one. And I think we do call restat. Count not equal to zero. I think it's self.counts that's off somewhere, but let's keep reading and see if there's anything else. So if distinct values is zero, if that ends up being zero here, then "current index equals zero" is going to be true when you first start out, and therefore ended is going to be set to true and it's going to return None. So that might be true. "Have we already picked the index with the last non-zero count of the histogram?" If the last picked index is greater than or equal to the max value index... oh, it might actually be this one. So the last picked index is greater than or equal to the max value index. The max value index is going to be zero, because it's going to be the index of the bucket that has the zero in it. And then it's going to call self.picker.more, which in the case of recorded always returns false. So I think actually this implementation is wrong. I think this needs to compare whether self.visited is equal to that index. Yeah, I'm pretty sure that's what's going on. So it's the more implementation here that's wrong. All right, let's see here. Good catch.
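The suspected fix can be sketched in isolation. This is a simplified model with made-up names (RecordedPicker here stands in for the recorded iterator's picker; the crate's actual PickyIterator signature may differ): instead of more unconditionally returning false, it should keep iteration alive until the bucket at the given index has been yielded.

```rust
// Simplified model of the recorded-values picker; `visited` tracks the
// index most recently yielded by the iterator, if any.
struct RecordedPicker {
    visited: Option<usize>,
}

impl RecordedPicker {
    // The buggy version was equivalent to:
    //     fn more(&mut self, _: usize) -> bool { false }
    // which ends iteration before the only non-empty bucket is yielded.
    fn more(&mut self, index_to_pick: usize) -> bool {
        // Keep going at least until the bucket at `index_to_pick` has
        // been yielded once, even if it is the very first bucket.
        self.visited != Some(index_to_pick)
    }
}
```

With this version, a histogram whose only recorded value lands in bucket zero still gets that bucket yielded once before iteration ends.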
I think what's going on here is that before we get down to line 28, the recorded iterator always returns false from fn more. What it probably needs to do instead is return whether self.visited is equal to the index passed in to more, so that it will iterate over that first bin at least once. Could you test out that change and report back? If that doesn't work, then could you instrument PickyIterator::next, I'll link to that one, with some debug output to see where it exits with None and what values cause the conditions that lead to that outcome. Let's see, there's a comment mentioning zeros and some original implementation in one of the methods. "Most of those values, especially towards the end, will be zeros, which the original HdrHistogram doesn't yield." I don't think this is relevant. It is true that our implementation diverges, but I think our approach is still correct, which is: iterate until we reach the total count, and then iterate only until more returns false. So I think it's just the implementation of more here that's wrong. How come you don't just test that fix yourself? So the reason I don't test this fix myself, and I talked about this in the previous stream too, is for two reasons. One of them: as you saw, we have a lot of notifications to get through, and this is something that this person can relatively easily do themselves, and so I would rather they do that in parallel with me looking at the various other notifications I have that are blocking on me specifically. Like, this issue someone else can make progress on; the other issues probably require my input for someone to make progress on them. So the idea here being I want to spend my time unblocking other people.
The second reason is, if I can get this person to experiment with a fix, and potentially provide a PR, that's a pathway to them becoming a contributor and maybe even a maintainer down the line. If I just push the fix for them, they're not going to learn anything, and so this is a way to enable them to learn and potentially contribute and potentially aid in the maintenance of this library going forward. It's also true that, as someone mentioned in chat, this makes me think that they have use cases, like they have something bigger than this test case they give me where this fails, and so hopefully they can test their fix on the real data set they have as well, and not just in this limited setting. Great. Done. This is in the implementation of the IMAP protocol. Refactor tag assertions to fatal error. This is a long one. Is there anything I can do to help this PR be merged? Let's see. So who's the original author? I think my comments above still stand. So it's really just a, oh, I can't type, just a matter of making those changes, if they're still interested in picking that up. And that's probably easiest, but if they're otherwise preoccupied or no longer need this, then feel free to open a new PR that builds on this one so we can land it. This is also fairly common, actually, in the open-source world: someone makes a contribution, we iterate a couple of times, where I leave some comments and they make some modifications, and then eventually they switch jobs, or they finish studying, or they go on vacation, or they start a new job, or they just no longer use this crate, whatever it might be. And so the PR is left in this half state where it's mostly done, but there are a couple of outstanding issues. And then, this was August of last year, and a year later someone goes: what happened to this? I need this feature, or I need this bug fixed. Where do we go from here? And the options are: either the original author continues to make progress.
Maybe they just forgot about it, which happens; or this new person should just pick up the PR from where it was and make progress from there. And so that's what I'm trying to spell out here: that I don't have a preference for who does this. If the original person is still interested, they're usually the best person to drive it to completion, although a year later they might have lost all that context. But I want to encourage anyone to make progress so that we don't end up stuck in this sort of back-and-forth dance. All right, done. But this is another example of something where it kind of requires me to come in as the maintainer and make a statement, as opposed to spending time trying to fix that bug in HdrHistogram, where I think the person who reported it can make progress given the comment I left. So it's more about unblocking other people. Another IMAP issue: idle concurrency. "I'm in a situation where a client..." So the idle protocol in IMAP is: you connect to the server and you send this command called IDLE, and what that does is you're no longer able to send any more commands, and instead you tell the server, write to me, like, write a message in my direction, when something changes in the mailbox. Like if a new email has been added, or something's been marked as deleted, or whatever. It's basically a way for you to do push instead of pull. The other way to do this is you send a message to check the inbox every, you know, whatever, 30 seconds. If you send IDLE, the server instead is going to actively push a message to you when something has changed. So it's a better use of the medium, in a sense. "I'm in a situation where a client is sending emails continuously by SMTP. My other IMAP client connects, fetches all messages from the inbox and starts idling for more messages. Nevertheless, if an email is received between when my program does fetch and idle, the notification of more messages is lost forever." Yeah, this is a classic race condition, right?
You do a fetch and then you do an idle, but someone sends something right in between. "I don't know if there's a workaround for this issue. The only idea that comes to mind is to open two IMAP connections, one for idling and another for fetching. But is the protocol intended to work in all scenarios using one connection?" That's interesting. I wonder what mail clients normally do about this. This feels like something Stack Overflow might have an answer to. "IMAP IDLE race condition fetch." This is another thing you get really good at: the right way to Google for things. Sorry for the bright screen. A bunch of code that does SEARCH, IDLE, DONE, SEARCH, IDLE, DONE. "Is it possible that some messages arrive between the search and the idle, and will those only be received...?" "A properly implemented server will notify you of the new messages as soon as you start IDLE, if it hasn't already notified you about them in response to some other command." Interesting. That doesn't seem relevant. Doesn't seem relevant. That doesn't seem relevant. It really is this one. "I'm surprised that there's no racing system while starting up the idle." This is interesting. This is in K-9 Mail. K-9 Mail, this revision; let me see if we can dig this up. I wonder how we would even find this. This feels like a revision number from SVN. When is this from, 2009? Finding this might actually be frustrating. I don't know if the Git history for K-9 Mail, which is a mail client for Android, goes that far back. 0.x. Where was this email from? Did they say trunk? 2.0. Let's find the tag for 2.0, and then look at the history of 2.0. How on earth are we even going to find revision 1012? "Eliminated race condition." That was entirely by luck. "Caused multiple connections to IDLE on the same folder simultaneously." That seems different. Let's just search the commits for "race condition idle". It's definitely that commit, but this feels probably unrelated. That's too bad.
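For what it's worth, the race being described can be modeled in a few lines of plain Rust. This is a toy model, not the imap crate's API; the `Server` type and its methods are made up for illustration:

```rust
use std::collections::VecDeque;

// Toy model of the FETCH-then-IDLE race. `Server` is entirely made up;
// `pending` stands in for the unsolicited notifications a real IMAP
// server would have queued up for a client.
struct Server {
    inbox: VecDeque<&'static str>,
    pending: bool, // mail arrived since the last fetch
}

impl Server {
    fn deliver(&mut self, m: &'static str) {
        self.inbox.push_back(m);
        self.pending = true;
    }
    fn fetch(&mut self) -> Vec<&'static str> {
        self.pending = false;
        self.inbox.drain(..).collect()
    }
}

// Returns (messages seen by the initial fetch, messages rescued by
// re-checking immediately after "entering idle").
fn demo() -> (Vec<&'static str>, Vec<&'static str>) {
    let mut s = Server { inbox: VecDeque::new(), pending: false };
    s.deliver("msg1");
    let fetched = s.fetch(); // client fetches; inbox is now empty
    s.deliver("msg2"); // arrives between FETCH and IDLE
    // A naive IDLE only wakes for deliveries that happen after this
    // point, so "msg2" would sit unseen forever. The mitigation is to
    // check for anything pending right after entering IDLE.
    let rescued = if s.pending { s.fetch() } else { Vec::new() };
    (fetched, rescued)
}

fn main() {
    let (fetched, rescued) = demo();
    assert_eq!(fetched, ["msg1"]);
    assert_eq!(rescued, ["msg2"]);
}
```

The "re-check after subscribing" move at the end is the classic lost-wakeup fix; the two-connection workaround from the issue is the other way around the same problem.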
This might just be a fundamental race condition in IMAP. I'm curious whether there are any... even the spec doesn't really talk about the race conditions here. Yeah, it's interesting. I think what I'll do here is: "That's a great question, and unfortunately not one I think I have a great answer to. Neither does the internet, it seems. I think this may just be a fundamental limitation of IDLE. The Stack Overflow answer linked suggests that servers will eliminate this race condition on your behalf, but I don't know how true that is in practice; at least I couldn't find any references to it. The SO question also suggested maybe using two connections, so you may have to do something like that, unfortunately. If you do come up with a good solution, please do post it back here, especially if there's something we can do in the imap crate to make it easier to work with." Comment. I've read the IMAP spec a bunch of times and I'm pretty sure it doesn't talk about this. We can look at the IMAP4rev2 spec, which might talk about this. The IDLE command. Untagged. Yeah, there's no discussion of this in the spec, unfortunately. That's unfortunate. "IMAP isn't exactly known for being well thought out." You're not entirely wrong. tracing-timing. This is a subscriber I wrote for the tracing ecosystem. In tracing you can emit spans, which are basically a way to group events, or to relate events: you can start a span, and then the events that you emit within that span know that they belong to the same span. In the subscriber, it's possible to essentially query information about the span from a given event within that span. You can imagine a span being something like a request; that way, any event that you log in the span of that request is associated with that span, and therefore with that request, and you can do things like look up fields of that span, of that request, when you're printing an event within that request.
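Stripped of all the real tracing machinery, the span-plus-events idea described above is roughly this. It's a stdlib-only sketch; `Span` and `event` here are invented names for illustration, not the tracing crate's API:

```rust
use std::collections::HashMap;

// Invented stand-in for a tracing span: a name plus some fields.
struct Span {
    name: &'static str,
    fields: HashMap<&'static str, String>,
}

impl Span {
    // An event emitted "within" the span can be rendered with the
    // span's fields attached, which is what lets a formatter print,
    // say, the request id next to every event of that request.
    fn event(&self, msg: &str) -> String {
        let mut kv: Vec<String> = self
            .fields
            .iter()
            .map(|(k, v)| format!("{k}={v}"))
            .collect();
        kv.sort();
        format!("{}{{{}}}: {}", self.name, kv.join(" "), msg)
    }
}

fn main() {
    let mut fields = HashMap::new();
    fields.insert("request_id", "42".to_string());
    let request = Span { name: "request", fields };
    // any event logged within the span can look up the span's fields
    assert_eq!(
        request.event("parsed headers"),
        "request{request_id=42}: parsed headers"
    );
}
```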
tracing-timing utilizes the same structure and computes a histogram of inter-event timings. Imagine that handling any given request does, I don't know, receive, parse, generate response, send. Those are the four events that we log within a span that represents a request. What tracing-timing does is record the time between receive and parse, between parse and generate-response, and between generate-response and send within each span, and then give you the histogram of those inter-event timings across all requests, across all spans. It's a pretty handy way to figure out which parts of your event processing may be slow. Here, for example, we have a request span, and inside of that span we do "fast" and we do "slow". We do a little bit of work before "fast"; then, if you run this with tracing-timing, it will give you these histograms that show that the fast event usually has a latency of, let's say, about 175 microseconds with this kind of distribution, and the slow work has an average, or median, of like 600 microseconds with this kind of distribution. It's a very handy way to get a quick overview of where your request processing, or just your processing in general, is spending its time, without having to put in extra bookkeeping for that information; it can just reuse your existing tracing annotations. And this issue is basically someone observing that it's a little bit awkward to use, which is true: it relies on multi-threaded recording to HdrHistograms, and multi-threaded recording is just complicated, and the multi-threaded recording API for hdrhistogram is also a little bit painful, which is my fault, but that's because it makes it really performant. It does make the API awkward, though, and the argument here is that it's just easy to get this wrong, which is totally true. Yeah, it's easy to accidentally deadlock, for example. Yeah, this is just help for this person. Yeah, it is a very low level API.
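The core computation described above is simple enough to sketch with the standard library alone. Timestamps are fake microsecond counters here, and the grouping-by-event-name scheme is a simplification of what tracing-timing does, not its actual API:

```rust
use std::collections::HashMap;

// What tracing-timing computes, in miniature: for each span, the time
// between consecutive events, grouped by the event that ends each
// interval. A real implementation would feed these into HdrHistograms.
fn inter_event_timings(
    spans: &[Vec<(&'static str, u64)>],
) -> HashMap<&'static str, Vec<u64>> {
    let mut timings: HashMap<&'static str, Vec<u64>> = HashMap::new();
    for span in spans {
        for pair in span.windows(2) {
            let (_, t0) = pair[0];
            let (name, t1) = pair[1];
            timings.entry(name).or_default().push(t1 - t0);
        }
    }
    timings
}

fn main() {
    // two "request" spans, each logging receive -> parse -> send
    let spans = vec![
        vec![("receive", 0), ("parse", 150), ("send", 700)],
        vec![("receive", 0), ("parse", 200), ("send", 800)],
    ];
    let t = inter_event_timings(&spans);
    assert_eq!(t["parse"], [150, 200]); // receive->parse across spans
    assert_eq!(t["send"], [550, 600]); // parse->send across spans
}
```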
Like, the things you have to do: you have to create this builder of histograms, then you have to create a dispatcher for it, and then, after all the operations have happened, you need to call this with_histograms thing, which extracts the histograms from the tracing state, and then you need to actually print the histogram, which doesn't happen by default. And what they're proposing here is, for example, that it would be nice if this type just implemented Debug for you and would do something reasonable. "Sorry for the silence here. tracing-timing is a bit on the back burner at the moment. I would love to improve the starting experience, or rather, I would love for the starting experience to be better, though I have limited time at the moment to make that happen myself, sadly. If any of you want to take a stab at adding an impl Debug, extending the examples, and maybe even making the interface harder to get wrong, especially with regards to deadlocks, which are a pain, I would be happy to take a look at some PRs." I wrote a comment; that's what I wrote here. And this is sadly very common: I maintain a lot of libraries at this point, and not all of them am I actively using myself, and I unfortunately don't have the time to actively be doing feature development for all of them. I wish I did. So what I wrote here is a fairly common sentiment: sorry for the silence, it's on the back burner, I would love for the starting experience to be better though I have limited time at the moment to make that happen myself, but if anyone wants to take a stab at adding an impl Debug, extended examples, or maybe making the interface harder to get wrong, especially with regards to deadlocks, then I would be happy to look at some PRs. Because in general, for these libraries, it's probably untenable for someone to just take one over; I would want them to submit some PRs, I would want them to dig into how to make it better, before I sort of give them the keys to the kingdom. So I do want
to spend time on reviewing PRs, but actually implementing new features here is not something I end up with enough time for. Maybe one day, if I end up doing this full time. But even so, it's a lot better if the people who are actively using a library themselves are its maintainers. In this case, I'm not really using tracing-timing myself very much anymore, not because I don't like it, but because I was primarily using it for Noria, and I'm not really building a system like Noria at the moment, so I just don't really need it. That's why I would love for people who are actively using the project to contribute. tracing-timing is also really cool; it's a cool tool, but the implementation is also interesting. If you're looking for something to dig into that has some concurrency and stuff, then please have a look. Let's see. Oh right, this is something that came up in the previous stream, where someone filed an issue around differential flame graphs, the ones that show how much slower or faster different stack frames got in a flame graph; we commented on this in the previous stream. Oh man, they have COVID, that sucks. Let's see. Okay, so this one is currently a draft, and I wrote in there... In the previous issue they basically outlined an explanation of how they think we can make these differential flame graphs better, and we read through it last time: basically, this seems like a good idea, the current thing is confusing, I'm absolutely fine with changing the way that this information is output. So I guess they're gonna... oh, this is the one. And so I basically told them, go ahead, make this change, and I guess they already have a draft PR up, so we can take a brief look at it, but it sounds like they're gonna sort of pick that PR back up once they get better. Let's see. Old and new. I wonder if these labels still make sense if negate-differentials is on. So negate-differentials is a way to treat old as new and new as old, kind of, and
so they're implementing that functionality here, but I don't know if these labels are going to make sense if negate-differentials is on. If x is equal to y, then don't output anything, because it didn't change. So, if not negate-differentials, then it's new divided by old, so it's a multiplier. Right: if it now spends twice as many samples in a given stack frame as it used to, the ratio is going to be 2, so it's going to say "2x"; compared to this, it's going to say "= 2x" and then the old value, followed by the word "old". "I'm not sure I follow the phrasing here. This will look like..." If, for example, we spend twice as many samples in some frame, this will look like "= 2x", I don't know, "123" or "256 old", which I think is maybe overly concise. How about... so I think the idea here... the Unicode ×. Oh, this is the compose key on Linux, which is fantastic; it lets you create all sorts of characters, so I can type a Norwegian ø by writing o and then a slash, and the Unicode × is compose, x, x. No, the old... oh, you're right, it's going to be "old 256"... wait, actually no, it's not going to be that either, it's going to be the count name. So it's going to be this, "which is maybe overly concise. This will look like..." I think, yeah, you're right. "Compared to as old here. Overly concise." Well, the question is also what it prints. The other thing that bothers me: it doesn't include the function name. It doesn't include the function name; is that printed elsewhere? Nor the old or new sample counts. "I think we'll want to include at least the function name and one of the two absolute numbers. I'm also not sure if old and new are understandable labels in this instance, especially for negate-differentials." Maybe, 'cause like "2x old cycles" is very dense, like "twice as many cycles as were there previously". I guess "old cycles" does kind of get at that, but how about... "They may even be unnecessary." I think, right, like I don't think the word "old" helps here.
I think if you just say it's twice the number of cycles, it's sort of implicit that that is compared to the old. So I'll just leave that as comments. Great. hasmail. This project I've basically abandoned, because I rewrote it in Rust. hasmail is a little tray icon application that checks whether you have new email, and if you do, it changes the tray icon and triggers a notification. It's really handy if you don't want to run a full mail client (you're using mutt or something) but you do want email notifications; that's what hasmail gives you. But I've rewritten this in Rust, and it's called buzz, so I would just recommend people use buzz instead. I basically don't maintain the Go implementation of this anymore. "The new implementation in Rust lives in ... which should hopefully be easier to customize and understand." Close. I think I already have it in the readme, actually, that this is now just buzz. Apparently I do not; let's do that right now. "This project has been rewritten in Rust and is now maintained under..." Boom. I should arguably archive the repo, too. Okay. fantoccini: "Can you automate the geckodriver session initialization?" "I was working on a personal project which requires web scraping, so I looked at this library. A problem that came to mind is that you need to have geckodriver already running, which I don't really know how to start programmatically. Can someone explain it to me?" This is like: in order to orchestrate or manage a browser, you need to run this little daemon that implements the WebDriver protocol on one side and knows how to talk to the browser on the other. In the Chrome world this is a tool called chromedriver; for Firefox it's called geckodriver. And this is basically someone asking, how do I run that tool, which is not really something that fantoccini deals with, as it should just be a matter of running the geckodriver command. Beyond that, running the WebDriver host is sort of outside the scope of this project. Close as "not planned". Okay, let's see where we're at. More things. This one we can close. All right, let's keep going. "Bump ouroboros from 0.15.5 to 0.16." Ouroboros is unsound? Interesting. Ouroboros is a library for implementing self-referential data structures in Rust. Specifically, imagine that you have a buffer, like a Vec<u8>, that holds data you read from the server, and then, in the same object that holds that Vec, you also want to hold the parsed representation of that Vec<u8>. We do that a decent amount in the imap library: you can imagine that if you fetch an email, then the Vec<u8> is mostly going to be the message of the email, and in the parsed representation we just want to keep a string reference that is really just pointing at a subset of the Vec<u8>, rather than having to replicate that entire string and allocate it twice. We use Ouroboros for that. But apparently it's unsound, so there's some fix in Ouroboros itself, and it passes all the tests; it's just the rolling Ubuntu job. The next question now is: does it matter whether this is a backwards compatible change? Because imap is about to get a breaking change, a new major version, anyway. This rolling release failure is something that's fine; that's been fixed elsewhere. Great, so this seems reasonable. It does really mean that I need to decide when we're going to cut the new breaking release of imap. Currently it's released as alphas, and the reason is that we still have a couple of outstanding changes that I know will be breaking, and I don't want to release imap 3.0 until we've done all of those, because otherwise I'm going to have to release an imap 4.0 not too long after. The problem is, all of these are very volunteer based, so who knows when they'll land. It's a tricky balance. In 0.15... the fix is implemented in 0.16. #265 is now merged, and
according to the advisory, 0.16 does not have this issue anymore, so I think we're good. We will only be good for imap 3.0 when that eventually lands, though, which is awkward. All right, let me see if I can backport it. I don't even remember whether Ouroboros is used in 2.x, because that's fairly old by now. But if we look at something like Fetch... yeah, I'm pretty sure we don't use Ouroboros in 2.x. So if I go to the Cargo.toml and dig up the version, not from main but from 2.4.1... yeah, Ouroboros isn't in there. Great. "I'll cut a new alpha soonish; if I forget, please ping me." Great, close with comment. Unsoundness here means that there's undefined behavior in the library, and I think that exists in all versions of Ouroboros up through 0.15. I think the reason they didn't yank it is that, unlike with a normal security issue, you have to be pretty intentional about triggering it; it's not as though anyone who uses Ouroboros is now subject to huge problems. It's more like potential unsoundness, from my skim of that advisory. Great. So I'll have to cut an imap alpha; that's fine. This is now done. That's now done. Beautiful. "Bump openssl from 0.10.48 to 0.10.55." Did not like that. This is in fantoccini; we got this in multiple things. I guess this is because of some kind of security advisory, probably. I think the only reason it's there... yeah, it's because of a security issue in OpenSSL; that's why dependabot is specifically cutting this even though it's a minor release. This seems fine. It's not really a problem, because fantoccini is a library, so this is setting the minimum bound, and the lock file doesn't apply to our consumers. This bump isn't actually something that we need to absorb; it's not important. I don't need to do a new release, because our consumers have their own lock files anyway. But I think the reason it triggers for us is that I check in the lock file even though it's a library. This is fine to merge, but it's not important. Arguably we should bump the lock file but
not the Cargo.toml, in this case. But it's fine; it doesn't bump an MSRV or anything, so I'm okay with it. Here it breaks a bunch of things. Why? Let's see if dependabot can come up with a better change: "@dependabot rebase". Plus one from dependabot. Let's go to the next one in the meantime. "Automatically cleaning up .ssh-connection on connection failure." Okay, so this is in openssh. The way that the openssh crate works is that it spins up an OpenSSH multiplexing client that does the connection negotiation, so that we don't have to implement the SSH protocol ourselves. And the way that we talk to that local OpenSSH multiplexing client is through a Unix domain socket; that Unix domain socket is a file that's created in a temporary directory that we construct, called .ssh-connection with a bunch of random letters at the end to make it unique. And currently, I guess, the observation is that if connecting fails, then we don't delete this directory afterwards, so the user ends up with a bunch of crufty directories for the failures. Oh, and they have panic = "abort" set, so we don't get to run destructors. Yeah, exactly. Yep. Great, thank you; someone else already responded and closed the ticket. This is why it's great to have other maintainers on projects. Is this rebased now? Now it's running all the tests. Why does just checking on nightly not work? "unknown feature `proc_macro_span_shrink`". That's disturbing. That's very disturbing. I wonder why... it's just an openssl bump, why is this causing this to trip? It's also only there for minimal versions; this doesn't change anything. I guess we'll see. Okay, rust-ci-conf. So this is a library that I built... it's not even a library, it is a collection of GitHub Actions CI configuration files that I use to configure the CI on all the Rust projects that I own. If we go look at the repo (I did a stream on this one, actually), it's just a .github directory; it has some configuration for Codecov and for dependabot, and then it has these
different GitHub Actions workflows that will run your test suite with all the different features enabled, check that your minimum supported Rust version is correct, run Miri, run Loom, whatever. I just use this as the default. And there's a lot of technical detail that went into creating the CI configuration. "You noted that it wasn't documented particularly well in the YouTube video covering it. I've added my understanding of what you wrote, as well as some basic instructions for how to incorporate this into an existing repo. I took the approach..." Oh, so this is basically adding documentation to this repo so that people understand what they're merging in. Yeah, that's nice. And then someone left a review; that's fine. Great, a lot of other people cooperating. All right, let's look at this one: .github/docs.md. "In this folder there's a configuration for code coverage, dependabot, and CI workflows. It checks the library more deeply than the default configurations... It can be merged using an --allow-unrelated-histories merge strategy... which provides a reasonably sensible base for writing your own CI on. By using this strategy, the history of the CI repo is included in your repo, and future updates to the CI can be merged later. To perform this merge, run git fetch ci followed by git merge --allow-unrelated-histories..." Yep, that's fine. And this will also work even if you do another merge later; then that flag is just unnecessary. That's fine. "As consumers of this library would build with their own lock file rather than the version specified in this library's lock file." Yep, that's why we don't do patch updates, even though dependabot still triggers patch updates for dependencies, like we saw with openssl; I think it only does that in the case of security advisories. Okay, so we looked at this file, we looked at that file, we looked at that file. "This configuration allows maintainers of this repo to create a branch and pull request based on the new branch. Restricting the push
trigger to the main branch ensures the PR only gets built once." Yep. So, this restriction in the CI file: if I didn't have this, then when I create a branch on my own project and push to that branch in order to open a PR, all of CI runs twice, once because there's a branch of the repo and once because there's a PR. This ensures that only the main branch, only the PR, gets a CI run. "If you push to a PR branch, then cancelling in-progress workflows for that PR ensures that we don't waste CI time and return results quicker." So this is: if I post a PR, and then while some CI jobs are still running I push a new commit to that PR, the existing PR CI run will be canceled (any in-progress tasks will be canceled, no new ones will be started), and CI will only be run for the new commit instead. "Get early warning of new lints, which are regularly introduced in the beta channel." Yep. "Doc generation on nightly to get doc_cfg... Feature powerset runs for every combination of features." That's right. "Determine the minimal Rust version supported by this crate." This is not quite right. This isn't quite right: this doesn't determine the MSRV, it checks whether the configured MSRV can indeed still build the current crate. So it will warn you if you accidentally bump the MSRV, but it will not find the MSRV in such a case. Great. "This workflow is nostd; it checks whether the library is able to run without the standard library. This entire file should be removed if the crate does not support no_std." Maybe also add a line here saying that some of this repetition is unfortunate... maybe also add a line here, and in the other non-check files, saying that all the shared configuration lines, like "on: push", are documented in check.yml, and then remove the repeated comments (like "if new code is pushed") from all the other files. Unbalanced brackets in the review? Did I...? Where? I don't see unbalanced brackets. Great. "This workflow checks for unsafe
code..." In crates that don't have any unsafe code, this can be removed. "Runs Miri, address sanitizer, leak sanitizer, and Loom." Yep. "The scheduled jobs run on a rolling, nightly basis; your crate may break independently of any given PR, via updates to Rust nightly and updates to the crate's dependencies." Fine. "The 'updated' section checks that updating the dependencies of this crate to the latest versions that satisfy Cargo.toml does not break this crate. This is important, as consumers of this crate will generally use the latest available crates, subject to the constraints in our Cargo.toml; they will not update to 2.0 if we specify 1.3." 1.3? "In the other non-check files"? Really? Oh yeah, you're right, I completely read past that. Good catch. Okay, this file is missing top-level documentation, and so is check, actually. Missing top-level documentation. Chat is helping; it's true. "Enable the CI template to run regardless of whether the lock file is checked in or not." Yep. "This action chooses the oldest versions of the dependencies, to ensure that this crate is compatible with the minimal versions that this crate and its dependencies require. This will pick up issues where this crate relies on functionality that was introduced later than the actual version specified, for example when we specify just a major version but a method we use was added after that version." "e.g." and "i.e." should always be followed by a comma. Well, should basically always be followed by a comma, according to most style guides, I think. Here I want "oldest version of dependencies permitted by Cargo.toml", because it's not the oldest version outright, like 0.0.1; it's the oldest version that's permitted by the semantic versioning requirements we have in Cargo.toml. Yeah, so for "e.g." and "i.e." there are three ways: there's "e.g." not followed by a comma, or optionally followed by a comma, so there's this one, by regex, and this is only recommended by The Economist style guide and no others. Then there's "e.g." with a comma, which is... I think this is
most British style guides; and without a space, which is most American style guides. I may have gotten American and British mixed up here, but with a comma is common in both, and without is only common in one, so the general consensus is that there should be a comma. There's a really interesting discussion on the English Stack Exchange about the comma after "i.e." It's this one, I think, and there's a long discussion in... no, it's not that one. Sometimes DuckDuckGo is not very good at finding things, but maybe it is that one. It's "In American English, comma after i.e.; not in British." It's this one; that's the one I was after. And then this article, which talks at length about this, and about whether there should be dots in between the letters, and whether there should be commas after. The general recommendation is that you should basically always use a comma after. Anyway, this is neither here nor there. "This particular check can be difficult to get to succeed, as often a transitive dependency needs to be directly specified. For example, a dependency specifies 1.0 but really requires 1.15. There's an alternative flag available that uses minimal versions only for direct dependencies while selecting the maximal versions for transitive dependencies. Alternatively, you can add a line in your..." Forget it, I already did this. I did this in, might have been imap... we already did this... no... oh, maybe it's actually in this one. Yeah, so this is the other way to make minimal versions happy: you specify an optional dependency where you give the minimal version, and then you place it behind target.'cfg(any())', which is never true. That way it affects the generation of your lock file, but you never actually take the dependency. We'll add that in there: "in your Cargo.toml, to artificially increase the minimal dependency version, which you do with TOML like so. See also..." And I think I want to recommend that people do this: "Needed to allow foo to build with minimal-versions." The optional = true is necessary in case that dependency isn't otherwise required by your library
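As a concrete sketch of the trick being described (the crate names foo and bar, and the version number, are placeholders):

```toml
# In your Cargo.toml. `cfg(any())` is never true, so this dependency is
# never actually built and never affects your consumers; it only
# constrains version resolution, lifting the floor for minimal-versions
# checks.
[target.'cfg(any())'.dependencies]
# Needed to allow foo to build with minimal-versions: some transitive
# dependency under-specifies its real requirement on bar.
bar = { version = "1.0.3", optional = true }
```

The `optional = true` part matters when the crate isn't otherwise pulled in at all, so the phantom edge doesn't add it to your feature set.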
that dependency isn't otherwise transitively required by your library. And the target bit... let me see if I can actually dig up the history for that line; someone added that fairly recently, I thought. I don't remember why I have cfg(any()) there. The target bit is so that this dependency edge never actually affects cargo's build ordering, I think. So normally, if there's a direct edge between two crates, cargo is always going to make sure it builds one before the other. But in this case... it would think there's a direct dependency between this crate and that crate, when in reality that might not be true: for minimal versions, we might specify this just because some transitive dependency needed its floor lifted, and we don't want cargo to think that there's a direct build edge, because then it might not do build-order optimizations it might otherwise do. We'll add that in there. "Run cargo test on macOS and Windows; use llvm-cov to build and collect coverage." That's fine. Okay, viewed, and viewed. "This is great, thanks for taking the time. Left a few notes inline. Of course, changes..." Submit review. Done. What does the -Z prefix on a flag mean? It's the way that cargo expresses flags that are nightly-only, experimental, unstable flags. All right, how did this openssl PR go? Does it still fail? It still fails. So many things. I don't quite understand why... oh, this is because Wikipedia has changed. Very annoying, actually: there's a bunch of tests in fantoccini that basically query Wikipedia to check that we can actually interface with a real website, but when Wikipedia changes, tests like "finds all inner" start failing, which is really stupid; I should make this not be as dumb. So on line 170, so that's here; the thing on the left, that's fine. Minor. git clone fantoccini... fantoccini... yeah, I know, it's a terror. So, we've started getting better at this: if you see here, there's a
local.rs test suite for fantoccini, which is all based on local files, and then there's remote, which is all based on Wikipedia. It used to be that all the tests were based on Wikipedia, and it was a huge pain; now most of them are local. If you want to do something that's fairly simple work, then please, please, please take the remote tests from fantoccini and port them to be local tests instead, so that this problem goes away. It should not be very hard, and it's a great way to make a PR to fantoccini that matters. But for now, I'm going to fix this by replacing these with that. And then I'm guessing this other one also fails, sub-element, and that's going to be the same thing. Whoa, why is that there? That's a weird annotation. This too. Oops, that's not what I wanted. Those are the only two that failed, right? "it finds sub", "it finds all", for Firefox and for Chrome. Great. Changed, and changed. Someone please turn these tests local! Okay, this is one of those where I might as well do the fix myself, because this particular fix... I just know how to do it. So I'll do this as a PR, just to see that CI actually passes; then I can land this, then I can rebase the dependabot PR, and then everything should be good. Okay. This is for the errata for Rust for Rustaceans: in chapter 8, listing 8-14, Some to Ok. "Return value is Result; I think we need to use Ok, because I use Some(T) and the ? operator... Destructuring assignment was changed." Let's see. Listing 8-14 says this... the server... this thing. So the change is from Some here to Ok when you call socket.accept(). Man, it depends on what library you're using for async, but TcpListener's accept... oh wait. Yeah, I mean, this was tricky, because the idea here was... the reason I wrote it this way is because, in some ideal world, when you call accept you get back a Result<Option<...>>, where the Err means something went wrong and the None means there are no more streams, like the
socket is closed, which isn't really an error in the same way. So this should only be while let Ok(...) if you want to ignore the errors; and if you don't want to ignore the errors, then this can just be a while let stream = ...; there's no need for the Ok there, because the ? operator already unwraps. Where do I go here? So: "This kind of depends on which async library you're using for TcpListener. In general, accept will return just a Result<TcpStream>, not Result<Option<TcpStream>>, even though I think the latter makes more sense. When that is the case, you would either need to write while let Ok(stream) = socket.accept().await, or while let stream = socket.accept().await?. I think the latter is actually better, because it won't silently drop errors on the ground when exiting, but it's also a little bit of a weird pattern; it's an infallible match." I don't really know what the book should recommend here, because it's not talking about any specific implementation of a library, so it could easily just be talking, sort of broadly, about an accept that returns a Result<Option<...>>. I wonder what accept in the standard library does. TcpListener::accept... it also returns a Result of TcpStream. Yeah, which makes me a little sad. For many parts of the book, one thing I worry about is giving code and then people getting tripped up by something that isn't relevant to the thing that I want to talk about. Yeah, you could also have a loop that just does this inside. So we could do... at that point it could even just be loop { let stream = socket.accept().await... }. Like, one thing I worry about with the book, and when writing in general, is people getting tripped up by some pattern that looks weird but isn't really related to the thing that I'm trying to explain. Like, in this case, I think what
I'm trying to talk about is the sort of structure of how to handle multiple clients concurrently so I start with this as the example then I expand it to be like what if you now join on the clients what if you spawn the clients to talk about things that happen concurrently or things that happen in parallel and I worry that that gets lost a little bit once this becomes a strange looking pattern like this one or where the structure sort of changes like this one and I like the advantage of the way that this was written is that people don't really question it like it's true that it doesn't work if you're using Tokyo or even indeed if you you pretended that the standard library was async and you just use the standard library because they don't differentiate between getting an error from accept and getting having accept telling you there are no more connections coming which to me are different conditions um so this is a sort of tradeoff between technical accuracy and and what is more helpful um you know there are a bunch of things that are technically incorrect about this right like for example what you actually get back from an accept is you get a tuple of TCP stream and socket address but that's not something that I handle here um technically also you would need to put an empty okay line here um because otherwise this wouldn't type check but those things aren't important so they're not in here um however I think the weirdness of that setup is um that code layout might drip people up even more the code is already semi somewhat inaccurate compared to what you'd have to do in the real world because it uh or e.g uh accept returns a tuple not just a stream stream uh and uh the function would need a trailing to type check um and I think the less um distracting thing is actually to keep it uh the way it is where the reader can imagine a reasonable API that matches yeah so another option would be to give accept the signature that I prefer but the thing is this the entire 
chapter there's no definition of accept like I don't uh I don't in the book give the definition of accept because I don't really want to talk about accept like accept isn't important to the thing that I'm describing is I just assume that people roughly know what accepting a connection means in tcp um this isn't about tcp and so that signature is never given it's always inferred by the reader and and that's sort of what we're looking at here right like to keep it the way it is where the reader can imagine uh a reasonable api that matches um the code um so I'm going to close this okay how did this go whoa this did not go how it should that's interesting well whoa why did it not work it clicks did not work because ooh I got a 503 from wikipedia service temporarily unavailable that's interesting and not what I would have hoped uh I'm guessing this one might be the same now this is something else that's like the rolling test that's failing why is this failing it clicks by locator also failed okay so it suggests that it clicks by locator is maybe broken um firefox what's it called it clicks by locator this is because I don't have normal firefox installed I only have the firefox developer edition that's fine this is all normal firefox as well how about now open me a browser please that passes so this is a transient error then that's fine maybe I get rate limited by uh by wikipedia wouldn't be entirely unreasonable uh the errors are transient wikipedia 503s so we're good comment merge with administrator privileges merge delete this branch and then dependabot rebase again please dependabot is going to rebase and hopefully that's going to make things work okay another pr2inferno from dependabot bumping index map uh that's fine that's a private dependency we don't have that in the public api as far as I remember and yeah that's fine this is just used internally I'm curious now though what is the what changed an index map to they have a changelog in here they do not do they 
have one on the repo? There's no changelog on the repo... ah, releases. MSRV bumped; the `std` feature is no longer auto-detected; `serde-1` has been removed; `get_mut` now returns... that would be caught at compile time; indexmap now has additional things, `reserve_exact` and equivalents; `Equivalent` is re-exported; hashbrown was updated; and serde... okay, so these are all non-breaking for us. Oops, did I accidentally close that? Yeah, I did. Okay, this seems fine; thank you, dependabot. I don't think I care... yeah, this is a fine change to just do. It's out of date? That's fine. All right, "dependabot rebase"; this one's going to keep running. "shell-escape is being overzealous": a bug filed against openssh, where trying to do a curl, one of the args would fail; turns out shell-escape is adding a few more single and double quotes than expected. "Try something like this and review the data structure"; "I think you need to use a different crate, shell-escape hasn't been updated in years". Why are they quoting themselves in here? That is the intended way: every argument has to be passed separately, the same as `std::process` and `tokio::process`. Yeah, that's right. Great, nothing for me to do. Done, very exciting. "How about asking the folks at the Internet Archive whether you can use a corner of their page for tests?" That's not a bad idea. Wikipedia is arguably a bad target because it keeps changing; there was a period where Wikipedia changed their whole design, and that was a nightmare. But I really want these tests to just use local files instead. I think if we're going to change them, we should change them to use local files, because there's no reason for them to be using external resources at all. It just makes them more annoying to run; it means you can't run them if you're, like, on a plane or something, and there's nothing inherent preventing you from doing so. So I would rather just fix them properly. Okay, what are we down to? 105. So we were at what, 135? So we're down 30; this is pretty good. Okay, bump rustls-connector in imap. That's fine, because this is gonna be a
I don't understand why CI hasn't run for this; that's interesting. "Network request interception unfortunately isn't part of the WebDriver spec, at least not yet, and so it's not something fantoccini will support as things currently stand." This is the ability to look at which requests the browser is making, essentially like the developer tools networking tab; that's not something fantoccini does. This has been closed because it's been superseded: that's easy. This has been closed because it's been superseded: great, less work for me. rust-ci-conf: "the matrix for MSRV needs quoted semver or it will truncate: put 1.70 in there and it will actually test 1.7, as GitHub apparently auto-converts this to a float." That's wild. That's awful. That's absolutely terrifying and awful; thanks for catching it. So if you put 1.70 here, it will treat it as a float, and 1.70 is the same as 1.7, so therefore it's using 1.7. That's terrible and gross. Squash and merge: "quote MSRV version to avoid float parsing". Confirm squash and merge. That's disgusting, GitHub, but I understand; this is not their fault. "Request Rust support": I don't even know why I'm following this. It's a feature request for AWS CodeBuild Docker images. Unsubscribe, and done. Bump svelte: this is wewerewondering. That seems fine, I don't really care. 3.58 to 3.59, what could possibly have changed? Squash and merge. Here I do need to manually do a release of this to the website; there's no automation that pushes it out to, like, the S3 bucket that hosts the interface. It would be nice to set that up, maybe I'll do it one day, but for now this will have to be manual. Bump rustls-connector: I don't really know what to do to trigger this one; this seems fully reasonable, I just don't know how to... rustls-connector. Also, this should probably be 0.18.0. So here's what I'm going to do: this is going to re-trigger CI, so I guess that's nice. Add comment, and then I'll do the same here to propose here that this should be zero, because when you increase the effective major version, the assumption is that you start again at patch version zero. Then commit the suggestion, and now let's see if that kicks off CI. Yay, something is queued, great. This is an issue I filed against the toml crate ages ago: if you parse TOML using the toml crate, then the thing you get back lets you deal with things like an array, and the toml crate has its own definition for array, so that it keeps track of inline comments and stuff. So if you print it back out, the TOML you print back out includes any comments that were in the stuff that you read in. So it's not just a vector, it's actually a separate type. But this means that, because it's a separate type and not a vector, you don't have access to common collection methods like `retain`, which is really, really handy to have. And so this is basically just asking: can we add `retain` to the array and table-like types? That's great... oh, I commented on it for the wrong crate. I love it when other people do things that I want to happen. Great, so this adds `retain`, which calls `retain` on the underlying values, which is what I would expect the implementation to look like. Beautiful; that's fantastic. I don't remember what I wanted this for; I think it was maybe some stuff I was doing at Amazon. I don't think I'm using the toml crate in any of my personal projects. Well, I'm glad this happened. Great, done. All right, bump openssl in fantoccini: that now looks like it's passing tests. The rolling Ubuntu one is failing because... I still don't understand where this comes from. This is the job that runs `cargo update` and then tries to build, and apparently that doesn't work, which is a little disturbing. Bump indexmap with the rebase: that's still working. This is probably that same issue, actually; yeah, so I think there's something wrong in a proc macro. It might be that I have to switch the dependency to be 2.0 instead
of 1.0, given that syn and quote both got bumped and the 1.0 versions don't actually work anymore with newer versions of Rust. That's almost certainly what's going on there, which means I probably have a dependabot PR further up in the queue that's bumping syn and quote. But we can ignore those errors then in here; let's have CI keep running there. This CI is running, so now we're just waiting on a bunch of CI. This is the downside of using the open-source-funded CI jobs: it limits how many jobs will run concurrently, which is very understandable, but when you're doing batch work like this, you'll see a bunch of these jobs haven't started yet... well, now they've all started, so my argument doesn't hold anymore. But it only runs a few of them at a time across all your projects, and so you just need to wait and sit patiently. libflate and... okay, there's another dependabot one: libflate in inferno. So this is also internal only; this just deals with how we compress representations of SVGs or something. Let me see here... oh, it's only in dev-dependencies. I think this is because we allow you to pass in a compressed flame graph, or maybe we use compressed flame graphs for tests. So yeah, this doesn't matter for anything; it's in dev-dependencies, it's fine. What's this, check nightly doc? That's that same thing; that's fine. And we'll do a "dependabot rebase" for this to make sure I didn't do anything wrong, so now we're gonna have to wait for CI. This one I think I'm happy enough with that I'm just gonna merge it. Confirm. Which also means dependabot is gonna have to rebase again, because this is to the same repo. So this one's done. This one fails on macos-latest because... yeah, this cookie test is broken for some reason. This is a spurious test failure that I haven't figured out yet, but all the reasonable jobs are passing, so I'm just gonna merge this so we don't have to wait for it; those failures are not due to this bump, is basically the observation. rustls-connector: all the CI here seems to be running fine except for beta, and that's deprecated functions in chrono. That's fine, I can merge this. And libflate, that's now gonna rerun; fantastic. Okay, "fix bias towards the first element due to floor". So this is in the zipf crate. Zipf is... when you generate random numbers, normally what you want from the random number generator is uniformly random numbers, so the probability of sampling any given number in the range is equal across the range. So if you generate numbers between 1 and 10, it is equally likely that you generate a 1 or 2 or 3 or 4 or 5 or 6, etc. Zipf is a different kind of distribution. Rather than being a uniform distribution, it is a skewed distribution, so the likelihood of getting a 1 is much higher than the likelihood of getting a 10. It's sort of an exponential curve, kind of; I think technically it's a power law, not exponential. So the elements further to the left in the sampling range are much more likely than the next element to the right, and so on, so you get this gradation. And one of the reasons you might want to use Zipf is if you're running benchmarks, for example, and you want to emulate a skewed distribution. For example, the thing I used this for was emulating the number of votes on an article; this was for the Noria work. So imagine that you're running a benchmark for something like Hacker News: you're generating a bunch of articles, and you're generating votes on those articles. It's not as though all articles get roughly the same number of votes; in reality, some articles get way more votes than others. And so you can use something like Zipf to generate that distribution of votes: when you are about to generate a vote, the way that you choose which article is you generate the article ID to vote for using Zipf, and
the zipf crate here is a port of a very fast implementation of how to generate these skewed numbers. There are some very slow ways of doing it, but there's a Java library that has a very fast implementation of this distribution, and so this is a port of that to Rust. This is a correctness issue where the generator favors the first element more than it should, and the last element less than it should, for a given exponent. "This is due to the implicit use of floor, which makes emitting the last element only possible on an exact match. We thus shift the likelihood from the first element, which was favored, to the last, by adding 0.5 before doing the floor, as the original code also does." Oh nice, someone's porting it to Julia. So, the original implementation here and the port here. This is the original Java code: it has a loop, it does the integral inverse, it takes the x it gets back, adds 0.5, and takes the floor; and then if k is less than 1, k is set to 1, which is the same thing we get by doing a max here. But I think this can use `round`: there's a `round` operation for `f64`, "returns the nearest integer to self; halfway cases round away from 0.0". So I wonder whether, instead of adding 0.5, we should round. Although I actually wish that `round` returned a `usize`, an integer type, instead of an `f64`, because otherwise casting from the `f64` to an integer is not, I don't think, guaranteed to give the right integer. The reason I would like to use `round` is it might be faster than doing this add-0.5-and-then-cast, but this seems fine. Whoa, why did this change? Has this code changed recently? No, it changed seven years ago... oh, this is in a test. Yeah, this is in a test, right. So what this is doing is: this test just generates a bunch of random numbers using the Zipf distribution and then checks that the frequencies match what you would expect to see after generating a Zipf distribution. And it does that by checking that how much we're off by, compared to what we would expect the frequency of each bucket in the distribution to be, is less than some, I don't know, wiggle room, or jitter, that we're allowing. And this used to have a special case for handling the first and last buckets, because those were more wrong, but that was because of the bug which is now fixed. Beautiful. All right, this seems great, although why doesn't this have CI? Because it should have CI. That's interesting: CI just did not run for this, "workflow run completed with no jobs". I don't understand why not. Does the source revision somehow not contain the workflows? No, it has the workflows, so why didn't any of them run? All right, let me make some change here, then; I guess I'll just do something arbitrary: "rounds towards zero", so we add "to ensure", great, and then commit the suggestion, really just to poke CI. Great; well, that triggers CI, that triggers CI, great. No idea why CI didn't trigger otherwise. Let's see, that's going to run in the background. This is the bump of libflate in inferno, and that looks fine: merge, squash and merge, commit, finished, close. I would be very surprised if this fix was not correct, but it's just good to have CI be green anyway. All right, oh, we're down to 94, look at us go. Just waiting for this to run. "Java can be fast"? Yeah, of course it can: if you have a fast algorithm, then it'll be faster than a slow algorithm. Oh, `round` can't return an int because some floats are too big; that makes sense. Is this the setting on GitHub to require approval before CI runs? Maybe... no, because then it usually shows me a button to say, like, "approve and run", and it didn't do that here. Let me go ahead then and just merge zipf; this is almost certainly going to be fine, and then I'll do a release of zipf straight away, because that one's pretty easy to release. All right, another page. This one I'm pretty sure I've already dealt with; yeah, this is like a
regression in tracing that's backwards compatible; that's fine, I can get rid of that. This one's merged, no squash, because I don't care about my additional little commit. And then we'll do `git pull`, and we'll bump Cargo.toml to release 7.0.1. Why can't I run `cargo check`? That's disturbing: "found crate ... compiled by an incompatible version of rustc". It's almost certainly because that folder is out of date. There we go, great. So, release 7.0.1 with the bias fix, and tag. This is a script I have that just walks the git history, finds which commit was the one where I changed the version in Cargo.toml, and adds a tag to it. And `git push`, and `cargo publish`. Here we go, and zipf 7.0.1 is tagged and released. I forgot to actually leave a review on this one: "that's a great catch, thank you". We're making very good progress through this. "Support hot and cold flame graphs": I've not actually used these before, but they are apparently very useful. Oh yeah, this is for also plotting off-CPU time, so time spent waiting. Normally flame graphs do sampling, so they only sample what the CPU is actually running. So if your process is asleep, like if it's waiting in accept, it's not going to be sampled, it's not going to show up as where you spent your time, and that can be a little misleading, because it can make you think that your code is spending all of its time in function A when in reality it's actually spending most of its time asleep in function B. And hot/cold flame graphs will show those waiting parts in blue. "Great idea. I think this is mostly a matter of how the input is collected, and then figuring out which frames to color reddish or blueish, depending on something in the output of `perf script`. Would be really neat indeed." This is a feature PR for wewerewondering, the Q&A site: pull event out into a store. Right, so when I built wewerewondering, I was definitely relatively new to frontend development, which I still am, and new to the framework I was using. And it turned out there were a couple of architectural decisions that should be changed somewhat for the system to just work better. And this person came along and basically implemented a bunch of things to make it better, like debouncing and stuff so that animations would be nicer. But that particular PR was very large, and so I asked them: can you split this up into smaller PRs? And this is one of them. So this is moving... what's "event" here? Oh, event is information about the current Q&A session. So wewerewondering supports, like, you can go create an event on wewerewondering, and they end up with separate unique identifiers, and those are separate events, and each event has associated questions, and I think a title, I forget exactly, but mostly it has a separate admin key and a separate list of questions. And previously, which event the current page is on was stored in a sort of local variable in Svelte here. And the observation is that it would actually be better for event to be in the store, which is a special type of object Svelte has that's persisted across page reloads, I think. And it also lets you be reactive: if that variable changes somewhere in the code, then any other code that has read that variable gets automatically re-run. So that's the reactive part of it, which you don't get with normal variables in the same way, I think. And in the previous style, sometimes when you change this variable, it ends up loading things twice, because it doesn't realize that it changed to the same value. So here we're going to subscribe to the event, load questions for that event. Okay, this all seems like fairly straightforward changes; okay, this all seems reasonable to me. This seems
entirely reasonable to me. "Thanks for the follow-up"... also, didn't they say something about me moving? "Hope your move went well." "Did you have a chance to also test this out yourself? I assume that means it worked without any meaningful changes to the behavior of the page." Just so I don't merge it if they didn't test it, because then I need to test it myself. The reason I used then/catch in this code is because Svelte basically required it, I think, because you couldn't have an async block in the outer scope of a JavaScript file. That might have changed, but it was by necessity; I have other code in there that's async/await style. Okay, so that was this one. "Immutable iterator plus iterate with tokens": what is this? This is in streamunordered; okay, so this is a different crate again. streamunordered implements a stream that multiplexes multiple streams. So the idea is that, similar to how you have, you know, a join for futures... or, this is really more like FuturesUnordered. So there's, uh, Tokio... no, not Tokio, futures-util: the futures-util crate has this thing under `stream` called `FuturesUnordered`. A `FuturesUnordered` is a set of futures, and the `FuturesUnordered` itself is a stream. So you stick a bunch of futures in there, and it gives you a stream of the values that those futures resolve into, in arbitrary order. So if you push in the futures A, B, and C, the things you get out of the `FuturesUnordered` are going to be the futures' values the moment any of them resolve, in that order. So if B resolves first, the first thing the stream is going to yield is the value from B; if C then resolves, the next thing the stream will yield is the value from C's future; and then A, when A resolves. There's also `FuturesOrdered`, which preserves the order, but what that means is: imagine that you stick in futures A, B, C, and D, and they resolve in the order D, C, B, A. Then `FuturesOrdered` still has to wait; it has to buffer the responses from D and C and B, it has to keep them in a vector, basically, until A resolves, and only when A resolves can it release them in the order A, B, C, D. So `FuturesUnordered`, if you can tolerate the change of ordering, tends to be more efficient, because it doesn't have to buffer anything. Now, streamunordered is a similar kind of abstraction, except the things that you stick in there are streams, not futures. So it basically multiplexes these streams together, so that you get one stream that logically combines, or joins, or merges, multiple asynchronous streams under it. I haven't looked at this crate in a long time, but that's mostly because the Stream API has stayed fairly static, so there hasn't really been a need to change this crate. "Immutable iterator plus iterate with tokens: exposes the existing IterPinRef via new methods; changes the iterators' Item to include a usize, the usize being the token of the stream. Changing the Item type of an existing type would be a breaking change; as far as I can tell, these types are incapable of being constructed, as the fields are private and no other methods I could see return them constructed, hence this change shouldn't be breaking. I attempted an implementation for the pinned iterator, but this was out of my depth, so the PR is missing that implementation." Okay. "We're going to multiplex many streams that are related to a single connection into a StreamUnordered. When this connection is closed, I need to be able to clear all active streams. It would be useful if something like the following were possible: push some streams into the StreamUnordered, then remove all streams by iterating, mapping each stream to its token, and removing by ID; and in a similar fashion with the mutable iterators." Let's pull up the docs here: streamunordered. So when you push a stream, or when you insert a stream, you get back a `usize`, which is, like, the token for that stream, and it's sort of a unique identifier for that stream, so that in the future, when the stream yields an item, it also tells you which stream yielded that item. The idea being that even though all the streams have the same value type, or item type, you might actually care which of the underlying streams it came from, and this usize is that identifier. And you can also use it to, for example, remove a particular stream from the set. So the question now becomes: why do they want this? So there's an `iter_mut`, and `iter_mut` here allows you to iterate over all of the streams in the set, all the currently known streams. So `iter_mut` here, you'll see, has... oh, interesting, yeah, it's just a mutable iterator over all of the streams, but this iterator doesn't include the token, and I think that's what they're after: they want the iterator to also yield the token of the stream, which it really is a bummer that `iter_mut` doesn't do. So let's look at what they added here: `iter`, built on `IterPinRef`. Yeah, this is fine, although I don't know if it's okay to hold on to this reference after you do this operation, so we might have to read the docs up here, but that's fine. `IterPinRef` is sort of the underlying type, I believe; I don't think it's exposed in the... yeah, it's not exposed in the public API, it's just a helper for implementing iterators. This should really not add an `Unpin` bound; yeah, that's not gonna fly. `iter` returns an `Iter`... oh, I see, so we had an `Iter` type, it was just not exposed. Oh, this seagull is going nuts outside my window. Yeah, I don't think this should require `Unpin`. Why does this require `Unpin`?
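The distinction being debated here can be sketched with a toy example. This is not streamunordered's real API; `Set`, its field, and both methods are invented stand-ins, just to show why handing out `&mut S` from pinned storage needs an `Unpin` bound while handing out `Pin<&mut S>` does not:

```rust
use std::pin::Pin;

// A toy container that, like a stream multiplexer, owns its elements pinned.
struct Set<S> {
    streams: Vec<Pin<Box<S>>>,
}

impl<S> Set<S> {
    // Handing out Pin<&mut S> never lets the caller move S out of its pinned
    // location, so no extra bound is needed (analogous to iter_pin_mut).
    fn iter_pin_mut<'a>(&'a mut self) -> impl Iterator<Item = Pin<&'a mut S>> + 'a {
        self.streams.iter_mut().map(|s| s.as_mut())
    }

    // Handing out plain &mut S would let the caller mem::swap the value out of
    // its pinned location, so this is only sound when S: Unpin.
    fn iter_mut<'a>(&'a mut self) -> impl Iterator<Item = &'a mut S> + 'a
    where
        S: Unpin,
    {
        self.streams.iter_mut().map(|s| Pin::get_mut(s.as_mut()))
    }
}

fn main() {
    let mut set = Set {
        streams: vec![Box::pin(1u32), Box::pin(2u32)],
    };
    // u32 is Unpin, so the by-&mut iterator is available here.
    for s in set.iter_mut() {
        *s += 10;
    }
    let sum: u32 = set.iter_pin_mut().map(|s| *s).sum();
    println!("{sum}"); // prints "23"
}
```

The immutable `iter` case sits in between: it only hands out `&S`, so in principle it shouldn't need `Unpin` either, but requiring it is at least sound and can be relaxed later.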
`iter_mut`, why does this require `Unpin`? Oh, because you're taking a mutable reference to self. So, I'm not gonna go deeply into Pin here, but basically: you have to stick things into a `Pin` before you're allowed to poll them as a future or as a stream, and once you put something into a `Pin`, you're not allowed to expose it without the `Pin` wrapper. And so here, we've already polled things as streams, because that's what this whole data structure does, but we're giving out mutable references to the streams, and we can only do that if the underlying type is `Unpin`, that is, if it's allowed to be extracted from within a `Pin` after we've put it in a `Pin` the first time. As you see, that bound does not exist for `iter_pin_mut`. So that's fine. So, `iter`: I don't think `iter` actually technically needs this bound, because it doesn't expose mutable references, but I think it's okay; we could always relax this later if we can find a way to do it. I don't think `iter_pin` is important to expose, but I don't know why this should suddenly require `Unpin`; that doesn't seem right. The only thing that's awkward is... I wish... it's going to be confusing that `iter_mut` here... where is my... here, source, `iter`... yeah, this also only returns a reference to S. Ah, but they changed that here, but only for `iter`. I think it's going to be confusing that `iter_mut` does not include the token but `iter` does; I think we need to be consistent here. And I think the way we do this is: "I think we should just not expose this right now, unless we have a concrete use case that benefits from the pinning." These seagulls are really just going for it out here. "I don't think this extra bound should be necessary." "I'm not 100% sure that it's safe for us to still access `task` here, so let's instead have `let id = self.task.id` directly, next to `let stream` further up, and then just use `id` directly here." "I think it'll probably be pretty confusing for `iter` to include a token but `iter_mut` not to; let's instead add this as `iter_with_token`, and add an `iter_mut_with_token` as the mut version. Then, in the next major version, we can get rid of the non-token iterators and just have them all provide tokens instead." "Add support for fluid drawing": this is from 2019... oh, someone left a comment. This landed in 2019, and then someone left a comment last month: "it's really expensive to render for complex graphs". Oh, that makes sense. So this change was basically moving from having pixel-based widths for all the frames in flame graphs to using percentage widths instead. Now, percentage widths mean we don't have to compute a bunch of things ourselves, but it means the browser has to compute those things instead, which is what this observation is: this is now actually a lot more costly to render. It went from what, 200 milliseconds, to almost three seconds of layout calculation. "Zoom is also a bit slower, with 40 to 60 percent of the time spent in update_text." Yes, we switched to monospace fonts here, which made that a lot better; "let's check with the monospace font". There's definitely a cost involved with having the browser do this work. I think there's a decent argument for us just making monospace fonts the default. With monospace fonts, this update_text method, which has to walk all of the text elements to truncate them so that the text fits and doesn't overflow... with monospace fonts, we can pre-compute how many characters to show, and we know that that's always how many we're gonna show. But monospace fonts are not the default, because they don't read as nicely. "What do you think?" This one's already released, so it's gonna be closed; it's gonna be closed. "Non-numlock numpad keys": this is an issue I filed with OBS. You know how you can have numlock on and off on numpads? If you have numlock off, then the
seven key on a numpad, for example, is Home, or the zero key is Insert. You can't bind specifically the numpad Insert key or the numpad Home key to things in OBS; the Home key on the numpad is treated the same as the Home key on your normal keyboard. But I really want to be able to bind things to specifically the numpad keys, and so here someone basically pointed out how you can do this, how you can implement this distinction in OBS. And this is just... I need to write a PR, or if someone else wants to write a PR, this is potentially a good way to land a change in OBS; I'll send you the link in here. I'm not going to deal with this on stream, though; it's gonna be a bunch of implementation work. Okay, fantoccini: "expose the returned remote capabilities on the client", so this tackles 195: "can't find a way to get the original new session command response, which contains the WebSocket URL the WebDriver creates for communication". Okay, so this is: when fantoccini connects to a WebDriver host, there's a response back when you establish a session. That response isn't currently exposed in the client anywhere, but it turns out there's some useful stuff in that response that users of the library might want. So this is supposed to be a public getter to access the remote capabilities. Let's look: what is `Capabilities` here, though? Is that a type we own, or a type the webdriver crate owns? It's a crate we own, pretty sure; I think we've already abstracted away all of that. `Capabilities` is a serde_json map; that's fine. Let's see. Okay, so this is: when we connect to WebDriver, we do a handshake to establish a session, which is essentially the same as a browser window. We get back from that new session thing... here they want to preserve the capabilities field of that. If capabilities is an object, then we return the new session response here, okay; map the handshake response's capabilities, then we store the remote capabilities. Interesting, I like that.
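The pattern under discussion can be sketched roughly like this. To be clear, everything here is an invented stand-in, not fantoccini's actual internals: the type names, the `HashMap`-backed `Capabilities` alias, and the getters are all hypothetical, just to illustrate "store the whole new-session response once, expose getters on the client":

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the serde_json-backed capabilities object.
type Capabilities = HashMap<String, String>;

// Hypothetical stand-in for the WebDriver "new session" response, which
// carries a session ID alongside the server-reported capabilities.
struct NewSessionResponse {
    session_id: String,
    capabilities: Capabilities,
}

struct Client {
    // Stored once at handshake time, rather than extracting only one field.
    session: NewSessionResponse,
}

impl Client {
    fn new(session: NewSessionResponse) -> Self {
        Client { session }
    }

    // Calling this `capabilities` (not `remote_capabilities`) is unambiguous:
    // no caller would expect it to return what *they* sent to the server.
    fn capabilities(&self) -> &Capabilities {
        &self.session.capabilities
    }

    fn session_id(&self) -> &str {
        &self.session.session_id
    }
}

fn main() {
    let mut caps = Capabilities::new();
    caps.insert("browserName".to_string(), "firefox".to_string());
    let client = Client::new(NewSessionResponse {
        session_id: "abc123".to_string(),
        capabilities: caps,
    });
    // The getters read back what the server reported at handshake time.
    println!("{} {:?}", client.session_id(), client.capabilities().get("browserName"));
}
```

Keeping the whole response means later additions to it come along for free, instead of needing a new getter-plus-field pair each time.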
The only question here is: this extracts the capabilities from the response, but I'm wondering whether there might be other things from that response the user wants as well. So rather than just extracting the capabilities, maybe we should store the entire response. Also, "capabilities" is an overloaded term in WebDriver, because when you connect to WebDriver you send it "these are the capabilities I want," and it responds with "these are the capabilities that I have." Currently, Fantoccini refers to the former just as capabilities, which is the right term. But here we want a getter to see what the WebDriver host sent us back when we connected, and it's currently added as a function called, where is it, `remote_capabilities`. I almost wonder whether it should just be called `capabilities` rather than `remote_capabilities`, because there are no other capabilities that are relevant here. No one's going to think that calling `.capabilities()` on a client gives you back the capabilities you sent it when you connected. So, yeah, I think I'm okay with this just being called `capabilities`. "Actually, I doubt anyone would expect that this is the capabilities we originally sent to the server, and we can also clarify this in the doc text. What do you think? Seeing this made me wonder: are there other fields of the response, outside of capabilities, that may be relevant?" I guess let's see whether the response we get back only ever has those fields. Oh, it only ever has the session ID and capabilities. Okay, great, that's fine then; there's nothing else that we expect to ever be added to this. But maybe, more like here, just store and expose the entire new-session response rather than only the capabilities. There's not
much else in there at the moment, but the session ID may be handy, and who knows if more fields may be added in the future. This is a great addition, thank you! I left a few questions, but none of them major. Approve. Even if I have non-blocking comments, I'll usually approve and then leave the comments. The reason for that is that when I later come back and look at this PR, I'll know that I've already looked at it, and I can reread my own assessment of whether it was generally okay. So we'll approve and run the CI.

"How do you batch these?" It would kind of make sense to me to batch this kind of work by repo, and sometimes I'll look at all of the things for inferno, for example. But when I'm doing a big catch-up like this, I like to go by time, because some of these people have been waiting for a long time, and I'd rather get to them sooner rather than arbitrarily prioritizing things for a particular repo. That's a personal preference, though; normally there aren't enough notifications that I need to batch at all. I think it would make sense to do them by repo too, just to keep the, sort of, cache state, although for my own sake it doesn't matter too much: I can context-switch between them fairly aggressively. For the purposes of a stream it might make more sense, because as you've probably observed by now, I jump around a lot between these different projects, and that's harder for you to keep up with, since you haven't been exposed to these as much as I have, so the jumping is maybe jarring. It's a good observation.

Okay: unsoundness in safe code, from 2022, closed in 38. Oh, that's great! So this is something that we already fixed in 2022; we just didn't close the issue. Nice. Rocket-ship reaction, because it was already fixed. Beautiful.

Document Cargo features in the README. Users who use this crate as a library will very likely want the `cli` feature disabled, and it's also not clear to me what the `nameattr`
feature is for, even after looking at the code. A section in the README, or maybe in the root of the crate docs, that lists what every feature is and which are enabled by default would be really valuable. I completely agree. I really wish there was a standard way to document features in Cargo and the Rust ecosystem. There's some discussion on that, but no hard guidance as of yet. This person has been a contributor for a while and knows the code base pretty well too, so: maybe you could take a stab at this after you land 298, 297, 295, and 296? Where's the one where they fix it... they have a draft PR, 294. Okay.

I wish this discussion went anywhere, but it did not, really. Okay. Release openssh version 10? Yeah, that seems reasonable. I'll do that off stream, though. At this point, this person is basically the main contributor to openssh and the main driver for bringing it forward, and it's unfortunate that they end up bottlenecked on me for doing releases and stuff. So what I'll do is give them permissions to run CI and such, basically admin privileges on the repo, and then also give them publish privileges on crates.io. But I will do that off stream, so I'll save this and mark it as... what's the time? It's basically lunchtime. I think this is a decent place to stop. We caught up to two weeks ago; that's pretty good. We're at 86, and we started at, what, 120-something, so we went through about 40 issues. I'm happy. I'm also going to mark this one as done, because I see that it has been superseded. Okay, I think that's where we're going to stop for today. What I'll probably do is another one of these, maybe next weekend if I have the time. I have to catch up with this backlog anyway, so I might as well do it on stream. Hopefully you found it useful and interesting. Thanks for joining me, and watch out: if you're watching this as a recording, there might already be a video that
continues from this exact point in time. So, thanks, and I'll see you later!
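As a footnote on the feature-documentation issue from earlier: the kind of README and manifest documentation being asked for might look something like the sketch below. This is illustrative only, not inferno's actual manifest; the `dep:clap` gate is a made-up example, and only the `cli` and `nameattr` feature names come from the discussion above.

```toml
# Cargo.toml (illustrative sketch, not the crate's real manifest)
[features]
# Enabled by default; library users can opt out via `default-features = false`.
default = ["cli", "nameattr"]

# Build the command-line tools. Library consumers will very likely want
# this disabled, since it only matters for the installed binaries.
cli = ["dep:clap"]  # hypothetical dependency gate

# Support per-frame name attributes (extra SVG attributes such as links
# or classes attached to individual frames).
nameattr = []
```

A consumer would then write something like `inferno = { version = "...", default-features = false, features = ["nameattr"] }` in their own `Cargo.toml`, and the README would list each feature with a one-line description like the comments above.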