Okay. And Jane is the founder of this group, I've just been told. Where's the microphone? Oh, right here, I can move closer. Should I go stand by it? No, it's fine, and if it turns out that anyone has trouble picking you up, I'll move it for you. I'm here to make your talk easier, so I'll move whatever. All right, this is a first-class situation. Hello, everyone in the room and on Zoom. My name is Jay Allen. I've been an Erlang developer since 2010, which is quite a while. And recently, I have to admit, I've become a DevOps person. Sort of not on purpose; it's just one of those things that happens to you. You start to care about something, and then suddenly you're the DevOps person. That's what happened to me: we were deploying a lot of Erlang software and we had almost no process around packaging it and putting it out for distribution. I got to really care about that, and suddenly I had backed into this job of being a DevOps person. So for the last year, 18 months, whatever, I've been doing mostly DevOps stuff. But honestly, Erlang is still in my heart, and I just want to share some Erlang love with you. So tonight we're going to talk about Erlang timer wheels. Timer wheels themselves are not a new idea; they've been around since the 80s. In fact, the paper that I hope at least some of you glanced at, skimmed, or read the summary of was written in 1987. It's still an amazing paper. It's short, it's easy to understand, and it's incredibly powerful. What a good technique it is. So we're going to talk about that tonight. Also, any time there are questions you want to ask about Erlang or timers or whatever, just ask; it's fine. I'm not one of those presenters who gets totally flustered if you interrupt my flow. Are you on the Zoom meeting or no?
If you don't mind jumping on the Zoom meeting, that way you can share your slides with everyone. Sure, I can do that. First I have to get on the Improving Wi-Fi. Oh yeah, Improving. Well, I guess we're going to have a slight technical difficulty. And I'm not even using Linux on my desktop, so you're all super lucky today: we're not going to have fifteen unplug-replug cycles. Just a quick warning: you don't want to say that too loudly, because we're supposed to be next to the Linux group, and this is a Linux person; everyone knows that. I've never had a problem. I'm just going to throw some bombs out, and you can discuss it after I'm done talking, or if you really get mad, you can just throw something at me. Okay, my computer does not like Improving. Are we having an issue with that? Oh, capital I. That's probably why it's not working. It's doing something. Here we go, it's doing the thing. Okay, we're in. Awesome. Now I need to go join the right Zoom meeting and share my screen. Yes, this is the paper. I just want to check: do remote participants have the shared screen? You should see the title slide. We see it.
Is it still up? We're good. Okay, awesome. First of all, let's talk about how weird time is. Who thinks that time is weird? Who has never programmed with date-time? Who has never been super frustrated with date-time? Everyone in this room knows that date-time is one of the three armpits of computing. The other two armpits are printers and serial cables. You'll notice that two of them are possibly related to one another. Programming those things is awful, and one of the reasons is that time is very political. It's this abstract thing that humans have invented, and humans have interpreted time, how they want to do time for their place, in different ways. In our country, there really wasn't a shared notion of time until the middle of the 1800s, when the railroads started having train schedules. And then people were upset that they no longer had their local noon, so we decided to have time zones based on geography, and that was the start of some really terrible ideas. The problem is that everyone thinks of time as this always-progressing, strictly monotonically advancing counter, but actual time is not like that. It's quite a bit messier. So I have a couple of fact-or-crap questions. Who here thinks it's a fact that time always advances? Fact? Or crap? Well, correct. Are we talking about, like, Earth near a black hole? Because I'm willing to go there. You know what, I love that you brought up black holes, but let's talk about non-singularity time. Let's talk about Earth time, without giant gravitational fields influencing how time goes. Intuitively we think that's a fact, but in fact, it's crap.
In fact, I'm going to get to that; I promise I will. In most practical programming scenarios for us, it's always monotonic. There are cases, like before the zero Julian day, and yes, that's a reference to a calendar. Sure, that is a specific problem: ranges that sit outside our normal computing timeline are definitely annoying, but that's not what I'm talking about in this situation. So I have another fact-or-crap, which is: NTP is a thing. Who knows what NTP is? Okay, someone shout it out: what is NTP? It's the Network Time Protocol, a centralized time protocol; it synchronizes clocks on computers, right? So why do we need to synchronize clocks? Why is that important? Security? Security is a good reason. Why else? General relativity? We're not getting to the edge of stuff here; think the Disney movie from the 80s, The Black Hole, not an actual black hole. What about GPS? Right, for positioning. And because real clocks drift: a pendulum in a grandfather clock slows down, and so it keeps worse and worse time. If your clock isn't exactly right, there's always some amount of error that can creep in, so that's a big reason to be synchronized. Synchronization sounds very stable. Synchronization is super stable. In one of my next slides, we're going to talk about why clocks are important for distributed systems, but for right now, we're just talking about time itself as an abstraction. OK, so who here has heard of a leap second? Yeah, so what's a leap second?
Isn't it there to compensate for the Earth's rotation? Like, if you count one year as 365.25 days, we have leap days to compensate for that, correct? But even if you do that, the calendar isn't compensated for the slowing of the Earth's rotation, except by a couple of seconds. That's also correct. So leap seconds get inserted every so often; they generally happen at the end of the year. Does anyone know how computers process leap seconds? They don't. How do you process a leap second? Don't you just pretend that it isn't there? Yeah, it's magic. Like, it's literally magic: the fairies come out of the wall and they tap your computer, and then the leap second has magically passed. There are all kinds of different ways that companies have decided to approach this problem. Leap seconds are generally processed by your clock not advancing for one second. And Unix time is different: it does not account for leap seconds. That's correct; that's a case where it does not. Now, there have also been buggy implementations of clocks in computers. Who's surprised by that? No one. No one is surprised by that. In fact, there are documented cases where NTP has rolled back the time on servers that are supposed to never have that happen, and stuff breaks, because it happens instantaneously and it's unexpected. So if there's one rule about distributed systems, it is that you must expect the unexpected. The old classic saw is: be liberal in what you accept and conservative in what you send. That's the rule about building distributed systems. OK, so we've talked about how weird time is, how political time is, how time itself doesn't always advance because of the need to mesh with actual science, and all kinds of things like that. That's specifically about time. The next thing I want to talk about is distributed systems. I'm a distributed-systems nerd; that's also close to my heart.
You probably guessed that, because Erlang is a wonderful language for building distributed systems in. So let's talk about some of the reasons why you need reliable clocks. We've already mentioned a few of them. One of them is positioning: we need to be able to calculate where we are based on triangulation, and triangulation depends on time. So we need really accurate clocks to get a good measurement of where we're at on Earth. We have this big cloud of satellites sending signals, and we time those signals, and that's how we figure out where we are. I didn't put it on here, but it's a great use case. Another is coordination of state replication, which we touched on a little bit ago. There's this idea in distributed-system transactions that there are different isolation levels, which govern: if I have multiple writers, what is the ordering of those writes? How are we going to record that in our source of truth? And the highest level, the highest guarantee, is this thing called linearizability. Linearizability is essentially the very intuitive notion that over some timeline t, the writes are going to be ordered in the correct sequence. If you have two writes that happen simultaneously, you have to resolve that somehow, but at the end of the day there's going to be a total ordering of all the writes at a certain time for a certain register. That's linearizability. You need good timing for that because, again, you need to record what time these things happened at and then resolve them if there's a conflict. There's also failure recovery: you have an algorithm that didn't work, or you have a remote partner that's not responding to you, and you need to detect that somehow. That's called failure detection. And then we also have retries, which I touched on already. And finally, there are algorithms that do traffic shaping and traffic control.
So you have back pressure in your distributed system, or you're sending out data and you don't want to overwhelm something, so you send a little bit at a time, save the rest, and send a little bit more and a little bit more until your bucket's empty. That's another case where you need a timer and a callback, where something needs to happen on an interval schedule. So let's talk about how we can implement timers. The paper I asked you all to read, the one I want to talk about tonight, discusses a couple of ways the authors were aware of to implement timers. The first one is direct access. What does that mean? It's literally: we pick a piece of memory, we put a counter in it, and we decrement it on every tick of the hardware clock. We go to that memory location, decrement it, and when it gets to zero, we do something. Whatever that something is, we call a routine, and whatever happens, happens. That's a very elementary way of implementing a timer: straightforward, easy to understand. The problem is what happens when you get to very large numbers of timers. There's a really big scaling problem here, which is that there's a lot of bookkeeping overhead: on every single tick you need to access every single timer and decrement it. Obviously, as the number of timers increases, that's going to slow down your computation quite a lot. So that's not a good method. The second method is like the first, but instead of a single memory location, we keep a list of timers, possibly ordered, possibly unordered. In the paper, the authors discuss the various tradeoffs of doing that.
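To make the direct-access scheme concrete, here's a minimal Python sketch (my own toy illustration, not code from the paper): every live timer is touched on every tick, which is exactly where the O(n)-per-tick cost comes from.

```python
class DirectTimers:
    """Toy "direct access" timers: one counter per timer, decremented each tick."""

    def __init__(self):
        self.timers = []  # list of [remaining_ticks, callback] pairs

    def start(self, ticks, callback):
        self.timers.append([ticks, callback])

    def tick(self):
        # O(n) bookkeeping on *every* clock tick -- the scaling problem
        # the paper points out.
        still_running = []
        for timer in self.timers:
            timer[0] -= 1
            if timer[0] <= 0:
                timer[1]()  # counter hit zero: invoke the routine
            else:
                still_running.append(timer)
        self.timers = still_running
```

With two timers started at 2 and 1 ticks, the first tick fires only the second timer, and the next tick fires the first; each tick walks the whole list regardless.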
So that's also problematic, because as that list gets very long, you have to keep evaluating the things at the front of the list or the rear of the list, depending on which method you picked and what tradeoffs you've chosen. Another method in the paper that's not on the slide is the idea of using a tree. Instead of a straight-up array, you have a tree, and then you can do all kinds of tree stuff. That drops the per-operation complexity to O(log n), which is better than O(n), but still not great for large n. And then method three, which I love as a distributed-systems person, is to just ignore wall-clock time. We completely forget about the wall clock and just worry about processing events as they occur. As an event comes in, we look at it, we classify it: oh, this is an event X, this is an event Y; we're supposed to do this when we get X and that when we get Y. And then you can just count them: I've gotten 10 X's, so now I'm going to do something with all 10 of them. I'll roll them up, summarize them, retransmit them, whatever it is you're supposed to do with them. That's a perfectly legitimate approach to use in a distributed system. But in this particular case, we are talking about wall-clock time, and generally you still have to deal with wall-clock time even if you're sort of ignoring it, because for debugging purposes you want to know when those events happened. If your system is processing events and you say, cool, I just got 10 X's, what am I going to do with them, you're going to want to be able to say: well, this X came in at time 0, this one at time 1, time 2, and so on. Because later on, when it fails, you're going to ask: what was I doing when it failed? What event was I processing?
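The ordered-list tradeoff the paper describes can be sketched in a few lines (again my own illustration, not the paper's code): insertion pays O(n) to keep the list sorted by absolute expiry, and in exchange each tick only needs to peek at the head.

```python
import bisect
import itertools

class SortedListTimers:
    """Toy ordered-list timers, sorted by absolute expiry tick."""

    def __init__(self):
        self.now = 0
        self._seq = itertools.count()  # tie-breaker so tuples stay comparable
        self.timers = []               # sorted list of (expiry, seq, callback)

    def start(self, ticks_from_now, callback):
        # Insertion is O(n) to keep the list ordered...
        bisect.insort(self.timers,
                      (self.now + ticks_from_now, next(self._seq), callback))

    def tick(self):
        self.now += 1
        # ...but each tick only inspects the head: O(1) when nothing is due.
        while self.timers and self.timers[0][0] <= self.now:
            _, _, callback = self.timers.pop(0)
            callback()
```

An unordered list flips the tradeoff: O(1) insertion, but every tick has to scan every entry.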
If you don't have wall-clock time, it's going to be really hard to look at a log file or whatever and figure all that out, OK? So at the end of the day, you can't escape from wall-clock time, even if you're ignoring it. And what's wrong with that implementation? Nothing, but you still have to deal with wall-clock time, so we might as well deal with it. All right. We can do better. What if I told you we could do better? That's the point here. So here's how the paper does better. The paper is called "Hashed and Hierarchical Timing Wheels: Data Structures for the Efficient Implementation of a Timer Facility." It was written in 1987 by a couple of guys at DEC. Really great paper; I highly recommend it. And here's a timer wheel. This is the idea in the paper, boiled down to a diagram. In the middle of the diagram, we have this timer wheel, and it has all these slots in it. We made a really small little timer wheel here: an eight-slot wheel. Every time the clock ticks, we just look at the slot the wheel is on, and we see if there are any little boxes there. Those little boxes are timers. Depending on your implementation, they may be ordered lists or unordered lists. We evaluate the head of that list, and if the head of that list has expired, we process it; in fact, we process every expired element in that list. We check whether any of the jobs in that list have expired, every single clock tick. That way, we compress the number of memory locations we're storing data in, and evaluation is much faster: we don't have to evaluate every single timer on every single clock tick. And we can also schedule these things in advance, which makes it nice. There are some other algorithmic things that go on in the paper, such as the idea of hashing.
So that means, say we have this eight-slot timer wheel here. Every time we put a new job in, we take the time it expires and compute it modulo 8. That tells us which bucket the little box goes into. So when we're building this linked list off the timer wheel, we look at the expiry time, we hash it, and we put a little box in that slot's list. Then every time that spot on the wheel comes around, we check all the jobs in that list, and as jobs expire, we remove them from the list and do whatever they say to do, hopefully asynchronously. Anyway, that's the basic idea of a timer wheel. Is everyone clear on that? Does that make sense to everyone? Any questions? No questions up there. OK. If you're on Zoom and you have a question, just pop it in the chat and someone will relay it. So wait, there's just a simple clock, right? And this clock always increments, every tick or whatever, and this wheel has eight slots? Yeah, it just goes around: one to two, two to three, three to four, and so on, then seven back to zero. OK, so let's pretend we have this really simple clock, and it starts at zero. Every time it increments, we do modulo 8 on whatever the tick count is. That gives us a number on this wheel from zero to seven, we go to that location, and we evaluate all the little boxes in that list. That's the basic timer wheel. But each time we pass a slot, do we only process one event? No, no, we expire anything that's expired. Oh, so you iterate over that list if you need to.
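That modulo-hashing scheme can be written down in a few lines of Python (a toy version of the paper's idea, not Erlang's implementation). Note that a timer whose expiry is more than one revolution away hashes into a slot but simply isn't due yet when the wheel first passes it:

```python
WHEEL_SIZE = 8  # matches the eight-slot wheel in the diagram

class TimerWheel:
    """Toy timer wheel: jobs are hashed into slots by expiry modulo wheel size."""

    def __init__(self):
        self.now = 0
        self.slots = [[] for _ in range(WHEEL_SIZE)]

    def start(self, ticks_from_now, callback):
        expiry = self.now + ticks_from_now
        # Hash the absolute expiry time into a slot with modulo arithmetic.
        self.slots[expiry % WHEEL_SIZE].append((expiry, callback))

    def tick(self):
        self.now += 1
        slot = self.now % WHEEL_SIZE
        remaining = []
        for expiry, callback in self.slots[slot]:
            if expiry <= self.now:
                callback()  # due now: fire it
            else:
                # Belongs to a future revolution of the wheel: leave it.
                remaining.append((expiry, callback))
        self.slots[slot] = remaining
```

A job due at tick 10 lands in slot 2 (10 mod 8), but when the wheel passes slot 2 at tick 2 it's left in place, and it only fires on the next revolution.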
Yeah, so the idea here is, depending on whether you sort the list or don't sort the list, and the paper talks about both techniques: if you sort the list, then expiry evaluation is effectively O(1). It's actually O(n), but in practice it behaves like O(1), because generally speaking your expiring timers are at the front of the list or the back of the list, depending on how you implement it. So you don't have to do that many evaluations; you don't need to scan the entire length of the list. If it's unordered, you get really fast insertions, but then you have to look at more entries to decide whether to act on each one. Does that make sense? Yeah, okay, all right, cool. So let's talk about Erlang now. Erlang has been around for a super long time; the first version of Erlang was created in 1986. Timer facilities were added, I think, around release five, something like that, so not right out of the gate, but pretty quickly after. About six years ago, the core team for Erlang decided they were going to completely overhaul the time system in Erlang. So Erlang has this primitive called now, which is a function that returns this terrible time tuple. It has this concept called megaseconds: megaseconds are the number of seconds since the Unix epoch divided by one million. So you take that first number, multiply it by a million, add the second number, the seconds, to it, and then you get an epoch time. And if you want microsecond resolution, you add the third element on the end. So you have to do this crazy math operation just to figure out what the system time is. The other property that now has as a function is that it provides a strictly monotonically increasing counter, so it's super useful for generating reference IDs, unique identifiers.
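That megaseconds arithmetic, flattening Erlang's {MegaSecs, Secs, MicroSecs} triple into a single epoch value, looks like this (a quick illustrative helper, not an OTP API):

```python
def now_to_microseconds(mega, secs, micros):
    """Flatten an Erlang now/0-style {MegaSecs, Secs, MicroSecs} triple
    into microseconds since the Unix epoch.

    MegaSecs is the epoch time in millions of seconds, so the caller has
    to do exactly this multiply-and-add dance to get a usable value.
    """
    return (mega * 1_000_000 + secs) * 1_000_000 + micros

def now_to_seconds(mega, secs, _micros):
    """Same triple, reduced to whole epoch seconds."""
    return mega * 1_000_000 + secs
```

For example, the triple (1234, 567890, 123456) represents epoch second 1234567890 plus 123456 microseconds, which is the kind of mental math now() forced on every caller.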
So as stuff comes in off the wire and you want to give it a name or a tag or distinguish it somehow, you call now and it gives you a value that is guaranteed to be unique and monotonically advancing. Even if the actual wall-clock time is not monotonic, you will get a monotonic value from now. But this is a problem, because to generate that monotonic value, the runtime has to coordinate across all the schedulers in the node. In Erlang, it's really easy to network a set of nodes together, and you get all sorts of fancy stuff like failover between runtimes and inter-node communication out of the box, which is really neat. But you also have to coordinate a lot of things, and when you call now, the runtime has to make sure the value it hands out is unique and strictly increasing, which requires synchronization, essentially a global lock on that value. You can see that in a very busy system, when you're calling now a lot, that's going to be a big problem; you really end up with a bottleneck on that call. Also, there's this weird tuple, right? You've got to deal with that somehow. So it's not great: the user interface is really not good, and it's expensive coordination. People really just wanted to get the system time, and this was the way they were doing it, but it's not cheap. It's not just asking the operating system for a time value; it's acquire the lock, generate the value, hand it back, and then do the math to interpret it. So that's terrible; we need to do better. After OTP 18, they basically overhauled all this stuff and made it more granular. You can ask for monotonic time explicitly, which is often what you really want. There's also a new function called system_time, which literally just gives you the system time.
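The distinction the OTP 18 overhaul made explicit, a monotonic clock for measuring intervals versus a wall clock for timestamps, exists in most languages. Here's the Python version of the same idea (an analogy, not Erlang code): `time.time()` can jump when NTP steps the clock, while `time.monotonic()` only ever moves forward.

```python
import time

def elapsed(work):
    """Measure how long `work()` takes using the monotonic clock.

    Using the wall clock here would be a bug: if NTP stepped the clock
    backwards mid-measurement, you could get a negative duration.
    """
    start = time.monotonic()
    work()
    return time.monotonic() - start
```

This is the same reason Erlang code should use erlang:monotonic_time() for durations and erlang:system_time() only when it actually needs a calendar timestamp.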
There's also another call, os:timestamp, which gives a similar value with less interpretation, taken more directly from the OS. And then the other thing Erlang does, which is kind of cool: we talked about how NTP is a little bit weird. When you have NTP across a fleet of servers, you're going to have that one weird server that's off in la-la land. It's 30 seconds late, a minute late, two minutes late, whatever; it's just the black sheep of the fleet. All the rest of them are keeping good time, or at least reasonable time, but this one is off, and no one knows what the problem with it is. Maybe it's a bad crystal, maybe it's a bad board; whatever it is, it doesn't matter. So the question is: when NTP hits that server and updates the clock, and the clock is suddenly either 30 seconds faster or 30 seconds slower, how does Erlang account for that? It's been going on its merry way, writing timestamps, using the system clock, probably. How should we deal with the skew? Before OTP 18, there was only one way to do that. After 18, there are several different strategies to choose from, and the default is the one that gradually either speeds up or slows down the internal clock so that it comes into alignment with the external time coming from the operating system. It doesn't happen in a snap; it takes time, whatever the adjustment period is. Usually it's something like four hours. So over that period, it will slowly stretch or compress the internal clock until it matches your system time, which is pretty cool. There are all kinds of other strategies; it's super configurable, almost too configurable. Sorry, is it slowing the clock, or is it going backwards? It's slowing the clock and letting the system time sync up gradually. It's going backwards? I suppose you could put it like that, but that's not how I think about it.
It's basically advancing more slowly. No, that was my question. Yeah, because going backwards causes all kinds of problems. If it's out of sync the other way, does it speed up? Yes: if the system time is ahead of where the runtime is, it will advance the internal clock slightly faster over the interval, whatever that is, so that it catches up. Eventually there's synchronization between the runtime and the OS layer, because in Erlang the runtime's clock is discrete from the OS clock, even though they have pretty similar functions in a lot of respects. So that's how Erlang deals with wall-clock time at both the operating-system level and the runtime level. Now I want to talk a little bit about Erlang's timer facility. Erlang has a built-in timer facility for programmers to use in their code, and here's one of the interfaces it offers. This function is called apply_after. If you look at the paper, you'll see a couple of interfaces defined there, and the thing I found interesting about Erlang's implementation is that this function is very similar to the one in the paper; in fact, it's almost identical. The only real difference is the specifics of how Erlang does function calls. Essentially, it takes a time, either an absolute time, if you're doing monotonic time, or a relative time. So you could say: 10 seconds from now, or 5,000 milliseconds from now, execute this function. In this case, the function is a module, a function name, and a list of arguments; that's what all the types underneath here are. And what it returns is a tuple that says OK if it's successful, along with a TRef.
A TRef is a timer reference. A reference in the Erlang runtime is guaranteed to be unique on a specific node, so it's a good distinguisher. Every single node generates its own unique references, and they're unique across the entire cluster: even if you have 10 Erlang nodes talking to one another, every single reference you generate will be unique. That's one of the guarantees of the runtime, and it does not require coordination, which is lovely. So anyway, it's this opaque term that gives you a unique distinguisher. And then Reason is a term; a term in Erlang could be literally anything, but in practice here it's an atom. If you don't know what an atom is, it's essentially a tag. If you're familiar with Ruby, Ruby has this concept of a symbol, a colon and some keyword; it's the same thing in Erlang, just called an atom. Hopefully that helps people understand the Erlang-ness. Then there's another function I wanted to talk about, which is send_after. So apply_after is: after a certain amount of time, apply this function. send_after is: after a certain amount of time, send this message to this destination. In Erlang, everything is a process: when you do computation, it's a process, and so you need a way to send messages between different processes. And maybe you need to do that on a schedule; we've already talked about all the reasons you might want to do that. Since Erlang is really, really suited to building distributed systems, it's a very common practice to send a message to another process after a certain amount of time, and this is the way to do that, which is really interesting. Now, there's a little note that you'll see down here on send_after/3 that says: see also the Timer Module section in the Efficiency Guide.
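Before looking at that note, here's the rough shape of these two calls in Python (an analogy using threads, with a Queue standing in for an Erlang process mailbox; the function names mirror the Erlang ones but this is not how BEAM implements them):

```python
import queue
import threading

def apply_after(ms, fun, args):
    """After `ms` milliseconds, call fun(*args).

    A rough analogue of timer:apply_after(Time, Module, Function, Args);
    the returned Timer object plays the role of the cancellable TRef.
    """
    t = threading.Timer(ms / 1000.0, fun, args)
    t.daemon = True
    t.start()
    return t

def send_after(ms, dest, msg):
    """After `ms` milliseconds, deliver `msg` to the queue `dest`.

    A rough analogue of timer:send_after(Time, Dest, Message), with a
    Queue playing the role of the destination process's mailbox.
    """
    return apply_after(ms, dest.put, [msg])
```

Note how send_after is just apply_after specialized to "deliver a message", which is also how the two relate conceptually in the timer module.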
You're like, okay, what does the Erlang Efficiency Guide say about timers? Well, this is what it says: creating timers using erlang:send_after and erlang:start_timer is more efficient than using the timer module interfaces I just showed you. Why is that? It's because the timer module has all this bookkeeping overhead that it incurs when you create timers through it. So if you create a lot of timers, it will slow down your processing, because of all that internal bookkeeping; we just talked about why that's a bad idea. But Erlang has these primitives, erlang:send_after and erlang:start_timer, that are implemented in C inside the runtime, so they're super fast. Whereas the timer module is not written in C; it's Erlang code, compiled code running on top of the C runtime. So it's not as efficient, plus it has all that bookkeeping overhead. The net-net here is that when you're scheduling messages or timers in Erlang, these are the primitives you should prefer. If you only create a small number of timers, using timer is not bad, but good Erlang style is to use the erlang module instead of the timer module. So why does the timer module exist at all? Historical reasons, number one; but number two, there are other functions in the timer module that are super useful. A couple of them are for tracking execution time. There's a function in there called tc, which takes a function and measures how long it takes to execute. It returns the elapsed time in microseconds along with the result of the execution: I ran this function, it took this long, and here's the result. That's what it gives you back.
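That tc shape, elapsed time plus result in one tuple, is easy to mimic (a Python sketch of the idea, not the timer module itself):

```python
import time

def tc(fun, *args):
    """Time a function call, returning (elapsed_microseconds, result).

    Mirrors the shape of Erlang's timer:tc: run the function once,
    measure with a monotonic high-resolution clock, hand back both
    how long it took and what it returned.
    """
    start = time.perf_counter()
    result = fun(*args)
    elapsed_us = int((time.perf_counter() - start) * 1_000_000)
    return elapsed_us, result
```

Usage: `tc(sum, [1, 2, 3])` gives back a tuple whose second element is 6 and whose first is however many microseconds the call took.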
That is super useful, and it does not go through the timer process, so there's no bookkeeping and no penalty for using it. Another reason to keep the module around is doing math with time. For example, you want to convert an hour into milliseconds: you can do that with this module. Or you want the number of milliseconds in five minutes, which is something that happens all the time: after five minutes, send this message, or give up, fail, all that sort of stuff. You don't want to actually do the math to figure out how long five minutes is in milliseconds; you can just say timer:minutes(5), and it returns the number of milliseconds for you. So there are good reasons to have it around, but using it for actual timers is not a good idea, which is very ironic, but it's something about Erlang that people should know. All right, let's go back to my little diagram again. I know you've all seen this before. What I'm going to show you now is some of the source code for Erlang's timer wheels, the actual C implementation that they've chosen, and I thought it would be interesting to talk through it. I just want to make sure everyone has this diagram in their head as we go through the code. So, Erlang actually has two timer wheels. One of the basic questions you might have is: how big is Erlang's timer wheel? Well, that depends on how much memory we decide to let Erlang have, but in the general case, it gets 14 bits. It has this idea of a "soon" timer wheel and a "later" timer wheel, and there's also an "at once" slot. One of the things you can do is use a timer but say "after zero," which means do this right now. It's kind of a weird thing, but that also actually goes into the timer structure; it just gets executed immediately by the scheduler.
So Erlang's scheduler will dispatch that job to a processor immediately, right? It gets computed right away. But the soon wheel and the later wheel both get the same size. If you're running a debug build, it's 10 bits, because it's small and easy to track. If it's a small-memory build, you get 12. And the default is 14. With 14 bits, the soon wheel covers about 16 seconds, right? From now until 16 seconds ahead. And the later wheel covers about 37 hours from now — 37 hours, 16 minutes, whatever it is. So that's something to keep in mind as we go through the next piece of code here. And then this is the actual structure for the Erlang timer wheel. I know it's a bit of an eye chart, but the thing I thought was really interesting, that you might want to see, is that at the top there's the "at once" slot, which I mentioned, then the soon wheel, and then the later wheel. So they're all kind of smushed together in Erlang's implementation, which is pretty fascinating. I think that's really cool. One of the other things I really wanted to call out is that there's a lot of interesting bookkeeping down at the bottom, where it tracks things like the next timeout time: are there empty buckets between now and the next time I'm going to look at this thing? Can I just skip over them? Do I even need to look at them? It can keep track of stuff like that — I don't even need to evaluate my timer wheel until I get to that time, right? So I can just ignore it. And then there are some monotonic fields here, right? Internally, the Erlang runtime keeps monotonic time. So where does the monotonic time come from? Does anyone have a guess? A wall clock? Yeah, it comes from a clock, but what clock? It comes from the computer's clock, right?
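A quick back-of-the-envelope for those spans — assuming, as I understand the default ERTS build, 14-bit wheels, 1 ms soon-wheel ticks, and later-wheel slots that each cover 2^13 ms (the exact constants may differ between OTP releases):

```erlang
SoonSlots = 1 bsl 14,                         %% 16384 slots
SoonSpanMs = SoonSlots * 1,                   %% ~16.4 s of 1 ms ticks
LaterSlotMs = 1 bsl 13,                       %% each later slot ~8.2 s
LaterSpanHrs = SoonSlots * LaterSlotMs / (1000 * 3600).
%% ~37.28 hours -- the "37 hours and change" figure above
```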
So in the Linux kernel specifically — I don't know about other systems — there's a clock you read through a system call, called CLOCK_MONOTONIC_RAW, okay? And I think it's hilarious: there are all these interfaces for getting monotonic time in the Linux kernel, because many of them have had bugs in them. In fact, there are some serious WTFs that I'm going to show you in a couple of slides, where monotonic time has actually gone backwards because of bugs in the kernel. So when you're writing a runtime like Erlang, or Java, or something else, you kind of have to deal with all these weird WTFs that you get because of the implementation of the operating system, right? They're not bugs in your code; they're bugs in someone else's code. And I know we've all worked around that in our lives as developers. So I just think it's interesting, and kind of funny, that even deep down in the guts of very complex runtimes there's all this code to deal with bizarreness in the operating system. All right, good. So I want to see if I'm understanding correctly: the wheel is advancing according to the amount of memory that you've allocated? Well, the wheel size itself — let's go back to this. The wheel size is based on those bits, right? That's how big this wheel is going to be. Okay, how many bits? How many little blocks? Correct. So as a timer comes in — say it's a 14-bit wheel, so it's a power of two, two to the fourteenth, 16,384 slots — we're going to hash it, get a bucket location, and then see if there's something there for us to deal with. And in that data structure I just showed you, we keep track of which buckets are full. If there's an empty one, we just skip it. We're not even going to look at it.
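For completeness, here's how the runtime's monotonic clock looks from the Erlang side (a small sketch; note that the absolute values are allowed to be negative — only differences between readings are meaningful):

```erlang
%% erlang:monotonic_time/1 reads the runtime's own monotonic clock,
%% which it derives from the OS clock but sanity-checks against the
%% kinds of kernel oddities described above:
T0 = erlang:monotonic_time(millisecond),
timer:sleep(50),
T1 = erlang:monotonic_time(millisecond),
Elapsed = T1 - T0.  %% >= 50, regardless of wall-clock adjustments
```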
Well, the next time we evaluate this thing, we'll check whether there's anything in it, right? We're not even going to go look at that bucket, because there's nothing there to look at. So there are all kinds of little shortcuts and implementation hacks they've added here to really improve the efficiency of computing these things. All right, so I promised you some of the WTFs. Here's one; this is a kernel problem. This came straight from the code around the system call that reads monotonic time: depending on what the Linux kernel does, the value can actually be negative, which is obviously not that great. So there's one WTF, which is always kind of funny. And then here's another one, which I like. What this does is return the maximum drift of monotonic time. So even monotonic time itself in the operating system can drift, right? It itself is not necessarily strictly monotonic: it can either repeat values or advance by more than one, right? Which is still monotone, but it's not intuitive. That's how it actually behaves. So even though you're asking for monotonic time, you may not actually get strictly monotonic time, which is fascinating. All right, and that's all I have for you tonight. I hope you enjoyed the paper. I love this paper; it's one of my favorites. I think the implementation in the Erlang runtime system is really cool. I think Erlang is a really cool runtime. I'm happy to talk about this stuff, any Erlang stuff, maybe provocative statements, whatever. So anyway, thank you for your attention. I appreciate it, and I'm happy to answer your questions. On the questions: there was a comment from Proctor — backwards compatibility is something that the Erlang/BEAM developers keep in mind too. Maybe Proctor can unmute himself and ask the question. Well, it was just my comment; there isn't really a question.
But if you want to unmute, Proctor, let's do it. Someone asked why the timer module was still around, and my comment was that the BEAM developers don't like removing things and breaking older releases, so they try to keep them. And I'm sure Jay can speak to that more. They try to keep parity: if you're running on 17 or 18, they try to keep as much as possible working on 26. So it's not like, hey, this thing's broken, don't use it anymore — they leave it in there so old code can keep running. So in general, that is super true. It's not strictly true, though, because some of the things they did take away — the replacements are honestly better, but dealing with that transition point was very annoying as a programmer. But yes, in general, the Erlang developers, the core team, are super sensitive about removing functionality that's been in the runtime for a long time, even if it's objectively bad, because they have clients and users. One of the ways Ericsson makes money on Erlang is developing custom distributions of it for clients. Those clients build code on top of it, and they have a certain expectation that this code is going to live in Erlang literally forever — and that expectation is generally met every single time. I mean, there's stuff in there that I have no idea what it's for, but it's been in there. There's still a CORBA object broker in there from the 2000s, when CORBA was super hot, the big, nice thing or whatever. Who uses that anymore? I don't know — not me — but I'm sure someone out there in the world is. So, yeah, thanks. That's a good point. Questions in person? So I'm trying to think of how to ask this question. You talked about how much you like this implementation, but I'm still unclear as to why.
And I say this as someone who knows that time is perplexing and has fought with it, but I'm not necessarily able to recognize a good versus a bad implementation unless there's a comment saying so. Okay, well, let's back up. Think about a timer scheduler that people find somewhat unreliable — like the Windows event system, just to pick one random example. I think it's gotten a lot better, but there was a time when it was very unreliable in terms of executing things when you wanted them done. So that's an example of a timer that is not reliable and that you can't have a lot of confidence in. One of the reasons I like the Erlang system is that it's very granular: you can go down to a millisecond, and it's almost dead on, every single time. Erlang was developed to be this soft real-time system, and one of the things they've worked really hard on is making sure the timing system inside it is very responsive. And as I mentioned before, when you build distributed systems, especially things that process in near real time, you want your timing system to be very accurate and reliable. And Erlang's system has been battle-tested for a long time. That's one of the benefits of using an older runtime like Java or Erlang: it's been through all these battles, right? They've wrung all the bugs out of it. And I find that very attractive in a programming environment. Other questions? There are no questions here. I usually have some. I mean, the only thing I'll say is just a comment: I hate time. Well, as I said, it is one of the armpits of computing. So, yeah — time in general, all time in computing. Oh, okay. Well, I think we should just give up this concept of local time zones. We should all just be on one standard time and be done with it.
Just get used to it and get over it. I mean, it might make sense to do that, especially if you have space travel. You don't want to have to care about Martian time or Jovian time or whatever; you just have one time regardless of where you are. Yeah, that's true. Okay, so this is a random aside. There is an academic conference called Planetary-Scale Distributed Systems, and one of the things they talk about is how you deal with clock skew when you have light-seconds, or even light-minutes, of transmission distance between you and a spacecraft, or you and a robot, or something like that. Right? How do you replicate state across that kind of distance or barrier? And the ways they try to address those problems are pretty fascinating. So if you really want to get your nerd on... Wasn't there a paper about that — the Byzantine laser thing or something? Yeah, well, one of the things they want to do is start using lasers for transmission instead of radio waves. Lasers aren't faster — both travel at essentially the speed of light — but a laser beam is more coherent and disperses less than a radio signal, which sometimes drops out. That's why it's the better solution. Anyway, lasers — but yes. The most enlightening thing about all this is that it's not my fault. Another excuse to pull out when your clock shows it's out of order. That's right — tell your boss: it's not my fault, time is garbage, and I don't know how this works. So obviously this is a primary concern for Erlang. Yeah, fundamental. How difficult is it to implement this in other languages? Well, what are the requirements? Basically, you need a hash table, right? It can be super simple. It's literally, it seems like, yeah — you need a hash table with a certain number of buckets, and then you have a linked list that hangs off of each one, or a tree, depending on the implementation.
That's an implementation detail — how many timer jobs are you going to have? Generally it's cheaper to just have a list, and it can even be unordered if there aren't a lot of timers. And you can decide what "a lot" means; you can even make that dynamic. But yeah, it's a very, very straightforward algorithm. The wheel has a certain number of slots, and the timers hang off of each one of those buckets. And you can even stack wheels on top of each other, right? In the paper they talk about using a wheel for days, a wheel for hours, a wheel for seconds, and every time a lower wheel goes all the way around, you advance the wheel above it by one. So for example, if you had 60 slots for seconds, 60 slots for minutes, 24 slots for hours, and 365 slots for days, then every time the seconds wheel completes a revolution, you advance the minutes wheel by one, right? And so on and so on. So that's another way you can store things even more efficiently, hierarchically, at low cost — low lookup cost, low implementation cost. It sounds like a use case for lenses. I mean, are we going to veer into abstraction-land now? No, that's good stuff, that's good stuff. So I think I have a question. I haven't read the paper — do you know what those divisions in the donut in the middle are? They're basically just little slots — distinguished locations. What we're doing is taking the current time and slicing it up into these different regions, right? So think about a timer: we're going to have something that expires at time 10. We figure out where to put that job on this wheel by taking the time when it expires, hashing it onto the wheel, and sticking it in the right bucket, right?
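To make the "hash the expiry time into a bucket" idea concrete, here's a toy single-level wheel — a made-up module, nothing like the real ERTS code, just the bare algorithm of hashing by modulo and firing whatever is due as the wheel advances:

```erlang
%% Toy timer wheel: a map of Slot => [{ExpiryTick, Msg}], a current
%% tick counter, and a tick/1 function that fires whatever is due in
%% the slot the wheel has just advanced to.
-module(toy_wheel).
-export([new/1, add/3, tick/1]).

new(Slots) ->
    #{slots => Slots, now => 0, buckets => #{}}.

%% Schedule Msg to fire TicksFromNow ticks in the future. The bucket
%% is just the expiry tick taken modulo the wheel size.
add(TicksFromNow, Msg, W = #{slots := Slots, now := Now, buckets := B}) ->
    Expiry = Now + TicksFromNow,
    Slot = Expiry rem Slots,
    Entry = {Expiry, Msg},
    W#{buckets := maps:update_with(Slot,
                                   fun(L) -> [Entry | L] end,
                                   [Entry], B)}.

%% Advance one tick; return {FiredMsgs, NewWheel}. Entries that hashed
%% to this slot but expire a full revolution (or more) later stay put.
tick(W = #{slots := Slots, now := Now0, buckets := B}) ->
    Now = Now0 + 1,
    Slot = Now rem Slots,
    {Due, Rest} = lists:partition(fun({E, _}) -> E =:= Now end,
                                  maps:get(Slot, B, [])),
    {[Msg || {_, Msg} <- Due],
     W#{now := Now, buckets := B#{Slot => Rest}}}.
```

For instance, adding `ping` three ticks ahead on an 8-slot wheel fires nothing on the first two ticks and `[ping]` on the third.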
OK, and once it's on the wheel, how does the wheel advance? It depends on the implementation, but the simplest one just advances like a clock: every single clock tick, you slide around to a new position. Then you look at the little green boxes and say, okay, I'm in bucket three — is there anything in bucket three that expires at time "now"? And if there is, you execute it. Whatever that is — you send a message, you execute a function, whatever it might be, right? Right. Is that time from the wall clock? Sorry, I don't understand. Is there a relation between the wall clock, or the system clock, and this? Yeah, it's generally keyed off of the system time, but the Erlang runtime keeps its own clock, separate from the system clock. It powers the wheel using its own internal timer. Oh, okay. Yeah, that's the big advantage of Erlang here, right? So does it get the ticks from the system? Yeah, the underlying ticks usually come from a hardware source, right? There's literally a crystal on the motherboard sending hardware interrupts — at, say, 1,000 hertz: here's your tick, right? So that does come from the operating system, but Erlang itself does all the bookkeeping to keep that strictly on time. So even though the underlying operating system is unreliable, and the hardware clock itself might be terrible, and NTP is out there doing its little chaos-monkey thing, the Erlang runtime is going to keep trying to do what it does in a way that's predictable. At the operating-system level, what happens if you allow NTP in there? Well, that's something Erlang can't control, right? There's no way to predict how that interaction is going to happen. But hopefully the Erlang runtime gets scheduled adequately, and it's able to do its processing in the end. OK.
But yeah, no guarantees, right? There are no guarantees for any of this. There's no silver bullet. I mean, there's failure everywhere. It's failure all the way down — turtles all the way down, and they're all very soft turtles. That's my favorite thing to say. Okay, just a quick one: well, you can still read it — it's super short, like six pages. I feel like it's my fault for not reminding people to read the paper. Well, that's right, it's all because of the clock. Yeah. What was the name of that space conference? It's called Planetary-Scale Distributed Systems. Oh, thank you. We have one just off topic here, also from Proctor: he wants to know if you're still doing your barbecue field trips, and if so, when is your next Dallas-Fort Worth visit? I mean, Proctor, I hate to tell you this, but I'm basically pescetarian these days. I'm almost vegan, actually — vegan except for brisket, pretty much. I don't know, maybe that makes me a brisketarian. Okay, sorry, that's not really so. I mean, look, I live in Texas; I've got to draw the line somewhere. Also, brisket is just freaking amazing. So we're going to talk more about that in a minute, but I'm going to suggest that we stop the recording really soon. So if there are any other questions or comments that you want to get onto the recording, do it now. Otherwise, we're going to say thank you very much, and we'll continue the Q&A and discussion offline. And there'll be offline shitposting, which you're going to miss. Yeah, exactly. Do I stop recording now? Yeah, stop the recording.