Hello there. This is the Sean and Sean podcast, number 002, and... Whoa, whoa, whoa. Two zeros in front of it? You're really planning long-term here. Exactly. I'm sure we're going to be doing this for decades. You just booked us for eight years. So the podcast itself is not going to start for a little while, because Handmade Hero is streaming right now, and we decided we'd like to let the Handmade Hero community come and see us. So what are we going to do? That is a good question. I'm going to open up... there's an @-mention of me, so let me see what it says. "I'm rendering from one framebuffer object to a second, but the first texture is showing only white when it's rendered on the second. Any ideas what I'm missing?" I don't know. Did you check that your framebuffer status was complete? Are you binding the texture to the correct slot? Are you using the correct sampler? Are they all GL_TEXTURE_2D? I guess since it's an FBO, it's OpenGL. Did you use Nsight or gDEBugger or RenderDoc to make sure there's actually data in that texture, or in that framebuffer object, like the texture attached to that framebuffer object? Are you binding the actual GL texture ID? When you do glGenTextures, are you binding that to the sampler uniform, and not the framebuffer ID? Because you have to bind the texture that you attached to the framebuffer. Those are the things I would try. It's turned into a debugging pre-podcast, I suppose. Do you have any other things you would try for that, Sean? I'm too busy doing technical stuff to even try to think about it. Gotcha. How long have you guys been growing your hair? I haven't cut my hair in probably... I mean, I trim it a little bit every now and then, but I haven't really cut it.
I've had long hair my whole life, even as a kid and a teenager, but I haven't cut it in maybe eight, nine, ten years, something like that. I had it cut short once, regretted it immediately, and haven't cut it since. I also haven't washed it in two years. I mean, I haven't used shampoo on it in two years. I like throwing that little tidbit out there. Maybe not quite two years; it'll be two years this October, so a year and a half. Oh, no, wait. My wife will be three years this October; I'm two years this March. Okay, so yeah, I have not washed my hair in two years. Sorry about covering up that window there. How do you wash it? With hot water. Oh, I've been deleted. Oh, I'm back. I was thinking I'm not capturing the desktop, so I can just throw everything willy-nilly on the desktop, but that's not actually true; if I put it in front of the window, it does show it. Dry shampoo, every now and then. I used to use cocoa powder and baking soda as a dry shampoo, because I refused to buy anything, and I had to do that for a little while, but honestly, I haven't done that in... I can't even remember, many, many months, and I don't seem to have any problems. Yeah, and the last time I got my hair cut was 30 years ago. Not even a trim? No, I don't even get it trimmed. It just grows out to the length it is. How long have you guys been programming? 20 years for me? Good God, how old am I? 37 years, something like that. I've only been doing it professionally for... oh, I guess I've been doing it for 23 years, and professionally for 19. I did some volunteer programming at my school when I was 16 and probably got my first job doing it when I was 20, so that would be 30-plus years. How long would it take you to make something better than Skype? I don't even... That's not really a trivial problem. Yeah, I mean, how much of Skype do you want is part of the complexity of that.
You know, the thing that does the video streams for calls... if you can figure out what codec you can use, it's probably not that bad, but all of the extra wrapper stuff around that is just a lot of code. Internet audio... there are actually good libraries for that, so the audio part of it would be fine. Yeah, it's the video streaming that would be... Video is always a mess; the video codec stuff is always a mess unless you can just grab it as source and plug it in, basically, which is not very common for video. Beginner's question: what's the deal with avoiding #include <windows.h>? Compilation speed? Yeah, well, I think that's probably less true now. Maybe it is; for me, running VC6, it would not be a big deal. The other issue is just that it pollutes your namespace with a bunch of #defines. I don't even know what anymore; I stopped using it so long ago. min and max, or something like that. POINT is the big one. And stuff in all caps. I don't tend to use the all-caps names, though, so that stuff is less of an issue. But as a header-file library, you just don't want to pollute people's namespace at all, because they need to be able to just drop it in without any conflicts. So it's valuable there, and it's kind of a heroic effort in some sense, the amount of work that my libraries have to do to avoid it, but it's generally worth it. That comes back to the whole philosophy of single-file libraries, which is something we're going to talk about in the actual thing. So I wrote my own platform abstraction library. I guess you're writing one now. So I wrote my own a long time ago, and I made it a single file a couple of years ago, and I removed windows.h and just wrote my own that has everything that I need in it. I just looked now: it's 5,237 lines for my version of windows.h, and it cut my compile time on my 80,000-line game down by a third. Is that 5,200 lines your platform abstraction, or just your windows.h?
That seems crazy, that windows.h is that much bigger than that, but... Well, I think the problem with windows.h is that it includes a whole bunch of shit. Well, and you can turn on the lean and mean, WIN32_LEAN_AND_MEAN. Yeah, even with lean and mean and extra lean, it saved a third of my compile time. And also, it does really dumb things: when it wants to pragma-pack something, it includes a header file that sets the pack to four, and then includes one that pops the pack afterwards. I don't know how much that gets optimized, because I don't know how the compiler parses the header file and combines everything, but it was including those two header files that do a push-pack and a pop-pack like a thousand times. It did a lot of stupid shit like that, so I don't know how much that stuff slows it down. I bet you that wasn't even stupid, in the sense that it was some way of getting some kind of compiler independence or something like that, because you can't #define a #pragma. There's no way to make a #define that turns into multiple different #pragmas. So if for some reason different compilers need different pragmas to do something, and I'd never even thought of it before you said it, but it seems obvious now, you could do a crazy #include to get the correct pragma. And obviously in practice it's turned out that all compilers use the same thing, but you could imagine that that was why. Once upon a time, that may have been necessary. I mean, some of that stuff could have come from Win16 even, right? I don't know. So yeah, that would make sense. Why the hate for Git?
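As an aside, the push-pack and pop-pack headers described above behave roughly like this. A minimal sketch; the struct names are made up, and the specific sizes assume a typical x86-64 ABI where a double is naturally 8-byte aligned:

```c
#include <assert.h>
#include <stddef.h>

/* Default alignment: the double forces 8-byte alignment, so the char
   is followed by 7 bytes of padding. */
typedef struct { char c; double d; } Natural;

#pragma pack(push, 4)   /* roughly what #include <pshpack4.h> does */
typedef struct { char c; double d; } Packed4;
#pragma pack(pop)       /* roughly what #include <poppack.h> does  */

#pragma pack(push, 1)   /* fully packed: no padding at all */
typedef struct { char c; double d; } Packed1;
#pragma pack(pop)
```

On that ABI, sizeof(Natural) is 16, sizeof(Packed4) is 12, and sizeof(Packed1) is 9. Wrapping the pragmas in include files, rather than emitting them directly, is the compiler-independence trick discussed above: you can swap the header per compiler, but you can't make a #define expand to a #pragma.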
Because in most cases, other than distributed development where lots of people are publishing an official version, Git offers nothing over SVN, and it takes away a lot of things from SVN that I think are important, like sequential numbering and the ability to work with large binary data files easily. It's like... Git has a purpose, but what I hate is the blind devotion to it: "oh, you should use Git." I hate getting people on stream telling me I should use Git because I prefer SVN. No, I shouldn't use Git, and what I hate is the blind devotion to it. It's simply not better than SVN for most use cases, in my opinion. I don't know, Sean, you don't use either... Well, I guess you use Git at work. I use Git for the STB stuff, just because that lets me use GitHub for issues and pull requests. Before I put it all on GitHub, I'd actually reached the point where I think I'd gotten the same bug report about stb_vorbis maybe 20 times over the course of two years, and I just hadn't fixed it, because it was a minor thing, and nobody knew I had already gotten it. So I decided to put it on GitHub just to have a public issues list, basically. Do you like it more than SVN? It's okay. The Git branching is nice. There are a couple of things that Git does better than SVN out of the box; the default diff that Git gives you is actually a nicer diff, just the way it displays on-screen, but that's a totally minor thing, and it's not even internal to Git. It's already hooked up to five different diffs you can pick from, or whatever, that other people wrote. As far as substantial stuff, I don't see any advantage, for the same reason you're saying. If you end up using everything with a central repository anyway, it's a little goofy. You can do the whole thing of...
The local-repository thing, making local progress without going to the global thing, is nice, potentially. Being able to be offline and have your full version history is nice, but it's not something that I ever need; I don't tend to work offline a lot. There's that one thing, and I never remember what it's called in Git, but in Perforce it was called shelving. Yeah, Perforce added shelving because stuff like Git was coming along and had this kind of functionality, and people were pressuring them to add it. Really? I didn't know that. I thought Perforce had it first. That was the impression I got of the way things happened over time. I'm not sure I ever heard them admit it publicly, but that's sure what it seemed like happened. Oh, I thought Perforce had it first. Git does now have a thing, as someone just said in chat, git stash, that is the direct equivalent of the Perforce shelf. Right, right. That's the thing. That's the one feature Git has that SVN doesn't have that I actually think is useful: git stash, whatever it's called. Git stash. Okay. Yeah, that feature is nice. That's the one thing Git has that SVN doesn't where I'm like, oh, I wish I had that. Git branches are basically as convenient as git stash, though. If you want to do feature-branch-style development, where you're working on a feature and it's not done, and you decide to work on a different feature, you go back to the initial state and you branch off of that. That stuff is all super easy once you're in it. Git stash is the simplified way of doing that, and it's barely any simpler than just branching. I don't find it to be particularly useful. I don't find branching to be all that useful anyway. It is something I use a little bit now with the STB stuff. I probably won't use it at work.
But yeah, any advantage that you're getting from Git there, you're paying for so much in the non-intuitiveness of the UI that it's just never worth it. Well, I haven't tried TortoiseGit, but I always... Yeah, I'm only using the command-line tools. I have no idea what the... Well, I use TortoiseSVN, and before TortoiseSVN I used TortoiseCVS, and those UIs make everything good. The best fucking UI I've ever used in my entire life is the Tortoise one. So maybe TortoiseGit is even better than TortoiseSVN, I don't know. Even then, it's still going to have the problem that you have to understand how Git works, because as soon as anything goes wrong, you're going to have to go to the command-line tool. There's no way the UI gets you to everything you need, and that's just ridiculous. I mean... Well, part of it is that Git is so powerful that you ask it to do things you maybe wouldn't ask another version control system to do, but yeah, I don't know. It's a giant mess and I don't really want to talk about it. So let's say that Handmade Hero is done. Handmade Hero is apparently in its Q&A now. So let's go ahead and start for real. We have a big list of stuff to talk about, which includes a bunch of stuff from the original podcast that we didn't get to, because by my accounting we got through four of the 20 topics or whatever that we had for the previous podcast. And then we took live questions, and it's possible we talked about some of this stuff when we did the live questions; I didn't double-check. So like the single-header topic: it's possible we actually talked about it last time and I just don't remember, because somebody may have asked about it in the Q&A. So what do you want to do? You want to just go from the top? We can go from the top. I just don't remember; it's possible we talked about it. Well, I guess we'll find out. We'll remember. Yeah, maybe.
Time says eight minutes till the end. Do you want to wait eight more minutes? Well, we've already waited more than 14. Well, fuck that then, I'm done waiting. We've been streaming for 17 minutes. 17 minutes we talked about nothing. Well, we didn't talk about nothing; we talked about a bunch of dumb stuff. Yes. Off to a good start. The first question, the first topic. I think I was doing them last time. I think you should keep doing them, because I don't have them available. The topic they wanted us to address was the virtues of single-header libraries: using them in larger projects, how they simplify the build system, et cetera. We did talk about single-file libraries, but I'm all for talking more about them. My only concern is that I'm going to say the same things I said last time, because I honestly don't remember what we said last time. Right. Well, I just think when you have single-file libraries, you can use them in multiple projects that are somewhat dependent upon each other and not have to worry about anything, ever. You don't have to worry about keeping object files up to date with header files, and if you make a change somewhere, you don't have to make sure that you're linking the same object files elsewhere. On every axis, it's way easier. It takes less time to compile, if you don't use templates and stuff like that. It takes less time to sync everything together. You don't have to do anything when you create a new project; you just set your include path and you're done. It's just way better in every single way. On the stream that I did the other day about the platform abstraction thing, I brought up this concept of local maxima, which I don't think I talked about in our previous chat, but it's a thing I come back to in a lot of different areas. The idea is that sometimes certain features work together well.
And once you've made certain choices, certain other choices go along really well with them, because it gets you to a local maximum, where maybe those choices aren't the ones you would make if some other choices were in play. I know that's all hand-wavy and abstract, but it's a pattern I keep seeing, and I think it's a useful framework when thinking about some of these things. So in the single-file library case, one of the issues is that on Windows there's no standard path for your development environment the way there is in Linux or Unix, so there's no place for libraries to live. So you have this problem of setting up your build environment to find your libraries. And of course you can do that once and you're done, and you upgrade machines or OSes every two years, so you have to do it every two years; it's not necessarily that big a deal, but it's a little bit of friction. And then the thing I think you're talking about, that I've run into, is the runtime libraries: you build static libraries and they depend on a certain version of the C runtime library, and then if your app isn't using the same version of the C runtime library, you have a conflict. And it's not even just versions; there's this whole thing, which I assume they must have moved away from by now, of single-threaded versus multi-threaded libraries. Back in the old days, people didn't always write multi-threaded programs, and it was more efficient to use the single-threaded libraries. And then you still have static versus dynamic, the static library or the dynamic DLL. So you could get into circumstances where you couldn't build your app because a library was dependent on the wrong thing. The single-file library helps with some of that, and the no-dependencies rule of the single-file library helps with some of that.
If your single-file library had dependencies, it wouldn't fix that problem; you might still have issues there. And by doing the single-file library as a source library that you build into your app, it avoids the C runtime problem, because it just gets whichever C runtime your app is getting, instead of being built separately. So there are a bunch of things like that that all work together to reduce friction. Yeah, I mean, the specific things this question asked about, I don't even know what to say, because they seem kind of obvious. How do they simplify the build system? Well, you don't ever mention them in the build system, right? Using them in larger projects, I don't know; how large a project do you mean? I write 20,000-line programs at most; I'm not using them in really big projects. This actually ties into our pre-stream discussion about Git and what I was going to talk about, which is the dev-tool fetishism that people have, and I had this too up until 5 or 10 years ago, where you want this really nice set of tools. There's a whole bunch of tools that do things: automated testing tools, deployment tools, build tools, source control tools, all of this stuff, and you want to get this really nice system with all these complicated tools in it. What I found, and I think everybody who programs for a reasonable amount of time realizes this, is that the less of that shit you have, the better. You want things to be as simple as possible at every location. You want as few dependencies as possible, as few programs, as few systems, as few files, as few directories. If you could have everything be just one file, and that file compiled itself and you didn't have to worry about anything, that would be the absolute ideal.
So I think single-file libraries are a step in that direction, and Git, for example, is a step not in that direction. Sure. Is there some dimension of this we haven't talked about? I don't think so; I think it's pretty straightforward. Even Java tried to do this with their JAR files: you basically zip everything up and rename it to a .jar, and then that's your library, right? It doesn't work, because Java's terrible, but that was the idea: everything boils down to being one file. I just think it's simplicity. I don't know if there's much more to say about it. If you don't think that's the right way to go and you have a better alternative, I'd like to hear it, because I can't even envision a better alternative. There's the dumb stuff where you have to have the implementation section, and then some stuff you want extern, some stuff you want static, blah, blah, blah. That stuff sort of sucks, but that's just C not having really good facilities for the single-file thing. Before I was serious about C programming, I was using Turbo Pascal, and Turbo Pascal 4, I think it was, introduced units, which was a module system with explicit interface and implementation sections. It was just one file, and it basically generated the equivalent of .h files behind the scenes so that you could get separate compilation. It's not like the STB libraries were inspired by that; it's just that that worked. It was the right thing, in some sense. But that was back in the day when I wasn't building for multiple platforms or doing all the other stuff, and it might turn out that that scheme wouldn't really be that good now. Although, I don't know, maybe that's how Go works? I don't know how Go works. It's definitely how OCaml works, but OCaml isn't a very interesting case.
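The interface/implementation split just described, with declarations always visible and the function bodies compiled into exactly one translation unit, is the STB-style trick. A minimal sketch, with made-up library and function names:

```c
/* In a real project, everything from the include guard down lives in one
   header, say mini_lib.h. Every file that uses it just #includes it; exactly
   one .c file defines MINI_LIB_IMPLEMENTATION first, so the function bodies
   get compiled exactly once. For this sketch the header is inlined below. */

#define MINI_LIB_IMPLEMENTATION   /* this file owns the implementation */

/* ---- header section: always visible to every includer ---- */
#ifndef MINI_LIB_H
#define MINI_LIB_H
int mini_clamp(int x, int lo, int hi);
#endif /* MINI_LIB_H */

/* ---- implementation section: compiled in exactly one place ---- */
#ifdef MINI_LIB_IMPLEMENTATION
int mini_clamp(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
#endif /* MINI_LIB_IMPLEMENTATION */
```

No build-system entry, no .lib, no link step: you set your include path and you're done, which is exactly the point being made above.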
I don't really know if there's much more to say, but I think everything should be single-file. If I see a library on the Internet and it's not single-file, I'm turned off. I'm a not-invented-here person, and that's certainly part of it. When I started, nobody else was doing single-file libraries, so it's hard for me to judge how I would react to somebody else having a good single-file library; I assume I'd be fine with using it. The problem is that since I use VC6, I can't compile most people's libraries anyway. That was the motivation for this. One of the many motivations for this was the experience I had specifically with libpng. God love the people who did it; PNG was a godsend that got us away from the GIF patents, and it's a perfectly reasonable format. It was just that how they chose to package it up as a library was a nightmare on Windows. I went through FreeType, right? Before I found your library, I used FreeType. That's even worse than libpng. I've never tried to use the FreeType API, so I don't know what it's like. It's like a 5-megabyte download of source code. The thing about libpng was they did this really dumb thing: it has to use zlib internally, and could you provide a zlib equivalent? You'd have to go hack it. The interesting thing was, if you wanted to include the libpng header so you could do some PNG stuff, inside that include they include zlib.h. They made it part of their public interface that you had to be able to see zlib.h. If you tried to just build a static lib of libpng and have that .lib and that .h file, that wasn't actually enough; you also needed zlib.h to compile against it. It's just dumb, sloppy. It's the same with libcurl. I do use libcurl, because HTTP parsing is not exactly fun, but if you want to use SSL, it has like 10 dependencies: you need OpenSSL, which has 10 dependencies. It's brutal.
It's like, if somebody made a really good single-file SSL library with all the curl features, the world would be a better place. But we don't have that. Alright, so I think we should move on. Alright, no more single-file stuff unless there's a specific question. Okay, this one doesn't say single-file library, but it's ultra-related; our first two topics were single-file, and here's our third. "Would be interesting to hear about effective use of other people's code. What to look out for in a lib that will cost more time down the line than making only what you need." Than doing it yourself, I think, is what it means. Where's that on the list? Is that on the merged list? Yeah, it's the third one on the merged list. Oh, okay. Unless I... No, I'm looking at an old list. Oh, here's the merged list. I've got editor and physics simulation. No, no, that was an old topic. Oh, here it is, it's like sixth on this list. Oh, okay. Well, we got five topics better than that, then. Yeah, alright. The problem was I pasted that email into a text file, which left it scrolled to the bottom, and when I scrolled up I got to the top of the list and didn't realize I was looking back in the email history, early in the thread. Okay, there we go. "Sean, as far as I know, all your editor stuff is embedded into the game runtime. Reasons why you prefer that over out-of-proc tooling, out-of-game..." process, I guess. Yeah, but that's a weird way to put it: reasons why you prefer that over having separate editor tooling. Okay, well, you rated this topic highly to discuss as well, so I want to hear your opinion on this first. Well, I don't actually do very much stuff that needs that. The games I've actually shipped in the last eight years are all these little indie games that in fact did not have in-game editing at all.
Those games have little ASCII text files just embedded in the C code for the maps, and the static art was a bunch of Photoshop files, or PNGs actually, and everything else I've shipped is similar stuff like that. But back at Looking Glass, back in the 90s, I did in fact do everything editor-in-game, and everyone very much valued that. But I was a rendering programmer, so I don't have super deep insights into it; I'm willing to talk about what I remember about it, but... So hold on, I'm curious about something, I want to interject for a second. In the DOS days, you embedded all of your tools? Did you guys have high-end workstations or something? So Ultima Underworld, which was before my time, was a 320x200 DOS game on the 286; it wasn't even 32-bit protected mode, it was the extended-memory stuff. Fortunately, by the time I started at Looking Glass, we were protected-mode only, so I never had to deal with that crap. So it was 320x200. I don't know if you've seen Ultima Underworld; before Wolfenstein, it was fully texture-mapped. Well, you almost always turned off the texture mapping on the floor and the ceiling to get more speed, but it was potentially fully texture-mapped. It was in a little tiny window of the main screen, though. Yeah, and it ran at like four frames per second or something. And the editor for that was in-game and ran at 320x200; they had a five-pixel-tall font, or four-pixel-tall font, to do all of the editor text and stuff like that. I never saw it myself; I've seen screenshots of it. And then the games after I started were System Shock, and with Terra Nova we had a separate editor for the terrain editing, I believe; I don't remember what we did for the mission editing. That sucked, because it would have been really good to have a real-time terrain editor for Terra Nova. And then Thief...
So Thief and System Shock 2 were being developed in parallel and used the same engine, and we actually had it set up so that the editor was in-game but was also the same executable for both games. I think it used different DLLs for the game DLL or something like that for the two games, but other than that the editor was actually identical for both. You could just switch a config file and load up the other game in the other editor. It was totally nuts. That's pretty cool. But even in those days, you could see, even if you didn't study it or weren't there, the difference between Duke Nukem's editor, the Build editor, versus Doom: it was way better to make Duke levels than Doom levels just because the editor had that quick turnaround. They were doing the Doom editing on NeXT workstations, right? No, they were... So you're not really comparing... You're comparing what the end-users cobbled together rather than what... Yeah. But even then, their turnaround time must have been terrible too, unless they had a dedicated supercomputer at the time to compile the levels, which they didn't. Yeah. The difference between doing things out-of-game and in-game is just turnaround time, right? The reason why I spent three hours writing my own windows.h is because saving three seconds, or one second, per compile matters. Any time I can cut the time from typing a line of code to seeing the result change, from wanting something to happen to actually seeing it happen, that makes me a much happier person. The faster that is, the happier I am, and the happier I am, the faster I get work done, right? So tools that are in-game, like... There are some editing tools, like Photoshop, for example, that I use out of game, right?
But I constantly monitor the files to see if they've changed, and if I overwrite a file, the game just dynamically reloads it, right? I haven't quite written Photoshop yet, but if I ever do, I would put it in-game. The faster the turnaround, the better; it's always a win, no matter what, I have found, and I think that has to be universally true. I mean, there are some things that aren't exactly turnaround time, too. Like, one thing is, and maybe not everybody's editor supports this, I feel like I've used games where when you go from editor mode to game mode, and then you run around in game mode and go back to editor mode, it resets the state of the world back to what it was, because you don't want to accumulate that. But I've definitely made systems where you can drop back to editor mode and still have the game running live while you're in the editor, rather than it being super-modal. And one of the things that does is give you some debugging tools for free. Like, if in your editor you can click on a guy and it shows his coordinates, then in-game you can bring up the editor, still be in the game, click on the guy, and see his coordinates. You don't need a separate debugger to show a guy's coordinates; you can just use the editor as a sort of debugger. So there are some extra advantages on top of all the stuff you were talking about with workflow iteration time. And there's probably... Sorry, go ahead. No, you're... I was going to say that you don't have to keep two separate chunks of code working together. Usually, if you change your renderer, you don't have to go change the renderer in the editor; if you change your material system, you don't have to change it in the editor. It's all in one place.
And again, though, it depends how you're implementing it, because for some people, implementing the editor in the game may still mean that the editor is operating on a different version of the state and still has to transform it into the game state, at which point you are still kind of maintaining two versions of that stuff. And that might be reasonable; there might be circumstances where that is what makes sense to do. I think my tilemap editor library actually works that way: it keeps its own internal state and you turn it into the actual tile maps that you want. In part that's because it's a library, so it has to be reusable, and it was easier to abstract that away and not make the internal state be the same as the user's state, so the library wouldn't be tightly coupled to it; otherwise it would just be hard to make the library reusable. But certainly the ideal is for them to be the same and for you to get all that benefit. Certainly people have tried to attack that iteration time by basically making everything hot-loaded: your editor is a separate app, and either with inter-process communication or through files, you can do stuff in the editor and it immediately shows up in the game. And I would imagine that if you can do that and make it work, that's just as good. That might actually be even better; not crashing the game from the editor, or not crashing the editor from the game, is a positive. I'm not sure all the other stuff is a positive, though. Well, you work on one monitor, right? I have two monitors, yeah. Okay. When I was doing Dyad, I did a lot of color editing and a lot of graphics editing, so I would have the game just running, and then I'd go into Photoshop and I'd change things, because all of the colors are loaded from, like... Editor and game doesn't actually mean they have to use the same window; you can just run two windows out of the same process.
I mean, I've never done it that way, but you could. Right, right. I mean, I just mean, like, being able to run... You know, if they're separate processes too, like, there are... I don't know, I think there are advantages if they're separate... Crashing is huge, but... Crashing is huge. Um... I don't know how Windows handles large memory allocations now; before it was a problem, but now... Yeah, in 32-bit it would probably be a pretty big deal to make it separate, so they wouldn't conflict in their memory usage. That's probably true. Yeah, now it probably doesn't matter. Yeah, I don't know. Maybe... The crashing is a big one, plus if you want to, like, change the editor code you don't have to recompile the game, there's that advantage. I do think... You know what? Now that I think about it, I think it's probably better to have, like... the peak best thing is probably having them in separate processes, but the amount of work that that requires for the amount of gain is probably not worth it, because if you have them in the same program, versus having them as two separate things, the amount of work to get the two separate things to be as good as the same thing is... a lot of work, right? There's a ton of overhead of keeping two things running and making sure that when you change something in the editor it immediately shows up in the other process. All of that stuff, that's a lot of effort, and it's probably not worth doing, but if you do do it, maybe it's better. But I wouldn't do it, at least on my... I mean, if I had, like, a team of 200 people or whatever, if I was at Ubisoft just getting tax breaks and hiring thousands of people, maybe I would do that, but I don't know. What do you think? Do you think it's worth it? Now that I think about it, I think two processes is actually probably better. The whole having to serialize between the processes would be really annoying to me.
I'm not sure I'd be willing to deal with that, but... I don't think everyone would deal with it, I just think it's better. Yeah, that may be. I'm trying to think; there's got to be more to this topic than that. Oh, so Pear said in chat, or Pear implied in chat, that UE4 lets you edit the live values while you're running, and somebody else said something that makes it sound like Unity does let you do that too. So there you go. The big guys have finally caught up with where Looking Glass was 20 years ago. Well, he also made another good point immediately after that: with the decoupled architecture you can support live editing on the target... Oh, on the target device, the consoles. Yeah, yeah, that's absolutely true. I'm such a PC... I mean, I'm not at work, but the way I think about game development is so PC-oriented, I'm really bad about keeping that stuff in mind. So yeah, that's definitely smart. Yeah, I had live editing of colors and reloading of files on the PS3 for Dyad. That was big, but I didn't have an editor. The level editor was all done in code, but all the Photoshop stuff could be live reloaded on the console. But I don't know, how do people work? I don't know how someone like Naughty Dog works, who only ship on PS4, but do they keep a PC build running? Like, you must. Some PS developers do that and some do not. I don't understand the people who don't, but I've definitely heard that some do and some don't. But I don't remember specifically whether Naughty Dog do or not. Well, yeah, I just used them as an example. But it's just like, how can you not? Yeah, it seems nuts to me too. Pear corrects me and says that UE1 had the full live editing as well. Yeah, but that was old. No, no. Because I was dismissively saying after 20 years they caught up. Yeah, but UE1 was still, what, like '99? Yeah, well, Looking Glass was doing this... Well, okay, yeah. Looking Glass was, like, '97, right? But there was no earlier version of UE that didn't do it.
That's not like... Um... Uh... Speaking of editors in Unreal, have you seen that guy who compared Unreal Editor to Valve Hammer, about how terrible not having CSG is? Yeah, Joe Wintergreen, I think. Um... Yeah, he's... I watched it when it came out and then I watched it again a year later and then I watched it again a year later, because it's a couple years old now. And he has a couple other videos that are related. He's making some kind of stealth game that's inspired by the Looking Glass stuff. So, yeah, that's a... Uh... I'm supportive of all that stuff. But yeah, the whole CSG editing thing is really interesting, because... Um... I'm so out of the AAA cycle that I just kind of take for granted that they know what they're doing. All these meshes that artists make, and they, you know... they fake stuff or whatever and then get the artist stuff in. And that sounds like it's great. And it turns out they're all using these modular assets, and, okay, that sounds great. I don't know any better. And so it was really interesting to see this video from him where... I guess you're talking about the one where, in one of those videos, he asserts that he thinks this is sort of crippling level design. Yeah, that's... Which is just interesting, because I never think about that stuff, because I assume they know what they're doing, and it's an interesting accusation that it's crippling level design. Well, I think there's actually, like... it's subjective empirical evidence, if that's a thing. In that, like, if you look at the level design in games like first-person shooters... I'm going to dismiss Overwatch because that's an anomaly. But if you look at modern first-person shooter level design, it's far inferior to what it was, you know, in the early 2000s, right?
Like, with the Quake III levels, the UT 2003 or UT 2004 levels, the Counter-Strike levels... modern Counter-Strike levels are good too, but they're using Hammer as well, I think. And it's just like, the depth of competitiveness and the quality of the level design in those older games is way better than what there is now in Far Cry or Crysis or whatever. And even the Call of Duty levels. The Call of Duty levels are really detailed, and they have lots of grass, and they have a car that looks really cool, like a burning car or whatever. The older games don't have any of that stuff that would have had to have been created in Maya and then imported, and then artists tweak it and adjust the lighting and the materials and shit at the same time. But the actual geometry and the level layout is nowhere near as good as it was in games 10-15 years ago. So I think there's definitely some truth to that. Why did you bring that up? We were talking about in-game editors. Yeah, if you do that modern style, it's really hard to do that in-game. Unless you're writing a whole mesh editor, unless you're writing Maya, it's really hard to. Yeah. But writing a CSG editor is pretty easy. I was just reading somebody's GDC talk, I think this was for Horizon Zero Dawn, where they were talking about the fact that they made their own level editor for placing stuff, although it's all semi-procedural: they paint a lot of texture maps for, like, tree density, and then generate the trees directly from the density instead of placing individual trees. But they're still relying on Photoshop or some other tool for the terrain, whatever the standard terrain generator everyone uses, World Machine, I don't remember anymore. They're still using other tools to make stuff.
But they at least took that intermediate step of making a standalone, or in-game rather, level editor. At least assembling your modular meshes in-game is better than not even having that, like trying to do all of your level design in Maya or something like that. And I think everybody figured that out a long time ago. Yeah, that's fucking nuts, doing a whole level in Maya. But can they do a CSG-type thing for collidable objects, or, like, gameplay-important objects? They weren't talking about that kind of stuff. This was focusing on the procedural generation stuff. Oh, okay. Because I think, like, a hybrid would be nice, where you can lay out how you want the level to be from a gameplay perspective, you do that in, like, a CSG editor or something, and then you just fill it in with meshes and detail in, like, the traditional way. I think that sort of makes sense. Yeah, that's the gray boxing, I guess it was called. The Valve... wait, yellow boxing? Yellow? What's the whole thing where Valve would CSG out the level and then have people come along and detail it? Does it keep the editability... I don't know what you're talking about specifically, so I'm asking: does it keep how easy it is to edit it? Does it keep that after the processing, after, like, the prettying-up stage, or do you have to start going back? Like, you have to iterate. You have to do your full game design iteration and get the gameplay locked down before you do that. Yeah, it kind of sucks. Yeah. But I mean, if you want more detail than just the CSG level, you've got to do something at some point in there. I'd be curious to know how Overwatch does it, because its levels... they're not like a Call of Duty level in terms of detail, but they're much better than, say, a Team Fortress 2 level, and much better in terms of detail than a Counter-Strike: GO level, and they're as well designed as a CS:GO level. So I'd be curious to know how they do it.
Maybe they just throw people and money at the problem. It's interesting, because I don't really understand why modular mesh design is so efficient, if you have enough modular components that you're forced to build stuff on a grid. But with CSG, if you CSG not on a grid, you kind of fuck yourself anyway. So I'm not actually sure, concretely. I need to go re-watch that talk. I haven't seen it in a year or whatever. I think it's iteration time. I think that's a good point. Yeah, he definitely talked about it. You're right. He did definitely talk about it. But it seems like, couldn't you just stick some other modular component down that's the right shape for the gameplay and mark that it needs to actually be replaced with the correct asset or something? I don't know. This is way outside of my knowledge. I probably shouldn't even be... Even in your OBBG work, right? I don't know how much of the editor stuff you have hooked up, I haven't watched it in a while, but just being able to place cubes, right? You can make a level so much more efficiently than if you had to deal with a mesh, for example, right? And cubes are a lot... I mean, if you're using BSPs or whatever and you have, like, brush tools where you can actually do the CSG live in-game and immediately play it, you're going to get super fast iteration, versus having to deal with a mesh that you have to orient and make sure it works correctly. And then, oh, you want to rotate it: make sure the rotation origin point is correct, and all that stuff. It's more of a pain in the ass, I think. Should we move on? Sure. In-game good. Simple, low turnaround good. All right. Next topic. How to do physics simulation at 3,000 times per second? Also, what do you know about entity component systems? We should have split these out when we rated them, because I don't know if we actually care about both of those topics equally. Both of them are fun.
They're both worth talking about. Okay, so how to do physics simulation at 3,000 times per second? Now, I don't even know what the point of this question is. Like, is this based on... I don't know if you saw the Jonathan Blow talk about making Braid be able to run more frame-rate independent than it currently is? I saw part of it. Or is this something you've ever talked about, the physics at high... Last time, I believe I talked about running Dyad at 6,000 frames per second. Oh, okay. Well, then that's probably what the... It might have been 3,000. 3,000 or 6,000. I don't remember what shipped on the PS3. So why would you run it at that high a frame rate? Because it was better than... I mean, I used polar coordinates. I did a bunch of weird shit in order to make it fit on the PS3, like make the frame data fit on the PS3 and be fast enough. So I do, like, precomputed radians and polar coordinates and stuff like that. And when you have data in that form, it's much faster to use a fixed-step collision detection rather than extruding things based off of their movement over time and start doing, like, capsule-capsule collision detection. So it was much faster to treat everything as a disk and then just simulate it a whole bunch of times, so that the movement step is smaller. So that's why I did it. And how do you do it? You just loop 500 times a frame. Well, so how many objects are you simulating in a frame? 10, 20. That's part of the secret, I think. If you had a 10,000-object world, perhaps it might be a little more difficult to simulate at 3,000. Might be, yes. So that was solely for the collision detection. Yeah, that's the only reason. Yeah, I mean, I've tweeted about the fact that I have a system where I'm doing, I think I said it was something like, 20,000 frames per second. You mentioned it in the last stream, yes. And it's kind of the same thing.
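The "simulate it a whole bunch of times with a smaller movement step" approach can be sketched roughly like this. This is an illustrative reconstruction, not Dyad's actual code: `SUBSTEPS`, `Disk`, and `simulate_frame` are made-up names, and a real game would resolve each contact instead of just counting it.

```c
/* Substepped simulation: instead of swept ("extruded") collision, take
   many tiny fixed steps and do a cheap disk-vs-disk overlap test each
   step. With 10-20 objects, the O(n^2) pair loop is perfectly fine. */
#define SUBSTEPS 50      /* e.g. 50 substeps of a 60 Hz frame = 3000 Hz */

typedef struct { float x, y, vx, vy, radius; } Disk;

static int disks_overlap(const Disk *a, const Disk *b)
{
    float dx = a->x - b->x, dy = a->y - b->y;
    float r  = a->radius + b->radius;
    return dx * dx + dy * dy < r * r;      /* compare squared: no sqrt */
}

/* Advances one frame; returns how many overlapping pairs were seen. */
int simulate_frame(Disk *objs, int n, float frame_dt)
{
    int contacts = 0;
    float dt = frame_dt / SUBSTEPS;
    for (int s = 0; s < SUBSTEPS; s++) {
        for (int i = 0; i < n; i++) {
            objs[i].x += objs[i].vx * dt;
            objs[i].y += objs[i].vy * dt;
        }
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (disks_overlap(&objs[i], &objs[j]))
                    contacts++;            /* a real game resolves the hit here */
    }
    return contacts;
}
```

The smaller the substep, the less an object can tunnel through another between tests, which is exactly why the step count is cranked so high.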
It's mainly because there's only 10 objects. Like, I might have mentioned in this other stream that one of the things is, I don't have a broad-phase collision detection in that, because there's only, like, 10 or 15 objects. It's just the n-squared thing. I mean, I haven't tried it, and the plan was always to put one in someday, but it may not even be worth it, because there just aren't enough objects in a single level to be worth it. You also mentioned that you were doing sort of, like, parallel... like, there were different worlds or something, right? Yeah, I don't think I talked about that on stream, because I'm keeping all that stuff secret. Oh, sorry. No, that's fine. I think I talked to you offline about why I want to do 20,000, and it's possible we didn't even talk about this on stream. It might have only been off stream, but it doesn't matter. I have mentioned it on Twitter in passing. Okay. So, but not the reason why. And the other thing that I do on that is, it is fixed time step, and it actually uses fixed point rather than floating point, if I recall correctly. I might be recalling wrong, but it's important that it be reproducible, and floating point is reproducible, but, like, it's way easier to make integers work across debug and release and stuff like that. And integer adds are fast. So, if you can avoid doing the multiplies as much as possible: if your update is not x plus-equals delta-time times velocity, but is x plus-equals pseudo-velocity, because the velocity is pre-scaled by the time step, because it's a fixed time step, then you know that that part of your physics is really cheap. And then, integer comparisons used to be faster than float comparisons. I hope they've fixed that by now. I assume they have. I don't know what's faster. Part of that was the 8087 stuff. It might be the case that now the SSE comparisons are fast, but the 8087 compares are still slow. I don't know.
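The pre-scaled-velocity trick described here can be sketched as follows. The 16.16 fixed-point format and all the names (`fix16`, `Body`, `set_velocity`, `substep`) are assumptions for illustration, not details of the actual system; the point is that the per-step update collapses to a single integer add.

```c
/* Fixed timestep + fixed point: because dt never changes, velocity can
   be stored pre-multiplied by dt, so each substep is one integer add.
   Integer math is also bit-for-bit reproducible across debug/release. */
#include <stdint.h>

typedef int32_t fix16;              /* 16.16 fixed point */
#define FIX_ONE (1 << 16)

typedef struct {
    fix16 x;        /* position */
    fix16 step_vx;  /* velocity * dt, baked in when the velocity changes */
} Body;

/* The multiply happens here, rarely, instead of on every substep. */
void set_velocity(Body *b, fix16 vx, fix16 dt)
{
    b->step_vx = (fix16)(((int64_t)vx * dt) >> 16);
}

void substep(Body *b)
{
    b->x += b->step_vx;             /* the entire integration step */
}
```

Since the step size divides evenly into the fixed-point representation, running N substeps of 1/N seconds reproduces the intended displacement exactly, every run, on every build.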
They probably are. I don't even know how much of the load of a game now is integer versus floating point. That's something I should be kind of curious about: is there more floating point work being done in a game now than integer work, and at which point would that make integer calculations faster, because the integer units are just sitting there doing nothing? But you kind of need mixed. Right, ideally you want mixed, right? But I'm just curious what the load is now, because it used to be, like, in the good old days, you couldn't use floating point, right? But now... You know, when I started at Looking Glass, it was just the end... like, we had shipped, like, one more game that was using fixed point. System Shock, I think, was fixed point. The DS doesn't have a floating point unit, right? That might be right. I didn't ship a key on... you just did the 3DS, right? Yeah, I did the 3DS, I didn't ship a key on DS, and I don't remember, was that why? I think that wasn't why, but that would have been a factor in why, because, yeah, I'm pretty sure that the DS doesn't have a floating point unit. And by the way, somebody says on stream that I did in fact say multiple worlds the other stream. So, all right, you didn't leak anything. All right, that's good. Um, yeah, so I think it's easier. It's one of the reasons why I did it for Dyad: it was easier to do that, and it was fast enough, than it would have been to do, like, a proper collision where you extrude the geometry and then get, like, the actual overlap point and then advance the time step to that point. It was just faster. Yeah, it was less work to do little time steps. So I think that's the biggest reason to do it. Yeah, I believe it's easier. I believe that's actually how the OBBG physics, which is a hack right now, works: it checks if it collides at the end time and then binary searches the
where it doesn't even binary search, it just takes a sequence of small time steps to find how far it can go before it collides, just because, yeah, it's easy to get working, because you just need an intersection test and that's all you need. Do you handle the case where it jumps through an object in one frame? No. Like I said, I believe you could break it; the OBBG stuff is a temporary hack. I've definitely done that correctly. Like, at Looking Glass, one of my co-workers and I were making a Doom clone, as we called them in those days, just for fun on the side, because we would play a lot of network deathmatch, and I was a little frustrated that Doom's deathmatch didn't really give a lot of info that I thought was interesting, and of course a lot of games now don't either. Like, at the end of a level, getting a map that shows you where all the kills and deaths happened or something like that. And so I was like, hey... or he was, I don't know which of us proposed it, but: hey, let's try to make our own game. And there were other things going on, but anyway. So it was, you know, a 2D, 2.5D engine like Doom or whatever, but I definitely remember doing, in 2D, the full collision detection done correctly: you know, you're a circle, and there are the line segments, and when you're moving, you intersect your moving circle against the line segments, all correctly. I've definitely done that code. It's easier in 2D than in 3D, but it can still be done in 3D. Yeah, it just gets harder when you have, like, different centers of mass and you're not using spheres and stuff like that. Well, I would probably still use the classic Quake cheat, which is to turn every object into an oriented or axis-aligned bounding box, where maybe it can rotate along the one axis, but maybe it can't rotate at all, so that the rotation of your object doesn't affect your collision. And if you
can get away with it, I think you should. Everything should be a circle or a sphere or a cylinder. Yeah. Well, the reason to do the axis-aligned bounding box in Quake was because then they could precompute the convolved geometry, and they didn't have to have rounded corners on the convolved geometry. But yeah, if you're doing that calculation dynamically, it may not be too bad to treat you as a cylinder. But that was the interesting hack: the cylinder is the obvious thing, because it's rotation-invariant, and so you don't have to worry that when you turn, now you can't fit. And their hack of just using the axis-aligned bounding box regardless of your rotation still has that property. It's like, yeah, if you can fit through this corridor, it doesn't matter what angle you turn at, you'll still fit through it. It's sort of the... I felt like John Carmack was really good in those days at finding local maxima, at picking things that work together well. And that was maybe a case not of a local maximum, just a good design thing, and so maybe all of his local maxima really were just good design choices. But yeah, he made a lot of really intelligent decisions back then about just being like, this will get us 90% of what we want, let's do it this way and make sure everything fits in that way. And that was a really smart thing that was applied to a lot of the earlier id games. And speaking about this method of simplifying collision: you know Alex Austin? So you've seen... what's this game, shit, what the fuck is it called? Is it Year Zero? A New Zero, something like that. Year Zero is the Nine Inch Nails one; his is A New Zero. Okay, yeah. His is... that's fucking nuts, because he has, like, full collision sim on everything, every frame, of the entire body, and then the camera is fixed, like, to where your eyes would be, so if your head moves, you
actually, like, see your neck moving; your camera would be in the correct position. And that's totally the opposite of what I would ever do. Well, it's not what he does in his other games, right? Like, he's doing that game and he's doing stuff... or is Sub Rosa the game that does that? No, A New Zero is the one that's, like, fully physics'd up, where it's all physics controllers that turn your movement into the animations and stuff like that. I don't think Sub Rosa is doing the full sim. Okay. Because that one is more focused on the gameplay, and that stuff... he has been putting physics stuff in, like the recoil and stuff, but A New Zero is the one that's just totally nuts in terms of everything's physics. I think that'll be really cool if he can get it working. I'm skeptical if anybody can get that stuff working, because that's a very, very difficult problem. And one thing that I personally feel about physics games is they're all shit. And I'm not saying his game is, because I think his stuff is, like, incredibly cool, but I've never really played a game that was like, hey, we're making a physics game, that was actually fun. What about the gravity gun in Half-Life 2? Yeah, that was good, but that was, like, the John Carmack-style optimization, right? They're like, we're not gonna do grabbing everywhere, we're gonna do it in this one very controlled, very isolated way, and we're gonna make sure this one thing feels really good, rather than trying to apply it to everything, which is what I feel like a lot of physics games do. Oh, sorry, normally by physics game we don't mean the gravity gun in Half-Life 2, because you're talking about a game where everything is about physics, right? Right. And yeah, I do agree that those are often... I don't even actually know, have I played a real one? I guess anything that, like... I guess World of Goo, I guess, was the
only one I played... the only one I played all the way through. Well, there was Toribash, which was pretty cool, but... No, Toribash was... yeah, it was physics, but it was, like, turn-based. It was really weird. Um, there was, like, Stair Dismount, whatever that was called. That was just a toy. Yeah, that was a toy. Anything that, like, had Box2D in the marketing-speak, I feel like, was, like, cool, but it was not, like, fun. You know what I mean? Yeah. Like, QWOP... QWOP is not really physics, but, like... I don't know, I'm not a big fan of those types of games. Well, I mean, QWOP is kind of like a parody of itself. I don't even know how to put that. It's not a parody of itself, it's, like, a deconstruction of a deconstruction or something. I don't know. It's hard to take QWOP seriously, I'll just put it that way. Yeah. There was that old DOS game where you were on, like, a bike... you know the game I'm talking about, you're on, like, a motorcycle. Happy Wheels was, like, a Flash version of it, sort of. It was... you were on the bike and you had to not hit your head. A 2D bike? Yeah, yeah, exactly. Yeah, I remember that game. I remember that actually being pretty compelling. It was kind of interesting, because it was, like, the first one to... Yeah, exactly, exactly. And I think everything else, like QWOP, is sort of like that, and... I don't have anything against QWOP, I just don't... it's not something that I like to play. And that DOS one, it might have been Windows, I don't remember, anyways, but that older one from the '90s, that was pretty cool, because it was the first one to do it, and I've found, like, everything else has been, like, oh, I've seen this before. As a concept, it's actually not all that fun. Trials is still doing that gameplay now. Yeah, I mean, I know Trials is the same kind of game. I guess I need to go back and look at the old one and see. And I also haven't played Trials, I've only watched it being played, and watched a Let's Play of
it or whatever. Well, yeah, Trials was actually kind of okay, mostly because it was about how crazy the levels could get, not so much about the annoying controls and fiddling with the annoying controls. But yeah, Trials was, like, a better version of that. There was also Stunts... Stunts was the DOS one. So there was that old racing game Stunts, and then they had another one, and you can have, like, a million... you can watch a replay with, like, a million cars going. And they're still doing it. It's not, like, that physicsy, twitchy sort of physicsy thing, but it's, like, a physics-based racing game. What's that called? It's really popular... it doesn't matter. I don't know. Rocket League? I guess he's talking about that. Yeah, I mean, you can play a lot of Rocket League. Yeah, I mean, Rocket League is very interesting, in that when you go into the air it's very physicsy, but when you're on the ground it's not very physicsy, so it's kind of weird. Yeah. Trackmania, is that actually physics-based? I've seen it being played, but I never thought of it as physicsy. Well, it kind of is, but not in the way that Trials is. But I was sort of going... I went along the Trials path, of games that are similar to Trials but not focusing on the physicsy part of it, focusing on the other things. And I think Trials did a good job of not going too heavy on the physics thing and focusing on the Trackmania-or-Stunts-type part of the gameplay, which is what made it good. The physics part of it, I don't think, made it good. Right, okay, sure. I mean, it seems like Trackmania... I mean, not Trackmania, Trials... has some depth of skill, and I think that depth may come from the physics, but I bet you wouldn't need to use a real physics engine to get that; you'd just need a certain amount of subtlety in the control system. Yeah, I think that's a good way of putting it. Okay, let's move on to the second half of this question. Yeah, which was... what... also, what
you know about... I think what we actually want to hear is what you feel about entity component systems. And, you know, I think we talked last time about the whole having an array of objects and making them unions, and you talked about the code generation for the dispatching and stuff like that. So yeah, we already kind of established that our way of doing things is to do the big union of objects and put them in an array. But I do... I thought you were a fan. I do have opinions about them, and so I'll let you go first, though, before I bring up my... Well, I was gonna say, I thought that you were a fan of entity component systems. I thought you thought it was a good idea. Well, I'll expand on that, so... if you have more to say first before I do that... I think they're a bad idea, because I think it makes... um... no, I just think they're bad. I don't think they're necessary. I don't think you spend enough of a frame's time updating an object to need that sort of optimization. I think it makes the overhead of dealing with objects... well, it's more complicated, right? And when you start updating things, it's very nice to be able to just have all of the data and all of the state for a single object just there, and then you just edit it and you don't have to worry about anything, and ECS sort of makes that more convoluted and difficult and annoying to deal with. And I don't think the supposed speed optimization is worth it. That's all. That's my opinion. Okay, so my opinion about this was formed working at Looking Glass, because an entity component system is how we made it so that it was one executable to do Thief and System Shock 2. As far as I know, Looking Glass invented them, because nobody had ever done them before that. But maybe in other industries there were similar things; it was probably a known idea, the idea of splitting up your things. But it runs very
counter to object orientation, because you're slicing your objects the wrong way compared to object orientation. We never did it as a speed thing; it was never intended as a speedup. One of the reasons for that, I mean, I feel like we talked about this some, is that the bad way that you go down is that you build a class hierarchy in C++ of your different object types, and the thing is, the hierarchy... it's one of those things where, like, object orientation kind of makes sense when you're modeling objects in the real world. Kind of like a weird variant of it makes sense. And I've run into this with, like, text adventures. Text adventures all tend to be written in object-oriented languages, but they have the interesting property that top-level objects can be unique: you don't bother making a class for every object. Every object is a kind of object; it uses object-oriented technology, but isn't part of a class, it's just its own unique object. And that's very specific to text adventures. In video games, in graphics games, we are much more likely to have multiple instances of the same kind of thing. Text adventures really want that uniqueness. But so the point is that all the text adventure development systems end up with this system where they're using sort of object-oriented kinds of stuff, but every object can be unique. And there's something to entity component systems that's pretty similar in its intent. So in particular, what we did at Looking Glass was we still had an object hierarchy, but it was not defined in code, it was defined in data. And any object could take on any component, but the hierarchy defined sort of archetypal objects that had some standard components in them, and you would subclass down the hierarchy just to accumulate all the standard components you needed to be that kind of thing. But you could always just take anything and attach other components to it. And that, entirely to liberate designers,
was the reason for that. So yeah, if you're a one-person team, you wouldn't do it, or if you're two programmers and two designers, you probably wouldn't do it. One of the things that we had happen was, the System Shock 1 engine was full of code that was like, if you're on level six and you have one of these things, then do this special thing. That was just written in code, because, like, you're trying to get a game shipped, and that's just kind of the fastest way to get there, you know, to do that, if you have fast enough turnaround time with your designers and your programmers. Underworld was even more extreme that way, because all of the designers were programmers. Like, there were no separate level designers; the levels were built by the programmers, and so it was even more tightly coupled that way. And we were looking at trying to do another game using the System Shock engine, and part of the reason we couldn't do it, why it was a problem to do it, was because there was so much hard-coded into that engine about the game. And so as we were starting to build Thief, the question was, well, how can we decouple that? Because we know that prevented us from reusing it. And one of the ways you decouple that is, anything special like that, that a programmer needs to put in some custom code for, is not hard-coded; they just make a new component that's, hey, the property of being this kind of weird thing that does this special behavior, and you can attach that to any object you want, and you flag in some way, this is the level where that kind of thing happens, or, you know, whatever. There were ways to make that stuff work that decoupled better. And so I still think actually that's all reasonable stuff, at least when you're a company that's as design-centric as Looking Glass was, with a large enough group of designers that pushing those capabilities into the designers' hands is useful. See, that's interesting. I never
thought of it from that perspective. But even thinking about it at that high level, I still, for some reason... I guess I don't really understand what the divide between designer and programmer is, because in my understanding, every time I've worked on a game with multiple people on a team, the designers can at least script, right? They can use Lua or something. Yeah. And you push all of that stuff into an entity object in Lua and then just let the designers go crazy, right? So the problem with that is that scripting is not very simulation-y. Like, scripts don't interact with each other well unless you hard-code the interactions or whatever, and so if you're trying to do the sort of emergent-behavior, very simulation-y games that Looking Glass was trying to do, you really want to push all that kind of stuff into a real language where you really think about how things interact. That's the quickest argument I can make off the top of my head. The other thing to keep in mind with all of this stuff: I was working on the renderer. I was probably working on the world renderer, so I didn't even do the object renderer, and the object renderer had to pull various graphics properties and look at them to decide how to render the object. I didn't have to do any of that shit. I never interacted with the system at all. I'm familiar with its design; I watched it being designed, watched it being developed. I'm not that familiar with how it was used, other than looking in the Thief editor and seeing the giant fucking list of 500 possible components or whatever. Right. I feel like the support code to support 500 different types of things is... like, I don't know. What do you mean? Because you would have to program behavior for all 500. The only reason to have 500 components like that was because you had 500 behaviors you already wanted to code, right? So if you have these 500
behaviors you already want to code, it seems like a lot of these would be one-time-use or fairly unique things. Maybe just putting them in, like, a subclass-type thing, or actually the union way that we discussed last time, and I don't want to go over that again, is still better than an entity component system. Maybe it is, maybe it isn't. Part of that comes back to the whole wanting-to-cross-the-hierarchy thing, where you want this object to have feature A and feature B stuck in it and you don't already have a struct that has both feature A and feature B. So the classic example, a totally goofy example because this was a hard-coded crazy thing in Ultima Underworld 1: at one point in Ultima Underworld 1 you come up to a door, and you use the mouse input that means "open the door," and instead it brings up a conversation dialogue, because it's also the same gesture you use to interact with a talkable thing, and you start talking to this door, which is a door that was turned from a human into a door by a wizard. And, you know, he has some fetch quest or something for you before he'll open. And that was just hacked into the code in some way. I don't remember the details, because that was Ultima Underworld, way back, and part of it was that the designer was a programmer and he just knew he could go in and write that code to do that. But part of the motivation for that component system is to give the designers that level of freedom: the designer can figure out how to make the door that can also converse. And maybe you can do that in scripting, if you hook up your scripting API well enough or whatever; maybe it wouldn't be a big deal. And doing it with an entity component system isn't automagic; there can still be interactions there that don't work naturally that you'd have to go write code for. So it's not a panacea, but that's the kind of thing it was intended to enable. Gotcha. Yeah. So I'm personally still not sold on it.
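The talking-door idea can be sketched in a few lines of C. This is a hypothetical, minimal property/component system, not Looking Glass's actual code: each component type is a parallel array indexed by entity id, and behavior checks for components rather than object types, so a door that also converses needs no special-case code in either system.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical minimal component system: any component can be attached
   to any entity id, so a "door" can also carry a "conversation"
   component without the Door code knowing anything about dialogue. */

#define MAX_ENTITIES 256

typedef struct { bool present; bool open;        } Door;
typedef struct { bool present; const char *line; } Conversation;

static Door         doors[MAX_ENTITIES];
static Conversation convos[MAX_ENTITIES];

static void add_door(int e)                    { doors[e].present = true; }
static void add_convo(int e, const char *line) { convos[e].present = true; convos[e].line = line; }

/* The "use" gesture checks components, not types: a talking door
   works with no hard-coded special case. */
static const char *use_entity(int e)
{
    if (convos[e].present) return convos[e].line;   /* conversation takes priority */
    if (doors[e].present)  { doors[e].open = true; return "(the door opens)"; }
    return "(nothing happens)";
}
```

A designer "makes" the wizard-door by attaching both components to one entity; the interaction priority (talk before open) is the only code a programmer ever had to write.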
I think it makes it more complicated. That's fine. My opinion is that it makes things more complicated and I don't see a huge benefit to it. It definitely makes things more complicated, and I think the Looking Glass stance was that doing as much as we were doing without an entity component system would have been even more complicated. That was the feeling, and maybe they were wrong; like, whatever, we're not going to debate that. It's also that I can program, right? So I don't need it. Yeah, yeah, absolutely. Again, as I said, if you were just two programmers and two designers I still wouldn't do it. You definitely have to be scaling a little beyond that, but not that much more; we weren't that much bigger a team. All right, shall we move on? Sure. Do you have any thoughts on literate programming and the WEB language? Literate programming, yes. So let me go first, because I don't have many thoughts. I don't have many either, but I think it's a good thing. Yes, you go first. Well, so, I mean, that's the thing. I've read about it and I never really tried to do it. I think I wrote a tool once to try to let me make more literate programs, and I didn't finish the tool, and I never tried using an official tool or whatever. So the idea sounded kind of nice, and I've read literate programs. I've read whatever that literate C compiler is. Yeah, what's that one called? Is it lcc? lcc is what was jumping to my mind. They're all something-CC, and there's pcc, the portable one; I think it's lcc. And I think I read a short Knuth thing or something, maybe. And so I was kind of like, I mean, it's nice to read it that way, but the idea of maintaining it seemed like not a good thing to me. Oh yeah, I don't think you ever want to write a literate program for anything other than teaching material. Well, there you go. Because, I think, like... have you ever read Physically Based Rendering? I guess. Say again.
PBRT, Physically Based Rendering, the book. Have you read it? I think I've read a little of it. It's really good. If every textbook was that good, and I think a lot of the reason it's so good is because it is a literate program, I would be much happier. I think it really stands out as a textbook because it's a literate program. So I think doing literate programming for teaching materials is fantastic; doing it for everyday work is insane. If that makes sense. I want to see if we get any comments on this in the chat before we move on, because I don't think we have much else to say, and that wasn't very much. The book is Physically Based Rendering, and the third edition just came out, so now is a good time to get it if you're interested in learning about how physically based rendering works. It's not for real time, but basically the advancements in real-time rendering over the last six, seven years have been more or less getting as much of that book, the techniques of that book, or that process, into the real-time world, and finding as many hacks as possible to do it. That's more or less what real-time rendering technology has been. All right, I think we're gonna move on. All right. I was just answering questions about the book. Yeah, yeah, no, but I mean, there haven't been any questions about the literate programming or comments on it, so I think we'll move on. So, leaving encryption libraries to the professionals. That was my topic. That's just a thing that people always say, so I wanted to hear your opinion on it, and I'll give my opinion on it. I don't know enough about encryption to have one, because it's a whole mathematically complex, deep subject that I haven't done enough research into.
To speak about it intelligently, anyway. But I will say that the advice of, quote, "leave it to the experts" has been wrong 100 out of 100 times that I've ever seen it. And somebody who is an expert on it, Daniel Bernstein, has written encryption libraries, and they are, from what I can tell, extremely good. I guess he is an expert in it, but if he had listened to the advice of "leave it to an expert," those never would have been made. That's basically all I have to say about it. Well, so I guess the question is, if 100% of the time the advice of leaving it to the experts has been wrong, that doesn't absolutely mean that the next time you hear it, it's gonna be wrong again, right? You could live in a world where there is this one thing that is just absurdly different and really does justify it. And so it's kind of a question of: do we live in that world or not? I don't think so, but I'm not an expert. Yeah. Well, so I'm of two minds, I guess, which is why I wanted to talk about it. Because at one point I started writing stb_crypto.h, and I knew it was a risky thing to do, and it was a not-invented-here thing: there were no libraries out there that I wanted to try to use. I mean, trying to use OpenSSL or whatever the standard one is, is gonna be a pain in the ass. I have since found some public domain stuff that is maybe more useful. But the actual reason I gave up on stb_crypto was not a leaving-it-to-professionals issue. It was that I was not able to get my multi-precision math, multiplication, modular arithmetic stuff fast enough compared to existing libraries. And I was just like, if it takes a significantly longer amount of time, like 10 times longer, that's kind of a problem. And I didn't ever have a need for it; that's part of it, most of my libraries are need-driven. So I didn't stop because of leaving encryption libraries to the professionals. I stopped because it wasn't turning out good.
But I did perceive it as a huge risk, because I'm not an encryption or security expert, but I keep my toes in that stuff enough that I'm aware of how many things have gone wrong there, and that any stb library may be vulnerable to a buffer overflow. When I originally started the stb libraries, I was like, these are for, like, if you're a game developer and you're using your own assets. You're not pulling stuff off the web; you're not putting this into a web app or a server. I'm not worried about the risks of there being a buffer overflow in there. And unfortunately, as they've grown in popularity and people have found more uses for them, one probably is in a server somewhere right now, giving somebody a vulnerability. And with stb_crypto, the thing is, crypto is only ever meant to be used in that exposed way, so flaws in it are immediately problematic. And what I would like to do is... Bernstein made a version of his library that was packed into 100 tweets or something; I don't remember exactly what it was. And then there's a version of it that's just a single-file library. And I was like, okay, that's what I'll use. But it turns out there is a particular crypto protocol that nobody has implemented for elliptic curve cryptography, which is the new thing that people like Bernstein are using. It only exists for the old RSA-style modular arithmetic. And it's a protocol that's very useful for game stuff. It's an authentication, like a password kind of protocol, that is even stronger than traditional password hashing, in that if somebody else gets your hashed password, breaks into your server and downloads the thing, it's even less useful than normal hashed passwords are. I don't understand how that is, because it seems like a brute-force attack on it still would tell you what the password was.
So I don't really understand why, but everyone, all the info about it, says it has this extra dimension, that the threat vectors are better. And World of Warcraft uses it; after World of Warcraft got in a lot of trouble, they did switch to using this more secure protocol. But it's not available in the Bernstein library, so I didn't even know how you would use it. That was sort of part of why I was doing stb_crypto: to be able to have that at all. So I don't even know. I don't know. It seems really risky. I'm sure I would make some mistakes somewhere, because, I mean, my code does have bugs. I think this is one of the cases, somebody asked another question regarding unit tests, this would be one of the cases where you would do unit tests. But, yeah, I mean, the testing space would be just way too big. The issue is usually less in the primitives and more in the protocol. The primitive is the thing that does a simple encrypt: encrypt this block into this other block with this key. But then the protocol is this whole thing where you have to exchange this thing, and then you exchange that thing, and you look at this thing and you do this thing. And over the years there's just been a host of errors in those things that weakened the whole system or made it vulnerable in various different ways. You mean the key exchange protocol? Stuff like that. And it would be less of a bug and more of a mis-design, because you end up having to design that part of the layer yourself if you don't just use somebody's library. Right. But the thing is, if Bernstein, if Daniel Bernstein didn't exist, what would you do? Yeah, exactly. What would we do, right?
Like, we're in a fortunate situation where it's actually been covered by somebody who really, really does a good job, both on the code side, making sure that there are no bugs, and who deeply understands the mathematics and the procedures behind it in all aspects, right? So there is one expert that I'm aware of who actually knows what they're doing, so that when you say "leave it to the expert," it happens to be the case that there is an expert I trust leaving it to. And that doesn't really exist in a whole lot of fields. Yeah, no, you're totally right. Now, before he put out that lib, the original thing was that his library was written in x86 assembly. That's Curve25519, okay. And so the only way you could use it from C was that somebody else had back-translated the assembly to C by hand, and who knows if they got it right, right? So yeah, no, you're definitely right: it's a very fine line between the world where he exists and the world where he doesn't exist, and we're super lucky to live in the world where he exists. If he wasn't there, I don't think anybody would have replaced him. It's not that kind of situation. Right. And that's another thing: I'm gonna put you in the same category as that. I don't use other people's code almost ever, but I have no problem using your code, because I actually trust it. It's the same with him, right? If you didn't exist, I would have to write, now that I know how these things work, I would have to write my own image library, right? And now I don't. So I can leave that to the experts too, in the same way that I can leave crypto to the experts because DJB exists.
And if he didn't... I don't know. But the idea of "leave it to the experts," I still think, is wrong advice, except in the case where Daniel Bernstein exists. Somebody in chat points out that TweetNaCl, the Bernstein library I was talking about... yeah, yeah, I saw that... was actually written by five people. And I knew that, and I was skating over it, because, again, if Bernstein didn't exist, I wouldn't take that library seriously. Yeah, if his name is not on it, I don't trust it. Yeah, and he did do the original curve, two-five-five... Curve25519, I think. Curve25519. And I believe TweetNaCl is also Curve25519, so it's still... Everything uses 25519. Put it this way: part of why I trust DJB is not just that he's a security expert, but that he has these other, older things that he did, his tinydns, and I don't remember what the other name is. qmail. Yeah. And one of the things about those is, you know, he has bug bounties on them and has had a vanishingly small number of bugs; I don't remember if it's zero or if it's two. He's had one bug in qmail in like 25 years. Yeah. And so that is part of why I trust him on security, because of that stuff. He's a very idiosyncratic guy. I read him and he doesn't sound like somebody I'd want to be friends with, necessarily, but I do trust him. And maybe he's a great guy, I don't know, but he comes across very rough-edged, or I don't know the right word. And weird geniuses are often like that. But we have that history of the stuff that he's done; without that history, I wouldn't be trusting TweetNaCl or whatever. And so, to the degree that those four or five other people worked on TweetNaCl, it makes TweetNaCl scarier to me, because it's not his baby. I want it to only be his baby if I'm gonna trust it. But hopefully he reviewed it all or something.
I assume so, if he's putting his name on it. Yeah, exactly. And that's the thing, though: there is no expert in any other field that I can think of that I have that sort of level of trust in. So, like, I don't know... "leave it to the experts," I still think it's terrible advice. I think it's terrible advice for anything, including encryption, except for the fact that there's one extra guy to trust. Yeah, in a world without DJB, you would say no to that entirely. I think so. Yeah, and I already said my opinion, but just to kind of reiterate it in that framing... Are you there? Yeah, I'm thinking. I guess I said it at the beginning: I'm of two minds, and the not-invented-here still pulls strongly, in that same way, in that working with anyone else's code is still a shitty experience and a shitty end result. That pulls on me, but the knowing that I would screw it up pulls on me as well. But certainly the reality is that the professionals do sometimes still screw it up, like Heartbleed, or whatever the OpenSSH bug was. OpenSSL. Well, the problem with OpenSSL is that it's just a massive 20-year-old codebase they keep building on. No, I mean, that's part of it. It wasn't really the experts, right? And that's sort of the thing that undermines "leave it to the"... well, "to the professionals" was what I actually wrote down, but yeah, "to the experts" is correct. They weren't, like... it's a little bit like how X Windows was an MIT thing that was written mostly by MIT students, apparently. It's the intern problem: this code you're relying on, it turns out it was written by the interns at that company, not by the super awesome people at that company. And, yeah, to what extent do our encryption libraries actually come from experts, and to what extent do they come from people like that?
So, certainly, encryption protocols and encryption algorithms I'll leave to the professionals, because they know how to do the research to even try to figure out if those things are as secure as they claim to be. But in terms of the implementations, maybe that's where you're correct: there, unless they're DJB, you don't trust them as programmers. Right, right. Yeah, I would never even consider writing... I mean, you couldn't, because it would need mass appeal or mass deployment, but... writing an actual ECC curve-generating algorithm, right? I would never even consider that, because that's like 20 years of math to fucking learn how to do that shit. But actually implementing it, I think I would rather do myself, unless there is somebody like DJB. All right, shall we move on? Sure. Okay, now here's the one that I read at the beginning: "Would be interesting to hear about effective use of other people's code; what to look out for in a library that will cost more time down the line than making only what you need." I'm not sure this isn't just redundant with the single-file library discussion. No, I don't think it is. I think Casey's talk on API design is pretty pertinent here. Because, like, there's one example, and this is a problem, and I spoke to Omar, the Dear ImGui guy, about this. I said, I would really like there to be, for the layout functions, ones that don't take your vector type and instead take just an X and a Y position. For all of them; there's like a thousand of them, right? And he's like, oh, man, that's way too much work, I'll never do that. And that makes your library 10 times worse, right?
Just little things like that. And I still use it; I think it's a great library, right? It's just... having redundant functions: one that takes a vector of int, if you have that type; one that takes a vector of float, if you have that type; one that takes... when I say "one," I mean a function... a function that takes a pointer, where the first element is the X and the second element is the Y (or, if you're using ncurses for some reason, the first element is the Y and the second element is the X); and one that takes an X and a Y parameter. Just having these redundant interface functions makes adapting your code easier. If you have a function that takes a matrix, you'd better goddamn well have a function... like, if you use your own internal matrix type, you'd better have a function that takes an array of floating-point numbers, and you'd better have one that's row-major, and you'd better have one that's column-major, right? You have to have all of these things. And if I see that there isn't a ton of redundant functions... this is where C++ is way better than C, because you have function overloads; in C you can suffix your function with an F or a 2F or whatever. But if you have a really large API interface and you don't have a ton of redundant functions, then I'm just like, I'm never gonna use this, because this is gonna be a fucking nightmare. And it's a really simple thing for an API to do, because you can usually just macro it; you can figure out a way to macro lots of functions like that. So that's a really big thing. Another thing that I look out for is carelessness with malloc and free, or carelessness with C runtime library functions in general.
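The redundant-entry-points idea can be sketched quickly in C. The function names and the `v2` type here are invented for illustration (this is not Dear ImGui's API or anyone else's): one real implementation gets thin wrappers with suffixes so callers never have to repack their own math types.

```c
#include <assert.h>

/* Hypothetical layout call exposed three ways: a bare x/y pair, your
   vector type, and a pointer to two floats. In C++ these would be
   overloads; in C you suffix the names. */

typedef struct { float x, y; } v2;

/* The one real implementation. */
static float ui_item_width(float x, float y)
{
    return (x < y) ? x : y;   /* stand-in for actual layout logic */
}

/* Thin forwarding wrappers -- trivially macro-generatable for a
   thousand functions, which is the whole point. */
static float ui_item_width_v (v2 p)           { return ui_item_width(p.x, p.y); }
static float ui_item_width_fp(const float *p) { return ui_item_width(p[0], p[1]); }
```

Callers with their own `v2`, a raw `float[2]`, or plain scalars all get a direct entry point, and the wrappers cost nothing at runtime once inlined.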
In general, I don't want a library using those C runtime functions, because I don't want to be reliant on anything. So if you're just like, oh, I'm gonna malloc and free and strdup and all of these dumb things, that's a big sign of "I don't wanna use you." And if you have another dependency on something that isn't the C runtime library, that's almost an immediate no. Those are the big ones, I think. What do you think? Well, so the question was specifically worried about things that hurt you down the line, as opposed to stuff that makes the initial integration hard. But I guess you could use those as an indicator, because you're gonna run into them right away when it forces you to pack things into its types or whatever. But maybe that's a good indicator of the authors' sensitivity and awareness of issues. So that one specifically maybe is not the best example, but Casey's API design talk definitely talks about that whole path where, down the line, you want the more granular ones; you wanna be able to have high-level calls that you can split into more granular ones, for example. So I kind of feel like, yeah, that talk covers everything; every opinion I have basically came from that talk to start with. I'm also not very good at answering this question, because I don't make effective use of other people's code; I just write my own libraries. Yeah, I don't either anymore. So, I mean, that's part of why, right? I had trouble making effective use of other people's code, and in the long run I just decided my life, my morale, would be higher if I wrote the stuff myself. It doesn't necessarily save me time... it probably saves me time, but maybe it doesn't. I've definitely written a couple of libraries that I didn't have any use for myself; it was just, I'm interested in the subject or whatever. Most of them, though, were written needs-based: I need this, I'm not gonna use somebody else's library, time to write it.
But obviously I'm making libraries that I want other people to be able to make effective use of. And I don't always practice all the Casey design things, like the granularity stuff. Most of my stuff has a well-defined enough job, like an image loader, right? The whole thing is that I have that really convenient, simple API, and there is no broken-down version of, like, first parse the header, okay, now stream out some of the pixels. I don't have any of that stuff. And in some cases people want that stuff and they can't do it, and that's why. Even in those cases, though, one nice thing about single-file libraries is you can just go copy the function, paste it, and then add that stuff. Yes, that's always possible, that's true. Some people are definitely not willing to hack into my code, but it certainly gives you the option, way more than it would if it weren't single-file. Right. But yeah. Well, I'm trying to think about libraries that I actually do use. And I think it's worth mentioning the ones that I actually do use and saying why. Because there is a certain... I always want to write it myself, but there are times when it's just like, fuck it, what this is is good enough, I don't need to worry about it, and I don't want to spend the time writing it. The big ones that I use, three that I use in the game, four I guess, are: FMOD Ex, which is the old FMOD from the early 2000s, maybe about 2003, 2004. Is that open source or free in some way? No. If you don't sell your game, it's free; if you do sell your game, it's like $75; and if you put it on a console, it's like $5,000 per platform. But they still... it's the old version; they don't update it anymore, but they still support it, they still release it on all consoles. And that API is really good. It does everything I would ever need for any type of sound in any game ever.
I used it to do the sound-effect engine in Sound Shapes, which has all sorts of dynamic sound stuff. I used it to do the sound-effect engine in Dyad, which does all sorts of dynamic sound stuff, because you can just get a pointer to the waveform and then write whatever DSP you want, if you need to, at whatever chunk granularity you decide, whether it's 44,100 or 48,000 hertz. I didn't realize you had anything to do with Sound Shapes. Yeah, I did the sound-effect engine. Okay, just that. Just that, yeah, nothing else. It was very short; I ran out of money making Dyad, I needed money, so I did that. That's basically how my life works: I work on my games until I run out of money and then find contracts. So I use FMOD Ex. The new one I have no opinion on; it looked complicated, not interested, right? Because it does, like, oh, you can set this up and it has this editing tool and then we package all your files. I'm like, fuck that, I don't want to do that, I can handle that myself. So I use the minimum thing from FMOD. I use libcurl, because it's just too much fucking work to write an HTTP client that does all of the SSL, supports all of HTTPS, right? It's just too much work. And I use PostgreSQL for the database stuff, because who's writing a database? And the C interface for PostgreSQL is actually pretty good. And I use FastCGI, because I haven't written a native Nginx module yet, but I will, so I'll abandon that. So those are the only four libraries that I use, all because they have really simple, good C interfaces, no C++ crap. If you look at FMOD Ex, it does everything that Casey's API design talk talks about. It's extremely good, takes care of everything, has multiple redundancies for everything. Everything's an enum: you just set the variable, pass the enum you want, done.
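The "get a pointer to the waveform and write whatever DSP you want" style can be sketched generically. This callback signature is invented for illustration, not FMOD Ex's actual API; it just shows the shape: the engine hands your callback a block of interleaved float samples at some chunk granularity, and you process them in place.

```c
#include <stddef.h>

/* Hypothetical DSP hook: the mixer calls this with a chunk of
   interleaved float samples; you mutate them in place. */
typedef void (*dsp_callback)(float *samples, size_t frames,
                             int channels, void *user);

/* Example DSP: a simple gain stage. A real effect (filter, delay,
   pitch shift) would keep state in the user pointer. */
static void apply_gain(float *samples, size_t frames, int channels, void *user)
{
    float gain = *(float *)user;
    for (size_t i = 0; i < frames * (size_t)channels; i++)
        samples[i] *= gain;
}
```

The engine would register `apply_gain` as a `dsp_callback` and invoke it once per mixer chunk; everything "dynamic" about the sound lives in code you fully control.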
Everything is super nice, does everything, has no problems at all, well supported, blah, blah, blah, all that stuff. And libcurl, same sort of thing. So all of them have tons of redundancy, tons of functions. The one thing I'm pissed about with libcurl is its memory usage, because it's a little liberal with malloc and free, which I don't like; but that's the only complaint I have with it. And if you look at those libraries, those are ones that I think are actually good and worth using. For almost everything else, I have found it's always been better for me to write my own. Do you use anything that other people have written? I mean, OBBG uses SDL, because I didn't have it. But you're writing a replacement for that. Yeah, pretty much. It wasn't originally meant to be; I wasn't really planning on going all the way in on that. But then I saw Per's platform abstraction layer, which included sound, and I was like, that was sort of the only thing really missing from my design that was significant. And so I was like, oh look, the sound is really easy here; the way he's doing sound is really easy. And there was a whole separate thing, which is that I totally redesigned my layer to work more like his design, because it was interesting, worth exploring. But that's a tangent. And yeah, so I was pretty okay with SDL, other than the whole building-it problem, especially because I'm on VC6. But I got past that back when I started up the OBBG stuff, and at the time I was like, I guess I'll just start using SDL. And it's just all the things I always complain about: it's a library that I link in, and that is a mess. What I was thinking was, I'll use SDL if I do something in the scope of OBBG, but if I'm doing something quick and dirty, I'll still keep using my old platform wrapper that's not very good.
And then I was like, okay, let me rewrite that platform wrapper just for the quick-and-dirty apps again. And at this point I'm like, eh, you know, my quick-and-dirty apps, if I had sound in them, that would probably be good. People always complain that my little games don't have sound, and they're right; it's just never worth the effort. Lost in the Static actually had sound when most of the others didn't. So yeah, at this point, the features in that... it's that Casey design thing: it's trying to make it really streamlined to get going really quickly. But as I'm doing that, I'm like, oh, it's pretty easy to put in the good version of this as well, as an optional thing. So yeah: SDL. What else do I use? I think that is literally it, because I don't have a library directory to link things from, so there's no way I can be using anything else. I have SDL built in its own place, some tree that's not a general libraries tree, and I just directly reference that. So I think I can't be using any other libraries. But I use the C standard library very heavily, more than other people do, which is a segue to our next topic, which you were hitting on when we were talking about curl. So shall I move on? Or did you have more to say about other people's code? No, no, that's fine, let's move on. So yeah, the next topic is: why not to use malloc, et cetera. So you have no problem with malloc, and I do have a problem with malloc. I don't actually have a problem with malloc. The big reason that I don't use malloc in games is because I want game state to be as close to a memcpy as possible. That's basically the reason. Because if there's a crash and I can just memcpy the game state out, then I can look at it, and I like that. malloc makes that a little trickier.
Is it really mainly for that debugging? Because maybe it also makes save games easier or stuff like that. Yeah, it just simplifies everything. Many things are easier that way, okay. Yeah, and it's like, you get save games are easier, you get "how much memory am I using?" — well, start pointer, end pointer, subtract them, that's how much memory you're using, right? All of it just makes all of that stuff way easier. Also in 32-bit there were fragmentation problems — not so much anymore, you don't really care about that — but that's sort of a habit that I have. I used to work in 16-bit too, right? That was a long time ago and I was learning at the time, but it was like, you don't trust malloc. And one of the talks — a very good C talk that was recently done by Eskil Steenberg, he talked about how he writes C, and I watched it over the period of like a week because it was like two hours long, but it was good. But one of the things that I strongly disagreed with him about was he was talking about how great realloc is, and he said that realloc is so good because, if pages exist that have been freed, it will automatically reuse those pages to give you a tighter, more contiguous memory address space when you use realloc. And I was like, that's wrong, first of all, because realloc is not spec'd to do that, and relying on that is a terrible idea, because if you want to have any control of your process's memory space and you rely on realloc automatically filling in gaps in memory space, then you're in crazy land, right? Because you don't know that realloc is gonna do that on every system, right? Maybe it does on your particular C runtime library on Windows, but fuck, Microsoft has not been known to be consistent with versions of the C runtime library from DLL version to DLL version, right?
So I think relying on malloc and realloc and free is not nearly as egregious as relying on a garbage collector, but it's still relying on pretty complex behavior at a fundamental level. Like malloc is no joke, right? It's not as simple as "get me some memory," right? There's a huge amount of chunking and a very complicated algorithm behind a good malloc, such that I kind of don't trust it. So yeah, this was a topic more for you than for me. Well, I just said what I hate, what I don't like — do you disagree with any of that? Or obviously- No, I mean, so I think the biggest thing is that I don't make games as much as I make other stuff, and I think we talked about this before, which is that when I would make little programs to do things in the DOS days — I think I talked about this, yeah, this seems familiar, but I'll go ahead and re-establish it — I would declare big arrays of the stuff that I needed. And I was running on a machine that had 64 megabytes, so I would size the arrays so that they would fit in 64 megabytes, and that's- So four megabytes on DOS? Sorry. Four megabytes? Why did I say DOS? What did I mean? Well, because you would be in non-protected mode. No, no, no. I don't know why I said DOS. I meant some other thing back in some other days. This was still VC6 kind of stuff, so I'm trying to think what I meant when I said DOS. Back in the days — no, it would have been in DOS days. It would have been the protected mode stuff, yeah. But it would have been 64 megs. I guess, yeah, how big were- Four, eight maybe, tops? Well, like when Windows 95 came out, I was still on DOS for a while after Windows 95 came out. I don't remember how- Yeah, my Windows 95 machine had eight megabytes of RAM. Anyway, I don't know. I feel like I might have actually had a 64 meg DOS machine. But I- That's insane. But I could be wrong. I could be wrong.
Anyway, I would size them — yeah, because this was definitely DOS — I would size them to whatever amount of memory the machine actually had. And then if I had an old app that I'd written a machine or two ago, it wouldn't use all the memory and was limited in the size of what it could do. And that was just the easiest way to do memory management in C at that point. It was just like, that's no code. There are no mallocs, there's nothing. You just have those static arrays. And this is only for tools that have really simple behaviors. And then eventually I got the stb.h stuff and was able to use the resizable arrays for that kind of task, and stopped having these fixed size arrays. For those kinds of tools, using malloc and free is not a big deal. I was avoiding malloc and free in those DOS days just so I didn't have to write the code to worry about them, and because realloc is kind of a pain. The dynamic array stuff makes the realloc smooth enough that it's not a big deal to use realloc to grow the things and make them size automatically. But they're still arrays, and there's not a lot of mallocs and frees — it's often just that big array. But even in those kinds of tools, I'm okay with mallocing and freeing. Like, if I have a dictionary and the keys are strings, so you strdup or whatever. That was actually somewhat of a special case, but let's ignore that. I don't mind that, especially because it's a tool, so I'm not too worried about fragmentation. A lot of the pattern of tools is it just keeps allocating memory and then it exits. It never actually needed to stop and free things, because it's just doing some simple tasks: accumulating a bunch of data, cross-referencing it, printing a bunch of shit and exiting. For games, I still mostly use static arrays of stuff. Oh yeah, for tools, I have no problem with malloc and free.
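In the spirit of those stb.h resizable arrays, here's a minimal stretchy-buffer sketch built on realloc — the macro names are invented for illustration, not stb's actual API:

```c
#include <stdlib.h>

// Stretchy-buffer sketch: a count/capacity header is stored just before the
// pointer handed to the user, so the array is used like a plain T*.
typedef struct { size_t count, capacity; } BufHdr;

#define buf_hdr(b)   ((BufHdr *)(b) - 1)
#define buf_count(b) ((b) ? buf_hdr(b)->count : 0)
#define buf_push(b, v) \
    (buf_fit((void **)&(b), sizeof *(b)), (b)[buf_hdr(b)->count++] = (v))
#define buf_free(b)  ((b) ? (free(buf_hdr(b)), (b) = NULL) : 0)

// Grow (via realloc) when full; doubling keeps pushes amortized O(1).
static void buf_fit(void **b, size_t elem_size) {
    if (*b && buf_hdr(*b)->count < buf_hdr(*b)->capacity) return;
    size_t new_cap = *b ? buf_hdr(*b)->capacity * 2 : 8;
    BufHdr *h = (BufHdr *)realloc(*b ? (void *)buf_hdr(*b) : NULL,
                                  sizeof(BufHdr) + new_cap * elem_size);
    h->capacity = new_cap;
    if (!*b) h->count = 0;
    *b = h + 1;
}
```

This is the "realloc is smoothed over" point: the caller writes `buf_push(a, x)` and never touches realloc directly.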
Free, I probably don't even waste my time with that, but I have no problem with malloc for tools. So that's the thing — games, I'm making indie games. This is what I said last time: I make an array of 1,000 entities and that's the most entities I'm ever gonna have in my indie games. So I do avoid malloc there, just because it's not worth worrying about. There is definitely a whole thing in the console world that goes back to when console memory was tight, especially, where lots of console developers would not malloc and free. They absolutely would have a rule: don't malloc and free within a frame. But a lot of them would just try not to malloc and free at all, because you just have this relatively small, fixed size memory and no virtual memory to fall back on or whatever. And if you fragment and run out of memory, you have to fail, and nobody wants their game on a console to just say, oh, I ran out of memory, the game is over. And so making a memory map and having sort of fixed allocations was pretty common then. Iggy 1, because of the way it was tied to Flash, couldn't predict the amount of memory it was going to use, so it had to have a malloc callback to the app. It would only call back to the app with large chunks, and it had its own internal sub-allocator there just to hide the details from other people — and that reduces fragmentation automatically, because it only fragments against itself, and then the big chunks fragment against other stuff. Iggy 2 just does no allocations, because now I have control over what the runtime is capable of, and so I can do that. And it basically has a bunch of entity types that at startup it just creates big arrays of, at the maximum size it believes it will ever need, which it can predict from the content of the file — except scripting can, completely unpredictably, start creating new objects.
And so you can have your UI fail to create new objects because there isn't memory for it, but only due to the scripting, and so that's just a thing where artists have to be aware of that. And in general, game developers would prefer that their libraries not have any mallocs and frees in them, much like you were saying. And so the STB libraries are an interesting middle ground there, where I documented how STB TrueType would be really, really hard to have you pass in the memory it needs, because it can't predict how much memory it needs in advance very easily — to the extent that, to unpack this thing, it needs some block of memory whose size it doesn't know until it tries to unpack it. So you could make an extra iteration over it, unpack it, and now find out how much memory it needed. Okay, now you can unpack it. Now it turns out the very next step takes that data and transforms it in some other unpredictable way. And so you have this huge chain: to even know how much memory the final thing needs, you need a block of memory, and to compute that you need a block of memory, and it's like at least three levels deep. And people often request, for all of these libraries, a version that doesn't do any allocations, and I would love to do it. It would be a strictly superior library, and it's worth the work if the API makes sense. But the APIs often don't make sense, because it's just like, I need some arbitrary amount of memory — and if the library can make a request upfront and say, here's the arbitrary amount of memory I need, then that's feasible. But it's often not possible. You know, like STB image doesn't have that, but probably should. I want to at least do the thing where you pass in the output buffer for STB image and it unpacks to that buffer. And for certain ones, like PNG and JPEG, it needs intermediate stuff. And again, it doesn't need arbitrary-size intermediates if you write it internally correctly.
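The "request upfront" shape being described — the thing that makes a no-allocation API feasible — is the classic two-pass call: query the size with a NULL output, then call again with a caller-provided buffer. A toy sketch with an invented format and function name (nothing here is a real STB API):

```c
#include <stddef.h>
#include <string.h>

// Hypothetical 'decode_widget': called with out == NULL it only reports the
// required size; called again with a buffer it decodes into it. The "format"
// (one length byte, then payload) is made up purely for illustration.
static int decode_widget(const unsigned char *data, size_t data_len,
                         unsigned char *out, size_t out_cap,
                         size_t *needed) {
    if (data_len < 1) return -1;
    size_t size = data[0];            // fake "decoded size" header field
    if (needed) *needed = size;
    if (!out) return 0;               // size-query pass
    if (out_cap < size) return -1;    // caller's buffer too small
    if (data_len < 1 + size) return -1;
    memcpy(out, data + 1, size);      // fake "decode"
    return 0;
}
```

The catch the discussion raises is exactly that real formats like PNG or JPEG can't state that size in a cheap header read — computing it already needs scratch memory.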
And this is one of those places where the STB image stuff is written to be simple, not to be optimal. And the old JPEG and PNG libraries actually do this. They can stream their data out. The intermediate data that they need, they can do in little chunks and then decode the final pixels, and they only need the full size output buffer and some fixed size intermediate buffer. It's actually row dependent — you need at least enough temporary memory that's row-width dependent, so it has some dependency on the size of the image. But that kind of stream-out is just a lot more code to write and maintain, and I'd rather not go down that path, which kind of sucks, but at least- You can always do, for an image, like a reasonable guess, right? Like if it's four megabytes of pixels, it's not gonna be more than eight megabytes of data, right? Yeah, I mean, it's something like that. But so you could definitely do a thing where you pass in some temp memory and the size it is, and it tells you that that wasn't enough temp memory. And then you would have to reallocate more temp memory, and you would try to only ever do that offline. Like, if your whole goal was to avoid mallocs, you wouldn't want your code to malloc and then free and re-malloc and then free it, so. Well, yeah, actually, so that's what I was thinking about with regards to STB TrueType: what if you just — I mean, what if somebody, not me — used the output of STB TrueType, or some intermediate representation from STB TrueType, to output a file format that is different from TrueType but actually contains all of the data required to know how much memory it needed?
Yeah, I've thought about that, but TrueType is the one format that's actually well-designed, other than that aspect, that I really don't want to rewrite. Because TrueType is the only one where you can open a TrueType file with STB TrueType and you have a struct that, you know, is where STB TrueType stores what it knows about the file, and nothing in that struct is allocated. You could just throw that struct away. There's no free. When you're done using an STB TrueType, there's no free. And that's because that file format is designed well enough that you don't need to build any extra tables or other things. You need to cache a little bit of info that is fixed size and you're done. And so, because that format is already that good — I mean, the Vorbis and the STB image formats are also perfectly reasonable, but they have a kind of different task. But because it's so good at being what it is, yeah, maybe if you just had an accompanying file that just had the size info or something. Maybe that's separate from the TrueType file, but now you have to manage two files, which is obviously something I'm against. Could you append it to the end, or? Yeah, I guess you could do something like that. It's a little icky, because TrueType files can be compound files — collections, I think they're called — that have multiple fonts inside them. And so then you have to concatenate onto the end of the whole thing. It's all possible, but it's a little gross. Yeah, I mean, there's definitely — and the other thing, though, is that most, I think most, STB libraries do malloc and free. I always think about it and just look at how much harder it would be to do, and almost always reject it as being too much harder. Oh yeah, for tool stuff, though, like who gives a shit, right?
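The append-it-to-the-end idea could look something like the following — the trailer layout, magic value, and names are entirely made up, and as noted in the conversation it gets messier once real TrueType collection files are involved:

```c
#include <stdint.h>
#include <string.h>

// Invented trailer: a magic number plus a precomputed memory requirement,
// concatenated after the original file bytes. A loader peeks at the tail;
// if the magic isn't there, it's just a plain file.
#define TRAILER_MAGIC 0x53495A45u  /* "SIZE" */

typedef struct {
    uint32_t magic;
    uint32_t mem_needed;   // precomputed worst-case decode memory
} SizeTrailer;

// Append a trailer in place (buf must have room); returns new total length.
static size_t trailer_append(unsigned char *buf, size_t len,
                             uint32_t mem_needed) {
    SizeTrailer t = { TRAILER_MAGIC, mem_needed };
    memcpy(buf + len, &t, sizeof t);
    return len + sizeof t;
}

// Look for the trailer at the tail; returns 1 and fills *mem_needed on hit.
static int trailer_find(const unsigned char *buf, size_t len,
                        uint32_t *mem_needed) {
    if (len < sizeof(SizeTrailer)) return 0;
    SizeTrailer t;
    memcpy(&t, buf + len - sizeof t, sizeof t);
    if (t.magic != TRAILER_MAGIC) return 0;
    *mem_needed = t.mem_needed;
    return 1;
}
```

A real version would also need a checksum or length field so a file that happens to end in the magic bytes isn't misread.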
But I mean, the only place that matters is for games. And I do use malloc — I'm relying on the C runtime library just because of STB TrueType, I think, is the reason why. Because of the way I do fonts in my game. Because last time I spoke about how I did fonts, I misspoke. When I say misspoke, I mean lied. Not intentionally, but I said that I regenerate the font if I resize the window — that I just regenerate the font that frame. I don't do it that frame, I do it in another thread, and it usually takes one or two frames. And then I just upload the texture from a PBO. So I just have a second PBO, and then in the main frame, once the texture upload is done, I just delete the old texture on the GPU, upload, and then set the pointer to the new texture. So that's how I do font resizing. And so it's not one frame, it's like two or three frames. But I have to use STB TrueType for that, which means that I don't have the final size. So I have malloc and free. That's the only thing in the C runtime library that I use, though. Well, that's an interesting thing, because that is resolution dependent — how much memory those things would take, you actually can't predict. So one thing that could be done is we could use a different algorithm. The thing that it does right now is it tessellates the curves down to straight lines, which is resolution dependent. Oh, that's how you do it, okay. So if you- They're all quadratic Béziers, right? Yeah. So if you rasterize directly against the quadratic Béziers without tessellating them, then you could know upfront exactly how many quadratic Béziers you're going to be iterating over, and you could predict the size of that in the way of building that offline cache that we were talking about.
But when it's resolution dependent, that actually makes it — you'd have to say, here's the worst case memory it needs based on this particular resolution. Right, because then you would have to get like a bounding box for the Bézier curve or whatever, and then figure out how big that bounding box is relative to the font size, and then figure out how many line segments you would need. So yeah, that wouldn't work — or it would work, but you'd get a really bad worst case. Well, you could do a worst case by just rasterizing the things at your worst resolution and just statically finding out what the worst case was. Which isn't impossible, it just forces you to have a fixed size max resolution. So if you make your game and two years later there's a monitor resolution that's higher than you anticipated, you'd be fucked. Yeah. But all of them are like that. Like STB Vorbis — I believe you can pass in the memory for STB Vorbis, a fixed size thing that it suballocates from. The whole distinction there is whether, across a single call — like decoding a whole file or decoding a chunk of Vorbis or whatever — you ever need to free internally; that's the problematic thing for the STB libraries. If it needs to allocate and free things along the way, then it's hard to avoid using something like malloc and free, unless you just don't free until you return. And if it is a thing that really only requires allocating up until you're done, and it never needs to free, then you can just pass in temporary memory and it can arena-allocate out of the temp memory, and as long as there's enough temp memory, that at least is straightforward to code.
Whereas if it needs to free internally, it's not straightforward to code an internal chunking thing — which is the STB TrueType case again, where I believe stuff like accents means that decoding actually does allocate a thing, do some decoding, do some work on that, and free it, then allocate a different thing, decode some stuff, free it, and then do the final processing. And again, if you're willing to just accumulate all of that, I could write the version of it that doesn't free the intermediate things and just uses more memory. That could take a temp memory thing — it's just got to be big enough, and you don't know how big it would be, and all of that just kind of added up to: this doesn't really seem worth the effort. Well, I don't know when you wrote it, but with 64-bit — which you don't deal with — with a 64-bit address space, you could just give it a sort of massive chunk. You could just give it a terabyte of memory and it wouldn't matter. And the bound is always going to be a lot smaller too, right? Of course. And then, a nice thing — I was going to talk about FMOD. FMOD does deal with decoding Vorbis files, MP3s, all of the formats that you would expect, and it does all of that without a single malloc, because they use their own: you give it a chunk of memory at the front, and it subsections — it chunks it all up and basically has its own malloc that uses the backing memory that you give it. So it works really, really well. That's why I think it's a good API. But can that malloc run out? Like, can it fragment against itself? Oh yeah, and then it can ask for more memory. Okay, yeah. And I think it has to be page-aligned — it probably has to be page-aligned. But you're okay with that if it's asking for big chunks? Yeah, well, the thing is, it won't — so what I did, you could just give it a gig of memory and it'll never ask again.
Well, unless it turns out that it can fragment and you run your game for 48 hours and it fragments — there might be some circumstance like that. That was the whole thing of — that's why the console developers just didn't want malloc and free: they just didn't want any chance that over a long period of time it would fragment. And with my servers — 32-bit servers — this was kind of an issue as well. So one of the things about running a server in Java is that the garbage collector can compact the memory and avoid running out of memory due to fragmentation. Which, again, I'm not in any way saying is a good thing, but the ability to do that — so that is one of the things that Iggy 2 does: it does need to store these arbitrary strings for text fields. And so it does have this internal little memory block that is basically a malloc for string arrays — for char arrays or whatever — for text fields. And that is a little garbage collected — not garbage collected, a little defragmentable heap. It's just a little custom thing that I wrote that's just for — you have a large number of these arbitrary size things, but that's the only thing in it. There are no pointers into it, there are only handles to it. So it's a handle-based, you know, defragmenting thing. And it's definitely possible to go that kind of route if you need to, and for an STB library that would be overkill, I think. But mostly the STB libraries don't ever have anything long-term that they need to keep around. Like Vorbis is actually very straightforward — the FMOD stuff that you're talking about — it's a totally predictable amount of memory use to get the next chunk of audio data. The only problem that Vorbis has is the amount of tables it needs to decode from the file initially; it depends on the file. Right.
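A toy version of that handle-based, defragmentable heap, with invented names and arbitrary limits — because callers hold handles rather than pointers, compaction is free to slide live blocks together and just patch the handle table:

```c
#include <stddef.h>
#include <string.h>

// Handle-based heap sketch: allocations are referenced by an index into a
// table of offsets, so hh_compact can move blocks without breaking anyone.
#define HEAP_CAP   1024
#define MAX_BLOCKS 32

typedef struct {
    unsigned char mem[HEAP_CAP];
    size_t offset[MAX_BLOCKS];  // handle -> offset in mem
    size_t size[MAX_BLOCKS];    // 0 means the handle slot is free
    size_t used;
} HandleHeap;

// Returns a handle (index), or -1 if out of space or handle slots.
static int hh_alloc(HandleHeap *h, size_t size) {
    if (h->used + size > HEAP_CAP) return -1;
    for (int i = 0; i < MAX_BLOCKS; i++) {
        if (h->size[i] == 0) {
            h->offset[i] = h->used;
            h->size[i] = size;
            h->used += size;
            return i;
        }
    }
    return -1;
}

// Resolve a handle to a pointer; only valid until the next compact.
static void *hh_ptr(HandleHeap *h, int handle) {
    return h->mem + h->offset[handle];
}

static void hh_free(HandleHeap *h, int handle) {
    h->size[handle] = 0;  // space is reclaimed on the next compact
}

// Slide all live blocks down in address order, fixing up the handle table.
static void hh_compact(HandleHeap *h) {
    size_t write = 0;
    for (;;) {
        int next = -1;  // lowest-offset live block not yet moved
        for (int i = 0; i < MAX_BLOCKS; i++)
            if (h->size[i] && h->offset[i] >= write &&
                (next < 0 || h->offset[i] < h->offset[next]))
                next = i;
        if (next < 0) break;
        memmove(h->mem + write, h->mem + h->offset[next], h->size[next]);
        h->offset[next] = write;
        write += h->size[next];
    }
    h->used = write;
}
```

The trade being described on the stream is exactly this: you buy immunity to fragmentation at the cost of an indirection on every access.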
Well, for Sound Shapes and Dyad — because those were shipped on PS3, PS4 and PS Vita, all of them — and Vita, well, PS3 actually is the most RAM-limited of all of them. No, Vita is. I just made the data raw PCM and then uploaded that, basically. So there is no mallocing that happens internally with FMOD on the consoles either. It will never call back to malloc, because it doesn't do anything — I give it no memory and just pass it a pointer. A little bit more complicated than that because of loading things, but it was more or less that. So you don't have that problem with mallocs — you don't have that problem with FMOD, the malloc fragmentation problem. So I think we can use it in such a way. So I think we basically agree. In tools, it's fine. In high quality games, it's terrible. It's not even terrible, like. Well, within-frame malloc and free is kind of terrible. Kinda, but like, kind of. Well, you're saying now with 64-bit it doesn't matter. Yeah, like I have this long-term habit of not doing it, but people on my stream — I get questions on the stream all the time, like, oh, should I not use it? I'm just like, you kind of can, it's kind of okay. Like if you're careful — and you don't even really have to be careful, if you just remember to free your memory, it's kind of okay. I don't know, maybe I'm wrong. Maybe I'm personally a zealot about it, but I think it's kind of unnecessarily zealous. Oh, all right. Well, I wish you had said that earlier. Then I wouldn't have spent so much time defending the STB libs. Well, it was good. It was worth it. No, no, I'm totally joking. Right. I'm just trying to think — why not to use malloc? So I mean, the other thing about that, maybe there's stuff to talk about in terms of performance. malloc's fucking fast. Like Facebook — Facebook has put more work into malloc.
Facebook has put a shit-ton of work into malloc and their malloc is incredibly fast. Sure. That's the default malloc. It's the default malloc in what? FreeBSD. Yeah, but if you're making a game — if you're somebody watching our stream and you're making a game — I mean, it's probably an indie game and none of this matters, but. Yeah, that's true. But you're making a game and you're just getting whatever malloc is in the, you know, the compiler, the standard library that you're using currently. And I don't know what the current ones in Microsoft or in Linux are. So Linux uses dlmalloc, which is the exact same one that's used on certain consoles that I won't name. dlmalloc is really old. Yes, but it's still. It actually has some flaws, in that its multi-threading is terrible. At least it used to be. It used to just take a semaphore, so you were just fucked if you were trying to multi-thread that. It's probably okay if you weren't doing high-frequency mallocs and frees — obviously you weren't doing high-frequency. So there was this whole thing where everyone started making their own mallocs — ptmalloc, I think, from Google, and a couple other mallocs — and a lot of that stuff was driven by wanting to have per-thread allocations that didn't interfere with each other. But also, dlmalloc, I believe, is still just using the sort of first-fit strategy, although it has some size class stuff — or maybe it might have separate size heaps. I remember there was this whole story that I've told, maybe told it last stream — somebody else had the exact same story, but I don't know if it was you. It might have been somebody else — yeah, it was definitely somebody else — who was trying to ship a game, and replacing malloc with a function that looked for some specific sizes. And if it got those sizes, it allocated them from a little separate, fixed size heap that only allocated exactly that size.
And that was a way to get — Terra Nova, for example, got it to run in four megabytes? I don't remember now. It was because there was some fragmentation going on, and the easiest fix was to go through and find what the small allocations were that were fragmenting the large memory space, and push them all off to their own thing, so that then the big allocations could still be satisfied. Right, I think it's called a fixed size allocator. Yeah, but so doing those two together in one allocator is a really common pattern. A really common pattern — not just in games, in general — is making an allocator that redirects: everything under 256, or everything under 64, goes to this fixed-size small object allocator, and everything else goes to a large allocator. Well, that's how jemalloc works internally. Yeah, but I don't think dlmalloc has that. That would be surprising. I think dlmalloc has the same thing. It keeps separate linked lists for each of the small sizes, but they are still stored on the main heap and can still fragment against the main heap, I believe. But it's been a while since I looked at it. I doubt that it's changed, it's just I might have forgotten the details. Yeah, I never really looked at dlmalloc. I looked at jemalloc but not dlmalloc. But dlmalloc is used on consoles. But anyway — and then I don't know what Windows is doing. I bet Windows standard libraries are generally very, very good. They're generally better than every other vendor's. I believe they used to be bad in the VC6 days and they got better. Yeah, Borland was way better back then. There's also the whole thing of whether they just pass it on to the DLL or they actually have their own allocator in the standard lib. Like do they just pass it on to LocalAlloc or whatever it's called — HeapAlloc? I guess HeapAlloc. I don't know if they directly pass it on to HeapAlloc or if they do their own small allocations. I'm sure they do their own small allocations.
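That redirect-by-size pattern can be sketched like this, with one invented 64-byte class standing in for a real allocator's many size classes:

```c
#include <stdlib.h>

// Routing sketch: allocations at or under SMALL_MAX come from a fixed pool
// of same-size blocks (so they can never fragment the main heap); anything
// bigger falls through to the general allocator. Limits are arbitrary.
#define SMALL_MAX   64
#define POOL_BLOCKS 256

static unsigned char pool[POOL_BLOCKS][SMALL_MAX];
static void *free_list = NULL;
static int pool_bootstrapped = 0;

// Thread every pool block onto a free list, storing the next pointer
// inside the block itself (classic intrusive free list).
static void pool_bootstrap(void) {
    for (int i = 0; i < POOL_BLOCKS; i++) {
        *(void **)pool[i] = free_list;
        free_list = pool[i];
    }
    pool_bootstrapped = 1;
}

static void *routed_alloc(size_t size) {
    if (!pool_bootstrapped) pool_bootstrap();
    if (size <= SMALL_MAX && free_list) {
        void *p = free_list;
        free_list = *(void **)p;
        return p;
    }
    return malloc(size);  // large (or pool-exhausted) path
}

static void routed_free(void *p) {
    unsigned char *q = (unsigned char *)p;
    if (q >= pool[0] && q < pool[0] + POOL_BLOCKS * SMALL_MAX) {
        *(void **)p = free_list;  // push back on the small free list
        free_list = p;
        return;
    }
    free(p);
}
```

A production version keys the free decision off a header or per-class range instead of a single pointer-range check, but the routing idea is the same.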
HeapAlloc, I don't think, is optimized for small allocations at all. My guess is that they have their own, and I remember reading something about it. I have a paper that I read about it. It was old and outdated, so the information wasn't particularly useful, but the pattern was how fast their allocator was — this is before jemalloc, and the Windows one was much faster than what was available at the time. That was before jemalloc, though. My guess is their malloc is not as good as jemalloc, but it's probably very close. That's my guess for the 2017 compiler. Maybe it's better than, or as good as, jemalloc. I don't know. My guess is the Windows allocator is very good, because Microsoft's standard libraries generally are very, very good. So I think malloc's fine. Here's all the corrections and opinions chat has on all the things we just said, for the people watching the stream without the chat. Somebody thinks glibc is now ptmalloc instead of dlmalloc. It seems unlikely to me, but I could believe it's some other malloc — ptmalloc specifically seems odd. Windows has a small fixed pool allocator backed by a bitmap, with tons of neat mitigation stuff. And somebody else says, when I actually stepped into malloc once, it called the HeapAlloc functions. Well, it will definitely call it, but my guess is it will keep an internal thing in the C runtime library. Maybe it does it in the kernel. I think maybe it passes on to HeapAlloc. It's not that it's in the kernel — it's that they're implementing that stuff in a common DLL that the standard library is accessing, rather than- So the standard library is just a- A wrapper around it, yeah. Right, okay. Yeah, that definitely confused me. Like the first time I saw that call into HeapAlloc or whatever, I'm like, wait, it's calling the kernel — because I didn't really understand Windows very well, and I just assumed that every call like that was a kernel call.
Yeah, I know, I'm sorry, I know HeapAlloc's not, but. So yeah, it's probably that. And so that does mean that if I'm using VC6, I am getting the new malloc, because it's just calling HeapAlloc. Can you call HeapAlloc in VC6? Was it part of Win32 at the time? Yes, it was. It was great. I believe so. So that's one of those other things where some people — like, I think Casey, and I know Jeff is like this — do everything by calling Windows functions, and I do everything by calling C standard functions. Yeah, I call Windows functions. Like I use fopen and fread and stuff like that, for example, and he will call CreateFile. And that is just because I started programming on Unix, and the standard library is how you write portable code. And when you were writing for Unix you had to write portable code, because I was on a Sun SPARCstation or something eventually, but before that I was on a Sun 68000 or whatever, and then I had some random machine — I didn't even know what it was — online that I had access to, or whatever. And so I was just totally in the habit of how to write portable Unix code. So when I went to Windows, it just never crossed my mind to use the Windows-specific stuff. It's like, hey, I can make this portable. Of course I'll keep writing it portable. There's no downside to it. And of course I don't port any of it to anything besides Windows, so it's totally pointless. But it's just not worth the effort to learn the Windows way of doing it, and I don't really see much upside either. There's more shit you can do. Like you can use the internal IO schedulers and stuff like that that the kernel has for doing async IO, which you can't do with fopen as far as I know. Maybe you can. Yeah, I'm sure you can. Yeah, I don't think you can. So there's stuff like that. That stuff is definitely worth it — for files in particular, that's worth doing. That's basically the only reason, really; there's a few more, you get a couple of extra bonus features.
And the Windows thread stuff is better than pthreads. Like way better. Like critical sections are better than pthread mutexes, and I don't think there's a pthread equivalent to a critical section, which is a spin lock — a fixed-count spin before falling back to a mutex lock. That's all it is. And I don't think that exists in pthreads. Yeah, I mean, there's definitely places where I obviously use the Windows stuff. Like if I do audio, I use a Windows thing, of course. So it's not like a "don't use Windows" thing. It's just, if there's an equivalent between the Windows and the standard C, then I just use the standard C. Wait, there was a bug, if I remember correctly — you and Fabian were talking about it on Twitter maybe a year ago — that there was some weird bug with over-four-gigabyte files in CreateFile or ReadFile that Windows stupidly just passed over to fread or fwrite. Do you remember this? Yes, yes. If you open a file — I think it was different between, oh yeah. So I think it's writing a file: if you open it for write, but not read-write — if you open it for write and you try to write too large a block of data, it's limited to like 32 megabytes. If you try to write more than like 32 megabytes or something like that, it fails. It might even be smaller. It might be four megabytes or something like that. And it's just, yeah, it's something in whatever the write-to-file call is — write, I guess. WriteFile. WriteFile, yeah. It's just some limitation in WriteFile, and somebody explained that, yeah, the workaround is how you open the file. You're opening the file for create and write, and you're just planning on writing it and closing it and being done. If you instead open it for write and read, then it doesn't have that limitation, even though you're not gonna read from it. I see. Which is incredibly dumb, and yes, the bug is clearly that fwrite doesn't fix that for you automatically. There is no excuse for fwrite not fixing that.
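The usual defensive workaround for a per-call write size limit like that is to never issue one giant write at all — a hedged sketch, with the 32 MB figure taken from the discussion above rather than from any documented limit:

```c
#include <stdio.h>

// Chunked write: loop in bounded pieces so no single OS-level write call
// ever sees a huge length. 32 MB here echoes the limit discussed; the
// exact number is not authoritative.
#define WRITE_CHUNK (32u * 1024u * 1024u)

// Writes len bytes to f in chunks; returns the number actually written.
static size_t chunked_fwrite(const void *data, size_t len, FILE *f) {
    const unsigned char *p = (const unsigned char *)data;
    size_t total = 0;
    while (total < len) {
        size_t chunk = len - total;
        if (chunk > WRITE_CHUNK) chunk = WRITE_CHUNK;
        size_t wrote = fwrite(p + total, 1, chunk, f);
        if (wrote == 0) break;  // give up on a real write error
        total += wrote;
    }
    return total;
}
```

This is the same spirit as the complaint in the conversation: a wrapper like this is exactly what fwrite could have done internally so callers never hit the limit.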
Right, so like the benefit that you would think you would get, like from your perspective, and I probably didn't think of it as a benefit, is that it's going to work correctly. Like using the C functions is gonna work correctly, but they don't even fucking work correctly. Yeah, I don't even think of it that way because I would never expect WriteFile to fail that way in the first place either. Why doesn't WriteFile fix that itself? Like. Welcome to Windows. Like there's no reason they couldn't have fixed that. Like it's not gonna break anybody to, nobody's gonna rely on that behavior. I mean, I don't know, it's fucking Windows. They have so much crazy shit that they deal with on that front. So maybe I shouldn't make that assumption, but. Yeah, I wouldn't make that. Windows is like, read Raymond Chen for a week and you'll be like, okay, I get it. Exactly, exactly. That's exactly the only reason I know that is from Raymond Chen's blog. Which I have not read in probably 10 years, but because I got enough of it, I got the idea. Well, so I've read, I get angry at his blog because he'll like defend incredibly terrible design decisions. Yes, there's a really funny thing, which is that you learn so much from his blog. The specific thing, the pattern that I know I get mad about and that some other people I know get mad about, so I think it's more specific than what you just said, but maybe it's exactly what you just said, is that he'll explain why it has to be that way. And it's not even necessarily defending the design decision, but I guess it kind of is. Maybe that is all it is. Maybe I don't have anything useful to add there. The thing is that he's like, a lot of what happens in there is like, well, it has to be that way because here's what's going on inside, and because this is what's going on inside, do you see how all this fits together? That's the only way it could work.
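The fix being asked for above — fwrite transparently splitting a huge write so no single OS-level write trips the size limit — is simple to sketch. This is a hedged illustration, not anyone's actual runtime code: the function name is invented, and the chunk size is a parameter rather than the 32/64 MB figure from the discussion, which is an observed behavior, not a documented constant.

```c
#include <stdio.h>
#include <stddef.h>

/* Split one large write into bounded chunks so that no single
   underlying write exceeds `chunk` bytes.  Returns the number of
   bytes actually written (short on error, like fwrite itself). */
size_t fwrite_chunked(const void *buf, size_t total,
                      size_t chunk, FILE *f)
{
    const unsigned char *p = (const unsigned char *)buf;
    size_t written = 0;
    while (written < total) {
        size_t n = total - written;
        if (n > chunk) n = chunk;            /* cap this pass */
        size_t got = fwrite(p + written, 1, n, f);
        written += got;
        if (got != n) break;                 /* short write: stop */
    }
    return written;
}
```

In real code the chunk size would be something comfortably under the observed limit — say 16 MB — and the caller would never know the split happened, which is exactly what the speakers are saying fwrite should do internally.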
And the problem with all that is that you come away from that and you're like, there is no way for me to know that. Like there's no way I could know that this crazy design decision is going to exist in the first place. You know, because a lot of these things that he's talking about are sort of undocumented or edge cases that aren't clear from the documentation. They're usually, almost always, the stuff he points out is like, if you look in the right place in the docs, here's the thing that tells you that, but you're never going to find the right place in the docs. You're looking at some other, I mean that's right, there's like 10 places that you could look. And the other thing, though, is that he had a term for something at one point and now I've forgotten it because I liked the term. There was something where he was talking about, like, to support remote desktop, you have to do all this extra crazy stuff to do it properly, to be a good, well-behaved remote desktop app. There's some other stuff you have to do. And like for example, it's like, hey, if you want to, assume you're using GDI, right, you're not doing other stuff. Like one of the things to make your GDI thing pretty is you effectively page flip, right? You write to a quote-unquote back buffer and then you blit the whole back buffer up at once, and that makes your Windows thing not do flickery redraw. And that's like good practice. He's like, it's a recommended good practice. But if you're on remote desktop, you don't want to do that, because if you do that in remote desktop, it'll composite, it'll do all the rendering on the remote thing and then send the whole bitmap over the pipe to the local computer. And what you actually want to do is send all the GDI calls across to the local computer, because that'll be less network traffic and it'll be faster.
And so to do that, he's like, so what you have to do is you have to put this extra code in your various, you know, WM_PAINT things that does, it's like, if I'm in remote desktop, do this, and otherwise do this. And I actually did that in one of my apps and it was not that much extra work, right? It's just a couple of extra things and it totally worked. But it's like, the only way I know about that is because it's on his fucking blog. Like nobody documents that. And the problem, the thing I actually wanted to get to was not that. My actual complaint is that that's the thing you need to do to make remote desktop work well, and there are 10,000 other things like that that make your app better that are not, there's no table: here are all the fucking things you need to do, and one of them is that remote desktop thing. It just, it doesn't exist. There is no way to know all the stuff you need to do to follow the Windows rules. To get all those good behaviors. Worse than that, figure that out and do it at the fucking API level. I don't even wanna have to do that. Windows should do that for me. Like why can't it be like, oh, you're on a remote thing, I see you're compositing here to this offscreen thing, we're gonna send all that shit over. Why doesn't it automatically, why isn't that part of the remote desktop functionality built into Windows, to just do that translation? Yeah. And there's so much shit like that on his blog. It's like, oh, if you want your cursor to behave this way, you need to do these eight different things. It's just like, fuck off, just do it automatically. Windows, Win32 should just do this. And being like, well, it has to be like this because of these esoteric things. It's like, don't have these esoteric things. Just fix the API, which doesn't seem to be, I mean. Everyone bitches about Win32. I actually think Win32 is a pretty good API, the older stuff. I think the older stuff is actually quite good.
It was way better than POSIX or the C standard stuff or whatever. I think it's a very good API. It's gotten quite terrible, but I thought it was actually quite good back in the day. If the people who wrote the original one were still working on it, or people of that caliber, I think half of Raymond Chen's blog would be gone. Right, yeah, yeah. Well, that comes back to the intern stuff too, I think. I think a lot of the APIs, especially the things away from primary Win32, did not have highly skilled people doing the design. Plus the problem is that nobody knows how to do API design, right? Like Casey's talk is the only thing we've ever seen that indicates somebody who knew what they were doing with API design. So I'm trying to, so somebody in chat was like, it can't be four megs. And so I decided to try to find the thing, the info on this. So I'm looking at the WriteFile page and it is so full of so much shit, of just crazy things. I don't know if there are exceptions, I don't know, it's just so much stuff. But I'm skimming down it and I don't see the number anywhere. So then I go to CreateFile and I don't see the number anywhere. And I'm like, either I skimmed past it, which is possible, or it's like some other page has this thing, which is already beside the point. It's like, wait, if you don't tell me on WriteFile? So crazy. It's very frustrating. But the thing is, like, in the VC6 days, MSDN was really fucking good. I worked at Microsoft at the time. Well, no, I worked at Microsoft, yeah, just before the first version of Visual Studio came out. So VC6 is still what you used. And there's a department of people who do dev support, where basically you call them up and you get your own personal Raymond Chen. Pretty sure that's all gone. But those people who were doing that stuff basically just read MSDN documentation to people who called, like their dev support people. It was really interesting. And so MSDN used to be very, very good.
It's still the best, which is the disgusting thing. It's still the best source of information on the internet. If every other API, like public, free or OS-level, that sort of an API, was as good as current MSDN, then the world would be in a lot better place. And old MSDN was 10 times better than current MSDN. I still have like boxes of MSDN CDs somewhere around here. I'm still trying to find that WriteFile thing. And so I just Googled it, but of course that gave me, there's a two-gigabyte limit on fwrite. And it's like, yes, welcome to 32-bit. Yes, now you have to use the Ex version. I am going to use the washroom very quickly while you do this and then, talking. That's a terrible time. I need to just keep talking if you're gonna do that. So I'll have to stop looking at this. That's fine. Or, okay, is that okay? Yeah, yeah. Fwrite hangs with large size count. For some reason this link isn't loading for me. So are we, is this, we were talking about malloc, right? Okay. No, this is another four-gigabyte thing. So, I don't know what I should be talking about. Oops, sorry about that. Let me search my tweets for fwrite. It's documented in WriteFile. There's no reason for fwrite to expose it. That's what I tweeted at some point. Let's now find that tweet. Did Sean leave his fridge open? I'll ask him when he shows up. It's always open in every stream. That's awesome. WriteFile has this as well, and it used to be 32 megabytes, is what? Oh, 64 megabytes, okay. 64 megabytes is apparently the limit. Here, I will link you guys to the support thing. Oh, it looks like they fixed it in a later version. Well, it depends how old this support request is. It might be ancient. Claims it's documented in WriteFile. 64. It is not currently documented in WriteFile. So, I don't know what these tweets were saying. So, yeah, I tracked down the thing that I linked on Twitter back in the day. It was 64 megabytes, was the limit. I do not see it, and Fabian said that it was documented.
I see something documented in WriteFile but it's not the same thing. I'm pretty sure it's a different thing. Searching for 64 turns up nothing. So, I don't know. It may be something they fixed. They did fix. It's possible that the newer Visual Studios have a fix in fwrite to work around that. Yeah, I have no idea. So, I normally can't see it. So, I hadn't noticed, but somebody in chat mentioned the whole thing of, what looks like your freezer door is open. Is that just, is that unplugged or something? Yeah, it's unplugged, and appliances that are not plugged in like that get a really nasty plastic smell. So, you just leave the freezer. You just leave the door open. So, is it unplugged because you never use it, or? Correct, yeah. Yeah, this is like my basement. Oh, I see. So, you do have a refrigerator somewhere that you can use. Yeah, in my kitchen. It's not like the no-shampoo thing. Okay. No, no. I was going to like store milk in there because I have a coffee machine back there, but then I looked at the cost of running it for a year and it's like $110 to run, and it's like, that's just unnecessary usage of electricity. So, I don't want to do it. So yeah, so the malloc thing. I think if you don't have the nasty habit of not freeing it, I actually kind of don't think there's anything wrong with it. For a lot of cases. Some cases though, like... Yeah. If it's easy to track its usage, then just use malloc, I don't care. I got a link to the actual tweets because people are still asking questions about it. So, let me just link to the tweets. I'm sorry I brought up the weird esoteric bug that I somehow still remember. 298 days ago, so, yep. I said about a year. Yeah, that's very good. It's been 10 months. No, you nailed it. That's not a criticism. So, new topic. Shall we move on from this? Yeah, unless you have anything else to say. Well, we're at two hours. So, we could do what we did last time and switch over to taking live requests. Is that what we did last time? Okay.
Yeah, we went two hours. What's the next topic? Let's see if it's a good one. If it's a good one, maybe. If it's a shitty one... The next one is: you think about multi-threading differently from other people. That's a shitty one. Okay. That's not actually true. And the one after that is: what does your distance field rendering pipeline look like? That's a shit... Well, that's a good one, but it's too me-focused. Is that the one... I think I saw a stream or something where you talked about this at one point, where I'm trying to remember if I remember what it was. There were some... So, the one you saw was me trying something new. Oh, okay, yeah, yeah. Where I did like a tiled thing. Yeah, it was the tiled thing, yeah. Yeah, that was a... That was just to see if it would work. And it doesn't, because floating-point precision. And so, the issue... We're not gonna talk about it, but just to establish, this is like what you used for N++. Is that the... Yeah, it was the whole max-blend on the alpha channel thing. Yeah, we kind of... We talked a little bit about that, in fact. Yeah. It's a pain in the ass to talk about. Yeah, it's that. But you haven't used it for other things, like it's just for that? Or do you have a... I actually did use it for the UI. There's a UI effect in my new game for the main menu that does use it. I do something different, though. I use stencil buffer and scissor test instead. So, it's like a new code base. It's not sharing code or anything. No, no. It shares the shaders, because all it is is distance-to-capsule, so. But that's the only thing that it shares. All right. The distance-to-capsule functions. All right, I've saved out that text file and we can come back to these topics if we ever want, but... All right, they're not bad. They're just, like, tedious; it's been two and a half hours. Well, here, let me skim over them because nobody's asked anything in chat yet. So yeah, guys, go ahead and ask stuff in chat.
Let me just skim over and see what else there is. Shouldn't it be possible that software is finished at some point? Games are usually just polish and bug fix, but other software is feature-creeping. You gave that a nine. Well, yeah, I was actually gonna use DJB's qmail as an example. That shit was written once and never been patched. I mean, I think there were like a couple of bug fixes, but that was it. It was like, we're replacing sendmail. And a lot of old Unix stuff was that. It was, sed, sed hasn't changed in 30 years. Awk, I presume, hasn't changed in 30 years. And I think that's actually super, super important. I think the idea that we have something and it constantly changes and constantly evolves, and version nine has all these new fucking bells and whistles over version eight, and you have gotta upgrade now, right? I think that is purely capitalism overtaking actual good software practices. And I think it's a terrible idea and we need to stop. Well, you know, so I was gonna say something about qmail. Somebody else has commented about a specific thing. What I was gonna say about qmail is like, I don't know the details, but like I use an email client that's very old. And one of the problems it has is that it doesn't know about modern crypto stuff. So like they've changed what certificates are supposed to have in them because the old stuff is crappy or something. I don't know what goes on exactly. All I know is that every week I have to approve a Gmail certificate, because for some reason Gmail is making a new certificate every week instead of letting it sit there for a year. And I assume it's because it's new crypto, because that would make sense, that I have this old library that, unless it's using like a platform-native crypto library that gets updated, it's just gonna age. And I would have thought... Which there should be. Which, I would have thought qmail would run into that problem, but maybe it doesn't do any encrypted stuff. I don't know.
I don't actually know the details. I was just picking that as... Yeah, no, exactly. That never gets updated. But so, but my point there is that things that interact with stuff on the net probably do have to get updated, because the net standards do change. Yeah, but they don't need like a thousand new features, right? Well, right. So there's two levels to that. So like Photoshop, I've been fucked by, I've on stream talked about being fucked by the Photoshop upgrades and the willy-nilly changing of keyboard shortcuts and stuff like that. On the other hand, I was playing with some aspect of it and was like, oh, this is kind of cute that they have this new feature in here, that I could see using this new feature. It's got a hundred other new features that I don't have any use for, but I bet somebody out there does, because this is the creeping capitalism. I mean, it's clearly done for creeping-capitalism reasons, and yet it does also possibly offer benefits. So it's kind of a weird thing. Yeah, but like adding new features, so there's a difference between, like, Photoshop is a good example, right? My preference is CS3. I think that's the best version of Photoshop that's ever been made, because it was the one that was most like 7 but had like a couple of really nice upgrades. But I tried, I owned CS6 as well, and like the fucking zoom changed and the crop changed. And it's like, you want new versions, have an option to go back to the previous version or whatever. It's like, they changed shit in just horribly terrible ways. And I think there's a difference between adding new features and changing existing features that you can't revert. And I think Photoshop aspires to only be adding. I don't know why they keep changing. Like there's, the brothers who, I think there were brothers who originally wrote Photoshop, are long gone from Adobe. No, I'm sure.
I mean, like, I think their aspiration is to keep putting in, the thing about Photoshop is, it's this program with a thousand features in it and I'm really good at 20 of them and those 20 do what I need or whatever. And I don't give a shit about the others, but everybody else all has the same experience with a different subset of 20 features. Absolutely. And so if you want to make a Photoshop competitor you kind of have to actually make all those features if you want to get everybody to switch. Like it's this weird thing of, like, people can make a Photoshop competitor that has a tenth of the features and get some of the market share, and there will be some people who will, like, give up some feature that they're willing to part with because this other thing is so much better in some way. But it definitely gives them, there is sort of a competitive advantage to that. That's not actually why they put in a thousand features in the first place, but that's kind of where you end up with it. And it seems obvious that, like, you keep putting in new features to get people to upgrade and you don't break the old shit because you want them to have a good experience. Like, is a thing that would work. It's still capitalism-driven and gross, but it's like, it's not the end of the world. The fact that they also keep breaking everything just seems insane to me. I don't know what, I don't know why they do that. They have some of the most incompetent people. Are you familiar with Bennett Foddy's discussion with one of the Photoshop developers regarding gradients? Yeah, yeah, I saw that. Like I'm, that was exasperatingly stupid. That was just like, I can't even believe. I don't even remember the details. I remember it being stupid, but. He was like, Bennett Foddy was like, I want something that does like a smooth, a bezier between three colors in RGB space. And they're like, look, you can do it by setting all of these like internal color things.
And he puts the gradients side by side and he's like, they aren't even fucking close. Like if you think these are the same, you're an idiot. And then the guy was like, well, you know, you could just do this and modify it like this. He's just like, all I want is a gradient between three fucking colors and Photoshop can't do it. And we're on version 400 and it costs like 10 grand. What the fuck are you guys doing? They still haven't fixed it. But on the other hand, well, it's not a fix, right? Like that's a new feature. Like in some sense. Right, but that is coming back to that same thing. There are a thousand people asking for that thing. It's like, how many other people besides Bennett Foddy are going to use that? How many people? I bet you a lot of people do. How many people care about the subtlety of the gradient being exact, being the same in that way, and how many people are just trying to ship their shit out, and like, yeah, I put one intermediate point in and it's good enough. Right, but the thing is, like, the guy who was defending... No, no, the guy, I agree that the guy was stupid and all that, but it is also an interesting data point in terms of the feature creep: everyone wants new features and they're all very specialized, and that is a pressure that wants Photoshop to add a thousand more features. Right, but the point that I was getting at with the guy is like... You were answering the question of how they're idiots. Yes, you can't get a good product when people like that are making product decisions. Yes, that may well be. If the guy was just like, oh yeah, I understand your point, but that's really not a high priority. Yeah, yeah, yeah, yeah, yeah. Okay, that's a fair answer. All right, so let me, let's go to the chat and, wait, what were we talking about there? Why were we? We're talking about products. Oh, shouldn't products be finished? So yeah, I was just trying to make an argument that the feature-creeping is kind of understandable.
Some of it is capitalism-driven, but some of it is making people's lives... You can be motivated to make people's lives better, and that's why you're feature-creeping. Yes, absolutely you can, but that's not the norm. Yes, it may all really be capitalism-driven. I mean, it clearly is when that company is also switching to a licensing model and et cetera. You know, the Creative Cloud. The Creative Cloud, yeah. That's it, that's the end of Photoshop for me, when that happened. Well, I ended up getting Creative Cloud just because I couldn't find my install media for my old version when I switched computers. Oh yeah, at that point, I'm all, I pirate, I'll pirate it. Yeah, I guess. If I need it, I'll pirate it. Like that's, I bought four versions of Photoshop and I'll buy another one. Yeah, no, absolutely. I probably should have. All right, so let's go back to the chat, like we promised. So I'm just, I'm gonna read all the questions out loud and we can decide whether we wanna answer them or not. Have you tried to scientifically quantify your programming process? Ultimately at this point, how do you know you're improving as a programmer? That's a more concrete version of that question. So, I mean, no. I try different stuff still. I try to not be stuck in a rut. I try to do things differently and they're often terrible when I do them differently. Well, you know, Looking Glass shipped Thief: The Dark Project in '98, '97. And for 10 years after that, I was pretty sure that was my peak. Like I shipped this, I helped contribute to this game that was a pretty good game. It's really clunky to look at now because, hey, software rendered or whatever. No, it's a really good game. Like I don't know if you realize how good that game was. And I didn't actually, the part that makes it good is the design, though, not the graphics, the rendering engine, and, but. Well, the dynamic lighting on stuff was pretty goddamn good and important to the game. I guess it doesn't matter.
I still consider myself as, I enabled those designers to do their thing. Even if I... Okay, but it's a good fucking game. And so I'm totally proud to have done that. And that was peak me. And I was content with that. I was like, here I am, I'm 40, and my peak was back in my 20s when I did that. That's fine with me. This is not really answering the programming question. I just wanted to, it's related. You know, and now this whole stb stuff has started and I'm getting my second peak. I wasn't expecting to get a second peak, and in terms of like, you know, it's sort of like a confidence booster or something, right? Like it establishes that I'm doing good work that people find value in or whatever. And it doesn't matter whether I improved as a programmer, right? It's like, hey, I'm accomplishing something in the world that's making a difference. Like that's, and I say that about Thief. Like that's making a difference in this weird dumb entertainment way or whatever. And now I'm making a difference mostly for entertainment software as well, like whatever. Like that is something that I, it's not something I worry about. I thought I had peaked with Thief and I didn't care. I was like, I'm gonna keep doing stuff, but like that's the peak. And I'm happy now to find that I have kind of another peak going on. None of that was ever like, I never thought about, like, oh, I should try to work towards another peak. And in the same way, I didn't really worry about becoming a better programmer. I just try to write the best code I can, and always learning is clearly a useful thing, but it's not like, always learn is an impulse I have that's not driven by becoming a better programmer at this point. So my thing is different. When I was 18 or 19 and I got hired without graduating high school by a large software company, after I lied to them and told them I graduated high school. Um, I had a huge ego, right?
As any smart 19-year-old has, I had a huge ego and thought I was the best. And I thought I was the best for a long time, because then I worked at this other company where I did like reverse engineering of cell phone stuff and I was far and away the best programmer there too. So I was like, yeah, I'm clearly the best. And then I worked for this game company in New York and I worked with a guy who graduated Brown University with a masters in computer science, who basically made Asheron's Call, the whole server thing, by himself, and his brother, who dropped out of Carnegie Mellon after one week after realizing he was 10 times smarter than all the teachers. So I worked with these guys and was like, I'm a fucking idiot. I don't know anything, right? And that was really great because it made me, I lost my ego real fast working with people who are better than me. Eventually I got very good and learned as much as I could from them and moved on. And then I've worked almost, I've worked entirely alone since then. And one of the downsides of working alone is it's really hard to tell, if you're not really careful and really self-introspective, if you're improving. And I think working with a lot of other people, or working with other people, is very good for that. And I'm starting something soon. Hopefully, or I should be starting something soon, where I will be working with other people and that'll be really helpful, so I can learn from them and get better. And the thing that's really helped me is watching other programmers on Twitch. Twitch was really useful. Twitch was useful for two reasons. One, I got to see if I was good, because I wasn't even sure if I was good or not, by streaming and that other people could watch me. And I found out that, okay, I'm pretty good at what I'm doing.
But also watching people like Sean and Casey and Pierre and people like that, and watching them and being like, okay, I'm learning from them and improving, and also can sort of judge if I'm good or not. And that was, I don't care so much that I'm good in like a competitive way, but I care that I'm as good as I can possibly be. And watching other people on Twitch has really helped recently, has really helped me be like, okay, this is where I'm good and this is where I'm not so good and I need to sort of improve. But I think it's really important, if you want to be a good programmer or if you want to be good at anything, to actually be honest and look at yourself and try to compare yourself with other people. Not in like, oh God, I got to be the best, fuck these people. But in a way of, how can I improve? Where am I bad? Where am I good? Et cetera, et cetera. And I think it's actually, I think it's an important thing to do. And I've started doing that a lot. So, so I guess I kind of disagree with all that, but I also agree with a bunch of things. So, I actually have a similar story in terms of the thinking I'm the smartest guy in the room. But, and I thought of that while I was giving my answer and I decided it wasn't actually relevant. So, but since you talked about it, I should talk about it a little bit. I'm very old, okay. Like I'm about to turn 50. So, my memory is not great of anything that happened more than 20 years ago. I was in my late 20s at Looking Glass and I definitely thought I was the smartest programmer there. In hindsight, I don't know if I was. Like one of the things that I've learned, as we'll get to, is realizing how many other smart programmers there are, that it makes that stuff all a lot harder to judge. You know, I definitely felt like Carmack back in those days, I definitely felt like Carmack was superior to me.
Not necessarily in whatever your hypothetical, like, programmer IQ would be, like of just being able to solve problems, but in terms of getting shit done and in terms of researching the right things and making good design decisions. He was clearly better than me and I knew that, but I wasn't working with him. But I didn't, like, lord it over the other programmers there. Like they were all at Looking Glass, they were getting shit done and it was all awesome and I was totally happy with that. I was just in my little niche solving my little problems, but I did internally feel like I was better than them. You know, I didn't think I was better than them; I thought I was a better programmer than them, or was better at solving some kinds of problems than them even, I should say, because they were definitely doing, a lot of them were MIT grads. They were clearly smart guys. They were probably smarter than me, but at the time I felt like I'm the best programmer in the room at any given time. And then I left Looking Glass and I was indie. I worked by myself for a long time, and so it's not until I was 41 that I, well, actually Fabian didn't show up at RAD till, maybe I was 43. Fabian is the first person who I've worked with who I think is a better programmer than me. Jeff is like really at the same level as me, I can't judge it, and I've never worked with Casey significantly to judge that. So I made it to 43 or so still feeling like I was the smartest programmer in the room. I mean, I may have been wrong all those times earlier. Like there might well have been programmers at Looking Glass who were better than me, but I definitely suffered from that problem, and it was just like, I learned to divest from that. Like when I was a cocky 20-year-old it was fine to think that, and it was stereotypical to think that. When I'm a 35-year-old still thinking it, it's kind of gross if it's not true.
And I just learned to, like, not talk about it, to not worry about it and to do my own thing or whatever, and I was working independently. And I feel like I still got better as a programmer even though I wasn't working with people and I did think I was the smartest person in the room. Like when I was working at Looking Glass, if I got better while I was there, which I'm sure I did, I'm not sure it was from learning from other people exactly, or I was learning from people who were less good programmers than me, maybe that's possible. So all that stuff is kind of complicated and weird, and I feel like I did still get better all the way up to now. I think I've gotten better. A lot of getting better in the sense I'm talking about is just having experience, and you get experience by doing, and so always be learning, always be doing. I think I've covered what you said. So then, what you said at the end was, I forgot, what did you say at the end? Cause I wanted to address the smartest-person-in-the-room thing. Well, I think the best programmer, sorry, not smartest. Yeah, the best programmer or whatever. I think that we both more or less said the same thing there. So what did you say at the end? I said that I think it's important to... To improve. Okay, self-analyze. To self-analyze. Self-analyze and to look at other people and learn from them. And so, yes, okay, so both of those topics. So one, I don't self-analyze. I just do shit and get better. And two, looking at other people's stuff: Handmade Hero is so weirdly slow and going back to the beginning stuff that I couldn't learn much from it. I mean, I could learn details of the Win32 API that I never drilled down into, like he would drill down into, in the early days. And I could learn that from him, but it wasn't stuff I needed to, I wanted to learn. And I haven't actually watched any of your streams actually programming. So I don't actually know how good a programmer you are, because I haven't watched that stuff.
And those are the only two streams I might have seen. So I don't actually have any experience from looking at streams doing that. So I wanna, I'll just, there's one particular Handmade Hero thing that I think I wanna talk about. I don't learn like game development stuff from Handmade Hero, right? Yeah. Like, I'm not gonna learn how fucking linear blending works, right? But there was one thing that he did, and it was, he was doing entity segregation in terms of a working set and a non-working set of entities, and how he was moving entities from one to another. And I would have never done it the way he did it. I still think it's wrong. Like, I think it's just insane. Like, it was so much more complicated than what I would have done. But like, I never would have even thought of approaching the problem that way because it's so fucking weird, right? But like, I was just like, that's stupid. But it's not. It's interesting. And it's like, cause he had this like high set and low set, and he was like compressing entities when he moved them out of it, and then like decompressing when he moved them into the working set. And it's just like, that's a way of doing it. And that way never would have even crossed my mind. So when I say I didn't learn much, I mean, I totally learned that, it's just, I would never do that. So that's absolutely, like, people kind of hand-wave this abstract learning, and definitely the learning in the sense of discovering new techniques is absolutely something you can get from watching anybody program. They can be worse than you and you can definitely discover new techniques, because they may have ways of doing things you've never thought of. And they may turn out to be bad techniques, or they're a bad programmer and in their hands it's a bad technique, and it's a good technique if you know what you're doing. Whatever, like, I don't like any of this value judgment stuff.
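To make the high-set/low-set idea concrete, here is a minimal sketch of what such a scheme might look like: active entities live uncompressed in the working ("high") set, and when one leaves the working region it gets packed into a smaller record for the non-working ("low") set, then unpacked on its way back. The struct layouts, names, and quantization scheme here are invented for illustration; they are not Handmade Hero's actual code.

```c
#include <stdint.h>

/* Working ("high") form: full precision, directly simulated. */
typedef struct { float x, y; int hp; } Entity;

/* Stored ("low") form: positions quantized to 16-bit, hp to 8-bit. */
typedef struct { int16_t qx, qy; uint8_t hp; } PackedEntity;

#define QSCALE 16.0f  /* store positions at 1/16-unit granularity */

/* Compress an entity on its way out of the working set. */
static PackedEntity pack_entity(const Entity *e) {
    PackedEntity p;
    p.qx = (int16_t)(e->x * QSCALE);
    p.qy = (int16_t)(e->y * QSCALE);
    p.hp = (uint8_t)e->hp;
    return p;
}

/* Decompress an entity on its way back into the working set. */
static Entity unpack_entity(const PackedEntity *p) {
    Entity e;
    e.x = p->qx / QSCALE;
    e.y = p->qy / QSCALE;
    e.hp = p->hp;
    return e;
}
```

The trade is obvious from the sketch: the low set is much smaller per entity, at the cost of pack/unpack work and lost precision on every transition.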
I have gone down this whole better programmer thing, and I tried to be clear that it was all kind of hand-wavy or in scare quotes or whatever. And I only went down it because you brought it up. Cause I don't... Yeah, I didn't mean it that way. No, and I know. And like I even said, like, the IQ test for programmers kind of thing, because the whole thing about that is that IQ tests are a dirty term. And I was trying to reference that knowledge, that IQ tests are like, you're measuring along one very specific axis and it's not very general. And how do you even compare two programmers when they just know and work at different kinds of things? So I just wanted to be clear that all of that was garbage and not to be taken seriously. You know, even when I say Fabian is a better programmer than me, maybe he'd deny that. Who knows? Like, maybe from his angle I look like a better programmer, and from my angle he looks like a better programmer. Right? Like, who the fuck knows? Probably not. He's just a better programmer. But in general, it's not that valuable to do those comparisons. Obviously, like, when you're a new programmer, watching an experienced programmer, you're gonna be exposed to way more shit, because of their experience they're gonna have all these techniques that you don't know about. Assuming certain things, like, you don't wanna watch a web programmer who just connects frameworks together. Unless what you wanna do is connect frameworks together. Like, and then maybe you'll discover lots of techniques about connecting frameworks together and what frameworks are out there or whatever. So yeah, I just wanted to be clear that that is not something to take very seriously. But I do think watching other programmers... like, you were watching Per's stream about the push-pull design for his thing, right? And you were like, oh, that's a good idea, I'm gonna do that. I assume, right? Is that what you meant?
Right, and it was kind of interesting, because when I was tweeting at him about that, he was like, I think maybe I wasn't clear enough about this one thing, and he kind of re-emphasized it. I was like, dude, after the first five seconds of showing how it was supposed to work, I already... like, I'm an experienced programmer, I already went down that whole path in my head and went, oh, I see how all of this stuff is awesome and is crazy. And I didn't need any of the details of the stream at all once I just heard that high concept, which is not to knock the stream or anything. It's just like, yeah, it's just being shown the idea. Like, if people just gave me the one-minute summary talks of everything that they do, I'd probably get way more value out of it. But that's just for me as a super experienced programmer. I'm not saying people should do that. It would be great for me. It's like one of those weird things at GDC: because there are always new programmers every year at GDC, there's a tendency for these talks to always be kind of intro, not low-level, beginner-programmer oriented a little bit more. And they do give intermediate and advanced talks, but it just always feels like some of these talks are stuff we all should already know, like, give more advanced talks. And it's like, well, no, the reality is that there are just new programmers always. Yeah, there's definitely, yeah, that's the thing. And so this is one of the interesting things about Handmade Hero. This is something you said earlier, maybe not even this question: just this whole idea of having these experienced programmers stream so that people can do the thing you're talking about, learning from them by watching them program. Yeah, because this was part of that thing, where you were saying me and Casey and stuff. And I only did start doing my streams because I saw Casey doing them and I realized how valuable that was.
And in part from watching the reaction to it; I didn't personally get much value out of it. But part of it was just seeing it. I was like, oh, once I saw that format, I'm like, oh, that makes sense. And once you buy the idea... like, part of that was he had his manifesto, well, not a manifesto, but his description of it, which said explicitly that he got better by working with Chris Hecker and other people, working alongside them. That was how he got better, which is what you were talking about. And I was saying, I'm not sure that's how I got better, but I definitely see that for a lot of people that works, and decided, hey, yeah, I should go down that path, because it doesn't cost me anything to do the stuff I'm already doing on stream. It's turned out with obbg that I have started to slow down more and explain things more. Like, the very first two obbg streams, maybe, I tried to just go full speed. And so it is, I am giving up something now. I am actually going slower, but obbg is all throwaway anyway. Really, obbg is just existing as a teaching thing anyway. So yeah, so full marks to Casey for starting this trend. I'm not getting that much value out of those kinds of ones; watching somebody implement, somebody on Handmade Network implementing something, I probably wouldn't get that much value. Per is an experienced guy who is intentionally streaming about novel stuff that people don't necessarily do. Like, he's aware of that. And that was a goal of his in doing that new stream. He was like, let's look at this crazy idea that I have that people have talked about but weren't already doing. Like, he wasn't the first to ever do it, but it was not an idea that was widely discussed, so. So yeah. You might get something out of some of the rendering lighting. Yeah, no, I was only. I absolutely, that would be it though. I mean, there were like two things that maybe you would get something out of.
It might be, though, that in that same way, there are just techniques I'm not aware of. You do something in some different way that I've never done it. And there could be stuff like that. The problem is, if the density is too low, it's just not worth the time investment. Yeah, the density is gonna be way too low and it won't be worth your time investment. I wanna say another reason why I streamed is entirely selfish, which is just to be like, not as an ego boost, but I just came off of a really difficult project where I felt really shitty about just programming in general. And I wanted to stream where I could be like, all right, I want other people to watch, and it's kind of like a sanity check, to be like, am I actually good enough that people want to watch? And knowing that that was the case was very comforting. Yeah, well, I mean, part of this is, I guess I like teaching, or I like some aspect, I don't know if teaching is the right word, but some aspect of that. Like, I've now been thinking that it might be interesting to take one of my libraries and just walk through it line by line and talk about it. And what talk about means is varying. Ideally, the funny version or the fun version of this would be, like, I'd have something to say about every line of code, but I know that would never happen. So. Even a for loop. But the point is that there are different things you can talk about. Like, when you hit something that there's not that much to say about, you can say, hey, I use this brace style, here's why I use this brace style. And then for something else you talk about some aspect of the syntax, and for something else you talk about the semantics, and for some other thing you talk about some other way you could have done it. A bunch of stuff you could do. And I feel like that might be actually more valuable than the coding stream for a certain audience, in terms of the density of useful information, because, hey, I'm just talking about it.
I'm not trying to code it. In terms of like solving problems and stuff, people love to see me solving problems because that's something they don't know how to do. And I can't tell them how to do it. So watching it in practice gives them some insight into the process. So it's not that I wouldn't wanna do normal streams and this is a replacement for that. It's just a thing that might be different and might have value to some people. So that's a thing I've been thinking about doing. And I'm just floating it now to see what people's reaction is. So I have a question about solving problems because I do that too, right? I'll solve some sort of a varying complexity problem. Nothing's super complex because that always takes days and paper, right? But anything that you can sort of solve in a six hour block, I solve these problems as well on stream. Do you ever get nervous that you're not gonna be able to solve it? Cause I get that all the time. I'm just like, oh my God, what if I go down some like crazy wrong, clearly wrong path and I look like an idiot? Do you ever get that? Cause I do. I've done it. I know, like, I think as part of like, I don't think of myself as a great programmer. Like, I mean, I do, right? Like, part of my, like- There's like the required modesty to not seem like an asshole. No, no, but so that's, I think it's something stronger than that. I think it is that like my self image is not tied to me being a great programmer. Like if I stop and analyze things, I'm like, oh, I think I'm a pretty great programmer as long as I'm motivated and being productive. I'm a pretty great programmer. Like in others, my big failing that we don't talk about very much is sometimes I'm not very productive and that brings my average productivity down. And you don't see that on stream. Like I don't stream that. Like if I don't feel like working, I don't stream. 
So in that sense, like, there's an analytical level where I can look at that and say I'm a great programmer, but I don't think my self image is tied to the idea that I'm a great programmer. And so when I fail, it's not hurting my self image; like, I'm not failing to live up to my self image. So I don't really care. And I don't really care what other people think, because my self image is sort of the most important thing there. And part of it is because I want to document failure. Like, I think that's actually an important thing for people to learn from the stream, is to watch me be fucked by OpenGL for four hours with a black screen. So that when it happens to them, they go, hey, it happened to an experienced programmer for four fucking hours. Well, maybe it wasn't four hours, but you know, whatever. When it happens to them, they're not as bummed out. And it may scare them off. It might be like, shit, if that's what programming is like, I don't want to do it, but I'm okay with that, because yeah, you got to cope with that shit if you're a programmer. What was the question? I asked, I said that, like, I get really nervous. Oh yeah, yeah, right, right. Something difficult for me. Difficult for me, not OpenGL, like an actual hard problem. Yeah, so one thing that I spent dozens of hours on in obbg was the threaded procedural generation stuff. It just turned out to be this total nightmare. It took me a long time. I ended up rewriting the system to a simpler system. And then, because it was a rewrite that modified in place, there were more bugs in the final system than there would have been if I'd done it directly that way, because there was some old code that wasn't working the way I thought it was, or whatever. And that was just all a mess. And I was upfront about the fact that I don't do threading that often and that this is a hard problem. The networking, I totally fucked. Like, I really didn't understand the whole networking model in my head.
Like, some part of it wasn't clear, because I'd never done it. The whole client side prediction stuff. Yeah, we were actually having the exact same problem at the exact same time. And I totally understood... because the problem is, I totally understood what the client did in client side prediction. I didn't understand the server. And so I was trying to ask these questions of how it should work. And I sort of had four descriptions of how it might work, and was posing this to, like, you know, Gaffer on Games, Glenn Fiedler, or Fielder. And none of the people I asked gave me an answer. And then somebody finally linked me to a talk from Blizzard or something. The Overwatch thing, right? Yeah. And I watched that the first time through and I was like, no, that doesn't tell me anything. I don't know. And then I watched it a second time and I was like, oh, there. I heard the thing that matters. My mental model of this problem is totally wrong. And I went back and I looked at my four descriptions of how things might be, and I was like, none of those make any sense. Like, they just make no sense if you have the correct mental model. And I was like, okay, now I know why nobody gave me an answer: because the questions I was asking were nonsensical, and they didn't even understand what part of my mental model was wrong to correct it. See, one thing that was really interesting about that problem that you had, is I had literally the exact same problem. There was something I didn't get, and in that Overwatch video he said one thing, I think it was about how they handled favoring the shooter when somebody moved and how they had to rewind something, and that made it click for me too. And it was like, oh, I get it now, and that's why I didn't understand it before. It's because I totally didn't even understand what the problem was. I think it was actually something different for me, because that part I had read in the Valve stuff and...
Oh no, it wasn't that specifically, but it was that part of it that triggered something else that made me... Well, maybe it was the exact same thing; that would be kind of funny, because I still remember what part of my mental model was wrong, but I don't think I talked about it on stream. So the part of my mental model that was wrong was, somehow I was forgetting that in the end, the server's view of the world has nothing to do with the client's view of the world. That the clients all have different latency to the server, and the server, based on that latency, is just running shit. Like, the server is canonically authoritative, and everybody's input gets to the server delayed some random amount, random for each of them, but hopefully constant for each of them. Constant-ish, yes. But constant for each, and different for each, and different across each. Okay, yeah, I know what you mean. Fucking language. And somehow it was not clicking for me that that server was so decoupled from the clients that its notion of time just doesn't correspond. I can't even put good words on it, right? Because the concepts are so detailed, and partly because I haven't worked on it in six months, so I don't remember the exact thing about it, but I can still remember that that was where the mental shift was: my understanding of what the server was trying to accomplish was wrong. What I was imagining, that the server was trying to keep things in sync, didn't make any sense. I think part of it was just sort of, and I tried to say this at the beginning of this, that the client side prediction is trying to get the client stuff to happen as early as possible so your client latency feels good. But as a result, everything that's coming from the server is just much, much later. In other words, if you had to wait for your own stuff to round trip, then everything would be perfectly in sync.
You'd be seeing exactly what's happening on the server, because you get everything else from the server except yourself, but the client side prediction is pushing you way in advance. So what you're doing is totally discordant: in your world, what you see yourself doing is totally out of time with what the server is doing. Yes. And that was it, right there, that whole difference between what you saw and what the server saw. I somehow didn't get that, even though it sounds trivial when I say it that way; that's part of the problem, I understand it now. So I can't quite describe what I thought wrong about that, but somehow I thought there was some greater consistency between those two. Yeah, no, that's exactly the thing I had too, because you read about latency compensation on the server and you're like, oh, and then it rewinds time. And I was like, wait, what? Hold on, what do you mean it rewinds time, like, for all clients, and then resimulates? I just don't even understand how this makes sense. And then something in that Overwatch talk, I think it was that thing, it was just like, oh, I get it now. And I rewrote it three times before watching that Overwatch thing. And then I rewrote it a fourth time, and it was like everything just worked, and it was really simple and not stupid. All right, well, we've only tried to answer one of these questions. So let's maybe move on. Okay. We're at three hours. I have to go in 35 minutes. All right. Here's a question that I vote for skipping, but you're welcome to say we don't skip it. How, Sean, you Sean, how do you serialize and unserialize game state data when not using malloc? Are you creating a memory arena and always storing base pointer plus offset? When not using malloc? Yeah. When not using it, it's easy. You have one chunk of memory. So, serialization. So he's saying, how do you resolve the pointers? And it just doesn't seem interesting to me.
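The client-side prediction model that finally clicked in the discussion above can be sketched in a few lines: the server is authoritative and runs each client's inputs whenever they arrive; the client applies its own inputs immediately, and when an authoritative state for input N comes back, it rewinds to that state and replays every not-yet-acknowledged input on top. This is a single-variable toy with invented names, not anyone's actual netcode; real games also interpolate remote entities and handle dropped packets.

```c
#include <string.h>

#define MAX_INPUTS 64

typedef struct { float x; } State;   /* toy world: one position */
typedef struct { float dx; } Input;  /* toy input: move by dx   */

typedef struct {
    State predicted;            /* what the local player sees            */
    Input pending[MAX_INPUTS];  /* inputs not yet acknowledged by server */
    int   first_seq, count;     /* pending covers seq first..first+count-1 */
} Client;

static State apply_input(State s, Input in) { s.x += in.dx; return s; }

/* Player pressed something: predict locally, remember it for replay. */
static void client_local_input(Client *c, Input in) {
    c->pending[c->count++] = in;
    c->predicted = apply_input(c->predicted, in);
}

/* Server says: "after processing your input acked_seq, the state was s".
   Discard acknowledged inputs, rewind to s, replay everything newer. */
static void client_server_update(Client *c, State s, int acked_seq) {
    int drop = acked_seq - c->first_seq + 1;
    memmove(c->pending, c->pending + drop,
            (size_t)(c->count - drop) * sizeof(Input));
    c->count -= drop;
    c->first_seq = acked_seq + 1;
    c->predicted = s;
    for (int i = 0; i < c->count; i++)
        c->predicted = apply_input(c->predicted, c->pending[i]);
}
```

The key point from the conversation falls out of the structure: the client's predicted state runs ahead of, and is decoupled from, the server's authoritative timeline, and only the rewind-and-replay step reconciles the two.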
It's like, duh, there's a couple of ways to do it and you just do it. There's no resolving any pointers, because if you use VirtualAlloc, you set your internal per-process memory address that's the start. So then everything's just deterministic off that. But if you have pointers... Yeah, but they're all pointers to what? External data? No, pointers internally into that block. But can't that block's address change with address space layout randomization? I don't actually know how that works, but in debug, I just turn that off. But how would you save and load in non-debug? Do you just save what your base pointer is? But then you have to do pointer... Then you have to fix everything up. Yeah, yeah, that's exactly why I didn't want to talk about it: because, yes, you have to fix up your pointers. Woo. But I don't even know, I haven't experimented with it, but with, what is it, ASLR, address space layout randomization or whatever it is? Yeah. Does that even change? Like, if you VirtualAlloc and pass it an address? No, no, that's fine. But now something else, like a DLL, could already be at that address or something, right? But no, no, because you just start at like two terabytes and nothing ever goes into that space. Yeah, but in 10 years, maybe your game will break because something's different. Oh no, the debug serialization thing is gonna break in 10 years, that's fine. So if you want to serialize game state not for debug reasons, like for save games, do you do something different? Yeah, well, I would never use that specifically for save games, because it would be like a ton of memory. Yeah, no, he might have been thinking that, though. You just memory-map a file and then memcpy to it, and then you don't have to worry about it. Okay, a lot of people are asking questions I think they're asking each other, so I'm skipping those. Okay.
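The "base pointer plus offset" variant from the question can be sketched like this: everything lives in one arena, and cross-references are stored as byte offsets from the arena base instead of raw pointers, so the whole block can be written to disk and reloaded at any address with no fixups. All names here are invented for illustration. The alternative discussed above, reserving the block at a fixed address (e.g. VirtualAlloc at a base like two terabytes) so raw pointers survive a save/load, avoids even this, at the cost of tying the saved file to that one address.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct { unsigned char *base; size_t used; } Arena;
typedef uint32_t Ref;   /* byte offset from the arena base; 0 = null */

/* Bump-allocate from the single block (no per-object malloc). */
static void *arena_push(Arena *a, size_t n) {
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* Convert between pointers (for live use) and offsets (for storage). */
static Ref   ref_of(Arena *a, void *p) { return (Ref)((unsigned char *)p - a->base); }
static void *ptr_of(Arena *a, Ref r)   { return a->base + r; }

/* Example record: a list linked by offsets instead of pointers. */
typedef struct { int hp; Ref next; } Monster;
```

Because the records contain only offsets, "serializing" is just writing `base` through `base + used` to a file, and loading is reading it back anywhere.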
There's just a lot of conversation I'm scrolling by. A lot of talk about Photoshop blending for some reason. Oh, because of the Venom potty stuff, I think. Oh, okay. I have a lot to catch up on because we were talking on that one topic for a long time. What was it? It's an interesting topic, because we both, I find it really interesting that we both had the exact same problem and both solved it with the exact same talk. Because I remember hearing you saying that you had that problem, and I remember figuring it out from watching that talk. So I sent you the talk and you were like, yeah, I didn't answer my question. I was like, okay. And then it did. Yeah. Okay, here's where we were talking about Mu, the pair stuff. Yeah, my API, my platform API is not entirely like that because I can do anything asynchronously in mine. But when I watched that, I was like, oh, that's cool. But it's not how we do it. Apparently, Cuba Caleb had a C99 question that you skipped. I'm gonna go back and see if I can find it. That's an excellent interjection there. They're still in the, I wish the channel list would actually update when people leave. Okay, I got through everything. I guess he skipped my C99 question. Can you re-ask it? Because I didn't see it. Yeah, I'm going back and looking for it. Considering that both of you use MSVC and C on a daily basis, are there things from C99 and C11 that you would like to use but can't? Yes. Yeah, I can be specific, but... Well, C11, the fixed size... You're using C++, right? Yeah, I use... There's a compiler. So you... Yeah, but it's close enough. Yeah, well, there are some things in C99 that have never been put into modern C++. Which is like the static size initializer on arguments. I was thinking the labeled struct initializers, I don't know what they're called. Oh, yeah, yeah, those things. Is that C99? Yeah, those would be nice. Yeah, I think that is the one thing I want and declare anywhere, obviously. Yeah, I have that feature. 
As I've said before, I don't miss declare-anywhere that much, but... Honestly, I could probably look at my code and find that 99% of the time it wouldn't make a difference for me. Like, just... I find it really annoying to declare anywhere anyway, right, so I'm likely not to use it. I mean, I'm scrolling through code right now and... Okay, so there's one particular case where I call a function before I declare my variables, but like... Yes, yeah, yeah. Almost all the time I declare at the top of the scope. Being able to put the int i in the... In the for loop. I was just gonna say, that's the one case that I probably miss the most. Yeah, that's basically the only time I used declare-anywhere. And because I use VC6, the C++... You have to double... You have to do the double thing. You'd have to do the double brace anyway. So in VC6, I would never use that if I were doing C++. So it's another reason I don't bother with C++, because I don't get that benefit. Wait, that's not... You can't do that in C? Like, 99? No, it's in C99, but VC6... Well, they've never added all the C99 stuff. I don't know if their modern C compiler even does it. Maybe it does, because they added a bunch relatively recently. But I bet you 2003 didn't have declare-anywhere in C. But in C, in modern C, whatever. In C99. C99 added declare-anywhere. Can you put the int in the for loop? I don't remember if they put it in the for loop or not. We'll have to go to chat to find that out. I'd be very curious why they wouldn't add that. The way in which you said it sounded like they didn't add it. No, I assumed that they did. I just was saying that the... Yeah, I don't really keep up to date with the differences. But yeah, I don't use... I use C++ technically. Yeah, yeah, you're using a C++ compiler, that's all I meant, yeah. Yes, and the biggest thing is, I think that I would get more out of C++ just by not having to type the word typedef 4,000 fucking times. Like, that's so annoying.
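For reference, here are the C99 features under discussion side by side: designated ("labeled") initializers and declarations inside the for statement. The struct and values are made up for illustration.

```c
typedef struct { float x, y, z; } v3;

/* C99 designated initializer: name the members you set, the rest are zero. */
static const v3 up = { .z = 1.0f };

/* Array designators work too: only elements 0 and 3 are named. */
static const v3 corners[4] = { [0] = { .x = -1.0f }, [3] = { .x = 1.0f } };

static float sum_x(const v3 *v, int n) {
    float total = 0.0f;
    for (int i = 0; i < n; i++)   /* C99: 'int i' scoped to this loop */
        total += v[i].x;
    return total;
}
```

As chat confirms later in the transcript, C99 did get both; designated initializers in this form only reached C++ with C++20, which is why they come up as something missed.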
Well, I mean, if my stb libraries could be C++ only, I would just namespace everything and not have the fucking prefixes. And that would be nice. Yeah, the namespaces would be nice. I don't even... The problem I have with namespaces: colon-colon is just fucking irritating to look at. So that's why I'm not using it. But you do using, or whatever it's called. Whatever it is to... Yeah, but if you're using using, then I'm gonna want a prefix. Well, no, the idea there is that you could use using as long as it doesn't conflict. And if it does conflict, oh, you can't use using. You could figure out a way to do it. I honestly think that the stb underscore style is better. It sucks for me mainly because of the private stuff, not the public stuff. Like, that all the static functions have to be prefixed as well. If I could... Yeah. If only the public functions... if I could put the private stuff inside a namespace, and then all the public stuff has to explicitly colon-colon to get into the private stuff, and then callers never see any of it, that would probably be the sweet spot. Yeah, I use a single underscore at the start for anything that's private. Yeah, which is... It's not ideal, but it's better than C++. Yes, it's potentially gonna conflict too, but yeah. Yeah, C99 allows declarations in for loops. Okay. That's good. Yeah, I really don't see, like... A lot of people seem to be obsessed with using C over C++. I think C++ is better to use than C. Like, I just think function overloads are... Yeah, well, we talked about that. We talked about that last time, that I actually dislike function overloads, but yeah. Oh, do you? Okay. I don't remember. I believe we talked about it last time. Yeah, I remember you said something like that. The bottom line about that was that as long as the function overloads literally do the same thing, then I don't mind it. But if the function... Well, that's what I was talking about when we were talking about the API and having the redundant functions.
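The prefixing convention being compared above looks like this in stb-style single-file C libraries: the public API gets a short library prefix, and internal helpers get the prefix plus an extra underscore, since C has no namespaces. The library name `stbx` and its contents are made up for illustration.

```c
/* Public API: the names a caller is meant to use. */
int stbx_widget_count(void);

/* Private helpers: static, double-underscore prefix, so they never
   collide with the caller's own names and never leak from this file. */
static int stbx__count = 0;

static void stbx__recount(void) {
    stbx__count = 3;  /* pretend we scanned something */
}

int stbx_widget_count(void) {
    stbx__recount();
    return stbx__count;
}
```

This is the trade being complained about: every static helper carries the prefix too, whereas a namespace would let only the public surface be qualified.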
Yeah, it's exactly that. It's exactly that. If you have that case, then I'm okay with it. But literally anything other than that bugs me, so. Yeah, I think so. I think I'm the same way. But that's reason enough to be like, okay, that's a feature worth having. Sure. And vector math too. And I know you don't think vector math is right, but you're wrong. Yeah, yeah. That was the whole discussion. It's not even that I think vector math is wrong. It's just, in terms of... Both of these things are also sort of a language design thing. So if you're saying what I want in the language I use, you can justify all sorts of things, which is what you do. I'm not being critical. What I'm about to say is not critical of that position. But what I'm sort of also concerned about is the ecosystem of other people's code, because I would like to use other people's code. But I can't use other people's code. And I can't use other people's code for various reasons. Some of it is because, well, in practice now, it's all because I use VC6. But even before VC6 was dead, I couldn't use other people's code because it was bad code. And some of the reasons it's bad code is because the language encourages people to write bad code. So people will do a lot of operator overloading and function overloading that is not in line with the practices that I think are acceptable. And because the language lets them do that, I can't use their code. Like... Right. And so I totally buy that, like, a lot of people will argue, who cares if the language lets you fuck yourself? Like, I'm going to use it in a way that I don't fuck myself, hooray. And as long as you don't use anyone else's code, that's probably fine. And I'm a person who doesn't use anyone else's code. So it's all probably fine. But there is this whole thing of, because I'm a participant in the ecosystem, because I do make libraries for other people, I'm still very aware of a lot of this stuff. Right.
And I do feel like C++, the world is worse because C++ leads to people making boost and shit like that, that should not be in anybody's code. Yeah, there's definitely, like, you're right. Like from an absolute standpoint, considering everybody's position, you're absolutely right. You shouldn't be able to write code like boost. But you should be able to overload 25 redundant functions. So I don't know how you... Yeah, yeah, no, I totally understand. I totally understand. We've definitely established in the past that we have different opinions on this and we're totally comfortable with the other people having, with each other having the opposite opinion. Right. We're not down on each other in any way, shape, or form for having the opposite opinion. Right. Well, except you think I'm crazy, but yeah, for the factors. Actually, I don't think you're crazy. I think you're in an unfortunate circumstance with regards to VC6, because I'm in the same circumstance, because I'm stuck on 2010. At least I have a 64-bit compiler. Yeah. Everything else is fucking on you, like I don't know what I'm gonna do. I don't wanna be stuck on 2010 in 20 years, the way you're stuck on VC6, but I feel like I might be. Yeah. Chat, we're still waiting for another question, although we're near the end of your time, I think. Yeah, I got another 20 minutes, 23 minutes. We could stop early. There's nothing wrong with us stopping early. Yeah, yeah, yeah. But I mean, there's still lots of questions to be answered, but yeah. I don't think there are. I didn't see, at least I just missed a little. No, no, but I mean. Yeah, in the abstract, there are lots of questions to be answered, absolutely. Yes. But like, I think, regarding this, because we're still sort of talking about this, while we're waiting for another potential question. Sure. 
I think if I was on a team, like if I was in, say, John Carmack's position, where I was now managing 200 programmers or something like that, I actually don't know what position I would take. Would I take a, no, you have to use C? Or would I say, these are the rules for C++, but someone's gonna break them? Because that's just how it is. Yeah, if it's five or 10 programmers, you can actually enforce it, but when you're up to 100 or 200, yeah, it's not. Yeah. Maybe the answer is: use C. Slightly modern C++ is okay. No, it's not. That's the thing: anything that's called modern C++ is not okay, in my opinion. Like, it's just not. Like, I was reading somebody's code and it used a lambda, and I literally couldn't figure out what the fuck it did. Because of its capture syntax, I can't even fucking understand what that code is. The problem with these-are-the-rules-to-follow C++ is, people are just gonna not follow them. One cool thing about VLAs that I realized is you can cast a malloc pointer to, like, a pointer to an array with a non-constant size. Oh, well, there you go, that's a good thing in C. Yeah, we just learned something that I can't use because I don't have C99. And I can't use it because I use C++. Yeah, I didn't know whether that was in C++ or not. No, none of that stuff is in C++. Wait, in C, you could do, in C, modern C, you can do, like, a dynamic array size on the stack. That's the VLA stuff. On the stack, right? That's part of the VLA stuff, yeah. Okay, actually I think I have to go now, because I hear people here. Okay. So I'm gonna go. Yep, that's fine. Thanks everyone for watching. I'll stick around for just a little bit longer, but the podcast is officially over. All right, thank you guys. Thanks, Sean. Thank you. So, last questions before I shut down? Anybody? And since Sean has gone, this is not intended for questions for me.
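The VLA trick mentioned in passing above, as I understand the garbled audio: C99 lets you declare a pointer to a variably-sized array, so a flat malloc'd block can be indexed as a 2D grid with runtime dimensions and no hand-written row*cols+col arithmetic. Not available in C++. The function and values are invented for illustration.

```c
#include <stdlib.h>

static int grid_corner(int rows, int cols) {
    /* Pointer to a VLA row type: each element is 'int[cols]', so the
       compiler computes the row stride from the runtime value of cols. */
    int (*grid)[cols] = malloc((size_t)rows * sizeof *grid);
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            grid[r][c] = r * 100 + c;       /* 2D indexing, no manual math */
    int corner = grid[rows - 1][cols - 1];
    free(grid);
    return corner;
}
```

Note the array itself lives on the heap here; only the pointer's type is variably modified, which sidesteps the usual stack-overflow worry about VLAs declared on the stack.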
It's like, if there were final clarifications about what we're talking about in the podcast, because we did the sudden shutdown, it's not, I don't wanna throw this open to totally arbitrary stuff. One thing is, when I was scrolling through, I didn't see if I got a response to the idea of doing a stream going through one of my libraries. So if people had reactions to that and wanna repeat them, that would be helpful to me. All right, well, since nobody had any other questions, I will in fact stop it here. So thanks everybody for watching and maybe in a month or so we'll do number 003, we'll see. We still have a big list of questions and that we haven't done, although their rank is going down. So we'll probably take a new round of questions a week before we stream on Twitter. And I think that's it. So all right, thank you.