So, we're going to talk about Rust today. It's a general overview of why you should be excited about Rust. I'm Alex Burkhart. I'm a consultant at Mutually Human in Columbus, Ohio. We do a lot of rich web applications, Rails, Ember, stuff like that. We all wear a lot of hats, but one of the hats I love to wear is learning and teaching. I love learning stuff, and I assume all of you do as well; that's why you're here. I started learning Rust about a year ago, and at this point I'm quite excited about it, which is kind of obvious since I'm up here trying to tell you why I'm excited about Rust. My pitch is that Rust is the best that Haskell, C++, and Python have to offer. It's managed to pull people from a wide variety of backgrounds into its community, and those backgrounds have pulled its influences in different directions. It's drawing on research from the last 40 years. It's not trying to be a Haskell-level research language; it's trying to be a better language than what exists out there, because Ruby's garbage collector uses a design invented in the 60s. We can do better than that. We can learn from our mistakes as an industry, and we can write a better language. We can bring joy in programming to people who aren't writing Haskell or Lisp every day. So I'm going to cover some of the highlights that Rust gives us. When they set out to write Rust, they set out to design a safe, concurrent, practical, static systems language. That means safety and concurrency are not afterthoughts; they are explicit goals. And interestingly enough, practicality is also a very explicit goal in the design of the language. That goal comes up in a lot of the features, a lot of the reasons why the language does things. These goals give us context for understanding the language as a whole.
As more context, Mozilla has been building both the Rust compiler and an experimental browser engine called Servo for the last several years now. They're both written in Rust, and the language has been co-evolving to fit the needs of a real, large compiler and a real, large browser engine. And that informs the goals. What do you mean by systems language? I will get to that; there's a lot of baggage with those two particular words. But in some sense, the goal is to gradually replace C++. We're not going to accomplish this overnight, and that's okay. C++ has a lot going for it, and the C ecosystem in general is full of amazing tools. But safety and concurrency are not among its strengths. And we're going to win those small battles slowly over time, and hopefully we'll be able to write better software systems with Rust. The best explanation for this was actually from Rust for Functional Programmers: Rust keeps the abstract machine model that C gives us, but innovates in the language interface. This makes a lot more sense when you think of C as a portable assembly language, because that was its goal; that was what it was designed for. All of the major concepts in C map directly to hardware concepts, and Rust keeps that strength. That's really wonderful, because that stuff is well documented, it's fairly well understood, and in general it's a pretty practical way to design a language. But Rust gives us a nicer language to work with as a programmer. And as part of this idea of keeping the abstract machine model, we have certain things like no runtime. This means we have no memory overhead from a runtime, and it makes integrating with other languages really easy.
And because we have no runtime, we have no garbage collector, which is also great, because again, no overhead and no pauses, which gives us predictable performance. These are all part of the idea of zero-cost abstractions, which was one of the ideas behind C++: at compile time, we get better abstractions than what we had before, and they shouldn't come at a runtime cost. We'll see this come up in Rust in our memory management, our polymorphism, our iterators, even down to low-level stuff like structs and enums. That's going to let us build a better programmer interface, and that's really important, because we all have to work in this every day, and working with a language that's really pleasant matters to all of us. What's nice is, since Rust is not a superset of C, we don't have to carry C's baggage. We can use these zero-cost abstractions and build a better language than what could have been built when C++ was designed. In general, the syntax is not as important as the semantics of the language. So, some of the things we do have. We have immutability by default. It's a really good thing; it makes reasoning about our code much easier. If you try to mutate something that you've not explicitly marked as mutable, you get a compile-time error. It's great: you catch the errors really early. We have type inference, so we have to annotate our functions with explicit types, but pretty much nowhere else, and that's pretty nice. We've all kind of agreed that types are great documentation, so having them on our functions is really convenient because you know what your inputs and outputs are, but you don't have to type everything out every time. Something we have that C doesn't is enums that are algebraic data types, just like you have in Haskell. These are just variants of one type.
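A minimal sketch of those features together: immutability by default, type inference on locals, and an enum whose variants can carry data. The `Color` enum and `describe` function are my own illustrative names, not from the talk.

```rust
// An algebraic data type: variants of one type, some carrying data.
#[derive(Debug, PartialEq)]
enum Color {
    Red,
    Green,
    Custom(u8, u8, u8), // variants can hold payloads, like Haskell's ADTs
}

// Function signatures need explicit types...
fn describe(c: &Color) -> &'static str {
    match c {
        Color::Red => "red",
        Color::Green => "green",
        Color::Custom(..) => "custom",
    }
}

fn main() {
    let favorite = Color::Custom(30, 60, 90); // ...but locals are inferred
    // favorite = Color::Red; // compile-time error: `favorite` is not `mut`
    let mut current = Color::Red; // mutation must be opted into with `mut`
    current = Color::Green;
    println!("{} then {}", describe(&favorite), describe(&current));
}
```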
And these are in addition to our traditional C-style structs. Those are the building blocks for our data containers. And since we have these nice enums, we also have destructuring. We have a match statement where we can take an enum, step through each of its cases, and match on its shape, its form. This is really, really powerful, and it's not just the match statement: destructuring is pretty pervasive within the language. We have a very powerful trait system. Traits are very analogous to Haskell's type classes. They're our methodology for ad hoc polymorphism, where we have different behavior per type for the same methods. Traits also let us put bounds on our types when we're writing generic functions, so you can rely on those constraints. Again, these are all compile-time ideas, and we can use these traits for all sorts of higher-level reasoning; traits actually let us reason about thread safety at compile time, which is really, really important. Rust comes with a powerful iterator library, and it uses closures in a lot of its APIs, so you end up with very Ruby-like APIs for iterating through our collections. We know that during this iteration everything is safe, and there are no runtime checks; what's really interesting is that you actually get faster performance than if you had the runtime checks and were doing the indexing manually. Rust comes with a hygienic macro system, so you can write code that writes code, and it won't clobber other things or have naming conflicts. This is a really powerful way to clean up our code. Here we have the vec! macro, which just expands into the code we would normally write, but it makes our day much nicer. Rust comes with built-in testing, so we have xUnit-like tests: you annotate a function as a test, and then you assert something.
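A hedged sketch of those ideas side by side: a trait for ad hoc polymorphism, a trait bound on a generic function, and a Ruby-like iterator chain built from closures. All the names here (`Describe`, `Coffee`, `announce`) are illustrative, not from the talk's slides.

```rust
// A trait plays the role Haskell's type classes play: per-type behavior
// for the same method, resolved at compile time.
trait Describe {
    fn describe(&self) -> String;
}

struct Coffee {
    temperature: i32,
}

impl Describe for Coffee {
    fn describe(&self) -> String {
        format!("coffee at {} degrees", self.temperature)
    }
}

// The `T: Describe` bound is a compile-time constraint we can rely on.
fn announce<T: Describe>(item: &T) -> String {
    format!("Here is {}", item.describe())
}

fn main() {
    // Iterator adapters take closures, much like Ruby's blocks, and the
    // bounds checks are compiled away rather than done at runtime.
    let temps: Vec<i32> = vec![140, 150, 160]
        .iter()
        .map(|t| t - 10)
        .filter(|t| *t > 130)
        .collect();
    println!("{:?}", temps); // [140, 150]
    println!("{}", announce(&Coffee { temperature: 150 }));
}
```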
There are popular libraries for both generative testing, like QuickCheck, and also BDD, or specification-driven testing. What's really cool is that the rustdoc tool lets us generate documentation from our comments, and when you run your tests, it will actually compile and run any code examples in your comments to make sure your documentation stays up to date with your code. So any code examples you have in your documentation are tested as well. And then it gets really cool. We have a modern, sane module system: there are no global namespaces, there are explicit scoped imports, there are no header files, and there's no painful linking that you have to do. That flows right into our modern package management solution. There's a tool called Cargo for managing Rust packages. It's our build tool, our package manager, our dependency management tool. It integrates with crates.io, our centralized package repository, which at 1.0 had 2,000 packages uploaded to it. Rust is the first language to ever ship with a package manager at 1.0. We also have remarkable error messages. The core team cares a lot about error messages because they're part of the language interface that you deal with every day, so bad error messages are considered bugs. This is super practical when everyone is new to the language: you get really, really targeted advice on what you should do to fix your code. And that's part of the wonderful community that we have in Rust, because everyone right now is a Rust newbie, so everyone is very helpful, very patient. There are a lot of wonderful docs, blog posts, and talks being produced. There's a discussion forum, an IRC channel, a subreddit, and meetups. The community has been really great, and that's part of how we got 2,000 packages: people are really excited, and it's really fun to be part of that. But we've been talking a lot about what is in Rust. And that's cool, because it's a nice collection of features.
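Here's a sketch of what those testing features look like in practice: a doc comment whose embedded example is compiled and run by `cargo test`, next to an ordinary unit test. The function `add_temps` and the crate name `my_crate` in the doc example are made-up names for illustration.

```rust
/// Adds two temperatures together.
///
/// Code blocks in doc comments are compiled and run by `cargo test`,
/// so this example is checked against the real function:
///
/// ```
/// assert_eq!(my_crate::add_temps(90, 10), 100);
/// ```
pub fn add_temps(a: i32, b: i32) -> i32 {
    a + b
}

// A plain unit test lives right next to the code it checks; the
// `#[test]` attribute is all the built-in test runner needs.
#[cfg(test)]
mod tests {
    use super::add_temps;

    #[test]
    fn it_adds() {
        assert_eq!(add_temps(1, 2), 3);
    }
}

fn main() {
    println!("{}", add_temps(90, 10));
}
```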
That's not all of them, obviously, but this is sort of a highlight reel. Some of the more important ideas are what's not in Rust, because being something to everyone is not really a virtue. So what isn't in Rust? Well, there's no null. Instead, we use our enums to build the Option and Result types, and these give us composable interfaces for when we have no value or when we have errors. There's no implicit type conversion, which is great, because we value explicitness and predictability. You actually know what your code is going to do, because it type-checks and nothing happened by magic. There are no exceptions, because, again, we value predictability, and doing cross-platform exception unwinding with no runtime is apparently really difficult, especially when you have to interoperate with other languages. So instead, we have really composable error types and a lot of tools for working with them. There's no inheritance, because inheritance is, in general, a runtime concept. Instead, we truly favor composition over inheritance by simply not having inheritance available. There's no function overloading: it's a little confusing, it's pretty error-prone, and in general traits have turned out to be a much more powerful, much more general way to provide the same sort of functionality. Speaking to a largely Haskell crowd here: there's no laziness, at least by default. This makes reasoning about your code a little easier, and it makes composition a little harder; it's a trade-off. But that's just the default: iterators themselves are lazy in Rust. There are no higher-kinded types yet. They actually are wanted in the language; they're just not a priority feature, because they're something that can be added in a forward-compatible way. It wouldn't break backwards compatibility to add higher-kinded types, so they're on the post-1.0 priority list, and they would actually improve a lot of things.
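A small sketch of how Option and Result stand in for null and exceptions. The `parse_age` and `first_adult` functions and their rules are made-up examples, not from the talk.

```rust
// Failure is an ordinary value: Result instead of exceptions.
fn parse_age(input: &str) -> Result<u32, String> {
    // `str::parse` itself returns a Result; `map_err` converts the error type.
    input
        .trim()
        .parse::<u32>()
        .map_err(|e| format!("bad age: {}", e))
}

// Absence is an ordinary value: Option instead of null, so the
// caller is forced to handle the "no value" case.
fn first_adult(ages: &[u32]) -> Option<u32> {
    ages.iter().copied().find(|&a| a >= 18)
}

fn main() {
    match parse_age("42") {
        Ok(age) => println!("age is {}", age),
        Err(msg) => println!("error: {}", msg),
    }
    // Combinators make these types composable.
    let doubled = parse_age("21").map(|a| a * 2).unwrap_or(0);
    println!("{}", doubled); // 42
    println!("{:?}", first_adult(&[12, 19, 30])); // Some(19)
}
```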
With them, we could have very generic collection APIs in a way we can't right now. Along the same lines, your favorite Haskell language extension probably doesn't exist in Rust yet. There are no type families or GADTs or whatever your favorite thing is that you'd like to do research on. So there's some stuff missing, but in general, there are a lot of things that can be added in a forward-compatible way; they just weren't prioritized yet. There's no strict purity. In a lot of ways, we're aiming for memory safety when we talk about safety, rather than complete referential transparency. In practice, this is actually a pretty good balance: it makes picking up the language pretty easy, and it keeps reasoning about your code fairly straightforward without having to jump through a lot of hoops. And speaking of memory safety, we don't have a garbage collector. I mentioned this before: a garbage collector makes FFI much harder, makes memory overhead much higher, and in general has a performance impact. But on the flip side of that, we also don't have any manual memory management. You still have to think about how memory is being handled, but you don't have the ability to shoot yourself in the foot in the way that you do in C and C++. Or at least, Rust helps you aim a little better, as opposed to letting you do whatever you might not mean to. Because we've removed manual memory management, we don't have pointer arithmetic, we don't have null pointers (we don't have null at all), we don't have double frees, we don't have never-frees, we don't have dangling pointers. We've removed a lot of the things that you make errors with all the time, accidentally, unless you're writing perfect code. That's how we've removed a lot of the unsafe elements while retaining a lot of the control. Because that safety is an explicit goal.
And having no garbage collector and no manual memory management aligns with those goals. But Rust does take special care with its memory to achieve those safety goals. Very explicitly, our idea of unsafety in Rust is memory unsafety. It falls into these categories: if you're accessing uninitialized data, you have undefined behavior. If you're writing invalid data, say writing random bits to an enum's memory location, that's undefined behavior again, because who knows what it's going to do. Then there's breaking the aliasing rules with our pointers. Data races between threads are actually considered a memory safety bug. And so is calling foreign functions, because those might modify something in an unsafe way, so you can't really rely on them. But these are problems you cannot run into in safe Rust code, which is 99% of the Rust code you'll encounter. The other thing here is that unsafety is not recursive. It doesn't infect everything the way IO infects everything in Haskell, where as soon as you've touched IO, it touches everything. Safe Rust is a safe foundation to build on, and really what unsafe means is that a human has checked this code rather than the compiler; you can't rely on the compiler when doing these kinds of things. And in general, you can't rely on the compiler to fix all your problems either. You can still go out of your way to create reference-count cycles that end up as a memory leak. You can still end up with deadlocks when you're doing multi-threading. IO is not considered an unsafe thing, but you can do IO and end up with problems, and the same goes for integer overflow. If you're not careful, you can still make some mistakes, just not memory safety errors. You don't want these things, but they're not things the compiler checks for.
So let's review some of the usual memory management strategies, and that will give us context for what Rust does. First, we have manual memory management. This is what you have in C or C++, where you have explicit control over every single resource your program ever touches. It's extremely performant, but there's no help at all. I mean, you can run static analysis tools outside of your toolchain, but in general, the best you can hope for if you make a mistake is a memory leak that doesn't grow too large too fast. There are still lots of ways to get undefined behavior or outright segfaults that crash your program. So when you're doing manual memory management, you must never make a mistake, and that's really hard. I'm really bad at that; I make mistakes a lot. A lot of my experience has been with garbage-collected languages: Lisp, Java, C#, Python, Ruby, the list goes on. They're very popular, but there's no real control over how you allocate memory. Everything gets heap allocated, and everything gets collected all at once, which you don't really have control over. It doesn't save you from memory leaks, but it does save you from a lot of the undefined behavior or crashes related to memory management errors. One of the newer strategies is automatic reference counting, which you'll see in Objective-C and Swift. This was pretty revolutionary tech from Apple, where the reference counting happens inline, and as soon as something is no longer used, it's collected. It's kind of an amortized garbage collection strategy. It does not save you from the memory leak problems you have with any of these strategies, but it does save you some of the overhead of a full-blown garbage collector, especially in memory and performance. So with those in mind, what we have in Rust is a system of ownership.
This is something that aligns really well with all the goals we've laid out so far: we retain all of our explicit control and predictability over our memory without imposing any runtime cost, and without introducing any of the undefined behavior or segfaulting. It also limits most of your memory leaks. It handles all of the allocation, initialization, and cleanup of your resources. If you're familiar with RAII, that's the strategy we use in Rust for all of this. It basically writes, at compile time, the perfect memory management code that you would have written in C++ if you were perfect, which is great, because it's a computer and it can do that every time, and I can't. All those violations we talked about, the double frees, the never-frees, those all become compile-time errors that you're alerted to as soon as you write them, as soon as you introduce them, as soon as one of your team members introduces them without you knowing. This is probably one of the most important ideas in Rust, so I'm going to spend a little more time on it. The general idea behind ownership is that we have a single owner for every resource in memory. That resource is managed for you by the compiler: it inserts all of the allocations and frees for you at compile time. If you ever want to do any mutation, you have to have exclusivity; the resource can't be referenced from anywhere else. And if you want to share that resource with anyone else, you can't be mutating it. This prevents an entire class of memory errors, both single-threaded and concurrent, because as soon as you have a fairly large system, it's as hard to reason about your memory as it would be in a concurrent program.
What's really fascinating about all this is that it's a compile-time idea. We don't have to worry about it at runtime; there is no runtime cost. The compiler inserts each of those calls inline, the same calls you would have written yourself. So we have a single responsible owner. Each time you declare a variable binding, a name binding, that name becomes the owner of some resource, and when that name falls out of scope, it will recursively drop everything it owns. When it drops that stuff, it runs a destructor, or just frees the memory, or does whatever cleanup is necessary. In this example, we allocate some memory by initializing a new Color enum, we access that memory, and then when it falls out of scope, that memory gets cleaned up. This allows us to do safe memory management. What's really neat is that ownership is a tree. Each enum, each struct, each function, each thread owns its own contents, and when it falls out of scope, each thing is recursively freed or dropped. Vectors are heap-allocated, dynamically sized arrays, so when we drop our vector, we end up dropping each of the Coffee enums, which in turn drops each of their temperatures in this example. This is that idea of responsibility: if you own something, you're the one thing responsible for cleaning it up. And this solves all of our initialization and cleanup problems. Whenever we want to do mutation, we can rely on ownership for that as well, because we know the owner is the one responsible for everything, responsible for providing access to that resource. The owner is free to mutate because no one else is watching. In this example, a Box is a heap-allocated pointer that owns its contents. So we can sit here and mutate that as much as we'd like and reheat our coffee. It has exclusive access to its resources.
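A hedged reconstruction of that coffee example: ownership is scoped, drops are recursive down the ownership tree, and the owner may mutate freely. The exact types here are my guesses at what the slides showed, not the real slide code.

```rust
#[derive(Debug, PartialEq)]
enum Coffee {
    Hot { temperature: i32 },
    Iced,
}

// The owner (or a mutable borrow from it) has exclusive access, so
// "reheating" in place is fine: no one else can be watching.
fn reheat(coffee: &mut Coffee) {
    if let Coffee::Hot { temperature } = coffee {
        *temperature += 40;
    }
}

fn main() {
    {
        // `cup` owns the Vec, which owns each Coffee, which owns its
        // fields: ownership forms a tree.
        let cup = vec![Coffee::Hot { temperature: 150 }, Coffee::Iced];
        println!("{:?}", cup);
    } // `cup` falls out of scope; everything it owns is recursively freed.

    // A Box is a heap-allocated pointer that owns its contents.
    let mut coffee = Box::new(Coffee::Hot { temperature: 120 });
    reheat(&mut coffee);
    println!("{:?}", coffee); // Hot { temperature: 160 }
}
```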
So we don't have to worry about anyone else caring that we've mutated it: no harm, no foul. We can even transfer ownership, and have the responsibility for cleanup change midway. In this example, we start off with our coffee shop owning the boxed coffee. When we assign it to the customer, that transfers ownership: it copies the pointer into the customer binding, and now the customer is responsible for cleaning up. And what we've done here is make accessing the coffee shop, the old owner, invalid, because we can't rely on it for anything anymore. It becomes a compile-time error to access it at all. This prevents aliasing errors and use-after-free errors, because we know the new owner is the one responsible, which means that if we're still trying to use the old owner, we're probably making some sort of mistake. So let's catch that early; we'll catch it before we actually run anything, as a compile-time error instead. But this is kind of a limiting system on its own, so it's really helpful to be able to alias things whenever we want to. We can use borrowing for that. In Rust, the ampersand operator is the borrow operator. It's not address-of, even though it kind of acts the same way; it creates a reference to some piece of memory. In this example, we can all share, we can all read from that same resource. You get as many immutable shared references as you like. And what's really neat is that those references are guaranteed to be valid for the lifetime of the owner. There's a system inside the compiler called the borrow checker that computes the lifetime of the owner and makes sure our references are never invalid, and that the owner can't go away while there are still borrows out. So if you accidentally free something too early, you get a compile-time error instead.
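A sketch of that move-then-borrow story; the shop/customer names echo the talk's example, but the code itself is illustrative.

```rust
// Takes only a shared borrow: it can read, but never mutate or free.
fn order_label(drink: &String) -> String {
    format!("order: {}", drink)
}

fn main() {
    let coffee_shop = Box::new(String::from("latte"));

    // Assignment moves ownership: `customer` is now responsible for cleanup.
    let customer = coffee_shop;
    // println!("{}", coffee_shop); // compile-time error: value was moved

    // `&` is the borrow operator: as many shared references as we like.
    let alice = &customer;
    let bob = &customer;
    println!("{} and {}", order_label(alice), order_label(bob));
} // `customer` frees the String here; the borrow checker proved the
  // references never outlived it.
```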
And so that's a really, really powerful idea that helps out a lot with our memory management errors. But we have to pick one: we can't have both aliasing and mutability. You can mutably borrow something, which avoids an ownership transfer, so you don't hand responsibility from the original owner to someone else, but you do allow mutation. While this happens, the owner is unable to access the resource: while that mutable reference is in scope, the owner can't even read it. This is the general idea of mutation requiring exclusivity. If only one person is ever able to read or write something, then no one cares that it changed. But if someone is reading simultaneously, it gets very fuzzy how that works, and you get all sorts of fun errors. So we now have ownership, and we have mutation without ownership transfer. These are the general rules for how ownership works and how we do memory management in Rust, and they allow us to build safe abstractions. But they don't solve all of our problems, so we actually have ways to break those rules, and we can build safe abstractions on top of that unsafe core. Even within Vec, where you might be resizing and reallocating new memory as the vector grows, there's a little bit of unsafe code. But once that unsafe code is human-checked, we know we have a safe abstraction to build on. Same with a doubly linked list: it's very hard to build a mutable doubly linked list where each node has only one reference to it, because every node is pointed at by two links. We also have a couple of reference-counted smart pointers. Whenever you clone one, it updates the reference count, and these provide us shared ownership. But that's a runtime idea now, not a compile-time idea; you don't know which reference-counted pointer will be the last one left alive.
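A small sketch of "mutation requires exclusivity": only one mutable borrow at a time, and the owner is locked out while it lives. The names are illustrative.

```rust
// Mutably borrows the Vec: no ownership transfer, but exclusive access.
fn add_shot(order: &mut Vec<&str>) {
    order.push("espresso");
}

fn main() {
    let mut order = vec!["milk"];
    {
        let edit = &mut order; // mutable borrow: no other access allowed
        edit.push("sugar");
        // println!("{:?}", order); // compile-time error while `edit` lives
    } // the mutable borrow ends here

    add_shot(&mut order); // the owner is free to lend mutability again
    println!("{:?}", order); // ["milk", "sugar", "espresso"]
}
```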
And once we've built on that unsafe core, it provides a safe foundation to build new code on. It requires a little bit of human verification; you don't have the same compile-time checks there. But through that, we can build a lot of concurrency building blocks, because threads themselves follow the same ownership rules. They own their resources, and in the same way that ownership guarantees us all these nice things in a single-threaded environment, we get the same guarantees in a multi-threaded environment. We use the Send and Sync traits on our types to ensure that we're doing thread-safe access correctly, and we can do all this at compile time. And it's all library-based; it isn't baked into the compiler. I said earlier that data races are categorized as memory unsafety: if you have two threads accessing the same data, where at least one is unsynchronized and at least one is writing, then you don't know what each one is going to do. That's a data race. But we know that through all the rules we set up with ownership, we can avoid that, because we're either sharing immutable data or we have exclusive access to mutable data. This gives us several different ways to do safe concurrency that's guaranteed at compile time, or we can do runtime synchronization. We can forbid aliasing, we can forbid mutation, or we can require synchronization. All three of those models provide safe concurrency. So it's really, really flexible, multi-paradigm. We can try a shared-nothing model, right? Because we know the threads are the owners of their memory, whenever we move data between the main thread and a child thread, ownership also moves from the main thread to the child thread, or from the child thread back to the main thread.
So we can start off here with a channel, and it gives us a transmitter and a receiver. That channel is a multiple-producer, single-consumer channel, where we can clone the transmitter end, move ownership of that cloned transmitter into the child thread, and it's still connected to the same receiver. We can then make up whatever message we want. That message is owned by the child thread; when we send it through the channel, the receiver receives it, and now the receiving thread owns it. These things are all just moving between threads, and it's all ownership-based. There's no case here where we end up sharing any data, so we get safe access through our channels, through messages, by sharing no memory at all. When all of the transmitters hang up, when they're all dropped, then the receiver knows there are no more messages coming. So this is sort of emulating the actor model, where you share nothing. We can instead provide shared immutable memory, which lets us know that we're never going to end up mutating anything. This is sort of like the Haskell-style functional model of doing things. We start with the same channel idea, but this time we have a huge struct that we want to share between our threads, but we don't want to copy it around, and we don't want any one particular thread to own it. So we can clone the atomically reference-counted pointer that owns our struct, and have ownership of that cloned pointer transferred into the new thread. Regardless of which thread finishes first, it's all being counted at runtime in a thread-safe way, and whoever drops their reference-counted pointer last is the one responsible for dropping the owned contents of that pointer.
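The share-nothing channel pattern above might be sketched like this; `collect_messages` is my own illustrative name, not from the talk.

```rust
use std::sync::mpsc;
use std::thread;

fn collect_messages() -> Vec<String> {
    // A multiple-producer, single-consumer channel.
    let (tx, rx) = mpsc::channel();

    for id in 0..3 {
        // Each child thread gets its own clone of the transmitter.
        let tx = tx.clone();
        thread::spawn(move || {
            // This String is owned by the child thread until `send`
            // moves ownership through the channel to the receiver.
            tx.send(format!("message from thread {}", id)).unwrap();
        });
    }
    drop(tx); // hang up the original, so `rx` knows when all senders are gone

    // Iterating the receiver ends once every transmitter has hung up.
    rx.into_iter().collect()
}

fn main() {
    for msg in collect_messages() {
        println!("{}", msg);
    }
}
```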
Each time we clone it, we increment a counter; each time we drop one, we decrement the counter. So we're able to provide safe access to immutable data, and this requires no synchronization either. Now, if you have very large structures and you need to mutate one little part of them, you have options. You can transfer ownership of pointers instead of copying things around, which gives you the exclusivity that lets you mutate. You can take the functional approach, which is, instead of mutating things, producing a new copy that shares a lot of the internals. Or you can do plain old synchronization. If your resource is not highly contended, synchronization is really not that bad of a thing to do. In this example, we have the same setup, but instead of putting our struct directly into our Arc, we put the struct into a Mutex and then put the Mutex inside the Arc. By doing this, each thread has access to the mutex, and when we call lock on it, it gives us back a guard, which is a structure that gives us access to the inside of that mutex. We can go ahead and change things; we can mutate stuff, because we know we have exclusive access to it after locking the mutex. And when that guard falls out of scope, it unlocks the mutex for us. This gives us the classic runtime synchronization that we know from other languages: the mutex controls all the access in the classic sense. The Arc is an atomically reference-counted pointer, so it's a thread-safe way to do shared ownership. Each time you clone it, it gives you back another pointer, and collectively all those pointers own their contents; when the last one is dropped, the contents are dropped.
I get a lot of questions when I talk about Rust, the biggest one being: when would I use Rust? Especially from people who say, "I don't write C++ right now," so the C++ comparison isn't compelling to them. In general, it's a multi-purpose language. But if your application is CPU bound: even our phones have four cores now. If your application is CPU bound, let's actually utilize all that nice concurrency stuff I just talked about, where you can write multi-threaded applications and use all four of your cores. And it doesn't have to be scary: you can write a program that involves concurrency and pointers and no segfaults, the first time. It's actually kind of fun. If your application is IO bound, if you're waiting on something else to finish, like a disk, the network, some external job, then we can utilize concurrency again, because IO-bound applications are highly concurrent. And because Rust's concurrency mechanisms are library-based rather than language-based, you can just pull in one of the lightweight concurrency libraries; the popular one right now is mio. It uses all those same ideas of ownership to ensure its safety. If you have an application that needs low latency, not just fast but predictably fast: to hit 60 frames a second, you have to produce a frame every 16 milliseconds, predictably, consistently. That's really hard to guarantee with a garbage collector. Rust having no garbage collector, plus all these safety guarantees, has made for a really large, popular game development community within Rust; people really want to write game libraries and do game development. And if you are memory constrained: you know, we all talk about memory being really super cheap.
If you work on servers, that's true, but anywhere else, if you're doing embedded software, if you're doing mobile devices, if you're running inside another process, you don't necessarily want to balloon the memory footprint. So Rust is a great choice for that, since there's no runtime, there's no overhead on any of your structs or enums. If you need to be interoperable, if you need to actually be working with other libraries, other languages, you know, we don't plan to rewrite the world with Rust. We plan to work with the rest of the community, interact with that entire battle-hardened C ecosystem. And C is the lingua franca of computer systems. So if we can have really great C interop, which we do, then we can write programs that interact with C programs really easily. We don't have to rewrite your entire system; you can use Rust for one little piece. And this is a great thing when you have a polyglot environment: you just slot it in where it's really useful, where you need some of these strengths. Now, up till now, everything I've listed has been one of the traditional strengths of C and C++, right? Unless you require portability. That's one of the things they're not great at. But Rust compiles all your Rust code into LLVM intermediate representation, which means that for any platform LLVM targets, you can cross-compile your Rust code and target that platform as well. So you can write native multi-platform applications with Rust. You get sort of a dream of write once, run anywhere, thanks to LLVM, with whatever caveats LLVM carries with it on your platform. If you have a requirement for high security: we write most of our most critical security infrastructure in C. And when you say it like that, it sounds crazy. I mean, Heartbleed, which affected everyone, was a buffer over-read.
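To make the C interop point concrete, here's a minimal sketch of calling a C standard library function from Rust through an `extern "C"` declaration. I'm using `abs` from libc purely as a small, universally available example.

```rust
// Declare a foreign function from the C standard library,
// which is already linked into the program.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn c_abs(x: i32) -> i32 {
    // Crossing the FFI boundary is `unsafe` because the compiler
    // cannot verify the foreign function's contract.
    unsafe { abs(x) }
}

fn main() {
    println!("{}", c_abs(-3)); // prints 3
}
```

The same mechanism works in the other direction: marking a Rust function `extern "C"` exposes it with the C calling convention, so a C (or Ruby, or Python) program can call into a Rust library as if it were C.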
These are problems that we can solve with computers. We don't have to make those mistakes every single time. You can still have buffer problems if you hand-allocate raw buffers, but that's not the default strategy in Rust. It would look really weird; you'd immediately ask why we're doing that. And so there are no crypto experts writing crypto libraries in Rust yet, but you can at least start taking advantage of some of the security benefits of not having your program written in C or C++. Use some of these modern language features to have a more secure application. And we talked a lot about safety, but a lot of that comes at compile time. If you have requirements for high reliability, that same safety that gives us all these nice security features also provides us a lot of reliability: at runtime, we're not gonna be surprised. And this is really critical if you have a really high cost for downtime. If you make lots of money per second, every minute you're down costs you a lot. Rust is a decent choice there because you're gonna be surprised less at runtime. Or if there's a really high cost to redeploy your application, it's also really beneficial that you're not surprised after you've shipped all of your embedded devices to all your customers and then find out you have a bug at runtime that affects everyone. If you're deploying anywhere but a server, it's quite compelling. So all those things end up meaning that Rust is just a general-purpose programming language that's great to work in for making great software. All those modern tools, all those modern features, it's highly expressive. And so you can focus on solving your problem rather than working against the language and against bugs that we've already solved with a computer. And so the big news was that last Friday was 1.0. We released, after a five-year effort largely funded by Mozilla, the Rust language.
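The Heartbleed comparison can be made concrete: a Heartbleed-style bug is a caller claiming a payload length larger than the buffer, and the server happily reading past the end. In safe Rust, slice access is bounds-checked, so the same mistake becomes a `None` (or a panic), never a read of adjacent memory. This is an illustrative sketch I'm adding; `read_payload` and its signature are invented for the example.

```rust
// Sketch: return the first `claimed_len` bytes of a buffer, the way a
// heartbeat handler echoes a payload. `get` with a range bounds-checks
// the access and returns None instead of over-reading.
fn read_payload(buf: &[u8], claimed_len: usize) -> Option<&[u8]> {
    buf.get(..claimed_len)
}

fn main() {
    let buf = [1u8, 2, 3, 4];
    // An honest length works.
    assert_eq!(read_payload(&buf, 2), Some(&buf[..2]));
    // A Heartbleed-style over-claimed length is rejected, not leaked.
    assert_eq!(read_payload(&buf, 64), None);
    println!("ok");
}
```

Indexing with `buf[..claimed_len]` instead would panic on the over-read, which aborts that request but still never exposes adjacent memory.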
And 40,000 commits later, we have a language that is gonna be guaranteeing stability. So thank you to the 1,000 contributors to Rust. But that 1.0, what does that mean? It means stability guarantees for the core language and the standard library. We know that code that compiles on Rust 1.0 is going to compile on Rust 1.1, which is gonna come out five weeks from now. And then it's gonna compile on Rust 1.2 and so on. We're on a six-week release train model, just like the browsers, because that's apparently a great way to develop software that continues to work. And this is just the beginning. We're gonna end up getting things like higher-kinded types that'll make a lot of APIs much nicer. So from today, you can go out and you can start learning Rust. You can start building better software. Tomorrow at Landcom, there's another Rust talk. It's a workshop by Jared Roche. So if you're interested in more, go see it. If you're in the Columbus, Ohio area, stop by the Columbus Rust Society. If you would like to hire my company to come write Rust for you, please do so: Mutually Human. We're in Grand Rapids and Columbus, Ohio. Or come work for us, we're also hiring. So thank you.