30 minutes is a really aggressive time frame to introduce Rust to a group of people who don't have much context for it, so I may go a little fast, hopefully not too fast. You should think of this talk as a jumping-off point for your interest in Rust: if you become interested, there may be a few things here and there that you can pick up or look into further. Over the next few weeks I'm going to be writing a blog post about how you specifically integrate Rust into a Ruby or Rails application. I'm not going to cover that today, because it's a whole other talk, but if you leave this talk wanting to use Rust for something real, that post will probably be the best way to get started, so keep an eye on the Skylight blog.

So, what's up with Rust? I think the way a lot of people look at programming languages is with a quadrant grid; when Apple announced Swift, they showed one of these. It's really silly (I don't think Ruby is faster or more performant than JavaScript, and I don't really understand the chart at all), but people do think of the world in terms of these quadrants, and if I were giving a talk about Rust and I weren't me, I would probably also make a quadrant diagram and try to stick Rust in one of the quadrants. I'm showing this because I think it's a bad way to think about programs and programming in general, and the reason is that in programming, the only constant is change. A lot of people have a fixed picture in their head of a trade-off between performance and
productivity, and they imagine that the trade-off is fixed: if you gain a little more productivity, surely you've traded away some performance. But look at reality in, say, 2005. There's JavaScript, which didn't have that logo at the time: a reasonably productive dynamic language with lambdas, and not very fast. Now fast-forward ten years on that diagram. JavaScript hasn't really changed that much in terms of productivity; maybe it's gotten a little better, but for the most part the language most people write today is the same language most people wrote ten years ago. If things were purely a trade-off between productivity and performance, you would expect that since JavaScript hasn't become less productive, it must not have gotten faster. But of course what actually happened is that JavaScript got a little more productive, with ES5 and some ES6 features, and a lot faster. In reality, every language that sticks around for any period of time finds ways to improve productivity without reducing performance, which means every language is moving: it looks like you should have to go down and to the right, but really every language is going up and to the right. And when a language changes enough, or when a new language comes out that is sufficiently different from other languages, it enables a whole new generation of people to do something they couldn't do before. I think JavaScript is a really good example of this, right?
JavaScript started out as a really slow language, but when it got fast, and a little more productive, and the ergonomics of using it on the server got better, it enabled a whole generation of front-end developers to write back-end code. You can say "I don't want my jQuery developer writing back-end code" all you want, but the reality is that if you're staring at the quadrant, you miss these shifts in the programming landscape that enable large groups of people to do something they weren't able to do before. And Rust is trying to do a similar thing: it's a new language, but it's also trying to enable people who might not have been willing, or able, to do something before to do it now. To give you a high-level sense of what's going on: before Rust, there were essentially two kinds of languages. There were languages that are safe, and what I mean by safe is that if you write a program in the language and there are no bugs in the compiler or the interpreter, your program cannot segfault. Whenever a segfault happens to you in Ruby, it means there's some C code involved; it's not Ruby giving you the segfault. That's a guarantee you get in Ruby, or Python, or Go: because there's a garbage collector in those languages, if you write code, you will not get a segfault. And then there are other languages, C, C++, and a few others, that give you direct control over where you put your memory, and that direct control gives you the ability to get better performance, better memory usage, and all that.
But the trade-off is that if you slip up a little bit, it's unsafe and you can segfault; and everybody who has ever tried to work with C or C++ knows it's not just a matter of learning five tricks, it's a matter of basically being a guru in the language. What Rust is trying to do, fundamentally, at a high level, is give you the safety of a language like Ruby without forcing everything to go through a garbage collector or reference counting: it gives you the ability to control exactly where your memory goes. For people who know what that means, it could mean putting things on the stack, or in a variety of other places where you might want to allocate memory; you have direct control over that, but you don't have to trade off safety. And from an ergonomics perspective, if you've ever tried to write more efficient code by taking more control, the biggest source of pain, the biggest loss of productivity, is not just that you lose all the high-level features; it's that at any point you could crash. Just eliminating the ability to crash is a big shift in the landscape of what's possible. So what does this enable? Like I said before, I think the way you should always look at the programming landscape is to look for shifts: what does it mean to have a language with low-level, direct memory control that is also safe? What it enables is a whole new generation of systems programmers. And by that I mean you, everyone in this room. It enables a lot of people who might have been excited or interested in low-level systems programming to actually do it. And no, not Node.js, which claims to be close to the metal; that's not what I mean when I say systems programming.
I don't mean programming "on the metal" in Node.js; I mean really programming at the level of the machine. Before I continue, I want to throw out some real talk. All of us in this room, including myself, have been part of the high-level language tribe. We spend a lot of time telling ourselves and each other why we find ourselves productive in high-level languages, and a big part of what we tell each other is YAGNI: you aren't gonna need it. Some of that is about features you don't need, but for the purposes of this talk, what I mean is "you aren't going to need better performance." We tell each other that a lot, and in many cases, maybe most cases, it ends up being true. But sometimes you actually do need better performance, and I'll give you some examples. Any time anybody says 60 FPS, or jank, or real time, or high-frequency trading, any time anybody talks about needing predictable performance (60 FPS is a good example), they need better control than you get with a garbage collector. Any time anybody says "I need to use less memory; memory usage is too high." If you're writing a cross-platform library, like LibSat, something meant to be embedded in a lot of different places, that's a place where you care about performance. And more importantly, whenever you start digging into your programming language's internals, whenever you're reading the Ruby C code or trying to understand the JavaScript JIT, those are the cases where it turns out you do need the performance.
When we say "I don't have to worry about performance, I don't have to worry about whether this goes on the stack or the heap, or whether this allocates," what that means is not just that you don't have to worry about it, which is a great blessing when it doesn't matter; it also means that you can't worry about it. Perhaps more importantly, it means that if you do figure out a way to control what's happening, by reading the C code, or figuring out how the JIT works, or doing some hacks, it's very difficult to communicate to other developers what you're trying to do from a performance perspective. And sometimes it does matter. Here are a couple of examples from Ember. JavaScript has very good JITs, but those very good JITs are very opaque, and when you end up caring about performance, you end up really trying to understand how they work. Here's a function called makeDictionary that does a bunch of pointless things that are total no-ops in the semantics of the language, but they tell the runtime not to try to turn this object into a hidden struct, which lets us avoid certain deoptimizations. You can see there's a big comment explaining it. Here's another one: a function called intern, which takes a string and gives back a non-rope version of that string, because not doing that causes performance problems. On top it says: "When do I need this function? For the most part, never. Premature optimization is bad," et cetera, and then there's the whole comment. So it turns out that when you care about performance, we should step back and ask: what is the point of a programming language? The point of a programming language is to let us communicate with other human beings about what we're trying to do.
If it turns out you don't care about performance, then you don't want to waste your time communicating with other human beings about performance; it becomes noise, and it makes it harder to understand what's going on in your code. But as Dave Herman of Mozilla Research says, when you actually do care about performance, then performance is part of the domain of discourse for you and your collaborators: you want a way, in the programming language, to express the performance requirements you actually have. So it is true that it often doesn't matter, and I'll say clearly: if it doesn't matter for your particular problem, then using a language like Rust to solve it is not going to be the best use of your time, because it will force you to say a lot of things you don't care about. But there are many cases where performance does matter, where memory usage has a real impact, like Skylight, the project I work on, which is what got me into Rust in the first place. And when it matters, having to write all these annotations is actually a blessing: I'm able to communicate to Carl and Tom and all the other programmers I work with exactly what performance requirements we expect the program we're writing to have. So that's why you should care about Rust, and the kinds of cases where you should care. Now I want to get into what Rust is. I could spend a lot of time talking about the low-level performance, and you can run some benchmarks if you want to look at the performance features, but because this is a Ruby crowd, I want to talk more about high-level productivity.
Before I talk about productivity, I want to talk about one really important principle in Rust: zero-cost abstraction, which I know sounds a little like snake oil. The idea is that when you add an abstraction facility to a programming language, in most cases, if you're not very careful, each abstraction adds a little bit of cost, and a little more, and a little more, and by the time you get to something as abstract as Rails, you end up with a lot of cost. The idea behind Rust is to find abstractions (I'll show you some examples in a minute) that can be added with very little or zero cost, which means you can write pretty abstract programs without introducing a lot of overhead. That's a big part of what appeals to me about Rust: even if we set aside the safety problems of writing in a language like C, and of course I don't, the fact that C is so difficult to write abstractions in is also a big problem. So let me start with a Ruby program you can find in Active Support: the blank? method that exists on strings. Here's how it's implemented; I dragged this straight out of Active Support. You reopen the String class; you define a blank regex as a constant, which is there for performance reasons so you aren't rebuilding it inline; and blank? tests the string against that regex at runtime. Similarly, there's a blank? method on Array, which is basically an alias for empty?; then NilClass, where nil is always considered blank; then booleans, and more. This exact kind of thing is something Rust lets you talk about, and I'll show you how that works in Rust.
In Ruby, what I showed you is the traditional way: you globally reopen all these classes and add the blank? method. There's also a newer Ruby feature called refinements, which lets you do a similar thing in a scoped way, and that's closer to how it works in Rust. In Rust, everything is statically typed, so if you want a blank method, you declare a trait called IsBlank, and we say it has a function on it that takes self, whatever that turns out to be, and returns a boolean. The first thing we do is implement IsBlank for strings. The ampersand in &str just means it's an immutable string slice; there's a separate type for strings you can push things onto, basically mutable, growable strings. The regex! here is a macro: anything ending in a bang is a macro (that syntax may change to an at-prefix at some point). The macro means the pattern gets compiled into something fast, so even though it's a regex, it doesn't get reinterpreted every time. This is essentially the equivalent of what we did in Ruby: we have an IsBlank trait, it has an is_blank method that returns a boolean, and here's the implementation. You can do similar things for other types. Next we implement IsBlank for arrays, which here also means a fixed-size array. This little angle-bracket thing, if you're not familiar with other typed languages, is called a generic: it says this is implemented for an array of any element type. It doesn't matter what the type is; as long as it's an array of that type, it's implemented.
The actual implementation just checks the length: if the length is 0, it's blank. (The slide says greater than 0; that should be equals 0, let me fix that.) Rust also has no null; null is always represented by a type called Option. So here we implement IsBlank for an Option of anything, and say it's blank if it's None, which is like the nil? check in Ruby. Finally, we implement the same thing for bool. So in the same way that in Ruby you can implement things on any type, even types you don't own, in Rust you can define a trait and implement it for any type. To use it, you say something like use active_support::IsBlank, and then you can use it. That's the scoping mechanism: you have to say where you want to use it, but once you've imported the trait into a scope, you can use it on any type it's implemented for. So we can use it for strings, for arrays, for booleans. In the last example, I made a one-element array and pulled out its last value, which in Rust returns an Option, because the element might not be there, and like I said before, I implemented this for Options, so it works for all these different types. One really cool thing about Rust traits is that if I make my own type in my own library and I want it to implement IsBlank, I can implement it myself, just fine, which means that just like in Ruby, where you only have to implement the blank? method and it will work, my own type can be compatible with this trait. It doesn't have to be a type that the author of the trait decided to support in the first place.
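To make this concrete, here is a compilable sketch of the trait along the lines the talk describes. The trait and method names are reconstructed from the description; modern Rust has no built-in regex! macro, so the string case uses trim() instead of a whitespace regex, and slices stand in for the fixed-size arrays.

```rust
// A sketch of ActiveSupport's blank? as a Rust trait.
trait IsBlank {
    fn is_blank(&self) -> bool;
}

// The talk compiles a whitespace regex with a macro; trim()
// gives the same answer here without an external crate.
impl IsBlank for str {
    fn is_blank(&self) -> bool {
        self.trim().is_empty()
    }
}

// Generic over the element type: a slice is blank when empty,
// like Array#blank? aliasing empty? in Active Support.
impl<T> IsBlank for [T] {
    fn is_blank(&self) -> bool {
        self.is_empty()
    }
}

// References delegate to the value they point at.
impl<T: IsBlank + ?Sized> IsBlank for &T {
    fn is_blank(&self) -> bool {
        (**self).is_blank()
    }
}

// Option plays the role of nil: None is always blank.
impl<T: IsBlank> IsBlank for Option<T> {
    fn is_blank(&self) -> bool {
        match self {
            None => true,
            Some(value) => value.is_blank(),
        }
    }
}

// Like Active Support, false is blank and true is not.
impl IsBlank for bool {
    fn is_blank(&self) -> bool {
        !*self
    }
}

fn main() {
    assert!("   ".is_blank());
    assert!(!"hello".is_blank());
    assert!([0u8; 0].is_blank());
    let missing: Option<&str> = None;
    assert!(missing.is_blank());
    assert!(!Some("x").is_blank());
    assert!(false.is_blank());
}
```

Once the trait is in scope (via a `use`, as described above), `is_blank` is callable on every type it's implemented for, including your own types.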
So I just showed you something that looks like dynamic dispatch: awesome, everything is a message, et cetera. But like I said before, in Rust this is a zero-cost abstraction. How does that actually work? When Rust looks at a call like is_blank on an empty string, it knows ahead of time exactly which method will be called, so it statically dispatches to the specific function that needs to be called, or may even inline it; you can explicitly ask for inlining if you know it's performance-critical. Using traits also doesn't involve allocating anything. In a typical language you'd basically be forced to allocate, because virtual dispatch needs a dispatch table, but in Rust simply using a trait doesn't cause any kind of special allocation. It works as fast as if you had written a static function and called it directly, even though it's polymorphic. Now, there's one other thing you can do with traits that's pretty awesome. So far I showed you an object with a polymorphic method on it. Next I'll show you how to write a function that takes anything implementing a particular trait. In this case I have a function called first_line, and it just calls read_line on whatever reader you give it. Obviously I could write a first_line that's specific to standard input, or specific to a file, but what I want is a function that works for any kind of buffered reader. In Ruby, of course, you just do this with duck typing: please implement the readline method and I'll call it.
In Rust, you write this thing here, which says the function is generic over any type R, as long as R implements Buffer, the trait that provides the read_line method. And here I'm saying I take a mutable reference to a reader of that type. Then when you go to call it, Rust knows you can call first_line with a buffered reader, or with standard input, or whatever type you want. If you just look at this, you might imagine: I'm calling it with some arbitrary thing, and the function isn't written for a specific type, so it's probably super slow; the argument would have to be packaged up, maybe with a lookup table, then passed along and virtually dispatched. But what actually happens in Rust is that it's super fast, and the reason is that every time you call the function, the compiler says: OK, you're calling first_line with a buffered reader; I'll generate a specialized version of first_line for buffered readers, at compile time. All the lookups are static, known ahead of time. And when I call the same first_line function with standard input, it's as if I had called a second, specialized first_line that was set up for that type. So the idea is that you get the high-level functionality, the productivity you expect from being able to say "I don't really care what this is, it's just anything that's a reader, I'll call read_line on it," and under the hood it gets specialized, it gets made super fast.
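Here's a runnable sketch of that generic function. The talk predates Rust 1.0, when the trait was called Buffer; in current Rust the buffered-reader trait is BufRead, and a Cursor stands in for stdin so the example is self-contained.

```rust
use std::io::{BufRead, BufReader, Cursor};

// Generic over any buffered reader. For each concrete R this is
// called with, the compiler generates a specialized copy of the
// function (monomorphization), so every call is statically
// dispatched; no dispatch table travels with the argument.
fn first_line<R: BufRead>(reader: &mut R) -> String {
    let mut line = String::new();
    // Ignore errors for brevity, like the talk's unwrap.
    reader.read_line(&mut line).unwrap();
    line
}

fn main() {
    // An in-memory reader; a locked stdin would work the same way.
    let mut data = Cursor::new("first\nsecond\n");
    assert_eq!(first_line(&mut data), "first\n");

    // A BufReader wrapping another reader: a different R, so the
    // compiler emits a second specialized first_line for it.
    let mut buffered = BufReader::new(Cursor::new("hello\n"));
    assert_eq!(first_line(&mut buffered), "hello\n");
}
```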
So another way this is zero-cost: the compiler specializes any function that uses what we call trait constraints. That's pretty awesome, and it's a good example of getting a level of productivity at or even above what most dynamic languages give you, at a performance level much higher than what most static languages give you. So that's traits. The next thing I want to talk about is iterators and lambdas. If you've written Ruby or JavaScript, you know how powerful it is to use blocks or lambdas to abstract over things. Rust has lambdas, and it also has a feature called iterators, which you can basically think of as the equivalent of lazy enumerators in Ruby. The idea is you can say: give me a range from zero to 100; filter it by, say, is it divisible by six; map it by multiplying by three; then print each value and run the whole thing. Number one, if you look at this, it looks very high level. There are no type annotations, which is pretty cool, and you're using filter and map and all this stuff. But again, you get the zero-cost abstraction thing going on. First, iterators are always lazy, which means no intermediate objects ever get created, and unlike in Ruby, this doesn't produce any additional allocations; it's all done locally. It also uses generics under the hood, like I showed you before, which means every time you call .map or .filter, the compiler is generating a specialized version of the code for the thing you're actually trying to do.
Also, here's one thing that has to happen in any loop in a high-level language: if you have an array of 100 items and you ask for the 50th item, at best the compiler has to check that the 50th item is actually there, so it doesn't let you access memory outside the bounds of the array. But if you use something like map, not only do you get a higher level of abstraction with a lambda, the compiler can do a single bounds check up front: it can see how many items there are and then do the rest of the iteration without any additional bounds checks along the way. So you get a higher level of abstraction and faster performance at the same time, which is pretty awesome. Finally, Rust doesn't have any C-style for loops. It has a raw loop construct, but other than that, looping is always over iterators, and that's because Rust itself is very, very confident in the performance of iterators and the ability to make them fast, efficient, and low on memory usage. So those are two high-level features, and I think you can definitely see that they're features you would expect in a productive language, but not necessarily in a low-level language.
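Here is that pipeline as a compilable sketch; the numbers match the talk's description, and collecting into a Vec stands in for printing so the result is checkable.

```rust
fn main() {
    // Lazy pipeline: filter and map build up an iterator without
    // allocating intermediate collections; work happens only when
    // a consumer (here, collect) drives the iterator.
    let results: Vec<i32> = (0..100)
        .filter(|n| n % 6 == 0)
        .map(|n| n * 3)
        .collect();

    // Multiples of 6 below 100: 0, 6, ..., 96, which is 17 values.
    assert_eq!(results.len(), 17);
    assert_eq!(results.first(), Some(&0));
    assert_eq!(results.last(), Some(&288)); // 96 * 3

    // There is no C-style for loop; iteration is always over an
    // iterator, whether a range or a collection:
    for n in &results {
        println!("{}", n);
    }
}
```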
So I've talked about productivity, but earlier I said Rust is sort of unique in giving you something that's both fast and safe, unlike C or C++, which give you fast but not safe. So far, everything I've shown you is the kind of thing you've come to expect from managed, garbage-collected languages, and not necessarily from a low-level language, and there's a reason for that: it's a little bit tricky to do. I don't want to get too deep into the details, but I want to show you one example that may help you understand, number one, what's tricky, and number two, how Rust deals with it. So I made a little Ruby program here. There's a Point that inherits from a Struct with x and y; a Line, which has a length method that computes the distance between its two points; a distance method that takes two points, makes a new Line from p1 and p2, and gets its length; and a count function, which is sort of the main function here, that gets the distance and returns the right value. Let me trace through what exactly is happening. First, I make a new point, and because this is a garbage-collected language that's trying to be safe, and Ruby doesn't really know what else is going to happen with this point afterwards, it's basically forced to allocate the point on the heap: it has to make an actual object and put it somewhere, because it doesn't know what might happen to the point in the future. Then I make a second point.
Next, I call the distance function, which makes a Line object, and again, because Ruby doesn't really know what's going to happen with the Line, it has to go put it somewhere on the heap. Then it calls the length method on the line, and length starts pulling values off of the points, which would be super dangerous if you weren't garbage collected, because now we're just reaching into these objects. So we're really happy these things are allocated on the heap: if they were managed manually, if you were allocating and freeing them yourself and you started pulling values off, who knows what could happen. We pull a few values off, do some calculations, call the square root function, which is yet another function that we pass our values to, and then we return. Now, what's interesting is that I told this story as if it's unclear what's going on: what is this Point object, what is this Line object, I don't really know what's happening with them. But most programs are actually written like this one, where, even though in theory nobody knows what will happen to the Point, in practice it gets created, passed to a couple of functions, returned, and then we're done with it. Nobody hangs onto these objects; nobody is making threads and storing them off, or putting them in other structures. Which means that even though Ruby treats this as unknowable, you could in principle understand it statically: put the memory in some known location, pass it around, get it back, and everything's great.
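The talk walks through the Rust version of this program next; here is a reconstructed, runnable sketch of it. The shapes of Point, Line, and distance follow the description, while the exact field names are assumptions. Note there is no Box anywhere, so every value lives on the stack.

```rust
// Stack-allocated by default: without an explicit Box, Rust
// places these values at fixed, known locations.
struct Point {
    x: f64,
    y: f64,
}

struct Line {
    p1: Point,
    p2: Point,
}

impl Line {
    // &self: length may read the line, but the borrow cannot
    // outlive this call; nothing can stash the reference away.
    fn length(&self) -> f64 {
        let dx = self.p1.x - self.p2.x;
        let dy = self.p1.y - self.p2.y;
        (dx * dx + dy * dy).sqrt()
    }
}

// The points are moved in: distance becomes their owner, puts
// them into a Line on the stack, and everything is freed at the
// end of the call, with no garbage collector involved.
fn distance(p1: Point, p2: Point) -> f64 {
    let line = Line { p1, p2 };
    line.length()
}

fn main() {
    let p1 = Point { x: 0.0, y: 0.0 };
    let p2 = Point { x: 3.0, y: 4.0 };
    // A 3-4-5 triangle, so the distance is exactly 5.0.
    assert_eq!(distance(p1, p2), 5.0);
}
```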
But because Ruby doesn't really know any of that, it ends up being forced to do a lot of extra memory work. So let's look at the equivalent program (let me zoom through that, sorry), the equivalent program in Rust. The first thing to note is that it's not that many more lines; it's roughly the same size, though there are more types involved, so it's a little denser. But something different is going on here. First, when we write Point like this in Rust, without explicitly allocating it, what we're saying is: I would like this point to live in a known, fixed location; you have to do extra work to make it be allocated somewhere else. So we make these two points and call the other function, and that function receives the points by having them moved into it: the function is now the owner of those points. What it does with them is make a Line: it takes the points, puts them into the line, and has now created this new object, and again, because I didn't explicitly say I want it allocated somewhere else, it gets allocated in a fixed, known location. Then I call the length function, and you might notice something interesting here: the ampersand before self. The ampersand means something very simple: you can use this value, but you can't hang onto it. You're not allowed to make a thread and move it somewhere else; you're not allowed to do anything that would cause a reference to this line to outlive this function call. This is something you don't get to say in Ruby: you don't get to say this line cannot outlive this function call. But if you do
get to say it, now you know for sure exactly what is happening with it. Then you do all the same amount of work, but when you return from the length function, you can actually be confident that the line hasn't disappeared. That means the compiler can look at this whole program and say: I know I don't have to heap-allocate anything; the programmer has told Rust exactly where everything should live. I think you can see how that could work, but you're probably thinking: okay, fine, but what if you actually need to hold onto something for longer than the lifetime of the stack frame? Let me look at a really simple function here: a function that opens a file at a particular path and prints it. I'll walk through how this works, since this is a more complicated example. What happens is we make a new file, and in the same function we read from the file. Remember what I said before: in Rust, if you don't say something specific, and that specific thing is Box, which basically means box this value up and put it somewhere else on the heap, then it allocates things on the stack. So what we said is: open this new file, allocate it here, then print a line that reads some stuff from the file, and don't worry about errors, which is what unwrap means. When we get to the end of this function, it's going to close the file automatically, because it knows that only one thing has access to it at a time. So that works fine, as expected, and you can see there's nothing dangerous that could happen there: we can't accidentally refer to memory we didn't expect, because it's all self-contained. Now let's look at a second example, where we actually read the file to a string
inside of a thread. We say: I have a file, let me open the file, and then we spawn a new thread and read from it there. This is also fine, because we read from the file in one place, and the only other time we ever use it in the entire program is inside the thread. The Rust compiler says: okay, the file was allocated in one place, it can be moved somewhere else, and we're good. When the thread finishes, that's the point at which the file gets deallocated and closed. But there is one case that doesn't work, where you would be doing something genuinely dangerous, and it's this: you make a file, and then you say, in a thread I want to read from it, and I also want to read from it outside the thread. In a garbage-collected language this is totally fine, because the garbage collector will hold two references, wait until both of them are dropped, and then clean the file up. But like we said before, what we would like is to not need a garbage collector at all. Hopefully what you've seen so far is that in many, many cases you can just write normal programs, everything will work fine, and you won't have to worry too much about these rules. But you may write a program where you decide to have a file, or any object, referenced from two threads at a time, or something similar, and if you go ahead and do that, if you try to do something that violates the ownership rules, Rust gives you a compile-time error. That compile-time error will say: hey, you used this moved value, file, and it prints out more, telling you exactly where it was used in other places. So basically, what this means is that if you try to do something that would require a garbage collector
then, since Rust doesn't have one by default, it will give you an error. What this means is that most of the time you can write normal programs that are very memory efficient, very fast, and also very safe. You do have to deal with the cases where you're doing something dangerous, but unlike in C or C++, you get notified when you try to do something dangerous. This was probably a little involved for some people, and that's fine. The only reason I talk about ownership here is because it's the number one topic I think people need to know when they learn Rust. It's kind of magic: you get automatic memory management, but no GC, and it's also safe. That's kind of a magic combination, and this is how it works: there's a set of rules you have to follow about who gets to own which values, and it ends up being very powerful. So that's called ownership, and if you start learning Rust, make sure you pay special attention when you come across the section of the guide, or whatever tutorial you're using, that talks about ownership. So let me go back to the beginning: why does this end up mattering for anybody?
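Before moving on: here's a small self-contained sketch in the spirit of the examples walked through above. The names (Point, Line, distance) and the temp-file path are my own reconstruction for illustration, not the speaker's actual slides.

```rust
use std::env;
use std::fs::File;
use std::io::{Read, Write};
use std::thread;

struct Point {
    x: f64,
    y: f64,
}

struct Line {
    from: Point,
    to: Point,
}

impl Line {
    // `&self` borrows the line: length may use it, but the borrow
    // cannot outlive this call.
    fn length(&self) -> f64 {
        let dx = self.to.x - self.from.x;
        let dy = self.to.y - self.from.y;
        (dx * dx + dy * dy).sqrt()
    }
}

// The points are moved in; this function becomes their owner.
fn distance(from: Point, to: Point) -> f64 {
    let line = Line { from, to };
    line.length()
} // `line`, and the points inside it, are freed here; no GC involved

fn main() {
    // No `Box`, so both points live in known, fixed stack locations.
    let a = Point { x: 0.0, y: 0.0 };
    let b = Point { x: 3.0, y: 4.0 };
    println!("distance: {}", distance(a, b)); // prints "distance: 5"

    // Ownership also governs resources like files. Create one so the
    // example is self-contained, then move it into a thread.
    let path = env::temp_dir().join("ownership_demo.txt");
    File::create(&path).unwrap().write_all(b"hello").unwrap();

    let mut file = File::open(&path).unwrap();
    let handle = thread::spawn(move || {
        let mut contents = String::new();
        file.read_to_string(&mut contents).unwrap();
        contents
    }); // the thread now owns `file`; it is closed when the thread finishes

    // Reading from `file` out here as well would not compile:
    //     file.read_to_string(&mut String::new()).unwrap();
    //     // error[E0382]: borrow of moved value: `file`
    println!("read: {}", handle.join().unwrap());
}
```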
The reason it ends up mattering is that Rust opens up low-level systems programming to people who would not have done it before, like me. I would not have done low-level systems programming before: whenever I wrote C code before I wrote Rust, I was always very afraid, and I never really felt comfortable experimenting, because I knew that if I wrote 500 lines of C code and tried to put it in production, there was a good chance I'd made some kind of mistake. Even the people who write browsers, who are the best C++ hackers in the world, make mistakes and get exploited. So I was never comfortable experimenting with C or C++ code in my own applications, because they were so dangerous. What Rust lets you do is say: I have some area that has been driving me crazy performance-wise, I've tried everything, I've spent all this time reading the C code and learning how the JIT works, and I just want to go in and be explicit about the performance requirements. That's something you can really experiment with, without worrying that writing a little bit of low-level code means your app will suddenly start crashing constantly. It also means that if you're willing to go and learn Rust, then in an area where performance actually matters, you can beat your competitors. If you're a high-frequency trading company, and all your competitors are writing code in Java with all these GC pauses, and you write your code in Rust, maybe you'll be able to beat them out. Obviously, don't do this where performance doesn't matter, but where performance does matter, your competitors probably have similar performance requirements, and you can out-compete them. And in general, it's safe. I think it's easy to look at low-level code and say: low-level code is super dangerous, it's so hard to write,
there are all these crazy C macros, what's even happening here? But Rust is a safe language, and it's a productive language. It's maybe not as high-level and productive as Ruby, but if you care about performance, I think it's great, and it's safe. Rust enables a whole new generation of high-level programmers to write systems-level code. So I think what you should ask yourself is: what can you do with that power? Thank you very much.