We're going to talk to you today about Rust and where it's going in the next year or so, what our plans are. But before we do that, we wanted to take a moment to acknowledge all these excellent sponsors. Jane Street, Skylight, GitHub, Dropbox, and Mozilla all gave a lot of time and energy and money to making this event happen, and of course, last but definitely not least, Berkeley donated this pretty cool space, without which this would not be possible at all. Thank you very, very much. So I think we wanted to start off by just acknowledging that this pretty cool thing happened, which is that we released Rust 1.0 this year, on May 15th. Not very long ago, although it seems like a lot has happened since. At the time, Huon Wilson wrote a pretty neat blog post about Rust 1.0 in numbers. Huon Wilson is another core team member. I should say Aaron and I are both on the Rust core team; I didn't mention that. And I thought I'd pull out a few selections to bring your minds back to what it was like at the time of Rust 1.0 and what has changed in the meantime. So first off, when we released Rust 1.0, we had five years of Git history. The project goes a little bit further back than that, so it's been in development for quite a while. We had 174 merged RFCs at the time; since then we're up to 260, so I'm curious to see where we're going to be. It's accelerating; the pace of change is good. We already had 1,015 contributors at that time; now we've got 1,159 overall. And we had two million downloads from crates.io, but in the few months that have passed, we're already up to five million. So I think Rust is getting a lot of attention. And of course, hopefully you've all completely forgotten this, but if you remember, we were making a lot of changes, and at the time we had just about infinite code breakage, right? Every single day, every single hour.
But since then, it's been a lot better, I think. So that's about it for the numbers, but Aaron's going to describe a little bit about adding it all up. Right. So we've seen a lot of numbers, but what does all this actually mean for us? I just wanted to give a little bit of my perspective on how I think about 1.0 and the foundation we're building on. To me, achieving 1.0 was really about three key things: achieving clarity about what Rust represents, achieving stability (you know, actually killing off that infinite breakage), and building up a community. So I just want to say a few words about each of these before we talk about the future. Okay. First of all, what do I mean by clarity? If you were following Rust development heading into 1.0, you probably saw us putting out a lot of slogans in blog posts and talks and so on, things like memory safety without garbage collection, or concurrency without data races. Each one of these represents Rust's way of looking at tradeoffs, or changing the game with respect to certain tradeoffs. And if you look at all these slogans in their totality, they represent the essence of Rust: what is Rust about? What makes Rust special? We take a lot of this for granted now; it seems very clear at this point. But it wasn't always that way. A lot of the development up to 1.0 was boiling things down to the point that we could really put out these slogans and say, this is what Rust is. And more recently, we've gotten a single slogan that coalesces all of this together, which is that when you're using Rust, you can hack without fear. That's the essence of Rust: you can do programming in places that you were afraid to do it before, and Rust has your back. So that's the clarity angle.
But then, in order to achieve that clarity, we had to iterate a lot on design, which did require a lot of breakage, as Niko was saying. And of course, we don't want to stop that process going forward. As we build new enhancements to the language, new libraries, we want to be able to iterate on their design. But at the same time, we need to have a stable foundation, so that your code isn't always breaking. And so we introduced this idea of release channels, sort of following the browser vendor model: we have a nightly channel, and then we have beta and stable channels. New things get developed on nightly, and they can break as we iterate. But once a feature has reached beta or stable, it is actually stable, and we make a promise about those features: upgrading Rust will never be a huge hassle. Your code might break in very minor ways, but we'll always endeavor to make sure there's a very easy tweak to make it work. And not only did we introduce this, but at 1.0 we were actually able to stabilize the entire base language and most of the standard library, so we really have a strong stable foundation to build on. I think anybody who's been here since before 1.0 can attest to the difference. Okay, but last, but definitely not least, is community. I think Rust's single biggest strength is its community, and I think 1.0 represents an achievement on behalf of all of those 1,000-plus contributors. But for me personally, one of the biggest things has been the RFC process. In the year leading up to 1.0, we introduced the RFC process and started honing it, eventually leading to subteams. And as someone who did a lot of work stabilizing the libraries, I ended up writing a lot of RFCs leading up to 1.0.
And I can say that for every single one of them, the design improved in ways I could never have imagined, thanks to the feedback and the debate from the community. So I think this is really the most important thing we achieved heading into 1.0. Okay, so with that, the rest of the talk is going to be about what we see happening in the next year or so of Rust development. Niko and I have tried to organize this into three broad themes. The first is doubling down: there we're talking about investing in infrastructure, making the compiler better, building tools that help us achieve the stability promises we're trying to make. Niko's going to talk about that. I'll talk about zeroing in: there I'm talking about some of our core language features, which have been stabilized, but have some gaps, some improvements we'd like to make. I'm going to highlight a few of those gaps and tell you our plans for filling them. And then Niko will tell you about our plans for branching out, taking Rust to new places where it's not a good fit today, but where we see a path to make it a good fit in the future. So with that, Niko. Great. So yeah, the first thing we're going to talk about is doubling down. And I have to say, as a vegetarian, this is a really difficult slide for me to put up, but I did it. This has nothing to do with the sandwich, though; it has a lot to do with making the Rust experience, the Rust tools, as reliable and fast and just generally all-around awesome as any product out there. I think we've got a little way to go, but a lot of the stuff I'm talking about here is going to bring us a lot closer. So the first thing is we have this tool called Crater, which has been developed by Brian Anderson. The idea of Crater is that it's supposed to detect regressions, and a regression basically means we had some code, it used to compile, and now it doesn't.
This might or might not be a compiler bug. It could be that we did something wrong, but it could also be that we fixed a bug, and in fact the old code was incorrectly relying on that old behavior. But when we see a regression, we can examine it and figure out what to do. Maybe we should fix the compiler. Or, to make for a better experience even when it's a bug that we fixed, we can have the compiler issue a warning for a little while before we report an error, so that you have a chance to fix your code while it keeps working in the meantime. Or maybe we can just open a pull request on your repository, if it's a pretty simple thing. But before we can do any of that kind of interesting stuff, we have to know what regressions are actually happening. And that has, in the past, been a challenging thing. You produce your compiler, you run it against your test suite, you let the world use it, and then you just hope that people will tell you what's happening and not get too mad at you. But we realized we can actually do a lot better now. We have this crates.io repository. It has a lot of Rust code on it, some reasonable sample of what's out there. And we have our compilers. We could put them together, right? So what Crater does is take two builds of the Rust compiler, let's say the stable build and the nightly build at any given time. First you run the stable build against everything, just to see what builds. Then you run the nightly build against everything, and then you diff the two, and what's left is the stuff we have to investigate. It's pretty simple, but it's something that would have been unthinkable before: if I had to run this on my laptop or something, it would take forever. But now you can do it in the cloud and get it back in a few hours. It's pretty cool. So that's working and we're using it.
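The diffing step described above can be sketched in a few lines of Rust. This is just a toy reconstruction for illustration, not Crater's actual code; the crate names and the `regressions` function are made up:

```rust
use std::collections::HashSet;

/// Given build outcomes per crate under two toolchains, as
/// (crate name, built successfully?) pairs, report the crates that
/// regressed: they built with `stable` but fail with `nightly`.
fn regressions<'a>(
    stable: &[(&'a str, bool)],
    nightly: &[(&'a str, bool)],
) -> Vec<&'a str> {
    let ok_on_stable: HashSet<&str> = stable
        .iter()
        .filter(|&&(_, ok)| ok)
        .map(|&(name, _)| name)
        .collect();
    nightly
        .iter()
        .filter(|&&(name, ok)| !ok && ok_on_stable.contains(name))
        .map(|&(name, _)| name)
        .collect()
}

fn main() {
    // Hypothetical outcomes from building every crates.io crate twice.
    let stable = [("serde", true), ("regex", true), ("broken-crate", false)];
    let nightly = [("serde", true), ("regex", false), ("broken-crate", false)];
    // `regex` built on stable but not on nightly: a candidate regression
    // to triage (compiler bug, or an intentional fix the crate relied on).
    assert_eq!(regressions(&stable, &nightly), ["regex"]);
}
```

Crates that failed under both toolchains are deliberately ignored; they were already broken, so they tell us nothing about the new compiler.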
And it's one of those pieces of infrastructure that you didn't know you needed, but once you have it, it's indispensable. From the second it came out, we've been running it pretty regularly, looking over the reports, running it against our experimental branches, just using it all the time. So it's really useful. It's already led us to fix some bugs. It's already led us to issue warnings instead of errors in some cases. It's already guiding the behavior in all the ways I talked about. But it still has some limitations that we should fix, like it only checks Linux right now; it doesn't check Windows or Mac. I don't know about Mac, actually, but it doesn't check Windows. And it requires some setup: I've got to run some scripts on my machine at the command line, wait a few hours, and hope everything's done. It would be nicer if it was a turnkey experience, where you could just put in a GitHub comment, something like "@crater: test this", and it would do it. We're hoping to get somewhere closer to that, at least. And finally, the scope is not as wide as we would like. It tests crates.io, but there's a lot of code out there that's never been uploaded to crates.io. Maybe it's in private repositories, or maybe it's just something that someone did on their own time. We would like to have ways to get out and see more code. And it's kind of a win-win: if we can see your code, then we can find out about problems before you ever know about them, and fix them before you have to know about them, or fix them on your behalf, or just give you warnings. It's just a much better experience; it lets us have that kind of interaction. So that's what we're hoping to do. We'll have to figure out exactly how that's going to work. The last thing is that Crater can also be used to do more than just say "did or did not compile". We could analyze the code we're looking at. We could see what kinds of usage patterns are out there in the wild, right?
That might inform design decisions in the future. If this library is always being used in this particular way, maybe we can make that more ergonomic; or if it's never being used in that particular way, maybe we shouldn't bother optimizing for that pattern, and so forth. So Crater is really good for making sure the compiler doesn't get any worse. But what if we wanted to actually get better? For example, faster and more incremental. As you may or may not know, Rust works in a kind of different way than your standard C compiler. A C compiler takes in one file, compiles it to an object file, does this for every file in your project, and then puts all the object files together in a kind of dumb way, and you have your product, usually. Sometimes it'll do some optimizations at that last step, but it's hard. Rust does a different thing. It scans all the files up front, builds up a big data structure called the AST, the Abstract Syntax Tree, representing your entire crate, type checks that, and gives it to LLVM to optimize. LLVM has access to all the code at the same time. If you really want the fastest code, that's the best option, because it means LLVM can inline anything into anything else, and it can do global analysis and so forth; it has as much information as it could possibly want. But if you're just trying to hack, and fix this stupid bug, and see if the one-line patch is going to make it work or not, it's a little bit of a drag, because you have to wait for it to recompile everything. So what we're doing is re-architecting the compiler so that we have not just this big AST, which we can build if we want it, but smaller representations of each function within there, a small unit of work. And we call this MIR, for mid-level IR, because we're compiler authors and very creative. IR being intermediate representation. We also like acronyms.
So once we have this per-function thing, we can do a lot more. If you don't need the ultimate in optimization, which is usually the case, then we can say: all right, these functions didn't change, let's just ignore those and reuse the saved byproducts off the disk, and we'll just recompile the one that actually did change. If we take this approach, you get a lot of benefits. First off, you get the incremental compilation I've been talking about, so you only have to recompile things that have changed. And I think, though we'll see how this works out, I'm pretty sure this will work: you can do it across crates as well. So if I'm using a library and I update it and it only made a few changes, then I only have to do a little bit of work on my side, too. We should be able to track that, though maybe not in the first version; we'll see. The other thing is that just having this mid-level IR is itself actually pretty nice. It's nice for us as the compiler authors, because it's simpler; it's a simplified version of Rust. That means the compiler will be more reliable, and we can probably optimize it better. But it can also be nice, indirectly, for people who are just using Rust and never plan to hack on the compiler, because it lets us build more advanced language features, some of which Aaron's going to be talking about later. So the summary is essentially that the compiler should get faster and it should generate better code. I think that's pretty good. But just running the compiler is only part of it; you still have to author the source code that it compiles, and that, at the moment, is still a suboptimal experience. So it's come to my attention that some people are not satisfied with Emacs. I don't really understand it, but I do appreciate it. Some people, I also understand, are not satisfied with Vim. This I completely get. This makes total sense.
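The reuse logic being described can be modeled in miniature. In this toy sketch, a hash of the source text stands in for the compiler's real dependency tracking, and a string stands in for a compiled artifact; the actual incremental-compilation design was still being worked out at the time, so none of these names come from the compiler itself:

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// A toy "incremental" cache: compile a function only if the hash of
/// its source changed since the last build; otherwise reuse the
/// saved artifact.
struct Cache {
    // function name -> (fingerprint of source, saved artifact)
    artifacts: HashMap<String, (u64, String)>,
}

impl Cache {
    fn new() -> Cache {
        Cache { artifacts: HashMap::new() }
    }

    /// Returns the artifact and whether it was freshly compiled.
    fn compile(&mut self, name: &str, source: &str) -> (String, bool) {
        let mut h = DefaultHasher::new();
        source.hash(&mut h);
        let fingerprint = h.finish();
        if let Some(&(old, ref artifact)) = self.artifacts.get(name) {
            if old == fingerprint {
                return (artifact.clone(), false); // unchanged: reuse
            }
        }
        let artifact = format!("compiled({})", source); // stand-in for codegen
        self.artifacts
            .insert(name.to_string(), (fingerprint, artifact.clone()));
        (artifact, true)
    }
}

fn main() {
    let mut cache = Cache::new();
    assert_eq!(cache.compile("foo", "fn foo() {}").1, true); // first build
    assert_eq!(cache.compile("foo", "fn foo() {}").1, false); // unchanged: reused
    assert_eq!(cache.compile("foo", "fn foo() { 1; }").1, true); // edited: rebuilt
}
```

The real problem is much harder, of course: an edit to one function can invalidate others through inlining, type inference, and so on, which is exactly why a per-function IR helps.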
So what we want to do is have better IDE integration. I proposed a radical rebranding where we would reimplement this interface: just Turbo Rust. It was going to be awesome. This got shot down too. So I'm not in charge of IDEs; you should talk to Nick Cameron. But we have a different plan. Basically, we're going to start out by picking two IDEs. They look a little different now than they used to when I was a kid. Visual Studio is going to be one of them, and the other one is a little bit TBD. But we're going to focus on producing the kind of metadata that those IDEs need in order to make a rich experience: to find all the uses, do the refactorings, all the things that IDEs do these days. So this is a little bit early days; it's going to be an ongoing sort of open initiative. But that's the plan over the next year or so. JetBrains is a good candidate, actually. All right. So that's all I have for doubling down, but next is Aaron up here with zeroing in. Okay. So let's talk a little bit about improving some of the language features in Rust, filling in some of the gaps. I want to start by taking inspiration from Bjarne Stroustrup, who has a nice quote that I think really gets at one of the key aspects of C++. He says: "C++ implementations obey the zero-overhead principle: what you don't use, you don't pay for." And this is a really essential thing for systems programming, where you want the ability to get as close to the bare metal as you can. You don't want the compiler putting anything in your way when you really want to do that. There's another aspect to this quote which is kind of interesting. He says: "Furthermore, what you do use, you couldn't hand code any better." Now, Stroustrup is talking about the compiler here. He's talking about C++ the language and its implementation.
But I think since the time of this quote, this idea has been applied to API design as well. So we have this notion of zero-cost abstractions: you can build up libraries that are very ergonomic and high level but don't impose costs. This is a really essential and unique concept that comes from C++, and it's something that we've embraced in a big way in Rust. So let's take a look at how zero-cost abstractions play out today in Rust. When you're writing Rust and you think of abstraction, generally you're going to be using traits. So for example, I have a somewhat modified version of the Extend trait from the standard library. The idea here is that you have some collection, and it's extendable by some elements that you're getting out of an iterator. This is a very high-level abstraction, because there are many, many different things that can be viewed as iterators. So this says: if you implement this for a vector, like I have on the slide, then I'm going to be able to extend that vector with another vector, or a linked list, or a slice, or any number of other things. From a library user's perspective, this is a very nice, ergonomic abstraction; I can throw lots of things at it. And the way I can tell that is it's saying right there in the where clause: any type I that happens to be an iterator yielding the right items is something I can call this with. Okay, but on the flip side, when I'm implementing this kind of abstraction, I have to give a single implementation that works for any type. If you think about this for a second, it's actually pretty limiting, because the API for an iterator is very simple. It says you can keep asking for the next element, over and over, and it's hard to do anything intelligent with that little information.
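The slide itself isn't reproduced in this transcript, but the trait being described looks roughly like the following simplified sketch. It's renamed `MyExtend`/`my_extend` here so the example stays self-contained alongside the real standard-library `Extend` trait, which is very similar:

```rust
/// A simplified version of the standard library's `Extend` trait,
/// along the lines of what the slide shows.
trait MyExtend<A> {
    /// Extend `self` with every item the iterator yields. The `where`
    /// clause is what makes this flexible: any `I` that can be turned
    /// into an iterator over the right item type is accepted.
    fn my_extend<I>(&mut self, iter: I)
    where
        I: IntoIterator<Item = A>;
}

impl<T> MyExtend<T> for Vec<T> {
    // One generic implementation must serve every possible iterator:
    // all it can do is ask for the next element, one at a time.
    fn my_extend<I>(&mut self, iter: I)
    where
        I: IntoIterator<Item = T>,
    {
        for item in iter {
            self.push(item);
        }
    }
}

fn main() {
    let mut v = vec![1, 2];
    v.my_extend(vec![3, 4]); // extend with another vector...
    v.my_extend(5..7); // ...or with a range, or anything iterable
    assert_eq!(v, [1, 2, 3, 4, 5, 6]);
}
```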
But in practice, if I am extending my vector by a slice, I might want to ask for the length of the slice and maybe do a memcpy out of the slice directly into the vector, without actually iterating over it manually. And today, sometimes the optimizer will catch that the code can be rewritten this way, and sometimes it won't. So this is a case where we're kind of failing to live up to the second part of zero-cost abstractions: when you do use an abstraction, sometimes you're not getting code that's as good as what you might have written by hand. So we're interested in pursuing a solution to this problem called specialization, and this too takes inspiration from C++. In the Rust world, the idea is tied to traits, and it basically says: we allow you to give multiple implementations of a trait that can overlap, as long as, for each pair of impls, one is clearly more specific than the other. So going back to our concrete example, we start off with the fully generic implementation of the trait that works for any iterator. We have this new keyword, default, that says: heads up, reader, there might be some code later that's going to give you a more specialized definition of this, so this might not be the literal code you run when you call extend. And then you can provide a separate implementation that works for slices and does a memcpy, for example. The key thing here is that the generic one has a very broad constraint, saying "for any I that's an iterator", and the more specialized one says it works for slices. So it's pretty obvious, if you see these two impls next to each other, that you should prefer the one on the bottom whenever it applies; it's clearly the more specific one.
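For reference, the syntax being described was proposed in the specialization RFC (RFC 1210), and at the time only worked behind a nightly feature gate. The sketch below is in that spirit rather than the slide's literal code; the trait takes the iterator as a type parameter so impls can specialize on it, and it specializes on a vector's own iterator rather than a slice, but the idea is the same:

```rust
#![feature(specialization)] // nightly-only, per RFC 1210

trait Extend2<A, I> {
    fn extend2(&mut self, iter: I);
}

// The fully generic impl. `default` signals that a more specific
// impl elsewhere may override this method.
impl<T, I> Extend2<T, I> for Vec<T>
where
    I: Iterator<Item = T>,
{
    default fn extend2(&mut self, iter: I) {
        for item in iter {
            self.push(item);
        }
    }
}

// The specialized impl. It overlaps with the generic one (a vector's
// IntoIter is certainly an Iterator), but it is clearly more specific,
// so it wins whenever it applies. Here we know the exact length up
// front and can reserve once; a slice version could do a memcpy.
impl<T> Extend2<T, std::vec::IntoIter<T>> for Vec<T> {
    fn extend2(&mut self, iter: std::vec::IntoIter<T>) {
        self.reserve(iter.len());
        for item in iter {
            self.push(item);
        }
    }
}
```

Callers never need to know the second impl exists; they write the same code either way, which is exactly the point the talk is making.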
And the cool thing about this is that from the point of view of a user of the library, all you really need to know about is the generic impl: if I have an iterator, I can always throw it at a vector and it'll work. But you benefit from the higher performance when the implementer has actually coded some special cases using specialization. And then we're able to get at this notion that when you do use an abstraction, you're getting something just as good as the code you would write by hand, because you literally wrote it by hand. Okay, so the specialization design is already out there in RFC form, and I'd love to get some comments on it, but I just wanted to mention in passing that there are some nice byproducts beyond the way I motivated it here. For one thing, right now our story for default methods in traits is a little simplistic, in the sense that when you define a trait, you sort of have one shot to define reasonable default implementations of its methods, and they can't really assume anything about the type eventually implementing the trait. There are lots of cases where you'd like to say: well, if I knew the following thing about that type, I could have this more interesting default implementation, or provide one where I couldn't before. Specialization actually lets you do that. But I think even more exciting: for a long time we've been wanting to bring some notion of inheritance and virtual dispatch, sort of C++ style, to Rust, but in a way that really fits with Rust's philosophy and doesn't have too much overlap with its other mechanisms. Specialization opens the door to that: basically, a form of inheritance based on traits. I don't have time to go into the details, but I would love to talk to you offline, so if you're interested, come see me. Okay, so that's filling the gap in the trait system. But I think the other crown jewel in Rust is the borrow checker. This is something we all know and have
learned to love through experience, and it's sort of the core thing that gives us those slogans of memory safety without garbage collection and so on. So borrow check is great at catching bugs, but occasionally it also errors out on perfectly reasonable code. Probably the most annoying case of this is if you have a map, and you want to do one thing if a key is in the map, and if the key is not in the map, you want to insert it. It's very natural to write a piece of code like this. The problem is that the initial reference to the map here, when we're looking up a key, creates a borrow that lasts for the entire body of the match. And that sucks, because in the None case we actually need to mutate the map, which requires a new mutable borrow. Those two conflict, and the borrow checker says no dice. But this is a perfectly reasonable thing to do. There's no reason the borrow we made initially, doing the lookup, needs to last for the whole body: if we're in the None branch, we haven't gotten any information out of it, so basically the borrow should have ended. The borrow checker just looks at things at too coarse a grain to understand what's going on. There are some other annoyances along these lines. Every so often you might write a piece of code like this, where maybe the borrow method borrows mutably, and again the borrow lasts too long, and you're forced to explicitly pull out these temporaries and sort of hold the borrow checker's hand and say: no, here's what's going on, it's okay. So things like this turn our crown jewel into a source of enormous frustration, and that really sucks, because it's easy to get the impression that there's something flawed about the whole borrow checker approach. But that's not really the case. There's no reason borrow check can't handle these cases; we just haven't had a chance to put that investment into it. But the good news is, with the work that Niko
was describing earlier, refactoring the internals of the compiler and introducing the MIR, these improvements to the borrow check actually just fall out of the new way that the compiler is thinking about things in general. So I'm happy to say that these sources of frustration should be going away in the next year. All right. Finally, I just want to say a few words about another feature which I think is quite interesting, a little less core, but one of the coolest and most popular features in Rust, which is plugins. Plugins let you do crazy things like write a regular expression that, at compile time, compiles to an actual machine that's going to match the regular expression for you. And that's awesome; people use this if they're on nightly. But today, plugins are a deeply unstable feature. We want to stabilize them in the sort of timespan I'm talking about, but it's a little tricky. The reason that plugins haven't been stabilized is that right now they basically just open up the hood of the compiler and say: you have access to all of the APIs. So literally anything we change in the compiler could break a plugin. We need some kind of more abstract and stable API that we can offer to plugin authors. So we're working on this, and in particular we've identified some key constraints for the design. One that's kind of interesting is that we're already thinking about a macros 2.0 story that has better hygiene and plays better with our normal import scheme and uses and so on, and this plugin system should be designed from the ground up to support that. And also, all of the things that are happening in the nightly world right now, like rust-postgres and regex, and things that are doing derive, like serialization with serde: all of this stuff needs to work in the new plugin system. If you're interested in this, I would encourage you to talk to Nick Cameron, who's taking the lead on this design. He's here, he'll be giving the last talk today, and then
disappearing, so make sure you catch him early in the day if you'd like to talk about this design. And with that, I'll hand it back over to Niko. Thanks, Aaron. So I want to use the last section here to talk about different ways that we can branch Rust out and make use of it more widely. One of the best parts of Rust's design is that it's intended to be usable all the way down at the bare metal, and all the way up to integrating with and implementing virtual machines and hooking into that whole sphere. So just like C is the lingua franca of the computing world, we want Rust to play that same role. I like this picture because it has this adventurous spirit, but the little guy... I don't know, I thought it looks a little better like this. This should give you the idea: we want Ferris to go out there and just be usable in any environment, in any place, even outside of the water, in this bizarre forest place. Anyway, maybe I took the metaphor too far. It's all right. I'll start with the low-level stuff. Basically, Rust already has a pretty good low-level story. We don't have any real external dependencies and so forth. There's a little bit of stuff that needs to be stabilized, which we're working on, like no_std, so you can drop out of the standard library if even that's too much for you, and allocators, maybe you want to call malloc and free; we're working on that. But there's also this other thing, which is that it's just kind of annoying to cross-compile. The problem is, if I'm building for, say, a phone, or a Tessel 2, or something like that, I probably don't want to run rustc on the phone, because the phone is pretty powerful, but it's not as good as a laptop. I'd rather run it on my laptop and copy it over to the phone. But that means I have to go look up web pages that tell me all kinds of annoying things that I don't want to know, and then I have to figure out how to edit some weird files, and then I can forget it
as fast as possible and hope that I never have to remember it again. It's kind of like what writing a Makefile was for me, at least before; but now we have Cargo, and I can just type cargo build and everything works. So we want to bring that to cross-compilation. It should be basically as simple as: I pick the target that I want when I install Rust, or if I want to add some later I can do that too, no big deal; I get some libraries for it; and then I can run cargo build --target=arm-whatever, and it works. Okay, sounds good. And then, on a related but somewhat different note: if I happen to be building an executable in Rust today, Cargo is really great. I can add a lot of dependencies, and maybe now, or soon, I can even depend on musl to drop the libc dependency, and build something I can run on somebody else's computer. Right up until that last mile, everything is good, but then I can't actually get the executable to go into their bin directory so they can run it; they have to copy it by hand. So we'd like you to be able to run cargo install. It's not a complicated concept, but it's a very useful one: it should move these files into the right place and make that last step just a little easier. So that's the low-level side. But what was I talking about with this high-level stuff? Well, one of the big use cases for Rust has been embedding it in other applications. A common thing, like what Skylight is doing, is embedding it in, say, Ruby on Rails. In that case, you want to take Rust, compile it to a shared object, link it into the Ruby VM, and implement maybe some of your Ruby objects using Rust code, so they run faster, or so they can use Rust libraries. You would usually have used C for this, but you can use Rust. This works really well, and it's really easy, as long as your target platform is using a C-friendly memory management scheme. Ruby uses a
conservative GC, which means they just sort of search the stack, and if anything even looks remotely like a pointer, they keep that memory around, because maybe you're using it. Python uses reference counting. As long as you're dealing with something like that, it's pretty easy; but if you're not, it's a little more difficult. So for example, if we wanted to implement or integrate with Node.js in a really deep way, this is going to be a problem, because we have to make sure all of our objects are rooted. Node.js uses V8, and V8 uses a sophisticated garbage collector that's not very friendly to C. Well, I don't know about V8's internals for sure, but collectors like this want to be able to, for example, relocate objects, move things in and out of nurseries, and do all kinds of cool tricks. And that's kind of a pain: now we're using Rust, but if I'm not really careful, it still crashes at random times. It's like the bad old days. So what we want to do is add the ability for the compiler to generate metadata about your stack frames, so they can be integrated with the garbage collector, probably V8, SpiderMonkey, things like that, and say: here's where I have references to JavaScript objects, so don't throw those ones away. And it can do this in a precise way, not just the conservative, take-a-guess kind of approach. And once we do that, we can even go a little bit further. Because you're implementing these JavaScript objects using Rust as the backing store, or the backing code, you should be able to also do things like embed Rust data into the JavaScript objects, and maybe that Rust data has other references, and it gets traced through, and that all just kind of works. So you can have a really deep integration with these advanced runtimes, and I think that's pretty exciting. All right, so that brings us to the end of our survey of what we
expect to be doing over the next year. I'm sure there'll be a lot of other things too, but I think the summary is basically that 2015 was a pretty good year for Rust: we had 1.0, we've got this RustCamp, we've got a lot of stuff going on. But 2016 is going to be even better, and we're really excited about it. So thanks very much.