Hi everybody. So I'm Aaron. Welcome to the second edition of RustConf. Before I start, I want to mention this awesome logo; this character is named Lucy, in case you were wondering, and I'm sure that she will be showing up in future Rust events. There's an awesome artist named John who's been doing the artwork throughout, and we really love it. So, quick show of hands: who attended the first RustConf last year? Awesome. Welcome back, and welcome to a whole bunch of new people. I ask in part because I want to start today's talk by recollecting where we were about a year ago, when we were giving this same keynote on the same stage. We were laying out the idea of a roadmap process for Rust, where we'd come together each year as a community and decide what our vision for that year was. What are our most important goals? What are we shooting for? That process takes into account lots of great data that we've been gathering through the Rust community survey, talking to production users, pre-production users, and so on. So the core message last year, setting out the roadmap for 2017, was that we really need to make productivity a core value of Rust. Everybody thinks about Rust in terms of speed and reliability, but not necessarily productivity. And when we looked at the survey results and did our planning, making this a third pillar of Rust's story seemed like the most important thing we could do this year. That overall vision turned into a whole bunch of more specific goals on the roadmap, which we're going to be talking about throughout today's talk: where are we on these things? And I should call out the lower learning curve specifically. I think the single clearest message we saw in the survey was that people were bouncing off of Rust because of the initial learning curve.
If they could make it through that, they would fall in love with the language, but we really wanted to do more to make it easier to get into Rust. So probably a lot of you have followed the roadmap process closely. You know the kinds of things we're doing on each of these items, and what you're wondering, like on any long trip, is: are we there yet? As a father of two, I'm very accustomed to hearing this question, and basically the talk today is going to try to provide you with a long answer to it. So, talking about roadmaps, let's actually look at a little bit of a map here. We're about two thirds of the way through 2017. We still very much hope to achieve what we set out in the roadmap in the remainder of 2017, but we're envisioning a couple of events coming up in the future. There's something called the impl period, which the last third of 2017 is going to cover, and then some ideas floating around about something called Rust 2019 coming late next year. So let me tell you about each of those pieces. Okay, so first of all, the impl period. Throughout the course of this year, we've been doing a lot of planning, designing, and RFC discussion about how we want to solve the problems that we targeted, how we're going to actually achieve the goals we set out to achieve. And in parallel with that, we've been doing some implementation work along the way, right? We're a big community; we're doing all of these processes together. But at some point we realized: if we're going to ship what we set out to ship, we have to acknowledge at some point in the year that all of the design that's going to actually land this year basically needs to be done. We need to have, essentially, a deadline. And after that point, we will, as a whole community, turn our focus purely toward finishing the implementation work that we've been starting throughout the year and that the designs have been landing for.
So I'm really excited about this, not just because of the deadline aspect, but also because it gives us a chance to switch gears as a community, get into a different mindset, have a sort of rhythm to the year. And I think it's going to be an awesome way for people to get involved in Rust who haven't found a way to do that yet. The impl period is going to run for three months, starting in the middle of September, and all of the Rust teams will be putting a lot of work in before the impl period into writing up contribution instructions, detailed guides to where we need help, and so on. So part of the message here is: if you have been thinking about contributing to Rust but haven't been sure how, this September we're going to have a really good answer for you and a lot of people lined up to help you make an impact. So that's the impl period, to help us get things done this year. What about next year? We've been discussing, on an RFC, an idea that has gone through a few iterations on the naming, but I think we've settled on epochs. The idea here is that Rust, as you all know, follows a rapid release model: every six weeks we put out a new version of the stable compiler. And this has been a fantastic model for shipping software, because there's no mad dash to get anything into a particular release. You're not always in these sprints, hurrying to land things to make the release, because if you miss one release, well, the next one is only six weeks behind. It's just this very steady process where Rust is always improving incrementally, a little bit at a time. So I think it's worked really well for us, and I think the community really appreciates it, but there are a few things missing from this release model. And the most important thing is that, as you'll see throughout the talk today, with what we're doing on the roadmap, we're really talking about, in sum, a lot of changes, a lot of improvements, a lot of stuff.
And while we want to land that stuff in this incremental, rapid release fashion, we also want at some point to bring it all together into a coherent, polished package and ship it, give it a name, make it a thing. That's the big idea around epochs, and we'll see how it goes. We've been thinking about this as a sort of three-year cycle on top of our six-week rapid releases. So every two or three years, probably, we will target a new epoch. And in that cycle, when we do an epoch release, what we're trying to do is go beyond just landing the bare features: actually make sure that we have the book, or the books, up to date with those features; the compiler is producing good error messages for those features; IDEs and other tools understand those features and how to work with them; and the ecosystem is starting to take advantage. So it marks a new chapter in Rust development, where everything is coming together and you have a new sense of what Rust is. I'm very excited about this, and I think it's also a really useful thing for talking about Rust to people who are not following it so closely, because it's easier to see how Rust is evolving. So when we bring this all together, we then have an actual flag day where we say, okay, the next epoch is coming out. And we decided to label those with a year. We haven't totally decided this, but Rust 2019 seems plausible; we'll see how next year's roadmap goes. One other important detail here: as we are adding various new features, sometimes we need to do things like introduce new keywords, which seems innocuous enough. But technically, adding a new keyword can cause problems for existing code, right, if that existing code was using that word as a variable name or an API or something. And Rust takes stability really, really seriously. That was our big promise around 1.0.
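To make the keyword hazard concrete, here's a minimal sketch. It uses `catch`, which was one candidate keyword being discussed at the time, purely as an illustration:

```rust
// `catch` used as an ordinary identifier -- perfectly legal today.
// If a future epoch reserved `catch` as a keyword, this function would
// stop compiling, unless the crate simply stayed on its current epoch,
// which is exactly the opt-in being described here.
fn add_one(n: i32) -> i32 {
    let catch = n; // hypothetical future keyword, fine as a name today
    catch + 1
}

fn main() {
    assert_eq!(add_one(3), 4);
}
```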
And so we want it to continue to be the case that you can always upgrade the compiler and your code will keep working without any hassle. So when we do things like introduce new keywords, or make other tweaks to the language that could cause problems for older code, we're going to tie that to an opt-in. And you'll opt in to the epoch as a whole. So your existing code on the original 2015 epoch, which is what you all are using right now, you just didn't know it, will continue to compile in the 2019 compiler without you having to do anything. But then if you want to take advantage of new keywords and so on, you'll be able to mark your code with epoch 2019 and transition to it, right? And the other cool piece of this is that you can mix and match epoch levels in your dependencies. So there's no ecosystem split; everything keeps working and can work together. I think we're really excited about this as a smooth way to evolve the language while keeping all existing code working well. All right, so that is the overview of where we are this year and what we're trying to do. And now, in the rest of the talk, we're going to take a more detailed look at each piece of the roadmap and how much progress we've made. So we'll start with Niko. Switch the mic on.

Okay, very good. So, hi, I'm Niko Matsakis. I'm going to be talking about the language and the compiler. I'll start with the language: some of the changes that are coming, some already planned or landing now, and some that we see coming further out, that will be part of the Rust 2019 concept. The overall idea here is that when we first laid out the Rust 1.0 release, we tried to make a kind of core set of the language that had all the values we really wanted in it, performance and so forth, and that we could maintain over time. And now we've had several years of experience using it and seeing it grow.
And we found there are various places where we could essentially take the design and improve it in small ways, sand down the edges that people are hitting, and make the overall experience much smoother. We've been calling that set of ideas, collectively, the ergonomics initiative, and it works toward this overall goal of lowering the learning curve. Because the idea is, for experienced users, the basic change you'll see is that when you write code, it works more often the first time. The compiler gets in your way less and everything just feels that much nicer. But if you're a new user, these changes have an even bigger impact, because those same problems can really derail your whole learning experience altogether. They can distract you from learning the core ideas of Rust; instead you get distracted with, like, do I need a star here, or how many stars do I need, that sort of thing. So let me give you some examples to make it more concrete. The first set of examples are cases where the language today requires kind of extra sigils or small annotations that you just sort of have to add. There's not really a lot of choice around it, and we'd like to make that easier. So, something like match: pattern matching is a really powerful and awesome feature of Rust. You can dive down into your structures and grab out little pieces and so forth. But sometimes, especially with references, you have this kind of incantation you have to follow, right? You have to have a star somewhere to go through the reference, and then you need some refs, and they have to be in just the right places. It can be confusing. So with RFC 2005, what we did was essentially extend the way match works, so that when you match on a reference, we automatically figure out that you must need a reference binding, for example.
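Here's a minimal sketch of that before/after, using a simple `Option<String>` as an illustration (the slide's actual code may differ):

```rust
// Before RFC 2005: matching through a reference needs an explicit
// deref (`*`) and `ref` bindings in just the right places.
fn first_char_explicit(opt: &Option<String>) -> Option<char> {
    match *opt {
        Some(ref s) => s.chars().next(),
        None => None,
    }
}

// With match ergonomics: match the reference directly, and the
// compiler inserts the deref and reference bindings for you.
fn first_char(opt: &Option<String>) -> Option<char> {
    match opt {
        Some(s) => s.chars().next(),
        None => None,
    }
}

fn main() {
    let name = Some(String::from("Ferris"));
    // Both forms behave identically.
    assert_eq!(first_char_explicit(&name), Some('F'));
    assert_eq!(first_char(&name), Some('F'));
    assert_eq!(first_char(&None), None);
}
```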
So you can just write the code that you see on the right here, and everything works the same as it would on the left. That RFC has already been accepted, and implementation is underway, actually. A similar example: this is a pending RFC, 2089, for implied bounds. The idea is that when you define types in Rust, often they have some kind of constraints. So, like, in order to be a set, it must be a set of some type T that implements Eq and Hash. And that's fine; that's a good thing to say up front. But now every time I use the type Set, I have to repeat that. And there's not really much value to it, because it's implied by the fact that any set must meet those bounds. So it's a little repetitive. This RFC basically allows the compiler to figure that out: if you define some bounds on the struct, users of the struct get those bounds for free, so you don't have to repeat them over and over again. Those two changes were both cases where there was really only one right thing to do, and you kind of had to follow that path, and the compiler would lead you down it. But there are some other cases we've seen where, in order to get your code to work, it's more than just adding a star or something; you sometimes have to do some bigger restructurings that aren't always obvious. A classic example is the non-lexical lifetimes RFC, which tinkers with and improves Rust's core borrowing system. Because today, the length of a borrow is always tied to a lexical scope; it might be to the end of a block, for example, or an expression. So, for example, here, if I reference into the map, I get back a borrowed reference of the map, and that will extend all the way down until value goes out of scope. And now, if I want to convince the compiler that I'm done using this variable value, I wind up having to put it in its own block.
And that way I can, for example, call self.map.insert outside of that block, and the compiler knows that the borrow is finished and it's safe to modify the map again. But with the non-lexical lifetimes RFC, we improve the way lifetimes work so that a borrow doesn't always have to extend to the end of a block, right? You can have the same borrow, but it might extend just halfway down the if statement and stop there, kind of at the last place where you actually used the resulting reference. And that means you can write the code in this more natural way. It's exactly the same in terms of its ultimate effect, but it's just simpler and smoother, and probably what you wrote in the first place before you got the error message. And there are times, in a similar vein, where the amount of code that you would need to write is actually so high I can't really show you on the slide; you don't see what this would look like in Rust today. A common example is when you want to return something like an iterator, especially something involving closures. So the impl Trait RFC, which is partly implemented and still underway for some parts, allows you to specify just the interface, or the trait, that the type fulfills. You say: I'm returning some form of iterator here. And the compiler can work with that; that's all the caller needs to know, that they can iterate over it. The compiler figures out the exact type. And that's really useful, especially for closures, because there you have a type that you couldn't write even if you wanted to; it's a compiler-generated type. Along a similar vein, people who've worked with futures may have noticed that writing asynchronous code can sometimes be tedious. If you can use the futures combinators, that's great. But other times, you have to fall back to writing the state machine by yourself, and that results in a huge explosion in the amount of code you have.
But the experimental RFC 2033 introduces an async/await system like the one in many other languages, in a sort of Rust variant, which essentially allows the compiler to take over that boilerplate for you and generate the state machine, and you can just write straight-line code that reads like a normal function. So these are the kinds of changes we're talking about. Each one of them addresses a specific problem, but when you put them all together (those are the ones I talked about; this is a more complete listing, though probably not the full set of RFCs under this umbrella), I think the language is going to feel a lot smoother, like a whole different experience. And we're really excited about seeing it. So that's what we're calling, now, Rust 2019: it's kind of Rust plus these various changes. Here I've italicized the ones that are still pending; if you'd like to get involved in the discussion, you can take note. But these are the plans, right? And there's this other part missing: we also have to implement all these plans. So that's where the compiler comes in. What we've been doing most of this year has been partly implementing features, but in large part refactoring and restructuring the compiler to pave the way for these kinds of features and others. One of the biggest changes, which you've probably heard of, is the move to support incremental compilation, and I'll talk about that in a bit more detail. But we're also doing a whole bunch of other things. For example, there's a lot of work preparing for const generics, something that has eagerly been asked for by many people, which basically allows you to write functions that are generic not over a type, but over a value. So maybe a specific integer or a structure or something like that.
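Const generics and the constant-evaluation work were still being designed at this point; here's a sketch using the syntax that eventually stabilized, so the details are illustrative rather than what the 2017 design specified:

```rust
// A function generic over a *value* N, not just a type: it returns a
// fixed-size array whose length is the const parameter.
fn splat<const N: usize>(value: i32) -> [i32; N] {
    [value; N]
}

// The related constant-evaluation work: an ordinary-looking function
// that the compiler can run at compilation time.
const fn square(n: u32) -> u32 {
    n * n
}

// Evaluated entirely by the compiler; AREA is baked into the binary.
const AREA: u32 = square(12);

fn main() {
    assert_eq!(splat::<4>(7), [7, 7, 7, 7]);
    assert_eq!(AREA, 144);
}
```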
This work also extends the constant system so that we can actually evaluate arbitrary Rust functions at compilation time: you can take some code that would have run at runtime, execute it in the compiler, and take the return value from the function and use it as a constant in your code. Implementing that involved reorganizing the compiler in a number of ways, but it's building on the work that we did for MIR, the new intermediate representation, and using an interpreter and so forth. It's a very cool thing; come talk to me later if you're interested. The last part I would mention is the procedural macros work that's been coming through. You've probably used this already, because of the new custom derive: if you're using serde and deriving Serialize and so forth, you're using this new infrastructure that we laid out. It lets you essentially write Rust code that runs at compilation time and generates Rust code for the compiler, so it's kind of like a plug-in mechanism. You can write something like that with macro_rules! in today's Rust, but this system lets you write arbitrary Rust code to do it. We're very excited to see that grow and take shape. But I wanted to take a turn and just talk about incremental compilation for a bit. This has been a theme, I think, of every keynote that I've given at any RustConf or RustCamp. Compilation performance is something we hear about a lot, and trust me, as someone who builds the compiler on a regular basis, which is not only a big piece of Rust code, but a big piece of Rust code that then builds itself after it gets built once, I really feel the pain here. So we laid out a plan, RFC 1298, and we pretty much implemented that plan; you may have noticed it came out in a beta release some time ago. And it worked pretty well. For example, if you're using the beta release to build the regex crate, we made various changes like modifying individual methods.
And you can see here this chart. The bar at the far right is how long it takes if you build from scratch; it's scaled so it's always 100%. And these little blue bars over there, that's how long it takes when you're doing an incremental build and you've just changed one little thing. So it worked pretty well, and I was hoping that we would put the finishing touches on it and I would be up here telling you it's all ready to go. But life intervened, and we wound up taking a slightly different and slightly longer, but I think very exciting, path. What happened was, in order to support things like const generics, but also the RLS, the Rust Language Server, the IDE support, we found we really needed to restructure the compiler to support on-demand compilation, which means you take one function and you compile just what you need to get that function and nothing else. And everything in the compiler up till now had been structured to compile the whole crate at a time. Once we had that support in place, we also found we could address some of the shortcomings we were seeing in the incremental system and get much better reuse, by using a new system that we call red-green. So we've started work on those, and we're actually quite a bit of the way through: the on-demand infrastructure is in there, red-green is almost in there, and we're hoping the impl period will be the time that we put the final nail in this feature and send it out the door. But that's where we are now on this. So that brings me to my next slide: we could use some assistance to get all these things done. There are a lot of ways to get involved if you want to be involved in either the language part or the compiler part or both. I mean, the most obvious thing is we need to get these designs done, as Aaron said, and we need to get started working on them.
So now is the time to come take a look, see the RFCs that are pending, and give your thoughts and any suggestions on how to improve them. But once they land, and especially during the impl period, this is the perfect opportunity to get involved in hacking on the compiler. For all of these different features, I hope we will have, or the plan is to have, mentoring instructions: clear ways for you to see where in the compiler you need to touch, so it's not like you have to just jump in from scratch. And I will add something. I often hear people suggest that the compiler, or that compilers in general, are a sort of wizardry. But, you know, it's just a program. So if you've been thinking about hacking on compilers, or you're curious how they work, this is a great thing to do right now. And of course, there's also work not just on the RFCs, but on incremental compilation and the other ongoing bigger projects. In terms of compilers being a black art, probably the biggest part of the problem is just that they lack, at least ours does, as many comments as they should have and so on. No other programs ever have that property, I guess. So if you have hacked on the compiler, a great way to help is to help us prepare for this impl period. We would like to have a big effort to document not just what in the compiler does what, but also the various ways that one builds the compiler, the different modes, the flags for debugging it, and so on. A lot of that work is underway, but I think there's still some room to go. So, in short, I hope you'll be excited about all the big changes that are coming up and the improvements we have in store. And if you would like to get involved, we are really eager to mentor and support you. You can look for the existing issues.
Issues with the E-mentor label always have instructions, or you can come to the #rustc channel or just ping me privately and we'll find something for you to do. Don't worry. So, thank you. And I think Carol's up next.

I'm Carol, and I'm going to start by expanding a bit on the lowering the learning curve roadmap goal. As Niko said, a lot of the ergonomics initiative is focused toward lowering the learning curve. Another big project that is underway is the book. Steve and I have been hard at work on getting this book ready to go to print with No Starch Press. It is due out in December this year, and it's a perfect gift for anyone on your holiday list. It's available for pre-order now. It's also available online to read for free, and especially for the later chapters that haven't been all the way through the editing process, we would love for you to read them and give us comments about what parts are good and what parts could use a little more work. We would really appreciate your help with that. Another goal on the roadmap this year has been mentoring people at all levels. And mentoring is almost too narrow a word for what we've done so far this year and what we plan to do; what has proven to be more important is the at all levels part of the goal. We want to get more people involved with Rust in all parts of the process in order to make Rust better. Especially, we want to grow the Rust community by bringing in folks who are underrepresented in tech, which includes women, people of color, LGBTQ people, and people with disabilities, because they also tend to be especially underrepresented in systems programming. We think Rust is an enabling technology that can help get these folks involved, and we can make Rust even better by having a diversity of backgrounds and a diversity of ideas involved in making Rust. So there are three different levels on this slide.
The first one is helping people who are kind of new to programming and definitely new to Rust. One of those efforts has been the RustBridge workshop. The bridge event concept comes from RailsBridge, and there are now different bridge events for lots of languages. This is a one-day workshop aimed at people who are underrepresented in tech, and we had one yesterday. Ashley and I ran it. Ashley, would you like to wave? She's right there. We have stickers with this lovely logo that Karen made for us, with Ferris as a bridge. We went through an introduction to Rust, and we worked on making a web app in Rust that gives you emergency compliments. If you participated in the RustBridge workshop yesterday, I would love for you to wave so we can say hi. Thank you for coming; it was a lot of fun yesterday. We're also working to improve the website for the workshop and make a kind of kit so that anyone can run one of these workshops in their communities. In order to be called a bridge event, it has to be aimed at people who are underrepresented in tech, but the curriculum is available for free online and anyone can use it; just don't call it a bridge event if you don't meet the criteria. So this is an instance of how working to make things better for underrepresented folks makes things better for everyone. We would like your help: please consider running a bridge event in 2018, and please contact the community team so that we can help you with that. Another program that we have just gotten off the ground is called the Increasing Rust's Reach program. We have reached out to people who are underrepresented in tech and who have skills and expertise that we're lacking in this community, people like professional teachers and professional designers; we just don't have these skills around. We had about 350 applicants for about 12 spots, which was an incredible, overwhelming number of applications to get.
We just got this project started, and the projects range over a variety of topics. One of them, for example, is working on improving and adding more videos to the intorust.com screencast series. So you'll be hearing and seeing more things come out of this program as it progresses; it's going to be running through October. And we hope that this is going to make Rust more accessible in a variety of ways. Another effort: we want to get people who are already working on Rust involved in the RFC process, involved in shaping how Rust is going to progress. I, along with Manish, who is up front there, and Alexis, have started this podcast called Request for Explanation, where we discuss an RFC on the podcast. We usually have the author of the RFC on, and I think it's been going really well, because RFCs can be overwhelming to keep up with. There are a lot of them, and there are a lot of comments on them. So we wanted a different way to consume this content, so that everyone can keep up with what's changing and know what you might want to go look at and comment on. We're thinking about other ways to make the RFC process more accessible, so that we can get better ideas and end up with a better language. Another goal of the year has been to provide good quality crates and make them easy to find. To that end, we've made a bunch of improvements to crates.io. Jake Goulding and I actually wrote an RFC, and in the process we did a survey that asked people how they evaluate which crates to use. As for the results: we got 132 responses, so it's not statistically significant, but by far and away the most important thing was good documentation. If a crate had good documentation, people wanted to use it more. So crate authors, take note: you should probably work on your docs a little more.
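One low-cost way to raise documentation quality is doc comments whose examples run as tests. This is a sketch; in a real library crate, the doctest would call the function through the crate's name:

```rust
/// Adds two numbers.
///
/// With `cargo test`, the fenced example below is extracted and run as
/// a doctest, so the documentation can't silently drift out of date.
///
/// ```
/// assert_eq!(add(2, 2), 4);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    assert_eq!(add(2, 2), 4);
}
```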
There are a number of features, a number of things that people looked at, that influenced what we proposed in this RFC and what we've implemented so far this year. One thing we shipped was adding categories to crates.io, with a lot of feedback from the community about which categories we should have. These are meant to answer the question: I need a crate to do something. And this might be when you don't even know what you should be searching for yet; you don't know that you should search for serde to get something that will serialize into a certain format. We also have keywords; we've had those for a while, but they're more free-form, and anyone can decide to add any keyword. Categories have a specific purpose, and crate authors can put their crates in various categories. Within categories and keywords, we decided through the RFC process to sort by recent downloads, in the last 90 days, as opposed to all time. This is meant to remove the bias toward crates that have happened to be around for a long time. So, for example, this is the Rust Patterns category. You'll notice that quick-error has more all-time downloads, about 400,000, and error-chain has slightly under 400,000; but in the last 90 days, error-chain has been downloaded more often, so it's listed first here. And that might be an indicator that the community has decided that error-chain is a better choice than quick-error, even though quick-error might have been around for longer. We've also added the ability for crate authors to add a variety of badges; these have been implemented by many people. They're displayed with your crate and can indicate things like your continuous integration results. It looks like curl is failing on AppVeyor, so maybe we should look into that. It's not expected that every crate author will add every badge, but each crate author can decide which aspects are important for their users to know about and put those badges on their crate.
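All of this metadata lives in a crate's Cargo.toml. Here's a hedged sketch with illustrative names, using the `[badges]` table syntax as it worked at the time:

```toml
[package]
name = "my-crate"                         # illustrative name
version = "0.1.0"
categories = ["rust-patterns"]            # curated list on crates.io
keywords = ["error", "error-handling"]    # free-form
readme = "README.md"                      # rendered on the crate page

[badges]
appveyor = { repository = "my-github-user/my-crate" }
```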
Another feature we've added is the ability to see all crates that are owned by a user, or all crates that are owned by a team. So, for example, for the rust-lang-nursery libs team, you can see all 12 crates that they own. This can let you find, if you like a particular author's work, what else they've worked on; and for teams, you can find crates that are meant to work well together. So, you might have noticed that these features are not really related to documentation, which was the number one thing. And I'm pleased to announce that just yesterday we deployed a really awesome feature, and that is rendering the readme of your crate on crates.io. This is something that I've wanted for a long time, and I'm really grateful to the person who implemented it. This will let crate authors add what the crate is for and what its philosophy is, and a quick example, I think, is a great thing to put at the top of your readme. So crate authors, go check your readmes, make sure you're specifying the readme in your Cargo.toml, and let me know if it's not rendering; I'm very excited to have this on crates.io. Another thing that might be coming as part of Increasing Rust's Reach is a crates.io redesign. Now that we're putting up all of this information for you to use when deciding which crate you're going to use, it's getting a little cluttered, so we're looking to clean it up and make it easier to find the information you care about. So, my calls for help here: I would like you all to read the book, especially the later chapters, and provide feedback. I would like you to consider running a RustBridge workshop in your community. We always want ideas from the community for which RFCs we should talk about on the podcast. And we're working on adding a whole bunch of issues with the E-mentor tag on crates.io, with instructions on how to work on them. So please take a look for those, and let me know if you need any help or are looking for some guidance there.
We'd love to have you. So, I want to tell you a story about build systems. Going back to last year's survey, one of the things we heard a lot from people trying to use Rust in the context of a larger organization is that they love cargo, and they really want to use the crates.io ecosystem, but they were feeling friction integrating cargo into their big build system. If you've heard of things like Bazel or Buck, stuff like that: it wasn't always clear exactly how cargo should fit into that picture. And so we've been spending a lot of time this year trying to understand, first, what the problem even is. That's been a little bit harder than you might imagine. But recently we worked out that part of the reason this has been so hard to get a handle on is that there are really two very different kinds of customers for build system integration. On the one hand, you have places that are maybe using a variety of build systems already. They are more or less okay using cargo, but there's some aspect of cargo that they need to customize to fit a little bit better into their build system. A very common request is: I love using crates.io for the open source ecosystem, but my company needs to have some closed source crates, so I would love to have the same model but with my own alternative crate registry hosted locally. Or: I need to control caching of pre-built artifacts, and so on. The point is, most of these folks are more or less okay with cargo, but there's just some particular point of friction that's preventing a smooth experience. Then we have customer number two. Customer number two is also interested in customizing a certain set of features, but unlike customer one, they really want to control absolutely everything. And this is the case that tends to come along with highly opinionated and structured build systems that are already controlling all of these aspects.
And basically, when you integrate some other build process into such a system, you want it to yield to the larger build system, so that you have a consistent workflow and experience across the board. Picking apart these two different customers has been really helpful in understanding the space and how we should approach it. And the interesting thing to notice here is that the actual list of customizations these two customers want is the same, but there's a critical difference: for customer one, maybe you only need to provide one new feature to cargo and they are unlocked, and you have improved life for them. For customer two, until they can control everything, they have basically nothing. So what we've been trying to work out is how we can serve both of these customers at the same time, in an incremental way. For customer one, that's not too hard, right? As long as we are shipping these new features on a regular cadence, more and more of those kinds of people will have a friction-free experience with cargo. But to unlock customer two, we need to understand the essence of cargo, and what it means for an external build system to drive so many of the things cargo is usually doing today. There's been a really fantastic discussion on GitHub, with lots of different stakeholders coming from lots of perspectives, that helped us tease apart some of the key insights. I'm not going to go into a ton of depth here, but one of the insights we got was that you can think of cargo as a four-stage process, where one of the core features cargo provides is dependency resolution. If you want to use crates.io within the context of your company, you need cargo's dependency resolution to be working for you; that's the bread and butter of what cargo does.
But after you do dependency resolution, you get a lock file, and there are a number of steps remaining. Conceptually, after the lock file, you can take the configuration in the various Cargo.toml files involved, plus the dependency graph, and figure out what the profile settings actually are, what kinds of features are enabled, et cetera. So this stage is figuring out how to configure each crate in the graph at the cargo level of abstraction. Then there's another step, which we're calling build lowering, where you lower the level of abstraction down to actual calls to rustc with a whole bunch of flags, maybe running some binaries if you're using build scripts, and so on and so forth. So then the idea, for customer number two, the hard case, to give them the control they want and let them make progress right away, is that we basically run the first stages of cargo through build lowering, and then, instead of actually executing the build within cargo itself, we just spit out a build plan. That plan can then be sucked into some larger build system, which could produce Bazel rules or whatever, and now the integration is pretty smooth. This rests on work that has already happened around things like dependency resolution, to let people vendor crates locally, have their own mirrors, and so on. The idea in the long run is that this work and the work we're doing for customer one will converge: once we have higher-level ways to customize each aspect of the build, maybe you no longer need to spit out this low-level build plan, and the integration gets smoother. But with this strategy, we at least unlock customer number two right off the bat and let them make progress. All right, going back again to the survey: one of the really clear messages we got last year, which took us a little bit by surprise, is that people really like IDEs, especially people not yet using Rust.
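Before moving on, the four-stage cargo pipeline described above can be modeled roughly in code. Everything here is an illustrative toy, not cargo's real internals; the type and function names are made up:

```rust
// Toy model of the staged pipeline: resolve -> lock file -> configure
// -> lower -> build plan. All names are illustrative.

/// Output of dependency resolution: pinned versions (conceptually, the lock file).
#[derive(Debug)]
pub struct LockFile {
    pub pinned: Vec<(String, String)>, // (crate name, version)
}

/// Stage 2 output: each crate configured with features and a profile.
#[derive(Debug)]
pub struct ConfiguredUnit {
    pub name: String,
    pub features: Vec<String>,
    pub optimized: bool,
}

/// Stage 3 output: a concrete compiler invocation that an external
/// build system (Bazel, Buck, ...) could execute itself.
#[derive(Debug)]
pub struct Invocation {
    pub program: String,
    pub args: Vec<String>,
}

/// Stage 1: dependency resolution (stubbed: just pin what we're given).
pub fn resolve(deps: &[(&str, &str)]) -> LockFile {
    LockFile {
        pinned: deps
            .iter()
            .map(|&(name, version)| (name.to_string(), version.to_string()))
            .collect(),
    }
}

/// Stage 2: apply feature and profile configuration to each pinned crate.
pub fn configure(lock: &LockFile, release: bool) -> Vec<ConfiguredUnit> {
    lock.pinned
        .iter()
        .map(|(name, _version)| ConfiguredUnit {
            name: name.clone(),
            features: vec!["default".to_string()],
            optimized: release,
        })
        .collect()
}

/// Stage 3: lower each configured crate to a rustc invocation.
pub fn lower(units: &[ConfiguredUnit]) -> Vec<Invocation> {
    units
        .iter()
        .map(|unit| Invocation {
            program: "rustc".to_string(),
            args: vec![
                format!("--crate-name={}", unit.name),
                if unit.optimized { "-O".to_string() } else { "-g".to_string() },
            ],
        })
        .collect()
}

/// Instead of executing the plan, emit it for an external build system.
pub fn emit_build_plan(plan: &[Invocation]) -> String {
    plan.iter()
        .map(|inv| format!("{} {}", inv.program, inv.args.join(" ")))
        .collect::<Vec<_>>()
        .join("\n")
}
```

The key move for "customer two" is the last function: stopping after lowering and handing the plan to the outer build system, rather than executing it inside cargo.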
I think it was something like one in four non-Rust users in the 2016 survey who said that the lack of a solid IDE was really the thing standing in their way. So I'm happy to report that the IDE side of things has been going really, really well this year. There was a particularly exciting announcement just earlier this month: the IntelliJ plugin, which has been developed in a volunteer, open source style for a while, is now becoming an officially supported part of the IntelliJ IDE, which means higher resourcing and so on. I want to thank Alexey in particular, who's been doing a lot of that work, for making this happen. And if you haven't tried this IDE, I really encourage you to download it and give it a spin; it's really awesome. But you've probably also heard about a different project in the direction of IDEs. IntelliJ has its own model of the world; it works for lots of languages, and they've been building Rust support from the ground up in that setting. The other strategy we've been taking is what we call the Rust Language Server, the RLS. This is built on the compiler itself, where basically we turn the compiler into an API that external tools like IDEs can peer into, to get information for auto-completion, refactoring, and lots of other things. This work was spearheaded by Nick Cameron and Jonathan Turner; unfortunately, those two New Zealanders were not able to make it. But we anticipate a full beta release of the RLS in the near future. And we already have a couple of strong consumers of the RLS. On the left, there's the VS Code extension, which is the flagship IDE plugin using the RLS. And on the right, we have a project called rustw, a web interface to the RLS that gives you very rich source browsing functionality; we're hoping to use that in the new version of rustdoc that's coming up. Cool, okay, so that's the IDE story.
Another major theme, of course: if you want to talk about productivity, you have to have libraries. This is especially true for Rust, because we've taken a pretty lean approach to the standard library, mostly in the interest of not over-committing and not over-coupling. Since cargo makes it so easy to include dependencies, a lot of the good stuff is out there in the larger crates.io ecosystem. That's been a good model, but for it to really work, we need to make sure that the ecosystem includes the libraries you need, that they're highly polished, and that you can find them. To pull that off, the libs team this year has been running what we're calling the Libs Blitz. Is Brian Anderson in the room? No? Okay, so Brian spearheaded this, and David Tolnay has also been helping a lot. So let me tell you what this process looks like. We targeted 18 libraries, mostly foundational, some a little bit higher level. These libraries are widely used in the ecosystem, doing low-level, very important tasks, and we set out to improve all of them over the course of the year. The way we did that is we set a cadence: over the course of 36 weeks, every two weeks we would look at a different one of these libraries as a community. We have a public evaluation process with a dedicated internals thread, where we look from soup to nuts at the API design, the documentation, and everything else. At the end of that process, the library team meets to talk about the thornier issues that were raised. After that meeting, the outcome is turned into a whole bunch of issues opened against the library, and then a bunch of PRs closing those issues. And this has been an amazing community success, as with so many other things in Rust: basically, we can't keep these issues coming fast enough.
So these libraries have seen a ton of activity as part of the blitz, and that's been super exciting. But wait, there's more. One of the really cool byproducts of this process is that we're producing essentially two books. The first one is a book about Rust API design. It lays out a bunch of principles for how you should set up the public API of a Rust crate, and it's been the backbone of the library evaluation process. We started with an initial sketch of this book when we began the process, but then, as we use it to evaluate crates, questions come up that are not addressed in the guidelines, and we use those to inform improvements to the guidelines. So it's been a very synergistic process producing this book, and by the end of the year we'll have a first edition ready to go. In addition, and this goes back to the documentation points Carol was talking about, we are building a Rust cookbook. For each of these libraries, we again collectively as a community come up with several small examples that show important ways to use the library, so that you can quickly get up to speed when you want to try it out. There are code samples you can copy and paste, and you're off to the races. This is, I think, one of the key ways we can deliver a batteries-included experience for Rust: even though the batteries are not in the standard library but out in the ecosystem, as long as you know where to find this book, you can find your way to some of the best libraries out there. And then, in some sense, the final goal for each of these libraries is what we've been calling 1.0 status. That doesn't necessarily mean a literal 1.0 version; some of the libraries were already above 1.0. What we really mean is that we have kicked the tires.
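To give a flavor of the kind of advice in the guidelines book, here is a small sketch of an API following a couple of the conventions discussed during evaluations, such as eagerly implementing common traits and returning Result from fallible constructors. The type and guideline phrasing here are illustrative, not taken from the book verbatim:

```rust
use std::fmt;

/// Eagerly derive the common traits, so downstream code can put the
/// type in maps, compare it, and print it for debugging.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct Port(u16);

/// A dedicated error type, rather than a bare string.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct InvalidPort;

impl fmt::Display for InvalidPort {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "port number must be nonzero")
    }
}

impl Port {
    /// Fallible constructor: surface the error instead of panicking.
    pub fn new(raw: u16) -> Result<Port, InvalidPort> {
        if raw == 0 { Err(InvalidPort) } else { Ok(Port(raw)) }
    }

    /// Simple getter for the wrapped value.
    pub fn get(&self) -> u16 {
        self.0
    }
}
```

A caller can then write `Port::new(8080)?` and handle the error like any other, instead of discovering a panic at runtime.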
The library team has given a stamp of approval according to the guidelines and the other issues that were raised. You can expect relative stability of the library, with no major breaking changes planned, and, bottom line, the crate is ready for use. You can feel safe depending on it; it's a low-risk proposition. So this has been really exciting, and I imagine this process is something we will continue in future years as we expand the ecosystem. Now, we targeted one specific part of the ecosystem in this year's roadmap, which is the server and networking use case in Rust, particularly really high-scale servers. This was something that was mentioned a lot in the survey, but it's also where we're already seeing a lot of production use of Rust, and so we're really eager to unblock further production use and enhance the experience of those users. You've probably heard about the Tokio project, an async I/O library in Rust based on the futures API. Tokio was released at 0.1 at the beginning of 2017, and there's been a lot of work since then building up a larger ecosystem around it. Tokio is already being used in production today. I think this has been a very exciting and important step forward in Rust's story, but I very much want to acknowledge that it's not finished, in the sense that there's been a lot of feedback, as people have taken a look at this library, that it's hard to learn. It's complicated; it's tricky stuff. We take that very seriously, and part of the plan for the rest of the year is to take a lot of steps to lower the learning curve on this library, just like we're doing for Rust itself. We're going to do that in a few ways. We have a really awesome API revamp in mind as of this week, and there will be an RFC coming about that. If you've tried Tokio in the past and struggled with it, I hope you'll check out that RFC and tell us, oh yeah, that would have made my experience a lot better.
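Part of what makes Tokio hard to learn is the poll-based futures model underneath it. Here is a deliberately tiny toy version of that idea; this is not the real futures crate API, just a sketch of the core concept that a future is polled repeatedly until it reports it is ready:

```rust
// Toy model of poll-based futures. NOT the real `futures` crate API.

pub enum Poll<T> {
    Ready(T),
    NotReady,
}

pub trait ToyFuture {
    type Item;
    /// Ask the future whether it has finished yet.
    fn poll(&mut self) -> Poll<Self::Item>;
}

/// A future that becomes ready after being polled `n` times,
/// standing in for "waiting on the network".
pub struct ReadyAfter {
    remaining: u32,
    value: &'static str,
}

impl ReadyAfter {
    pub fn new(n: u32, value: &'static str) -> ReadyAfter {
        ReadyAfter { remaining: n, value }
    }
}

impl ToyFuture for ReadyAfter {
    type Item = &'static str;
    fn poll(&mut self) -> Poll<&'static str> {
        if self.remaining == 0 {
            Poll::Ready(self.value)
        } else {
            self.remaining -= 1;
            Poll::NotReady
        }
    }
}

/// A trivial "executor". A real event loop would park the thread and
/// wait for I/O readiness rather than spinning like this.
pub fn block_on<F: ToyFuture>(mut f: F) -> (F::Item, u32) {
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(v) = f.poll() {
            return (v, polls);
        }
    }
}
```

The learning-curve complaints come from having to compose state machines like this by hand; async/await, discussed next, lets the compiler build them for you.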
In addition, we plan to totally revamp the docs, write a lot more examples, and just make it a lot easier to get started with Tokio. Alex Crichton, over there. Alex has been a huge part of this project and so many others, but in particular, he has made it a mission to make async/await a thing in Rust as soon as possible. As Niko was saying before, this can go a long way toward making async I/O more usable, and Alex has figured out how to carve a path to async/await on nightly Rust any day now. There's one PR that still needs to land, but as soon as we get that, at least if you're using nightly Rust, I think the Tokio experience will be much smoother, and we hope to push toward stabilization as soon as possible. Okay, and in addition to that, there's lots of great stuff going on in the ecosystem. A few weeks back, the http crate came out, which gives you a bunch of standard types like request, response, header maps, and so on, providing a common vocabulary for anything that wants to talk about HTTP. Hyper, which has been around for a long time, is now hooked up to Tokio, and soon it will be hooked up to this new http crate. And then, very recently, a new library called h2 was publicly released, which is an HTTP/2 implementation by Carl Lerche. Carl, are you here? Thank you, Carl. And a really exciting part of this: Carl recently started working at a company called Buoyant, who have a product very deeply based on async I/O, and they are really looking at Rust, pivoting toward Rust, as a way of taking their product to the next level. The previous library they were using was in Scala; it's called Finagle, and Tokio drew a lot of inspiration from it. But they were running into all kinds of problems around things like memory footprint, where Rust has a far better story, especially with Tokio.
So they started with hiring Carl, but they're ramping up more and more on hiring Rust developers, and they're very intent on sponsoring a strong open source ecosystem in this space. So I expect, over the next year, to see a lot more really key libraries getting financial resources behind them in the open source world, which is super exciting. All right, one last piece from me. Carol talked a bunch about mentoring people who are new to Rust in various ways, but we've also been working on mentoring within the community, and especially growing Rust's formal governance structure. I've been saying recently to Niko that I think, in some ways, the Rust project has just gotten bigger faster than any of us really anticipated, and it's been a struggle to keep the leadership and governance structure scaling with the growth of the project. But we've made some really big strides this year. A lot of these mechanics are maybe not evident if you're not watching closely, so I just wanted to walk through a few. One thing we've done is added new dedicated teams that we didn't have before. We used to have a tools and infrastructure team, which covered just a huge swath of areas and, as a result, didn't function that well; it was too unfocused. So we split it up into three teams: one dedicated to cargo, one for infrastructure, and one for dev tools. This allowed us to bring a whole bunch more people onto each of these teams and made each of them more focused on a particular task. I think this has been working extremely well; I feel like we've unblocked a lot of stuff as a result and brought a lot more people into project leadership. We've also grown the core team itself, with Carol being one of the new members this year, as well as Nick Cameron, who's heading up the dev tools team. And finally, we have been exploring ways to add intermediate steps on the way to formal team membership.
Team members are generally the final deciders on RFCs, through a consensus-based process, but we want people who can work with the teams, attend meetings, and be involved, without necessarily having that final decision-making role. That looks different for the different subteams; sometimes we call them peers, sometimes shepherds. It's been incredibly effective this year and a huge source of growth. You can see the numbers here: we've added a lot of peers, and we plan to keep going with that. And then a final note on the formal governance structure: we're an organization of almost 60 people, and only 10 of those work at Mozilla, which I think is interesting. So, going back to our general theme around the impl period: absolutely everything I talked about is desperate for your help. Let me give you a bit of detail. On the cargo side, there's a lot to do, but for build plans specifically, we're still hammering out the design of the build plan format. If you are a stakeholder in this area, if you have a build system you would like to see cargo integrated with, we would really love your feedback on the early design we propose; help us iterate and make sure we're addressing your needs. And once we've actually nailed that down, help with the implementation would be really welcome. On the IDE side, I should have listed IntelliJ here as well, but I already told you to try it out. There's also the RLS: we're hoping to ship a beta soon, as I said, and we need as many eyes on it as we can get, because there are lots of subtle bugs that can arise. If you can help us find and report them, that's super helpful, and Nick has done a great job creating mentored issues here as well, if you want to actually hack on the RLS itself. The Libs Blitz is basically designed for community involvement, but we can always use more.
Both the cookbook and the guidelines are there and growing, but they can always use more work. There are tons of open issues on both of them, and there are still library evaluations ongoing; we'd welcome your participation in those, and we'll have more to say going into the impl period about other ways you can help. With Tokio, as I already said, we're going to have an RFC soon. Feedback and discussion on that would be extremely helpful, and helping with the docs would also be great, especially if there are examples you'd like to see that you could work with us to figure out and make part of the docs. In general, I think we're in the early days of the async I/O ecosystem in Rust, so if you've been looking for a good project to get to know Rust better, this is a great space to work in. You can talk to the Tokio team to find out where the gaps are today, but I think there's a lot of opportunity to build libraries and apps to make this space bigger. So thanks. Aaron mentioned that the teams are growing and Rust is growing, and there are a few other ways we can see how Rust has been growing lately. One of those is production Rust users. This is a graph of how many logos we have on the Friends of Rust page since that page was created. It started in April 2016, and we are now up to 88 companies using Rust in production. One of our biggest production users, of course, is Mozilla with Firefox, which has an exciting upcoming milestone: they're shipping Stylo, Servo's parallel CSS engine, in Firefox 57. It's available in nightly right now if you turn on a flag. I've been using it for a few weeks, and it's incredible; it's very fast. I know there are some members of the Servo team in attendance; would you wave or stand so we can see you? The ecosystem has been growing, too. The number of crates available on crates.io has been steadily going up, and we now have over 10,000 crates available. And are those crates being used? Yes, they are.
In the month of July, we had over 15 million downloads. That sounds like a really big number, right? Let's put it in perspective. You may have seen this tweet from Laurie Voss, who works for npm, which, by the way, is a Rust production user. They tweeted out their download stats by month. So let's see, where does 15 million go on this? Oh, yep, right about there. Yeah, so we're still pretty small potatoes in the grand scheme of programming language ecosystems. But this is actually exciting to me, not depressing. We're all still on the ground floor. We're still creating the ecosystem that is going to be around for a long time. We've still got room to grow, and we've still got time to improve crates.io before we're at npm scale. And so the answer to the question that Aaron posed, "are we there yet?", is a resounding no, we're not there yet. But there's good news: we can get there with your help. We've shown a whole bunch of ways we would love to have your help. In about a month, it will be a great time to get involved, if you've been thinking about it and have been hesitant. We're going to have a ton of opportunities, with mentors and instructions and ways that you can help. So I hope you'll join us on this road trip together. Thank you.