I've got a Mac this week because my Lenovo died and is in for repairs at the moment, and I have no idea how to use it. I got as far as installing VirtualBox with Debian on it, and that's all I've got right now. So this is just a talk about some things I've been trying over the last couple of weeks. It's a talk with more questions than answers, but I thought some of the things I've discovered along the way might be interesting for people. I'm certainly no expert in any of these areas, and I haven't finished packaging Rust for Debian, so this is all very much what I've learnt so far and what I believe is accurate.
So please tell me where I'm wrong or where I should do something better. First of all, Rust. Rust is a new programming language being developed by the Mozilla folks. It's really interesting: a lot of interesting ideas, a very interesting community developing it, and a very interesting attitude behind the development. So I'm very keen to see what happens with it, and I've been looking at it for close to a year now, following along with development and trying out the nightly snapshots and the pre-releases they've made. They're building up to a 1.0 release very soon now; they've just shipped 1.0 alpha. Two days ago, yeah. Their original timeline that I saw in December had them shipping 1.0 in February, and it looks like they're on track to do that. So very soon now we'll have a 1.0 release. It's a compiled language, a little like C and C++; it's down that end of things. It compiles to an executable that you run, and it's strongly typed. It's not at all like Python and Perl and Ruby and those things; it's down the C end. For the purposes of the first half of this talk, where I'll be talking about the packaging, the important fact is that the Rust compiler is written in Rust. This is actually not unusual for languages: the Haskell compiler is written in Haskell, building Python requires some Python to get going, and the C compiler, of course, is written in C. So this isn't actually that unusual, but it's what makes the packaging a lot more interesting. At the end of this talk I'll give a little bit of a blurb about Rust, the language itself, and people can just tell me to stop at that point. So we'll see how we go. But if you've got any questions, just shout. Sorry, the question was, somewhat jokingly, when am I proposing Rust as an official language for OpenStack? The answer would be never, given that the OpenStack community is strongly Python-biased.
It would be suitable for those problems after the 1.0 release is shipped, though. Anyway, the challenge is in packaging this. When you're packaging a new toolchain, how do you get the first package? You need to somehow miraculously break the circular build dependency and cause the first package to come fully formed out of nothing. And then, somewhat related to that, there's what happens when a new architecture comes along: once you've got it working on one architecture, how do you get it onto another one? The advantage there is that you can cross-compile from your first architecture; that's basically the answer. There are some unique challenges with Rust. Rust knows about cross-compiling and has multi-arch support built into the language, and all of the Rust toolchain and the various features around it do a lot of string matches against the GNU architecture triple. And it really wants exactly that architecture triple. Of course, the official architecture triple for a Linux system, as generated by the GNU config.guess and related scripts, is x86_64-unknown-linux-gnu. And on Debian we decided, well, we're not unknown, we know what we are, so we're just going to be x86_64-linux-gnu. At the moment in my packaging I'm just blindly glossing over this and saying: I'm just going to insert the string "unknown" in there and off we go. But I strongly suspect that will come back to bite me at some point. The worst that happens is I have to define a new architecture as far as Rust is concerned, called x86_64-linux-gnu. That won't be so bad, I don't think. But so far I'm just ignoring that problem. The more interesting complication is that Rust is still a very fast-moving language, and from the conversations I've had with the upstream devs, even after 1.0 they still want to be very agile and very unrestrained in what they can do with the language. They've tagged certain language features as stable, and they basically have final call on that.
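For concreteness, here is roughly what the mismatch looks like on an amd64 Debian system. The exact config.guess output varies with the version of the script, so treat this as a sketch from memory rather than exact output:

```
$ /usr/share/misc/config.guess            # the GNU scripts' idea of the triple
x86_64-unknown-linux-gnu
$ dpkg-architecture -qDEB_HOST_GNU_TYPE   # Debian's idea of the same triple
x86_64-linux-gnu
```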
There's basically a stable/testing/unstable kind of split of language features. When you're writing Rust code, you can tag it and say: I only want to use stable features, and the compiler will tell you. And when you're writing a library, you can say: this is still experimental; this particular function call, I'm not happy with that API just yet. And the compiler can tell you: error, you tried to use some experimental features in a program you said you wanted limited to the stable subset. Now, the Rust compiler itself: they don't want it to be limited to only the stable subset, which I think is something they're going to have to change. But so far they don't want to do that, and we'll see how they go. So currently they make an incompatible change to Rust several times a week, sometimes several times a day at the moment. When you're writing Rust yourself that's fine, because you're only using a smallish set of features and you probably won't have to change anything in the Rust code you've written. But the Rust compiler itself is a very large project that uses all of the Rust features. I don't know the numbers off the top of my head; it would be in the tens of commits a day, and the ones with a breaking change are realistically a few a day on a bad day. Post 1.0, I suspect that won't be the case: the language features they break won't be the stable ones, obviously, they'll be the features that were tagged experimental anyway. Sorry, the question there was: what about post 1.0? So anyway, for the purposes of packaging, what this means is that you need to build-depend on a very narrow and very recent Rust compiler, and you can really only use that to build the next compiler and no other. And then you have to use that compiler to build the next one, and no other. So it's an extra challenge. Some terminology which I'll be referring to later on: the compiler is broken out into stages, like many other compilers.
GCC uses similar terminology. The stage 0 compiler is the one you expect to already be installed, the one that's already available. When you start compiling, you somehow magically acquire a stage 0, and then you use that to compile stage 1, which is a very simple, minimal compiler. In GCC, stage 1 doesn't have any libraries built in; you can't use floating point and things like that. In Rust's stage 1 you don't have any of the support libraries, so you can't use a number of exotic language constructs, but it's a basic Rust compiler. Then you use that to build stage 2, which is effectively a full compiler with libraries and everything. In Rust, they use stage 2 to build stage 3, and stage 3 is theoretically identical to stage 2; they do that mostly as a test, to check that the compiler converges at that point. Stage 3 is what gets shipped: what gets installed on your system as /usr/bin/rustc, the Rust compiler. So again, the interesting part here for Debian packaging is stage 0, and where you get it from. Normally, when you type make off the upstream source, it goes and downloads a pre-built stage 0 minimal compiler off the internet, a wget in the middle of the script there somewhere, and uses that to build the rest. As someone using the upstream source, that works perfectly fine; it's very easy, simply make, make install, done. But for Debian packaging that's not so good. We don't want builds downloading binaries on the fly, and we generally as a community don't like the idea of blobs that we can't point to the source for and build ourselves. But for the very, very first package there's no way around this: you must have a Rust compiler that has somehow appeared. So our choices really are: one, download it, like the upstream build does; that's not so nice in an autobuild environment like the buildds.
Or, two, I could as the Debian packager download it beforehand and ship it in the Debian source package alongside the Rust source I'm about to compile: here's the Rust source, here's the compiler you should use. That would work in a hermetic sense and a reproducibility sense, and it wouldn't require network access while building. But in Debian, again, we really don't like binary blobs, and even though we can tell ourselves that we could go and build every version of these Rust compilers back through history, it doesn't feel very nice. So what I think I'm going to have to go with is the third option, which is simply that I make sure every version of the Rust compiler that is required is uploaded to Debian unstable and is used to build the next version of the Rust compiler. If I were doing that right now, that would mean shipping a new Rust compiler to unstable several times a day, or certainly several times a week. But I don't see a fourth choice here. I did a little bit of thinking about option number two there, and wondered whether that means the package should go into contrib, because I've got a binary that's perhaps non-free, or at least a binary we're not happy with. But then it's a bit strange, too: if we were another distribution, I think we'd be very happy with either option one or two, and we'd just ship it if we were less demanding of ourselves. And the output is still the same; the final compiler is the same no matter which of these you choose. So here's what it looks like in Debian terminology. You have a debian/control file that lists all the metadata about your package. The general problem is that first box there, where I'm going to build a rustc package, and it's going to build-depend on a rustc package. That's what the circular dependency looks like, and in particular it's going to be a particular version of it. So I went looking around for how to do this. For other languages, C say, this is still a problem.
But the problem arises less often, because those languages are moving much slower. And for GCC they're very liberal: they say you can compile with any C compiler, not just another GCC. But the same problem still exists, and the solutions there historically have been: well, Debian packager, you're a very clever person. Here, have some files on disk and an editor. Enjoy. And you would open up debian/control in your favourite editor and comment out the bits where it build-depends on itself. Then you would get a C compiler from somewhere, or, a better example, a Haskell compiler you'd got from somewhere and installed in /usr/local/bin, and use that already-installed compiler to build the Haskell compiler package you're about to build. And then you would uncomment the bits in the debian/control file, and away you go. Now, there are some new features coming, somewhat through Ubuntu and somewhat through various Google Summer of Code projects over the last year. As far as I can work out, these aren't used anywhere in the archive yet, but they exist in the toolchain and they're documented. And so this is what I'm trying to use now, these very, very new features. The first one is build profiles. You can see in the right-hand box there, you put some little angle-bracket things in your Build-Depends, listing profiles. It's basically a filter: in this case, if you're building not with the stage1 profile, you should consider this build dependency. So I say I would like to build with the stage1 profile, which looks like that dpkg-buildpackage command there, and then it says: right, I'm going to ignore those build dependencies. That's all done automatically; you don't have to use an editor to achieve the same result.
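As a concrete sketch of what this looks like (the version number here is a placeholder, and the build-profile syntax in dpkg has evolved over time, so check current dpkg documentation for the exact spelling):

```
# debian/control (fragment): the circular build dependency,
# dropped when building under the stage1 profile
Source: rustc
Build-Depends: debhelper (>= 9),
               rustc (>= 1.0.0~alpha) <!stage1>
```

You would then kick off the bootstrap build with something like `dpkg-buildpackage -Pstage1`, which exports `DEB_BUILD_PROFILES=stage1` to the build, and debian/rules can branch on that (BUILD_TARGET is a hypothetical variable name of mine):

```
# debian/rules (fragment): only build the minimal stage 1
# compiler when bootstrapping
ifneq (,$(filter stage1,$(DEB_BUILD_PROFILES)))
BUILD_TARGET = rustc-stage1
else
BUILD_TARGET = all
endif
```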
Now, the way I've chosen to do this: I'm going to build a rustc-bootstrap package, which I only build if I'm building with the stage1 build profile, and I know that's a minimal package that is not fully featured. I only build the stage 1 compiler; I could build the full compiler, but I'm not going to bother wasting that much CPU. I put that in a package, and I intend to never upload it to the archive. So you build this and it spits out a rustc-bootstrap .deb. I dpkg -i that, and then I should be able to immediately go and build the full rustc package using a normal dpkg-buildpackage, because the build-depends can now be satisfied using the rustc-bootstrap package. That's the theory, anyway. I haven't quite got that far in the packaging, but I'm pretty sure it'll work out as described. So that's one new feature. In a bit more detail, this turns into setting the DEB_BUILD_PROFILES environment variable. In your debian/rules, or anywhere else, you can check that environment variable and enable or disable certain features, depending on what you want to do; I'm changing the build target that I build, later on. And dpkg-buildpackage has support to pass the right flags and environment variables down through the various dpkg tools and your debian/rules file. So that toolchain is, as far as I can tell, all there. Even apt, in fact: apt-get source --compile will also set the flags to dpkg-buildpackage appropriately. So that feels like it might work okay; so far I've got most of the way through the packaging and it seems to be working out. This next part I know much less about, and it's something I've never tried, with or without the new way of doing things, so I'm much less certain about it: a new architecture. A new architecture comes along. The theory is very simple.
The theory is you use a cross compiler on an existing architecture: you have all of the build dependencies installed on the local architecture, you use the cross compiler to make the first couple of packages for the new architecture, and then you can install them and away you go. And again, in the past, that used to involve running editors and hacking things around, and using a cross compiler that you might have got from some other source; it might even have been Yocto or OpenEmbedded or something. And you'd use your cross-compile environment to bootstrap that first build-essential set. There are also some new features here which, again, nothing uses yet as far as I can tell, but the pieces all seem to be there. The idea is to use multi-arch support. As you know, nowadays Debian can install packages from more than one architecture on your local system. This is usually only used for amd64 and i386 installed simultaneously, so you can run 32-bit binaries; that's about the only common use of this rather complex feature. But you can do more with it: you can take any architecture, tell your local dpkg about it, and then install packages from that architecture. You might not be able to run the binaries from it, but you can install the packages on disk. So in this case, we tell dpkg about the new architecture, and that way the various libraries can be installed from that architecture. And apt-get can pull down the build dependencies from that other architecture, but run the compile using binaries from your local architecture, and hopefully it'll all just work out. Under the hood, this sets a family of environment variables describing the build environment and the host environment you're targeting. And so long as your debian/rules is set up for it, and your upstream source deals with cross-compiling, which Rust does, and my packaging hopefully does, that should all work out just okay. I'm a lot less certain about all that part, so if anyone knows about these pieces, I'd be very interested in talking to you after this. And again, I don't see any existing packages using this from my look around at GCC and Haskell, though I very slightly suspect there's some experimental packaging taking advantage of these as proofs of concept. But as far as I can tell, the pieces are actually in the base system, so I should be able to use them. And there are the two documents that I've built a lot of this on and am learning from as I work this out; the first one in particular has a lot of information. The second one talks a bit more about the Ubuntu features versus the Debian features, and it's a bit hard to tell what's landed where, but I think we're good. Yeah, and that was that part. Are there any questions about that? There's not a lot of information there, I know, but mostly because it's something I'm still learning about. But that's the principle of it. The previous world was a lot more hacked up, using pre-existing cross-toolchains, typically OpenEmbedded or something like that, and then you would rely on a clever person to make magic happen and it would just work. Yeah. So there's another thing that someone was working on, to do the stage 0, stage 1, stage 2 builds automatically, to teach the archive machines about the different stages. Did you look into that? I don't think it's polished in the toolchain yet. I'm just curious if you looked into it and whether you think it would have worked for Rust. I don't know what you're referring to there, so no, I haven't. Teaching the archive... oh, you mean breaking out separate packages for stage 1 versus stage 2? That was...
There were a couple of implementations discussed on debian-devel. I'll take it to the hallway, I think. Yeah, I'd better keep going. I have messed quite a bit with OpenEmbedded previously, and OpenEmbedded, or Yocto as a lot of that community has regrouped around, is a really impressive piece of work. It's designed for embedded uses where cross-compiling is normal: you often can't, or certainly don't want to, run your toolchain on your microcontroller, or even an ARM phone or something. You really want to do it on your powerful machine and then build binaries for your ARM device that you then run there. So they have a very strong, very clever, very powerful set of packaging, essentially. They've got all of the upstream sources, plus patches where it didn't already work, to make everything work with cross-compiling, and they can do things that we're not close to being able to do, like Canadian cross-compiles. I was actually using this once: I was building an unofficial Android toolchain, building on Linux a compiler that would run on Windows, and that compiler, when you ran it, would produce executables for Android. That's called a Canadian cross-compile, where you have three architectures involved, and the OpenEmbedded stuff deals with that just fine. It's very, very impressive, whereas the Debian stuff is only really getting to the point of thinking about two architectures. But still, better than before. So, I'll jump on to some Rust language stuff. There's another talk later on in the main conference, by someone a lot more qualified than me, about how you would use Rust in the Mozilla rendering engine. So I'm certainly no expert here, but I'm just going to talk about it anyway, because I can. I've got a microphone.
So: it looks like C/C++. It uses LLVM underneath, so it's got quite sophisticated optimization and cross-compiling and all the features it gains from LLVM, including a debugger: the LLVM debugger, lldb, can be used to debug Rust binaries. It unashamedly borrows lots of ideas from all around what's going on at the moment in language development. It gets a lot of its strong type system from Haskell, and it gets some of the interface and channel ideas from Go and from other languages. The new thing it brings is a very strong notion of ownership of data. Every piece of data is owned by exactly one owner at any point in time, and when that owner is finished with the data, that structure is deleted. No exceptions. To help you with this, the compiler lets you annotate pointer lifetimes, and I'll show examples of this later. You can say: my function can accept any pointer to this thing as long as it has a lifetime of at least this, and you can name the lifetimes, and then say: I'm going to return a pointer that has the same lifetime as that one. So the compiler can make quite strong assertions about when a pointer is valid and when the memory it points to is valid. So you should always have safe memory in your Rust program; a segfault or a dangling null pointer should be something that is impossible, assuming the Rust compiler actually compiled your code. The other thing that particularly interests me about Rust is that it has a very minimal runtime. All the cleverness happens in the compiler. When the compiler is finished and has given you an executable, it looks a lot like what a C compiler would produce. It has almost no runtime: a couple of libraries that might get dlopened, but there's nothing very clever going on anymore at that point. So it's very easy, for example, to call into a Rust program from C, or the other way round.
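Since the talk promises lifetime examples later, here is a minimal sketch of the ownership and lifetime idea. This is my own example, not the speaker's slides, and it's written against current stable Rust, which differs slightly from the 1.0-alpha syntax of the time:

```rust
// `longest` borrows two string slices and names their shared lifetime 'a;
// the compiler guarantees the returned reference is valid no longer than
// either input.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let owned = String::from("hello world"); // `owned` is the sole owner of this buffer
    {
        let short = String::from("hi");
        // Both borrows are alive here, so the compiler accepts the call.
        assert_eq!(longest(&owned, &short), "hello world");
    } // `short` is dropped here; a reference escaping this block would not compile
} // `owned` is dropped here, deterministically, with no garbage collector
```

If you tried to return a reference to `short` out of the inner block, the borrow checker would reject the program at compile time, which is the "impossible dangling pointer" property described above.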
You can create a .so from Rust and then dlopen it from a C program and call functions in it, just as if it were a C library. You can even go as far as doing embedded programming in Rust, where the embedded target doesn't have any of the cleverness and doesn't need any of it, but you've got a very safe program running on that embedded device. Theoretically, and I don't think anyone's tried it yet, but it's something they're aiming towards, you should be able to write a kernel module in Rust that then gets loaded into what is otherwise a very C program. You can tag your structs and your functions as saying: these should be C compatible. It makes sure it doesn't mangle the function name, and it makes sure that the struct padding and field ordering are the same as what a C compiler would do, so you can pass these straight through: you can have pointers from your C program pointing into struct members, and it should all just work. Which for me is quite interesting, because it makes it suddenly a very useful language for real-world problems, unlike some of these new ones. Go is a nice language as well in a lot of ways, but you can't mix it with existing code quite to that extent.
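A minimal sketch of those C-compatibility tags (again my own example, not from the slides):

```rust
// Lay out fields in declaration order with C padding rules, so a C
// program can share this struct directly.
#[repr(C)]
pub struct Point {
    pub x: i32,
    pub y: i32,
}

// Export the symbol as plain `point_norm2` (no name mangling) with the
// C calling convention, so a C caller or dlopen() user can find it.
#[no_mangle]
pub extern "C" fn point_norm2(p: Point) -> i32 {
    p.x * p.x + p.y * p.y
}

fn main() {
    let p = Point { x: 3, y: 4 };
    assert_eq!(point_norm2(p), 25);
}
```

Built as a cdylib, the corresponding C declaration would just be `int32_t point_norm2(struct Point p);`, which is the "pass these straight through" property described above.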
I started putting bits of the programming language in here and realised that there was already a website that did it better than I could; in particular, it has syntax highlighting in a way that looks better than anything I could do in my own slides, so I'm just going to borrow from there. Here's hello world. The //-style comments look just like normal; at first glance it looks like C++ or something like that. fn is a function declaration, main is the main function, statements end in semicolons. You'll notice there's an exclamation mark after println: that means println is a macro. It does a bit more than a regular function call. You can treat it like a function call, but you should be slightly aware that it does a bit more in the compiler. In Rust's case, you can do things that are not the same as printf format strings but achieve the same sort of results: you can put curly braces in there, and you can refer to arguments and substitute them in. Nothing very exciting, except that, because this is a macro, it's not being done at run time; it's actually done at compile time. The compiler will parse the string during the compile, break it up, and generate code instead. For example, the second println there will actually turn into: make "31" as a string, then print " days". It's not trying to interpret the format string at run time like a regular C printf would, so you get very accurate compile errors, because it's already tried to do the hard work at compile time. There's a wonderful example of what macros can do, which this site doesn't have examples of, which is the regex! macro. There's a regex library, used just like any language would use a regex library, but it includes a regex! macro, and if your regex is a literal that's already available at compile time, it will parse it and generate the matching code at compile time. So first of all you get a compile error if your regex is not valid, so you're getting compile-time checking of your regular expression, but you're also getting the optimization of the LLVM engine applied to your regular expression after it's been converted to code. You don't even have to compile the regex at run time; it's already happened for you, which is actually pretty impressive. And you can implement your own macros, but that's experimental: they still haven't worked out what they want the API to be and how they can guarantee forward compatibility, so it's a little questionable right now. And of course running the Rust compiler is very simple: it's just rustc and the name of your source file, usually ending in .rs. Nothing very interesting there. There are things like... yes, I'll get to them a little later; they work kind of the same way. So, variables. Literals look like C, operators look like C; all the usual things you're used to. One slight difference is you introduce a variable with let, and they're all read-only by default, unlike C. Everywhere in Rust they try to make the lazy version, the one with the least typing, the one that is most restrictive and most helpful for the compiler. So a variable is read-only by default, and you have to type something extra to make it mutable: the mut on the second variable there. Fields in a struct can be reordered by default, and if you don't want that, you have to type something extra. There are lots of similar decisions like that all the way through, where the laziest option is the safest and the most aggressive-for-the-compiler option, which is an interesting choice. Any unused variable is an error or warning unless you prefix it with an underscore. And you see in the mutable example there, they're modifying the mutable variable, which works because we've tagged it, but if you try to modify the other one, it throws a compile error right there. There are the usual types. There's a string type, which is actually a pointer to... that's not the
example I wanted, but anyway: there's a string type, which is a pointer to a string, kind of like a C string, and then booleans and various sizes of integers and floats, as you'd expect. There's a unit type, which only ever has the value (); it's used in some places later on where you don't really want to put a type in. Type inference: you very rarely have to actually say what type you're using. Whenever you're declaring a function, you have to give types for the arguments and the return value; almost everywhere else you don't need to mention types, the compiler can work it out. The compiler goes: oh, I see you declared a variable there; oh, I see you passed it to that function down there; that function takes a float, therefore your variable must have been a float; and similar sorts of things. This example is an interesting one. The Vec here is a vector, one of the built-in library types, and it's a generic type, like a template type in C++. So it's a vector of something; you didn't say what type it was as part of the declaration, but later on you've pushed an element onto it, and that element was an unsigned 8-bit int, thanks to the u8 suffix. So the compiler can work out that the Vec must have been a vector of u8. If you then tried to do something that conflicted with that assumption, you'd get an error, because it would now be a type conflict. So it's interesting, and you very rarely have to actually mention types. Everything is an expression, which is another way of saying everything returns a value. There's no such thing as, like in C, having an if block and then a separate thing which is the question-mark-colon ternary operator. In Rust, both of those are the same thing: there's no question-mark-colon, you just use if and else, and if/else blocks have return values. The last statement in the if block is its value, so if you want to do the equivalent of a ternary, you do the same thing. So there's one being treated like a regular if block: there's no return value used; it's actually returning the unit type, which is just discarded. This one down here is returning a value here and a different value here, so this is acting like the C ternary construct, and the compiler will verify that both branches return something of the same type. Now, this one, the version where you're returning a value, needs the semicolon at the end, to make the whole thing look like a statement: let something equals something-something, semicolon. And this one doesn't need a semicolon. They didn't have to do that in the language, but they chose to, because that's now a hint to the compiler to say: hey, I actually wanted to use a value from this; and the other one is: hey, I didn't want a value from this. And so if you mix up the two uses, it actually throws an error. It's able to tell, based on the presence of a semicolon, how you intended to use this if block. Well, it could infer it, and it's always safe to throw away a value; the question was, why don't you just infer that? And you certainly could have, but in the second example I could have messed some syntax up here, and I would have gone to all this work here and then discarded the ten-times-n or the n-minus-2 value, and the compiler couldn't have told me I was wrong, because it's still valid; you're allowed to throw away a value. So they made it so that if you put a semicolon after it, it says: oh, I meant to use that value; tell me if I didn't use it for some reason. There are looping constructs like C; they're not very interesting. There's a for loop over an iterator, and there's loop, which is for infinite loops, and you've got continue and break like normal, and you can break out more than one level, using a little label to refer to the one you want to break out of. Again, not anything particularly
Here's the hash-include part: instead of includes you have modules, which are a little more like Python's. You "use" a module up the top there, it gets entered into your namespace, and you can refer to things in it. You define them with a "mod" keyword, and it's not worth going into the details here, but you can declare namespaced functions and that sort of thing. When these get compiled, the compilation unit, the .so or the .a, is called a crate. In C, and particularly in the sort of packaging experience Debian has had, you'll remember examples like OpenSSL, where for a while there were two versions of OpenSSL floating around, with various other things depending on one or the other of those two versions, but the two versions of the library had the same symbols in them, the same, I don't know what OpenSSL's symbols are, "OpenSSL, sign this thing for me" function. And this caused all sorts of very subtle problems, because in C you've got one symbol namespace. You can do lots of tricks down at the linker level, but the fundamental problem is that you don't know whether you mean the symbol from OpenSSL 1 or the symbol from OpenSSL 2. One of the ways around that, probably the better way, is symbol versioning, where you tag all of these symbols in the dynamic linker with the particular version you want, and when you link your program you say "link against exactly this version"; then if later on you dlopen another library that linked against the old version, each is referring to just its own version of OpenSSL. That was the solution Debian had to go through. Rust has the same thing, but from the beginning, because they've learned from that experience: every library gets built with, not a symbol version, but effectively a checksum of the ABI, derived from the compiler used to build it and various details about the library.
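A tiny sketch of the mod/use idea mentioned above; the module and function names here are invented for illustration:

```rust
// Sketch of modules; the module and function names are invented.
mod greetings {
    pub fn hello(name: &str) -> String {
        format!("hello, {}", name)
    }
}

// Bring a name from the module into scope, a bit like a Python import.
use crate::greetings::hello;

fn main() {
    // Either the imported name or the full path works.
    println!("{}", hello("world"));
    println!("{}", greetings::hello("rustaceans"));
}
```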
That checksum becomes part of your SO name, and all the symbols are tagged with it. So it's very straightforward to have, somewhere in a Rust program, two versions of the same library loaded up and used by different pieces of that program, and the symbols won't conflict; it all just works out. Which is interesting. Yeah, pattern matching. A good example of pattern matching was actually on the Rust front page, so I'll use that one. Here's another Rust program, which shows off a bunch of other features. There's a for loop going over an iterator there, the "for token in chars", so it has iterators built in; they're very simple, you can define your own, and there's very nice syntax for using them. And then this match operation matches, sort of like a switch statement in C, except normally it's exhaustive: it makes sure that every single case is covered. If you're matching over an enumerated type, it'll check whether you're matching every value of that enum, and give an error if you're not. There's a wildcard, the underscore at the bottom, which is the "anything else" match, and with that it compiles fine. And it's a lot more powerful, you can do a lot more than what's in this simple example, but you'll see the match keyword used a lot in Rust. So in this example, we're declaring a mutable accumulator, and then we're going over each of the tokens in this string: if it's a plus we're incrementing the accumulator by one, if it's a minus we're decrementing, if it's a star we're doubling, if it's a slash we're halving, and the wildcard skips over everything else, in particular the spaces between the characters. Yep, good question: minus then divide, is that right? Would that have given zero?
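The front-page example being paraphrased here was, as best I can reconstruct it, along these lines; the exact program string is my guess:

```rust
// Reconstruction of the front-page calculator; the program string is a guess.
fn run(program: &str) -> i32 {
    let mut accumulator = 0;
    for token in program.chars() {
        match token {
            '+' => accumulator += 1,   // increment
            '-' => accumulator -= 1,   // decrement
            '*' => accumulator *= 2,   // double
            '/' => accumulator /= 2,   // halve (integer division)
            _ => {}                    // skip anything else, e.g. the spaces
        }
    }
    accumulator
}

fn main() {
    let program = "+ + * - /";
    println!("The program \"{}\" calculates the value {}",
             program, run(program));
}
```

Note that Rust's integer division truncates towards zero, which is what the audience question is about: minus followed by divide gives -1 / 2, which is 0.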
Yep: minus one divided by two, and integer division truncates towards zero, so yeah, zero. Alright, fine, there you go. So, pattern matching in the match statement: you can actually match on all sorts of interesting things. You can match on literal values, you can match on structure, tuples for example; you can do a lot of very powerful things which you certainly won't get a feel for from my simple example. There are structures. Structures are boring, they're just like C structs. There are generics: these are like templates in C++, more powerful than C++ templates, not quite as powerful as what you can do in Haskell, somewhere in the middle. So you can say "a list of something", or "a list of something that implements this particular behaviour", for example. You can have abstract types that build on other abstract types, and they don't care what that other type is, so long as it also implements certain behaviour. So you can build quite complicated things all out of abstract types. Here's where the memory ownership comes in. If you're a struct, you can have another structure inside you, and you obviously own that, quite clearly, because it's embedded in your larger structure. You could also have a reference, a pointer, to a structure stored somewhere else. And you can have a Box, which is an owned pointer: you're pointing to it and saying "and I also own this", so if I get destroyed, that other thing over there should also be destroyed. And this is where some of the real type safety comes in. Watch this trick, "borrowing" they call it: taking a reference to something is known as borrowing, and you can hand out a read-only reference to something as many times as you like; you can have lots of read-only references all pointing at the same thing. But you can only have one mutable reference.
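A short sketch of the generics-with-behaviour-constraints idea ("a list of something that implements this particular behaviour"); the function name is invented:

```rust
use std::fmt::Display;

// Sketch of a generic function with a trait bound; names invented.
// "A slice of anything, so long as the element type implements Display."
fn describe_all<T: Display>(items: &[T]) -> String {
    let mut out = String::new();
    for item in items {
        out.push_str(&format!("[{}]", item));
    }
    out
}

fn main() {
    // The same function works for any type with the required behaviour.
    println!("{}", describe_all(&[1, 2, 3]));
    println!("{}", describe_all(&["a", "b"]));
}
```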
And while that mutable reference is in scope, you can't have any other reference to the thing, mutable or read-only. So this is where a lot of the type safety comes in: one and only one owner. And if you pass by value to a function, you're passing ownership of that whole object to the function, and if you try to use it later in your code, it'll say "no, sorry, you've given it up, you've handed it over to someone else". This leads to an interesting effect. The most surprising thing, probably, coming from a C background, is that equals doesn't mean what you might think it means on a complex object. If you do "let a = b", you are passing ownership of everything that was in b to a; you are moving b to a, and you can no longer use b. "let a = b; print b": compile error. That's probably the most surprising thing about this language coming from a C-like language. Now, if the type implements the copy operator, which of course int does, and lots of the basic types do, then "let a = b" is fine: what it actually does is a copy, and you have two copies, which is just fine. But for a big complex type it'll be a move. You never get caught by surprise there, though, because the compiler knows very well which of the two it is, and will give you compile errors if you expected different behaviour than what's implemented. Lifetimes. Lifetimes are a little quirky to explain, so I don't know how this is going to go, but you say things like "I have a reference to something, and that something must last for at least this long". And then you can say "my function takes a pointer that must have some lifetime, and it returns a pointer of the same lifetime", which might be a common thing to do: you're taking a pointer, doing something, and probably returning the same pointer. And then the compiler, for whoever calls you, knows that the pointer you get out is valid for just as long as the thing you passed in, so you can use it in the surrounding code.
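The move-versus-copy distinction and the lifetime idea can be sketched as follows; the function name is my own:

```rust
// Sketch of move semantics and an explicit lifetime; names invented.

// The returned reference lives exactly as long as the one passed in ('a),
// so callers know how long the result stays valid.
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    // Copy types: `=` copies, and both names remain usable.
    let b = 5;
    let a = b;
    println!("{} {}", a, b); // fine, i32 implements Copy

    // Non-Copy types: `=` moves ownership.
    let s = String::from("hello world");
    let t = s;
    // println!("{}", s);    // compile error: value moved to `t`
    println!("{}", first_word(&t));
}
```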
There's lots of machinery around that, but the point is the compiler can make very strong assertions about how long, for how much of your program, a certain pointer is valid, which is quite powerful. And again, you never end up with dangling pointers; you're just not allowed to construct them. You have closures: you can create anonymous functions and callbacks, which is something Go can do, but something other C-like languages have a pretty tough time with. They use the slightly unusual pipe symbols to introduce the closure arguments. And that's most of the interesting features; the rest gets very esoteric. You'll see things like Result, no, I wanted Option, where's Option. Option is used frequently: it's a type that holds either something or none, and it's often used where you might use a null pointer in C. Where in C you might say "it's a pointer to something, or it's NULL", in Rust you have the Option type, which is the same sort of idea, except now, when you use a match statement, you have to include a case for None, or it's a compile error. So you have to have thought about the None case; you have to do something sensible. And in particular, you can't get a pointer to what's inside the Option unless you've got some code that has checked "is it Some, and not None". So it's always safe: you can never get a null value where you weren't expecting one. From an implementation point of view, it has an unsafe idea: you can say "this bit of code is unsafe", and then all those checks are out the window, you can do anything you like. A lot of the hairy bits of the standard library are implemented using unsafe blocks, so the library itself is written in fairly straightforward Rust, even when it's doing something amazingly scary like atomically reference-counted shared objects. All it is is a little unsafe block that goes: okay, increment my reference counter, get the pointer to the value inside.
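A minimal sketch of Option where C would reach for a NULL pointer; the function is invented:

```rust
// Sketch of Option in place of a C NULL pointer; names invented.
fn find_char(haystack: &str, needle: char) -> Option<usize> {
    haystack.find(needle) // Some(index) or None
}

fn main() {
    // match must handle both cases, or it's a compile error.
    match find_char("hello", 'e') {
        Some(i) => println!("found at {}", i),
        None => println!("not there"),
    }

    // You can't touch the inner value without proving it's Some first.
    if let Some(i) = find_char("hello", 'z') {
        println!("found at {}", i);
    }
}
```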
Now exit the unsafe block, and everything is safe again: I know the value isn't going to be freed, because I took care of that with the reference counter, so return the pointer. As long as the memory safety is guaranteed again once you exit the unsafe block, everything is fine. So it's very powerful, and it's easy to fence the scary bits off out of the way if you need to do something scary. When you have threads, you have tasks, which are normally threads, and you can't share anything between tasks, which is interesting; unlike Go, again, very strong ownership: this bit of data is owned by that task, and now the other guy can't even look at it. You can pass objects between tasks, and if you want to do something like a shared cache, that's when you use this Arc, atomically reference-counted, type, which uses a little unsafe bit of code to say: okay, I'm going to cheat, I'm going to look at my bit of memory and go "yeah, yeah, whatever, it's read-only, but it's reference counted": you can get a reference, and you can get a reference, and when you've all got rid of your references, then I'll free up the shared object. But unless you use one of those sorts of options that use unsafe underneath to do something scary, you've got every thread owning its own bit of data, and you pass values explicitly, with a send-channel idea, to another thread. So you should never have data races of those sorts. You can still, of course, have races over external resources: if you're creating files on disk and someone else is deleting them, you can still confuse things that way, but hopefully it won't be a memory-corruption type of race. So this is why I quite like it; it's got some really interesting ideas. The only other bit not mentioned here is the Rust packaging tool that helps you build: it takes the place of make and autoconf and the distribution channel and all those sorts of problems, and it's quite interesting, it has a few interesting ideas of its own. It's designed to be portable, even to non-POSIXy places like Windows.
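The tasks-channels-Arc model described above can be sketched in today's std API; the talk predates Rust 1.0, so the names it used may have differed, and the data here is invented:

```rust
use std::sync::Arc;
use std::sync::mpsc::channel;
use std::thread;

// Sketch of threads, channels, and Arc; invented data, modern std API.
fn shared_sum() -> i32 {
    let (tx, rx) = channel();

    // Arc: the atomically reference-counted, read-only shared type.
    let shared = Arc::new(vec![1, 2, 3]);

    let shared_clone = Arc::clone(&shared);
    let handle = thread::spawn(move || {
        // This thread owns `shared_clone` and `tx`; nothing else is shared.
        let sum: i32 = shared_clone.iter().sum();
        tx.send(sum).unwrap();
    });

    // Values move between threads explicitly, over the channel.
    let sum = rx.recv().unwrap();
    handle.join().unwrap();
    sum
}

fn main() {
    println!("sum = {}", shared_sum());
}
```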
So it doesn't call out to make; it has that functionality itself. It has an autoconf-like idea, but they said shell is a bit hard to guarantee will be everywhere, so what can we assume exists on the target platform? Only a Rust compiler. So it has the ability to compile and run some Rust code, which may in turn use lots of other libraries, so it doesn't have to be simple Rust code; that gets run, works out some things about the platform you're on, and that's then easy to use to influence the actual build proper for your package on that platform. Which sounds weird, but it's quite novel and interesting. Anyway, if you have questions, come and ask me. I should probably stop talking now, but by all means come and ask me questions and I'll tell you about it. I've been trying to write a bit of code in Rust over the last couple of months, and I've been using both Python and Rust, and they're kind of completely opposite languages in everything they do. Thank you.
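As a footnote to the probe-the-platform idea described just before the close: in today's Cargo this is a `build.rs` build script, which Cargo compiles and runs itself, so only a Rust compiler is assumed on the build machine. The particular probe below is invented for illustration:

```rust
// Sketch of a Cargo `build.rs` build script; the probe is invented.
use std::env;

// Hypothetical probe: decide a configuration from the target triple.
fn wants_win32_paths(target: &str) -> bool {
    target.contains("windows")
}

fn main() {
    // Cargo passes the target platform to the build script via TARGET.
    let target = env::var("TARGET").unwrap_or_default();

    if wants_win32_paths(&target) {
        // Feed the result back to influence the build proper:
        // this enables a cfg flag the crate's own code can test for.
        println!("cargo:rustc-cfg=use_win32_paths");
    }
}
```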