Okay, so let's go. Hi, everybody. Thanks for coming here. It's a pleasure to be here again in Bangalore to talk about functional programming. So this time I'm here to talk about a language called Rust. I'm curious: who has never heard of Rust before? Is there anybody who has never heard of it? Okay. And has anybody here already used Rust to write something? Yes. Two people. Okay. So Rust is a systems programming language in the vein of C or C++. I guess most people here have already read some C code, right? Yeah. Can you write some C code? Who thinks they can write some C code? Yeah, most people, right? But do you think you can write safe C code? Because that's the thing. Once you start using C or C++, you're like, okay, I can write efficient code. But making it safe is complex, right? There are a lot of ways you can get things wrong. So Rust is an alternative to these languages, and it brings some really nice features. Even though it's in the vein of C and C++, I think it is actually a functional programming language. Some of the features I will show today: pattern matching, algebraic data types, higher-order functions. It's also immutable by default. These things bring really nice abstractions into the language. There are also some good tools to deal with concurrency. And the big thing about Rust is really deterministic memory management: it runs without a garbage collector, but you can still write your code without having to do all the malloc and free bookkeeping that you would usually do in C or C++. And the goal is to be as fast as C++. Not necessarily as fast as C, because when you write C code you can often do fine-grained optimization, right? You can tune things really nicely. But really as fast as C++, and sometimes even faster than C++. And in some cases even faster than C, because of certain optimizations.
I won't go into that detail, because what I want to show you is what is functional in Rust. So why learn Rust, right? Because, I mean, if I could, I would use Haskell, because that's my favorite language. But there is some stuff that is really difficult to do with a language like Haskell. For example, an extreme case would be that you want to write an operating system. Okay, maybe not everybody wants to write an operating system. But maybe you want to write a hardware driver. Maybe you want to interface with some hardware you're building; then you would have had to write some C code to do that. But today you could actually use Rust to do that. Another one would be, let's say, writing a runtime for a programming language, like a virtual machine. For example, in Java, you can find a lot of C code in the JVM, even after they tried to replace pieces with a new just-in-time compiler. So if we could instead use Rust to do that, I think that would be good. Otherwise, if you can use Haskell, you know, just use it. So just to share a bit of my own experience: I always wanted to do some systems programming, but I was always afraid of using C and C++. So even if I had stuff I wanted to do, I was like, ah, no, no. I got some hope years ago when Google said, oh, we'll release a new language, the Go language. At first I got excited. But then, I mean, I don't know if you've used Go, but there's no generic programming, so I was pretty disappointed by that. And I had to wait until Rust came out to actually change my mind and start doing some systems programming. So I will give you a brief history of the language: where it comes from, how it is used. And then I will show you the functional features of Rust.
And then I will show you what is kind of dysfunctional about it, because as Erik Meijer says in his paper "The Curse of the Excluded Middle", if you don't have effect tracking and lazy evaluation, you won't really get full-fledged functional programming. So I will show you what it is that you don't get and what kind of issues you can have. And at the very end, I will show you some practical Rust features that are not necessarily very functional, but that help a bit. Some are kind of functional, but halfway, half-baked. I will show you that. So first, just a little quote. Maybe some of you know the Book of Mozilla. It's not a religious book, right? It's just an Easter egg hidden in Firefox that tells the story of Firefox, basically. And that verse was added, I think, two weeks ago. If you go to about:mozilla, you can read it. And basically, it alludes to the oxidized metal, which is rust: Firefox is now using Rust internally. So how did that go? First, in 2006, the original author of the language, Graydon Hoare, started hacking on it in his spare time, really just a hobby project. In 2009, Mozilla decided to sponsor the project. And the first release happened in 2010. But really, people started to hear about it more around 2012, because Mozilla decided to start writing a new web browser engine called Servo, maybe you heard about that, which is written completely from scratch in Rust. And the idea is to replace the current Gecko rendering engine in Firefox with this new one, which should be much faster. Probably if you used Firefox and Chromium in the past, you were like, oh, Firefox is slow, Chromium is fast. But that changed very recently, like two weeks ago, because they released Firefox Quantum. Servo is taking a lot of time to design, so they decided to create another project, called Quantum, which basically took parts of Servo and put them into Firefox.
So now if you use the latest version of Firefox, you will see it's actually much faster, because it does a lot of parallelization, and all of this code is written in Rust. The story, which is funny, is that they tried, I think, three or four times to put parallelism into Firefox using C++, but they failed. They always had issues; it didn't work. And when they used Rust, it worked the first time. So I think it's a nice success story for this language. So now let's get into the real stuff. I will show you a few functional features of the language. First thing: it is immutable by default, which means if you declare a variable using the let syntax and you try to assign to it again, you will get a compilation error, right? You cannot assign twice to an immutable value. If you want something mutable, you have to use the mut keyword here. Kind of like what you have with val and var in Scala, for example. So by default, it's immutable. That's already a big shift for people, because keep in mind that this is targeting not really people like us, but people who are doing C and C++. So that's already quite a big change for them. Another great feature is that you have first-class functions, really, functions as values. And we will see in detail what I mean by function as value, because Rust is pretty neat at optimizing these things. Here I create a lambda function that takes a as a parameter, and I just do an addition. Something we can also see here is that this code works and compiles because there is local type inference going on. So here I just add the type, right? And this is the type of a function. The syntax is maybe a bit surprising at first: the type starts with the fn keyword, and then there's an arrow to describe the return type. So that is for first-class functions, but what about closures?
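A minimal sketch of the points just made, immutability by default, `mut`, and an annotated lambda (the variable names here are my own, not the slides'):

```rust
fn main() {
    // `let` bindings are immutable by default; uncommenting the
    // reassignment below gives "cannot assign twice to immutable variable".
    let x: i8 = 1;
    // x = 2;

    // With `mut` the binding can be updated.
    let mut y: i8 = 1;
    y += 1;

    // A lambda; `fn(i8) -> i8` is the plain (capture-free) function type.
    let add_one: fn(i8) -> i8 = |a| a + 1;

    println!("{} {}", y, add_one(x)); // 2 2
}
```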
A closure means a function that actually captures some stuff from the outer scope, right? So here, instead of having the one directly inline in my lambda, I just put it outside in the environment, and this works. But if we try to put back the same type annotation as before, that won't compile. It will tell you, oh, here, f expects fn of 8-bit integer to 8-bit integer, but you actually gave it something else, and you get this weird type here, which says "closure" and gives you literally the line where the closure is defined, and then you have the list of things captured from the environment. When you define a closure, you get a type which includes all the dependencies it has, right? So this type, with the lowercase fn, is really a pure static function, which means you can only pass something that is totally static; but once you capture something, the type changes. So how did the language designers work around that, to be able to abstract over it? How can I pass a closure here? To show this, let's see how higher-order functions work. You can write a higher-order function that accepts one of these functions, the same function type as we wrote before, right? It takes one integer, and it simply applies the function two times to the thing you give it: you give a value, you give a function, it applies the function twice and returns the integer. So it's all good. We can define our function like this with the one inline, right? There's no environment, nothing. It works fine. But what if we actually extract the value? Then we cannot use the same type as before, because we would get a compilation error similar to this one. So what Rust does here is that it actually uses parametricity and polymorphism on the function type to allow you to pass different sorts of functions. So now I change the type of the function to F, which is a type parameter, right?
And then here I give a constraint. I'll explain this constraint in more detail; you can think of it like a type class, sort of. So you say, okay, F is constrained by the trait Fn(i8) -> i8, and this time the Fn is uppercase. It's uppercase because it's a trait, and a trait is more or less the same thing as a type class. So now this code is generic over the function type. What does that mean? Something which is important to understand here is that in languages like Lisp, Clojure, or Scala, when you define a closure or a function, it is actually a value on the heap, right? It's heap allocated. But when we do that in Rust, it is not. Really, when the compiler generates your program, it will inline and link everything as much as it can statically. So by making this polymorphic, when it compiles, if it's called from somewhere where the function is static, there will actually be no heap allocation at all, and that's a big deal for performance. But you can also pass something that needs to be heap allocated; we'll see later how. By making it polymorphic like that, you can do both and still get good performance. There's one thing here, though: in all the examples I gave, I just defined the function and then used it straight away. But what if you want to return a function from another function, right? Where the return type is a function. Then you have to heap allocate it. I mean, there's no way around it; that's how it's done. As soon as you want to hold onto that function and pass it around, you need to allocate it on the heap. So there's a type called Box, which explicitly boxes something. It can be a function, it can be a value, it can be anything. As soon as you use Box, you move from the stack to the heap. But you do it explicitly.
So you keep a good understanding of how the resources will be allocated and released, and how you can optimize things. Here I have a function that takes that one we had before, but now we can pass anything we want; it will create that closure and return it. Here we have to use the keyword move to let the compiler know that we actually want to take ownership of the captured context, because if you don't use move, the captured value would be deallocated at the end of the scope, right? But no, we want to put it in a box, so we want it to survive the scope of the function. So we have to use the keyword move, which will move the captures into the heap as well. So then I can define my one and create my function. Then I receive my box, and I can call my higher-order function, but my higher-order function does not accept the box, right? It accepts the function. But there's a method, as_ref, to borrow the function and pass it to the other one. You can only have one owner at a time, but we will see more about that later. So this goes from Box<T> to a reference to T, right? And as it's polymorphic, it will also accept a reference to the function. So, yeah, a closure like this is exactly like a closure in a traditional functional language: it's heap allocated. Now, another good feature which seems really functional when you see it in Rust is iterators. You can think of iterators as what Odersky tried to do, for example, in Scala with views, but probably nobody uses them because they're half broken. It's basically a lazy collection, a lazy iterator, and you find all the traditional methods that we usually have on lists in a functional programming language. But it is lazy. You have to start by transforming it. So Vec is like this; it's a vector, the general type for a list in Rust.
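Putting those pieces together, a generic higher-order function plus a boxed, `move`d closure might look like this (a sketch in today's `dyn` syntax; `twice` and `make_adder` are my names, not the talk's):

```rust
// Generic over the function type: a static closure passed here is
// monomorphized and inlined, with no heap allocation at all.
fn twice<F: Fn(i8) -> i8>(f: F, x: i8) -> i8 {
    f(f(x))
}

// Returning a closure forces a heap allocation: `Box<dyn Fn...>`.
// `move` transfers ownership of `n` into the closure so the capture
// can outlive this function's scope.
fn make_adder(n: i8) -> Box<dyn Fn(i8) -> i8> {
    Box::new(move |a| a + n)
}

fn main() {
    let add_one = make_adder(1);
    // `as_ref` borrows the boxed function as `&dyn Fn(i8) -> i8`,
    // and since references to functions also implement `Fn`, the
    // generic `twice` accepts it.
    let r = twice(add_one.as_ref(), 40);
    println!("{}", r); // 42
}
```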
But first, you have to get an iterator out of it, and when you do that, nothing happens, right? You just get back something that lets you describe what you want to do, but nothing runs yet. Then you say what you want to do. So I take that xs list, I zip it with ys; zip will make a list of tuples of the elements of the two lists. Then I simply map over them to multiply them together, and then I can filter them to keep only these. So if it were not lazy, it would actually do three or four passes: you do iter and zip, and it zips things together; then you call map, and so on. But here that's not the case. Because it's lazy, nothing happens until you call collect, and when you call collect, the computation actually runs and the actual data is retrieved. So this is pretty neat, because usually, when you were in C or C++, you were doing all these mutable things with for loops, moving stuff around, right? And you were like, okay, if I want a nicer programming experience, I lose efficiency. But this compiles to exactly the same thing as the C or C++ code, right? So there's no reason anymore to write that ugly mutable code, because you will actually get the exact same performance here, and I think that's great. It's like fusion, right? Another core feature that makes Rust quite functional is algebraic data types. I mean, that's our bread and butter, right? So here is just a simple example, where I encoded the same Maybe as in Haskell. It's called an enum, like enumeration, where you define your type with a type parameter, and here you give the type constructors, right? So let's see how we can use them. We use them with pattern matching, of course. The syntax is kind of similar to Haskell in some ways. You match on the value, and then you have to give the cases. And of course, if you forget one of the cases, you will get a compilation error, right?
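The lazy pipeline just described could be sketched like this (the xs/ys values are my own):

```rust
fn main() {
    let xs: Vec<i32> = vec![1, 2, 3, 4];
    let ys: Vec<i32> = vec![10, 20, 30, 40];

    // `iter`, `zip`, `map` and `filter` only build a lazy pipeline;
    // nothing runs until `collect`, and the whole chain is fused
    // into a single pass over the data.
    let zs: Vec<i32> = xs.iter()
        .zip(ys.iter())
        .map(|(x, y)| x * y)
        .filter(|&z| z > 15)
        .collect();

    println!("{:?}", zs); // [40, 90, 160]
}
```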
Telling you, hey, you have to deal with that. So you deal with it, and then it works. That is extremely useful for writing safe applications. So let's see. It supports recursive data types too, but it's not as trivial. If you define Nat, natural numbers with the zero-and-successor encoding, you know, you have zero, and then one is the successor of zero, and you can encode numbers that way. It's a neat way of encoding numbers. If you want to do that in Rust and you just write it this way, it will tell you: well, I cannot compile this. Because Rust is not lazy, right? Rust is a strict language, and there's no way to build something like that directly, because for the compiler that type would be infinite in size. Keep in mind, everything like this is not heap allocated by default. So to make it work, you actually have to box the recursive field, the thing that makes the recursion in your data structure. Once you put a Box here, it compiles, and then you can work with it. And it makes sense, right? If you want a recursive data structure, you have to allocate a reference on the heap to keep track of where you are. Here you are just explicit about it. Then we can write an add function. The reason I show it here is just to show that when you create a value, you get ownership of that value. It means that you, or rather the scope you are in, have to take care of releasing it. The thing is that because we work with a Box inside, as soon as we want to write a method that does addition, to get something out of a box, as I showed before, we can only do as_ref, right? So we get a pointer to it. And there's no way that in a computation where we work with pointers we can create new values, because there's no way to create new ownership to give back to the parent scope, right?
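A sketch of both points, the Maybe enum with exhaustive matching and the boxed recursive Nat (the helper functions are my own):

```rust
// The Maybe type from Haskell, as a Rust enum with a type parameter.
enum Maybe<T> {
    Nothing,
    Just(T),
}

fn or_zero(m: Maybe<i32>) -> i32 {
    // Pattern matching must be exhaustive; omitting a case
    // is a compile error, not a runtime surprise.
    match m {
        Maybe::Nothing => 0,
        Maybe::Just(x) => x,
    }
}

// A recursive type needs indirection: without the Box the compiler
// rejects the definition because it would be infinite in size.
enum Nat {
    Zero,
    Succ(Box<Nat>),
}

fn to_u32(n: &Nat) -> u32 {
    match n {
        Nat::Zero => 0,
        Nat::Succ(m) => 1 + to_u32(m),
    }
}

fn main() {
    let two = Nat::Succ(Box::new(Nat::Succ(Box::new(Nat::Zero))));
    println!("{} {}", or_zero(Maybe::Just(5)), to_u32(&two)); // 5 2
}
```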
So we actually have to implement it in a mutable way, because this way we can accumulate the numbers in this one. But that does not mean that we have to expose it in a mutable way. I won't go into the details of the implementation; it's just to explain that it has to be mutable internally, but later we can add another interface that is immutable, where you just have the plus operator and you don't have to think about that. This is the kind of thing where Rust guides you. Really often when you start using it, you're like, oh, the compiler is always telling me I'm doing things wrong, right? And you have the feeling of fighting against it. But then, after you get used to it and start to understand all these borrowing semantics, it feels like it's taking you by the hand and showing you: hey, look here, do you really want to do that? Maybe you want to do this instead. And it helps you improve your program. Yes, it can be encapsulated later. So, type classes. They're not called type classes, they're called traits, but it's literally the same approach. For example, there's no data inheritance in Rust. You can do some forms of object-oriented programming, but let's say it's not really encouraged. It's more like: well, let's do ad hoc polymorphism, like in Haskell, basically. So here I show you can encode a simple basic hierarchy of semigroup and monoid. You define your functions, and then you can have a subtyping relationship, but the subtyping relationship is only at the type class level, not at the data level, right, like in Haskell. So this is how you define some type classes, and then you have to implement them for some types. Here we implement it for an unsigned integer, just give a simple implementation, and then we can see how we can use this stuff.
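The semigroup/monoid hierarchy might be encoded roughly like this (a sketch; the exact trait and method names on the slides may differ):

```rust
// A type-class-style hierarchy: Monoid requires Semigroup, but the
// subtyping relationship lives only at the trait level, not the data level.
trait Semigroup {
    fn append(&self, other: &Self) -> Self;
}

trait Monoid: Semigroup {
    // A "static" method with no instance, like Haskell's mempty.
    fn empty() -> Self;
}

impl Semigroup for u8 {
    fn append(&self, other: &Self) -> Self {
        self + other
    }
}

impl Monoid for u8 {
    fn empty() -> Self {
        0
    }
}

fn main() {
    // Ask for the empty value of u8, then append onto it.
    let r = <u8 as Monoid>::empty().append(&3).append(&4);
    println!("{}", r); // 7
}
```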
Then you just import the trait and you have access to all the methods. It's nice because you can have type class methods that don't take an instance, right? They're totally static. So you say empty for the 8-bit unsigned integer and you actually get that zero value, and then you can append another value to it and you get your result. So that feels very functional, right? There's another feature which I find interesting as well. It's called associated types. For me, it feels a bit like type families in Haskell, if you've seen those. I'll give a concrete example. Let's say we have a Graph type, right? We have two type parameters, one for the edges, one for the nodes. Now we create a function that will compute the distance between two nodes, but we have to mention E because it's part of the Graph type, right? Even if we don't use E at all here, we still have to deal with it at the type parameter level. What associated types allow you to do is, in a trait, to have some abstract type members, right? They are there, they are defined, but you don't have to make them explicit when you talk about Graph. You can pass a graph around without talking about those types. If some of you have used Scala, you may have seen this: it's like the abstract type members in Scala, but it actually works better, I think, because the inference works well, et cetera. So when you implement a Graph, you actually define these types, right? That pretty much feels like a type family. And then when you define your distance function, you don't have to repeat any of the types; you just use the double colon to refer to the type inside the trait. So it's pretty useful, and it allows you to avoid showing some types to the user, but still get the inference working and all the fancy stuff.
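A sketch of the associated-types version of Graph (the LineGraph instance is my own toy example, not from the talk):

```rust
// Node and Edge are associated items of the trait, so code using a
// Graph does not have to spell them out as extra type parameters.
trait Graph {
    type Node;
    type Edge;
    fn distance(&self, from: &Self::Node, to: &Self::Node) -> u32;
}

// A toy graph whose nodes are points on a line.
struct LineGraph;

impl Graph for LineGraph {
    type Node = i32;
    type Edge = (i32, i32);
    fn distance(&self, from: &i32, to: &i32) -> u32 {
        (to - from).unsigned_abs()
    }
}

// Generic over any Graph; only `G::Node` is mentioned, via `::`,
// and Edge never has to appear at all.
fn distance<G: Graph>(g: &G, a: &G::Node, b: &G::Node) -> u32 {
    g.distance(a, b)
}

fn main() {
    println!("{}", distance(&LineGraph, &3, &10)); // 7
}
```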
So those are all the nice functional features, but I intentionally didn't show you some limitations, and now I will, because if it were exactly like that, you could write code very close to Haskell, right? But sadly, there are some important limitations. The most important one for me is the lack of higher-kinded types. That's really a big deal. For example, if you want to define Functor, I would love to define Functor like this, right? Here I have F, which is a higher-kinded type, so it takes another type. It could be Option, it could be a list, it could be whatever. And then I would have map, where I fill the hole with another type. But sadly, that won't compile. You can try to implement some higher-kinded type classes like Monad and Functor; there are some experiments. If you look it up on Google, you can find people trying to encode this stuff, like how you could have a trait for Monad and inherit from it, and so on. But it doesn't work well. The inference doesn't work. You can get some examples working where you use a particular Monad instance, but as soon as you want to write a piece of code which is generic over any Monad, you won't be able to do it. And that's the whole point of it, right? So you will find online, if you look for it, some people doing experiments, writing some Monads, some Functors and stuff like that, but really just playing around. But there is hope. There is an RFC, which has been open for maybe a few years now, and when I looked it up these last few days, there is actually new stuff going on. It seems people really want to get it; there is really strong interest in it. I think it's one of the most requested features. It's really not trivial, because doing it while keeping good performance is a challenge.
So if you're curious, you can look at that RFC and you will see what is currently going on to get there. There are a few features that have to be added, and people have started working on them. So there is definitely hope. Something else that is dysfunctional, though it might be obvious from what you've seen before: there is no effect tracking. So basically you can define a monoid instance, and in your empty implementation you can do whatever you want. I can print to the console, I could launch a rocket. None of that will be tracked. But there are really only a few programming languages that deal properly with that. Still, this is sad. Maybe by having higher-kinded types, it will then be possible to design libraries that help with that, in the same way that you have libraries in Scala that can help with that; Eric will show that in his talk as well. So yeah, I would really love to have higher-kinded types, obviously. Something else which is a bit crazy is the variance rules, which remind me of the variance rules of Scala, actually. It is a bit crazy because there are two sorts of type parameters: the traditional type parameters, as you see here, and the lifetime type parameters. I will explain that in more detail later. That gives rise to a good amount of complexity if you want to go into the details of the type system and how it works. There's variance, and that makes things a bit complex. What they call variance is actually covariance; there's only covariance or invariance. But anyway, this is just to show that it's not that simple when you try to get the best of both worlds. So those were the big pain points I have with Rust. Now I will show you a few other features. These are not what you will see in a purely functional language; they're more like trade-offs, but pretty useful stuff when you are out in the wild writing code.
So first, error handling. There is a Result type, which is like Either, as you may have seen in other languages, right? The sides are just swapped compared to what you usually see: if it works, it's this side; if it fails, it's this side. And the constructors are named Ok and Err. This is a core type in Rust; it is used everywhere in the standard library. So let's see how we can use it. I define a type User. It's a fake type, just to play around with functions and define stuff. I define an algebraic data type for my errors, right? That's something we usually do in functional programming, because we can track exactly what's going on with pattern matching and everything. This is great. Now I define two functions. One is about logging in a user: it takes a user ID, which is just an integer here, and I get a Result, the user if it works, or an error. And then I have another function which extracts the name from my user, say from some service. It's just a fake implementation, to show you how you can use this stuff. So now that we have defined this, we can actually use these methods, right? Let's say I want to print the name for a given user ID. I can log in, then take the value and call my other function, getName, which also returns a Result, right? If it works, it will pass the value along; if it doesn't, it will fail directly, you know? Like this Either or Validation type you may have seen in other languages. So here it's really functional style, right? This is flatMap, this is bind, but there's no monad, so it's specialized. And then I map and print the name. Something I want to show from this is that there is an alternative syntax. Maybe you've heard of it if you did some Idris; I don't think it was ever actually implemented in Haskell itself. It's called idiom brackets.
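A self-contained sketch of this error-handling style (the User and AppError definitions are my own stand-ins for the slides'), including the question-mark shorthand the talk turns to next:

```rust
struct User { name: String }

// An algebraic data type for the errors, so pattern matching
// can track exactly what went wrong.
#[derive(Debug)]
enum AppError {
    NotFound,
}

// Fake implementations, just to exercise the combinators.
fn login(user_id: u32) -> Result<User, AppError> {
    if user_id == 1 {
        Ok(User { name: String::from("alice") })
    } else {
        Err(AppError::NotFound)
    }
}

fn get_name(user: &User) -> Result<String, AppError> {
    Ok(user.name.clone())
}

// Combinator style: `and_then` is the specialized bind/flatMap.
fn print_name(user_id: u32) -> Result<(), AppError> {
    login(user_id)
        .and_then(|user| get_name(&user))
        .map(|name| println!("{}", name))
}

// The same function with `?`: inside the body we work with plain
// values, and any Err short-circuits out of the function.
fn print_name_q(user_id: u32) -> Result<(), AppError> {
    let user = login(user_id)?;
    let name = get_name(&user)?;
    println!("{}", name);
    Ok(())
}

fn main() {
    print_name(1).unwrap();
    print_name_q(1).unwrap();
}
```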
There's a paper from Conor McBride where this was described. So here, it's as if we had idiom brackets. What we can do is directly call login and put a question mark here, and from then on it's as if I have the user; it's not a Result anymore. In fact, at the end there will be a Result, but for a little while I can stop thinking about being inside that Result thing and just work with my values, you know? And that's pretty useful, because sometimes you just want to write the code in a shorter way, especially if you have nested Results; then it becomes really handy. And it's funny, because the only place I've seen this implemented is Idris, which is quite an advanced functional programming language. The big difference, though, is that here this is completely specialized for Result. In Idris, it works for any Applicative, you know? But here, again, we don't have Applicative, because we don't have higher-kinded types. So we are limited this way, but we can hope that later it might become more general. I found it to be a really useful feature. So now let's talk a bit about lifetimes. If you get into Rust, really take the time to read about lifetimes, because it's the big thing that is different from the rest. You have a type system, you have type parameters, but you also have lifetime types, and you have lifetime inference. And that is what the borrow checker is about: you have the type checker, and then you have the borrow checker. The borrow checker checks whether ownership is handled properly in your application, and it relies on lifetimes. Really, when I was starting to learn Rust, the big pain point was this, because I was like, okay, don't read the documentation, let's just go and write stuff. But this is worth learning properly. So when you actually write foo like this, it has an implicit type parameter, hidden.
Even if you don't write this type parameter here, it exists. The difference between a lifetime type parameter and a normal one is that it starts with a quote and is lowercase. Okay. So now let's see when you have to use it, a concrete use case. Let's say you have two structures, Foo and Bar. Bar includes Foo, and Foo has an integer in it. Lifetimes are really important when you deal with references, because when you pass stuff by value, it's clear that you pass the ownership with it. When you pass a value, the function that receives it will take care of releasing it later. If you pass something by value, you cannot do anything with it anymore, right? You pass the ownership, and at the end of the scope that holds the ownership, Rust deallocates automatically, without a garbage collector, right? Now, if we work with references, and you have a function that takes two values and returns another value, all by reference, Rust is actually unable to know what the lifetime of the result will be. Will it be tied to this one or to that one? The reason you have to know is that once the caller gets that value back, it needs to know when to deallocate it. Should it be deallocated at the same time as this one or that one? Because that determines which scope it is tied to. So if you write the code like this, you will get an error message telling you: missing lifetime specifier, you have to tell me. I think it could infer that in some cases, but it seems it doesn't; I don't know if there's work to be done there, but I think it could be a bit smarter sometimes and infer it by itself. So now let's see how we fix this. Here it's a simple case, because they basically must all have the same lifetime, since you might return one value or the other; you only know at runtime, right?
Let's keep in mind that all these types and all this inference happen at compile time, right? So here, we simply add a new lifetime parameter and assign it to all of them to say: okay, they will have the same lifetime, you know? It means that for the caller, these two values here (oops, sorry, that should be b2) must share a lifetime, the resulting value will have that same lifetime, and then the caller knows how to deal with it. Let's take a different case. This time we have a function where you pass these two values, but it returns only the first; it discards the second one. You could ask why; well, maybe I have to do something with the second one, get some information out of it, but I won't actually return it. If you write it like this, you will again get the same error message. But this time, here is how you fix it: as it's the first one that you return, you assign the lifetime parameter to it. You don't have to put anything on the other one, because it will be inferred, and then you put the lifetime on the return type. So what if you did it wrong? When you do it wrong, if, for example, you return a but you say that the lifetime is b's, you will get an error; that will be checked. That's why I think it could be inferred in some way, right? It seems like it should be possible. Something else which is pretty good about Rust, and that helps a lot when you write all this low-level stuff, is the tracking of mutation. For me, even if Rust has no effect tracking, I can get some of the safety of effect tracking by tracking mutation. Files are actually a good example, because you can write functions that receive a file but only use its name to, I don't know, compute a path, compute something else.
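The two lifetime situations just walked through might look like this (the function names and string values are mine):

```rust
// Both inputs and the output share one lifetime `'a`: which one is
// returned is only known at runtime, so the caller must assume the
// result lives no longer than the shorter-lived argument.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

// Only the first argument's lifetime flows into the result; the
// second one needs no annotation, and its borrow can end earlier.
fn first<'a>(a: &'a str, _b: &str) -> &'a str {
    a
}

fn main() {
    let s = String::from("hello");
    let result;
    {
        let t = String::from("hi");
        // Fine: `first` ties the result only to `s`, so it may
        // outlive `t`. With `longest` this block would not compile.
        result = first(&s, &t);
    }
    println!("{} {}", longest("abc", "de"), result); // abc hello
}
```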
Or you could pass a file and actually read or write from it, which is an effect, right? So here, if you pass a file like this, and then you try to read something from it, so it's here where we create the file and then we pass it to the function, right? You will actually have an error message telling you, you cannot read from this file, you know? Because it's an immutable reference, but I need it to be mutable, because by the definition of this function, the file must be mutable, you know? So then when you compile, well, it doesn't work, right? I want this to be mutable, and that's pretty good because it already gives you quite a bunch of safety. So if you want to make this code compile, you have to say explicitly that you receive a mutable file, and of course here you have to make the file mutable. And again, if I were calling the function without this, it would make it immutable. So even if you have a mutable reference in scope, when you pass it around, by default it becomes immutable. You have to be explicit about making it mutable, which is, I think, great, you know? Because you have to think when you pass something mutable. Yeah, so that's it about mutation tracking. But by using that, you can, in your API, you know, add safety and use this feature to make things safer, and it's extremely useful when you do IO stuff. So one more really... yeah, sure. Yeah, it's a very good question. So there's only one keyword for the data types. It's just mut for the data types. But when we talk about functions, it's different. So before we saw Fn, you know, with an uppercase F, and actually there are three of them. There is Fn, FnOnce, FnMut. We'll see FnOnce later once I explain the next topic, but FnMut, basically, if you want to pass a closure that will mutate something, it's a different type. But for values, it's just this one keyword, and then it works for everything. Yeah, so now the really, really interesting feature, you might have heard about that.
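The file example from this part of the talk looks roughly like this (a minimal sketch; the file path and function name are mine, not the slide's):

```rust
use std::fs::File;
use std::io::{Read, Write};

// Reading advances the file cursor, so the function must declare a
// mutable reference; with a plain `&File` parameter, calling
// read_to_string here would be rejected by the compiler.
fn read_all(file: &mut File) -> std::io::Result<String> {
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("mut_demo.txt");
    File::create(&path)?.write_all(b"hello")?;

    let mut file = File::open(&path)?; // the binding must be `mut`...
    let text = read_all(&mut file)?;   // ...and passed explicitly as `&mut`
    println!("{}", text);              // prints "hello"
    Ok(())
}
```

Both the `mut` on the binding and the `&mut` at the call site are required: dropping either one is a compile error, which is exactly the mutation tracking being described.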
Some people sometimes say that Rust has linear typing. It's not exactly true. It has affine types, so let me just quickly explain the difference, right? Maybe you never heard about linear typing before. Linear typing is being implemented right now in GHC, so it's something we will have in Haskell soon. Basically, the idea with a linear type is that you have a function that has an argument, and if it's a linear type, that argument has to be used exactly once. If you don't use that argument, it won't compile. If you use it twice, it won't compile. So now the difference with an affine type is that you can use it at most once. So what is an affine type? I say affine type, but it's really when you pass the ownership, right? You can only pass the ownership one time, because once you give it to someone, you cannot get it back. That person keeps it. They could give it back by returning the value again, right? But otherwise you lose it. So this gives us something really nice, because we can do the same stuff that people do with linear types to make things safer. For example, here again we have that same code that reads a file, okay? Usually you don't have to use drop. What drop does is close the file. Once you finish working with the file, you're done with it, you want to release resources, you call drop. Usually you don't, because it will be called automatically at the end of the scope by the finalizer, you know? But, well, sometimes maybe you want to do it explicitly, for performance reasons, right? The thing is that drop, unlike most of the other file functions, accepts the file as a value and not as a reference, right? So if I do this, it will actually not compile. It will generate a compilation error here, telling me that, well, you try to use the file, you try to pass the file around, right? But you cannot, because you actually already gave it to someone else.
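A sketch of the drop example being described, with the failing variant left in comments (the file path is illustrative):

```rust
use std::fs::File;
use std::io::{Read, Write};

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("drop_demo.txt");
    File::create(&path)?.write_all(b"data")?;

    let mut file = File::open(&path)?;

    // drop(file); // moving `file` into drop here, then reading below,
    //             // would be a compile error: use of a moved value

    let mut contents = String::new();
    file.read_to_string(&mut contents)?; // fine: we still own `file`

    drop(file); // explicit close at the right place; it would also
                // happen implicitly at the end of the scope
    println!("{}", contents); // prints "data"
    Ok(())
}
```

Because `drop` takes the `File` by value, it consumes the ownership, and every later use of `file` is rejected at compile time. That is the affine-type guarantee: you can never read from a file that was already closed.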
So here the compiler tracks who owns the file, right? And it knows that, at this point here, the file is not owned by me anymore. I'm not allowed to do anything with it. So if I try to close the file before reading it, I get a compilation error. And I think that is, you know, that is really good. So to fix this code, I have to put it in the right place. And then it compiles. So I discussed Box before. There are a few other wrapper types like this that help you reason about how things are used in memory and stuff like that. So Box is to explicitly heap allocate something, okay? Usually it's when you want to hand a value back to the caller, but you cannot return a reference, because you cannot create ownership like that. There is Arc, because if you try to share a Box or anything between two threads, that won't compile either. Because, again, Rust tracks the ownership, and it is able to understand that the ownership moved from one execution context to another. So you will get a compilation error. Sadly, I didn't have time to get into the concurrency stuff, but there is a structure called Arc that allows you to deal with that. Still without a garbage collector, you know? And the very last one is Rc, reference counting. So let's say you really need to share a value between... you really need to give ownership of the same value to different methods, different threads, I don't know. Then, well, that happens, right? Then you have this structure which will use reference counting. And, you know, that way you can share a value like you usually share in any other language. But of course, it's good to try to avoid using that, because when you use Rust, you want performance, and this has a runtime cost, right? Because once you start using this, when you want to get the value, it has to check who is still using it or not, you know, and it can hit performance quite a bit. Just two last things I want to show you that I found really useful in Rust.
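The three wrapper types mentioned here can be sketched side by side (a minimal illustration, not from the slides):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Box: explicit heap allocation, still a single owner.
    let boxed: Box<i32> = Box::new(41);
    println!("{}", *boxed + 1); // prints 42

    // Rc: shared ownership through reference counting (one thread only).
    let shared = Rc::new(String::from("hi"));
    let alias = Rc::clone(&shared);
    println!("{}", Rc::strong_count(&alias)); // prints 2

    // Arc: atomic reference counting, so the ownership check lets it
    // cross thread boundaries (using Rc here would not compile).
    let across = Arc::new(7);
    let worker = {
        let across = Arc::clone(&across);
        thread::spawn(move || *across + 1)
    };
    println!("{}", worker.join().unwrap()); // prints 8
}
```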
And I'm actually... I don't think I ever saw that anywhere else: there is testing integrated directly in the language. So you don't have to download a test library or anything. You don't have to have, you know, tools to run tests. You just have an annotation that you put here. And you can then do cargo... cargo is the build tool. You do cargo test and it will just run your tests. So you can inline tests directly in your code. You can make a different module with your tests. It will work like that. Same thing for benchmarks. I mean, I've never seen a language that includes a benchmark system directly in itself, right? And that's pretty cool because usually when you write Rust, what do you want? You want things to be fast. So here you can directly test if it's fast, and it gives you what is called a bencher, and you iterate on it. And then externally, you can configure the bencher, how many times you want that to run and stuff like that. So I didn't go much into the details myself. I use it a bit, but I found it really useful because I don't have to download other libraries or tools; everything is there. Just before finishing, a few more points that I did not show here, but that are really useful and I think really nice. You have automatic typeclass derivation. Maybe you've seen that at some point before. Well, I don't really want to go back, but at least there's a macro you can use to derive, for example, Debug. Debug is like Show in Haskell. It's a way to display something. By default, you cannot. You need a trait for that. You can derive it automatically. There's a lot of other stuff you can derive automatically, in the same way that you can derive Show in Haskell or stuff like that. The concurrency model is quite sophisticated. One of the abstractions is called channels. It is a message-passing style abstraction for concurrency. Pretty much like Haskell, really. The macro system is really sophisticated. It is hygienic, as they say.
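The built-in testing looks roughly like this (function and test names are made up for illustration; `#[bench]` works similarly but, as far as I know, still requires the nightly toolchain):

```rust
// `cargo test` finds #[test] functions anywhere in the crate;
// no external test framework is needed.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::add;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 3), 5);
    }
}

fn main() {
    println!("{}", add(2, 3)); // prints 5
}
```

The `#[cfg(test)]` attribute means the test module is compiled only when testing, so it adds nothing to a release build.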
So you cannot write a macro that will actually generate code that will fail at runtime. Everything is tracked. Everything is nice. It supports recursive macros. It supports also variadic macros. It means you can have macros that take a variable number of arguments. I'm not necessarily a big fan of metaprogramming, but here, you know, at this level, it can help a lot. And the way it's done, I think it's pretty nice. Last thing to mention, it's called rayon. It's a library that allows you to do parallel iterators. So maybe you've seen that in Scala with the parallel collections. In Haskell, I think you can do it as well. Basically, you can do a map on a collection, and it will parallelize all the operations. That is really, really useful. But there's much more; again, not enough time to discuss everything. So yes, I hope that you see the motivation behind Rust. Even if it might look like a C/C++ replacement... I mean, the name of this talk is the Horse of Troy, because I really think that this language is a Trojan horse for C/C++ programmers, for them to become functional programmers without even realizing that they became functional programmers. Because you've seen all these features, right? If you read the Rust book, it's not written the same way I explained it to you. It's written in a way that they're not talking about algebraic data types or anything like that. They don't want to scare C/C++ developers. So maybe that's why you never heard about it as a functional language. But I really think it is. And you might have stuff from the past that you wanted to do at the systems level, but you never did, because the tooling is crap. I think now we can, and you can try with Rust. So that's it. If you have questions? Yeah. So, yeah, good question. So how is type inference implemented compared to other programming languages? So the main difference is that it's only local type inference. I mean, there's no way you could skip the definition of a type in a top-level signature.
From my experience, it feels better than Scala. You know, Scala sometimes is not able to do the inference in... you know, you can infer in two directions, and Scala works well in one direction, but really badly in the other one. Well, Haskell is perfect for that. Rust is actually pretty good as well in that direction. Maybe it's because there are no higher-kinded types, I don't know. But I would say it's probably even better than Scala in that regard. And especially for the associated types, that works really well as well. No, no, no. Yes, so the question is, you know, Scala is compiled to JVM bytecode. Does Rust do a translation to C or C++ in the middle? No, Rust goes directly into... Actually, I don't know if it uses LLVM. I don't think so. I think it goes directly into native code. There's no C or C++. It's binary directly, right? Yeah. So I don't think they even use LLVM. It's directly binary code. Ah, it does use LLVM. Okay, good. Okay, good. So yeah, yeah. So it uses the LLVM IR, the intermediate representation, and then I guess they get some optimizations for free thanks to that. Yeah, makes sense? Yes, you can. That's a really good question. And that's actually... So I didn't... The question is, if I have a large C application, can I call Rust from it? Yes, you can. You can also from C++. Basically, what you can do is that you can expose Rust like a C function. I never did it myself, but it's something I want to do. The reason I want to do it is because, basically, just to explain the motivation, I wrote a machine learning library with Rust, and now I want to do a Haskell binding to it. So I will look into that. But it seems totally feasible, yes. Exactly. Yeah, exactly. You need to describe the FFI, the foreign function interface. You have to be explicit about it. It's not like all your functions will be exposed. You have to tell it how to expose the functions.
And I guess there are some specifics about Rust's ownership handling and memory management, because you might be able, when you use FFI, to do stuff from C that might be unsafe. So I guess you have to be really careful, but I didn't go there yet. So, yeah, I think your friend is wrong, because basically, I mean, what I was saying at the beginning is that... I'm not an expert in Go. I guess it's better than C++, but Go is really far from what you hear about the safety of Rust's memory management. And there is a blog post, this one, Fearless Concurrency in Firefox Quantum. You should check that. You should read that. Go is garbage collected, and I don't think you have all these memory-management safety features. So if you look into this article, you could try to see, because they explain exactly what kind of stuff they use from Rust that makes it safer. So it would be interesting to try to see if you can port some of this stuff to Go. And my guess is no, but... No, no, from what I understand about the previous failure that they had with C++, it was about memory safety. They were not able to write memory-safe code while doing this complex parallelism in Firefox. Yes, yes. Yes, it's... Yes, exactly. Yes, so the question is, why use linear types or affine types? What does it give you? That's a really good question, and that's one you see as well on the GHC proposal, because people are wondering, why do we need this? Here really, I mean, here in the case of Rust, but it's the same in Haskell... So it's even better in Haskell, because here the idea is to be sure that you won't be able to read a file which is already closed, okay? So this is the feature you get from an affine type. It's really that. I will never be able to read from a file which is already closed, because of the borrowing semantics.
The thing about Rust is that, because it tracks ownership, even if I forget to close the file, it will do it itself, because it has the concept of a finalizer, which means an affine type is good enough. But in Haskell, that won't work, because what you want with a linear type is to force the person to explicitly close the file, for example. Right. No, so it depends. Some operations you can do multiple times, some you can't. This is the key thing. If you pass the file as a value, you can only pass it once as a value. But if you pass it as a reference, you can do it as many times as you want, because when you pass a reference, you pass the ownership for a temporary amount of time. So you let the person do their thing and then you get it back. But when you pass it as a value, it's lost. So passing it as a value can only be the last thing you do with it. So it forces you: basically, when you design your API, if there is a function that is a finalizer, that has to be done at the very end of the resource's life, you make that function accept the value, not as a reference but as a value, and then the user will have no choice. If you call this function, you will have to pass it like that, and then you won't be able to use it anymore. The big difference with Haskell, though, is that in Haskell, there's no finalizer. So you want to actually force the user to call that function. It's not optional. Here it's optional. If you don't call it, it will be released anyway. In Haskell, with linear types, they want to force the user to actually call it. And you force them to call it, and you're forced to call it at the end. Exactly. Exactly, exactly. It is complicated, but if you want this trade-off, I mean, if you want safety and performance, it cannot be made simpler yet. Maybe tomorrow, yes, but as of today, you know. So, yeah, and it's really true. The reason usually people rely on C and C++ is to do these things, right, because they think they're smarter than the compiler or anything.
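The API-design point being made here can be sketched like this (the `Connection` type and its methods are invented for illustration): ordinary operations borrow the value, while the finalizer consumes it.

```rust
struct Connection {
    name: String,
}

impl Connection {
    // Borrows the value: can be called as many times as you like.
    fn send(&mut self, msg: &str) -> String {
        format!("{} -> {}", self.name, msg)
    }

    // Takes ownership: necessarily the last call on this value.
    fn close(self) -> String {
        format!("{} closed", self.name)
    }
}

fn main() {
    let mut conn = Connection { name: String::from("db") };
    println!("{}", conn.send("ping"));
    println!("{}", conn.close());
    // conn.send("pong"); // compile error: use of moved value `conn`
}
```

Because `close` takes `self` by value, the type system itself enforces the "finalizer last" rule: any use of the connection after `close` is rejected at compile time.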
Oh, yes, I can be smarter. I know I can deallocate here, because I know what I'm doing. But you know what happens when the programmer says, I know what I'm doing. You never know what you're doing, right? You need a compiler. You need a borrow checker. And it's exactly what you were saying. The borrow checker is doing that. It's like having a new phase in the compilation to check this stuff. I would say you hit the same bugs you hit with Haskell and Scala: logical bugs, and then the lack of effect tracking. Maybe you do something somewhere, and then in another context you call that function, but you don't expect it to do something heavy. So that's the kind of stuff you would have. But I never had any memory issue. You cannot do pointer arithmetic. It's forbidden. There's no way to do it. So you cannot access an unsafe part of the memory. It's impossible. Right. Yeah, but why do they do it? I mean, that's the question. It would be interesting to know. So the question is, usually when people do graphics or, I guess, video games and stuff like that, there are little tricks with pointer arithmetic for performance reasons. That's... we need to see case by case, but that's where Rust sometimes is not faster than C or C++. Sometimes it is, because maybe you want to do this stuff, but it's too complicated for you to do it right, and maybe Rust will do it when it compiles. Because Rust has a really good optimizer, so maybe the optimizer will be able to do this. Maybe not. But yeah, it's a good point, yeah. Oh, sorry. Feel free to catch me just after and I'm happy to discuss it a little more. Thank you.