Alright, so I'm not actually going to talk too much about Rust itself. I'm just going to give an introduction on why I think it would be useful for embedded programming. And I've put up some Haskell code here, not because I'm going to talk about Haskell, but because my point is that programming languages are, you know, all good. Well, some of them are better than others, but they're good. I should stay here, shouldn't I? But I really think it's about using things that are good for their use case, right? This is a compiler that I'm developing for some DSP stuff, and Haskell, I personally think, is brilliant for writing compilers. It's really good for symbolic processing, very good for static checking. It's not that great to run on a tiny little microcontroller with very small amounts of memory. Now, people will try, but those are research projects, and the garbage collection — it's debatable whether you want that on those things. There we go. So my point is that there have been a lot of developments in programming languages over the last 20 or 30 years, particularly driven by languages like Haskell, but also languages like JavaScript — dare I say Java too, probably. And in the last seven or eight years, C++ as well, right? C has been slowly catching up in some ways, and it doesn't need to change that much, because it's very good at what it does. But when we think about programming microcontrollers, I think we could probably take on board some of those modern developments and apply them where appropriate. That doesn't mean I'm against the alternatives — I think MicroPython is interesting, my kids use it, it's really cool, and it's great that you can run it on a BBC micro:bit, for example. Do I think you should do production development in it? Probably not, right?
But it's a great teaching tool, so it just depends on what you're trying to achieve. Scratch I found quite hard to teach to my students, but it's still a good tool for what it does. Okay, so that was just a little precursor. I'm from the University of the West of England in Bristol. Is it going to cut off half my slides now? All right, let's see how it goes... it is going to cut them off. I'm going to have to change the resolution, sorry. Let's see if we can see it now. Go back to here. Wow. It's mind-boggling. Okay, I'll just continue the talk. So, I'm coming from an embedded perspective. I do a lot of work with the Internet of Things and physical computing, with building devices, all those sorts of things. And what we're really seeing — I'm teaching a course on the Internet of Things to my second-year undergraduate students, and very few of them, although they're all computer science students, are that good at programming, to be frank. They're certainly not that good at C programming. A lot of them do Java in their first year, and that's fine; I've got nothing against it. But they come along, having done an operating systems course in the previous module, and in this one we're all using C and pointers. And they kind of bash along with it, but some of them find it quite difficult, and to be honest, a lot of that difficulty is unnecessary, I think. But we do need to program these devices at this level. These have to be system-level languages as far as I'm concerned, so we care about high performance. But we also want an easy software experience.
That's the thing: I've been teaching embedded programming for quite a while to students who are expecting embedded programming. They know what JTAG is and all those sorts of things; they know what OpenOCD is by the time they arrive at my course. It's really not that hard just to continue with that. But if you've got students who have no idea outside of Visual Studio or NetBeans, it's quite a different thing. And moreover, I'm starting to teach some of this stuff in schools, and it can be a difficult experience. You know, GDB is a pain in the ass, and the development tools on Linux aren't great unless you're really, really experienced with them, right? It's not a nice experience to come to straight away. I see students go to Visual Studio, and it is a pretty nice experience, and I think it's naive to pretend it's not. I want embedded development to be similar, right, to have that kind of feel. Still, when you go to Visual Studio, you develop in C++, and I think that's fine. I don't pretend that all of a sudden we're going to replace systems programming languages with these high-level languages on embedded architectures. That's not going to happen. So I want to find some sort of middle ground. We've been looking at the Cortex-M architectures — ARM's a Cambridge company, but actually, to be honest, the Cortex-Ms really are quite good processors. They're very cheap, they're accessible. We mostly use ST, not for any reason other than that they produce lots of chips; they're cheap, they're good. And there's a wide range of applications they cover. For example, the M7, which is the monster processor of the range, has everything in it. I mean, if you think back 20 years, this would have been a pretty good desktop machine. And we've just got the STM32H7, which is ST's implementation, and it runs at 400 megahertz, which is quite impressive.
You know, it's got a megabyte of RAM and so on — it's an impressive processor. You can have off-board SDRAM, all that sort of stuff. It's a very powerful chip, okay? And the consequence of that is that we're starting to write software that goes way beyond what traditionally would have been found on embedded architectures. What I mean by that is that software development is much more complex. You are writing relatively large programs, okay? Not the millions of lines you find in operating systems, but still reasonably large programs. I went to the Unix talk yesterday on the history of Unix, and they said the first version was about 2.5K lines of pure assembler. Today it's millions, or whatever. I'm not expecting embedded to grow like that, but it is still going to grow quite rapidly, and if we continue with the techniques we've used to program embedded architectures, it's going to be very hard to sustain. And there's a whole range of MCUs out there. ST are the ones we use — I'm not promoting ST in any sense; I did work there for a while, so maybe I shouldn't promote them. But they do provide a very wide and varied set of parts depending on what you want to do, all the way from M0s, which are tiny little stamp-sized chips running at hardly any power with almost nothing on them — some don't even have a SysTick — all the way up to these very high-end microcontrollers, which are getting close to application processors. Not quite: they don't have MMUs and such, but they do have memory protection units, those sorts of things. So it's varied.
The nice thing, I think, about the ARM architectures — and again, I'm not advertising ARM in any way — is that the range goes up in a backwards-compatible fashion, so you can take your programs and port them fairly easily up the hierarchy as you feel you need new features, and choose what's beneficial. If you need an FPU, you can add that; you don't get that on an M3, and you get DSP instructions, I think, from the M4 onwards, and so on up the hierarchy. And I haven't looked at it in detail, but the M33s, which I was looking at yesterday on their website, now have TrustZone as well, so you can start to have areas where you can embed keys and all of that sort of stuff. So you can imagine it's quite a complex software stack. And coming back to the beginning, when I was talking about the Internet of Things, it's a software stack that has to care about security, right? It has to care about things like denial of service; these things are going to store your keys on them, potentially you're going to be buying bitcoins with them, I don't know, whatever, and they need to be secure. And to be honest, we haven't done a great job of that with some of the existing OSs and stacks. We're doing much better now, but I think we do need to be aware of it, and obviously ARM are thinking about it by adding TrustZone, which is their secure area where you can have secure regions of memory that only certain bits of code can access, those sorts of things. They see that as a potential market going forward. Okay, I'm sure you've all seen — I personally think it's an awful thing — the Amazon Dash, where you have this little button and you press it and it orders your washing liquid. Really. Is it that hard to go to the computer? Anyway, whatever.
But you know, every application, anything to make money, I suppose. These things work by a little optical link: you hold it to your phone, it quickly enrols the device, you put in your password, and it's all linked up to Amazon. It's all this onboarding process where you're sharing all your keys and things. There was an example a guy from Dyson showed me, where he took this cheap Internet of Things gadget — he bought it in a shop called Maplin in the UK, which is basically a bit like Radio Shack, but not as good — and it measured the temperature and, I think, the humidity of the room. He set it all up in this example, put it online, attached a little sniffer, and the gadget just sets up a hotspot and you could see your password going by in plain text. It was incredible, you know. Meanwhile, Dyson had spent three years working out how to do 256-bit encryption to connect their robot vacuum — you know, those little automated vacuums that go around the house. He showed me the forums: 99% of the complaints were that it was too hard to get online because of the security. So it's a real trade-off, right? They reckon they lost a huge amount of sales because it was too complicated, while the cheap gadget was 20 quid and everyone just bought it and connected their phone without thinking. Anyway, that's the design tension. Okay, so we're building these boards — unfortunately, I completely forgot to pack one when I came — using multiple ARM cores. This is an audio board; we do a lot of work in audio research, very specific, so it's got an M7 because we want to do DSP on one of the cores. The other one is the controller, handling MIDI and that sort of stuff. And so, how do we want to program these?
I want these boards to be programmable by their users. If you think about the audio board as our example: the users of this board — we're trying to build a digital instrument toolkit, so you can build a digital instrument and play around and experiment with it — are musicians, okay? More and more, we're seeing non-computer-science-trained people become programmers one way or another. There's lots of great software out there, Max/MSP and other things, and people are emerging from there. A lot of web developers have come from humanities subjects and the like, which is great — I think it's a great contribution, and there's no reason why they couldn't be brilliant at it too. However, if we make them spend months fiddling around with JTAG and all that, it's not a very pleasant experience, and we can be pretty sure that not many of them will stick through the grind, because they're trying to produce their next album or whatever, right? And I suspect — we haven't done it yet — that even if we went and talked to companies like Ableton, a big audio company that makes track-design software for computers and has more recently started to move into hardware, we'd guess they might want to move some of their synth generation onto these bits of hardware. Even with their technical people, if you start out and say, oh, it's all low-level, it's going to take you ages — you're going to put them off pretty quickly, okay? So I think thinking about the software stack is really important. ARM provide a really excellent low-level set of libraries called CMSIS, which works across the Cortex series, and it gives you access to the peripheral blocks, the GPIOs and all that sort of stuff.
It's all C-based; some of it's obviously assembler, because you've got to bring up the machine, and some bits are your bit-banging and all that sort of stuff. And then ST — because we use ST; again, I'm not promoting ST in any way — have a thing called STM32Cube. Anyone ever used that here? Yeah — everyone who has wishes they'd never used it. Yeah, me too. It sits on top, and I've got to be honest, it's a really shockingly awful piece of software, right? It's unbelievable. You go in there, you find a function to twiddle a pin, and it's three functions deep, none of them inlined, and all you're actually doing on the ARM architecture is flicking one bit. I mean, it's just unbearable, right? You can't design software like that. But it's pretty easy to imagine how they got there if you've ever worked in an embedded company. Someone writes something over here, someone wants to switch the arguments around because they're in the wrong order, that sort of thing. They've been taught at school not to use #defines because it's bad abstraction, whatever. It's easy to get there, right? And you can be sure — I've done it, and I've actually measured it — that the performance is absolutely shocking. On chips that are only running at 16 megahertz, it's insane. Okay, and I suppose the code's been developed over a long period of time, that sort of stuff. Anyway, back to CMSIS. ARM — I've got to say, I suppose it's their business model; they sell IP to other companies to build chips — their documentation is amazing. I'm not going to go on about it. They have a whole set of libraries, for example really excellent optimized DSP libraries: FFTs, convolutions, all those things you'd expect, for their architectures. A lot of it's written in assembler, or at least with built-ins and stuff.
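To make the "three functions deep to flick one bit" point concrete, here's a minimal sketch of what setting a GPIO pin actually boils down to on an STM32-style part: one volatile store to the port's bit-set register (BSRR). This is my own illustration, not code from the talk; it uses a stand-in `u32` instead of a real register address so it can run on a host machine.

```rust
// Sketch: on STM32, writing a 1 to bit N of a GPIO port's BSRR register
// sets output pin N atomically — no read-modify-write, no layered HAL
// calls. `bsrr` here is a stand-in pointer, not a real hardware address.

#[inline(always)]
fn set_pin(bsrr: *mut u32, pin: u32) {
    // A single volatile store is all the hardware needs.
    unsafe { core::ptr::write_volatile(bsrr, 1 << pin) }
}

fn main() {
    // Stand-in for a memory-mapped register, so this runs anywhere.
    let mut fake_bsrr: u32 = 0;
    set_pin(&mut fake_bsrr as *mut u32, 5);
    assert_eq!(fake_bsrr, 1 << 5);
    println!("pin 5 set: {:#010x}", fake_bsrr);
}
```

With `#[inline(always)]` this compiles down to essentially one store instruction, which is the kind of cost model you want on a 16 MHz chip.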
And they go all the way up to more complex things, which I haven't used, that handle the debugging interfaces and all those sorts of things. So it's really good, very well documented. And it's portable between all the ARM Cortexes, which works for me. Okay, we've made the choice to stick with ARM; clearly that rules out some other architectures, but that's our decision. Beyond that, though, we don't want to pick a particular processor or a particular manufacturer, because at the end of the day ARM is only an IP company. ST, which we use a lot of because they're cheap and they're good, provide these additional tools on top. But straight away you're restricted to STM32, which is of course what they would want; I can understand why they do that. Then tie that in with the fact that I find the tools aren't actually very good. They're great for getting Hello World going, really nice for the initialization, but they're actually pretty poor once you get beyond that — it's a personal thing, but that's what we've found, and it's a bit frustrating. So, given that they're non-portable beyond ST's Cortex-M parts, and the code is not really that well optimized, we've tried to move away from them. So, the conclusion of this first bit of the talk: we use STM32 processors, but basically we're picking the Cortex-M processors. We're using CMSIS, because it is really nice — very low level, well optimized, very well documented, and it spans all of the ARM architectures. We've decided not to use the Cube stuff because of its lack of portability, but also because I just found it difficult to use and not very performant.
The one really nice thing about Cube is that if you start to put external oscillators on your chips, which you quite likely will if you want to run them at high frequencies, it'll do all the clock calculations and the clock initialization for you. So we always use it to generate that, and then I just paste it back into our own code, because it's a very handy tool. You don't have to sit down with a calculator and work it out yourself — which, to be honest, I'd probably get wrong if I did. So I just let it do it, which is great. That is very convenient, and obviously it gives you all the pin layouts and all that sort of stuff, which is very useful. And we can't get away from using assembler, because you do need it — which is actually quite fun when I'm teaching it to students who don't even like C. But it's a tiny amount of assembler, really small compared to what you might think; early Linux was probably half assembler compared to what you'd have to write now. So that's quite nice. Clearly, you could use C. I'm not going to put down C — it's a great language and I use it all the time for lots of things. I use C++ as well; I'm quite a big fan of C++. I know there are lots of downsides to it, but whatever. We could probably do better. What I mean by that is that when we start to care about security, and about the fact that we don't want our programs to crash, because that might let someone get at the stack and those sorts of things, it's pretty difficult to verify C programs. If they're tiny bits of firmware, we probably can. But firmware is getting bigger and bigger, and that's getting harder and harder. So I think, with a lot of other people entering this domain, we might be able to do better, by applying some programming-language developments — and I haven't got that long, so briefly: we could adopt C++17. Absolutely, we could. I'm all for it. It's good.
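The clock arithmetic that Cube automates is roughly this: a typical STM32 PLL divides the input clock by M, multiplies by N, then divides by P. A minimal sketch, with illustrative divider values (the 8 MHz crystal and the M/N/P figures below are my example numbers, not from any particular datasheet):

```rust
// Generic STM32-style PLL output frequency: f_out = f_in / M * N / P.
// The divider names follow the usual ST convention; values must be
// checked against the specific chip's reference manual in practice.

fn pll_output_hz(f_in: u64, m: u64, n: u64, p: u64) -> u64 {
    (f_in / m) * n / p
}

fn main() {
    // Example: an 8 MHz external crystal up to a 400 MHz core clock:
    // 8 MHz / 4 = 2 MHz PLL input, * 400 = 800 MHz VCO, / 2 = 400 MHz.
    let sysclk = pll_output_hz(8_000_000, 4, 400, 2);
    assert_eq!(sysclk, 400_000_000);
    println!("SYSCLK = {} MHz", sysclk / 1_000_000);
}
```

The formula itself is simple; what Cube saves you is checking the constraints (VCO input and output ranges, maximum bus frequencies) across all the derived clocks at once.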
I think there are some really exemplary examples of this. ARM's Mbed OS is an excellent example of a really well-engineered, structured design. It's not over-engineered, but it's been thought through; they actually sat down and did some design before they hacked on implementing it. The University of Lancaster worked with the BBC and with Broadcom when they were developing the micro:bit, and have this really nice thing they call a device abstraction layer. It's kind of limited, but I do think it's a really nice example of what an API could look like. Of course, C++ suffers from all of C's shortcomings: null pointer dereferences, out-of-bounds accesses, buffer overruns, things like that. And moreover — I do think this is quite important — any of us who have programmed with Python or JavaScript and Node and so on will find the lack of a modern package and module system quite frustrating. They're still arguing about modules in the C++ Standards Committee; maybe one day we'll get there, but not yet. So we're proposing Rust. This slide is just taken from the Wikipedia page, so you can read it at your leisure, but the most important thing is that it describes itself as a safe and practical language for systems programming, with a goal of performance comparable to C++. Now, my experience is that it's not quite there yet, but it is pretty close. On the surface it's a C-like language, but it has a bunch of different things. I put Haskell up at the front there because Rust has taken a lot of its ideas: it has this thing called traits, for example, which are effectively Haskell type classes; it has type inference; it has lambdas, or closures, whatever you want to call them. It's pretty modern. But what's really nice is that it is completely ABI-compatible with C if you want it to be.
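A tiny sketch of the features just listed — a trait (much like a Haskell type class), type inference, and closures. The temperature example is mine, chosen to fit the sensor theme later in the talk:

```rust
// A trait behaves like a Haskell type class: a set of operations that
// types opt into by providing an `impl`.
trait Describe {
    fn describe(&self) -> String;
}

struct Temperature(f64); // degrees Celsius

impl Describe for Temperature {
    fn describe(&self) -> String {
        format!("{:.1} °C", self.0)
    }
}

fn main() {
    let readings = vec![Temperature(21.5), Temperature(22.0)];
    // `map` takes a closure; element and result types are inferred.
    let labels: Vec<_> = readings.iter().map(|t| t.describe()).collect();
    assert_eq!(labels[0], "21.5 °C");
    println!("{}", labels.join(", "));
}
```

Unlike class-based dispatch, the trait can be implemented for any type after the fact, which is exactly the type-class flavour the talk alludes to.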
Now, because it has things like overloading and those sorts of things, it has to do name mangling to handle that. But at any point, you can just drop out to C. So there's an extern block, just like in C++ where you wrap extern around things; you can declare functions, and you can link to them. When you call a C function, though, you have to put it inside an unsafe block, to say that you are willing to opt out of some of the stricter guarantees — i.e., in safe Rust you cannot get null pointers, you cannot get a double free or a use after free. You cannot do that. It uses a very strong type system to track, basically, who owns a pointer at any given time. That's a really nice feature. It does make programming in Rust quite tricky at times, there's no doubt about that, particularly if you're a beginner and you're not familiar with things like Haskell, but it's very powerful. So there are lots of benefits: safe pointers, compatibility with C, many of the modern features. It doesn't have classes; it has traits, which are a kind of merger between type classes and classes, plus type inference and a lot more. And it has a really great module system and package manager called Cargo. There are a few constraints, though; it's still a bit of a problem using it for embedded at the moment. It's LLVM-based, and LLVM is great for its x86 target, but some of the other targets are still not as optimized as they could be, and we found some examples. We had this single-wire protocol for those cheap temperature sensors, the DHT11, which many of you will have heard of. It's based purely on timing — microsecond timing — and we found that LLVM was producing code that made us miss the window, which was very problematic. And it took me a long time to bloody debug that, anyway. All right.
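The C interop described above looks like this in practice: declare the C function in an `extern` block, then call it inside `unsafe`. I've used `abs` from the C standard library so the sketch links on any hosted target:

```rust
// Declaring a C function: the extern block gives it the C ABI,
// so Rust performs no name mangling on `abs`.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // Calling across the FFI boundary requires `unsafe`: the compiler
    // cannot verify that the C side upholds Rust's guarantees.
    let x = unsafe { abs(-42) };
    assert_eq!(x, 42);
    println!("abs(-42) = {}", x);
}
```

The same mechanism works in the other direction: marking a Rust function `pub extern "C"` plus `#[no_mangle]` makes it callable from C, which is what makes incremental adoption in an existing C codebase feasible.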
There's lots of stuff that can be improved in Rust for embedded, but it is happening; there's quite a lot of work going on. There are crates — which are basically Rust's packages — and there are some exemplary ones for particular boards, board support packages, effectively. But they're quite limited at the moment; that's the problem. So, this is my last slide. The problem with Rust for embedded architectures, I think, is one of those open source problems: there are quite a few projects going on, often run by PhD students and the like, and once they leave, the projects go — you know — rusty, and never get worked on again. There needs to be something to step in and take them to the next level. Mozilla have made a huge investment in Rust, but they're really interested in the PC market, because that's where their browsers run. So I think that whilst it has huge potential for MCUs and embedded, there's still some time to go until I would recommend using it for real projects — "real" in quotes. We're using it, but then I'm at a university; we don't have quite the same deadlines and requirements. We're expected to publish papers, and Rust is quite a nice place to be working because there's quite a lot of low-hanging fruit that you can grab, particularly in embedded. Would I recommend it for a product that's got to be delivered next year? Probably not, unless you've hired a bunch of very, very smart Rust people. I think there's still a way to go, which is a shame, because I do think it could offer a lot, but I've got to be honest, I don't think it's quite there yet for embedded. If you're doing concurrent programming on x86, though, I think it'd be great.
Well, that's my last slide, and I've got five minutes left, so perfect for any questions. [Audience question about D] So, that's a really good question. Before I'd really heard about Rust, which was a couple of years ago now, I spent quite a bit of time with D, and I really like it; I don't think Rust is necessarily any better. I got into a conversation with someone about Rust at a conference and I just went that way, so I wouldn't want to say — I haven't done any analysis of whether the compilers are better or anything like that. For me, I'd done quite a lot of work with LLVM before I joined the university — I was at AMD — so Rust fits quite nicely because I understand the toolchain already. And I like that Rust has strong similarities to Haskell, because my PhD was in Haskell, but I don't think that makes any argument that one is better than the other. [Audience follow-up] Right, exactly. Yeah, it's true, I haven't looked at that at all. When I first got into it, I wasn't thinking about that; I just learned it from a PC perspective, and from the concurrency side — they both offer concurrency, so there is a lot of overlap. It would be worth doing some work on it, but I haven't done it, sorry to say. [Audience question about teaching] Yep. I've not been using Rust as a teaching aid. I've got a couple of PhD students who are using it, one in particular who's doing the audio project, and he's taken to it quite well, although in the end he felt more comfortable learning Haskell first. He hadn't had any Haskell experience, but so many of those ideas translate — and he's a very confident C and C++ programmer. So he found it easier to learn some of those ideas in Haskell than to learn them in Rust, which I thought was an interesting insight, and I think that probably is the case. I think if you're coming from Haskell and you're familiar with C and C++, you'll pick it up really quickly.
If not, it can be quite tricky, and my guess is my undergraduate students would find it quite hard. It's the borrowing — the pointer model — that is pretty tricky, because all of a sudden you can't assign to a variable in an if where you could before, because of the borrowing semantics. So the C would compile and then break at runtime, whereas Rust makes it a compile-time error. [Audience comment] Right, yeah. Yes, exactly — which is very much the Haskell way of doing things as well, right? They say that once you've got a Haskell program compiling, it's more likely to actually work; I mean, it's not strictly true, but there's a lot of truth in it. But that is problematic if you're not familiar with that kind of way of working, I think. [Audience question about unsafe and performance] Well, that's an interesting question. It really depends. Obviously, you could just drop into an unsafe block — quite a lot of people do that for the very low-level stuff and just say it's trusted. There's been some work — yeah, up here, look, these guys at the top — they've got a paper that recently came out about a kernel in Rust, where they proposed a new type, I think it's called TakeCell or something like that, which works around this. It's purely implemented in software, but it's a trusted thing, and what I mean by that is that it breaks mutual exclusion, for example. That's the only way they managed to get efficiency at the kernel level. So you could argue that that points to Rust's type system being broken — too strict. I think it's up for debate how you want to look at it; at some point, you have to trust something. But what we've done so far is just drop to unsafe for that sort of thing and trust it. Okay.
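A small sketch of the borrowing rules that trip up newcomers, plus the kind of interior-mutability escape hatch the question alludes to. Here I use the standard library's `Cell` as an illustration; the kernel work mentioned above uses its own similar abstraction, so this is an analogy rather than that paper's actual type:

```rust
use std::cell::Cell;

fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // immutable borrow of `v`
    // v.push(4);      // ERROR if uncommented: cannot borrow `v` as
                       // mutable while `first` is still in use —
                       // caught at compile time, not at runtime.
    assert_eq!(*first, 1);
    v.push(4); // fine here: the borrow above has already ended

    // Interior mutability: a Cell lets you mutate through shared
    // references, trading some static checking for flexibility.
    let counter = Cell::new(0);
    let a = &counter;
    let b = &counter; // two live shared references — allowed
    a.set(a.get() + 1);
    b.set(b.get() + 1);
    assert_eq!(counter.get(), 2);
    println!("ok");
}
```

The point is that the "too strict" cases don't force you all the way to `unsafe`: types like `Cell` move the checking from compile time into a controlled runtime or by-convention discipline.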