I know exactly where it is, it's like a five minute drive. Alright, so the last couple of days have been really interesting. A lot of people keep calling me the new guy in the community, and it's interesting because I feel like I've been around a long time, and I've certainly written code for a long time, but yeah, I guess I am new to the Elixir community, and I want to see what I can do to provide something to it. Before I talk about why I think Elixir is really well set up for the next ten years, I want to give you a little bit of context on where I'm coming from. I was in the proprietary software world for a long time. I did desktop publishing, then I worked on web servers, and in 2000 I got to join the Xbox team, where I did three things. I got to start and run engineering for Xbox Live, and I did that for four years. Then I got to start and run all of XNA, and I did that for five years; I don't know how many people know what XNA is, but it was the developer program for Xbox. And then for the last five years I got to run software engineering for the Xbox One console: not Live, not firmware, but everything from the vectors into the firmware, how the kernel was going to work, up to delivering the UI. After that I was crazy burned out. A lot of people were burned out; we could talk about death marches, what they look like, and why it's good to avoid them. I needed to take some time off, and I did. I was in Italy with the family, sitting on the beach, and I felt the urge to build something again. It had been a while; it had stopped being fun. And when I did, I said, okay, there's this exercise I've done in the past: before you start building something, you have to make sure you know what problems you're trying to solve, then you have to look at the world around you, what trends are happening, and what observations you can make.
And it's important: these are not trends that you can affect. You have to identify the ones that are going to happen no matter what you do, because those are the things that you can either hook onto and take a ride, or they're going to run you over. Then you can make decisions about what you want to build, and the end goal is to build something that's going to be relevant when it ships instead of when you started. I've seen lots of products where, when the team started building, it was a really good idea; they looked at who the competitors were and built something that would definitely be a good competitor, and two years later when it shipped, it was two years out of date. That's not what I want to do. And this applies just as well when you're picking a new language and a new stack for the things you're going to build. If I'm going to invest my time learning a new language and a new set of tools, I want to know that I'm making a good investment and that it's going to pay off over at least a 10-year time frame. So that's what I'm hoping to provide: walk you through the steps that I went through, give you my observations, and help you know that you're making a good investment in this community and that it's going to pay off over a period of time. And we're going to do it in a couple of steps. First we'll go through some of the big problems I think we need to solve, from a really big point of view. Then we'll go through some of the observations I have, and then we'll talk about some of the bets that are being made in the Elixir community and some of the open issues and places we can all contribute. Okay. In servers, by the way, there are a lot of problems, and I've only listed the ones that I felt like talking about.
So if I've missed a problem that's really important to you, oh well, I didn't feel like it. And that's actually the beauty of doing this kind of talk: if I'm wrong on any of my predictions, too bad, we'll find out over 10 years. It's hard to make predictions and be right on all of them. So, big problems that I've seen in servers. These are things that we saw when building Xbox Live, things that I've seen other people facing; I've dealt with the Azure team, and I've been living my life in AWS lately, and this is how they think. First, performance per cost. This drives data center decisions. What I mean is: how much money are you spending, in aggregate, to run your servers? Anything you can do that cuts a really successful product from, say, 1,000 servers down to 800 servers is good. You save a ton of money. Security is very, very important, and I don't think people take it seriously enough; we all take it very seriously, and it's still not serious enough. I'll talk about that. Data processing in the data centers, on the server side, is more advanced than in other areas, and yet, as we saw from José's talk this morning, there's a lot we can learn to make it approachable. Machine learning is coming, and I haven't heard anything about machine learning at this conference, so I'm gonna call that an opportunity space, because it's super important for the next 10 years and beyond. Communications: yeah, we all know how communications and servers work, unless things change. And robustness: not a word I hear enough. I've heard it at this conference, but usually I only hear it from my security geek friends. Now, devices. Wait a sec. Yeah, it's pretty much the same list, right? And this is a good thing. They have different flavors for each one of them, but it's basically the same problem set.
And this is part of what makes making good choices in your language and your tool set really interesting, because you can leverage them in more than one space. All right, now a quick example. Game consoles: I know this space really well, and I just wanna talk about problems you have to solve if you want to build a games console, and there are way more than four, but I just wanna give an example of how some of these things manifest. So, performance per cost is why game consoles exist. You could go buy a PC today that has four times the graphics horsepower of any console on the market. And this is why it drives me nuts when people wanna talk to me about which game console is more powerful. I don't care, right? Go buy a PC. The reason consoles exist is because, for the dollar the consumer is spending on that machine, you're getting more dedicated power than you would spending your money on a PC. And this is because there's nothing else running on it. The game is the only app. It's completely dedicated, focused on one task. Security is super important in gaming. Everyone's trying to cheat, right? They wanna win, and if it means cheating, they'll do it. Everyone wants to steal the games, and they go to tremendous lengths to be able to do it. We're talking FIBs, we're talking oscilloscopes. A FIB is a focused ion beam: you decap the chip and you actually change the transistors on it. Serious stuff. Scalability is hard, from both sides. From the data center point of view, you've got 100 million game consoles all trying to talk at the same time. From the game point of view it's hard because you've got multiple cores, and game developers don't know what to do with that. And then robustness. You have no idea how hard it is to keep games up and running. I've had so many conversations about this. When you're playing a game, do you ever wonder why a level just felt like it wasn't as long as it should have been?
It's because in testing they found that if they let it run three minutes longer, it would crash. Right? So robustness is an issue in that space. Okay, let's go through some trends. Power is what drives cost. Now, I said the first problem was performance per cost, but your cost is driven by your power consumption. In the data center, you have to think: every watt of electricity that gets used by your app results in one watt of heat that has to be air-conditioned out of that room and put back into the environment. Right? So you're paying for the power, plus you're paying for the heat removal. This is one of the biggest things that drives cost in a data center. And as you're building your applications, you think, all right, if I'm gonna use tools, if I'm gonna pick my framework, I wanna pick one that results in greater power efficiency, because, especially if I'm successful, this is what's going to determine how much money I'm spending on running my app. You will probably worry less about it in the development phase, but it becomes a big deal later. By the way, it's a big enough deal that the reason Facebook has a data center on the north side of Norway is because electricity is cheap, and to cool the servers, you just open the windows. I'm not kidding, that's why they chose to put it there. We'll talk about communications and latency in a second, and why that allowed them to put it there, but that's how big a deal cooling and the cost of cooling is. Now, devices. By the way, remember, in the previous slide I said "devices" in quotes; it was IoT. I hate the phrase IoT. It doesn't mean anything anymore. I would rather talk about dedicated devices, which is effectively what a game console is: just a really powerful one. It feels like I've been building devices for a long time. In devices, the amount of power your code uses impacts the battery life you have available to you, which directly impacts how much the device costs.
It is in your best interest to choose code paths and ways of doing things that reduce the amount of power your chips are using. Every sensor you put on a device uses power. Every radio you put on a device uses power, and on some of the devices I've been playing with using Nerves, as soon as I look at putting the radio on, it's like, oh my God, it's using two watts while it's talking, and I have to spend all this time turning the radio off and only turning it on when I absolutely need it. These things end up driving your decisions. The hardware you choose, in both the data center and on end devices, is chosen to fit the power and problem envelopes you have available. It has to solve the problem you're working on, and you're gonna choose it to be the minimum power usage you can. And on a Nerves-type device, this means you don't have lots of processing power. You will choose the cheapest, smallest, lowest-power chip that still runs your app. So it's a different way of thinking than what we're used to. In security, please just assume every single one of your servers is hacked. Assume every one of your databases is hacked. If there's personal information in your databases, please encrypt it. I just saw that someone put a hex package up called Cloak, by Daniel Berkompas, I think I said his name right, so I'm gonna look at that one as soon as I get back from the conference. It's about encrypting rows in your database and how to maintain that. I might have opened that up in my web browser. Oh, I turned off my network. Okay, so: Cloak, look it up on Hex. I think that kind of thing is interesting. In devices, this is even more important, because not only are people trying to attack the devices you put in the market, they're also trying to intercept and attack the communication stream between your devices and the data center. And these are high-value targets with no physical security. An example of why this is important:
Let's say you've been given the job of putting a controller on a water pump, and it's sending signals up to the control center: what's the current pressure in the system, what's the current usage. Someone intercepting that data and faking the data coming up to the data center could say, yeah, I'm gonna give you bad data and cause you to change the values going to the pump. Someone could try to turn the pumps off. Someone could try to overrun the pumps. High-value targets. And it doesn't have to be a pump, or what's called a SCADA device in other parts of the world. Even a small device, something sitting in the home: they have no physical security, so anyone can get at them with a soldering iron, and they're hard for you to get to. As soon as you have to roll a truck to get to the device to see what happened, it's a high-value target and it's expensive, because it costs money to roll the truck. As we talk about these scenarios, keep that in your head, and keep that in how you think about security. And of course there's no higher-value target than satellites, and we'll talk about that in a bit. So this is an arms race, it will not go away, and we're going to have to stay on it. For the next 10 years of Elixir, I know this will be an ongoing subject. Now, there are some things coming that we can use as tools in this battle. But just remember: it's not paranoia if they are out to get you. In the next 10 years, we will become friends with FPGAs. Those are field-programmable gate arrays, and I remember first thinking about these things in the 90s, and they make my head hurt, because effectively what you're doing is writing a bunch of code and compiling it down into a map of transistors, which you then burn down into a field-programmable chip, and you effectively create one big custom instruction. It's like the exact opposite of a RISC chip: an extremely complex chip with one instruction, but it's an instruction that you make.
Good news: you get very low power usage for the performance. So if you want to do custom DSP functions, you want to do Fourier transforms, whatever, that's interesting to stick down in the FPGA. More good news: it's not in addressable space. So if you have security code, if you've got keys on Nerves devices that you don't want to hold in RAM because you're worried about someone stealing them, you stick the private key in the FPGA, where it can't be addressed, and then you just send requests down to use the key. There's this thing called a physically unclonable function, which my FPGA friends were explaining to me when, boom, my head exploded. By the way, that's been my journey in Elixir: as you learn matching and all that, it's just repeated head explosions. A physically unclonable function takes advantage of the fact that the masks that are used to build the chip are never perfectly aligned. Individual chips actually have slight differences in how the masks were put together, so there are slightly different voltage variations on each individual transistor, and if you're clever you can build a cryptographic function that uses that. So even though you think you know what the key is, and you know what the function does, if you lift those transistors up and put them down in another chip, you won't get the same values. A physically unclonable function. And these things are not as expensive as you think. That's a Zynq-7000 series chip in the picture, made by Xilinx. It has a dual-core ARM Cortex-A9 processor on it, surrounded by an FPGA fabric. This is, I think, the kind of chip that will become more and more important in devices that are out in the world. And then one more. On servers: an article from The Register. I've been hearing about these things for a while, but it's very exciting; Intel's going to ship later this year. Now, rumors fly, you know, so don't quote me on dates or anything.
That's a Xeon Broadwell chip with an FPGA fabric on the same die, and notice the size of the FPGA fabric compared to the actual Xeon chip. There's a lot there. So they get it too. In servers, I can start moving some of my really tight code, some of my security code, down into FPGA fabric, while my business logic sits up in a space that hopefully has robustness and that kind of stuff. And these are things we are going to have to learn. Okay, switching topics. We've covered power drives cost, security paranoia, FPGAs; now, data processing. This one feels like it makes sense after we saw José's talk this morning. On the server side, data processing is already a big deal. There's already MapReduce; we're several years into this journey, and there are people who are really specialized and know how to do data processing. When I think about Elixir going forward, what I saw this morning is that it's going to make data processing much more attainable and much more doable by the rest of us. I don't want to become a big data expert, but I do want to be able to do it. So yay, that made me happy. Just as important: if you're building a device, you are also doing data processing, constantly. Think sensors, think cameras, think audio. You are constantly collecting data from these devices, and it's effectively the same problem set. You've got to MapReduce those. You don't want to wake the radio up to constantly send a stream of data up to the data center, because then your battery is gone. You want to wake the radio up every 10 minutes, send up a pre-aggregated blob of data that you've used data processing techniques on, send that one packet up, and then turn the radio off. Okay, yeah: all the world's a GenStage, and we are merely players. I felt like quoting Shakespeare on that one, because this is a beautiful thing. This is part of why I like the language: I see problems that I can solve on both ends of the spectrum, and they're both very relevant.
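That pre-aggregation idea can be sketched in a few lines of Elixir. This is only an illustration: `SensorRollup` and its summary fields are names I made up, and a real Nerves device would run this inside a process and only wake the radio once the window closes.

```elixir
# Sketch: collapse a window of raw sensor samples into one compact summary,
# so the radio sends a single small packet instead of a continuous stream.
defmodule SensorRollup do
  @doc "Reduce a window of raw numeric samples to one summary map."
  def summarize(samples) when samples != [] do
    count = length(samples)

    %{
      count: count,
      min: Enum.min(samples),
      max: Enum.max(samples),
      mean: Enum.sum(samples) / count
    }
  end
end

# Ten minutes of readings become one tiny blob to transmit:
summary = SensorRollup.summarize([21.0, 22.5, 19.8, 23.1])
IO.inspect(summary)
```

On a real device, you would accumulate samples in a process's state and call something like this on a timer before powering the radio up.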
I don't know enough about machine learning, and I haven't heard about it in the past couple of days. I know that it is important already and it's going to be important over the next 10 years, and given all this, I feel like there are opportunities in this space; maybe the only real audience I intended for this slide was José. Think about it. In my experience, coming from having helped build Kinect and things like that: those are all machine learning systems. You take gobs of data and you do gobs of data processing on it, and what you spit out at the end is a very tight little ball of parameters and configuration, and that can be run on a low-powered machine to make real-time decisions. This is what embedded devices are. They are machines that make decisions based on real-time data coming in. You need to do the data processing, you need to use machine learning to make good decisions, and that's all tied to servers sitting up in the cloud that are assisting in both those operations. I feel like there's some good stuff here in the frameworks we're talking about to help in this space. Communication is interesting. I remember when we started Xbox Live, and we started it in like August 2000, and we went through the journey of realizing that we had absolutely no clue what we were doing, right? I'd never done a data center thing before, and we were making it all up. We were paying $350 per peak megabit per second coming out of the data center, and that's really expensive, right? Today we're probably talking fractions of a penny. So at first glance it feels like bandwidth out of your data center is a solved problem, but latency isn't. We are all going to learn about latency, and it will become more important in the decisions that we make about where we deploy and how we deploy.
And my favorite example right now is that Facebook data center in Norway. The reason they could do it, the reason they could stick it up in Norway, is because for the problem they were solving they decided that some latency was okay. They could eat the latency for the problem they were addressing, and that allowed them to put their data center in a cold part of the world and cut their power bill in half. Now remember OnLive: they were gonna do games streamed out of the data center. Opposite problem, very latency-sensitive. That requires you to put servers near every city where people are playing, or at least on major trunk lines, and you can't put them in cold places, right? Latency will become a big issue in how you design the overall architecture of your application. For devices, it's a different problem: you need to be able to communicate at all. Over the next 10 years, I think we can pretty confidently say that there will be global wireless accessibility. So it changes the way you think about your markets. How do you think about your app differently when you know that people out in Africa have good connectivity to your servers? Maybe terrible latency, but good connectivity. Does that change the app you're building? Does it change the way you think about your architecture? What radios are you gonna put in your devices? These are all things that are going to have an effect. And then, I thoroughly believe that in this timeframe, satellites will make us rethink much of our architectures. This has been a hard week in terms of satellites. But, when was it, like a year and a half ago? I was really lucky and got to see a speech that Elon Musk gave in Seattle talking about his plans for SpaceX. And I think he said something like their goal by 2020 was to put roughly 4,000, probably with some artificial accuracy here, 4,025 satellites in low Earth orbit. Low Earth orbit is about seven milliseconds away from the ground.
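A quick back-of-the-envelope check on that seven-millisecond figure. The numbers here are my own assumptions for illustration: an orbital altitude of around 1,100 kilometers, which is one of the altitudes discussed for that constellation, and the speed of light in vacuum rounded to 300,000 km/s.

```elixir
# Rough latency arithmetic; all values are approximations for illustration.
altitude_km = 1_100       # assumed low-Earth-orbit altitude
c_vacuum_km_s = 300_000   # speed of light in vacuum, rounded

# Round trip straight up to the satellite and back down:
round_trip_ms = 2 * altitude_km / c_vacuum_km_s * 1000
IO.puts("vacuum round trip: ~#{Float.round(round_trip_ms, 1)} ms")

# The same path length through fiber at roughly 0.6c, for comparison:
fiber_ms = 2 * altitude_km / (0.6 * c_vacuum_km_s) * 1000
IO.puts("fiber equivalent:  ~#{Float.round(fiber_ms, 1)} ms")
```

The vacuum round trip comes out a little over 7 ms, which lines up with the figure in the talk, and the same distance through glass takes roughly two-thirds longer.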
Once you're in low Earth orbit, with basically a mesh network of zipping-by satellites, you can go hop, hop, and back down to Africa or wherever, Europe, Asia. And the speed of light in space is significantly faster than the speed of light in glass. So you're racing a signal going through fiber optics, which is going at what, 0.6 of the speed of light, versus the actual speed of light in orbit. And suddenly you think: whoa, maybe latency isn't as bad as I thought it would be. It makes you ask questions like, where should I put my CDN? Where should I put my processing? Can I put some of that in orbit? That would be interesting. One of the things we're seeing is that the cost of getting something into orbit is dropping dramatically. And that is something we will all need to think about. As we talk about robustness, as we talk about OTP, as we talk about recovery, there is no more interesting place to put that stuff than orbit, because it's hard to drive a truck there to fix it. Robustness. I don't know, I've been doing this a long time and I rarely hear the word. Software engineers have not been trained to think about robustness. Okay, this is different in the Erlang world; they did it early. But the rest of us, we've been kind of trained to skate by on ever-faster chips: reboot your servers, don't worry about it, if it crashes it'll come back. And that isn't really good enough anymore. As you're thinking about your service and the overall cost structure you're dealing with, if you can prevent these things from going down in the first place, if you can use these techniques to lower your overall load, you can cut your costs, and that's good. But it's much more obviously important in devices. Say I've got a device that's reading a methane gas sensor; sorry, I'm picking things I'm actually working on. I've got a device that's reading a methane sensor, and it's reading an oxygen sensor, and it's reading a carbon monoxide sensor.
And one of those sensors fails. The rest of them should keep working. Right? If you're dealing with safety equipment, if you're dealing with anything like that, please: just because you've stopped reading one of those sensors, don't fail to tell me when one of the others says I'm going into a dangerous situation. We'll talk about cars at the end. IoT is in serious danger of becoming the Internet of Things That Don't Work. And if there's one thing you take away from a conference like this, it's that the combination of the BEAM, OTP, and good languages on top of them is a beautiful framework for these kinds of devices. Because you can start thinking about how you partition your application so that when one of those sensors fails, when that piece of code over there fails, at least my ABS brakes will continue to work. And I bring that up specifically because you can go look up lawsuits where ABS brakes have stopped working because of memory leaks. So, after thinking about all this, I started scanning around languages and trying to decide what I wanted to learn next. I looked at Scala, I looked at Clojure, I looked at a bunch of different things, and for starters, I decided I don't like the JVM. And for that matter, I don't like the CLR anymore either. The funny thing is, when I think back 15 years, they kind of made sense. This is a point I think I was gonna make later, but I'm gonna make it now. The way Moore's law was going through the 90s and the 2000s was ever bigger, ever faster chips. Clock speed is what you got. You got single processors that got faster and faster and faster, and that allowed many sins. Both the JVM and the CLR were trying to answer a need in the developer community to cut the costs of development. It was getting too expensive to pay all these engineers.
So they said, hey, let's come up with languages that are gonna be less expensive to hire people for, where we can put guards in place to keep them going. And they didn't really need to worry about complexity within those systems, because they could rely on chips getting faster. Now, go back further, to the mid-80s, when Erlang was being developed. And this is my interpretation of the story. They were trying to build phone systems on machines that were just barely adequate to run them, so they didn't have that luxury. If a business is running their phones on your system, you can't have those phones go down. The phones don't go down, and you needed to be able to scale out from the beginning. So they built a system that was designed for failure, for recovery, and for distribution from the beginning. And that looks like today's world. The power needs of batteries on phones and those kinds of devices mean that they're picking chips that may have more transistors, so kind of following Moore's law in that way, but the focus on low power means no, the cores aren't faster, there are just more of them. In fact, you make individual cores lower power, you lower the clock speeds, you put more of them on, and you get more efficiency overall. So the patterns and the practices that we learned in the 90s and the 2000s are really the wrong things now. There's a generation of programmers who need to unlearn what they learned, because they're building code the wrong way. You need to have something that is inherently multi-core, inherently distributable. And when I found the BEAM, I was just so happy. The BEAM is your OS. If there's one point here that I really internalized at some point, it's this: as you're building your applications, as you're building your Elixir apps, we think, oh, am I running on Linux? Am I running on a Mac? No, you're running on the BEAM. The BEAM is your operating system.
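To make "inherently multi-core" concrete, here is a minimal sketch: spawn a hundred thousand BEAM processes, each doing a tiny piece of work, and let the schedulers spread them across every core. No threads or locks appear in the user code; this is ordinary Elixir, not any particular library.

```elixir
# Spawn 100,000 lightweight processes; each computes a square and reports
# back to the parent. The BEAM schedules them across all cores for us.
parent = self()

for i <- 1..100_000 do
  spawn(fn -> send(parent, {:done, i * i}) end)
end

# Collect all 100,000 replies and sum them.
total =
  Enum.reduce(1..100_000, 0, fn _, acc ->
    receive do
      {:done, n} -> acc + n
    end
  end)

IO.puts("sum of squares: #{total}")
```

A hundred thousand OS threads would be absurd; a hundred thousand BEAM processes is routine, which is exactly the design-for-distribution point above.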
And I think the sooner we all internalize this, the better. It changes the way you think about your relationship to the hardware. This is kind of a weird talk for an Elixir conference; this is my only slide that really talks about Elixir, and it's pretty much "yay". So, when I came across Elixir, I'd heard of Erlang before. When we were building Xbox Live, I think I'd heard of it, and now, in hindsight, I really wish we'd built the presence server in Erlang. I think we would have gotten a lot more scalability, and we would have gained a lot of benefits. But, I'm sorry, the syntax really isn't a turn-on. So here's Elixir: a syntax that does work for me. And there was a series of little head explosions as I finally understood what function matching was, and then another one when I figured out what assignment matching was, and all these things. And then this morning: GenStage is coming, Flow is coming. These are great computer science problems that are being reduced to consumable pieces. So it's a very exciting language at a very exciting time, built on a VM which solves the problems that we are now all facing. The bottom point here, the separation between language and frameworks, is an interesting one. The name of the conference is the Elixir and Phoenix Conference, and both Chris and José have expressed some concern over what you call this thing. Well, part of this is the power of branding, right? And I'm not suggesting any names change. But if you think about Phoenix as just server.web and server.channel, then your worries about this kind of go away. And even in Phoenix 1.3, when you think about taking the models away and moving them up into a separate umbrella application, it's kind of heading that way, right? It's an Elixir application that has some great libraries that help you build an application. When I think about Nerves, Nerves is a collection of several things.
There's a bunch of libraries, Elixir libraries, that I can match and mix in with other libraries to build my application; I'm free to think about my app. And then there's a tool chain that brings in an operating system, compiles it down, and spits out a boot image, and then another tool, which Garth is working on, to help you push updates to machines that are in production. That tool chain is really important. So you get to think about your app in terms of the BEAM as your operating system, and there are tools now that let you target specific instances of hardware. Okay, so the rest of this is a set of open issues that I've observed. Some of these have people working on them and won't be open issues in a year; some are longer-term projects. Let's start with really low-level interop. I'm worried about my perf, I'm worried about my power consumption, and I'm worried about security. Those three things mean I have to pay attention to low-level interop, and this is where I get to use FPGAs and other code. There's really only two kinds of code in the world now: performance-sensitive code, and everything else. That tight loop, that thing that really burns the power, needs to be as optimal as it can be. Everything else needs to be in OTP, with recovery from failures and all the goodness you get from OTP. All right, another worry I have: is this real-time enough? The first issue I hit when I was building a Nerves application was, okay, I wanted to use this humidity sensor that had a digital signal coming out, and I had to read it with microsecond timing, but you get millisecond timing, right? Do we have tight enough granularity in the timing of these systems to make it work? It's okay if you can come out to native code and FPGAs; that's how you solve these problems. I want to keep the goodness of OTP and the goodness of the language, but there are some things I need to come out for.
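As a small illustration of "coming out" from the BEAM, here is a hedged sketch using a port. The external program is just `cat`, standing in for a real native helper; the point of a port is that the native code runs in a separate OS process, so if it crashes it cannot take the VM down with it.

```elixir
# Open a port to an external OS process. Unlike a NIF, a crash in this
# program cannot crash the VM; we'd just get a message and could restart it.
port = Port.open({:spawn, "cat"}, [:binary])

# Send bytes out; `cat` echoes them straight back to us as a message.
Port.command(port, "hello from the BEAM\n")

receive do
  {^port, {:data, data}} -> IO.write(data)
after
  1_000 -> IO.puts("no reply")
end

Port.close(port)
```

NIFs are the faster option for tight loops, but they run inside the VM's address space, so a bug there can bring the whole node down; that trade-off is part of why this area deserves more documentation and discussion.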
So yes, you can write NIFs, you can write ports, you can write these things. I haven't heard enough discussion about them; they're a little daunting to try and figure out. Maybe this is a documentation problem, but this is a space where I think we can do more as a community to help people learn it. There's not been a lot of discussion in the last couple of days about UI, right? Think about an app where I'm gonna put my finger on the screen and start dragging it around. As I was writing an iOS app the other day, I'm like, oh my God, what thread am I on? And it just drove me crazy, because I want there to be a GenServer somewhere that spins up a process that tracks my finger movement, and when I'm done, it sends a message to another process that decides what to do with it, which sends a message to another process that owns the screen, right? So what's the right model for how we handle user input and how we handle drawing? What's the right overall model? That feels like open space. And we haven't even gotten to what it means that there's a GPU on board; on a Raspberry Pi and a BeagleBone, there are GPUs on board. Distribution is interesting. There is more discussion of distribution now, and a year from now some of these things won't be on the list, because I'm really hoping that a year from now Nerves will be 1.0. The work happening on the Nerves boot image blows me away. It just blows me away, because I could build a device reading a bunch of sensors and making decisions on them, and my boot image was 15 megabytes. It's teeny tiny. That reduces the amount of money I need to spend on hardware to make an actual product. It booted into my code in 200 milliseconds, which blew me away. Again, that reduces the amount of money I need to spend on hardware to build an interesting product. Now, there are all kinds of things that need to happen to finish this up. If I've got a million devices deployed, I know people are trying to attack them, and I wanna send updates out.
You've got to have signed, encrypted updates that you can fall back between, and Garth is spending some time on that. We're going to have lots of reviews, I hope, and that is a very interesting piece of the puzzle. What does it mean if you know some of these devices have been hacked? How do you have revocation lists? How do you turn a device off? How do you deal with failures, and know which device failed, and that kind of thing? Who signs the updates? How do you set up a signing authority? Big open questions.

And something I'm hoping for: remember the whole thing about power, and how cutting the number of machines directly cuts how much money you spend in data centers? I would personally like to see Nerves spitting out a boot image that I can just load onto AWS, and that's my app, and it boots super fast, and it doesn't use as much memory as everything else because all that other stuff isn't even in the image. That's my life.

Okay, so I'll end on an example, because it covers everything. Cars are going through fundamental change, the biggest change in the automotive industry since automobiles were invented. We're replacing the most unreliable part of the car, and that's the driver. So think about a car now. What's a car? A car is basically a phone on wheels, right? It's got radios. It's got lots of subsystems. Some of those subsystems are life-critical: you don't want your brakes to go out, you don't want your steering to go out. Some of these systems are maybe not life-critical, but they would really annoy the occupants: I want the radio to work. When I step into a car and I'm the only person in it, even if I'm just renting the car, can I play my music? There are open spaces here, tons to think about, but one of these systems failing cannot cause the others to fail. And that smells like OTP to me.
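The core of a signed-update scheme can be sketched in a few lines using Ed25519 via Erlang's `:crypto` module (OTP 23 or later). This is only the signature-verification piece; a real system layers key management, encryption, revocation, and A/B fallback on top of it, and the module name here is made up.

```elixir
defmodule FirmwareSig do
  # Sign a firmware image with the private key held by the build/signing
  # authority. Ed25519 keys in :crypto are raw binaries.
  def sign(image, priv),
    do: :crypto.sign(:eddsa, :none, image, [priv, :ed25519])

  # The device ships with only the public key; an update is accepted
  # only if its signature checks out against that key.
  def valid?(image, sig, pub),
    do: :crypto.verify(:eddsa, :none, image, sig, [pub, :ed25519])
end
```

The point is the asymmetry: the device never holds the signing key, so compromising a device doesn't let an attacker forge updates for the other million.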
Right, a really, really interesting space. You think about the needs of an automobile and you start to think, whoa, this is really interesting tech. Security is super important. I think we really need to spend a lot of time thinking about the security of these applications. If I'm driving down the road, first, I don't want people tracking me to know where I am, but more than that, I don't want an attacker turning my brakes off. I don't want attackers turning the acceleration on. That was really freaky.

And here's a point I probably should have made earlier but I don't think I did: when I talk about attackers, don't think of an individual in their room trying to make a point. We're talking about nation states. We're talking about organized crime. We're talking about people with real resources, and when I say nation states, I mean every single big country has people who do this for a living. Every single one, right? Let's not pretend otherwise. So there are real security concerns, and the more you move into devices where people's lives are on the line, or where there's data that could put lives on the line, it doesn't have to be direct, it can be indirect. Fooling a gas company into raising the pressure on the lines, for example, by manipulating the data going across, could be dangerous. Lowering the pressure could also be dangerous. We need to be very, very conscious of security in everything that we do.

And then on cars: they've already got cell modems, but still, when I cross into Canada, my car kind of goes dark for a little while until it recovers and decides it's okay to use Rogers. Very soon now, I expect new cars will start shipping with two-way satellite communications on board, which is the only thing that really makes sense.
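The "one subsystem failing cannot take down the others" property is exactly what an OTP supervision tree gives you. Here's a minimal sketch with hypothetical module names: each subsystem runs as its own supervised process, and a `:one_for_one` strategy means a crashing radio restarts only the radio, never the brakes.

```elixir
defmodule Car.Subsystem do
  use GenServer

  # Each subsystem is a registered process so we can find it by name.
  def start_link(name), do: GenServer.start_link(__MODULE__, name, name: name)

  @impl true
  def init(name), do: {:ok, name}
end

defmodule Car.Supervisor do
  use Supervisor

  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    children = [
      %{id: :brakes, start: {Car.Subsystem, :start_link, [:brakes]}},
      %{id: :steering, start: {Car.Subsystem, :start_link, [:steering]}},
      %{id: :radio, start: {Car.Subsystem, :start_link, [:radio]}}
    ]

    # :one_for_one -- a failure in one child restarts only that child.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```

In a real vehicle the life-critical pieces would sit behind far more isolation than one BEAM node, but the structural idea is the same: failure domains are explicit, and recovery is the supervisor's job rather than every subsystem's.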
So as I'm driving around, I'm getting really good connectivity. I mean, the SpaceX guys are talking about a gigabit up and down at every single point on the planet, including the middle of the Pacific. When you've got those kinds of capabilities, you change how you think about your vehicle, which means you change how you think about how you develop software for it, which means you have to reevaluate all of your assumptions and look for systems that are well placed to last over this period of time. And, just looking at the camera for a sec here: I'm ready to talk now, Elon.

Okay, so it's an exciting time. We've got an exciting set of languages, an exciting set of frameworks, and I'm really glad to be here, because the next ten years to me look full of opportunity, and we are in the right place at the right time with what I think is going to be one of the most important languages and frameworks for the next decade or beyond. And that's it. That's me. I've got a GitHub repository; you can find the presentation there. I'll probably stick it up on speaker notes or whatever it is. Okay, thank you.