Hello everybody. Welcome to the Exxon Stage for the first talk on day three. We are really happy to have Alistair and Aura here. They are from the Fenkoko group, which is an interdisciplinary group whose members research a wide range of topics, working on their own on one side, but also getting together to discuss and debunk ideas and help each other with their projects. The Fenkoko group was founded in 2013 by Aura, if I remember correctly. They research a wide range of interesting and relevant topics, and they are so far non-funded and completely autonomous, which brings us to the subject of our talk, because these two are going to talk about self-driving cars. I'm really interested in what you have to tell us, and the stage is yours. Thank you. I just needed to unmute myself. Thank you so much for this kind introduction. And before we get anything wrong out there: it was a bit of a misunderstanding, Alistair is not part of Fenkoko. Maybe I can convince him to join Fenkoko one day. But he's taking part, and that's fine. So, to go in medias res: we're going to talk about Big Tech's $100 billion delusions with self-driving cars, and we are talking about it as a delusion. I'm hoping to create an interesting talk touching all kinds of topics, but we're going to end up with the question of whether the car itself shouldn't be abolished, and of course what interesting mobility concepts would follow if we tried to design an encompassing, futuristic mobility that's good for everybody. Thank you. Let's start with Elon Musk. I'm sure you all know him. Here he is, looking extremely serious; he's a very serious guy, as you know. This was him in 2019: "I think we'll be feature-complete on self-driving this year, meaning the car will be able to find you in a parking lot, pick you up, take you all the way to your destination without any intervention." And just to make sure you know how serious he is, he says: "I am certain of that."
That is not a question mark. So let's see how they're getting on. They actually released it a year later, a bit later than they planned, and it's now in beta. Here we go. Okay. So I'm going to go out. Jesus. Oh my God. Yeah. That was a good example. A good example of why this is still beta, and of how important it is to keep control at all times. It just steered directly into the back of this parked car and it wasn't going to brake. It was still misreading those markings on the road. Oh my God. This is why we don't have people with us, normally. Okay. So there might be a few problems there, and that is really what our talk is about. Over the last few years, hundreds of billions of dollars have been spent in this field. We got that figure from a survey in 2017 which said 80 billion, and we know that quite a few dozen billion more have been spent in the last few years, where it really peaked. That includes startups, it includes the big tech companies, as you know, and the automotive manufacturers, plus a whole network of other suppliers and consultants. We call this, for the sake of convenience, the technology-mobility complex, and they're all convinced, and trying to convince us, that self-driving cars are just around the corner. So how are they getting along? Well, we've seen Tesla, and certainly they have something on the market. It is in beta, it costs $7,000 or somewhere around that, and there have been three fatal crashes so far. It's very limited compared to what they said it would be. Then you've got Audi, for example, who also tried to bring something to market. This is the A8, and it has Traffic Jam Pilot. They convinced regulators in Germany that it was perfectly safe to read your emails while driving with Traffic Jam Pilot on. Unfortunately, they didn't quite convince any regulator in any other country, and as a result it was withdrawn from the market this year, basically because if there were an accident it would effectively be Audi's fault.
Then you have, of course, the big tech companies. Uber has been very prominent in this field, looking for mobility as a service, a concept we will come back to. They started in 2015 and have invested over a billion dollars. But unfortunately, Uber have had some issues too, and we'll talk a bit more about them later on: there was a fatal crash in 2018, and in November this year, just a few weeks ago, they announced they were giving up altogether. They sold, or we should rather say, paid to give away, their self-driving project to another company. Amazon got into the field with the startup Zoox very recently, a little late to the game, but Zoox has gone for a completely autonomous, new-build design. This was unveiled a few weeks ago. However, they were a little coy about when it will actually be working on the streets. All they would say is that it's definitely not going to be 2021. Our view is it's probably going to be quite a bit later than that. And then you have the old automakers involved as well. Most prominent amongst those is General Motors, who bought up Cruise. They've been investing in this since 2013. They claimed they'd have a commercial robotaxi service available by the end of 2019. Well, that hasn't quite materialized. If you go to San Francisco, you certainly see the Cruise cars all around. They recently announced, and they're very happy about this, that they have driverless vehicles for the first time on five streets in San Francisco. They will only be operating in low traffic and at night, and there will be one person in the car with an emergency stop button. So still a little far away from a commercial service. And then of course you have the big player, which is Waymo, owned by Google. They've been around since 2009 with a number of different iterations. You can see their first version here, the Firefly.
But someone must have been very rude about that to somebody at Google, because these are the latest versions you've got here, which are obviously really, really mean Jaguars. Anyway, they've been going in Phoenix for a long time, since 2017. They announced in 2018 that they'd have a fully driverless taxi service. Then they announced it again in 2019; it was taking a little bit longer. They finally announced it again in 2020, and they actually kind of do have driverless taxis in Phoenix, but it's limited to a 50-square-mile area. They're supervised remotely, so somebody can at least intervene from afar, a safety driver effectively dialing in remotely. And they also need perfect weather. So again, very limited. And that's where you always see the successes in this area: in these highly controlled environments. Phoenix is an ideal place, with perfect weather, easy streets, no hills. Then you've got other projects like this, the Ford-backed Argo. Again, a lot of investment: $2 billion from Ford, further investment from Volkswagen. And they're delivering fruit and veg in Miami, which is very nice, again in a very small area, and that requires two safety drivers. And then you get this, a genuinely successful project, Optimus Ride, which operates in a retirement community. Those are the kinds of places where self-driving cars seem to be working, not so much on busy urban streets. And you could go further than that. This is a quote from Dr. Gill Pratt, head of the Toyota Research Institute. He says: "I would challenge anyone in the automated driving field to give a rational basis for when level five will be available." We'll talk a bit about level five, but it means a fully autonomous vehicle. And here's John Krafcik, the CEO of Waymo, in a candid moment a couple of years ago, conceding: you know what, self-driving car technology is actually really, really hard.
Who could have possibly thought that? And I'm going to hand over to Aura to explain a little bit more about why that is. So first we're going to talk about why we call it a self-driving car at all, if I may. To answer that, we need to look at language and technology. In general, it's interesting to note that words gain meaning through their use. They can, if you want to put it that way, lose meaning, but they can certainly change meaning and become ambiguous through the wide acceptance of more than one meaning in society. And "autonomous" by now has more than one, to say the least. So if you look at a definition of an autonomous car, hang on, stay with me. If we look at the definition of an autonomous car, you get the obligatory one: a vehicle capable of sensing its environment and operating without human involvement. A human passenger is not required to take control of the vehicle at any time, nor is a human passenger required to be present in the vehicle at all. And an autonomous car can go anywhere a traditional car goes and do everything an experienced human driver does. Now, that is a high aim. Would you? Yeah. So we land in impractical ambiguities with this, not only because the traditional definition of autonomy is something completely different. I'd like to share with you just a very short version of what we read when we type it into the Stanford Encyclopedia of Philosophy, which is a nice source. It says: individual autonomy is an idea that is generally understood to refer to the capacity to be one's own person, to live one's life according to reasons and motives taken as one's own, and not as the product of manipulative or distorting external forces. Now, this is not a new point. This point about autonomy has been made quite a number of times, especially with regard to autonomous cars, but it is important to mention nonetheless.
And we will see how difficult and how ambiguous it gets when we go to the automation levels, which are ultimately defined via automation, sorry, ultimately defined by autonomy. And autonomy is supposed to be the more complex concept, so this seems a bit the other way around. So the Society of Automotive Engineers, oh, by the way, insurers have identified "autonomous ambiguity" as a potential reason for an increase in crashes due to confusion, so it's not only a point of philosophical interest. So the Society of Autom... yeah, that's it, thank you. The Society of Automotive Engineers, the SAE, currently defines six levels, starting at zero, which is why we end at five, obviously. These levels go from fully manual to fully autonomous, and they have been adopted by the US Department of Transportation. Now, there is a standard out there that I'm going to come to, and we'll look at the wording it uses to be sure what we're talking about. But in general, it's safe to say level one assumes that a system can assist the driver with one driving task, just one. So ACC, adaptive cruise control, fits into this category. And level two systems, such as Pilot Assist, can assist with, for example, two tasks. Level two is the highest level of automation currently available. That will lead to a discussion of strategy in development: one camp is trying to erase the driver, and the other is trying to see through the driver and learn from the driver. The first is trying to skip level three, and the second puts great emphasis on it.
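As a rough sketch, the level scheme described above can be written down as a small table; the one-line responsibility summaries below are my paraphrase for illustration, not the normative wording of the SAE standard:

```python
# A simplified sketch of the six SAE J3016 levels as paraphrased in the
# talk. The summaries are illustrative, not the standard's own text.
SAE_LEVELS = {
    0: ("No Automation", "human does everything; system may only warn"),
    1: ("Driver Assistance", "system assists with ONE task, e.g. ACC"),
    2: ("Partial Automation", "system steers AND controls speed; human must monitor"),
    3: ("Conditional Automation", "system drives; human must take over on request"),
    4: ("High Automation", "system drives and is its own fallback, in a limited domain"),
    5: ("Full Automation", "system drives everywhere a human driver could"),
}

def ddt_fallback(level: int) -> str:
    """Who performs the dynamic driving task (DDT) fallback at each level?

    Up to and including level 3 the human is expected to step in;
    only at levels 4 and 5 does the system handle its own fallback.
    """
    return "human driver" if level <= 3 else "automated system"
```

The `ddt_fallback` split is exactly the boundary the talk keeps circling: level 2, the highest level on sale today, still leaves the human as the fallback actor.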
But to understand where that lands with all the assistance systems and the wording, we need to map it somehow, which is why I looked at the Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, which is the long name for the standard SAE J3016_201806, the standard where it's all defined. And this standard gives you roughly this explanation. Me, as a philosopher, I need proper words. And I am, as I said, utterly aware of the fact that we can use words differently in different contexts for different reasons. But we should also make sure that we understand each other and the context in which we use those words, and we should be clear on at least our own intended meaning when we use them, especially when they're used by others. Sorry, I've got a little scrolling problem here. Okay, so this document refers to three primary actors in driving: the human user, the driving automation system, and, interestingly, other vehicle systems and components. And these other vehicle systems and components, or the vehicle in general terms, sorry, I'm having serious trouble scrolling here. So it boils down to processing modules and operating code that overlap between the automation system and the subsystems, even though, as primary actors, they are supposed to be distinguished, can we go back one? You know? I'm going to be through with this in a second. But just so you know, these automation levels are defined by the roles of those primary actors and how they act in traffic. So they're trying to map the automation levels onto the dynamic driving task, the DDT, and the DDT fallback, which is usually performed by the driver, especially in the systems we're talking about nowadays, but which at higher levels is supposed to be done by the system completely, which we're going to talk about a little later.
So for example, and it's necessary to see that it's about the way the system is designed, not necessarily the actual performance of a given primary actor. A driver, for example, who fails to monitor the roadway during engagement of a level one adaptive cruise control (ACC) system still has the role of the driver, even though he or she is neglecting it. That is basically the easiest example you can pick; all the others bring you into actual trouble. This one seems clear, but the others really don't. Okay, so we're talking about problems in decision-making, prediction, and responsibility. These levels apply to the driving automation, and you can see what I've just been talking about here. It's shifted a little, but I've tried to repeat the definitions up there, and you can see that it matches roughly onto the scale: you have the system, you have the human driver, and you have the other system components, which end up being described as "some driving modes". But "some driving modes" doesn't give you much information, obviously. So while we're trying to get informed about the extent of assistance systems and how responsible they are, we get that, and end up really confused. And that's a bit of a pity. And we can see this. Oh, we're there already. Some of those subsystems are, even in the definition, explicitly excluded from the taxonomy that is supposed to describe the automation. So we have automated subsystems explicitly excluded from the automation taxonomy. And to understand what that means is only possible if we look at what the heck we're talking about. So what are these ADAS, these advanced driver-assistance systems, that we have to understand language-wise? These features basically boil down to perception. Roughly, every autonomous car, and you could always argue with the wording and with the nuances, has a perception system and a decision-making system.
And then you have actuators for those decisions. The sensors differ depending on whether you use a Tesla or not, because Musk doesn't believe in lidar, for a reason I'll come back to later. But the idea is you have surround view, cross-traffic alert, park assist, emergency braking, traffic sign recognition, lane departure warning, adaptive cruise control, collision avoidance, rear collision warning, and all these kinds of things. And they amount to different functions with different extents of automation, right? Lane keeping has more automation than the, well, I'm not even going to go there, because that's what is so difficult about it. So what's interesting and necessary to understand in general is that perception is made up of computer vision and sensor fusion, and it's all about understanding the environment. Computer vision uses cameras and allows the car to identify cars, pedestrians, and roads. Sensor fusion merges data from sensors such as radar or lidar to complement the data from the cameras, or infrared when something is close to the car, all kinds of things, depending on the project we're talking about. Decision-making, on the other hand, is on the prediction and planning side. And it is not, as perception doesn't seem to be yet either, even though impressive, developed sufficiently to just roll it out, as they're claiming. That's what I'm trying to get at here, while showing that it is interesting to look at these things. Now, all of this we can come back to in the discussion; I've got to move on. Yeah, yeah. So now we're going to talk about automated, sorry, assistance, the advanced driver-assistance systems, that's what it is, and the interaction with the driver. There are two slides on this, and this goes back to what I mentioned in terms of the turn in strategy.
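To make the sensor fusion idea concrete, here is a minimal sketch of one classic fusion technique, inverse-variance weighting of independent range estimates. The sensor values and variances are invented for illustration; real pipelines use far richer filters (Kalman and friends), so treat this as a toy under stated assumptions:

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of independent estimates.

    measurements: list of (value, variance) pairs from different sensors.
    Returns the fused value and its (smaller) fused variance.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(v * w for (v, _), w in zip(measurements, weights)) / total
    return value, 1.0 / total

# A camera is noisy at range, a radar is precise: the fused distance
# estimate leans toward the radar and is more certain than either
# sensor alone. Values are made up for illustration.
camera = (42.0, 4.0)   # metres, variance
radar = (40.0, 0.25)   # metres, variance
dist, var = fuse([camera, radar])
```

This is also why Tesla's camera-only stance is contested: with fewer independent sensor modalities there is simply less to fuse when one modality is fooled.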
The first, let's say, bulk of ADAS systems was supposed to take away the need for you to take the wheel. And the second bulk, what they are now recommending, is basically making your decisions transparent so that the system can learn from you, because what drivers can do is still much better than what cars can do. And since it didn't really work out to skip level three, they're now trying to come back to it. Now, the downside of ADAS systems, which doesn't seem so problematic at first: it's all confusing, which interestingly has practical consequences. People don't know what their systems are doing, and thus they're causing accidents. But the wording is also misleading, to the point where some drivers think they can take a nap while driving, which I found very interesting. And if we look at the next slide, you can see that there are recommended escalating attention reminders for level two automation. And level two automation, again, we started from zero, we want to get to five, and we have level two right now, already ends with the car taking over from you and locking you out. So what I'm trying to say is: first they tried to erase the driver, and now they're doing everything to make the driver back up the system that they developed. And the costs that come with that might just not be that desirable, depending on what you're looking out for or have a problem with. Of course, I'm hinting at privacy, which comes later. But even if we leave out that massive, unbelievably massive topic, and I didn't even go into observation versus control in terms of the ambiguity, that turn in development shows that the hype is leading to developments, massively and greatly pushed developments, that might just not be that well aimed.
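The escalation pattern described above, warnings that ratchet up until the car takes over and locks the driver out, can be sketched as a tiny ladder. The stages follow the talk's description; the five-second step per stage is a purely hypothetical value, not taken from any real system or standard:

```python
# Hypothetical escalation ladder for a level 2 attention reminder,
# following the pattern described in the talk: warnings escalate until
# the system takes over and locks the driver out.
ESCALATION = [
    "visual warning",
    "audible warning",
    "haptic warning",
    "speed reduction",
    "controlled stop and lockout",
]

def reminder(seconds_inattentive: int, step: int = 5) -> str:
    """Return the escalation stage after a period of driver inattention.

    `step` seconds per stage is an illustrative assumption.
    """
    stage = min(seconds_inattentive // step, len(ESCALATION) - 1)
    return ESCALATION[stage]
```

Note how the last rung is not a warning at all: the automation designed to assist the driver ends by removing the driver, which is exactly the reversal the talk is pointing at.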
So if the first gallop in this direction leads to galloping in the opposite direction, still with a lot of enthusiasm, it might just be a thing to notice. And that's not a grumpy point; it's all pro progress, but in a good way. Okay, where are we? Failures in perception I've been mentioning, and I don't think I have the time to really explain what it boils down to. But apart from the driver not paying attention, one of the reasons this is problematic is the driver's confusion, but also that it's really unclear what the automated systems can actually do, and I've been reading up on it. Which slide are we on? Hang on. Hang on. Okay, that's good. Okay. So, localization, we're talking about localization right now. This is my pre-mapping slide, which hints at the fact that you need specialized code for pre-mapping, and that seems to be a rather difficult issue. More importantly, though, localizing, which means complementing the GPS signal with other technologies so that you really know where you are, and not only within a range of 10 meters, is basically a lidar-specific problem, because it's about keeping the maps current. The issue is that the environment changes, and you can see up there that it changes really frequently, and if it changes too much, you can actually lose your localization. And that is needed for the car to know where it is, and so on. So in the end this advanced technology presents a drawback for self-driving cars. The weather issue we can just skip, because we knew that already. Interestingly, there are developments that can see in the dark, but how fast they can drive and whether you'd want to sit in them in all situations is a completely different question.
And now, as opposed to the problems with recognizing objects and classifying them properly, which is perception, we come to prediction. And again, even though impressive, it's an open problem. "If you can predict the future accurately, then planning how to react to those situations is easy to solve." That sounds like an A-equals-A sentence. But "being able to predict the future actions of recognized objects is an open problem in autonomous car computing." That is Dr. Eustice from Toyota, the SVP for automated driving. And the issue with this is that there are very specific problems the car needs to solve, just a second, and that is the semantic recognition of something: not only perceiving the surroundings, but understanding them. If you perceive the surroundings, you can see people at the side of the road. But if you understand the surroundings, you understand the difference between teenagers who might be erratic and run onto the street, to use the obligatory example, and an older lady with a young child who very conscientiously wait for the lights to turn. And that's a difference humans are much better at recognizing than cars. So, looking at it stepwise: we have perception; once you've perceived correctly, if you want to distinguish it like that, you have to predict correctly what those objects are going to do, which is a whole other question. And then, once you've done that, you still have some ethical problems, which are usually explained via the trolley problem. Now, for a couple of reasons which will become clear with the next slide, I've put this in only as a joke, because the trolley problem, as it turns out, doesn't give us much information for the development of autonomous cars, neither on the programming nor on the ethical side, although it does focus our attention.
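The gap between perceiving and understanding shows up directly in the simplest motion model a planner can use, constant-velocity extrapolation. It works for the conscientious pedestrian and fails for the erratic teenager, which is the talk's point in one line of arithmetic. This is a generic textbook model, not any vendor's planner:

```python
def predict(position, velocity, dt):
    """Constant-velocity extrapolation: the simplest motion model.

    position, velocity: (x, y) tuples in metres and metres/second.
    Works for a pedestrian walking steadily along the kerb; fails in
    exactly the case the talk describes, someone who suddenly changes
    behaviour and runs into the road.
    """
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Pedestrian at (10, 2) m moving at 1.5 m/s along the kerb:
future = predict((10.0, 2.0), (1.5, 0.0), dt=2.0)
```

Anything better requires the semantic layer the speakers describe: inferring intent from who the object is, not just where it has been.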
And thus it's not to be dismissed as a point or a topic: it focuses our attention on Kantian questions of responsibility and autonomy, or on utilitarian questions, for example as in Mill's utilitarianism, which need to be thought through if we want to be able to structure society properly. We can't just leave ethics out. That seems obvious, but I'm going to make the point once more. Here you can see driver versus pedestrian, and cyclist is another version of this. And this just roughly says what I told you: there has been a lot of hype around the trolley problem, but in the end the information we get out of it is rather restricted compared to situations that could actually happen to you as a driver. Which brings us to fatal crashes due to perception failure, and I'm going to go quickly because we've made that point a couple of times now. I think we're basically through, aren't we? Okay, we've got the perception issues. There is one more slide about security, but I'm happy to hand over right now. The next slide is about Uber, and this is really interesting. We talked about the fatal crash with Uber in 2018. What was interesting here was that when it was investigated by the American authorities, they found what they called a cascade of design failures all the way through the process. The car itself had six seconds to determine what object was in front of it. It kept alternating between different classifications, and every time it alternated, from thinking it was a bike, to an object, to a person, it lost the memory of that person's movement, so it couldn't actually adjust its path to the situation it found itself in. And then, when it got close enough, an action suppression system kicked in to prevent sudden movements, which also prevented it from handing over to the driver.
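The design flaw the investigators described, motion history being discarded on every reclassification, is easy to demonstrate in miniature. This toy is my own illustration of the failure mode, not Uber's actual code:

```python
class Track:
    """Toy object track that, like the system described in the crash
    findings, discards motion history whenever the object's
    classification changes."""

    def __init__(self):
        self.label = None
        self.history = []  # list of (x_metres, t_seconds) observations

    def update(self, label, observation):
        if label != self.label:     # reclassified: history thrown away
            self.history = []
            self.label = label
        self.history.append(observation)

    def velocity(self):
        """A velocity estimate needs at least two points of history."""
        if len(self.history) < 2:
            return None
        (x0, t0), (x1, t1) = self.history[-2], self.history[-1]
        return (x1 - x0) / (t1 - t0)

# The classifier alternates bike -> object -> person: every switch
# resets the history, so a velocity estimate never becomes available
# and the car cannot anticipate where the person is heading.
t = Track()
for label, obs in [("bike", (50.0, 0.0)), ("object", (48.5, 1.0)),
                   ("person", (47.0, 2.0))]:
    t.update(label, obs)
```

With a stable label the same three observations would immediately yield a crossing speed; with flapping labels the track is permanently amnesiac.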
And what's interesting about this is the level of failure the authorities found there with Uber, and also the failures in the safety regime which oversaw the safety drivers: they didn't drug test, there was no oversight. And yet, when it came down to it, they ended up charging the driver and not actually sanctioning Uber in any way. And coming back to the point Aura was making about safety: who is actually going to be responsible for a fatal crash? However safe these cars are, they're always going to cause some fatalities; that's inevitable at the scale of automotive transport. But who is going to be accountable for them? This is not a good precedent. So, cybersecurity; should I just quickly go through? I can do this one very quickly, because it's a very specific one. If you see the headline, it says a study on, can you put in the headline? Yes. Thank you. Automotive Industry Cybersecurity Practices, measured and assessed in an independent study commissioned by SAE International and Synopsys. Now, Synopsys sells software for autonomous driving, so we can see where that is coming from. But they are trying to get it into boxes that we can work with. These are the key results from the study, which is not only about connected or autonomous cars but about cybersecurity in the broader automobile industry, and this point is just a longer explanation of this one. The three key points are: software security is not keeping pace with technology in the auto industry; software in the automotive supply chain presents a major risk, and that's an issue that will lead us back to proprietary versus open software, amongst other things, because the software comes from third-party suppliers and sometimes the OEMs have to superimpose things on it to make it more secure.
And the third point, which we can go into in the discussion: connected vehicles have unique security issues. I mean, we could all have guessed that one. But that's just what I wanted to throw out there, because they do have some interesting questionnaires with people from the industry and from science as well. Okay. And when we're talking about these cybersecurity issues, we also need to talk about the data and privacy issues. Every Tesla on the road today is equipped with hardware for autonomous driving. That means it has eight high-definition cameras giving 360-degree coverage, all recording constantly; it has 12 ultrasonic sensors; it has GPS; it has an inertial measurement unit; it even monitors the pedals and the steering. All of that information is being shared with Tesla's data centers. And it can be recording even when the car is stationary and actually off; it's recording all the time. So it's not just recording the people in the car, it's recording the whole surrounding area and all of the people there as well. Research suggests that a fully autonomous vehicle could potentially be sharing something like 40 terabytes of data every eight hours. And then we have the cybersecurity issue. What if we have malware, for example, in a car? It's one thing if you have it in a computer at home; when that computer is in two tons of metal going at 120 kilometers an hour, that's a bit of a problem. And it's not just malware in the car. A lot of researchers are concerned about passive hacking, that is, contaminating the environment with misleading information, for example with road signs that really screw up what a self-driving car system perceives. These could all be things with fatal consequences. So there are massive issues there. I would like to add that we are aware that recording and monitoring are not the same thing, and that monitoring something and recording it don't have to go together.
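That "40 terabytes every eight hours" figure is worth sanity-checking as a sustained data rate. A quick back-of-the-envelope, assuming decimal terabytes:

```python
# Sanity-check the talk's "40 TB every eight hours" as a sustained
# rate (decimal units assumed: 1 TB = 1000 GB).
terabytes = 40
hours = 8

rate_gb_per_s = terabytes * 1000 / (hours * 3600)   # gigabytes/second
rate_gbit_per_s = rate_gb_per_s * 8                 # gigabits/second
```

That works out to roughly 1.4 GB/s, or about 11 gigabits per second, continuously. Whether all of that ever leaves the car, or most is processed and discarded on board, the talk's point stands: the sensing required for autonomy is, by construction, mass surveillance of the car's surroundings.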
It usually does, though, for a couple of reasons, mostly this one: if you wouldn't need the data afterwards, why would you monitor it in the first place? And those are things to think about. Sure. So, having seen all of these complexities, difficulties, and challenges, we may want to revisit: why exactly do we need self-driving cars anyway? One of the obvious answers frequently given is that it's going to be an immense boost to our economy. One recent report said it's going to add $7 trillion to the world economy; that's twice the size of the economy of Germany. And how is it going to do that? Well, when you look into the report, you can see where they're going. They're saying that $3.7 trillion will be spent on mobility as a service, in other words, taxis. That's an awful lot of money, more than the entire automotive industry generates currently, which is about $3 trillion. So that's a lot of money we'd be spending on taxis. Is that going to make us richer as an economy, as a society? It's hard to see, really. Likewise, they're looking at freight and transport, with $3 trillion spent on autonomous vehicles. Well, that could be more efficient. But of course, we have a huge workforce employed in the transport industry; there are something like 5 million professional drivers in Europe alone. What happens to them? Is this really a great idea for our economy? And fundamentally, is it actually going to make us that much richer if, instead of driving the car, we're sitting in it looking at our emails? It's hard to see. Safety is the other big argument that's often made. This is taken from Waymo's website: 1.35 million deaths every year, and 94%, a statistic you often hear around self-driving cars, 94% of accidents are caused by human error.
The implication being that autonomous vehicles will somehow address all of those. But a lot of researchers have questioned that and said that actually only about a third of those accidents would be avoided by autonomous vehicles. Even when humans are involved, there's often nothing an autonomous vehicle can do, about a pedestrian stepping into the street, for example, to avoid the crash. So the idea of safety is a big, big question. It's an assumption, and there's no real data to support it. What we know is that autonomous vehicles can be reasonably safe in controlled environments, but that's not the same as a normal city. And then we're given this kind of vision. This is Berlin, a lovely Berlin a few years in the future as imagined by Daimler. And this is from a report by Synopsys, another company in this whole self-driving car industry, about how it's going to reduce congestion, a very popular argument, and how it's going to cut transportation costs by 40%, hard to see how, given the cost going into the research, and how it's going to improve our walkability and livability. This congestion issue keeps coming up. The idea, presumably, is that if you have a whole fleet of autonomous vehicles, they can just drive bumper to bumper at 70 kilometers an hour and be hugely efficient. But it doesn't really work like that. For a start, autonomous vehicles will almost certainly, for the next few decades, even if they exist at all, be working alongside normal traffic. How is that going to be more efficient? And evidence suggests that autonomous vehicles could actually increase traffic congestion, as people start using them for completely frivolous journeys they're not even in. So the congestion argument is very questionable indeed. And traffic planners also point to public transport. Highways today can carry a maximum of about 2,000 cars per hour.
If you're very optimistic about autonomous vehicles, you could possibly quadruple that, but that's really stretching it. A good public transport system, by contrast, will transport 50,000 people per hour. And there's no way, as this urban planner says, that any technology can overcome that basic geometry. Maybe just as a side note: if automation could eliminate all driver-related factors, that would help a lot, but it's a big if. And there are actually numbers out there showing that even with increasing automation levels, you don't get that much out of it. We'd have to look at the numbers properly, but the point is that those assistant functions work well mainly on higher-speed roadways that are structured so that they work anyway, and that's not where you usually get the crashes. And even if those were excluded, it would still mean something like only 17% fewer deaths and 9% fewer injuries. So this is an interesting thing to note. So one of the other challenges here, as we said, is that it's so difficult for cars to perceive their environment. And inevitably, because of the industry and the scale of it, you're seeing the alternative. This is from Andrew Ng, one of the most prominent AI researchers in the world today. He wrote an article saying self-driving cars won't work until we change our roads and attitudes; it's up to us to adapt to them. And this is going to be increasingly allowed in the years to come. As one transport expert put it, the open spaces that cities like to encourage would end as the barricades go up; foot movement would need to be enforced with Singapore-style authoritarianism. Maybe that's why we're also seeing a huge amount of hostile action, with human beings being really quite cruel to robots and self-driving cars. This is from Arizona.
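The lane-capacity argument above can be checked with back-of-the-envelope arithmetic. The car-occupancy figure below is an illustrative assumption, not a number from the talk:

```python
# Throughput figures quoted in the talk: ~2,000 cars per highway lane-hour
# today, optimistically quadrupled by full automation, versus ~50,000
# people per hour on a good public transport line.
cars_per_lane_hour = 2_000
av_multiplier = 4                # the "very optimistic" automation gain
occupancy = 1.5                  # assumed average persons per car (illustrative)

people_today = cars_per_lane_hour * occupancy
people_with_avs = cars_per_lane_hour * av_multiplier * occupancy
public_transport = 50_000

print(people_today)                        # 3000.0 people per lane-hour now
print(people_with_avs)                     # 12000.0 with the optimistic gain
print(public_transport / people_with_avs)  # ≈ 4.17: transit still wins
```

Even granting the most optimistic automation multiplier, a single good transit line still moves several times more people per hour, which is the "basic geometry" the urban planner refers to.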
There have been other cases as well. But there are other issues we need to think about. And if you want to look at these scenarios, something like this video coming up is one of the places where you need to worry about where self-driving cars are going. So, as you might have expected, that was from the recent fires in California, something you would think might be present in the minds of a lot of people in the self-driving-car industry, as they're mostly based around Silicon Valley and have probably encountered problems with fires over the last few years. The real issue here, when we're talking about driving, is not who drives a car, but the fact that we have any cars at all: the fact that we have 1.4 billion cars currently on the planet. Anything we do with cars is going to be unsustainable, no matter how we change the technology that drives them. And of course, what the self-driving-car industry assumes is that somehow or other it's going to be fine because they're all going to be electric. Well, this is a lithium plant in Bolivia. And admittedly it looks quite pretty from here. But you've got to remember that each of these evaporation pools has toxic waste in it. And lithium extraction is like any other extractive industry: it is appallingly destructive to our environment, and in the places where it happens, it has a huge cost. When you look at lithium, there is a vast amount that we require. If there are 1.4 billion cars in the world and we change them all to lithium, that's 12 kilograms of lithium per car, which is the normal amount at the moment for, say, a Tesla. That's 16.8 million tonnes of lithium. And yet we have 80 million tonnes of so-called resources, that is, known quantities, but only 17 million tonnes of reserves, those that we can actually extract. In other words, all the lithium we know we can extract, we would have to use for self-driving cars. That means there's nothing left for your mobile phone.
They'll have to go clockwork. And that's 10 times the lithium production we actually have today. And of course, lithium is not the only element we need to look at. Cobalt as well: there's a kilo of cobalt in a lithium battery, and that comes primarily from the Congo, which is the centre of some of the world's worst child slavery. So again, you've got this real problem of locking us further and further into an extractive industry which is fundamentally unsustainable. And even when you look at the carbon, it's not so clear that a lithium-powered electric vehicle is going to be more sustainable than a normal combustion engine. Researchers in Germany have found, for example, that over the life cycle of a car, that is, from manufacture, through the consumption of energy while it's in service, to the actual disposal of the car, the carbon impact of an electric vehicle in Germany is probably just as high, because of the dependence here on fossil fuels, on coal power. Other research has been more optimistic and found that, over the whole life cycle, a standard average petrol car produces something like 250 grams of CO2 per kilometre, whereas a Nissan Leaf, one of the lightest electric cars available today, is at 142 grams per kilometre. A lot less, admittedly, but still significant. And of course, this is one of the lightest cars. But autonomous vehicles are having another impact, and that is on public policy today. In California, on El Camino Real, they tried to introduce bus lanes. And those bus lanes were opposed by people saying they would be antiquated and that we need to wait for self-driving cars. That was in 2014. They're still waiting. They still don't have any bus lanes. The same thing happened in Detroit.
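The lithium and carbon figures above can be checked with the same kind of back-of-the-envelope arithmetic; all inputs below are the numbers quoted in the talk:

```python
# Lithium demand if all 1.4 billion cars switched to ~12 kg-of-lithium packs.
cars = 1.4e9
kg_per_car = 12
tonnes_needed = cars * kg_per_car / 1_000   # kg -> tonnes
reserves = 17e6                             # extractable reserves (tonnes)
resources = 80e6                            # known resources (tonnes)

print(tonnes_needed / 1e6)        # 16.8 million tonnes needed
print(tonnes_needed / reserves)   # ≈ 0.99: nearly all extractable reserves

# Life-cycle CO2 figures quoted: average petrol car vs Nissan Leaf.
petrol_g_per_km = 250
leaf_g_per_km = 142
print(leaf_g_per_km / petrol_g_per_km)  # ≈ 0.57: lower, but far from zero
```

The point of the calculation is that electrifying the existing fleet would consume essentially the entire known extractable lithium reserve, and even the best-case electric car still carries a substantial per-kilometre carbon footprint.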
And in Detroit, they had a referendum which was overruled. And here, and this is at the heart of it really, this is from one of the leading venture capital firms: don't build a light-rail system. Please, please, please don't, says this person from Andreessen Horowitz. We don't understand the economics of self-driving cars because we haven't experienced them yet; let's see how it plays out. And here you can see even Sundar Pichai, in the last few weeks, talking about how Google is helping with climate change and using AI to address its carbon impact, with nothing about the fact that they're ploughing tens of billions of dollars into a technology which is going to take us towards climate change. So the reality is we've got one option for safe cities, and that is to take cars out of them altogether. And what we need to consider is why we are going the self-driving-car route in the first place. It's like other technologies the tech companies tend to push on us, like going to Mars, like cryogenics. These are things that belong in a teenager's bedroom. And with a hundred billion dollars, we could do a hell of a lot more. We could build cycle superhighways. We could fund 10 years of free public transport. So this is the real future of the car, autonomous or not. This is the way cars can contribute to a sustainable future. Thank you. Thank you very much. We have time for a few questions, because there are a couple of them. So the first one is: is it more a liability issue or a technical issue that no autonomous vehicles are on the street yet? Well, first of all, if I may, it's technologically not yet possible, due to the fact that it's always restricted to geofenced areas where the maps are pre-mapped, pre-built, or where the weather can't harm you.
So truly autonomous, or let's call them self-driving, cars are not out there, for technological reasons. But we can be very happy about that, because the liability question, even if they're trying to grasp at it now, is not at all settled. And it doesn't look like they're going to develop it in a way that lets us humans just lean back, the rest of us humans who don't necessarily decide where the millions go. And the liability problems are so intractable, it's hard to see where the solutions really lie under our current regulatory systems. So: both. Next question: are regulators or insurers worried about the danger inherent in human passengers who aren't paying attention, only needing to take control in extreme conditions? That seems far more dangerous than requiring constant attention. Nice one. Yeah, sure. I mean, this is absolutely correct. This is a real problem with level three, we went through the different levels. Level-three autonomy, which seems technically most achievable, is what Tesla is aiming for. The real problem there is that it's very hard to get drivers to pay attention if they're not actually driving. And research has shown time and time again that, as a result, reaction times when something does go wrong are that much slower. And this is a massive, massive problem, and I think it was highlighted in a lot of what you showed. But this is a big reason why, for example, Waymo and other self-driving-car companies are going straight to level five. They actually find level five to be an easier technical challenge than trying to address this human-interaction problem at level three. Before, they wanted to erase the driver entirely. And this question has another interesting connotation. It goes in the direction of: what are they interested in? Are they interested in saving costs or saving the human, or roughly something like that, if I heard that correctly?
And that's a nice one, because that's exactly what is so interesting about this turnaround that they made. First they're trying to skip level three, and then they're trying to have the driver see it through so they can make level three and then four and five. And it's so weird because, as I said, it's never the everyday situations that are a problem with autonomous driving. It's always the interesting, out-of-order situations, where the system is supposed to learn from the driver, in situations where the system would never have decided like that but the driver did. And that is a complete 180-degree turn from the direction you've been going in. First you want to erase the error source, the human, and then you need that same human to not make the worst errors ever, which is just a complete turnaround. Okay, there are quite a lot of questions here. I'm afraid we won't have the time to ask them all. I'll take one. But before that, there's already the question: where can we discuss this further tonight? I would recommend meeting in the pad, because that's doable. Fenkoko, and this is why I'm so glad that you mentioned it beforehand, is basically a standing possibility to do colloquia and conferences on all kinds of topics that you're interested in. And I'm going to give this talk in a different version again at Fenkoko, and I would be delighted if you came and brought in your expertise. And I'm sure Alistair is going to be there. And apart from that, email us, and yeah, let's meet, let's follow up in the chat afterwards. Yeah. Okay, then thanks. Thanks from me as well. And also, there are a lot of thanks, and "that's the best talk we heard" in the chat here. And a lot of questions, so I'll give you the link afterwards and you can chat it out over there. So thank you for the talk. Thank you. Thank you so much.