I've been here for the last year and a half doing research on self-driving cars and their media representations. My thesis is titled Driverless Dreams: Narratives, Ideologies, and the Shape of the Automated Car, and I'm going to present a slice of that for you today, trying to encapsulate my main argument. If you're like a lot of people, when I say that I'm working on self-driving cars, you might think of one of a number of things. This, which is Google's self-driving city car concept. This picture started to circulate towards the end of last year, and it has continued to get a lot of presence; it's everywhere. The device in question is an attempt by Google to move away from their Prius highway-driving work towards low-speed, high-complexity city environments. Another thing you might think of is this: Minority Report, which has actually been lauded by some people in the industry as a relatively good representation of what an automated vehicle system could look like, because it was done by an actual designer. But you probably didn't think about this image, and if you did, then you probably don't really need to listen to the rest of what I'm about to say. Here you have a server technician in a data center doing diagnostics and repair on part of a server rack. And you also probably didn't think about this: a group of people in a room monitoring and supervising a large-scale complex system, in this case a power grid. My contention is that these last two images are just as useful, and in some ways more useful, for thinking about what automated vehicles will be than the first two. So in this talk we're going to cover three main things. First, what are people's ideas about automation? Second, supervision and hybrid human-machine systems.
And third, at a high level, how to design effective vehicle automation systems. To start, and to bring up my main argument: automation involves people. It involves people building, repairing, monitoring, and operating systems. And it's much more productive to think about hybrid human-machine systems than to think about just automation or just autonomy. So what do people think automation is? In general, people seem to think automation in the vehicle space is about fleets of robot cars. This is a vision that comes directly from science fiction. It comes from Minority Report, which we saw already. It's also a vision that comes out of our cultural ideas about what automation in factories looked like and what effect that has had on society. But more than that, it's a picture that comes out of Google and how Google presents what they're doing. I don't think that science-fiction emphasis is unwelcome there; it makes it look like they're doing cutting-edge things that maybe no one else can do, solving big issues. So it's not a notion that they seem in any hurry to disabuse people of. It's also something that comes out of research in artificial intelligence and the discourse around AI history. This is Doug Lenat, an AI researcher, talking in 1997; the quote is, I believe, from Pamela McCorduck's book about AI researchers: "Before we let robotic chauffeurs drive around our streets, I want the automated driver to have a general common sense about the value of a cat versus a child versus a car bumper, about children chasing balls into the streets and so on, about death being a very undesirable thing." And there are a couple of interesting points that come out of this quote.
One is that there's a vision here of maybe a humanoid robot behind the wheel of a regular car. Another is that moral and ethical considerations come to the fore. But it's also an idea that presupposes these things will just be out there in the world, driving around and doing stuff entirely separate from human commands. And people tend to think about that fleets-of-robot-cars vision as opposed to the advanced driver assistance systems being developed by Volvo and Mercedes and many others. You can get these on high-end luxury vehicles right now, and they'll be trickling down to other models of cars pretty soon. These are things like adaptive cruise control, pedestrian detection, lane keeping: incremental improvements on the sort of system we have now. And the opposition of these two regimes of automation is something that actually gets supported by levels-of-automation formulations from groups like the National Highway Traffic Safety Administration, which released their taxonomy last year to guide lawmakers and technology organizations in how they develop and regulate these systems. There are a lot of problems with that document that I can talk about in the Q&A if people are interested. But basically, they define five levels of automation, from zero to four. Zero is no automation, which is troubling, because there are certain things in that category that actually are automated. Four is full automation everywhere. The two regimes are placed at opposite ends of what is ostensibly a continuum, but one built out of discrete components, and they're very much opposed: one is the past, and the other is the ultimate goal we want to reach. We want to be at level four; we don't want to be at three, two, one, or zero. But the reality is actually different.
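The discrete structure of that taxonomy can be sketched in a few lines of code. This is a paraphrase of the levels as described in the talk, not the official NHTSA text, and the identifier names are my own illustrative choices:

```python
from enum import IntEnum

class NHTSALevel(IntEnum):
    """Rough paraphrase of NHTSA's 2013 levels of vehicle automation."""
    NO_AUTOMATION = 0        # driver does everything (yet includes, e.g., automatic transmissions)
    FUNCTION_SPECIFIC = 1    # one function automated: braking/accelerating OR steering
    COMBINED_FUNCTION = 2    # multiple functions automated together
    LIMITED_SELF_DRIVING = 3 # driver may cede attention under some conditions
    FULL_SELF_DRIVING = 4    # vehicle handles everything, everywhere

def is_hybrid(level: NHTSALevel) -> bool:
    """Everything between the two extremes involves shared human/machine control."""
    return NHTSALevel.NO_AUTOMATION < level < NHTSALevel.FULL_SELF_DRIVING
```

The point of the sketch is that the taxonomy hard-codes the "opposed extremes" framing: everything interesting in the talk's argument lives in the levels where `is_hybrid` returns true.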
There really is a lot of stuff in the middle. There are a lot of hybrid models to look at, and we should take seriously the idea of automation as a continuum. Now, on this topic, Google's Chris Urmson says that the advanced driver assistance systems we've been talking about will never become fully autonomous, and he has a great quote about this: "That's like me saying, if I work really hard at jumping, one day I'll be able to fly." I don't want to push back too hard on that, because it is a good point: if we work at automating one little piece at a time, it's going to take a while before all of the pieces are complete, and that's assuming all of those separately designed components end up working together at all. But that vision still assumes the only reasonable goal is full autonomy. So Urmson's future is way off here on the right-hand side, the absolute maximum you can get, and somewhere in the past we were fully manual, and advanced driver assistance will only get us part of the way. But there's actually no reason to think that that one extreme model is the way these things will be, or should be. We may just as likely see something in the middle; in fact, that may be the most productive model for achieving certain types of goals. So we should really consider where our future might lie in this space. And that means talking about hybridity, about hybrid human-machine systems. I have two cases for why we need to be talking about this. The first is the pragmatic case; the second is the fundamental case. The pragmatic case is basically that self-driving cars are hard. There are a number of difficult AI problems that need to be solved in order to create automated vehicles. Some of those are problems in computer vision: object detection is something that's quite difficult.
There are a lot of strides being made in that right now with new computational approaches. But it's still an area where humans are better at tasks like telling a two-by-four from a piece of foam of approximately the same size, and that sort of knowledge about what objects are made of and what their properties are is potentially very important when you move into doing things in the real world. Another difficult problem is pedestrian interaction. When we're driving through cities, we have complicated transactions of space with bicyclists and pedestrians. There's been some work done here at the Media Lab on how to design a robotic car with eyes that track pedestrians and interact in those ways. But it's still a difficult problem, one that people are surprisingly good at because we're social: we know how to do those sorts of interactions. There's also a case for people in terms of risk mitigation and verification, validation, and certification. The amount of code involved in a current-day automated system like Google's or Mercedes's (I ran into this figure relatively recently) is ten times that of a recent military jet fighter. So we're talking about massive amounts of code that need to be exhaustively tested. How do we do that? And especially, how do we do that in systems that may be non-deterministic, that may not always respond exactly the same way to the same stimuli? That's a question we really don't know how to answer. And when I say we, I don't just mean me, or independent computer scientists; I mean the Department of Defense. This is one of their big questions for building automated systems: how do we test them? How do we make sure they're reliable? And one of the ways to do that, which has been used historically and is still being used, is to put someone in the loop to check those decisions and interact with the system.
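That "someone in the loop" pattern has a simple structural shape, which a toy sketch can make concrete. Everything here, including the function names, is an illustrative assumption rather than a description of any real system:

```python
def supervised_step(propose, review, execute):
    """One cycle of human-in-the-loop supervision.

    propose: the automated planner, possibly non-deterministic.
    review:  a human supervisor; returns an action to run, or None to veto.
    execute: the actuator; only ever called with a human-approved action.
    """
    proposal = propose()
    decision = review(proposal)
    if decision is None:
        return None  # vetoed: nothing is executed
    execute(decision)
    return decision
```

The point of the sketch is purely structural: no action reaches `execute` without passing through `review`, which is one answer to the question of how to bound the risk of a system you cannot exhaustively test.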
And there's actually a whole engineering field designed around this idea: human factors engineering, which takes very seriously the idea of human supervisory control, or joint cognitive systems development. These are close cousins of AI; they're interested in a lot of the same objects, but from a very different perspective. From an AI perspective, you could look at the vehicle as an object in itself: how much intelligence does it have? What are its capabilities? From a human supervisory control, human factors perspective, you look at the human and the vehicle together as a system trying to accomplish certain tasks, with certain interrelationships of control, of supervision, of agency within it. And one of the guiding ideas is to let the human do what he or she is best at, whether that's identifying whether a certain social behavior, like someone crossing the street, is threatening or non-threatening, or whether that's a high-level management focus: where do we want to go? What do we want to do? But there's another case here, which is the fundamental case: full automation simply does not exist. Automation is about human goals; it responds to human needs, and therefore it comes from and involves humans. An important point here is that it doesn't necessarily reduce labor; it changes the form of that labor, and it displaces that labor in space and in time. So instead of occurring within the vehicle at the time of operation, the labor might occur beforehand, in an organization where programmers are working to design the code to deal with these sorts of situations. Or it might occur at the same time but at a different location, in a remote data center somewhere across the world where the system is being monitored. But however you cut it, this process still involves people. It involves people in development.
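The human-factors idea of letting each party do what it is best at is sometimes called function allocation, and it can be caricatured in a few lines. The task names and the split between them are illustrative assumptions drawn from the examples in this talk, not from any real allocation standard:

```python
# Toy sketch of function allocation in a joint human-machine system.
HUMAN_STRENGTHS = {"interpret_social_behavior", "set_destination", "handle_novelty"}
MACHINE_STRENGTHS = {"lane_keeping", "adaptive_cruise", "collision_warning"}

def allocate(task):
    """Assign a task to the party best suited to it; default to human oversight."""
    if task in HUMAN_STRENGTHS:
        return "human"
    if task in MACHINE_STRENGTHS:
        return "machine"
    return "human"  # unanticipated tasks fall back to the human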
These cars require a lot of testing and programming to make them work. This was a job posting from Google that my father actually sent me about a month ago; he said, you should see this. "Self-Driving Vehicle Operator, Operations Associate. We're looking for vehicle safety specialists to be a part of the Google self-driving car project, responsible for operating a vehicle for six to eight hours per day collecting data." So there's a lot of this labor that goes into these systems. They also involve people in management. And this is a point I can best make by historical analogy with means of production. Factory work comes up to replace artisans and workshops over the 18th and 19th centuries. There are a lot of changes involved in that process, including many shop-floor workers turning into menial machine tenders, responsible for just sitting there, pressing the button, and ensuring the operation of the machinery. But it doesn't just do that. It also generates new cultures of machinists and mechanics who are responsible for designing machines, building them, installing them, repairing them, operating them. And it creates new management roles and strengthens management culture. So some of the control over what's being done gets moved from the person on the floor to these new management positions responsible for strategy. Even in that factory space, people are still very much involved. Not only that, automation also involves people in operations. And I think a good place to go through this historically is space. Wernher von Braun saw astronauts as missile riders: you would get into the vehicle, press a button, it would take you to space, you would do your science, and it would return you to Earth. You didn't really have to interact with the machine.
But that wasn't how these systems were designed, for a number of reasons, including that if you're putting somebody in a machine that can very easily kill them, they quite reasonably want some sort of control over what's going on. And so you get the Apollo system, where you have a digital computer, but you also have all sorts of manual backups and switches. And you also get situations like today: on the left, you have a combination of robotic and human actors working together to do assembly and repair in space. This image is reminiscent of the person in the data center that I showed at the beginning. But that's manned space. What about unmanned systems, which you could say must be more autonomous? The Mars rovers are popularly seen that way: Spirit and Opportunity are figured in an anthropomorphic way, as making discoveries. But if you pull back the curtain a little bit, you see a big room at JPL with a bunch of people in it, monitoring these systems and interacting with them on a daily basis, sometimes actually living on Mars time rather than Earth time, interacting at least once per day, sending commands and receiving replies. These vehicles can actually do automated pathfinding: if they know their location and you give them a destination, they can figure out how to get there. But that capability is rarely ever used. Instead, the engineers responsible sit down, plot out the trajectory, and send those instructions to the rover. And when the rovers are out of contact with the Earth, they could operate autonomously, but they are not allowed to. Instead they're told to just sit in place and do what they can in a particular location. It's deemed too risky to let them wander around on their own without that constant human supervision. So why does this matter? Why do this hybridity, and these historical examples of it, matter?
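The rover pattern described above, onboard autonomy that exists but sits behind a daily human command cycle, can be sketched as follows. The class and method names are my own illustrative assumptions, not JPL's actual software:

```python
class Rover:
    """Toy model: humans uplink an explicit waypoint plan once per day;
    out of contact, the rover simply holds position rather than using
    its (unused) onboard pathfinding."""

    def __init__(self):
        self.position = (0, 0)
        self.plan = []

    def uplink(self, waypoints):
        """Daily command cycle: engineers on Earth plot the trajectory."""
        self.plan = list(waypoints)

    def step(self, in_contact):
        """Advance one step along the human-made plan, or sit in place."""
        if not in_contact:
            return self.position  # deemed too risky to move unsupervised
        if self.plan:
            self.position = self.plan.pop(0)
        return self.position
```

The design choice worth noticing is that the machine's capability and its authority are separate: the rover could plan its own route, but the system grants that authority to the people in the room at JPL.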
Well, they matter because we can expect automation to have major social impacts. Social impacts in terms of road use: not only how much the road gets used, but who gets to use it. Are there important class dynamics to who gets more mobility and who gets less? It matters in terms of congestion. One of the big questions for these systems is: are you making driving easier and more seamless, so that more people take more trips and the traffic situation actually gets worse? Or can you offset that with the increased efficiencies of the system? Do you gain enough efficiency by increasing automated control to make up for more people using the system? You have questions of pervasive monitoring: are we setting ourselves up for ubiquitous surveillance by putting video cameras on all of our vehicles? How is that data managed and handled? And we also have the ethical dilemmas that we ran into before with the Doug Lenat quote. These vehicles are very much the classic trolley problem waiting to happen in real life, and how do we deal with that? How do we think about that? All of these questions depend on the engineering approach you take, including where the human is involved, how they are involved, and what their role is. Why do we automate systems? Well, we automate systems to reduce the failure rate of people, because there are certain things we're not particularly good at. But we don't only do that. We also automate systems to increase human agency: to increase the agency of a group of people on Earth, in the JPL lab, to do things on Mars. And we also automate to increase management control over production, for good or ill; I'm not making that argument here. But this is not to say that automation makes the human less important.
In fact, here you have the National Research Council talking about automation use in civil aviation, saying, quote, humans' role "becomes more rather than less important" moving towards the autonomous end of the spectrum, "because it is so important to assure that the systems are properly designed, tested, deployed and monitored." And then you have the PARC/CAST working group, a prominent airline safety working group in the US, which says that an exclusive focus on pilot errors will not take into account the positive actions and decisions that pilots take on a frequent basis. So the idea here is that "just add automation" isn't the answer. Taking what we think of as a car today and adding automated systems to it doesn't, on its own, achieve any sort of social good. Instead you have to start from the desired impact. What do you want to do with the system? Do you want to increase safety by a factor of a hundred? Do you want to decrease the environmental impact of the technology? Do you want to provide mobility for the poor, the elderly, the disabled? These are all positive goals; they're all good things we could get behind. But they're not all equally served by the same engineering approaches, and some of these goals may actually turn out to conflict with each other. Providing more mobility for more people may be detrimental to the environment; that may be an issue where those two goals are in conflict. So, to conclude: all tasks are hybrid tasks. What I'm saying is that no system is fully automated, and we shouldn't take for granted this idea that systems move from all human involvement to no human involvement. Second, that human involvement is both a weakness and a strength. We are both sources of failure and sources of great success.
And then, a focus on people, on how people are involved in automated systems both now and in the past, changes the questions we ask. Rather than asking how we get rid of people, or how we make things more automated, once we see that fundamentally all these systems involve people, we can start asking: how should they be involved? What are the appropriate roles for people to achieve particular goals? We can start from those goals and move towards the engineering system. And finally, I want to close with a note of caution: automation responds to human goals, but it serves some goals, and some people, over others. To go back to the factory model I brought up earlier: one of the effects of automation in the factory was to create two very disparate classes of labor, the creative managers in charge of large-scale strategy, and the menial machine tenders responsible for just sitting there and pressing the button. That doesn't mean we shouldn't automate things, but we should be very careful, as designers, as legislators, and as citizens, about how that automation is being done and what role we end up in. We may have to be very careful to ensure that we don't end up as the menial machine tenders for the capital investments of the multinational software companies that design these things and keep the management role for themselves. Thank you. I'll take any questions now. Any takers? Yes.

So, you brought up that public perception of unmanned vehicles mostly involves either the people in the vehicle or the people responsible for operating the vehicle, who may not be in the car. But what about discussion of the rest of the people, of the infrastructure that could change in order to accommodate these vehicles?
Because right now, from everything I see, the only time those people are in the conversation is when the car might hit them. And yet we teach kids how to stay off of train tracks.

Yeah, that's a good point. It is something I deal with a little bit in the thesis itself, looking at models from the reshaping of the street around the automobile in the first place, and around the streetcar, and the different modes of behavior that came out of that; the street didn't used to be a place just for vehicles. One of the main concerns I have run into about people, and these are concerns you see reflected in news articles and other things that reach a popular audience, is whether we're going to make cities worse. There's a lot of talk from the urban planning side that the automobile has ruined cities, especially in this country, and perhaps in Asia, and that we need to move away from that. One way to do that productively may be through vehicle automation, but not necessarily. So that's actually an interesting space where you do see nuance: this could decrease traffic, but it could also make it much more difficult to get around as a pedestrian.
One thing I'll mention on that point is from a presentation I went to at the Media Lab last year, where someone talked about replacing all traffic lights with a sort of slot-based system: all of your automated vehicles drive up, and as they get close to the intersection they request a slot and then go through in a complex, computer-choreographed way, rather than having to wait for traffic to finish in one direction. And there was a very pointed question asked: well, what about bicyclists? What about pedestrians? There wasn't a very good answer to that. My own thought had been: what does that do for, say, a homeless person who doesn't have a smartphone? How do they get across that street if everything is based on slots? That's a question you do sometimes see come up in the popular press. I actually see those concerns less in the engineering spaces I've looked into; those documents are much more focused on the technological approach, and that approach is often, though not always, focused on the individual vehicle. I think companies like Google have decided they can't depend on any broader governmental push to change the infrastructure, because people have been talking about automated vehicle systems since the fifties, at least in a serious way, but those systems would have required other infrastructural changes, and I think that's part of the reason they never got off the ground. So there is some talk about the broader infrastructure, and I try to address that a little bit. But then you get into the drones issue, right? I didn't cover that.

You mentioned NHTSA's taxonomy. Can you say a little bit more about what you found good or bad in it?

Yeah. Well, it's good that there's a document out there that tries to address this, because it is a space that really does need some guidance, so that companies can have some idea of what will be expected of them from a governmental perspective, and presumably so that legislators can get some insight into what's actually going on and what the technologies are. But I think there are two major problems with that taxonomy. One of them I talked about already, which is that the levels are very artificial. Within level zero, automatic transmissions are not automation; they're in the no-automation category. And information automation, which is a well-recognized category within human factors research, doesn't count as automation either: if an automated system is only giving you, the driver, information about roadway conditions, that doesn't count in that scheme; that's level zero. That doesn't seem to make sense to me, and it doesn't seem to be a productive way to talk about it. And then it's set up as zero, one, two, three, four, and we obviously want to get to four; I think it could be a different document if it didn't take that hierarchical structure. The other problem is that within each of those levels, things are very discrete. Level one is that one axis of the vehicle is automated, either accelerating and braking, or steering. Level two is that both axes are automated. Level three is that both axes are automated and you can stop paying attention for a bit, like if you're on the highway. And level four would be that both axes are automated and you can stop paying attention all of the time. But even in vehicles today, we're undergoing complicated transactions with automation on a minute-by-minute basis. If I let go of the wheel in my vehicle, I'm trusting a very mechanical
automated system that just keeps me going straight; there's a reason our vehicles are stable. That hasn't really changed the properties of the vehicle, but there's been a transfer of agency there. And if you're going up a hill and your vehicle is slowing down a bit, you may push a little harder on the accelerator, because even though you have an automatic transmission, you still want some input into that system. Those complex dynamics of what you're actually doing as a driver on a minute-by-minute basis, in terms of how you're working with the system, just can't be dealt with at all in the way that taxonomy is set up, because it's a very artificial description of how these systems work. I actually think the Society of Automotive Engineers does a better job with their taxonomy, which was released, I think, six months before or after; I don't remember which order they came out in. That one is better at dealing with some of these nuances, though it still has its own problems. Jesse?

Yeah, you mentioned that it's not a good model to have manual on one side and automated on the other, but could you see, in the near future, with maybe a four-level system, certain areas of a city or certain roads being designated as automated-only? Or might some areas just pose too much of a problem to enter fully, except for a level-four system?
I think segregated operations areas are a totally possible thing; that's actually happening right now. A lot of the driverless cars in Singapore, or in other areas of the world, are low-speed golf carts, and it's a very particular physical regime: they go slow, they're dealing with pedestrians, but in a collision people aren't really going to get injured. There's also a case to be made for things like other rapid transit systems that are like automobiles but operate in separate areas. But again, I would just say that the question of whether there are places too complicated to ever fully automate is a question that makes more sense when what you're thinking about is fully automating things, and less sense when you start asking: are there particular roles that we always want human beings involved in, and what does full automation even mean in that case? Do we always need a human being somehow watching the system, to be ethically responsible for accidents or other issues that happen? Are we just a bit too squeamish to let that be a decision that some human isn't watching? And then, is that a fully automated system anymore, if it's "here's an area where you can build that, but we decide you need a person there looking at it"? I'd say it's not a fully automated system. That's not a great term, precisely for the reasons that...