The Media Lab is, I think, fairly well known for technology and innovation. Over the course of the last 32 years, we've developed a lot of technologies and spun out a lot of startups. I'm losing my first panelist. But there are lots of places that are doing technology and innovation; I think what's been more unique about the Media Lab is that we are interested in the relationship between the technology and human systems. So it's not about the technology per se. Originally it was maybe about the interface between humans and technology. Then, in a second wave, it was maybe networks, the communities that technology enables and what that does for humans. And I think now we're getting into a stage where the material per se is becoming all kinds of things: it's bio, it's genetic engineering. And one of the topics that we think about a lot, and that these three panelists are thinking about and working on, is AI. There's a lot of interest in artificial intelligence as a material that we could be designing new kinds of technical, but also human, systems with. So I want to spend the next 25 or so minutes exploring some of the issues around this relationship between humans and machines, the material of AI, what kind of possibilities this material offers us, but also what kind of responsibilities it demands of us. I'd like to keep this quite informal, more of a conversation. I've encouraged them to interrupt me, to interrupt each other, to have a lively discussion. And I also encourage you: if you want to ask a quick question, you don't have to wait till the end. There are some microphones. I would ask you not to make long statements, but if you want to throw in a little provocation or a little question, feel free to signal us and we'll make sure to pull you into the conversation. With that, maybe I'll start with Joey.
So, you know, the Media Lab in its role as an institution that trains or educates designers and developers of technology. You don't have perfect control and you don't have perfect predictions. I have a 15-month-old baby now, so I'll use the baby metaphor. When you give birth to a child, you have control over what the child sees and eats and does. But you don't have complete control over what the child becomes or what they actually do. You're still responsible for the child, though. So if you create a font, if you create a machine, if you create a smell, you are responsible for the aesthetic, educational, environmental and societal impact of the thing that you make. But you don't know exactly what's going to happen. So the trick, I think, is to be very aware of the context of all of the systems that it will hit, and to instrument your interaction in an iterative way so you can watch the child. When you see the child running around and you think it's about to do something, you can try to intervene. But I really do think it's an ongoing relationship; it's like introducing a new life into a complex system. And coming back to the topic of AI, I know that you've been involved in a class about AI, and it's actually not a class about AI, it's a class about the ethics of AI. So I'm curious: why that framing, where did that come from, and how is it different from what other people are doing? I think some other people are doing it. The class that I'm teaching with Jonathan Zittrain is specifically Harvard Law School, the Harvard Kennedy School of Government and MIT. And it's lawyers, philosophers, engineers and policy people. But I think that you shouldn't be allowed to make laws or policy without understanding technology. And you shouldn't be allowed to make technology without understanding ethics, philosophy, law and policy. And so our goal is to get the engineers to understand the other side and vice versa.
And our measure of success has been that we've got some of our best engineering students to go start the law degree program. It's harder, I think, to get lawyers to become engineers. I was going to go to York next, but maybe I'll already change the flow, because Julia is looking at me. I'm looking at Julia. You shouldn't be allowed to make laws. No, definitely not. I mean, if this is the standard, that I have to understand AI and technology in order to make laws, well, I guess a lot of people who elected me probably didn't think about that at all. And if this is the goal that we aspire to, that we want to be governed by experts, then we would probably need an entirely different approach to democracy than we have today. I think today, at least in the ideal sense, it's a lot more about trust: who are the people we trust to actually seek out the expert information they need in order to make informed decisions? Sure. When I said understand, I don't think it means that you have to be able to code. But on the US Supreme Court, the justice who just retired didn't use computers. He only wrote with a pen. He didn't like electric lights. And he was judging things that had to do with the internet. And we have a lot of that. When I was fighting with Lindsey Graham about the internet, he had never used email, you know? So you have to have at least a sensibility to understand the architecture. Otherwise, it's gobbledygook. So I don't think that you have to be an expert, but you have to understand it. It's a slightly different thing. I mean, I guess perhaps we could say that only young people should be allowed to design schools, and only old people should be allowed to design retirement homes, and only people who use the internet should be allowed to govern the internet. And that might actually be a good approach.
I mean, what I see in my work a lot is that we make quite general rules that apply to the internet as a whole but are designed to only work with Facebook and YouTube, because the people who are making the laws only know Facebook and YouTube and perceive them as being the entirety of the internet. And when I ask them, well, how would this new legislation that would require you, for example, to scan every uploaded piece of work for copyright infringement, how would that work for Tinder, or how would that work for GitHub, then of course I need the person I'm talking to to know what Tinder or GitHub is actually about, and ideally to have used it at some point. And that is very much not the case. But then you could wonder: is this because our politicians are too old? Is it because they don't have enough expertise, because we're asking our politicians to do more and more with fewer and fewer resources? I think both of those could be part of the problem. This was great. Let me jump back to York, because I know you're thinking about the types of structures and systems that we would need to support an engagement with things like AI at the level of a state. We've talked a little bit about who should be allowed to govern, who are the experts, who do we trust, and how do we maybe prepare people to take on those responsibilities. What are some of the other pillars, let's say, of an approach to this? Well, I think my first point would be that there needs to be an understanding on both sides. You know, AI is politics, and those who are coding and designing need to understand that they heavily affect society with what they do. Maybe not when you come up with a music engine, but we have this understanding in our society, among the general population, that AI is something really remote or dangerous, like the Terminator. And we don't understand yet that it really influences us in our everyday life. I mean, maybe not AI, but algorithms.
And so if you live in New York these days, the question of which high school your kids go to is decided by algorithms. The question of whether the police are patrolling your street or not is decided by algorithms. The sentence a judge gives you, or the decision to release you from prison, is supported by algorithms. And I think we have to have an understanding that everybody who's programming those things has a political responsibility, and so we need training on both sides: make people aware of what they are doing, and not only make lawmakers and politicians aware of what modern technology brings to our society. So this is a perfect example. Right now in the U.S. we are passing laws that require algorithms to be used to support judges, but the companies that are selling them at, you know, police expos and conferences are stipulating in the procurement letters that the data is not disclosed and the algorithms are secret. And so when you have a court case, when a defendant is trying to ask why she or he can't get probation, they can't compel disclosure in court. If the judges, or the lawmakers who passed the law to include the stupid risk assessment thing, which they shouldn't have in the first place, had known better, they would have put in a provision that said the data should be open and the source should be open. And that's the example I would give of a politician, or a lawmaker, knowing enough about code or software to be able to say, oh, this should be in the law about procurement. Yeah, I think there's also a difference in the situation where the state is actually actively employing AI, because then I would say the primary responsibility is not with the developer. Then the primary responsibility is with the state, which gets to choose which kinds of companies to use and what kind of rules to give them. But I think that not all of our lives are controlled by the state.
So of course, especially if we're dealing with companies that are employing AI at a grand scale and that are building quasi-public spaces, like Facebook, for example, they do have a responsibility even in areas where they are not government mandated. But in order to be able to judge as a politician where using AI makes sense, you do need a certain understanding of it. What is even more problematic at the moment in the way that politics approaches AI, though, is that it is used as a way of basically having less work. We are confronted as politicians with a huge array of extremely complex problems, and increasingly the answer of politicians who are dealing with limited resources and limited money is to say: well, let the big companies fix it for us, and we don't exactly care how you remove hate speech from the internet, for example. We just want you to do it, and if you want to keep the algorithms that do it secret, then we will let you, because you're fixing a problem for us. So I think there's also an issue of who actually has the ultimate responsibility for these... But I think there's a broad field where it's not the direct responsibility of a government or a state, but it affects the chances and equity issues of a society. Whether I get credit from a bank might be just some issue between me and the bank, but if I can't get credit anywhere, my chances and opportunities are reduced. And today it's algorithms, maybe even intelligent algorithms, deciding whether I have access to credit and, with that, to an opportunity. So here I see a very broad field where it's not direct government responsibility, but where you can't just look away and say, oh, just leave that to the finance industry, they'll figure out some way. But it is direct government responsibility to the extent that regulators report to the government.
So the credit scoring companies in the United States are now selling your credit score data to marketing companies, and there is a law in the U.S. that says you can't sell personally identifying information together with credit scores, so they sell it by household instead. They send predatory advertisements to the house and argue that the house doesn't have privacy. And then what's happening is they're now using credit scores to grant jobs and loans, and they're including Facebook data and LinkedIn data in credit scores. So it's creating this horrible loop, and you have a regulator; the government is supposed to be watching this, and they're letting it happen. Also, I'm going to make one other point. If you go back in history to the insurance business, when they were trying to calculate premiums and actuarial tables, there was quite a debate in the United States about what is fair. Should poor people be carried by the rich people? Should your premium be decided by your specific category of risk? Should people with pre-existing conditions be covered? There was all this discussion about risk, and then the technical statisticians came in and they made it very complex. They came up with a single definition of risk that was too hard for the activists and the people to understand, fairness became a mathematical, technical decision, and then all of the democratic discussion disappeared. The same thing is now trying to happen in AI and machine learning, where you have different kinds of fairness: fairness of discrimination, fairness of outcomes, fairness of need, fairness of accuracy. Each community is going to have a different view of what's fair.
But what they're trying to do, the community of statisticians and machine learning people, is come up with a checkbox that says: this is the definition of fair, just like the insurance industry made its determination of what's fair many years ago, and the government doesn't know enough to intervene. This is an example where the government could or should have an opinion. They don't have an opinion, the experts are now fighting over this, and this, I think, is an important battle. I wonder if we could go back to you, Julia. Maybe you could talk a little bit about your experience with legislation that deals with technology regulation: both the frustrating sides of it, maybe, and the limitations, but also, I think, some very positive aspects of your work that we should highlight. Okay, I will try not to depress you too much. I actually found it very fitting that you started with Kraftwerk, because Kraftwerk symbolizes one of our huge challenges in technology policy, and that is speed. The European Court of Justice is currently dealing with a court case of Kraftwerk versus Moses Pelham. I don't know if anybody still knows who Moses Pelham is, but he was a German hip hop DJ in the 1990s, and in the 1990s he used a very short sample of a Kraftwerk track.
Now, under traditional copyright law it's very clear that if you take somebody else's melody that they have composed and use it as your own, you need permission for that. But what is not at all clear is what happens if you use more modern technology, which in the 1990s was sampling (I don't think we would necessarily think of it as very modern technology today), and you don't actually take somebody else's melody, you just take two seconds of sounds of metal banging against each other and use that in your own work to make something new. Under US copyright law it would probably be relatively obvious that this is not an infringement. But the point that I want to make is that in 2018, 20 years later, the courts are still fighting over this, and I think that is a big problem: we have not yet come up with the mechanisms to make democratic decisions, at a reasonable speed, about how to deal with new technologies that are not new at all anymore by now. That is a huge problem, because in the meantime the cultural use of technology is just going to create facts, for better or for worse. If we now decide that we think it's morally wrong to take somebody else's metal banging on pots, or not, it will not change what we consider actually fair; the law will invariably be far behind the technological development. But on the positive side, I think you can see, at least in European politics, in the European Parliament, that there is a certain responsiveness. If a big online community, like YouTubers for example, actually speaks up with their own stories and tells politicians, who may not have first-hand experience writing on the internet, that a certain proposal would threaten our very cultural surroundings, that can have an effect. We just had a vote about this in the European Parliament in July, where a majority actually voted down what the supposed experts in the committee had
decided, because suddenly an entire generation of 17-year-olds was writing to their representatives saying this is completely out of touch with how we use technology today. So I do think that technology is on the one hand under-governed, because we are too slow to react to the developments, but at the same time there's a huge opportunity in new ways of communicating with your politicians. I found the proposal about using blockchain to track politicians' promises very interesting in that sense. I mean, I think there are probably flaws to it, because it assumes that every politician can make a decision by themselves and that they're an absolute ruler, but at the same time I think it's grasping this potential of using the internet, using technology, to create a dialogue between these different backgrounds. So I'm curious if you've seen other spaces, or if you think we need more spaces for this kind of conversation. We definitely need more spaces for that kind of conversation, because first of all, if the general public isn't aware yet of what's going on, there is no need to discuss it, because you don't know it's happening. Maybe an example: if you build a new building in Germany, you can put down the proposal in some hidden room in some public building, and if, as a neighbour, you don't know that something is happening, you wouldn't look at the proposal and maybe intervene. In Switzerland, if you want to build a building, you have to put up a frame of the building, made out of metal pipes, at the place where it's supposed to be, and everybody walking by sees that something is happening there; people become aware. So the first step is general awareness, getting people involved: there's something which affects everyday citizens' lives, and everybody should be involved. The second issue: for a conversation you need more than one partner, and there is a discussion in the scientific community, there is a discussion in the political community, there is definitely
lots going on in the industry, but there are not enough platforms where those different people meet. So we need awareness and we need platforms for a conversation, in order to get everybody involved and not only informed. Politics, and often industry, act as if it's enough to just keep people informed of what's going on; I think here we really need participation and involvement. And actually this reminds me of something I saw in one of the presentations, the myex.ai one: there was this gasp in the audience when they saw what this new company, or this new fake company, is doing. So I think we also need art. There are certain things here that are actually easier to communicate through art, or through performances, or through experiences, and not necessarily just by having the facts written down and provided to everyone in the newspaper or a public service announcement. So I wonder, maybe back to you, Joey. At the Media Lab we think about art, design, science and engineering as a wheel. I think that art is very good at posing these provocative theories that evoke an emotional reaction, which we should then go in and explore. Because when I was looking at this myex.ai, what I was thinking was that this may be a fake commercial product that we don't want to be on the market, but at the same time it's also a very real experience that a lot of women already have today. I mean, this is exactly how stalkers use social media, and they may not have artificial intelligence to support them doing it, but it's nevertheless a reality for a lot of women. So if we think that is creepy, and if we wouldn't want that to be a business model, we should also think about, okay, how do we make sure that the internet actually makes sense and works for everybody who is already using it today. And I think, when I was on the Prix Ars Electronica jury for the internet, one of the key things is that artists will use the tools in ways that they're not intended, and they are
creative. So the app, I think, is on the edge of art for me. Really interesting art is when they break the tools, and that does two things: I think it advances the tool, and it also allows you, like you said, Philip, to look at it from a different direction. So there's also a very positive piece. There's critical design, which is to criticize and show the negative, but at the Media Lab we think about things like photography or computer graphics and games, where a lot of the artists were also involved in the technology, so the form was able to evolve in a very interesting way. Whereas in forms where you have the technology and the artists separated, like television or newspapers, the form kind of got stuck and wasn't able to adapt to the technology or the social system. So bringing art and technology together does those two things: it provides the societal context back to the engineers, but it also takes the engineering and moves it in a creative way. But I think it's also the job of art, and of science and education, to demystify the black box of algorithms. Because as long as the general population thinks it's just something we cannot understand, that it's completely closed, completely complex, one billion lines of code, then it needs science, education and art to translate the complexity into something visual, or understandable, or simple. And so I see here more than just breaking the rules, which is part of the game; it's really this translation issue, into something everybody can understand, and understanding that it affects them. And you know, myex.ai, or whatever this wonderful company was called, is exactly that: it confronts me with something I put out digitally in conversations and plays it back to me in a super visual way, to make me understand that this is affecting me and the way people can understand me or think about me, because it's out digitally on the web. I completely agree with the point that it's very important to demystify, because that actually will lead to
better policies. But one big challenge that I face there as a lawmaker is that there is an entire industry of lobbyists whose primary job it is to confuse us as politicians about what algorithms actually do. I have sat in seminars designed for politicians where certain academic publishers, for example, tried to explain to me that if we allow scientists to data mine academic articles, then Trump wins the elections, and these kinds of... I mean, I'm not making this up, this was a seminar I participated in. And I think this shows that we have to sift through so much misinformation from people whose job it is to basically confuse us and lead us down the wrong path. So one thing that I found encouraging, and I think we've been focusing more on the dystopian sides, is that many of these projects explored ways to demystify technology by allowing more people to use those technologies. I remember a conversation with Eric from LEGO before the workshop. He said, you know, 20 years ago the kids started going to these maker spaces and learning about electronics and about sensors and motors, and then the next time they go to the supermarket and the door opens magically, they look at the door and they're like, oh, I know exactly how this works. It gives this incredible sense of ownership and power over the world, by understanding how to use these materials and these technologies. And I think it's much harder to do with AI and some of the more modern technologies, but some of the projects, actually, I think were pushing in that direction: how do we make them tinkerable? You're not necessarily going to get to the highest sophistication, but that's not needed. Go ahead. I don't actually think they're harder. Well, I think that they are a little bit harder right now, because we don't have the interfaces. But the AI one actually comes from work by Stefan at MIT, who is trying to figure out how to teach young children about AI using the
bricks and using robots, and the concepts of AI aren't that hard. In the real world, when you cook, you don't actually understand the chemistry of what's going on, but you have a cookbook, and you think you know how to cook, and you can actually walk into a restaurant and say, I know how to make that omelet, and it gives you power. So I think that a lot of the emerging technologies are just complex because they haven't yet been put into nice Lego bricks. The experts know how to make the bricks, but once you have the bricks and you understand their function, it isn't that hard to learn. And getting back to your point, the question is whether the design is meant to express what it does or designed to confuse. And to me there's a bit of a difference between art and design. Design is about how to make something more suitable for use in society; art is a little bit different. It doesn't really care about whether it's useful for you; art is more of a perception, more of a provocation. But anyway, that's a technicality. I think that once the technologies get better designed, they'll be easier to understand intuitively. Perhaps as an example of how this can be translated into a policy field: a few years after 9/11 there was this huge security craze in politics that we still haven't gotten completely over, and in the youth political group that I was active in at the time, we decided to organize a security conference, which is what all the political parties do; it was the hot topic of the day. And at this security conference we thought about, okay, what is actually most likely to kill me as a 16-year-old girl in Germany? And then we talked about everything from suicide to traffic issues to drugs, but we also did things like, for example, isolating DNA from a banana in a half-hour workshop, because I think it's extremely useful to use transparency
and understanding as a means of countering fear. And I think at the moment there's a lot of fear in the AI debate, which may also be part of the reason why politicians have a tendency to push the topic away to the companies. Mark Zuckerberg comes to the European Parliament and tells us AI will solve all of our problems, that it will get rid of fake accounts and get rid of illegal behavior on the internet, practically, because he has an interest in saying that: his company is building the AI. So if we are extremely afraid of these problems that he is promising to solve for us, then we are of course very susceptible to these kinds of promises, and we don't have the tools to question whether this is actually the way forward. Before we continue, I do want to invite all of you to participate. So if there are questions... there is a hand. I think the microphones are here in the middle, so I would ask you to please get up, go to the microphone, and maybe just say your name and who you are.
Hi, my name is Juan. I studied physics here at the TU Berlin and I have been working as a programmer for the last five years, more or less in this field, and I don't want to sound like a Luddite or something; I think technological progress is very important. But I have become slowly disenchanted in different aspects, and I think the example that Joey just gave, that judges are being supported by algorithms that are not open, just completely defeats everything that is fundamental to a justice system. So I want to hear your three opinions: isn't it necessary that the tech community maybe finally just acknowledges some shortcomings, what you can do and what you can't do? For example, Joey said that there's this initiative working with the Kennedy School of Government and with Harvard and MIT, but that sounds to me extremely elitist. There has been an ongoing movement in public education of dismantling this whole principle that we should learn how to learn and what to learn, and now people just learn how to be embedded into an economy. And there's no amount of smart contracts on the blockchain or AI that will fix the problem that, for example, there should be massive public outrage that even these algorithms that are practically helping these decisions by judges, that this code is not disclosed, and there's no technology that's going to change this. There's a million questions in there. Shouldn't there also be massive public outrage if we see that humans discriminate, for example in court decisions? I just wish we would take a more positive look. I mean, clearly some of the software products proved not to fulfill all the promises we had hoped for, but they can both be less discriminatory and take away repetitive tasks from judges, giving them more time for the really important tasks. I mean, if a judge has 5 seconds on average to decide on probation
or not, I don't want to be there. I want a judge to have more time for my case, maybe because something algorithmic helps him with very repetitive tasks. And beforehand we talked about the finance industry and the problems with credit ratings. Yes, there are problems with credit ratings, but on the other hand, in the traditional banking way, something like 50 or 60 million Americans are invisible to traditional credit scoring, and only when you use algorithmic systems that access more data do those people become visible and get a chance at credit. So I think we really have to see both: the opportunities and chances, with a positive outlook on what could be possible, and obviously the things that go wrong if you don't have the right rules and the right regulations, with people misusing the power of algorithms and AI. I wish the discussion wouldn't only go in this dystopian direction; we tend to lead the discussion there sometimes too quickly. I'd like to add something, because it's a direct point to what he's saying. Just hold on one second, I don't want to make it a back and forth; Julia wants to get in, maybe she'll broaden the conversation. Give us a second. At the risk of making your point for you, I think there is a huge difference between criticising AI being used at all, which I don't think was the point, and AI being used in a non-transparent way. I completely believe that there is a place for AI to be used and that it can be a huge benefit. But I think the reason perhaps why you are outraged and a lot of people out there are not is because they don't believe that, even if the data were disclosed or even if the AI were transparent, they would have the facilities to actually interpret it. In the open source community it kind of works, because it is a community; well, kind of. There's the developer community, where everybody has a basic understanding of programming and can look at the code and understand something, but there is also this broader layer of the community, of
open software users, who may not be programmers themselves, but they trust that if the software is open, then hopefully somebody else will look at it for them. And then we had things like, I don't know, Heartbleed, where it turned out that just because something is open doesn't mean that a lot of people had actually looked at it. So I completely agree with you that it is outrageous that we are using these non-transparent, closed systems, but I think people will only be outraged about it if they feel empowered to actually use the information that we are asking companies to disclose. I'll actually be a little bit more extreme. I think there are cases where you shouldn't use technology. So I agree that we should make the system more efficient, but for instance, when Mitch Kapor was talking about Oakland, he said that in the jail they were using an old FileMaker Pro database, an Access database and an Excel spreadsheet, and they were doing everything by hand, and it took two days to process. Fix that first. And to me, electronic voting machines are a bad idea. I think they just are a bad idea; you shouldn't have them. And I think it's possible that certain categories of risk assessments just won't be fair, because the underlying data is unfair. When you look at all of the American systems, you have data on poor people but you don't have data on rich people, and there are some systemic biases in social systems, and depending on where you want to go, whether you're just trying to keep business as usual and punish poor people and move power to the rich people, you might want that. But I think, maybe, to your point, it kind of makes your point that you have to look at the whole system, including the humans, including everything, and then I would deploy algorithms much more strategically, based on their effect on the whole system, rather than, as right now, trying to make each subsystem more efficient: better accuracy in policing, better accuracy in
risk assessment, better accuracy in parole. But that isn't making the system more fair. We are trying to make the judge's job more efficient when we should try to make the judge's job more effective, and I think that's slightly different.

But also making it more consistent. I'm only making the point that humans are not terribly consistent, and judges aren't very consistent either. It's very hard to get at the inconsistency and discriminatory behavior of a single person, but once I have it algorithmically designed and transparent (and there I'm completely with you), then at least I can openly discuss fairness, consistency, and parameters in the democratic process.

But I want to talk at the next layer, because the current criminal justice system sucks: it's biased against poor people and biased for rich people. So why can't we use data and machines to fix the whole criminal justice system, rather than just trying to eliminate a little of the bias of judges, who are already terribly biased because they're publicly elected in the United States? So to me, I want to set the bar higher. I want to say: can we use data and machines to understand a causal system and a theory of change for society, and use AI not to look at the defendants, the people, but to look at the politicians, to look at the judges? Let's look at the history of the judges across a jurisdiction and say, alright, these judges who tended to be conservative caused crime to increase in their neighborhoods; these judges who tended to let people go for drug crimes decreased crime in their communities. Let's try to understand why, rather than asking, can we make the judge's job more efficient? That's kind of my, sorry... It's hard, it requires a lot of political will, but basically we're trying to automate existing functions in a democratic system that isn't really very good yet. And I guess we all agree: if we now digitize the unfairness of the current analog system, we do the worst job of all.

I'm
hesitant to allow another question, but if there is another question, yeah, go ahead. The microphone is in the middle there. No, I was joking, please ask more questions.

I wanted to ask about, because when we talk about technology we always talk about progress and possibilities going forward, and there are a lot of things that technology... am I doing something wrong? Okay, is it better? Hello, is it okay? Okay. So I wanted to ask: when we talk about technology, we talk about progress and technology moving forward, but a lot of the technological solutions we come up with turn out not to be a beneficial change, like the example with the judges, and then we kind of leave it to the legislative system to solve for us. So we make a big mess with technology and then we say, okay, let's have politicians regulate it. And I wanted to ask if you know about any initiatives that try to invent technology to go backwards, to kind of change the direction. I don't know, sorry.

I don't think that just because some technological developments are bad, there isn't progress, or that it would be worth just going back. For example, if you look at the car, I think it took something like 30 years before they had seat belts. So I think we are still in the kindergarten phase of digital technology and its regulation. And I mean, my party was sometimes kind of accused of just wanting a Wild West on the internet, which is actually not what we are about at all. So, for example, net neutrality was something that perhaps you didn't need as a law in the early days, because it was kind of built into the technology, but as companies were trying to exploit it and change the architecture, you needed the legislator to step in. So maybe that's an example of what you are talking about: the technological development goes in a kind of bad direction, and then
you use the law to go back to something that worked. So maybe net neutrality is an example of doing that, but I don't know if there is the same kind of effort among technologists themselves to do that. Electronic voting is another one.

Again, the mic is in the middle. You mentioned design as distinct from art, but I guess one way to talk about design is as a system of methodologies and different ways of thinking and working together. And even when we have these experts, we still need specialists; it's not like everybody can be a generalist. So I guess my question is: do you see a case for creating more of a role for people and systems as a kind of connective tissue, since the other ways we used to do this seem to be failing at the moment?

I think one of the biggest problems we have right now is the silos of disciplines, and there is a lot of work talking about it. Basically you have very tribal systems, whether you are talking about academia or about business, and these systems interact in a very formal, clunky way. Federal funding, government funding, usually goes along these paths: tenure, schools. If you are in between these spaces, it's very difficult to get funding, it's very difficult to get a job if there is no job description, it's very difficult to get a degree; it's very difficult to get anything. And so what we are doing at the Media Lab is to make the space between art and engineering a legitimate job. But actually, what I think we need to do is to be able to explore spaces that aren't even just between two disciplines. And I think it starts with education. You want project-based learning that isn't constrained by classes and disciplines; like in a hackathon, you learn what you need to learn in order to get things done. I think you can have specialists, and the organization should really be about this: if you have a passion for something, you do it, and you get to be the
best in the world at this peculiar, weird thing, and then the world has a way to find you, because you have a YouTube video or a website. I think it should be extremely diverse specialists, rather than a whole bunch of specialists in a guild who all know the certified way of doing, you know, bolt turning. So to me, the internet actually allows us to develop, learn, and connect this broad array of specialists in these completely non-existent fields, and I think that's the opportunity. But the universities and the schools and the job-description job market, all of those structures get in our way.

And perhaps to just add one thing to this: the internet, and having access to all these specialists and experts who explain to you how to build an airplane on YouTube, makes it easier to become reasonably expert in a number of fields, whereas 50 years ago you would probably have had to study ten times as long to get to the same level of expertise, because you had to learn everything basically from scratch and had less access to the mistakes other people had made. So I think there is definitely a space for specialists, but if you get to a place where they cannot communicate with each other anymore, because they are in a community of only experts of the same kind, then it can also be kind of limiting.

But maybe just to add one different perspective: yes, we need the experts, but we also need a basic algorithmic understanding in the broad general public. I think we just have to teach young kids a basic understanding of how algorithms work. Nobody needs to program them, but you need to understand how they influence you, and how other people build things around you using technology. And you get that best, combined with joy, if you have project-based learning in schools, where you build applications, engineer them, and understand how an algorithm functions.

Yeah, I think there is also kind
of a problem in our curricula: we spend a lot of time teaching children things that computers are already better at, really analytical problems. I think we should focus education much more on the things that computers are bad at, because then we can really have added value from technology, and probably achieve more and have a positive outcome there.

That's actually maybe a really good moment to slowly wrap up, and also to build on the conversation we've had here. One thing: we had 50 or so people in this church for a week, working on technology projects. They spoke 35 or 38 languages, but they were experts in many, many more things. And it was really fascinating to throw them together into these groups, where they would discover things about themselves but also things they could share with other people. I think it was exactly this idea of learning: you need to bring those different perspectives together and have people build things together, and that's when they really understand the other perspective, and also more about the technology.

And I want to end on maybe one positive example of AI that I saw in the week, one that, to my disappointment, didn't get built. On Monday, the workshop's first day, there were field trips: all of the tracks went out into Berlin, and they explored companies, visited spaces, went to museums, and met interesting people. I was going along on one of these field trips, again with Eric (sorry, Eric, for constantly referencing you), and we were reflecting on how refreshing it was to see the sense of unlimited possibility in the young people who were part of this workshop. Everyone was talking about, oh, we could do a startup in this area, we could be working here at the university, or we could be starting a nonprofit; there were all these possibilities. And one of the teams had considered building an
AI that would reconnect you with your younger self, where you could have a conversation with yourself at maybe 18 years old, and your AI could say, hey, Philip, remember that time when we went on holiday in Italy and you fell into the water, or you stole the boat, or you met this wonderful person and had an amazing conversation? Your AI would know these things about you and could have a conversation with you. For me, that was a beautiful vision: an AI that is transparent, that I control, that adds something to my reality that's actually helpful to me. It gave me that sense of wonder and of possibility, and Eric and I both came to a moment in our conversation where we said, we still have these possibilities. So for me, that's what we hope to get out of this workshop: a sense of possibility, and a group of friends we can do these things with. I want to thank my panelists, all of the participants, and all of you for coming, for a really fantastic week together. So thank you, and there will be drinks and food.