Jenny and Christine, thank you very much for joining us. Jenny Finkel is a machine learning engineer at Mixpanel, and Christine is a Future Tense fellow and the senior editor at the New Atlantic — sorry, the New Atlantis, my apologies. So: serendipity. This is kind of the fun part of the panel, I think, because as Ed and Jen and Ian talked about at the beginning, algorithms tend generally not to surprise you very much. They will offer predictable things. Occasionally they will offer you something really wrong, and sometimes they will offer you something surprising and delightful. My personal experience is that I don't find algorithms surprising enough. I don't know which one of you wants to kick off, but here's an example: I get a few different email newsletters that suggest links to me. One of them is from Twitter, and it is based on the people I'm following on Twitter; it offers me things it thinks I'll be interested in. Another is curated by a guy who happens to be a friend and a colleague of mine, but even if he weren't, I would always find it much more interesting than the things Twitter sends me, just because he is an interesting mind and he has more capacity to surprise me than anything Twitter will send me. So is that just because I'm difficult, or is there something wrong with algorithms, or is it that by their nature they're programmed to satisfy the average and therefore not be very surprising?

I think it's just that it's a really hard problem. So — my background, just for people: I'm a machine learning engineer, I did a PhD in this, and I've now done two startups that are very ML-focused, so I'm purely a practitioner at this point, and I'm in the data all the time. I really agreed with a lot of what was said in the first panel about how, when the algorithm does it right, it's really not surprising. For the company I work at now, I built a model to figure out who was going to start paying us, and it's like: oh look, the most relevant feature is people who look at the pricing page. That's not surprising — and that means I did it right. If it shows you something truly surprising, your model is probably wrong; it's not that your model is so incredibly clever that it figured out some hidden truth. All of these algorithms are written by people. It's not like the algorithm is writing the algorithm is writing the algorithm and something is just going to get magically conjured up that can do magic. That's not how it is — we're still really bad at machine learning.
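To make that concrete — purely as an illustrative sketch, with invented feature names and toy data rather than anything Mixpanel actually uses — a conversion model of the kind Jenny describes might look roughly like this, and the payoff is simply reading off which feature carries the weight:

```python
# A minimal, hypothetical sketch of a conversion-prediction model like the one
# described above. The feature names and data are invented stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["viewed_pricing_page", "sessions_last_30d", "invited_teammate"]

# Toy data: one row per user, one column per behavioral feature; y = 1 if converted.
rng = np.random.default_rng(0)
X = rng.random((1000, len(feature_names)))
y = (X[:, 0] + 0.2 * rng.random(1000) > 0.8).astype(int)  # conversion loosely tracks pricing-page views

model = LogisticRegression().fit(X, y)

# Rank features by learned weight. If the top feature is the intuitive one
# (looking at the pricing page), the model is probably behaving sensibly;
# a genuinely "surprising" top feature is more often a bug or a data leak.
for name, weight in sorted(zip(feature_names, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name}: {weight:+.2f}")
```

The point is the one Jenny makes: when the model is working, the most predictive signal is usually the unsurprising one.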
So, to me — I used to work at a place that was a personalized news reader, and we tried to scour the whole web for links and then show people what they would like to read. Exactly what you're talking about with the newsletter, except it was a feed. And I would never advocate that somebody get all of their news from an algorithm; that doesn't make any sense to me. You're definitely going to have situations where there are stories of interest to you that are not related to any of your interests, not related to anybody you're connected to, not related to any obvious thing about you — maybe it's just a big news story that for some reason caught your attention and you're super fascinated by it — and the algorithm will never get that right. The system that we wrote, which did a very good job of recommending stories to me, since I was one of the authors, still didn't get me everything. There would still be a big story, like the FIFA scandal, for instance. I'm not a sports person, I don't care about sports, but I thought that was really interesting — a bunch of higher-ups in this thing all actually getting busted. I totally cared about that story, and I got zilch from my Prismatic feed on it. But the feed still got me a lot of stuff that I would never have gotten otherwise and that I was very, very interested in. It helped me stay super informed about all of my disparate interests: what's the latest ML news, what's the latest in the world of knitting, what's the latest in the world of interior design. And the serendipity — we never tried to explicitly model serendipity; I don't know how you would do that. So the irony, to me, was that the times the model did what felt like serendipity — when it would give me an article about, say, a knitting pattern for some mathematical structure, something that combined interests in a way I didn't realize people did — to me, that was: I got lucky. That's kind of what it is. So I think it can be very good at finding you the stuff you care about, and it can be very good with the large amount of data — it's going to go through the massive amount of data you would never go through yourself and find you stuff that may be of interest to you — and that's really just one part of the whole puzzle, to me. I don't know.

Well, one thing I'd like to say is that you cannot manufacture serendipity — and thank you for saying that; I assumed I was going to have to sit here and tell you that you couldn't say that. The opposite of serendipity is manufacturing it, engineering it. When we talk about a serendipitous experience, it's something that happens to us for God knows what reason. What I think is curious about the discussion coming out of tech companies now is that they are saying: we can manufacture serendipity. There's a huge amount of hubris there — an underlying, almost moral argument being made about what these things can do. I think most of the people on the panels today are not making that kind of argument; practitioners tend not to overreach in that way. But I do think we need to have a conversation, as a culture, about why we even have the phrase "manufactured serendipity." It's ridiculous. And I think it does go back to — somebody mentioned the uncanny valley on an earlier panel, and what struck me is that the uncanny valley is when you see a robot that's so human-like it creeps you out, because you know it's not human. We don't have an uncanny valley when it comes to algorithms, do we? What happens instead is that after the fact, after we find out that an algorithm or a tech company has learned something about us that disturbs us — when we find out they know it — then we kind of go, it's too much. Like the story about Facebook constantly experimenting on its users, which, as you pointed out, quickly died off.
So I think the kinds of cultural conversations we're having about these technologies are shaping our understanding of privacy, our understanding of what we can and should expect from our machines and the software that drives them, and ultimately we're having the wrong kind of conversations, because we're accepting wholesale a kind of Silicon Valley-fueled happy picture of, you know, let's manufacture serendipity. You can't do it. For one thing, as you say, human beings design these things, and they are flawed, and we don't have an appropriate number of auditors even looking at these things and telling us, through transparent procedures, how to fix them when they go wrong.

Can I — I'm curious which company actually — can I be devil's advocate for a second? I'm going to suggest that you're being an essentialist about serendipity; you're being the Antonin Scalia of serendipity here. Because let's say some social network, or dating app, or whatever, says to me: hey, you should really meet Laszlo from Hungary, and based on some digital data trail I've left, it matches me up with Laszlo from Hungary. How is that essentially different from all the various circumstances of my life leading me to be walking down the street and to bump into Laszlo from Hungary?

The middleman — the fact that there is a middleman placed in the middle of this human relationship. And I think this is the one question I wanted to keep asking every member of every panel, and I'll probably pester you afterwards with it: what should we not be asking algorithms to do? Much of what we're discussing here is all the things we want them to do, and we're getting better at doing them, but there are going to be things that we should want them not to do. There's this great E. O. Wilson quote where he said, you know, technology and science are what we can do; morality is the things we decide we shouldn't do. And that, I think, is a conversation we need to start having about algorithms, in the same way it's been going on in AI for a while — and in science fiction there's a rich discussion of a lot of these issues. So yes, in some ways I am an essentialist, because I think if you'd bumped into Laszlo on the street, that would have been a deeply human and a deeply private interaction, and no one else would have known that you know Laszlo. And I have a friend who's worked with —

Apart from all the people with cameras pointing at you.

Well, exactly. No, but I mean, there's something to it: I have a friend who never went on Facebook, because she helped organize political dissidents, and if you ask her why she's not on Facebook, she says: first of all, I don't want people to know who my friends are, and they don't want people to know either. So there is, I think, something there — yes, it sounds almost Luddite — but that, I think, is the difference.

Sorry, I interrupted you; you were going to come back on something. I was actually just curious which companies you think are claiming to manufacture serendipity.

Eric Schmidt was interviewed and said: we can manufacture serendipity, we can do that now. You hear that rhetoric out of Silicon Valley thought leaders all the time. There's actually — is it Wayfarer, or — there's a company that has developed basically a muse, an algorithm to give you good guidance through museums, and the person behind it was interviewed, I think in Businessweek a couple of years ago, and said the reason he decided to create this app was that he'd been in the British Museum on a tour and he had a really awful tour guide, while the group next to him had this fantastic tour guide.
And he thought, you know, human beings are so inconsistent — I want to be on that tour — so let's completely standardize this experience using all the wonderful tools we have. And when I read that story, I thought: that's terrible. I mean, you had the bad tour guide, but then you have a great story about your bad tour guide. He had a human experience that he found unpleasant, and his solution was to engineer it out of existence by creating something that would make the experience standardized. So — I spend a lot of time looking at what tech companies say about their products. The engineers, the people actually making this stuff, don't say this; it's the people who talk about it. And serendipity pops up in a lot of the marketing for apps, especially the new indoor GPS apps, the discovery apps. And, by the way — I share a Spotify account with one of my nine-and-a-half-year-olds, so he has his playlists and I have mine, and I'm constantly being given recommendations for Weird Al Yankovic songs. Not again. So I do think that when you share information you can have these machine-aided discovery tools, but that, I think, is still different from serendipity.

But is serendipity, then, just another word for more options? I don't know — I'm sort of wondering whether we're just arguing about words rather than about reality. What do you think, Jenny?

I definitely came in thinking we were going to argue about the semantics of serendipity — that was definitely something I assumed it would end up being. I'm kind of on the side of: if it walks like a duck and it quacks like a duck, it's a duck. If it feels serendipitous, and it happened to have been manufactured, to me it still kind of feels like serendipity; it could just be that part of the whole process of the serendipity was a computer in the middle at some point. I don't even really know what it means — to me, serendipity is just when a random thing happens that seems like a coincidence but is nice. I don't even know what the formal definition is. It reminds me of something my dad always used to say when we were growing up — I'm sure he didn't make it up — that the biggest coincidence would be if there were no coincidences. And it feels related to me: things are going to happen, things are going to come together, and maybe in this case it came about one way and in that case some other way, but it just doesn't feel different to me. It's not like the computer is some Other that doesn't get to count. And I don't read a lot of the Silicon Valley marketing speak — I try very hard to avoid it — so when I was reading the questions leading into this panel and they talked about the quest to manufacture serendipity, that seemed foreign to me, because I never tried to. I just try to get as close to the right answer as I can, and if I mess up in a funny way, it feels like serendipity.
But it seems to me that if someone were actively trying to manipulate me — if manufacturing serendipity were a really explicit goal — you're just going to pick up on it, right? It's kind of like viral clickbait. The first few times you saw an article in that style — the kind that developed where it's like, "what this woman did to stand up against whatever will blow your mind," and then you read it and you're like, oh, she kind of said something to someone once — you click on those two or three times, and then you realize: okay, this is just emotional manipulation, and I'm going to tune it out now. Every time I see an article that says "you'll never believe what number X is," I'm obviously not going to click on it. So to me it's either crappy and obvious, and you learn it and you ignore it, or it actually gets good. If Prismatic had actually gotten good enough that people were just super pumped to read everything it gave them, and it actually found you awesome stuff — who cares that an algorithm made it? Either it feels valid to you or it doesn't, and the source just feels irrelevant to me. I think that's because, to me, there's no black box: I know what the algorithms are doing, and so it's just a person trying their best to figure out how to show you content you might like. I have no malicious intent when I do that; I'm literally just trying my best.

Right, but the people who make algorithms, I'm going to suspect — or argue — have an incentive to make those algorithms work for as many people as possible, and that necessarily implies a certain reversion to the mean. In other words, as I said before, I don't think you, or Prismatic, would have tried as hard to surprise me, because you're trying to get a certain number of clicks from as many people as possible. So it's reverting to the mean — it's optimizing for a large group — whereas my friend who curates the newsletter is basically saying: this is what interests me, and if you happen to be one of the people who share my interests, great.

That's the thing that makes it technically hard, and therefore fun: you want it to work well for everyone. But I don't see any reason why that automatically means a reversion to the mean. It may end up doing that, but — how do I put this — if our goal is to maximize clicks, say our goal really is to just straight-up maximize clicks, we're going to do that by getting more people to click on more stuff, which means we have to be showing those people things they want to click on. You could be cynical and say, okay, we're just going to show you the clickbait, and it was definitely the case that naive early implementations just promoted the clickbait, because people fucking click on it and you can't stop them. And so we actually — sorry, this is what happens when you're an engineer and not a scholar — we definitely saw: okay, this thing attracts clickbait.
And then you sit there and you brainstorm: how do I get rid of the clickbait? What's the solution? It's not that the goal is to send you clickbait. It's similar to what people were talking about earlier, where the algorithm develops as people use it: if everybody clicks the clickbait, the model is going to learn to show you clickbait. And you've got two options there. Either you let your algorithm degrade — you let it sit there and learn to show people more and more clickbait, and then more and more people stop using your app, because nobody actually cares about the clickbait, and then you've got no business — or you take the time to actually evaluate your data. You look at what the model learns, you look at a bunch of people's feeds, you look at what they click on, and you begin to see the pattern: okay, it seems like we have a tendency to show people clickbait, so why don't we add some features that try to identify that this is clickbait? And hopefully you then learn that these particular features, which are a bit more nuanced, are things people don't actually like. For instance, we even added a feature that was: is this article likely to be a listicle? Because nobody cares about listicles. We weren't explicitly modeling listicles, but once you can actually model "this is a listicle," then you can learn that people ignore listicles, and then you can downweight them. So you clearly need a human, but once again it gets back to this sense that these problems are hard. The person who was up here earlier talking about systems versus ML — you know, zero to one hundred, and "I'm kind of happy with 70 percent or something" — it's not that you want 70 percent; it's that you want to solve problems that are not solvable with traditional techniques. There is no deterministic algorithm that is going to produce articles that you're going to like. I just don't know how to write that; there's nobody out there who knows how to write that. So you say: okay, I want to find cool stuff to read, and my current sources are not that great — I go on Facebook, it's an echo chamber; I go on Twitter, it's an echo chamber — so I'm just going to try my best, and it's going to evolve, and either I'm going to do a good job or I'm not. But there's not the underlying intent that I think a lot of people attribute, and it's unfortunate, because the marketing speak then adds it all in. You sit there and you just know the marketers are going to oversell whatever you build, and you have nothing you can do about it.
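As a hedged illustration of the fix Jenny describes — the patterns and the penalty weight below are invented stand-ins, not Prismatic's actual features — adding an explicit clickbait/listicle signal so the ranker can downweight it might look roughly like this:

```python
# Hypothetical sketch of adding a "clickbait / listicle" feature to an article
# ranker so it can be downweighted, as described above. The regexes and the
# penalty are illustrative guesses, not a real production feature set.
import re

CLICKBAIT_PATTERNS = [
    r"you(?:'ll| will) never believe",
    r"will blow your mind",
    r"^\d+\s+(?:things|reasons|ways)",   # crude listicle signal: "17 things ..."
]

def clickbait_score(title):
    """Return 1.0 if the title matches any clickbait pattern, else 0.0."""
    t = title.lower()
    return 1.0 if any(re.search(p, t) for p in CLICKBAIT_PATTERNS) else 0.0

def rank_score(relevance, title, clickbait_penalty=0.5):
    """Combine the model's relevance estimate with a downweight for clickbait.
    In a real system the penalty would be learned from click/ignore data
    rather than hard-coded."""
    return relevance * (1.0 - clickbait_penalty * clickbait_score(title))

articles = [
    ("New results in structured prediction", 0.82),
    ("You'll never believe what number 7 is", 0.90),
]
for title, relevance in sorted(articles, key=lambda a: -rank_score(a[1], a[0])):
    print(f"{rank_score(relevance, title):.2f}  {title}")
```

The design point is the one made above: the human's job is to notice the degenerate pattern and give the model a feature nuanced enough to learn people's actual dislike of it.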
But the echo chamber that you mentioned is obviously part of this question, right — the filter bubble, as Eli Pariser called it: this notion that we just get the stuff our friends are interested in, so we become circumscribed, and that ends up limiting our political discourse, because we only see things from people whose opinions we may share. Do you think, either of you, that that is reversible? First of all, do you think the filter bubble is actually a real problem? And second, can algorithms themselves solve it?

I know there's some debate about Eli's particular model of the filter bubble, but it does seem to be the case — certainly anecdotally; you talk to anyone and they will say, I'm reading all the same things that my friends are reading. And I think that's why some of the online curators who do this best — people like brainpickings.org, a couple of really quirky sites where a human being with weird interests, or a broad range of interests, like your friend, creates something just for the love of wanting to share it — end up appealing to a broad number of people. I love her site, and maybe half the stuff on there is not stuff I'm going to click on, but the stuff I do find there is fascinating, and I never would have found it anywhere else. So it is a problem, the filter bubble, and it certainly has contributed to the polarization of political and cultural discussion. But whether we should turn to an algorithm to fix it — there are so many questions we should be asking before we even ask that one. And I think that's true of a lot of these algorithmic discussions. A lot of what people are creating with algorithms is trying to solve a problem, but for a lot of these things — who should be paroled and for how long, what sort of surveillance we should have of certain communities and for what reason — there's a starting point about justice and about equality that an algorithm should have nothing to do with. Maybe you bring the algorithm in after you solve these moral and ethical and political problems, and you might not totally solve them along the way. But our way of thinking about and approaching some of these problems now, because the tools are so nifty and so incredibly powerful, is to start with the algorithm — to look at what we can figure out, like if a gunshot goes off here, that means this person's car is likely to be stolen — and we get so excited, because it is exciting, it's powerful. But that's exactly why we need to first get through those tougher human questions, which are not going to be solved by any algorithm.

I kind of have mixed feelings, I guess. I think there's clearly a human element: if a human decides to implement a system that's going to decide how to sentence people, that human should be very confident it's going to be a very good system before they make that decision. If somebody's job is just to build the best system they can, and that system is not very good and has a lot of bias, I think it's mistaken to blame the system; you should blame the person who chose to put that system into practice. And I do think sentencing is a particularly hairy one that I don't know much about and don't want to offer a lot of opinions on. But there are a lot of situations — like in the previous panel about fairness: is the algorithm fair, is the algorithm going to particularly harm me as opposed to other people — where my opinion really is: if it's better than what's live, ship it. If it's better than what humans are already doing, even if someone is going to get the short end of the stick no matter what. The example I'm actually thinking of here is self-driving cars. At the beginning of the idea of self-driving cars, people were super uncomfortable with it: what if a car gets into an accident and somebody dies? What if I die as a result of this algorithm? What if my kid dies as a result of this algorithm?
And I think the progression has actually been slow enough that people have gotten sufficiently comfortable with the notion that self-driving cars will have fewer accidents — fewer people will die from car accidents if everybody has a self-driving car. You, however, may be the unlucky person who ends up dead who would not have ended up dead if everybody were driving themselves, because it just happens that the algorithm, in that moment, had to make a decision between this car and that car, and it went for that car — and, I'm sorry. You could argue that's unfair, but to me, in aggregate, that's still a way better outcome. I'd rather have an outcome where fewer people are dying unnecessarily, or where fewer people are being sentenced incorrectly, even if it does mean some people will have a bad outcome from the algorithm — because in my mind, previously, other people were just having the bad outcome from a human. Before, a human died because of a drunk driver; now that human gets to live, and this other human, who was just driving and happened to be in the wrong place at the wrong time, dies, and that was the result of an algorithm, whereas the first one was the result of a human. And I don't really —

But as we said earlier, the difference is that when a human makes a bad decision, you can challenge the human, find out why they did it, maybe hold them responsible or accountable, figure out what went wrong — and with an algorithm that's a lot harder.

You can find out, usually. If you're actually willing to dig into the data, you can find out why an algorithm did something; it's not a mystery.

But there's another part that we're missing here. It's not just "fewer people will die if we have self-driving cars versus now" — it's not an either/or — because there's another thing that goes extinct when you have an all-self-driving culture, and that is the human skill of learning how to drive a car. Now, you can make an argument for why we don't need that skill anymore, that it's not necessary, but if for some reason something went wrong and the self-driving cars could no longer drive themselves, where does that leave us in terms of our skills? This is a debate that goes on with automation in almost any industry — particularly aviation, recently. So again, the discussion becomes: X number of people die here, Y number of people die here, so obviously X is better than Y. But there is this whole human component that isn't easily quantifiable and that is nevertheless crucial for a functioning society, and in a lot of these debates that's what never gets discussed. Take the toll booth operators: it's cheaper to have E-ZPass, it's more efficient, we can track everybody who goes through — but then you lose the toll booth operators as human beings who had these jobs, who spoke to people. There were wonderful interviews with the last toll booth operators on the Golden Gate Bridge; some of them prevented people from committing suicide. They were human beings. So, fine — you can embrace a more efficient, algorithmically driven system in a number of areas of life, but we should have the discussion about what we're giving up, and it should include these non-quantifiable things.
Well, yes — I don't want to get too much into this other question of automation and jobs and people. There is something, though — a slight tangent on self-driving cars, maybe — that I'm noticing is starting to happen, which is that up to now algorithms have been defining what we see in the digital space. They define what posts we'll see on Facebook, what gets highlighted for us on Twitter, all of those sorts of things, so they are shaping our digital world. What we're starting to see now is that they're beginning to impinge on our physical world. The most obvious example is the one Ed gave, of Google Maps giving him directions and him not even thinking, possibly not even looking where he's going — just following the map blindly — and that starts to influence what he sees. Similarly, when Google Maps and Apple Maps start giving transit instructions, you may start avoiding certain subway lines, or even certain neighborhoods, just because the transit directions tell you not to go there. With self-driving cars, you also start to get a more mediated experience of which roads you go on and what you see and experience on those roads. So very gradually they start to shrink the boundaries of your physical world, maybe to the point where algorithms make it less likely that you will go to certain places, to certain neighborhoods, see certain streets, do certain things. Is that a loss of serendipity, I suppose?

Well, if you combine it with things like ubiquitous computing and some of the wearable sensor technology being developed now — the massive amount of data that we give off just by being living human beings walking down the street — you combine all those things and, yes, I think you start to live in kind of an electronic pen, the way they do for veal they're fattening for slaughter. Not to be dramatic — I'm just kidding. But I do think that, again, these are questions of how much autonomy we want to give human beings, because we're selfish, we're violent, we're messy, messy creatures, and the engineering solution doesn't like a mess. Several people on different panels today talked about how great algorithms are: what a mess, but we get in there and we solve these problems. And they do, and many problems should be solved that way. But some of the problems that we wrestle with, and will continue to wrestle with as a species, cannot be solved that way, and when we try to solve them that way we just end up creating a lot of unintended consequences, as several people mentioned today, or really undermining real human serendipity.

Okay, but we're going to have a mix of both, right? So what exactly would you like to see that preserves the right balance?

More cultural, political, social, and legal questioning of these algorithms before we take them for granted, before they're actually out there and running. Self-driving cars are a really good example, because laws are already being passed in states across this country saying, yes, we can allow self-driving vehicles, and I don't think we've even started to have the ethical, moral, and legal discussions around them — they're starting, but we should have been having them long before.
And any time an algorithmically fueled way of looking at the world, or an engineering solution, is imposed — we have decades of engineering ethics and theory to draw on, and I think one good field to look to for guidance is bioethics. If you look at bioethics, there are often these moments where people come together and say: you know what, we have the power to do this, and now we need to ask a harder question, which is, should we do this? And if we do want to do it, what is the path we're going to take? Asilomar — and there was recently one of those about genetic manipulation. We need to be having those discussions in the tech space more than we are. We have a lot of hype, we have practitioners who are solving problems, and we have wonderful tech critics as well, but we don't have enough in between, and I think we do need to be bringing these questions more into the policy space and into the legal space. A lot of these questions are going to get answered in the courts — people are going to get sued, and the decisions will come — and that is not the best way to answer a lot of these questions.

As I said earlier today, the tech moves faster than the law does, and than policy does. So what's your feeling about that question — how to have the discussion in a way that doesn't slow things down?

If I'm being totally honest — and I hope this isn't too inflammatory — I think a lot of the concerns that people have, a lot of the fear that people have, is generational. I think it's the older generation — basically my age and above — that is really worried, and my age and below doesn't really care very much about your data getting out, about people using your data. They grew up with the technology and they see the benefits. It's not that I'm naive about the downsides of the technology and of giving up data; I just believe the upside is so much greater. I believe the fact that I no longer have to navigate — I can just let the thing do it — frees up brain cycles, frees me up to do other things I actually care about. We got cell phones twenty years ago and we stopped memorizing phone numbers. Does anybody miss memorizing phone numbers? No, I don't think so. So I just think a lot of this stuff — everyone's going to fight about it for a while, all the people who are fighting are going to die, and then it's going to be young people who are just like, yes, my data is out there, it's fine, and they'll fight about something else, the next version of it.

Okay, that's great. All right, time for some questions. Sorry — we have one at the back, and then this gentleman.

Hi, Daria Stigman. This has been a fascinating discussion all the way around, and I wanted to throw out one more thing. I agree with you about it being generational — though I think there are some issues around health that are different — but the bigger question I have, which no one has really brought up, is the issue, in terms of search and how we use it, of personalized search versus incognito. I think it makes a really big difference whether we're aggregating all the data — which incognito really does the best job of, from a broader standpoint — versus personalized, which puts you even more and more into a bubble. So I just want to throw that out; I don't know if you want to say something.

I don't have anything super insightful. I kind of think that search was just general — effectively incognito — for a really long time, basically, and then they added personalization.
And to me, personalized search just means it knows which facet of a word you mean — if it's a word with more than one definition, like "jaguar," it knows whether I care about the sports team or the car, and it makes the car results come to the top. And honestly, the biggest thing I like about it is just that if I've searched for something before and clicked on a result, it's going to move that result up. So personalized search doesn't feel like that bad of an echo chamber to me, because when you're searching you're so specific about what you want. But that's not a great answer, so I don't know if you have anything more insightful.

No — just don't share your computer. If you don't share a computer, it's a different set of questions than if you do, in terms of personalization.

Right — but that changes from one generation to the next.
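A toy sketch of the kind of personalization Jenny is describing — with made-up URLs and scores, not any search engine's real behavior: results the user has clicked before get a small boost, which is enough to pull the "jaguar" sense they actually care about to the top without changing the underlying result set.

```python
# Hypothetical sketch of re-ranking search results with a user's click history,
# roughly the behavior described above. URLs and scores are invented.
from collections import Counter

def personalize(results, click_history, boost=0.3):
    """results: list of (url, base_score); click_history: urls the user clicked before.
    Previously clicked results get a capped additive boost, so the 'jaguar' sense
    the user actually cares about floats to the top."""
    clicks = Counter(click_history)
    reranked = [(url, score + boost * min(clicks[url], 3)) for url, score in results]
    return sorted(reranked, key=lambda r: -r[1])

results = [
    ("jaguar.example/cars", 0.71),
    ("wikipedia.example/Jaguar_(animal)", 0.74),
    ("jacksonville-jaguars.example/news", 0.69),
]
history = ["jaguar.example/cars", "jaguar.example/cars"]

for url, score in personalize(results, history):
    print(f"{score:.2f}  {url}")
```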
Thank you. I am from France, and I am between Washington and Geneva now, and I have the following question for both of you. In the US you have between 5.5 and 6.7 million youths from 16 to 24 years old who are out of school and out of a job; proportionally the situation is the same in Japan and is getting much worse in Europe. So this generational divide — and life will continue, Silicon Valley will keep working hard, nothing will stop it, and I don't wish anything to stop it — how can we work on this divide between other cultures and other generations, and urgently address this issue we have with the young generation, from 16 to 24 years old? As far as I know, the White House has declared it a national security problem. How do you feel about this gap? If somebody asked you, how would you address this issue?

So you're asking about unemployment — this huge social, economic, and cultural gap between the underserved and those better served economically, socially, and culturally. Most of those 6.7 million don't understand very much about applications or about algorithms; they have their own implicit algorithms. How do we address this? It's somewhat outside our topic here, but if you want to say anything on that —

I wish I had a good answer for you, but I don't.

Well, this isn't going to answer your question — and I think an economist or a political scientist would do much better — but it did make me think of the fact that you're now seeing some of the tech companies, Facebook already and some of the others, starting to be sued by the victims of terrorism, for example — by the victims of anything that happened that might have been organized online — which I think is a fascinating pushback if you think about it in the algorithmic, serendipitous context. One of the great things about these online spaces is that you do meet people in them whom you would never meet in physical space, and you often find connection with them. But then there's this question of responsibility: when inciting violence or inciting terrorism happens, who's responsible — the people who created the platform, or the people who perform the violence? And I do think that the younger generation, setting aside all the economic challenges they will face, are starting to wrestle with this issue of how they want to live their lives online. They're not going to Facebook as much anymore; they're finding ways to mask their identity online in a much more sophisticated way than any of us did. There's a reason Snapchat was so huge: the messages disappear. So I think in some ways they're a lot savvier in how they use the online space. There are actually tons of people here at New America who look at that issue, and brilliantly, so I would say ask them. But I do think this question of responsibility — you do see a lot of it now, with Facebook and some of the bigger companies having to tackle it.

I'd like to reframe the original question: can algorithms facilitate serendipity? For example, with one of the activity trackers, there's a quote I heard — "self-deception covers its own tracks" — so if we can have data that shows us our blind spots and allows us to see things, if algorithms can take something away — say Netflix says, this is the movie with subtitles that people who never watch subtitled movies would like, or this documentary, or whatever — does that open up some possibilities? Does that facilitate serendipity?

I'm a defender of self-deception, so you'll hear no argument from me.

I think it still comes down to the semantic sense of what you think serendipity means, but a human still had to sit there and decide that they wanted to model that specific phenomenon. Take movie recommendations: if you were trying to write a recommender system for movies, there are a lot of aspects you could consider. You could ask, when it's someone's first time in a genre, what should I show them, or should I show them a series. You can think of all of these one-off questions that you could try to answer and explicitly model. So you could say: okay, I want to get people to broaden their horizons, so — whatever — they've never seen a subtitled movie, so I want to figure out the best subtitled one to show them. But for anything you do like that, there are a thousand possible things of that kind, and you thought of three of them and implemented them, and to me that's just part of your recommender system. The hard part is trying to do something at a somewhat meta level, to cover all the cases you haven't thought of. That's the hard part of machine learning: how do I handle the situations I didn't think about beforehand? And to me, that's where the actual algorithmic serendipity occurs, because it's the part of the system where you weren't explicitly saying "I am trying to do X" — instead you said "I'm trying to do A, B, and C," and this other thing, F, appeared as a result of the combination of B and C. So it's hard for me to picture anything that is an explicit goal like that really leading to serendipity in a manufactured sense, I guess, because it's the bigger picture.
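To make that distinction concrete — everything below is invented for illustration: the titles, the fields, and the rule itself — here is what one of those explicitly hand-modeled one-off cases might look like; the limitation Jenny points to is that a real catalog needs thousands of such rules, most of which nobody ever thinks to write.

```python
# Hypothetical sketch of a single hand-modeled recommendation heuristic of the
# kind discussed above: "pick a gateway subtitled film for a user who has never
# watched one." All titles and attributes are invented for illustration.
from dataclasses import dataclass

@dataclass
class Movie:
    title: str
    subtitled: bool
    appeal_to_subtitle_newcomers: float  # hypothetical learned statistic

def first_subtitled_pick(user_has_watched_subtitled, catalog):
    """Hand-coded one-off rule: if the user has never watched a subtitled film,
    suggest the subtitled film that similar newcomers ended up liking most;
    otherwise return None and defer to the general ranker."""
    if user_has_watched_subtitled:
        return None
    candidates = [m for m in catalog if m.subtitled]
    return max(candidates, key=lambda m: m.appeal_to_subtitle_newcomers, default=None)

catalog = [
    Movie("Subtitled documentary A", True, 0.62),
    Movie("Subtitled drama B", True, 0.71),
    Movie("Blockbuster C", False, 0.0),
]
print(first_subtitled_pick(False, catalog))  # -> the subtitled drama, in this toy catalog
```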
Well, we're pretty much out of time, but we'll take one more question, really briefly.

All right — is this on? All right. I don't want to be that guy, and I guess this is more of an observation than a question, but a couple of times this afternoon, Gideon, you've mentioned things about people not having control over their data, or data being cheap because people aren't really paying attention to it, and there's been talk on earlier panels about the role of transparency and knowing when you're in an algorithmically monitored field. Pulling all this together, I think one thing that hasn't really been discussed here much is the question of design behind some of these systems. There's been such a push in the last twenty years to make our electronics and our electronically mediated interactions seamless, and this push for seamlessness erases boundaries and removes friction — and those points of knowing where the boundaries are, or where the friction is, are exactly the points that give people a chance to grab on to something. If there were more borders, more seams, and less seamlessness, I think there would be a lot more moments when people could realize, oh, my data is going out here and I need to do something to stop that. It's kind of like that art installation Jacqueline mentioned, with the vibrations that go in and out of your phone. Is there a way to reconcile those two? Because obviously designers want things to be seamless, and so do users, for that matter.

I think if you make things less seamless, your app will fail, and the one that's actually seamless will succeed, so I think that's kind of a non-starter. But I really do, philosophically, I guess, believe — and this hasn't come up yet — that your data privacy is your responsibility. You choose to use the internet, you choose to use apps, you choose to go online, and you can opt out of that, but it really is your responsibility. This is the world we live in now: when you use websites, they collect how you use them. I hate to break it to you, but every website you use runs experiments on you — look up A/B testing — all of the time, and all it does is make your experience better. And that's on you; you've opted in. I just don't have a lot of sympathy for people who are like, oh, my data got away — I'm like, you know, encrypt it, know what you're opting into.

I am not — I'm the only person on all of these panels who isn't on Twitter. I'm not saying that proudly; I'm just not on Twitter. But I will be on Twitter — I have not opted into Twitter, but I don't doubt I'm on Twitter; I might appear on Twitter even though I did not give any permission for that. And you can't — people have jobs where they are monitored with badges that see whether they wash their hands after they go to the bathroom. There are so many of these things you cannot opt out of. Some of them, yes, you absolutely can, though often at a very high cost.

I don't think you own all of the data that is peripherally related to you. If I'm on a panel with you and I choose to tweet about it and include your name, I feel like I should be allowed to do that; I don't think you have ownership of that.

Obviously that's a free speech issue, but I'm thinking in terms of opting out. I think we increasingly live in a society where you can't opt out of a lot of this tracking, and certainly of the surveillance.
And if you look at workplace surveillance, you have a choice between having a job and being tracked constantly on the job, and people are trying to fight that, but they need a paycheck. So that part of it gets my libertarian hackles raised, because I think we have less and less autonomy than we should have in some of those spaces.

Just as the debate is getting interesting, I'm told I have to stop it. So thank you both, thank you everybody else — thank you all very much for sitting through this.