I think this slide is drawn from a piece written by the folks at ProPublica on the use of automated risk assessments for criminal justice and predictive policing purposes. These are tools that courts and police officers can point at an individual, and they come up with a risk score for that person's likelihood of committing a crime and their likelihood of recidivism. What ProPublica found in doing their research was that the man on the left was black and the man on the right was white, and the data eventually showed that the black fellow had no subsequent offenses but had been scored as having a high risk of recidivism.

What was the name of this thing?

Northpointe is the company behind the tool, and there's an article by ProPublica; just search for something like "criminal risk assessment ProPublica." Courts really do use these kinds of tools, and there has been pushback as to whether it makes sense to use them in a criminal justice context. I have a student in my law school course who is writing about this, and for us the key issue is that criminal adjudication is supposed to be focused on the individual: you're supposed to consider the facts in their uniqueness, and these tools are general by nature. At best they might give some insight that could support judgment, so there is a real question of how the individual judge, or whoever uses the system, filters out whatever bias might be introduced before making a judgment.

So, a very brief look at some of the ethical issues that creep into learning algorithms. These are some students admitted to a college. With a supervised learning algorithm, I always think of an equation: x times w equals y. We know what x is, and we know what y is.
We're trying to figure out what the parameters w are, iterating until we reliably get the right outcome out of our equation. Now say the y in our data set is whether someone was admitted to that college, and historically the admitted class is a bunch of people who all look like that slide. It's likely that in the future the system is going to go looking for applicants with similar characteristics, and likely that it just perpetuates some of that, so bias can be introduced by proxy. Take orchestras: around the 1980s they found a strong skew against women in auditions, and once they moved to blind auditions behind a screen, more women made it into professional orchestras. We can replicate that kind of bias in algorithms if we just let them learn from historical data. So the idea, from work out of Microsoft Research, is that we have to treat similar people similarly. You have to accept that the prejudices are there in the data and actively try to do something to address them. But it's hard, because protected characteristics end up redundantly encoded in other features, so you can't just drop them from the model and assume the decision is clean. And even when you do treat similar people similarly, it's tough from a statistical perspective. This is from Moritz Hardt, currently at Google: if this is your minority group and this is your majority group, your algorithm will fit the majority well, because you have a really strong sample, and it will have a hard time fitting the smaller sample. So if we were to run some sort of targeted advertising campaign, the majority group would get the nicely personalized stuff. It's just another slide to show how bias creeps in all over the place.
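The x-times-w-equals-y framing, and the majority/minority fitting problem, can be sketched in a few lines of NumPy. All group sizes, weights, and data below are made-up assumptions for illustration only:

```python
import numpy as np

# Supervised learning as framed above: we know X and y, and we search
# for weights w such that X @ w is close to y. The two groups follow
# different (made-up) true relationships between features and outcome.
rng = np.random.default_rng(0)

X_maj = rng.normal(size=(90, 3))            # majority group: 90 samples
X_min = rng.normal(size=(10, 3))            # minority group: 10 samples
w_maj_true = np.array([ 2.0, -1.0,  0.5])
w_min_true = np.array([-2.0,  1.0, -0.5])
y_maj = X_maj @ w_maj_true
y_min = X_min @ w_min_true

# Fit a single model on the pooled data via ordinary least squares.
X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The pooled fit is dominated by the majority, so its error is lower.
mse_maj = np.mean((X_maj @ w - y_maj) ** 2)
mse_min = np.mean((X_min @ w - y_min) ** 2)
print(f"majority MSE: {mse_maj:.2f}, minority MSE: {mse_min:.2f}")
```

The recovered w lands close to the majority group's true weights, so the minority group gets systematically worse predictions, which is the statistical point being made about fitting the smaller sample.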
This is an image run through Deep Dream. And this gets even harder; this is really an interpretability discussion, because we're not even sure which features the bias creeps up in. In terms of language, there were studies on the word embeddings I talked about, where researchers played with the property that you can use the vector between a couple of words to make analogies. They showed that in the embedding space, man lies here and woman lies here; then you find where king lies, trace that same distance through the space, and you end up at queen. Man is to king as woman is to queen: these pairs live in parallel in the space. For us that's an innocuous comparison, because there aren't many kings and queens around anymore, so nobody gets heated about the gender association between those two terms. But if you push to contemporary professions, the same arithmetic surfaces gendered associations, and similar racial associations show up as well. This just reflects the frequencies with which the terms co-occur in the corpus the embeddings are trained on. So if we build a chatbot trained on this corpus, we presumably do not want it to perpetuate those kinds of biases. They've done some cool work to solve that. This plot is a PCA-style reduction of the high-dimensional embedding space: along the x-axis runs the gender direction, and off the axis lie words that are gender-charged. Words like queen, gal, and sister are fine, and likewise for the male terms; anything that is definitionally gendered can stay. It's fine for those words to have a gender association.
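The analogy arithmetic, and the projection trick used to remove unwanted gender components, can be sketched with toy vectors. Everything here (the vectors, the tiny vocabulary, the choice of gender direction) is an illustrative assumption; the actual research estimates the gender direction from many gendered word pairs via PCA:

```python
import numpy as np

# Toy 3-D "embeddings": the first coordinate plays the role of the
# gender direction. Real embeddings (word2vec, GloVe) have hundreds of
# dimensions, but the arithmetic works the same way.
vocab = {
    "man":        np.array([ 1.0, 0.0, 0.0]),
    "woman":      np.array([-1.0, 0.0, 0.0]),
    "king":       np.array([ 1.0, 1.0, 0.0]),
    "queen":      np.array([-1.0, 1.0, 0.0]),
    "programmer": np.array([ 0.7, 0.0, 0.5]),  # spuriously gender-loaded
}

def analogy(a, b, c):
    """a is to b as c is to ?  (nearest word to vec(b) - vec(a) + vec(c))."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

print(analogy("man", "king", "woman"))  # -> queen

# Debiasing sketch: zero out the component along the gender direction
# for words that should be gender-neutral, while definitionally
# gendered words keep theirs.
g = vocab["woman"] - vocab["man"]
g = g / np.linalg.norm(g)                    # unit gender direction

definitionally_gendered = {"man", "woman", "king", "queen"}

def debias(word):
    v = vocab[word]
    if word in definitionally_gendered:
        return v                             # allowed to keep the association
    return v - (v @ g) * g                   # collapse onto the neutral axis

print(debias("programmer") @ g)              # no gender component left
```

The key design choice is the allow-list of definitionally gendered words: those keep their gender component, while everything else is projected onto the gender-neutral subspace, which is the "collapse to the axis" step described in the talk.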
Anything that should be gender-neutral they collapse down onto the axis, removing the gender component from the model. Which is a cool hack to get around it.

So, as I said throughout, for me the takeaway question is whether these tools will support legal activity or replace legal activity. I don't think we've seen the answer yet; there's a lot to think through. The other thing I want to close on is professional responsibility. Most of the time these tools aren't going to be built by lawyers, so to what extent do the technologists need to be aware of issues like this? I think there's increasing awareness of the social and ethical issues we've been discussing in the community, but few of the people building these systems are going to be the legal experts. So how should these rules evolve, and what kind of training do technologists who are working on a legal application need to have?

Is that the same kind of question as asking whether the guy working on the self-driving car needs to have a license? Imagine you're an engineer at Uber or Google: do you need to know how to drive a car to design one?

One thing I can say from the chatbot work on customer service data is that no one on the team had ever talked to a customer. So we told them: go do some design research, go sit with a customer service person for a day, and you'll get a feel for what a friendly chatbot should be like; you'll actually understand it instead of guessing at the features. I definitely think about that side of it. Or you can partner up with somebody. Just yesterday I was talking with someone at WeWork, the co-working space, where the guy doing the data work on pricing is quitting, and that's because he's a super smart physicist who knows everything about the math.
He never read a marketing book, he never talked to marketing people, and he can't solve the problems. So I think there's a lot of benefit to be had from collaborating with people when doing this stuff, as opposed to going it alone.

Something interesting about the way the question was framed, though, namely whether you need to have a driver's license in order to work on the algorithm, is that it demonstrates how much role and context matter. I don't really know how Google does this, but I would think you might want someone in the role of the driver, who has a license perhaps; someone in the role of the passenger; someone in the role of the mechanic; someone in the role of city planning and traffic. Then you have all these different roles and perspectives. A driver's license is one input, but the consultation of people with different experience across those roles on a team is at least as important.

I thought that was awesome, and since I get the last bit: in the last week of January we're going to have a legal intensive on data analytics. We're going to take some of your questions as inputs to the second annual data analytics and blockchain legal intensive, and I liked those last few questions. So if anyone's interested in continuing the dialogue, you're welcome to take the course, but it will also be online, so you can check in, see what it's like, and maintain the conversation. This really is the cutting edge. It's beyond, I'd say, the mainstream of law and law practice: understanding the nature of these technologies and what's at the forefront of the field in AI lets you look over the horizon, so I really encourage you to keep track of this and stay in the game.
And you're all welcome to come up afterwards if you're interested in being able to reach us.