This is more or less our movement here: free software, the community behind it, the ethics, and what we have created here. But this will now shift to the upcoming AI and data-mining world, and I think it's a very fascinating topic: how to keep our ethics, our themes, alive in that world. It's very, very necessary; we have Google and other big companies trying to get hold of all our data. So this will be presented by Justin W. Flory and Mike Nolan. Mike is from Rochester; they're from the Rochester Institute of Technology and the UNICEF Office of Innovation. Please give them a big welcome and enjoy the talk.

So in 2018, Victoria Krakovna at Google's DeepMind AI lab asked her colleagues to bring her examples of misbehaving AI. How does an AI never lose a game of Tetris? It pauses the game. How does a self-driving car that's supposed to keep itself fast and safe do that? It spins in a circle in the same spot. In the same article, there was an artificial-life simulation meant to simulate evolution that ended up creating a species with a sedentary, lazy lifestyle: it would mate to produce new children that it would then eat for resources and energy, and then mate to have more edible children. Some of these are a little funny or odd, but this is increasingly becoming our world. Not eating children, but major social networks are serving you ads for cat food after you post about getting a cat. Major U.S. retailers know that you are pregnant before you've even told your family or friends. So what happens when these stories turn into: a major job-matching site knows that you're afraid to ask for a raise, and a major online retailer knows that you're an impulse buyer? The rest of the world is starting to take notice too. Last week, January 20th, the CEO of Google made a call for regulation for the governments of the U.S.
and Europe to start coming up with ways to regulate AI. So this brings us to the perfect opportunity to talk about Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.

We who work in technology are too often witnesses to a system that does not serve. Why? Actually, there's a pretty easy answer, and it explains a lot: technology was designed with technology at the center, not people. Which is to say, it was badly designed. And so this is the problem that we want to talk about today. There is this increasing divergence between what we feel technology should be doing and what it's actually doing. And more importantly, there aren't really any clear, actionable steps that we have seen to mitigate the risks of this technology infringing on our own human rights.

So specifically, we want to break this down into three main things. First, we want to think about what sort of freedoms we actually want to protect, specifically from AI systems. Then we want to think about a couple of different ways that we, as creators of these AI systems, can build in ways to make sure they respect those rights. And lastly, and perhaps most importantly, we want to think of a few different ways that we can organize and ensure that everyone's rights will be respected.

So, let's start from the beginning: what freedoms do we want to protect? Before we can look at today, let's go back to 1983. In September 1983, the GNU project was founded by Richard Stallman. It was a software project, but it was more than just software. It came with a set of goals and a vision: to give computer users freedom and control in their use of computers and computing devices. How?
It would be done by collaboratively developing and publishing software that gives everyone the right to freely run it any way you want, to copy and distribute it and share it with other people, to study it and see how it works, and to modify it. Many of you probably know these as the four freedoms of free software.

So the GNU project was always more than just software; it came with a set of values and ethics that the project believed in. Copyleft becomes this copyright hack to protect these essential rights. But who will actually protect and enforce these rights? In October 1985, the Free Software Foundation is created to support and sustain the free software movement. The values of the GNU project were important and valuable, but it wasn't enough to just leave them on their own. At first, the FSF focused on employing software developers to work on the software for the GNU project and free software. Later, it transitioned to legal and structural issues to support the free software community.

So, it's one thing to have your values and ethics out there, but they need to be protected and respected by the rest of the world. The Free Software Foundation represents the sustainability of protecting the rights and beliefs that were put forth by the GNU project. So this helps to sustain those rights, but how does a foundation like the FSF actually enforce them?
Four years later, in 1989, we see the first version of the GPL license for the GNU project, stewarded by the Free Software Foundation. The GPL was unique because it put power in the hands of people and activists like us over how we decide the rules for how people use our software. So copyleft is now put into legal policy, and it's the way that software developers place the free software values at the core of their code and their projects. Although the enforcement of copyleft has its fair share of issues, this was still the teeth for actually enforcing the values and beliefs that were put forth previously. And copyright wasn't something that we were really thinking about before the 1980s when we shared software. So in that sense, copyleft becomes this radical invention in software with the proliferation of the GPL, especially two years later in prominent projects like the Linux kernel.

Okay, so is the past actually relevant today? Let's reflect on how the free software movement responded to these societal issues. Free software was a response to the changing ecosystem of computing in the 1970s. Software became more valued because there was a standardization of hardware. Before, there were so many different architectures to work with; the software you were building had to target all of them if people were actually going to use it practically. Then there became fewer architectures; we started to standardize on things like x86, on 64- and 32-bit architectures. So now software's value had increased: it became a commodity. The four freedoms were de facto the nature of how software was distributed and shared before that time, but after commodification this was no longer true. The four freedoms were rooted in a belief that there are essential rights that belong to all users of computers and computer systems. Stallman observed this change at the MIT AI Lab in the late 1970s and early 1980s, which motivated him and also many others to stand up
for software freedom by asserting these rights. So, to respond to this commodification, free software took a freedom-based approach, establishing the four freedoms across the 1980s, as we just looked at. So, looking back almost 40 years, is it possible for us to extend and make the past relevant again today in our ever-changing world?

Well, first, how has the world changed? The history of free software overlaps with what is happening right now: we're combining software and data to determine literal human futures. What you and I are going to do next, what we want to buy next, what our next way to spend our money will be. So, another way to put it: what are human futures? The combination of data and software. AI, in a sense. So, how are human futures becoming a commodity? Before, software was this thing that we sold; it had inherent value behind it. Now our futures are becoming the new commodity. It comes from that ability to predict what you are going to do next, and that's what drives so much of the profit behind these business models. But we have to remember that data is only one piece of this really big puzzle; it is the enabling force for determining your future. In today's world, we have third-party organizations that are increasingly collecting data on a massive, centralized scale, and your data is what enables companies to sell your future. So, one way to think about it: data isn't gold, it's more like oil. You consume the data and you sell the output. So, Mike, where are we today?
Well, you know, we noticed that data is a huge factor in this new equation of the issues we're seeing all around us, and we've responded accordingly. I'm sure many of you in this room are very familiar with the privacy movement and what many organizations are doing to try to allow us to take hold of our data and take back control of what happens with it. Even within our regulatory bodies, GDPR here in the EU has been a massive movement in trying to enable citizens to take back that power as well. And it's natural and acceptable to want to protect data privacy. But privacy is only a single part of this greater equation of buying and selling human futures. More so, what the current data privacy movement has focused on is empowering individuals, but this doesn't necessarily protect us from societal effects.

I'm sure a few of you here have heard about predictive policing algorithms, where data from crimes committed around a city is used to determine which neighborhoods the police should patrol and which neighborhoods will be more likely to have arrests happen in them. Even our court systems are now using data to determine what sort of punishment a person should get, based upon past crimes committed by other people. So this is beyond just our personal data and our personal agency; there are parts of our society that are employing data beyond ours in ways that will affect us. And so while the data privacy movement has been a key factor in helping combat the effects of surveillance capitalism, we've noticed that there are still gaps, and we need to approach AI not just in pieces, but as a whole.

Some organizations recognize this challenge and have started to address it. Working groups such as the AI Now Institute or the Partnership on AI began working on this a few years ago. But what we've seen is that these groups have not proven to be very effective at moving forward with
ways of ensuring that people are effectively protected from AI systems. They write reports with many suggestions, but there is a big emphasis on light self-regulation: we'll give you a few tips, but we aren't here to make sure that you follow through with them; we're just passing around some ideas.

So if we're in this major societal shift from software as a commodity to human futures as a commodity, where do we go from here? Well, the first thing that happened during the free software movement was a definition of rights, or freedoms. So now we're going to present three possible freedoms with regard to AI systems that we can think of going into this new decade. Now, we aren't presenting the truth or the answer; we're just presenting experiences from our own careers and the research that we have done up to now. Obviously, we know that we're just two people from two very specific backgrounds, and we look forward to input from everyone in this group. Patches are always welcome.

So what are these freedoms? Well, first, we believe we're entitled to know and understand how decisions are made. So you should have the freedom to audit, or understand, how these automated decisions are made. I mean, imagine a teacher gives you a bad grade on an essay. Naturally, you're curious to know why; I know I was. So why wouldn't you want to understand why a decision was made and how it affects you? This may seem familiar because it really came from the freedom to study the source; in our opinion, that's where this freedom to understand how decisions are made comes from. So, by show of hands, who here knows what Linus's law is?
So, a few. Linus's law is: with enough eyeballs, all bugs are shallow. If you open up a code base or a system to enough people, any problems with it will be more likely to be found. And experts across different backgrounds and fields should be able to research and understand how these huge AI systems are affecting us. But the thing that makes this very different from the corresponding one of the initial four freedoms is that it goes beyond just reading the source code. AI systems are not just source code anymore. It's the source code and the training data which was used to train the model, but it's also the considerations the developers took when developing, what the team is made of, and the underlying research, all within the context in which the system will operate. And so we have to understand that our freedom to audit goes beyond just reading code.

So for example, has anyone here heard of the trolley problem? It has gotten very popular with self-driving cars, because they're already programmed to make decisions involving human life. And so how are decisions made when human life is at risk? Well, we have the right to know that, right? This technology directly impacts us as humans, and our friends and family. So a study asked people: if a car had to choose between running into a tree or running into a child, which choice should the car make? I hope you'll all be relieved to know that over 90% of the respondents said it should run into the tree. However, the next question asked: would you choose to ride in that car that would run into the tree? And the vast majority of respondents said no: I don't want to run into a tree, I want to save myself. So would it be ethical for a car to take into account your age, your race, your gender, your social status when deciding whether you get to live? If a self-driving car could access personal information such as criminal history or known friends, would it be ethical to use that information when making these decisions?
Would it even be moral for someone to make a car which favored the safety of passengers within the car above all else, no matter what? Well, maybe we're just asking too much of AI systems to make these decisions for us.

So the second freedom I want to talk about: we know these systems are capable of harm; we just demonstrated it in some way or another. And we deserve a guarantee of liability when these systems do create harm. So you should also have the freedom to demand and expect accountability and responsibility from those who design and deploy automated decision-making systems that affect you. When machines make decisions for us based on how we program them, who is accountable for those decisions? Is it the machines? Is it the creators? Is it us? Well, clearly we feel that it's the creators and the organizations which are profiting from it. Those who create these systems oftentimes do so to profit themselves, and we, as those affected, deserve to be put above their profit. This is our livelihoods and our lives.

So I want to give an example for all of you: how could social media possibly ever be connected to genocidal campaigns? Some of you may be familiar with the Rohingya genocide in Myanmar. So what was the role of Facebook in all of this? The Facebook news feed optimizes for engaging content. But what is engaging content? What makes people click on links? Well, numerous studies have shown that optimizing for engagement increases recommendations for extremist and alarming content. Researchers knew of these issues and called them out years ago, before the Rohingya genocide happened in Myanmar. So is Facebook responsible for fake news propped up by religious and military leaders that contributed to an ethnic cleansing of the Rohingya people?
Obviously, Facebook didn't explicitly think about causing genocide when building this feature. But it was a contributing factor, and in many ways they did know about it. So who is to blame here? Is it Facebook? Is it nobody? Is it just a bad thing that happened? I don't know. But what we do know is that profits were placed over people here.

So this brings us to our third freedom. No decision-making system is ever perfect; we are always missing some data. So you should also have the freedom to appeal a decision that affects you. Maybe you've told a story to someone to try to help them see where you're coming from, or to empathize with your situation. Or maybe you've had to explain a fact about yourself in a background check, something that doesn't really represent who you are. Maybe you've heard the phrase, you know, walk in someone else's shoes. Our ability to do this is what connects us as humans, and it helps us avert disaster more often than you might think. There are always hidden stories that are not captured by a set of data points. So we should always have the opportunity to break through the automated systems that influence an organization, through to the people behind those systems, and to always use our humanity.

So, by a show of hands, how many people here have a university degree? Awesome. I don't. So many of you, whether you have one or not, have applied for jobs at some point. And when you apply, you submit your CV or your resume with the application. We already know that automated tools are used to review CVs in job-search applications. But what is the point of the interview with a real person?
Interviews are a chance to tell our own hidden stories and explain the gaps between what's on our CV and what isn't. They give us a chance to build empathy between us and whoever we want to be our employer. So where else do we see examples of this? In most court systems, you have appellate courts: if you think a decision was made in an unfair trial, or you had a biased judge, there's a system in place for you to appeal that decision. Same for rejections of loans and credit offers: if you believe a decision was made unfairly against you on bad data, you can appeal to an impartial third party. There is a system in place to appeal. So, to wrap up, what is the idea behind this freedom? We must not erase the opportunity for human connection and empathy when these decisions are made, even with automated systems.

So I hope all of you are beginning to see a trend here, a theme that we're trying to build. We know what freedoms we feel we're entitled to, or at least a subset of them. But how do we, as creators of these systems, even respect those? Well, just as the free software movement did before us, there are certain ways that we can establish these freedoms in practice: certain rules, suggestions, and guidelines that we can give to designers, developers, data scientists, and other people who create these systems. And so we have a couple of recommendations that we've thought of.

So, we like to claim that our models are foolproof and avoid bias, that we really thought about it and got the right data set and everything's perfect. But how do we prove this when the training data set is completely proprietary? Or given the fact that we know any data set is really a subset of real data, right?
How can we verify these claims of amazing accuracy made by creators of AI systems if we can't actually reproduce what is happening? And so our point here is that reproducibility is about that freedom to audit. It connects back to the first freedom that we talked about, where we should be able to look in and understand how these decision-making systems are behaving.

In general, we have three concrete steps that we think can help ensure that a model is reproducible. You really need three main things. As we said earlier, you need access to the training data used to train the model. You need access to the source code, so you can understand what model you're using and the corresponding pipelines. And then you also need proper documentation describing the model. In fact, I think this is actually the most critical piece.

So, by show of hands, who here has worked with open source? Awesome, I think I came to the right conference. Sorry, but keep your hands up if documentation was useful for you in understanding how an open source project works. Good. Documentation is an equalizer. Users have the right to reasonable documentation detailing the functionality of the AI system that they are subjected to. Now, I know this is general language, but there are key steps we can take to move this forward. Even AI Now, the institute we mentioned earlier, has published a standard documentation format that gives at least the minimum needed information about how the model was created: things the creators took into account, what data they are actually using, and what they are trying to do with it. Even this can be a very, very powerful thing in equalizing the knowledge between the creators and the users of this software. Now, I know this isn't perfect, but we can design these requirements in an actionable way.

So for our second suggestion: it's difficult to create liability for yourself as the creator of something. There are some options that we thought of; for instance, the B Corporation certification can
help: it's something you can sign on to that creates liability upon your organization. But we don't really have anything like this for the AI freedoms that we are talking about today. So how can we be responsible in designing our systems to be accountable for the adverse side effects we are trying to prevent? Well, let's design responsibly. One option that gave us a lot of inspiration comes from GDPR and its requirement of performing impact assessments. GDPR requires what's called a data protection impact assessment any time you collect significant amounts of user data. As part of this impact assessment, you do a risk assessment: you have to consider things that might go wrong, and you document the mitigations you have in place to help prevent them. It's not perfect, but what it does do is create a small amount of liability, and more importantly a paper trail, showing that the creator considered these side effects and what sort of mitigations they made. Most importantly, it forces creators to consider ethical repercussions as part of the design process.

So our third suggestion for how we can protect these freedoms goes back to the appeal mechanisms mentioned earlier. We have these systems already; there's no secret sauce or new idea that we're pitching here, so let's start with the things we already have as a model. Appeal mechanisms, what are they? Human-controlled methods to appeal a decision and correct mistakes made by automated systems. They're common among services we already have: the courts, bank loans; we talked about these. Even if an automated decision-making system does have input on a decision, appeal mechanisms must be human-centered and human-controlled. Appeal mechanisms should be designed to create empathy between humans. So okay, that's nice and all, but what would that look like for AI?
Manual override in the Tesla Autopilot cars is an excellent example of this. The moment that you, as the driver, place your hand on the wheel or your foot on the gas pedal or the brake, you immediately override any automated decision the car's autopilot would make for you. In a sense, you appeal the decisions the car would make for you. So that's a good example, but what happens when appeal mechanisms are poorly designed? Many of you are probably familiar with the Boeing 737 MAX and the MCAS system, and the role it played in crashing several of their planes. So our appeal systems must place user experience at the forefront. Appeal mechanisms need to be designed with accessibility and ease of use, or else the right to appeal is not evenly distributed to all people.

So okay: free software came from wanting to build software with our values and freedoms at the core. Can we reapply that to a changing world that's increasingly being driven by data and human futures? We want to adhere to these freedoms, but we have to agree on what they are. We need to collectively define something that we can collectively organize around. So, looking back, how did free software do it? The four freedoms came before the FSF, which came before the software freedom movement was really the thing we talk about, before we were even here at this conference. So we need to find more ways to work together and standardize our freedoms. While we can talk about the freedoms all we want, first we need a common structure to define what we want, and then agree upon it. So you want to build a movement? Define a standard. And as it happens, we free software people are pretty good at that; standards and protocols are kind of our thing. Now, the open web is not perfect, right?
It has its flaws. But we have created a globally represented standard, many standards, that are mostly respected by organizations around the world, and they're in use every day. Some examples: the World Wide Web Consortium, the W3C; the Web Hypertext Application Technology Working Group, a bit of a mouthful, WHATWG; the IEEE; the Internet Engineering Task Force and its RFCs, Requests for Comments. So we need to take into account that we already have forums in place to do this type of work, and some of the power is in our hands as designers and makers of our shared digital world. What we presented are some of the ways we believe engineers of all backgrounds building automated decision-making systems can remain conscious of these freedoms.

So these methods and ideas may work for us as creators, right? Because we have control over what we create. But how do we scale this issue from the individual level of us as creators to the societal level of making sure that everyone is protected? Well, looking throughout history, society has had many different tools that can help organize movements and enforce their demands. We all have our own skills, positions, and place in society where we can make an impact, so we need to understand how different tools can create the different kinds of impact that we wish to see. And no matter what works for you, it's always up to you to join in. As I said earlier, patches are always welcome. But how do we ensure the freedoms are respected by the rest of society?
Because we know if we don't protect these freedoms, we can't ever truly be sure that they'll be respected. And so in our research, we have found four main methods of enforcing, or at the very least incentivizing, others to also respect these same freedoms that we're trying to protect.

The first one is maybe an obvious one to many of you. When researching the history of free software, we naturally gravitated towards using licensing as a model, because that's what they did. But because AI systems extend beyond simply the model, to the data and even the documentation, this began to get a little tricky. Neither I nor Justin is a lawyer, believe it or not, but we suspect that licensing could be a strategy one could use to ensure derivatives are developed in a sustainable and ethical fashion, by developing some sort of comprehensive package-license solution for code and data sets. So no matter what part you're touching, whether you're a contributor of data or a contributor of code, licensing could be a way of virally extending to all the other parts that have to use it and enforcing these rights.

Now, obviously this has a few pros and cons. The biggest con that we immediately saw is that it contributes to the issue of license proliferation: it's yet another software license out there that hasn't been tested in court, and the more licenses we have, the less sure people are of our ability to actually enforce them. It creates massive legal and litigatory overhead, and it places strain on the original creator of the software, as the holder of the copyright, to actually go out and try to enforce it. However, it does shift power into activist hands. It gives us an immediate tool that we can use now to begin saying, as creators of or contributors to these AI systems: we want it to work this way, and we don't have to wait for anyone else to decide that. And so we have seen a number of interesting innovations in the area, specifically around data licensing, which has tested out the
waters a little bit, not comprehensively, but perhaps showing a possible way forward. OpenStreetMap, a few years ago, changed its licensing model to adhere more closely to a copyleft strategy, allowing the project to retain power over the creation of its own data set and requiring people who make derivatives to also contribute back. The CDLA took into account people's personally identifiable information, helping ensure that it doesn't get released in public data sets.

The next step that we really thought about was regulation. We were inspired by GDPR, and the strong arm of the government is an enticing tool to want to use. However, the road to creating ethical AI regulation is long. We all know about the issue of politics and how slow it is to convey the right information to regulators, and we want all of our rights here, now. But while regulation is slow to create, AI ethics is something that regulators are already interested in, and the time is now to impart our needs upon them, loud and clear. But this also has a few pros and cons, right?
It's powerful, but it's slow. It's also political and unstable: regulation can get repealed, and it's geographic, so maybe it exists somewhere else but not within your country. And lastly, there's the issue of regulatory capture. We live in a world with lots of very strong, wealthy, and powerful technology companies that have strong opinions as to what they should be able to do. But looking back, even the CEO of Google has suggested that countries begin at least thinking about creating regulation.

So this brings us to our third idea for how to help protect our freedoms. The difference in knowledge between consumers and producers is one thing that allows for exploitation by producers. So one way forward that we see is having a third-party, independent certification body, which is kind of a classic solution for simplifying really complex information for consumers, to help them make the right and smart decisions for their own lives. A certifying authority could allow creators of an AI system which respects our freedoms to more easily earn consumer trust.

There are some pros and cons again. On the one hand, this does empower consumers. I don't have to have a three-hour conversation with my mom explaining ethical AI when she's going to download an app on her phone, to help her make the right decision. This gives people who are not in this bubble of our world a way to understand how they can make the right and smart decisions, and some kind of authority to support that. And while it does empower, it doesn't explicitly protect; it is kind of an opt-in model. But it does empower competition: if you're going into a market and your competitor is pushing out the message of an ethical, more rights-respecting product, that puts a little bit of pressure on you as a competitor in that space. But one thing to really emphasize here is that it must be run by a third party. You can't have a single company or organization rolling in saying: we know how to do AI the best way,
here's the way to do it, let's go. No, you have to have an intersectional gathering of different people from different backgrounds and perspectives, from small, humanitarian, and corporate backgrounds, so it's not just one voice overpowering the conversation.

So going back to some examples: we did briefly mention B Corporations earlier. This is a certifying authority, at least in the United States as far as I know, to help ensure corporations have a positive impact on their workers, community, customers, or even the environment. Another example from the US is the Department of Agriculture and organic food: the Department of Agriculture acts as a certifying authority to ensure that food producers don't use certain chemicals, growth hormones, and other substances in growing their food, so you as the consumer can go and buy food that respects your preferences. Another example, one that's really close and personal to probably all of us, is the Free Software Foundation's Respects Your Freedom certification. We've seen that with examples like the LulzBot 3D printers and also the Librem laptops from Purism.

So this brings us to our fourth and final idea for how we can enforce these freedoms. Lastly, what has historically been one of our strongest tools as workers is to organize. Labor organization is a powerful tool that workers can use to ensure their freedoms are respected. Maybe this is personal to you; it's personal to me. More tech workers are noticing that there are negative impacts to the systems we're building, like the Facebook example from earlier with Myanmar. A labor union's ability to affect a workforce's employment contract can incentivize your company to take notice and respect the wishes of its employees. A union can transition some power from the shareholders' hands into the hands of the creators: me, you, those who are actually creating the real value of these systems. So what are some examples of that in this
21st-century world we're living in? Amazon Employees for Climate Justice are organizing against Amazon's inaction on climate change. GitHub employees are protesting and threatening to strike over their company's deal with America's violent and xenophobic immigration enforcement agency. And Google has been so threatened by its employees trying to unionize that it has been actively deploying expensive countermeasures to sow doubt in the power of labor organization.

All this said, the power of labor organization is being respected by these organizations so much that they're scared. Google and Amazon have already started to fire employees over these issues. They want to downplay it and put it down before it forces them to change what kind of organization they are, and possibly brings a shift to the organizational culture and values there. As scary as this can be for us, it's more so for them.

So to summarize everything we've talked about here today: the stakes have never been higher. These systems are being built not next year, not next month, not tomorrow, but now. These problems are not going to go away; they're only going to continue and grow. But we know we can imagine a better world, because we have to. The alternative is unacceptable. The history of the free software movement left us clues about how to build a social movement to address these problematic patterns in our digital society. Let's take those cues and start demanding that our freedoms be respected, just like those who came before us nearly 40 years ago.

Now, I really wish I could come out of this and say, "Here's an organization you can go to and support, and this will be the way forward," but I can't. What I do have is all of the people in this room and around the world. Through perseverance, we can begin championing this issue in our own way. For those who agree with us and want to commit to standing up for our freedoms, we do have one last action for you: pledge. We
have drafted a pledge at freeourfuture.world, and we want you to pledge. We want you to pledge that you will not participate in the creation of software which infringes on our freedoms. We want you to pledge that instead you'll build a better world for all of us, not just a few. And we want you to push your organizations to do the right thing, because we cannot accept having our future bought and sold for other people's profit. If there's anything we want you to take away from this talk, it's this: me, Justin, you, everyone in this room and watching online, we do have some power to make this change. So let's build a future that we own together, for everyone. Thank you.

Thank you, Justin Florey and Mike Nolan. We have a few minutes, so are there any questions?

Hello, and thank you for a wonderful presentation. I would like to know what you think about the big companies that are creating huge models, for example Google's speech models and similar ones, that cost a lot of electricity to produce. Should they, in your opinion, open source the data, or open source only the models? And would that be beneficial? Because if somebody else were to recreate those models, they would also spend those same amounts of energy. So what do you think about this?
Yeah, so your question is a bit quiet, but I think you're asking whether the organizations building these extremely large and powerful models should be required to open source their software, and their data too. This is a tough question to answer because it is so broad, and in some circumstances there are justifiable concerns for saying we shouldn't open up a particular data set. While I don't want to give a blanket answer to that question, I do think we should consider: if you can't open up a data set, if it contains personally identifiable information, or if it poses a risk to people, is it worth creating a publicly used model that acts upon that data set at all? A lot of the implications of AI software stem from us ignoring the implications of other things; we just kind of put them away in a closet. I know that's not the most satisfying answer, but I hope it addresses your question.

Since so many people are moving in and out, you should probably ask any further questions in person; the speakers are available afterward. Thank you again for the good talk.