Greetings, everyone. My name is Lisa Poggiali, and I am a senior democracy, data, and technology specialist at USAID. I could not be more delighted to welcome the RightsCon community to this panel on a technologist code of ethics. Earlier today at RightsCon, USAID Administrator Samantha Power and New America CEO Anne-Marie Slaughter announced that USAID and New America are collaborating to kick off a multi-stakeholder, non-governmental effort to draft a technologist code of ethics, inspired by the Hippocratic oath in the medical profession. This code, if drafted with care and implemented thoughtfully in the spaces where technologists work and learn, could help transform the underlying values of the global tech workforce and cause tech companies to shift their norms around ethical development and design. The idea of a code of ethics for technologists is of course not new. There have been many ethical codes produced before by companies, professional associations, and non-governmental actors around the world. And indeed, some of the panelists you'll hear from today have been involved in such efforts, and their work has informed the initiative that brings us together. This proposed code of ethics, though, charts a new course in two important ways. One, it proposes a global, radically multi-stakeholder, social-movement-based approach to constraining the misuse and abuse of technology, in part as a response to the fraught geopolitical context in which this challenging work takes place. We've witnessed how technology platforms, models, standards, and systems have become a battleground on which the struggle for democracy is playing out around the world, including in multilateral fora. We've seen how algorithms and machine learning models have been used, oftentimes inadvertently, to undermine democracy, for example by amplifying hate speech or enabling the surveillance and harassment of journalists.
This past Tuesday morning, Administrator Power delivered a policy address in which she highlighted that a technologist code of ethics, quote, must be developed by technologists, for technologists, so that it will be upheld by technologists. Indeed, technologists must be at the center of this effort. The effort must also bring new and diverse voices into the conversation, who can help technologists better understand the human impacts of their work, including in places far afield from where they engage in it. Second, in order for this code to catalyze the kind of change democracy advocates around the world want to see, it has to be more than an articulation of values and principles. Drafters will need to carefully consider how to implement its principles in relevant institutions and organizations, from emerging tech companies to universities, from tech incubators and hubs to vocational schools. New America will work with drafters to identify existing levers of power they can pull within institutions and companies, as well as new structures they may have to build in order to implement the code's principles. USAID is so grateful that New America has enthusiastically agreed to steward the drafting process. We will amplify their efforts and the efforts of the technologists, activists, investors, thinkers, and influencers who they will bring into the conversation. To be clear, USAID will not lead or manage the drafting process, nor will we play any role in drafting the code. We firmly believe the voices of non-governmental actors should be elevated. Our role must be to leverage our convening power to bring a diverse group into this conversation, to engage in a consensus-driven process, and to empower technologists to code on the right side of history. Today you'll be hearing from panelists whose passion, dedication, and expertise will set the stage for the important public conversation to come around drafting a code that will help us collectively build a rights-respecting digital future.
So I'm very pleased to turn it over to Allison Price, a senior advisor for the Digital Impact and Governance Initiative at New America, to get us started. Allison, over to you. Thanks, Lisa, and thank you to all of our panelists who are with us today. I'm going to be really brief so we can jump right into the conversation. But my first request, and if it's too distracting don't look at it now, we'll drop it again at the end: we have a request for public engagement about this idea of a technologist code of ethics. The Digital Impact and Governance Initiative at New America (DIGI) and the Public Interest Technology program (PIT) at New America are looking to get responses, or just initial reactions or ideas, to help guide this inclusive process. If you want to answer, it's a brief and voluntary survey; you can be anonymous or you can choose to identify yourself, and you can answer one, none, or all of the questions. It's totally up to you. We're really looking for ways to collect contact information for folks who want us to follow up with them as we move this open and collaborative process forward. The bottom line is we really do want to hear from all of you. And we get that everyone's at capacity, but we're really hoping that this week of incredible programming with the RightsCon community has left folks energized to work towards a rights-respecting digital future. And with that, I want to get into the panel. Our moderator today is Afua Bruce. She is the author of The Tech That Comes Next: How Changemakers, Philanthropists, and Technologists Can Build an Equitable World. Afua is a leading public interest technologist who has spent her career working at the intersection of technology, policy, and society. Her career has spanned the government, nonprofit, private, and academic sectors, and she has held senior science and technology positions at DataKind, the White House, the FBI, and IBM. Afua, over to you.
Thank you for that introduction. I'm so happy to moderate this conversation today. As has been stated, versions of a code of ethics for technologists have been tried over time by different groups of people, and other fields have professional certification requirements that touch on ethics. And within the incredibly wide and often changing definition of technologists, we must also acknowledge that this is a global question, and perspectives on the merit of a technologist code of ethics can differ based on community. Our panelists today have a wealth of experience in academia, civil society, and the private sector, and I'm excited to discuss with them the potential for a code, and questions exploring the importance of norms and culture for building a rights-respecting digital future. So let me introduce our panelists to you today. First up we have Cristina Martinez Pinto, who is the founder and CEO of the PIT Policy Lab. She has worked as a digital development consultant at the World Bank, she designed the AI for Good Lab, and she co-founded Mexico's national AI coalition, IA2030Mx. Julie Owono is the executive director of Internet Without Borders, an inaugural member of the Facebook Oversight Board, and the executive director of the Content Policy and Society Lab at Stanford University. Mehran Sahami, excuse me, is the James and Ellenor Chesebrough Professor in the School of Engineering and professor and associate chair for education in the Computer Science Department at Stanford University. Ethan Tu is the founder of Taiwan AI Labs, which develops next-generation AI solutions for healthcare, smart cities, and human interaction by gathering experts in software development, medicine, genetics, and other disciplines. So welcome to our panelists.
I'm going to direct my first question today to Julie. Julie, efforts to address algorithmic discrimination and technology-enabled human rights violations often focus on legal or regulatory reform as the best way to constrain the behavior of tech companies and states. Conversely, efforts that focus on reforming the values and norms of the tech industry have been criticized, despite evidence demonstrating that normative shifts can lead to legal and/or regulatory reform over time and can help strengthen company compliance with existing laws. What are the merits of an effort like a technologist code of ethics, which focuses on shifting the values, norms, and culture of the tech workforce, workforce pipeline, and industry? What are the challenges of such an approach, and how can efforts focused on values and norms be leveraged most effectively alongside ongoing advocacy for legal and regulatory reform, as well as other policy and multilateral efforts to address tech accountability? So that is a lot there, but I'm curious to hear your thoughts, Julie. Thank you so much, and I would first like to thank USAID and New America for the invitation to join this much necessary, and timely I should say, conversation. To respond to your question, what would be the merits of such an approach, a technologist code of ethics? I think historically we have seen the merits in other spaces, professions, and sectors. One of the comparisons used to present the technologist code of ethics is of course the Hippocratic oath, of Hippocrates, I don't know how to say it in English, Hippocrate in French, which has for centuries, for millennia now, guided the very profession of taking care of our bodies, right? I think such a technologist code of ethics would act the same way, but this time putting in the mind of any technologist that you're not just a product creator, you're not just, you know, an engineer: you have a responsibility to society, to democracy.
We have seen, and we continue to see, what that responsibility looks like, so it's really high time that we, you know, have this conversation on how everyone has a role to play to safeguard democracies. It's not just the government. It's not just the activists, like myself. It is also the role of private companies, private creators, to think about how we can better protect our shared values and democracies now. So what would be the challenges? The first challenge, of course, is who is going to define that code of ethics, right? And what I particularly appreciate with the proposal put forward today is the multi-stakeholder approach. We are operating in a space that is by design interconnected and multi-stakeholder: the companies provide tools to make money, the governments also have a platform to speak, everyone is profiting and everyone is contributing, and there's absolutely no way that the solution, these values, would come only from governments, especially knowing the rising authoritarianism, especially online, that we're living through at the moment, and not only from companies. So I think we need an approach that includes civil society organizations, that includes citizens, because these conversations should not be conversations for pundits only; it's a conversation for society as a whole. So yes, the multi-stakeholder approach could respond to that challenge of whose values are going to be put into that code. Thank you. Thanks so much. I loved what you had to say, especially about the fact that everyone has a role to play; it's not just governments and activists, it's the role of private creators to think about how we can protect our shared values. And so to that end, I want to encourage people who are listening today to go ahead and share ideas in the chat, but I also want to create a little space in case any of our other panelists want to respond to what you just said, Julie. I want to add that I couldn't agree more with what Julie said.
I think that in parallel to pushing for advances in the field of technology governance and regulation, which is very important, the merit of this initiative is that it places a core responsibility on companies, academia, and civil society, and this is a much-needed effort. Thinking about the challenges she was also mentioning, I think something interesting is that we have a solid foundation to build on. I'm thinking about the Principled AI initiative and paper that analyzed 36 prominent documents highlighting different principles that different companies are putting into practice. What's interesting about this paper is that they found core themes, and these themes could be the point of departure for thinking about the drafting of the code. And, just resonating with the words of what Administrator Samantha Power was saying earlier, it seeks to be inclusive, transparent, and to contribute to building a democratic digital future, and it has to take into account all this previous work that already exists. So I think the focus shouldn't be as much on the principles themselves but actually on the operationalization piece of those principles. And there, again, there's existing work, not as much as in the papers that have been developed around the principles, but around playbooks on specific actions that the private sector and civil society are taking to put these principles into practice. Thanks for raising those points, Cristina. I think the points you made about an inclusive and transparent process, really taking into account previous work that already exists and the thoughts of actual technologists who would have to operationalize this, are especially well taken. And so with that I want to shift a little, and Mehran, I'm going to direct my next question to you, about thinking of technologists as stakeholders.
So, technologists are key stakeholders in an effort to shift the underlying values of the global tech industry. Their work creating algorithms, writing programs, and designing software systems leaves them well positioned to identify and mitigate the potential adverse effects of their work in real time, if they are trained to do so, of course. At the same time, in many organizations, company executives rather than everyday programmers decide how algorithms should be deployed and what use cases they should address, creating a real gulf between creation and decision making. And so, given this structure of power, what are some levers for change in the spaces where technologists learn and work? I'm thinking of tech companies, incubators, universities, research institutes, and the like. What are some of these levers of change that could help facilitate the implementation and instantiation of such values? How, in other words, could a code of ethics be operationalized to shift the norms of the industry, and can lessons be drawn from other efforts, including public interest technology efforts, and if so, to what ends? That's a great question, and thanks very much for the opportunity to talk here and for pushing forward this effort around a code of ethics. I think the important thing to keep in mind is that ethics is a practice, not a destination. It's not just a matter of getting to a place where we check a box and say we're ethical, but what is the kind of culture we build around that kind of practice always being taken up. There are different constituencies, for example tech workers, in terms of building a culture of change. If you think about, say, the software development process, we don't put out software unless it goes through a review for quality assurance and debugging, making sure it's a quality piece of software, that it's usable, that it has things like security built into it.
I think we're getting to a place where a code of ethics would help inculcate the notion of an ethical or societal review for software products before they actually get released, so that it becomes a practice that working engineers, regardless of the decision-making process, put into effect with respect to the products they build. Universities provide an opportunity to teach students, the next generation that's going to be producing this technology, about a code of ethics and really to build that culture. You know, the analogy that was drawn, say, to medicine, where students actually take the Hippocratic oath when they finish medical school, that they're going to do no harm when they go out into being practicing doctors: building that same kind of culture as part of the educational process in universities for the next generation of software engineers, and then the next generation of managers and leaders, becomes critical. We've got other facets, say, from the standpoint of funding, in incubators or venture capital: thinking about making ethical review a part of the funding process by understanding what some of the principles of a code of ethics are, and what kinds of things they would want their companies to uphold as part of the review for funding. And part of that is reputational risk management, right? Someone might ask why a VC, a venture capitalist, might actually care about this, and it's because they don't want to be seen as someone who funds companies that are actually deleterious to society.
And so when you think about it from multiple angles, really what these things all have in common is they're different facets of building this culture of a practice of ethics, which can have as its foundation a code of ethics to bring those principles to the fore. Yeah, thank you so much for that response. I think what you started off with, the fact that ethics is a practice, not a destination, is so true and can be such a guiding principle. It ties into what we've started talking about today: the importance of going from principles to operationalization, what that looks like, and how a code of ethics could move that forward. So I'm curious to hear from the other panelists how those thoughts resonate. How do we think about the funding aspect, about ethical or societal reviews, and what other things can we really do to help move things forward, whatever decision-making processes may look like inside specific organizations? Very briefly, I think one way to operationalize such an ethical code would be to actually have those technologists in the world where their products are being used, in the diversity of the world where their products are being used. And this is something that the Content Policy and Society Lab tries to do a lot. They try to include, in the conversations with product makers and designers, civil society organizations from around the world, many of which, many organizations in, I don't know, Latin America, in Asia, in Africa, for instance, have never been part of any conversation with tech companies. So that would be one way to operationalize it: decentralize Silicon Valley, like I like to say often. Just wanted to share that briefly.
And on the funding piece, I think it's critical that there is a growing funding ecosystem outside of the United States and Europe, because the lack of one usually leaves us out of conversations, or even of the opportunity to participate in these kinds of initiatives, because the work is not properly funded or staffed in terms of human resources. And to what Mehran was mentioning, I think there's a huge opportunity in documenting these best practices. It's something that the World Economic Forum started doing with the Responsible Use of Technology working group, and it's really interesting because they documented practices from big tech companies, from IBM, Microsoft, Salesforce, and provided very specific examples of what the operationalization of these principles looks like. But again, as Julie was mentioning, there's an opportunity there to pilot those same use cases in the context of the global south, which is something these companies have not done. So when we look at how the responsible use of tech field is moving, and we see practices in Latin America, for example, it's interesting to see that it's more about initiatives being pushed by international organizations, like the Inter-American Development Bank through the fAIr LAC initiative, that are focusing more on small and medium enterprises and on building ethical practices, self-check tools, and different tools they can use. But what I found really interesting is that there's a lack of a bridge between these tools for small and medium enterprises and what's happening at the global level with big tech companies, and the responsibility and opportunity they have to operationalize their own practices in countries where they have operations. And Cristina, I actually just want to push on that a little more.
There's a growing acknowledgement that those who will experience the most severe negative impacts of algorithms and data-driven technologies over the next decade are those who have the least amount of control, to your point, over how they're built, deployed, and governed. These are people from the global south and those living in countries with weak legal and judicial systems, varying human rights records, and shrinking or absent civic space, and they are particularly vulnerable. I'm thinking especially about the algorithmic amplification of hate speech regarding the Rohingya in Myanmar, for example. As more people come online for the first time, states with authoritarian proclivities are becoming increasingly sophisticated at using technology and data to control and repress populations and to undermine democratic institutions, processes, and norms, and tech companies are also innovating new methods for accessing and profiting from users' data. And so, given that technology facilitates greater global interconnectedness for both good and ill, it stands to reason that any ethical code drafted by technologists must be global in scope, with robust participation from the global south. So what are some concrete ways to make a big-tent drafting approach, led by global south technologists, civil society, and academics, maximally effective? Who should be included, and how should they be engaged? Should anyone be left out of the process, and why? I think these are an excellent set of questions, and they come down to something crucial, which is that context matters. Even where there can be agreement around a certain set of principles, I think operationalization and implementation will look very different for different actors in different regions.
And I think it's also important to recognize that shifting norms and culture, which is what we are trying to do with this initiative, takes time, and time we often do not have because we're struggling to keep up with rapid technology development. So this is to say that I believe that for any global effort to be inclusive, engaging, transparent, and democratic, there have to be financial and human resources, which I have mentioned before, and any kind of effort can depart from a series of conversations like the one we're having today, but definitely in different languages and in close collaboration with local technology ecosystems, both from the global north and the global south. I think a challenge we're facing as public interest technologists, as I mentioned, is that the field is growing exponentially in the United States, but not necessarily around the world. So there is an opportunity and an urgent need to convene thought leadership in the technology governance space representing different regions, to come up with plans to engage local tech ecosystems, not only to discuss viewpoints and how we could draft this specific code, but actually how we're thinking about the building of this field, if we're aiming towards a democratic digital future. And in terms of specific processes for the drafting of the code, I think there is a lot of value to be learned from existing initiatives that have crowdsourced the participation of several actors. For example, at an international level we have the OECD AI Principles, which have been signed by all member countries. And at the local level we also have interesting examples, for example an initiative we led from C Minds, where I was working at the time back in 2018, where we crowdsourced, in a bottom-up approach using open tools for collective intelligence, the building of Mexico's national AI agenda. We ended up working with more than 400 people in six working groups, and it was very challenging.
And there were so many learnings, but the one I like to point out the most is that this kind of effort tends to focus on the content and the technology piece of it, and often disregards the human part of the process, which I think is the most challenging part: what are the interests of the people participating, and their own agendas? So I think this is something we should consider. And then, again, because I just think it's critical, the human resources and the funding involved: in our experience, because it was a voluntary effort, we were very excited about it and everyone was volunteering at the time, but it was very, very time consuming. It was a full-time job, and we were doing it on the sidelines. So in order for an initiative to be effective, it has to be seriously funded. I'm sure that funding piece resonates with a lot of people, including myself. And since you've spent so much time talking about the importance of inclusion, it's only right to give space to include everyone on the panel, so I'll open it up for the folks on the panel to respond. Well, maybe to add: thinking about a code of ethics certainly means the process needs to be an open and public process where people can provide their input, thinking about the different kinds of stakeholders involved. One of the questions that comes up is, when you have this kind of process, how do you think about its deployment long term, and what kind of accountability really is there? And that's one of the things that, in terms of thinking about a code of ethics, we need to consider very seriously: that accountability regime, and in what ways you actually make it not just something that is aspirational but something that's actually enforceable. For example, in the biomedical field they have such a strong code of ethics that it doesn't actually require a legal liability regime, although, you know, we could potentially get into that in the future.
The culture itself is strong enough: think of something like CRISPR technology and He Jiankui, who, you know, used CRISPR to basically modify the genome of two babies. He was immediately ostracized from the professional community, the research community, and that wasn't as a result of a law; that was as a result of a cultural norm. And so I think that's one of the places we really need to get to. We have had codes of ethics in the past, but they haven't really been inculcated as a cultural norm. Other fields have been able to do that, so it's possible; we just need to get there with technology. Go ahead. Go ahead. Sorry. Yes, I wanted to chime in briefly on, you know, the globality of the effort and also the accountability mechanism, and provide an example of what the Meta/Facebook Oversight Board, call it however you want, has done. In fact, the Oversight Board is the result of worldwide consultations, physical and in person before COVID, in different regions of the world where the platforms, Facebook and Instagram, are used by a great number of people. And I participated; for instance, my organization, not myself personally, but my organization participated in consultations in Africa and in Europe, because these are two regions of the world where we work a lot. The result of that has led to the fundamental text that guides the Oversight Board, which is our charter. Although there were, of course, contextual differences, which are important to take into account, and it's important to think about how to factor those in, the way the Oversight Board does this is by having a global, not representative, it will not represent every country, sorry, but you know, a global body of experts. We have experts from Taiwan, from Mexico, from Pakistan, myself from Cameroon and France, and many others, and the United States of course. But in addition to taking the context into account, there are some commonalities.
The first commonality that we identified from the different stakeholder engagements and consultations was the importance of the independence of the Oversight Board. It might seem very obvious today to say so, but at the time it wasn't an easy exercise, so it was important to have those commonalities brought together in a single document that now guides the work of the Oversight Board, even in the way we function. The other very important aspect was, like Mehran was saying, a continuous ethical process of making sure that the recommendations we make do reflect what the communities need, and to do that we have the obligation to engage continuously with communities around the world. We organize roundtables frequently on different issues, LGBTQI rights, refugee rights, conflict, many other issues, and these have been extremely necessary for us to do our best job, which is to hold a company like Meta, specifically Facebook and Instagram, accountable. So I just wanted to share this example as one possible way to get global perspectives and identify commonalities while at the same time respecting, because yes, it's very important to respect, the contextualization of, you know, all these global efforts. Yeah, absolutely, and I appreciate the tag-teaming of the panel there, going from talking about operationalization to, Julie, you following up with a specific example of something that's been done recently. And again, in the spirit of inclusion, I'll just remind people who are listening to the panel today to go ahead and put any questions you have into the chat; we'll have some time for Q&A from the audience coming up very soon. And then, Ethan, I actually want to shift to you to get your thoughts on the tech industry and civil society and how those two entities can work together better.
So the tech industry and civil society often find it challenging to achieve consensus on what ethical principles should undergird the production and deployment of technologies, and the code's drafting process will have to contend with this. There are barriers to achieving consensus on a global code as well. For example, a technologist code of ethics drafted in Cameroon would likely look different from one drafted in Taiwan or Mexico or the US, given the different legal, regulatory, political, economic, and social environments that structure technological development, use, misuse, and abuse in these places; so very much the context piece that we've been discussing already today. So, speaking from the context you know best, what are some of the key principles you think should be present in a technologist code of ethics? What are some principles on which you think it might be relatively easy to achieve consensus across sectors and/or geographies, and on which principles do you think consensus will be more difficult to achieve? I think, so, when we talk about this, first of all, reaching a consensus on a very detailed code will be very difficult across borders, but usually we can demonstrate the impact: if the artificial intelligence fails, what will be the impact? For example, we will measure the correlation on Facebook and see how that impacts our democracy. We also measure, for example, the disinformation happening in the Ukraine war and who the victims are, and by showing this kind of data we can really reach across borders on what kind of responsibility we should take in development. In Taiwan, I think Taiwan is a very special model. The reason we founded Taiwan AI Labs is because we believe that artificial intelligence will become a superpower in the future, so the algorithm itself should be transparent and able to be reviewed by others.
So in Taiwan we are promoting transparency of the AI. For example, if there's a medical device that wants to adopt AI, we help build what we call a common protocol so the AI can be trained without bias and can also be validated by different facilities. For example, if we train an algorithm that helps doctors diagnose in one hospital, at National Taiwan University, we make sure this algorithm can be federatedly trained by multiple PIs in a Taiwan-wide consortium that co-hosts the algorithm. Then, when we want to apply this algorithm outside Taiwan's hospitals, we need to run federated validation in another setting, for example in Japan, or in a setting the United States provides. I think by doing this kind of practice it will be easier to reach a common code of conduct, starting from avoiding harmful algorithms. We set up what we call security-by-design, transparency-by-design, and validation-by-design processes to make sure that the algorithms we deploy can be validated by different parties and really bring benefit to our societies. We will also try to use open-source common infrastructure to support not only healthcare but also transportation, for example. When we build an autonomous vehicle's artificial intelligence algorithm, how can you make sure a vehicle that can reach level-four autonomous driving in the United States can do the same in Taiwan? A lot of the time the test set has never seen the motorcycle behavior in Taiwan, for example, so how can you validate this algorithm?
So by building this common infrastructure and common validation protocol, a lot of university professors are starting to think about this question, and under the Ministry of Science and Technology's funding in Taiwan we have a budget to sponsor professors in Taiwan to do this artificial intelligence ethics research and practice. This is related to what, in the GPAI, we call responsible AI. Responsible AI needs to be responsible, auditable, and traceable through the development process, and by setting up these common protocols in our AI research, we actually help data scientists and entrepreneurs, when they are building artificial intelligence solutions, to follow these common protocols. So maybe this kind of effort we can do not only in Taiwan; we can also collaborate with more countries, for example the United States or the Global South countries. We can share our common practice on a common platform, and maybe, for global collaboration, if we can reach some global funding, we can also exchange experience not only from Taiwan but also from Southeast Asia, where there are a lot of good companies developing new solutions. I think in Silicon Valley the message is more profit-centered, but in Taiwan, when we develop a new technology, we actually have three core values we want to emphasize: the human-centered value, the sustainable-development value, and also the diversity and inclusion value, which means no single party alone should be able to alter these results.
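The federated-validation workflow described above, where a model trained at one hospital is evaluated at other sites without the patient data ever leaving those sites, can be sketched roughly like this. This is a minimal illustration only; the site names, toy data, and stand-in model are hypothetical assumptions, not part of any real consortium protocol.

```python
# Hypothetical sketch of federated validation: each site evaluates a shared
# model on its own local data and reports only an aggregate metric, never
# the raw patient records.

def toy_model(features):
    """Stand-in diagnostic model: predicts positive if the feature sum is high."""
    return 1 if sum(features) > 1.0 else 0

def validate_locally(model, local_data):
    """Run the model on one site's private data; only accuracy leaves the site."""
    correct = sum(1 for features, label in local_data if model(features) == label)
    return correct / len(local_data)

# Each site keeps its own (features, label) records; data never moves.
sites = {
    "hospital_a": [([0.9, 0.4], 1), ([0.1, 0.2], 0), ([0.8, 0.5], 1)],
    "hospital_b": [([0.2, 0.1], 0), ([0.7, 0.6], 1)],
}

# The consortium collects only the per-site metrics for comparison.
report = {name: validate_locally(toy_model, data) for name, data in sites.items()}
for name, acc in report.items():
    print(f"{name}: accuracy {acc:.2f}")
```

In a real deployment each site would run the evaluation behind its own firewall and submit only the metric, which is what lets an algorithm trained in one jurisdiction be checked against, say, Taiwanese traffic patterns or a different hospital population without sharing sensitive data.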
Yeah, I think that's an excellent point, Ethan: the importance of being clear about the values that have been ingrained into different processes that have worked. I also think your point, that other countries can learn from what started in Taiwan and that you can learn from other countries, builds on some of what we've touched on in our discussion today. As we think about reaching consensus, it will take a lot of work and it will be difficult, but there are a lot of local examples that we could use as starting blocks, and ideas like security by design and transparency by design, as you mentioned, can be really powerful. And so, in the spirit of gathering building blocks from different people and building on what's already out there, I'm going to now start to pull from some of the questions that people have submitted in the chat, and again, please submit questions if you have them. One of the questions here asks the panelists: could you please elaborate on how to hold tech companies accountable in implementing the technologist code of ethics? I might turn to Mehran first and give you some time to respond; I think you might have been the first on the panel to bring up the concept of accountability and how we need to make sure that, with a code of ethics, we would be able to hold people accountable. I think of a couple of different regimes for accountability. One is cultural accountability, and you begin to actually see some signs of this in Silicon Valley right now: companies that are deemed to be doing things that have negative societal impact are having more difficulty recruiting, because students are avoiding going there out of college.
For example, we find a lot of students at Stanford who are turning away from big tech companies because they have seen some of the impacts that have resulted from the technology, and they're not interested in contributing to that. That's a long game. You can think about other kinds of accountability regimes, which are more around liability. We do actually have liability regimes for things like manufactured goods: if a company produces a car, once it puts the car into the world it still has liability for what happens if that car doesn't work properly. Or we can think about firms that generate negative externalities, for example polluters. Those are liability regimes that have been built over years; it's not the kind of thing where you snap your fingers and suddenly you have an accountability regime and the problem is solved. But if you think about wanting to build in an accountability regime, both culturally and with respect to legal liability, that is something that we've been able to do successfully for other industries. And it's something that we should be able to do successfully for the tech industry; it will just take us a while to develop those regimes over time. Yeah, I think those are excellent points. Do any of the other panelists want to respond or add on to that? I could jump in briefly on this very important question of accountability, and specifically the second point, the great point that Mehran has just made on legal liability.
We are seeing governments around the world increasingly adopting regulation that does hold companies responsible and liable, and I'm talking specifically about content platforms, because I know the conversation is broader, but I want to use this example as an important aspect to keep in mind when we are thinking about accountability mechanisms. What we're seeing with some of these regulations is that the harms themselves are so badly defined that it's not clear the harms justify the liability regime; it becomes "whoever doesn't agree with me is harmful to society," with an authoritarian regime on top of that. So I'm mentioning this to say: if there were to be an accountability mechanism, it will be extremely important to spend time defining the kinds of harm and negative behavior that could trigger an accountability regime. This is important in order not to instill a sort of arbitrariness in this technologist code of ethics, or whatever system is going to be built. The second aspect I wanted to share today with regards to accountability and values: there has been a lot of conversation about whose values and what values, and I would like to point here to an existing corpus of texts that gathered what humanity, at one major point in history, decided was fundamental to uphold as freedoms, rights, and principles. I'm thinking specifically about the enormous corpus of international human rights law and standards, where you do have some of the principles that guide us today, freedom of expression for instance. I want to see a technologist code of ethics that says we uphold all the principles that were declared in 1948 by a committee led by Eleanor Roosevelt and many other experts from around the world.
Well, I think this international aspect will be necessary if we want to start from a point where we all agree that these are the values that matter to us all, no matter where we come from. Yes, go ahead please. Yeah, just very quickly, because I think Julie just mentioned a critical point, which is actually understanding the differences between ethics-based approaches and human rights-based approaches, because they can be complementary, but they have different aims. Ethics-based approaches are more flexible; they can differ depending on traditions, cultures, countries, and religions, but because they're more flexible and can be interpreted differently, the outcomes and priorities suited to specific needs and sensitivities can also differ. If, instead, we are focusing our conversation on human rights-based approaches, that's a more holistic view, based on internationally recognized law. That can be the starting point of the commonalities that we're seeking to address in a global code of ethics. But I think it's important to first not reinvent the wheel, and to recognize that there are valuable practices and examples that we can learn from all over the world, like the example Ethan was sharing from Taiwan, and also that we're not starting from zero: there's so much work, a whole literature, different papers, common themes, and of course the Universal Declaration of Human Rights that can serve as a basis for any kind of principles. And I think it's also vital not to let the conversation stagnate on which principles are going to be part of this foundational document of the technologist code of ethics.
But actually, what are the operationalization pieces, and what do these examples look like in different contexts? To me, this is not where the conversation will be heading but where the conversation is already happening, and we're seeing many interesting examples of what that looks like. Yeah, absolutely, and I think a definite common theme in our discussion today is that we're not starting from scratch; there are examples and things that have been tried in the past, and documents that have been produced in the past, that could create a strong foundation. At the same time, our conversation today has really been oriented toward how we get to yes for a technologist code of ethics, but I see a question here that was submitted that, admittedly, carries some very valid skepticism. This person asks how this will be different from every other technologist code of ethics: schools have codes of ethics, the ACM has a code of ethics, tech companies have codes of ethics. The person who submitted this question writes: personally, I no longer believe that yet another code of ethics can make any meaningful difference in the field; without a professional certification for technologists there is no enforcement system for gross ethical violations by any given technologist, and even if there were, powerful technologists who engage in unethical behavior are not going to be stopped by a code of ethics, or even by revocation of a certification or license. So, a pretty heavy question, but I do want to hear from the panelists today: how will this be different, and is it worth the time, the resources, the energy, and the effort to build a new technologist code of ethics? Well, maybe I can respond as someone who was involved in the revision of the current ACM code of ethics and has actually been involved in ACM for a long time.
I think part of the point, and I think it's an important point that's been made, is how do you turn a code of ethics into a practice? What is the kind of accountability that comes from it? Some of the reasons why, for example, a corporate code of ethics doesn't really work is that the company is only accountable to itself. It can put out a code of ethics and then justify any behavior it takes relative to that code, because it's its own arbiter of that code of ethics. For the ACM, for example, you can actually be found in violation of the code of ethics and removed from ACM, but for many practicing technologists that doesn't have a real impact. So the real question is how you go from a code of ethics to a practice that has impact. There have been discussions in the past around, for example, licensing of professional technologists, and around whether you can have a code of ethics that gets enough buy-in that you actually get a cultural change; that's the kind of thing we've seen in medicine and in the bioengineering realm, for example. Ultimately, can you get to a place where a code of ethics inspires actual accountability, for example a liability regime around what people build? Civil engineers have to get licensed to build bridges and are held accountable if those bridges fall down. We don't have something like that in technology. Should we see enough of the negative impact of technology on society, it's actually reasonable to believe that that is a regime we would move to. So I think a code of ethics now is primarily aspirational, but it's aspirational in the sense that it moves the conversation; it potentially moves the needle toward thinking about an actual implementation of what this means in a regime with accountability. If we don't have an aspiration to start, we never get to that kind of accountability.
So I'm not saying that I think this is guaranteed to work, but unless we try, we won't get there. And to briefly follow up on this: it seems like we will go in a direction where there will be some form of consequences for not respecting a certain number of basic principles. I'm saying this because the European Union has just adopted a new body of legislation called the Digital Services Act, which will impose at least sixteen compliance obligations on companies, if I counted well, ranging from transparency to due diligence to human rights impact assessments of products and product development. We've seen the influence of European Union legislation in other sectors, including privacy with the adoption of the GDPR, the General Data Protection Regulation. That act has had tremendous consequences on the conversation around privacy, to the point that I was reading yesterday that some tech leaders and companies, particularly Alphabet and Apple, are now considering that privacy should be part of any ESG (environmental, social, and governance) risk assessment in the investment industry: before investing in a company, investors will want the company to show that it also has privacy mechanisms in place. So yes, I think to some extent we will be forced into that compliance space, given the current legislative environment. As I expected, we have had an incredibly rich conversation today, so thank you again to our panelists, but we are nearly out of time. In our conversation today we've touched on so many aspects of the process, on what we would need to get to a technologist code of ethics, and also on what some of the factors and forces working against a technologist code of ethics might be.
So many thanks to Julie Owono, who reminded us that technologists are not just product creators; they have a responsibility to society. Thanks to Mehran Sahami for reminding us that ethics is a practice, not a destination, and that it's important to think through the culture we build around making sure that these actions can be taken and people can be held accountable. Christina, thank you for sharing your thoughts with us, especially about the fact that context matters and that operationalization of principles will look very different in different regions, but that however we go through this process, it's important for us not to disregard the human part of the process, which is perhaps the most important part. And Ethan, thank you for sharing the specific examples of what has worked in Taiwan, and for really reminding us that we need to be able to demonstrate impact and to think through, in our design process, what it means to have security by design and transparency by design. To wrap up this panel, though, I think it's really important, especially with a topic like this, to think about what's next, where we can actually go from here, and how we can continue the conversation. So I want to hand it over to my friend Andreen Soley, the director of New America's public interest technology initiative. Andreen joined New America after 20 years of experience working within higher education and the nonprofit sector, and she's focused on the certification and culture of ethical practice needed to build a future with technologists trained to center human rights in their work. And so, Andreen, can you close us out please? Thank you so much, and thank you for facilitating such an insightful conversation.
Thank you to Christina, Julie, Mehran, and Ethan for setting the foundation for how we can create a technologist code of ethics. I especially liked Mehran's point that ethics is a practice, because I think it's really helpful to ground it in action: it's a practice that needs to be nurtured and supported, both within institutions and through funding streams. I also want to thank RightsCon for continuing to provide a space for these conversations. At New America we define public interest technology, or PIT as we call it, as the application of technology expertise to generate public benefit and promote the public good. I think the conversation about culture really underscores the promoting-the-public-good piece. We know that one of the best avenues to achieve that goal is by engaging directly with individual technologists and other relevant actors who are willing to accept a responsibility to center people, particularly those most vulnerable to exploitation, in the design and implementation of technology. We approach this work through research projects and partnerships that support the creation of digital public infrastructure. We also do this in partnership with academia, with our Public Interest Technology University Network, PIT-UN for short, a group of 48 universities and colleges that are working to build PIT as a field of study and practice. As we heard from our wonderful panelists, codes of ethics for technologists have a long history, but their inculcation as a cultural norm has been missing, as have levers of accountability. Our aim today is to guide the global workforce and tech companies toward a future in which human rights are deeply embedded in the technology they put out in the world. Such a code, to be impactful across sectors, can't be created without input from everyone.
I've heard inclusivity mentioned quite a few times, and for us that includes civil society, academia, the public and the private sector, particularly as we talk about disparate impact. That's why we're asking you to join the conversation and help to create the international bridges that Christina and Julie referred to earlier. If you want to be a part of this conversation, please join us. We are dropping in the chat a link to a survey where you can share considerations, principles, bold ideas, and other concerns, particularly around the infrastructure for accountability and liability that our panelists mentioned, as well as other tools and publications that could inform the discussion. The link to this survey will also be available and open in the coming days on our website. Thank you again to our panelists, who really gave us a wide-ranging set of views on this approach that we are beginning to work with as a team at New America. I look forward to hearing from you and to the ideas that will help us animate this work moving forward. So thank you again, take care, and have a good rest of the day.