Hello, I'm Audrey Tang, Taiwan's digital minister. I'm really happy to be here virtually to share with you some thoughts on assistive intelligence, from singularity to plurality. One popular vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. That vision, autonomous intelligence, tends to concentrate power, resources, and decision-making in an engineering elite. It optimizes for artificial metrics of human replication rather than for systemic augmentation. And it misconstrues intelligence as autonomous rather than also social and relational. Because the term AI is used in a variety of ways, let's begin by clarifying the primary target of this talk. It is a vision of autonomous machine intelligence that aims to achieve, and surpass, a generality ascribed to human intelligence. The pursuit of this vision implies centralizing decision authority. It implies obstructing more productive, diverse, and decentralized directions for technical development. The issue with autonomous intelligence is not the technology it has inspired people to create. For example, there's nothing wrong with deep transformer networks or reinforcement learning algorithms, many of which have genuine utility when applied for human empowerment. Instead, the problem comes from the focus on deploying these technologies in pursuit of a vision of exceeding human capabilities as rapidly as possible, leading to a so-called singularity. But we are not locked into this trajectory. A vision of digital plurality has already powerfully transformed how we live. It helped to field approaches from personal computing to the internet of beings and to shared realities. And today, a variety of pluralist approaches could rein in AI's worst excesses. 
Those approaches range from collaborative governance strategies that treat the centralizing symptoms of AI, to investment in research programs such as data collaboration and optimizing human complementarity, and to bold attempts to lay out a comprehensive alternative technology agenda, focused on human-centered participatory design and on democracy itself as a social technology. Now, this talk is based on a joint paper from a diverse coalition of contributors with a range of disciplinary expertise, personal backgrounds, and perspectives, who disagree on many things and are each most enthusiastic about different research programs and possible visions. That, too, is part of pluralism. What unites the contributors is a belief that strongly articulated pluralist projects for technology are crucial to avoiding the perils of autonomous intelligence. So what is autonomous intelligence? Let's see some quotes. "OpenAI's mission is to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity." That's from OpenAI. "Solve intelligence, and then use that to solve everything else." That's from DeepMind. "Human-level AI": well, that's the goal of Facebook AI Research. Now, to avoid strawmanning, it is important to focus on the visions of an AI future as they actually exist in the greatest centers of power in the field. Representative of these centers are the cutting-edge operations of three dominant technology companies investing heavily in AI: OpenAI, DeepMind, and Facebook AI Research. So let's label the common elements of the visions of these groups: autonomous intelligence. The conceptual and practical commitments of this research and policy program are already well reflected in the mission statements I just quoted. There are three pillars: human competition, autonomy, and centralization. First, human competition. 
This first shared commitment, a target of "achieving general intelligence," is largely defined by comparison to some concept of generalized human-level capability, with the aim of surpassing it. This is already explicit in the OpenAI and Facebook formulations. The DeepMind formulation is a little more oblique: it focuses on "solving intelligence." And indeed, DeepMind's practice has often avoided a single-minded focus on human comparison. With that said, though, to treat intelligence as a single thing to be solved suggests, to me, that intelligence, singularly understood, is a well-formulated problem. And given that humans are usually described as intelligent, this also implies that outperforming some sort of singularly understood human intelligence is at least a necessary condition for solving intelligence. The second pillar is autonomy. The machine assemblage that is compared to the human in this envisioned competition between human and machine is targeted for independence from human direction and agency. Measures of success in achieving so-called intelligence are predicated on achieving this autonomy. This is, of course, very explicit in the OpenAI formulation. It's also visible in the DeepMind formulation, as the intelligence it envisions creating is imagined to itself solve everything else. Such commitment is least visible in the Facebook statement, but the term "autonomous" appears very frequently throughout their work as well. The third pillar, the final shared commitment, is centralization. It follows logically from the first two; it's more a practical consequence of the agenda, though a future of centralization deriving from AI is clearly envisioned in more extended articulations, such as Altman's. Now, with this term we name what is occurring: a centralization of capital under the direction of a small group of engineers of AI systems. 
Now, if technological systems are to be judged by the singular intelligence they achieve, then the more resources that can be put inside the box, and the fewer people involved in creating those systems, the more clearly the technological advances can claim to achieve this autonomy. But the pursuit of autonomy from humans drives toward what? This concentration: the centralization of power to direct capital and infrastructure in the hands of a very few. Now let's talk about the theoretical problems with the three pillars. First, against human competition and other narrow technical benchmarks. The practitioners of autonomous intelligence measure progress by passing benchmarks and competitions, with the most common benchmark being human parity in a task. For instance, tracking projects such as Stanford's AI Index and the Electronic Frontier Foundation's AI Progress Measurement project characterize progress in the field by comparison with human performance across fields. Yet there is little theoretical basis for considering human parity a useful target supportive of the long-term flourishing of our civilization. Instead, this emphasis is most likely to create brittle technology capable of competing with and substituting for humans rather than complementing us. This may seem counterintuitive in cases where it is human communicative or collaborative capacities that the systems aim to replicate. However, even in these cases, parity metrics are at best imperfectly correlated with the desired aims. For example, many of the most effective human communication media, such as video conferencing, do not replicate any defined human capability; they instead facilitate communication using distinctly non-human infrastructures. Video conferencing and shared realities complement and extend human capabilities rather than replacing us. 
And we note that human parity in specific fields is often seen as an intermediate step toward the goal of absolute, evident advantage over humans in every domain. However, this incessant focus on achieving so-called human-level intelligence via task parity itself creates great harm and waste. Specifically, it casts the relationship between humans and machines as one of competition rather than one of cooperation and augmentation. So it both excessively displaces workers and forgoes myriad opportunities for improving human productivity. An array of paths exists for developing productivity-enhancing technologies. The autonomous intelligence path, based on achieving human parity and automating human work across various tasks, is only one path, and it's actually an extreme option. Pursuing automation in the many areas where machines do exceed human capabilities is not itself problematic. The problem is focusing solely on automation, even in places where algorithms, whether AI-powered or not, seem unlikely to have a significant comparative advantage over humans in the near term. Such a focus fuels a more serious problem: ignoring areas where technology could create new tasks for human civilization. And this problem flows from the more general mistake of taking automation as a goal, rather than as a potential side effect of what would be a better goal, namely creating new opportunities and productivity for humans. When technology takes automation as a goal rather than as a potential side effect, it both displaces workers and fails to generate the new tasks and opportunities that would reinstate those workers into the production process, doubly disadvantaging them. So other approaches, which focus on areas of apparent machine comparative advantage while creating new opportunities and productivity for humans, are far preferable across both economic and social measures. A canonical illustration is AlphaFold from DeepMind, which is a protein structure prediction model. 
This work proceeded by focusing on an area where computers already exceeded human performance, namely predicting protein structure; where computer performance was highly complementary to a human capability, namely medical science; and where the result is important for human needs, namely relief from sickness and suffering. So the focus on achieving so-called human-level intelligence for currently existing tasks freezes in place contingent notions of current economic value, whereas what humans are capable of, and what the economy judges to be valuable, co-evolve with technology. The focus on automating "economically valuable human work," as in the OpenAI mission, assumes some stasis in what is economically valued, and could misdirect us away from crucial ways of evolving our economic system. Now, against the second pillar: autonomy. While the human competition benchmarks of autonomous intelligence are unproductive for supporting a goal of a broadly flourishing human society, at least they are a meaningful objective, and they would be worth pursuing if, for example, your overarching goal was to enable a technical elite to operate as independently as possible from the rest of society. In contrast, the conceptual commitment to autonomy is based on a fundamental misunderstanding of intelligence as it exists and functions. That atomized view of intelligence, captured in the very word "autonomy," misunderstands how sociotechnical systems produce value. And this makes autonomous intelligence systems inherently susceptible to Goodhart's law, that is to say, the tendency of optimizing quantitative systems to over-optimize what they measure, despite the avoidance of such situations being the stated goal of much current safety research, particularly alignment research. Intelligence is not an autonomous but a social and relational quality. 
While there is significant dispute about what intelligence means, or whether it is even a useful concept, most attempts at definition focus on the capacity to solve useful problems and make plans to achieve desired ends. But both the empirical study of such capabilities in humans and computational economic theory strongly suggest that such intelligence is not primarily a property of atomistic individuals but of entire social systems. Systems that aim to achieve something like the intelligence we perceive in human civilizations will thus depend on their capacity for interdependence and sociality, not just autonomy. So autonomous conceptions of intelligence are particularly subject to Goodhart's law. As Drexler highlighted, concerns about Goodhart's law apply primarily to systems that aim to autonomously pursue ambitious goals with limited temporal or social constraints. This is particularly true when paired with a focus on surpassing human capabilities, exemplified in canonical thought experiments such as the so-called paperclip maximizer described by Bostrom. Now, the autonomous intelligence research agenda calls, in the long term, for the development of powerful, autonomous, human-independent technologies, while simultaneously decrying these as the developments most responsible for soaring risk potential in the long-term future, even existential risk. The clear alternative is optimizing for limited and constrained systems that are deeply integrated into social and communicative frameworks, which have far more circumscribed scope for Goodhart-style failure modes. And moreover, autonomy tends to obscure and undermine the important external agency critical to making systems function effectively. The production of the myth of autonomy both perpetuates the erasure of certain classes of human labor and obscures how much the decision-making and assumptions of a tiny technical elite govern how these systems operate and for whom. 
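As an aside, Goodhart's law, the tendency of optimizers to over-optimize whatever proxy they are given, is easy to see in a toy simulation. The sketch below is not from the talk; the function names (`true_value`, `proxy_metric`, `optimize`) and the numbers are purely illustrative. The point is that when a measurable proxy rewards a gameable behavior more cheaply than real substance, a faithful optimizer of the proxy drives real value to zero.

```python
def true_value(substance: float) -> float:
    """The outcome we actually care about (illustrative stand-in)."""
    return substance

def proxy_metric(substance: float, gaming: float) -> float:
    """A measurable benchmark: it rewards substance, but gaming
    the metric yields three times the score per unit of effort."""
    return substance + 3.0 * gaming

def optimize(budget: float = 10.0):
    """A greedy optimizer spending a fixed effort budget to
    maximize the proxy, by sweeping over how much effort to
    divert into gaming the metric."""
    best = None
    for i in range(11):
        gaming_share = i / 10
        substance = budget * (1 - gaming_share)
        gaming = budget * gaming_share
        score = proxy_metric(substance, gaming)
        if best is None or score > best[0]:
            best = (score, substance, gaming)
    return best

score, substance, gaming = optimize()
# The proxy score looks excellent, yet all effort went to gaming
# and the true value collapsed to zero.
print(score, true_value(substance))
```

Constraining the optimizer socially or temporally, as the talk suggests, amounts to forbidding the degenerate corner of this search space rather than trusting the proxy.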
So our technologies should reflect social realities, rather than the fiction of autonomy, not only as a matter of ethics or politics but as a simple matter of efficacy and transparency. Now for the third pillar: against centralized scale. Autonomous intelligence depends on two symbiotic future visions, one optimistic and one pessimistic, both dependent on centralization. In the first, optimistic view, a concentrated investment in a small number of people achieving distant and ambitious goals will yield broadly beneficial and even spectacular outcomes for humanity. In the second, pessimistic view, not achieving these very distant goals in a sufficiently aligned and highly controlled manner may result in significant, potentially existential harm. Both views are fully dependent on the centralized control of AI systems, effectively concentrating power over vast resources in the hands of very small groups. And the worldview of this elite will travel as their systems do, displacing alternative perspectives, domain-specific expertise, and pluralist values and epistemologies. However, assertions of centralized control are also typically illusory. As many have shown, the Soviet claim to conduct central planning gave way in practice to a range of decentralized decision-making activities, given the inability of central planners to perceive or act on all the details necessary to implement their plans on the ground. Similarly, AI systems that aim at so-called neutral fairness via control by engineers end up instead replicating the very biases of the society whose data they train on. This aspiration, and illusion, also leads to the formalization of a narrow set of values, namely those of the designers of those systems, currently a highly concentrated and homogeneous group, with more and more humans left out of the decision-making process and its benefits. And now let's talk about digital plurality. 
If the vision of intelligence as autonomous is a mistaken horizon for goal setting in technological development, then what is the alternative? Naming a single alternative, a single correct path, would be self-defeating. Instead, I think the alternative is not a singular, narrow focus on any specific goal, such as achieving so-called general intelligence, but rather research and policy support for a plurality of complementary and dispersed approaches to developing technology, in support of the plurality and plasticity of human goals that exist inside the boundaries of a human rights framework. Now, this alternative to autonomous intelligence already exists. We will refer to this alternative pathway as assistive intelligence, toward digital plurality. But what exactly is that, in detail? Well, plurality resists any single definition. Instead of aiming toward a technical end state, it describes an ecology constituted of approaches that cooperate, coexist, co-evolve, and operate in support of human decision-making about social well-being, within the constraints of human rights frameworks. These approaches create, intersect with, and support new modes of decision-making. By raising ongoing human goal setting to the surface as the thing governing technology, they transform narrow technical questions into opportunities for social innovation, whether achieved through collective digital participation or some other mode of deciding together. And rather than converting questions of social progress into formalized inputs for narrow technical expertise to resolve, they support and extend the human capability for directional goal setting and for fair, just, and productive collaboration. As we seek to characterize what actually exists in this digital plurality, we describe attributes of this emerging ecosystem; these attributes are reflected in the theoretical statements motivating work in this space, though not perfectly replicated in the technologies that constitute it. 
As we see it, this emergent alternative has three shared conceptual and practical commitments: complementarity, participation, and mutualism. The content of these core commitments can be summarized as follows. First, complementarity: technology should complement and cooperate with existing intelligent ecosystems, never replace them. Technology should broaden the surface area of complementarity across individuals, organizations, and systems, allowing for ever more networked evolution. Second, participation: intelligence is collective, not autonomous, so technology should work to facilitate the social nature of intelligence, and in particular to facilitate deliberation on, and participation in, the setting of outcomes in equal measure to driving the achievement of those outcomes. And third, mutualism: decentralized, heterogeneous approaches under the umbrella of digital plurality can build on and benefit from one another. Technologies evolve in interaction with each other and with social, political, and economic institutions, forming an ecology. A wide range of social and technical projects in and around the digital world are currently developing promising alternatives. These projects differ in the extent to which they target the three pillars of autonomous intelligence, and only a subset of them break away from those pillars entirely to define a new ecology of digital plurality. We can see three rough lanes of work leading to the development of plurality; they each share different elements of the three commitments named above, so we'll introduce this field by focusing on these three areas of practice, each in turn. The first lane of work focuses on mitigating the problems of autonomy. 
Now, these approaches accept the necessity, or likely inevitability, of autonomous systems that aim at general, human-style intelligence, but they are concerned about the tendency of such technology to become misaligned with human agency or to have otherwise harmful effects, often because of centralized control over it. So these approaches tend to cover the ethics, governance, safety, or redistribution of AI systems and the benefits they create. Practitioners are to be found in various parts of the AI ethics and FAccT communities; the AI safety, alignment, and existential risk community; the AI governance and regulation community; and the work on sustainability, windfall clauses, capital taxation, and related redistributive mechanisms. Now, the second lane of work challenges one of the two more technical premises, competition or autonomy. There is a range of approaches that maintain a focus on autonomous intelligences but break from the competition concept, that is, from the focus on imitating or displacing a singular, anthropomorphic concept of intelligence. This work aims to develop intelligences that are unrelated to human intelligence, that perform well in areas where humans are just not that good to begin with, like protein folding with AlphaFold, or that complement or collaborate with humans, such as the work on optimizing metrics of human complementarity. Examples here also include cyborg technologies, brain-computer interfaces, and so on. One opinionated survey of this perspective is Drexler's vision of an AI services model as an alternative to the unitary, human-agent-like intelligence envisioned in much of the AI safety literature. Now, within this lane, there are also approaches that embrace the competition concept of autonomous intelligence systems, and the goal of replicating various human skills, but that take aim at the autonomy concept, the vision that systems achieving these goals need to be autonomous. 
So practitioners in this lane aim to explicitly account for, build off, and harness the capabilities of the organizations and individuals that enable AI and digital systems to function. This lane would embrace work on technical elements such as the privacy-preserving machine learning stack, on economic designs like data dignity, on governance and legal institutions like data collaboratives, and on interaction paradigms such as machine teaching and human-in-the-loop systems. For instance, in contrast to GPT-3, some recent work from OpenAI on developing AI systems for pair programming, via Copilot, would also fall into this category. Now, the third lane of work breaks entirely with the autonomous intelligence trajectory. These approaches are the most diverse, and they focus on decentering the role of technology and its capabilities while centering complementarity, participation, and mutualism. This has led to a nascent but powerful ecosystem in which technological approaches and evolving human systems for goal setting and decision-making build on and co-evolve with each other. This lane of work includes many threads that began as disparate areas of practice. The human-computer interaction and human-centered design communities have a rich history of plurality, evolving a program of understanding needs and limitations, then building systems engineered to effectively fulfill those needs, allow for flexible use, and compensate for limitations. Distributed agency has entered the design of technology infrastructure itself, through self-organizing mesh networks and decentralized edge computing. Economics and mechanism design have also played an increasing role in designing and building decentralized digital ecosystems, ranging from microfinance and decentralized e-commerce platforms to radical alternatives such as those emerging from the distributed ledger and smart contract ecosystems. 
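To make the privacy-preserving machine learning stack mentioned above a little more concrete, here is a minimal, hypothetical sketch of federated averaging, one common technique in that stack. Nothing here comes from the talk: the function names (`local_update`, `federated_round`), the learning rate, and the toy data are all assumptions for illustration. The idea it demonstrates is simply that each participant keeps its raw data private and shares only a model parameter, which a coordinator averages.

```python
from statistics import mean

def local_update(private_data, global_model, lr=0.5):
    """Each participant nudges the shared parameter toward its own
    data mean. Only the updated parameter, never the raw data,
    leaves the participant."""
    return global_model + lr * (mean(private_data) - global_model)

def federated_round(client_datasets, global_model):
    """The coordinator averages the clients' returned parameters.
    It never sees any client's underlying data."""
    updates = [local_update(data, global_model) for data in client_datasets]
    return mean(updates)

# Three participants with private datasets that are never pooled.
clients = [[1.0, 2.0, 3.0], [10.0, 11.0], [5.0]]
model = 0.0
for _ in range(20):
    model = federated_round(clients, model)
print(model)  # converges toward the average of the clients' means
```

Real federated learning systems train full models with weighted averaging and add protections such as secure aggregation and differential privacy; this sketch only shows the basic data-stays-local pattern.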
Now, building on those ideas and on broader social science, there's an emerging agenda of social technology that harnesses social and computational science to build new and responsive institutions enabled by digital technology. Much of this has been practically harnessed by civic technologists, who harness collective intelligence to push the boundaries of knowledge. As these approaches have matured and developed, intersections and second-level complementarities have formed, giving rise to new directions. We have seen decolonial technologies that work to distribute power and voice, grounded in historical analysis of the existing post-colonial power relationships that shape our technical priorities, politics, and infrastructure. Work on human-computer interaction has begun to recognize the need for participatory action research and collaborative design that rigorously involves stakeholders in the design process, often utilizing the deliberative and constructive tools seen in other approaches. And there has been a flowering of work on decentralized technology, building heavily on the work of the distributed ledger communities, that aims to augment internet protocols to allow for web 3.0, or peer-to-peer, information transfer and decentralized network interaction patterns. This has led to the formalization of digital commons and knowledge commons, creating processes for systems from Wikipedia to open-source code repositories that provide access to collective innovation while instituting polycentric, multi-stakeholder governance structures across local, regional, and global levels. Now, these are diverse directions, and the landscape is constantly shifting, and yet this nascent, growing, and active ecology of digital pluralism makes it clear that we are not trapped on a fixed path toward the singularity. In conclusion, the essence of the autonomous intelligence trajectory is a rather authoritarian one. 
It envisions intelligence as a single, distinct, autonomous quality to be both reached for and feared, one that, once achieved, is uniquely transformative. It becomes a singularity that will be the so-called final invention that humans create. And once this general intelligence is developed, well, it will be impossible for humans to keep up or even contribute much directly. But we have another choice. We can replace competition, autonomy, and centralization as guiding goals. We can replace them with principles that support digital pluralization: designing for complementarity, increasing participation, and supporting mutualism. An emphasis on complementarity would mean evolving beyond the goal of automating human labor and explicitly designing systems that augment and support workers. Increasing participation would mean recognizing the contributions of humans to existing AI systems, ensuring fair and dignified labor conditions across the AI supply chain, and compensating not just data inputs but also labor inputs. This requires regenerating the digital commons rather than enclosing it. It requires fundamentally rethinking the practice of capturing public data and privatizing the economic benefits of the models that are indebted to it. The benefits should be shared with the communities maintaining the commons, and efforts should be made to return the resulting technologies, as much as possible, to that same commons. Indeed, when data derives from a source specific to some community or group of individuals, those people should have collective agency over the development, use, and design of the model, be publicly recognized as contributing to it, and share in any commercial benefits. And in a wider sense, greater participation means balancing the optimization of current, fixed, and measurable goals against future investment in broad-based stakeholder reflection on whether those short-term goals are the appropriate ones to begin with. 
And this means moving from thin representation to richer and layered representations; that is to say, deliberative technologies, collaborative design, and a fusion of policy change with technology development. Finally, supporting mutualism would require directly addressing the impacts of technology, rather than relying on policy makers or civil society to fix the problems technology creates or to balance highly skewed incentives. Clear accountability mechanisms must be established for the impact of a technology on the distribution of economic and political power, measuring it and reporting it along with other ESG metrics. Now, there is a long way to go before these centralizing goals can be productively redirected, and so that work must begin now. We must evolve beyond the focus on building autonomous, human-like intelligence, in preference for exploring a range of other possibilities that complement humans and human societies and facilitate participative cooperation. And by doing so, we will break down the centralizing tendencies, with their reactive and proactive dangers. We will move, via an embrace of digital pluralism and of complementarity, participation, and mutualism, toward greater collective flourishing. Thank you for listening. And now I will read my job description again. When we see the internet of things, let's make it an internet of beings. When we see virtual reality, let's make it a shared reality. When we see machine learning, let's make it collaborative learning. When we see user experience, let's make it about human experience. And whenever we hear that a singularity is near, let us always remember: the plurality is here. Thank you for listening. Live long and prosper.