Mental health problems are rising even as available services are contracting. Bringing professionals and patients together costs time and money, but we can easily acquire samples of speech via our networked infrastructures. Machine listening offers the prospect of early intervention through a pervasive presence beyond anything that psychiatry could previously have imagined. Machine learning's skill at pattern finding means it can be used for prediction. As Thomas Insel says, we are building digital smoke alarms for people with mental illness. Insel is a psychiatrist, neuroscientist and former director of the US National Institute of Mental Health, where he prioritised the search for a preemptive approach to psychosis. He jumped ship to Google and then founded a startup called Mindstrong, which uses smartphone data to transform brain health and detect deterioration early.

The number of startups looking for traction on mental states through the machine analysis of voice suggests a restructuring of the productive forces of mental health, such that illness will be constructed by a techno-psychiatric complex. HealthRhythms, for example, was founded by the psychiatrist David Kupfer, who chaired the task force that produced DSM-5, the so-called bible of psychiatry, which defines mental disorders and their diagnostic symptoms. The HealthRhythms app uses voice data to calculate a log of sociability in order to spot depression. Sonde Health screens acoustic changes in the voice for mental health conditions; "we're trying to make this ubiquitous and universal", says the CEO. Meanwhile, Sharecare scans your calls and reports if you seem anxious; founder Jeff Arnold describes it as "an emotional selfie". Like Sonde Health, the company works closely with health insurers, and HealthRhythms' clients include pharmaceutical companies. It's hardly a surprise that Silicon Valley sees mental health as a market ripe for Uber-like disruption. Demand is rising, orthodox services are being cut, but data is more plentiful than ever.
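As a concrete illustration of what a voice-to-prediction pipeline of this kind involves, here is a minimal, hypothetical sketch in Python: a handful of acoustic features are extracted from speech clips and a classifier is fitted to labels of unwell versus well. The file names, labels and feature choices are assumptions made purely for illustration, not any of these companies' actual methods.

```python
# Hypothetical sketch of voice-based screening: summarise each speech clip
# as a few acoustic features and fit a classifier on well/unwell labels.
# File names and labels are invented; real systems train on far more data.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def acoustic_features(path):
    """Return a small feature vector (pitch statistics, spectral shape, loudness)."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=300, sr=sr)        # pitch track
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape
    rms = librosa.feature.rms(y=y)                        # loudness envelope
    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],                  # pitch level and variability
        mfcc.mean(axis=1),
        [rms.mean()],
    ])

# Hypothetical labelled clips: 1 = labelled "unwell", 0 = labelled "well".
paths = ["clip_001.wav", "clip_002.wav", "clip_003.wav", "clip_004.wav"]
labels = np.array([1, 0, 1, 0])

X = np.stack([acoustic_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# The classifier's outputs are the "risk scores" described above; a real
# study would of course need held-out evaluation, not training-set scores.
print(clf.predict_proba(X)[:, 1])
```

The point of the sketch is only to show how thin the representation is: a voice reduced to a short vector of numbers is all the algorithm actually "hears".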
There's a mental health crisis that costs economies millions, so it must be time to move fast and break things. But as Simondon and others have tried to point out, the individuation of subjects, including ourselves, always involves a certain technicity. Stabilising a new ensemble of AI and mental health will change what it is to be considered well or unwell. The ubiquitous application of machine learning's predictive power in areas with real-world consequences, such as policing and the judicial system, is stirring an awareness that its oracular insights are actually constrained by complexities that are hard to escape. The simplest of these is data bias: a programme that only knows the data it is fed, and which is only fed data containing a racist bias, will make racist predictions. But surely, proponents will say, one advantage of automation is to encode fairness and bypass the fickleness of human bias, to apply empirical and statistical knowledge directly and cut through the subjective distortions of face-to-face prejudice. But here's the rub: it's mathematically impossible to produce all-round fairness. Machine learning's probabilistic predictions are the results of a mathematical fit, the parameters of which are selected to optimise specific metrics. There are many different mathematical ways to define fairness, and you can't satisfy them all at the same time; a toy example after this passage makes this concrete. Proponents might argue that with machinic reasoning we should be able to disentangle the reasons for various predictions, so we can make policy choices about the various trade-offs. But there's a problem with neural networks, which is that their reasoning is opaque, obscured by the multiplicity of connections across their layers, where the weightings are derived from massively parallel calculations. If we apply deep learning to reveal what lies behind voice samples, taking different tremors as proxies for the contents of consciousness, the algorithm will be tongue-tied if asked to explain its diagnosis.

And we should ask who these methods will be most applied to, since to apply machinic methods we need data, and data visibility is not evenly distributed across society. Institutions will have much more data about you if you're part of the welfare system than if you're from a comfortable middle-class family. What's already apparent from the field of child protection, where algorithms are also seen as promising objectivity and pervasive preemption, is that the weight of harms from unsubstantiated interventions will fall disproportionately on the already disadvantaged, with the net effect of automating inequality.

Most AI only performs well when there's a lot of data to train on. These systems need voice data labelled as coming from people who are unwell and from people who are not, so the algorithm can learn the patterns that distinguish them. The uncanny success of Facebook's facial recognition algorithms came from having huge numbers of labelled faces at hand: faces that we, the users, had kindly labelled for them as belonging to us, or by tagging our friends, without realising we were also training a machine. If the product is free, you are the training data. The democratic discourse around voice analysis is hushed, and yet we're increasingly embedded in a listening environment, with Siri and Alexa and Google Assistant and Microsoft Cortana and Hello Barbie and My Friend Cayla, and our smart cars, and apps and games on our smartphones that request microphone access.
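To make the fairness point above concrete, here is a toy arithmetic sketch with invented numbers: the same screening tool applied to two groups with different underlying base rates of the condition. Equalising some fairness metrics (precision and true positive rate) forces others (false positive rate, overall flagged rate) apart, which is the trade-off the impossibility results describe.

```python
# Toy illustration with invented numbers: a screening tool applied to two
# groups with different base rates of the condition cannot equalise every
# fairness metric at once.
def rates(tp, fp, fn, tn):
    return {
        "flagged rate":        (tp + fp) / (tp + fp + fn + tn),  # demographic parity
        "true positive rate":  tp / (tp + fn),                    # equal opportunity
        "false positive rate": fp / (fp + tn),
        "precision":           tp / (tp + fp),                    # predictive parity
    }

# Group A: 50% of people have the condition; Group B: 10% do.
group_a = rates(tp=40, fp=10, fn=10, tn=40)
group_b = rates(tp=8,  fp=2,  fn=2,  tn=88)

for metric in group_a:
    print(f"{metric:20s}  A = {group_a[metric]:.2f}   B = {group_b[metric]:.2f}")

# Precision and true positive rate come out equal for both groups, but the
# false positive rate and the share of people flagged do not; with unequal
# base rates, no classifier short of a perfect one can make all four match.
```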
Where might our voices be analysed for signs of stress or depression in a way that can be glossed as legitimate under the General Data Protection Regulation? On our work phone? On our home assistant? While driving, or when calling a helpline? When will using an app like HealthRhythms become compulsory for people receiving psychological care? Let's not forget that in the UK we already have community treatment orders. Surveillance is the inexorable logic of the data-diagnostic axis, merging with the beneficent idea of public health surveillance and its agenda of epidemiology, but never quite escaping the long history of sponsorship of speech recognition by the Defense Advanced Research Projects Agency (DARPA). It's a history Apple doesn't often speak of: it acquired Siri from SRI International, which had developed it through a massive DARPA contract. Before we imagine that the smart speakers in our homes could be monitoring changes in our speech as we ask for the news, weather and sports scores, detecting disease far earlier than is possible today, we need to know how to defend against a therapeutic Stasi.

It might seem far-fetched to say that snatches of chat with Alexa might be considered as significant as a screening interview with a psychiatrist. But this is to underestimate the aura of scientific authority that comes with contemporary machine learning. What algorithms offer is not just outreach into everyday life but the allure of neutrality and objectivity: the idea that, by abstracting phenomena into the numbers that enable machines to imitate humans, optimisation can be applied to areas that were previously the purview of human judgement. Big data seems to offer sensitive probes of signals beyond human perception. It doesn't seem to matter that this use of voice pushes the possibility of mutual dialogue further away, turning patients' opinions into noise rather than signal. Machinic voice analysis of our mental states risks becoming an example of epistemic injustice, where the authoritative voice comes to count for more than our own.

Of course, mental health problems can be hugely challenging for everyone involved, and in the darkest moments of psychosis or mania people are not going to have that much to say about how their care should be organised. But in between episodes, who is better placed to help shape the specific ideas for their care than the person who experiences the distress? They have the situated knowledge. The danger with all machine learning is the introduction of a drone-like distancing from messy subjectivities, with the risk that this will increase thoughtlessness through the outsourcing of elements of judgement to automated systems. The voice as analysed by machine learning will become a technology of the self, in Foucault's terms, producing new subjects of diagnosis and intervention whose voice spectrum is definitive but whose words count for little. The lack of voice for users in mental health services has been a bone of contention since the 1960s, with the emergence of user networks that put forward alternative views, seeking to be heard over the stentorian tones of the psychiatric establishment: groups like Survivors Speak Out, the Hearing Voices Network, the National Self-Harm Network, and Mad Pride.
But the introduction of machine listening that dissects voices into quantifiable snippets will tip the balance of the wider apparatus towards diagnostic determinism, especially in this era of neoliberal austerity. And yet, ironically, it's only the individual and collective voices of users that can rescue machine learning from talking itself into harmful contradictions, that can limit its hunger for ever more data in pursuit of its targets, and that can save classifications from overshadowing uniquely significant life experiences. Designing for justice and fairness, not just for optimised classifications, means that discourse and debate have to invade the spaces of data science. Each layer of the neural network must be balanced by a layer of deliberation, each datafication by caring human attentiveness. If we want the voices of the users to be heard over the hum of the data centres, they have to be there from the start, putting the incommensurability of their experiences alongside the generalising abstractions of the algorithms.

We are developing AI listening machines that can't explain themselves, that hear things of significance in their own layers which they can't articulate to the world but which they project outwards as truths. How would these AI systems fare if diagnosed against DSM-5 criteria? If objectivity, as some post-relativity philosophers of science have proposed, consists of invariance under transformation, what happens if we transform the perspective of our voice analysis, looking outwards at the system rather than inwards at the person in distress? We could ask what our machines might hear in the voices of the psychiatrists who are busy founding start-ups, or in the voices of politicians justifying cuts to services because they paid off the banks, or in the voice of the nurse who tells someone forcibly detained under the Mental Health Act, "this ain't a hotel, love".

Prediction is not a magic bullet for mental health and can't replace places of care staffed by people with time to listen. What we need is a society where precarity, insecurity and austerity don't fuel a generalised unhappiness. The dramas of the human mind have not been scientifically explained, and the nature of consciousness still slips the net of neuroscience. So why should we restructure the production of truths about the most vulnerable using computational correlations? The impact of AI in society doesn't pivot on the risk of false positives but on the redrawing of the boundaries that we experience as natural fact. The rush towards listening machines tells us a lot about AI and the risk of believing it can transform intractable problems by optimising dissonance out of the system. If human subjectivities are intractably co-constructed with the tools of their time, we should ask instead how our new forms of calculative cleverness can be stitched into an empathic technics that breaks with machine learning as a mode of targeting and weaves computation together with ways of caring.

Thanks very much. I'd just like to say that this is an abridged version of something that's online on openDemocracy, where there are many, many links in the online version. It's really intended as a kind of research primer, to enable people to find out more about what's going on with AI in society. Thanks very much.