According to the media, online chatbots are helping people to be more creative, but is the whole picture really that rosy? Digging deeper, the facts are in plain sight, yet no one in authority seems capable of holding this rather disturbing discussion. It is a genuinely complex issue, which is perhaps why the media has so far failed to have an honest, expansive debate about it. In response to recent popular-science news coverage of ChatGPT, what I outline here is how the drastic pace of technological change, driven by artificial intelligence, is about to upend the traditional world of work, and with it the wellbeing of large parts of England and Wales. The government's own research says this. The question is why there is no public debate about it, and why our politicians are seemingly so supportive of it. This is an issue I have been following for many years, mapping how changes in the use of technology are allowing seemingly unstoppable forces to reform society.

There is a lot of necessary information to cram into this piece, which is hard to do in a clear and concise way. The first draft failed, taking a massive detour that ended up as a piece of music celebrating Mario Savio's rant about the machine. Having got that out of my system, the second draft was meant to be a short news item, but after passing 2,000 words it was clear it needed a new approach, and so we end up here.

"So this is how liberty dies... with thunderous applause." As I reworked a one-second clip of R2-D2 in my first attempt to make a video, that line from Revenge of the Sith kept going through my mind. In Star Wars, the droids are essentially dumb voice-controlled tools that people use to do the menial chores in their lives, and nowhere in that fictional universe do the droids actually seem to replace people. Instead, they act like a slave class, where even the poorest person in society can own one to make their life easier.
The first obstacle to consider is the inbuilt bias of the media, in particular the business, economics and technology media, when covering issues of technological progress, especially as it applies to work and automation. The media dialogue is skewed toward a narrow message of affluence and progress, phrased within terms and assumptions that are only relevant to a narrow, unrepresentative audience of the like-minded aspirants it seeks to address. As a result, coverage of technological advances tends to favour the most educated or affluent, and to ignore the perspective and needs of the least affluent. Take, for example, the similar debate around the effects of home-working. Figures just released by the Office for National Statistics show that while 90% of those earning £50,000 a year or more can work from home, and 27% do, only 25% of those earning £10,000 or less can work from home, and only 8% do. Home-working in general is skewed towards the affluent, yet the media debate, as in the general bias towards office culture, favours the perspective of clerical or managerial roles, despite statistics showing that only a minority of jobs are office-based. A critical flaw within the popular discussion of ChatGPT, and of AI in general, is that it represents a similar distortion, both in its intended audience and in how representative of the whole nation that audience is. Figures in the media talk about AI as if we'll all end up with droids we can talk to, which will free us from the drudge of daily work by making us all creative geniuses, when in fact it will more than likely leave people employment-insecure rather than simply unemployed, with far lower job security and income. We must question not only what is being said within the popular dialogue about AI, but also who it is being said for, and whether that is representative of society as a whole.
Enter ChatGPT, where the inbuilt bias within the media's coverage leaves many of the most troublesome aspects of AI unexplored. About six months ago, YouTubers started producing content which had been written by ChatGPT. It was clunky and at points gibberish, and while many YouTubers reacted to it as an entertaining toy, others, particularly in the field of music, fully understood the potential of this tool to overturn the way people worked. Arguably, that splurge of content wasn't just about the developers of ChatGPT wanting to generate publicity. An AI requires training in order to work more reliably, and all those content creators being given a free trial were willingly, though perhaps unwittingly, helping in the development of the system. Here we hit the next problem of AI in general: the energy consumed in creating a functioning AI system so that it can give the desired response. This was highlighted in the Royal Institution lecture from last December, featuring the results of a survey published in Nature. Computers become more powerful by performing more calculations per second, and while each calculation may use slightly less energy over time, the fact that processing capacity is growing faster than energy use is falling means that overall energy consumption across the IT sector continues to rise. Machine learning and AI training consume a very large amount of processing power, and hence energy. Rather like the problems with Bitcoin mining, this has been driven by the use of multi-CPU systems and plug-in GPU units, which add large amounts of processing power to a standard computer, albeit at the cost of far higher power use. As the study in Nature showed, the processing power consumed by older methods of machine learning had been doubling every 24 months.
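To get a feel for why doubling times matter so much, the growth they imply can be sketched in a few lines of Python. This is just illustrative arithmetic, not data from the Nature survey itself: the 24-month and two-month doubling figures are the ones quoted in this piece, and the five-year horizon is an arbitrary choice of mine.

```python
# Growth factor after `months` of growth with a given doubling time.
def growth_factor(months: float, doubling_months: float) -> float:
    return 2 ** (months / doubling_months)

horizon = 60  # five years, purely illustrative

# Older machine-learning workloads: doubling every 24 months
print(growth_factor(horizon, 24))  # roughly 5.7x more processing power

# The largest current AI systems: doubling every 2 months
print(growth_factor(horizon, 2))   # 2**30, over a billion times more
```

The same five years turns a 24-month doubling into a modest multiple but a two-month doubling into an explosion, which is why the energy-use projections diverge so sharply.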
The latest AI systems, because of the far larger amounts of data they process, are doubling every three months, and for ChatGPT and similarly large AI systems that figure is doubling every two months; as shown in the Royal Institution lecture, by 2030 or soon after, data processing may be consuming 20% or more of the global electricity supply. What does AI training achieve? It makes the responses created by the system reflect a blended consensus of opinion drawn from all the content it has reviewed. Of course, as I note above, a large part of the news media's content is biased towards an idealised, narrow perspective. This means that what ChatGPT reflects isn't necessarily the dominant voices in, for example, political journalism, but the general sentiment across all public media. For right-wing media pundits, this has produced a truly infuriating result: ChatGPT shows a left-liberal bias in its responses. That is unsurprising, given that the mainstream media consistently assumes the public are further to the right than they actually are, and that in recent years research has shown an increasing level of prejudice in media discourse, despite the upcoming generations being far more liberal than their predecessors. And of course, Britain has the most right-biased media in Europe. Personally, though, I find ChatGPT to be boringly liberal, as whenever I take these tests I always end up in the anarchists' corner, with extreme left and libertarian views. Irrespective of that, ChatGPT is never going to give any truly radical responses, since it can only reflect a blend of views already in general circulation, encompassed within all the data upon which the system was trained. Its tendency to represent a sample across the whole spectrum of content means that the colour of its responses will always be beige, not red or blue. Finally, there's been a lot of talk lately about Britain's industrial strategy, or lack thereof.
What is not said, again because of the bias of the political and business media, is that for the last 50 years, whenever businesses and politicians talk about investing in productivity, what they're actually talking about is greater automation. The position expressed in research carried out for the government is that AI and related technologies should not cause mass technological unemployment, but that they may well lead to significant changes in the structure of employment across occupations, sectors and regions of the UK; the effects may be relatively small over the next five years, but could become more material over the next 10 to 20 years. The debate about the changing future of work seems inextricably linked to the tagline of 'high-paying jobs'. In reality, greater automation leads not so much to unemployment as to the end of traditionally secure, high-paying jobs and the growth of less secure and gig-economy working. Research shows that around 8% of jobs, or one and a half million, in England and Wales may be transformed by new technology over the next few years, a process that will accelerate until 2030. From the 1980s, though automation played a role, the offshoring of primary industries such as steel, mining and manufacturing played the greatest role in the decline of working-class communities. Now AI will do this to the middle classes. That trend will hit the fringes of the nation, furthest from the South East, the hardest, and overall, women, younger workers and those working part-time will be disproportionately affected. The catch here is that those who believe this will be positive for the economy assume people will retrain to use these new technologies, which so far has proven difficult because of the structural barriers to accessing education or in-work training. There is no evidence that AI will aid most people's creativity or get them better jobs.
From the many studies available, including those carried out for the government, there is a consensus that there will be disruption to the roles people play across many occupations, and that the scale of that disruption is hard to pin down beyond the general description of 'significant'. The greater issue here is that ChatGPT is not like previous waves of automation. ChatGPT will target the middle-level clerical roles across professions, from legal secretaries to copywriters to local-authority managers, who currently do rather well out of the technocratic knowledge economy. That's not reflected within the current debate over automation, which tends to focus on the impacts new technologies have on semi-skilled or unskilled workers. And certainly, most of the mainstream coverage of ChatGPT didn't relate it to the likely impacts of this and other AI tools highlighted in recent research. I think the most insightful discussion I've seen so far came from science YouTuber Tom Scott: "So why did I still have that feeling of dread? Artificial intelligence, text transformers and diffusion models, everything that we're currently seeing seems to be on that sigmoid curve of progress, and I don't know what point on that curve we've got to. If we're already most of the way up that curve, then cool. It's not going to take many jobs. If we're at the middle of that curve, then wow, we're going to get some really impressive new tools very soon. But that feeling of dread came from the idea that ChatGPT and the new AI art systems might be to my world what Napster was to the late 90s. The herald, the first big warning that this new technology, the thing that was going to change everything, was starting to actually change everything, where huge numbers of people, not just the nerds, were actively using it." Tom Scott, if you watch his huge number of videos, enjoys and actively promotes the idea of technological progress.
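The sigmoid, or S-shaped, curve Scott refers to can be modelled by the standard logistic function, and his point is that from inside the curve you cannot easily tell how far along it you are. A minimal sketch of that shape, with an arbitrary midpoint and steepness chosen purely for illustration, not as any measurement of AI progress:

```python
import math

def logistic(t: float, midpoint: float = 0.0, steepness: float = 1.0) -> float:
    # Standard logistic function: slow start, rapid middle, saturating end.
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Early, middle, and late points on the curve:
print(round(logistic(-4), 3))  # 0.018 -- barely started
print(round(logistic(0), 3))   # 0.5   -- the steepest growth
print(round(logistic(4), 3))   # 0.982 -- approaching saturation
```

The uncomfortable part is that the early and middle stretches of such a curve both look like accelerating growth while you are living through them; only in hindsight is the inflection point obvious.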
In this video, you can see that the penny has finally dropped, and that he truly realizes the scale of disruption AI tools like ChatGPT might bring to all kinds of employment. Yet at the same time, his appreciation of technology also makes him seemingly resigned to his powerlessness to change this outcome. We seriously need to talk about AI-based automation such as ChatGPT. Right now, the debate over the future of work is dominated by the ignorant, especially politicians and economists who feel they are immune to, or unaffected by, the changes to employment these systems will create. Many others, like Tom Scott, clearly do understand, but feel powerless to stand against the continued advance of neoliberal capital's dismemberment of the social contract. This is not about stopping technology. It's about who reaps the economic benefits of this process, how it accentuates national and global inequality, and the fact that under the increasingly unequal distribution of wealth, it is highly likely that those most negatively affected by these new systems will not receive any significant support to manage those disruptive effects. In the 1930s, economists such as Keynes believed that a century later people would only need to work for a few hours a day. Clearly, this did not come to pass. The economic rewards of high productivity, created by new technology and economic globalisation, were not shared; they were hoarded by a minute group of what we now call billionaires or oligarchs. There is absolutely no reason to assume this same pattern of immiseration will not repeat with the introduction of AI, unless, as a nation, we choose to oppose it. This is not an issue of tackling technology; it's an issue of tackling the dominant economic ideology that shapes these trends towards certain desired outcomes.