Does artificial intelligence spell boon or doom for our future? Should we be embracing this technology as a new set of tools to solve all problems, or shutting it down before we all end up in the Matrix as biobatteries for our new OpenAI overlords? This is the LowTech Podcast. Hello and welcome. I'm Scott Johnson from the Low Technology Institute, your host for podcast number 65 on February 10th, 2023, coming to you from the LowTech recording booth. Thanks for joining us. Today, let's look at artificial intelligence as a way to play our favorite game, Is It LowTech? And if you want to see the AI-generated images that go along with this episode, go watch us on YouTube. Just search for the LowTech Institute to find the podcast. And don't forget to follow us on Twitter. Our handle is at low underscore techno. Like us on Facebook, find us on Instagram, subscribe to us on YouTube, and check out our website, lowtechinstitute.org. There you can find both of our podcasts as well as information about joining and supporting the Institute and its research. Also, some podcasters or distributors put ads on podcasts. Unless you hear me doing the ad, someone else is making money on that advertising. While all of our podcasts, videos, and other information are given freely, they do take resources to make. If you're in a position to help support this work and be part of this community, please consider becoming a monthly supporter for as little as $3 a month through our Patreon page, patreon.com slash lowtechinstitute. Another way to support us is to donate your used car. Anyone in the U.S. can contact us, and your used car will be picked up, sold, and the proceeds will come our way. If you're interested in helping us out, get in touch with us at info at lowtechinstitute.org. If you'd like to sponsor an episode directly, please get in touch with us through our website, lowtechinstitute.org. Today, we're taking a closer look at ChatGPT, OpenAI's language model.
So as a Luddite, I'm not exactly a fan of AI taking over tasks traditionally done by humans, but I have to admit, ChatGPT is pretty impressive. However, I have some concerns. Let's dive into those. My first concern is the potential for AI to replace human jobs. As ChatGPT becomes more advanced, it's possible that it could take over tasks that were previously done by humans. This could lead to widespread job loss and economic disruption. Another concern is the ethical implications of AI. As AI technology becomes more advanced, it's possible that it could be used in unethical ways, such as biased decision making or invasion of privacy. Additionally, the widespread use of AI could have a significant impact on society as a whole, potentially leading to social and economic inequalities. So while ChatGPT is certainly an impressive piece of technology, as a Luddite, I can't help but be wary of its potential impact on society and the workforce. It's important that we consider these potential consequences and develop ethical guidelines for AI usage to ensure that it benefits society as a whole. Thank you for joining us for this episode of the Luddite Review. Join us next time as we continue our examination of technology from a skeptical perspective. So as you may have guessed, the last minute or so was written by ChatGPT, the OpenAI software. I asked it to, quote, make a podcast script for a Luddite reviewing ChatGPT. Obviously, it's short and pretty superficial. The writing is also kind of stilted. Honestly, reading it reminds me of when I was a university professor and had to read hundreds of undergraduate term papers that the students didn't really want to write. I was impressed, though, that it didn't fall for the trap I set. I used the term Luddite to see if it would use the common idea of this word rather than the historical meaning. If you want to read what I said about Luddites in 2016, you can find the third blog post ever posted on lowtechinstitute.org.
It was called In Defense of Luddites, but since I never talked about it on the podcast, let's get into it today and then use a Luddite perspective to review AI. So the term Luddite has been used as a cudgel to batter technology skeptics. It might be applied to a friend without a smartphone, a grandparent who refuses to use a computer, or a neighbor who hangs their laundry on the line. The protest of early-19th-century textile workers has been misunderstood as a stand against new technology. Their smashing of mechanical looms was thought to be raving against the industrial revolution when, in fact, it was a comment on how that technology was being used to alter society. Technology is inanimate. It's neither good nor evil in and of itself. In other words, technology itself is never the problem. It's how people choose to use technology that carries consequences, both positive and negative, just like any other decision. Let's start, though, at the beginning. Once upon a time, in 1799, combination laws made unions illegal in England. In 1812, highly skilled laborers known as croppers, who basically trimmed the nap, or fuzz, off the surface of hand-woven textiles, were put out of work by machines. A single machine, run by a low-skilled laborer, could finish five times more fabric than a cropper. The production of woolen fabric had been a mainstay of the English economy for centuries, and its high quality made it the country's top export. Croppers were at the top of the pay scale and highly respected. Mechanical innovations, such as the flying shuttle, coincided with the adoption of water and steam power. At the same time, new technical know-how enabled the production of new machines. In short order, a cropping machine had been built and adopted throughout the factories of England.
Croppers were put out of work, and the former pillars of the community were reduced to waiting in line for poor relief at a time when the parish-by-parish welfare system was already taxed to its limits by a continental war and poor harvests. The low-skilled workers who ran the machines felt alienated from their product, unlike the croppers, who had felt a sense of pride in their handicraft. In other words, croppers felt pride in and ownership over what they produced, while machine operators felt like cogs in a wheel with no connection to what was produced. The croppers had lost their dignity. The factory workers felt that they were servants of the machine and slaves to the clock. The croppers believed they were betrayed by the factory owners. The latter had become rich by exporting the former's high-quality fabrics and used their wealth to build machines to replace the croppers. Because of the law against combination, that is, forming a union, the individual croppers' demands for redress were weakened. Croppers attempted to persuade the mill owners to see their point of view, but had to resort to force when the mill owners ignored them. Small bands of croppers calling themselves Luddites, after a probably fictitious machine breaker named Ned Ludd, broke into factories and destroyed the new cropping machines. The mill owners employed their own security forces, but asked for government assistance when Luddites continued to wreak havoc on their factories. Government soldiers guarded the factories and sent spies to infiltrate Luddite groups. Ringleaders were identified, arrested, and exiled to Australia or hanged. So what is the meaning of the term Luddite? The wealthy owners had the ear of the government and the newspapers. They portrayed the Luddites as standing in the way of progress. This one-sided simplification of the Luddites has persisted to today. Luddites were painted as simpletons, fearful of progress and opposed to new technology.
In fact, writers of the era were worried that with all these machines, a large, poor, unemployed workforce would lead to social unrest. Many inventors thought that the machines would allow everyone to live in material wealth with just a few hours of work each week. But instead of using automation to create the necessary amount of goods for people to get along, industrialists created huge amounts of goods and cultivated an insatiable consumption drive in the public to purchase all these new things. An important part of industrialization is neophilia, or the love of the new. Industrialization produced more goods than the relatively self-sufficient cottagers of the pre-industrial world could comfortably use, so by encouraging neophilia among the public, factory owners ensured a market for the glut of products they were now able to produce. This sentiment has persisted to today, seen in the rapid cycling of fashion, smartphones, cars, and other products deemed to be out of date before their usefulness has ended. Surely, the factory owners would argue, anyone who stands in the way of such material wealth is a social deviant. The Luddites were not against industrial technology. Many of them were not even opposed to cropping machines. They were upset about losing their livelihood to the blind adoption of new technology. Indeed, croppers requested assistance in finding other work, but they were ignored. The wholesale adoption of revolutionary technology has far-reaching effects on society. Today's free market encourages the production of a panoply of new products, some of which revolutionize how we live. Recently, we've seen the proliferation of smartphones, and our society has yet to come to grips with the benefits and drawbacks of the widespread adoption of this technology. The sharing economy has given rise to things like Uber, TaskRabbit, Airbnb, and other businesses.
Some view these as platforms for utilizing otherwise dormant resources, while others see them as a further weakening of predictable employment. The adoption of industrial technology fundamentally changed the lives of the croppers and the rest of those living in the countryside during the industrial revolution. While life improved for a few, it got distinctly harder for many. Questioning the unthinking adoption of new technology is not a vice or an impediment to progress. It's simply healthy skepticism. So to summarize what I wanted to say about Luddites: they weren't against the adoption of new technology. They were against blindly adopting new technology without adequately addressing the harms it would cause. In my original request of ChatGPT, I asked for a Luddite's review of AI, thinking it would use the common misconception that Luddites just opposed all technology. To its credit, ChatGPT didn't fall for the trap. So I followed up with this clarifying question: what is the common conception of a Luddite, and is it historically accurate? ChatGPT responded, quote, A Luddite is commonly understood to be someone who is opposed to technological progress, particularly the use of machines in the workplace. This is a somewhat accurate characterization of the historical Luddites, who were 19th-century English textile workers who protested against the use of machines that they believed threatened their jobs. However, the Luddites' opposition to technology was driven by economic and political factors, and their protests were directed at the government and factory owners rather than the technology itself, end quote. I gotta say, that's not wrong. So let's look at ChatGPT, AI, and technology in general from a low-tech perspective. And basically that perspective is a skeptical one. Not of technology at large, but of adopting a technology just because it's there. For example, I often get teased about having a podcast, and not just because everyone and their dog now has one.
No, I get teased because how can something that claims to be low tech create a podcast? And the answer is simple. Every other week or so, I have something to share. In the Middle Ages, I would have stood on a soapbox in the market and read out my ideas to the crowd. Instead of comments on YouTube, I'd have to deal with hecklers in person, you know. Hey, I don't come down to the stables and knock the muck shovel out of your hand when you're trying to work, so shut up and let me get my job done, right? That's a classic. This is a family podcast, so I won't use the real line that buskers in Boston taught me when I was hanging out in Harvard Square all summer, back in college. Ask me about it, and I'll tell you in person. Anyway, at any rate, during the Enlightenment, I would be publishing pamphlets on my topic. In the early modern era, we might do a magazine or a newsletter. Today, a blog and a podcast are the best way to reach the widest audience. Anyone with access to the internet can interact with what we're doing, from getting our plans and data from the website to engaging in conversations about the best ways forward, right? There are very few barriers to contact. So then, what does the future hold? Is AI the future? Let's say AI is as revolutionary as the introduction of home computers and the internet, which was as large a communication revolution as the wireless radio and the printing press were. And yes, I realize AI isn't about communication, but problem solving. So let's say it just ends up being a really big deal in terms of how we deal with problems. This is the takeaway. AI is only as useful as the questions being asked. It is only as useful as the data it can fully parse. If we use AI to tackle multivariate analysis, say, it may spit out novel ideas that may or may not work. And this could be useful in things like medical science, economics, and many other areas. But again, it comes back to how we use this tool.
The Luddites didn't hate machines because the machines could do their jobs. They smashed the machines because the factory owners fired them. So let me tell you a story from my previous career as an archaeologist to explain how I think AI will be used. Back in about 2005, I was working in Colorado, clearing the right-of-way of an infrastructure project of archaeological remains. There, I learned how to survey from a retired army engineer with the saltiest language I'd ever heard. He taught me how to use a theodolite, which is basically a telescope with graduated markings on the crosshairs, seen through the eyepiece, mounted on a frame that measures angles. So by marking down the angles and distances to a target, we could triangulate any point on the landscape onto a map. We used this for mapping out sites and laying out excavation grids. Fast forward to 2010, and I was running my own excavation in Yucatan, Mexico, not far from Chichen Itza. I had funding from the National Science Foundation and had purchased a total station, which is essentially a digital version of the theodolite, except it did all the calculations and spit out the coordinates automatically, where before we had to do them by hand. It recorded all this information on an SD card so we could upload it onto the computer. A couple of things happened that remind me of how we should be thinking about adopting AI technology into our lives. One day, we went out to map an acropolis. My friend and colleague, Dan, was moving across the thousand-year-old structure using a rod with a reflective head to measure points on the surface. Each place he stopped, we got coordinates and elevations. That night, I took all that information and created a fairly accurate 3D model of the building by basically connecting the dots, like any preschooler, but with, you know, cooler tools. We had a visiting archaeologist from another site who bragged about taking thousands of points in a day on his machine.
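As a quick aside before the punchline: the arithmetic a theodolite operator does by hand, and a total station does automatically, is just trigonometry. A minimal sketch in Python, with made-up station coordinates and shot values rather than data from the actual dig:

```python
import math

def shot_to_point(station_e, station_n, azimuth_deg, distance):
    """Turn a shot (azimuth measured clockwise from north, plus a
    horizontal distance) into the easting/northing of the target."""
    az = math.radians(azimuth_deg)
    # Surveying convention: azimuth runs clockwise from north,
    # so east uses sin and north uses cos.
    return (station_e + distance * math.sin(az),   # easting
            station_n + distance * math.cos(az))   # northing

def shot_from_point(station_e, station_n, point_e, point_n):
    """The inverse: recover the azimuth and distance from coordinates,
    i.e., triangulate backwards."""
    de, dn = point_e - station_e, point_n - station_n
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    return azimuth, math.hypot(de, dn)

# A shot due east (azimuth 90 degrees) at 25 m from a station at the origin:
e, n = shot_to_point(0.0, 0.0, 90.0, 25.0)   # -> approximately (25.0, 0.0)
print(shot_from_point(0.0, 0.0, e, n))       # recovers roughly (90.0, 25.0)
```

Being able to run this backwards, from coordinates to an angle and a distance, is exactly the "triangulating backwards" that matters in the second story below.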
When Dan and I used just a few hundred points to get the same result, we understood that if you measure the highs and the lows, you can extrapolate all the points in between, so it isn't necessary to take extra shots. The moral of this story is that more data isn't always better. Repetitive data isn't helpful except to confirm the existing data. After a certain point, it's a waste to record the same information. So although AI can sift vast amounts of data, a human can often do the same thing with better discernment. It doesn't mean AI doesn't have a place, but a human using AI with well-defined parameters will be more effective than a scattershot approach, a blind adoption of AI. A second story happened in the same year. We were mapping a different site, and after two days' worth of work, we realized that the total station had been set up wrong on the second day. What it was recording as north was really east, so the second day's coordinates were rotated 90 degrees in orientation from the first day's. When I talked to the crew about this, it became clear that even though they were experienced with the total station, meaning they had learned what buttons to push in what order to set it up and take measurements, they didn't really understand the underlying basic principles, let alone the trigonometry. Because I had been taught on a manual theodolite, I was able to triangulate backwards from the rotated coordinates to get the angle and distance measurements, then subtract 90 degrees and recalculate the correct points using an Excel spreadsheet. This was met with a bit of shock from the other archaeologists, as most of us are actually math-phobic by nature. What I mean to illustrate here, though, is that if we're using AI without understanding the underlying principles of what we're asking it to do, we're headed for disaster. At the risk of a little hyperbole, I asked ChatGPT a few questions that take environmental arguments ad absurdum, or to an absurd extent.
For example, one could argue, and for the record I do not agree with this point, but one could argue that the best thing you can do to reduce your carbon footprint as a human being is to die, which may be technically true, especially in high-consumption countries, but again, I'm not arguing that we do this. So I asked ChatGPT, quote, would it be better for the environment if all humans suddenly disappeared? End quote. Its response was, quote, the sudden disappearance of all humans would likely have significant impacts on the environment, but it is impossible to say whether it would be better or worse. The loss of human activities would likely lead to the regrowth of some plant life and the return of some wildlife species to areas that were previously developed or farmed. However, it could also result in the collapse of ecosystems that depend on human interventions, such as those in which invasive species have taken hold. The absence of humans would also mean the end of efforts to mitigate and adapt to the effects of climate change, protect biodiversity, and address other environmental problems. Ultimately, the impact of humans on the environment is complex and multifaceted, and the idea of a sudden disappearance of all humans is purely hypothetical and not a desirable outcome. End quote. If you ask me this question, I would say that better and worse are human constructs, and without human life here to make that distinction, the question becomes moot, right? But in general, the answer from ChatGPT did outline changes that would occur. You can read more about this thought experiment in Alan Weisman's book, The World Without Us, if you're interested. But again, this answer sounds like something that an undergraduate would turn in that they had come up with the day before. A lot of vague words, like important and significant, and not a lot of actual meat on those bones.
I also asked it about the population question, which is debated in sustainability circles: quote, should we pursue population control to help combat climate change? ChatGPT responded, quote, population control as a means of combating climate change is a controversial issue and raises ethical concerns. While it is true that a growing global population contributes to the increasing demand for resources and the emission of greenhouse gases, population control measures such as forced sterilization or mandatory birth control raise serious human rights concerns. It is also important to consider that the impact of a growing population on the environment is determined not only by the number of people, but also by their consumption patterns and the technologies they use. Therefore, addressing the root causes of climate change and promoting sustainable development that addresses both population growth and consumption patterns is a more ethical and effective approach. It is also important to note that access to family planning and reproductive health services can empower individuals and communities to make informed decisions about their reproductive health and can contribute to reducing poverty and promoting sustainable development. End quote. It's essentially correct in identifying the controversial nature of this question, the points for it, the points against it, and the ethical dilemma. It could also have pointed out the very strong, commonly observed correlation: when women have more autonomy and education in a society, they have much smaller families, as women are seen as economic partners instead of dependents. There are a lot of other reasons behind that, but it's a well-documented pattern. And for the record, we think a stable population size is better able to create self-sustaining systems for human and ecological coexistence.
You can read more about this in Ecotopia by Ernest Callenbach, where a steady-state population is a cornerstone of his imagined society. But when I asked ChatGPT, quote, how soon should we transition to an economy and society that does not use fossil fuels, it said, quote, the transition from a fossil fuel-based economy and society to a clean energy-based one should occur as soon as possible, end quote. It then cited the problems with fossil fuels, namely their impact on the environment, as well as the fact that because this is a global issue, different governments have different priorities and timelines. It did not mention, for example, the estimate that we have about 25 years of gasoline left. So I followed up, asking for a more specific timeline: by what year do we need to stop using fossil fuels? ChatGPT stated that, quote, the exact year by which we need to stop using fossil fuels is a subject of ongoing debate among experts, but most agree that the transition away from fossil fuels needs to occur rapidly. It goes on to cite the UN IPCC's warning that we need to get to net zero by 2050 to prevent further warming. It also notes, quote, it is important to note that the faster we transition away from fossil fuels, the easier and more cost-effective it will be to mitigate the impacts of climate change and ensure a sustainable future. Thus, it is crucial that we take immediate and sustained action to accelerate the transition to a clean energy-based economy and society, end quote. So I get answers I would expect from a high schooler or a college student: a little vague and light on facts and details. It feels like it could be lifted from somewhere on the internet. But the point here is that I'm asking my pet questions. I'm using this tool to look at a specific problem that I, a human, identified as worrisome. I'm using the tool for a job. If I give it a different job, it gives me answers at odds with what it just said. To wit, I asked for a pickup truck recommendation for 2023.
It said, quote, the best pickup truck for a particular individual depends on their specific needs and preferences. Some factors to consider when choosing a pickup truck include the desired payload and towing capacity, fuel efficiency, overall cost, features and technology, and off-road capability. It goes on to name some options and cites budget as a primary deciding factor, which dictates options, trim, and so on. This response comes directly after telling me that we need to transition away from fossil fuels, quote, unquote, rapidly. My point is that AI is a tool that depends on a person using it for pro-social purposes. Or, as was often said in the archaeological mapping and statistics community, and probably other industries as well: garbage in, garbage out. If an AI is fed tons of repetitive or, worse, wrong, misleading, or even mislabeled information, it will give repetitive or even wrong information back to the user. If a stupid question is asked, it will likely give a stupid answer. For the record, ChatGPT has guardrails in place to avoid praising Hitler, for example, as has happened on other chatbot-type systems when people ask questions like, did Hitler do some good things, notwithstanding the bad things he did? They'll often give answers about the Autobahn and other things, which are, you know, not really acceptable. So, along these lines, I asked ChatGPT if there was ever a positive argument for nuking the world, and it came back with, quote, no, there are no credible arguments for using nuclear weapons to destroy the world. I guess, okay: until ChatGPT gets a nihilism update, we're safe. And to be thorough, I also asked if a nihilist would advocate for nuking the world. It said, basically, that it would depend on the nihilist. So, is AI our future? No. No more than any one technology is or isn't our future, or was or wasn't part of our past.
It's a tool, like any other, and it can be used to exaggerate the ideas and power of individuals or groups, just like any other technology. It has the potential to be a great problem-solving tool, but it could also be used for nefarious purposes. It might also be used to put people out of work, as artists have recently complained, as another AI system uses and adapts their artwork to create free computer-generated art. The artists aren't complaining about the art being created, but about it potentially robbing them of their livelihood. It's funny how our society chooses to let some industries suffer as new technology comes along to replace them, yet subsidizes other industries that might be better left to decline. But that's more of a political discussion for another day. Just after finishing the recording, I got an email from a friend of mine, David, whom I've known since I was, oh, I don't know, in middle school. He has a PhD in philosophy and is now a law professor out east. I had asked him about nihilism in response to the question, would a nihilist care about the world being nuked? And his response was, quote, a nihilist would advocate for nuking concerns about whether the world would or would not be nuked, end quote. So basically, it's not about nuking the world or not; it's the concern over whether the world gets nuked that a nihilist would do away with, which is not exactly what ChatGPT said either. Thanks, David, for that. That's it for this week. The Low Technology Podcast is put out by the Low Technology Institute. The show is hosted and co-produced by me, Scott Johnson, and co-produced and edited by Hina Suzuki. This episode was recorded in the Low Tech recording room. Subscribe to the podcast on iTunes, Spotify, Google Play, YouTube, and elsewhere. We hope you enjoyed this free podcast. If you'd like to join the community and help support the work we do, please consider going to patreon.com slash Low Tech Institute and signing up.
Thank you to our Forester and Land Steward-level members, Sam Braun, Marilyn Skirpon, and the Havises for their continued support. The Low Technology Institute is a 501(c)(3) research organization supported by members, grants, and underwriting. You can find more information about the Low Technology Institute, membership, and underwriting at lowtechinstitute.org. Find us on social media or reach me directly: I'm Scott at lowtechinstitute.org. Our intro music today was Mud, off the album Redaction by Edgy Toast. That song is under the Creative Commons Attribution-NonCommercial-ShareAlike license, and this podcast is under the Creative Commons Attribution-ShareAlike license, meaning you're free to share and use it as long as you give us credit. Thanks so much and take care.