Welcome, everybody, to the Undergraduate Economics Club 2023 debate. I think we have a very exciting topic this year, which is: are generative artificial intelligence programs good for the US economy? This is a topic that I've been wondering a lot about myself, so I'm very interested to see what the two teams are going to say. I've also been wondering about this on a smaller scale: are these technologies like ChatGPT going to be good for our students or bad for our students? For our teaching, are we going to be able to tell if an essay is written by a student or by ChatGPT? Or maybe this will open up all kinds of new opportunities for learning for students. So it's hard to say what the effect is going to be on education or on our ability to teach, and today we're going to approach a much bigger question: what is going to be the effect on the whole economy? We've got the pro team over here and the con team over here, and I'm very excited to see what they have to say. Ellie will be moderating the debate. Thank you. Excellent. Thank you, Itai. Welcome to the 2023 annual Undergraduate Economics Club debate. My name is Ellie Zeiper. I'm a sophomore economics and OIM major, and I will be your moderator for today's debate. The session is being recorded, and the recording will be available on the economics department website. Today's proposition for debate is: generative artificial intelligence programs are good for the US economy. On the matter of format, the debate will go as follows. We will begin with 15-minute opening statements from each group, starting with the affirmative. Then we will have a five-minute break. Next, we will move into a 16-minute argument and rebuttal section broken down into four four-minute segments, meaning it will alternate from the affirmative speaking for four minutes to the negative speaking for four minutes, et cetera. We will then have another five-minute break. 
We will then have a 10-minute question and answer portion from the judges. We will finish with five-minute closing statements from the affirmative and the negative, with the negative having the last word. On the matter of keeping time, I will be tracking the time on my phone to ensure everyone is getting the appropriate amount of time to speak their case. I will also display a five-minute and a one-minute warning to notify the speaker when they have five minutes and one minute left. I will now introduce the participants and the judges. For the affirmative, we have Nicholas Quigley, Sankal Korapalli, Grace Davis, and Kisan Hong. For the negative, we have Owen Moroski, Barrett Patine, Alex Gillespie, Joshua Chase, Lee Sutherland, Finn Kniff, and Olivier Bradley. Our judges this year are Itai Cher, associate professor and undergraduate program director of economics; Deepankar Basu, professor and graduate program director of economics; and Chris Booner, assistant professor of economics. Thank you all for taking the time to judge the debate this afternoon. Are there any questions about the format or timing of the debate? Great, now that all of the rules and the participants have been introduced, we will be able to start the debate. Let me prepare my timer and I will give the floor to the affirmative. You have 15 minutes for the opening statement, starting in one second. You may now begin. Thank you, Ellie, and good afternoon to all. Our team thinks of generative AI as the Robin Hood of the US economy for three main reasons: first, stealing away inefficiency; second, empowering workers; and third, redistributing the wealth of productivity to every corner of the nation. When we look at the growth of a country's economy, a significant factor is economic growth, that is, the increase and improvement of goods and services, which brings us to quantity and quality, and which is perfectly elucidated by two main things: first, GDP, and second, efficiency. 
Furthermore, Goldman Sachs predicts that the financial impact of generative AI on GDP in the United States by 2030 will be significant. The US has already noticed the considerable impact of generative AI on industries such as media, entertainment, advertising, and design, affecting a trillion-dollar market size. Generative AI has the potential to significantly improve GDP in the United States and globally, with estimates of roughly a 1.5 percentage point increase in US labor productivity growth. Furthermore, generative AI can enhance content creation in industries like media and entertainment. These kinds of algorithms can generate realistic images, videos, and graphics, saving time and resources for content creators. This efficiency can lead to increased productivity, faster turnaround times, and cost savings, ultimately contributing to economic growth in these industries. According to HubSpot, 75% of marketers say generative AI helps them create more content than they would without it, and 77% of marketers agree that generative AI could help them create content more efficiently. We are already seeing the benefits of generative AI just in increasing and improving the goods and services produced, and with a predicted increase in the value and output of goods to follow, generative AI is crucial to boosting economic growth. With that in mind, you may have questions about the resulting effect on jobs from generative AI, and with that, I'll pass it off to Nick. AI also has the capacity to create jobs and even new industries through technological innovation. According to the World Economic Forum, AI will create 97 million new jobs by 2025. This is a huge figure that runs counter to the narrative that AI destroys jobs and creates unemployment. PwC predicts that AI will create around one million jobs in the healthcare industry, as AI-assisted technician will become a new profession. 
Additionally, the AI maintenance workforce will skyrocket, as will machine learning engineers, software developers, and data scientists. As you can see, the projected growth spurring from AI tracks with historical trends in long-run occupational growth. In a variety of industries, especially professional and administrative work, millions of jobs that exist today did not exist in the 1940s. Today, the professional sector employs the most Americans out of any industry. AI is slated to have similar effects, innovating job growth in unforeseen ways. This economic growth will need to be paired with government incentives, decreased regional concentration, and training programs to provide workers with new skills. The emerging markets AI creates will push society into a new innovative landscape, providing new jobs and societal benefits. In addition to a growth in the supply of jobs, AI will have a transformative effect on the labor market by improving the lives of workers in existing sectors with minimal unemployment effects. According to estimates by Goldman Sachs researchers, 63% of jobs in the near term will be complemented by AI, 30% will be unaffected, and only up to 7% of jobs will be substituted. This large-scale complementary effect will make countless workers' lives easier by delegating the more tedious tasks of their jobs to AI. According to this forecast by Gartner, by 2030, 44% of AI business value will be derived from decision support and augmentation. This forecast not only shows economic growth spurred on by generative AI to the tune of $3 trillion, but also that most of that will be complementary to existing workers. Workers will be less focused on sifting through large databases or manually answering repetitive queries and more focused on the human elements of their jobs, the parts more likely to inspire passion and productivity. 
As for the jobs that may be replaced, the researchers singled out the legal industry and the administrative industry as being most likely to have jobs replaced. However, such trade-offs are often the cost of innovation. The invention of the typewriter and later the computer also put some bookkeepers and secretaries out of a job, but we accepted that trade-off in exchange for the immense complementary boost for many employed in those sectors, as well as countless others in various industries. Public policy changes will also be necessary to help those left behind in the AI boom, such as integrating 21st-century skills, including communication, complex analytics, and creativity, into education and job training. Additionally, this retraining will cause workers to become less replaceable and more productive, which could have positive effects on employee wages. AI will also provide benefits to the consumer. Developments in generative AI will lower prices for consumers due to increased efficiency. With this increased spending power comes more contribution to the market as well as disposable income that can be saved or spent on leisure. In addition, AI will provide better and more personalized products to individuals, including life-saving applications like medical diagnoses or more trivial ones like customized dating app advice. AI has the power to transform the lives of the common man for the better, and if implemented thoughtfully, it can be the rising tide that lifts all boats. All right, so now I'll provide some insight into the specific application of generative AI in the IT sector. Here the key term is process automation, which includes code generation, troubleshooting, and data entry, manipulation, or processing. There will be less human error, and there will be lower costs involved. A good example of this would be the field of data science and analysis. It's very rarely the case that you have a perfect data set after you collect the data. 
It's usually the case that you have to clean it, you have to take care of missing values, you have to take care of nonsensical values, and that is where generative AIs can shine. They can help with value imputation, for example, and just take care of the tasks that, until now, humans have had to do. Another good example of generative AI application in the IT sector would be the company called APN AI Skill Designer, or APN. What they have done is provide services to businesses for email classification and document classification. That means, for example, having received customer reviews, responses, inquiries, feedback, and so on, they can classify it for a business so the business can make better decisions or better solutions to increase customer satisfaction. And equally, with document classification, they can thereby increase organizational efficiency. Then a quick example that relates to the finance sector: Liberty Mutual Insurance Company has collaborated with MIT on implementing generative AI into their risk assessment methodology. What they've done, for example, is create a computer vision system that allows them to detect certain risk profiles or risky conditions of roads to improve their risk assessment in general. I'll pass it off to Nick to go deeper into the finance sector. Every year, billions of dollars are invested into financial markets all over the country. Why? Because we have confidence that our money will be safe in the hands of banks. Trust is one thing a functioning economy cannot do without. But in an age where data breaches are becoming increasingly common, trust is hard to come by. To build assurance in the market, companies are strengthening their cybersecurity networks by implementing new software and hiring more analysts to run it. But it is not enough. 
Cybersecurity breaches are much more frequent now than ever before, with a 30% increase in attacks within the past 10 years. Though data breaches often originate outside and are associated with external sources such as hackers, hundreds of preventable breaches are perpetrated every year internally. On the whole, cybersecurity today fails because the operating systems are too complex for employees to use and the data are just too numerous to be properly checked for threats. And even a small error caused by a person with the best intentions can spell disaster for a multimillion-dollar financial firm and the millions of people whose money they are entrusted with. This is where generative AI models have a role to play in saving the financial sector of the economy. AI has the capability to comb through copious amounts of data almost instantaneously, and thanks to machine learning, it knows exactly what to look for. This can take an arduous job that requires hours, even days, to perform by a team of employees and reduce it down to a simple task that takes seconds. When AI flags information it finds suspicious, the result is reported to an analyst or supervisor who ultimately has the final say on what to do with that information. This acts as a solution to both external and internal financial threats. SEC investigations into embezzlement also often turn up empty-handed. We have seen firsthand how often they miss key data, to the detriment of us all. Imagine if Bernie Madoff had been arrested before he defrauded investors out of tens of billions of dollars instead of after. Again, the quantity of information required to sift through in order to convict Madoff was just too large for a team of experts to successfully handle. But the inconsistencies in the company's financial records would have been easily flagged by a more advanced security network. 
The amount of time and money saved by the integration of generative AI into existing security networks is unparalleled by any existing forms of technology we have today. Major players are already incorporating generative AIs such as ChatGPT, companies that include Morgan Stanley and the consultants at Bain & Company. And it is clear why they are doing this: they wish to keep their clients safe and reinforce the integrity the market is so heavily reliant on. It is time to leave antiquated security procedures in the past and give financial markets the tools to combat today's challenges before they occur, instead of being left to regret the consequences of fraud and theft that could have so easily been prevented. Generative AI technologies are here to stay, and they will have transformative positive impacts on GDP, the labor market, consumers, financial institutions, and security. Let's not deny the inevitable crawl of technological progress and instead embrace the opportunities generative AI will provide. Thank you. You don't wish to use any more time? Oh yeah, here. Okay, excellent. The negative team will now have 15 minutes to present its main argument. I'll restart it. Good to go. We firmly believe that generative AI is bad for the economy for six key reasons. First, the conception and application of AI under capitalism represents the next iteration of a class war waged since the Industrial Revolution, in which the middle class's wages, bargaining power, and employment are eroded for the profit of the elite upper class. Second, AI is inherently biased, replicating widespread biases at the input level, obscuring them at the transformation level, and perpetuating them at the application level. Third, AI hinders not only academic integrity in our higher ed institutions but also the human capital development of the next generation, leaving our future workforce underskilled and over-reliant on AI. 
Fourth, through its usage in politics and gradual replacement of search engines, AI exacerbates misinformation, obscures nuance, and discourages the critical thinking necessary for informed voting and decision-making. Fifth, because generative AI companies operate in oligopolistic markets where power is concentrated in the hands of a select few, AI decreases competition, increases prices, and generally hurts the people. Sixth, because of its generative capabilities, AI disrupts the creative industry and hurts artists, authors, and musicians, whose works are now at risk of being used without their consent or of being replaced entirely by AI-generated content. As we will see, AI is unequivocally harmful to the U.S. economy. I will now pass it on to Owen, who will start us off with AI's harms to the middle and lower classes. Contention one: AI is the next iteration of the class war. Our capitalist system seeks to maximize profits for firms and their owners. Firms want to cut costs in any way they can, and they will use AI to automate their work first and cut these costs. But the costs they are cutting aren't just line items; they happen to be people's wages, which they rely on to feed their families and for all the other needs of life. A study by Bruegel found that as many as 54% of jobs in the EU face the risk of computerization within the next 20 years, with Goldman Sachs predicting two thirds of occupations are at risk of being partially automated by AI. Furthermore, a McKinsey study found 30% of the share of jobs could be replaced. We are already seeing the loss of these jobs in the middle class, especially in post-secondary white-collar jobs, in industries such as manufacturing and assembly line work, data entry and analysis, financial services, legal services, insurance underwriters and claims representatives, agriculture, and computer programming. 
But this is only the start as AI becomes more and more developed in our economy. The workers losing their jobs now have to go through reskilling if they hope to keep a job in the middle class, or else they have to pick up unskilled labor, and this is causing workers to feel alienated. Moreover, because of the high unemployment that we will see from the loss of jobs, the labor force will lose their collective and individual bargaining power, and they will not be able to argue for fair benefits, leading to higher job polarization. And this is only the start as AI becomes more prevalent in the workforce, so we must act now to fix this issue. Our contention two is that AI is inherently fraught with bias. Our next argument focuses on the bias that is endemic to AI and cannot be eliminated. More specifically, we see systemic, harmful biases at three levels of AI: at the input level, at the transformation level, and at the application level. A report published last year by the UN Habitat for a Better Urban Future summarizes this problem eloquently, explaining that AI systems reinforce the assumptions in their data and design. In order for an algorithm to reason, it must gain an understanding of its environment. This understanding is provided by the data. Whatever assumptions and biases are represented in the data set will be reproduced in how the algorithm reasons and in what output it produces. Similarly, design choices are made all along the AI lifecycle, and each of these decisions affects the way an algorithm functions. Because negative societal assumptions may be reflected in the data set and in design choices, algorithms are not immune to the discriminatory biases embedded in society. 
Now that we have a high-level understanding of the various biases embedded within generative AI systems, let's dive deeper into the three levels we mentioned earlier where we see these biases manifest. First, there's bias at the input level. Timnit Gebru and her co-authors, in one recent paper, described one of the shortfalls of large data sets based on text from the internet: they over-represent hegemonic viewpoints and encode biases that are potentially damaging to marginalized populations. Furthermore, Crawford in 2021 explains that the risk of historical bias occurs when there's a limited understanding of the historical, sociocultural, and economic biases within data sets and the context in which they were made. Data collection is more than a purely technical process, as it is shaped by human choices that are context-dependent and difficult to trace later. Suresh and Guttag in 2021 reasoned that removing the data from its context of collection can therefore lead to harm, even when the data set still reflects the world accurately. Worse still, since AI systems require a large amount of data to learn, discarding historical data is not always feasible, and collecting more data to compensate still does not mitigate the risks of unfair outcomes, since historical discrimination still persists in the results. Second, there's bias at the transformation level, in the algorithm, which is what happens between the input and the output. This bias is the hardest to observe and the one we are least qualified to analyze, but the main idea is that the lack of visibility and understanding of what happens in this opaque process gives an air of objective scientific authority to an output that is in fact deeply biased. Third, there is bias at the application level, which is the most important and where we can examine the real-life consequences. 
For instance, according to a report published last year by the UN Habitat for a Better Urban Future, many law enforcement agencies around the world have turned to AI as a tool for detecting and prosecuting crimes. AI applications for policing include both predictive policing tools, as in the use of AI to identify potential criminal activities, and facial recognition technology. All these technologies have been shown to be biased in multiple ways and to lead to harsher impacts on vulnerable communities. For example, the COMPAS algorithm, used to predict the likely recidivism rate of a defendant, was twice as likely to classify Black defendants as being at a higher risk of recidivism than they actually were, while predicting white defendants to be less risky than they were. By using the past to predict the future, predictive policing tools reproduce discriminatory patterns and often result in negative feedback loops, leading the police to focus on the same neighborhoods repeatedly and therefore leading to more arrests in those neighborhoods. Another example is when the Chicago Police Department used a similar algorithm to create a heat list, using it as a suspect list and surveillance tool and causing the people on it to be more likely to be arrested and detained. Similarly, facial recognition technology, which shows poor accuracy for certain demographics, has been widely adopted by law enforcement agencies, resulting in wrongful arrests and prosecutions. AI systems tend to perpetuate and accentuate existing biases under the guise of mathematical neutrality. Such systems are all the more dangerous when used for detecting and preventing crime, as law enforcement agencies often have a history of discrimination against and prosecution of vulnerable communities. 
For these reasons and many more, even with the proper transparency and governance practices in place, AI systems should never be used to make decisions impacting human lives and human rights in such sensitive contexts, whether that is in healthcare, policing, surveillance, or another area. I'll now pass it on to Lee. I have contention three: AI is bad for academic integrity and human capital development. The rise of online learning during COVID caused more students to turn to AI and other methods of cheating on assessments and homework. This caused academic integrity to be questioned, because the work is not a representation of what the student knows. Teachers will not be able to discern students' work from AI's unless the work is submitted online and AI-detecting software is installed. And even if the teacher does get AI detection software, students can get around it using more AI. The use of AI on assignments can cause grading systems to not hold as much value, since some students are only using AI. This compromises the weight that a degree or work holds, since the quality of education has decreased, as well as causing more competition between students who use AI on their assignments and those who do not. One AI extension that's already being used is Grammarly, which scans documents and gives back feedback on grammar, clarity, and other linguistic components. This can cause students to be less aware of grammatical errors, because they have an AI tool that will just tell them what's wrong rather than students recognizing it themselves. The use of AI can also decrease human capital, based on a study done by the OECD, or the Organization for Economic Cooperation and Development. In 2022, they measured human capital using the PISA survey, which measures the quality of education, and the mean years of schooling, which measures the quantity of education, giving direct estimates of the two components. 
The paper written by the OECD finds that the elasticity of the stock of human capital with respect to the quality of education is three to four times larger than with respect to the quantity of education. This study contrasts with data from past years, since they have now found that the quantity of education has less of an impact on human capital compared to the quality. The quality of education will decrease due to an increase in AI sites being used by students. They use AI as a crutch, and they will not be able to go without it, and this lowers critical thinking skills. All of this will cause a decrease in human capital in the future. My contention is that AI is bad for democracy and informed politics. AI will have a negative impact on the future of democracy. The increase in society's reliance on AI will greatly exacerbate the amount of disinformation in our digital world. Given AI's ability to create cohesive arguments in seconds, conspiracy theories and propaganda are now able to be created and shared much more effectively than before. With this technology, not only will disinformation become more prominent, but the proficiency of these false narratives will increase. We have already seen examples of this. In 2019, the company OpenAI, responsible for creating ChatGPT, developed an early model of the AI bot that we have today. They found that this chatbot was so effective at producing fake news that they ultimately decided not to release it. Although they chose to withhold that technology from the public, the ability to produce misinformation is still present in these more recent models. This is not the only example of false narratives being spread. Recently, the Republican National Committee released a campaign against current president Joe Biden using entirely AI-generated content. 
The campaign attacking the current president included entirely fake voices and graphics, showing not only how easy it is to produce this propaganda but also how convincing it is to the public. This extensive library of problematic content will likely be used to worsen the already strong political divide within the United States, or worse, be used to draw more people towards extremist texts or groups. Another issue this highlights is how AI bots respond to prompts. Single direct answers obscure nuance and disagreement, amplifying consensus while drowning out lesser voices. This isn't new, but it's an exacerbation of an existing trend. Google already uses natural language processing in query interpretation. More importantly, they've been gradually rolling out featured snippets, which provide direct answers to your Google questions; language models are the next step in this concerning direction. The absence of context, combined with the growing proportion of disinformation spread online, will surely lead towards the inability of future generations to think critically as the general public becomes more desensitized to misinformation. I will be presenting contention five: that AI exacerbates oligopolies. Chat models operate in a field that can only exist with oligopoly markets, where just a few firms can produce the ingredients necessary for the creation of these large-scale AI models. Before building a model, you need to be highly skilled and knowledgeable, typically with a master's or doctoral degree. Then, in order to teach the best AI models, you need trillions of data points, and as we saw in recent antitrust litigation, big tech has a monopoly on this data, and they have proven that they will capitalize on this monopoly by restricting access to data and selling it at a premium. Finally, the hardware used to teach models is pricey. 
As one of only two big players in the computing platform market, Microsoft notes that in the development of ChatGPT, they needed to create huge supercomputers made of thousands of expensive GPUs. Throughout the whole development process, you need to pay computer engineers and implement cooling systems, backup generators, and other forms of infrastructure. The cash cost of this infrastructure is immense, and only the biggest tech companies can make this investment. Big tech realizes this and seeks to profit, and the current CEO of Amazon, Andy Jassy, has even said that there will be a small number of companies that can invest that time and money, and "we will be one of them at Amazon." These barriers will lead to AI markets that operate with only a few large players. We have seen the effects of these business practices on inequality before, with the likes of Standard Oil, IBM, Amazon, and Microsoft, through high prices and exploited workers. Now I'll pass it on to my friend Alex. The rise of generative artificial intelligence has the potential to disrupt the way jobs in creative industries have been done and poses a threat to the job security of those currently in the industry. Artists fear losing their jobs to AI systems like DALL-E and DALL-E 2, which can create artwork in under a minute, artwork that would have taken artists from a few hours or days for a small piece to weeks or months for larger, more complex projects. For this reason, artists are losing work because AI decreases demand for them, and it will only continue to do so as the AI systems improve. As for writers, AI models such as ChatGPT, AI Writer, and many more have been used to write articles, scripts, and even books. Not to mention, these AI systems generate their work by combining work from all over the internet without giving credit to the original creators. These companies are essentially stealing other people's work to create profits for themselves. 
We are already starting to see pushback from the Writers Guild of America, who, after failing to negotiate higher wages for six weeks, went on strike May 2nd, 2023, with their main concerns being generative AI replacing them as well as decreasing wages. According to a recent Writers Guild of America report, median weekly writer-producer pay has declined 23% over the last decade when adjusting for inflation. This is only the beginning of disruptions in the creative industry, with 26% of jobs in the arts, design, entertainment, sports, and media industry being exposed to automation, according to Goldman Sachs. That's the end. Both teams will now have five minutes to collect their thoughts and arguments for rebuttal. Sweet. Owen, one point which you mentioned was that 54% of jobs are at risk because of AI development. However, a World Economic Forum report predicted that the number of jobs destroyed will be surpassed by the number of jobs created: the jobs of tomorrow, such as AI prompt engineer and others. For example, AI, including generative AI, will have displaced 85 million jobs but will create 97 million new roles that are more adapted to the new division of labor between humans, machines, and algorithms. So what does this truly mean for the US economy? It means one of three things. One, it will increase efficiency, leading to a greater quantity of goods and services. Second, it will leave ample time for more complex and creative work, improving the quality of jobs. Lastly, this whole improvement will advance the US economy toward better standards of living and eventually better economic development. The opposing side also argued that there exist biases with respect to AI and that AI perpetuates these biases, and we argue that it's not AI that is intrinsically biased but the conditions surrounding AI, the culture, for example. Really, AI is a new field. It's only come about in the last couple of years, and we understand relatively little. 
It's really the starting difficulties, we'd say, that AI is having right now, and it is really impossible to make a final judgment given that it's such a new field. So AI is not actually intrinsically biased; it's really the people or the culture surrounding it. That's what we say. Next I would like to talk about academic integrity. Honestly, students have been finding ways to breach academic integrity and cheat on assignments for years now, for decades, even before AI ever existed. Maybe AI does in some aspects make this easier, but it is not AI itself that is the cause of academic integrity violations. There's just too much access to it, essentially, and it's making it a lot easier for students to breach academic integrity, but the technology itself is not creating any new strategies by which this is being committed. And there are also a lot of detection methods that are themselves AI. Turnitin.com, for one, is an AI detection method that can detect if students are breaching academic integrity. Then, to the point on democracy: you mentioned how OpenAI chose not to release an AI program that they deemed too inefficient or too inaccurate to release. This shows that, yes, maybe the technology is not quite yet perfect. However, they did not release it because they knew that there were improvements to make on it. And that to me is not indicative of a threat to democracy but rather of companies taking responsibility and acknowledging their mistakes when it comes to the imperfections in developing new technologies. AI could be used in shoddy ways that seek to fully replace jobs rather than provide much of a productivity and wage boost to workers. However, thoughtful implementation could combat this and lead to improvements for the average worker and consumers. AI is expected to be used in a number of complementary ways, such as providing accurate diagnoses in the healthcare industry. Thank you.
The negative team will now have four minutes to lay out their counterargument. Okay, so one of the points the other side made multiple times drew on this 2020 World Economic Forum report, showing that roughly just over 90 million new jobs will be created while just under 90 million jobs will be disrupted by AI. But they failed to mention that that report also says that, in contrast to previous years, job creation is now slowing while job destruction is accelerating. They also failed to mention how some 43% of businesses surveyed indicated they are set to reduce their workforce due to technology integration. And they also talk about how these new jobs that are coming in will be more suited for the new division of labor in our economy. This new division of labor in our economy is more unequal than at any other time. The idea that AI will increase productivity is hard to argue against. But it's also hard to argue against the idea that the benefits of AI will be unevenly distributed. We are seeing this trend in many industries, where a majority of the benefits are flowing to a small number of individuals and companies: those that own the means of production and know how to use AI efficiently. They talked about the IT industries earlier, and the tech industry is a great example of where there is this division. We can point to a report from Barron's in 2021, where the top five tech companies together accounted for 23% of the S&P 500's total market capitalization. Even if top-line GDP figures increase, that is only one measure of the economy; we will likely see an acceleration in the hollowing out of the middle class. A 2019 report from the Brookings Institution found that jobs with higher wage requirements are less likely to be automated than lower-wage ones. This could lead to a workforce that is made up of highly skilled, highly paid jobs on one end and, on the other end of the spectrum, a lot of low-skilled, labor-intensive jobs.
Many of those individuals who make up the consumer spending and tax base, the middle class of this country, are going to be pushed out of their current roles due to AI. The managing director of the World Economic Forum, the source that they cited many times, also said that accelerating automation and the fallout from the COVID-19 recession have deepened existing inequalities across our labor markets and reversed gains in employment made since the global financial crisis of 2007-2008. And none of this is abstract. We're already seeing layoffs due to AI implementation. In February, Sports Illustrated laid off many of their journalists after announcing that they had begun to use generative AI to help write their articles. So, three quick responses: one at a high level, then a second response to their response to our second contention, which is bias, and a third response to their response to our third contention, which is academic integrity and cheating. First, at a high level, we have to be mindful that all these sources are coming from companies, corporations, who stand to profit from framing AI as inevitable and as good for the worker. Second, moving on to their response to our bias argument: they try to tell us that it's not AI that's the problem but rather the surrounding environment. We agree and disagree with that. We agree that there are biases in the surrounding environment, but in the current applications of AI we can't mitigate those biases. So AI in the current status quo will always have those biases. Finally, for academic integrity, they try to say that students are always going to cheat, that it's access to the AI that's the problem, and that Turnitin can prevent this. Again, as Lee argued, Turnitin cannot detect AI-generated material, and you can literally just prompt the AI and say, okay, now add some mistakes to this paper, make it sound like a student wrote it, and they can get around it. So it's hard to regulate.
Additionally, regarding the job creation the other group pointed out: these jobs that are being created are very high-level jobs that you need master's degrees or PhDs for. They are not easy jobs to get for someone who loses their job to AI and has a bachelor's degree; this requires more advanced re-skilling that the average person isn't really going to be able to get, so these jobs will lead to further decreases in the middle class. As the opposition stated, AI could be used in a more destructive way that seeks to fully replace jobs rather than provide much of a productivity and wage boost to workers. However, thoughtful implementation could combat this and lead to improvements for the average worker and the consumer. According to the MIT Technology Review, AI is projected to be used in a number of complementary ways, such as providing accurate diagnoses in the healthcare industry or generating individualized lesson plans in education. These innovations would not only increase the productivity of workers by allowing them to focus on less tedious tasks but would also increase the consumer's utility as they are now receiving higher-quality and more personalized products. I would note that the opposition used the Goldman Sachs projections but declined to notice that this partial automation is not replacement. Partial automation has a complementary effect on existing labor and will not cause people to be replaced. This is the way generative AI is projected to be most used. Deriding AI as fuel for inequality by pointing out the potential wage stratification ignores not only the larger effect that AI could create but disregards the other side of the common man, the consumer. AI has many applications, healthcare, finance, infotech, sales, marketing, and it has the potential to add efficiency to any of these industries.
However, regional concentration is definitely a prevalent problem, but this is not unique to AI, and again it will require a concerted effort to combat, including investment in AI innovation and talent, establishing national AI research infrastructure, and initiatives to improve STEM education across the country. These things can all help combat the intense stratification that some of these tech advancements are likely to bring up. The other point you mentioned was that generative AI actually increases inequality, and I'm here to say that's not necessarily the case. When you apply generative AI within these services, these kinds of new technologies like virtual assistants and chatbots provide low-cost or free-access services, which level the playing field and also make them easier to access. Additionally, you also get a lot more information symmetry, where a lot more information is available to the public, which makes things more equal for the rest of the population. Secondly, they also allow small businesses to compete with large businesses, given that there's more information symmetry, and they provide more opportunities to those who are employed and unemployed as well. Yeah. I would also like to address several of the other points made about exacerbating oligopolies, and I'll do this quickly. The creation of oligopolies by AI isn't necessarily an accurate statement.
So everyone can use AI; it's an open-access resource. And honestly, when you talk about it being a very restricted market where no one can generate this technology, it is true that only a very select number of educated people can create the AI, but you have people who have to implement the software, people who have to install it and check the systems, and this is actually the creation of a lot of jobs. And then I will move to your point about art: the intrinsic value of artwork still reigns even with AI-generated art, and there are many ways to tell the difference between AI art and human art. Thank you. The negative will have their last four minutes of the 16-minute section. So we want to look at the point that they made that this empowers workers, and we want to bring up again that even partial automation undermines worker necessity, and this is going to be aimed at the middle class. We're going to be looking at mass amounts of workers being told that a computer can do your job better than you, that you aren't as necessary, that you aren't as useful. Not only does this have economic effects like we've discussed, but there's a mass alienation, a mass societal effect: mass hysteria, could be mass depression, mass drug use. None of these things are conducive to any sort of economic growth, and I think we can all agree that these are all not great things to look at. Just to build off of that, and to remember back to Owen's point about the middle class, we're already seeing jobs be lost to this. Just a few weeks ago, BuzzFeed laid off their whole blog post team, replacing them with ChatGPT. So not only are we seeing jobs actually, explicitly being lost, but if, as they argue, some of the tasks that workers are tasked with doing are replaced with AI, well, why would the company pay them the same? If you're doing less work, if you're less necessary, you'll get paid less. That's the way the incentives would encourage companies to act. They also tried to talk about this idea of
thoughtful implementation, which is again this idea that big tech and the people who have interests in AI try to propagate: the idea that AI can be safe, AI can be ethical, AI can be good, if we just are careful about it and do it in the right way. And that's just fundamentally untrue. AI can only exist in the context in which we're living right now, which is a context that is systemically biased, a context where the incentives of big tech are not aligned with the incentives of the middle class and lower class, and for that reason AI will be harmful. Another point they made in their intro statement is that AI is just like any other new form of technology to increase efficiency, and one of the examples they brought up was the internet, or computers and the internet. I'm here to tell you it's very different from that. The internet, yes, it took away jobs in certain industries; yes, it exacerbated inequality; but those are the only things these two have in common. The internet increased communication, it allowed small businesses to reach out, to grow, and to reach new consumers, and it created huge new industries; it was a new form of communication. My adversary Nick brought up that these oligopolies may not materialize because in applications you have a wide number of people who can apply chat models. To that we say that chat models have to be created, as ChatGPT has been, and the infrastructure to create those chat models is controlled by only a few hands, namely Amazon Web Services and Microsoft Azure, if that's how you pronounce it, and it's almost impossible for new players to get in here. That's almost the definition of an oligopoly, a duopoly. The judges will now have two minutes to organize their questions. So, both teams raise very interesting points. I would like to ask one question which both teams should
respond to, and that's the question, or the issue, of job loss. The pro team made the good point, drawing on very good research by economic historians and labor economists, that if you look at the long stretch of history, there have always been disruptive technologies; in the short run they do create unemployment, but if you take a long-run perspective they do not. So their contention was that AI is a technology just like that; there is nothing new about it. The con team made the point that there is going to be job loss. So the question to the pro team is: can you establish that this is just technological improvement like what we have seen? And can the con team make the case that this technology is really different, and that we cannot draw on past historical evidence to claim that this will not increase unemployment even in the long run? Throughout history there have been many cases where a piece of technology has come out of the box and we believed that it was going to cause an end to the labor market as we knew it: many automation examples in the Industrial Revolution, innovations in agriculture, innovations in manufacturing, and of course the internet being the last thing in this string of technological advancements that has permanently (is it on or no? Sorry, okay, here you go) that has permanently shifted the labor market as we know it. Like many technological advancements before it, AI is being met with resistance, but it will become an integral part of the economy in many beneficial ways, as we have seen with countless technological advancements before it. They try to draw comparisons from the internet, or rather contrasts, by stating things such as: the internet caused an increase in communication, the internet caused all these different innovations that allowed small businesses to become a component of the labor market. And I would argue that AI can do all of these same things as well as the internet, if not better. We're just
going to wait and see how it shapes up, but I think that it won't be any different from past technologies in the long run. Hi, this was an awesome question, and we thought about it very much ourselves. Framing AI in general as simply another technological advancement, we feel, would be wrong. This is a momentous shift in how we go about our day-to-day lives. The very idea of employment is going to be turned over on its head, because we're looking at a technology that can affect every single worker, even jobs that we previously thought were untouchable: high-level, highly skilled jobs, accountants, financial managers, computer programmers; AI can do that better than they can. Furthermore, we're seeing high levels of investment in this, in the billions and billions of dollars. Regardless of whether this technology has the ability to do so or not, society at large and financial institutions seem to be hell-bent on making this a reality, on replacing worker jobs, no matter if the technology is ready to do that or not. Yes, judges, please. Thank you. Yes, we have more time, and I can bring you the microphone; actually, this one is working, so okay. So hi, everybody. We wanted to talk a little more about the aspects of regulating AI and how to control it. So we talked about some of the potential consequences; this group highlighted a lot of them, and this group acknowledged some of these potential consequences too and talked about how there are ways to deal with some of them. And so we were hoping that both groups could talk a little more about this. The against group might argue that the sorts of regulations or the sorts of policies we could enact aren't going to work, or that we're not going to be able to enact them. And to the group here: if we think about some of the risks here, related to cybersecurity, or related to job displacement, or related to maybe regulating the bias effects or something like
this, you argued that, well, you know, we could have income support policies or labor market adjustment policies, or we could put the regulations in place. The question posed to both groups was: is this going to benefit the economy? So we have to ask the question: is the political system up to the task? And so maybe the for group can convince us that it is, that this stuff will happen, so that it'll go in a good way. And maybe the against group could talk about, well, listen, there are things we can do to control it. You talk about cops, or the use of this stuff before AI: were cops biased? Did cops do bad things before AI? What is it about AI that's going to make it different, and why wouldn't regulation be able to deal with some of this bad stuff? Yeah, and I just want to add one more thing into the mix as well. One thing that people have talked about a lot, which didn't come up so much in the debate, or at least I hope I didn't miss it, is the issue of controllability. So there's the issue that, you know, you have these AI systems, we try to give them certain goals, but we don't know how to write the goals in exactly the right way, and so their goals are not aligned, maybe in subtle ways or in important ways, with what human beings actually want them to do. And then they could actually do something very bad, because they would disregard something that human beings really care about a lot and harm human beings. So maybe say something about that; it's closely related to regulation, controllability as well. What do you think about that? I'll give one minute for the participants to prepare. So our key point here is that good regulation will have to be fundamentally based on a good understanding of AI itself, and that can only be achieved through further investigation of AI, some more research and development, and that cannot be achieved through complete suppression. Complete suppression, I think, has never led to good things; scientific suppression has never led to good things. Here, if we
believe that AI is going to fundamentally reshape our world in the same way that the internet before it has, then we are going to see it being almost effortlessly implemented into the way that we think about all aspects of life: government, education, job training. It will be interwoven into the fabric of society at large. You've already seen the ways that people are now being trained in computer work, in tasks, in jobs where it wasn't even a thought in their mind that they would need any kind of technological competence to perform them. You'll see the same thing happening with AI in the future. With a concerted effort through government, through private organizations, and through individuals as well, this reshaping will be a lot less of a large step than some people may be anticipating, because of the way that AI is going to weave itself into our lives and the way it's going to have a complementary effect on not just high-level tech employees but also on low-level workers and laborers in every aspect of life. The internet seems to be highly similar to generative AI, just with the difference that the internet has been almost entirely developed, while generative AI is still in its early development stages. And this difference shows that the internet has proved to be highly effective in terms of progressing the economy after efficient regulation, and generative AI will be too once it reaches that stage of development and regulation. The negative will have two minutes. Okay, so before I start, I want to be clear that this resolution is not "can AI be regulated" but "will it be regulated in the status quo." And I think that for two reasons it won't: first, because there's the past precedent of OpenAI taking advantage of deregulation and the lack of regulation and general understanding in this area, and second, because I don't think interests are aligned in a way that will be conducive to regulation. So first, regarding past
precedent, we see that the data necessary for these large language models was acquired by OpenAI and other companies by scraping the internet with no regard for contextual integrity or people's intellectual property, which represents them taking advantage of a lack of regulation in that area. Second, regarding future interests: if we think about the money that is to be made in this, there's no reason that OpenAI and these companies would want there to be regulation, so they will lobby against it, and I really don't think we'll see any regulation in this area. And further, even regarding regulation and the idea of attempting to pass certain legislation, we see, even compared with the internet as our adversaries have brought up, that AI is different, because we have companies that are creating models, and these models are easily swayed by changes in algorithms and changes in input data. They're malleable, and they're malleable according to whatever a leading party would like to see, in a day when these models can have such a heavy influence on everyone's thinking; as we've seen, people tend to trust what ChatGPT might tell them, and that will be difficult to regulate. Thank you. Would the judges like to come ask their final question? Okay, let me ask the final question, which goes back to the main question that is being debated. The debate was whether AI is good for the economy. The pro team mentioned that it will increase efficiency and growth; the con team mentioned many things which might lead to negative impacts on growth. Two things immediately come to mind: human capital, because if human capital is going to be destroyed or not developed, that will have a long-term impact on growth; and democracy, because if democracy is undermined, that will also have a negative impact on growth. So the pro team needs to counter those arguments, which the con team did not fully develop, but I would like to give them the opportunity to
develop the economic implications of the points you made, and the pro team to argue against that. You have said that it will have a positive impact on growth, and you argued about savings in labor, but there are other things which they pointed out: human capital will be degraded, democracy will be undermined, and those have negative impacts. So which of these are stronger? Can you make a case that, yes, the positive impact overshadows the negative one, and for you the opposite? Each team will have one minute to prepare. In terms of human capital, AI may decrease it in certain areas, but thoughtful implementation can help keep these effects from being exacerbated. It can be looked at as a bane that will worsen breaches of academic integrity and prevent people from building the skills that they need to build; however, people argued the same thing about the internet. Again, kids are going to be looking at the computer, they won't learn how to read a good book, they won't know any of these things; but instead you'll find that new skills are going to be developed, 21st-century skills: skills in analytics, skills in creativity, skills in building things that AI cannot do. I think that this presents an incredible opportunity to develop new forms of human capital that don't involve sifting through files on your computer and making sure that you know exactly where to go in the library to find the information you need, and to use technology to our advantage and to use our human brains for things that technology will never be able to replace. I'd like to address the point on democracy. Very good. So democracy is and has always been built on access to information, and AI can increase access to this information. The opposition pointed out that a lot of this information has the potential to be misleading or completely inaccurate; however, there are numerous sources on the internet, countless sources, that we know not to be trusted. Just as it is with the internet, as AI develops we will be able to better sift through
these inaccurate sources, these incorrect sources, and look for inconsistencies. But it is not about shutting out AI completely; it is about knowing what information to look at to improve your educational capacity and to be a more informed voter. Yeah, and just a last note about democracy, or our political system, and AI: we say that AI is not the cause of social stratification but simply a catalyst. The cause of it may be our political system; that is not the topic here, but all that AI is doing is accelerating that process. It's not the cause. The negative will speak for two minutes. Looking at GDP as a reasonable example, what does it really measure? It just measures how much money we really have, how wealthy our nation really is. But when you really think about it, what does that actually mean if half of the population can't find work or needs so much higher education to get access to these new jobs? It doesn't really mean anything; it just goes to the top, especially with these big corporations that have high barriers to entry in accessing AI. And all this increased productivity and all these benefits that you'll see, they don't really mean anything, because at the end of the day the economy is based around the individual rather than just a number or statistic. So it really invalidates the point. And then additionally, with these losses of human capital that AI will cause, and also the instability, it will really hurt this growth in the long run, just for the sake of efficiency and productivity. Thank you, judges, and thank you to the participants for answering. We will now have three minutes of closing statements from each group, beginning with the affirmative. You may come up and present your closing statement. To conclude: AI will grow GDP and efficiency across the board and create new and emerging markets that will generate new employment opportunities. Existing workers will also benefit through the complementary
effect AI provides and the opportunity it presents to train workers in modern skills. The consumer will also benefit as AI provides more accurate and personalized products that better meet their needs. It could even drive down prices as automation increases productivity and lowers the cost of production. Lastly, generative AI is fundamental in securing financial integrity and building confidence in the market. All of these items have the power to transform the economy as we know it, in much the same way technology always has. Like it or not, AI is an unstoppable force that will sweep the US off its feet if we are not ready for it. Instead of trying to outrun the impending tsunami of innovation, it is time for us to do what we have always done: adapt to the changes in technology and improve what is possible when human ingenuity is put to the test. Thank you. The negative will now have three minutes for their closing statement. I hope, as we've shown everyone today, we are far from frivolous Luddites who are angry about advances in technology. And as we have shown, AI is far from a simple tool that increases productivity. It represents a momentous shift in technology where current worker conditions, low bargaining power, and exploitation will be made worse, with profit going to a small number of capitalists who own the means of production. AI can only exist in the current world, and that is a world where a few interests such as Microsoft or Amazon are able to disproportionately influence our political economy through their seizure of the means of data production. We will see a world where our very democracy will be undermined and where our empathetic, creative industries will be replaced by emotionless and mindless computers in our current profit-driven system. As scholar Dan McQuillan explains, the more plausible ChatGPT becomes, the more it recapitulates the rationalizations of race science and gender constructs. Despite the claim that large
language models are self-training, real-world systems require precaritized ghost work behind the scenes to keep the lights on. AI is not something out of sci-fi but instead a technological amplification of existing labor and power relations. OpenAI paying Kenyan workers $2 an hour to tag obscene material for removal is emblematic of the invisible, exploited labor that holds up our current existing systems of business and government. The affirmative have tried to tell us that AI has the potential for increased productivity and economic growth in the long term, but we are already seeing the negative effects of AI on our institutions. AI will produce wealth and growth, but only for the upper class. We see AI in surveillance and policing, in replacing jobs and decreasing wages. With continued unfettered development of AI technologies, we will continue to see the moral and economic decline of our society. For that reason, we urge you to vote in the negative. Thank you. Our judges will now deliberate and return with their decision. Okay, so we thought that this was an excellent debate on a very interesting topic. Both sides were very creative and brought strong arguments. At the end of the day, we ourselves obviously don't really know who's right; I mean, it's a very difficult debate, and we thought it was very close. It was not at all obvious to us who won the debate, but after deliberation we thought that the pro team made a little bit of a stronger argument for their side, so we're going to award the debate to the pro team. But it was an excellent debate.