Good evening, everyone. Welcome. My name is Alyssa Stone, and I am the Senior Director of Programs and Community Engagement here at Mechanics Institute. "Greetings. Welcome to Mechanics Institute, a historic gem nestled in the heart of San Francisco, established in 1854. Mechanics Institute is a haven for intellectual exploration, offering a rich blend of library resources, literary events, and cultural experiences. Join us in celebrating knowledge, community, and the vibrant spirit of the city. Ladies and gentlemen, today we delve into the fascinating confluence of AI, authorship, and ethics. As artificial intelligence shapes our creative landscape, questions of ownership, accountability, and ethical boundaries arise. Join us in exploring this intricate dance between technology, creativity, and the moral compass that guides our brave new world." Did that sound like a human? Yeah. ChatGPT wrote that intro for me, making my job a little easier, but a little more awkward.

So tonight we are very excited to welcome our two experts, who are going to help us understand and explore this interesting intersection of AI, authorship, and ethics. A little more of a surprise for you: I had ChatGPT rewrite their bios as well, so I'm going to read those to introduce our two guest speakers.

Denise Kleinrichert, PhD, is a professor of management and ethics at San Francisco State University. Formerly serving as the interim associate dean of the Lam Family College of Business and director of the Center for Ethical and Sustainable Business, she focuses on integrating ethics and sustainability into business education, community service, and research. Her academic career centers on business ethics, compliance, corporate social responsibility, sustainability, and women entrepreneurs. With extensive teaching experience, she offers courses in undergraduate and MBA programs. Her research contributions span peer-reviewed articles and book chapters in ethics, risk, CSR, sustainability, and women entrepreneurs. Her academic journey encompasses a BA in economics from Indiana University, two master's degrees, and a PhD in philosophy (ethics) from the University of South Florida. Was that accurate enough? All right, check one so far.

Professor Dragutin Petkovic earned his PhD in biomedical image processing at UC Irvine. Over more than 15 years at the IBM Almaden Research Center, he contributed significantly to computer vision, multimedia, and content management systems, and founded IBM's QBIC (Query by Image Content) project. Recognized with numerous IBM awards, he became an IEEE Fellow in 1998 and an IEEE Life Fellow in 2018. Dr. Petkovic then held technical management roles in Silicon Valley startups, including VMware, and later chaired SF State University's Computer Science Department from 2003 to 2016. He founded the SFSU Center for Computing for Life Sciences in 2005. Currently he serves as a professor in the SFSU Department of Computer Science, leading the establishment of SFSU's graduate certificate in AI ethics in collaboration with the schools of business and philosophy. His research and teaching interests encompass machine learning with a focus on explainability and ethics, global software engineering, teaching ethics and teamwork, and user-friendly system design and development. How about that? Is that pretty accurate?

If tonight's event is of interest to you, come see what else we do.
We hope that you'll check out some of our other events and programs here at Mechanics Institute. You can find out more at milibrary.org. Coming up next Thursday, November 16th, we have an event with John King from the San Francisco Chronicle on his new book, Portal. And we have our biennial members meeting coming up in December. So we hope that you'll join us for other events. Of course, toward the end of our event this evening we will have Q&A with our audience. I will come around with this microphone, and you'll be able to ask questions of our two esteemed guests. I had ChatGPT write a fun bit for that as well: "We'll have time for Q&A. This is your opportunity to pose questions to our speakers. Please keep them concise and focused. Let's make this space a dynamic exchange of ideas through thoughtful questions." With that, please join me in welcoming Denise and Dragutin.

Thank you. Tonight we'll move between a number of different ideas, topics, and ways of thinking about and understanding what's going on; ways of thinking about different aspects of artificial intelligence. Keep in mind, Dragutin is the expert in computer science; I am more the expert in philosophy and ethics. So if you ask me a technical question, I'm going to turn it right over to him. So I think we'll get started.

All right. Hello, everybody. Yes, I'm a technical person, but I'm human-centered. It's really interesting: there are studies of who cares about the consequences of the technology they are developing, and the computer science crowd turns out to be the least concerned about the social implications. So I'm not on that side, but I am a computer scientist. First, to clarify what generative AI is. Classical AI tells you: this patient is healthy, this one is sick. Generative AI creates content: it creates text, documents, and images. That's why it's called generative AI. ChatGPT stands for Chat Generative Pre-trained Transformer, based on large language models, and I'll explain that for a general audience. Interestingly, as I read, they decided not to give it a human name, to make sure people don't get attached to it. It spread very fast, hugely, and ChatGPT is the most well-known of these technologies, developed by OpenAI. Does anybody know how it works? I'm going to explain it simply. If you know it more deeply than that, sorry, but I'm going to explain it at a very high level, and then we'll talk about consequences.

One of the best explanations is from Stephen Wolfram, who made the Mathematica software. I'm going to read it because it's perfectly said, and I'll give the link. "The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a reasonable continuation of whatever text it's got so far." So it is super smart autocomplete, and I'll give you proof of that. Basically, it generates the next piece of text based on the previous text and a huge language model. That's the trick. They indexed an enormous amount of content, billions of pages, and created a large language model. It produces the most reasonable continuation, what one might expect someone to write after seeing what people have written on millions of web pages. That's what it does. No intelligence, no emotions. It doesn't care about that. Some people call it a stochastic parrot.
It's basically this: you type the prompt, it analyzes the prompt, finds the meaning of it, and then looks into the language model, which is encoded in a very complex system of neural networks and transformer technology (not to bother you with the details), and it completes the text by building it one word at a time. That's it, in very simple terms. Next slide, maybe.

And this next slide I created myself. Not true, actually, I didn't. Basically, why does it sound intelligent? Because we encoded so much of our art, knowledge, and information in language and wrote it down. So they took and indexed a lot of what is written down, chopped it up, and built the language model, which captures the syntax and semantics of how words are used, in a very complex, very sophisticated way. So when you ask it a question, it simply answers with whatever fits the syntax and the language model for that question. It doesn't have any intelligence. It looks at the data, this very complex interrelationship between words, meanings, and syntax, and that is what creates the output. Does that make sense? So it's not reasoning. It's not intelligence. It's giving you the most likely response, based on what millions of people wrote, to whatever you just wrote before, and it does that one word at a time. So just to make it clear: knowing that our intelligence is encoded in language, ChatGPT reads the large language model and repeats whatever is there, but in a super sophisticated way. And it turns out that by doing that it sounds intelligent, it sounds maybe emotional and sentient, but none of that is true. It is simply repeating the words that fit the model. That's very important to know.

You can also use it to create artwork. They indexed a lot of artwork, extracted the meaningful patches of images, and put them into the model. So you write the prompt, it finds the words that match your prompt, matches those words to images, and composes the thing. So immediately you can say: well, is this the horse that I created? How come it appears here? Or maybe it looks very similar to the horse that I drew, and this thing indexed it and created new content with my horse at the heart of it. We'll talk about the legal issues at the end; it's actually very tricky. So you can create art, and you can imagine fake news. Elections are coming. It will be drama, I can tell you that. So, a lot of unsettled issues. That's the news.

As for concerns, and I'm totally concerned: jobs, education. Schools have to completely revise how they work; you cannot ask students to write a report at home anymore. In my class we ask students to read papers, summarize them, and critique them, but they can just ask it for the essay and it does it in three seconds. Creative work, machinery, self-driving cars: when there's an accident, who is liable, who gets sued? Military uses, social impacts, fake news. Safety issues with fully automated systems. To the point that famous people in the field like Hinton, the father of deep learning, say: I'm afraid it will be more capable than humans. Will it be us versus them? This is the first time, and this is what concerns me, that technology is encroaching on the cognitive, knowledge-based jobs that humans have traditionally done. It's not just mechanizing manual work; it's knowledge work. So the impact on the job market will be significant.
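To make the "one word at a time" picture above concrete, here is a minimal, purely illustrative sketch. It is not OpenAI's code; the toy training text and the continue_text helper are invented for illustration. Real systems use transformer networks trained on billions of pages, but the loop is the same: look at the text so far, pick a plausible next word, append it, repeat.

```python
from collections import defaultdict, Counter

# Toy "language model": count which word tends to follow which in the training text.
training_text = (
    "the institute hosts talks on ethics "
    "the institute hosts talks on ai "
    "the library hosts events on ethics"
)

follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def continue_text(prompt, n_words=4):
    """Complete the prompt by appending the most likely next word, one word at a time."""
    out = prompt.split()
    for _ in range(n_words):
        followers = follower_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the institute"))
# -> "the institute hosts talks on ethics"
```

The real thing works with probabilities over an enormous vocabulary rather than raw word-pair counts, which is also why it can sound fluent without any understanding behind it.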
And I want to note this: one thing that we will come back to later is the legal issues. I was able to find a legal analysis of generative AI and copyright issues prepared by legal counsel for the US Congress. Perfect. We found it two days ago, and it's only about three weeks old. Perfect. So we're going to use that.

But here is the issue. You make a career out of writing. They indexed your writing. I say: write me a story about XYZ. I can even say: write it in Shakespearean style, write it like so-and-so. It's going to pick that up and create it. So it's a new thing, but it learned from your text, and I can sell it and make money. But it learned from your stuff. The image contains motifs from your artwork. Where is the copyright in all of this? We will get into that; it's fascinating. By the way, none of this is settled yet. It's just going through the legal system; it just entered the pipe. And the FTC, by the way, is looking into OpenAI over this. I work on explainable AI, and a group of authors, John Grisham among them, has filed an actual lawsuit over copyright. And the problem is, let me tell you: it's not code that you can open up and find a line that says "read John Grisham's book, modify a few words, and output it." It's trained on the data. It has a large language model, which is a bunch of numbers and coefficients, unreadable to humans. And OpenAI's code, which I haven't seen or worked on, is the engine they use to train it, but you cannot open it and see rules that you can understand. So these systems are very hard to explain, and they're not really transparent. You cannot point to a piece of data and say: ah, you stole my work. And that is a serious issue, technical and legal. So, I think this is time for you. No problem.

All right. So, should we do questions later? We'll do questions whenever you want. Okay. Anything. Yeah, on this part, if you have any questions, maybe we answer them now. Yes?

Well, my question is: when you use ChatGPT and the others, can you ask them not to keep whatever you enter, or not to train their system on whatever you enter? Is there a clause, a box you can check, to say no?

It has already been trained. But now, and remember, this is so new that I'm reading from the news, I don't have access to their code, and the model isn't human-readable, they have said you will be able to mark your pages so they are not scraped. That's in the future. But that's exactly on my slide: one of the open questions is whether you can say, create this, but don't use any copyrighted work. They're just working on that. For now, they put everything into the large language model with no record of what went in. They already scraped everything up until 2021; every single copyrighted piece that was available went in. I'm an academic, I have tenure, so I can tell you whatever I want; that's what tenure is for. It was invented in the Middle Ages in Europe to protect faculty from the church. It's my duty. So: they tell you it was trained on whatever was out there by 2021. Prove it to me. I cannot. So I don't know. Who knows? That's another thing: I don't know. And can you independently test it? The FTC might push them. But at this point the answer is that the question is pretty open. I don't know.
So whatever is created using one of those tools, I would suppose, cannot be copyrighted because of this?

We'll come to that. I'm not a lawyer, though I like the legal system. I just learned this from the six-page memo, which is great, and Alyssa can send people the link to read it; it's fantastic. Original copyright law apparently applies to works created by humans. Not even a monkey qualifies: if a monkey randomly draws something, it cannot hold the copyright. But now the discussion is about work created by AI that was trained on human content: is it copyrightable or not? Originally the Copyright Office refused to register works created by AI, but some people have gone to court, and there is a good deal of money at stake. If significant human design and intellectual effort went into creating the content, even using ChatGPT, it may be copyrightable. It's undecided yet; it's all in motion. And there is a good resource on this that I suggest you all read. It's very good, yes.

Amazon, when you put something up on its print-on-demand platform, KDP, now makes you say whether what you're turning into a book is either AI-generated or AI-assisted. If it's generated, it starts with the AI and you just contribute a little bit. If it's assisted, you've already written the book and you're getting some editing, polishing it up a little with ChatGPT.

Yeah. But it's still a total gray zone, and I've heard there are over 8,000 lawsuits. Yes, and I would say: read the terms and conditions carefully. Like with the actors; today I read that in Hollywood they negotiated control over whether they can be digitally replicated. So it's all very complex, as we speak.

My first point is that you didn't mention temperature, so you don't get the same exact output if you put in the same prompt; you get a different result, right? That's one: you could say it is creating something different every time. And the second is: every time I draw a picture or take a photograph, I'm drawing on all the other pictures I've ever seen. So shouldn't I be investigated for copyright infringement as well? And how would you fight that?

Yes, that's precisely the answer they give. I was thinking: okay, it trained on the works of famous people, but then it produced something new, just as I could read John Grisham's books and then try to write something in that vein. But there is a wrinkle I just learned yesterday. In the training process, they have to copy your work into OpenAI's database. You may not have authorized that copy, so arguably they cannot train on it. And the lawyers are now drilling into that.

But you read something too. Yes. And this is going to be a fight, because who is to say you haven't taken in that piece of information and modified it ever so slightly in your brain, which is just a piece of tissue? That is where the fight is going to be. And another thing: machines have created original content. An example of a machine creating an original piece of work is Google's Go program, which made a move that nobody in the world had ever seen.

Yes. So it's not a neat little box; it's a very big gray area with a lot of murky water, and a lot of lawsuits, which is work for the lawyers. It's actually going through the courts, right up to the advice given to Congress.
And the legal counsel advised that Congress maybe not change the legislation on copyright and AI yet: wait a little for the lawsuits to go through the system, because nobody is sure yet what to come up with. I think that's a good point.

Am I on? Okay. Maybe thumbs up that way. So I'm going to segue into the ethics. We've talked about the technical side and we've touched on the legal issues; the ethical issues are also interesting. As human beings we tend to have some level of ethics, personally, in our communities and families, in our workplaces, and so on, and that is going to extend to AI. What are the ethical implications of artificial intelligence and its uses in today's society?

So here we go. Thank you. I'm getting some feedback, so I can't hear myself. So, who's impacted? We have something that we call a stakeholder model. Stakeholders are individuals or groups who are impacted by some external actor, or who can have an impact in the reverse direction. Some of you might be somewhat familiar with this, but when we look at the ethical implications from a business standpoint, and that includes ChatGPT usage in business or in personal situations, a company that creates or uses AI in its work is going to have impacts on a whole host of people. You can see there that employees are impacted by the use of AI within a workplace, or in their communication with employees at other organizations. You have distributors, wholesalers, and retailers that are impacted. You have consumers that are impacted from an ethical standpoint. Does that make sense? Some of you are nodding; I can see that. Suppliers, creditors, and stockholders are impacted by the ethical issues we're dealing with. The natural environment: we haven't even touched that yet. What's going to happen to our natural environment when the energy usage from ChatGPT or other forms of AI is different from what it has been in the past? The public in general, business support groups, the media, non-governmental organizations, and so on. So we've got a whole host of people who are impacted by any change like the one we're seeing with the use of AI.

These stakeholders have rights: rights to protection from unethical business actions or decisions. Can anyone think of an example? Or you can look at what I've got up there in terms of these particular rights and potential concerns. Yes?

Think about people that are going to lose their jobs.

There you go. And not just, as was pointed out earlier, the labor-intensive type of work that robots have been doing for years now, but white-collar work. Yes: the change in what that employment might look like, whether it even remains, or whether it is changed in a way that diminishes what that career has been. Exactly. There are issues in terms of consumer protection; those are both ethical and legal issues. We're looking at wages that may be affected as well, for example. Workplace equality and discrimination: we've put a lot of focus on making sure that we have diverse, equitable workforces, but we're also increasingly concerned about what might happen to that focus.
Can I just add one big thing on bias and fairness, which is one of the big issues here. This AI repeats what it has learned. If you have biased data, you're going to get biased decisions. They've put some guidelines and safeguards in place, but basically it can propagate whatever is in the data, so that has to be taken care of.

Yes, absolutely. And if we get into the area of the natural environment, what are we looking at in terms of changes in energy usage?

Another speaker talked about the environmental impact of ChatGPT. She said that every time somebody makes a query with ChatGPT, it's the equivalent of dumping two liters of fresh water on the ground. Is that true?

I haven't heard that, but I'll have to look into it. You may have a quick answer? Quickly: training ChatGPT would have taken something like 360 years on a regular machine. They used on the order of a thousand GPU processors and did it in a few weeks; those are energy sinks. I've even heard of putting data centers near the North Pole to cool them, which, you know, kind of melts the ice. So I don't know the exact numbers, but it's not trivial, because every query has to go through that big language model. So possibly; I don't know. But the energy usage is not trivial, for sure. And don't forget the next speaker. Sorry, that's me. Yeah. But it's possible; it's not trivial.

Another question here. Go ahead. Well, thinking about the stakeholders and impacts, I think of children and kids. If they don't learn how to write, or to think, won't they just be little parrots?

Well, we had a partial conversation about this in the car ride over, in terms of our concerns. We're teachers, professors, and we have students. Even though our students are mostly adults, sometimes we get some advanced high schoolers. But we're concerned about their ability to become independent thinkers, and whether they're going to have the knowledge they need to navigate the world in the same sense. If they're taking their term papers straight from ChatGPT, they don't build that knowledge. Or advice: I also worry about it giving dubious moral advice. And let me tell you, I'm always suspicious; who trained it? If my kids go and ask ChatGPT for social advice, what is it going to tell them? Is it what I would advise them? I read that the president of China, and I respect China a lot, said that any generative AI has to reflect the values of the Chinese Communist Party. Therefore: can you tell me what material you trained ChatGPT on? They don't tell me. They just say they scraped a petabyte of data. It could be data that may be anything. I don't believe it. You see? Issues.

Okay. So when we talk about the ethics of AI, there's a whole host of things we can start unpacking to understand where we need to do some remediation, if we can. Areas like: who takes responsibility? What about transparency, and awareness of where ChatGPT or any type of AI usage might have an impact? How do we audit and assess the impacts? What about incorruptibility? Can we predict what the next steps are going to be? Those kinds of things. Then we need to know who is going to be responsible for the actions or the decisions.
Teachers: are we responsible for our students' knowledge acquisition, or are we not anymore? We start to wonder what our roles are going to be. Those kinds of things. So again, trustworthiness is another aspect. Oh, that's fine. Yeah, thank you.

So obviously we have more questions than answers. But one of our roles, for both of us, is at least to educate the public and alert people to the issues, not to blindly trust these systems. And if you think we are the only ones concerned, look at the initiatives going on today, as we speak, and over the last year; they are trying to regulate this. In the final stages of legislation is the EU AI Act, which speaks about trustworthiness and especially zeroes in on high-risk, high-impact applications such as autonomous systems and biometrics: those have to be certified, regulated, auditable, and transparent. The OECD has its AI principles, all the same themes: trustworthiness, no bias, fairness, human control. When I gave an interview they asked me what I am most afraid of. Fully autonomous AI. That is the biggest danger: no human in the loop. It's very difficult. Take the White House AI directive, which is actually pretty well written; it's kind of a wish list, hard to accomplish, but at least they wrote down all these good things we should watch for, and it's actually pretty good. And just last week there was the AI summit in the UK. The Vice President was there, and the top players, and 27 countries signed the declaration.

But here's a thing for you to think about. ChatGPT always sounds authoritative. Does it ever show any doubt in its responses? The answer is no.

Well, it depends on what the prompt is. I'm talking at a technical level, but I mean, I can make it talk to me as if I'm a four-year-old, right? Yes. So there is variability; it's not a black-and-white issue. We have options. Yes, but it will always say, "Certainly, I can help you. Here is the answer." It does not give you a confidence level. It does not say, "I'm not exactly sure," or "to the best of my knowledge." So let's say you're managing a nuclear power plant and you ask ChatGPT what to do next. "Certainly: increase the power." And it could be wrong. It doesn't say, "Increase the power, but I'm not that confident; please consult somebody." So make it autonomous? Good luck. That's my worst-fear outcome. The EU has an explicit requirement about the ability of a human to pull the plug, which for me is extremely important.

The thing that really creeps me out is that ChatGPT has been trained to write code. And hackers: give hackers any rule whatsoever and they'll try to break it. If you do that with ChatGPT, it's just infinite. That's a fear. Yes. Having used ChatGPT to write code, it is not always correct. It's a matter of time. So far it's not always correct, but it gets you maybe 80% of the way and you fix the rest. But it's going to take that code, put it back into the training data, and learn and learn; as Hinton said, it learns much faster than human biology allows. So where could it be in five years?

Okay. And the other problem we're talking about here is: I have a billion dollars, so I'm going to make my own ChatGPT that will take on your ChatGPT, and so on.
I mean, the problem is, it's like we're in this nice little world where everybody comes together and signs on to the initiatives. I mean, that is big. But there are a lot of other people out there who have a lot of money and a lot of energy, and they're able to do the same thing. So we're essentially in a weaponization, an arms race: I've got a weapon, you make a weapon, and it goes back and forth. That's what's going to happen.

It would not surprise me, unfortunately. Yeah, but the problem is, you have to figure: the G7 made an agreement, but I don't care, I'm China, or I'm Indonesia, or I'm somebody else, you know what I mean? Correct. I know this sounds dark, but I cannot disagree with what you say. One of the things that forced this summit, and the White House action, is fear, including fear about the upcoming elections and disinformation, which can now be produced perfectly. They're trying to regulate. Bill Gates had a good take on this. He said we invented fire, explosives, and nuclear energy, but we regulated them, so far. So he said we need speed limits and safeguards, meaning regulation. It will be challenging. This is very disruptive, and in software it's easy to spread. The problem, as Hinton said, is that you have to train, say, a soldier for four years to do something, but you can download the whole GPT in five seconds, the whole knowledge. In five seconds another copy learns everything this one has learned, and that's really scary. The speed of learning and improvement is faster than humans can manage, and that's why I'm for some regulation: like any other dangerous thing we've developed, there should be regulation and monitoring, and that's our chance. But there will be actors who will exploit it, for sure.

And Meta let their model out. Everybody is modifying the Meta model; it's turning into a new standard that everybody is gearing toward, which is freaking out everybody else. But that means I can take the Meta model and make my own modifications, and I will develop my Meta model to protect me from your Meta model. So here we go again, the Wild West: I have a gun, I have a fence, and if I don't like you, I shoot. Maybe in virtual space, but still.

Well, think of the Japanese power plant that was knocked out by a tsunami that wasn't factored in when they built it. I happen to know there was a Norwegian engineer who told them not to put the backup generators in the basement because they might flood, and he was overruled by the hierarchy. So some people knew, and bureaucracy messed it up. So regulations can also be defeated or messed up. Yes.

We have a couple of other questions here. The question is: how do we balance ethics and progress? Progress comes in and somebody loses their job because the machine or the AI does a better job, cheaper, than the person, who now has to find a new position. So how do we balance that?

That's an age-old question. How do we balance in any kind of endeavor, whether it's some type of business we're looking at, with AI and so on, or just getting along with neighbors, those kinds of things? How do we balance? I mean, there are a number of ethics theories, and I don't want to go full professor on you.
You know, there are a number of theories that help us work through some of those dilemmas. There's the utilitarian approach, the greatest good for the greatest number of people; that's one way to make a decision. You can take a duty-based kind of theory. You can look at virtues and how we exhibit those virtues in these decisions. And sometimes people go down rabbit holes trying to chase which is the ethical way to make a decision. My simplistic version is: do no harm. But then you have to start defining what is harmful, and it's not necessarily the same harm for everybody. It's very tricky.

In the White House directive there is a whole section on protecting U.S. workers and so on. The problem: if you remember, there were the textile workers in England who broke the machines. People tell me, yes, they broke the machines, but in the end everyone retrained and there were more jobs after the whole Industrial Revolution. I say: yes, but that took 30 years. This is going to take a few months. That is the problem. Are you going to retrain somebody in Virginia to do robotics programming when in three months they're out of a job? My students in computer science: there are close to zero openings for young graduates in computer science today. I don't know why. Actually, maybe because of this. But I also want to make sure we have time to cover the legal and copyright issues. Sounds good. I think you had your question.

I just want to bring up this question of disinformation; that's one thing, but the evolution of information is another. Since the printing press in the 1600s or whatever, the majority solution for energy has been oil and gas. Alternative energy is a recent development. So 90% of the information about energy solutions going into the AI is old history, older information. And what I'm hearing from you is that it just takes all the information and averages it out; the next word is the average of all the words that would normally fit there. But that average is based on outdated history, or disinformation, or a lack of the new, proper, scientifically generated, really good information that we need. Isn't that a problem? Isn't it the same with racism and so on? Yeah. Yes.

Biases simply get translated and communicated as they were recorded. I think new versions of ChatGPT will improve. But you touched on one big question, which is what we tell our students: use it to polish your writing, to look at your code, but don't use it for factual search, because it cannot point to sources the way a Google search can. I think they're fixing that now, but that was the major objection. As I remember, there was a whole lawsuit against an airline where the brief was written by ChatGPT. The citations looked right, written like real cases with legal references in the proper format, but they didn't exist, because it had learned the form of legal writing and just planted plausible words into it. So, I'm not a lawyer, but that's the truth, right? So it's good for polishing text; it's still not good for factual research, because Google will index new content within seconds, while this thing takes months on supercomputers to retrain. So there is still some room for it, and it's getting better and better. And again, it sounds authoritative. It wrote wrong things about friends of mine when they asked it to summarize their careers: the wrong companies they started, completely different things, stated in a very authoritative way. They know that, and they're improving.
I want to hear about copyright. Yeah. Okay. Denise, do you want to wrap up what you had? Yeah, I think so. Well, we can skip it. You want to skip that? Okay. Yeah.

So this is the document; it's from, I think, September 2023, so it's very fresh, and I found it by chance. Look at the four questions it covers. Do AI outputs enjoy copyright protection? I have one slide for each. So please, next one. I'm trying to extract the wording from the document itself; it's quoted.

So: do AI outputs enjoy copyright protection? The US Copyright Office will register copyright only for works created by a human being. Now, of course, people are challenging that with lawsuits, because this was written before machines were able to create. So its guidance is to accept that works containing AI-generated material may be copyrighted under some circumstances, circumstances such as a sufficiently creative human arrangement, and the author may only claim copyright protection for their own contributions. Interesting analysis, but at the end it says this has not been settled. Still, it's very good: the whole thing is six pages, very well written in terms of pros and cons.

Who owns the copyright? That's another one. These are the same questions we had, and this person answered them, or at least laid out the issues. No clear rule has emerged on who the author or authors of these works would be. Companies that provide AI software may attempt to allocate ownership through their terms of service. You know, "I agree to the terms and conditions," and you don't read it. I say from now on you'd better read it, because they're trying to settle ownership there. Maybe you can sue them, but they are already modifying their terms and conditions for this. It's fascinating. Very well-written stuff. Next one, please.

Does the AI training process infringe copyright? This one was new to me. I created something that writes or draws like Picasso, but I didn't copy it pixel for pixel. But now they say that copying data, even just for training, may infringe copyright law, and a trial would be needed to settle it. If you are the copyright owner, you can put in your terms and conditions that the work may not be copied for the purpose of AI training. But people didn't know that, and their work has already been scraped. So: to be decided. And then the last one: do AI outputs infringe copyright? In other words, who could potentially be liable: both the company and the user, potentially. So none of it seems to be settled. And then there's the recommendation to Congress. Next slide, please. Congress may consider whether any or all of these require amendments, but they say it may be better to wait and see how the lawsuits are decided, and then amend the copyright laws. So there you go. It was the best thing I found to answer these questions for this audience.

Well, an intellectual property lawyer I talked to, and only something like 1% of all lawyers are actually intellectual property lawyers, so it's job security if you want to go to law school now, said that basically every single thing they scraped up until 2021 was already copyrighted. Because according to the US PTO, the Patent and Trademark Office, simply by publishing something or writing it down, it's automatically copyrighted.
If you want to take it to federal court and sue someone for stealing what you did, then you register the copyright. But everything is automatically copyrighted. Yes, but then the people who use, say, ChatGPT to write something based on your writing will say: I did not copy your work, which is what copyright protects; it protects the particular embodiment, the exact expression, not the English language itself. That's why they want a jury trial, because you can sway people's emotions. And that's why, as I said, it may even have been illegal to copy the work into the engine for the purpose of training. That was, for me, an interesting twist. To be decided. Many lawsuits coming down the pipe. Many, yes. All right. How are we doing on time? We're good? Okay. Thank you.

Yeah, first of all, if you want to see ChatGPT become accurate, go and use ChatGPT with the Wolfram plug-in. You will start to see the real package. ChatGPT what? Plug-ins. With ChatGPT, if you go to the $20 version, you can add plug-ins. One of the plug-ins is Wolfram, so it does all the mathematics and the graphs and things like that. Another plug-in is Travelocity, so if you start asking questions about travel, ChatGPT goes out and talks to Travelocity to get live data, and Travelocity sends back real facts. Yes, in real time. And there are other plug-ins as well; what they're doing is adding modules on the side that are live, so you can say: go and look at this. Yes, the New York Times. Totally. Remember, this Monday was the OpenAI developer day; they announced a bunch of things, APIs for you to write applications. What I'm saying is that it was not accurate last month, they all know it's an issue, people will write plug-ins, and it will be solved. And then it will do more, and more, and more. (A rough sketch of this plug-in pattern appears just after this exchange.)

So the kids will only learn to type into ChatGPT. Prompting will be the main skill: how you talk to it. It tells you something and you do it. And maybe it's wrong. Maybe it's controlled by somebody, whether it's a billionaire or the government or whoever. I come from the old country, a socialist country, so you know what I'm thinking of. And I don't like the look of all this. I don't like a central source managing everything. Anything centralized can historically fail. Nature survives as a forest of multiple genes and multiple individual agents. And, you know, people tell me: oh, generative AI, the smart city, everything connected to one computer, and one computer telling you, good work. When that gets hacked or controlled, you'll see what you get. So I don't know; I'm for the little people maybe fighting back and staying independent, because this looks like Skynet. Well, Alyssa will tell us when we have two minutes left, and then I'll play it quickly; we'll do an official wrap and then we can play some Terminator. Because today even Elon Musk and many smart people have said, I fear Skynet. Unfortunately. The AI is learning from everything that came before, and everything that came before about AI says it's going to become self-aware and kill us. That's the AI it has in mind.
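A purely illustrative sketch of the plug-in pattern described in this exchange, with hypothetical function and tool names rather than OpenAI's actual plug-in API: the model's reply indicates which outside tool it wants, the application calls that live service, and the result comes back into the conversation.

```python
# Hypothetical sketch of the plug-in / tool-calling pattern. None of these functions
# are real APIs; they stand in for live services such as a math engine or a fare lookup.

def math_tool(expression: str) -> str:
    # Stand-in for a symbolic-math plug-in; evaluates a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

def travel_tool(route: str) -> str:
    # Stand-in for a live travel-data plug-in.
    return f"(live fares for {route} would be fetched here)"

TOOLS = {"math": math_tool, "travel": travel_tool}

def handle_model_reply(reply: dict) -> str:
    """If the model asks for a tool, run it and return the live result; otherwise return its text."""
    tool_name = reply.get("tool")
    if tool_name in TOOLS:
        return TOOLS[tool_name](reply["arguments"])
    return reply.get("text", "")

# Instead of guessing, the model asks the application to call a tool:
print(handle_model_reply({"tool": "math", "arguments": "365 * 24"}))    # 8760
print(handle_model_reply({"tool": "travel", "arguments": "SFO to JFK"}))
```

The language model itself stays frozen; accuracy improves because the fresh facts come from the plugged-in service at the moment the question is asked.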
Apparently, and I read this in the news, there was a game where an AI was one of the players, and it played one move and won, seemingly by killing everybody instantly, because its optimization function was winning the game. No emotion, nothing. That was not the intention of the game. Everybody thought the objective was to win while going by the rules. Well, it found that actually the simplest way was to kill everybody and just end the game. Was that within the rules? I don't know what the rules were. But it figured out that this was the simplest way to win. Right?

Based on your technical knowledge, and you've been in the field for quite some time, do you think it's possible with current AI frameworks for the original authors to get micropayments whenever their work is used with their permission?

I will answer the following way. My thinking is: probably not, the way it's built now. But I hope this goes up to the Supreme Court, you know, and somebody says: you either cease and desist, or you encode what you trained on and account for it. Algorithmically, I think it could be done. It's hard to say; it's so fuzzy. But it's a good research project, and that was one of my questions: people should actually get paid when their work is used, in proportion to how much it is used.

Remember the fight they had with Google? I mean, we've gone through this in the last ten or twenty years. Google scraping everything and serving it back in a different form for free. And there were a lot of lawsuits. It's like there's an elephant in the room. Google took all of our content; everything that you have put into Gmail, they got it. And people say, I don't care about that stuff, I don't care about Gmail. And everything else just passes. There's a lawsuit in, I think, Australia that the newspapers finally won, so they're getting paid, but that took 20 years. Yes.

I think we have a question right down there. So we talked, on the flowchart and at other points, about the narrowing down; sorry, about ChatGPT feeding on itself. It's a feedback loop: it puts out information, and that output becomes part of the information pool. Over time, won't that significantly make the language flatter, and thought narrower?

Absolutely. Narrower and narrower, and at very fast speed. You are a wonderful audience. I see it in my class; I have a student here. Everybody will start to write the same way. Your ChatGPT can talk to my ChatGPT, and I get a generic email from you: full of fluff, no personality. It sounds like business-speak, you know what I would say. It sounds like business until they destroy humanity. And who is going to pay the bill? No, but businesses already move those canned messages back and forth.

And the other thing is that you've given the impression that you get the same answer every single time. That's what temperature is all about. There's this thing called temperature that is a bit hard to explain, but essentially, if you ask it the same question, it doesn't pick the same answer every time. That's the trick.
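A minimal sketch of the temperature idea being discussed, with made-up scores rather than any real model's numbers: the model assigns scores to candidate next words, temperature rescales those scores before sampling, and so a low temperature almost always picks the top word while a higher temperature lets less likely words through. That is why the same prompt can come back with different answers.

```python
import math
import random

# Hypothetical raw scores ("logits") a model might assign to candidate next words.
candidates = {"power": 2.0, "output": 1.0, "caution": 0.5}

def sample_next_word(logits, temperature=1.0):
    """Softmax with temperature, then draw one word according to the resulting probabilities."""
    scaled = {word: score / temperature for word, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {word: math.exp(s) / total for word, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

random.seed(0)
print([sample_next_word(candidates, temperature=0.1) for _ in range(5)])
# Low temperature: nearly deterministic, e.g. ['power', 'power', 'power', 'power', 'power']
print([sample_next_word(candidates, temperature=1.5) for _ in range(5)])
# Higher temperature: the less likely words show up more often across repeated runs
```

Picking only the single most likely word every time would make the output repeatable; the chat products typically keep the temperature above zero, which is why answers vary.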
Temperature was the thing that blew all the AI scientists away. Thank you. Can we have another question before the clip? Let's get this question in too. That's okay.

It strikes me that technology is one thing, but when it enters our economic system, technologies really raise issues of power and profit. Yeah. Who has the power? How is it used? Who makes the money? How is it gained? And I wonder if ethics is even in the same ballpark, because ethics is usually what groups do to protect themselves from outside influence, like doctors, lawyers, journalists. And I wonder whether we need something closer to that slide about the different governments, whether we need to meet power with power and regulate profit.

Yeah. Excellent points. A number of organizations of some size, and I can't pinpoint the size in terms of how many employees they have or their reach within their communities, but over at least the last decade a number of organizations have been moving strongly toward having an ethics and compliance office: not just the legal team, but an ethics and compliance office, with ethics and compliance officers. We had an ethics and compliance officer come and talk to my graduate class. These individuals look beyond what the law might say, at what is appropriate. Increasingly, they're finding they have to be pulled into the AI questions that we're struggling with: what does this mean for the organization? Here I am, the ethics and compliance officer; I might even have a department of a couple of folks; we might pull in the risk management department as well; we're going to talk to the resource people in the organization. We need a meeting of the minds to turn some of these things into something that's going to be positive and meaningful in terms of our concerns. Yes.

And on top of that, there's always the argument that the industry will regulate itself. Interestingly enough, when the top AI executives went to Congress, they asked Congress to regulate them, because otherwise it's a catastrophe. I'm telling you, that's my reading of it, my opinion. In general, industry never does that, asking the government to put controls on top of it. But surprisingly, the OpenAI people and the others all said we need some kind of regulation, because otherwise it's going to be the Wild West. So I think some regulation is coming. There's money in it. And it's necessary.

One last question. Okay, one last question. I'm having trouble with the whole picture as everyone is describing it, including yours. I'm trying to place this coming apocalypse among all the competing disasters: the erosion of democracy in the United States, income inequality, world conflict in Ukraine and Gaza, climate breakdown. And now we have to worry about generative AI. So which of these disasters comes first? Doesn't generative AI dwarf them all? Because if people lose their jobs on a mass scale by 2035, there will be riots in the streets, and will there be a functioning democracy to deal with it? We're talking about compliance officers when we can't even run a Congress or a Supreme Court in this country, never mind all the hackers who are going to come in and who don't care about government at all. How are we supposed to position this particular disaster in the larger landscape of disasters? And what do we do first?
I say, let's go for a bottle of wine. You know, I don't want to sound negative. People always say, yes, humanity harnessed fire and nuclear energy for its benefit. What worries me is that this is coming so fast, and I know how hard control is across so many organizations. But to be honest with you, I don't know, and I'm concerned too. What's positive, at least, is that the top players, including governments, got together and started talking about it. So that's positive. I think in the end the help will have to come from the top, maybe forcefully from government, because otherwise it will be...

Okay, thank you. Obviously this is a hot topic. I hope this is the first of many conversations that we have here at Mechanics Institute about generative AI and ethics, authorship, and ownership. We'll take an official end here, and then we'll play a little video from The Terminator; it's one minute. It'll take me a second to pull it up. But before we get to that, I want to give the biggest thanks to our two esteemed guests, Denise and Dragutin. Thank you. I hope that you will stay for a little bit to continue chatting and connecting with one another. And a big thank you; I hope that you will come back and join us for other events here at Mechanics Institute. Please visit milibrary.org to learn more about everything that we have to offer here. Very little AI in our programming; it all comes from the heart, and slowly. For now. And our library, of course, is still here with real books. So thank you so much for joining us. Thank you.