Hi, I'm Mikko Renkka. In this video I take a look at what GPT AI is and what it can do for teachers and researchers. The easiest way for most people to try this is the ChatGPT website. It is free to use, and you can sign in with a Google account, for example. Once you have logged in, there is a text box where you can type your questions and the AI will respond. There are several use cases, and this is the first one. This is my course, and I have this essay assignment. The essay is about Elon Musk: students should read a case and then answer questions about it. But this is something AI can do, so you can simply copy-paste the assignment description into the box, ask the AI to write the essay for you, and here we go. I asked for between one and two pages of text about Elon Musk based on the case, and the result is pretty convincing. I would probably grade it between four and five on a five-point scale. Now the question is: if this technology can be used to generate essays without reading any of the materials, or without even thinking yourself, should it be banned? My answer is no, because there are a couple of important use cases that are genuinely useful for researchers and for teachers. I got involved in the GPT discussion at my university because I was something of an early adopter. I read about this technology in a newspaper in late 2022, just as I was starting a course that I teach at Aalto, but which is available for university students as well, and I decided to implement a policy that allows the use of AI on this course, just to see how the students would use it.
I also give a warning here: if you put questions from my course into GPT, it will produce a convincing-sounding, convincing-looking response that might be entirely incorrect. So this is an exam question that I'm using. GPT gives a really convincing answer, and then I just explain why the answer is completely incorrect. So this is a bit of a challenge for students, but there are some good use cases for this technology, and I wanted to see how the students make use of it. Then word spread at the department that I have a policy on my course, and I was invited to join a group that developed a policy for the business school. This is our policy, and it is to some extent based on my course policy. Because I was an active member in writing this policy, when we released the press release they put my name there for further information, and now a lot of people in Finland and in some other schools have contacted me asking how they should deal with ChatGPT and GPT AI, or large language models more generally. That is one of the reasons why I'm now recording this video. Now let's take a look at some of the more useful cases. There are three good use cases for GPT AI that I've come up with or encountered so far. The first is reading articles. Let's assume that you are reading an article on a topic that you don't really understand. This is an article by Sapienza that I use when I teach master's-level students; we use it as a practice piece when we practice reading research articles on my research methodology course. The way I teach students to read articles is to first identify the key terms, identify the central concepts, then find the definitions of those terms, and once they know the terms, they are ready to read the article. If you don't understand the terminology, reading the article is very hard. Using GPT allows for a different kind of approach.
So what we can do is enter the article text, or parts of it, into the AI. Copy-pasting from this PDF is pretty cumbersome because there are two columns, but we can go to ResearchGate, where quite often you can find an earlier version of the article. Here we have the authors' version in Word document format, so it's just one column. We copy-paste the text from the document into a text editor and do a bit of editing. I'm not sure if this is necessary, but I like to do it: remove page numbers, remove footnotes, and so on. Then we ask the GPT AI to explain the introduction of the article in simpler terms, and here we go. That is a simple explanation of what the article is about, based on the introduction. We can ask follow-up questions, like what does fungibility mean; in this context the answer is correct. We can ask about imprinting, and now the AI gets it wrong, because imprinting is often used in another context. So we have to be more specific: we have to say, tell us about imprinting in the context of this article about internationalization, and then we get the correct answer. So this is a very good tool, but it can also go horribly wrong. I ask it what the term moderator means. The term appears in the introduction of the article. It tells us about moderation generally, because that's what I asked, and it also identifies that this article talks about three moderators. But the three variables it claims are the moderators in this study are completely incorrect. It just writes something in: it knows that it needs to produce a list of three things, but it doesn't know what those three things are, so it just comes up with stuff. Another interesting thing is that when you ask it to, for example, list recent articles, it can come up with articles that don't exist: they have real authors, and topics that those authors might write about, but when we put the title into Google Scholar, nothing comes up.
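The manual workflow just described, clean the copy-pasted text, ask for a simpler explanation, then scope any follow-up question to the article, can be sketched in plain Python. This is an illustrative sketch, not something shown in the video: the `MAX_CHARS` limit, function names, and prompt wordings are my own assumptions, and the actual model call (for example via the `openai` library) is deliberately left out so the snippet runs offline.

```python
# Sketch of the "explain this article section in simpler terms" workflow.
# Only the text clean-up and message construction are shown; the model
# call itself is omitted so this runs without an API key.

MAX_CHARS = 8000  # rough stand-in for the model's input limit (assumption)

def clean_article_text(raw: str) -> str:
    """Mimic the manual clean-up step: drop lone page numbers and
    rejoin the remaining lines into flowing text."""
    lines = []
    for line in raw.splitlines():
        stripped = line.strip()
        if stripped.isdigit():  # lone page numbers
            continue
        lines.append(stripped)
    return " ".join(l for l in lines if l)

def build_messages(article_text: str, question: str = "") -> list[dict]:
    """Build a chat-message list: first ask for a simpler explanation,
    then scope any follow-up question to this article."""
    text = clean_article_text(article_text)[:MAX_CHARS]
    messages = [
        {"role": "user",
         "content": "Explain the following article introduction "
                    "in simpler terms:\n\n" + text},
    ]
    if question:
        # Scoping the follow-up avoids the ambiguity problem seen
        # with the term "imprinting".
        messages.append(
            {"role": "user",
             "content": f"In the context of this article, {question}"})
    return messages
```

Prefixing the follow-up with "In the context of this article" mirrors the fix used above for the ambiguous term imprinting.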
Another useful thing to know is that this is still under development; it is a technology preview. There are, for example, capacity limits. You get errors, like we got an error here, but it was still able to complete my query. Nevertheless, just to summarize hard or difficult texts in simpler terms, it does a pretty amazing job. It can answer follow-up questions, but then you need to be very critical about whether the answer makes sense. It was saying something about cells when we were asking about imprinting, and this article is about businesses, not about biology. The term imprinting is used more often in biology, and because the model has been trained with materials from all kinds of sources, those sources tend to use the term in a different meaning than this article does, and therefore the AI is confused by it. But often, when it starts explaining completely unreasonable things, you will be able to see it. Let's go to the second use case. This technology is pretty good for generating ideas. For example, if we want to get started on the entrepreneurship literature and we don't really know what to put into Google Scholar, because we don't know the terms to search for, we can ask GPT. So we ask it to tell us some of the central concepts and theories in entrepreneurship research. Opportunity recognition, yes. Resource-based view, that's strategy, so not quite. Social networks, not quite. Effectuation, yes, that should be on my list. Lean startup, more of a practitioner concept, a bit of a hot topic now. So something useful; that is a starting point. We can also ask it to develop courses or lectures for us. I asked it to give me a syllabus for a master's-level course on entrepreneurship, and there we have it: first the learning objectives, then eight weeks of content, and then the completion requirements and grading criteria. Would I be able to just take that and start teaching?
Probably not, but I would treat this like a sample syllabus from a textbook: I take it as a starting point and then I start thinking about what to add and what to take away. So this is a pretty useful tool for developing early drafts and thinking about what I might include. It gives you ideas; it doesn't give you solutions. It might not be perfect, but it's a starting point. Now the final use case, and this is something that I think is going to be most impactful for my research: rewriting your own text. If you have problems writing paragraphs that are well structured, if you struggle with grammar, if you tend to write too-long paragraphs, or if you cannot structure separate ideas into a coherent set of paragraphs, then this is a great tool. Here is the abstract and introduction of a conference paper. The introduction is a bit too long to fit into the GPT version that I'm using here, and I'm using this LFA software, a paid application that you can use on a Mac, to rewrite it. What I'll do is first highlight the text, just a part of it because the full text is too large, and then click on rewrite. Now it's rewriting it, making it more tidy, and now I'll rewrite it again. We originally had four paragraphs. I was thinking that two paragraphs might be ideal, but here we have the same information, all the key points, contained in a single paragraph, which is a lot easier for the reader. I've been using this quite a lot. For example, when I get texts from doctoral students, some of them might not be very experienced in writing: they write very long texts and they repeat themselves. Then I need to correct them if we are writing a paper together. So my options are that I do the correction myself, or I just tell the AI to rewrite the text.
If I ask the AI to rewrite it, I then send it back to the doctoral student and tell them: this is what the AI would do; compare it against your text and come up with a new version. I also use this for my own texts. If I have some text that I think is too long and needs to be cut, that's hard, because you don't want to eliminate the sentences that you spent time writing, and seeing the unnecessary words in your own writing can be difficult. So what I do is take the text that needs to be shortened, make a copy of it so I have two copies one after another, and then ask the AI to shorten the first copy. It shortens it, and then I compare it against my original text to make sure that no errors were introduced and that all the important points I had in the original text are still in the AI-processed text. And of course, when I write my articles, I go through the text myself again afterwards, so it's not as if the AI-written text would be the final text. It's kind of like a co-author that works very quickly for you. A colleague of mine has already used this in writing an article. He was sent a long text by his co-author that should go into the discussion section of the paper, and he thought the text was too long. Let's say the original text was three pages; he used AI to shorten it to one page, sent it back to his co-author, and asked if this would be okay. They then did an editing round on it, and now it's in a published paper. So this is like a spell checker, or Grammarly with superpowers. Okay.
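The shorten-then-compare check described above can be sketched with the standard library alone. This is my own illustrative heuristic, not the method from the video: it flags original sentences whose content words barely appear in the shortened text, which are the points the AI may have cut entirely; the function names and the 0.3 threshold are assumptions.

```python
import re

def sentences(text: str) -> list[str]:
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(text: str) -> set[str]:
    """Lowercased content words; very short words are dropped."""
    return {w.lower() for w in re.findall(r"[A-Za-z']+", text) if len(w) > 3}

def possibly_dropped(original: str, shortened: str,
                     threshold: float = 0.3) -> list[str]:
    """Return original sentences whose content words barely appear in
    the shortened text, i.e. points the AI may have removed entirely."""
    short_words = words(shortened)
    flagged = []
    for sent in sentences(original):
        sw = words(sent)
        if not sw:
            continue
        overlap = len(sw & short_words) / len(sw)
        if overlap < threshold:
            flagged.append(sent)
    return flagged
```

A check like this only points you at candidate omissions; the careful side-by-side read against the original, as described above, is still needed to catch introduced errors.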
When you think about this technology, I needed to install an additional piece of software to integrate it into my operating system, but this is just the beginning. In the beginning of 2023, the Microsoft CEO, pictured here in an interview published by the Wall Street Journal, said that Microsoft is going to invest billions into this technology, the rumor says ten billion, and that Microsoft is going to integrate it into every product they produce. They are already experimenting with chat in Bing, and I would expect them to integrate it into Word, where you could use it for rewriting like I just did. There already exist third-party Word plugins, for example Ghostwriter, that live in the sidebar of Word; you can access GPT from there, rewrite your text, and ask for ideas. So this is just the beginning of this kind of language generation technology, because it is not going to stop here. We decided that our university policy needs to be forward-looking, so let's take a look at the policy in a bit more detail. We have seven points, and this is the first version; we will update this policy in the years to come. The first point is that we generally allow the use of AI tools. We don't, of course, allow every possible use case; for example, we don't allow students to write their essays fully with AI without reading. But we think this is the future, and a lot of people are already using it. I use it in my own work, so it would be very hypocritical to ban it for students while making use of the technology myself. The second point is that this is a useful writing aid, so it is okay to use it to polish your own writing. A lot of our students will not be professional authors; they will be business people, and if they can use this kind of technology to make their communication better, then good for them. They don't need to be able to write nice prose themselves as long as they can convey their ideas in an understandable manner. The third point is that the student is
always responsible for the text, and this is also what I had on the postgraduate course that I started with. By this we mean that if the AI writes things that are not true, or if the AI plagiarizes, then we treat it as if the student himself or herself had written the false things or plagiarized. So a student always needs to check what the AI writes. As we have seen in these demonstrations already, the AI doesn't know everything, and sometimes it gets things horribly wrong, so you always need to check what the AI writes, even if you use it only for polishing your own language. The fourth point is that students should be informed of the principles of this policy and the advantages and drawbacks of using these language models. We plan to do that as part of thesis seminars, as part of the research methods course, and then maybe in some other courses, like introductory studies, because this is something that students will face. At some point in the future, students who come to the university may already know how to use this technology responsibly, but until that time we need to educate them. Then point number five: it is important that it is not possible to pass courses with high grades using only AI, and the responsibility falls on teachers to design their assignments so that answering with generic text produced by AI will not get you high scores. For example, analyzing Elon Musk is pretty easy for the AI, but if we want to make it more challenging, we can ask more specific questions about less common topics; we can ask the students to analyze the strategy of Harvia, which is a sauna manufacturer here in Jyväskylä, or actually in Muurame, next to Jyväskylä. If it's not possible to create these more specific assignments, if it's unavoidable that a question or an essay could be answered using AI, then the weight of that assignment in the course's total grading must be low, so that a student who
uses AI will not get any benefit over a student who does the work themselves. The sixth point is that students must disclose if they use these technologies. This is our initial guideline, and the reason we want students to disclose how they use this is that we want to understand how students are using it, and maybe we can learn some good use cases from them. For example, summarizing an article was something I learned from a student on my postgraduate course: the students were given a difficult philosophical text, and one of them decided to see if the GPT AI could explain it to him in a more understandable way, and that was a great success for him. Then the final point, point number seven, is that teachers can deviate from these guidelines, but if they do so, they have to justify it to the students and clearly communicate the exceptions. For example, if the purpose is to practice English writing, then using this technology would obviously not be very productive. So, I hope you have found this short introduction to the good use cases of GPT AI useful.