Let's take a look at how we can use AI, in particular ChatGPT, to help us understand research articles. My example is this article by Sapienza on early internationalization. I use the same article in other videos where I talk about how to read theory articles. How would we go about understanding this article? The traditional way that I teach students is to first identify the key concepts and key terms and find the definitions for those terms. They are often defined in the article. Sometimes you can find them in Google, and sometimes terms like age and growth are things you have an intuitive understanding of. Using AI and ChatGPT allows for taking a shortcut, with some big caveats.

What you can do is copy-paste the article text into ChatGPT and ask the AI to summarize it for you. So we take the article and copy the text. This is a bit inconvenient here because the article is set in two columns, but you can find a single-column version, for example a preprint, which is the version before the article was formatted for the journal, and copy-paste the text from there. I copy-paste the text into a text editor first. The reason I use a text editor instead of pasting directly into ChatGPT is that there are page numbers, footnotes, and other things that I want to remove so that they don't throw off the AI. I don't know whether they actually do, but I thought it is good to make sure there is nothing extra. So now I just have the text of the article.

Then we put it into ChatGPT and ask it to summarize the article in simpler terms. The answer that comes back is pretty much correct. The only small detail is that the article is not about establishing a company in a foreign country but about expanding an existing company into a foreign market. So the response is about 95% correct; only the first sentence contains a small error. What's more, we can also ask follow-up questions and request clarifications.
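The cleanup step before pasting can be sketched in a few lines of Python. This is only a minimal illustration; the patterns below (bare page-number lines, bracketed footnote markers) are assumptions about what copied PDF text typically looks like, not the exact cleanup done in the video:

```python
import re

def clean_article_text(raw: str) -> str:
    """Strip common PDF-copy debris before pasting into ChatGPT.
    A rough sketch; the patterns are assumptions about typical
    extraction artifacts."""
    kept = []
    for line in raw.splitlines():
        line = line.strip()
        # Drop lines that are nothing but a page number, e.g. "914"
        if re.fullmatch(r"\d{1,4}", line):
            continue
        kept.append(line)
    text = " ".join(kept)
    # Remove inline footnote markers such as "[3]"
    text = re.sub(r"\[\d+\]", "", text)
    # Collapse any doubled-up whitespace left behind
    return re.sub(r"\s{2,}", " ", text).strip()

sample = "Early internationalization[3]\n914\naffects firm survival and growth."
print(clean_article_text(sample))
# → Early internationalization affects firm survival and growth.
```

In practice you would run the whole copied article through a function like this and then paste the result into the chat.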
So we can ask, what does imprinting mean? That was a term used in the article's introduction, and ChatGPT tells us correctly that imprinting refers to things that happen early in a company's life cycle and have long-lasting consequences. Then I wanted to know a bit more about the role the imprinting process plays in the theory, I asked that question, and I got an answer about mice. This is not a biology article, so why would the AI suddenly pivot to talking about mice? It is because of how the AI is trained: it is trained to predict what word comes next, which is what allows it to write answers, and it is trained on large amounts of data in which imprinting is often discussed in the context of biology. So it was answering based on the context we gave it, but also based on its training material, and here it gets things horribly wrong. This AI is good at generating text, but it is not very smart, so you need to be very critical of what it gives you. It might give you a correct answer, or it might give you something completely incorrect.

Let's look at another example. I wanted to understand what resource fungibility is. The answer says it means that a resource can be used for different purposes; that's correct. Then I asked what a moderator is and what arguments the article makes in support of its moderators. A moderator is a variable that affects the relationship between two other variables. The first sentence of the answer gets something right, but then renewable energy comes up, and we don't know why. The answer then says there are three moderators, which is correct. One is fungibility; that's right. Another is imprinting: not true. Social norms: not true. So resource fungibility is correct, but the rest of the answer contains things about renewable energy that are not related to this article at all. Is this a useful way of reading articles? I think with some caveats it is.
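The moderator idea itself can be made concrete with a toy interaction term. This is a generic statistics sketch, not the article's actual model, and all coefficients are made up: in a linear model with an X×M interaction, the effect of X on Y changes with the level of the moderator M.

```python
def predicted_y(x, m, b0=1.0, b1=2.0, b2=0.5, b3=-1.5):
    """Linear model with an interaction term: the slope of x
    depends on the moderator m (all coefficients are made up)."""
    return b0 + b1 * x + b2 * m + b3 * x * m

# Effect of a one-unit increase in x at two moderator levels:
slope_low = predicted_y(1, 0) - predicted_y(0, 0)    # b1 alone = 2.0
slope_high = predicted_y(1, 1) - predicted_y(0, 1)   # b1 + b3 = 0.5
print(slope_low, slope_high)
# → 2.0 0.5
```

The point is simply that M moderates the X–Y relationship: the same unit change in X has a different effect depending on M.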
The caveats are these. The AI is very good at summarizing text: you can give it a big piece of text and tell it to make it shorter and just give you the main idea, and it does that job really well, as we saw in the first response. It can also explain new terms when you ask follow-up questions, but it might get those explanations wrong. So it can explain things, but it can also go horribly wrong. Using this AI and asking questions about an article is definitely not a substitute for reading the article, at least at the time of recording, but it can be useful as a first step when you just want a big-picture understanding of what the article is about. For that, the summarization function can be really, really useful.