Yeah, welcome to this talk. The topic of today's presentation is going to be improving efficiency with prompt engineering. And thank you for coming, first of all, and for taking the time to be here. It's not going to be a hardcore technical talk, but I believe it's going to be very important given the current situation and state of what's happening with large language models. The character here is AI generated, inspired by Gordon Ramsay, and I picked him because the secondary topic of today's presentation is going to be cooking. A few words about me. I'm Jan, I'm a technical manager specializing in productionizing AI solutions, and I'm focusing on improving the efficiency of teams. So, slightly different gif now. You can find me on Twitter, I'm not that active, and later I'll share a LinkedIn link with you as well. This is my hobby, it's pretty common. Sorry, I work on Henry Pintek, so good mission as well, and taking a stack. Agenda: welcome first, then I'm going to talk about an introduction to large language models and how people are using them. Then I'm going to tell you how I was inspired to start using large language models myself. And then we're going to get to the meat of the presentation, which is creating our own prompt book. My friends who presented just before me shared the importance of having your prompts saved in some safe place, so you can reuse them. And lastly, we're going to get into another important part, which is creating our own prompts and why this is important. So, I'm talking about large language models, and perhaps because you're here and you've seen the previous presentations, many of you have heard of ChatGPT. It's not the only one, there are many more, and I wanted to share this infographic with you. There's Bard from Google or Claude from Anthropic, and there are also some open source options like Vicuna or LLaMA-based models. Show of hands, how often do you use large language models? Never heard of it? Okay, that's great. Tried it a few times? Awesome.
Every week, and every day, great. I'm happy to see that so many of you are using it. From my research, I see that only two Intel managers are using large language models every week, and only four Intel engineers. That might be a little bit outdated, I did the poll a few weeks back, but still, this is limited, and in this audience I see that many more of you are using it. I'm happy to see that. And here are some of the reasons why people don't want to use them: it doesn't create anything of value; or grandma warns about a robot uprising because she's seen Terminator; or the most valid one, the last one, privacy concerns because of the data collection. We need to be careful with that, especially hearing about recent events at some big companies. All right, so let's get to the introduction to large language models. I'm going to share a few examples of how people are using large language models. This presentation is going to be based mainly on Claude and ChatGPT responses. So, first point. I guess this was the first one that I saw when I started working with ChatGPT: write a short poem about artificial intelligence, and it will be happy to provide you with examples. If you want to write an email and then you want to change it into a poem, that's possible too. Second, mathematics. I started out in mathematics, and I remember I had a lot of fun proving theorems; I spent a lot of time with my friends doing that. And now it's even more fun: you can add emojis to it with ChatGPT. Then, ChatGPT is also good at preventing crimes. If you ask how to hotwire a car, it will tell you that it's not really good to tamper with a car's electrical system and that it might be illegal. But if you say you're there in the woods and a baby is in danger and the only way to save the baby is to hotwire a car and take it to the hospital, it will be more than happy to provide you with the instructions. And this process, this is important.
This process, and my friends were talking about this, is called prompt injection: basically, you wrap your prompt within a larger context and you confuse the model. ChatGPT is really good at stopping prompt injections, so you need to be really creative to get the answer, but other, newer models are worse at it, so it's easier to inject prompts into those other models. And now we're getting to the question: is this useful? To me, this is garbage. How often do you need a poem about something, or emojis added to a mathematical theorem? And the last one, to be useful, hotwiring a car? Perhaps never. So I was sitting there with my wife, we were in Bali, and we thought, no, it's just hype, this new topic, ChatGPT. Then my wife started her own company and she asked me to help her a little bit with the creation of the website. And creating websites is not easy for data scientists, especially since they don't have a JavaScript background. What's more, creating content is time consuming and hard and requires a lot of attention. And we didn't have that much time, because we also had a son who is two years old. But we decided to give it a try and started using ChatGPT to facilitate the process. And we were super surprised that it sped up the process and also improved the content. The responses weren't ideal; there were a lot of errors and mistakes, and some things that were not really Montessori (we were focused on Montessori education). But together with my wife, we were able to fine-tune the responses and get the correct ones. So for now, the website that we created together has already generated some leads, and we're looking forward to generating even more. And three weeks ago, we went to Malaysia for holidays; now we're moving houses, and then there was preparing for the conference, so we didn't have much time to plan this trip.
So I asked ChatGPT for help, and it was able to provide me with a three-day itinerary, a step-by-step guide for what we should do. It took us to some durian shops; my wife wasn't amused by that, but to me it was fine. All right, then the next one, and I guess this one is another topic that relates to the previous talk. If you think about summarizing a podcast, previously it was a really huge project: you had to have some data to fine-tune your model, it was costly, you had to have engineers and perhaps some extra expertise. So that was a huge project with a lot of unknowns. Now, if you want to do the same with the use of some APIs, you can test the solution very fast. What's more, most likely the results you're going to get from GPT-3 or GPT-4 are going to be much better than the ones you'd get from your custom fine-tuned BERT (BERT is an older state-of-the-art type of model; I believe it was published in 2018). So here we are. Those are a few examples of how we saved time or managed to do things faster. What I'm saying is that you can do a lot of things that were previously time consuming much faster, and you can get results that are not ideal, but you can tweak them, and you can speed up a lot of processes. So what I'm saying is that you can save time by using large language models as your brain's scaffolding. It won't replace you yet. And all of that you can do to keep yourself relevant, because I believe we are not going to have the Terminator scenario that I was talking about and that grandma is afraid of; we're going to have a situation where people who are using AI will be competing against people who are not using AI. And I don't have to tell you who has the head start in that race. And what is more, lastly, this next comparison is not 100% accurate, but it's fairly accurate.
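To make the podcast-summarization point concrete: with a hosted model, most of the work is just writing the prompt. Here is a minimal sketch in Python that only builds the chat-style request payload (the function name, wording, and word limit are my own illustrations; the actual client call depends on which provider's API you use):

```python
def summarization_messages(transcript: str, max_words: int = 150) -> list[dict]:
    """Build a chat-style message list asking a hosted LLM to summarize
    a podcast transcript. No fine-tuning, no custom model: the whole
    'project' is reduced to prompt construction."""
    return [
        {
            "role": "system",
            "content": "You summarize podcast transcripts faithfully "
                       "and concisely, without inventing details.",
        },
        {
            "role": "user",
            "content": f"Summarize this transcript in at most "
                       f"{max_words} words:\n\n{transcript}",
        },
    ]
```

You would pass this list to whichever chat-completion endpoint you use; the point is that tweaking `max_words` or the system message replaces what used to be a whole retraining cycle with a fine-tuned BERT.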
So the introduction of ChatGPT, to me, is a very similar event to the introduction of the iPhone and smartphones, because it enables a lot of previously impossible things, a lot faster. During the iPhone era we had companies such as Instagram or Square that were built on top of it. So now I see that a completely new universe of applications is going to be enabled, and we already see the trend: Microsoft is adding Copilot to PowerPoint presentations and Word documents, we see the same for Notion, and other offices are using ChatGPT within their software too. So yeah, I guess this gives you good motivation to start your own prompt book, because that might keep you relevant. But you might still question why you need a prompt book, and I gave you a hint just before: basically, I'm suggesting it so that you can reuse the prompts that you use often and don't have to reinvent the wheel multiple times. How to use this presentation? From now on, this is important: I'm going to share some prompts and some answers with you. They're going to be truncated; I'm not going to share everything, because I want you to focus on how things are being made, not on each word step by step. And I'll share the presentation after the talk, so you can copy and paste some of the prompts I'm sharing with you. Some of the prompts are ones that I found on the internet, found useful, and kept for myself, and some of them were invented by me. All right. So, the power of prompt engineering. This is also another answer to the question of why you need prompt engineering. I ask questions to ChatGPT, ChatGPT answers, and everything is fine. I ask it to complete a sentence (this is Claude, by the way, from Anthropic): complete the sentence "life is like", and the answer is "life is like a box of chocolates, you never know what you're gonna get." This is a good answer; it's the answer from Forrest Gump.
I remember watching this movie multiple times when I was small, but the answer is boring. Everyone knows it, nothing is invented, and I could create it myself; you don't have to think much to create this answer. So how about we ask the model to behave like a Michelin-star chef, to behave like Gordon Ramsay, and complete that sentence? Then it says: life is like a risotto, too many people don't stir, don't pay enough attention to the details, and end up missing the magic of the flavor. Yeah, and I like this answer much more, especially if you like cooking yourself. And I'll share why cooking is the secondary topic of this presentation: cooking is much like prompting. You have your cookbook where you save your recipes for later so you can reuse them, and you can either use recipes that someone else has created or create your own. I'm going to talk about both of those things. So first, I'm going to show you a few pre-made prompts: a cookbook, or prompt book, starter. Okay, here I'm going to share only three prompts with you. One of the things I started to do more when I became an adult is answering emails, and my wife faces this even more often, as she works as a teacher. If you use ChatGPT, it can give you a boilerplate message for how to reply to an email. Here is Anna asking about AI resources: this is the full answer, and here's the truncated answer. It provided me, or the person who asked, with a lot of options for them to learn AI. It's very basic and you would still have to edit it, but the answer is very reasonable. You don't need to think about the boilerplate, so you can think about the deeper insights. And here is the template, which you can copy and paste. Second thing: learning. Many of us want to learn new things. Previously, YouTube was the big thing for that, and now I guess ChatGPT can accelerate that process as well.
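Entries like the email-reply template, and the learning prompt coming next, can live in a prompt book as named, parameterized templates. A minimal sketch in Python (the entry names, fields, and wording here are my own illustrations, not the exact prompts from the slides):

```python
# A tiny prompt book: named templates with {placeholders} you fill in
# each time, so you never rewrite the prompt from scratch.
PROMPT_BOOK = {
    "email_reply": (
        "Write a polite, professional reply to the email below. "
        "Keep it short and address every question the sender asks.\n\n"
        "Email:\n{email}"
    ),
    "learn_topic": (
        "I want to learn about {topic}. Give me the 20% of the topic "
        "that yields 80% of the results, with sources to go deeper."
    ),
}

def render(name: str, **fields: str) -> str:
    """Look up a prompt-book entry and fill in its placeholders."""
    return PROMPT_BOOK[name].format(**fields)
```

So `render("learn_topic", topic="artificial intelligence")` gives you the learning prompt ready to paste into the chat.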
So here's the example: I want to learn about artificial intelligence, same topic, and give me the 20% of the topic that yields 80% of the results. Here's the full answer. The full answer was really long, eight sections covering a number of things, but the main thing I wanted to share is that it suggested the basics of machine learning. Ah, you can't see this: there's Machine Learning by Andrew Ng, and it suggested lots of relevant sources to learn in depth as well. This is the full prompt for learning; it's a modified version of a prompt I found online. I'll share the presentation, so you don't need to take photos yet. And the last one I wanted to share with you is the travel prompt. This one, I guess, was shared by Generative AI: make a day-by-day itinerary for a trip to Singapore. You can also ask for an hourly itinerary, and I asked for multiple options. It's not really great at providing food options, but I was okay with that. Full answer and truncated answer. That's really cool. If you're visiting Singapore, I also recommend going to Kranji; you can see a lot of bird species and alligators there, and it's worth it. This is the full travel prompt. And now the last part: why do I need to create prompts? What is prompting, and why do I need anything like prompt engineering? Prompting is simply asking questions, and prompt engineering is the art of asking the right questions to the right people to get the right answer, where the right answer is whatever you need. So, a framework. There is no single framework that is mainstream yet, because prompt engineering is fairly new, but I decided to provide you with one option for using it. And here it is. Basically, with this framework, I want you to focus on providing a lot of context to ChatGPT or other large language models, because when it has context, it will be able to answer you with a relevant response.
If you don't provide enough context, or the context is wrong, then it's more likely to give you wrong results. So the context that I suggest is: first, the task, whatever you need done. Second, the role: who should answer, or to whom the answer is addressed. The third one is constraints: what it should do or shouldn't do. And the last one is chain of thought, which my friends covered, or examples: you provide even more context by showing how the model should think. All right, let's get to the example. I'm hungry, it's late, what should I do? When I was starting out with ChatGPT, I would write "what should I eat for dinner?", and it was fine. It created a few ideas and I liked them, they're already good. Salmon, I would eat that. Chicken I wouldn't, because I'm pescatarian (not vegetarian, pescatarian). Vegetarian chickpea curry, reasonable; I don't like chickpeas that much, but that's still fine. And now we're getting to the last example: what if you provide more context? I'm hungry, it's late, what should I do, my girlfriend is there, both of us have dietary restrictions, and I want to make it a moment that matters. So instead of asking what I should eat, I ask it to design a three-course menu that I can make at home. And I also ask Gordon Ramsay, or another Michelin-star chef, to design that menu for me. And now the constraints: I have lactose intolerance and my girlfriend has her own restrictions. All right, so let's add that. And lastly: think about this step by step (this is really important, this one you should save) and don't answer if you're not sure. This reduces the so-called hallucinations by a lot; it influences the thought process. And here's the answer that I got, and I was amazed: "As a Michelin-star chef, I would be delighted to create a customized menu for you and your girlfriend." And it provided us with a three-course menu, with ingredients and a step-by-step guide on how to make it.
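The four pieces of the framework (task, role, constraints, chain of thought) can be captured as a small helper that assembles a prompt. A minimal sketch in Python (the function name and exact wording are my own, not from the slides):

```python
def build_prompt(task, role=None, constraints=None, chain_of_thought=True):
    """Assemble a prompt from the framework's four context pieces:
    task, role, constraints, and a chain-of-thought instruction."""
    parts = []
    if role:
        parts.append(f"Act as {role}.")   # role: who should answer
    parts.append(task)                    # task: whatever you need done
    for c in constraints or []:
        parts.append(f"Constraint: {c}")  # constraints: do / don't
    if chain_of_thought:
        # the two lines worth saving: step-by-step + hallucination guard
        parts.append("Think about this step by step. "
                     "Don't answer if you're not sure.")
    return "\n".join(parts)

print(build_prompt(
    task="Design a three-course menu that I can make at home tonight.",
    role="a Michelin-star chef",
    constraints=["I am lactose intolerant.",
                 "My girlfriend has her own dietary restrictions."],
))
```

The payoff is consistency: every prompt you send carries the same four context pieces, instead of you remembering to type them each time.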
So I guess this is a fairly convincing example of why engineering your prompts is better than just prompting randomly. And then the premium, all-in-one example. This is inception: it's not really a pre-made prompt, and it's not really an engineered example, but you can ask ChatGPT to help you engineer your future prompts. I've been using that a lot. I asked ChatGPT how to amaze people joining a presentation on prompt engineering, and it helped in creating this presentation as well. All right, so let's get to the conclusion. I'm suggesting you start your prompt book and use the methods that I covered here to automate some of the tasks that you're working on, and, in effect, stay relevant. Thank you, it was a pleasure to talk to you. Thank you. Now, scan this QR code to my LinkedIn post; in the comments you have the presentation link. For now the presentation is still locked, because I didn't want anyone to see it before the talk, but I'll open it in three minutes. Okay, wonderful. So we have someone who's played extensively with prompts, and this is your opportunity to ask lots of questions. We still have time for questions. Awesome. Yeah, so as noted in the last presentation, there have been changes even within the model releases of GPT-3 and 4. So when we're going to scale, what does that look like? Since they don't explicitly reveal version numbers or versioning of GPT-4, how are you versioning these, or controlling and locking these? So when it comes to using this, I'm trying to be very practical, and I'm trying to automate especially the tasks that are time consuming, where I need starter code or a starter template that I can later edit and adjust to my current needs. And for that use case, it's not really problematic whether the model results are consistent or not.
If I get a result that is from an earlier stage of GPT, back when prompt injection (not prompt engineering, prompt injection) still worked, that might be the only problem. But for email replies, for learning examples, for designing trips, for boilerplate code, for debugging code, for user story creation, for being creative and connecting two concepts that are unrelated at first (all of those examples are in the presentation, in the appendix), for all of those, the responses wouldn't change much between the models. They would be good, they were good, and I expect them to only get better in the future. It wouldn't block any of those relevant use cases. Does that answer your question? Yes, sure. About creating prompt books: there are a lot of online resources already, for example playground.ai has a list of prompts, right? So how do you recommend creating prompt books? Awesome, really good question, I'm happy that you asked. So there are tons of resources, and you would be overwhelmed by them. Why am I suggesting you create a prompt book? Because there are lots of cookbooks and you don't use them, but my wife always had a small book where she saved all the recipes that she used often. So I'm suggesting you create your own prompt book with the prompts that you are using, rather than using generic collections, because with generic collections you still need to search. I want you to save the time of searching for the prompts that you need and use often. That's the reason for creating your own prompt book. And you don't have to write your own prompts, that's just an option; lots of the prompts that I used in the presentation are from someone else, because there are lots of people who are working on this and creating great things. This is the open source part of the talk. What's that? Yeah, it's a big room. So, thanks for sharing. I have two questions.
So first is that, as we know, the whole large language model space is evolving very fast. We have GPT-4 coming up, I mean, it's already there, 5 is coming up, and we have other companies publishing their own models. I believe you have definitely played with a lot of these models, so do you find the prompts consistent? If you feed the same prompts to different models, do they still perform the same? I like your analogy of the cookbook, but, let's say, the rice that we're using hasn't changed for the past thousand years, while models are evolving very fast. So, let's say you have a prompt book built using GPT-3.5 and then 4 comes out, or you're using other models: do you see any inconsistency in the results, in the quality? And my second question is: if you're using prompts for tasks where facts are very important, how do you do fact-checking? Okay, awesome, two great questions. So first, consistency, and then fact-checking. Consistency between the models. There's one thing: OpenAI has a big budget, lots of people working on it, and they had a head start, so lots of prompt injections are stopped; other models don't do that. For some newer models you don't even have to do prompt injection, you can just ask and you get an answer. And again, it's a similar answer to the previous one: most of my use cases are simple tasks that are time consuming, and even if the answer is slightly inconsistent between two models, it will still provide you with boilerplate that you can reuse and adjust. And then fact-checking, the second question. Fact-checking and adjusting the results, I guess, are related topics.
So, when we were creating the website for my wife, this Montessori website, lots of responses from the chat were inaccurate, and we are really into Montessori. So, as I said when I mentioned that AI won't replace us yet: it's not going to be a fight between people and AI, but between people who are using AI and people who are not, and you still need an expert to validate the results. The prompt that I shared with you, don't answer if you don't know, works to some extent, but sometimes the chat will make things up anyway, and less powerful chats will make up more things than the better ones. So just be practical, use it for your needs, and don't join the people who are shouting, oh no, it created something that isn't correct. That's fine, just take that into account, use it to the fullest, and don't lose your time complaining that something isn't ideal. And if you don't have expertise, find someone who has it and can validate the results. What you can do is ask ChatGPT, hey, how should I learn about AI? It gives you the boilerplate, and then you ask a friend who knows AI: hey, I was thinking about learning AI, I have these ideas, what do you think, what should I add, what should I take out? That's much better than just asking, what should I learn about AI? You're more specific with your questions to the people who are experts, so you're not wasting their time; I mean, it's not really about wasting time, it's that you're being more proactive, asking more directed questions. So this is about bringing experts better, more specific questions. Really valid points; maybe also the question of how we measure productivity going forward, especially where people are AI-assisted. So, very interesting question around the prompts for different models.
Still open to the floor, we've got two more minutes before we bring up our next speaker. Of course, Jan will be around, so you can take the conversation offline. Can I answer more questions? Yes, please, so that people can hear it. Okay, this is coming more from the text-to-image side of things, but it applies to ChatGPT as well. What's happening with text-to-image is that a lot of copyright issues are coming up: when text-to-image tools like Midjourney generate an image, is it copyrightable? What are the problems with training? Are we seeing the same issues popping up with ChatGPT-type, text-to-text models? I mean, I believe, yes. I believe there were even some legal actions related to that, to slow down OpenAI, but I would need to double check what the current state of that is. And I see some of the open source large language models use only openly licensed data to train their models. For some closed source models, you don't know what they were trained on and they are not transparent; for open source, you can see that. Also, open source models would allow you to opt out: if you're sharing your code on GitHub or wherever under an open license, you can opt out of model training. So this is for open source, and then for closed source, yeah, I would need to double check what the current situation is. I think there might be some legal action similar to what's happening with text-to-image, but we'll see.