All right, so what I wanted to talk about today is AI large language models, specifically ChatGPT, and using the API for it. There are potentially a lot of very powerful things around the edges of cryptoecon that can help us accelerate our work, and really for anyone in the PL network there are, I think, a lot of opportunities. Let me get the Zoom controls out of the way.

Okay, so ChatGPT: a lot of people have heard about it. You go to chat.openai.com and you talk with this chatbot. Okay, it says high demand; let's see if we can get through. You can have a conversation with it and ask it all kinds of stuff. Because of the demand issues I created an account, so we can go in. Say you're making a blog post and you want to brainstorm some ideas. "Give me the three most compelling points to sell decentralized storage on Filecoin." If you're generating content, or if you've done an analysis, you can say: help me summarize this; what are the top three points I should make from my analysis and put at the top in the summary? It's not going to be 100% accurate, but it gives you a head start and probably saves you a lot of time while you're drafting things. So if we're creating a blog article, we can just copy and paste this and use it as our boilerplate starting point. That's one way to use this.

Another way, and let me jump out of the chat interface and into the actual portal, is the API. I've been messing around with this, and I've spent twelve cents on API calls doing all of this. It's pretty cheap at low volumes.
If you go to the Playground, you have a more direct line into it: you don't get the buffering of the ChatGPT user interface, and you can cut through that noise if you have an account. In the Playground you're using the text-davinci model that powers GPT-3, and you might type, "Write a Python function that creates an n-by-m matrix and populates it with i," hit submit, and it just writes the code for you. All right, live demos never work, do they? Where is it... oh, it's overloaded on this side too. Man, that's a bar. So yes, if you're stuck, you can go to Stack Overflow and find code chunks; that's easy to do. But here you can just describe what you want and get code written out for you. It can do Solidity, so you can write smart contracts: just get yourself accelerated and move forward. That's really neat.

I'll also say, for logo work, if you're doing marketing design: I've uploaded our carrot logo here and said, generate me some other ones that are similar to it. It's a different model than the language model, but you can go into OpenAI and use these tools to say, hey, I want this carrot to be blasting off like a rocket, and rotated 180 degrees. So this is interesting stuff.

Now, the thing that really stuck out to me while I was messing around with this: a former colleague of mine posted an article about it. One of the shortfalls of OpenAI and ChatGPT is that it doesn't cite its sources, so can you believe and depend on this stuff? Well, a way to hot-wire that is to implement Q&A against your own documentation, and this is what's called prompt engineering.
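As a concrete illustration, the kind of function that prompt might produce looks something like this. This is a hand-written sketch of a plausible completion, not actual model output:

```python
def make_matrix(n, m, fill=0):
    """Create an n-by-m matrix (a list of lists) populated with a fill value."""
    return [[fill for _ in range(m)] for _ in range(n)]
```

For example, `make_matrix(2, 3, fill=1)` returns `[[1, 1, 1], [1, 1, 1]]`. You still have to read and test what the model hands you, but it gets the scaffolding out of the way.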
And so what you do is first run a text search against your documentation. I was thinking about the Filecoin spec, or specific documentation about Filecoin: could an FVM hackathon developer go in, knowing they're asking over a specific set of documents (all the FVM documentation, all the Filecoin specs), to better learn how to code on Filecoin? You can put that into an app. You search it with traditional search technology, get all the snippets of code and documentation that contain a certain keyword or idea, paste them together, and then construct a prompt for OpenAI. You hit it through the API and say: given the above content (you hand it that whole blob of text), so you're force-feeding it: "Hey, answer a question based only on this context." And then it gives you the answer based on that.

OpenAI has a Python notebook that's an example of this, and I coded it up against the Filecoin spec in a pretty crude way. You use a web-scraping library called Beautiful Soup: you point it at the Filecoin spec, pull in the page, and extract the text. You don't want to include the table of contents, so we just look after the 12,000th character. There are almost a million characters in the Filecoin spec online. We search through that and divide it into 200-character chunks. There are much better ways to do this; you can divide by paragraph, or by p tags or h1/h2 tags, and do it a lot more smartly, but this was just to get a prototype done. Then you have a variable that's the query, which is what you'd be getting from somebody. Maybe somebody's asking: is initial pledge higher or lower than pre-commit deposit?
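The crude pipeline described above (pull the spec, skip the table of contents, cut into fixed-size chunks, keyword-search them) might be sketched roughly like this. The URL and the 12,000-character and 200-character cutoffs are taken from the talk; `fetch_spec_text` needs the third-party `requests` and `beautifulsoup4` packages, so its imports are kept local and the two pure helpers run without them:

```python
def fetch_spec_text(url="https://spec.filecoin.io/"):
    # Pull the page and strip the markup down to plain text.
    # (Third-party imports kept local so the helpers below work without them.)
    import requests
    from bs4 import BeautifulSoup
    html = requests.get(url).text
    return BeautifulSoup(html, "html.parser").get_text()

def chunk_text(text, size=200, skip=12_000):
    # Skip the table of contents at the top, then cut the rest into
    # fixed-size chunks. Crude: splitting on p/h1/h2 tags would be smarter.
    body = text[skip:]
    return [body[i:i + size] for i in range(0, len(body), size)]

def search_chunks(chunks, keyword):
    # Naive case-insensitive keyword match, standing in for real search tech.
    return [c for c in chunks if keyword.lower() in c.lower()]
```

Matching chunks then get pasted together into the context blob for the prompt.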
So you use search to pull out all the text in the spec that has anything to do with initial pledge, and you put that into a blob of text. Then you feed that into OpenAI, and this is the prompt: "Answer the question as truthfully as possible using the provided text. If the answer is not contained within the text below, say 'I don't know.' Finally, provide your answer translated into Spanish and Mandarin Chinese." That last part is just a cherry on top: you could have developers all over the world asking questions over the Filecoin spec in their own language, and it answers them in their own language. Then here's the context (again, all of this is being fed into OpenAI in the question-answering prompt), and then the question: is initial pledge higher or lower than pre-commit deposit? Here's the API call; you have to have your API key in an environment variable, though you can put it into the notebook as well. It comes back and says initial pledge is usually higher than pre-commit deposit, then the Spanish, then the Mandarin Chinese. So it answers the question in all three languages.

I just think it would be interesting to try this out and have a webpage hosted for the hackathons we're having all over the world, where people can ask specific questions against the documentation. These are new tools available to us that I think we should be thinking about. So that's all I got.

That's pretty awesome, thanks. As a follow-up to this: I use GitHub Copilot, and I think it's based off of ChatGPT, and it's absolutely incredible. As someone who has worked on large language models, I'm still skeptical, but the capabilities of Copilot are amazing and it enhances productivity, so I would recommend it.

Nice.
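Putting the prompt and the API call together, a sketch might look like the following. It assumes the pre-1.0 `openai` Python library (whose completions entry point is `openai.Completion.create`), the `text-davinci-003` model, and the `OPENAI_API_KEY` environment variable mentioned in the talk; the prompt wording is reconstructed from the description above:

```python
import os

def build_prompt(context, question):
    # Force-feed the retrieved spec snippets and constrain the model
    # to answer only from them, with the translations as a cherry on top.
    return (
        "Answer the question as truthfully as possible using the provided "
        "text. If the answer is not contained within the text below, say "
        "\"I don't know.\" Finally, provide your answer translated into "
        "Spanish and Mandarin Chinese.\n\n"
        f"Context:\n{context}\n\n"
        f"Q: {question}\nA:"
    )

def ask(context, question):
    # Requires OPENAI_API_KEY in the environment (or set it in the notebook).
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=build_prompt(context, question),
        max_tokens=300,
        temperature=0,
    )
    return resp["choices"][0]["text"].strip()
```

Temperature 0 is a reasonable choice here, since for documentation Q&A you want the most deterministic, grounded answer rather than creative variation.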
Yeah, I mean, you'd have to have your wits about you; it can accelerate you in a completely wrong direction. But then you test it and you're like, okay, that saved 20 minutes of not having to look something up.

How about math? It can do a little bit of coding, but can it think about symbolic math questions?

Not well. That, in fact, is I think one of the biggest weak points of these language models: it's all statistical. These words occur in relation to each other, and it maps through that; it won't necessarily abstract away from that and reason symbolically. And to input documents, you need to do this Beautiful Soup work; you cannot ask it to directly read a PDF. So that's all front-end and back-end data wrangling and engineering. There are tons of ways to do that. I'm not very good at it, but data engineers and front-end engineers have tons of approaches: you can take a bunch of PDFs, put them into Elasticsearch, and have that hosted through an API, and there's a library called deepset Haystack that does that, among a bunch of others. That's the blocking-and-tackling sort of work outside of the machine-learning stuff.