We'll start by reviewing some literature, some papers on EEG-to-music conversion, using these plugins in GPT-4. Can I get this Scholar AI? Yeah, that's how it looks today. It will probably look very different tomorrow; no one knows. When you use the plugins, you can select which ones you want, and I have these three selected. I reviewed one of them before this; it was working okay and integrating with my Visual Studio Code, but I do have GitHub Copilot as well, so I'm not sure which one is better. You can probably guess. And I haven't really used this YouTube summaries one; it actually wasn't working, I think, so I'll have to try it again. But this Scholar AI says: "Unleash scientific research. Search 200 million plus peer-reviewed papers and explore images and text from scientific PDFs." Okay, that's fine.

So when I asked for an EEG-to-music generator, it generated this general overview. That's fine; we covered it in previous streams. It keeps suggesting anything with EEG or ECG, and it keeps suggesting the MNE library, which we're actually trying to avoid for various reasons, mainly because we want to do things ourselves, and MNE is quite dated as well.

So here's the same prompt with "papers" at the end. Here it's using Scholar AI, which has its own API. With these keywords it queries for papers on EEG-to-music generation, sorted by relevance, and for some reason it's given an offset of four, so seemingly there are other papers before these. It did give a list of papers. However, I wasn't able to replicate that search myself: if I do a general Google Scholar search for "EEG to music generator", I get a completely different list of papers. At least the first one is not on Scholar AI's list, and the other four are actually not very relevant; I guess that's because of the "brain-computer interfaces" keyword in there. So then I tried doing an advanced search, which you can do from here. Where is it again? Why does it keep going away? Come on.
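As an aside, the kind of query Scholar AI appears to issue (keywords, relevance sort, an offset for pagination) looks much like any paginated paper-search API. A minimal sketch, using the public Semantic Scholar search endpoint as a stand-in, since Scholar AI's own backend isn't documented here; the `fields` choice and the helper name are my own assumptions:

```python
from urllib.parse import urlencode

# Public Semantic Scholar paper-search endpoint, used here only as a
# stand-in for whatever backend Scholar AI actually queries.
BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(keywords, offset=0, limit=10):
    """Build a paginated keyword-search URL; `offset` skips earlier hits,
    which is presumably why Scholar AI's offset=4 hides the first results."""
    params = {
        "query": " ".join(keywords),
        "offset": offset,
        "limit": limit,
        "fields": "title,year,abstract",
    }
    return f"{BASE}?{urlencode(params)}"

url = build_search_url(["EEG", "music", "generation"], offset=4)
print(url)
```

Fetching that URL (with `urllib.request` or `requests`) returns JSON with a `data` list of papers, which would make the plugin's hidden ranking at least reproducible.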
Well, that's how the advanced search looks in Google Scholar; it's really hard to get to because the interface keeps changing. Yes, "music generation" should probably be in quotation marks, I don't know. But anyway, this Scholar AI API is obviously different, because the actual scholar.google.com doesn't give me the same papers. So that's something to explore further.

The first paper is the only one that is actually somewhat relevant, but it's talking about a more clinical case, with very generic preprocessing of the EEG signal. Looking at this figure, you can kind of guess what year this paper is from. They do wavelet decomposition, and I think they actually go into the details of how it's done: wavelet reconstruction, filtering into sub-bands, extraction of discriminative features. Do they actually say what the features are? I don't think so. I had a look at this paper before: a comparison between the wavelet and Fourier transforms. Okay, I can tell it's quite generic stuff.

So, going straight into the methods section: okay, no information there. It feels like it was written by ChatGPT or something, and it has a very low-quality figure; it's just very low-res, I can barely read it. Okay, where are the actual results? "An intelligent way": if you have "an intelligent way" in the title, that should be your red flag. "...activity using brain-computer interface." Yeah, it's very generic stuff; there are actually no details on what the system is. Is this a journal? Ah, it's a conference. Sorry, I should have looked at that first. So this is a conference paper, and it's closed access, but that's the first result when we asked for papers. And that's the first thing this Scholar AI add-on in GPT-4 gives us: we ask for papers, it's not really a paper, and the other hits are not really relevant, because they're mainly focusing on brain-computer interfaces. This one is games, so no music mentioned at all.
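The preprocessing pipeline that paper gestures at (wavelet decomposition, filtering into sub-bands, extracting discriminative features) can be sketched without MNE. Here's a minimal illustration with a hand-rolled Haar wavelet; a real pipeline would likely use a smoother wavelet such as `db4` via PyWavelets, and the band-energy "features" are just one plausible choice, not what the paper uses:

```python
import numpy as np

def haar_dwt_step(x):
    """One Haar DWT level: split a signal into an approximation
    (lowpass) half and a detail (highpass) half."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_subbands(x, levels=4):
    """Decompose a 1-D signal into `levels` detail sub-bands plus the
    final approximation (coarsest band first in the returned list)."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_step(a)
        details.append(d)
    return [a] + details[::-1]

def band_energies(bands):
    """One 'discriminative feature' per sub-band: its mean energy."""
    return np.array([np.mean(b ** 2) for b in bands])

# Toy "EEG": one second of a 10 Hz alpha-like rhythm plus noise at 256 Hz.
rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(256)

bands = wavelet_subbands(eeg, levels=4)
features = band_energies(bands)
```

Because the Haar transform is orthonormal, the total energy of the sub-bands equals the energy of the input, which is a handy sanity check on any decomposition like this.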
Not very useful. So yeah, having played with this before, it seems like at this stage it's just better to use Google Scholar directly instead of this Scholar AI; I'm not sure exactly what it does. When we do use Google Scholar, there actually are more relevant papers. Look at the first one, from 2006. Wait a second, this has a very similar image, though the authors seem to be different. The PDF is available on some Japanese website. Yeah, this one actually has more details in it; not too many, and the image is not there anymore. So it's estimating emotions and then doing a music generator out of the emotions. That's interesting.
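That two-stage idea (estimate emotions from EEG, then drive a music generator from the emotions) can be caricatured in a few lines. A toy sketch, assuming a valence/arousal emotion model; none of these mappings come from the paper, and the scale choice and tempo range are arbitrary:

```python
# Toy emotion-to-music mapping: valence picks the scale, arousal the tempo.
# C major / C natural minor as semitone offsets from the root note.
MAJOR = [0, 2, 4, 5, 7, 9, 11]
MINOR = [0, 2, 3, 5, 7, 8, 10]

def emotion_to_music(valence, arousal, root=60, n_notes=8):
    """Map valence/arousal in [-1, 1] to (tempo_bpm, midi_notes).

    Positive valence -> major scale, negative -> minor; higher arousal
    -> faster tempo. Purely illustrative, not from any cited paper.
    """
    scale = MAJOR if valence >= 0 else MINOR
    tempo = int(60 + (arousal + 1) / 2 * 120)  # linear map to 60..180 BPM
    notes = [root + 12 * (i // len(scale)) + scale[i % len(scale)]
             for i in range(n_notes)]
    return tempo, notes

tempo, notes = emotion_to_music(valence=0.8, arousal=0.5)
```

In a real system the `valence`/`arousal` inputs would come from an EEG classifier (e.g. trained on band-energy features), and the note list would feed a proper synthesizer or MIDI backend rather than being an ascending scale.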