We're going to be going over this paper today. Well, actually not reading it; as the title suggests, it has a dataset associated with it, so we're going to be looking at that. It's a large dataset, with this many patients, and they did all these tasks while having electrodes implanted. So we'll be looking at this dataset and seeing what we can do in terms of a Flask application: displaying the data, processing it, etc. We popped about 30,000 tokens of material into GPT-4.

Wow, that's a chunky piece of HTML and JavaScript code plus a detailed explanation of a Flask app's structure. It looks like you're working on integrating complex neuroscience data into a web application. That's a solid approach, especially using Flask as a back end. It's cool how you're planning to analyze and visualize this specific ECoG data, and using JavaScript for real-time data interactions is a smart move. The data extraction and preparation steps you've outlined make sense. Converting MATLAB .mat files for Python analysis can be tricky, but SciPy or h5py should handle it well. Organizing the data for easy manipulation is crucial, and structuring it as CSV or in a database will definitely make things smoother down the line. Your analysis and visualization plans sound pretty robust. Using NumPy, SciPy, and pandas for the heavy lifting and data analysis is a good call, and for visualization Matplotlib and Plotly are great choices, offering a nice balance between static and interactive plots. I'm intrigued by the use of D3.js for dynamic visualizations; it's a powerful tool for creating sophisticated, interactive data visualizations in the browser. Developing the Flask app with a user-friendly interface and offering options for different analyses is key to making this tool accessible. Ajax calls to fetch data from the back end and update the web page dynamically are a staple of modern web applications. It's great that you're thinking about additional features like user accounts, data upload, annotation, and collaborative tools; just be sure to prioritize data security and privacy, especially since you're dealing with medical data. Providing clear documentation and setting up a community forum are excellent ideas for engagement and support. This will not only help users but also foster a community.

Okay, so we are developing this Flask application; please give us short responses. We will need to fix the code to actually fetch data from a locally stored dataset; I already uploaded the structure of the files. Can you do that? Generally we would like to rely primarily on the front end and use the back end only if absolutely necessary. We would like to load some of the data files, maybe starting from motor basic, or, if you can, tell us from the uploaded table which task was done by the most patients. We might also start from fixation power law, which I don't know what it is. Can you look it up in the document and explain what the different tasks are, primarily focusing on motor basic, considering that seemed to be the one done by the most patients?

It seems like there aren't any uploaded files in the directory right now. Could you please upload the dataset again? Once it's uploaded I can take a look at it and guide you through the process of integrating it with your...

We are not uploading the dataset; I'm saying it's stored locally. I said I did upload the file structure for the dataset, here it is again, and we would like to update this Flask Python script to load the data, starting from motor basic.
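Since the .mat conversion step keeps coming up, here is a minimal sketch of what that loading could look like; the file path is a placeholder, not a real path from the dataset. scipy.io.loadmat handles files saved with MATLAB -v7 or earlier, while h5py is needed for -v7.3 files, which are HDF5 containers underneath.

    import scipy.io
    import h5py

    def load_mat(path):
        """Load a MATLAB .mat file, falling back to h5py for -v7.3 (HDF5) files."""
        try:
            # Handles .mat files saved with MATLAB -v7 or earlier
            return scipy.io.loadmat(path)
        except NotImplementedError:
            # -v7.3 files are HDF5 containers; read each top-level dataset into a dict
            with h5py.File(path, "r") as f:
                return {key: f[key][()] for key in f if isinstance(f[key], h5py.Dataset)}

    # Hypothetical usage (placeholder path):
    # contents = load_mat("data/subject/subject_task.mat")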
Also, could you explain the motor basic dataset? Yeah, I'll have to sort out how we are accessing the dataset; it's not generating any code.

Let's break this down step by step. Firstly, to integrate the motor basic dataset into your Flask application you'll need to make sure that the file paths in your Python script correctly point to the data and electrodes files for each subject. From the structure you provided, it seems like the data files are stored in a folder named after each subject and you're dealing with MATLAB .mat files. In your app.py you're loading data for a sample subject using the scipy.io.loadmat function, which is correct for MATLAB files; just ensure that your directory paths match the actual structure of your data. Regarding the motor basic dataset specifically, it sounds like it includes ECoG (electrocorticography) data from patients performing simple motor tasks. The data files include variables for stimuli (what was shown on the screen) and the recorded ECoG data. The electrodes file contains electrode locations, crucial for understanding the brain regions involved in the tasks. Now, based on your requirements, here are a few suggestions and steps to integrate the motor basic data into your Flask app.
1. File paths. Update the file paths in your script to accurately point to the motor basic data. For example, if the motor basic data is in data/bp_mot_t_h.mat and the electrodes in locs/bp_electrodes.mat, your code should reflect these paths.
2. Data loading. You'll need to loop through or select specific patient data files to load. Your current code only loads data for one example subject; you might need to modify this to handle multiple subjects or select a subject dynamically based on user input.
3. Data processing. Since you're dealing with ECoG data, any processing or analysis you want to perform (like filtering, epoching, averaging, etc.) will need to be coded in Python. You've already included a step for common average referencing (CAR).
4. Front-end integration. For displaying or utilizing this data in a front-end application, consider what information you want to show. Is it raw data plots, analysis results, or something else? This will determine how you structure your Flask endpoints and data responses.
5. Scalability and performance. If you're dealing with large datasets, consider perform...

Okay, okay, okay, okay. So eventually, there are some MATLAB examples of how to load the data; we'll need to convert those into Python. We'll see if GPT can do it. To test it all, we don't have MATLAB to test side by side; that should be okay because we essentially develop the code from scratch. Now, there's always the main question of who is going to write the code: GPT-4 in the OpenAI interface or GitHub Copilot in Visual Studio Code. I suggest we do both. Whatever we currently have is not running the subjects. Right, so the subjects have those IDs, like BP, so we did get that correctly. I don't know if the interface... yeah, I would say the interface, if we are potentially doing something like selecting which subject and which task we want to look at. Alternatively, they're probably going to be very different anyway, so I'm not sure if the tasks would be comparable at all. I wonder if you can just follow instructions and tell me which tasks out of the following table were performed by most of the patients.
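Since common average referencing came up in the processing step, here is a minimal sketch of what that operation typically looks like; the array orientation (samples by channels) is an assumption, not something stated in the dataset description.

    import numpy as np

    def common_average_reference(ecog):
        """Subtract the mean across channels from every channel.

        Assumes ecog is a 2-D float array shaped (samples, channels).
        """
        return ecog - ecog.mean(axis=1, keepdims=True)

    # Hypothetical usage: referenced = common_average_reference(data["data"].astype(float))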
Also, from the following document we need to be clear what each task is and try, for example, to figure out whether we should be comparing the EEG recorded between tasks, or whether that would not be a valid comparison, so we would only be looking within each task individually. The text below should have all the details. Well, not all of them; I think it just has an overview of the tasks in general, because then there is data specific to each task. Let's go back to the code quickly. So we have GitHub Copilot generating some stuff for us as well. It's already suggesting using a model; I think it's too early for that. Hopefully we'll get this soon. Yeah, we added os, that will be required. I don't know why we are loading a model already. Yeah, GPT-4 is trying to execute stuff on its own server. That's obviously, well, first, not going to work; second, I need the code to be running locally anyway, and as I said, the dataset is only available locally, so I don't think there's a point uploading it. Yes, maybe if we can stop trying to execute things on the OpenAI server, that would be great. Now we have Git running locally as well, so we're monitoring changes; in theory, when we change the file it should show all these modified and non-modified sections. Yeah, we don't have a model at the moment. It might be too early to load any models anyway; I'm not sure what it is trying to predict already. Let's load the data, display the data first, and go from there. It's obviously struggling to run stuff on the server.

All tasks in the dataset were performed by 20 patients each, indicating an equal distribution of participation across the tasks. This includes the motor basic task you're really interested in. Next steps for your Flask application:
1. Dynamic file loading. Modify the script to dynamically load .mat files based on user input for the patient ID and task. For motor basic, you'd load the corresponding .mat files, where the file name includes the patient code.
2. Python code development. Since you don't have MATLAB, you'll need to develop Python code to process these files. Python's scipy.io module can handle .mat files effectively.
3. Comparison and analysis strategy. Given the diversity of tasks, decide how you want to compare and analyze them. For instance, are you looking at similar biomarkers across different tasks, or are you more interested in task-specific outcomes?
4. UI considerations. For the user interface, think about how users will select a patient and task. This could be a simple drop-down menu or a more interactive selection tool.
5. Analysis approach. Depending on your project's goals, consider either focusing on within-task analysis or developing a method to compare across tasks, keeping in mind their distinct nature.
Let me know if you need more detailed guidance on any of these steps.

Yeah, can you stop giving me generic stuff? We're actually trying to... So we are live streaming at the moment. You are co-hosting a YouTube channel. We are developing this code that will, by the end of this session, load the data, display it, and compare within or between tasks; we want to see which comparison is more viable. So can you actually look at the specifics of this dataset? You should have access to all the necessary information, and we want to start developing the code, but make sure you just generate the code as text. I will run it locally myself. Yes, the dataset is stored locally. Yep, the loader should be able to handle the .mat files; let's see whether that's correct or not.
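The patients-by-tasks table itself isn't reproduced here, so here is a hedged sketch of how the "which task was done by most patients" question could be answered from a hypothetical CSV export of that table; the file name and column names are assumptions.

    import pandas as pd

    # Hypothetical table with one row per (patient, task) pair,
    # e.g. typed out from the paper's patient/task summary table.
    df = pd.read_csv("patient_tasks.csv")  # assumed columns: "patient", "task"

    # Number of distinct patients per task, most popular first
    counts = df.groupby("task")["patient"].nunique().sort_values(ascending=False)
    print(counts.head())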
So we have a JSON message that the data loaded successfully. Well, yeah, we don't want to do the prediction yet; we'll do that later, and it shouldn't even be working at the moment, so you can probably just comment it out. Try and print the data. Let's actually give it a name. Now, that's okay. The transcriber is working really well, isn't it?

Got it. Let's focus on developing the Python code for your Flask application that you can run locally. Since you're looking to load, display, and possibly compare data from the ECoG dataset, here's a plan of action.
1. Loading the data. Use Python's scipy.io to load .mat files. Create a function in your Flask app to dynamically load data based on user input (patient ID and task).
2. Displaying the data. Once loaded, format the data (e.g. convert to a pandas DataFrame) for easy manipulation and visualization. Implement a simple route in Flask to display basic stats or plots from the data.
3. Comparison logic. Determine the comparison metrics (e.g. spectral features, power-law distributions). Write functions to compute these metrics for within-task or between-task comparisons.
4. Front-end interaction. Update your Flask app to include forms or drop-downs for user input (patient ID, task). Ensure the front end communicates with the back end to load and display the relevant data.
5. Running the code locally. Develop the code as text snippets here, which you can then test in your local environment. No code execution will be done on the server here.
Let's start with the code snippet to load the .mat files. You can input the patient ID and task name, and the function will load the corresponding data. Here's an example Python function for this purpose.

    from flask import Flask, request
    import scipy.io
    import os

    app = Flask(__name__)

    @app.route('/load_data', methods=['GET'])
    def load_data():
        # Example parameters, replaced with user inputs
        patient_id = request.args.get('patient_id', 'BP')   # default example
        task = request.args.get('task', 'motor_basic')      # default example
        data_folder = 'path/to/data/folder'                  # update with the correct path
        file_name = f'{patient_id}_{task}.mat'               # construct the file name
        file_path = os.path.join(data_folder, file_name)
        if os.path.exists(file_path):
            data = scipy.io.loadmat(file_path)
            # Process data as needed, e.g. convert to a pandas DataFrame
            return 'Data loaded successfully'
        else:
            return 'File not found'

    if __name__ == '__main__':
        app.run(debug=True)

This code snippet creates an endpoint in your Flask application to load data based on the patient ID and task. You'll need to replace path/to/data/folder with the actual path to your data. Let me know if you need further elaboration on any of the steps or additional code snippets for other parts of the application.

Okay, so this is what we currently have: just a text description of the whole thing. Obviously, we want to turn it into an interactive dashboard, or whatever other name you want to use, that's fine. Let's clear that and probably run it again. Currently, we don't have anything to call this route. We need to load the data. That's the base path, that's correct. Yeah, that should be correct. Now we need to update the JavaScript. We're not uploading any files. We are not uploading any files; the data is stored locally. So if the back end is loading it, we would like to display it using JavaScript. So how do we update the JavaScript code and the HTML? This will probably have to be fetching data. And yes, we need to update the HTML as well. That was the HTML.
We have that element; that HTML will have a data ID. So we have a head already, a title. Why do we have two titles? The second one should be fine. Get rid of that header in the body. So yes, it will have the... So it will be like a blog, but an interactive blog, so the interaction will come first. So we place this... Do I have a data ID? Why do I have it twice? I don't know. But in JavaScript: get element, data, JSON. It feels like listening to the console; I have an error of some sort. "All data subject" is giving me an error, so that will be a JavaScript issue. Yes, so we need to figure out how to update the HTML dynamically based on the data fetched from the Flask back end using JavaScript. Yeah, I don't like when it's only referencing whatever is where the current mouse is. createElement('p')? That doesn't sound legit. Currently we don't have anything working, so might as well. Yeah, that's not what the files... okay, that's not what the files are called. And no, that's not right. Okay, we need the actual structure. Yes, let's go. Right, how about we just list the patients and the tasks? How about we fetch the list of patients and the task names on the HTML front panel and give an option to select the patient and task, and we'll go from there. We need to populate that. Now those need to be populated: a populated drop-down. That's right. JSON, JavaScript. That is the window onload. That didn't work. Yeah, load data. So how do we update the back end to actually retrieve all the IDs of the patients and the names of the tasks so we can populate the front end? I don't think that will work. Okay, let's go into one of the... what's the basic... yeah, the motor basic folder. And we ask how to show all the files in all subfolders in the terminal. Yep. That's quite a bit; it's a fairly large dataset, which will be really testing. Right, so a lot of it... it has a lot of MATLAB scripts in it. Yeah, downloaded the beast. Let's pop it into the tokenizer: 39,000. Well, we should be okay up to 128,000. Someone mentioned the thesis as well. So far, yeah, we have the index.html, the Python code, and the script.js. Can we keep responses specific to the code at hand? And I'm writing into the wrong GPT. Should we start a new one? Yeah, it's kind of refusing to look at the files anyway. So even if you have a very long prompt, it's not necessarily looking at all the information, just trying to find what's relevant to the question at hand. Might continue this once we have a go. Can we change the code so we actually load the data based on how it's locally stored? So on the front panel we have a drop-down to select the patient ID. And once you have the patient ID, the code should update the tasks that were performed by that specific patient. And then we have the drop-down for the task, which should again be updated for each patient. And then we display the data somehow. So we want to do all that. Yep, got it. Right, so this is interesting. Let's see how many tokens we have in this prompt. It's only 43,000; the model should be able to take 128,000. So what's up with that? What's up with that? And it's actually happening, and only recently: "The message is too long. Please reload the conversation", blah, blah, blah. Okay, let's start a new GPT-4 chat and see if it works then. Yeah, only 43,000 tokens, that's all, isn't it? Let's try 3.5 for a sec. Because how is it too long? That's your tokenizer; it says 43,000 tokens.
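To keep the logic on the back end as asked above, a back-end route can walk the local folder structure and hand the front end a JSON map of patient codes to tasks. This is only a sketch under assumptions: the dataset root path is a placeholder, and the layout (one folder per task with a "data" subfolder whose entries start with the patient code) is inferred from the discussion, not confirmed.

    import os
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Assumed layout: one folder per task, each with a "data" subfolder holding per-patient entries
    DATASET_ROOT = "/path/to/dataset"  # placeholder, not the real local path

    @app.route("/patients")
    def list_patients():
        """Return each patient code with the list of tasks found for it on disk."""
        patients = {}
        for task in sorted(os.listdir(DATASET_ROOT)):
            task_data_dir = os.path.join(DATASET_ROOT, task, "data")
            if not os.path.isdir(task_data_dir):
                continue
            for entry in os.listdir(task_data_dir):
                code = entry.split("_")[0]  # assumes entries start with the patient code
                patients.setdefault(code, []).append(task)
        return jsonify(patients)

The front-end JavaScript can then fetch /patients once on load and fill both drop-downs from the returned object.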
And your model should be accepting 128,000 tokens. So I don't even know... yeah, it wouldn't even tell you how much longer it is than necessary. It's a pickle, isn't it? The CEO got kicked out of the company again. Can anyone else try it? That would be great. Is it only for the... it's only using APIs; I mean, they're all 128,000 anyway. Except... so was I downgraded? Okay, I actually didn't see that. So that's the preview version; am I using the preview? Checking if something is stuck. No, nothing's wrong. What model is being used, and what's the maximum number of tokens in each prompt that will be accepted? Yep. Yep. Where is the 4096 coming from? Are you sure about the number of tokens for the current model? Yep. It's actually lower. This is not updated information, is it? I was just thinking the speech addition is nice; yeah, those voices are really good.

Apologies for the confusion earlier. You're right to double check. For this specific model, the maximum token limit for a single interaction, including both your input and my responses, is actually 2048 tokens, not 4096. Thanks for catching that. Let me know if there's anything else on your mind.

Clear. So this is actually wrong information. Can you browse for more updated info on the number of tokens this model accepts? Yep. Yeah, because GPT-4 can browse, so it should be able to keep checking different sources. Yeah, this is more like it, 32,000. It's fine. It's just giving a link that is not OpenAI, which is odd. All right.

Here's the updated scoop on the token limits for GPT-4. It turns out there's a bit of variation depending on how you're accessing the model. If you're using ChatGPT's GPT-4, the token limit is set to 4,000 for both input and output combined. This is a key thing to keep in mind, especially when dealing with long conversations or inputs, as it shapes how much content can be processed or generated in one go. But that's not the full story. The GPT-4 API in general offers more flexibility with token limits: the standard GPT-4 model in the API can handle up to 8,000 tokens, and there's an extended version that goes up to a whopping 32,000 tokens. This higher limit is a significant jump from previous models, allowing for more extensive input and output content creation. It's worth noting, however, that the way tokens are used and counted can be a bit complex. A token isn't just a single word; it can represent parts of words or even raw text, so the actual word count that fits within these token limits can vary depending on the language and the specific content. In summary, for ChatGPT's GPT-4 you're looking at a 4,000-token limit, and for the broader API usage it can range from 8,000 to 32,000 tokens, offering much more room for large-scale text processing tasks (11 sources).

Check. I haven't used the API before, supposedly. That means that the API... Can you browse for official information from openai.com and check the current token limit for GPT-4 accessed through the ChatGPT interface? Just checking something.

On the OpenAI website, it's mentioned that GPT-4 is capable of handling over 25,000 words of text. This is indicative of a significantly large token limit, allowing for extensive tasks like long-form content creation and detailed document analysis. However, the exact token limit for GPT-4 accessed through the ChatGPT interface on OpenAI's website isn't explicitly stated.
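Rather than guessing prompt size, the token count can be checked locally before pasting. A small sketch using OpenAI's tiktoken tokenizer; the file name is hypothetical, standing in for whatever text is about to be pasted.

    import tiktoken

    # Count tokens locally before sending a prompt, instead of guessing.
    enc = tiktoken.encoding_for_model("gpt-4")
    with open("dataset_structure.txt") as f:   # hypothetical dump of the prompt text
        text = f.read()
    print(len(enc.encode(text)), "tokens")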
Generally, the token limits for GPT-4 in various interfaces, like APIs or specific applications, can vary, with some configurations allowing up to 8,000 or even 32,000 tokens for more extensive uses (17 sources).

Yeah, let's split it up. Yeah, we still want to do... yeah, this one has 40. So this is the structure of the whole dataset; let's see if it can handle that. So can it do just... yeah, it cannot. 25,000. Yes, we can do about half of that. And where was it? It's still, as far as I'm concerned, 128,000 on the... that's a preview, some sort of preview. Am I using that? I guess the preview is only available via the API, I don't know. Why do we have, say, a macOS folder? Just... Yeah, we have this table, which should be fairly straightforward, is it? Question: do we have it in a file? Yeah, we have this PDF, but do we have it as a patient-task table? Yeah, so supposedly there are these low-resolution MRI images as well; we probably won't touch them for now. Yeah, the MATLAB scripts are from 2012, 2015, some of them. Yeah, there's this PhD thesis by Miller, supposedly where the electrodes are located. Yeah, that's the general overview. Here it will be. Yeah, so essentially what we're trying to do is an interactive Flask application version of this, where we can load the data and see the task timing, so when they were doing what. So essentially have one of those... yeah, we're not going into that. Right, so this is supposedly where the patient is looking at faces or houses, and that's the difference between the BOLD MRI, functional MRI, so the functional MRI data is there as well. Don't think so. So some of the tasks were supposedly performed simultaneously; that's something we need to figure out. How many tokens is this whole thing, 20,000? I'm pretty sure we were able to handle 20,000 before. I suspect it's downgraded me to a different model. It can deal with 20,000 tokens. Let's read that quickly.

All right, let's break this down in a simpler way. This paper by Kai J. Miller from Stanford University is all about this cool library of brain data they've put together. It's like a treasure trove of information from people with electrodes implanted in their brains, for medical reasons of course. So the big deal here is that they've collected this electrocorticographic (ECoG) data from 34 patients during 16 different behavioral experiments. This is pretty rare stuff. They've got everything set at the same sampling rate and filter settings, which is great for consistency. Plus they've pinned down exactly where each electrode was in the brain, so we know what we're looking at. They're not just hoarding all this data: they've made it publicly available along with a bunch of analysis scripts. This is super useful for other researchers, especially those who are just starting out or don't have access to this kind of data. The paper goes into a lot of detail about the technical aspects and what they found from all this data.

Yeah, not hoarding data is a great idea for researchers. Okay, so we have these 16 tasks. Yeah, let's go over them quickly.

Sure, let's break down each of the 16 tasks in the study. These tasks are part of the experiments that involved patients with implanted electrodes in their brains, which is pretty fascinating stuff.
1. Fixation power law. Patients stared at an X on a wall for a few minutes. The focus was on examining the brain's power-law structure during this baseline fixation task.
2. Motor basic.
Participants performed simple repetitive movements like hand or tongue movements. This task aimed to understand brain activity linked to these motor actions.
3. Finger flex. Patients moved individual fingers in response to cues on a screen, and their finger positions were recorded with a data glove. This task examined the brain's response to specific finger movements.
4. Joystick track. Here, patients used a joystick to track a moving target. It was a way to explore brain activity during coordinated hand-eye tasks.
5. Gestures. In this task, participants performed simple finger movements and gestures based on cues. This task provided insight into brain activity during varied motor actions.
6. Imagery basic. Patients first performed actual hand and tongue movements, then imagined doing these movements. This task compared brain activity during real versus imagined movements.
7. Imagery feedback. Similar to the above, but with a feedback task based on imagery. It's about understanding brain activity when imagining movements and getting feedback.
8. Speech basic. Participants were shown nouns and either read them or spoke a related verb. This task helped in examining language processing in the brain.
9. Memory n-back. This was a memory task using house pictures. Patients had to remember and identify when a specific image was shown. It's a way to explore working memory in the brain.
10. Visual search. Patients looked at arrays of colored squares and reported the color of a targeted square. It's about understanding visual processing and search patterns in the brain.
11. Faces basic. Patients viewed simple pictures of faces and houses, identifying specific images. This task explored face recognition and visual processing.
12. Faces noise. This was similar to the faces basic task but used scrambled face images to make the task more challenging. It was a deeper dive into visual and face recognition processing.
13. Fixation PAC. Again, a fixation task like fixation power law, but this time with a focus on how broadband activity is entrained to oscillations, basically how brainwave patterns are linked.
14. Fixation high frequency. Similar to fixation power law, but with high-frequency recording. It's about examining finer details in brainwave patterns during fixation.
15. Mouse track. Patients used a mouse to track a target, similar to the joystick task but with a different control method. It tested coordination and brain activity during a tracking task.
16. Speech lists. Here, patients read lists of nouns and generated verbs, testing language production and processing in a more complex scenario than the speech basic task.
Each of these tasks helps to understand different aspects of how the brain works, from memory and language to movement and visual processing. The cool part is how they connect specific brain areas and activities to these different tasks.

Okay, we have to see how we actually extract the data. Okay, so in the folder we have 22, so 20 folders. Yeah, let's just focus on a specific task. From the paper, which task had the most significant response in the recorded signals? Also, can you check if functional MRI is available in this dataset? Yep, probably not.

The paper by Kai J. Miller and team presents an extensive dataset of human brain activity, but it doesn't explicitly rank the tasks by the level of significance in terms of the responses recorded in the signals.
Generally, the significance of the response in each task would depend on the brain regions being monitored and the specific nature of the task. Some tasks might elicit strong responses in certain areas of the brain while others might be more subtle. As for functional MRI (fMRI) data, the study focuses primarily on electrocorticographic (ECoG) data. ECoG involves recording electrical activity from the cerebral cortex using electrodes placed directly on the brain's surface. This is different from fMRI, which measures brain activity by detecting changes associated with blood flow. The paper does not mention including fMRI data in this specific dataset. ECoG provides more precise spatial and temporal resolution compared to fMRI, making it particularly valuable for the kind of detailed brain activity analysis conducted in these experiments. If fMRI data is a requirement for your research or interest, you might need to look into datasets specifically designed for fMRI studies.

Yes, obviously you need to select the patient that had the electrode grid over the relevant area for the task. You would assume that irrelevant positioning of the electrodes resulted in certain patients not doing certain tasks. Is that correct? This is the description for the first uploaded image, and this is the description for the second uploaded image. Now, can you answer the question? Again, supposedly these tasks had the largest response in the signal: during the task there was a highly significant difference in the recorded signal. Can we go over this? Yep. Okay, so maybe it doesn't actually make sense to give an option to select the patient and the dataset; we'll just focus on a specific dataset. So I'm still trying to figure out which one is better to use.

Certainly. Let's dive into the descriptions provided for the two ECoG tasks and infer which might have induced the most significant response in the recorded signals. For the visual search task (figure three): electrode location at the occipital pole, which is relevant for visual processing. Response: notable oscillations in the alpha and beta ranges that decreased during the visual search task, which is consistent with engagement in visual processing. Power spectral density showed a decrease in power in the alpha and beta ranges during the visual task compared to the blank screen. Broadband response: there was a sustained increase in broadband signal during the visual search, which indicates a significant active response. For the face and house pictures task (figure four): electrode locations across the ventral brain surface. Response: the experiment measured event-related potentials (ERPs), which are highly focal and related to the stimuli (faces and houses). Face-selective ERPs showed considerable variability (polymorphic), indicating complexity in how the brain processes faces. Broadband and ERP changes: while broadband responses were observed, the ERPs were highly specific and varied greatly across the cortex, suggesting that the brain's response to the stimuli is nuanced and not straightforward. Between these two tasks, the visual search task seems to show a clearer and more consistent change in the signal, both in terms of oscillatory behavior and broadband changes. The face and house pictures task, while showing significant focal ERP responses, seemed to suggest more complex and less predictable changes in brain activity that might not be as easily quantified in terms of a largest response.
The visual search task resulted in a notable, sustained broadband increase across the task duration, which typically correlates with robust and consistent neural activity, and therefore might be considered to reflect a more significant response signal-wise. It's worth noting that significance in brain signals can refer to different things: it could be about the strength of the signal, the clarity of the change compared to a baseline, or how well it correlates with the expected activity for that region of the brain. The visual search task seems to present a clear, strong response, which is likely why it might be considered significant. Lastly, regarding the availability of functional MRI data, the information provided doesn't include fMRI data, as the focus is on ECoG recordings. Functional MRI data would likely be found in a separate dataset or study designed to combine ECoG with imaging data.

We have the visual search or the faces versus houses, which sounds like a classic fMRI study. But yes, in this case the patient also had electrodes; I'm wondering how the electrodes were managed within the MRI scanner, or was it recorded separately or something? Yeah, I don't know. It is possible that these arrays are MRI compatible, I'm not sure. Yeah, I don't particularly like the motor tasks, because would they potentially induce noise, motion effects? Yeah, so the number of subjects. Most subjects did the fixation power law session, a task performed as the patients stared for two to three minutes at an X on the wall, three meters away. That was an examination of power-law structure. And who did the faces shown in random order, with the blank screen between houses and faces? Yes, this... yeah, the faces basic. That might be a good one to look at. We had fewer patients in it, but it's a clearer task. So they were shown a face or a house, in random order, 400 milliseconds each, with a 400-millisecond interstimulus blank screen, and patients reported an upside-down house. But it's only mentioned once, why? Yeah, we might look at that dataset instead of trying to load all of them in a drop-down menu. Let's modify the code quickly. Won't be quick, will it? So, as I just said, let's do something quickly, and, ten hours later... Let's see, the Flask application. Okay, so we'll be looking at a specific task; it's called the imagery basic. Yeah, obviously we know what each task is. We would not be comparing between tasks, I think that's too much to ask; we'll be doing a within-task comparison. Yeah, we are not playing Minecraft through EEG, not yet. We're trying to do something way more basic: looking at what this dataset shows. So at least those are implanted electrodes, so they should produce a clean EEG. But maybe we still don't want to look at the motor tasks, because they might be inducing some artifacts. So we'll be looking at, yeah, this one here. And we essentially will simply be trying to do a better version, like an interactive thingy, an interactive Flask application that essentially will be showing this figure. But obviously this will be just one patient. Yes, we will start from one patient. And yes, we have the above description of this specific task. And it's wrong: "basic cortical activity during motor execution, motor imagery", blah, blah, blah. No, that's not the right file. Is this the one with the houses and faces? There's nothing in figures. We have location. We have how many patients? Seven.
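To make the spectral comparison discussed above concrete, here is a minimal sketch of a band-power comparison between task and baseline segments using scipy.signal.welch. The sampling rate, the segment indices, and the alpha band limits are assumptions for illustration, not values taken from the dataset.

    import numpy as np
    from scipy.signal import welch

    def band_power(segment, srate, fmin, fmax):
        """Mean Welch power of a 1-D signal segment within [fmin, fmax] Hz."""
        freqs, power = welch(segment, fs=srate, nperseg=int(srate))  # 1-second windows
        band = (freqs >= fmin) & (freqs <= fmax)
        return power[band].mean()

    # Hypothetical usage: ratio of alpha power during the task vs. a blank-screen baseline
    # alpha_ratio = band_power(chan[task_idx], 1000, 8, 12) / band_power(chan[base_idx], 1000, 8, 12)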
Why do we only have seven? It's a speech basic... imagery... Ah, sorry, it's called the faces basic, sorry for that. Which one is this? I have a feeling that this dataset will be super messy. Faces basic, yeah, this one. Yes, we have this description. By the way, the whole dataset is Creative Commons, so it should be good to use. Still trying to figure out why there are only seven subjects; it's meant to be 14. And the tasks are called different things in different places. Yeah, I'm trying to figure that out, but supposedly we are in the correct folder; it's called faces basic. Yeah. Okay. Long story short, we would like to update the code that we have, everything, HTML, JavaScript, Python, to load this specific dataset, and load the raw EEG data to begin with, or just essentially see what is in the data files. Okay. Instead of loading the EEG data, we just want to see, at the first stage, what is in the data files. So essentially we'll need, on the front panel, an option to select a patient for this specific task, based on the folder and file structure provided, and show what those data files contain.

All right. We have a clear task here. We're going to put together code that allows us to load the dataset specific to the faces basic task for each patient, as per the file structure provided. The objective is to be able to select a patient's data file and display its contents, which include the variables stim, data, and srate. This is what we need to do: one, provide an interface to select a patient code; two, load the corresponding faceshouses .mat file from the /data/ directory; three, display the contents of the stim, data, and srate variables from that file. Let's start by outlining a Python code structure that can be used for this task. We'll need to use libraries like matplotlib for data visualization, scipy or h5py for loading MATLAB .mat files, and potentially a web framework like Flask if we want to make this an interactive web application. Please note that without direct access to an environment where I can execute Python with full internet access or the ability to directly browse the file system, I can't provide a complete working web application, but I can outline the process and give you the code structure to start with. Here's a Python code snippet using scipy.io to load MATLAB .mat files.

    import os
    from scipy.io import loadmat
    import matplotlib.pyplot as plt

    # Assuming you have a function to get the patient code and the data is locally available
    def load_patient_data(patient_code):
        # Construct the file path
        file_path = f'path/to/data/{patient_code}/{patient_code}_faceshouses.mat'
        # Check if the file exists
        if not os.path.exists(file_path):
            print(f'No data file found for patient code {patient_code}')
            return None
        # Load the data
        data = loadmat(file_path)
        return data
    # This function would be part of your web application, where the patient code is an input from the user
    def display_patient_data(patient_code):
        data = load_patient_data(patient_code)
        if data is not None:
            stim = data['stim']
            ecog_data = data['data']
            srate = data['srate']
            # Now you can display or process the data as needed
            # For example, you could plot the stimulus over time
            plt.plot(stim)
            plt.title(f'Stimulus over time for {patient_code}')
            plt.xlabel('Time (ms)')
            plt.ylabel('Stimulus')
            plt.show()
            # And you can do more visualizations or data processing
        else:
            print(f'Data could not be loaded for patient code {patient_code}')

    # This would be an example call
    display_patient_data('ja')

The snippet above is the back-end logic for loading and displaying data for a given patient code. If you're building an HTML/JavaScript front end, you'd create an interface with a drop-down or input box for selecting a patient, then use JavaScript to make a request to your Python back end (a Flask app, for example), which would then execute this code and return the data to be displayed on the web page. Remember, this is just the basic logic; integrating this into a full-fledged web app would require additional code for front-end/back-end communication, error handling, and a proper setup to serve the app on the web.

And while we deal with them: what is a utilities folder for Python code normally called in a project? Yeah, I have a load_data.py. Let's update the info with the workspace thingy. Can you update the file and folder structure for the current project? It should have utils in it. And now, so we currently have this; we already have a data folder. Yeah, we don't need the tasks, because we're going to be looking at a specific task, and we do not need the tasks drop-down. We'll be looking at a specific task, it's called the faces basic; it's one that compares data recorded when the patient is looking at images of houses and then faces. So I'd like to update this page to do this, and later we will update the JavaScript as well. Now, it's not a motor task again, this one. Yeah, we do have the patients. So we'll have just one drop-down for the patient; we need to populate it from the folder structure. This will be too long, won't it? So, updating the JavaScript, but let's do it again with the workspace handle; it will say too long. Yeah, so each patient has a separate .mat file, that's where the data is stored. You can use the load_data.py script in the utils directory to load the data. That's right. Okay, can we modify load_data to read the .mat files and convert them into a Python data structure? So we load the .mat... no, we won't be plotting here; plotting is in another file, and ideally we won't be doing it in Python. It will be: process the data, then plot in JavaScript. It's turning it into an array, okay. We want that base path, that's correct. We don't need to predict anything at the moment; we need to process the data. So we have a GET request, it's called load_data, and it has a subject ID. We need to populate the subject ID first. Yeah, the patient code is... should we have the patient code in JavaScript or not? Why is it hallucinating stuff? Yeah, load_patient_data, all underscores; underscores, underscores, how do I do that? Yeah, in general we want to rely more on the front end, but obviously the data is at the back end, so to process it we need to populate a patient code, as we have in the text. Add that folder structure. Oops. For each patient we have two
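Since the plan above is to do the plotting in JavaScript, the back end mostly needs to turn the loaded MATLAB arrays into something JSON-serializable. A minimal sketch; the variable names stim/data/srate follow the file description above, while the array orientation and the one-channel-at-a-time choice are assumptions.

    import numpy as np

    def mat_to_json_payload(mat):
        """Turn the loaded .mat dict into plain lists/scalars that the front end can plot."""
        return {
            "srate": int(np.squeeze(mat["srate"])),        # sampling rate as a scalar
            "stim": np.squeeze(mat["stim"]).tolist(),      # stimulus code per sample
            "n_channels": int(mat["data"].shape[1]),       # assuming (samples, channels)
            # one channel at a time keeps the JSON response small
            "channel_0": mat["data"][:, 0].astype(float).tolist(),
        }

    # In the Flask route: return jsonify(mat_to_json_payload(loadmat(file_path)))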
folders, one with... Let's start with populating the patient code numbers, because they should appear in this drop-down menu. GPT-4 is becoming super slow, super slow.

Based on the Flask app structure you've provided and the data folder structure from the faces basic task, let's go through the steps to create populate_patient_code.py, which is presumably the script responsible for populating patient codes. Assuming that the script's function is to create a list of patient codes that will be used to populate a drop-down in the Flask app, here's an example of what populate_patient_code.py could look like:

    import os

    def get_patient_codes(data_directory):
        """
        Get a list of patient codes based on the folders present in the data directory.

        Parameters:
            data_directory: path to the data directory containing patient folders.

        Returns:
            list of patient codes.
        """
        patient_codes = [folder for folder in os.listdir(data_directory)
                         if os.path.isdir(os.path.join(data_directory, folder))]
        return patient_codes

    # This could be an example call; you'll need to adjust the path as per your environment setup
    patient_codes = get_patient_codes('/path/to/Desktop/.../faces_basic/data')

This function would be called by your Flask app in app.py to populate the patient codes in the drop-down on the front end. Here's a simplified example of how it could be used in app.py:

    from flask import Flask, render_template
    from utils.populate_patient_code import get_patient_codes

    app = Flask(__name__)

    @app.route('/')
    def index():
        # Make sure this path points to your data directory
        patient_codes = get_patient_codes('static/data')
        return render_template('index.html', patient_codes=patient_codes)

    if __name__ == '__main__':
        app.run(debug=True)

In your index.html you would then create a drop-down menu that gets populated with these patient codes:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <!-- other head elements -->
    </head>
    <body>
        <select id="patient-code-dropdown">
            {% for code in patient_codes %}
            <option value="{{ code }}">{{ code }}</option>
            {% endfor %}
        </select>
        <script src="{{ url_for('static', filename='js/script.js') }}"></script>
    </body>
    </html>

The script.js file could contain JavaScript code to handle the event when a patient code is selected, and then make an AJAX request to a Flask endpoint that would use load_data.py to load and return the contents of the selected patient's data file. Make sure to install Flask and any other required packages in your requirements.txt:

    flask==2.0.1

Remember to replace the placeholder paths with the actual paths to your data directory in the Flask app. Also ensure your server has the proper permissions to read from the data directory, and that patient data is securely handled according to HIPAA guidelines and any other regulations.
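To close the loop on the stated first-stage goal (select a patient and just see what the data files contain), here is a sketch of the complementary back-end endpoint. The data path and the patient-code/faceshouses naming follow the file structure discussed above, but should be treated as assumptions to check against the real local layout.

    import os
    import numpy as np
    from flask import Flask, jsonify, request
    from scipy.io import loadmat

    app = Flask(__name__)
    DATA_DIR = "static/data"  # placeholder; point this at the local faces basic data folder

    @app.route("/load_data")
    def load_data():
        """Return a summary (shape and dtype per variable) of the selected patient's file."""
        code = request.args.get("patient_code", "")
        file_path = os.path.join(DATA_DIR, code, f"{code}_faceshouses.mat")
        if not os.path.exists(file_path):
            return jsonify({"error": f"No data file found for {code}"}), 404
        mat = loadmat(file_path)
        summary = {key: {"shape": list(np.shape(val)), "dtype": str(np.asarray(val).dtype)}
                   for key, val in mat.items() if not key.startswith("__")}
        return jsonify(summary)

The drop-down's change handler in script.js would then call /load_data?patient_code=... and render the returned summary, deferring any actual signal plotting to a later step.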