Yeah, we have the music going on as well, so we'll be taking breaks with some annoying music. There's a false detection — it's detecting a hand on my head. Quite unusual. Okay, so it looks like I'm using the correct microphone and everything.

So what are we going to do first? We're going to go over the website. If you've checked it already, I'm happy to get the feedback. I'm also checking this — there are currently more ads on it, and I'm testing this ad blocker. Well, the ad blocker is working: there are no ads whatsoever.

About that blog post recently — someone mentioned it, so I did a quick Python script as an example of a Bland-Altman plot. It would be nice to actually turn it into a web application, an interactive one where you could load real data and see what this plot looks like for different channels. It's similar to what we had before.

Some of the tools are pretty heavy. This one is using a lot of my server resources, so we'd have to redo it, ideally in JavaScript — most of them use quite a bit of Python backend stuff — so we might be changing it. There are a few options. This is just a warm-up stream to see where we go next.

By the way, if you do not use an ad blocker there should be a lot of ads, and you will be supporting the project by watching them. But I'm just trying to see how easily they can be blocked, and I'm pretty sure Google is doing a lot on that front — but yeah, good luck. Maybe the site could display a banner saying "an ad blocker was detected, please turn it off", and it would keep popping up. I'm still playing with the settings.

Yes, I was just saying this histogram — sorry, spectrogram — tool is a bit heavy. I'll have to change it.
Well, the easiest thing would be just to limit the window size — not let it go over, whatever, 10 seconds. I can actually hear the server spinning up all the fans when using it with a slightly larger time window.

So we were doing some stuff for this competition, which is about to end — but we weren't actually competing. It's a Kaggle competition. We had something running for it, trying to parse the data, but the dataset is not great, to say the least. We had this app scripted — it's a Flask application, and it is deployed, but I didn't even put the link to it on the main site, mainly for several reasons. That's what it looks like. It is deployed, so if you want to check it out I'm happy to share the link — it's just that something's not working. The way the data is being stored is very peculiar, to say the least, so this was quite a learning experience.

The competition is closing in six hours — let me know if you're participating or interested. The good thing about Kaggle competitions is that you get access to datasets. This one, yeah, took me some time to realize that I probably don't want to be looking at it, because there are a lot of issues with it that aren't really covered anywhere.

I'm trying to parse it, and there are all sorts of things, like the EEG not actually matching the spectrograms. They are in different time windows: the spectrogram is 10 minutes, the EEG labels are 50 seconds. There are too many unnecessary labels.

So the question is: is it worth continuing to look at this data or not? Probably not, but I'm happy to be convinced otherwise. Another option — probably, yes; someone mentioned this one to me recently.
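As an aside, the 10-minute-spectrogram vs 50-second-label mismatch above is easier to reason about with a concrete spectrogram computation. Here is a minimal numpy-only STFT sketch — the sampling rate, window length, and test tone are all illustrative assumptions, not the competition's actual values:

```python
# Minimal STFT-based spectrogram sketch (numpy only).
# fs, window length, and the 10 Hz test tone are assumptions for illustration.
import numpy as np

def spectrogram(signal, fs, win_sec=1.0, overlap=0.5):
    """Return (freqs, times, power) for a 1-D signal."""
    nper = int(fs * win_sec)                    # samples per window
    step = int(nper * (1 - overlap))            # hop size
    window = np.hanning(nper)
    segments = [signal[i:i + nper] * window
                for i in range(0, len(signal) - nper + 1, step)]
    spec = np.abs(np.fft.rfft(segments, axis=1)) ** 2   # power per segment
    freqs = np.fft.rfftfreq(nper, d=1 / fs)
    times = np.arange(len(segments)) * step / fs + win_sec / 2
    return freqs, times, spec.T                 # shape: (n_freqs, n_times)

fs = 200                                # Hz, assumed EEG sampling rate
t = np.arange(0, 50, 1 / fs)            # a 50-second "labelled" segment
x = np.sin(2 * np.pi * 10 * t)          # 10 Hz alpha-band test tone
f, tt, S = spectrogram(x, fs)
peak = f[np.argmax(S.mean(axis=1))]     # dominant frequency of the segment
```

With a real recording you would compare this per-segment picture against the provided 10-minute spectrogram to see whether they actually line up.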
We might be jumping to using that one — it's a different dataset. However, there are a lot of problems in that one as well. There are about 76 publications on it already, and also sleep studies — barely-sleep studies, let's say. I think it's a whole cohort of different experiments. We need to look into it — maybe read the paper, go over it quickly; happy to do it now.

This Kaggle competition one is probably gone. One thing we want to do, now that the competition is about to end in six hours: it has this private/public split, right, so you have a public score — and it's interesting how the public score is much lower. I won't be going over all of it, but my understanding is that the private leaderboard is preliminary and will be finalized after the results are verified.

You can see how, of the first hundred or so teams that submitted — some of them with a zillion entries — the public score is much lower than the private score. I'm not sure how significant it is, but essentially it means the algorithm doesn't generalize well onto the hidden, private dataset. That might be a problem.

These guys or girls published a solution — a couple of them published solutions. You had these five that came first; take team Sony, say. Well, it's not finalized yet, because they're still doing some sort of verification of the results, which is meant to be objective, not subjective — there's just this score on which you rely. I don't quite understand how they're going to go about it, with all these five teams essentially having the same score. How are they going to distinguish which one is first and which is fifth place?
Could be quite a mystery — the public leaderboard had a lot of them at the same score, and this score is some sort of objective measure. We could also look into how to measure the performance of these algorithms.

What we want to do, really: since we already have this dataset parsed and everything, we want to train a simple fuzzy-logic system on it and then compare the results to some of these solutions. So they published the solution — we can cover this one: a convolutional neural net, five stages, channel mixer — and they published the code; the code is public as well.

[To chat] Hi, how are you? Thanks for joining in. Sorry, there might be a delay in the response.

Anyway, what I was saying is that once this competition has ended, we eventually want to take their model and try to compare it to a basic, basic, basic fuzzy-logic system — similar to this bot that we have here to identify normal versus abnormal ECG. In this case we'll be doing the same with EEG, looking at a couple of parameters. Essentially the same as for ECG — well, not exactly the same, but as simple, if not simpler. You would have your frequency, your amplitude, channel interactions. This one is just a single channel, assuming single-channel ECG plus noise.

We would not be doing much filtering. I assume the signal was already filtered — even if not digitally, it was filtered by the hardware of the recording equipment. And the data was collected by Harvard Medical School, so it must be correct, right? It must be great quality.

Their model is looking at 360 parameters. It would be good to know what they are — I don't think there's any EEG expert who could name that many parameters. So I have to ask: what are those parameters? (And yes, the music is there to relax.) Well, I'm pretty sure some models have thousands of parameters, so maybe 360 is not that bad.
I don't know if they cover it in here or not — probably not — but it would be interesting to know which parameters had the strongest impact on the performance of the algorithm. Because eventually I want to deploy something: you have the dataset, the labels are part of the data, and then you have the prediction of the model. We could do that for the neural-net model and then for basic fuzzy logic and see how they compare to one another. That would be the general idea.

We actually have a list of changes for the site — trying to update the About section — and we have potentially two student projects. If you are interested in a project, maybe talk to your professor or lecturer and see if they're happy for you to do a project with us. We're primarily interested in the XAI — explainable AI — stuff: things where we can actually see into that black box, make the black box transparent. Maybe there's another little black box inside, but we want to have some sort of transparency into the algorithms and the models that we deploy.

If we treat the fancy model as an expert, then using fuzzy logic we could actually simplify it — because normally you don't need the fancy models. In this case, of the 360 parameters, I'm pretty sure we can eliminate half, then potentially another half, and another half, and the model will still perform quite well while not using too much electricity.

The question, of course, is how easily you can replicate that performance on another dataset, because that's normally quite problematic. In this case I don't know exactly — does anyone know what this score is and how it was actually measured?
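The fuzzy-logic-baseline idea above can be sketched very crudely: score a segment on a couple of interpretable parameters (amplitude, dominant frequency) and threshold them. Everything here — the thresholds, the sampling rate, the two test signals — is an illustrative assumption, not a clinical rule or the stream's actual tool:

```python
# Deliberately simple rule-based ("fuzzy-logic-style") baseline:
# classify a single-channel segment from amplitude + dominant frequency.
# All thresholds are made-up illustrations, not clinically meaningful values.
import numpy as np

def dominant_freq(x, fs):
    """Frequency bin with the most spectral power (DC removed)."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return freqs[np.argmax(spec)]

def classify(x, fs, amp_limit=100.0, f_lo=1.0, f_hi=30.0):
    """Return 'abnormal' if amplitude or dominant frequency is out of range."""
    amp = np.ptp(x)                       # peak-to-peak amplitude
    f = dominant_freq(x, fs)
    if amp > amp_limit or not (f_lo <= f <= f_hi):
        return "abnormal"
    return "normal"

fs = 200
t = np.arange(0, 5, 1 / fs)
ok = 20 * np.sin(2 * np.pi * 10 * t)      # modest 10 Hz rhythm
bad = 300 * np.sin(2 * np.pi * 10 * t)    # implausibly large amplitude
```

The point of a baseline this simple is exactly the comparison described above: every decision it makes is inspectable, unlike the 360-parameter model.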
So I don't know how significant a difference it is between team Sony's 0.21 and, say, 0.27. On the public leaderboard, 0.27 is over 50 places down the list, so I have a feeling it is quite a significant difference, even though those numbers are quite close to one another.

They're using this divergence — there's just the tool page on this divergence alone. Not much of an explanation, is it? It shows you the code that calculates the score. Considering this is a warm-up stream — when I get back to regular streams (because that's how we generate content for the website), something like this would actually be of interest: compare this to d-prime. Let's see if it suggests d-prime as an alternative; we can do a bit of a comparison between them.

[Reading the answer:] "Kullback-Leibler (KL) divergence is a concept from information theory, often used in statistics and machine learning. Technically, it measures how one probability distribution diverges from a second, expected probability distribution. Think of it as a way to quantify the difference between two probability distributions. Here's the formula:

KL(P ∥ Q) = Σ_x P(x) · log( P(x) / Q(x) )

In this equation, P and Q are probability distributions. KL divergence calculates the sum of the probabilities from P times the log of the ratio of the probabilities in P and Q. It's important to note that KL divergence is not symmetric: KL(P ∥ Q) is not the same as KL(Q ∥ P).

In practical applications, KL divergence can be used for things like model selection (comparing how well different models represent the same data), feature selection (determining which features in a dataset are the most informative), and anomaly detection (identifying outliers in data based on how much they diverge from a known distribution). It's widely used in machine learning, especially in scenarios involving probabilistic models or when dealing with
information theory concepts."

It didn't mention d-prime. Well, that's from detection theory, I know. [Asking for alternatives:] "You're looking for alternatives to Kullback-Leibler (KL) divergence. There are several other methods to measure the difference or similarity between two probability distributions. These alternatives have different properties and are used based on the specific requirements of your problem. Here are a few.

1. Jensen-Shannon divergence (JSD): a symmetric version of KL divergence. It's defined as the average of the KL divergence from each distribution to the mean of both distributions. This makes it more stable and less sensitive to small sample sizes.

2. Hellinger distance: measures the similarity between two probability distributions and is bounded between zero and one, making it easier to interpret. It's defined as the square root of the sum of the squared differences between the square roots of the probabilities.

3. Total variation distance: the maximum difference between the probabilities assigned to the same event by two different distributions. It's simple and intuitive, giving a direct measure of how much the two distributions disagree on an event.

4. Wasserstein distance (earth mover's distance): used in optimal transport theory, this distance measures the minimum cost of moving and transforming one distribution into another. It's particularly useful when the distributions have different supports.

5. Fisher information metric: a more differential-geometric approach, where the distance between distributions is seen in terms of the amount of information that separates them. It's often used in the context of statistical manifolds.

6. Cross-entropy: often used in machine learning, especially in classification problems. It measures the difference between two probability distributions, focusing on the average number of bits needed to identify an event from a set of possibilities.

The choice among these depends on factors like the properties of
your distributions, the importance of symmetry, the need for boundedness, and computational considerations. For example, in machine learning, if you need a more stable and symmetric measure than KL divergence, you might opt for Jensen-Shannon divergence. Meanwhile, for problems in optimal transport, or when dealing with distributions with different supports, Wasserstein distance might be more appropriate."

Back to the competition for a second. We do have the data, so I'm kind of familiar with what the data is doing. This figure is quite useful in the sense of describing the dataset. You have these different types of edge cases — in this case, those are the labels. This is supposedly a 10-minute EEG for which you do not actually have the raw EEG: you only have the spectrogram, plus some 50-second samples of the raw EEG. So essentially I think they want you to just look at the spectrogram. You have one, two, three, four spectrograms in this one image.

So if that's the seizure there, you should be able to see it — those images are meant to be clearer. In this case all the reviewers agreed, and all the annotations labeled this segment as seizure. It was odd that in some cases you had about six reviewers and in some you had only two or three.

These are edge cases because there's essentially no agreement between the different experts. Some are somewhere in between, where there's also high disagreement — it's pretty much split. In this case, the way they labeled it is really odd: ten seizure labels and then ten of the other label. The question is — it would be really interesting to know whether those are different reviewers, or the same reviewer just looking at different parts of the same ten-minute window. Anyway, there are a lot of questions about this dataset and the way it was labeled.
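The KL and Jensen-Shannon formulas read out above translate to a few lines of numpy. This is a generic sketch over discrete distributions — the competition's actual scoring code may differ — and the epsilon is just a guard against log(0):

```python
# KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)), and symmetric Jensen-Shannon.
# p and q are discrete distributions over the same bins (non-negative, sum to 1).
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence. Not symmetric: KL(p,q) != KL(q,p)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Jensen-Shannon divergence: average KL from each side to the midpoint."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

For example, `kl_divergence([0.5, 0.5], [0.9, 0.1])` differs from the reversed call, while `js_divergence` gives the same value in both directions — which is exactly the stability property the read-out answer mentions.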
I wish the original data was published as well — the one that actually has the raw EEG as recorded from the device, not filtered or anything, from which we could generate the spectrograms ourselves. I don't know whether, when they were labeling, they were looking at the spectrograms or at the time series, the raw EEG signal. I think that was mentioned — there's a paper on the data.

We might not be using this dataset anymore. There were problems with the ECG, so what makes you think the EEG might not be problematic as well? There's something there — the low frequency on the left side, you can barely see it. Maybe I'm not plotting it correctly in terms of color intensity. That looks like it should be marked accordingly — or correct in a sense that there's something wrong with the recording. It's odd that this spectrogram doesn't correspond to what's in the signal; there's just some weird stuff in it. The spectrograms in many cases do not match what you see in the raw data, which is unusual. I would expect something like this for your baseline; this looks more like an EEG that should be marked as LPD.

Anyway, I think we need to know when to give up on a dataset as well. I think this will be it. Right — so I have these alternatives to Kullback-Leibler divergence.
Let's ask about d-prime quickly. [Reading:] "d′ (d-prime) is a measure used in signal detection theory to quantify how well a signal can be distinguished from noise. It's different from measures like KL divergence or Jensen-Shannon divergence, as those are about comparing probability distributions; d-prime is about sensitivity, or discriminability, in the context of detecting a signal against background noise. Here's a bit more detail.

1. Concept: d-prime is used to quantify an observer's ability to detect a signal against background noise. A higher d′ value indicates better discrimination ability. In practical terms, it tells you how well a system or a person can detect a true signal — like an event or a specific data pattern — from noise, which could be literal noise or just irrelevant data.

2. Calculation: it's calculated from the hit rate and the false-alarm rate. Hit rate (HR): the probability of correctly identifying the signal. False-alarm rate (FAR): the probability of incorrectly identifying noise as a signal. d′ is calculated as the difference between the z-scores of the hit rate and the false-alarm rate. (A z-score is a statistical measure that describes a value's relationship to the mean of a group of values, measured in standard deviations from the mean.)

3. Use cases: it's widely used in psychology, particularly in psychophysics and perception studies, where it measures an individual's sensitivity to particular stimuli. But its applications can be broader, extending to any area where you need to assess the accuracy of signal detection, like radar systems, telecommunications, and even some machine learning problems.

4. Technical significance: d-prime is useful because it considers both the ability to detect true signals and the tendency to falsely identify noise as a signal. This makes it a robust measure when evaluating systems or individuals in terms of both their sensitivity (not missing true signals) and specificity (not misidentifying noise as signals).

In summary, d′ is a
different tool for a different job compared to KL divergence and its alternatives. If you're dealing with issues of signal detection, sensitivity, and specificity, d′ is the go-to metric; for comparing probability distributions, you'd use something like KL divergence or its alternatives."

Yes — as we said, there are these potential student projects available. This one will be focusing on explainable AI (XAI): taking black boxes and making them slightly more transparent. The second project is mainly around visualization of data. Have a look.

We're also trying to update this old list of resources. Skip the first one — we have those. I want to take this list of datasets. There's this dataset search — apparently you can even search for datasets on Google. Why doesn't this one have the search term already in the URL? Leave it like that. I imagine EEG datasets would cost a lot of Kaggle coins. This one has an open-source license; the Kaggle one doesn't mention it, but it should be open source as well — it's nice when that's actually mentioned. 2020. That was fast — it's only 100 kilobytes. And you need to log in.

So I want to do at least a table where we'll have parameters like — can we do it as a table in HTML? It would be nice to add that table to the sources. What should the columns be? We'll have the name of the dataset, and then essentially things like accessibility — that will be a score depending on whether login is required or not, things like that. I don't want it to be too generic; you would expect slightly more from GPT-4, but that's okay. Where should we put it? We currently just have links to datasets — should that include hardware as well, or software? Maybe. A dataset could also have API access, because the problem with this thing is that otherwise I'd essentially have to keep a copy of the dataset on my server.

By the way, these sessions are meant to be interactive — let me know any complaints, questions, suggestions.
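The d′ calculation described above — the difference between the z-scores of the hit rate and the false-alarm rate — fits in a couple of lines using only the Python standard library (`statistics.NormalDist().inv_cdf` is the inverse normal CDF, i.e. the z-score):

```python
# d' (d-prime) from signal detection theory:
# d' = z(hit_rate) - z(false_alarm_rate)
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index; rates must be strictly between 0 and 1."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: 84% hits with 16% false alarms gives d' of roughly 2,
# while hit rate == false-alarm rate gives d' = 0 (no discrimination).
```

In practice, rates of exactly 0 or 1 are usually nudged inward (e.g. by half a trial) before computing d′, since the z-score is undefined at the extremes.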
Okay, where were we? By the way, I'm editing those videos, and they will not have the, you know, annoying music — or, I don't know, maybe you'd like the music. There's a video generating all these musical masterpieces; if you want to enjoy them as a stand-alone video slash song, go for it — or maybe you can make your own, that would be even better. Yeah, there's something going on there.

Right, so we have this table, and we have this table in HTML — pop it in here. Okay, so we have a dataset table: an EEG datasets table. Number of subjects, data format — yeah, those are really good suggestions. It can actually populate the table: published results, number of subjects, data format, data type, access, link — that's kind of important. It just gave me random sample stuff. Access: public, restricted. We used to have stroke-specific datasets, then epilepsy — those are all EEG — and generic EEG, general health. Get rid of that one. And we need to make sure these links actually still work.
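For reference, the comparison table being discussed could start out something like this — the column names follow the list from the stream, and the row values are placeholders, not a real dataset:

```html
<table border="1" id="dataset-compare">
  <tr>
    <th>Dataset Name</th><th>Description</th><th>Number of Subjects</th>
    <th>Data Format</th><th>Year of Release</th><th>Data Type</th>
    <th>Quality</th><th>Accessibility Score</th><th>Data Processing</th>
    <th>Additional Modalities</th><th>License</th><th>Link</th>
  </tr>
  <tr>
    <td>Example EEG Set</td><td>Placeholder row</td><td>?</td>
    <td>EDF</td><td>?</td><td>Resting state</td>
    <td>Unknown</td><td>?</td><td>Raw</td>
    <td>ECG</td><td>?</td><td><a href="#">link</a></td>
  </tr>
</table>
```

The `id` and column set here are just one plausible layout; the stream later adds License and API-availability columns, so leaving room for extra columns is the main design point.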
Well, the main thing is the quality of the data. "Hey, can we expand that table with things like quality of the data? Accessibility might need more options than public or restricted — maybe give it some sort of score. Another thing we're interested in is whether the data is pre-processed or raw from the recording device. An additional parameter could be whether ECG is present, or any imaging, or any other recording modalities. Also, it would be absolutely great if you could start populating the table with actual existing datasets." We didn't provide the list yet, certainly.

[Reading the reply:] "Expanding the table further to include the quality of data, a more nuanced approach to accessibility, and additional parameters like data-processing status and the presence of other recording modalities (like ECG or imaging) makes it more detailed and useful. Let's structure the table with these new parameters and populate it with some hypothetical examples of actual datasets. Note that for quality we could use a simple rating system like high/medium/low, and for accessibility we might use a scoring system like 1-10, with 10 being the most accessible. The data-processing column will indicate if the data is raw or pre-processed, and additional recording modalities will be noted under 'additional modalities'.

Here's the updated HTML table: columns for Dataset Name, Description, Number of Subjects, Data Format, Year of Release, Data Type, Quality, Accessibility Score, Data Processing, Additional Modalities, and Link, plus two fictional example rows — 'Sample EEG Set 1' (EEG data for cognitive tasks, 30 subjects, CSV/EDF, 2019, event-related, high quality, accessibility 8, raw, ECG) and 'NeuroBrainData 2' (resting-state EEG recordings, 50 subjects, EDF, 2021, resting-state, medium quality, accessibility 6, pre-processed, fMRI imaging) — with more rows to be added for additional datasets.

In this expanded version: Quality — a subjective assessment of the data's reliability and usefulness.
Accessibility score — a numerical value representing how easily the dataset can be accessed by researchers. Data processing — indicates whether the dataset is provided in its raw form or has undergone pre-processing. Additional modalities — notes other types of data included with the EEG data, such as ECG, fMRI, etc. The datasets 'Sample EEG Set 1' and 'NeuroBrainData 2' are fictional examples; replace them with real datasets as per your requirement. You can further expand the table with more rows for additional datasets and adjust the columns as necessary to suit the specifics of the datasets you're working with. This structure offers a comprehensive view of each dataset, making it a valuable resource for researchers looking for EEG data that meets specific criteria."

Okay, let's share the whole — no, I didn't say that; it's the transcriber making assumptions. Assuming profanities, clearly. Now, why is the quality assessment subjective?

"This is my whole HTML page above, with a bunch of links. We would like to use that to start populating the table with the real datasets." ... "To populate the EEG dataset comparison table with real datasets from your provided list, you need to extract the relevant details for each dataset and insert them into the table. Let's start by focusing on a few datasets as examples; I'll use the information from the links you've provided to create entries in the table. For the quality field, which is subjective, a possible approach could be to base it on factors like the dataset's completeness, cleanliness, presence of artifacts, and the depth of accompanying documentation. However, since I can't evaluate these datasets directly, I'll mark the quality as unknown for now. Here's an updated table featuring some of the datasets from your list, with columns for dataset name, description, number of subjects, data format, year of release,
data type, quality, accessibility score, data processing, additional modalities, and link. The example row: 'Stroke Patient EEG' — EEG datasets of stroke patients, 50 subjects, everything else unknown, stroke-related, with a link to the figshare page for 'EEG datasets of stroke patients'. Additional rows for more datasets follow.

In this example I've populated the table with one of the datasets listed under stroke EEG datasets. You can follow the same format to add more datasets from your list. For fields like number of subjects, data format, year of release, etc., you would ideally fill these in with specific information from the dataset's documentation or description. If certain details are not available, you can mark them as unknown. The table structure, including columns for quality, accessibility score, etc., is flexible and can be adapted as you gather more detailed information about each dataset. Remember, for an accurate and helpful table you'll need to dive into each dataset's specifics and possibly even contact the data providers for detailed metadata."

I would need to add the license as well, and maybe have a column for whether there is an API available. It's a tricky business, isn't it — because if we have to download, then we have to store the data on a server, which will be costly. I mean, if I can recover the electricity cost, that would be great. By the way, we do have our first Patreon supporter, so we should probably do a shout-out — we have Johan Matthias supporting us on Patreon, so thank you, Johan.

Okay, so we currently have the following HTML. We actually do want to go one by one and add each dataset into the table. "Can you use the links provided to fetch the necessary data from the description of each dataset? Can you actually use the links provided in the HTML
to fetch the necessary data? We would also like to add or change the column names, and we would like to include the license type." Actually, accessibility should cover that. And this is the way it goes: it says you can supposedly access the data, then says "log in to access the data files". Log in, log in, log in — the usual thing: everything is behind a paywall or a login.

[Reading:] "Using the links in your HTML to fetch the necessary data for each dataset would typically involve some form of web scraping or data extraction, which is a process where a program or script automatically extracts data from web pages. However, it's important to consider a few things.

1. Automation limitations: as of now, my capabilities don't include executing or running real-time web-scraping scripts or directly interacting with external web pages to fetch data.

2. Manual process: you can manually visit each link, extract the relevant data about each dataset (like the number of subjects, data format, year of release, etc.),
and then populate your table with this information. This process involves opening each link and reading through the dataset's description and documentation to find the details.

3. Legal and ethical considerations: always ensure that scraping a website complies with its terms of service and data-privacy regulations. Some websites prohibit scraping in their terms of use.

4. Technical approach: if you decide to automate this task, you would typically use a programming language like Python, along with libraries like Beautiful Soup or Scrapy, to write a script that visits each URL, extracts the needed information, and then formats this data into your table. This task would require programming knowledge and an understanding of web technologies.

So, in summary: while I can guide you on how to structure your table and what kind of data to look for, I can't directly fetch the data for you. You'll need to manually visit each link and extract the needed information, or consider writing a script to automate this process, if you have the technical skills and it's legally and ethically permissible."

Well, the other thing we could scrape — it's actually probably a bit better if we scrape Google Scholar. And then we also have this search. Yeah, we're happy with these two searches: figshare, and the dataset search by Google, for which I forgot to enter "EEG". PhysioNet — there are quite a few; I'm sure PhysioNet will have a search as well. It does — a dataset search platform. Do I really need it? Yeah, that works. Let's just place the Google dataset search alongside all the EEG searches.

Someone's saying hello, and I don't know why it's not showing in my — hi, Vikram. I don't see it in the YouTube app; it's a bit unusual to see it only in my streaming software. Ah, now I do — I need to refresh the app, which kind of defeats the purpose of having an app. Okay, so we have this EEG dataset search, we have figshare, and Google — just put "Google Dataset Search". Vikram is asking what I'm doing. What am I doing?
That's a good question. We are updating the site — finelycares.com — in the About section. There is a resources page, and it currently just has a list of all these datasets, mainly EEG, though there's ECG too. We're making a table — this is still the original one. This table has the id "dataset-compare", and we have number of subjects — yes, that's important — data format, year of release, data type. Data type should be more in terms of the file format. Data processing: whether the data is pre-processed or raw. Additional modalities: normally if you have an EEG dataset it might have ECG in it as well.

"Will you teach me programming?" I don't know programming — I use GPT-4 to program. That's how all the tools on BionicHaos were made: using GPT-4 and GitHub Copilot. They are really great at teaching you how to program — well, they'll do the programming for you, if you know what to ask.

Why is this dataset-compare table showing up twice? ... "Do I have spare time?" So yes, unfortunately. I can play some music for you as well, whether you like it or not — and you can actually take the code for this one, regenerate it, and make the musical scale into something that sounds better to you. It's on the website as well. The window size should be fixed now. Okay, just kidding.

Back to this dataset comparison. So it's suggesting — yes, the robot is being a bit lazy and doesn't want to generate the HTML code for us. It tells us we need to go to each link one by one and scrape the data ourselves. We could do it manually, because we want to learn about each dataset.
Anyway, that could be good: give a summary and add the data into the table. Yep, is it 50 patients? It is. Let's see whether it's actually scraping the website, or just taking it from my HTML; when it starts "analyzing" stuff on the server, maybe it just wanted to generate text. Yes, we already have most of that, and the link actually works. There are different versions of it doing motor imagery tasks. It can scrape the keywords, and that's it exporting the citation.

Then the other question is whether there is an API for it or not. The JSON file with the description: can we just download it? Yes, we can. So that's a thumbs-up for accessibility: we didn't need to log in. So, over here under accessibility: you can supposedly download the data without logging in. There is not much in the JSON file itself, that's for sure. Right, so those are EDF files, supposedly. We can view the file details for just one of the patients. Yeah, let's continue for now and get back to it. We would have to download all the files. No. "Unknown website, do you want to continue?" What's the license? General terms of use. Okay.

Can we add a second entry in the table? Can we add a second entry in the table? We have this link and the following information in a text file below, with five patients. That's correct, five healthy individuals. It's a bit odd when they record from healthy subjects, because then what makes it a stroke dataset? Electrical stimulation. Yeah, I'm not sure if we should add "can download without login", considering we haven't actually downloaded the whole dataset. Yes, quality unknown; well, we wouldn't know until we actually get the data.

So, as you can tell, this might be a long-term project, and if anyone wants to help out, I'm more than happy to collaborate on this. Hold on, is my camera frozen? Okay, it's frozen now. Minimizing the window doesn't help. Okay.
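On the API question above (can the JSON description just be downloaded without logging in?), here is a rough sketch of pulling table fields out of such a description. The record content and field names below are invented for illustration, not taken from any particular repository:

```python
import json

# A made-up JSON description of a dataset, standing in for what a
# repository's metadata endpoint might return (field names are invented).
raw = """
{
  "title": "Motor imagery EEG dataset",
  "subjects": 50,
  "files": [{"name": "S001.edf"}, {"name": "S002.edf"}],
  "license": "CC-BY-4.0"
}
"""

record = json.loads(raw)

# The fields we want for the comparison table.
row = {
    "name": record["title"],
    "n_subjects": record["subjects"],
    "format": record["files"][0]["name"].rsplit(".", 1)[-1].upper(),
    "license": record["license"],
}
print(row)
```

If a repository serves metadata like this without a login, that alone is worth a point in the accessibility column.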
This was a warm-up stream today, because we haven't streamed for a while. We'll correct things later: make mistakes now, fix them at a later stage. MATLAB files as a data format is not great. Well, it's okay: Python can read, can parse, MAT files.

Okay, can you add the third entry into the table, using the following link? It made a hypothetical entry. Why is it creating a hypothetical? Why is it hypothetical? Just fetch the data from the following link. I'll play some music in the meantime.

We seem to have swapped roles: now it is telling us what to do. "To manually extract dataset information from the Zenodo link, follow these steps:

1. Open the link: zenodo.org/record/4108215

2. Read through the webpage to find key details about the dataset, such as: the title or name of the dataset; a brief description of what the dataset contains, or its purpose; the number of subjects or samples included in the dataset; the format of the data files (e.g. CSV, EDF, etc.); the year of release or publication of the dataset; the type of data (e.g. EEG recordings, behavioral data, etc.); and any additional modalities or relevant information, like whether it includes other types of recordings.

3. Once you have gathered this information, you can fill in the respective fields in your HTML table.

Remember, accurate and detailed information about each dataset will be valuable for researchers or anyone interested in using the dataset."

Yeah, Creative Commons is nice. What's your problem? "I can't..." Here is the data from the page; can you turn it into an entry in our HTML table? Yep. "Based on the detailed information you provided from the Zenodo page..."
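A quick aside on the MAT-file point above: Python can indeed parse MAT files, typically via scipy.io. A self-contained sketch, which writes a tiny made-up recording first so there is something to read; with a real dataset you would only need the loadmat call:

```python
import os
import tempfile

import numpy as np
from scipy.io import savemat, loadmat

# Round-trip a tiny stand-in recording so the example is self-contained;
# the channel count and sampling rate here are invented.
eeg = np.zeros((4, 100))               # 4 hypothetical channels, 100 samples
path = os.path.join(tempfile.gettempdir(), "demo.mat")
savemat(path, {"eeg": eeg, "fs": 250})

data = loadmat(path)                   # dict of variables, plus MAT header keys
print(data["eeg"].shape)               # (4, 100)
print(data["fs"].item())               # scalars come back as 1x1 arrays
```

Note that loadmat wraps scalars in 1x1 arrays, which is one of the small annoyances that makes MAT a less pleasant interchange format than, say, EDF.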
"Here's how you can structure the third entry for your HTML table.

Dataset name: Quantification of stroke lesion volume using epidural EEG. Description: this dataset features EEG recordings in response to sound stimuli, including data from normal subjects (n=10) and subjects with mild (n=7), moderate (n=7), and severe (n=7) right auditory cortical infarction; it is designed for the analysis of stroke lesions in a cerebral ischemic rat model. Number of subjects: 31 (10 normal, 21 with cortical infarction). Data format: MAT (MATLAB data format). Year of release: 2020. Data type: stroke lesion research. Quality: unknown. Accessibility score: unknown. Data processing: raw. Additional modalities: sound-stimuli response data. Link: zenodo.org/record/4108215.

In this table row: the dataset name and description provide an overview of the dataset's purpose and the nature of the subjects involved; number of subjects gives the total and the breakdown by condition; data format is specified as MAT; quality and accessibility are marked as unknown due to lack of specific details; data processing indicates the data is raw; and additional modalities notes the inclusion of sound-stimuli response data. You can insert this code block into your HTML table, replacing the placeholder text with actual information from the dataset's web page. This entry will provide a clear and informative overview for anyone interested in this particular dataset."

Yeah, right. I'm not sure where it's getting the "raw" from; the description doesn't actually mention that. Now, yes, an important thing we need to mention: who the subjects are, human or not. So I assume in this case it's 31 rats. It does mention that the files are MAT. Yeah, that format: MAT.
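Rather than hand-writing each table row like the one above, rows can be generated. A sketch in Python (the entry dict and the to_row helper are mine for illustration, not the site's actual code):

```python
from html import escape

# One dataset's fields, as discussed above; column order matches our table.
entry = {
    "name": "Quantification of stroke lesion volume using epidural EEG",
    "subjects": "31 (10 normal, 21 with cortical infarction)",
    "format": "MAT",
    "year": "2020",
    "processing": "raw",
}

def to_row(entry):
    """Render one dataset entry as an HTML table row, escaping the text."""
    cells = "".join(f"<td>{escape(v)}</td>" for v in entry.values())
    return f"<tr>{cells}</tr>"

print(to_row(entry))
```

Keeping the entries as data and generating the HTML would also make it trivial to re-sort or re-column the comparison table later.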
That's correct. Now, the subject type, the species, should be mentioned too. Looking at the other datasets: what are they? This one actually specifies "human". "What, the EEG bus?" An actual bus? Something was lost in translation. Looks like a bus, an actual bus, right? That's interesting: there's a bus with EEG equipment driving around Europe somewhere, collecting data. And I assume those subjects are human.

I'll continue this next time. Let me know if you check out the website, or have any questions, suggestions, or complaints; that would be of great help. And I'll see you next time. Bye!