I'm straight into it. Yes, I was counting on the eye-tracking tool, so I have it running in local debugging mode, and I also have it using my camera, which faces this way, so no, I can't point at it. And it's the same camera. Wait, why do I have two cameras running? That's not a good sign; it could mean something is wrong. We might try eye-tracking with two cameras at some stage; at the moment there's a problem on mobile that I could not actually reproduce. So if I go into mobile view, this is the resolution, the dimensions of the screen on my mobile device, but it doesn't behave the same way on my actual phone, and the video is... well, it's loading as a regular website, not as a mobile website. And I think the problem might be the width of this video container in the styles. Yeah, this might be the issue, the flex wrap, I'm pretty sure. I don't know anything about CSS; that's what I have the bot for. There's also the fact that I cannot replicate it. Wait, are we loading the CSS correctly? We have one CSS file for the entire project, so the whole website looks more or less the same, and then another one for the specific application, and I wonder if they sometimes clash. In the actual application we should not... where is it... have a video container. Oh, wait, this session is meant to be interactive, so if you want to pop anything in the chat, I'll be more than happy to respond. Except questions like "where are you from" and stuff like that, which for some reason some platforms are encouraging. I don't blame them; it's hard to get engagement. Let's check the CSS... yes, this is the CSS for the whole project, probably minified. Anyway, the main problem is that I can't replicate the thing in my browser; in the browser, it's working.
Okay, on my phone the overlay... it could do a split view, but that's fine, that's not what's happening. What's happening is that the video footage is not starting at zero, and then the overlay is offset. Anyway, we'll see. We did change the code a bit, so hopefully it just magically gets fixed. Make sure the position is absolute for both the video and the tracking canvas; the canvas is meant to be the overlay bit on top. You can change the width and the height, but the main thing is the ROI, which actually makes a lot of difference, because the tracker is essentially finding the darkest point in the region of interest. Yes, it works better if you're looking directly at the camera, and it works better on one phone than the other. The light in the room makes a lot of difference as well: add more light and you can make the ROI smaller. If she's not looking directly at the camera, that could be a problem too. Anyway, play around with it and let me know what you think. It's on the main page, and yes, all the tools are being optimized for desktop, not mobile, at the moment, so it's work in progress; if anyone knows how to optimize for mobile, let me know as well. Last time, so, we added... yes, we have a bunch of blogs now. It would be good to have this plot actually display real data. I do have the basic Python code, but it's only generating examples, just random values. We will eventually apply it to real EEG or ECG. Let me know what you'd prefer today: more EEG or more ECG stuff. Right, another thing we were doing yesterday: this About section gained a few more pages. There's a potential-student-projects page: if you know any students, or you are a student yourself and want to do a project, maybe send this to your supervisor. One project is mainly around doing XAI, explainable AI, with biomedical data.
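Since the session is interactive anyway, here is the darkest-point idea in a few lines. This is not the deployed tracker code, just a hypothetical sketch of why ROI size and lighting matter: the function name and the plain-list grayscale "frame" are my own illustrative choices.

```python
def darkest_point(frame, roi):
    """Return (x, y) of the darkest pixel inside the region of interest.

    frame: 2-D list of grayscale values (row-major), roi: (x, y, w, h).
    A crude stand-in for pupil detection: the pupil is usually the
    darkest blob, so a smaller ROI and more room light both reduce the
    chance of locking onto eyebrows or shadows instead.
    """
    x0, y0, w, h = roi
    best = None  # (value, x, y) of darkest pixel seen so far
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            v = frame[y][x]
            if best is None or v < best[0]:
                best = (v, x, y)
    return best[1], best[2]
```

Shrinking the ROI around the eye is exactly what excludes competing dark regions, which is why it "actually makes a lot of difference".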
No black boxes; a big no to black boxes: explain everything and know what's inside the box. Another project is around visualization, which, as you can probably tell, I'm highly interested in. We'll start building out this resources page; we actually updated this one, so let's do some more work on it. The work was mostly putting it into a table; the file is called resources.html and I'm running it locally, it's not published yet. It's this table that we're still populating; a lot of the items are still "unknown", even though it should be pretty straightforward to fill in the year of release. This is a dataset comparison table. Number of subjects, which is of course quite important. Some of the subjects are not human, and that's important to mention, because if they're not human they probably carry a model of some condition, which will require a completely different type of analysis. Then we have the data format, and yes, the .mat files can be opened in Python; I don't use MATLAB anymore because it's too expensive. This one, I don't know why it's marked unknown. EDF is a standard format for EEG. I don't know what these ones are; if you know what they are, let me know. One big thing, and it's quite important to know, is whether the data is raw, processed, pre-processed, and so on: heavily processed or lightly processed. And whether any additional modalities were recorded. We know quite a bit about this NeuroVista implant data; most of the EEG tools on the page are built from that data. The ECG, most of that is actually synthetic. EEG like this one looks real because there's noise added to it, but it's synthetic data. Well, no, maybe it doesn't look that real. All the EEG data on the website is this NeuroVista data, which is publicly available on iEEG.org, so it's in our website's old repos; we have this bit somewhere.
Yes, search platforms. iEEG.org is essentially a search platform. Is it a repo? No, it's a platform, so it goes under dataset platforms, and iEEG.org would be one of those; it should go there. It does require a basic login, and that's part of how we're ranking the datasets. Okay, I have a dataset; okay, I have a study; okay, you have a device. It gets highly convoluted, doesn't it, because we also have devices: you have a recording device, say for EEG; that device could have been used for a specific study that had a certain number of patients; the patients, for example, were in different groups, healthy controls and, say, epilepsy patients; then the same device could be used in multiple studies; and then the data collected from one study can end up published in multiple repositories. So we have to untangle all of this. How do we make sense of it? Well, we pop it into GPT-4; that's what we normally do, along with the actual HTML that we have. Sometimes it refuses to actually look at the links. Interesting, it didn't start from the bottom this time. Yeah, we're showing ads on this page; should be okay, hopefully they're relevant ads. We have an overview; I need to check how it looks on smaller screens. Who knows. Yeah, it doesn't look good. How do you show tables on small screens? Do we ask people to rotate the phone, or deal with it in the layout? Right, so some of these links, this one say, go to the specific dataset description and so forth; it's in the picture. This link, however, just goes to the website and not to the page with the actual data, because the platform requires a login. That's the problem with websites that require a login, and that's why there is no login on my side, so the site looks the same for everyone. Logins just add additional complexity, like
I'm saying: because it's a link to something that requires a login, it will just go to the general website, not to where the database is actually stored. So it should go under "platform". Oh yes, if you go to Google Scholar, this dataset was used more than 300 times, 400 times even. Yeah, that looks like quite a good paper; I remember reading it when it came out. The whole story behind launching a venture to treat epilepsy, it's quite a roller coaster, this one. The data itself is from 2014 or so. It's funny how when you sort by date you only get four results, which is really odd, and sorting by relevance gives four hundred forty-two. Some of this data is on Kaggle as well. I'm jumping around, but hopefully the jumping around still makes some sort of sense. Wait, are we really running something right now, running the eye tracker? Well, we don't have to do that. The one I was demonstrating is on the website, so it's deployed; it's a Flask application, right, this one. Right, so we have this dataset; it's currently on Kaggle, and the competition is finished. There you go, let's go straight to the leaderboard: you have your public score, 0.21 versus 0.27, quite a big difference. This team published their solution, and different members did different things; they just took all the models together. So one person was doing a 2D model; then why did they need a frequency
transform? They all used raw EEG; interestingly, they didn't use the spectrogram images. That's interesting, because the data... yeah, I covered this before, go look at my previous videos. You have the raw EEG there, and the resolution of that is not great, and then a spectrogram, essentially, for each recording, for each quarter of the brain. The leaderboard is interesting too: the team that got second place on the public score went down a place with the private dataset. "The private leaderboard is calculated with approximately 65% of the test data. This competition is complete; the private leaderboard reflects the final standings." So this 65 percent was not available to the competitors, which is kind of important to note; that's why it's called private. Anyway, interestingly, the winning team only looked at... I don't know what they mean by "2D model"; maybe they published that separately. No, they did take the spectrogram images, stacked spectrogram images. Still the same data. Anyway, the point is that we will try to deploy, you know, train a fuzzy-logic, explainable-AI thing on the same data and then compare it to one of those models, obviously one of the best, the first-place or second-place one. Don't worry about the actual HTML; we want to focus on the table. We want to keep placing all the mentioned, and potentially new, datasets into the table, and yes, ideally I would like it to display okay on a mobile phone as well, but if it's too hard, don't worry about it; tables are hard to get right on small screens.
"Especially with mobile responsiveness in mind, here are some technical suggestions. One, responsive design: making tables responsive on mobile devices can be tricky due to their row-and-column structure. One common approach is to reformat the table into a more mobile-friendly structure using CSS and sometimes JavaScript. Here's a basic CSS-only technique: @media screen and (max-width: 600px) { table { width: 100%; display: block; } }." Why is this bot reading out the whole text, including the code? I should have another bot. "And in your HTML, modify each <td> to have a data-label attribute." That doesn't sound that good. Then it can read out just about anything you want, really. "Data-driven approach: if the dataset is expected to grow or change frequently, consider loading the table data dynamically from a server or a JSON file; this will make maintenance easier. Pagination or load more: if the table gets too long, think about implementing pagination or load-more functionality to avoid overwhelming mobile users with too much information at once. Interactive elements: for mobile users, ensure interactive elements like links are easily tappable; good spacing and larger font sizes help. Testing: regularly test the table on various devices to ensure compatibility and responsiveness; tools like Chrome DevTools can simulate mobile devices for this purpose. Accessibility: keep accessibility in mind; screen readers interpret tables well, but make sure that any dynamic or responsive features also maintain this accessibility. Remember, CSS-based solutions are often more maintainable and less intrusive than JavaScript-based ones, especially for something like responsive design. The CSS method above keeps your markup clean and relies on CSS for the heavy lifting, which is generally good practice for responsive design." Right, not overwhelming our mobile users sounds like a great idea. How do we move the table into a JSON file and parse it correctly on the page? Yes, "load more" for mobile users would be great as
well. Yep. Yep, yep, yep. Yep. "Rather complicated as it seem to become"... "rather seem to become complicated"... come on: it seems to have become rather complicated. Hey, can you generate the whole JSON file? And shall we turn it into a Flask application, since we're already deploying some on our server? I mean, surely this does not need a back end. This is the HTML code that I have so far. "Turning your HTML table into a dynamic, JSON-driven Flask application can make it more manageable and responsive, especially if the dataset is large or frequently updated. Here's a step-by-step guide. Step one, create the JSON file: you need to convert your table data into JSON format; this can be done manually or with a script, but for this example I'll manually create a JSON structure based on your table." So we're going for Flask, are we? "Create your Flask app: in your project directory, create a new Python file, e.g. app.py, and set up your Flask application. Link to a JavaScript file in your HTML: add a link to a new JavaScript file that will handle fetching and displaying the data. Create a div for the table: in your HTML, replace the existing table with an empty div where the table will be dynamically inserted." Sounds like we're going for a full application. "Run your Flask application: ensure that your JSON file and HTML template are in the correct directories, templates for HTML and the root, or a specified directory, for JSON. Customize the JavaScript function to format the table as per your design requirements. You can expand this setup with pagination or a load-more button for better performance on mobile devices." Okay, the question is: with this back end, will it be fetching the JSON file from the server every time?
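The steps the assistant describes can be sketched as a minimal Flask app. This is an assumed layout, not our actual deployment: the file name datasets.json, the /datasets route, and resources.html are all illustrative.

```python
# Minimal sketch of the JSON-driven table setup described above.
# Assumes a datasets.json file next to app.py; all names are illustrative.
import json
from flask import Flask, jsonify, render_template

app = Flask(__name__)

@app.route("/datasets")
def get_datasets():
    # Simplest version: read the table data from disk on every request.
    with open("datasets.json", encoding="utf-8") as f:
        return jsonify(json.load(f))

@app.route("/")
def index():
    # resources.html would contain an empty <div> that a small fetch()
    # script fills in with rows retrieved from /datasets.
    return render_template("resources.html")
```

A "load more" button then becomes a front-end concern: fetch the JSON once and reveal it in slices, or add offset/limit query parameters to the route.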
Is the option of using a CDN, a content delivery network, still available? Yep. "When you use Flask as your back end to serve a JSON file, by default the server will read the JSON file from disk every time a request is made to the endpoint, like get_datasets in the example. This can be inefficient, especially if the file is large or the requests are frequent. You can read the JSON file once when the server starts and store its content in memory; this way the server serves the data from memory for each request, which is much faster." That's how my server works. Although I don't think it will actually be faster if I keep it in memory all the time. "If you have static content like JSON files that doesn't change often, using a CDN can be highly beneficial. A CDN caches your content in multiple geographic locations close to users, reducing load time. Using a CDN for static files: you can host your JSON file on a CDN; whenever there's a request for the JSON data, it will be served by the nearest CDN server instead of your Flask server. Flask plus CDN: in your Flask app, you would provide a URL to the CDN-hosted JSON file instead of serving it directly from your server. In-memory caching: while faster, this approach increases your server's memory usage; it's best for files that are not too large and don't change often. CDN: this is more scalable and can significantly reduce server load and response times for static content; however, there might be costs involved depending on the CDN service you use. Both methods have their advantages and can be used in conjunction to improve the performance and scalability of your application. Choose based on your specific use case, considering factors like data size, update frequency, and user geography." But if we continue with having everything, having the table, in HTML format, would it by default be using a CDN? "Hosting static files: a CDN is primarily used for hosting static files like CSS, JavaScript, images, and even HTML pages. Your HTML file with
the table can be hosted on a CDN. Faster content delivery: the CDN will cache this HTML file in multiple locations globally; when a user accesses your page, the CDN delivers the content from the nearest server location, ensuring faster load times. Updating content: if you frequently update the HTML file, you need to update the file on the CDN; the CDN may have a cache invalidation period during which the old file may still be served to users. Dynamic data handling: if your table data is dynamic and changes often, you might want to consider using Ajax to load the data into the table; in this case your HTML file, with the static part of your website, can still be hosted on a CDN, while the data is fetched from your server or another API endpoint. Setting up a CDN: you typically choose a CDN provider like Cloudflare, AWS CloudFront, etc., upload your static files there, and then update your website's DNS settings to serve files from the CDN. Performance: faster loading times due to geographically distributed servers. Scalability: better handling of high traffic loads. Reliability: CDNs often have high uptime and can handle server failures more gracefully. Bandwidth reduction: offloads traffic from your primary server. Not ideal for rapidly changing data: CDNs are great for static content, but not ideal if your table data changes frequently, unless you employ Ajax to load the data dynamically. Complexity and cost: setting up and maintaining a CDN might add complexity and potential costs, depending on the service provider. In summary, using a CDN for hosting your static HTML file, including the table structure, is good practice for performance and scalability; if the table data is dynamic, consider using Ajax to fetch the data from your server and populate the table while the rest of the static content is served via the CDN." Dataset, dataset platforms: figshare uploaded pretty well, plus some websites from the Netherlands, say 4TU.ResearchData. And we have... well, figshare should work,
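The two serving options the assistant compares, read-once in-memory caching and CDN-friendly cache headers, can be combined in one small sketch. Again, the file name, route, and max-age value are assumptions for illustration, not our real setup.

```python
# Sketch of the two options discussed: keep the parsed JSON in memory
# instead of re-reading it per request, and set Cache-Control so any
# cache in front of the server (CDN, proxy, browser) can reuse the
# response. datasets.json and the route name are illustrative.
import json
from flask import Flask, jsonify

app = Flask(__name__)
_CACHE = None  # loaded from disk on first request, then kept in memory

def load_datasets():
    global _CACHE
    if _CACHE is None:
        with open("datasets.json", encoding="utf-8") as f:
            _CACHE = json.load(f)
    return _CACHE

@app.route("/datasets")
def get_datasets():
    resp = jsonify(load_datasets())
    # Allow shared caches to keep this response for a day; pick the
    # max-age to match how often the table actually changes.
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp
```

The header is what makes the CDN option work without changing the app: the CDN simply honors Cache-Control, so most requests never reach Flask at all.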
we already tested this thing before. Or maybe not; it's just a bit slow to load. And those are pre-loaded with "EEG" in the search. This one is nodal electricity demand in Europe, and that's the best match? Let's try "electroencephalography". Right, this is better: flower electrodes. It's pretty recent; the second one is from 2017. Given that this... okay, that works. Try Ctrl+F5, not just F5. Another one seems to show something useful. By the way, if you haven't checked out binarykills.com, please do so; you will be supporting this project that way. A PhysioNet "page not found", what? Can I just do it from the URL? I go to the PhysioNet data page... it can't seem to load the datasets. Anyway, those I will check later; it's working okay now. PhysioNet, EEG. Right, then we have the EEG portal, which, yeah, requires a login, and Google Dataset Search. Right now the main one is probably the NeuroVista implant data; yeah, I've already said it like five times, ten times, but because it requires a login I cannot give the actual link to the dataset; the link goes to the platform. Maybe I can... yeah, it's a bit of a broken website. It's just loading forever, is it?
Yeah, it's hard to search. "General non-epileptic"... it was an "epileptic" experiment search. Nine items: subject name and day. No, that would be... no, that's the study number; that must be subject, that must be day: one, two, three, four, six, seven, eight, nine. But it's long-term monitoring, so there's actually a lot of data. And this doesn't have a URL, which is a problem; it's a problem with how this website works. I think it was designed quite some time ago. "Error on the server", that's not good. Yeah, you can even view the data from here, but I would not attempt it; I think it will not work. It's safe to assume it is from one... no: one, two, three, four, five, six, seven, eight. Eight dogs, you know, maybe. It is stable, but it's impossible to work with. So, our list: eight dogs, and that was 35 rats, ten normal, 25 with cortical infarction. Do I want to know how that model is actually achieved? Probably not, probably not. What else have we got? That's no good either. Check this one against the original; we need to replace it. Let's find it in the HTML, in the assets. Let me just format this document; the way it was is a bit weird. It's the second one, is it?
And the link doesn't work anymore; that's the problem with some of these datasets, some of these repositories: the datasets just might disappear. Just disappear. So I'm thinking, shall we keep a copy on our server? Considering they were in the public domain with a Creative Commons license, we should be able to keep a copy, mostly of the human data. And let's go to PhysioNet, straight to the data. Why doesn't it say how many there are? It's possible that there are only three. Yeah, one of them is open access. This one has quite a lot of tasks: music, perfume, coffee for enhancing cognitive states; it includes subjects' responses, reaction times, and physiological data: EDA, HR, PPG, temperature, accelerometer, EEG. And this one was published at the end of last year, so like a couple of months ago, and it's open access. It's interesting, but the one we were looking for has just disappeared. That's not good. I'm pretty sure it's been used quite extensively; yeah, there are almost 3,000 publications that mention it, so it's not going anywhere; I'm sure there will be other repositories that have it. Interestingly, in Google Scholar, when you sort by relevance it gives you 2,600 results; sort by date, and it gives you 144. One of them is from three days ago, but we don't want that, and on the older results it's not very useful. What happened to the PhysioNet link? "It's freely available on physionet.org." Is there a link? There is. Yep, good luck. Actually, I think I might have a copy of it locally somewhere. Should we start another repo? It's only 2.7 GB. How did I manage to compress 2.7 GB into 500 MB? That's interesting. I assume those are subject IDs. The data is in CSV files: start time in Unix time (I love Unix time), EDA, sampling rate. The sampling rate could be just a single variable; you don't need it with every entry. The sampling time, again: you could have just had the delta t, essentially your sampling interval, instead of storing the actual numbers, which do not change. Why don't they change? It's a bit odd, isn't it?
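The redundancy I'm complaining about is easy to show: with one start time and one sampling rate you can reconstruct every timestamp, so a per-sample time column that never varies carries no information. The numbers below are made up for illustration.

```python
# With a fixed sampling rate, every per-sample timestamp is derivable
# from the Unix start time alone, so storing one per row is redundant.
def sample_times(start_unix: float, sampling_rate_hz: float, n_samples: int):
    """Reconstruct per-sample Unix timestamps from a single start time."""
    dt = 1.0 / sampling_rate_hz  # the "delta t" between samples
    return [start_unix + i * dt for i in range(n_samples)]

# Hypothetical values: a 4 Hz channel starting at some Unix time.
times = sample_times(1700000000.0, 4.0, 4)
```

The flip side, which comes up in a moment: if the stored timestamps do vary from sample to sample, that itself is information, because it suggests drift or inconsistent sampling.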
So, looking for red flags. The EDA actually changes, that's good; it's 0.8, 0.9, somewhere around there, quite variable. Starting at 0 is a bit odd; probably the first sample can be ignored. Then the EEG: right, only 4 electrodes. The timestamps are actually changing, which looks like a good sign, but it might mean there is inconsistent sampling, a drift in the times between samples, and that would be why this is not just a single variable. Then we have 4 electrodes, which might not be enough for most tasks. I don't know what "state" is. State: sound, auditory. So it's a response to sound. A problem is that there could be some sort of artifact, especially if there are issues with the sound being played: the EEG electrodes could be picking up the sound itself, especially if it comes through earbuds that have a magnet. But I don't know enough about this setup. It would be interesting to have a look at it, considering it was published recently and it's open, Creative Commons license, so we could look at that. Yeah, it's a shame that some datasets just disappear from the public domain; it's not good. We'll see if we can find it somewhere else. I think we actually might have a copy of the dataset ourselves, because of all these papers that mention it. How many papers mention this exact link? 87. And the dataset is gone; that's not good. We might add this dataset to the mix. Can we ask... we might continue with the HTML format for now. "Can you add the following dataset into the table?" "Sure, let's add the specified dataset to your HTML table. You'll insert a new row into the existing table structure with the details of the dataset. Here's how you can modify the table in your HTML: you can place this <tr> (table row) tag inside your existing <table> tag in the HTML file, following the structure of the other dataset entries. This will add a new row to your dataset table. Remember to replace placeholder text like 'unknown' with the
actual information if you have it. The description I provided is based on your link and might need to be adjusted based on more detailed information about the dataset." Yeah, did you actually look at the link? "Rain verbal monitoring experiment"... why is it "experiment A1"? How do you know the quality? How do you know the quality is high? 2020? No, it's 2023; where did you get 2020 from? Yeah, and we need to add whether it's EEG. I don't think you actually looked at the link. "Great, let's add this specific dataset to your HTML table. I'll create a new table row for the wearable monitoring experiment dataset. You can insert this <tr> into your existing <table> structure in the HTML. The details are based on a general understanding of the dataset from the provided link, but you should adjust the specifics, like the number of subjects, exact data formats, etc., as necessary to match the dataset's actual properties." Let's go back and... use this link instead. "Sure, let's add the new dataset to your HTML table." Yeah, that's more like it. "You will need to append a new row to the table with the details of the dataset. Here's the updated part of your HTML with the new dataset added. Insert this row in the appropriate place in your table. If you have specific information about the number of subjects, data format, year of release, quality, accessibility score, data processing, or additional modalities, you should replace the 'unknown' placeholders with this information. Remember, for maintainability and scalability, consider dynamic loading of this data using a back-end service or JavaScript, as discussed earlier, especially if the dataset is large or frequently updated." Should I do scraping, like automated scraping? Probably not allowed; okay, we'll do it manually. Can we semi-automate the scraping of this website? Can you actually look at the website? I don't think you can. Oh, there's a conflict-of-interest statement: Rose T.
Faghih and Md Rafiul Amin are co-inventors of a patent application filed by the University of Houston related to this research; the rest of the authors have no conflicts of interest to declare that are relevant to the content of this article. I might finish now and continue this, hopefully tomorrow or next time. Let's just check if there are any questions. Facebook: someone is watching on Twitch, no questions. Supposedly nine people; they've probably zoned out. On LinkedIn, I don't think there are questions; sometimes the platform is just quite bad. Anyway, I can't see any. YouTube: I haven't actually monitored YouTube, but no, there's nothing in the chat. So, yeah, I'll see you next time. Thanks for watching, don't forget to check things out, and let me know what you think.