Hi everyone, my name is Eva. MindAffect is a BCI AI company spun out of the Donders Institute at Radboud University in the Netherlands. We are developing an application of our technology to create brain-response-based hearing tests. Within 10 minutes, our test produces a fully objective diagnostic result for difficult-to-test patient populations, such as children, the elderly, and people with disabilities.

Take a look at how the test is performed. For development purposes, 32 water-based EEG channels were used; however, we plan on reducing the number of channels down to 8. The cap is placed on the participant, who sits in a chair facing a screen. Headphones are then placed over the cap for the auditory stimulation, which consists of broadband frequency chirps at 500, 1000, 2000, and 4000 Hz, at volumes ranging from minus 10 dB up to 70 dB. The participant watches a video while the audio is played. This is what that looks like.

The pseudo-random stimulus sequence, in combination with our patented algorithm, allows us to perform a much faster test than traditional ERP studies. The output of our model shows the spatial and temporal response across all tone frequencies and volume levels. Most importantly, the output weights of the model show the estimated response amplitude per tone frequency at each volume level. The first increase in amplitude signifies that the participant can hear that tone-volume combination, and this is where the EEG-estimated threshold is placed.

To validate these thresholds, we also obtained behaviorally measured thresholds using the standard Hughson-Westlake procedure. Based on these thresholds, an audiogram is created. The first validation study was performed in collaboration with the Hörzentrum Oldenburg. Pure-tone audiometry (PTA) and EEG-estimated thresholds were obtained from a mixed group of 25 hearing-impaired and normal-hearing subjects.
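The threshold-placement rule described above can be sketched roughly as follows. This is a minimal illustration, not the company's actual patented algorithm: the amplitude matrix, the baseline noise criterion (mean plus two standard deviations of the quietest levels), and all names are assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch: place an EEG-estimated hearing threshold per tone
# frequency at the first volume level whose estimated response amplitude
# rises clearly above the sub-threshold (baseline) levels.
# Frequencies and the volume range mirror the test described above;
# the 5 dB step size and the 2-sigma criterion are assumptions.

FREQS_HZ = [500, 1000, 2000, 4000]
VOLUMES_DB = list(range(-10, 75, 5))  # -10 dB up to 70 dB

def estimate_thresholds(amplitudes, volumes_db, n_baseline=3, n_sigma=2.0):
    """amplitudes: shape (n_freqs, n_volumes), the model's estimated
    response amplitude per tone frequency at each volume level.
    Returns one threshold in dB per frequency, or None if no clear
    amplitude increase is found."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    thresholds = []
    for row in amplitudes:
        # Treat the quietest few levels as sub-threshold baseline noise.
        baseline = row[:n_baseline]
        criterion = baseline.mean() + n_sigma * baseline.std()
        above = np.nonzero(row > criterion)[0]
        thresholds.append(volumes_db[above[0]] if above.size else None)
    return thresholds
```

For example, a frequency whose amplitudes stay flat up to 25 dB and then rise from 30 dB onward would get a 30 dB threshold, while a frequency with no amplitude increase at any level would get no threshold at all.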
Outliers were defined as those with an absolute difference between the PTA and EEG-estimated thresholds greater than 10 dB. Thank you, everyone.
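The outlier rule above amounts to a simple element-wise comparison of the two threshold sets. A minimal sketch, assuming the thresholds are stored as parallel arrays (the function and variable names are hypothetical):

```python
import numpy as np

# Illustrative sketch of the validation outlier rule: a threshold pair
# counts as an outlier when the behavioral pure-tone-audiometry (PTA)
# threshold and the EEG-estimated threshold disagree by more than 10 dB.

def find_outliers(pta_db, eeg_db, max_diff_db=10.0):
    """Return a boolean mask marking threshold pairs whose absolute
    difference exceeds max_diff_db decibels."""
    pta = np.asarray(pta_db, dtype=float)
    eeg = np.asarray(eeg_db, dtype=float)
    return np.abs(pta - eeg) > max_diff_db

# Example: thresholds at one frequency for four subjects (made-up values).
mask = find_outliers([20, 35, 50, 10], [25, 50, 45, 10])
# Only the 35-vs-50 pair differs by more than 10 dB.
```

The same mask could then be used to report agreement rates per frequency or per subject across the 25-subject validation group.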