I want to talk to you today about the diagnosis of noise-induced hearing loss sustained during military service using deep neural networks. Some background to this: people who have noise-induced hearing loss often try to claim compensation from their employer, and for such a claim to succeed they need a positive diagnosis of noise-induced hearing loss. All current diagnostic methods involve rules for identifying features in the audiogram that are typically associated with noise-induced hearing loss. One example is a notch or bulge in the audiogram centred near four kilohertz. Another is hearing loss at high frequencies greater than would be expected from age alone. Usually each ear is considered separately, and an individual is diagnosed as having noise-induced hearing loss if the diagnosis is positive for either or both ears. We have developed an alternative approach based on the use of multi-layer perceptrons, or MLPs, using the age and the audiograms for both ears as the input features. To train the multi-layer perceptrons, we used two databases. One, called MilDB-1, contained the audiograms and ages of former military personnel who were claiming compensation for noise-induced hearing loss. All of these people had approached a legal company to help with their claim, and a medical report, including the audiogram, was available for each of them. It was assumed that most of these men had noise-induced hearing loss sustained during military service, which I denote MNIHL. Then we had a database called CtlDB-1, which contained the audiograms and ages of men without known exposure to intense noise, but matched in other characteristics to the men in MilDB-1. It was assumed that these men did not have any form of noise-induced hearing loss.
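As a rough illustration of the kind of audiogram rule described above, here is a minimal sketch of notch detection. The frequencies, the 10-dB depth criterion, and the example audiogram are illustrative assumptions, not the published MNIHL criteria:

```python
# Illustrative sketch of a notch-based audiogram rule.
# The depth criterion and example values are made-up placeholders,
# not the published diagnostic criteria.

AUDIOGRAM_FREQS_KHZ = [0.5, 1, 2, 3, 4, 6, 8]

def has_notch(htl_db, notch_freqs_khz=(3, 4, 6), min_depth_db=10):
    """Return True if the audiogram (dict: freq in kHz -> hearing
    threshold level in dB) shows a notch: a threshold at 3, 4 or
    6 kHz that is at least `min_depth_db` worse than the best
    threshold on both the lower- and higher-frequency sides."""
    for f in notch_freqs_khz:
        lower = [htl_db[g] for g in AUDIOGRAM_FREQS_KHZ if g < f]
        higher = [htl_db[g] for g in AUDIOGRAM_FREQS_KHZ if g > f]
        if lower and higher:
            if (htl_db[f] - min(lower) >= min_depth_db and
                    htl_db[f] - min(higher) >= min_depth_db):
                return True
    return False

# Example audiogram with a clear notch centred at 4 kHz
audiogram = {0.5: 10, 1: 10, 2: 15, 3: 25, 4: 45, 6: 30, 8: 20}
print(has_notch(audiogram))  # True for this example
```

Real rule-based methods also compare the measured thresholds against age-associated values, which this sketch omits.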
The multi-layer perceptrons were trained to use the audiograms and ages in the databases to categorize each individual as belonging to the noise-exposed group or the non-exposed group. If an individual was categorized as belonging to the MilDB-1 group, that was regarded as a positive diagnosis. Once training was complete, we evaluated the multi-layer perceptrons using two very similar databases, but based on different individuals. Originally, MilDB-1 and MilDB-2 were a single large database that was randomly split to give the two, and similarly for the control database. This shows the audiograms for each group, and you can see that on average the exposed groups, shown by the circles and squares, had greater hearing loss than the non-exposed groups, as you would expect. For the exposed group, the hearing loss was somewhat greater on average for the left ear than for the right ear; that is quite common among military personnel and is thought to be due to differences in the noise exposure of the left and right ears. For comparison, we used two methods that were specifically designed for the diagnosis of noise-induced hearing loss sustained during military service: one, which I denote the MNIHL method, published in 2020, and a revised version of it published a couple of years later. Both methods depend on identifying either or both of a notch or bulge in the audiogram at three, four or six kilohertz, and hearing loss at high frequencies greater than expected from age alone; the age-associated hearing loss values, which I denote AAHL, are taken from the latest ISO standard, ISO 7029. As I'll show you later, the original 2020 method had high sensitivity but poor specificity, while the revised method had slightly lower sensitivity and moderately good specificity.
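The training setup described above is a standard binary-classification problem. Here is a minimal sketch of that kind of setup, not the authors' actual pipeline: the data are randomly generated stand-ins for the two databases, and the scikit-learn configuration is an assumption.

```python
# Sketch of training a small MLP to classify exposed vs. non-exposed
# individuals from age plus audiogram features. The data below are
# synthetic placeholders, not the real military/control databases.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Features: age plus hearing threshold levels (dB) at six
# frequencies for each ear; the "exposed" half gets extra
# mid-to-high-frequency loss as a crude notch.
age = rng.uniform(40, 80, n)
htl = rng.normal(20, 8, (n, 12))       # 6 frequencies x 2 ears
labels = np.repeat([0, 1], n // 2)     # 0 = control, 1 = exposed
htl[labels == 1, 3:6] += 25            # simulated 3-6 kHz loss
X = np.column_stack([age, htl])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

# One hidden layer with two units, the simplest network described
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(2,), max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

A positive output for the synthetic "exposed" class plays the role of a positive diagnosis; in the real study the held-out test databases, not a random split of the training data, were used for evaluation.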
So we trained a number of different MLPs with different architectures and numbers of input features, with between one and five hidden layers and between two and five hidden units. I won't go into all the technical details, but we found that the performance of the networks did not vary significantly with their complexity, so we chose the simplest version, an MLP with one hidden layer and two hidden units. The highest validation accuracy, still using the DB-1 datasets, was obtained for an MLP with 18 input features, which we denote MLP-18, and I'll be focusing on that from now on. It used the hearing threshold level for each ear at frequencies from one to eight kilohertz, plus the age-associated hearing loss values, which are the same for both ears. This shows the structure of MLP-18. You can see the input units here, the two hidden units, and then the output unit. The numbers denote the weights for the connections between a given input unit and a hidden unit, and between the hidden units and the output unit. The lines without a circle represent so-called bias weights: constants that are added to the activations of the hidden units or the output unit. This summarizes the results for the different methods. As I said before, the original 2020 method had very high sensitivity but rather poor specificity, and the overall index of performance, d-prime, was pretty good but not great. The revised method had better specificity but slightly lower sensitivity, and a slightly higher d-prime. But you can see that the multi-layer perceptron achieved sensitivity nearly as high as for the original method, with much better specificity and much better overall performance as measured by d-prime. So I can conclude that the best-performing multi-layer perceptron was one trained to identify whether or not an individual had military noise-induced hearing loss based on the age and the audiograms for both ears considered together.
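For readers unfamiliar with d-prime: in the standard signal-detection definition, which I assume is the one used here, d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal cumulative distribution, the hit rate is the sensitivity, and the false-alarm rate is one minus the specificity. A minimal sketch, using the MLP-18 test figures quoted in the talk:

```python
# d-prime from sensitivity and specificity, using the standard
# signal-detection definition: d' = z(hits) - z(false alarms).
from statistics import NormalDist

def d_prime(sensitivity, specificity):
    z = NormalDist().inv_cdf          # inverse standard normal CDF
    hit_rate = sensitivity
    false_alarm_rate = 1.0 - specificity
    return z(hit_rate) - z(false_alarm_rate)

# Sensitivity 0.986 and specificity 0.902, as reported for MLP-18
print(f"d' = {d_prime(0.986, 0.902):.2f}")
```

Higher d' means better separation of the exposed and non-exposed groups, and unlike raw accuracy it is not distorted by a bias toward positive or negative diagnoses.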
For the test databases that were not used for training, this achieved a sensitivity of 0.986 and a specificity of 0.902, giving an overall accuracy much better than that obtained with the earlier methods. So we recommend this MLP method for future use. Thank you.