I think there are many, many advances, and we are beginning to go through the process of reviewing them now. For example, analyzing large chunks of data to understand patterns in public health, say a disease trend, through electronic medical record review is one advancement that is already taking a good, positive step. Another is the analysis of large volumes of images: the degree of granular detail a radiologist needs to examine is often more than the radiologist can see, but the computer can see it. In that way, artificial intelligence will allow us to do better than we have been doing in the past, whether it is the analysis of a radiology image, an MRI, a CT scan, or any of those things. There is also precision medicine: the ability to develop new medicines by understanding the human genome, identifying the points in the genome that we need to study, and then devising a particular medicine to help a person who is suffering from a particular genetic problem. These are all the advances.

I think one of the key things is that people have to trust that the AI decision-making process is actually trustworthy. In other words, for an algorithm that gives you a predictive diagnosis ahead of the doctor who makes the final diagnosis, these algorithms need to be reviewed enough that people can accept the outcomes as something they can trust and rely on. The second part, I think, is that developing an AI algorithm itself demands quality data, structured data, and data for benchmarking, and these data are very hard to combine; that's a big challenge. Then the third part is that countries do not have the right policies and frameworks, developed well enough, for AI to be used under a particular framework.
So these policies and frameworks in the countries need to be tweaked around the use of data, especially as it relates to AI. That is something I think we need to work on; that's another challenge. And of course, there is getting the best minds to come together, because there is a real shortage of human resources. We need many people who have been trained to think about AI, AI data analytics, big data analytics, machine learning, and all of those areas of expertise, alongside the medical and diagnostic expertise that we need. They all need to come together in a group, and this focus group is a good example of that.

What's the strategy behind this? If you say an algorithm is giving a reliable result, say a cat is identified as a cat or a dog is identified as a dog in a simplistic sense, and the machine can identify that with absolute 100% certainty, that is an example of an algorithm actually learning to identify. Now, think of the analogy with the many conditions a person may have, which the doctor weighs as multivariate inputs to diagnose that you have malaria, or TB, or some other disease. That diagnostic process has to be emulated in the AI algorithm, for which the AI needs to learn different ways of using the data to quantify and arrive at a particular decision. So, to me, AI has a long way to go on solving certain problems. In other words, something like routing traffic and avoiding an accident may be a simple problem for AI, but a complex problem such as diagnosing a person's ailment or physical problem is quite a challenge even for physicians. So the AI algorithms have a way to go.
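The multivariate-diagnosis analogy above can be sketched in a few lines. This is a minimal illustration, not any real diagnostic model: the features, thresholds, and disease centroids are all hypothetical, and a nearest-centroid rule stands in for whatever learned decision an actual algorithm would use.

```python
# Hypothetical sketch: framing diagnosis as multivariate classification.
# Each "centroid" is an invented mean feature vector for a condition:
# (fever in degrees C, cough duration in days, night sweats 0/1).
CENTROIDS = {
    "malaria": (39.5, 2.0, 0.0),
    "tb":      (38.0, 21.0, 1.0),
    "healthy": (36.8, 0.0, 0.0),
}

def diagnose(features):
    """Return the condition whose centroid is closest to the feature vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda label: sq_dist(features, CENTROIDS[label]))
```

The point of the sketch is only that the decision depends on all the variables at once, which is what the speaker means by the algorithm having to emulate the doctor's multivariate reasoning.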
And I think this particular focus group is, in a way, identifying certain topical areas for which we know the benchmark data are correct, against which we are going to validate these algorithms to see how they perform, and we are going to rate those algorithms on that performance. Because we know the quote-unquote undisclosed data sets are actually good data, using that data you can get a fully reliable result, but not all the algorithms can pass that test, and we need to run that process here. That is one of the key things we are trying to do, and we have lots of experts here who will be able to identify and expand this particular conversation.

Ultimately, I think it comes down to the level of confidence that ministries of health need to have in order to approve certain devices that have an AI algorithm embedded in them, which the caregiving environment, doctors, physicians, and clinicians, will use, these AI-embedded algorithms and diagnostic devices, as a 50% solution to their analysis, if you will. That confidence has to be brought to bear. It means this particular focus group needs to create a mechanism for benchmarking, identifying, and ranking the reliability of these algorithms with respect to what each will do in situation A, B, C, or D.
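The validate-and-rank process described above can be sketched as a small scoring loop. Everything here is illustrative: the benchmark pairs and the two candidate algorithms are invented stand-ins for a real undisclosed test set and real submissions, and plain accuracy stands in for whatever metric a focus group would actually adopt.

```python
# Hypothetical sketch: scoring candidate algorithms against a held-out
# (undisclosed) benchmark and ranking them by performance.

def accuracy(algorithm, benchmark):
    """Fraction of benchmark cases the algorithm labels correctly."""
    correct = sum(1 for case, truth in benchmark if algorithm(case) == truth)
    return correct / len(benchmark)

def rank(algorithms, benchmark):
    """Return (name, accuracy) pairs, best performer first."""
    scores = {name: accuracy(fn, benchmark) for name, fn in algorithms.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented benchmark: (input, correct label) pairs kept from submitters.
BENCHMARK = [(0, "a"), (1, "b"), (2, "a"), (3, "b")]

# Two toy "submissions" to be rated.
ALGORITHMS = {
    "always_a": lambda x: "a",
    "parity":   lambda x: "b" if x % 2 else "a",
}
```

Keeping the benchmark undisclosed is what makes the resulting ranking trustworthy: an algorithm cannot simply memorize the test cases, which is the confidence-building role the speaker assigns to the focus group.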