So, it wasn't intentional and I didn't know about it, but sitting through the previous session, this feels like a continuation of it. In the last session, we talked a lot about medical imaging, how we look at X-rays and so on. This is more at the rehabilitation stage, from an outwardly looking perspective. So I'll quickly walk through the problem we have, and then we'll look at how we are addressing it. The rehabilitation problem: there are people who have a stroke, people who have a spinal cord injury, and people who suffer from various other ailments, and when one of these happens, they undergo an operation, they undergo a certain set of treatments, wherein they are literally bedridden. One of the things doctors tell us is that there is a golden period of roughly six weeks in which the body can start rehabilitating and recovering much faster. As an example, say you have a spinal cord injury, you've had an operation, you're in the ICU. Doctors actually believe that on the second day in the ICU you can stand up and start walking. But there are a lot of other complications. So when they tell a therapist, can you rehabilitate this person, can you get them to walk, while taking care of all the medical wires and holding on to them, when they can't bear their own weight and are completely paralyzed from the waist down, trying to get them to stand and move is a big problem all by itself. So as part of rehabilitation, it is really tough to get them to stand and start moving. And this is also the stage wherein the whole body function completely changes.
The more you're lying down, your blood flow is horizontal rather than vertical. In fact, in many instances, the other ailments you pick up, like bed sores and so on, become more problematic than the ailment you originally had. So it becomes a complicated problem to address as long as the patient is lying on the bed. A similar thing: needing assistance for daily activities makes them very dependent on other people. In fact, internally we have this one thing we call the washroom challenge. For somebody who's paralyzed from the waist down, sitting down on a chair is very difficult. Using the washroom is very difficult, because it's a place where you have to orient yourself completely differently. You have to check whether the chair is behind you; you have to align yourself. People who don't have this problem do this naturally, but the moment people have this problem, trying to do something like this is really, really tough. There are two other issues. One, this whole area is being addressed with something called robotic rehabilitation. There are devices in the US, robotic assistants that you strap on, with a body weight support system that ensures it's fall-safe, et cetera. With devices like this, on day one it can help you stand up; on day one it can help you regulate your body movements. However, one big problem is that most of this is not covered under insurance at all. If you have a spinal cord injury, a stroke, et cetera, the actual operations are covered, but the rehabilitation after that is not necessarily covered. And it's not because of intent; it is because they don't know how to measure it.
The biggest challenge they have, and in India it is two steps backwards, is that we don't even have robotic devices. If you go to a center like NIMHANS, after an injury they spend seven to eight months just giving you massages and hoping that you will stand at some point. After that, a whole team of six, seven, eight people holds on to you, gets you to just stand up and try to take a few steps. And at this point, even a wheelchair is not covered under insurance, because they don't know whether you're actually using it or just piling clothes on it. So from the insurance perspective, their position is: we are more than happy to cover a whole ton of this, but our biggest problem is there is no way to measure what is happening, no way to measure any kind of improvement. So, to address this, measuring the human gait is a critical parameter for all these activities. How do we know that somebody is able to walk properly? How do we know that the human gait is okay? If there is some way to measure this, it helps us address a lot of other problems in this domain. Now, in the US, when they go through a rehabilitation process, they have to document everything, right? As part of HIPAA and related rules, they have to document everything. So this is the clinical protocol by which they measure rehabilitation. And this is only for the lower limb, right? We are talking only of waist-down paralysis and waist-down rehabilitation; they have a whole bunch of additional clinical protocols for things above the waist. So in the waist-down protocol, they have a whole set of terms that they measure.
Hip flexion, hip extension, knee flexion, ankle dorsiflexion, et cetera. And they mark it off: is there a moderate impairment? A heavy impairment? A low impairment? Does the person have pain? Now, all of this is subjective, right? It depends on the person who's measuring it. Take two different therapists at two different locations, and the answers are completely different. Even the same person measuring in the morning and in the evening, it's completely subjective. So the challenge is that doctors, while they do document this, literally discard it. After doing a lot of documentation for HIPAA and other rules, they file everything away and go back and tell the patient: do something, let me see what happens, right? So the whole dataset becomes inconclusive; it's something they rarely rely upon and can rarely trust. So the challenge we took up is: how do we take a space like this, wherein we are trying to rehabilitate a person, put objective measures on their whole gait, in terms of how they walk, and numerically encode this back into a clinical protocol? That's the space we are looking at. Now, a few things. The first step is human pose identification. When we look at human pose identification, we have a certain person. They can be at any angle; they can be walking, running, et cetera. We need to identify the various parts of the body. What is this person doing? Are they walking? Are they sitting? Are they in an upside-down pose? We have to identify the human pose. That's the first structure.
Now, the challenge, obviously, is that in many instances it's not just one person. We rarely get the image of one person. We get images of multiple people, and many times they are not spread out; they are all over the place, right? So it becomes difficult to say which knee belongs to whom, which hand belongs to whom. And in many cases it's even more complex, where it's not all in a logical arrangement: you've got hands mixed with knees, knees mixed with faces, all over the place. So one of the first challenges is: how do you identify a human out of this, and from that, how do you extract the pose, and from the pose, how do you extract the gait parameters? Now, the initial thought process, and the initial studies around this, went about trying to identify the person first. Very similar to what was shown last time, the one sheep, two sheep example: you do object identification. You say, let me identify an object. So you would first carve out the object. It's my scribbling, so it's a bad diagram. You carve out the object, and once you've carved it out, within that object you start doing multiple detections: okay, I can identify different parts of the body, right? So this was the first approach, the first thought process. But this approach had a pain point: the moment I look at something like this, I literally can't carve out people easily, right? They're mixed up; I've got hands and legs all over the place. It's really, really tough to extract a single object out of this. So one of the interesting studies that came about was to look at the whole problem differently, right?
Instead of trying to identify a person, it's actually easier to identify individual body parts, right? So you say: ignore the person. All you do is look at the photo and identify all the body parts you can find. In a case like this, we ignore the person and directly detect body parts. You detect all the body parts that are identifiable, so you suddenly get a group of body parts, and then you take all these different body parts and start putting them together. So, a quick demo of how we would go about doing something like this. This is an example where it takes a single-person image, runs through it, and identifies all the body parts. I forgot which image I used; I had one of my daughter and one of Shah Rukh Khan, so one of the two will come up. And it basically says: I can find different parts of the body and mark them out. Now, there are two interesting things it can do. One, it can identify the parts of the body. Okay, that's my daughter, and she had told me specifically not to use that, but anyway. If you look at it, it identifies the pose. It identifies all the different parts of the body, but it also highlights each part. It knows where the forehead is, where the chin is, where the shoulder is. It identifies all these body parts and marks them out. Now, at this point, it actually does not know the person. It does not know if it's one person or many people. So potentially, if you have a crowd, you will have 32 shoulders, 47 knees, 52 ankles, numbers all over the place. The initial algorithm just detects all the various body parts. You had a question? It's actually multiple ones, but this whole thing is open, so I'll show you the link and you can take a look at it.
So, given that we've identified all the various body parts, what we then do is look at the structure slightly differently. It uses a CNN-based body part detector, but once it detects all the body parts, it creates a densely connected graph of the body parts. It says: let me figure out all the knees, let me figure out all the ankles, and let me say knees are connected to hips, hips are connected to shoulders, shoulders are connected to elbows. Let me create a dense graph connecting all of these different parts. Once we create the dense graph, we then use an ILP, an integer linear program, for subset partitioning and body part labeling. We identify all the different body part labels, and using that, we create a subset partitioning of what the people would be. So it's a bottom-up approach: I'll identify all the body parts and connect them together. It uses a bit of extra logic, a pairing logic, which says: if I've identified one knee, let me see if I can find the matching knee. If I find that person's second knee, I can discard the other knee candidates for that person. So it's a bottom-up approach that uses body part identification to identify the person. In a sense, it takes a pose with multiple people. In our case, when we are doing physical therapy, you will have the therapist, the doctor, the patient, and a few friends around them. It takes the photo of all of that and connects all the various body parts. Once it creates the connected graph, it does a joint partitioning and labeling, which says, bottom-up, these body parts belong to this set of people.
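To make the bottom-up grouping idea concrete, here is a toy sketch. The actual systems described above solve an integer linear program over the dense graph of part detections; this sketch substitutes a simple greedy distance-based pairing, and every name and threshold in it is illustrative, not the real algorithm:

```python
# Toy sketch of bottom-up grouping: assign detected body parts to people.
# Real ILP-based partitioning is replaced here by greedy nearest-person
# assignment, purely for illustration.
from math import dist

def group_parts(detections, max_pair_dist=50.0):
    """detections: list of (part_name, x, y, confidence).
    Returns a list of 'people', each a dict part_name -> (x, y)."""
    people = []
    # process high-confidence detections first
    for name, x, y, conf in sorted(detections, key=lambda d: -d[3]):
        placed = False
        for person in people:
            # attach the part to an existing person if that person does not
            # already have this part and one of their parts is nearby
            if name not in person and any(
                dist((x, y), pt) < max_pair_dist for pt in person.values()
            ):
                person[name] = (x, y)
                placed = True
                break
        if not placed:
            # no suitable person: this part starts a new person
            people.append({name: (x, y)})
    return people
```

With two people standing far apart, their hips and knees get grouped into two separate dictionaries, which mirrors the "32 shoulders, 47 knees" pile being resolved into individuals.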
And from that, it detects specific poses and identifies all the coordinates of the poses and all the representations. Now, one additional thing it does: along with detecting all the poses, it also gives you a ranking of how confident it is about each part. So it will not only tell you that this is a forehead, it will tell you this is a forehead with 70% confidence, right? It can tell you this is an ankle, but with only 20% confidence. So it gives you the identification, and it also gives a score around it, right? Now, for us, this is a great start. When we look at rehabilitation, we can now take a photo, identify all the different body parts of the person, and also have a confidence measure of how good each identification is, and using this we can visually identify what is happening to the person. So this now allows us to say: I can identify forehead, chin, shoulder, et cetera. Now, what do we do with this? We run this on video. We take a video, generate a frame-by-frame record, and on each of the frames we run this analysis. So we essentially get the position of the hip on a time scale across the video, right? And when you map it on, it becomes something like this: I now know all the parts of the lower body as part of your gait, and I will track them as a time series across the entire session, right? And we use this to translate it back into what the medical practitioners had as their clinical protocols.
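The frame-by-frame step above can be sketched as follows. Here `frame_keypoints` stands in for the per-frame output of whatever pose detector is in use; the detector itself, the frame rate, and the 0.5 confidence cutoff are assumptions for illustration:

```python
# Turn per-frame pose detections into a time series for one joint.
def joint_time_series(frame_keypoints, joint, fps=30.0):
    """frame_keypoints: list of dicts, one per video frame,
    mapping joint name -> (x, y, confidence).
    Returns a list of (timestamp_seconds, x, y), skipping frames
    where the joint was not detected with enough confidence."""
    series = []
    for i, kps in enumerate(frame_keypoints):
        if joint in kps and kps[joint][2] >= 0.5:
            x, y, _ = kps[joint]
            series.append((i / fps, x, y))  # frame index -> seconds
    return series
```

The low-confidence detections are dropped rather than interpolated here; a real pipeline would likely smooth or fill those gaps.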
So, clinical protocols: how much can you lift your knee? How much can you bend your ankle? How much of a twist can you handle at your hip? All the clinical protocol items we saw, hip flexion, et cetera, can be determined using this set of parameters. So we take all the parameters from a video, extract them, and we are now able to generate the whole set of clinical protocols around it. Right? So, with this, you get all the clinical protocols as numerical values, and this becomes a good record. The therapist can say: I will now not focus so much on measuring the rehabilitation; I'll actually focus on helping the person do better, and the video will in turn measure out the entire clinical protocol. That's the process we follow for the entire gait pattern analysis for the person. Now, after figuring out the gait pattern analysis, there are two more parts. We still have to figure out whether there was pain or not, whether there is a problem or not. And second, we have to correlate all of these together so that we give a normal, common representation to the doctor or the therapist. How do we do that? We resort to sentiment analysis. So let me quickly show you how that structure works. This is just a wraparound demo, a hard-coded demo sheet, just to show how it would work. A doctor would come in, look at their regular EMR system, pick a patient, and say: for this person, I want to look at a detailed analysis plan for their gait. So this is the rehabilitation plan: he comes in at a certain point in time.
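The protocol numbers described above, knee flexion and the like, reduce to joint angles computed from keypoint coordinates: the angle at a middle joint formed by its two neighboring joints. A small sketch, assuming 2D keypoints from the pose stage:

```python
from math import acos, degrees, hypot

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c.
    For knee flexion: a = hip, b = knee, c = ankle."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = hypot(*v1) * hypot(*v2)
    # clamp to [-1, 1] to guard against floating-point drift
    return degrees(acos(max(-1.0, min(1.0, dot / n))))
```

A fully straightened leg gives 180 degrees; smaller values indicate flexion. Running this per frame over the time series yields the protocol measurement as a curve rather than a one-off subjective rating.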
He has to go through a certain exercise. He has to cover so much distance, take so many steps; this is the speed he has to maintain. And remember, all this data comes from the system, right? We are doing a video analysis that captures the speed of his walking, the change in his knee, all the parameters that are changing. We do all this from the video analytics and push it back to him. So the doctor or therapist would run a therapy session. The therapy session would include the video; this is just my video right now, but it would include a video. And with the video, it would include the graph analytics of all the things we have measured, right? But one nice, interesting thing is that along with this, you also have the ability to do sentiment analysis. You can take their facial expressions, what they are feeling, et cetera. There are nine types of sentiment it can analyze, and you can extract whether they are sad, happy, in pain, et cetera. You extract all the data and add the sentiment analysis on top, right? You'll notice the last column in our protocol said that along with measuring the parameters, they also need to measure how the patient is feeling. That is something that can be clubbed in from the sentiment analysis. Now, do you notice a problem here? Most sentiment analysis, even here, shows a 2% smile and a neutral emotion. Guess what sentiment you get when you take a video of a patient in a hospital? You never see eight of the nine sentiments, right? It is always pain. So the challenge is not extracting sentiment; the challenge is making sense and meaning out of the sentiment.
So, what we need to do: we are not bothered about absolute sentiment at all. We are bothered about changes in sentiment, right? You started off in pain; you've just undergone an operation. I can't expect you to be smiling. You start off in pain; what is the delta change in your sentiment? That is what we measure. We leave out the absolute numbers. So in many cases, it actually doesn't matter to us what logic is used to do the sentiment analysis. We figure out the delta change in that sentiment, and we map the delta change back onto the actual analytics. So what is this? It's face detection; it does that, and we discard the rest of the data. One thing that is critical for us is that we are in the clinical space, so there is no patient data that we can hold. In fact, as part of this operation, we collect a whole bunch of data, but all of it is stored on the hospital servers. We collect only specific data: the sentiment part is something we collect, the gait part is something we collect, but all the rest is discarded and left only on the hospital servers. It's very similar to that, right? Correct. So there are standard approaches which say that because we have image databases which are already labeled, we take a collection of all of that and detect sentiments. But like I said, if you go and look at those databases, you'll have a ton of images of people celebrating birthday parties, a ton of images of people on holiday, right? So it is not directly relevant. Which is why, right now, we are ignoring the absolute sentiment and taking only a differential sentiment, right?
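The differential-sentiment idea is simple enough to sketch. The per-session score here is a hypothetical pain score in [0, 1] from whatever face-analysis model is in use; only the session-to-session trend is kept, matching the "discard the absolute numbers" approach described above:

```python
def sentiment_deltas(scores):
    """scores: chronological list of per-session sentiment scores
    (e.g. a model's pain score in [0, 1]).
    Returns session-to-session changes; absolute values are
    deliberately ignored, only the trend matters."""
    return [b - a for a, b in zip(scores, scores[1:])]
```

A steadily negative delta on the pain score, for instance, indicates improvement even though every individual session still reads as "pain" in absolute terms.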
So in our case, one additional challenge that we are starting to work on is building out an image dataset with sentiment labels for medical patients specifically. That's a separate activity. But right now, because we don't have the labeled data, we are only considering the delta, the change in sentiment, right? Now, with all of this, what it essentially does is allow the doctor to say: okay, there was a rehabilitation session on a specific day, I want to go back and revisit it. He can go back and take a look. This is just one random image, but he can go back and look at what actually happened as part of the rehabilitation, what activities happened at each point in time. And there is a time sync between the video and the data that was generated, right? So if you're saying hip flexion was so many degrees, knee angle of rotation was so many degrees, that data and the video are correlated as a time series and mapped together. At any point, the doctor can go back and revisit this data. One of the reasons we did this is that it also gives us the ability to relabel the data. If there are data labeling issues, we can look at the video, look at what the system has generated, and update it: is this labeling right or wrong? So we have the ability to take the same structure and also use it for relabeling, retraining, and feeding the whole thing back, right? It gives us a good structure for working with the whole thing as numerical data. So, in a sense, what now happens is that we can map not only the gait analysis parameters but also the sentiment numbers.
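The video/data time sync described above amounts to looking up, for any video timestamp, the nearest recorded measurement. A minimal sketch under that assumption (the function name and data layout are illustrative):

```python
from bisect import bisect_left

def measurement_at(timestamps, values, t):
    """Given sorted measurement timestamps (seconds) and their values,
    return the value nearest to video time t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return values[0]          # before the first measurement
    if i == len(timestamps):
        return values[-1]         # after the last measurement
    before, after = timestamps[i - 1], timestamps[i]
    # pick whichever neighbor is closer in time
    return values[i] if after - t < t - before else values[i - 1]
```

This is what lets the doctor scrub to any point in the session video and see the hip flexion or knee angle that was measured at that moment.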
So this gives a therapist the ability to say: I will focus on the therapy, I will focus on helping them get better, and in turn I will use my video analytics to figure out the entire gait pattern and the entire rehabilitation pattern, and that becomes a numerical analysis. One relevant development: this is a relatively new space, and the government of India has included robotic rehabilitation under insurance as part of their last bill in March, right? They said: as long as data can be made available, as long as any kind of verifiable data is made available, we'll start including all those cases under insurance. So, for example, a poor guy who went to pick coconuts and fell from the tree, even that starts getting covered under insurance, on the condition that the data is gathered, there is proof of rehabilitation, and proof of where he went, right? So we can start using this kind of analysis to cover those situations. A few small problems are still pending, which we are working on. Right now, most of this data is generated using a simple camera. There are two things we are considering. One is something like a Kinect or an Intel 3D camera, because one current limitation is that if the person is walking towards the camera, the coordinates don't change, so the system says nothing has changed. The second thing we are considering, one piece of research happening right now, is a grid-array camera setup, where we put up multiple cameras and do 3D motion detection, so that whichever direction the person moves, we know what is changing and can map it back.
Now, two other areas where this is very critical. Remember I mentioned robotic rehabilitation before? So we have robotic rehabilitation systems. There are a few in the US, and one or two companies in India as well who are building them. Now, how does the robotic device decide how to power itself, how to make you walk? There are two kinds of robotic devices. One kind is in hospitals, used for rehabilitation under supervision: there's a therapist who's watching, driving it, and taking care of what is happening. The second kind is something you take home. If you're paralyzed for a long time, you take it home as an assistive device, and that's also covered under insurance, so it becomes a long-term play. Now, the moment you go home, you need some intelligence powering the robotic device. The robotic device is not like a car; you can't just sit in it and say I want to walk. The way the device behaves on a slippery surface is very different from the way it behaves on a carpeted surface, which is very different from some other surface, right? So the robotic device needs additional information about how to power itself, and that is very specific to you as a person. It needs to understand your gait, the way you walk, and it has to adapt itself to that specifically. So not only is this going to be used for measurement; it also becomes the data bank that feeds back into such robotic devices to help power them. One of the advantages of this feedback loop: remember the example I gave, that sitting down in a chair with a robotic device today is next to impossible, even in the US? Today in the US, they have a few robotic limbs.
What happens is they have a bunch of people around you, you have sticks, you make the effort to stand up, and when you want to sit down, you essentially remove the device and somebody helps you sit down, right? While, engineering-wise, the device can sit down, it actually has no clue how it has to sit down, how it has to orient itself and so on. One of the advantages of using something like this is that it can feed data back into the device: if there is a chair, I can take a side-view video and guide you in how to orient yourself to go back and sit, right? So we have one company here in India which is building a robotic device, and we have started using this kind of data to power it. That device takes input data from here and uses it in its logic for how to power the gait: how to power the gait for people with spinal cord injury, for people with, say, knee damage, right? They have something like 30 different medical conditions, and for each of these they have created a set of exercises. The exercises are designed in parts. One, you have a designed exercise. Second, every exercise has a certain set of actions. As an example, exercise one would be: sit and stand three times. Exercise two would be: walk 10 steps. Exercise three would be: try to climb a flight of steps, right? Each of these exercises is in turn broken down into actions: an action is take one step, then take the next step. Now, therapists have the ability to pick this from the solution and customize it: I don't want this guy to walk three steps; I want him to take one step and sit and stand once.
So they can customize this and use these exercises to trigger those robotic devices, right? In collaboration with that company building the robotic device, this whole dataset is being fed back so they can use it to power their devices from an implementation perspective. That device is ready; it is going into clinical trials right now, and it's something they're looking at rolling out. There were questions about how much of this is ready from a commercialization perspective. This is an academic exercise, but it is being handed over to them. They are looking at commercializing it and taking it to market by the end of this year, with three things in place driven off this. One, they will have a way to measure rehabilitation in all patients today, irrespective of whether they're using a robotic device or not: put up a bunch of cameras at NIMHANS, put up a bunch of cameras elsewhere, and you can start measuring rehabilitation today. Second, using this data, they plan to power their robotic device and say: our robotic device will be customized for each individual and will help you with a specific gait pattern under specific scenarios. Third, they plan to share all of this data with the insurers as well. The national health insurance of India has said that this is acceptable data to them, and as long as it is verified with biometric identification, so that we know who the patient is, they will actually cover a whole bunch of these scenarios as part of the insurance setup, right? So all of this is expected to get into a commercial mode by around the end of the year. One of the reasons I like this example is two things. One, it is something which is going out the door, right?
It's not stuck in an academic space; it's set to go out the door. Second: if I make a mistake in price calculation on Flipkart, you'll just end up paying more, right? If I start making mistakes here, in how the robotic device powers you, it is much more critical. So this is one of those spaces where, whatever amount of algorithmic logic we have, we also need a very strong amount of trust from doctors and therapists. There were questions in the past session: will this replace a doctor? Will this replace a therapist? It will not, very clearly. However, what the therapists are saying is: there is just so much fatigue that I'm not able to attend to everyone. Today, when I try to get somebody to walk during rehabilitation, 80% of my time is spent holding him up, not actually watching whether he's walking properly. Their pain point is: I cannot spend time figuring out what is happening, because I'm doing so much non-valuable work. So their point is: if you can get a device which is fall-safe, a device which monitors, I can still get a few people to hold him up, but it gives me valuable input on what actually happened. Did he have pain when he slipped? Did he have pain when he took a step? Did he have pain when he turned a bit? If I can get that kind of information, the benefit we can provide as part of therapy is significantly higher. And some of the best feedback we got was from therapists, not in clinical hospitals in Bangalore or Mumbai, but in remote places, who said: if I can get this data, it will help a lot in how we can help people from a rehabilitation perspective, right?
So: good benefit structure, simple cost structure. That's essentially the environment we are looking at for a practical application of something like this. To quickly summarize, this started off as an academic exercise in identifying specific parts of the body from images. We provide an image as input and, from that input, identify the different body parts. It had to be robust enough to handle a single-person image, a multi-person image, a convoluted image, somebody fallen on the floor, et cetera, and still identify the various body parts. The next step was extracting all these body-part locations from video. We take multiple videos across the entire rehabilitation process, and all of this data is gathered. This is then used to map the body-part coordinates for a rehab session to clinical protocols. So we use the entire video to say: at seven minutes, thirteen seconds, this was the amount of knee flexion he had, this was the angle of rotation of the ankle, this was the gait pattern he underwent, this is how he can actually walk. Another challenge is that today, when we look at rehabilitation, we put everyone into one common bucket. A kid comes in and we say, boss, first walk. Maybe he wants to be a sportsman, maybe he wants to join the military; that's completely ignored. We just say walk, go away. One advantage this gives is that after the first three or four sessions, you can actually put him into a category of rehabilitation undergone by sportspeople, or by military people. So even for a regular person, you can say, from an ability perspective, you actually have a much higher ability.
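The per-frame measurements described above (knee flexion, ankle rotation) reduce to geometry on the extracted keypoints. As a minimal sketch, assuming 2D pixel coordinates for the hip, knee, and ankle (the actual system's keypoint format and conventions are not specified in the talk), a joint angle can be derived like this:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by points a-b-c (e.g. hip-knee-ankle)."""
    v1 = (a[0] - b[0], a[1] - b[1])   # vector from joint to first point
    v2 = (c[0] - b[0], c[1] - b[1])   # vector from joint to second point
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safe acos
    return math.degrees(math.acos(cos_t))

def knee_flexion(hip, knee, ankle):
    """Knee flexion reported as deviation from a straight leg (180 degrees)."""
    return 180.0 - joint_angle(hip, knee, ankle)

# Illustrative keypoints (made-up pixel coordinates), as a pose model might emit:
hip, knee, ankle = (100, 50), (105, 120), (90, 185)
flexion = knee_flexion(hip, knee, ankle)
```

Running this per frame across a session video gives the time series ("at seven minutes, thirteen seconds, this was the knee flexion") that gets mapped to the clinical protocol.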
So we'll not map you to just getting you to walk; we'll actually help you do significantly more and get out. It customizes the entire therapy protocol on a per-person basis by analyzing what each person is able to do. And it correlates all these clinical protocols with sentiment analysis, with a delta sentiment analysis of how they felt, so we know that we're not pushing something they're not comfortable with. That's something the robotic platform will understand and take forward. Any quick questions or comments, please.

Taking a point from what you said, errors can be quite costly in the medical field. About the sentiment analysis which you do, the facial thing: from a pure psychology perspective, there is a psychology researcher called Paul Ekman, who has done 40 years of research on micro-expressions. Are you familiar with it? (Yes, we are familiar with it.) If you're already familiar with it, my question is this: humans are very good at making deceiving expressions. Whatever face detection algorithms you have, however sophisticated, human beings can deceive them with their expressions.

That was the belief we also started with, but one funny input that we got from doctors is that in a hospital environment, in a place where you're undergoing rehabilitation, apparently you don't fake your emotions much. For example, that's one of the reasons they tell you, if you're getting an injection or undergoing a procedure, to keep your eyes open if possible: it's a reaction you can't fake. The doctors go even further. This was an interesting input from Dr. Ravi Gopal Varma, a head neurosurgeon at ASTAR.
He's one of four people who have done deep brain surgery across the world. One of his inputs was: even if your body is completely paralyzed, there are still signals from the brain and to the brain; what changes is the interpretation of those signals. So especially in painful scenarios, especially early in a therapy, his point was that you pretty much can't fake at all. His second point was: if we can detect that somebody is faking, that's a very good input to us that he's actually getting better. So it was a good input: yes, sentiments can be faked, but at some point the sentiment is really critical and helpful, specifically in a medical scenario.

Okay, but this has to be debated a lot, because there are a lot of loopholes in it. One area where you can easily fake is mental disorders, especially a patient with schizophrenia, because they have episodes of abnormality and episodes where they are quite normal, so they can fake it when they are normal to bypass the treatment, because they don't want to go through it. So this area has to be addressed. What I want to know is this: I'm an Asperger's person, and the reason I got into micro-expressions is that people with Asperger's are high-functioning autistic and cannot easily detect what another person feels or thinks. I've been training myself for the last four years with micro-expressions, so now I can pretty much detect micro-expressions in a face; I'd say 80% of the time I'm right. I can detect emotions much better than a typical neurotypical person and find out whether somebody is deceiving or not. So are these sophisticated algorithms not centered on micro-expressions?

They're not.
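The delta sentiment analysis mentioned earlier, comparing how the patient felt before and after each action, can be pictured as differencing a per-frame comfort score and pairing sharp drops with what the patient was doing. This is only an illustrative sketch; the scoring function, the threshold, and the event labels below are assumptions, not the actual system:

```python
def sentiment_deltas(scores):
    """Frame-to-frame change in a per-frame sentiment/comfort score in [-1, 1]."""
    return [b - a for a, b in zip(scores, scores[1:])]

def flag_discomfort(scores, events, drop_threshold=0.3):
    """Pair each sharp negative delta with the exercise event at that frame.

    `events` maps frame index -> label such as 'step', 'turn', 'slip'.
    Returns (frame, event, delta) tuples where comfort dropped sharply.
    """
    flagged = []
    for i, d in enumerate(sentiment_deltas(scores), start=1):
        if d <= -drop_threshold and i in events:
            flagged.append((i, events[i], d))
    return flagged

scores = [0.6, 0.55, 0.1, 0.15, -0.4]   # hypothetical per-frame comfort estimates
events = {2: "step", 4: "turn"}         # what the patient was doing at those frames
print(flag_discomfort(scores, events))  # flags both the step and the turn
```

The idea is that the output ("did he have pain when he took a step, when he turned?") is exactly the kind of per-event signal the therapist said they cannot observe while holding the patient up.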
And one of the things I believe is that as we get into the whole AI/ML space, we should take it very slowly and softly, especially in the medical space. Doctors have this habit of either accepting something or discarding it, and in most cases they discard it; anything that's not an established medical procedure, they first tend to discard. So the big challenge here is that while we can do all this sentiment analysis and video analysis, it has to supplement what the therapist is doing, and the doctor has to believe in it. In fact, I was talking to somebody during the break about how trust is very critical here. Today we have a whole bunch of AI/ML tools doing spam detection and throwing spam into my spam box. Two friends call up and say they sent me a mail which I didn't get, and I find it in spam; my trust in the spam filter goes down. Trust going down on spam, or on pricing, is okay. Trust going down on a medical procedure is not okay. So very clearly, from a positioning perspective, we cannot say this will do a bunch of stuff for you. We say, very clearly: this is one way of starting to measure how this protocol works. It is useful for insurance and others, but it will get refined over a period of time. And at the end of the day, one of the things the insurers have mandated is: you can give me all the records you want, you can give me all the videos you want; at the end of the day I want it signed off by a doctor. The doctor has to say this is okay. So very clearly, we are looking at this to help and supplement the doctor, not to say that this by itself will be great.

Yeah, one more thing to add on to this.
You have been talking about robotic rehabilitation. I've taken a course on neuroeconomics on Coursera from the Higher School of Economics in Russia. In a case study, the professor shows a video related to robotic rehabilitation and DeepMind research. A lady is fully paralyzed; her brain is the only part of her that is working. What they did was, through brain stimulation, when the lady thinks that she wants to eat, her robotic arm moves and feeds her. They showed us a live demo of this happening in a hospital in France. So I think this is already happening, and if they've experimented on humans, it's already been approved.

In fact, as an example, this robotic company today has something called intent detection. You can do all this analysis after you walk, but they do intent detection to find out whether you want to walk. For example, you tilt a bit forward, the device detects an intent to walk and starts getting the robot to walk; you tilt a bit backwards, it will stop. But they already have a challenge today: they don't have a way to detect an intent to lift your leg. There's no physical parameter; I'm half paralyzed, yet I have an intent to lift the leg. This is where they were told that the brain signals will still fire even if you're paralyzed, so you can actually tap the brain signals and convert them to electrical signals: you think that you want to walk, and it detects that and starts attempting to walk. So yes, research like this is there, but again, it has to be taken in a measured fashion. When working with other industries, we tend to throw around a whole bunch of algorithms and jargon.
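The tilt-based intent detection described above is essentially a thresholded mapping from a trunk-tilt signal to a walk/stop command. A minimal sketch, where the threshold values and the dead band are illustrative assumptions rather than the device's actual calibration:

```python
def detect_intent(trunk_tilt_deg, start_deg=5.0, stop_deg=-2.0):
    """Map trunk tilt (forward positive, in degrees) to a walk/stop intent.

    Thresholds are made up for illustration; a real device would calibrate
    them per patient and smooth the signal before acting on it.
    """
    if trunk_tilt_deg >= start_deg:
        return "walk"   # leaning forward past the threshold: start walking
    if trunk_tilt_deg <= stop_deg:
        return "stop"   # leaning backward: stop
    return "hold"       # dead band: keep the current state

# Leaning forward 8 degrees triggers walking; 1 degree stays in the dead band.
print(detect_intent(8.0), detect_intent(1.0), detect_intent(-5.0))
```

The dead band between the two thresholds is the point of the design: small, involuntary sways should not flip the robot between walking and stopping, which matters exactly because, as noted above, a misread intent here has far worse consequences than a mispriced item.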
The challenge we have continuously seen when working with the medical profession is that we have to take it slowly and softly. As an example, if there is some mismatch in our analysis of the nerve signals, an intent to do something else could be construed as an intent to walk, which can lead to disastrous consequences. In many ways, getting it right is more important than just doing it. So while there is a lot that can be done in this space, it needs to be taken in steps and done slowly.

Thanks Praveen, it's a nice talk, a real pleasure, on a very exciting topic, a continuation of what Mr. Balaji has... (Correct.) What you told us about is a mixture of machine learning and computer vision. You are talking about rehabilitation, but this rehabilitation is confined to a particular area, limbs and bones; that therapy is orthopedic. (Correct.) And you told us the golden time for a patient to recover, doctors think, is the six-week window. (Correct.) My question is: how is rehabilitation done in other sections of medical science, and how are you making the segmentation, drawing the different parts?

Fair question. In fact, it's not really about lower limb versus upper limb. The lower limb has functions like sitting and walking. The moment you can stand up and walk, even if, for example, your hands are paralyzed, there is still a whole bunch of other body functions you can perform; you can balance yourself, which you can't do if you are paralyzed from the waist downwards. So the doctors actually classify it in terms of two things: do you have the ability to manage yourself, or do you not? They don't look at it as lower limb versus upper limb.
So, as an example, even if one part is paralyzed, you still have the ability to balance with the other part. However, if you have lost the use of your hips, you don't have the ability to balance yourself at all, even if your legs are functioning. So they classify impairments as things that still let you be independent versus things that don't. All other things, for example coordination of the hands, et cetera, are treated as things with which you can still be independent. As she mentioned, if you have a mental or neurological disorder, that can mean you can't be independent at all: you can't even plan to stand, and there are instances where you get a spasm and can't control yourself. Those very clearly form a separate category. So the broad-level categories for rehabilitation are: can you be rehabilitated to be independent, or can you not? Those are the two broad categories, and that's how they act.

But the body responses of different people are different. (Completely different.) So how does the prediction come?

The prediction doesn't just come; obviously doctors can do the predictions today, but the way they arrive at a prediction is to put you through a round of initial exercises. Think of it like teaching a kid: you give the kid a bunch of exercises, saying, classify all the blue together, classify all the red together, walk 10 steps, see if you can draw this, now can you jump? You give a set of exercises from which you derive what they can do.
Very similarly here, they give them a set of exercises, but this really helps with the derivation, because now you can form the entire analysis and say: in this exercise, for this kind of action, this was possible; somewhere else, it was not. So it's not only used to give them feedback, it's also used for assessment.

And how is the segmentation done when you are comparing the different parts?

We'll take the questions outside, okay?