Hi, I'm Richard Juday. I'm the robotic vision manager in the Tracking and Communications Division here at the Johnson Space Center. Today we're in the Hybrid Vision Laboratory, where we do a combination of vision techniques. We use laser light. We use spatial light modulators. And we use a thing that we'll talk about some more today, which is an image remapping technique that allows machine vision to have some of the aspects that have proven advantageous in the development of human vision and other biological vision systems. The main function of the programmable remapper within the machine vision environment is to provide things like rotation invariance and scale invariance. You have some of those features built into the vision system that you're watching this show with. You know what my face looks like as I stand here. As I stand closer and closer, you don't have to remember a different shape of face, because your vision system has a character that doesn't mind changes of scale. Machines have more trouble with that, and we built the programmable remapper to assist with that kind of problem in machine vision. Now, some of the robotic vision applications that the agency has would include working with a spacecraft in orbit. We have machines that simulate some of the changes in viewing angle and distance that will be required in space operations. Beyond this kind of application, we also intend to build vision systems that will land robotically on Mars. I am very excited by the idea that I might be able to tell my grandkids that I worked on a vision system that enabled a robot to land on the surface of Mars. Technology Utilization's interest in this particular technology is for its application to some human low vision problems. That's one of the things the agency is supposed to do: develop technology that can be spun off into public application.
Some of the things that can help make a machine see better, we think, have a solid chance of helping a person with certain forms of diminished vision make the best use of the remaining function that he has. It's not a cure, and it's not available next week. But we think that within something on the order of five years, this kind of technology, the results of our research here, can have a benefit to the low vision community within the country. I have formed an association here with Dr. David Loshin from the University of Houston's College of Optometry. Dave came and sought out what technology there might be at the Johnson Space Center that would assist him in his activities, and this is the association that resulted. Hello, I'm David Loshin, associate professor and assistant dean at the University of Houston College of Optometry. Dr. Richard Juday and I have developed a collaboration trying to bring some of this NASA technology, developed using the programmable remapper, to a low vision application. There are many problems that individuals have with low vision. These problems usually are associated with visual field defects, that is, regions of the world that they cannot see. These can be isolated mainly to central vision, as in the case of age-related maculopathy, or to peripheral vision, as in the case of retinitis pigmentosa. These individuals have problems with specific tasks, including reading, facial recognition, object recognition, mobility, and driving. Let me give you some examples of these. This is the back of the eye, or the retina. This individual has age-related maculopathy. As you notice, the central portion of the retina is affected by the disease process. Individuals with this disease have difficulty reading print, because wherever they look, they will see a region that is missing or black. These individuals will also have a problem with facial recognition. Here you can see the full face, a face we all recognize.
But these individuals may see something like this: all the detail of the face is missing. In the case of a peripheral field defect, as shown here with retinitis pigmentosa (again, this is the retina, or the back of the eye), individuals with this disease will not see a full view of the world, but rather a very small or isolated portion of it. What we're trying to do with the programmable remapper is take information that would normally fall within these defects and bring it onto the viable retina. Let me give you some examples of image warping. Some of these remappings we can show you with a grid. Here you see a reduced field of a grid. Normally, someone with a peripheral field defect would be given a device that minifies the entire field. This allows more information to go into the central field, where they have their healthy retina. With remapping, we can instead show more information, a larger portion of it, in the central field, and gradually fall off into the periphery. Here we can keep the central field exactly the same size as our reduced field, but we get more information in the periphery of the field. One of the problems with overall minification is that people lose localization within the field, and they have a hard time with any kind of mobility, that is, walking. In the case of a central field defect, the information with remapping is pulled outside of the central defect. You can see here that a line coming down will actually be remapped around the field defect. Notice that we don't have this distortion throughout the field, but just around the defect. We don't have to remap the entire field; instead, we can do only part of it. Here is a partial remapping where we're not taking all the information that would fall behind the defect, but just part of that information. Now this can be applied to reading.
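The pull-around-the-defect warping just described can be sketched as a radial remap. This is an illustrative toy, not the remapper's actual transform: the function name, the linear squeeze, and the continuous image `f(x, y)` are all assumptions made for the example.

```python
import numpy as np

def remap_around_scotoma(f, scotoma_r, outer_r, x, y):
    """Radially pull image content out of a central field defect.

    `f` is a continuous image f(x, y).  Display radii between
    `scotoma_r` and `outer_r` draw their content from source radii
    between 0 and `outer_r`, so everything that would fall behind
    the defect reappears, squeezed, in the surrounding annulus;
    beyond `outer_r` the image is untouched.
    """
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    # Linear squeeze: display radius scotoma_r maps to source
    # radius 0, and display radius outer_r maps to itself.
    squeezed = (r - scotoma_r) * outer_r / (outer_r - scotoma_r)
    r_src = np.where((r >= scotoma_r) & (r < outer_r), squeezed, r)
    return f(r_src * np.cos(theta), r_src * np.sin(theta))

# Probe along the x axis: the point the defect would hide at the
# origin now appears at the rim of the defect (radius 5), and the
# mapping rejoins the identity at the outer radius (20).
ramp = lambda x, y: x                      # a simple test image
xs = np.array([5.0, 12.5, 20.0, 30.0])
out = remap_around_scotoma(ramp, 5.0, 20.0, xs, np.zeros(4))
```

The key property, matching the description above, is that the distortion is confined to the annulus around the defect: outside `outer_r` the field is left exactly as it was.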
As you can see here, information is lost behind the defect, and it would be difficult to read across the line. By remapping, we can take information that would normally fall behind this defect and remap it outside. There is slight distortion, but the letters are still legible. For example, here we read "to eat you" but cannot see the rest; with remapping, in "to eat you up," the word "up" is pulled out of the defect. This would make it easier, we believe, for individuals to read text. We can also apply this to facial recognition. For individuals with central defects, we can pull the information out of the defect. You can see here what happens when we do a complete remapping: we get a lot of distortion. However, with a partial remapping, we can get features outside of the defect that may indeed help individuals recognize faces. Let's talk about what we might be able to do with some field remapping with the machine that we have here, Dave. Suppose that a normal person's field of view is about that size. Then what might a person with the retinitis pigmentosa that you were mentioning see? Well, Richard, that disease obviously has a lot of forms, but they would be left with a central region of good vision, and the disease progresses. Any information that would fall out in the periphery here would actually be lost. They just wouldn't be able to see that. They'd have a very restricted field of view. That looks pretty severe. That might be like looking through a paper towel tube. Well, the obvious thing to do is just use a telescope, turn it around backwards, and shrink the whole field of view into that. That's what's done right now for the optical aid that we give individuals. The one problem with that is, first of all, they lose a lot of the resolution. By making things so small, they can't pick out objects in the field. Secondly, there's a problem with localization.
When you reduce everything and you put your hand into the field, it's hard to tell where your hand is relative to objects. And I think if I were walking down the hall looking through a paper towel tube, I might bump into things too. That's true. You'd have a lot of difficulty with mobility. All right, so what I think I hear you saying is that in the center you'd like to maintain a fairly high resolution of the object that we're looking at, but bring in some of those picture elements that used to be outside the field of view because of the defect, and let them jam up a little more closely together, so that we have picture elements spaced widely here but close together in the periphery of the remaining part. Would that work? I think it would work. Obviously, when you jam up your elements here, you're going to get some distortion. But again, the distortion won't make that much difference, in that you could tell there's an object out there. Whether you can actually tell what the object is is not as important as being able to navigate around it. Okay, so the motion cues that you normally have, say, as you walk down the hall and things move by you in your perspective, you can see that they're moving without necessarily being able to read the sign or tell what it is. So we might be able to retain some of that? I think so, yes. All right, well, let's go give it a try. I think I know how to do that. What we have here is the programmable remapper itself, in the box underneath the monitor. It was manufactured at Texas Instruments to specifications laid out here at the Johnson Space Center. Some of the design is theirs, some of it is ours. It functions by taking a video image, as from this camera, and that video signal is fed into the remapper. There are coefficients that have been calculated offline and are entered into the remapper by floppy disk. Those coefficients are used to push picture elements around and to create a display in which a warping has occurred.
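The coefficient-driven warping just described can be sketched as a lookup-table remap: for each output pixel, an offline-computed table names the input pixel whose value should appear there. This is a minimal sketch assuming a nearest-neighbor table; the function and variable names are illustrative, not the remapper's actual interface.

```python
import numpy as np

def remap(image, src_y, src_x):
    """Warp `image` with a precomputed coordinate table.

    For every output pixel (i, j), the tables give the input pixel
    (src_y[i, j], src_x[i, j]) whose value should appear there,
    analogous to the remapper's offline-computed coefficients.
    """
    h, w = image.shape
    # Clamp so table entries outside the input sample the border.
    ys = np.clip(src_y, 0, h - 1)
    xs = np.clip(src_x, 0, w - 1)
    return image[ys, xs]

# Example: a table that mirrors the image left to right.  Any
# warp, including the low-vision remappings above, is just a
# different pair of tables.
img = np.arange(12).reshape(3, 4)
yy, xx = np.mgrid[0:3, 0:4]
mirrored = remap(img, yy, 3 - xx)
```

Because the tables are computed offline, the per-frame work is a single gather, which is what lets a warp like this run at video rate.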
Now, after that image has been warped, in this case it is brought back and displayed here to a person who is wearing this helmet, which was manufactured at the University of Houston. The displays here show what the remapped image is, and they are shown to the left and right eyes. The same image that I will see here, Dave will see on the monitor. And so we'll now try on the lightweight summer version of the Darth Vader helmet. I should point out also that this version we have built here is mainly for testing. This is not going to be the ultimate low vision device. We had to set up a system like this so that we could do the testing to see how useful and how feasible this type of remapping will be for patients with low vision. All right, so what transform do you have put in here, Dave? What are we looking at? This is a full field. We really haven't done any transform at all. Actually, we're just taking the input and displaying it as the output. We're not changing the image at all. Okay, so here we are looking at some letters on a chart. This is actually not remapped at all, Richard. We're taking the direct input into the remapper, and it's coming out as it goes in. This would be simulating a full field. Okay, so I should regard this as my normal field of view as I look around the laboratory. That's right, and try to find, let's see what we've got; somebody over there we can look at. I think this might work, Dave. All right, so this is what I might see with the reduced field of view from the retinitis pigmentosa. Yes, and you can try to look around the room and see how difficult it is to find objects within the room. I can't see as much of that computer screen, nor can I see nearly so much of Chuck and that computer over here. Okay, well, that would be sort of hard to find things with. All right, and what is it that you might be able to do for us here? With remapping.
This is the remapped image. As I said, this has quite a bit of remapping on it, where we have a very sharp falloff, just to give you one example. All right, so if I want to read the words, I can put them in the center, but I don't lose them as they go off to the edge as quickly as I did the other time. It's sort of like looking at a beach ball. And if I go looking around the lab, I can now see the computer that does not have someone working at it, and let's go look at Chuck over here. Well, I can see Chuck, and I can see the computer, and I can resolve the computer if I want to by putting it here at the center, or I can resolve Chuck here as well. And right here I have the computer and Chuck in the same field, and I think you'd find that a little easier to use in looking around the room and trying to find all of that. Well, this is all very well for me to be looking at; I have reasonably good vision. But how are we going to find out if this is any good for people that have a real problem? Well, our next approach, Richard, is that we're going to have patients try looking through this device. We're going to solicit patients from both the University of Houston College of Optometry and the Lighthouse of Houston. These patients will have several different low vision problems, and we're going to see how much better they can see the world with this particular device. Well, I'm looking forward to that. As I look around here, this actually seems rather easy to get used to. It'll be nice to get some practical experience with it. As we stated before, this device will not be the final device. It has to be reduced in size and, obviously, improved cosmetically in order for low vision patients to want to wear something like it. Some of the technology that we are developing with the remapper has also been explored further for us by Transitions Research Corporation. They have built some robotic equipment that uses this same kind of technology.
Our objective here is to have a machine enjoy some of the same kinds of advantages in pattern recognition that your own eye-brain combination does. Now, you're accustomed to looking at an object that would appear like so if you were looking at the front of a space shuttle. But actually what goes on in the back of your head is that this representation, as you think of it out in front of you, really looks more like this, where there's more area. This corresponds to small radius near the center; these fins are these locations. This is what's called a log-polar domain, where the logarithm of the radius runs in this direction and the azimuthal angle runs in the vertical direction. So you see that there is more area in the brain given to the center of the object. Now, the advantage for pattern recognition is that if one were to take this object and rotate it or change its size, then, as appears here in this lower diagram, its representation in the log-polar domain is merely a translation of what one sees in the upper picture. So you don't have to remember and learn a really different representation of the object; it has just slid over and slid up compared with the form in which you learned it originally. Now, in another simple application of this technology, a centering algorithm is very easy to implement. If in some of our space applications you need to track an object, to keep a camera system pointed right toward that object, then in this Cartesian representation of that object you can see that in order to center this object you'd push on it here and let it protrude further on this side of these circles.
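The scale-and-rotation property described above can be checked directly: sampling an image on a log-polar grid turns a scaling into a shift along the log-radius axis and a rotation into a shift along the azimuth axis. This sketch samples a continuous toy scene (the function `scene` is invented for the example) rather than a raster image, so the shifts come out exact.

```python
import numpy as np

def log_polar_samples(f, n_rho, n_theta, r_min, r_max):
    """Sample a continuous image f(x, y) on a log-polar grid.

    Rows index log-radius (r_min..r_max in geometric steps),
    columns index azimuth.  Returns an (n_rho, n_theta) array.
    """
    rhos = np.linspace(np.log(r_min), np.log(r_max), n_rho)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(rhos)[:, None]
    x = r * np.cos(thetas)[None, :]
    y = r * np.sin(thetas)[None, :]
    return f(x, y)

# A toy "object" whose intensity varies with angle and radius.
def scene(x, y):
    return np.cos(3 * np.arctan2(y, x)) * np.exp(-np.hypot(x, y) / 10)

lp = log_polar_samples(scene, 32, 64, 1.0, 20.0)

# Enlarging the scene by exactly one geometric grid step...
step = np.log(20.0) / 31
scaled = lambda x, y: scene(x * np.exp(step), y * np.exp(step))
lp_scaled = log_polar_samples(scaled, 32, 64, 1.0, 20.0)
# ...slides the log-polar representation by one row, and rotating
# it by one azimuth step slides it by one column.
d = 2 * np.pi / 64
rotated = lambda x, y: scene(x * np.cos(d) + y * np.sin(d),
                             -x * np.sin(d) + y * np.cos(d))
lp_rot = log_polar_samples(rotated, 32, 64, 1.0, 20.0)
```

So a recognizer working in this domain need only search over translations, not over separate rotated and scaled templates, which is exactly the advantage claimed for the eye-brain mapping.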
Well, these red and blue circles, if you take them over here into the log-polar representation, become this vertical straight line, which is red, and this vertical straight line, which is blue. And you see that in order to center the object, the expression of that action over here becomes: find the place where the object sticks out the farthest, in a purely left-right sense, and push to the left on that protrusion, which will have this matching protrusion come out. When your two highest protrusions are the right distance apart on this chart, which is about like so, then that object is centered. And that's the algorithm implemented in the version of the tracking that we will shortly show you. Okay, what should be happening now is that the robot is looking for the place from which the docking target bears just the right kind of resemblance to what it has been told to look for. When it finds that place, it will align itself to be on the axis of the docking target, all done visually now as a machine vision process, and it will then dock with that docking target. Okay, it's looking for the places where the bulges that result from the fact that the box is not a sphere but is instead a cylinder, where those bulges appear at the right orientation in its field of view. It should be very close to that by now. Okay. What we have here is the video, the same as you would expect to see it with your eyes, of the video camera looking at the docking target, and at the top you have the representation of that object as the robot sees it. And this looks very much like what this image would look like in the back of your head, in the brain. Now we'll give the thing a bit of an offset, just to make the problem harder, and we will tell the tracking system to go dock with this target. Okay, well, here we'll start the demonstration.
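The centering rule described above, find the biggest protrusion and push toward it, can be sketched on a toy binary target. This is a simplified stand-in for the demonstration's tracker: it works on a synthetic disk, and all names and step sizes are assumptions for the example.

```python
import numpy as np

def center_on_target(mask, start, steps=60):
    """Crude protrusion-driven centering on a binary target.

    In the log-polar view, an off-center target shows its largest
    radius (its biggest protrusion) in the direction of the true
    center.  Each iteration finds the object pixel farthest from
    the current estimate and nudges the estimate one pixel toward
    it, so the estimate walks onto the target's center.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = float(start[0]), float(start[1])
    for _ in range(steps):
        d = np.hypot(ys - cy, xs - cx)
        k = int(np.argmax(d))              # farthest protrusion
        ang = np.arctan2(ys[k] - cy, xs[k] - cx)
        cy += np.sin(ang)                  # one-pixel step toward it
        cx += np.cos(ang)
    return cy, cx

# A disk target deliberately offset from the initial estimate,
# like the offset given to the tracker in the demonstration.
yy, xx = np.mgrid[0:100, 0:100]
disk = (yy - 60) ** 2 + (xx - 45) ** 2 <= 20 ** 2
cy, cx = center_on_target(disk, start=(50, 50))
```

Once centered, the estimate just dithers within about a pixel of the true center, which is the behavior visible in the demonstration when the bumps "mostly disappear."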
The robot has now centered the docking target, and you can see that it's staying in the center. Let's peel off this overlay that we used just to make this a little easier for us to see. Now what we are seeing below is what we call the Cartesian representation of this object. It's the television image in much the same form as you're accustomed to looking at it yourself, but what's happening in the back of your head is more nearly like what we see in the upper region, where this is the log-polar version of the lower image. Now, because this is a cylinder, it has bumps on it compared with what it would look like if it were a sphere or just a circle, and the robot has found the place where those bumps are in just the orientation that it wants, at about this angle. Now it's going to lower; you see that it's jumping, it's lowering, until the bumps have mostly disappeared, and now it's coming in on a straight-on approach to the docking target. And so you're looking at the nose of the spacecraft. Ah, we made it.