I'm Jason McCarthy. I'm working under Devon. We have a group of three, but we're all doing independent projects, so I'm one of three for that group. First, I'd like to thank Devon for advising me all year and UW-Stout for hosting the REU. It was a great experience and a lot of fun. To get to it: the title of my research is Path-Oriented Power Wheelchair Navigation Assistance. It's a lot for a title, but it really describes the whole project in a few words. In some more words, what I basically did was put a camera on a wheelchair, and using that, it can help a power wheelchair navigate down any sort of path, like a sidewalk or any sort of consistent terrain. In my presentation I'm going to go over some motivations, the objective I tried to achieve, some methods I used to discern the sidewalk from other terrain, and finally some results from experimenting. And if you ever can't see this corner down here, just let me know. Alright, so first of all, I needed a reason to do this research. I thought it was important to increase independence for users. Some people with cognitive or fine motor skill disabilities aren't able to navigate away from home without a caregiver or someone to help them, or even drive a wheelchair by themselves. That makes it difficult to accomplish everyday tasks, like going to the store to grab something or visiting friends or family. I feel like that takes away some serious independence from users; you can have a greater quality of life if you're able to do more things on your own. So I thought that was the most important aspect going into this. I also want this to be an inexpensive add-on. Maybe it could be coupled with the system from the beginning, but for the vast majority of wheelchairs that already exist, I feel it would be important to make it easy to integrate into the rest of the system.
Talking to some other professionals and some wheelchair users themselves, they did not see anything like this, and even in the research papers there aren't really any consumer versions. There are lots of versions being studied, lots of research; MIT is working on one, as are some other schools, but it's not really seen every day. No wheelchairs really have it. Part of the reason is that they have lots of sensors, which I'll go over in the next slide. There are lots of different components, and some of these have computers tied to them, so they're not very usable. And here are some of the technologies. The most recent example I found is this one. It's got a Kinect, similar to that one over there, attached to the top. And it's got some other sensors on it: a laser scanner as well as some proximity sensors. This doesn't seem very applicable to me, because it takes up a lot of space and some of these sensors don't work in all conditions. For example, the Kinect doesn't work outside, so it would mainly be a home device. Also, some of these other sensors don't have a lot of range, so you'd need more sensors to make up for that. It increases the complexity of the whole project, and it's not very usable. So some sensors that current systems are using: an infrared sensor like the Kinect, some distance sensors, GPS, and some even have a physical bump sensor. Kind of similar to your guys' group, you'll see that later on: it'll just have a physical bumper in the front, and it'll detect if it hits something. I don't see these systems as very robust, and one of the survey articles I read agrees with me. You kind of need all of them working together to achieve something that's actually functional. Now, what I tried to accomplish was, like I said before, to have the camera identify sidewalk and navigate accordingly. So this is just a map of the area.
In the end, in my results section, I'll show you a little bit of this path, but I thought that these areas showed a decent variety of terrain. In this area, this is Fryklund Hall up top, and this is the student center. In this area there's a red circle where all the paths meet, and it's kind of offset in color; it's a different color than the rest of the sidewalk. Down here in front of the S3RI, there are a couple of obstacles: some poles and some crosswalk markings. The system would need to take in all that information and know that it's still sidewalk and still navigable. I wanted to test in a daytime outdoor sidewalk environment. So it's not robust in that it won't work at night, because the camera needs to see. And I wanted it tailored towards kind of an urban or suburban environment, where there are sidewalks as designated places to move. Like I said, I want it to transition automatically between environments: whether you're driving on a sidewalk in the middle of a field, on a sidewalk next to a street, or on a crosswalk, I want it to automatically recognize that and continue to move. Okay, so you guys have probably all seen this slide a lot already. But to give some thought to how I want the process to work: you can see easily that the sidewalk isn't actually red here, it's gray, but the sidewalk contrasts with the grass behind it and the rest of the image. It's a consistent gray, so it's easy to pick out from the rest of the picture. By analyzing that and doing some sort of transformations on it, you can single it out from the rest of the picture, and you can analyze the shape to determine whether the user is on the path or not.
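The color-based idea described here can be sketched in a few lines. This is a minimal illustration, not the actual project code: the function name, the patch size, and the tolerance value are all my assumptions, and the real system used OpenCV (where `cv2.inRange` performs the same per-channel comparison).

```python
import numpy as np

def sidewalk_mask_and_center(frame, tol=40):
    """Sketch of color-based sidewalk segmentation.

    `frame` is an (H, W, 3) uint8 image. A patch at the bottom-center
    (the spot right in front of the chair, assumed to be sidewalk)
    gives a reference color; every pixel within `tol` of it on every
    channel is kept. The centroid column of the kept pixels stands in
    for the "pink dot" from the talk. The patch size and `tol` are
    illustrative, not values from the project.
    """
    h, w = frame.shape[:2]
    # Sample the region directly in front of the wheelchair.
    patch = frame[h - h // 8:, w // 2 - w // 8: w // 2 + w // 8]
    ref = patch.reshape(-1, 3).mean(axis=0)
    # Keep pixels whose color is close to the reference on all channels.
    mask = np.all(np.abs(frame.astype(float) - ref) <= tol, axis=2)
    ys, xs = np.nonzero(mask)
    cx = xs.mean() if xs.size else w / 2  # centroid column ("pink dot")
    return mask, cx
```

In the real pipeline you would then compare `cx` to the image center to decide how far the user has drifted toward the edge of the path.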
In addition to this, I think it's important that the wheelchair doesn't control where the user goes, but just limits the user from driving off path. That gives them the freedom to choose the route they want to take, and it'll only hinder them if they are leaning towards the edge. So here's just a flow chart of what I'd like to accomplish, or at least what I'd like to accomplish in the beginning, with this wheelchair in particular. The joystick on the wheelchair has a Bluetooth output that can send to computers to control a cursor on the screen or something similar. So I was going to take a Raspberry Pi and hijack that and use it as the input; then the camera would consider the image and determine if the chair was on path or not, and if it was, the Raspberry Pi would send the signal right through to the drive control and just drive straight. But if they were driving off path, for example, it wouldn't shut the wheelchair down or anything; it would just limit the speed so they could maybe recognize that they were driving off. Some things didn't work out, so I decided to focus on this part, and instead of having the wheelchair modify the steering, I decided to output to an LED array to indicate whether they're actually on path or not. Here's some hardware I used. Like I said, I wanted to keep it inexpensive, so $100 was my price point, and I did achieve that. I did it with a Raspberry Pi and the camera module (you can see the components up there too), a little battery like you'd use to charge your phone, an SD card, and just some simple electronic components. I used open source software: Raspbian Jessie, which is a Linux version that runs on the Raspberry Pi; Python for my programming; and a big chunk of the OpenCV library, which is for computer vision. Here is the design of the LED array. I just put it on a breadboard, because in the end this wouldn't actually be used on the wheelchair.
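The pass-through behavior in that flow chart, where the Pi forwards the joystick command and only scales speed down when the chair drifts off path, could look something like this sketch. Every name and threshold here is an assumption for illustration; the talk describes the intent, not this code.

```python
def filter_drive_command(joystick_xy, deviation, max_dev=0.5, slow_factor=0.3):
    """Forward the user's joystick command unchanged while on path.

    `deviation` is a normalized horizontal path deviation (0 = centered).
    Past `max_dev`, speed is scaled down rather than steering or stopping,
    so the user keeps full control of the route. `max_dev` and
    `slow_factor` are illustrative values, not from the talk.
    """
    x, y = joystick_xy
    if abs(deviation) <= max_dev:
        return (x, y)                          # on path: full pass-through
    return (x * slow_factor, y * slow_factor)  # off path: just limit speed
```

The design choice here mirrors the one in the talk: the system never overrides the user's direction, it only makes drifting off the path harder to do by accident.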
This is just to demonstrate the proof of concept. As you can see, it's the same as the diagram; it's just got jumper cables on the bottom, attached to the right pins using the GPIO. The purpose of this is that when the user starts to travel off path, the LEDs on that side of the light bar turn on. So if the user is going straight, the middle one lights up, but if they start to turn, then these ones will start to light up, and finally, if they're really off course, the red ones will light up. There's a logging procedure in the code that records when the red ones light up and the time that it happens, so anyone looking at the logs can look for any sort of cues that may have caused that. So here's the rest of the system. You can see the Raspberry Pi on top, and there's a little camera module inside. It all fits in a nice little case, and on the back there's an attachment, kind of similar to a picture frame where you'd put a nail in, but instead I 3D printed something that will fit on the rail. This would be unique to the wheelchair; there would have to be a different implementation for each one. I of course tailored it to this chair and used the rail that the arms are on, and it's just attached by USB to the battery. If you want, you can see that the 3D printed part just comes right off. But it's all very contained; it's all one small piece; it fits real nicely, just wedges right in there, and it's velcroed to the arm. It's pretty stable; I drove around a lot and I didn't really have any problems with it falling off at all. You can see on the 3D printed part that it's at kind of a goofy angle, because it needs to be pointed down and to the side since it's offset from the center, and there's a function in the code that allows for recalibration that accounts for the camera being offset.
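A sketch of how the deviation could drive the five-LED bar and the red-event log described above. The LED count, index layout, and thresholds are my assumptions; on the actual Pi, each index would drive a GPIO pin through a library such as RPi.GPIO, which is omitted here so the logic stands alone.

```python
import time

# Hypothetical 5-LED bar: index 2 = center green, 1 and 3 = yellow,
# 0 and 4 = red (outermost). Layout is illustrative, not from the talk.
LED_COUNT = 5

def led_for_deviation(deviation):
    """Map a normalized horizontal deviation (-1.0 .. 1.0, 0 = centered)
    to the index of the LED that should light. On the Pi each index
    would correspond to one GPIO pin."""
    d = max(-1.0, min(1.0, deviation))
    return int(round((d + 1.0) / 2.0 * (LED_COUNT - 1)))

def log_if_red(deviation, log):
    """Light the appropriate LED and append a timestamped entry whenever
    a red (outermost) LED comes on, mirroring the logging procedure
    described in the talk."""
    idx = led_for_deviation(deviation)
    if idx in (0, LED_COUNT - 1):
        log.append((time.time(), deviation))
    return idx
```

Someone reviewing the log afterwards could then match the timestamps of red-LED events against the route to look for what caused them.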
Now, just looking at some of the software. I didn't put any raw code in here, because I didn't think it would be that important, but I considered a point right in the middle of the screen, because whatever is right in front of the wheelchair would hopefully be sidewalk in the beginning. It will take the average color value there, then consider the rest of the screen and only keep the things that are in range. The little box up top is just kind of what I played with to find the range. I did this with around 30 different pictures of sidewalks, so I could get a feel for how the different properties of colors would affect different terrains. Although there wasn't really a correlation to any of the appearance properties, the range values were all pretty similar, so I could set one range and it would be applicable to all kinds of different terrains. I was happy that that worked out. And here are some results. Here we're starting on the south end of Fryklund Hall traveling south, and now the S3RI is on the left, if you're picturing the map. You can see that it'll recognize the sidewalk. It has trouble with shadows once in a while, but that's a problem to fix in future releases. It'll recognize the sidewalk and then find the center of the shape; that's what this pink dot is. Then, according to where the pink dot is horizontally on the sidewalk, it'll light up the corresponding LED. It'll be more evident in the next one. This was just showing how it sees things. It'll even have a little bit of object detection: you'll see when people are walking by, it tracks them, because they're obviously not gray like the sidewalk, so it'll pick those out as objects and move the center over a little bit. So that was just around the block with Fryklund and the S3RI. And here's a test from me driving; this is going up the hill on the east side of the block. I would drive straight and then drive towards the side of the sidewalk, and this is just to demonstrate how far
the centroid moves. So that would be the center, and the further it moves out to the side, the closer you get to the other side of the sidewalk. For the next video I'll move over to the second screen. Sorry for the portrait video; I should have hired somebody to video this for me. The LEDs aren't very bright in daytime, but again, that's not super important, because you wouldn't really be using LEDs for the actual application. You can see right now, super dim, but the middle green one is lit, and I was just keeping a fixed speed. As I move over, you can see the second green one, then the yellow one; you can kind of see the grass popping in when I get close to it, so you see yellow, and then yellow again, and as I move back, it goes back to the middle. It's definitely more apparent during the daytime when we're actually driving rather than in this video presentation. Now, just some statistics from that video: about six frames per second. It was a little bit over that, and it probably would have been higher if it wasn't gathering the frames and recording them each time. Traveling at the medium speed, that translated to 4.4 frames per meter, which I think is a pretty good rate for traveling down the sidewalk. The little battery that I have is 7.8 amp hours, and the Raspberry Pi takes about 1.2 amps depending on what's attached, so that equates to about 4 hours. You can see in the video it was still not super consistent, but it did handle the changing terrain. Subsequent versions would need shadow detection and a little bit more robustness to varying terrain. For some further work, you could add some object avoidance, so maybe if a ball were in the way, or some curb detection, anything relevant to that. You could also integrate this with the other systems I've shown: GPS alerts, caregiver intervention and control. Those would need some more technologies added, which would increase the complexity, and
overall, this would just need to be easier for consumers to access. It would probably need to go through a wheelchair company rather than just being an add-on, so that they can actually integrate it into their systems. Any questions?

I've talked to some people, and they think a lot of these systems are covered by insurance, so if they were inexpensive enough to pay out of pocket, I think they would be adopted more in the real world, because they mainly just exist in research right now.

In terms of the potential for caregiver intervention and control, do you envision that you could include an additional braking system, or would you be able to at least steer as well?

My original vision for this would be other systems attached to the Raspberry Pi. There are some modules that you can add to it that allow you to use it as a mobile interface, so with a lot more work you could maybe even put in a phone app that would connect to the camera and integrate with the steering system, so that you could have full control of the wheelchair.

Yeah, definitely. One of the main articles I based this off of was lane departure warning research. What that did was kind of the same thing: it used a normal camera and it would find the road markings, mainly for highway driving, because there are very consistent markings, dotted lines on the side, so there wasn't really a lot of surprise going on on the highway. That's where the technology was really applicable, and I thought I could use a similar idea. They use a different algorithm; they use more edge detection, and I use more of a color-based algorithm.