My name is Jared Keele. I'm from the University of Wisconsin-Milwaukee, where I'm a mechanical engineering student, and I have a physics degree from Carroll University in Waukesha, Wisconsin. I'm here to present my topic: a different approach to the industrial design process using integrated bitmap trajectory plotting, a process using linear robotics. I'm really excited, and I worked really hard on this.

First of all, I'd like to express my appreciation to this university, Stout. It's beautiful. The National Science Foundation grant gave us the opportunity to work with such great mentors in all aspects of this research; special thanks to Dr. Divenberg for his time and support as a great mentor. Thank you also to Mr. Cross, who was supposed to be here today. He's the person I worked with; he graduated last semester in industrial design at this university, so it was good to have feedback from this university as well. He wanted to jump into my project, and we worked one-on-one and bounced ideas off each other. Throughout this presentation I'll outline his design choices and the beauty of his renderings.

So, my project statement, and some background and facts. The conventional industrial design process today involves an industrial designer making a multi-perspective drawing that eventually makes it to an engineer, or back to the designer, to be drafted up in CAD, and he doesn't have that tactile "feel" aspect in the draft phase. Products are conceptualized and built with the consumer in mind, so revisions are critical, especially in an ever-changing market; staying current with whatever is being brought to market is crucial. Some facts that brought me into this, and why I liked it: there are more than 40,000 industrial designers in the United States, and that's just in the profession alone.
A lot of them, approximately 1,500, are in the design business, and it's about a $1.4 billion industry. Approximately one-third are self-employed, so they have a lot on their hands; that's four times the self-employment rate of all United States workers. Here's an interesting fact: Michigan, Rhode Island, Wisconsin, Indiana, and Pennsylvania are the top five states by percentage of industrial designers in the workforce, and it's projected that over the next decade a growing number of those industrial designers will need to find work in other professional services, or move up, because there's such high demand for this work.

I showed this slide earlier; let me bring it back into perspective. This group, Architectural Services and Consultation, did a study where they got some people together to sketch out the different shadings, drawings, and techniques you can do with a pencil. There are quite a few: there's landing, stroke weight, hatching, and shading, all different techniques you can use to describe perspective in a drawing. From there, they wanted to bring it to life and elevate a projected height field from that sketch, from a 2D plane, and they tried multiple different patterns to achieve a figure, in a sense. Here they show different pixelations: when you blow the image up, you can see the pixel density and how closely the pixels are related, and then the projected height field as well. In their words, it's an array of tools from physical to digital, 2D to 3D, utilized to test notions of rule-based design, techniques of projection, and graphic strategies of line weight, hatching, and shading. They place emphasis on a rigorous and iterative process of translating geometrical information from one medium to another.
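The pixel-to-height mapping described above can be sketched very simply: treat darker pixels (heavier line weight or shading) as taller points of the projected height field. This is an illustrative Python sketch, not the study's actual tooling; the function name and the linear scaling are my own assumptions.

```python
# Illustrative sketch (not the study's actual code): map a grayscale bitmap
# to a projected height field, where darker pixels (heavier line weight,
# hatching, shading) rise higher. Names and scaling are hypothetical.

def height_field(pixels, max_height=10.0):
    """pixels: 2D list of grayscale values 0..255 (0 = black ink).
    Returns a 2D list of heights: black -> max_height, white -> 0."""
    return [[(255 - p) / 255.0 * max_height for p in row] for row in pixels]

# A tiny 3x3 "sketch" with a heavy stroke down the middle column:
sketch = [
    [255, 0, 255],
    [255, 0, 255],
    [255, 0, 255],
]
field = height_field(sketch)
# The dark stroke maps to the maximum height; blank paper stays flat.
```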
So I did a case study, and Mr. Cross and I came up with some ideas we thought might be practical. Actually, it was interesting; Vanessa, you should have come forward. I would have liked to see your design thrown into my program, which I'll show later. I think we could have come up with something neat for the design; I like the box shape.

In Japanese there's the concept of kanso: simplicity, or the elimination of unnecessary clutter. Things should be expressed in a plain, simple, natural manner. It reminds us not to think in terms of decoration but in terms of clarity. So we omitted everything over-the-top fancy, because it's supposed to be a cocktail-napkin kind of sketch: a square rendering we can capture, take a picture of or scan as a 2D image, and build into an elevated height field, which I'll show coming up.

So we came together and he drew this outline; it's in the photo album he gave me, which I'll show, and we'll outline the process as we go along. It's called the Homework Reminder Keychain, and it's meant to be a keychain for high school students: something easy to wear or clip on, that reminds students when they have homework and what time it needs to be done. He showed different backpacks and positions you could attach it to, and we came up with other methods as well; there was a bracelet design and a necklace design. We wanted to see which one was better, but we didn't know the visual space we were working in. That's why I came up with this approach, and I'll show you, as we step through it, what it can do. So here was his bracelet design. To do this, the program uses a C++ library.
It's called Cinder, and it's the whole library behind the visual bit mapping; you can do a lot of other really cool stuff with it. Depending on the pixel density and the different line weights, we can elevate trajectories based on that and get a surface meshing. This wasn't the one we ended up going with, but it's a good photo to show height elevation from the ground surface up, and the field we're working in; the farther you get away, the more depth of field you get as well.

It works by mapping each individual pixel to a reference point. This is a blown-up image of it: a whole field of dots in a 2D plane that's elevated, a representation of X and Y. As you can see, where the pixels get closer and closer together, there's a slower change in elevation, it's much flatter; and as we get further out, there's a much greater elevation change. This is really zoomed in. If I zoom out a little more, you can see how many data points we're working with. Where the image gets darker, we use a lot more line weight to show density and a more subtle curve within the figure.

Then what I did was connect these points into a mesh. We take all these data points and mesh them together into a contour of the image. The goal here is not to design the whole surface of the final product; we just want a representation of the outer shell of the image, which I'll show in a couple of slides. This is just what's going on underneath the surface of the object. So here is the representation we ended up throwing in, the final one we went with. Representing that, he told me, is fukinsei: asymmetry, or an irregular shape; the idea of controlling balance and composition via that asymmetry.
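The stitching step just described, connecting the elevated grid of points into a surface, can be sketched as a regular triangulation: every grid cell becomes two triangles. This is a minimal Python illustration under my own assumptions about the data layout, not the presenter's MATLAB or Cinder code.

```python
# Minimal sketch of stitching a height-field grid into a triangle mesh
# (two triangles per grid cell). Not the presenter's actual code; the
# input is a hypothetical rows-by-cols list of heights.

def grid_mesh(heights):
    """Returns (vertices, triangles): vertices as (x, y, z) tuples,
    triangles as index triples into the vertex list."""
    rows, cols = len(heights), len(heights[0])
    verts = [(x, y, heights[y][x]) for y in range(rows) for x in range(cols)]
    tris = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x                              # top-left of cell
            tris.append((i, i + 1, i + cols))             # upper-left half
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right half
    return verts, tris

verts, tris = grid_mesh([[0, 1, 0], [1, 2, 1], [0, 1, 0]])
# A 3x3 grid gives 9 vertices and (2x2 cells) x 2 = 8 triangles.
```

This also shows why the point count explodes: a full-resolution scan produces one vertex per pixel and roughly two triangles per pixel.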
We didn't want something completely plain; we wanted something unique, a visual pop-out that would stand out, but with a shape that's very easy to clip on and won't get lost easily. One thing that steered us away from the bracelet was that with this design you can attach it to more than one thing, a backpack or a purse, without being limited to wrist wear; if you were to go to the bathroom or something, you could set a bracelet down and lose it.

So here's what's going on at the surface. We took that final image, blew it up, and at this corner we're creating a surface that overlays on top, overlaying on the skin of the surface. You can see here that this part is actually the MATLAB version of this corner right around there: it's the curved surface, and it's meshing all those points. Now, there are a lot of points here. I kept trying to shrink them down; that's one of the things I kept going back to and trying to revise. But once you shrink an image down, once you take the pixel density and suppress it, you lose a lot of the detail that is vitally important to how the program builds the image.

And here's a 2D rendering of the surface. If you can visualize it, this part is flat, it rounds around this corner, and it comes back on the other side; then this part is a cutoff extension on the other side of that. So this is the surface part of the image: it's got lines going vertical and horizontal, with stitching in between. As you can see, I'm working with a lot of data points, and that's one thing I was really trying to limit while working under constraints. So how is this all tied together now?
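An aside on the shrinking problem mentioned above: the detail loss is easy to see with the crudest form of downsampling, keeping every n-th pixel, where a one-pixel-wide stroke can vanish entirely. This is a hypothetical Python illustration, not the actual resizing tool that was used.

```python
# Sketch of why naive downsampling destroys the detail the program needs:
# keeping every `step`-th pixel can skip a thin stroke entirely.
# Illustrative only; not the actual image-scaling tool used.

def downsample(pixels, step):
    return [row[::step] for row in pixels[::step]]

img = [
    [255, 255, 255, 255],
    [255, 0,   255, 255],   # a one-pixel-wide stroke at (row 1, col 1)
    [255, 255, 255, 255],
    [255, 255, 255, 255],
]
small = downsample(img, 2)
# The stroke falls between the sampled rows/columns and disappears:
# small is [[255, 255], [255, 255]] -- pure white.
```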
I came up with the idea of a desktop robot. This slide shows the dynamics of the robot, its range of motion, and I tried mapping out all the possible points it can reach; there are more points in here, but pretty much this whole circle, this whole radial trajectory, is an acceptable build space. It would scan an image and then build out that surface plot in a 3D-printing-like way, using a hard resin filament. I just wanted to show all the different dynamics it could possibly handle; I'll go through the build in the next slide. So there's an input: you have that cocktail-napkin sketch, you scan it, there's a dynamic response from the robot, and then it gives the system feedback.

Here's the build. This was the first generation, and then a model showing the intricacy of how it all comes together. There's a base; two motors; a motor cap, which goes over the motors and holds everything in place; a gasket; a ring that is fixed to pivot around the base; the gear system; a base plate; a long cylindrical rod that raises the z-height of the extrusion head; and then the cap, or body, of the robot. And then the arm. The arm here is a static figure, so it's not proportional to the final desired size of whatever you want the build to be, because how to scale the object is still under debate. And then here are some of the robot dynamics I wanted to go into, and more about how that surface meshing works; I also have a video of how it actually builds, so you can see it. But I'll step through each of the photos I have here, and what's going on in my MATLAB code.
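The radial build space described above, the set of points the arm can reach, can be checked with the standard two-link reachability test: a point is reachable if and only if its distance from the base lies between |l1 - l2| and l1 + l2. A sketch with hypothetical link lengths, since the real robot's geometry isn't specified here:

```python
import math

# Standard reachability test for a two-link planar arm: the reachable
# build space is an annulus with inner radius |l1 - l2| and outer radius
# l1 + l2. The link lengths below are hypothetical, not the robot's.

def reachable(x, y, l1, l2):
    r = math.hypot(x, y)  # distance of the target point from the base
    return abs(l1 - l2) <= r <= l1 + l2

# With two 100 mm links, a point 150 mm out is buildable; 250 mm is not.
print(reachable(150, 0, 100, 100))  # True
print(reachable(250, 0, 100, 100))  # False
```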
In my MATLAB code I specify all the dynamics of the robotic arm itself. I gave it weights, I gave it motion, I gave it base coordinates; I measured the plastic weight of each arm at what I thought was an acceptable design value, and the inertias are in there, outlined in that table. As it builds, which I'll show in a second, here's a blown-up version of that height field. It stitches all those points together and labels each one, so if there were an error, the system would report an error for each point. Then it meshes all those points into an overlaying surface, which is all the black with the dots, and underneath it, I'm not sure if you can see it from far back, there are little gray points and lines showing the underlying layer. Now I'll show it in motion; I used a low-resolution screen capture, so it doesn't show the full fluidity, it's actually a pretty fluid motion, but it was using a lot of data on my end.
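The arm parameters specified in that MATLAB code sit on top of the kinematics: to trace the height-field trajectory, each target point has to be converted to joint angles. Below is the standard planar two-link inverse-kinematics solution as a Python illustration; it is my own hedged sketch, not the presenter's MATLAB code, which additionally models mass and inertia.

```python
import math

# Standard planar two-link inverse kinematics: given a target (x, y) and
# link lengths l1, l2, return joint angles (theta1, theta2). Illustrative
# sketch only; the presenter's MATLAB model also carries mass and inertia.

def two_link_ik(x, y, l1, l2):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))          # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Round-trip check via forward kinematics:
t1, t2 = two_link_ik(1.2, 0.5, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
# (fx, fy) recovers (1.2, 0.5) to within floating-point error.
```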
Okay, now we come to the product vision: what we took away, and how his vision compared to mine. I asked him, without looking at my version or my process, to hypothetically come up with some images of what he thought the final product would be. This was the bracelet he came up with using Styrofoam, poster board, and putty, and this is the keychain we used. As you can see, it goes through multiple, multiple different processes, and that's one of the takeaways: it ended up not even being what I thought he would design; he designed more toward this aspect. The actual final product, which he told me took about two weeks and which he then made out of plastic, was this model, and it's a lot more refined. But having that validation matters, especially if you had a client you were trying to get feedback from, or something already on the market that needed revisions on the spot, something you could handle without spending a lot of time or money.

This is a typical process from concept to what I call "out the door." Everything with an X or a cross through it I wouldn't eliminate, but I would try to shrink or minimize; if you're doing this with a small team of people, or even by yourself, that would be really beneficial. So you have the concept, then the draft; you may have multiple meetings, which is why I have it in a triangle; then you go on to the artist's rendering, then a budget, then the pitch, then a prototype model, and then CAD. But sometimes you go back, depending on the budget, or the sales don't go right. Also, with manufacturing, if you're manufacturing large amounts of these products, you've got to take into consideration
what the cost is and how to reduce it, and whether something can be cut out or modified. Then we ship it: bag it up and put it out the door for people to use.

Something that sums this all up is shibumi: beautiful by being understated, by being precisely what it was meant to be and not elaborated on. I think we can take that away, because if the product is confusing, or not pitched in precisely the right way, it might get negative feedback, which is what we're trying to minimize. And here are my sources: articles on where the market is today.

Q: What is that robot for, exactly?

A: Once you scan the image, you want to see it being built, or elevated. It's a way of bringing it from a concept to an actual physical, tangible model that you may be able to show a client. In a way, yes, but it uses a surface meshing, so it doesn't have all the detail below; you're only concerned with what's going on at the surface. Yet it's still pretty strong: with that image I showed several slides back, you can get a really tight mesh of the outer surface. Now, I wouldn't use as many points; if I could eliminate or reduce a lot of those points, that would be optimal, something to look into a little more.

Q: So it's a 3D printer that just prints the shell of the object?

A: That's a good point. First of all, it's a desktop machine, so you can have it right next to you, and it eliminates having to build the draft CAD model in order to 3D print, at least as things stand today. The idea is that you can take a cocktail-napkin sketch, drawn purely aesthetically, physically, see it, and manipulate it; because these are parts, you can always go back to the drawing board before you actually bring it
to a CAD or physical model, where you're spending a lot of time on the build.

Q: How long do you think the robot would take, from the end of the drawing to finished?

A: Program-wise, it spits it out relatively quickly; it's the conversion of all the data points I have now into MATLAB where there's a bit of a bottleneck.

Q: Is that due to the process you go through?

A: I don't think so. I think there are a lot of extraneous points, because if you look at the model, there's a lot of flat region. What I tried was cutting out all the zero elements, but that left a lot of holes where it was trying to divide by zero, and it then tried to mesh those back around again. So that was another slight hiccup, but eventually I see a future where something like that could very well be implemented.

Q: When you have that point cloud, with all those points really close together, what kind of accuracy would you expect the robot to need to build precisely?

A: Like I said, that's a great point. I would eventually like to reduce the number of data points. I think the problem is that when you take that image, the picture has such a high resolution; that's where the problem lies. There are programs that can scale it down: you can take a 100% image and reduce the pixel density to 10%, but then you're losing a lot of those data points. If you somehow took a smaller-resolution image to begin with, maybe using a lower-resolution camera, if you could just capture that square photo or scanned image at the right resolution, I think that would solve a lot of the issues.

Q: When you were speaking about a cocktail napkin, you started with an actual physical drawing?

A: Yes.
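The divide-by-zero holes described in that answer came from dropping flat (zero-height) points outright. One alternative, sketched hypothetically below and not the fix the presenter implemented, is to drop only flat interior points while always keeping boundary points and any point on a slope, so the mesh stays closed.

```python
# Hypothetical sketch of decimating a height field without leaving holes:
# flat interior points are dropped, but boundary points and points whose
# height differs from a neighbor (i.e., on a slope) are kept.
# Not the presenter's actual fix; an alternative to cutting zeros outright.

def decimate(heights, eps=1e-6):
    rows, cols = len(heights), len(heights[0])
    kept = []
    for y in range(rows):
        for x in range(cols):
            on_boundary = y in (0, rows - 1) or x in (0, cols - 1)
            sloped = any(
                abs(heights[y][x] - heights[ny][nx]) > eps
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < rows and 0 <= nx < cols
            )
            if on_boundary or sloped:
                kept.append((x, y, heights[y][x]))
    return kept

flat = [[0.0] * 4 for _ in range(4)]
# On a fully flat 4x4 field, only the 12 boundary points survive;
# the 4 flat interior points are safely dropped, and no hole is created
# because the boundary ring stays intact.
```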
Q: But could you start with a digital drawing and alleviate some of that?

A: We used both, actually, but yes, it was the pencil sketch drawing I outlined before. I'm assuming you're referring to what's called a Wacom, a digital tablet, to bridge that gap with the pixelation, essentially bringing the pen into consideration. Let me see if I'm getting this; I'm sorry, I'm not understanding what you're trying to say.

Q: You said you had some issues in terms of translating pixelation by scanning an image.

A: Yes.

Q: I'm saying a monochromatic situation might be easier, or if you just look at what kind of pen is used to create the image, you can control that image a little better.

A: So you're saying, like, a number two pencil versus a pen?

Q: A pen, for instance.

A: That's actually a very good point, because you get very different results if you use a pencil versus a fluid ballpoint pen, for instance. One thing was that we tried to stay with a number two pencil; a lot of this was done with a number two pencil, because you get those breaks, those chalk lines, where you get landings. That's when you put the pencil on the paper, draw, and then lift off: you get a tail end or a streak. A lot of those do come into play, but the program just automatically throws those data points out if they're small and minute.

Q: Do you envision taking this out into the field, in terms of applications for individuals with disabilities in your service? One of the things we work very hard to do is minimize the amount of travel people need to make to come visit us in laboratories, instead taking the services out to the field. You were saying the vision is to have something desktop, right?
Q: Could you envision this going a step further, so it would be available on site, bringing the technology to the individual?

A: That's actually a very good point. There were some articles, and I was also talking with a professor here, about students who maybe can't conceptualize 3D spaces that well but who have a great aptitude for, or extensive, drawing skill. They would be able to convey what they're trying to say, or just see what they can produce, and it would be a great tool in that respect for schools: to bridge that gap and shed some light on 3D modeling. It was also interesting that there was an article suggesting this kind of surface meshing with 3D printing could, in essence, if you could get that meshing right, be used to build something like skin grafts, which would be interesting too. We were also talking about data points and accuracy: in today's technology there's not just 3D printing with plastic; now we're getting a lot of hard liquid resins, and I think eventually, or even now, you can get a bit more accurate with those. If we could build those kinds of meshes underneath and have an overlaying surface, that would be really cool, with numerous applications as well.