My name is Cody Paulett and this is George Loutin. We're from the University of Tulsa, where we do research under the Institute for Information Security. Our talk today is about, well, it's a good title, Hack Like the Movie Stars: A Big Screen Multi-touch Network Monitor. We're going to run through multi-touch interfaces in general and talk about some of the different methods you can use to build them. Then we'll talk about the prototype we've built, and the tool we've built on it for large-screen network monitoring.

So there are a few different multi-touch methods. The popular one, the one on the Apple iPhone and some other new devices, is capacitive multi-touch. It's typically capable of about four touches at once, though some technology released not long ago has capabilities for up to ten touches at once. But that's not really something you can build at home; there's not much you can do with it in a homebrew capacity. There are optical methods that are a lot more feasible to implement at home. George and I have actually both built homebrew devices: we took apart LCD monitors and built our own multi-touch screens. His works a lot better than mine. Yeah, that's not saying much. I built mine at home, brought it in, and got it working as a research project at the university, so that's kind of what we're going to talk about.

There are a few different optical methods, and we'll get into them right now. All of the methods have an infrared camera in common: the camera watches the back of the screen and tracks blobs of light, one per touch, which the different methods produce in different ways. The first one, the one used in the Microsoft Surface, is called diffused illumination.
So the Surface uses this, and it's basically a very bright infrared light source inside a box, with a few cameras watching the back of the surface. You diffuse the light across the surface, so your cameras see an even tone of light. When something gets close to the surface and touches it, it reflects a little bit more light back, so you see a brighter spot. You're watching for contrast, and then you can track that in software; we'll get into what you do with it there. But diffused illumination sees contrast.

FTIR, frustrated total internal reflection, is a method developed by Jeff Han at NYU. You basically take a sheet of plexiglass and build a frame of infrared LEDs around it such that the LEDs shine into the edge of the sheet and the light is totally internally reflected. When you touch the surface, it changes the angle at which the light reflects, and that emits light out of the back. That's the method we use for ours. With FTIR you can use either an LCD panel or a projector to get an image onto the surface, and that's why you use infrared light for this: it doesn't interfere with the image at all. With diffused illumination you can really only use a projector, which is something we found out the hard way.

So this is our device. It's a big, ugly prototype made out of, I think, 80/20 extruded aluminum; it's the thing we all trip over as we come into the lab. We've got about 168 IR LEDs that line the top and bottom of it, and it's about a 4 foot by 30 inch screen. We run it with a 1280 by 800 projector, an InFocus. It's pretty nice. We have just standard plexiglass on the inside, and our projection surface is just drafting vellum; we ran two sheets through a laminator at Kinko's and pulled them apart, so one side is laminated. We painted silicone on the back, and that makes a nice connection when you touch the surface.
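To put a rough number on the total internal reflection described above (this calculation is my illustration, not from the talk): plexiglass has a refractive index of roughly 1.49, so infrared light launched into the edge of the sheet stays trapped whenever it strikes the surface at more than about 42 degrees from the normal, until a fingertip frustrates the reflection and lets light leak out toward the camera.

```python
import math

# Critical angle for total internal reflection at an acrylic/air boundary.
# Light hitting the surface at an angle (measured from the normal) greater
# than this is totally internally reflected; a touching finger "frustrates"
# that reflection and lets infrared light escape out the back.
n_acrylic = 1.49  # typical refractive index of plexiglass (acrylic)
n_air = 1.0

critical_angle = math.degrees(math.asin(n_air / n_acrylic))
print(f"critical angle = {critical_angle:.1f} degrees")  # about 42 degrees
```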
It connects with the plexiglass and causes light to be emitted out of the back. We have a modified PlayStation camera: you remove the IR filter and replace it with an IR band-pass filter so that the camera sees only infrared light.

Well, here's our device. There on the left is just one of the rails that holds the LEDs. One of the reasons we didn't bring it here is that we didn't think the TSA would approve of all the PCBs on there. Yeah, they kind of tend to think anything that looks like that is a high-powered explosive. You can see there in the top right that our projector actually shines into a mirror, just to shorten the behind-screen volume we need. In that image there's actually an Xbox camera sitting on a roll of tape. It is. We're pretty high tech. That was where we had that setup; we've changed it a little bit since then. The bottom picture is just another look at what this setup looks like.

The basic idea with the tracking software is that the camera feeds its image into the computer, and the tracking software then runs that image through a chain of filters: background subtraction, a few different things like that. The resulting image is then scanned for blobs. In the bottom-left picture, the light spots on the right side are actually the result of the whole filter chain. You track those, find the coordinates in the picture where those spots are, and then convert that to screen coordinates and feed it off to your application. The tool we use to do this is an open source tool by the NUI Group, currently called Community Core Vision; I think it's gone through three or four names. It works really well, it's multi-platform, and it handles taking input from the camera, running it through the filter chain, and bundling it in a protocol they call TUIO, for Tangible User Interface Objects.
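As a rough sketch of the tracking step just described, background subtraction, thresholding, and blob centroid extraction, here in plain NumPy rather than Community Core Vision's actual pipeline (function name and parameters are my own):

```python
import numpy as np
from collections import deque

def find_blobs(frame, background, threshold=40, min_pixels=4):
    """Background-subtract a grayscale camera frame, threshold it, and
    return the centroid (x, y) of each connected bright region ("blob")."""
    diff = frame.astype(int) - background.astype(int)
    mask = diff > threshold                      # bright spots = touches
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # Flood-fill one connected component (4-connectivity).
                pixels, queue = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_pixels:    # ignore single-pixel noise
                    ys, xs = zip(*pixels)
                    blobs.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return blobs
```

The centroids come back in camera-image coordinates; a calibration step then maps them to screen coordinates before they are sent to the application.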
TUIO piggybacks over the Open Sound Control protocol, and it's just a really lightweight protocol that you use to package information. As far as fingers are concerned, there are really only three messages it sends: touch down, touch up, and touch moved. So it's a pretty simple, lightweight protocol. In that diagram we have a dedicated tracker computer that sends the information over to the computer that powers the projection, but you can do it just as easily by looping back and running everything on one computer.

So, some of the things we want to do with this. My research is in collaborative user interfaces, so one of the things we want is large-scale, sort of immersive environments where we can experiment with collaborative user interfaces, specifically for visualizing information and the visual exploration of data. But we're also going to do some fun stuff with an elementary school: build some devices and some software for them to explore collaborative learning and see what we can do with that. But we are a security lab, so we try to apply some of our research to security too. Oh yeah, we also play games. The wall pong is especially fun.

So the tool we've developed, we call it the Dynamic Visualization for Network Environments, up there at least. I don't know that we really... we call it Dune. The primary goal is just to have this sort of flashy interface for displaying network traffic, almost like a screensaver. You've got this device standing there doing nothing a lot of the time, so if you have a cool real-time visualization of your network traffic, it's kind of neat to sit around and watch. But we also want it to be useful. Yeah, those of us in the security lab got kind of sick of watching them just play games on it.
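On the wire, TUIO actually sends OSC bundles of "alive", "set", and "fseq" messages each frame; client libraries derive the three finger events the speakers mention by diffing consecutive frames. A minimal sketch of that derivation (my illustration, not the talk's code):

```python
class CursorTracker:
    """Turn successive TUIO-style frames (session_id -> (x, y)) into the
    three finger events: touch down, touch moved, and touch up."""

    def __init__(self):
        self.cursors = {}  # session_id -> last known (x, y)

    def update(self, frame):
        events = []
        # New session ids are touch-down; changed positions are touch-moved.
        for sid, pos in frame.items():
            if sid not in self.cursors:
                events.append(("down", sid, pos))
            elif self.cursors[sid] != pos:
                events.append(("moved", sid, pos))
        # Session ids absent from the new frame are touch-up.
        for sid, pos in self.cursors.items():
            if sid not in frame:
                events.append(("up", sid, pos))
        self.cursors = dict(frame)
        return events
```

A finger pressing, sliding, and lifting thus appears as one "down", a stream of "moved" events, and one "up", which is exactly the interface a toolkit like PyMT exposes to application code.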
So we thought, well, maybe we'll build something that actually does something. I think by "sick" he means jealous, I'm not sure. So we built this on top of a network monitor that George wrote, which uses signature-based protocol identification. We implemented the interface in Python with an excellent library called PyMT that handles all of the TUIO reading and just gives you events you can respond to when a touch comes down on an object, that sort of thing. And of course that's built on top of TUIO.

So we're going to go ahead and... oh, I'll let you handle it. Oh, there we go. So we'll just kind of run through. Yeah, so here's a short video of what we've got. This is an image of the calibration of the screen, so you can get a feel for what it looks like, and then we turn the lights off because we decided that's better. Cody, you're the one who made the video. Okay, so this really shaky part shows you the filter chain that it goes through, because we thought this would be interesting. Right there, the pressure map is actually the raw image from the camera, and then it runs through a series of filters to get the image on the right, which shows the blobs that are actually transmitted over the TUIO protocol and used for the interface. Along the bottom there you can kind of see, though not very well, the filter chain, and it shows the images that result from the various filters being applied. This is the Community Core Vision application. We made a light pen because dragging your fingers doesn't work so well. Yeah. And here's the back of the screen. It's a little bit messier than when we took the photo earlier. Yeah, there's actually a crayon box that's part of the construction of the device. There's our projector, the mirror, and then there's the camera. It keeps getting moved, which is why we have to calibrate it every time.
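The talk doesn't show the monitor's code, but signature-based protocol identification generally means matching known byte patterns near the start of a stream's payload. A hypothetical minimal sketch (these signatures are illustrative; a real monitor uses far more robust rule sets):

```python
# Illustrative signatures for protocols mentioned in the talk. The loose
# "in the first bytes" match tolerates framing such as the NetBIOS session
# header that precedes SMB over TCP.
SIGNATURES = [
    (b"SSH-", "SSH"),                              # RFC 4253 banner
    (b"GET ", "HTTP"),
    (b"POST ", "HTTP"),
    (b"HTTP/", "HTTP"),
    (b"\xffSMB", "SMB"),                           # SMB1 header magic
    (b"\xfeSMB", "SMB2"),
    (b"\x13BitTorrent protocol", "BitTorrent"),    # handshake pstrlen+pstr
]

def identify_protocol(payload: bytes) -> str:
    """Guess a protocol from the first payload bytes of a TCP stream."""
    for pattern, name in SIGNATURES:
        if pattern in payload[:80]:
            return name
    return "unknown"
```

Streams the signatures don't recognize would be the "red stuff" mentioned later: traffic rendered without a known protocol label.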
Okay, so here's an image of a really simple network. We've got clouds that correspond to subnets. You can see across the bottom: slash 8, slash 16, slash 24. You press those, and that corresponds to the size of the clouds. We lay out the hosts we've detected inside the clouds and then show the streams we detect between them, TCP streams, in different colors. Here's another one that's a little bit bigger. We've got SMB, SSH, and HTTP traffic that we're showing here, and we're kind of dragging it around and playing with it. This is Cody using the light pen, which is a dry erase marker that he stuck a battery and an IR LED into, because it's kind of frustrating trying to drag your hands along the board for too long. We have it set up, you can sometimes see it at the bottom, with an area where you can put in some text to define the center cloud, the bigger one, as the subnet you're focused on. Then everything outside of that is clouded based on your selection. Yeah, and in this one, blue is BitTorrent. Which was kind of an interesting discovery. I had a little talk with somebody about that. I'm not sure what the red stuff is, but you can see all of these clouds out here that correspond to the peers in the BitTorrent sessions, and the cloud in the center is actually our network. Please don't write down the IP addresses. You can write down all of the other IP addresses.

If you're interested in looking at any of this stuff and building it, there are some really tremendous resources at the Natural User Interface Group, nuigroup.com. I always type .org, and it's not right. They have forums and wikis about every conceivable manner of optical multi-touch. And PyMT, again, is the Python library that we use. Pyglet is an OpenGL library for Python, which PyMT uses as well. And there was actually another slide here. There it is.
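Grouping detected hosts into /8, /16, or /24 clouds, as the slash buttons along the bottom of the interface do, can be sketched with Python's standard ipaddress module (an illustration, not the tool's actual code):

```python
import ipaddress
from collections import defaultdict

def group_hosts(hosts, prefix_len):
    """Group IPv4 addresses into subnet "clouds" of the chosen size
    (prefix_len of 8, 16, or 24, matching the UI buttons)."""
    clouds = defaultdict(list)
    for host in hosts:
        # strict=False masks off the host bits to get the containing subnet.
        net = ipaddress.ip_network(f"{host}/{prefix_len}", strict=False)
        clouds[str(net)].append(host)
    return dict(clouds)
```

Pressing a different slash button just regroups the same host list under a different prefix length, which is why the clouds visibly merge or split.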
And we're funded by the National Science Foundation, so we want to acknowledge that. They let us buy all that crap. So, I think we actually have some time if anybody has any questions for us. We finished a little bit early, so... Thank you. Okay, Megan. So, go ahead. Oh, that's you. The architecture for the... What was the... repeat the question? Sorry. Megan kind of knows of our plans for the larger, sort of more immersive setup, we call it a cave, which is actually going to be multi-touch devices all hooked together, so she asked how we're actually going to do that. We've got a pretty cool idea in mind: one main computer that runs a projector for all three devices, and three sort of satellite computers that do the tracking, so each of those has a camera plugged into it. Those three computers then send their information to the main computer, and that handles the interaction for everything.

Louder. Oh, okay. Was there somebody back there I thought I saw? Yeah. Any plans to add, like, a flashy background, skinning, anything like that? Oh, yeah. We've really only been working on this for not a whole lot of time; we just kind of wanted to show you the toy we've been working on. But yeah, as we continue to work on this... Oh, yeah, we have no plans in that regard at the time. We were kind of hoping that we'd build this thing that would look like stuff out of the movies and that maybe some usefulness would just sort of emerge. Like I said, the main goal was really to build a toy, which I think stands on its own merits. But if it turns out that you can just have it projecting on a wall and play with it, maybe you can discover something about your network that you didn't realize was happening, like that torrent. Yeah. Oh, right. Yeah. And one of the things that we actually talked about is that it would be kind of interesting if we could integrate, like, system administration into it.
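One way the satellite-tracker architecture could fit together (purely a sketch under my own assumptions; the talk only describes the idea) is for each tracking computer to send normalized touch coordinates tagged with its screen index, which the main computer maps into one wide global coordinate space, here assuming three screens side by side:

```python
def to_global(screen_index, x, y, num_screens=3):
    """Map a satellite tracker's normalized (0..1) touch coordinates into a
    single global coordinate space spanning all screens laid side by side.
    The screen_index tag and the side-by-side layout are assumptions, not
    details given in the talk."""
    return ((screen_index + x) / num_screens, y)
```

With this scheme the main computer's interaction code never needs to know which camera saw a touch; every event arrives in one shared coordinate system.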
Like, you have your own hosts inside the center of it, and you might be able to shut down connections and adjust firewall rules depending on the sort of stuff you can see. But that's all just conjecture at this point. Yes. I'm not sure where. Do you know where? Hmm. I didn't expect a question like that to stump me. Yes, it's available. Probably not. I'm trying to figure out how to tell you where to go tomorrow, once I finally figure out how to get it up. Okay, yeah. Okay, I think there was a URL on here, right? Like at the beginning of this talk. Hopefully it should be in the slides; if not, it needs to be added to the template. Yes, we'll do that. Great idea. We'll do that. What's that? I won't. So that's kind of the beauty of this: something like the Surface costs about $15,000, and the device that we built cost less than $2,000, including the computer to run it. You can build it with a lot of things you can get at Home Depot, so it's kind of nice. We're both computer science. That probably explains the crayon box. Thanks.