One of the things I really like about the computer science department at UNC is that the professors not only span a huge area of computer science, but they also mix well across the different areas, so you'll see professors working together on different projects. Right now in my group, for example, we're doing real-time systems, and one of the big applications is self-driving cars, so we're working with some people in the computer vision lab to figure out how we can improve self-driving car applications to make them safer.

I work in an area called real-time systems. What that means is that I don't care only that a program runs correctly; I care that it runs on time. And that doesn't just mean that it's fast: it means it's predictable. In a system like a self-driving car, I care that pedestrian detection runs at a specific frequency and always finishes on time.
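As a rough illustration of the difference between "fast" and "predictable", here is a minimal sketch of a periodic task in Python. The `pedestrian_detection` function and the 10 Hz rate are hypothetical placeholders, not how a production autonomy stack is actually scheduled:

```python
import time

PERIOD = 0.1  # hypothetical rate: run detection every 100 ms (10 Hz)

def pedestrian_detection(frame):
    """Placeholder for the actual detection work."""
    ...

def periodic_loop(get_frame):
    next_release = time.monotonic()
    while True:
        start = time.monotonic()
        pedestrian_detection(get_frame())
        elapsed = time.monotonic() - start
        # In a real-time system, overrunning the period is a fault,
        # not just a slowdown: predictability is the requirement.
        if elapsed > PERIOD:
            raise RuntimeError(f"deadline miss: {elapsed:.3f}s > {PERIOD}s")
        next_release += PERIOD
        time.sleep(max(0.0, next_release - time.monotonic()))
```

A real-time system would guarantee the deadline ahead of time through schedulability analysis rather than just detecting misses at run time, but the sketch shows what "always finishes on time" means as a requirement.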
There are a bunch of other research areas in the department: security, graphics, robotics, medical image analysis, vision. People also work on things between the areas, so there's a lot of different research happening. The department does a lot of different things, so there are graduate students and professors working together on research, as well as undergrads learning about computer science and doing research with grad students and professors.

One thing I really like about the research the undergrads are able to do here is that they don't have to know a bunch of stuff going into it. We have an undergrad working in our group, and he didn't know as much as the grad students did ahead of time, so he helped with little pieces of the project to get familiar. I've seen undergrads helping on anything ranging from tasks like running a user study, where they'll meet with the different users and try out the product, or watch the video and determine if it's accurate. As they progress through working in the group, they get more involved in the general project. So I've seen a huge range of it, but the professors and the grad students are pretty good about getting the undergrads up to speed without expecting them to already know everything that it takes four years to learn.

Something that I think helps a student coming into the UNC CS department be really successful is being curious about the field. You can get by by taking classes, maybe doing some internships, getting a job, and you can be happy. But the people I've seen really thrive are the ones who want to learn more, so they'll take a class and then spend the summer doing a project on their own, something that was inspired by the class but completely based on their own interests. I think that curiosity and that desire to apply what they're learning to their own lives is really great.

These are images taken of the South Building, which is one of the buildings here at UNC. Somebody went around the building and took a bunch of photographs of it, and the idea is to go from these individual images and work out an actual 3D model of the scene. In this case we went from these images and turned them into this 3D model here, where each one of these red rectangles is one camera. The idea is basically that we look at the images and detect points that could be recognizable, say corners of windows. Once we have that information, we try to find corresponding points in multiple images. If I have these two images, say, I might want to find points on the door that match up, and then I determine that this is a correspondence between the two images. From there, I work up to figure out where these points might lie in 3D and actually place them there, and at the same time recover the positions from which the images were taken, each one of these red rectangles here.
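For a sense of what those steps look like in practice, here is a minimal two-view sketch using OpenCV. It illustrates the general detect/match/triangulate recipe described above, not the group's actual pipeline, and the camera intrinsics matrix `K` is assumed to be known:

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Sketch of the first step of structure from motion for one image pair."""
    # 1. Detect recognizable points (corner-like features) in each image.
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Find corresponding points between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

    # 3. Recover the relative camera pose from the correspondences,
    #    discarding mismatches flagged by RANSAC.
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    pts1 = pts1[inlier_mask.ravel() == 1]
    pts2 = pts2[inlier_mask.ravel() == 1]
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # 4. Place the matched points in 3D (triangulation).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (X_h[:3] / X_h[3]).T  # camera pose and 3D points
```

A full structure-from-motion system chains many such pairs together and refines everything jointly, but each red rectangle in the model corresponds to a recovered pose like the `R`, `t` here.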
In the end you get this 3D model, and this is the first step in going from imagery to nice, realistic models. Our final output, once we have these camera positions, is that we can build up denser and nicer representations of the scene. Here we had a drone flying over a golf course; we got a bunch of individual images and joined them together using this first approach, which is called structure from motion, and once we do that, we can build up actual realistic models. Someone might want to use this for mapping the terrain of an environment, or for obtaining a 3D model of a building that they could view in virtual reality or augmented reality; other applications could include autonomous driving. There are so many applications for figuring out where cameras or images are located, and what the shape and structure of terrain or buildings are.

I think the thing that's most rewarding is going from basically having no information, just the images, to actually building up things that look really realistic, where you can have this idea of visiting somewhere you've never been before, just because you can build a 3D model of it and look at it.

Once we have these 3D models and we want to visualize them, one of the things that's missing is that we don't have any people in the scene, and our method right now doesn't really get ground surfaces. So one thing we've been working towards is placing people into the scene by detecting them in the individual images and then reasoning about where they might be standing.

One of the other things we've worked on at UNC is building up 3D models of the inside of the body. If a doctor suspects that a patient has throat cancer and wants to see if there's a tumor, they'll do an endoscopy, where they thread a camera through the nose and down into the throat and get a video of the throat; there they can see whether a tumor is present. One of the problems is that when the doctors actually want to perform some procedure, like removing the tumor, they're not able to use this data in any meaningful way. They can just look at, say, a CT scan and estimate roughly where they think the tumor is based on what they saw in the endoscopy. So one of the projects we have been working on is to take the endoscopy and turn it into a 3D model that we can view in the space of the CT, where the doctor can actually see the texture and the color of the tumor and better localize it for treatment. We've also done work on this in the colon.

One of the things we've become interested in is how people are going to interact in VR. It's not enough to just have somebody be physically present: they should look like themselves, they should move like themselves, they should work the way that they work. So we've been using some brand-new equipment to physically scan people's avatars, take their whole body and bring it into virtual reality, and then try to capture the motions they produce when they move. Essentially, we've developed a pipeline where we can bring someone into our lab, take a scan of their body, use a Kinect sensor to capture properties of their movements, analyze those movements, and, drawing on a database of high-quality human motion, generate their walking style and a virtual avatar that they can use to play games or interact socially, one that looks and moves the way they do.

So how long does it take to do something like this? Right now we can bring someone into our lab and take a physical scan of their body, which takes about a minute. That scan gets uploaded to the cloud and processed, and we do the whole walking study while it's processing. By the time that's done, usually about 15 minutes later, we can have their avatar ready to put into a game. The end result is that we're now looking at how to design social experiences that revolve around playing games and interacting with other people using your own avatar. What you're seeing here is my co-author on these papers and the lead researcher on a lot of these projects, Sahil. He and I are going to play a game where we're moving around in VR, interacting with virtual agents, and talking to the crowd, all using our own bodies and our own walking styles.

The other aspect of my research here is leveraging some of what we've learned about how people and other entities navigate to do some work in self-driving cars. We've built a simulator, in partnership with some colleagues at the University of Central Florida, where we can test cars driving around on the road, subject them to emergency conditions, and see how they react. This vehicle is actually planning and navigating autonomously, and we can do things like having a car pull out in front of it; it swerves to avoid it safely. The idea is that when we use simulation to test these kinds of things, we can put the vehicles in dangerous situations where we might not be willing to put a real driver, but we can still test and evaluate how safe autonomous vehicles are.
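As a toy example of what an automated scenario test might check, here is a sketch in Python. The numbers and the `planner.react` interface are made up for illustration; they are not the simulator's real API:

```python
def time_to_collision(gap_m, ego_speed, lead_speed):
    """Seconds until impact at current speeds; infinite if not closing."""
    closing = ego_speed - lead_speed
    return gap_m / closing if closing > 0 else float("inf")

def test_cut_in(planner):
    # A car pulls out 20 m ahead at 5 m/s while our vehicle does 15 m/s.
    ttc = time_to_collision(gap_m=20.0, ego_speed=15.0, lead_speed=5.0)
    assert ttc == 2.0  # 20 m gap closing at 10 m/s
    # The planner must choose an evasive action well before impact.
    assert planner.react(ttc) in ("swerve", "brake")
```

The value of simulation is that a scenario like this can be rerun thousands of times with varied parameters, with no real driver at risk.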
Ultimately, these kinds of vehicles will have a huge impact on the future of transport and life in the United States and around the world, once we can solve the problems of safety and those kinds of issues. These are just some of the projects that researchers like me get to work on here at the University of North Carolina, and that researchers work on all over the country. Pursuing a career in computer science, and an advanced degree in this field, gives you the opportunity to work on cutting-edge problems that have direct impacts on the lives of billions of people. With autonomous driving, for example, we're on the cusp of a revolution that will change the way every person on this planet lives. These are the kinds of problems we get to tackle and the kinds of impacts we get to make when we work in computer science. And you can work in entertainment; you don't have to just work in engineering or autonomous driving. There are millions of possibilities when you get a degree in a field like this.

There's a certain kind of thinking that's required for computer science. You have to be able to break problems down logically and think of things in steps, because a lot of times what's intuitive for humans is not so intuitive when we try to encode it in machines. One thing I would suggest is to try logic puzzles and to think about things as a sequence of steps: can you think about how a design gets created from start to finish, and work that out? Obviously there's a lot of programming, so familiarity with computers is a good idea. It's never too early to start, and in fact there are more tools available now than ever to self-study basic computer science concepts and learn to do some basic programming on your own. There's a multitude of resources, even before you get into college, that can give you a jumpstart on how to solve these kinds of problems.

I work in surgical robotics here at UNC in the Department of Computer Science. Mainly what we're looking at is enabling minimally invasive surgery through new robotic technologies, and a lot of these technologies look very different from what you would expect robots to look like: they can look like small, needle-sized, tentacle-like devices. That's the class of robots we work on here from a surgical robotics perspective.

Currently in minimally invasive surgery, a lot of the tools end up looking like this tool here, which, as you can see, is what would be inserted into the body, but is very long and straight. This means that in order to reach different parts of the anatomy, physicians frequently have to make very large holes in the patient to get the tools to the piece of anatomy they want to manipulate surgically. What we're looking at doing, instead of having long, straight devices like this, is adding small, curved devices to the physician's toolbox: things that look like this, maybe, or this. With these small, curved devices, we can access different pieces of anatomy in a less invasive fashion. If there's, say, a tumor in the pituitary gland, which sits at the base of the skull, then rather than opening up large holes in the anatomy to get to it, we can enable the physician to go in through the nasal passageway and do that surgery in a much more minimally invasive fashion.

That's what this robot here is designed to do. It's made up of nested, pre-curved nitinol tubes, which is what these are. Nitinol is a superelastic nickel-titanium alloy, and the property that makes these tubes special is that once we set their shape, putting a curve into a tube, from then on it will be flexible but return to the shape we set. What this allows us to do is take multiple tubes and nest them inside one another; we can then rotate and translate the tubes with respect to one another, which causes the tubes themselves to take very interesting curved shapes through space. By rotating and translating these tubes, we can manipulate tissue through the nasal passageways without having to make large holes in your anatomy.

This doesn't look like what you think a robot looks like, what you see on TV with big arms moving around. Basically, it's a robot because it's manipulated by these motors and it causes some change in the world, and that's what we think of as a robot: something that's moving and causing change in the world. These are the motors here that drive this robot. There are six of them in total because there are three tubes, and it takes two motors to manipulate each tube.
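Since each tube contributes exactly two degrees of freedom, the robot's configuration can be written as six numbers. Here is a small illustrative sketch of that bookkeeping in Python; the real control code has to solve the mechanics of the nested tubes to predict the tip shape, none of which is shown here:

```python
from dataclasses import dataclass

@dataclass
class TubeState:
    rotation_rad: float  # axial rotation of this tube (first motor)
    insertion_m: float   # translation of this tube into the body (second motor)

def motor_targets(tubes: list[TubeState]) -> list[float]:
    """Flatten per-tube states into one target per motor.

    Three nested, pre-curved tubes x two motors each = six motors total.
    The robot's curved shape is a nonlinear function of this vector.
    """
    targets = []
    for tube in tubes:
        targets += [tube.rotation_rad, tube.insertion_m]
    return targets

# Example: rotate the innermost tube a quarter turn and insert it 2 cm.
config = [TubeState(0.0, 0.05), TubeState(0.3, 0.04), TubeState(1.57, 0.02)]
print(motor_targets(config))  # six values, two per tube
```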
What we do here at UNC in the Department of Computer Science is take sensory input for the robot, in the form of CT scans of the patient's anatomy and magnetic tracking systems that tell us the state of the robot with respect to the patient's anatomy. We take that input, plus a specification of what the physician wants the robot to do in the anatomy, and we determine, with motion planning algorithms, what each motor needs to do in order to make the robot accomplish the task the physician is specifying.
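In outline, that planning step maps a configuration-space query onto motor motions. The sketch below is deliberately naive: it interpolates straight through configuration space and gives up if the path collides, where a real planner would search around obstacles (for example with sampling-based methods). The `collides` callback standing in for the CT-derived anatomy model is an assumption for illustration:

```python
import numpy as np

def plan_motion(q_start, q_goal, collides, steps=100):
    """Straight-line sketch of a configuration-space plan.

    q_start, q_goal: configuration vectors (e.g., the six motor values).
    collides(q):     returns True if configuration q would push the robot
                     into anatomy segmented from the patient's CT scan.
    """
    q_start, q_goal = np.asarray(q_start), np.asarray(q_goal)
    path = [q_start + (q_goal - q_start) * t
            for t in np.linspace(0.0, 1.0, steps)]
    if any(collides(q) for q in path):
        return None  # a real planner would search for a detour instead
    return path
```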
To evaluate our algorithms and experiment with this robot, we're currently doing work in 3D-printed anatomical models, like this 3D-printed model of the skull base here. We get these by taking CT scans of patients, segmenting out the relevant volumes from the CT scan, creating a mesh of the surface, and sending that to be 3D printed. Doing that allows us to experiment in anatomy that's exactly the same as what would be in the patient, but without having to experiment with tissue at this stage.

To get into this work, you don't need a medical background. One of the wonderful things about UNC is that we work very closely with the physicians and surgeons in the medical school here, so they handle the medical background and we come in with a computational background. If you have experience with programming, with math, with physics, these are all really great stepping stones into work like what I do here with this robot.

What I really like about robotics specifically is that in a lot of other fields of computer science, you write code and can solve very complicated problems, but at the end of the day a lot of it stays contained in the computer. What's neat about robotics is that the code you write and the algorithms you develop actually end up moving things in the real world, because these robots go around changing the physical state of the world. That's not something you get in a lot of areas of computer science, and it's something I find particularly satisfying.

A lot of what I was describing with the surgical robots, where we determine how to move the robots in the real world to accomplish some task, doesn't have to stay restricted to the surgical robots. A lot of the algorithms we develop apply to these robots as well, which look much more like what you expect when you think of robots. We have three robots here in the lab: this robot, which is the Fetch robot; this little guy, which is the Nao robot; and this large robot over here, which is Baxter. We use all three of these to experiment with our motion planning algorithms and see how well they perform in the real world.

Most of our robots are stationary; we call them manipulators because they have arms, or something like the tentacles you saw on the surgical robot, and they manipulate the world. Robots like the Mars rover, or MIT's running, animal-inspired robots, are mobile robots. Those are two different fields within robotics, but a lot of the algorithms apply to both. This robot here, Fetch, is actually a mobile robot with a manipulator arm, so we call it a mobile manipulator: Fetch drives around in the world and then can manipulate the state of the world with its arm as well. In some senses, the algorithms required to tell Fetch where to go in the world before it starts manipulating things are very similar to what you would see when trying to determine how to move the Mars rover around the surface of Mars.

For these robots, the sensors are very different. This robot has an RGB-D depth sensor, like what you would find in the Microsoft Kinect, and this robot has some ultrasonic sensors, as does this one. We use a lot of cameras and other devices to get the state of the environment as well as the state of the robots. That's very different from the surgical robots, where we're looking at magnetic fields and CT scans, but at the end of the day it's just some input telling you the state of the world, and you're trying to generate the output for the motors to tell the robot what to do.

One of the big challenges with mobile robots and mobile manipulators like Fetch is battery life. A lot of what we look at, especially on the algorithmic side, is how to make the robot do the things it needs to do while consuming as little power as possible. That means making the motor movements very efficient, making the planning for the paths the robots take very efficient, and even making the code that executes on the robot very efficient, because the CPUs take up power as well, not just the motors. So a lot of what we do is determine these motions for the motors and write code that's optimal from a power-consumption standpoint, because battery life is a very big constraint and a very applicable topic in robotics.

At the end of the day, what you want to do is take some sensor input and determine how to move the motors in the robot to accomplish the task. That's the same for both the surgical robots and these robots, but the sensors look very different, the motors look very different, and how the motors actually move the robot in the world looks very different between these cases.
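That shared structure, sensors in and motor commands out, is often drawn as a sense-plan-act loop. Here is a schematic sketch of it; every function is a placeholder for the very different components each robot actually uses:

```python
def control_loop(sense, plan, act, at_goal):
    """Generic sense-plan-act skeleton shared across the robots.

    sense():   estimate the state of the robot and the world
               (CT scans plus magnetic tracking for the surgical robot,
               RGB-D and ultrasonic sensors for Fetch).
    plan(s):   choose a motion, e.g., one that also minimizes energy use.
    act(m):    send the resulting commands to the motors.
    """
    state = sense()
    while not at_goal(state):
        act(plan(state))
        state = sense()
```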