Hello, and good morning, good afternoon, and good evening, everyone. Thanks for joining the fifth installment of the ANA Avatar XPRIZE Meet the Teams interview series. Today we're here with Team Northeastern and several of their team members. This is going to be a really interesting session, so we're glad you took the time to join us. My name is Colin Pairtree. I'm the team coordinator for the ANA Avatar XPRIZE, and I'm joined by my colleague and technical consultant, Jackie Maury. We are here once again with Team Northeastern. There are six members of this team joining us today, and they have a lot of great, interesting information and content to share, so we are very excited to move forward.

Before we begin, just a few notes about this webinar. You will all be muted during the call, but you're welcome to use the chat function at the bottom of your screen to get in touch with us or to say hello to the others on the call. We also invite you, throughout the session, to submit questions for Team Northeastern into the Q&A function. We'll take some time at the end of the call to add those to our discussion, so please do send us your questions; we'd be happy to take a look at them. A quick overview of our agenda today: we'll say hello and welcome to you and to Team Northeastern, I'll pass it over to the director of the Institute for Experiential Robotics at Northeastern in just a moment, and, as I mentioned, we'll take some time at the end of the call for questions and discussion with the team. So be sure, once again, to submit those questions.

So, once again, we're joined today by several members of a university-based team in Boston, Massachusetts: Team Northeastern University. The members of this team are decidedly multidisciplinary in their work and studies, and that leads to a highly dynamic approach to the avatar system you're about to hear more about. We'll be viewing presentations from these team members, so be sure to submit your questions; we'd love to use them as discussion points, and we'll make time throughout the session to address them. To begin, I'm really excited to introduce Taskin Padir, who is the director of the Institute for Experiential Robotics at Northeastern University and the leader of Northeastern's Avatar XPRIZE team. Taskin, welcome. It's a pleasure to have you and the rest of your team here; please take it away.

Sounds good. Sounds good, Colin. Thank you so much for the introduction, and hello, everybody, and welcome to this session. I will share my slides and then try to give you an idea of what we're trying to do and why. So let me start with the presentation. What keeps you going? Why avatars? Why did we decide to participate in this? Sometimes it helps to reflect back in time and see what technology has achieved. These are the early attempts to fly — a technology we take for granted these days, but it all started with failures. It all started with exploration, with trying to push the limit, to push the envelope toward something that hadn't been done before. With that in mind — those are long videos, so I'll just keep scrolling — I want to take you back five years, to when we worked on the DARPA Robotics Challenge. These are robots embracing the risks, right?
These are robots trying to achieve things that hadn't been done before, and every single team on the field experienced one failure or another. This is one of the famous videos of the DRC; you can find it on YouTube. Now we take flight for granted, and you saw some of the grandfathers of the Atlas robots from Boston Dynamics in the previous video. And this is where the technology is now. So for those of you who think that robots will never get there, or that the technology will not get there, I think we as a community are making great, great progress.

Now, what is in our inventory in terms of projects we've worked on, and why are we doing the Avatar XPRIZE challenge? This is a quick time travel. This is exactly the team I worked on in 2015, and this was an avatar. This was the DARPA Robotics Challenge, which addressed disaster response scenarios. What you see is a robot doing a number of tasks: it just drove this vehicle, it's going to open the door, go through the door — disaster response, inspired by Fukushima. But what you don't see is a team of 20 or 30 people, a thousand feet away, sitting in the garage, not seeing what the robot does directly, but seeing through the eyes of the robot and trying to complete these tasks. So these robots were not fully autonomous; they were supervised by human operators. We kept pushing the limits of task completion with robots that are complex in shape and form, enabling operators to complete hard tasks, such as picking up a drill, turning it on, and running it. I was one of the team leads for Team WPI-CMU, and we did fairly well in that competition.

Aside from that, here are some of the visuals: a complex robot, a team of people watching the robot on the field, and then another team behind the scenes that is really controlling the robot, interpreting the data and understanding what happens. Now, how do we change that paradigm through the Avatar XPRIZE? How do we go from many-to-one to, perhaps, one-to-one? I will talk about some of the lessons learned from the DRC; even though it was five years ago, it still drives our research and our studies. We truly believe in maximizing the utility of the human-robot team — the human-machine team — in achieving these complex tasks, making robots of all sorts and forms an avatar for human capabilities. And oh boy, did this become important in the past six or seven months, right? How many times have you thought, "I wish I had a robot avatar that could do my job somewhere far away from where I'm located"?

Here are some more visuals from the challenge, just to give you an idea of where we were, because I want to make a point: there's no way that an untrained operator can operate a humanoid robot by looking at these screens. You definitely need to train your operators. So what is the secret sauce? What did we do? We looked at the tasks very carefully during the challenge. They were defined, but there was a lot of variability as well — uncertainty and variability. You cannot plan everything and assume that humans will be able to control these systems end to end. For a humanoid robot like this, you plan the first task out. OK, it's simply the door-opening task.
That task is very intuitive for a human, but how will you do it with your avatar? You have to plan it out: you have to reach, you have to turn the handle — you have to factor your tasks. And this factorization is critical because it enables human-robot teaming: where do you want the operator's input to come into the picture, and what does the robot do autonomously? One of our key contributions here — one of our key success stories — was the behavior of nudging. When the robot decided, "OK, here's where I'm going to put my hand to engage the door handle," we allowed the operator to nudge that decision, because there's always that deeper human intuition: "Wait a minute, you won't be able to grab that handle, you're too far off-center," and things like that. This is what we were building on, and it enabled us to formulate complex models of computation for task completion. Perception will be a key aspect as well, because at the end of the day we will blend the autonomy of the robot with the supervision of the human operator to achieve the true promise of avatar technology — and that's a step-by-step process. Manipulation, motion planning, and task planning have been a huge focus of our teams, and we can generate multiple solutions to perform the same task using a complex system.

Now, very briefly, what did we learn? I think the capabilities speak for themselves, but what was really critical as an outcome of the DARPA Robotics Challenge? Reliable hardware is critical. You cannot just put everything on the software side and hope that the hardware will cooperate; you have to have a strong hardware design. Reliable software is critical. Those videos and pictures have stories behind them. For example, here, we dropped the drill during the day-one tasks because a one-off software bug triggered and the robot simply opened its hand while holding it. So reliable software is critical. This, I think, is one of the most important aspects for the Avatar XPRIZE challenge from my perspective: we need to validate the robot's — the avatar's — behaviors. At the end of the day, we are aiming to maximize the utility of the human-robot team. This is essential: a human understanding the capabilities and limitations of the robot avatar is important, and that is what we are aiming for. This is how we think we'll be able to complete the tasks and scenarios outlined in the challenge. A rigorous validation strategy is critical, too. Even though we are trying to lower the barriers to operator training, it's always difficult, especially if your avatar is a complex system such as a humanoid robot. And finally, a capability on the avatar's side for understanding errors — whether they stem from uncertainty or from the limitations of the technology — is critical; you have to understand the source of errors.

So that was a quick wrap-up of what we did for the DRC five years ago. But what have we been doing since then? We've been working with NASA's humanoid robot technology, Valkyrie, moving more and more autonomous behaviors to the robot's side to achieve more complex tasks. These results are still a couple of years old, but here the robot acts autonomously: the human just says, "Go pick up the box and deliver it," right?
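As a minimal sketch of that supervisory "nudging" pattern — the planner proposes, the operator corrects — here is what the idea can look like in code. This is illustrative only, not the team's software; every name and offset is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # end-effector position, meters
    y: float
    z: float

def propose_grasp(handle: Pose) -> Pose:
    # Autonomy: approach slightly in front of the detected handle center.
    return Pose(handle.x - 0.05, handle.y, handle.z)

def apply_nudge(proposal: Pose, dx=0.0, dy=0.0, dz=0.0) -> Pose:
    # Operator intuition: shift the proposed pose by a few centimeters.
    return Pose(proposal.x + dx, proposal.y + dy, proposal.z + dz)

handle = Pose(0.60, 0.10, 1.00)     # perception output (assumed values)
plan = propose_grasp(handle)        # the robot's autonomous proposal
plan = apply_nudge(plan, dy=-0.03)  # operator: "you're off-center, shift over"
# execute(plan) would then run under the robot's own low-level control
```

The split of labor is the point: the robot handles the geometry, and the human supplies only the small, intuition-driven correction before execution.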
That box-delivery demo is a very intuitive form of human supervision, and we have a whole line of autonomous behaviors where the robot can navigate through tight corners, understand what to pick up and what to do with the object, and so on. All these videos will be available later on. Then there are more complex manipulation tasks as well: how do you turn a valve to adjust antennas and things like that? Valkyrie is a complex system, so it is not really intuitive for a human to control it joint by joint; control needs to be at the supervisory level. This is much more recent work where we simplified things. If you remember our DRC interface — I went through it very quickly — it was much more complex. Now we can put a lot of things behind the scenes and add autonomy at the interface level, so we have a much simpler user interface, just a keyboard-and-mouse type of interface. And, by the way, with the emergence of virtual reality interfaces, now we can do even better. We can let the operator decide. OK, you want this robot to walk? The robot can plan out its steps, and all that remains for the operator — the human far away — is to validate that plan and say, "Go and execute." Or take reaching. It's interesting: when you start controlling a humanoid robot, things are really complex. Do you reach with the right arm or the left arm? We can leave these decisions selectively to the human operator, and these are some of the results we recently obtained.

And here's another idea. This is a little further from the concept of an avatar, but it shows how we use virtual reality tools to achieve transparency in human-robot collaboration, and I think it will be another key capability for the avatar challenge: how does the robot understand what the human is going to do, and how does the human understand what the robot's actions are? It's easier said than done. I said we put more autonomy on the robot, but whenever you have a human in the loop, if the human doesn't have an idea of what the robot is doing, that doesn't become a true team arrangement — in my opinion, in our opinion. So we are trying to push the limits on that front as well.

That wraps up my quick overview of what I do in my lab and some of the vision we have for our team's participation in the Avatar XPRIZE. Next, I'm going to pass the baton to Professor Peter Whitney, and he will talk about what he works on and some of the hardware technologies we are developing. Peter?

Thank you, Taskin. I'm sharing my screen now. So, Taskin took us through the big picture of all the pieces we need to be successful in the avatar challenge, and my focus is on one of the critical things he pointed to: reliable hardware. I'm going to talk about some of the research I've done in the past and how we're going to bring some of those elements in, in particular for hand design and tactile interaction. After my presentation, I'll pass it off to some of the student groups that are working more on the human-machine interface side — exoskeleton design — and on bringing these elements together. All right. Just a second.
So, before I came to Northeastern, I actually worked for Disney, and this is a project some of you might be familiar with. This is an avatar system named Jimmy. We developed a system to create physically interactive experiences via robots for guests in Disney theme parks. That's actually me behind the scenes, behind the wall, teleoperating Jimmy, and I'll speak in a second about some of the technology behind it. We looked at a couple of different approaches. This is some older work using pneumatic transmissions to generate very smooth, lightweight robot arm motion, and in the rest of the video, Jimmy is driven through a water-based hydraulic transmission. Here we're highlighting some of the features and capabilities of this kind of system — this is teleoperated needle threading. Where are my cheeks? And of course, if you're going to build human-safe robots, I feel like you have to test them on your own children if you want to show people what Jimmy is really capable of.

In this clip there was a brief moment showing what's going on behind the scenes. There are actually two identical robots, but they're not electrically coupled through motors; they're physically coupled through a hydrostatic transmission. The concept is not unlike what some of you might have seen in a popular educational exercise for students: you take some syringes, connect them, fill them with water, and you create a sort of manual remote transmission. We took this concept and improved it in several areas. For one thing, you have a lot of friction in the seals of that sort of hydrostatic transmission, so we use special, nearly-zero-static-friction rolling diaphragms — a soft, fiber-reinforced element — which give us a nearly frictionless ability to transmit force and motion. And one thing that's not obvious in that Disney video is that the operator can actually feel what's going on. Picking up that egg and moving it, you don't fumble and drop it, because you can actually feel the contact. It's kind of hard to demonstrate the sense of touch in a video, but we'll come back to that in a later slide.

What we've just seen is basically passive teleoperation, and of course, if you've got hoses connected, you can only go a few tens of meters before the friction in the fluid lines becomes too much. We obviously can't stretch hydraulic hoses across the country or over long distances. So instead, we can use the same kind of transmission to build a motorized arm. For an avatar, you'd actually have two copies: a motorized avatar and a motorized haptic interface. Taskin talked a lot about shared autonomy; what I'm presenting is direct haptic telemanipulation, or teleoperation. Again, for the avatar system we would have two of these systems, and they would be coupled electrically over arbitrary distances. In the video from Disney, we used rotary actuators for all of the joints, but in recent work here at Northeastern we've developed these ultra-low-friction linear-motion actuators. And maybe I skipped over it, but you might be wondering: why do we need these hydrostatic actuators driven by motors? Why not just use motors directly?
The main reason is that if we want many actuated degrees of freedom, putting motors at all of these joints makes the arms incredibly heavy. Heavy arms can be dangerous, and the mass filters out and blunts your sense of touch. By using this transmission, we're able to take every single motor out of the hands and out of the arms and mount all of those motors remotely. This actuator here can push with up to 100 pounds of force, but it only weighs 70 grams itself. We can take a couple of these and build a two-degree-of-freedom gripper: a wrist flexion-extension motion and a grasping motion. To operate it, we have two motors you can see at the top — direct-drive brushless motors — remotely operating this gripper here. You can see, teleoperating, that we get very, very fast, very smooth motion, and the system is very easily back-drivable. This is showing us actually measuring the pressure in the hydraulic lines, so the sense of touch is incredibly sensitive. This is a five-degree-of-freedom system doing some simple teleoperation. And this is kind of interesting: this is squeezing different bits of foam. In the traces above, in orange, that's the force we're measuring — that memory foam has a sort of relaxation property. We actually had a project on seafood handling: being able to pick up seafood and sense the mechanical properties of the fillet. And this is the students having some fun with high-speed seafood processing.

This is a slide from a related project on underwater, avatar-type telemanipulation for explosive ordnance disposal tasks. It highlights the fully haptic telemanipulation strategy: you'd have two duplicate arms, and here we're showing a sort of glove-gripper interface — a duplicate gripper that allows force coupling between the avatar and the operator, so they get a fine sense of touch at the gripper. Later, Carl is going to talk about some efforts to build a full exoskeleton system.

Before I hand it off, I want to talk a little about a connection I have to the avatar concept that's sort of interesting. When you think about an avatar and all the tasks you'd want it to do, I've had similar ideas about what the ultimate task would be — the hardest task we could ask a robot to do. This is Baxter, which some of you might be familiar with. It's a series-elastic-actuated robot built by Rethink Robotics. It's not incredibly accurate, but it does have compliance and force sensitivity. My question to you is: would you be willing to take a Baxter, give him a straight razor, and have Baxter shave you with it? Maybe in the time of COVID this is not so extraordinary a concept, but think about the task of shaving with a straight razor: not only do you need to control the position of the blade as you shave someone's face, you need to control the contact forces, and you may also need a reflex motion, so that if you notice some kind of slicing motion you can quickly pull out of the way. And of course, this kind of task can't be executed if there isn't a sense of touch.
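To ground the transmission and pressure-sensing ideas above, here is a back-of-the-envelope sketch of the physics: in a hydrostatic line, force at each piston is pressure times piston area, so a pressure sensor on the line doubles as a contact-force estimate. The numbers are illustrative, not the team's specifications:

```python
import math

# Hydrostatic transmission basics (illustrative numbers only):
# static pressure is shared along the line, so force at each piston is
# pressure times that piston's area, and motion scales inversely.
def piston_area(diameter_m: float) -> float:
    return math.pi * (diameter_m / 2) ** 2

a_master = piston_area(0.020)   # 20 mm master piston (assumed size)
a_slave  = piston_area(0.020)   # matched piston -> 1:1 force and motion

f_master = 10.0                 # operator pushes with 10 N
pressure = f_master / a_master  # Pa, equal throughout the line
f_slave  = pressure * a_slave   # force delivered at the far end

# Sensing: a pressure transducer on the line reads contact force directly,
# skipping one actuator's friction compared to what a passive operator feels.
estimated_contact_force = pressure * a_slave
print(f"{pressure / 1000:.1f} kPa on the line -> {estimated_contact_force:.1f} N at the gripper")
```

With matched pistons the coupling is one-to-one; unequal piston areas would trade force for travel, exactly like the two-syringe classroom experiment.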
So we got the idea — not with a fully autonomous system yet — but could we actually take on straight-razor shaving with this hydrostatic transmission technology? We recently did a test here. This is a three-degree-of-freedom, Jimmy-style manipulator. "That angle is great. Now just try to keep your head at that same angle. Okay. All right. Now we're going to put the razor on. Yeah. Stretch your skin nice and tight from here. Yeah. Okay. And then I'll come straight down on that side. Okay." A point to note here: this is a friend of mine who is a professional barber, and he's actually shaving me with the straight razor. There are no motors involved in this case; this is, again, passive, direct physical manipulation. "Lean your head over to your left a little more." The idea here is that he has a very fine sense of touch — he could feel the blade sliding along my face and cutting. "Your head a little more to your left. Feels pretty good. Does it feel like it's slippery enough?" "Yes. It's not too sticky." So yeah, this was kind of a pre-avatar challenge for myself, thinking about an ultimate test. "It's getting more and more natural." So we can do it. It really captures the challenges of robotics and brings right to mind what the weaknesses and limitations are. If you think about that task, and about handing it over to a robot, I think it helps identify the areas that need improvement.

I just want to briefly mention some sponsors that have funded this work — some of this is joint work with Taskin — and some graduate students and collaborators who have participated in it. So thank you. I'm going to hand it over to Carl Swanson. Carl is one of many undergraduate students working on several different projects to piece together different elements of the technology, with a primary focus on the human-machine interface. So thank you, and I'll hand it over to Carl.

Thank you, Professor Whitney. All right, I'm going to start sharing my screen. So now, as a team, we're going to give a brief overview of the system we're developing. My name is Carl Swanson. I am a senior at Northeastern University pursuing a combined degree in computer engineering and computer science. Outside of classes, I've spent a lot of time in Northeastern's aerospace organization, working on everything from drones to avionics bays to NASA Student Launch, which is a competition involving rovers and drones as well. In a project lead role, I've worked within Northeastern's NUAV group — Northeastern Unmanned Aerial Vehicles — on companion computing between drones, where multiple drones work together to accomplish various tasks. On co-op, I've worked as a software engineering co-op at iRobot on their new Terra lawn mower; then at Square Robot on an AUV, an autonomous underwater vehicle that goes into oil tanks to inspect for damage using various sensors; and I'm currently working at Amazon Robotics. Growing up reading Asimov, my interest in robotics lies in removing humans from dangerous situations and, additionally, giving them time back in their lives. The avatar project presents the perfect mixture of those.
This avatar system can remove a human from a dangerous situation, like the Fukushima power plant. If someone has to work in outer space — say, an astronaut inside a space station needs to very carefully manipulate something to fix a satellite — the same idea works there. But it can also give us more time back in our lives, like Professor Padir mentioned earlier, by removing a human from a workplace when they don't necessarily have to be there. And now I'll hand it over to Anya.

Yes, I am Anya Derek. I'm a fourth-year electrical engineering student at Northeastern University. Outside of class, I've done two co-ops, my first at Teradyne in the semiconductor industry, and right now I'm at Lutron Electronics doing design and development. My interests are mainly in PCB design, which is why I joined this project. Outside of class, I typically volunteer for STEM mentoring and teaching, as well as some side research projects related to corporate response to social movements.

The rest of the EE team consists of Shay and Peter. Shay is a fifth-year electrical and computer engineering student. She is currently on co-op at Draper Labs, but she has also done co-ops at Boston Engineering and Parkstone in Singapore. Shay's interests are mainly in electronics as well as wireless capabilities, which is why she decided to join this project. And Peter Downward, who couldn't join us today, is a fifth-year electrical engineering student. He's done two co-ops as well. His interests are mostly in the mix between electrical and mechanical systems, so he's done a co-op at Electrical GT as well as in Professor Padir's robotics lab, where he worked on a five-degree-of-freedom robotic arm. He is also very interested in sustainability in the technology sector and integrating that with mechanical and electrical design.

Additionally, on the software side, we have Peter Albanese and Spencer Sotian. Peter has a lot of experience in robotics, both at the university and on co-op; he's worked at Kinetic North America and MKS Instruments, and he's currently doing research on machine learning and artificial intelligence at the Kostas Research Institute on Northeastern's Burlington campus. He's very excited to work on a system that includes haptics, to work on providing biomimetic motion to the human user, and to create new software solutions for a project that is really cutting-edge and hasn't been seen before. Spencer has a lot of experience with firmware development and electrical engineering. On co-op he's worked at Analog Devices, MKS Instruments, and currently Amazon Robotics. He has a specialized interest in real-time operating systems, which will be extremely useful for us when we move onto the system itself, and he's interested in low-level communication protocols like SPI.

And now we'll hand off to the MechE team. Hi, my name is Mike Polkarri. I'm going to introduce myself and the four other members on the mechanical end of things. So, like I said, my name is Mike Polkarri. I'm a candidate for a Bachelor of Science in mechanical engineering, and it's my senior year at Northeastern. I've been lucky to have two vastly different co-op experiences at Northeastern. My first was with A123 Systems, where I got to work in process engineering.
We mainly worked on streamlining the manufacturing process for electrodes in electric car batteries. Right now I'm working at Viken Detection in a mechanical design role, aiding in the design of an under-vehicle X-ray system for law enforcement. For me personally, I love being an engineer because engineering bridges the gap between science and creativity, and now it has the potential to bridge the gap between technology and the human experience. Being part of Northeastern's avatar team is a once-in-a-lifetime chance to do that. I'm going to hand it over to my friend Al to introduce himself real quick.

Hey, I'm Albert Saunders. I'm a fifth-year mechanical engineering major at Northeastern University with minors in materials science and mathematics. I've had three co-op experiences so far: my first two were in the biomedical device sector, where I worked at DePuy Synthes on the shoulder reconstruction team and in a similar role at SpineFrontier on the spine implant team, and right now I'm at Amazon Robotics. I also work part-time on the information technology team for the Boston Red Sox at Fenway Park. I'm hoping to use the knowledge I've gathered in classes and on co-op — in robotics, shoulder biomechanics, and materials science — to inform our team's decisions on mechanical design and biomimicry. I got into engineering for a very simple reason: I just want to help people using science and technology, and I feel like at this particular point in human history, the avatar competition presents itself at a great time. Remote telepresence is becoming more and more impactful and necessary, and getting on this project at this point in time is very fulfilling to me; I hope to help as much as I can. I'm really excited to work on this.

To introduce the rest of the mechanical engineering team, who couldn't be here today: we have Albert Demers, a senior mechanical engineering major with a minor in robotics. He enjoys volunteering through Northeastern's outreach program to teach robotics to elementary school children. He has held electromechanical co-op positions at Stoneridge, Instron, and Raytheon, and he hopes to utilize his knowledge of robotics to achieve effective telepresence. We also have Sepair Kansalar, a combined BS/MS mechanical engineering major. He is a former product design and development engineering co-op at Procter & Gamble, and we're very lucky to have his experience in design for manufacturability, injection molding, mold design, and the like. Not only that, he is also professionally certified in SOLIDWORKS, and he enjoys studying a variety of robotics topics. I'll hand it back over to Mike to introduce John Corey.

Thanks, Al. Last but not least, we have John Corey. John is a senior mechanical engineering major here at Northeastern with a minor in mathematics. He's also the co-founder and lead project manager of Northeastern's VR club. Currently he's working at FLIR Unmanned Ground Systems in an R&D position. In addition to all this, he's a self-made entrepreneur with a number of patents pending on various products, and he's excited to apply his robotics and VR skill sets toward making emotive and expressive telepresence possible.

So, the current avatar system was designed with a few high-level design requirements in mind. The team desired a wearable exoskeleton driving a robotic arm with five degrees of freedom, a three-kilogram maximum load at the end effector, and a 0.5-meter-per-second maximum end-effector speed.
The team wanted to focus on biomimicry first, to effectively communicate the human presence from one spot to a remote location while maximizing the positional accuracy of the end effector — kind of like what Professor Whitney was talking about earlier — to improve safety and allow the system to be used around humans. With those requirements in mind, the current exoskeleton design has four degrees of freedom: three at the shoulder and one at the elbow. The components are 3D printed to minimize the weight, to about seven pounds. All of the joints are directly driven to eliminate backlash, the shoulder joint has three axes of rotation that intersect at one point, and the back plate is fixed behind the user for usability and comfort. Straps comfortably fix the exoskeleton to the operator's arm for effective motion.

Coming over to the robotic arm design: it currently has five degrees of freedom — three at the shoulder, one at the elbow, and one at the wrist for pronation and supination. We've brought the total weight down to about 15 pounds, due in part to the spanners, which are single 30-millimeter carbon fiber tubes that offer high stiffness at low weight, and we utilize 3D-printed bonded coupling ends for ideal torque transfer. I'm going to pass it back over to Mike to talk about the differentials and get into more of the internals of our design.

Okay. With specific regard to the robotic arm, it is comprised of three main sub-assemblies: two identical differentials and one shoulder rocker. The differential was designed for modularity and ease of iteration, allowing a differential to mimic either the shoulder or the elbow and to be swapped freely if necessary. Each differential is comprised of four sub-assemblies, which you can see here: the motor housing, two identical timing-belt transmissions, and a bevel gear drive. This design grants the differential two degrees of freedom. They're mainly made of 3D-printed plastic, but critical components such as the gears are made of steel, and the overall weight is approximately four and a half pounds. Similarly, the shoulder rocker is an additional motor sub-assembly that is only present at the shoulder of the robotic arm. The rocker grants the shoulder one additional degree of freedom to fully mimic the functionality of the human shoulder; it has a belt transmission identical to that of the differential and is made of a single piece of 3D-printed plastic. Additionally, it is mounted at a 30-degree angle to biomechanically match the human shoulder and make the transition more intuitive.

Another thing we've been working on is a virtual reality operator interface. Essentially, this allows the user to take in visual data through a VR headset, letting them look around the room. It mirrors the roll, pitch, and yaw of the head and allows the user to convey subtle body language. I'll let the video, presented by our teammate John Corey, speak for itself. "The avatar camera system is able to copy the roll, pitch, and yaw of the operator by converting the orientation data from the VR headset into motor commands. This not only allows for real-time visual feedback but also helps give the avatar personality by depicting the subtle head movements of the operator. This feature, in addition to others incorporated throughout the system, brings us closer to achieving the desired projection of the user's persona at the avatar's location." Thanks, guys.
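As a rough illustration of the head-tracking mapping John describes: most VR SDKs expose head orientation as a quaternion, which can be converted to roll, pitch, and yaw and clamped to the camera gimbal's joint limits before being sent as motor targets. This sketch is hypothetical — the joint names and limits are illustrative, not the team's values:

```python
import math

# Convert a unit quaternion (w, x, y, z) to roll/pitch/yaw (ZYX convention).
def quat_to_rpy(w, x, y, z):
    roll  = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw   = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

# Map headset orientation to camera-gimbal motor targets (radians).
def head_to_motor_targets(quat):
    roll, pitch, yaw = quat_to_rpy(*quat)
    return {
        "neck_roll":  clamp(roll,  -0.6, 0.6),   # limits are illustrative
        "neck_pitch": clamp(pitch, -0.8, 0.8),
        "neck_yaw":   clamp(yaw,   -1.5, 1.5),
    }

# Example: a slight look to the left (~20 degrees of yaw).
print(head_to_motor_targets((0.98, 0.0, 0.0, 0.17)))
```

Streaming these targets at the camera's control rate is what lets the avatar reproduce the operator's subtle head movements in real time.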
All right, now we're going to go into a little more detail on how our software stack is going to work. On the software side, we're going to be employing ROS as our robotics middleware. ROS is going to handle much of the heavy lifting, including the physical simulation, motion planning, and, importantly, the message passing between applications on our arm. Following preliminary research, we're going to be using ros_control, a common ROS package, to tie together our simulation and hardware. This will allow us to run hardware-in-the-loop testing, letting us visualize how our arm is moving not only in the real world but also in software. So, as I mentioned, we're leveraging ros_control to provide this interface between the real-life and simulated environments. It also gives us the ability to easily tune the PID loops for our system, and to send simple effort commands to our arm's joints, which are then seamlessly translated into electrical signals sent to our drive boards. ROS will help us create a software structure where each one of our nodes can easily access information from all the other nodes on our system.

In such a complex system, simulation is more important than ever. By simulating our robotic arm, bugs that might cause the arm to physically break in the real world will instead simply fail in simulation. It's a lot easier to fix a problem that's a simple line of code than to spend money buying parts and reordering broken materials. To simulate the motion of our arm, we're going to be using Gazebo, a real-time physics simulator that hooks into ROS. By using an accurate model of our arm, with mass distributed correctly across the model, we'll additionally be able to simulate PID tuning in software. Using Gazebo has additional benefits, as it will allow us to visualize the requested and actual positions of the robotic arm as it is commanded by the exoskeleton.

All right, at this point I'd like to run a demo, so I'm going to switch over to my phone. One moment — that will take me just five seconds. All right, can everyone see me there? Good? Yeah. So, as you can see, our system here is the exoskeleton, which straps to the user. It's currently powered by a power supply that I salvaged off of previous projects. Unfortunately, due to COVID, we were forced to split up some of the project parts between different members so I could work on it, so currently we only have the exoskeleton here, while the robotic arm itself is in another location. For this demonstration, I'll be showing driving a single motor and then reading the encoder positions off of two of these motors right there. Let me set that up real quick. It's fairly easy for the user to strap in: just mount this right above the elbow and connect it down there. And then, if you watch my arm here, I am mirroring the movement of this single joint onto that joint over there, onto that single ODrive powering that motor. It mirrors it quite well, down to a minute change of a degree or so. It's extremely responsive, and this will allow the user to perform very delicate tasks and, in general, act like a human would in a normal situation. Now, while I'm working with this, Anya is going to give a few details on how we're choosing to control our system, and on the power systems in general.
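For a sense of what that mirroring demo's control loop might look like, here is a minimal sketch using the ODrive Python tools. It assumes the 0.5-era ODrive API, and the axis roles (exoskeleton encoder on one axis, arm motor on the other) are hypothetical, not the team's actual wiring:

```python
import time
import odrive
from odrive.enums import (
    AXIS_STATE_CLOSED_LOOP_CONTROL,
    AXIS_STATE_IDLE,
    CONTROL_MODE_POSITION_CONTROL,
)

# Sketch of the joint-mirroring demo: one axis reads the exoskeleton
# encoder, the other drives the arm motor to follow it.
odrv = odrive.find_any()
exo_axis, arm_axis = odrv.axis0, odrv.axis1  # assumed assignment

arm_axis.controller.config.control_mode = CONTROL_MODE_POSITION_CONTROL
arm_axis.requested_state = AXIS_STATE_CLOSED_LOOP_CONTROL

try:
    while True:
        # Read the operator's joint angle and command the arm to match.
        arm_axis.controller.input_pos = exo_axis.encoder.pos_estimate
        time.sleep(0.005)  # ~200 Hz mirroring loop
except KeyboardInterrupt:
    arm_axis.requested_state = AXIS_STATE_IDLE
```

The same loop, wrapped in a ros_control hardware interface, is roughly how the stack Carl described would tie the exoskeleton, the simulation, and the physical arm together.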
So, as you can probably see hanging off the side there, we are currently using ODrives to control our motors. ODrives are open-source and affordable, so they gave us the flexibility we needed in times of COVID to integrate everything quickly. We did create our own encoder PCBs, which are mounted in slots inside the exoskeleton, and we specifically picked on-axis magnetic position sensors for our encoding — specifically the AS5047P — because it's high-precision and high-speed, with 14-bit core resolution, so it really gives us what we need for precise motion and for being able to mirror it. At the moment we're using SPI as the communication protocol for the encoders, but we picked an encoder that has options for both absolute and incremental positioning, so we can potentially add that to our system in the future. We also plan on developing custom controllers in addition to the ODrives, just to give us more flexibility in picking our communication protocol. ODrive has recently started supporting CAN communication, which is what we plan on using, but having our own board will enable us to integrate the controller part of our system with the encoders and have it all on one single board with the communication protocol that we want. It also future-proofs us, in a way, because if we end up adding functionality to our boards later, it will be easy to add it on instead of starting from scratch. We are planning on using STM32 microcontrollers for those boards, which will be modeled after the ODrive, but with our own flavor added.

To show another demo of how we're able to accurately read the positional values: this one is from one of the shoulder joints, and then the arm joint itself — that one is seated discreetly there — and it shows that we can track each one of these joints individually as it moves, to a relatively high degree of precision. That's it for the demo; I'm going to hop back on my normal computer, and we can take any questions.

Yeah, while Carl hops back on: this is definitely just the beginning stages of our project. We're definitely looking forward to integrating all of the work that Professor Padir and Professor Whitney have been doing in their own labs and eventually putting it all together, but COVID has definitely presented a challenge in terms of getting our team together and starting to piece together our system.

Wow, thank you very much for that demo and the walkthrough of your entire system — and to all of your team as well, whether they were able to be here or were introduced on screen. It's clear to me, as I said before, that there really is a multidisciplinary approach to this team, with a huge knowledge base coming through from each of your team members, so it's really impressive to see and hear about. And it's also really cool, despite the challenges presented by the current state of affairs in this world, to have that literally in your backyard, your living room, to work on. Pretty interesting, Carl, so thanks for showing that demo. Anyone listening in the audience, you're welcome to submit questions as we go; we're going to be having a discussion for the next 10 to 15 minutes or so, so we invite you to write in and post questions to the team. I do want to hop back
to the beginning of the presentation. Peter, the way that you were using that system, not only for Jimmy but also for the shaving experience — that's a physical connection, and it presents a really impressive level of dexterity and, I guess fidelity is a good word, quality of motion, in mapping your own motions. There are a lot of fine motions happening in that one. What's the state of the art as far as what we can do with something that's farther away?

Yeah, that's a great question. In the video of the gripper, I showed teleoperation, and there was also teleoperation in scooping up the fish. All of that teleoperation is just a positional command: the operator did not have a duplicate gripper to serve as a haptic glove. That's work that's ongoing right now. We're currently trying to design a system that allows someone to wear a duplicate copy of the gripper, so that we can have a force-reflection coupling between the gripper and the avatar hand. In terms of the performance that could be achieved: in the electrically coupled version, you can actually get better performance than with direct physical teleoperation. The reason is that, for example, when Jesse the barber was shaving me, he's manipulating one end, and that goes through a transmission, then through the hydraulic line, then through another transmission actuator — there are two actuators with the fluid in between, so any imperfection or friction in the actuators, you go through twice. But in the motorized system, if you can measure the pressure in the hydraulic line, that basically gives you a measurement of the contact forces in which you can skip one of the actuators. So the measured pressure is actually better than the sense of touch an operator would have in the passive system. We have an area of research on how to make the best use of measured forces and measured haptic information in the overall control loop.

Really interesting. I almost wouldn't have expected that, but it makes sense: there's a lot of different hardware the signal moves through, transmitting it in a physical sense, versus a single, simpler — though seemingly complex — channel.

Yeah, if you remember the example with the two syringes, the school experiment: you're pushing on a syringe and you feel the friction of that syringe's seal, then the pressure in the water line, and then it's pushing the second syringe. So yeah, you feel it twice in that case.

Have you decided what the actual form of the avatar robot will be yet? You've got all the parts going; I just wondered what you're going to do for the final system.

I'll take this question, how about that. As you mentioned, we think we have the foundational building blocks to put it together, and we've been waiting for the final scenarios to be announced so that we can analyze them and decide where to go. Now that they are out, as of last month — to answer your question in short: no, we have not settled on "okay, here it is." However, I think we have in our inventory quite a lot of platforms that will serve well
for the announced semifinal scenarios. Eventually, though, as the team mentioned, we are converging on integrating all these different pieces together to have the whole system set up.

Yeah, I think it's great that you're trying to put that human experience into the whole robotic system. I loved the little demo you showed with the two eyes, trying to get the persona of the operator in there even with just two camera eyes. You can do a lot, and it doesn't have to be a fully formed humanoid robot; as long as it can connect two people together, that's what we're going for.

That's why we are known as the Institute for Experiential Robotics: to enrich human-robot experiences.

Thanks, Taskin. I just had a quick question come through the Q&A from an anonymous attendee, and it's quick: why not fully hydrostatic? I don't know exactly who to direct that question to, but anyone can feel free to dive in.

Yeah, I can take that — it's a great question. We do have projects where we're envisioning building a fully hydrostatically actuated system; it's a trade-off between complexity and performance. Another way to think about it is this: from your fingertip to your hand to your wrist to your elbow to your shoulder, the closer you get up toward the body, toward the shoulder, the more you need your joints to be strong and precise, and the closer you get out toward the fingertips, the more you need the system to behave compliantly and to be able to measure forces. So out at the fingertips, in the hand, the very-low-friction, remotely driven hydrostatic system is definitely the best option. If you tried to build a direct-drive motorized gripper, it would be incredibly heavy, with those big motors out where the mass penalty is worst — the farther you are from the shoulder, the worse it is. So there's a range of solutions, which could involve, say, a hydrostatic hand and wrist and a more traditional motorized upper body. Another thing is that the exoskeleton design we're working on is actually fully motorized, but each joint is direct drive, and it's deliberately designed not to exert the full torque that the avatar is capable of — it has force scaling. What we found is that the human operator doesn't necessarily need to feel the full magnitude of the interaction forces; it's about changes in forces. If you're touching something, or sliding along and feeling friction, it's those force vibrations that really give you the tactile and proprioceptive feedback. So again, in terms of making things simpler — not having motors and motor control and a hydrostatic transmission and all of that complexity, just a simple, lightweight electric system that's not capable of full strength but is capable of full haptic quality — that's the direction we're going. That exoskeleton doesn't have a gripper at the end yet, and the gripper is slated to be hydrostatically actuated.

Got it, yeah.
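A minimal sketch of that force-scaling idea: reflect only a fraction of the steady contact force to the operator, while passing changes in force — contact transients and sliding vibration — through at higher gain. The gains and filter constant here are illustrative assumptions, not the team's tuning:

```python
class ForceReflector:
    """Scaled force reflection with transient emphasis (illustrative gains)."""

    def __init__(self, k_scale=0.2, k_transient=1.0, alpha=0.95):
        self.k_scale = k_scale          # fraction of steady force reflected
        self.k_transient = k_transient  # gain on rapid force changes
        self.alpha = alpha              # low-pass factor, per control tick
        self._steady = 0.0

    def reflect(self, measured_force: float) -> float:
        # Split the measured force into steady and transient components.
        self._steady = self.alpha * self._steady + (1 - self.alpha) * measured_force
        transient = measured_force - self._steady
        # Attenuate the steady part, keep the transients crisp.
        return self.k_scale * self._steady + self.k_transient * transient

# A sustained 20 N grip settles toward ~4 N at the operator's arm, while a
# sudden bump passes through almost in full, preserving the feel of contact.
reflector = ForceReflector()
for f in [0.0, 20.0, 20.0, 20.0, 25.0]:
    print(round(reflector.reflect(f), 2))
```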
Carl, or anybody — anything to add about the exoskeleton system itself and where things are going next, as far as how you're developing it, the next stages of testing, and other things that are going to be integrated with it?

Yeah, so I think Professor Whitney explained the plan forward quite well. It's a passive system right now, so the user can't yet experience haptics on it themselves; that is one of our next steps. We've taken kind of a complete 180 from our previous direction, which was a custom-developed PCB using a TI chip, and we're now moving to a more open-source solution called ODrive, which is one of our sponsors now as well. It's amazing: a single board that performs all the PID tuning for your robotic system and lets you very easily create a system that can be back-driven. You might have seen the board taped onto the arm I was working with — that's our solution for now, but it's going to change very soon once we have a mechanical mounting option. What we're trying to do now is move that onto a footprint that can be put onto the back of the current motors we have. So currently we have these PCBs, of Anya's design, that only read off the encoder value and report it back to the ODrive. Once we can pull the chips off the ODrive and make our own PCB, we can get to a much smaller form factor, and that is going to let us move forward to actually back-drive these motors and give the user true haptic feedback, to make them feel like they're really in that situation. So that's where we're going; I think by the end of this year our goal is to back-drive the exoskeleton to give the user that true haptic feedback.

I just have to say that the fish demo and haptic feedback together just boggles my mind. I can't imagine really feeling fish — not that you're doing that — but it was something I'd never thought about, how something could handle an object that has such a strange texture.

Well, I also really appreciated the quote-unquote high-speed seafood handling, so I think maybe there's more of a future there — fishing is a big industry in the Northeast. So, you mentioned shaving as an ultimate task for the use case of this avatar. I'm curious — maybe Albert or Mike or Anya — whether there are other use cases that have inspired your development, or the way that you've applied your knowledge to this challenge. Maybe I'll go to Anya: any thoughts on a use case that has really driven the work that you've done?

I don't know if anyone else has anything specific in mind. I can jump in if you want. Yeah, go ahead, Michael. Well, in terms of very specific use cases, I think the obvious one that jumps to a lot of people's minds is surgery. There are only so many high-level surgeons in this world, and travel being a factor limits access to care for a lot of people. If you can get a telepresence robot to the accuracy and responsiveness needed for some kind of complicated surgery, that could save a lot of people's lives. Additionally, there are search-and-rescue situations — anything where a human life would normally be in danger and you require a human level of interactivity. Those things could be done through telepresence in the future. I don't know if that's specific enough, but that's the sort of thing that drives us and why we're doing this.

And just to add to that, Mike: for me, at least, the public school system has been instrumental in getting me to where I am and proceeding to where I'm
going. And I think that in the current state of affairs, with a lot of schools being shut down and things like that, people are looking for a solution for how teachers are going to interact with and successfully teach their students, and how coaches are going to interact with and successfully coach their athletes. I think that this avatar system, if worked on and done correctly, could really be instrumental in being that solution. Imagine students being in a classroom, or separated in different classrooms, and instead of teachers having to come in, you could have robotic teachers or coaches in every single home. The opportunities are endless.

Yeah, and I had one last thing to add, following on what Albert said — and it reminded me of something Jackie said earlier about the emotional connection. Even if all you have is two eyes and a pan and a tilt and a yaw, you're able to get kind of an emotional connection. I have a very short video, if there's time; I'm going to try to share my screen. This is actually a clip from back in the Disney days, but it's trying to focus on that emotional connection as the primary focus of a robotic system. I think it doesn't take a really human-mimetic system to convey a lot of emotion, and one of the things that interested me at Disney, and that interests me about the avatar, is the ability to have an enhanced emotional connection with someone, in ways that a flat screen with no physical interaction misses. And I think you don't need future-level technology to get there; there are really interesting things that can be done with today's technology.

Yeah, it's just the eyes and the mouth and the tilt of the head and the movements — we don't really care what the skin looks like. There's something subtle, too, about even the color of the nubs at the ends of the end effectors that gives it some brightness, and even just a little bit of that fluid motion of waving, beckoning, and saying hello is definitely a way to convey some emotional connection, even though it's really just a mechanical being right in front of you.

Yeah, I appreciate you showing that. It's a really great tie-in to connection, especially to what Albert mentioned about having, or conveying, presence in a way that doesn't limit the experience that students and teachers would have, and other really important roles that require connection. We are just coming up on time. If viewers wanted to learn more about the work that you're doing, whether it's related to the Avatar XPRIZE or just at the Institute for Experiential Robotics, where could they go? Is there a website, or any more videos they might be able to see about your work?

Yeah, I would say searching the news archives on Northeastern's website; there are several stories about past projects that are related to this work, and I think that's a good place to go. Yeah, the institute itself is actually just being stood up, so I don't know that we have a fancy website with an archive of all the projects of affiliated faculty, but we're working on it.

Great. Well, we'll look forward to when that's standing tall,
with more work on the Avatar XPRIZE and more to come from your team as you continue to develop your work. I want to thank you all; we are out of time for today's session. I appreciate everybody taking the time to join, whether you were a panelist, an attendee, or you're viewing the recording later on — thanks for jumping in. On behalf of the Avatar XPRIZE, we really appreciate having Taskin, Peter, Carl, Albert, Anya, and Michael here, and we thank all the other members of Team Northeastern for their contributions to the team and their hard work on the Avatar XPRIZE. It has really been a pleasure to speak with you today, hearing more about your approach to the competition and learning how all of your studies and work are coming together in this challenge.

Everyone, this has been the fifth in a series of Meet the Teams webinar interviews. You can view the previous Meet the Teams sessions and other innovation-driven content by visiting XPRIZE's YouTube channel. If you have questions about the Avatar XPRIZE, or questions for Team Northeastern, you can email us at avatar@xprize.org, and you can visit avatar.xprize.org for more information about our competition and to see the full list of qualified teams. We are wishing you well from Los Angeles and hope you're staying safe and healthy, and I hope you enjoy the rest of your mornings, afternoons, and evenings. Take care.