Good morning. My name is Brandon Rohrer and I'm a data scientist at iRobot. I'm really happy to be here because we get to talk about two of my very favorite things: data and robots. While robots are inherently cool, they've shown themselves to be particularly useful for handling the 3Ds: jobs that are dirty, dull, or dangerous. Most household chores fall into the dirty and dull categories, making them ideal candidates. We've made a lot of progress toward reliable home robots in the last 10 years, but often that progress shows just how far we have to go. There's still an enormous gap between what a robot and a human butler can do. We're a long way from Rosie of the Jetsons. I'm going to cover what I see as the four largest challenges and what a path forward looks like through each of them. I'd like to emphasize that these are my own opinions and don't necessarily reflect the position of iRobot in any respect. The four main challenges a robot has to overcome to be effective in the home are interacting with humans, completing tasks, determining affordances, and ensuring privacy. Or, in the form of questions: What does my family want? What should I do? What can I do? And how do I protect my family's data? We'll step through these one at a time, but privacy is the foundation for everything that follows. Without trust in our home robot assistants, any other value they provide just won't be worth it. Data security and privacy have become a popular topic, but we're still figuring out as a society how important privacy is to us and how to protect those most vulnerable to harm from the lack of it. Ensuring user data protection is just the first step. I'll take it as a given that we and the companies we work for are acting in good faith to protect and respect our customers' information. But a more subtle and equally important issue is the inadvertent release of sensitive information. To show what I mean, here's a story about taxis in New York City.
In 2014, a data analyst named Chris Whong requested a year's worth of taxi data from the city of New York. The medallion numbers of the cabs and the hack license numbers of the drivers had all been anonymized by hashing. On the surface, there was no information that could tie a record in the data to a specific car, driver, or passenger. However, a software engineer named Vijay Pandurangan realized that the identifying numbers had been obscured using MD5, a fast, unsalted hashing algorithm. And he also figured out that if you know the format of the original data, for instance, that it's a six-digit number, then you can brute-force your way back from the hashes to the original values. Another software engineer named Jason Hall uploaded the de-anonymized version, making what was an anonymized data set searchable by medallion number and hack license number. The data now revealed the cars and the drivers involved, but so far had no information that could tie it to individual passengers. Then a graduate student named Anthony Tockar made the connection with photographs of celebrities taken by paparazzi. Celebrities in New York are often photographed entering or exiting taxi cabs, and these photos often capture the taxi's medallion number. With this final link in the chain, a collection of celebrity cab rides became publicly available. Data that was intended to be private, and some data that was never even recorded, suddenly became part of the public record through just a little cleverness. This is an example of how compliance with privacy practices is not sufficient on its own for protecting data. It takes careful thought and deliberate effort to keep data safe, with tools like aggregation, the addition of noise, and avoiding unnecessary data collection altogether. There's another prominent example of unintentional information release, this one from 2017. The fitness tracking company Strava released a world map showing an aggregation of all its users' activity.
It showed every recorded user location ever uploaded. The resulting maps were beautiful visualizations, and there was nothing in them to tie a location to an individual user or even a particular time. However, an analyst named Nathan Ruser noted that U.S. military bases in the Middle East, some of which are covert, lit up clearly on Strava's map. They showed the outline of roads and jogging tracks, and their brightness gave hints at how many people lived and exercised there. To their credit, in the subsequent version of the heat map, Strava plugged this information leak by omitting low-density locations where the activities of an individual or a small group might be represented, requiring a sign-up to be able to view street-level data, and allowing individual users to create geographic privacy zones or to opt out of the heat map data collection entirely. I'm proud to say that respecting and protecting customers' privacy is a high priority at iRobot, both in policy and in practice. Our data handling principles focus on minimization, doing only what directly benefits our customers. We only collect as much data as necessary. Collected data is de-identified, associated with individuals only as necessary. The data that we do collect is processed only to the extent necessary, is retained only as long as necessary, and the only employees that have access to it are the ones that need it. When I say as much as necessary, I mean necessary for the customer's benefit. If it doesn't give the customer a better product or a better experience, then it's not worthy of being collected, processed, or stored. The mindset of minimizing our data footprint is unusual in industry, and it makes for an environment that I'm proud to work in. It's been said that data is the new oil. I prefer to think of customer data as the new uranium. It's undeniably powerful, but if not treated carefully and protected, it can end up causing far more harm than good to everyone involved.
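To make the taxi de-anonymization concrete, here's roughly what the brute-force step looks like. This is my own illustrative sketch, not Pandurangan's actual code, and it assumes an identifier with a known short numeric format, like a six-digit hack license number:

```python
import hashlib

def crack_md5_id(target_hash, digits=6):
    """Brute-force an unsalted MD5 hash of a numeric ID with a known
    format. A million candidates takes well under a minute on a laptop,
    so hashing alone provides essentially no protection here."""
    for n in range(10 ** digits):
        candidate = str(n).zfill(digits)
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None
```

An unsalted hash of a small ID space is trivially reversible; a keyed construction like an HMAC with a secret key, or replacing IDs with random tokens, closes this particular hole.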
Data minimization is a principle that robots of all sorts can use to protect their homes. As robots get woven into the fabric of our lives at home, it's not enough for them to be compliant with privacy laws and policies, although that's an excellent start. But home robots will sit somewhere in the middle ground between appliances and pets, between tools and assistants. They'll see us at our worst, at our lowest and most vulnerable. For them to be effective, we will have to trust them. Part of this comes back to limited data collection and limited processing. For instance, in polite society, there are observations that we refrain from making. Imagine a smart lighting system that can detect activity room by room so that it can turn off unused lights. If it focused on patterns in the bathroom light and based on recent anomalies offered to order you a fiber supplement, not everyone would appreciate that. That's an instance of excessive observation and processing that destroys trust rather than builds it. It introduces a creep factor. This is top of mind for us at iRobot. We are supported by our customers and our survival depends on trust. In order to build that trust, even beyond what's required by a sensible and ethical privacy policy, we give the user transparency and control. Customers can request their data and they can correct any inaccuracies or misrepresentations. Customers are notified and have to explicitly opt in to any data sharing or transfer outside the company. These approvals for opt-ins stand alone and are not buried in or tied to end user license agreements. Customer trust is a very high bar and one that we have to meet every day. To illustrate how important trust will be between us and our home robots, consider a medication dispensing robot. Note that this and all the other hypothetical robots presented here are to show how much work remains in these grand challenge areas. Any relation to actual robots real or planned is purely coincidental. 
This imaginary robot has hoppers for liquid medication and pills. It can sense and reorder medications as they get low. It can even offer reminders to individuals who are in danger of missing a dose. Imagine the convenience. I have a shih tzu who is currently on nine different medications. A device like this would be very convenient. Now, put your black hat on. If you had access to all of the information this robot collects, how could it be misused? Medication and its scheduling would allow you to make a very good guess at users' health issues. This extremely sensitive information could be horribly misused by unscrupulous insurance companies or potential employers. It could also be used to target healthcare-related advertising to the most vulnerable populations. This robot could only be useful if you were able to place a great deal of trust in it. Now, let's consider a dishwashing robot. This seems more benign, and who wouldn't love an automated solution to the problem of dirty dishes? However, a clever dishwashing robot has access to plenty of sensitive information too. Like, what did this household eat, and when? How many people are there? Do they cook, or eat prepackaged food, or carry out? What products are they likely to buy? Where do they shop? What might their health issues be? How much do they spend on food? It's easy to see that with an aggressive data collection and processing environment, the dishwashing robot can be made to reveal much more about you than the fact that you don't like washing dishes. This underlines the importance of being able to trust any intelligent device you bring into your home. For a robust home robotics ecosystem, this will be an absolute requirement. Affordances is a fancy robotics term for, what can I do? If you think about a text-based computer game, sometimes figuring out what all your options are is one of the toughest pieces of the puzzle.
If the game says you're in a room with a table, and a chair, and a chest, what are your options? Sit on the chair. Try to open the chest, and if you find it locked, smash it on the floor, or break off a chair leg using the table and then use the splintered chair leg to pry open the chest. Determining your affordances can be the biggest part of the challenge. The same is true of home robots. For instance, doors look a lot like walls. How do you know whether a wall is a door? Whether it pushes, pulls, folds, or slides? How do you know which walls are cabinets? Whether they slide out or pivot to the side? It doesn't help that built-in furniture can sometimes be made to look exactly like a wall. How do you know which objects can be moved or which can be navigated around? Determining affordances is a fascinating robotics problem. It's an example of something that tends to come naturally to humans. So naturally, in fact, that we have a hard time posing the problem for a machine to solve. A great example of tackling a tough affordance problem is robotic towel folding. Pieter Abbeel, a University of California, Berkeley professor, and his team taught a two-armed robot called the PR2 to fold towels. Towels are tricky affordance problems because it's not obvious where to grab and what to pull in order to get the results you want. Towels are supple and change shape whenever you touch them. It's hard to make an explicit set of instructions for how to fold a towel because it's never in the same shape twice. Professor Abbeel and his team tackled this problem using a series of images from different angles to make a 3D model of the towel. Based on that, they were able to identify corners, and using the corner locations, they were able to execute a standard routine for folding towels. It was a very impressive accomplishment. Their achievement was limited, though. It assumed a rectangular towel, but most laundry consists of things more complex than that.
Also, the robot was very slow. It took 20 minutes to fold a towel in its first incarnation. It eventually got down to 90 seconds, but that's still much slower than a human folder. An even higher stakes case for determining affordances is identifying pedestrians. Self-driving cars continually have to answer the question, can I drive this way? And the consequences of getting it wrong are very high. Any location containing a pedestrian is an automatic no-go zone. Determining pedestrians' locations is a critical part of self-driving cars' affordances. For the most part, this is a problem that's been solved well. Deep neural networks that specialize in finding patterns in images identify all the pedestrians in a given scene quickly and with high accuracy. Technical capabilities like this will enable home robots to navigate throughout the home without continually tripping over chairs and bumping into people. However, it should be noted that even with the massive effort that's been devoted to this particular problem, failures still occur. In cases where robots encounter entirely unfamiliar conditions, they may fail to identify pedestrians. In other cases, subtle assumptions can be built into the identification algorithm. For instance, that pedestrians will only be in crosswalks, not jaywalking. This may be true in some communities, but it's certainly not true in Boston. And any algorithm that relies on assumptions like these will certainly fail. When building robots that need to work well in homes all over the world, we can't afford to make convenient assumptions about what they'll see. To a large extent, they'll have to figure it out as they go. Working with floor cleaning robots has given us a lot of experience determining affordances. One strategy that has proven useful in physical environments is having a variety of sensors.
iRobot's Roomba and Braava have cameras for doing visual navigation, bumpers for navigating by touch, and downward-facing proximity sensors for detecting sudden drops. A combination of these sensors and a great deal of experimentation and testing has helped us find strategies for the robots to determine their affordances, to get to all of the places they need to get, but not go the places they should not, like down a flight of stairs. One of the big early design decisions behind Roomba was whether to use laser rangefinders or cameras for navigation. iRobot chose cameras, in part because they're better at determining affordances. Lasers are good at finding how far away the nearest thing is in any direction, but a laser can't tell a bedskirt from a wall. It doesn't know what can be pushed out of the way and what's a true obstacle. Determining affordances will continue to be a challenge as home robots extend to more interactive tasks. Imagine a robot whose job is to tidy up, to put things back in their place. Shoes, toys, clothes, pillows, chairs, papers. It will need to know how it can safely lift an object, which objects can be scooted along the floor, what might spill. The size and shape and compliance of objects will vary more than can possibly be represented in a set of explicit rules. It will fall to the robot to figure out what in its environment can be manipulated, how to move things from one place to another, and how to stack, organize, pack, and nest these items. Consider the still higher bar of a pet grooming robot. Here the environment in question is in motion, neither rigid nor entirely fluid. It is sensitive to pressure and sudden changes in pressure and may actively be trying to escape. Brushing through fur requires applying consistent pressure in the right direction while responding gently but firmly to snags. Interaction with a moving, compliant, and possibly adversarial environment is definitely a distant goal for determining affordances.
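To give a flavor of how multiple sensors can combine into an affordance decision, here is a toy sketch. The sensor names, thresholds, and rules are all invented for illustration; they are not how any real robot decides:

```python
def can_proceed(readings):
    """Toy affordance check that fuses several sensor types.
    readings is a dict like:
      {"cliff": False, "bump": True, "drive_current": 0.4}
    """
    if readings["cliff"]:
        # Downward-facing proximity sensor sees a sudden drop:
        # never drive off a flight of stairs.
        return False
    if readings["bump"] and readings["drive_current"] > 0.8:
        # Bumper is pressed and the drive motors are straining:
        # this is a true obstacle, not a pushable bedskirt.
        return False
    # Clear floor, or something light enough to push through.
    return True
```

Even this caricature shows why sensor variety matters: no single reading can distinguish a bedskirt from a wall, but a bumper plus a motor-current estimate gets part of the way there.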
Now even after you determine all of your affordances, challenges remain. At the next level up, knowing what job needs to be done and knowing when it's complete is a tough problem in its own right. One solution to the challenge of task completion is to define your problem very carefully. Machinery for harvesting grain runs almost entirely on autopilot. Its job is straightforward: cover every part of the field at least once. This clear task definition allows for efficient path planning and straightforward execution. The system knows that it's done as soon as it has visited every point on its planned path. This approach works well when there are no unexpected glitches, no inaccuracies in your map. But if a flood left standing water, or if a fence had been relocated since the map was last updated, these would result in a discrepancy between the planned path and the real world. Probably something unpleasant would happen. In addition to narrowly defining the task, robots can achieve high rates of task completion by working in a carefully structured environment. One example is the robotic retrieval system for books in a library at the University of Utah. Book seekers log their requests and the robot pulls the bin with their book in it and brings it to them. Books can be stored in any location and in any order, allowing for efficient use of space and storing frequently accessed books closer to the access point. The entire system is closed to humans when in operation and carefully structured. Each bin is the same size and shape. The robot handles bins, which are rugged and uniform, rather than books, which are varied and can be delicate. All of this together means that unintended interruptions to each retrieval mission are rare. This is also the approach used by warehouse robots. Floors are kept smooth, flat, clean, and free of obstacles. Bins and pallets all come in a standard size and are placed in specific locations.
The warehouse is closed to the elements, lighting and temperature carefully maintained. When it's feasible to do so, creating a structured environment is a great help for task completion. The only drawback to this approach is that it's brittle. It's sensitive to the assumption that everything is well controlled. To illustrate: a couple of years ago, in one of Amazon's giant warehouses, a package of microwave popcorn was dropped. It was crushed, the liquid butter leaked out, and made a greasy puddle on the floor. Because it was anomalous, a robot came to inspect the puddle. It drove through the puddle, spreading the butter and causing the robot to lose traction. This disrupted the robot's odometry: it counted wheel rotations to help estimate its position, so the spinning wheels introduced errors into the process, and the robot became disoriented and stuck. Another robot came to investigate and also slipped and got stuck. This was repeated several times before the problem was discovered and cleaned up. Because of popcorn. The downside to relying on a carefully structured environment is that as soon as that structure is violated, the system can break. Now contrast a book retrieval system or a warehouse with a home. Not just a single home, but all the homes in every city in every country. Imagine all the things that could possibly be on the floor of a home. Whatever you're picturing, a Roomba has run into it. This is the challenge of task completion with home robots. It is a somewhat structured environment. The floors are mostly flat, but this still admits a dizzying spectrum of deviations. iRobot's contributions to task completion touch both on clarifying the task definition and on being robust to disturbances. Newer Roombas come with upward-facing cameras. They find features overhead, like corners of the ceiling, and use these to navigate the way a sailor might use stars in the sky.
They also use odometry, which is particularly helpful when light is low or the robot goes under some furniture. And they use the front bumper to determine when they've hit the edge of the open floor. With the combination of this data and some very clever algorithmic work, robots can make a fairly accurate map of the space they've covered. After a few exploratory and learning runs, the robot builds confidence in its map and can use it the same way the grain harvester plans a mission that meets its objectives. It covers the entire area efficiently. And it does this while keeping an eye on its own battery charge and dust bin, returning to the base to empty its dirt or recharge as necessary. In order to handle obstacles or unforeseen events, a few other tricks have been used. Once the initial plan is in place and being executed, the robot continually watches for deviations between what it expects and what it observes. It may lose light and lose its ability to navigate by camera. It may encounter a closed door. It may get a shoelace wrapped around its brush or it may high center on a threshold. In order to handle everything that a capricious world can throw at it, the Roomba continually checks its assumptions. It recalculates its position over and over again based on what it sees to make sure it's not getting off track. It checks that its wheel odometer is consistent with its accelerometer readings. It makes sure that the brushes are spinning as they should. And if any of these things stops being true, the robot initiates a fail-safe routine, a sequence of corrective actions that helps get it back on track. These are hard to build well based only on theory. One of the benefits of having 30 million robots in homes is that they encounter some things that you would never think to test for. For users that opt in and agree to let iRobot use their robot's experiences to improve their performance, we can gather some information about what happened where and how often. 
The patterns in these events give us clues for common failures which we can then recreate in the lab and find effective remedies for. I don't know of any shortcut to this. The only path I know to robust task completion is a great deal of experience. There are other exciting task completion challenges to be addressed. Picture a home security robot, a system whose job it is to monitor your home for safety and security threats. For such a robot, defining its mission goals is harder. What should it look for? How often should it check? How will it know if it's done a good job? A lot of tasks aren't as easy to define as discrete missions like cleaning a floor, harvesting a field, and retrieving a book. The way we think about task definition for robots like security robots has yet to be worked out. You probably have an intuitive sense for what a security robot should do and how often, but you may find that when you go to reduce that to a specific set of instructions, it's hard. Worse, those rules may result in activities so structured and predictable that it would be easy for a savvy adversary to avoid. One best guess is that task definitions for robots like these will need to be more adaptive, more responsive to what they experience, more dependent on the quirks of the spaces they're in and how they're used. It may be difficult to create a detailed specification that works well for all situations. The task definition may need to become more abstract, leaving more of the implementation details up to individual robots. There's also a lot of work to be done in dealing with unexpected challenges during a task. Consider a robot whose job it is to go into the kitchen, any kitchen, and make a cup of coffee. This task was proposed by Apple co-founder Steve Wozniak as an alternative to the Turing test. 
The Turing test grew out of a thought experiment proposed by Alan Turing, in which human judges are given the task of deciding whether the other side of a typed conversation is being carried on by a human or a machine. Wozniak proposed the coffee test as a more interesting variant. In order to carry on a conversation, a robot can get by without knowing much of anything about the physical world. And in fact, most of the systems that have claimed to pass the Turing test have done so by incorporating superficial markers of human communication, like sarcasm or slang, rather than relying on a deep understanding of the world. To be successful in the coffee test, a robot would have to, for instance, be able to identify and handle spoons of every shape, size, and variety. It would have to handle open coffee containers, possibly grind its own beans, transfer liquids, and figure out how to boil water. Even being able to locate coffee in an unmarked container is a hard problem for robots. All of these uncertainties are barriers that must be overcome in order for the robot to complete its task. Vacuuming is certainly challenging, but it's definitely not the hardest household task we can find. The challenges of task definition and handling unexpected variations in a task make task completion loom large among the grand challenges of home robotics. Autonomous robots are fantastic, but to reach their maximum potential, we want them to be more than that. Autonomy suggests carrying out a mission in isolation, without additional instruction or interaction. It's a useful concept for developing theory, like a closed thermodynamic system, but neither of these exists in practice. A home robot will be surrounded by a constantly changing collection of sounds and movement, people and things. Some of these will be obstacles, and some of these will carry important information the robot should listen to.
In order to behave intelligently, the robot will have to navigate and interpret these. Colin Angle is the CEO of iRobot, and he spent some time wrestling with this concept. In a LinkedIn post, he laid out three important attributes beyond autonomy that an intelligent robot will need. It will have to be responsive, collaborative, and able to act as part of a larger system. Much of humans' high-bandwidth social coordination takes place through speech. Star Trek ship computers and HAL 9000 helped us imagine the seamless human-computer interface that speech recognition makes possible. The most primitive form of this is the Clapper. If you happen to have seen the 80s commercials that show people happily clapping their hands to turn their lamps on and off, you know what I mean. When I was 6 years old, I had a toy that operated on a similar principle. A handheld clicker signaled an orange truck to switch between two behaviors: moving forward in a straight line and moving backward and to the right. This crude remote control let me drive the toy through nothing but audio. I learned quickly that the clicker could be simulated with a sharp hand clap, a trick my dad used to hijack the truck more than once. Human speech recognition has come a very long way. For example, I dictated the entire text of this presentation into my phone. Now speech recognition is being used in automated customer service systems and to command digital assistants of all sorts. Performance isn't perfect, but most of the time it's surprisingly good. However, it's worth mentioning that when performance isn't good, it doesn't fail uniformly. English speech recognition for those with non-native accents varies between poor and horrendous. The Washington Post detailed this discrepancy in a 2018 report. The examples of failures would be humorous in isolation, but unfortunately the pattern they present is a disturbing one, since it fails to serve populations that are already at a disadvantage.
One particularly tough aspect of human interaction is getting machines to work well with all people, not just a subset. The next step, after understanding what a human has said, is to take action based on it, to follow the instructions. What a robot can do is of course limited by its construction. A vacuum cleaning robot will not be able to wash the windows. A voice assistant won't be able to vacuum the floor. However, digital home assistants are perfectly capable of doing anything that can be done with the click of a mouse. They can queue up movies, turn on some music, call a friend, even make purchases on the internet. Now it's possible, when you're up to your elbows in bread dough, to call your mom and ask her advice on gluten texture and relationships, all through voice commands. And when you notice you're out of trash bags, you can ask Alexa to have Amazon send you some more. This was taken to a bit of an extreme a couple of years ago. A six-year-old girl in Dallas asked her Echo, can you play dollhouse with me and get me a dollhouse? Alexa complied, ordering a dollhouse mansion and four pounds of sugar cookies. It made for a heartwarming story, which was picked up by the local news in San Diego. At the end of the story, the anchor said, I loved the little girl saying, Alexa ordered me a dollhouse. That utterance then broadcast through viewers' TVs, which in turn triggered a whole new batch of devices to order dollhouses for their families too. Following instructions isn't always straightforward. Sometimes it matters who's issuing them. It's important to have home robots do what we expect and what we see as reasonable. When we ask Alexa to open the pod bay doors, we sure as hell want those doors to open. Roomba and Braava, iRobot's mopping robot, are also becoming more responsive to what their families want. One way this is happening is with keep-out zones.
If you have some place that you don't want the robot to go, you can pull up a map and outline that region. Maybe you have something delicate, or a place where the robot tends to get stuck a lot. You can just head off trouble before it happens and make a virtual no-go zone. Having this capability, the robot can also start to help. With keep-out zone recommendations, the Roomba or Braava can look back over its history and notice whether there are certain locations where it's had trouble in the past. Maybe a tall threshold or a pile of computer cables where it tends to get stuck. Then it can offer to create a keep-out zone around this area. The robot starts to anticipate what the human might want. Another way iRobot is trying to facilitate human-robot interaction is by integrating with home assistants like Alexa and Google Home. Now you can say, Alexa, ask Roomba to clean my kitchen, and Roomba will start up cleaning the kitchen floor. For a while now, families have been able to schedule Roomba missions in advance. This is an example of autonomy, of telling the robot what it needs to do up front and then letting it continue to do that thing indefinitely. The ability to casually initiate a cleaning job by voice is a step forward, past autonomy toward responsiveness and collaboration. It helps the Roomba take a step from being an appliance toward being a partner. It's a small but clear step along the path toward seamless coordination with humans. You can imagine the possibilities when robots in our home get really good at understanding what we want, based not only on verbal commands but also body language and situational observations. Picture a robot waiter. To do its job well, this waiter would not only have to place and remove dishes and utensils and drinks, but it would also have to watch for physical cues. It would have to handle dishes so as not to touch or alarm diners. It might have to interpret subtle or ambiguous verbal commands.
For instance, one guest might say, mmm, that soup was good, to hint that she's ready for the next course. Another might say, I'm good, when offered a second helping. The robot would also have to infer, based on drink levels and food amounts and how actively diners are taking bites and sips, whether it might be time to clear some plates and glasses. These capabilities are well beyond the current state of the art, but there are no obvious roadblocks to creating them. I expect that working in close physical quarters with humans and communicating with them, both through speech and physical cues, will be a productive line of development and will make home robots that much more helpful. Now imagine taking this interaction to the next level: a robot that anticipates human needs before they're expressed. This step takes us from moment-to-moment interaction into long-term planning and considerably more complex predictions. In order to be good at preparing meals, the robot will have to do more than pass the coffee test. Yes, it'll have to navigate a kitchen and combine ingredients to make a palatable result, but it will also have to make guesses as to what the humans might want to eat, when, and how much. Initially, this is something that can be commanded explicitly, but eventually, you can imagine a robot that takes into account past requests as well as what it knows about the schedules of the household, and perhaps other variables like the weather, and prepares appropriate dishes based on those guesses. It will never have enough information to know for sure that its guesses will be correct, and it will always be seeking additional feedback so that it can make better guesses in the future, but on any given day, it'll simply do its best: make and execute a meal plan, order the groceries it needs, and prepare and serve them at the right time.
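One simple way for a robot to keep making reasonable guesses while folding in feedback is an exponentially weighted running estimate. This is a toy sketch of the idea, with names and a learning rate I've invented for illustration:

```python
def update_guess(current_guess, observed, rate=0.3):
    """Nudge the robot's estimate (say, servings of pasta the
    household will eat tonight) toward what was actually consumed.
    Recent evidence counts more than old evidence, so the guess
    adapts as schedules and appetites change."""
    return (1 - rate) * current_guess + rate * observed
```

Starting from a guess of 2.0 servings and observing 3.0, the new guess becomes 2.3; repeated observations keep pulling the estimate toward current habits without ever requiring certainty.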
The ability to adapt to the latest information, incorporate new feedback, and still be able to make reasonable guesses along the way will be an important capability as robots gain more responsibility in our homes. Adjusting a home's temperature and humidity, for example, is a constant dance with the changing needs and schedules of its inhabitants and is further constrained by the desire not to waste energy and money. Like meal preparation, good environmental control will be both responsive to human feedback and will also anticipate human needs. These four areas, ensuring privacy, determining affordances, completing tasks, and human interaction have all shown amazing progress in the last 10 years. Technologies that are commonplace now were science fiction at the start of the century, but there is a lot of work left to do. These challenges are grand both in scope and in the reward we can expect if we solve them well. These are the problems I feel energized by. I'm excited to tackle them together. Thank you.