Researchers from Canada have developed a new control scheme that could improve robots' ability to home in on and interact with target objects. Their approach could help rev up robots designed to perform industrial tasks from polishing and welding to painting and inspection, and perhaps beyond.

Robots these days are capable of some pretty amazing things, but one area where they could use more help is in navigating toward and handling objects. It's a seemingly simple task that we often take for granted. Consider the scenario of returning to our vehicle after a trip to the supermarket. As we step outside, we must first spot our car among those parked next to it, identifying it by shape or model and color. We then have to make our way toward it, unlock and open the door once we're within the right range, unload, and enter the driver's seat, all while other shoppers and cars interfere with our path.

In trying to get robots to perform similar tasks, robotics designers face two problems: vision, and switching between long-range and short-range movement. Detecting an object based on data pulled from 3D cameras, scanners, and even sonar can be computationally taxing. Despite providing high resolution, these devices generate enormous amounts of visual data, making detection a slow process. And once an object is spotted, there's the problem of actually getting to it while maintaining a safe distance from it. That can be relatively straightforward when an environment is unchanging and predictable, but that's hardly the case in the real world.

To address the vision problem, the research team co-opted a video game controller to quickly and continuously obtain 3D visual data. Originally designed to capture gamers' motion during gameplay, the Microsoft Kinect casts a net of invisible data points on any scene almost instantly. That data can be translated into measures of depth, position, and color to help a robot, in this case a mobile robotic arm, get its bearings.
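To give a feel for how a depth sensor's "net of data points" becomes positions a robot can use, here is a minimal back-projection sketch. The camera intrinsics below are illustrative placeholders for a Kinect-class sensor, not values from the study; a real system would use the device's calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics roughly in the range of a Kinect-class
# depth camera; real values come from device calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth image in metres to an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth_m > 0                             # zero depth = no return
    z = depth_m[valid]
    x = (u[valid] - CX) * z / FX                    # pinhole model: X = (u-cx)Z/fx
    y = (v[valid] - CY) * z / FY
    return np.stack([x, y, z], axis=1)

# Tiny synthetic example: a flat 4x4 patch of "wall" one metre away.
cloud = depth_to_points(np.ones((4, 4)))
print(cloud.shape)   # (16, 3)
```

Each row of `cloud` is a 3D point in the camera frame; a planner can then reason about the target's shape and distance directly from these points.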
The team then developed a self-tuning controller that enabled their robotic arm to seamlessly switch between different modes of operation depending on the arm's position relative to its target, a car door. For different ranges of operation, the robot was supplied with different input data: the Kinect-generated model at long range, infrared sensor data at close range, and force sensor data, obtained through a pliable wrist, when touching the door.

Experiments showed that the arm could successfully map out paths for both approaching and inspecting the door, and that it followed those paths with minimal error and no damage. Despite the favorable results, the researchers' strategy could benefit from some improvements; new or more adaptive controllers, for example, could allow for faster execution. Nevertheless, the approach may already provide much-needed solutions to important problems faced by researchers today, helping to further expand the range of abilities that robots currently possess.
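The range-based switching described above can be sketched as a simple mode selector. The thresholds and mode names here are assumptions for illustration; the paper's actual controller, gains, and switching conditions are not given in this article.

```python
# Illustrative thresholds (metres), not values from the study.
FAR_THRESHOLD = 1.0    # beyond this, trust the Kinect-built 3D model
NEAR_THRESHOLD = 0.05  # inside this, contact is imminent

def select_mode(distance_m: float, in_contact: bool) -> str:
    """Pick which sensor the controller should rely on at this range."""
    if in_contact:
        return "force"      # compliant-wrist force feedback while touching
    if distance_m > FAR_THRESHOLD:
        return "kinect"     # coarse 3D model for long-range approach
    if distance_m > NEAR_THRESHOLD:
        return "infrared"   # short-range proximity sensing near the door
    return "force"          # at contact range, defer to force control

print(select_mode(2.0, False))   # kinect
print(select_mode(0.3, False))   # infrared
print(select_mode(0.2, True))    # force
```

A real controller would also blend the modes smoothly (or add hysteresis around the thresholds) so the arm does not chatter between sensors at a boundary; that is the part the team's self-tuning scheme handles.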