<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns:yt="http://www.youtube.com/xml/schemas/2015" xmlns:media="http://search.yahoo.com/mrss/" xmlns="http://www.w3.org/2005/Atom">
 <link rel="self" href="http://www.youtube.com/feeds/videos.xml?channel_id=UC4Vgg1DuTldG9N8PHB9sMrw"/>
 <id>yt:channel:UC4Vgg1DuTldG9N8PHB9sMrw</id>
 <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
 <title>RBO TU Berlin</title>
 <link rel="alternate" href="https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw"/>
 <author>
  <name>RBO TU Berlin</name>
  <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
 </author>
 <published>2009-10-23T15:46:05+00:00</published>
 <entry>
  <id>yt:video:xFw-zetsMfY</id>
  <yt:videoId>xFw-zetsMfY</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Visual Detection of Opportunities to Exploit Contact in Grasping Using MABs</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=xFw-zetsMfY"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2017-07-25T13:48:16+00:00</published>
  <updated>2017-07-31T08:03:40+00:00</updated>
  <media:group>
   <media:title>Visual Detection of Opportunities to Exploit Contact in Grasping Using MABs</media:title>
   <media:content url="https://www.youtube.com/v/xFw-zetsMfY?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i1.ytimg.com/vi/xFw-zetsMfY/hqdefault.jpg" width="480" height="360"/>
   <media:description>&quot;Visual Detection of Opportunities to Exploit Contact in Grasping Using Contextual Multi-Armed Bandits&quot;

http://www.robotics.tu-berlin.de/fileadmin/fg170/Publikationen_pdf/Eppner17_IROS.pdf

Clemens Eppner and Oliver Brock

Environment-constrained grasping exploits beneficial interactions between hand, object, and environment to increase grasp success. Instead of focusing on the final static relationship between hand posture and object pose, this view of grasping emphasizes the need and the opportunity to select the most appropriate, contact-rich grasping motion leading up to a final static grasp configuration. This view changes the nature of the underlying planning problem: instead of planning for static contact points, we need to decide which environmental constraint (EC) to use during the grasping motion. We propose a method to make these decisions based on depth measurements so as to generate robust grasps for a large variety of objects. Our planner exploits the advantages of a soft robot hand and learns a hand-specific classifier for edge-, surface-, and wall-grasps, each exploiting a different EC. Additionally, we show how the model can be improved continuously in a contextual bandit setting, without an explicit training and test phase, enabling a robot to refine its grasping skills throughout its lifetime.</media:description>
   <media:community>
    <media:starRating count="3" average="5.00" min="1" max="5"/>
    <media:statistics views="21"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:CXaN8ZWRMT0</id>
  <yt:videoId>CXaN8ZWRMT0</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Interleaving Motion in Contact and Free Space for Planning Under Uncertainty</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=CXaN8ZWRMT0"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2017-07-21T12:44:34+00:00</published>
  <updated>2017-07-31T08:02:51+00:00</updated>
  <media:group>
   <media:title>Interleaving Motion in Contact and Free Space for Planning Under Uncertainty</media:title>
   <media:content url="https://www.youtube.com/v/CXaN8ZWRMT0?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i4.ytimg.com/vi/CXaN8ZWRMT0/hqdefault.jpg" width="480" height="360"/>
   <media:description>Arne Sieverling, Clemens Eppner, Felix Wolff and Oliver Brock

http://www.robotics.tu-berlin.de/fileadmin/fg170/Publikationen_pdf/Sieverling17_IROS.pdf

In this paper we present a planner that interleaves free-space motion with contact motion to reduce uncertainty. The planner finds such motions by growing a search tree in the combined space of collision-free and contact configurations. The planner reasons efficiently about the accumulated uncertainty by factoring the state into a belief over configuration and a fully observable contact state. We show the uncertainty-reducing capabilities of the planner on a manipulation benchmark from the POMDP literature. The planner scales up to more complex problems such as manipulation under uncertainty in a seven-dimensional configuration space. We validate our planner in simulation and on a real robot.</media:description>
   <media:community>
    <media:starRating count="2" average="5.00" min="1" max="5"/>
    <media:statistics views="23"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:r68ZkNo0xMU</id>
  <yt:videoId>r68ZkNo0xMU</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>The Last Waltz in Berlin</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=r68ZkNo0xMU"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2017-06-14T13:35:13+00:00</published>
  <updated>2017-07-11T18:28:11+00:00</updated>
  <media:group>
   <media:title>The Last Waltz in Berlin</media:title>
   <media:content url="https://www.youtube.com/v/r68ZkNo0xMU?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/r68ZkNo0xMU/hqdefault.jpg" width="480" height="360"/>
   <media:description></media:description>
   <media:community>
    <media:starRating count="3" average="5.00" min="1" max="5"/>
    <media:statistics views="129"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:H8p4WbFqtgQ</id>
  <yt:videoId>H8p4WbFqtgQ</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>A Method for Sensorizing Soft Actuators and Its Application to the RBO Hand 2</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=H8p4WbFqtgQ"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2017-02-25T19:09:49+00:00</published>
  <updated>2017-05-09T02:19:26+00:00</updated>
  <media:group>
   <media:title>A Method for Sensorizing Soft Actuators and Its Application to the RBO Hand 2</media:title>
   <media:content url="https://www.youtube.com/v/H8p4WbFqtgQ?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i1.ytimg.com/vi/H8p4WbFqtgQ/hqdefault.jpg" width="480" height="360"/>
    <media:description>The compliance of soft actuators makes manipulation safer and simplifies control. But their high flexibility also makes sensorization challenging. Not all deformations in this large space of possible deformations are equally important. We present a method for sensorizing soft actuators that, for a given application, finds an effective layout from a set of sensors. It starts from a redundant sensor layout and iteratively reduces the number of sensors. Applying the method to the PneuFlex actuators of the RBO Hand 2, we identify a layout of four liquid metal strain sensors and one pressure sensor to predict actuator deformation in three dimensions: flexional, lateral, and twist. Finally, the layout is used to build a sensorized RBO Hand 2. It can detect passive shape adaptation while grasping and reveals failure cases during manipulation, e.g., slipping fingers while opening a door.</media:description>
   <media:community>
    <media:starRating count="2" average="5.00" min="1" max="5"/>
    <media:statistics views="303"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:KO3XfspsXJY</id>
  <yt:videoId>KO3XfspsXJY</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Achieving Robustness by Optimizing Failure Behavior</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=KO3XfspsXJY"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2017-02-25T18:10:27+00:00</published>
  <updated>2017-02-25T21:48:26+00:00</updated>
  <media:group>
   <media:title>Achieving Robustness by Optimizing Failure Behavior</media:title>
   <media:content url="https://www.youtube.com/v/KO3XfspsXJY?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i4.ytimg.com/vi/KO3XfspsXJY/hqdefault.jpg" width="480" height="360"/>
   <media:description>The most prominent criterion for learning of manipulation skills is the optimization of task success, modeled as expected reward or probability of success. This is sensible if we only want to optimize a single controller. But if learned manipulation primitives are used as modules in a larger system, then it is also important that their generated sensor traces facilitate recognition of action-outcomes. Optimization solely for expected success of a primitive does not guarantee this. We demonstrate a simple example for optimization of actions towards observability, combined with optimization for expected success. Our experiment is a manipulation task with a soft manipulator, where an action primitive is learned such that its generated sensor trace helps a classifier to distinguish task success and task failure. The experimental results indicate that adding auxiliary forces to the original manipulation primitive can indeed facilitate outcome recognition for manipulation tasks.</media:description>
   <media:community>
    <media:starRating count="1" average="5.00" min="1" max="5"/>
    <media:statistics views="81"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:Ry6JzeW0HOM</id>
  <yt:videoId>Ry6JzeW0HOM</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=Ry6JzeW0HOM"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-10-17T14:27:11+00:00</published>
  <updated>2017-05-13T11:53:50+00:00</updated>
  <media:group>
   <media:title>Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge</media:title>
   <media:content url="https://www.youtube.com/v/Ry6JzeW0HOM?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/Ry6JzeW0HOM/hqdefault.jpg" width="480" height="360"/>
   <media:description>IROS 2016 Best Paper Award finalist: Rico Jonschkowski, Clemens Eppner, Sebastian Höfer, Roberto Martín-Martín, and Oliver Brock. Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016. (link: http://www.robotics.tu-berlin.de/fileadmin/fg170/Publikationen_pdf/Jonschkowski-16-IROS.pdf )</media:description>
   <media:community>
    <media:starRating count="3" average="5.00" min="1" max="5"/>
    <media:statistics views="310"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:n26xDiVIkqc</id>
  <yt:videoId>n26xDiVIkqc</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Mass Control of Pneumatic Soft Continuum Actuators with Commodity Components</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=n26xDiVIkqc"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-08-24T10:42:47+00:00</published>
  <updated>2017-02-01T01:36:58+00:00</updated>
  <media:group>
   <media:title>Mass Control of Pneumatic Soft Continuum Actuators with Commodity Components</media:title>
   <media:content url="https://www.youtube.com/v/n26xDiVIkqc?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/n26xDiVIkqc/hqdefault.jpg" width="480" height="360"/>
    <media:description>The video demonstrates the capabilities of a pneumatic soft hand operated with a mass controller. Mass control enables the effective use of actuator compliance while maintaining continuous and reactive control of the complementary preset position. Additionally, it gives an intuition of the attainable performance of mass control in terms of speed, stability, and precision.</media:description>
   <media:community>
    <media:starRating count="5" average="5.00" min="1" max="5"/>
    <media:statistics views="282"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:Va0O0JS9bx0</id>
  <yt:videoId>Va0O0JS9bx0</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Planning Grasp Strategies That Exploit Environmental Constraints</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=Va0O0JS9bx0"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-04-15T09:37:08+00:00</published>
  <updated>2017-06-13T16:37:11+00:00</updated>
  <media:group>
   <media:title>Planning Grasp Strategies That Exploit Environmental Constraints</media:title>
   <media:content url="https://www.youtube.com/v/Va0O0JS9bx0?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/Va0O0JS9bx0/hqdefault.jpg" width="480" height="360"/>
   <media:description>Clemens Eppner and Oliver Brock. Planning Grasp Strategies That Exploit Environmental Constraints. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 4947 - 4952, 2015.

http://www.robotics.tu-berlin.de/fileadmin/fg170/Publikationen_pdf/eppner_icra2015.pdf</media:description>
   <media:community>
    <media:starRating count="5" average="5.00" min="1" max="5"/>
    <media:statistics views="263"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:DNjlAv2cBQ8</id>
  <yt:videoId>DNjlAv2cBQ8</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Fundamentals of Robotics (Summer Term 2015)</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=DNjlAv2cBQ8"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-04-11T15:31:09+00:00</published>
  <updated>2016-10-31T18:36:47+00:00</updated>
  <media:group>
   <media:title>Fundamentals of Robotics (Summer Term 2015)</media:title>
   <media:content url="https://www.youtube.com/v/DNjlAv2cBQ8?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i1.ytimg.com/vi/DNjlAv2cBQ8/hqdefault.jpg" width="480" height="360"/>
    <media:description>This video shows the final challenge of our undergraduate robotics course, Fundamentals of Robotics. Our teaching staff really enjoyed this course, as did the students; one said: &quot;I have never worked so hard for a university course before and I have learned more during this semester than during my entire study program.&quot; Thank you all for your great motivation, and thank you so much for editing this video, Tessa!</media:description>
   <media:community>
    <media:starRating count="5" average="5.00" min="1" max="5"/>
    <media:statistics views="367"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:-RdwLMCD8G4</id>
  <yt:videoId>-RdwLMCD8G4</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>In-hand manipulation with the RBO Hand 2</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=-RdwLMCD8G4"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-04-04T08:11:17+00:00</published>
  <updated>2017-01-11T19:43:41+00:00</updated>
  <media:group>
   <media:title>In-hand manipulation with the RBO Hand 2</media:title>
   <media:content url="https://www.youtube.com/v/-RdwLMCD8G4?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i2.ytimg.com/vi/-RdwLMCD8G4/hqdefault.jpg" width="480" height="360"/>
   <media:description>The passive compliance of the soft robotic RBO Hand 2 allows for easy in-hand manipulation. In this video the hand repeats a simple, pre-defined motion to turn a plastic ball it holds between its fingers.</media:description>
   <media:community>
    <media:starRating count="2" average="5.00" min="1" max="5"/>
    <media:statistics views="1445"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:jmJemIP7dZQ</id>
  <yt:videoId>jmJemIP7dZQ</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Mass Control of Pneumatic Soft Continuum Actuators with Commodity Components</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=jmJemIP7dZQ"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-03-11T08:45:12+00:00</published>
  <updated>2017-06-04T12:24:29+00:00</updated>
  <media:group>
   <media:title>Mass Control of Pneumatic Soft Continuum Actuators with Commodity Components</media:title>
   <media:content url="https://www.youtube.com/v/jmJemIP7dZQ?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/jmJemIP7dZQ/hqdefault.jpg" width="480" height="360"/>
   <media:description>Compliant actuation is the dominant paradigm for soft hands. We argue that to fully leverage the compliance available in soft pneumatic actuators, they should be controlled using air mass rather than position or force, as is customary in most soft robotics research. We propose an air-mass controller that enables setting a preset position, as in position control, but allows for the exploitation of fast, mechanical compliance without additional control burden. The proposed mass control scheme is based on discrete commodity valves and pressure sensors, filling a gap in available mass control systems for small-scale soft continuum actuators. The mass controller exhibits low drift for mass trajectories tens of seconds in duration, without requiring a precise model of the actuator. 
Continuous mass control opens up new applications for soft robotics in which compliance is of central importance.</media:description>
   <media:community>
    <media:starRating count="3" average="5.00" min="1" max="5"/>
    <media:statistics views="835"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:TsVUQtRNIts</id>
  <yt:videoId>TsVUQtRNIts</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=TsVUQtRNIts"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-03-10T17:15:06+00:00</published>
  <updated>2017-02-21T07:17:00+00:00</updated>
  <media:group>
   <media:title>Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge</media:title>
   <media:content url="https://www.youtube.com/v/TsVUQtRNIts?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i1.ytimg.com/vi/TsVUQtRNIts/hqdefault.jpg" width="480" height="360"/>
   <media:description>Rico Jonschkowski, Clemens Eppner, Sebastian Höfer, Roberto Martín-Martín, and Oliver Brock. Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge. Technical Report RBO-2016-01, Department of Computer Engineering and Microelectronics, Technische Universität Berlin, 2016. http://tinyurl.com/gt4srne

We present a method for multi-class segmentation from RGB-D data in a realistic warehouse picking setting. The method computes pixel-wise probabilities and combines them to find a coherent object segmentation. It reliably segments objects in cluttered scenarios, even when objects are translucent, reflective, highly deformable, have fuzzy surfaces, or consist of loosely coupled components. The robust performance results from the exploitation of problem structure inherent to the warehouse setting. The proposed method proved its capabilities as part of our winning entry to the 2015 Amazon Picking Challenge. We present a detailed experimental analysis of the contribution of different information sources, compare our method to standard segmentation techniques, and assess possible extensions that further enhance the algorithm's capabilities. We release our software and data sets as open source.</media:description>
   <media:community>
    <media:starRating count="2" average="5.00" min="1" max="5"/>
    <media:statistics views="289"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:gDBLGSrSWXM</id>
  <yt:videoId>gDBLGSrSWXM</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>A Compact Representation of Human Single-Object Grasping</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=gDBLGSrSWXM"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-03-10T11:12:45+00:00</published>
  <updated>2016-09-22T13:33:05+00:00</updated>
  <media:group>
   <media:title>A Compact Representation of Human Single-Object Grasping</media:title>
   <media:content url="https://www.youtube.com/v/gDBLGSrSWXM?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i4.ytimg.com/vi/gDBLGSrSWXM/hqdefault.jpg" width="480" height="360"/>
    <media:description>Observations of human grasping reveal that the exploitation of environmental constraints is a key structural aspect for the robustness and versatility of human grasping behavior. We analyze 3,400 human grasping trials with 17 subjects grasping 25 objects to show that viewing environmental constraints as the central structural aspect of human grasping yields surprisingly simple representations of human grasping behavior. We present hypothesis-driven experiments that emphasize the centrality of environmental constraints in human grasping and extract from data a simple “grasping plan” that is a generative model for all of the human grasping trials we observed. This grasping plan can in principle be transferred to a robot system in an attempt to leverage environmental constraints to improve the performance of robotic grasping.</media:description>
   <media:community>
    <media:starRating count="1" average="5.00" min="1" max="5"/>
    <media:statistics views="89"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:VpwiFbmSp5Q</id>
  <yt:videoId>VpwiFbmSp5Q</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Coupled Learning of Action Parameters and Forward Models for Manipulation</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=VpwiFbmSp5Q"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2016-03-09T13:34:34+00:00</published>
  <updated>2016-09-22T13:33:05+00:00</updated>
  <media:group>
   <media:title>Coupled Learning of Action Parameters and Forward Models for Manipulation</media:title>
   <media:content url="https://www.youtube.com/v/VpwiFbmSp5Q?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/VpwiFbmSp5Q/hqdefault.jpg" width="480" height="360"/>
   <media:description>The effectiveness of robot interaction depends on the robot's ability to perform task-relevant actions and on the degree to which it is able to predict the outcomes of these actions. 
In this paper we argue that the two learning problems - learning actions and learning forward models - must be tightly coupled for each of them to be successful.
We present an approach that is able to learn a set of continuous action parameters and relational forward models from the robot's own experience. We formalize our approach as simultaneously clustering experiences in a continuous and a relational representation. 
Our experiments in a simulated manipulation task show that this form of coupled subsymbolic and symbolic learning is required for the robot to acquire task-relevant action capabilities.</media:description>
   <media:community>
    <media:starRating count="1" average="5.00" min="1" max="5"/>
    <media:statistics views="88"/>
   </media:community>
  </media:group>
 </entry>
 <entry>
  <id>yt:video:BolevVGJk18</id>
  <yt:videoId>BolevVGJk18</yt:videoId>
  <yt:channelId>UC4Vgg1DuTldG9N8PHB9sMrw</yt:channelId>
  <title>Learning State Representations with Robotic Priors</title>
  <link rel="alternate" href="https://www.youtube.com/watch?v=BolevVGJk18"/>
  <author>
   <name>RBO TU Berlin</name>
   <uri>https://www.youtube.com/channel/UC4Vgg1DuTldG9N8PHB9sMrw</uri>
  </author>
  <published>2015-06-08T13:46:57+00:00</published>
  <updated>2017-05-02T11:41:07+00:00</updated>
  <media:group>
   <media:title>Learning State Representations with Robotic Priors</media:title>
   <media:content url="https://www.youtube.com/v/BolevVGJk18?version=3" type="application/x-shockwave-flash" width="640" height="390"/>
   <media:thumbnail url="https://i3.ytimg.com/vi/BolevVGJk18/hqdefault.jpg" width="480" height="360"/>
   <media:description>Rico Jonschkowski and Oliver Brock. Learning State Representations with Robotic Priors. Autonomous Robots, 2015. 
paper: http://tinyurl.com/ogp23m5 code: https://github.com/tu-rbo/learning-state-representations-with-robotic-priors</media:description>
   <media:community>
    <media:starRating count="8" average="5.00" min="1" max="5"/>
    <media:statistics views="583"/>
   </media:community>
  </media:group>
 </entry>
</feed>
