Neural modularity helps organisms evolve to learn new skills without forgetting old skills

6,203 views

Published on Apr 2, 2015

Video summary of Ellefsen, Mouret, and Clune (2015), "Neural modularity helps organisms evolve to learn new skills without forgetting old skills," PLoS Computational Biology.

Summary: A long-standing goal in artificial intelligence (AI) is creating computational brain models (neural networks) that learn what to do in new situations. An obstacle is that agents typically learn new skills only by losing previously acquired skills. Here we test whether such forgetting is reduced by evolving modular neural networks, meaning networks with many distinct subgroups of neurons. Modularity intuitively should help because learning can be selectively turned on only in the module learning the new task. We confirm this hypothesis: modular networks have higher overall performance because they learn new skills faster while retaining old skills more. Our results suggest that one benefit of modularity in natural animal brains may be allowing learning without forgetting.

Abstract: A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g., to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
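The two mechanisms the abstract describes can be sketched in a few lines. This is not the paper's actual implementation (the authors evolve networks with multi-objective selection); it is a minimal NumPy illustration, under assumed names and parameters (`eta`, `cost_coef` are illustrative), of (a) neuromodulation as a per-connection gate on a Hebbian weight update, so learning can be switched off in modules encoding old skills, and (b) a connection cost that rewards sparsity, which is what pushes evolution toward modular wiring.

```python
import numpy as np

def neuromodulated_hebbian_update(w, pre, post, modulation, eta=0.1):
    """One plasticity step. `modulation` gates learning per connection:
    0 freezes a weight, 1 applies the full Hebbian change."""
    return w + eta * modulation * np.outer(post, pre)

def fitness_with_connection_cost(task_performance, w, cost_coef=0.01):
    """Penalize each nonzero connection, favoring sparse (and, per the
    paper's result, more modular) networks."""
    return task_performance - cost_coef * np.count_nonzero(w)

# Example: two postsynaptic units, three presynaptic units.
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])        # presynaptic activations
post = np.array([1.0, 1.0])            # postsynaptic activations
mod = np.array([[1.0, 1.0, 1.0],       # module allowed to learn the new task
                [0.0, 0.0, 0.0]])      # module with learning switched off
w = neuromodulated_hebbian_update(w, pre, post, mod)
```

After this step, only the first row of `w` has changed; the "frozen" row keeps whatever it previously encoded, which is exactly how gated modularity avoids catastrophic forgetting in this sketch.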

Sources of the media used in this video:
Background music: The Passion HiFi - Redemption
https://soundcloud.com/freehiphopbeat...

Images:
Brain Zoom to neurons: www.dreamstime.com
Chess player: Flickr commons, by Jeffrey Barke
Soccer player: Wikimedia Commons, by AFP/SCANPI
Thinking robot: Pixabay, by DrSJS
Vacuum cleaner: Pixabay, by Nemo
Monkey with stick: Wikimedia Commons, by Mike R
Cheetah robot: Boston Dynamics
Robot in rocky terrain: Flickr commons, by JBLM PAO
