<h3 class="showhide_heading" id="Participants">Participants</h3>
<p>Nicolas Bredèche, Fei Jiang, Cédric Hartland, Hélène Paugam-Moisy, Julien Perez, Marc Schoenauer, Michèle Sebag</p>
<p>INRIA-Alchemy: Hugues Berry;
LIMSI-PS: Philippe Tarroux, Jean-Sylvain Liénard, Mathieu Dubois, Sylvain Chevallier</p>
<h3 class="showhide_heading" id="Research_Themes">Research Themes</h3>
<p>Reservoir Computing is a generic name that appeared at the end of 2006, gathering several new paradigms for computing with large recurrent neural networks with random connectivity, such as:</p>
<ul>
<li>Echo State Networks (ESN) (Jaeger, 2001)</li>
<li>Liquid State Machines (LSM) (Maass, Natschläger, Markram, 2002)</li>
<li>Backpropagation-Decorrelation (BPDC) (Steil, 2004)</li>
</ul>
<p>Since they are dynamical, non-linear systems, recurrent networks are both highly powerful and very hard to tame. Highly powerful for solving complicated engineering tasks: controlling and modeling complex dynamical systems, temporal pattern recognition, speech recognition, autonomous robotics. Very hard to tame as long as the designer aims to control all the clockwork of the network dynamics.</p>
<p>The trick of Reservoir Computing is to build the internal network, the so-called "reservoir", as a large random structure with sparse connectivity, to let its dynamics self-organize freely, and to learn, with simple rules, a set of "read-out" neurons that extract the relevant information from the reservoir dynamics. The neurons involved in such structures can be either traditional units (threshold or sigmoid neurons) or spiking neurons that take into account the precise timing of spike firing.</p>
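<p>To make the random-reservoir/trained-read-out idea concrete, here is a minimal Echo State Network sketch in Python with NumPy. It is only an illustration under assumed settings: the dimensions, sparsity, spectral radius, ridge parameter and the toy sine-prediction task are all hypothetical choices, not the project's actual setup. Only the linear read-out weights are learned; the reservoir weights stay fixed and random.</p>
<pre>
import numpy as np

# Minimal Echo State Network sketch (all sizes and settings are illustrative).
rng = np.random.default_rng(0)
n_in, n_res, n_out = 1, 200, 1          # hypothetical dimensions
sparsity, spectral_radius = 0.1, 0.9    # typical reservoir settings

# Fixed random input and reservoir weights; reservoir connectivity is sparse.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W[rng.random((n_res, n_res)) > sparsity] = 0.0
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # scale for stable dynamics

# Toy task: predict the next value of a sine wave.
u = np.sin(0.2 * np.arange(1000)).reshape(-1, 1)
y_target = np.roll(u, -1, axis=0)

# Drive the reservoir (tanh units) and collect its internal states.
x = np.zeros(n_res)
states = np.zeros((len(u), n_res))
for t in range(len(u)):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Learn only the read-out weights, here by ridge regression on the states.
washout, ridge = 100, 1e-6
X, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

y_pred = states @ W_out
print("read-out MSE:", np.mean((y_pred[washout:] - Y) ** 2))
</pre>
<p>The same pattern carries over to the other paradigms listed above: the reservoir supplies a rich, free-running dynamics, and only the mapping from reservoir states to outputs is trained.</p>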