Crossing the Chasm


Under Construction...

Participants

Alejandro Arbelaez, Alvaro Fialho, Philippe Rollet, Marc Schoenauer, Michèle Sebag

Research Themes


Many leading techniques in both Machine Learning and Stochastic Search have been very successful in solving difficult real-world problems. However, their application to newly encountered problems, or even to new instances of known problems, remains a challenge, even for experienced researchers in the field - not to mention newcomers, even skilled scientists or engineers from other areas. Theory and/or practical tools are still missing to help these techniques cross the chasm (in the sense of Geoffrey A. Moore's book about the diffusion of innovation).
The difficulties faced by users arise mainly from the wide range of algorithm and/or parameter choices involved in using these approaches, and from the lack of guidance on how to select them. Moreover, state-of-the-art approaches to real-world problems tend to be bespoke, problem-specific methods that are expensive to develop and maintain.

More specifically, the following research conducted in TAO is concerned with Crossing the Chasm:
  • Adaptive Operator Selection, or how to adapt the mechanism that chooses among the different variation operators in Evolutionary Algorithms. We have proposed two original features:
    • Using a Multi-Armed Bandit algorithm for operator selection (GECCO'08 paper)
    • Using Extreme values rather than averages as a reward for operators (PPSN'08 paper)
    • Ongoing work is investigating how to combine the above ideas with the Compass approach of our colleagues from Angers University (J. Maturana, F. Saubion: A Compass to Guide Genetic Algorithms. PPSN 2008: 256-265)
  • Meta-parameter tuning for Machine Learning Algorithms
  • Active Learning, or how to choose the next sample depending on the previously seen examples
  • Designing Problem Descriptors is a longer-term goal: being able to accurately describe a given problem (or instance) will allow us to learn, from extensive experiments, which algorithms/parameters are good for classes of instances, or even for individual instances (see e.g. F. Hutter, Y. Hamadi, H. H. Hoos, and K. Leyton-Brown: Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms, CP'2006).
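To make the first theme concrete, the bandit-based Adaptive Operator Selection idea can be sketched in a few lines. This is a hypothetical toy implementation combining a UCB1-style Multi-Armed Bandit with extreme-value rewards (maximum improvement over a sliding window, rather than the average); the class name, window size, and exploration scale are illustrative assumptions, not the exact algorithms of the GECCO'08 and PPSN'08 papers.

```python
import math

class ExtremeValueBandit:
    """UCB1-style bandit that selects among variation operators,
    crediting each operator with the *extreme* (maximum) fitness
    improvement observed in a sliding window of its recent uses.
    Hypothetical sketch, not the published AOS algorithm."""

    def __init__(self, n_ops, window=10, scale=1.0):
        self.n_ops = n_ops
        self.window = window                      # sliding-window size
        self.scale = scale                        # exploration coefficient
        self.counts = [0] * n_ops                 # applications per operator
        self.recent = [[] for _ in range(n_ops)]  # recent raw improvements
        self.total = 0

    def select(self):
        """Return the index of the operator to apply next (UCB1 rule)."""
        # Try each operator once before trusting the statistics.
        for op in range(self.n_ops):
            if self.counts[op] == 0:
                return op

        def ucb(op):
            # Extreme-value reward: best improvement in the window.
            reward = max(self.recent[op]) if self.recent[op] else 0.0
            bonus = self.scale * math.sqrt(
                2.0 * math.log(self.total) / self.counts[op])
            return reward + bonus

        return max(range(self.n_ops), key=ucb)

    def update(self, op, improvement):
        """Record the fitness improvement of one application of `op`."""
        self.counts[op] += 1
        self.total += 1
        self.recent[op].append(max(0.0, improvement))
        if len(self.recent[op]) > self.window:
            self.recent[op].pop(0)
```

In an evolutionary loop, `select()` would pick the operator to apply to the current parents and `update()` would feed back the offspring-minus-parent fitness gain; the exploration bonus keeps occasionally re-trying operators that have fallen behind.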
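For the Active Learning theme, the simplest strategy consistent with the description above is uncertainty sampling: query next the unlabeled example on which the current model is least confident. The helper names below (`entropy`, `query_next`) are illustrative assumptions, shown for a binary classifier exposing a predicted probability.

```python
import math

def entropy(p):
    """Shannon entropy of a Bernoulli prediction p = P(y=1 | x)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def query_next(unlabeled, predict_proba):
    """Return the unlabeled sample the current model is most
    uncertain about, i.e. with maximum predictive entropy."""
    return max(unlabeled, key=lambda x: entropy(predict_proba(x)))
```

After each query, the newly labeled example is added to the training set and the model is refit, so the notion of "most uncertain" evolves with the previously seen examples.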


(waiting for the nice team publication page to point there???)
