Participants
Alejandro Arbelaez, Alvaro Fialho, Philippe Rollet, Marc Schoenauer, Michèle Sebag
Research Themes
Many forefront techniques in both Machine Learning and Stochastic Search have been very successful in solving difficult real-world problems. However, their application to newly encountered problems, or even to new instances of known problems, remains a challenge, even for experienced researchers in the field, let alone newcomers, however skilled they may be as scientists or engineers in other areas. Theory and/or practical tools are still missing to help these techniques cross the chasm (in the sense of Geoffrey A. Moore's book on the diffusion of innovation).
The difficulties faced by users arise mainly from the wide range of algorithm and/or parameter choices involved in this type of approach, and from the lack of guidance on how to select among them. Moreover, state-of-the-art approaches to real-world problems tend to be bespoke, problem-specific methods that are expensive to develop and maintain.
More specifically, the following research conducted in TAO is concerned with crossing the chasm:
- Adaptive Operator Selection, or how to adapt the mechanism that chooses among the different variation operators in Evolutionary Algorithms. We have proposed two original contributions (a sketch combining them follows this item):
- Using a Multi-Armed Bandit algorithm for operator selection (GECCO'08 paper)
- Using extreme values rather than averages as the reward for operators (PPSN'08 paper)
- Ongoing work is investigating the combination of the above ideas with the Compass approach of our colleagues from Angers University (J. Maturana, F. Saubion: A Compass to Guide Genetic Algorithms. PPSN 2008: 256-265).
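As an illustration, here is a minimal Python sketch of how the two ideas combine: a UCB-style multi-armed bandit selects the next variation operator, and each operator is credited with the extreme (maximum) fitness improvement observed over a sliding window rather than the average. The window size, exploration constant and class name are illustrative assumptions, not the exact settings of the GECCO'08/PPSN'08 algorithms.

    import math
    from collections import deque

    class ExtremeValueBandit:
        """UCB-style operator selection with extreme-value credit (illustrative sketch)."""
        def __init__(self, operators, window=50, scale=2.0):
            self.operators = operators
            self.counts = {op: 0 for op in operators}   # times each operator was applied
            self.rewards = {op: deque(maxlen=window) for op in operators}
            self.scale = scale                          # exploration constant of the UCB rule

        def select(self):
            # Apply each operator once before using the UCB rule.
            for op in self.operators:
                if self.counts[op] == 0:
                    return op
            total = sum(self.counts.values())
            def ucb(op):
                # Credit = extreme value: best improvement in the sliding window.
                credit = max(self.rewards[op]) if self.rewards[op] else 0.0
                return credit + self.scale * math.sqrt(math.log(total) / self.counts[op])
            return max(self.operators, key=ucb)

        def update(self, op, fitness_improvement):
            self.counts[op] += 1
            self.rewards[op].append(max(0.0, fitness_improvement))

Inside the evolutionary loop, one would call select() before producing an offspring and update() with the fitness improvement the offspring achieves.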
- Meta-parameter tuning for Machine Learning algorithms (a simple baseline is sketched below).
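As a baseline for what such tuning involves, the following sketch performs plain random search over two hypothetical SVM hyperparameters; the parameter ranges and the train_and_score callback are assumptions made for the example, not a description of our method.

    import random

    def random_search(train_and_score, n_trials=50, seed=0):
        """train_and_score(params) -> validation score (higher is better)."""
        rng = random.Random(seed)
        best_params, best_score = None, float("-inf")
        for _ in range(n_trials):
            params = {
                "C": 10 ** rng.uniform(-3, 3),      # log-uniform regularization strength
                "gamma": 10 ** rng.uniform(-4, 1),  # log-uniform RBF kernel width
            }
            score = train_and_score(params)
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score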
- Learning Heuristic Choice in Constraint Programming: several heuristics have been proposed to choose which branch to explore next within Constraint Programming algorithms. The idea we are exploring is to learn which heuristic is best given the characteristics of the current node of the search tree (e.g. domain sizes, number of still-unsatisfied constraints, etc.), as in the sketch below.
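A minimal sketch of this idea, assuming hand-picked node features and a nearest-neighbour learner (the feature names are hypothetical, and any classifier could be substituted):

    import math

    def node_features(node):
        # Hypothetical descriptors of a CP search node.
        domains = node["domain_sizes"]          # current domain sizes of the variables
        return (
            sum(domains) / len(domains),        # mean domain size
            min(domains),                       # smallest domain
            node["unsat_constraints"],          # constraints not yet satisfied
        )

    class HeuristicChooser:
        """1-nearest-neighbour choice of the branching heuristic."""
        def __init__(self):
            self.memory = []  # (features, best_heuristic) pairs logged from past runs

        def record(self, node, best_heuristic):
            self.memory.append((node_features(node), best_heuristic))

        def choose(self, node, default="min-domain"):
            if not self.memory:
                return default
            f = node_features(node)
            return min(self.memory, key=lambda m: math.dist(f, m[0]))[1]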
- Active Learning, or how to choose the next sample to label depending on the previously seen examples; one standard strategy is sketched below.
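One classical strategy, given here only as an illustrative sketch, is uncertainty sampling: query the unlabelled point on which the current model is least confident. The predict_proba method is assumed to return class probabilities, in the style of scikit-learn classifiers.

    def pick_next_sample(model, unlabelled):
        """Return the unlabelled point with the lowest top-class probability."""
        def confidence(x):
            probs = model.predict_proba([x])[0]   # class probabilities for x
            return max(probs)                     # high value = confident prediction
        return min(unlabelled, key=confidence)    # least confident point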
- Designing Problem Descriptors is a longer-term goal: being able to accurately describe a given problem (or instance) will allow us to learn, from extensive experiments, which algorithms/parameters are good for classes of instances, or even for individual instances (see e.g. F. Hutter, Y. Hamadi, H. H. Hoos, and K. Leyton-Brown: Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms. CP 2006).
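In the spirit of that reference, per-instance selection could then reduce to the following sketch, where predict_runtime stands for a hypothetical regression model trained offline on logged (descriptor, configuration, runtime) triples:

    def best_configuration(predict_runtime, instance_descriptor, configurations):
        """Pick the configuration with the best predicted runtime on this instance."""
        return min(configurations,
                   key=lambda cfg: predict_runtime(instance_descriptor, cfg))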
- The goal of the Microsoft-INRIA Joint Lab project on Adaptive Search for E-Science is precisely to automate the parameter tuning of search algorithms.
- EvoTest is a European STREP that started in September 2006, dealing with the evolutionary generation of test data. The work of TAO is to provide the Evolution Engine of the framework, including the GUIDE interface for easy EA design, extended with automatic parameter tuning facilities.