To be held on June 29, 2006, at Carnegie Mellon University, Pittsburgh, USA, as part of the workshop day at ICML 2006.
Everyone registered for the ICML 2006 conference is welcome to attend and participate in the workshop, provided the workshop fee has been paid. For details, please refer to the ICML registration pages.
Besides the presentations of submitted papers and the invited talks, we want to keep the discussion open to anyone who wishes to express an opinion, as long as it is relevant to the topic of the workshop. To help us organize the day and draw up a schedule, we kindly ask anyone who would like to attend to:
Please send emails to philippe -dot- preux -at- univ-lille3 -dot- fr
Reinforcement Learning (RL) is an adaptive method for learning to make good decisions in a complex, stochastic, and partially unknown environment.
In order to deal with large-scale RL problems, the functions of interest (such as the value function, the policy, or a model of the unknown state dynamics) must be represented approximately. Since the quality of these approximations directly influences the performance measures of ultimate interest, the function approximation methods employed should be sample-efficient while delivering high-quality estimates. For instance, in Approximate Dynamic Programming, the performance of a policy that is greedy with respect to an approximate value function is bounded in terms of the approximation's precision.
Thus real-world applications of RL need efficient function approximators.
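As background for the bound mentioned above (a standard result, stated here for illustration; it is not part of the original announcement): if \(\tilde{V}\) approximates the optimal value function \(V^*\) and \(\pi\) is greedy with respect to \(\tilde{V}\), then

\[
\| V^* - V^{\pi} \|_{\infty} \;\le\; \frac{2\gamma}{1-\gamma}\, \| V^* - \tilde{V} \|_{\infty},
\]

where \(\gamma \in [0,1)\) is the discount factor. The factor \(2\gamma/(1-\gamma)\) shows why tight value-function approximation matters: the loss of the induced greedy policy can blow up as \(\gamma \to 1\).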
Kernel methods are at the heart of many modern machine learning techniques. They make it possible to derive efficient algorithms that work in function spaces of high representational power and come with PAC-style theoretical guarantees.
This workshop will be entirely dedicated to bridging the gap between kernel methods and reinforcement learning.
Appropriate topics for papers include, but are not limited to:
This is the tentative schedule of the workshop day (as of June 28th).
We primarily expect original submissions. However, we will also accept submissions of already-accepted papers if the authors make it clear that the submission is not original. In the acceptance process, we will favor original submissions and accept resubmissions only if there is enough room in the schedule.