FM Robust Reinforcement Learning with Dynamic Distortion Risk Measures

Regular seminar by Anthony Coache (Imperial College London)

at: 15:00 - 16:00
KCL, Strand
room: S5.20
abstract:

In a reinforcement learning (RL) setting, the agent's optimal strategy heavily depends on her risk preferences and the underlying model dynamics of the training environment. These two aspects influence the agent's ability to make well-informed and time-consistent decisions when facing testing environments. In this presentation, we propose a framework to solve robust risk-aware RL problems where we simultaneously account for environmental uncertainty and risk with a class of dynamic robust distortion risk measures. Robustness is introduced by considering all models within a Wasserstein ball around a reference model. We show how to estimate such dynamic robust risk measures using neural networks by making use of strictly consistent scoring functions, derive policy gradient formulae using the quantile representation of distortion risk measures, and demonstrate the performance of our actor-critic algorithm on a portfolio allocation example. This is joint work with Sebastian Jaimungal (U. Toronto).
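The quantile representation mentioned in the abstract writes a distortion risk measure as an integral of the quantile function against a distortion weight, rho(X) = integral over (0,1) of F_X^{-1}(u) * gamma'(u) du. A minimal NumPy sketch of a Monte Carlo estimator built on this representation is below; it is a generic illustration, not the neural-network estimator of the talk, and the function names (`distortion_risk`, `cvar_weight`) are illustrative choices. CVaR at level alpha is used as the example distortion, since its weight gamma'(u) = 1/(1 - alpha) on (alpha, 1] is standard.

```python
import numpy as np

def distortion_risk(samples, distortion_deriv, n_grid=1000):
    """Estimate rho(X) = int_0^1 F_X^{-1}(u) * gamma'(u) du by a
    Riemann sum over a midpoint grid, using empirical quantiles."""
    u = (np.arange(n_grid) + 0.5) / n_grid   # midpoints of (0, 1)
    q = np.quantile(samples, u)              # empirical quantile function
    return np.mean(q * distortion_deriv(u))  # Riemann-sum approximation

# Example distortion: CVaR at level alpha, gamma'(u) = 1/(1-alpha) on (alpha, 1]
alpha = 0.9
cvar_weight = lambda u: (u > alpha) / (1.0 - alpha)

# Toy loss distribution; for X ~ N(0,1), CVaR_0.9(X) is about 1.755
rng = np.random.default_rng(0)
losses = rng.normal(size=100_000)
estimate = distortion_risk(losses, cvar_weight)
```

In the robust RL setting of the talk, such an estimate would additionally be taken over a Wasserstein ball of models and learned with strictly consistent scoring functions rather than computed from raw empirical quantiles.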