Rotting Bandits Are No Harder Than Stochastic Ones

International Conference on Artificial Intelligence and Statistics (AISTATS)

Abstract

In stochastic multi-armed bandits (MAB), the reward distribution of each arm is assumed to be stationary. This assumption is often violated in practice (e.g., in recommendation systems), where the reward of an arm may change whenever it is selected, i.e., the rested bandit setting. In this paper, we consider the nonparametric rotting bandit setting, where rewards can only decrease. We introduce the filtering on expanding window average (FEWA) algorithm, which constructs moving averages over windows of increasing size to identify the arms that are more likely to return high rewards when pulled once more. We prove that, for an unknown horizon T and without any knowledge of the decreasing behavior of the K arms, FEWA achieves a problem-dependent regret bound of Õ(log(KT)) and a problem-independent one of Õ(√(KT)). This result substantially improves over the algorithm proposed by Levine et al. (2017), which suffers regret Õ(K^(1/3) T^(2/3)), and it matches standard bounds for the stochastic MAB setting, thus showing that the rotting bandit is not harder. Finally, we report simulations confirming the theoretical improvements of FEWA.
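To make the expanding-window filtering idea concrete, below is a minimal, illustrative Python sketch of an arm-selection rule in the spirit of FEWA. It is not the authors' exact algorithm: the confidence width `confidence_width(h, t)`, its constants, and the tie-breaking rule are placeholder assumptions used only to show the structure (filter arms on ever-larger window averages, then pull a surviving, under-sampled arm); the paper specifies these choices precisely.

```python
import numpy as np

def confidence_width(h, t, alpha=4.0, sigma=1.0):
    """Placeholder confidence radius for a window of h samples at round t.
    Uses the usual sigma * sqrt(alpha * log(t) / h) shape; the paper's exact
    constants may differ."""
    return sigma * np.sqrt(alpha * np.log(max(t, 2)) / h)

def fewa_round(histories, t):
    """Select one arm given each arm's reward history (most recent reward last).

    histories: list of lists, histories[i] = rewards observed from arm i so far.
    Returns the index of the arm to pull next.
    """
    K = len(histories)
    # Pull each arm once before any filtering can take place.
    for i in range(K):
        if len(histories[i]) == 0:
            return i

    active = set(range(K))
    h = 1
    while True:
        # Moving average of the last h pulls for each still-active arm.
        means = {i: np.mean(histories[i][-h:]) for i in active}
        best = max(means.values())
        # Keep only arms whose h-window average is close to the best one.
        width = confidence_width(h, t)
        active = {i for i in active if means[i] >= best - 2 * width}
        # Once some active arm has no more than h samples, stop expanding
        # and pull the least-sampled active arm (a simple tie-break).
        if any(len(histories[i]) <= h for i in active):
            return min(active, key=lambda i: len(histories[i]))
        h += 1
```

In a simulation, one would call `fewa_round` once per round, pull the returned arm in the (rotting) environment, append the observed reward to that arm's history, and repeat. The filtering step is what lets a near-optimal arm survive even after its recent rewards have started to decay, which is the intuition behind the improved regret guarantees.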
