Abstract
We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Prior work studies this problem under the assumption that the reward function is submodular and monotone. In this work, we study a more general setting in which the reward function is not necessarily monotone and submodularity is assumed only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and theoretically prove that it achieves a 1/2-regret upper bound of Õ(nT^{2/3}) for horizon T and number of arms n. We also show experimentally that RGL outperforms other full-bandit variants in both submodular and non-submodular settings.
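The 1/2-approximation guarantee for unconstrained, possibly non-monotone submodular maximization is classically achieved offline by the randomized double greedy algorithm of Buchbinder et al.; a bandit algorithm like RGL must additionally estimate marginal gains from noisy reward feedback. The sketch below shows only the offline randomized double greedy idea, not the RGL algorithm itself; the function names and the toy set function are illustrative assumptions.

```python
import random

def randomized_double_greedy(f, ground_set, rng=random.Random(0)):
    """Offline randomized double greedy for unconstrained (possibly
    non-monotone) submodular maximization. Illustrative sketch only;
    RGL works with noisy full-bandit feedback instead of exact f."""
    X, Y = set(), set(ground_set)  # X grows from empty, Y shrinks from full
    for i in ground_set:
        a = f(X | {i}) - f(X)   # marginal gain of adding i to X
        b = f(Y - {i}) - f(Y)   # marginal gain of removing i from Y
        a_plus, b_plus = max(a, 0.0), max(b, 0.0)
        # If both clipped gains are zero, either choice is acceptable.
        p = 1.0 if a_plus + b_plus == 0 else a_plus / (a_plus + b_plus)
        if rng.random() < p:
            X.add(i)      # commit i to the solution
        else:
            Y.discard(i)  # permanently exclude i
    return X  # X == Y after the final iteration
```

For a modular (additive) toy function, the probabilities degenerate to 0 or 1, so the algorithm deterministically keeps exactly the elements with nonnegative weight.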
Original language | English (US)
---|---
Title of host publication | 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023
Publisher | ML Research Press
Pages | 7455-7471
Number of pages | 17
State | Published - Jun 4 2023
Bibliographical note
KAUST Repository Item: Exported on 2023-07-28
Acknowledgements: This work was supported in part by the National Science Foundation under Grants 2149588 and 2149617.