Published January 18, 2023 | Version v1
Preprint | Open Access

Lazy Lagrangians for Optimistic Learning With Budget Constraints

Description

Abstract—We consider the general problem of online convex optimization with time-varying additive constraints in the presence of predictions for the next cost and constraint functions. A novel primal-dual algorithm is designed by combining a Follow-The-Regularized-Leader iteration with prediction-adaptive dynamic steps. The algorithm achieves \(\mathcal{O}(T^{\frac{3-\beta}{4}})\) regret and \(\mathcal{O}(T^{\frac{1+\beta}{2}})\) constraint violation bounds that are tunable via the parameter \(\beta \in [1/2, 1)\) and have constant factors that shrink with the quality of the predictions, eventually achieving \(\mathcal{O}(1)\) regret for perfect predictions. Our work extends the FTRL framework to this constrained OCO setting and outperforms the respective state-of-the-art greedy-based solutions, without imposing conditions on the quality of predictions, the cost functions, or the geometry of constraints, beyond convexity.
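The setup described above, a lazy FTRL primal step that folds in a prediction of the next gradient, paired with a dual update on the observed budget violation, can be pictured with a generic optimistic primal-dual loop. The Python sketch below is only an illustration under strong simplifying assumptions (simplex feasible set, linear costs, a single linear budget constraint, a fixed quadratic regularizer and a fixed dual step); the names project_simplex, optimistic_ftrl_primal_dual, costs, constraints, budgets and preds are hypothetical, and the paper's prediction-adaptive dynamic step sizes are not reproduced here.

```python
import numpy as np

# Hypothetical toy instance: x lives in the probability simplex, costs are
# linear f_t(x) = c_t @ x, and there is one linear budget constraint
# g_t(x) = a_t @ x - b_t. Only the cost gradient is predicted, for brevity.

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def optimistic_ftrl_primal_dual(costs, constraints, budgets, preds,
                                sigma=1.0, eta=0.1):
    """Generic optimistic FTRL primal-dual loop (sketch, not the paper's exact update).

    costs[t], constraints[t]: vectors c_t, a_t revealed after playing x_t;
    budgets[t]: scalar b_t; preds[t]: prediction of c_t available before round t.
    """
    T, d = costs.shape
    grad_sum = np.zeros(d)   # accumulated Lagrangian gradients (lazy FTRL state)
    lam = 0.0                # dual variable for the budget constraint
    xs = []
    for t in range(T):
        # Lazy FTRL primal step with the predicted next gradient added
        # "optimistically"; quadratic regularizer of strength sigma.
        x = project_simplex(-(grad_sum + preds[t]) / sigma)
        xs.append(x)
        # Feedback actually revealed at round t.
        violation = constraints[t] @ x - budgets[t]
        grad_sum += costs[t] + lam * constraints[t]
        # Projected dual ascent on the observed constraint violation.
        lam = max(0.0, lam + eta * violation)
    return np.array(xs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 50, 4
    c = rng.normal(size=(T, d))
    a = rng.uniform(size=(T, d))
    b = np.full(T, 0.5)
    xs = optimistic_ftrl_primal_dual(c, a, b, preds=c)  # perfect predictions of c_t
    print(xs[-1])
```

In this toy loop the primal iterate is "lazy" in the FTRL sense, depending only on the accumulated gradients plus the current prediction, while the dual variable performs projected ascent on the observed violation; the paper's bounds come from choosing the regularization and step sizes adaptively to the prediction errors, which this sketch replaces with fixed constants.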

Files

2201.02890.pdf (1.4 MB)
md5:83e550dba8d9b76e83f9973fc6b82057

Additional details

Related works

Is previous version of
10.1109/TNET.2022.3222404 (DOI)

Funding

European Commission
DAEMON – Network intelligence for aDAptive and sElf-Learning MObile Networks (Grant 101017109)