Solving a global optimization problem requires only a two-armed slot machine
Abstract
For a general-purpose optimization problem over a finite rectangular region, this paper pioneers a unified slot-machine framework for global optimization by transforming the search for global optimizer(s) into the formulation of an optimal strategy for a bandit process over infinite policy sets, and proves that a two-armed bandit is enough. Leveraging this strategic bandit-process-driven optimization framework, we introduce a new Strategic Monte Carlo Optimization (SMCO) algorithm that generates points coordinate-wise from multiple paired distributions and can be implemented in parallel for high-dimensional continuous functions. Our SMCO algorithm, equipped with a tree search that broadens the slot machine's optimal-policy search space for attaining the global optimizer(s) of a multi-modal function, facilitates fast learning via trial and error. We provide a strategic law of large numbers for nonlinear expectations in bandit settings, and establish that our SMCO algorithm converges to the global optimizer(s) almost surely. Unlike standard gradient descent ascent (GDA), which climbs the mountain on one leg and is sensitive to starting points and step sizes, our SMCO algorithm takes a two-legged walk to the peak by sampling from both sides of the paired distributions, and is not sensitive to initial point selection or step-size constraints. Numerical studies demonstrate that the new SMCO algorithm outperforms GDA, particle swarm optimization, and simulated annealing in both convergence accuracy and speed. Our SMCO algorithm should be extremely useful for finding optimal tuning parameters in many large-scale complex optimization problems.
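The abstract's idea of two-sided, coordinate-wise sampling from paired distributions can be pictured with a minimal derivative-free sketch. The code below is only a toy interpretation under our own assumptions (one candidate drawn to the left and one to the right of the current iterate in each coordinate, moving toward the better of the two); it is not the authors' SMCO algorithm and omits the bandit-strategy formulation and the tree search entirely. All function and parameter names are hypothetical.

```python
# Toy sketch of a two-sided, coordinate-wise random search (NOT the SMCO algorithm):
# at each coordinate, draw one candidate from each of a pair of distributions
# ("left arm" and "right arm") and move toward whichever candidate scores higher.

import numpy as np

def two_sided_coordinate_search(f, lower, upper, n_iter=500, seed=0):
    """Maximize f over the box [lower, upper] (element-wise bounds)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = (lower + upper) / 2.0            # start at the box center
    scale = (upper - lower) / 2.0        # per-coordinate sampling scale
    best_x, best_val = x.copy(), f(x)
    for _ in range(n_iter):
        for j in range(x.size):
            step = np.abs(rng.normal(0.0, scale[j]))
            # "Two arms": one candidate to the left, one to the right of x[j].
            left, right = x.copy(), x.copy()
            left[j] = np.clip(x[j] - step, lower[j], upper[j])
            right[j] = np.clip(x[j] + step, lower[j], upper[j])
            f_left, f_right = f(left), f(right)
            # Pull the better arm: move this coordinate toward the higher value.
            x = left if f_left >= f_right else right
            val = max(f_left, f_right)
            if val > best_val:
                best_x, best_val = x.copy(), val
        scale *= 0.995                   # slowly shrink the sampling scale
    return best_x, best_val

if __name__ == "__main__":
    # Maximize a multi-modal toy function on [-5, 5]^2.
    f = lambda z: -np.sum(z**2) + 2.0 * np.cos(3.0 * z).sum()
    x_star, v_star = two_sided_coordinate_search(f, [-5, -5], [5, 5])
    print(x_star, v_star)
```

Because each coordinate only compares two function values per update, the sketch needs no gradients or step-size tuning, which is the intuition behind the abstract's contrast with GDA; the paper's actual convergence guarantees rest on its bandit-strategy construction, which this toy code does not reproduce.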
- Publication: arXiv e-prints
- Pub Date: December 2024
- DOI: 10.48550/arXiv.2412.05604
- arXiv: arXiv:2412.05604
- Bibcode: 2024arXiv241205604C
- Keywords:
  - Mathematics - Optimization and Control
  - Mathematics - Statistics Theory