Paper Vol-1455/paper-01. PDF: https://ceur-ws.org/Vol-1455/paper-01.pdf. dblp: https://dblp.org/rec/conf/pkdd/Bischl15
Applying Model-Based Optimization to Hyperparameter Optimization in Machine Learning

                              Bernd Bischl

    Ludwig-Maximilians-Universität München, München, Germany,
              bernd.bischl@stat.uni-muenchen.de



Abstract. This talk will cover the main components of sequential model-based optimization (SMBO) algorithms. Algorithms of this kind represent the state of the art for expensive black-box optimization problems and are becoming increasingly popular for hyperparameter optimization of machine learning algorithms, especially on larger data sets.

The first part will discuss these main components in detail, e.g., surrogate regression models such as Gaussian processes or random forests, the initialization phase, and point acquisition.
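As a rough sketch of how these components fit together (not the talk's exact algorithm), the following toy example runs an SMBO loop with a small hand-rolled Gaussian-process surrogate and expected-improvement acquisition on a hypothetical 1-D black-box objective; the objective, length scale, and grid are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical expensive black-box objective (a stand-in for, say, a
# hyperparameter-tuning loss); its minimum is at x = 2.
def objective(x):
    return (x - 2.0) ** 2

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and std at test points Xs,
    # with the observations centered around their mean.
    ym = y.mean()
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ np.linalg.solve(K, y - ym) + ym
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    # EI acquisition criterion for minimization.
    imp = y_best - mu
    z = imp / sigma
    cdf = np.array([0.5 * (1 + erf(zi / sqrt(2))) for zi in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * np.pi)
    return imp * cdf + sigma * pdf

# Initialization phase: a few space-filling design points.
X = np.array([0.5, 2.8, 4.5])
y = objective(X)
grid = np.linspace(0.0, 5.0, 200)

# Sequential loop: fit surrogate, acquire the next point, evaluate, update.
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, sigma, y.min())
    x_next = grid[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmin(y)]
```

In a real setting the grid search over the acquisition function would be replaced by a proper inner optimizer, and the surrogate could equally be a random forest, which handles discrete and conditional hyperparameters more naturally than a GP.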

The second part will cover some recent extensions regarding parallel point acquisition, multi-criteria optimization, and multi-fidelity systems for subsampled data. Most of the covered applications will use support vector machines as examples for hyperparameter optimization.
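To illustrate the multi-fidelity idea on subsampled data, the sketch below uses successive halving as a simple stand-in scheme (the talk's methods are model-based): many candidate configurations are scored cheaply on small data fractions, and only the best survivors are re-evaluated at higher fidelity. The SVM-like loss function and hyperparameter range here are purely synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation loss of an SVM-like learner as a function of a
# single hyperparameter (log10 of the cost parameter C). The evaluation
# noise shrinks as the subsample fraction grows, mimicking more data.
def subsampled_loss(log_c, fraction):
    true_loss = (log_c - 1.0) ** 2          # synthetic optimum at log10 C = 1
    noise = rng.normal(scale=0.5 * (1.0 - fraction))
    return true_loss + noise

# Successive halving: score all candidates at the current fidelity,
# keep the better half, then double the data fraction for the survivors.
configs = list(rng.uniform(-3, 3, size=16))  # candidate log10 C values
fraction = 0.1
while len(configs) > 1:
    scores = [subsampled_loss(c, fraction) for c in configs]
    order = np.argsort(scores)
    configs = [configs[i] for i in order[: len(configs) // 2]]
    fraction = min(1.0, fraction * 2)

best_log_c = configs[0]
```

The appeal for larger data sets is that most of the budget is spent on cheap low-fidelity evaluations, while only a handful of promising configurations ever see the full training set.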

The talk will finish with a brief overview of open questions and challenges.