Saturday, November 24, 2018

Hyperparameter optimization


In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.

The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data.[1] The objective function takes a tuple of hyperparameters and returns the associated loss.[1] Cross-validation is often used to estimate this generalization performance.[2]

Approaches

Grid search

The traditional way of performing hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive searching through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set[3] or evaluation on a held-out validation set.[4]

Since the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search.

For example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant C and a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each, say

C ∈ {10, 100, 1000}
γ ∈ {0.1, 0.2, 0.5, 1.0}

Grid search then trains an SVM with each pair (C, γ) in the Cartesian product of these two sets and evaluates their performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure.
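A minimal sketch of this procedure, assuming Python with scikit-learn installed; GridSearchCV, SVC, and the Iris dataset are illustrative choices, with the grid taken from the example above and 5-fold cross-validation as the performance metric.

    # Grid search over the (C, gamma) grid from the example above.
    from sklearn import datasets
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = datasets.load_iris(return_X_y=True)

    param_grid = {
        "C": [10, 100, 1000],
        "gamma": [0.1, 0.2, 0.5, 1.0],
    }

    # Each of the 3 x 4 = 12 settings is scored by 5-fold cross-validation.
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)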

Grid search suffers from the curse of dimensionality, but is often embarrassingly parallel because typically the hyperparameter settings it evaluates are independent of each other.[2]

Random search

Main article: Random search

Random search replaces the exhaustive enumeration of all combinations by selecting them randomly. This applies straightforwardly to the discrete setting described above, but also generalizes to continuous and mixed spaces. It can outperform grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm.[2] In this case, the optimization problem is said to have a low intrinsic dimensionality.[5] Random search is also embarrassingly parallel, and additionally allows prior knowledge to be incorporated by specifying the distribution from which to sample.
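As a minimal sketch under the same assumptions (Python with scikit-learn, plus SciPy ≥ 1.4 for loguniform), the finite grid can be replaced by sampling distributions; the log-uniform ranges below are illustrative priors, not prescribed values.

    # Random search: each trial samples (C, gamma) from the given priors.
    from scipy.stats import loguniform
    from sklearn import datasets
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = datasets.load_iris(return_X_y=True)

    param_distributions = {
        "C": loguniform(1e0, 1e3),
        "gamma": loguniform(1e-2, 1e0),
    }

    # 20 randomly sampled settings, each scored by 5-fold cross-validation.
    search = RandomizedSearchCV(SVC(kernel="rbf"), param_distributions,
                                n_iter=20, cv=5, random_state=0)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)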

Bayesian optimization

Main article: Bayesian optimization

Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model and then updating the model, Bayesian optimization aims to gather observations that reveal as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected to be close to the optimum). In practice, Bayesian optimization has been shown[6][7][8][9] to obtain better results in fewer evaluations than grid search and random search, due to its ability to reason about the quality of experiments before they are run.
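As a hedged sketch, one way to apply this in Python is via the scikit-optimize package (an assumption of this example, not a method prescribed above): gp_minimize fits a Gaussian-process surrogate to past evaluations and proposes the next configuration through an acquisition function that trades off exploration and exploitation.

    # Bayesian optimization of the SVM hyperparameters with scikit-optimize.
    from skopt import gp_minimize
    from skopt.space import Real
    from sklearn import datasets
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = datasets.load_iris(return_X_y=True)

    def objective(params):
        C, gamma = params
        # Negated cross-validated accuracy, since gp_minimize minimizes.
        return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

    space = [Real(1e0, 1e3, prior="log-uniform", name="C"),
             Real(1e-2, 1e0, prior="log-uniform", name="gamma")]

    # 25 evaluations guided by the Gaussian-process surrogate model.
    result = gp_minimize(objective, space, n_calls=25, random_state=0)
    print(result.x, -result.fun)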

Gradient-based optimization

For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks.[10] Since then, these methods have been extended to other models such as support vector machines[11] or logistic regression.[12]

A different approach to obtaining a gradient with respect to hyperparameters is to differentiate the steps of an iterative optimization algorithm itself using automatic differentiation.[13][14]
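A minimal sketch of this second idea, written with JAX as an illustrative automatic-differentiation tool (the cited works use their own implementations): a short run of gradient descent on a regularized training loss is unrolled, and the validation loss is differentiated with respect to the regularization hyperparameter.

    # Hypergradient of a validation loss with respect to the L2 penalty.
    import jax
    import jax.numpy as jnp

    # Tiny synthetic regression problem (illustrative data only).
    X = jnp.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = jnp.array([1.0, 2.0, 3.0])
    X_val = jnp.array([[0.5, 0.5]])
    y_val = jnp.array([1.5])

    def train_loss(w, lam):
        return jnp.mean((X @ w - y) ** 2) + lam * jnp.sum(w ** 2)

    def val_loss_after_training(lam, steps=100, lr=0.1):
        # Unrolled gradient descent; the whole loop is differentiable in lam.
        w = jnp.zeros(2)
        for _ in range(steps):
            w = w - lr * jax.grad(train_loss)(w, lam)
        return jnp.mean((X_val @ w - y_val) ** 2)

    # Gradient of the validation loss with respect to the hyperparameter lam.
    hypergrad = jax.grad(val_loss_after_training)(0.1)
    print(hypergrad)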

Evolutionary optimization

Main article: Evolutionary algorithm

Evolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm.[7] Evolutionary hyperparameter optimization follows a process inspired by the biological concept of evolution:

  1. Create an initial population of random solutions (i.e., randomly generate tuples of hyperparameters, typically 100+)
  2. Evaluate the hyperparameter tuples and obtain their fitness values (e.g., 10-fold cross-validation accuracy of the machine learning algorithm with those hyperparameters)
  3. Rank the hyperparameter tuples by their relative fitness
  4. Replace the worst-performing hyperparameter tuples with new hyperparameter tuples generated through crossover and mutation
  5. Repeat steps 2–4 until satisfactory performance is reached or performance no longer improves (a minimal code sketch of this loop follows the list)
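A simplified sketch of this loop in Python, assuming scikit-learn for the fitness evaluation; the population size, mutation rule, and omission of crossover are illustrative simplifications rather than details from the cited works.

    # Evolutionary search over (C, gamma) for an RBF-kernel SVM.
    import random
    from sklearn import datasets
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = datasets.load_iris(return_X_y=True)

    def fitness(individual):
        C, gamma = individual
        # Step 2: cross-validated accuracy as the fitness value.
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

    def mutate(individual):
        # Step 4 (simplified): perturb each hyperparameter multiplicatively.
        return tuple(v * random.uniform(0.5, 2.0) for v in individual)

    # Step 1: initial random population of hyperparameter tuples.
    population = [(10 ** random.uniform(0, 3), 10 ** random.uniform(-2, 0))
                  for _ in range(20)]

    for generation in range(10):
        # Step 3: rank the tuples by fitness.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Step 4: replace the worst half with mutated copies of the best half.
        population = survivors + [mutate(p) for p in survivors]

    best = max(population, key=fitness)
    print(best, fitness(best))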

Evolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms,[7] automated machine learning,[15][16] deep neural network architecture search,[17][18] as well as training of the weights in deep neural networks.[19]

Others

Radial basis function (RBF)[20] and spectral[21] approaches have also been developed.

Open-source software

Grid search

Random search

  • hyperopt, also usable via hyperas and hyperopt-sklearn, is a Python package that includes random search.
  • scikit-learn is a Python package which includes random search.
  • H2O AutoML provides automated data preparation, hyperparameter tuning via random search, and stacked ensembles in a distributed machine learning platform.
  • Talos includes a customizable random search for Keras.

Bayesian

  • Spearmint is a package to perform Bayesian optimization of machine learning algorithms.
  • Bayesopt,[22] an efficient implementation of Bayesian optimization in C/C++ with support for Python, Matlab and Octave.
  • MOE is a Python/C++/CUDA library implementing Bayesian Global Optimization using Gaussian Processes.
  • Auto-WEKA[23] is a Bayesian hyperparameter optimization layer on top of WEKA.
  • Auto-sklearn[24] is a Bayesian hyperparameter optimization layer on top of scikit-learn.
  • mlrMBO, also with mlr, is an R package for model-based/Bayesian optimization of black-box functions.
  • tuneRanger is an R package for tuning random forests using model-based optimization.
  • BOCS is a Matlab package which uses semidefinite programming for minimizing a black-box function over discrete inputs.[25] A Python 3 implementation is also included.
  • SMAC is a Python/Java library implementing Bayesian optimization.[26]

Gradient based

  • hypergrad is a Python package for differentiation with respect to hyperparameters.[14]

Evolutionary

  • TPOT[15][16] is a Python package that automatically creates and optimizes full machine learning pipelines using genetic programming.
  • devol is a Python package that performs Deep Neural Network architecture search using genetic programming.
  • deap is a Python framework for general evolutionary computation which is flexible and integrates with parallelization packages like scoop and pyspark, and other Python frameworks like sklearn via sklearn-deap.

Other

Commercial services

  • BigML OptiML supports mixed search domains.
  • Google HyperTune supports mixed search domains.
  • Indie Solver supports multiobjective, multifidelity and constraint optimization.
  • SigOpt supports mixed search domains, multiobjective, multisolution, multifidelity, constraint (linear and black-box), and parallel optimization.
  • Mind Foundry OPTaaS supports mixed search domains, multiobjective, constraints, parallel optimization and surrogate models.

References

  1. Claesen, Marc; De Moor, Bart (2015). "Hyperparameter Search in Machine Learning". arXiv:1502.02127 [cs.LG].
  2. Bergstra, James; Bengio, Yoshua (2012). "Random Search for Hyper-Parameter Optimization" (PDF). J. Machine Learning Research. 13: 281–305.
  3. Hsu, Chih-Wei; Chang, Chih-Chung; Lin, Chih-Jen (2010). A Practical Guide to Support Vector Classification. Technical Report, National Taiwan University.
  4. Chicco, D. (December 2017). "Ten quick tips for machine learning in computational biology". BioData Mining. 10 (35): 35. doi:10.1186/s13040-017-0155-3. PMC 5721660. PMID 29234465.
  5. Wang, Ziyu; Hutter, Frank; Zoghi, Masrour; Matheson, David; de Freitas, Nando (2016). "Bayesian Optimization in a Billion Dimensions via Random Embeddings". Journal of Artificial Intelligence Research. 55: 361–387. doi:10.1613/jair.4806.
  6. Hutter, Frank; Hoos, Holger; Leyton-Brown, Kevin (2011). "Sequential Model-Based Optimization for General Algorithm Configuration" (PDF). Learning and Intelligent Optimization.
  7. Bergstra, James; Bardenet, Remi; Bengio, Yoshua; Kegl, Balazs (2011). "Algorithms for Hyper-Parameter Optimization" (PDF). Advances in Neural Information Processing Systems.
  8. Snoek, Jasper; Larochelle, Hugo; Adams, Ryan (2012). "Practical Bayesian Optimization of Machine Learning Algorithms" (PDF). Advances in Neural Information Processing Systems. arXiv:1206.2944. Bibcode:2012arXiv1206.2944S.
  9. Thornton, Chris; Hutter, Frank; Hoos, Holger; Leyton-Brown, Kevin (2013). "Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms" (PDF). Knowledge Discovery and Data Mining. arXiv:1208.3719. Bibcode:2012arXiv1208.3719T.
  10. Larsen, Jan; Hansen, Lars Kai; Svarer, Claus; Ohlsson, M. (1996). "Design and regularization of neural networks: the optimal use of a validation set". Proceedings of the 1996 IEEE Signal Processing Society Workshop.
  11. Chapelle, Olivier; Vapnik, Vladimir; Bousquet, Olivier; Mukherjee, Sayan (2002). "Choosing multiple parameters for support vector machines" (PDF). Machine Learning. 46: 131–159. doi:10.1023/a:1012450327387.
  12. Do, Chuong B.; Foo, Chuan-Sheng; Ng, Andrew Y. (2008). "Efficient multiple hyperparameter learning for log-linear models". Advances in Neural Information Processing Systems 20.
  13. Domke, Justin (2012). "Generic Methods for Optimization-Based Modeling" (PDF). AISTATS. 22.
  14. Maclaurin, Douglas; Duvenaud, David; Adams, Ryan P. (2015). "Gradient-based Hyperparameter Optimization through Reversible Learning". arXiv:1502.03492 [stat.ML].
  15. Olson, R. S.; Urbanowicz, R. J.; Andrews, P. C.; Lavender, N. A.; Kidd, L.; Moore, J. H. (2016). "Automating biomedical data science through tree-based pipeline optimization". Proceedings of EvoStar 2016. Lecture Notes in Computer Science. 9597. pp. 123–137. arXiv:1601.07925. doi:10.1007/978-3-319-31204-0_9. ISBN 978-3-319-31203-3.
  16. Olson, R. S.; Bartley, N.; Urbanowicz, R. J.; Moore, J. H. (2016). "Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science". Proceedings of GECCO 2016. pp. 485–492. arXiv:1603.06212. doi:10.1145/2908812.2908918. ISBN 9781450342063.
  17. Miikkulainen, R.; Liang, J.; Meyerson, E.; Rawal, A.; Fink, D.; Francon, O.; Raju, B.; Shahrzad, H.; Navruzyan, A.; Duffy, N.; Hodjat, B. (2017). "Evolving Deep Neural Networks". arXiv:1703.00548 [cs.NE].
  18. Jaderberg, M.; Dalibard, V.; Osindero, S.; Czarnecki, W. M.; Donahue, J.; Razavi, A.; Vinyals, O.; Green, T.; Dunning, I.; Simonyan, K.; Fernando, C.; Kavukcuoglu, K. (2017). "Population Based Training of Neural Networks". arXiv:1711.09846 [cs.LG].
  19. Such, F. P.; Madhavan, V.; Conti, E.; Lehman, J.; Stanley, K. O.; Clune, J. (2017). "Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning". arXiv:1712.06567 [cs.NE].
  20. Diaz, Gonzalo; Fokoue, Achille; Nannicini, Giacomo; Samulowitz, Horst (2017). "An effective algorithm for hyperparameter optimization of neural networks". arXiv:1705.08520 [cs.AI].
  21. Hazan, Elad; Klivans, Adam; Yuan, Yang (2017). "Hyperparameter Optimization: A Spectral Approach". arXiv:1706.00764 [cs.LG].
  22. Martinez-Cantin, Ruben (2014). "BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization, Experimental Design and Bandits" (PDF). Journal of Machine Learning Research. 15: 3915–3919. arXiv:1405.7430. Bibcode:2014arXiv1405.7430M.
  23. Kotthoff, L.; Thornton, C.; Hoos, H. H.; Hutter, F.; Leyton-Brown, K. (2017). "Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA". Journal of Machine Learning Research. 18 (25): 1–5.
  24. Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.; Blum, M.; Hutter, F. (2015). "Efficient and Robust Automated Machine Learning". Advances in Neural Information Processing Systems 28 (NIPS 2015): 2962–2970.
  25. Baptista, Ricardo; Poloczek, Matthias (2018). "Bayesian Optimization of Combinatorial Structures". arXiv:1806.08838 [stat.ML].
  26. Hutter, F.; Hoos, H. H.; Leyton-Brown, K. "Sequential Model-Based Optimization for General Algorithm Configuration" (PDF). Proceedings of the Conference on Learning and Intelligent OptimizatioN (LION 5).
  27. Gorissen, Dirk; Crombecq, Karel; Couckuyt, Ivo; Demeester, Piet; Dhaene, Tom (2010). "A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design" (PDF). J. Machine Learning Research. 11: 2051–2055.
