
Hyperparameter tuning with scikit-optimize

In machine learning, a hyperparameter is a parameter whose value is set before the training process begins. For example, the learning rate of a gradient boosting model and the size of the hidden layer of a multilayer perceptron are both hyperparameters. By contrast, the values of other parameters are derived via training. Hyperparameter selection is important because it can have a huge effect on the model's performance.
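As a minimal sketch of this distinction (assuming scikit-learn is installed; the specific values chosen here are purely illustrative), the following sets two hyperparameters of a multilayer perceptron before training and then inspects parameters that were derived via training:

from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# hidden_layer_sizes and learning_rate_init are hyperparameters:
# their values are fixed before training begins.
clf = MLPClassifier(
    hidden_layer_sizes=(32,),
    learning_rate_init=0.01,
    max_iter=500,
    random_state=0,
)
clf.fit(X, y)

# The weight matrices in clf.coefs_ are ordinary parameters:
# their values are derived during training.
print([w.shape for w in clf.coefs_])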

The most basic approach to hyperparameter tuning is called a grid search. In this method, you specify a range of potential values for each hyperparameter and then try out every combination until you find the best one. This brute-force approach is comprehensive but computationally intensive, since the number of combinations grows multiplicatively with each additional hyperparameter. More sophisticated methods exist. In this recipe, you will learn how to use Bayesian optimization over hyperparameters using scikit-optimize. In contrast to a basic grid search, Bayesian optimization does not try out all parameter values; instead, a fixed number of parameter settings is sampled from specified distributions, with each new setting informed by the performance of the settings evaluated so far. More details can be found at https://scikit-optimize.github.io/notebooks/bayesian-optimization.html.
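As a minimal sketch of what this looks like in practice (assuming scikit-optimize and scikit-learn are installed; the dataset, search-space bounds, and n_iter value are illustrative choices, not prescriptions), the following uses skopt's BayesSearchCV to tune a gradient boosting classifier:

from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each hyperparameter gets a distribution to sample from,
# rather than a fixed grid of values to exhaust.
search_spaces = {
    "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
    "n_estimators": Integer(50, 300),
    "max_depth": Integer(2, 6),
}

# n_iter caps the number of sampled settings; a full grid over
# comparable ranges would require far more model fits.
opt = BayesSearchCV(
    GradientBoostingClassifier(random_state=0),
    search_spaces,
    n_iter=20,
    cv=3,
    random_state=0,
)
opt.fit(X_train, y_train)

print("Best hyperparameters:", opt.best_params_)
print("Test accuracy:", opt.score(X_test, y_test))

Note that BayesSearchCV follows the familiar scikit-learn fit/score interface, so it can be dropped in wherever a grid search object would otherwise be used.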
