- Machine Learning for Cybersecurity Cookbook
- Emmanuel Tsukerman
Hyperparameter tuning with scikit-optimize
In machine learning, a hyperparameter is a parameter whose value is set before the training process begins. The learning rate of a gradient boosting model and the size of the hidden layer of a multilayer perceptron are both examples of hyperparameters. By contrast, the values of the model's other parameters are derived via training. Hyperparameter selection is important because it can have a huge effect on the model's performance.
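The distinction can be made concrete with a short scikit-learn sketch; the dataset and the particular hyperparameter values below are illustrative assumptions, not taken from the recipe:

```python
# Sketch: hyperparameters are chosen before training; parameters are learned.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# learning_rate and n_estimators are hyperparameters: we set them up front.
clf = GradientBoostingClassifier(learning_rate=0.1, n_estimators=100)

# The trees' split thresholds and leaf values are ordinary parameters:
# they are derived from the data during fit().
clf.fit(X, y)
```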
The most basic approach to hyperparameter tuning is called a grid search. In this method, you specify a range of candidate values for each hyperparameter and then evaluate every combination, keeping the one that performs best. This brute-force approach is comprehensive but computationally expensive, and more sophisticated methods exist. In this recipe, you will learn how to perform Bayesian optimization over hyperparameters using scikit-optimize. In contrast to a basic grid search, Bayesian optimization does not try out every parameter value; rather, a fixed number of parameter settings is sampled from specified distributions, with a probabilistic surrogate model of the score guiding the search toward promising regions, as shown in the sketch below. More details can be found at https://scikit-optimize.github.io/notebooks/bayesian-optimization.html.
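Here is a minimal sketch of such a search using scikit-optimize's BayesSearchCV, assuming scikit-optimize is installed (pip install scikit-optimize); the estimator, search-space bounds, and iteration count are illustrative choices, not prescribed by the recipe:

```python
# Sketch: Bayesian hyperparameter search with scikit-optimize's BayesSearchCV.
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Distributions from which hyperparameter settings are sampled.
search_spaces = {
    "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
    "n_estimators": Integer(50, 300),
    "max_depth": Integer(2, 8),
}

# Only n_iter settings are evaluated, chosen by a surrogate model of the
# cross-validated score rather than exhaustively as in a grid search.
opt = BayesSearchCV(
    GradientBoostingClassifier(random_state=0),
    search_spaces,
    n_iter=32,
    cv=3,
    random_state=0,
)
opt.fit(X_train, y_train)

print("Best hyperparameters:", opt.best_params_)
print("Test accuracy:", opt.score(X_test, y_test))
```

BayesSearchCV follows the familiar scikit-learn estimator interface, so the fitted object can be used for prediction directly via its refit best model.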