Deep Learning with Keras
Antonio Gulli, Sujit Pal
Hyperparameter tuning
The preceding experiments gave a sense of the opportunities for fine-tuning a net. However, what works for this example does not necessarily work for others. For a given net, there are indeed multiple parameters that can be optimized (such as the number of hidden neurons, the BATCH_SIZE, the number of epochs, and many more, depending on the complexity of the net itself).
Hyperparameter tuning is the process of finding the combination of those parameters that minimizes the cost function. The key idea is that if we have n parameters, they define an n-dimensional space, and the goal is to find the point in this space that corresponds to the optimal value of the cost function. One way to achieve this is to lay a grid over this space and systematically check the value the cost function assumes at each grid vertex. In other words, each parameter is divided into buckets, and the different combinations of values are checked via a brute-force approach.
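To make this concrete, here is a minimal sketch of such a brute-force grid search in Keras. The grid values, the MNIST dataset, and the single-hidden-layer architecture are illustrative assumptions for this sketch, not the exact setup of the preceding experiments; in practice you would plug in your own model and candidate values.

```python
import itertools
from tensorflow import keras
from tensorflow.keras import layers

# Load and flatten MNIST; a validation split scores each grid point.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)

# The grid: each key is one dimension of the hyperparameter space,
# and each list is the set of "buckets" checked along that dimension.
grid = {
    "n_hidden": [64, 128],
    "batch_size": [64, 128],
    "epochs": [5, 10],
}

best_score, best_params = 0.0, None
# itertools.product enumerates every vertex of the grid.
for n_hidden, batch_size, epochs in itertools.product(*grid.values()):
    model = keras.Sequential([
        layers.Input(shape=(784,)),
        layers.Dense(n_hidden, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train,
                        batch_size=batch_size, epochs=epochs,
                        validation_split=0.2, verbose=0)
    # Keep the vertex with the best validation accuracy.
    score = history.history["val_accuracy"][-1]
    if score > best_score:
        best_score, best_params = score, (n_hidden, batch_size, epochs)

print("best params:", best_params, "val_accuracy:", best_score)
```

Note that the cost of this approach grows exponentially with the number of dimensions: the 2x2x2 grid above already requires training eight models, which is why coarse grids (or randomized search) are common in practice.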