- Hands-On Artificial Intelligence for Beginners
- Patrick D. Smith
Basic tuning
So you've built a model; now what? Can you call it a day? Chances are, you'll still have some optimization to do. A key part of the machine learning process is tuning our algorithms and methods. In this section, we'll cover the basic concepts of optimization, and we'll continue learning tuning methods throughout the following chapters.
Sometimes, when our models do not perform well on new data, the cause is overfitting or underfitting. Let's cover some methods we can use to prevent this from happening. First off, let's look at the random forest classifier that we trained earlier. In your notebook, call its predict method and pass in the x_test data to receive some predictions:
predicted = rf_classifier.predict(x_test)
From this, we can evaluate the performance of our classifier through something known as a confusion matrix, which maps out misclassifications for us. pandas makes this easy with the crosstab function:
pd.crosstab(y_test, predicted, rownames=['Actual'], colnames=['Predicted'])
You should see a confusion matrix as output, with the actual classes as rows and the predicted classes as columns; off-diagonal counts are misclassifications.
As you can see, our model performed fairly well on this dataset (it's a simple one, after all!). What happens, however, if our model doesn't perform well? Let's take a look at what could happen.
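The classifier and train/test split from the earlier section aren't reproduced here, so as a minimal self-contained sketch, the following recreates a comparable setup on scikit-learn's built-in Iris dataset (the dataset choice, split ratio, and random_state are assumptions, not the book's exact setup) and then builds the same crosstab confusion matrix:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Recreate a setup comparable to the earlier section (assumed values).
iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

rf_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
rf_classifier.fit(x_train, y_train)

# Predict on held-out data, exactly as in the text.
predicted = rf_classifier.predict(x_test)

# Rows are actual labels, columns are predicted labels;
# off-diagonal cells count misclassifications.
confusion = pd.crosstab(y_test, predicted,
                        rownames=['Actual'], colnames=['Predicted'])
print(confusion)
```

Reading down a row shows where samples of one true class ended up; a strong model concentrates its counts on the diagonal.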