- Hands-On Machine Learning with ML.NET
- Jarred Capellman
Model evaluation
Once the model is trained, the last step is to evaluate it. The typical approach to model evaluation is to hold out a portion of your dataset for evaluation: you take known data, submit it to your trained model, and measure the model's efficacy. The critical part of this step is that the holdout set be representative of your data as a whole. If your holdout set is skewed one way or the other, you will more than likely get a false sense of either high or low performance. In the next chapter, we will take a deep dive into the various scoring and evaluation metrics. ML.NET provides a relatively easy interface for evaluating a model; however, each algorithm has unique properties to verify, which we will review as we take a deep dive into the various algorithms.
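The holdout workflow described above can be sketched in ML.NET as follows. This is a minimal illustration, assuming a binary classification scenario with a hypothetical `ModelInput` class, data file, and training pipeline; only the split, train, and evaluate calls are the focus here:

```csharp
using System;
using Microsoft.ML;

var mlContext = new MLContext(seed: 1);

// Load the full dataset (file name and ModelInput schema are
// assumptions for illustration).
IDataView data = mlContext.Data.LoadFromTextFile<ModelInput>(
    "data.csv", hasHeader: true, separatorChar: ',');

// Hold out 20% of the data for evaluation; the rest is used for training.
var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);

// Train on the training portion (pipeline construction omitted;
// 'pipeline' is assumed to be an IEstimator built earlier).
ITransformer model = pipeline.Fit(split.TrainSet);

// Score the held-out portion and compute evaluation metrics.
IDataView predictions = model.Transform(split.TestSet);
var metrics = mlContext.BinaryClassification.Evaluate(predictions);

Console.WriteLine($"Accuracy: {metrics.Accuracy:P2}");
Console.WriteLine($"AUC:      {metrics.AreaUnderRocCurve:P2}");
```

Because the test fraction is drawn from the same dataset, a representative split depends on the data being well shuffled; `TrainTestSplit` also accepts a `samplingKeyColumnName` parameter when related rows must stay on the same side of the split.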