What this book covers

Chapter 1, Introduction to Machine Learning, will introduce you to the different machine learning paradigms, using examples from industry. You will also learn how to use data to evaluate the models you build.

Chapter 2, Making Decisions with Trees, will explain how decision trees work and teach you how to use them for classification as well as regression. You will also learn how to derive business rules from the trees you build.

Chapter 3, Making Decisions with Linear Equations, will introduce you to linear regression. After understanding its modus operandi, we will learn about related models such as ridge, lasso, and logistic regression. This chapter will also pave the way toward understanding neural networks later on in this book.

Chapter 4, Preparing Your Data, will cover how to deal with missing data using scikit-learn's impute functionality. We will then use scikit-learn, along with an external library called categorical-encoding, to prepare the categorical data for the algorithms that we are going to use later on in the book.

Chapter 5, Image Processing with Nearest Neighbors, will explain the k-Nearest Neighbors algorithm and its hyperparameters. We will also learn how to prepare images for the nearest neighbors classifier.

Chapter 6, Classifying Text Using Naive Bayes, will teach you how to convert textual data into numbers and use machine learning algorithms to classify it. We will also learn about techniques to deal with synonyms and high data dimensionality.

Chapter 7, Neural Networks – Here Comes the Deep Learning, will dive into how to use neural networks for classification and regression. We will also learn about data scaling, since it helps neural networks converge more quickly.

Chapter 8, Ensembles – When One Model Is Not Enough, will cover how to reduce the bias or variance of algorithms by combining them into an ensemble. We will also learn about the different ensemble methods, from bagging to boosting, and when to use each of them.

Chapter 9, The Y is as Important as the X, will teach you how to build multilabel classifiers. We will also learn how to enforce dependencies between your model outputs and make a classifier's probabilities more reliable with calibration.

Chapter 10, Imbalanced Learning – Not Even 1% Win the Lottery, will introduce the use of an imbalanced learning helper library and explore different ways of oversampling and undersampling. We will also learn how to use these sampling methods with the ensemble models.

Chapter 11, Clustering – Making Sense of Unlabeled Data, will cover clustering as an unsupervised learning algorithm for making sense of unlabeled data.

Chapter 12, Anomaly Detection – Finding Outliers in Data, will explore the different types of anomaly detection algorithms.

Chapter 13, Recommender Systems – Get to Know Their Taste, will teach you how to build a recommendation system and deploy it in production.