
Summary

Feature selection is the first (and sometimes the most important) step in a machine learning pipeline. Not all features are useful for our purposes, and some of them are expressed using different notations, so it's often necessary to preprocess our dataset before performing any further operations.

We saw how to split the data into training and test sets using a random shuffle, and how to manage missing elements. Another very important section covered the techniques used to manage categorical data and labels, which are very common when a certain feature assumes only a discrete set of values.
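The preprocessing steps recapped above can be sketched with scikit-learn (an assumed toolkit; the tiny dataset here is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder

# Toy feature matrix with a missing element, plus categorical labels
X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, 5.0], [6.0, 7.0]])
y = np.array(['spam', 'ham', 'spam', 'ham'])

# Replace the missing value with the column mean
X = SimpleImputer(strategy='mean').fit_transform(X)

# Encode the categorical labels as integers
y_enc = LabelEncoder().fit_transform(y)

# Randomly shuffle and split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y_enc, test_size=0.25, random_state=0)
```

Other imputation strategies (median, most frequent) and encoders (one-hot) follow the same fit/transform pattern.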

Then we analyzed the problem of dimensionality. Some datasets contain many features that are correlated with each other, so they don't provide any new information but do increase the computational complexity and reduce the overall performance. Principal component analysis is a method for selecting only the subset of features that contains the largest amount of total variance. This approach, together with its variants, allows us to decorrelate the features and reduce the dimensionality without a drastic loss of accuracy. Dictionary learning is another technique used to extract a limited number of building blocks from a dataset, together with the information needed to rebuild each sample. This approach is particularly useful when the dataset is made up of different versions of similar elements (such as images, letters, or digits).
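As a minimal sketch of the dimensionality-reduction idea, the snippet below (assuming scikit-learn, with synthetic data) builds a three-feature dataset in which two features are strongly correlated, then uses PCA to keep only the two components that explain most of the total variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
a = rng.normal(size=(100, 1))

# Three features: the second is almost a linear copy of the first,
# so it carries (almost) no new information
X = np.hstack([a,
               2.0 * a + 0.01 * rng.normal(size=(100, 1)),
               rng.normal(size=(100, 1))])

# Project onto the two principal components with the largest variance
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

# The two retained components explain nearly all of the total variance
print(pca.explained_variance_ratio_.sum())
```

Dictionary learning follows a similar fit/transform interface in scikit-learn (for example, `DictionaryLearning` in `sklearn.decomposition`), but it learns sparse building blocks rather than orthogonal directions of maximum variance.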

In the next chapter, we're going to discuss linear regression, which is the most widespread and simplest supervised approach to predicting continuous values. We'll also analyze how to overcome some of its limitations and how to solve non-linear problems using the same algorithms.
