
Feature selection – say no to bad attributes

By this point in the book, we have developed a level of comfort when dealing with new datasets. We have under our belt the ability to understand and clean the data in front of us. Once we are able to work with the data given to us, we can start to make bigger decisions, such as: at what point is a feature actually just an attribute? Recall that by this distinction, feature versus attribute, the real question is: which columns are not helping my ML pipeline, and are therefore hurting it and should be removed? This chapter focuses on techniques for deciding which attributes to remove from our dataset. We will explore several statistical and iterative processes that will aid us in this decision.

Among these processes are:

  • Correlation coefficients
  • Identifying and removing multicollinearity
  • Chi-squared tests
  • ANOVA tests
  • Interpretation of p-values
  • Iterative feature selection
  • Using machine learning to measure entropy and information gain

All of these procedures will attempt to suggest the removal of features, and each will give different reasons for doing so. Ultimately, it will be up to us, the data scientists, to make the final call on which features will be allowed to remain and contribute to our machine learning algorithms.
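As a small preview of the first technique on the list, the following is a minimal sketch of correlation-based filtering using pandas and scikit-learn. The breast cancer dataset and the 0.5 cutoff are illustrative choices for this sketch, not values prescribed by the book; in practice the threshold would be tuned against pipeline performance.

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Load a sample dataset into a DataFrame (dataset choice is illustrative)
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target, name='target')

# Absolute Pearson correlation of every column with the response
correlations = X.corrwith(y).abs().sort_values(ascending=False)

# Keep only columns whose absolute correlation exceeds a chosen threshold;
# 0.5 is an arbitrary cutoff used here purely for demonstration
threshold = 0.5
selected_columns = correlations[correlations > threshold].index.tolist()
X_selected = X[selected_columns]

print(f"Kept {len(selected_columns)} of {X.shape[1]} columns:")
print(selected_columns)
```

A filter like this is only a suggestion: a column with low correlation to the response may still carry signal in combination with other columns, which is exactly why the final decision stays with us.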
