- Feature Engineering Made Easy
- Sinan Ozdemir Divya Susarla
Feature selection – say no to bad attributes
By this point, we should have a level of comfort when dealing with new datasets, and the ability to understand and clean the data in front of us. Once we can work with the data given to us, we can start to make bigger decisions, such as: at what point is a feature actually an attribute? Recall that with this distinction, feature versus attribute, the real question is: which columns are not helping my ML pipeline, and are therefore hurting it and should be removed? This chapter focuses on techniques for deciding which attributes to remove from our dataset. We will explore several statistical and iterative processes that aid in this decision.
Among these processes are:
- Correlation coefficients
- Identifying and removing multicollinearity
- Chi-squared tests
- ANOVA tests
- Interpretation of p-values
- Iterative feature selection
- Using machine learning to measure entropy and information gain
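As a preview of the first item in the list, correlation-based selection scores each feature by how strongly it moves with the target. A minimal sketch, using only the standard Pearson formula on hypothetical toy data (the feature names and threshold here are illustrative, not from the book):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    std_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (std_x * std_y)

def select_by_correlation(features, target, threshold=0.5):
    """Keep the names of features whose |r| with the target meets the threshold."""
    return [name for name, col in features.items()
            if abs(pearson_r(col, target)) >= threshold]

# Hypothetical data: 'signal' tracks the target perfectly, 'noise' does not.
features = {
    "signal": [1, 2, 3, 4, 5],
    "noise":  [3, 1, 4, 1, 5],
}
target = [2, 4, 6, 8, 10]
print(select_by_correlation(features, target))  # -> ['signal']
```

In practice a library routine (for example, a pandas correlation matrix or a scikit-learn selector) would replace the hand-rolled loop, but the scoring idea is the same: rank columns by their correlation with the target and drop the weak ones.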
All of these procedures will suggest features for removal, each for different reasons. Ultimately, it is up to us, the data scientists, to make the final call on which features remain and contribute to our machine learning algorithms.
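Among the hypothesis-testing approaches listed above, the chi-squared test compares the observed counts of a categorical feature against the counts expected if the feature were independent of the target; a large statistic suggests the feature carries information. A minimal sketch with a hypothetical 2x2 contingency table (in practice, the statistic is compared to a critical value or converted to a p-value):

```python
def chi_squared_stat(observed):
    """Chi-squared statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            # Expected count under independence of feature and target.
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: feature category (rows) vs class label (columns).
table = [[20, 30],
         [30, 20]]
print(round(chi_squared_stat(table), 2))  # -> 4.0
```

A perfectly balanced table such as `[[10, 10], [10, 10]]` yields a statistic of 0, meaning the feature tells us nothing about the class and is a candidate for removal.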