
  • Feature Engineering Made Easy
  • Sinan Ozdemir, Divya Susarla

Feature improvement – cleaning datasets

In this topic, we take the results of our understanding of the data and use them to clean the dataset. Much of this book will flow this way, using results from previous sections to work on current ones. In feature improvement, our understanding will allow us to begin our first manipulations of datasets. We will be using mathematical transformations to enhance the given data, but not to remove or insert any new attributes (that is for the coming chapters).

We will explore several topics in this section, including:

  • Structuring unstructured data
  • Data imputing—inserting data where there was none before (missing data)
  • Normalization of data:
    • Standardization (also known as z-score normalization)
    • Min-max scaling
    • L1 and L2 normalization (projecting into different spaces, fun stuff)

By this point in the book, we will be able to identify whether our data has structure or not. That is, whether our data is in a nice, tabular format. If it is not, this chapter will give us the tools to transform that data into a more tabular format. This is imperative when attempting to create machine learning pipelines.

Data imputing is a particularly interesting topic. Filling in data where it was previously missing is trickier than it sounds. We will propose all kinds of solutions, from the very, very easy (merely removing the column altogether: boom, no more missing data) to the interestingly complex (using machine learning on the rest of the features to fill in the missing spots). Once we have filled in the bulk of our missing data, we can then measure how that affected our machine learning algorithms.
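The range of solutions above can be sketched on a small made-up DataFrame, assuming pandas and scikit-learn are available. The column names and values here are invented for illustration:

```python
# Sketch: three imputation strategies, from easy to machine-learning-based.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer

df = pd.DataFrame({
    "age":    [25.0, np.nan, 47.0, 51.0, np.nan],
    "income": [40_000.0, 52_000.0, 61_000.0, 88_000.0, 57_000.0],
})

# Very easy: drop any column containing missing values.
# No more missing data, but the age feature is gone entirely.
dropped = df.dropna(axis=1)

# Simple: fill each column's gaps with that column's mean.
mean_filled = pd.DataFrame(
    SimpleImputer(strategy="mean").fit_transform(df), columns=df.columns
)

# More complex: estimate missing values from the nearest rows,
# judged by the remaining features.
knn_filled = pd.DataFrame(
    KNNImputer(n_neighbors=2).fit_transform(df), columns=df.columns
)
```

Dropping the column is lossless in effort but costly in information; the mean fill and the KNN fill keep the feature while making different assumptions about the missing values.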

Normalization uses (generally simple) mathematical tools to change the scaling of our data. Again, this ranges from the easy, such as turning miles into feet or pounds into kilograms, to the more difficult, such as projecting our data onto the unit sphere (more on that to come).
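The three scaling tools listed earlier can be sketched with scikit-learn; the data below is made up for illustration:

```python
# Sketch: z-score standardization, min-max scaling, and L2 normalization.
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

z_scored  = StandardScaler().fit_transform(X)   # each column: mean 0, std 1
min_maxed = MinMaxScaler().fit_transform(X)     # each column rescaled to [0, 1]
unit_rows = Normalizer(norm="l2").fit_transform(X)  # each row divided by its L2 norm

# After L2 normalization, every row sits on the unit sphere
print(np.linalg.norm(unit_rows, axis=1))
```

Note the difference in direction: the first two scalers operate column by column (per feature), while L2 normalization operates row by row (per sample), which is what projects each observation onto the unit sphere.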

This chapter and the remaining chapters will focus much more heavily on our quantitative feature engineering procedure evaluation flow. Nearly every time we look at a new dataset or feature engineering procedure, we will put it to the test. We will grade the performance of various feature engineering methods on the merits of machine learning performance, speed, and other metrics. This text should be used as a reference, not as a guide telling you which feature engineering procedures you are allowed to ignore based on difficulty or change in performance. Every new data task comes with its own caveats and may require different procedures than the previous one.
