- Machine Learning for OpenCV
- Michael Beyeler
Representing Data and Engineering Features
In the last chapter, we built our very first supervised learning models and applied them to some classic datasets, such as the Iris and Boston datasets. However, in the real world, data rarely comes as a neat n_samples x n_features feature matrix that is part of a pre-packaged database. Instead, it is our responsibility to find a way to represent the data in a meaningful way. The process of finding the best way to represent our data is known as feature engineering, and it is one of the main tasks of data scientists and machine learning practitioners trying to solve real-world problems.
I know you would rather jump right to the end and build the deepest neural network mankind has ever seen. But, trust me, this stuff is important! Representing our data in the right way can have a much greater influence on the performance of our supervised model than the exact parameters we choose. And we get to invent our own features, too.
In this chapter, we will therefore go over some common feature engineering tasks. Specifically, we want to answer the following questions:
- What are some common preprocessing techniques that everyone uses but nobody talks about?
- How do we represent categorical variables, such as the names of products, colors, or fruits?
- How would we even go about representing text?
- What is the best way to encode images, and what do SIFT and SURF stand for?
Let's start from the top.
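To give a flavor of what these techniques will look like in practice, here is a minimal sketch using scikit-learn, which we already met in the last chapter. The tiny feature matrix and the fruit names are made up purely for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# A tiny, made-up feature matrix: three samples, two numeric features.
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Standardization (zero mean, unit variance per feature) is one of those
# preprocessing steps that everyone uses but nobody talks about.
X_scaled = StandardScaler().fit_transform(X)

# A categorical variable, such as fruit names, can be one-hot encoded:
# each category gets its own binary column.
fruit = np.array([['apple'], ['banana'], ['apple']])
fruit_encoded = OneHotEncoder().fit_transform(fruit).toarray()
# array([[1., 0.],
#        [0., 1.],
#        [1., 0.]])
```

We will unpack both of these ideas, along with their OpenCV counterparts, over the course of this chapter.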