
Chapter 4. Unsupervised Learning

Labeling a set of observations for classification or regression can be a daunting task, especially when the feature set is large. In some cases, labeled observations are either unavailable or impossible to create. To extract hidden associations or structures from observations, the data scientist relies on unsupervised learning techniques to detect patterns or similarities in the data.

The goal of unsupervised learning is to discover patterns of regularity and irregularity in a set of observations. These techniques are also used to reduce the solution or feature space.

There are numerous unsupervised algorithms; some are better suited to handling dependent features, while others generate affinity groups in the presence of hidden features [4:1]. In this chapter, you will learn about three of the most common unsupervised learning algorithms:

  • K-means: Clustering observed features
  • Expectation-Maximization (EM): Clustering observed and latent features
  • Function approximation

Any of these algorithms can be applied to technical or fundamental analysis. Fundamental analysis of financial ratios and technical analysis of price movements are described in the Technical analysis section under Finances 101 in the Appendix. The K-means algorithm is fully implemented in Scala, while the expectation-maximization and principal component analyses leverage the Apache Commons Math library.
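
For readers who want an early feel for the first of these techniques, here is a minimal K-means sketch in plain Scala. The object name KMeansSketch, its method signatures, the fixed iteration count, and the use of Euclidean distance are illustrative assumptions, not the implementation presented later in this chapter.

import scala.util.Random

// Minimal K-means sketch: partition points in R^n into k affinity groups.
// Object and method names, the fixed iteration count, and Euclidean distance
// are illustrative assumptions, not this chapter's implementation.
object KMeansSketch {
  type Point = Array[Double]

  // Euclidean distance between two observations
  private def distance(a: Point, b: Point): Double =
    math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

  // Mean of a group of observations, computed component-wise
  private def centroid(points: Seq[Point]): Point =
    points.transpose.map(col => col.sum / points.size).toArray

  def cluster(data: Seq[Point], k: Int, iterations: Int = 20): Seq[Seq[Point]] = {
    // Initialize the centroids with k observations drawn at random
    var centroids = Random.shuffle(data).take(k)
    var groups: Seq[Seq[Point]] = Nil
    (0 until iterations).foreach { _ =>
      // Assignment step: attach each observation to its closest centroid
      groups = data.groupBy(p => centroids.minBy(c => distance(p, c))).values.toSeq
      // Update step: recompute each centroid as the mean of its members
      centroids = groups.map(centroid)
    }
    groups
  }

  def main(args: Array[String]): Unit = {
    val data = Seq(Array(1.0, 1.0), Array(1.2, 0.8), Array(8.0, 8.0), Array(7.8, 8.2))
    cluster(data, k = 2).foreach(g => println(g.map(_.mkString("(", ",", ")")).mkString(" ")))
  }
}

Running the sketch on the four two-dimensional points in main splits them into two clusters around their centroids; the chapter's implementation generalizes this assignment/update loop with proper initialization and convergence criteria.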

The chapter concludes with a brief overview of dimension reduction techniques for non-linear models.
