- Scala for Machine Learning(Second Edition)
- Patrick R. Nicolas
Chapter 4. Unsupervised Learning
Labeling a set of observations for classification or regression can be a daunting task, especially with a large feature set. In some cases, labeled observations are unavailable or impossible to create. To extract hidden associations or structures from observations, the data scientist relies on unsupervised learning techniques to detect patterns or similarities in data.
The goal of unsupervised learning is to discover regularities and irregularities in a set of observations. These techniques are also applied to reducing the solution or feature space.
There are numerous unsupervised algorithms; some are better suited to handling dependent features, while others generate affinity groups in the case of hidden features [4:1]. In this chapter, you will learn three of the most common unsupervised learning algorithms:
- K-means: Clustering observed features
- Expectation-Maximization (EM): Clustering observed and latent features
- Function approximation
Any of these algorithms can be applied to technical or fundamental analysis. Fundamental analysis of financial ratios and technical analysis of price movements are described in the Technical analysis section under Finances 101 in the Appendix. The K-means algorithm is fully implemented in Scala, while the EM and principal component analyses leverage the Apache Commons Math library.
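To give a feel for the first of these algorithms before its full treatment later in the chapter, here is a minimal sketch of the standard K-means iteration (assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster), assuming Euclidean distance on two-dimensional points. The names `KMeansSketch`, `kMeans`, and `Point` are illustrative only and do not belong to the book's library or to Apache Commons Math.

```scala
object KMeansSketch {
  type Point = (Double, Double)

  // Euclidean distance between two 2-D points
  def dist(a: Point, b: Point): Double = {
    val dx = a._1 - b._1
    val dy = a._2 - b._2
    math.sqrt(dx * dx + dy * dy)
  }

  // Mean of a non-empty cluster of points
  def centroid(ps: Seq[Point]): Point =
    (ps.map(_._1).sum / ps.size, ps.map(_._2).sum / ps.size)

  // Run a fixed number of assign/update iterations from the given
  // initial centroids. A centroid that attracts no points is dropped,
  // which a production implementation would have to handle explicitly.
  def kMeans(points: Seq[Point], init: Seq[Point], iters: Int): Seq[Point] =
    (0 until iters).foldLeft(init) { (centroids, _) =>
      points
        .groupBy(p => centroids.minBy(c => dist(p, c)))
        .values.map(centroid).toSeq
    }
}
```

For example, two well-separated groups of points converge to their group means in a few iterations:

```scala
val pts = Seq((0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9))
val cs  = KMeansSketch.kMeans(pts, Seq((0.0, 0.0), (5.0, 5.0)), 10)
```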
The chapter concludes with a brief overview of dimension reduction techniques for non-linear models.