- Mastering Machine Learning Algorithms
- Giuseppe Bonaccorso
Cluster assumption
This assumption is closely linked to the previous one and is probably easier to accept. It can be expressed as a chain of interdependent conditions: clusters are high-density regions; therefore, if two points are close, they are likely to belong to the same cluster and their labels should be the same. Low-density regions are separation spaces; therefore, samples lying in a low-density region are likely to be boundary points, and their classes can differ. To better understand this concept, it's helpful to think about a supervised SVM: only the support vectors should lie in low-density regions. Let's consider the following bidimensional example:
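As a rough illustration of this idea (not taken from the book), the following sketch fits a linear SVM on two dense, well-separated blobs and checks that the support vectors sit closer to the separating hyperplane, in the low-density strip between the clusters, than the average sample does. It assumes scikit-learn is available; the dataset parameters are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two dense, well-separated Gaussian blobs in 2D (illustrative parameters)
X, y = make_blobs(n_samples=200, centers=[(-2, 0), (2, 0)],
                  cluster_std=0.6, random_state=0)

svc = SVC(kernel='linear', C=1.0).fit(X, y)

# |decision_function| is proportional to the distance from the hyperplane
all_dist = np.abs(svc.decision_function(X))
sv_dist = np.abs(svc.decision_function(svc.support_vectors_))

# Support vectors hug the margin, i.e., the low-density separation region,
# so their mean distance is smaller than the dataset's mean distance
print(sv_dist.mean() < all_dist.mean())
```

With clearly separated blobs, only a handful of boundary points become support vectors, which is exactly the behavior the cluster assumption describes.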
In a semi-supervised scenario, we may not know the label of a point belonging to a high-density region; however, if it is close enough to a labeled point that we can build a ball in which all the points have roughly the same density, we are allowed to predict the label of our test sample. If, instead, we move to a low-density region, the process becomes harder, because two points can be very close and yet have different labels. We are going to discuss the semi-supervised, low-density separation problem at the end of this chapter.
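The ball-based reasoning above can be sketched in a few lines of NumPy. This is a toy illustration, not the book's algorithm: one labeled point per cluster, and a hypothetical helper that assigns a label only when a labeled sample falls inside a ball of a given radius around the test point, returning `None` in the ambiguous low-density case:

```python
import numpy as np

# One labeled sample per cluster (illustrative coordinates)
labeled_X = np.array([[-2.0, 0.0], [2.0, 0.0]])
labeled_y = np.array([0, 1])

def predict_in_ball(x, radius=1.0):
    """Predict the label of x only if labeled points inside a ball of the
    given radius around x agree; otherwise refuse to predict (None)."""
    distances = np.linalg.norm(labeled_X - x, axis=1)
    labels_inside = np.unique(labeled_y[distances < radius])
    if len(labels_inside) == 1:
        return int(labels_inside[0])
    return None  # no labeled neighbor, or conflicting labels nearby

# A point deep inside the left cluster: confident prediction
print(predict_in_ball(np.array([-1.8, 0.1])))  # 0

# A point in the low-density region between the clusters: no prediction
print(predict_in_ball(np.array([0.0, 0.0])))   # None
```

The refusal to predict in the empty middle region mirrors the chapter's point: closeness only implies equal labels inside high-density regions, not across the separation space.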