- Machine Learning Quick Reference
- Rahul Kumar
- 2021-08-20 10:05:11
Kernel trick
We have already seen that SVM works smoothly with linearly separable data. Have a look at the following figure: it depicts vectors that are not linearly separable, and the noticeable part is that they are not separable in 2D space:

With a few adjustments, we can still make use of SVM here.
Transforming the two-dimensional vectors into 3D vectors, or vectors of any other higher dimension, can set things right for us. The next step would be to train the SVM on these higher-dimensional vectors. But the question arises of how high a dimension we should transform to: whether 3D, 4D, or more. It actually depends on which transformation brings separability into the dataset.
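The idea above can be sketched with a toy dataset (a minimal example, assuming scikit-learn and NumPy are available; the dataset and the specific lift are my illustration, not the book's): two concentric circles cannot be split by a line in 2D, but lifting each point (x1, x2) to (x1, x2, x1² + x2²) makes the classes separable by a plane in 3D, so a plain linear SVM succeeds there.

```python
# Sketch: a dataset that is not linearly separable in 2D becomes
# separable after an explicit lift to 3D (dataset and lift are
# illustrative choices, not taken from the book).
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles: inner circle is one class, outer the other.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=42)

# A linear SVM in the original 2D space: no separating line exists.
linear_2d = SVC(kernel="linear").fit(X, y)

# Explicit lift (x1, x2) -> (x1, x2, x1^2 + x2^2): the inner circle
# sits low on the new axis and the outer circle sits high.
X_3d = np.column_stack([X[:, 0], X[:, 1], X[:, 0] ** 2 + X[:, 1] ** 2])
linear_3d = SVC(kernel="linear").fit(X_3d, y)

print(f"2D linear accuracy: {linear_2d.score(X, y):.2f}")
print(f"3D linear accuracy: {linear_3d.score(X_3d, y):.2f}")
```

The kernel trick reaches the same result without ever computing the lifted vectors: for example, `SVC(kernel="rbf")` works directly on `X` by evaluating inner products in the higher-dimensional space implicitly.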