
Choosing a good k

It is important to pick a proper value for the hyperparameter k, since it can improve a model's performance just as easily as degrade it when chosen incorrectly. One popular rule of thumb is to take the square root of the number of training samples, and many popular software packages use this heuristic as the default k. Unfortunately, it doesn't always work well, because the best k depends on the data itself and on the distance metric in use.
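As a quick illustration of the rule of thumb (a minimal sketch in Python; the training-set size here is an arbitrary placeholder):

import math

n_train = 10_000                       # placeholder training-set size
k = max(1, round(math.sqrt(n_train)))  # square-root rule of thumb -> k = 100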

There is no mathematically grounded way to come up with the optimal number of neighbors in advance. The only option is to scan through a range of ks and choose the best one according to some performance metric. You can use any of the performance metrics described in the previous chapter: accuracy, F1, and so on. Cross-validation is especially useful when the data is scarce.
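Assuming a Python environment with scikit-learn (an assumption for illustration; the same scan can be done in any package), such a cross-validated search over k might look like this:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # any labeled dataset works here

# Scan k = 1..30 with 5-fold cross-validation, scoring by accuracy.
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": list(range(1, 31))},
    scoring="accuracy",
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)

The choice of scoring metric and fold count are up to you; with scarce data, a larger number of folds (or leave-one-out cross-validation) makes better use of the samples.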

In fact, there is a variation of KNN that doesn't require k at all. The idea is to make the algorithm search for neighbors within a ball of a fixed radius; k then differs from point to point, depending on the local density of the data. This variation of the algorithm is known as radius-based neighbor learning. It suffers from the n-ball volume problem (see the next section): the more features you have, the bigger the radius must be to catch at least one neighbor.
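scikit-learn, for instance, ships this variation as RadiusNeighborsClassifier. A minimal sketch (the radius of 1.0 is an arbitrary choice, not a recommendation):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Neighbors are searched within a fixed ball of radius 1.0, so the number
# of neighbors varies per query point with the local density.
# outlier_label is returned when the ball around a point is empty.
clf = RadiusNeighborsClassifier(radius=1.0, outlier_label=-1)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))

The radius itself becomes the hyperparameter to tune, so the same cross-validated scan shown above applies here as well.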
