
Linear separability

Linear separability means that if there are two classes, there is a point, line, plane, or hyperplane that splits the input features such that all points of one class lie in one half-space and all points of the second class lie in the other half-space.
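To make the half-space idea concrete, here is a minimal sketch in Python. The weight vector w and bias b are made-up numbers chosen only for illustration, not values from any real model; any point with w·x + b ≥ 0 falls on one side of the separating line, and the rest fall on the other:

```python
import numpy as np

# Hypothetical separating line w . x + b = 0 (values invented for illustration)
w = np.array([1.0, -0.5])   # weights for the two input features
b = -2.0                    # bias (intercept)

def half_space(x):
    """Return which side of the line the point x falls on."""
    return "class A" if np.dot(w, x) + b >= 0 else "class B"

print(half_space(np.array([3.0, 1.0])))  # class A: 3 - 0.5 - 2 =  0.5 >= 0
print(half_space(np.array([1.0, 4.0])))  # class B: 1 - 2   - 2 = -3.0 <  0
```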

For example, consider the case of selling a house based on its area and price. We have a number of data points, each labeled with the class Sold or Not Sold:

In the preceding figure, each N marks an instance of the class Not Sold and each S marks an instance of the class Sold, with both classes determined from the Price and Area of the house. Together, the N and S markers are the data points on which the classes have been determined.

In the first diagram, the N and S points are close together and intermixed, so linear separability cannot be achieved: no matter how you try to separate the two classes, at least one point ends up on the misclassified side. In other words, no single line can separate the two correctly. The second diagram, by contrast, shows a dataset whose classes can easily be separated.
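As a rough illustration of the two diagrams, the following sketch fits scikit-learn's Perceptron (one choice of linear classifier, used here only for demonstration) to two small, invented Area/Price datasets in arbitrary scaled units. A linear classifier can reach 100% training accuracy only when the classes are linearly separable:

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Hypothetical (Area, Price) points; labels: 1 = Sold (S), 0 = Not Sold (N).
# Values are invented purely to mimic the two situations in the figure.
separable_X = np.array([[1.0, 1.0], [1.2, 1.1], [1.4, 1.2],    # Sold cluster
                        [1.1, 3.0], [1.3, 3.2], [1.5, 3.4]])   # Not Sold cluster
separable_y = np.array([1, 1, 1, 0, 0, 0])

mixed_X = np.array([[1.0, 1.0], [1.2, 3.1], [1.4, 1.2],        # Sold, intermixed
                    [1.1, 1.1], [1.3, 1.15], [1.5, 3.3]])      # Not Sold, intermixed
mixed_y = np.array([1, 1, 1, 0, 0, 0])

for name, X, y in [("separable", separable_X, separable_y),
                   ("mixed", mixed_X, mixed_y)]:
    clf = Perceptron(max_iter=1000, tol=None, random_state=0).fit(X, y)
    print(name, "training accuracy:", clf.score(X, y))
# The separable set reaches 1.0; the mixed set leaves at least one point misclassified.
```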

The separation method changes with the number of dimensions. In a one-dimensional situation, a single point can separate the classes. Adding dimensions requires a different kind of separator: in two dimensions, a line (as seen previously) is needed, and in three dimensions a plane (a flat, two-dimensional set of points) is needed to separate the classes, as shown:

Separation method:

- 1D: a point
- 2D: a line
- 3D: a plane
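To see what the three-dimensional separator looks like in practice, here is a small sketch that fits the same kind of linear classifier to three hypothetical features; the Rooms feature is invented purely to move the example into three dimensions. The learned boundary is the plane w1·Area + w2·Price + w3·Rooms + b = 0:

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Hypothetical 3-feature data: (Area, Price, Rooms); labels 1 = Sold, 0 = Not Sold.
X = np.array([[1.0, 1.0, 2], [1.2, 1.1, 3], [1.4, 1.2, 3],    # Sold
              [1.1, 3.0, 2], [1.3, 3.2, 4], [1.5, 3.4, 3]])   # Not Sold
y = np.array([1, 1, 1, 0, 0, 0])

clf = Perceptron(max_iter=1000, tol=None, random_state=0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# The learned boundary is the plane w[0]*Area + w[1]*Price + w[2]*Rooms + b = 0;
# points on one side of it are predicted Sold, points on the other side Not Sold.
print("plane coefficients:", w, "intercept:", b)
print("training accuracy:", clf.score(X, y))
```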

What if we have more than three dimensions? What do we do? What's the solution? Any guesses?
