- Deep Learning By Example
- Ahmed Menshawy
Linear models for classification
In this section, we are going to go through logistic regression, which is one of the most widely used algorithms for classification.
What's logistic regression? The simple definition of logistic regression is that it's a type of classification algorithm involving a linear discriminant.
We are going to clarify this definition in two points:
Unlike linear regression, logistic regression doesn't try to estimate/predict the value of a numeric variable given a set of features or input variables. Instead, the output of the logistic regression algorithm is the probability that a given sample/observation belongs to a specific class. In simpler terms, assume that we have a binary classification problem, in which the output variable has only two classes, for example, diseased or not diseased. Then the probability that a certain sample belongs to the diseased class is P0, and the probability that it belongs to the not diseased class is P1 = 1 - P0. Thus, the output of the logistic regression algorithm is always between 0 and 1.
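To make this concrete, here is a minimal sketch of how logistic regression turns a linear score into a probability via the sigmoid function. The weights, bias, and feature values below are hypothetical, standing in for what the algorithm would learn from data:

```python
import numpy as np

def sigmoid(z):
    # Map any real-valued score to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned weights, bias, and one feature vector
w = np.array([0.8, -1.2])
b = 0.1
x = np.array([1.5, 0.3])

p_diseased = sigmoid(np.dot(w, x) + b)  # P0
p_not_diseased = 1.0 - p_diseased       # P1 = 1 - P0
print(p_diseased, p_not_diseased)
```

Whatever the weights and inputs are, the sigmoid guarantees that the two probabilities lie strictly between 0 and 1 and sum to 1.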
As you probably know, there are many learning algorithms for regression and classification, and each makes its own assumptions about the data samples. The ability to choose the learning algorithm that fits your data comes gradually with practice and a good understanding of the subject. The central assumption of the logistic regression algorithm is that our input/feature space can be separated into two regions (one for each class) by a linear surface, which is a line if we have only two features, a plane if we have three, and so on. The position and orientation of this boundary are determined by your data. If your data satisfies this constraint, that is, it can be separated into regions corresponding to each class by a linear surface, it is said to be linearly separable. The following figure illustrates this assumption. In Figure 5, we have three dimensions (inputs or features) and two possible classes: diseased (red) and not diseased (blue). The dividing plane that separates the two regions is called a linear discriminant, because it is linear and it helps the model discriminate between samples belonging to different classes:
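The linear discriminant itself is just the set of points where the linear score equals zero (equivalently, where P0 = 0.5); samples are classified by which side of that surface they fall on. A small sketch with hypothetical weights for a three-feature problem:

```python
import numpy as np

# Hypothetical linear discriminant: the plane w·x + b = 0 in a
# three-feature space. Samples with a positive score lie on the
# "diseased" side, samples with a negative score on the other.
w = np.array([1.0, -2.0, 0.5])  # one weight per feature
b = -0.25

def predict_class(x):
    score = np.dot(w, x) + b
    return "diseased" if score > 0 else "not diseased"

label = predict_class(np.array([2.0, 0.5, 1.0]))
print(label)  # score = 2.0 - 1.0 + 0.5 - 0.25 = 1.25 > 0 -> "diseased"
```

Training logistic regression amounts to finding the `w` and `b` that best position and orient this plane for your data.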
If your data samples aren't linearly separable, you can often make them so by transforming your data into a higher-dimensional space, that is, by adding more features.
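A classic illustration of this trick: points inside a circle versus points outside it cannot be split by any line in the original 2D space, but adding the squared radius as a third feature makes them separable by a plane. This is an illustrative sketch, not from the book:

```python
import numpy as np

# Class 1: points outside the unit circle; class 0: points inside.
# No straight line in the original 2D space separates these classes.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)

# Lift to 3D by adding the squared radius as a third feature. In the
# lifted space, the flat plane x3 = 1 separates the classes exactly.
X_lifted = np.column_stack([X, X[:, 0] ** 2 + X[:, 1] ** 2])
separable = bool(np.all((X_lifted[:, 2] > 1) == (y == 1)))
print(separable)  # True
```

The boundary is still linear in the new space, so logistic regression's assumption holds there, even though the corresponding boundary in the original space is a curve.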