- Deep Learning By Example
- Ahmed Menshawy
- 2021-06-24 18:52:41
Linear models for classification
In this section, we are going to go through logistic regression, which is one of the most widely used algorithms for classification.
What's logistic regression? The simple definition of logistic regression is that it's a type of classification algorithm involving a linear discriminant.
We are going to clarify this definition in two points:
Unlike linear regression, logistic regression doesn't try to estimate/predict the value of a numeric variable given a set of features or input variables. Instead, the output of the logistic regression algorithm is the probability that a given sample/observation belongs to a specific class. In simpler words, let's assume that we have a binary classification problem. In this type of problem, we have only two classes in the output variable, for example, diseased or not diseased. The probability that a certain sample belongs to the diseased class is P0, and the probability that it belongs to the not diseased class is P1 = 1 - P0. Thus, the output of the logistic regression algorithm is always between 0 and 1.
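As a minimal sketch of this idea, the following snippet computes a class probability from a linear score passed through the sigmoid function. The weights, bias, and sample values here are made up purely for illustration, not learned from data:

```python
import numpy as np

def sigmoid(z):
    """Squash a real-valued score into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.8, -0.4])   # hypothetical learned weights
b = 0.1                     # hypothetical bias
x = np.array([1.5, 2.0])    # one sample with two features

# The linear score w.x + b becomes a probability via the sigmoid
p_diseased = sigmoid(w @ x + b)
p_not_diseased = 1.0 - p_diseased
print(p_diseased, p_not_diseased)  # the two probabilities sum to 1
```

Whatever the linear score is, the sigmoid maps it into (0, 1), which is why logistic regression's output can be read as a probability.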
As you probably know, there are a lot of learning algorithms for regression and classification, and each learning algorithm makes its own assumptions about the data samples. The ability to choose the learning algorithm that fits your data will come gradually with practice and a good understanding of the subject. The central assumption of the logistic regression algorithm is that our input/feature space can be separated into two regions (one for each class) by a linear surface, which could be a line if we only have two features, a plane if we have three, and so on. The position and orientation of this boundary are determined by your data. If your data satisfies this constraint, that is, it can be separated into regions corresponding to each class by a linear surface, then your data is said to be linearly separable. The following figure illustrates this assumption. In Figure 5, we have three dimensions (input features) and two possible classes: diseased (red) and not diseased (blue). The dividing plane that separates the two regions from each other is called a linear discriminant, because it's linear and it helps the model discriminate between samples belonging to different classes:
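The linear-separability assumption can be illustrated with a small experiment; the sketch below uses scikit-learn (an assumed dependency here) on two well-separated synthetic clusters, where logistic regression finds a linear boundary that classifies every training sample correctly:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters in a two-feature space
X_blue = rng.normal(loc=[-2, -2], scale=0.5, size=(50, 2))  # not diseased
X_red = rng.normal(loc=[2, 2], scale=0.5, size=(50, 2))     # diseased
X = np.vstack([X_blue, X_red])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)
# The learned linear discriminant is the line w0*x0 + w1*x1 + b = 0
print("weights:", clf.coef_, "bias:", clf.intercept_)
print("training accuracy:", clf.score(X, y))
```

Because the clusters sit far apart relative to their spread, a single line separates them, so the training accuracy reaches 1.0; on overlapping clusters the same model would misclassify points near the boundary.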
If your data samples aren't linearly separable, you can make them so by transforming your data into a higher-dimensional space, that is, by adding more features.
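A classic example of this transformation is the XOR pattern, which no line can separate in two dimensions. The sketch below (again assuming scikit-learn, which the chapter itself doesn't use here) adds the product of the two features as a third feature, lifting the points into a space where a plane can separate the classes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The XOR pattern: no straight line separates class 0 from class 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# A plain linear discriminant cannot classify all four points correctly
flat = LogisticRegression().fit(X, y)

# Adding the product x0*x1 as a third feature lifts the data into 3-D,
# where a plane such as 2*x0 + 2*x1 - 4*x0*x1 - 1 = 0 separates the classes
X3 = np.hstack([X, (X[:, 0] * X[:, 1]).reshape(-1, 1)])
lifted = LogisticRegression(C=1e6).fit(X3, y)

print("2-D accuracy:", flat.score(X, y))
print("3-D accuracy:", lifted.score(X3, y))
```

The large `C` weakens regularization so the model can commit to the separating plane; with the extra feature, the lifted model classifies all four points correctly while the flat model cannot.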