- Python Deep Learning
- Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca
Linear regression
We have already introduced linear regression in Chapter 1, Machine Learning – an Introduction. To recap, in vector notation, the output of a linear regression algorithm is a single value, y, equal to the dot product of the input vector x and the weights w: $y = \mathbf{x} \cdot \mathbf{w} = \sum_{j} x_j w_j$. As we now know, linear regression is a special case of a neural network; that is, it's a single neuron with the identity activation function. In this section, we'll learn how to train linear regression with gradient descent and, in the following sections, we'll extend it to training more complex models. You can see how gradient descent works in the following code block:
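The original listing is not reproduced here, so the following is a minimal sketch of batch gradient descent for linear regression with an MSE loss. The function name gradient_descent and the parameters learning_rate and epochs are illustrative assumptions rather than the book's exact code:

```python
import numpy as np

def gradient_descent(X, t, learning_rate=0.01, epochs=100):
    """Fit linear regression weights with batch gradient descent (illustrative sketch).

    X: inputs of shape (n_samples, n_features); t: targets of shape (n_samples,).
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)           # start from zero weights

    for _ in range(epochs):
        y = X.dot(w)                   # predictions y = x . w for every sample
        error = y - t                  # difference between output and target
        # Gradient of the MSE loss: dJ/dw_j = (2/n) * sum_i (y^i - t^i) * x_j^i
        grad = 2.0 / n_samples * X.T.dot(error)
        w -= learning_rate * grad      # move against the gradient
    return w
```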

At first, this might look scary, but fear not! Behind the scenes, it's very simple and straightforward mathematics (I know that sounds even scarier!). But let's not lose sight of our goal, which is to adjust the weights, w, in a way that will help the algorithm to predict the target values. To do this, first we need to know how the output $y^{i}$ differs from the target value $t^{i}$ for each sample of the training dataset (we use superscript notation to mark the i-th sample). We'll use the mean-squared error (MSE) loss function, which is equal to the mean value of the squared differences $y^{i} - t^{i}$ over all samples (the total number of samples in the training set is n):

$$J(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n}\left(y^{i} - t^{i}\right)^2$$

We'll denote the MSE with J for ease of use and to underscore that we could also use other loss functions. Each $y^{i}$ is a function of w, and therefore, J is also a function of w. As we mentioned previously, the loss function J represents a hypersurface of dimension equal to the dimension of w (we are implicitly also considering the bias). To illustrate this, imagine that we have only one input value, x, and a single weight, w. We can see how the MSE changes with respect to w in the following diagram:

(Figure: the MSE loss J plotted as a function of a single weight w.)
Our goal is to minimize J, which means finding the value of w where J is at its global minimum. To do this, we need to know whether J increases or decreases when we modify w, or, in other words, the first derivative (or gradient) of J with respect to w:
- In the general case, where we have multiple inputs and weights, we can calculate the partial derivative with respect to each weight wj using the following formula:

$$\frac{\partial J(\mathbf{w})}{\partial w_j} = \frac{\partial}{\partial w_j}\left[\frac{1}{n}\sum_{i=1}^{n}\left(y^{i} - t^{i}\right)^2\right]$$
- And to move toward the minimum, we need to move in the direction opposite to the one set by $\frac{\partial J(\mathbf{w})}{\partial w_j}$ for each $w_j$.
- Let's calculate the derivative:

$$\frac{\partial J(\mathbf{w})}{\partial w_j} = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial}{\partial w_j}\left(y^{i} - t^{i}\right)^2 = \frac{2}{n}\sum_{i=1}^{n}\left(y^{i} - t^{i}\right)\frac{\partial y^{i}}{\partial w_j}$$

If $y^{i} = \sum_{j} x_j^{i} w_j$, then $\frac{\partial y^{i}}{\partial w_j} = x_j^{i}$ and, therefore:

$$\frac{\partial J(\mathbf{w})}{\partial w_j} = \frac{2}{n}\sum_{i=1}^{n}\left(y^{i} - t^{i}\right)x_j^{i}$$
- Now that we have calculated the partial derivatives, we'll update the weights with the following update rule:

$$w_j \rightarrow w_j - \eta\frac{\partial J(\mathbf{w})}{\partial w_j} = w_j - \eta\frac{2}{n}\sum_{i=1}^{n}\left(y^{i} - t^{i}\right)x_j^{i}$$
Here, η is the learning rate. The learning rate determines the size of the weight adjustment at each update step.
- We can write the update rule in matrix form as follows:

$$\mathbf{w} \rightarrow \mathbf{w} - \eta\nabla J(\mathbf{w})$$
Here, ∇, also called nabla, represents the vector of partial derivatives of J with respect to each weight.
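As a quick numerical check of the update rule (not taken from the book; the toy data, the learning rate of 0.02, and the variable names below are made up for illustration), a single vectorized gradient descent step can be written as follows:

```python
import numpy as np

# Toy data: 3 samples with 2 features; targets generated by the "true" weights [1, 2].
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 3.0]])
t = np.array([5.0, 4.0, 9.0])
w = np.zeros(2)                       # initial weights
eta = 0.02                            # learning rate (arbitrary choice)

y = X @ w                             # predictions for all samples at once
grad = 2.0 / len(t) * X.T @ (y - t)   # nabla J: vector of partial derivatives
w = w - eta * grad                    # one step of w -> w - eta * nabla J(w)

print(w)                              # ~[0.53, 0.55]: a step from [0, 0] toward [1, 2]
```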

You may have noticed that in order to update the weights, we accumulate the error across all training samples. In practice, datasets can be very large, and iterating over all of them for just one update would make training impractically slow. One solution to this problem is the stochastic (or online) gradient descent (SGD) algorithm, which works in the same way as regular gradient descent, but updates the weights after every training sample. However, SGD is prone to noise in the data. If a sample is an outlier, we risk increasing the error instead of decreasing it. A good compromise between the two is mini-batch gradient descent, which accumulates the error over a mini-batch of n samples and performs one weight update per mini-batch. In practice, you'll almost always use mini-batch gradient descent, as sketched below.
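To make the distinction concrete, here is a minimal mini-batch variant of the earlier sketch. The function name minibatch_gradient_descent, the batch_size parameter, and the per-epoch shuffling are assumptions for illustration, not the book's code:

```python
import numpy as np

def minibatch_gradient_descent(X, t, batch_size=32, learning_rate=0.01, epochs=100):
    """Mini-batch gradient descent for linear regression (illustrative sketch)."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)

    for _ in range(epochs):
        indices = np.random.permutation(n_samples)       # shuffle samples each epoch
        for start in range(0, n_samples, batch_size):
            batch = indices[start:start + batch_size]
            y = X[batch].dot(w)                          # predictions for this mini-batch
            error = y - t[batch]
            # Same MSE gradient as before, averaged over the mini-batch only
            grad = 2.0 / len(batch) * X[batch].T.dot(error)
            w -= learning_rate * grad                    # one update per mini-batch
    return w
```

With batch_size=1 this reduces to SGD, and with batch_size equal to the number of samples it reduces to the batch version shown earlier.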
Before we move on to the next section, we should mention that, besides the global minimum, the loss function might have multiple local minima, and minimizing its value is not as trivial as it is in this example.