
Understanding backpropagation

When a feedforward neural network is used to accept an input x and produce an output ŷ, information flows forward through the network. The input x provides the information that then propagates up through the hidden units at each layer and produces ŷ. This is called forward propagation. During training, forward propagation continues onward until it produces a scalar cost J(θ). The backpropagation algorithm, often called backprop, allows the information from the cost to then flow backward through the network in order to compute the gradient.
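To make the flow concrete, here is a minimal sketch of forward propagation in NumPy. The one-hidden-layer architecture, tanh activation, and squared-error cost are illustrative assumptions rather than details from the text; the point is only that information moves from x through the hidden units to ŷ, and during training on to a scalar cost J(θ).

```python
import numpy as np

# Illustrative sketch: one hidden layer, tanh activation, squared error.
# Layer sizes and the loss are assumptions chosen for brevity.

rng = np.random.default_rng(0)

x = rng.standard_normal(4)           # input x
y = 1.0                              # training target

W1 = rng.standard_normal((3, 4))     # hidden-layer parameters (part of theta)
b1 = np.zeros(3)
W2 = rng.standard_normal((1, 3))     # output-layer parameters
b2 = np.zeros(1)

# Forward propagation: input -> hidden units -> output y_hat.
h = np.tanh(W1 @ x + b1)             # hidden activations
y_hat = W2 @ h + b2                  # network output

# During training, forward propagation ends in a scalar cost J(theta).
J = 0.5 * (y_hat[0] - y) ** 2
print(f"cost J(theta) = {J:.4f}")
```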

Computing an analytical expression for the gradient is straightforward, but numerically evaluating such an expression can be computationally expensive. The backpropagation algorithm computes the gradient using a simple and inexpensive procedure.
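The contrast can be seen in a toy comparison. The single linear unit and squared-error cost below are assumptions made for illustration: a finite-difference evaluation needs roughly one extra forward pass per parameter, while the analytical gradient from the chain rule comes from a single backward computation.

```python
import numpy as np

def cost(theta, x, y):
    """Forward pass for a single linear unit with squared-error cost."""
    y_hat = theta @ x
    return 0.5 * (y_hat - y) ** 2

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), 1.0
theta = rng.standard_normal(4)

# Numerical gradient: O(len(theta)) forward passes, expensive for large theta.
eps = 1e-6
num_grad = np.array([
    (cost(theta + eps * e, x, y) - cost(theta - eps * e, x, y)) / (2 * eps)
    for e in np.eye(len(theta))
])

# Analytical gradient via the chain rule: dJ/dtheta = (y_hat - y) * x,
# obtained in a single backward computation.
ana_grad = (theta @ x - y) * x

print(np.allclose(num_grad, ana_grad, atol=1e-5))
```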

Backpropagation refers only to the method for computing the gradient; a separate algorithm, such as stochastic gradient descent, then uses that gradient to perform the actual learning.
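The sketch below, reusing the illustrative one-hidden-layer setup from above, keeps the two roles separate: a hand-written backward pass computes the gradient of J, and a plain stochastic gradient descent step then consumes that gradient to update the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), 1.0
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)
lr = 0.1  # learning rate, an illustrative choice

for step in range(5):
    # Forward propagation up to the scalar cost J(theta).
    h = np.tanh(W1 @ x + b1)
    y_hat = W2 @ h + b2
    J = 0.5 * (y_hat[0] - y) ** 2

    # Backpropagation: the cost's information flows backward,
    # applying the chain rule layer by layer to get the gradient.
    dy_hat = y_hat - y                 # dJ/dy_hat
    dW2 = np.outer(dy_hat, h)          # dJ/dW2
    db2 = dy_hat                       # dJ/db2
    dh = W2.T @ dy_hat                 # dJ/dh
    dpre = dh * (1 - h ** 2)           # through the tanh nonlinearity
    dW1 = np.outer(dpre, x)            # dJ/dW1
    db1 = dpre                         # dJ/db1

    # Stochastic gradient descent: a separate algorithm that
    # merely consumes the gradient backprop produced.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2
    print(f"step {step}: J = {J:.4f}")
```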