
Problems in training the perceptron and a solution

Let's consider a single neuron: what are the best choices for the weight w and the bias b? Ideally, we would like to provide a set of training examples and let the computer adjust the weight and the bias so that the errors produced in the output are minimized. To make this a bit more concrete, suppose we have a set of images of cats and another, separate set of images not containing cats. For the sake of simplicity, assume that each neuron looks at a single input pixel value. As the computer processes these images, we would like our neuron to adjust its weights and bias so that fewer and fewer images are wrongly recognized as non-cats. This approach seems very intuitive, but it requires that a small change in the weights (and/or bias) causes only a small change in the outputs.

If the output can jump abruptly, we cannot learn progressively; we are reduced to trying changes in all possible directions without knowing whether we are improving, a process known as exhaustive search. After all, kids learn little by little. Unfortunately, the perceptron does not show this little-by-little behavior: its output is either 0 or 1, and that abrupt jump does not help it learn.
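To see that jump concretely, here is a minimal sketch in plain Python (the perceptron helper and the variable names are ours for illustration, not code from this book): with a step activation, nudging the weight by a tiny amount around zero flips the output all the way from 0 to 1.

    def perceptron(x, w, b):
        """Classic perceptron: step activation on w*x + b."""
        return 1 if w * x + b > 0 else 0

    x, b = 1.0, 0.0
    # A tiny nudge of the weight flips the output completely:
    for w in (-0.001, 0.001):
        print(f"w = {w:+.3f} -> output = {perceptron(x, w, b)}")
    # w = -0.001 -> output = 0
    # w = +0.001 -> output = 1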

We need something different, something smoother: a function that changes progressively from 0 to 1, with no discontinuity. Mathematically, this means we need a continuous, differentiable function, because the derivative is what tells us in which direction a small change in the weights moves the output.
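A standard function with exactly these properties is the sigmoid, σ(z) = 1/(1 + e^(-z)), which squashes any real number smoothly into the interval (0, 1). As a sketch (again with names of our own choosing, not the book's code), repeating the experiment above with a sigmoid in place of the step shows the output barely moving:

    import math

    def sigmoid(z):
        """Smooth, differentiable squashing of any real z into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-z))

    x, b = 1.0, 0.0
    # The same tiny nudge of the weight now barely moves the output:
    for w in (-0.001, 0.001):
        print(f"w = {w:+.3f} -> output = {sigmoid(w * x + b):.5f}")
    # w = -0.001 -> output = 0.49975
    # w = +0.001 -> output = 0.50025

Because the output changes gradually, small adjustments to the weights produce small, measurable improvements, which is exactly the property that progressive learning requires.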
