
The unreasonable effectiveness of data

Our first deep learning models on the binary classification task used fewer than 4,000 records. We did this so you could run the example quickly. Deep learning really needs much more data than that, so we then built a more complex model on a much larger dataset, which increased accuracy. This process demonstrated the following:

  • Establishing a baseline with other machine learning algorithms gives you a benchmark to beat before training a deep learning model
  • We had to create a more complex model and adjust the hyper-parameters for our bigger dataset
  • The Unreasonable Effectiveness of Data

The last point here is borrowed from an article by Alon Halevy, Peter Norvig, and Fernando Pereira, available at https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf. There is also a YouTube video with the same name. One of the main points in the article is this: invariably, simple models and a lot of data trump more elaborate models based on less data.
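To make the first bullet point above concrete, the following is a minimal sketch of benchmarking simple models before a neural network. It assumes scikit-learn, and the synthetic dataset and model settings are illustrative stand-ins, not the code or data used earlier in this chapter:

# Minimal sketch: benchmark simple baselines before a neural network.
# The synthetic dataset stands in for the binary classification data
# used earlier in the chapter.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=42),
    "neural network": MLPClassifier(hidden_layer_sizes=(64, 32),
                                    max_iter=500, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.4f}")

If the neural network cannot beat the logistic regression baseline, that is a signal to revisit the data or the architecture before investing further in deep learning.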

We increased the accuracy of our deep learning model by 0.38%. Considering that our dataset has highly correlated variables and that our domain is modeling human activities, this is not bad. People are, well, predictable; so when attempting to predict what they do next, a relatively small dataset usually works. In other domains, adding more data has much more of an effect. Consider a complex image-recognition task with color images where the image quality and format are not consistent. In that case, increasing our training data by a factor of 10 would have a much larger effect than in the earlier example. For many deep learning projects, you should include tasks to acquire more data from the very beginning of the project. This can be done by manually labeling the data, by outsourcing tasks (for example, to Amazon Mechanical Turk), or by building some form of feedback mechanism into your application.
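As an illustration of the last option, here is a minimal sketch of a feedback mechanism that accumulates corrected labels as the application runs. The file name and schema are hypothetical; a production system would use a proper data store rather than a CSV file:

# Minimal sketch of an in-application feedback mechanism for
# collecting labeled training data over time.
import csv
from pathlib import Path

FEEDBACK_FILE = Path("feedback_labels.csv")  # hypothetical path

def record_feedback(features, predicted_label, user_label):
    """Append one user-corrected example to a growing training set."""
    is_new = not FEEDBACK_FILE.exists()
    with FEEDBACK_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(
                [f"x{i}" for i in range(len(features))]
                + ["predicted", "label"])
        writer.writerow(list(features) + [predicted_label, user_label])

# Example: the model predicted 1, but the user corrected it to 0.
record_feedback([0.4, 1.2, -0.7], predicted_label=1, user_label=0)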

While other machine learning algorithms may also see an improvement in performance with more data, eventually adding more data stops making a difference and performance stagnates. This is because these algorithms were never designed for large, high-dimensional data, so they cannot model the complex patterns in very large datasets. By contrast, you can build increasingly complex deep learning architectures that can model these patterns. The following plot illustrates how deep learning algorithms can continue to take advantage of more data and keep improving after the performance of other machine learning algorithms stagnates:

Figure 4.6: How model accuracy increases with dataset size for deep learning models versus other machine learning models
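A rough way to reproduce the shape of this curve is to train a simple model and a deeper network on growing subsets of the data and compare test accuracy. The following is an illustrative sketch assuming scikit-learn; the dataset, subset sizes, and architectures are assumptions, not the setup behind Figure 4.6:

# Minimal learning-curve sketch: train a simple model and a deeper
# network on growing subsets of a synthetic dataset and compare
# accuracy on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=50000, n_features=50,
                           n_informative=30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for n in (500, 2000, 8000, 32000):
    simple = LogisticRegression(max_iter=1000)
    deep = MLPClassifier(hidden_layer_sizes=(128, 64, 32),
                         max_iter=300, random_state=42)
    simple.fit(X_train[:n], y_train[:n])
    deep.fit(X_train[:n], y_train[:n])
    print(f"n={n:>6}: simple={simple.score(X_test, y_test):.4f}, "
          f"deep={deep.score(X_test, y_test):.4f}")

On a task with enough complex structure, the simple model's accuracy flattens out as n grows while the deeper network keeps improving, which is the pattern Figure 4.6 depicts.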