
Learning the optimizer

In this method, we try to learn the optimizer itself. How do we generally optimize our neural network? We optimize it by training on a large dataset and minimizing the loss using gradient descent. But in the few-shot learning setting, gradient descent does not work well because we have only a handful of examples per task. So, in this case, we learn the optimizer itself. We will have two networks: a base network that actually tries to learn the task, and a meta network that optimizes the base network, that is, it produces the parameter updates for the base network. We will explore how exactly this works in the upcoming sections.
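To make the idea concrete, here is a minimal sketch in PyTorch. It is an illustrative assumption, not the book's implementation: the tiny regression task, the names base_forward and meta_net, and the coordinate-wise update rule are all made up for this sketch. The meta network looks at the gradients of the base network and proposes the parameter updates, and it is trained so that the base network's loss after the proposed update is as small as possible:

```python
# A minimal sketch of learning the optimizer (illustrative only).
# The base network is written functionally so that the updated weights
# remain differentiable with respect to the meta network's output.
import torch
import torch.nn as nn

torch.manual_seed(0)

def base_forward(x, w, b):
    # Base network: a tiny linear regressor with explicit parameters.
    return x @ w + b

# Meta network: maps each parameter's gradient to an update, coordinate-wise.
meta_net = nn.Sequential(nn.Linear(1, 20), nn.ReLU(), nn.Linear(20, 1))
meta_optimizer = torch.optim.Adam(meta_net.parameters(), lr=1e-3)

for step in range(200):
    # A fresh base learner and a small "few-shot" task: y = 2x + 3 plus noise.
    w = torch.randn(1, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    x = torch.randn(5, 1)
    y = 2 * x + 3 + 0.1 * torch.randn(5, 1)

    # Loss of the base network before the update; keep the graph so we can
    # differentiate through the gradients themselves.
    loss = ((base_forward(x, w, b) - y) ** 2).mean()
    grads = torch.autograd.grad(loss, [w, b], create_graph=True)

    # The meta network proposes an update for every parameter coordinate.
    new_params = []
    for p, g in zip([w, b], grads):
        update = meta_net(g.reshape(-1, 1)).reshape(p.shape)
        new_params.append(p - update)
    new_w, new_b = new_params

    # Loss after the proposed update: this is what trains the meta network.
    meta_loss = ((base_forward(x, new_w, new_b) - y) ** 2).mean()
    meta_optimizer.zero_grad()
    meta_loss.backward()
    meta_optimizer.step()
```

In practice, the inner update is usually unrolled over several steps and the meta network is often a recurrent model (for example, an LSTM) that carries state across those steps; the single-step sketch above only shows the core idea of one network producing the updates for another.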
