
Encoder-decoder structure

Neural machine translation models are Recurrent Neural Networks (RNNs) arranged in an encoder-decoder fashion. The encoder network takes in a variable-length input sequence through its RNN and encodes the sequence into a fixed-size vector. The decoder begins with this encoded vector and generates the translation word by word until it predicts the end of the sentence. The whole architecture is trained end-to-end on pairs of input sentences and their correct translations. The major advantage of these systems (apart from the capability to handle variable-length input) is that they learn the context of a sentence and predict accordingly, rather than performing a word-to-word translation. Neural machine translation can best be seen in action on Google Translate, as in the following screenshot:

Sourced from Google Translate
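
To make the encode-then-decode data flow concrete, here is a minimal PyTorch sketch of the idea. It is not the exact model discussed here: the GRU cells, layer sizes, and the start/end token-id conventions (`sos_id`, `eos_id`) are illustrative assumptions, and the greedy decoding loop assumes a batch size of one.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids; the final hidden state is the
        # fixed-size vector that summarizes the whole source sentence
        _, hidden = self.rnn(self.embedding(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token, hidden):
        # token: (batch, 1) previous target word; returns scores for the next word
        output, hidden = self.rnn(self.embedding(token), hidden)
        return self.out(output), hidden

@torch.no_grad()
def translate(encoder, decoder, src, sos_id=1, eos_id=2, max_len=20):
    # Greedy decoding: start from the encoded vector and emit words one by
    # one until the end-of-sentence token is predicted (assumes batch size 1)
    hidden = encoder(src)
    token = torch.full((src.size(0), 1), sos_id, dtype=torch.long)
    words = []
    for _ in range(max_len):
        scores, hidden = decoder(token, hidden)
        token = scores.argmax(dim=-1)
        if token.item() == eos_id:
            break
        words.append(token.item())
    return words

enc = Encoder(vocab_size=1000, embed_dim=32, hidden_dim=64)
dec = Decoder(vocab_size=1000, embed_dim=32, hidden_dim=64)
src = torch.randint(3, 1000, (1, 7))  # one toy "sentence" of 7 token ids
print(translate(enc, dec, src))       # untrained model, so arbitrary word ids
```

During training, the decoder is typically fed the correct previous target word at each step (teacher forcing), with a cross-entropy loss on its predictions, so the encoder and decoder are optimized jointly end-to-end, as described above.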