Summary

In this chapter, you took your first steps toward solving NLP tasks by understanding the primary underlying platform (TensorFlow) on which we will be implementing our algorithms. First, we discussed the underlying details of the TensorFlow architecture. Next, we discussed the essential ingredients of a meaningful TensorFlow client. Then we discussed a general coding practice widely used in TensorFlow known as scoping. Later, we brought all these elements together to implement a neural network to classify the MNIST dataset.

Specifically, we discussed the TensorFlow architecture, aligning the explanation with an example TensorFlow client. In the TensorFlow client, we defined the TensorFlow graph. Then, when we created a session, it looked at the graph, created a GraphDef object representing the graph, and sent it to the distributed master. The distributed master looked at the graph, decided which components to use for the relevant computation, and divided it into several subgraphs to make the computations faster. Finally, workers executed the subgraphs and returned the result through the session.
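The graph-then-session flow described above can be sketched minimally as follows. This uses the TensorFlow 1.x-style API (accessed through `tf.compat.v1` so it also runs on newer installations); the specific constants and names are illustrative only:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Define the graph: a simple computation c = (a * b) + 5
graph = tf.Graph()
with graph.as_default():
    a = tf.constant(2.0, name='a')
    b = tf.constant(3.0, name='b')
    c = tf.add(tf.multiply(a, b), 5.0, name='c')

# Creating a session serializes the graph as a GraphDef and hands it to the
# distributed master, which partitions it into subgraphs for the workers.
with tf.Session(graph=graph) as session:
    result = session.run(c)  # workers execute the subgraphs and return c
print(result)  # 11.0
```

Nothing is computed while the graph is being built; the numerical work happens only inside `session.run`.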

Next, we discussed the various elements that compose a typical TensorFlow client: inputs, variables, outputs, and operations. Inputs are the data we feed to the algorithm for training and testing purposes. We discussed three different ways of feeding inputs: using placeholders, preloading data and storing data as TensorFlow tensors, and using an input pipeline. Then we discussed TensorFlow variables, how they differ from other tensors, and how to create and initialize them. Following this, we discussed how variables can be used to create intermediate and terminal outputs. Finally, we discussed several available TensorFlow operations, such as mathematical operations, matrix operations, neural network-related operations, and control-flow operations, that will be used later in the book.
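The four ingredients above can be seen together in one small client. This is a minimal sketch in the TensorFlow 1.x style (via `tf.compat.v1`); the shapes and initial values are arbitrary placeholders for illustration:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Input: a placeholder, fed with actual data at run time
    x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
    # Variables: mutable tensors that must be initialized before use
    W = tf.Variable(tf.ones([3, 2]), name='W')
    b = tf.Variable(tf.zeros([2]), name='b')
    # Output: produced by chaining operations on inputs and variables
    h = tf.nn.sigmoid(tf.matmul(x, W) + b, name='h')
    init = tf.global_variables_initializer()

with tf.Session(graph=graph) as session:
    session.run(init)  # variables hold no value until initialized
    out = session.run(h, feed_dict={x: np.ones((2, 3), dtype=np.float32)})
print(out.shape)  # (2, 2)
```

Note that forgetting `session.run(init)` is a common pitfall: reading an uninitialized variable raises an error.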

Then we discussed how scoping can be used to avoid certain pitfalls when implementing a TensorFlow client. Scoping allows variables to be reused with ease while keeping the code encapsulated.
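As a minimal illustration of that reuse behavior (again in the TensorFlow 1.x style via `tf.compat.v1`; the scope and variable names here are made up for the example):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    def layer(x):
        # get_variable looks the variable up in the enclosing scope instead
        # of silently creating a duplicate on every call
        w = tf.get_variable('w', shape=[], initializer=tf.ones_initializer())
        return x * w

    with tf.variable_scope('layer1'):
        out1 = layer(tf.constant(2.0))
    with tf.variable_scope('layer1', reuse=True):
        out2 = layer(tf.constant(3.0))  # reuses the existing layer1/w

    names = [v.name for v in tf.global_variables()]
print(names)  # ['layer1/w:0'] — only one variable exists
```

Calling the same function twice created only one variable, which is exactly the encapsulation scoping provides.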

Finally, we implemented a neural network using all the previously learned concepts. We used a three-layer neural network to classify the MNIST digit dataset.
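The skeleton of such a network, combining placeholders, variables, operations, and scoping, could look roughly like the following. This is a sketch only: the layer sizes, learning rate, and the random stand-in batch are assumptions for illustration, not the chapter's exact code, which feeds real MNIST batches:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 784], name='x')  # flattened 28x28 images
    y = tf.placeholder(tf.int32, shape=[None], name='y')         # digit labels 0-9

    def fc_layer(inp, n_in, n_out, scope, activation=tf.nn.relu):
        # One fully connected layer, with its weights encapsulated in a scope
        with tf.variable_scope(scope):
            w = tf.get_variable('w', shape=[n_in, n_out],
                                initializer=tf.glorot_uniform_initializer())
            b = tf.get_variable('b', shape=[n_out],
                                initializer=tf.zeros_initializer())
            z = tf.matmul(inp, w) + b
            return activation(z) if activation is not None else z

    h1 = fc_layer(x, 784, 500, 'layer1')                     # hidden layer 1
    h2 = fc_layer(h1, 500, 250, 'layer2')                    # hidden layer 2
    logits = fc_layer(h2, 250, 10, 'layer3', activation=None)  # output layer

    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    init = tf.global_variables_initializer()

# One training step on random stand-in data (real code would feed MNIST batches)
with tf.Session(graph=graph) as session:
    session.run(init)
    batch_x = np.random.rand(32, 784).astype(np.float32)
    batch_y = np.random.randint(0, 10, size=32)
    loss_val, _ = session.run([loss, train_op],
                              feed_dict={x: batch_x, y: batch_y})
print(loss_val)
```

Training simply repeats the final `session.run` over many batches, letting the optimizer update all the scoped variables via backpropagation.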

In the next chapter, we will see how to use the fully connected neural network we implemented in this chapter to learn semantic numerical representations of words.