
Summary

In this chapter, you took your first steps toward solving NLP tasks by understanding the primary underlying platform (TensorFlow) on which we will be implementing our algorithms. First, we discussed the underlying details of the TensorFlow architecture. Next, we discussed the essential ingredients of a meaningful TensorFlow client. Then we discussed a general coding practice widely used in TensorFlow known as scoping. Later, we brought all these elements together to implement a neural network to classify the MNIST dataset.

Specifically, we discussed the TensorFlow architecture, lining the explanation up with an example TensorFlow client. In the TensorFlow client, we defined the TensorFlow graph. Then, when we created a session, it looked at the graph, created a GraphDef object representing the graph, and sent it to the distributed master. The distributed master looked at the graph, decided which components to use for the relevant computation, and divided it into several subgraphs to make the computations faster. Finally, workers executed the subgraphs and returned the results through the session.
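The client-side half of this flow can be sketched in a few lines. This is a minimal illustration using the TensorFlow 1.x-style API (accessed through `tf.compat.v1` on newer installs); the graph partitioning and worker execution happen inside the runtime and are not visible in client code.

```python
# Minimal sketch of the define-graph-then-run-session flow described above.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Defining operations only adds nodes to the graph; nothing runs yet.
    x = tf.constant(3.0, name='x')
    y = tf.constant(4.0, name='y')
    z = tf.add(x, y, name='z')

# Creating a session serializes the graph to a GraphDef and hands it to the
# runtime (the distributed master), which partitions it into subgraphs that
# workers execute.
with tf.Session(graph=graph) as sess:
    result = sess.run(z)  # workers execute and return the result
```

Calling `sess.run(z)` is the point at which the deferred computation actually executes.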

Next, we discussed the various elements that compose a typical TensorFlow client: inputs, variables, outputs, and operations. Inputs are the data we feed to the algorithm for training and testing purposes. We discussed three different ways of feeding inputs: using placeholders, preloading the data and storing it as TensorFlow tensors, and using an input pipeline. Then we discussed TensorFlow variables, how they differ from other tensors, and how to create and initialize them. Following this, we discussed how variables can be used to create intermediate and terminal outputs. Finally, we discussed several of the available TensorFlow operations, such as mathematical operations, matrix operations, neural-network-related operations, and control-flow operations, that will be used later in the book.
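These four elements can be seen together in one small client. The sketch below (TensorFlow 1.x-style API; the shapes and names are illustrative) feeds an input through a placeholder, initializes variables, and produces an output via a neural-network operation:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Input: a placeholder is filled at run time through feed_dict
    x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
    # Variables: stateful tensors that must be explicitly initialized
    W = tf.Variable(tf.ones([3, 2]), name='W')
    b = tf.Variable(tf.zeros([2]), name='b')
    # Operation producing a terminal output from the input and variables
    h = tf.nn.sigmoid(tf.matmul(x, W) + b, name='h')

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())  # initialize all variables
    out = sess.run(h, feed_dict={x: [[1.0, 2.0, 3.0]]})
```

Note that the variables must be initialized (here with `tf.global_variables_initializer()`) before any output that depends on them can be computed.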

Then we discussed how scoping can be used to avoid certain pitfalls when implementing a TensorFlow client. Scoping allows variables to be used with ease, while maintaining the encapsulation of the code.
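As a brief reminder of the mechanics, the sketch below (TensorFlow 1.x-style API; the scope and variable names are illustrative) shows how `tf.variable_scope` together with `tf.get_variable` lets two call sites share one variable instead of silently creating duplicates:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    def layer(x):
        # get_variable creates the variable on the first call and, when
        # reuse is enabled on the enclosing scope, returns that same
        # variable on later calls instead of creating a new one
        w = tf.get_variable('w', shape=[], initializer=tf.ones_initializer())
        return x * w

    with tf.variable_scope('layer1'):
        a = layer(tf.constant(2.0))
    with tf.variable_scope('layer1', reuse=True):
        b = layer(tf.constant(3.0))  # reuses layer1/w

    all_vars = [v.name for v in tf.global_variables()]
```

Despite `layer` being called twice, the graph contains a single variable, namespaced as `layer1/w`, which is the encapsulation benefit described above.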

Finally, we implemented a neural network using all the previously learned concepts. We used a three-layer neural network to classify the MNIST digit dataset.
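The overall structure of such a classifier can be condensed as follows. This is a sketch, not the chapter's exact implementation: the layer sizes and learning rate are illustrative, and a random batch stands in for real MNIST data just to show the training mechanics (TensorFlow 1.x-style API).

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    images = tf.placeholder(tf.float32, [None, 784])  # flattened 28x28 images
    labels = tf.placeholder(tf.int32, [None])         # digit class per image

    # Three fully connected layers (sizes are illustrative)
    h1 = tf.layers.dense(images, 256, activation=tf.nn.relu)
    h2 = tf.layers.dense(h1, 128, activation=tf.nn.relu)
    logits = tf.layers.dense(h2, 10)  # one logit per digit class

    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    # One training step on a random batch, standing in for MNIST data
    batch_x = np.random.rand(32, 784).astype(np.float32)
    batch_y = np.random.randint(0, 10, size=32)
    _, l = sess.run([train_op, loss],
                    feed_dict={images: batch_x, labels: batch_y})
```

In a real run, the feed would cycle over MNIST mini-batches for many steps, and accuracy would be evaluated on a held-out test set.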

In the next chapter, we will see how to use the fully connected neural network we implemented in this chapter to learn semantic numerical representations of words.
