
The Continuous Bag-of-Words algorithm

The CBOW model works similarly to the skip-gram algorithm, with one significant change in the problem formulation. In the skip-gram model, we predicted the context words from the target word; in the CBOW model, we predict the target word from the context words. Let's compare what the data looks like for skip-gram and CBOW by taking the previous example sentence:

The dog barked at the mailman.

For skip-gram, data tuples—(input word, output word)—might look like this:

(dog, the), (dog, barked), (barked, dog), and so on.

For CBOW, data tuples would look like the following:

([the, barked], dog), ([dog, at], barked), and so on.
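
The following is a minimal sketch (not taken from the book's notebook) of how such ([context], target) tuples could be generated for a context window of size 1; the sentence and window_size variables here are purely illustrative:

sentence = "the dog barked at the mailman".split()
window_size = 1

data = []
for i in range(window_size, len(sentence) - window_size):
    # Words to the left and right of position i form the context
    context = sentence[i - window_size:i] + sentence[i + 1:i + 1 + window_size]
    data.append((context, sentence[i]))

print(data)
# [(['the', 'barked'], 'dog'), (['dog', 'at'], 'barked'), ...]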

Consequently, the input to the CBOW model has a dimensionality of 2 × m × D, where m is the context window size and D is the dimensionality of the embeddings. The conceptual model of CBOW is shown in Figure 3.13:

Figure 3.13: The CBOW model

We will not go into great detail about the intricacies of CBOW, as they are quite similar to those of skip-gram. However, we will discuss the algorithm's implementation (though not in depth, since it shares many similarities with skip-gram) to get a clear understanding of how to implement CBOW properly. The full implementation of CBOW is available at ch3_word2vec.ipynb in the ch3 exercise folder.

Implementing CBOW in TensorFlow

First, we define the variables; these are the same as in the skip-gram model:

embeddings = tf.Variable(tf.random_uniform([vocabulary_size,
  embedding_size], -1.0, 1.0, dtype=tf.float32))
softmax_weights = tf.Variable(
  tf.truncated_normal([vocabulary_size, embedding_size],
  stddev=1.0 / math.sqrt(embedding_size),
  dtype=tf.float32))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size], dtype=tf.float32))
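
As a point of reference, the input placeholders for CBOW differ from those of skip-gram: each training example now carries all 2*window_size context word IDs. The following is a sketch, assuming batch_size and window_size are defined as hyperparameters earlier in the notebook:

# Each row of train_dataset holds the 2*window_size context word IDs
# for one target word (sketch; hyperparameters assumed to be defined)
train_dataset = tf.placeholder(tf.int32, shape=[batch_size, 2*window_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])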

Next, we create a stacked set of embeddings, one for each position of the context. This gives us a tensor of size [batch_size, embedding_size, 2*window_size]. Then, we use a reduction operator to reduce the stacked tensor to one of size [batch_size, embedding_size] by averaging the stacked embeddings over the last axis:

stacked_embeddings = None
for i in range(2*window_size):
  # Look up the embeddings for the i-th context position of each example
  embedding_i = tf.nn.embedding_lookup(embeddings,
                                       train_dataset[:, i])
  x_size, y_size = embedding_i.get_shape().as_list()
  if stacked_embeddings is None:
    stacked_embeddings = tf.reshape(embedding_i, [x_size, y_size, 1])
  else:
    # Concatenate along a new last axis so that each context position
    # occupies one slice of the stacked tensor
    stacked_embeddings = tf.concat(
      axis=2,
      values=[stacked_embeddings,
              tf.reshape(embedding_i, [x_size, y_size, 1])]
    )

assert stacked_embeddings.get_shape().as_list()[2] == 2*window_size
mean_embeddings = tf.reduce_mean(stacked_embeddings, 2, keepdims=False)
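
As a design alternative (a sketch, not the notebook's code), the same averaging can be written more compactly by looking up the full [batch_size, 2*window_size] index matrix at once and averaging over the context axis:

# embedding_lookup on a 2-D index matrix returns a
# [batch_size, 2*window_size, embedding_size] tensor; averaging over
# axis 1 gives the same [batch_size, embedding_size] result as above
context_embeddings = tf.nn.embedding_lookup(embeddings, train_dataset)
mean_embeddings = tf.reduce_mean(context_embeddings, axis=1)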

Thereafter, the loss and the optimizer are defined just as in the skip-gram model:

loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=softmax_weights,
        biases=softmax_biases,
        inputs=mean_embeddings,
        labels=train_labels, 
        num_sampled=num_sampled, 
        num_classes=vocabulary_size))
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
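
Finally, a minimal training-step sketch is shown below; the generate_batch_cbow helper and num_steps are hypothetical placeholders for the batch-generation logic and step count defined in the notebook:

with tf.Session() as session:
    tf.global_variables_initializer().run()
    for step in range(num_steps):
        # generate_batch_cbow is a hypothetical helper returning a
        # [batch_size, 2*window_size] batch and [batch_size, 1] labels
        batch_data, batch_labels = generate_batch_cbow(batch_size, window_size)
        feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
        _, batch_loss = session.run([optimizer, loss], feed_dict=feed_dict)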