How to do it...

This section covers how to visualize TensorFlow models and outputs in TensorBoard.

  1. To visualize summaries and graphs, data from TensorFlow can be exported using the FileWriter command from the summary module. A default session graph can be added using the following command:
# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

The graph for logistic regression developed using the preceding code is shown in the following screenshot:

Visualization of the logistic regression graph in TensorBoard
Details about symbol descriptions on TensorBoard can be found at https://www.tensorflow.org/get_started/graph_viz.
  2. Similarly, other variable summaries can be added to TensorBoard using the appropriate summary operations, as shown in the following code:
# Adding histogram summaries for the weight and bias variables
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)

Summaries are a useful way to monitor how the model is performing. For example, in the preceding case, the cost function for test and train can be studied to understand optimization performance and convergence.

  3. Create a cross-entropy evaluation for the test data. An example script that generates the cross-entropy cost function for the test and train datasets is shown in the following code:
# Set up cross-entropy for test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32)
yt_ <- tf$constant(unlist(occupancy_test[, yFeatures]), dtype="float32", shape=c(nRowt, 1L))
# sigmoid_cross_entropy_with_logits expects raw logits, not sigmoid outputs
logitst <- tf$matmul(xt, W) + b
ypredt <- tf$nn$sigmoid(logitst)
cross_entropy_tst <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=yt_, logits=logitst, name="cross_entropy_tst"))

The preceding code is similar to the training cross-entropy calculation but uses a different dataset. The effort can be minimized by setting up a function that returns the tensor objects.
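Such a helper could be sketched as follows; the function name `get_cross_entropy` and its argument list are illustrative assumptions, not part of the recipe, and the sketch relies on `xFeatures`, `nFeatures`, and `yFeatures` already being defined as in the earlier steps:

```r
# Sketch: build the prediction and cross-entropy tensors for any dataset.
# The name and signature are illustrative assumptions.
get_cross_entropy <- function(dataset, W, b, name) {
  nRow <- nrow(dataset)
  x <- tf$constant(unlist(dataset[, xFeatures]),
                   shape=c(nRow, nFeatures), dtype=np$float32)
  y_ <- tf$constant(unlist(dataset[, yFeatures]),
                    dtype="float32", shape=c(nRow, 1L))
  logits <- tf$matmul(x, W) + b
  ce <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(
    labels=y_, logits=logits, name=name))
  list(cross_entropy=ce, ypred=tf$nn$sigmoid(logits))
}

# Usage: the same function then serves both datasets
# train_ops <- get_cross_entropy(occupancy_train, W, b, "cross_entropy")
# test_ops  <- get_cross_entropy(occupancy_test, W, b, "cross_entropy_tst")
```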

  4. Add summary variables to be collected:
# Add summary ops to collect data
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)
crossEntropySummary <- tf$summary$scalar("costFunction", cross_entropy)
crossEntropyTstSummary <- tf$summary$scalar("costFunction_test", cross_entropy_tst)

The script defines the summary events to be logged in the file.

  5. Open the writer object, log_writer. It writes the default graph to the location c:/log:
# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)
  6. Run the optimization and collect the summaries:
for (step in 1:2500) {
  sess$run(optimizer)

  # Evaluate performance on training and test data every 50 iterations
  if (step %% 50 == 0) {
    ### Performance on train
    ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
    roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))

    ### Performance on test
    ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
    roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))
    cat("Train AUC: ", auc(roc_obj), " Test AUC: ", auc(roc_objt), "\n")

    # Save summaries of the bias and weights
    log_writer$add_summary(sess$run(b_hist), global_step=step)
    log_writer$add_summary(sess$run(w_hist), global_step=step)
    log_writer$add_summary(sess$run(crossEntropySummary), global_step=step)
    log_writer$add_summary(sess$run(crossEntropyTstSummary), global_step=step)
  }
}
  7. Collect all the summaries into a single tensor using the merge_all command from the summary module:
summary = tf$summary$merge_all() 
  8. Write the summaries to the log file using the log_writer object:
log_writer = tf$summary$FileWriter('c:/log', sess$graph)
summary_str = sess$run(summary)
log_writer$add_summary(summary_str, step)
log_writer$close()
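Once the log files have been written, TensorBoard can be started from the command line by pointing it at the log directory used above (TensorBoard serves on port 6006 by default):

```shell
# Launch TensorBoard against the log directory written by FileWriter
tensorboard --logdir=c:/log
# Then open http://localhost:6006 in a browser to view the graph and summaries
```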