
TF Slim

TF Slim is a lightweight library built on top of TensorFlow core for defining and training models. TF Slim can be used in conjunction with other low-level and high-level TensorFlow libraries such as TF Learn. TF Slim comes as part of the TensorFlow installation in the package tf.contrib.slim. Run the following command to check that your TF Slim installation is working:

python3 -c 'import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once'

TF Slim provides several modules that can be picked and applied independently and mixed with other TensorFlow packages. For example, at the time of writing this book, TF Slim had the following major modules: arg_scope, layers, losses, learning, evaluation, metrics, regularizers, variables, nets, and queues.

The simple workflow in TF Slim is as follows (a minimal sketch of these steps appears after the list):

  1. Create the model using slim layers.
  2. Provide the input to the layers to instantiate the model.
  3. Use the logits and labels to define the loss.
  4. Get the total loss using the convenience function get_total_loss().
  5. Create an optimizer.
  6. Create a training operation using the convenience function slim.learning.create_train_op(), the total loss, and the optimizer.
  7. Run the training using the convenience function slim.learning.train() and the training operation defined in the previous step.
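The following is a minimal sketch of these seven steps for MNIST, assuming TensorFlow 1.x with tf.contrib.slim available. It is not the notebook's exact code; the layer sizes, batch size, learning rate, download path ./mnist, and log directory ./slim_logs are illustrative choices:

# A minimal TF Slim sketch of the seven steps above (assumptions noted in the text)
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST as numpy arrays and wrap them as tensors
mnist = input_data.read_data_sets('./mnist', one_hot=True)
x_train = tf.constant(mnist.train.images, dtype=tf.float32)
y_train = tf.constant(mnist.train.labels, dtype=tf.float32)

# Build an input queue so that slim.learning.train() can pull batches itself
x_slice, y_slice = tf.train.slice_input_producer([x_train, y_train], shuffle=True)
x_batch, y_batch = tf.train.batch([x_slice, y_slice], batch_size=100)

# Steps 1-2: create the model from slim layers and instantiate it on the input
net = slim.fully_connected(x_batch, 32, scope='fc1')
logits = slim.fully_connected(net, 10, activation_fn=None, scope='logits')

# Steps 3-4: define the loss (added to the losses collection), then get the total loss
tf.losses.softmax_cross_entropy(onehot_labels=y_batch, logits=logits)
total_loss = tf.losses.get_total_loss()

# Steps 5-6: create an optimizer and the training operation
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = slim.learning.create_train_op(total_loss, optimizer)

# Step 7: run the training; checkpoints and logs go to the log directory
final_loss = slim.learning.train(train_op, logdir='./slim_logs',
                                 number_of_steps=1000,
                                 log_every_n_steps=100)
print('final loss={}'.format(final_loss))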

The complete code for the MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TF Slim MNIST example is as follows:

INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path ./slim_logs/model.ckpt
INFO:tensorflow:global_step/sec: 0
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global step 100: loss = 2.2669 (0.010 sec/step)
INFO:tensorflow:global step 200: loss = 2.2025 (0.010 sec/step)
INFO:tensorflow:global step 300: loss = 2.1257 (0.010 sec/step)
INFO:tensorflow:global step 400: loss = 2.0419 (0.009 sec/step)
INFO:tensorflow:global step 500: loss = 1.9532 (0.009 sec/step)
INFO:tensorflow:global step 600: loss = 1.8733 (0.010 sec/step)
INFO:tensorflow:global step 700: loss = 1.8002 (0.010 sec/step)
INFO:tensorflow:global step 800: loss = 1.7273 (0.010 sec/step)
INFO:tensorflow:global step 900: loss = 1.6688 (0.010 sec/step)
INFO:tensorflow:global step 1000: loss = 1.6132 (0.010 sec/step)
INFO:tensorflow:Stopping Training.
INFO:tensorflow:Finished training! Saving model to disk.
final loss=1.6131552457809448

As we can see from the output, the convenience function slim.learning.train() saves the training output in checkpoint files in the specified log directory. If you restart the training, it first checks whether a checkpoint exists and, by default, resumes the training from that checkpoint.
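As a quick illustration of this behaviour (reusing the graph and train_op from the sketch above; the step count is just an example), calling slim.learning.train() again with the same logdir continues from the saved checkpoint rather than starting from step zero:

# Rerunning with the same logdir restores ./slim_logs/model.ckpt and resumes
# at the saved global step (1000 here), so this trains 1000 additional steps.
final_loss = slim.learning.train(train_op, logdir='./slim_logs',
                                 number_of_steps=2000,
                                 log_every_n_steps=100)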

The documentation page for TF Slim was empty at the time of writing, at the following link: https://www.tensorflow.org/api_docs/python/tf/contrib/slim. However, some documentation can be found in the source code at the following link: https://github.com/tensorflow/tensorflow/tree/r1.4/tensorflow/contrib/slim.

We shall use TF Slim to learn how to use pre-trained models such as VGG16 and Inception V3 in later chapters.
