
GPU memory handling

By default, a TensorFlow session grabs all of the available memory on every GPU at startup, even if the graph's operations and variables are placed on only one GPU of a multi-GPU system. If another session starts executing at the same time, it will receive an out-of-memory error. This can be solved in several ways:

  • For multi-GPU systems, set the environment variable CUDA_VISIBLE_DEVICES to a comma-separated list of device indices:
os.environ['CUDA_VISIBLE_DEVICES']='0'

Code executed after this setting will only be able to grab the memory of the GPUs that remain visible.
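As a minimal sketch, the variable must be set before TensorFlow (or any CUDA library) is imported, because CUDA reads it once at initialization time:

```python
import os

# Expose only GPU 0 to this process; all other GPUs become invisible to it.
# Set this BEFORE importing TensorFlow, since CUDA reads the variable once
# when it initializes.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# A comma-separated list exposes several devices; an empty string hides all
# GPUs and forces CPU execution:
# os.environ['CUDA_VISIBLE_DEVICES'] = '0,2'
# os.environ['CUDA_VISIBLE_DEVICES'] = ''

# import tensorflow as tf  # import only after the variable is set
```

Note that the visible devices are renumbered from zero inside the process, so with `CUDA_VISIBLE_DEVICES='2'` the physical GPU 2 appears as `/gpu:0`.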

  • To let the session grab only part of the GPU memory, use the config option per_process_gpu_memory_fraction to allocate a fraction (between 0 and 1) of the memory:
config.gpu_options.per_process_gpu_memory_fraction = 0.5

This will allocate 50% of the memory on each of the visible GPU devices.
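In context, the option is set on a ConfigProto object that is then passed to the session. A minimal sketch, assuming the TensorFlow 1.x API used throughout this section:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, as in this section

# Cap this process at 50% of the memory on each visible GPU.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5

# Every session created with this config respects the cap, so two such
# processes can share one GPU without either exhausting its memory.
with tf.Session(config=config) as sess:
    pass  # build and run the graph here
```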

  • By combining both of the preceding strategies, you can make only a subset of the GPUs visible to the process and cap the fraction of memory it uses on each of them.
  • Alternatively, let the TensorFlow process grab only the minimum required memory at the start and grow the allocation as the process needs it, by setting the following config option:
config.gpu_options.allow_growth = True

This option only allows the allocated memory to grow; once grabbed, memory is never released back to the device.
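The allow_growth option is set on the same ConfigProto object as before. A minimal sketch, again assuming the TensorFlow 1.x API:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, as in this section

config = tf.ConfigProto()
# Start with a small allocation and extend it on demand as the graph
# needs more memory; the allocation grows but is never shrunk.
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    pass  # build and run the graph here
```

This is useful when several processes share a GPU and their peak memory needs are not known in advance, at the cost of possible memory fragmentation.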

To find out more about learning techniques for distributing computation across multiple compute devices, refer to our book,  Mastering TensorFlow.