- R Deep Learning Essentials
- Mark Hodnett, Joshua F. Wiley
Do I need a GPU (and what is it, anyway)?
Probably the two biggest reasons for the exponential growth in deep learning are:
- The ability to accumulate, store, and process large datasets of all types
- The ability to use GPUs to train deep learning models
So what exactly are GPUs, and why are they so important to deep learning? The best place to start is with the CPU and why it is not optimal for training deep learning models. The CPU in a modern PC is one of the pinnacles of human design and engineering. Even the chip in a mobile phone is more powerful today than the entire computer system of the first Space Shuttle. However, because CPUs are designed to be good at all tasks, they may not be the best option for specialized tasks. One such task is high-end graphics.
If we take a step back to the mid-1990s, most games were 2D, for example, platform games where the character jumps between platforms and/or avoids obstacles. Today, almost all computer games use 3D space. Modern consoles and PCs have co-processors that take over the work of rendering 3D scenes onto a 2D screen. These co-processors are known as GPUs.
GPUs are actually far simpler than CPUs. They are built to do just one task: massively parallel matrix operations. CPUs and GPUs both have cores, where the actual computation takes place. A PC with an Intel i7 CPU has four physical cores, and eight virtual cores through Hyper-Threading. The NVIDIA TITAN Xp GPU card has 3,840 CUDA cores. These cores are not directly comparable; a core in a CPU is much more powerful than a core in a GPU. But if the workload consists of a large number of matrix operations that can be performed independently, a chip with many simple cores is much quicker.
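To get a feel for the workload in question, here is a minimal sketch in R (the language of this book) that times a single large matrix multiplication on the CPU; the matrix size is an arbitrary choice for illustration. Training a deep learning model repeats operations like this millions of times, which is why offloading them to thousands of simple GPU cores pays off.

```r
# A minimal sketch: time one large matrix product on the CPU.
# The size (2,000 x 2,000) is arbitrary and chosen for illustration.
set.seed(42)
n <- 2000
a <- matrix(rnorm(n * n), nrow = n)
b <- matrix(rnorm(n * n), nrow = n)

# system.time() reports how long the multiplication takes; a deep
# learning training run performs many such products per iteration.
system.time(a %*% b)
```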
Before deep learning was even a concept, researchers in neural networks realized that rendering high-end graphics and training neural networks involve the same kind of workload: large numbers of matrix multiplications that can be performed in parallel. They realized that training the models on the GPU rather than the CPU would allow them to build much more complicated models.
Today, all deep learning frameworks run on GPUs as well as CPUs. In fact, if you want to train models from scratch and/or have a large amount of data, you almost certainly need a GPU. The GPU must be an NVIDIA GPU, and you also need to install the CUDA Toolkit, NVIDIA drivers, and cuDNN. These allow you to interface with the GPU and repurpose it from a graphics card into a math co-processor. Installing these is not always easy: you have to ensure that the versions of CUDA, cuDNN, and the deep learning libraries you use are compatible. Some people advise using Unix rather than Windows, but support on Windows has improved greatly. The code in this book was developed on a Windows workstation. Forget about macOS, because it does not support NVIDIA cards.
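Once the drivers and toolkit are installed, you can check from R whether the GPU is actually visible. The snippet below is a minimal sketch, assuming the tensorflow R package with a TensorFlow 2.x backend; other frameworks, such as MXNet, have their own equivalents.

```r
# A minimal sketch, assuming the 'tensorflow' R package and a
# TensorFlow 2.x backend are installed and configured.
library(tensorflow)

# Lists the GPUs TensorFlow can see; an empty list means the
# CUDA/cuDNN setup is not working and training will use the CPU.
tf$config$list_physical_devices("GPU")
```

If the list comes back empty, the usual culprit is a version mismatch between the CUDA Toolkit, cuDNN, and the framework build.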
That was the bad news. The good news is that you can learn everything about deep learning even if you don't have a suitable GPU. The examples in the early chapters of this book will run perfectly fine on a modern PC. When we need to scale up, the book will explain how to use cloud resources, such as AWS and Google Cloud, to train large deep learning models.