- Deep Learning with R for Beginners
- Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado
Do I need a GPU (and what is it, anyway)?
Probably the two biggest reasons for the exponential growth in deep learning are:
- The ability to accumulate, store, and process large datasets of all types
- The ability to use GPUs to train deep learning models
So what exactly are GPUs, and why are they so important to deep learning? The best place to start is by looking at the CPU and why it is not optimal for training deep learning models. The CPU in a modern PC is one of the pinnacles of human design and engineering. Even the chip in a mobile phone is now more powerful than the entire computer system used on the first space shuttles. However, because CPUs are designed to be good at all tasks, they may not be the best option for niche tasks. One such task is high-end graphics.
If we take a step back to the mid-1990s, most games were 2D, for example, platform games where the character in the game jumps between platforms and/or avoids obstacles. Today, almost all computer games use 3D space. Modern consoles and PCs have co-processors that take on the load of projecting 3D space onto a 2D screen. These co-processors are known as GPUs.
GPUs are actually far simpler than CPUs. They are built to do just one task: massively parallel matrix operations. CPUs and GPUs both have cores, where the actual computation takes place. A PC with an Intel i7 CPU has four physical cores and eight virtual cores through Hyper-Threading. The NVIDIA TITAN Xp GPU card has 3,840 CUDA cores. These cores are not directly comparable: a core in a CPU is much more powerful than a core in a GPU. But if the workload consists of a large number of matrix operations that can be done independently, a chip with lots of simple cores is much quicker.
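As a rough illustration of this kind of workload, here is a minimal base-R sketch (the 2,000 x 2,000 matrix size is an arbitrary choice for this example) that reports how many cores the CPU exposes and times a single large matrix multiplication:

```r
# Matrix multiplication is the dominant operation in deep learning.
# Timing one large multiply on the CPU gives a feel for the workload.
library(parallel)

detectCores()            # number of (virtual) cores the CPU reports

n <- 2000                # arbitrary size for this illustration
a <- matrix(rnorm(n * n), nrow = n)
b <- matrix(rnorm(n * n), nrow = n)

system.time(a %*% b)     # elapsed time for one 2,000 x 2,000 multiply
```

Even a single multiplication of this size takes noticeable time on a CPU; a training run performs enormous numbers of such independent operations, which is exactly what thousands of simple GPU cores can execute in parallel.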
Before deep learning was even a concept, researchers in neural networks realized that high-end graphics and training neural networks involve similar workloads: large amounts of matrix multiplication that can be done in parallel. They realized that training the models on the GPU rather than the CPU would allow them to create much more complicated models.
Today, all deep learning frameworks run on GPUs as well as CPUs. In fact, if you want to train models from scratch and/or have a large amount of data, you will almost certainly need a GPU. The GPU must be an NVIDIA GPU, and you also need to install the CUDA Toolkit, NVIDIA drivers, and cuDNN. These allow you to interface with the GPU and repurpose it from a graphics card into a maths co-processor. Installing these is not always easy: you have to ensure that the versions of CUDA, cuDNN, and the deep learning libraries you use are compatible. Some people advise using Unix rather than Windows, but support on Windows has improved greatly; the code in this book was developed on a Windows workstation. Forget about macOS, because it does not support NVIDIA cards.
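As a quick sanity check once everything is installed, the R interface to TensorFlow can report whether a CUDA-capable GPU is visible. This is a hedged sketch assuming the tensorflow R package with TensorFlow 2.x; other frameworks have their own equivalents:

```r
# Assumes the tensorflow R package and TensorFlow 2.x are installed.
library(tensorflow)

# Lists the GPUs TensorFlow can see; an empty list means models
# will train on the CPU instead.
tf$config$list_physical_devices("GPU")
```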
That was the bad news. The good news is that you can learn everything about deep learning even if you don't have a suitable GPU. The examples in the early chapters of this book will run perfectly fine on a modern PC. When we need to scale up, the book will explain how to use cloud resources, such as AWS and Google Cloud, to train large deep learning models.