- Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA
- Bhaumik Vaidya
Summary
To summarize, this chapter introduced CUDA C programming concepts and showed how parallel computing can be done using CUDA. It was shown that CUDA programs run efficiently and in parallel on any NVIDIA GPU hardware, so CUDA is both efficient and scalable. The CUDA API functions provided on top of standard ANSI C for parallel data computation were discussed in detail. Using a simple two-variable addition example, it was shown how to call device code from host code via a kernel call, how to configure kernel launch parameters, and how to pass parameters to the kernel. It was also shown that CUDA does not guarantee the order in which blocks or threads will run, nor which block is assigned to which multiprocessor in the hardware. Moreover, vector operations, which take advantage of the parallel-processing capabilities of the GPU and CUDA, were discussed; performing vector operations on the GPU can improve throughput drastically compared to the CPU. In the last section, common communication patterns used in parallel programming were discussed in detail. We have not yet discussed the memory architecture, how threads can communicate with one another in CUDA, or what can be done when one thread needs data produced by another thread. So, in the next chapter, we will discuss memory architecture and thread synchronization in detail.
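To make the kernel-call and vector-addition recap above concrete, here is a minimal sketch of a CUDA vector-addition program showing a kernel definition, the launch configuration, and parameter passing; the kernel name `vectorAdd`, the array size, and the block size are illustrative assumptions rather than the chapter's exact listing.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vectorAdd(const int *a, const int *b, int *c, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n)
        c[tid] = a[tid] + b[tid];
}

int main(void)
{
    const int N = 1024;              // illustrative array size
    size_t bytes = N * sizeof(int);

    int h_a[N], h_b[N], h_c[N];
    for (int i = 0; i < N; i++) { h_a[i] = i; h_b[i] = 2 * i; }

    // Allocate device memory and copy the inputs to the GPU.
    int *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Kernel configuration: enough 256-thread blocks to cover N elements.
    int threadsPerBlock = 256;
    int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_a, d_b, d_c, N);

    // Copy the result back and print one element as a sanity check.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %d\n", h_c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

Note that each block computes a disjoint slice of the output, so the program's correctness does not depend on the order in which blocks are scheduled, which is exactly why CUDA is free to leave that order unspecified.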