- Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA
- Bhaumik Vaidya
Threads, Synchronization, and Memory
In the last chapter, we saw how to write CUDA programs that leverage the processing capabilities of a GPU by executing multiple threads and blocks in parallel. In all the programs up to the last chapter, the threads were independent of one another, and there was no communication between them. Most real-life applications, however, need threads to communicate while a computation is in progress. So, in this chapter, we will look in detail at how communication between different threads can be done, and explain the synchronization between multiple threads working on the same data. We will examine the hierarchical memory architecture of CUDA and how the different memories can be used to accelerate CUDA programs. The last part of this chapter explains a very useful application of CUDA: the dot product of vectors and matrix multiplication, using all the concepts we have covered earlier.
The following topics will be covered in this chapter:
- Thread calls
- CUDA memory architecture
- Global, local, and cache memory
- Shared memory and thread synchronization
- Atomic operations
- Constant and texture memory
- Dot product and matrix multiplication examples
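To preview how several of these topics fit together, the following is a minimal sketch (not the chapter's own code) of a dot product kernel that combines shared memory, thread synchronization with `__syncthreads()`, and an atomic operation; the names `dotProduct`, `N`, and `THREADS_PER_BLOCK` are illustrative choices, and the chapter develops the full version step by step:

```cuda
#include <stdio.h>

#define N 1024
#define THREADS_PER_BLOCK 256

// Each block accumulates a partial dot product in shared memory,
// then thread 0 of the block adds it to the global result atomically.
__global__ void dotProduct(const float *a, const float *b, float *result)
{
    __shared__ float partial[THREADS_PER_BLOCK];
    int tid = threadIdx.x + blockIdx.x * blockDim.x;

    // Each thread writes one elementwise product into shared memory
    partial[threadIdx.x] = (tid < N) ? a[tid] * b[tid] : 0.0f;

    // Wait until every thread in the block has written its product
    __syncthreads();

    // Tree reduction within the block
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();
    }

    // One atomic add per block avoids a race on the shared result
    if (threadIdx.x == 0)
        atomicAdd(result, partial[0]);
}
```

Shared memory lets threads within a block exchange intermediate results far faster than global memory would allow, while `__syncthreads()` guarantees no thread reads a partial sum before its neighbor has written it.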