
Threads, Synchronization, and Memory

In the last chapter, we saw how to write CUDA programs that leverage the processing capabilities of a GPU by executing multiple threads and blocks in parallel. In all the programs up to this point, the threads were independent of one another and did not communicate with each other. Most real-life applications, however, require communication between threads. So, in this chapter, we will look in detail at how different threads can communicate with one another, and explain the synchronization of multiple threads working on the same data. We will examine the hierarchical memory architecture of CUDA and how the different memories can be used to accelerate CUDA programs. The last part of this chapter describes two useful applications of CUDA, the dot product of vectors and matrix multiplication, which use all the concepts covered earlier.

The following topics will be covered in this chapter:

  • Thread calls
  • CUDA memory architecture
  • Global, local, and cache memory
  • Shared memory and thread synchronization
  • Atomic operations
  • Constant and texture memory
  • Dot product and a matrix multiplication example
