
Summary

To summarize, this chapter introduced programming concepts in CUDA C and showed how parallel computing can be done using CUDA. CUDA programs run efficiently and in parallel on any NVIDIA GPU, so CUDA is both efficient and scalable. The CUDA API functions, added on top of existing ANSI C functions for parallel data computation, were discussed in detail. Using a simple two-variable addition example, we covered how to call device code from host code via a kernel call, how to configure kernel launch parameters, and how to pass parameters to a kernel. It was also shown that CUDA does not guarantee the order in which blocks or threads will run, nor which block is assigned to which multiprocessor in hardware. Moreover, vector operations, which take advantage of the parallel-processing capabilities of the GPU and CUDA, were discussed; performing vector operations on the GPU can improve throughput drastically compared to the CPU. In the last section, the common communication patterns followed in parallel programming were discussed in detail. We have not yet discussed memory architecture, how threads can communicate with one another in CUDA, or what can be done when one thread needs data from another thread. So, in the next chapter, we will discuss memory architecture and thread synchronization in detail.
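As a reminder of the pattern summarized above, here is a minimal sketch of the kind of two-variable addition program the chapter walked through. The function name `gpuAdd` and the specific values are illustrative, not necessarily the chapter's exact listing; the structure (device memory allocation, kernel launch with an execution configuration, copy back to host) follows the standard CUDA runtime API.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel: device code that adds two integers and stores the result
// in device memory. The __global__ qualifier marks it as callable
// from the host and executed on the GPU.
__global__ void gpuAdd(int a, int b, int *d_c) {
    *d_c = a + b;
}

int main(void) {
    int h_c;   // result on the host
    int *d_c;  // pointer to the result in device memory

    cudaMalloc((void **)&d_c, sizeof(int));

    // <<<1, 1>>> is the kernel launch configuration:
    // 1 block containing 1 thread. Parameters are passed
    // to the kernel just like ordinary function arguments.
    gpuAdd<<<1, 1>>>(1, 4, d_c);

    // Copy the result from device memory back to host memory.
    cudaMemcpy(&h_c, d_c, sizeof(int), cudaMemcpyDeviceToHost);
    printf("1 + 4 = %d\n", h_c);

    cudaFree(d_c);
    return 0;
}
```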
