
Exploring common forms of parallelization

There is a certain confusion when we try to define the main forms of parallel systems. It is common to find parallel and concurrent systems referred to as if they meant the same thing; nevertheless, there are slight differences between them.

Within concurrent programming, we have a scenario in which a program dispatches several workers, and these workers compete to use the CPU to run a task. This competition is controlled by the CPU scheduler, whose function is to define which worker is eligible to use the resource at a specific moment. In most cases, the CPU scheduler switches between processes so fast that we might get the impression of pseudo-parallelism. Therefore, concurrent programming is an abstraction from parallel programming.

Note

Concurrent systems compete for the same CPU to run tasks.

The following diagram shows a concurrent program scheme:


Concurrent programming scheme.
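To make this concrete, here is a minimal sketch in Python (the worker function, loop size, and thread count are illustrative assumptions, not from the text): several workers are dispatched as threads, and in CPython they compete for the CPU while the scheduler interleaves them, producing the pseudo-parallelism described above.

```python
import threading

def worker(worker_id: int) -> None:
    # A CPU-bound task; threads take turns on the processor
    # rather than truly running at the same time.
    total = 0
    for i in range(1_000_000):
        total += i
    print(f"worker {worker_id} finished with total {total}")

# Dispatch four workers; the OS scheduler decides which one
# runs at any given moment.
threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The order in which the workers finish can vary from run to run, reflecting the scheduler's switching decisions.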

Parallel programming can be defined as an approach in which a program creates workers that run specific tasks simultaneously in a multicore environment, without the workers needing to compete with one another for access to a single CPU.

Note

Parallel systems run tasks simultaneously.

The following figure shows the concept of parallel systems:


Parallel programming scheme.
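As a contrast to the threaded sketch above, here is a minimal parallel version (again with illustrative names and values) using Python's multiprocessing module: each worker is a separate process, so on a multicore machine the tasks can genuinely run at the same time instead of competing for a single CPU.

```python
import multiprocessing

def compute(n: int) -> int:
    # A CPU-bound task executed inside a worker process.
    return sum(range(n))

if __name__ == "__main__":
    # Four worker processes; on a multicore machine each one
    # can occupy its own core simultaneously.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(compute, [10**6] * 4)
    print(results)
```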

Distributed programming aims at sharing the processing workload by exchanging data through messages between computing machines (nodes) that are physically separated.

Distributed programming is becoming more and more popular for several reasons, which are explored as follows:

  • Fault tolerance: As the system is decentralized, we can distribute the processing to different machines in a network, and thus perform maintenance on specific machines without affecting the functioning of the system as a whole.
  • Horizontal scalability: In general, we can increase the processing capacity of distributed systems. We can add new equipment without needing to abort applications that are being executed. We can say that it is cheaper and simpler compared to vertical scalability.
  • Cloud computing: With the reduction in hardware costs, we see the growth of this type of business, where huge machine parks act in a cooperative way and run programs transparently for their users.

Note

Distributed systems run tasks on physically separated nodes.

The following figure shows a distributed system scheme:


Distributed programming scheme.
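The following minimal sketch illustrates message passing between nodes using Python's multiprocessing.connection module (the address, port, and authkey are illustrative assumptions). One script plays the role of a server node and the other a client node; in a real distributed system, they would run on physically separated machines.

```python
# node_server.py -- receives a task message and sends the result back.
from multiprocessing.connection import Listener

# "localhost" and port 6000 are placeholder values; in practice this
# would be the server node's network address.
address = ("localhost", 6000)
with Listener(address, authkey=b"secret") as listener:
    with listener.accept() as conn:
        task = conn.recv()      # receive a task message from the client node
        conn.send(sum(task))    # send the computed result back as a message
```

```python
# node_client.py -- dispatches work to the server node via a message.
from multiprocessing.connection import Client

with Client(("localhost", 6000), authkey=b"secret") as conn:
    conn.send([1, 2, 3, 4])     # send the task to the remote node
    print(conn.recv())          # prints 10, computed on the other node
```

Running node_server.py first and then node_client.py prints the result on the client, even though the computation happened on the other node; the same pattern extends to machines separated by a network.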
