
Exploring common forms of parallelization

There is a certain confusion when we try to define the main forms of parallel systems. It is common to find parallel and concurrent systems referred to as if they were the same thing; nevertheless, there are slight differences between them.

Within concurrent programming, we have a scenario in which a program dispatches several workers, and these workers compete to use the CPU to run a task. The stage at which the competition takes place is controlled by the CPU scheduler, whose function is to define which worker is apt to use the resource at a specific moment. In most cases, the CPU scheduler switches between processes so quickly that we may get the impression of pseudo-parallelism. Therefore, concurrent programming is an abstraction of parallel programming.
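A minimal Python sketch of this scenario, assuming CPython and an illustrative CPU-bound counting task: several worker threads are dispatched and compete for the CPU, with the scheduler (and, in CPython, the GIL) deciding which one runs at each moment:

import threading

def count_down(name, n):
    # A CPU-bound task: each worker repeatedly needs the CPU,
    # so the scheduler keeps switching between the workers.
    while n > 0:
        n -= 1
    print(f"{name} finished")

# The worker count and workload below are illustrative assumptions.
workers = [
    threading.Thread(target=count_down, args=(f"worker-{i}", 5_000_000))
    for i in range(4)
]
for worker in workers:
    worker.start()
for worker in workers:
    worker.join()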

Note

Concurrent systems compete for the same CPU to run tasks.

The following diagram shows a concurrent program scheme:

Concurrent programming scheme.

Parallel programming can be defined as an approach in which a program creates workers to run specific tasks simultaneously in a multicore environment, without the need for them to compete for a single CPU.
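A minimal sketch, assuming Python's multiprocessing module and an illustrative squaring task; each worker is a separate process that can run on its own core, so there is no competition for a single CPU:

from multiprocessing import Pool

def square(n):
    # Runs in a separate worker process, potentially on its own core.
    return n * n

if __name__ == "__main__":
    # The pool size of 4 is an illustrative assumption; by default,
    # Pool() creates one worker per available CPU core.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))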

Note

Parallel systems run tasks simultaneously.

The following figure shows the concept of parallel systems:

Parallel programming scheme.

Distributed programming aims at sharing the processing load by exchanging messages between computing machines (nodes) that are physically separated.
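A minimal sketch of this idea, assuming Python's multiprocessing.connection module; the hostname, port, and authentication key are illustrative assumptions. One script plays the role of a node that receives a task message and replies with a result, while the other sends the task from a physically separate machine:

# node_a.py -- runs on the first node and waits for a task message.
from multiprocessing.connection import Listener

with Listener(("0.0.0.0", 6000), authkey=b"secret") as listener:  # port is an assumption
    with listener.accept() as conn:
        numbers = conn.recv()    # receive the task data as a message
        conn.send(sum(numbers))  # send the result back as a message

# node_b.py -- runs on a second, physically separate node.
from multiprocessing.connection import Client

with Client(("node-a.example.com", 6000), authkey=b"secret") as conn:  # hostname is an assumption
    conn.send([1, 2, 3, 4])
    print(conn.recv())  # prints 10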

Distributed programming is becoming more and more popular for many reasons; some of them are explored as follows:

  • Fault tolerance: As the system is decentralized, we can distribute the processing across different machines in a network, and thus perform maintenance on specific machines without affecting the functioning of the system as a whole.
  • Horizontal scalability: In general, we can increase the processing capacity of a distributed system by linking new equipment, with no need to abort applications that are being executed. We can say that this is cheaper and simpler compared to vertical scalability.
  • Cloud computing: With the reduction in hardware costs, this type of business has grown; huge machine parks act in a cooperative way and run programs in a way that is transparent to their users.

Note

Distributed systems run tasks on physically separated nodes.

The following figure shows a distributed system scheme:

Distributed programming scheme.
