Firing up more worker processes to scan ever larger tables is sometimes not the solution. Reading entire tables to find just a single row is usually not a good idea.
Therefore, it makes sense to create indexes:
test=# CREATE INDEX idx_id ON t_test (id);
CREATE INDEX
test=# SELECT * FROM t_test WHERE id = 43242;
  id   | name
-------+------
 43242 | hans
(1 row)

Time: 0.259 ms
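To verify that the query really uses the index instead of scanning the whole table, it helps to look at the execution plan. The plan below is illustrative; the exact cost numbers and row estimates depend on your PostgreSQL version and configuration:

test=# EXPLAIN SELECT * FROM t_test WHERE id = 43242;
                             QUERY PLAN
---------------------------------------------------------------------
 Index Scan using idx_id on t_test  (cost=0.43..8.45 rows=1 width=9)
   Index Cond: (id = 43242)
(2 rows)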
PostgreSQL uses Lehman and Yao's high-concurrency B-tree for standard indexes (https://www.csd.uoc.gr/~hy460/pdf/p650-lehman.pdf). Along with some PostgreSQL-specific optimizations, these trees provide end users with excellent performance. Most importantly, the Lehman-Yao design allows many operations (reads and writes) to run on the very same index at the same time, which improves throughput dramatically.
However, indexes are not free:
test=# \di+
                       List of relations
 Schema |  Name  | Type  | Owner | Table  | Size  | Description
--------+--------+-------+-------+--------+-------+-------------
 public | idx_id | index | hs    | t_test | 86 MB |
(1 row)
As you can see, our index containing 4 million rows will eat up 86 MB of disk space. In addition to this, writes to the table will be slower because the index has to be kept in sync all the time.
In other words, if you insert into a table featuring 20 indexes, keep in mind that every INSERT also has to write to all of those indexes, which seriously slows down writing.
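To get a feeling for this overhead, a simple sketch is to time a bulk INSERT before and after creating the index. This assumes t_test contains only the id and name columns shown earlier; absolute timings depend entirely on your hardware:

test=# \timing
Timing is on.
test=# DROP INDEX idx_id;
DROP INDEX
test=# INSERT INTO t_test SELECT x, 'hans' FROM generate_series(4000001, 4500000) AS x;
INSERT 0 500000
test=# CREATE INDEX idx_id ON t_test (id);
CREATE INDEX
test=# INSERT INTO t_test SELECT x, 'hans' FROM generate_series(4500001, 5000000) AS x;
INSERT 0 500000

The second INSERT has to maintain the index for every new row and will typically take noticeably longer than the first one.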
With the introduction of version 11, PostgreSQL supports parallel index creation: more than one CPU core can be used to build an index, thereby speeding up the process considerably. For now, this is only possible if you want to build a normal B-tree; there is no support for other index types yet. However, this will most likely change in the future. The parameter that controls the level of parallelism is max_parallel_maintenance_workers, which tells PostgreSQL the maximum number of worker processes it can use.
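A minimal sketch of how this can be used is shown below. The index name idx_name and the worker count of 4 are just examples for illustration; PostgreSQL treats the setting purely as an upper limit:

test=# SET max_parallel_maintenance_workers TO 4;
SET
test=# CREATE INDEX idx_name ON t_test (name);
CREATE INDEX

If the table is small or not enough worker processes are available, PostgreSQL may still build the index with fewer workers, or with none at all.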