- Containerization with LXC
- Konstantin Ivanov
The OS kernel and its early limitations
The current state of Linux containers is a direct result of the problems that early OS designers were trying to solve – managing memory, I/O, and process scheduling in the most efficient way.
In the past, only a single process could be scheduled for work at a time, wasting precious CPU cycles whenever it blocked on an I/O operation. The solution to this problem was to develop better CPU schedulers, so that more work could be allocated in a fair way for maximum CPU utilization. Even though modern schedulers, such as the Completely Fair Scheduler (CFS) in Linux, do a great job of allocating fair amounts of time to each process, there is still a strong case for being able to give higher or lower priority to a process and its subprocesses. Traditionally, this can be accomplished with the nice() system call or with real-time scheduling policies; however, there are limits to the level of granularity and control that can be achieved.
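To illustrate how coarse this traditional mechanism is, here is a minimal C sketch (the nice value of 10 is chosen purely for illustration) that lowers a process's priority with setpriority(). Note that nice values span only -20 to 19 and apply to a single process, not to a whole process tree.

```c
/* A minimal sketch of the traditional priority mechanism. The nice value
 * used here (10) is illustrative, not taken from the original text. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

int main(void)
{
    /* Nice values range only from -20 (highest priority) to 19 (lowest),
     * and each call affects one process -- there is no built-in way to
     * cap an entire tree of processes as a single unit. */
    if (setpriority(PRIO_PROCESS, 0, 10) == -1) {
        perror("setpriority");
        return 1;
    }

    printf("current nice value: %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}
```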
Similarly, before the advent of virtual memory, multiple processes would allocate memory from a shared pool of physical memory. Virtual memory provided a form of memory isolation per process, in the sense that each process got its own address space, and it extended the available memory by means of swap, but there was still no good way of limiting how much memory a process and its children could use.
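The sketch below illustrates the kind of pre-cgroups mechanism this paragraph alludes to: setrlimit() with RLIMIT_AS caps a single process's address space (the 512 MiB figure is an arbitrary example). Children inherit the limit but each is counted independently, so nothing restricts the memory consumed by the process tree as a whole.

```c
/* A minimal sketch, assuming an arbitrary 512 MiB cap chosen for
 * illustration: RLIMIT_AS limits the virtual address space of the
 * calling process. Children inherit the limit, but each child is
 * accounted separately, so there is no aggregate cap on the tree. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit lim = {
        .rlim_cur = 512UL * 1024 * 1024,  /* soft limit: 512 MiB */
        .rlim_max = 512UL * 1024 * 1024,  /* hard limit: 512 MiB */
    };

    if (setrlimit(RLIMIT_AS, &lim) == -1) {
        perror("setrlimit");
        return 1;
    }

    /* Allocations beyond this limit (e.g. a large malloc) will now
     * fail with ENOMEM in this process. */
    printf("address-space limit set to %lu bytes\n",
           (unsigned long)lim.rlim_cur);
    return 0;
}
```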
To further complicate matters, running different workloads on the same physical server usually had a negative impact on all running services: a memory leak in one application could starve the others, and a kernel panic could bring the entire operating system down. For example, a web server that is mostly memory bound and a database service that is I/O heavy became problematic when run together. In an effort to avoid such scenarios, system administrators would separate the various applications across a pool of servers, leaving some machines underutilized, especially at certain times of the day when there was not much work to be done. This is similar to the earlier problem: just as a single process blocked on an I/O operation wastes CPU cycles, a dedicated but idle server wastes CPU and memory resources.
The solution to these problems is the use of hypervisor-based virtualization, containers, or a combination of both.