Introduction to Docker Containers – Cgroups

Having covered in the previous article how namespaces provide isolation for container technology, let's now look at the "limits" side of containers.

You might wonder: haven't we already created a container with Linux namespaces? Why do we still need to place limits on it?

Because a container process is not physically isolated from other Linux processes; at runtime it shares the same CPU and memory with every other process on the host. Without limits, resource contention is inevitable.

Inside the container, PID 1 can only see what is inside the container, thanks to the namespace "sleight of hand". On the host, however, that same process is, say, process 100, competing on equal terms with all other processes. This means that although process 100 appears to be isolated, the resources it can use (CPU, memory, and so on) can be taken at any time by other processes (or other containers) on the host. Conversely, process 100 itself might eat up all the resources. Neither situation is acceptable behavior for a "sandbox".

Linux Cgroups are the kernel feature used to set resource limits on processes.

Cgroups is short for Linux Control Groups. Its main job is to cap the resources a group of processes can use, including CPU, memory, disk I/O, network bandwidth, and so on.

In Linux, the interface Cgroups exposes to users is a filesystem: it is organized as files and directories under /sys/fs/cgroup. On a CentOS machine you can list these mounts with the mount command:
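In fact, every running process already belongs to some set of cgroups. A quick way to see this for yourself (a minimal sketch, assuming any Linux host) is to read /proc/self/cgroup:

```shell
# Show which cgroup(s) the current shell belongs to.
# On cgroup v1 this prints one line per controller (cpu, memory, ...);
# on a cgroup v2 host it is a single line starting with "0::".
cat /proc/self/cgroup
```

Each line has the form `hierarchy-id:controller-list:path`, where the path is relative to the /sys/fs/cgroup mount point discussed next.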

$ mount -t cgroup
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)

If these are not mounted, you can install the cgroup userspace tools with yum install libcgroup.

As you can see, under /sys/fs/cgroup there are many subdirectories such as cpuset, cpu, and memory, also called subsystems. These are the kinds of resources that Cgroups can limit on this machine. Inside each subsystem's directory you can see exactly how that kind of resource can be limited. For the CPU subsystem, for example, we can see the following configuration files:

$ ls /sys/fs/cgroup/cpu
aegis   cgroup.clone_children  cgroup.procs          cpuacct.stat   cpuacct.usage_percpu  cpu.cfs_quota_us  cpu.rt_runtime_us  cpu.stat  notify_on_release  system.slice  user.slice
assist  cgroup.event_control   cgroup.sane_behavior  cpuacct.usage  cpu.cfs_period_us     cpu.rt_period_us  cpu.shares         docker    release_agent      tasks

If you are familiar with Linux CPU management, you will notice the keywords cfs_period and cfs_quota in this output. These two parameters are used together: within each window of length cfs_period, the processes in the group can be allocated at most cfs_quota of CPU time in total. So how do you use these configuration files? You create a directory under the corresponding subsystem. For example, let's go into /sys/fs/cgroup/cpu:
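The relationship between the two parameters reduces to simple arithmetic: the fraction of one CPU the group may use is cfs_quota divided by cfs_period. A small sketch (the 20000/100000 values below are simply the ones this article uses later):

```shell
# CPU bandwidth granted = cfs_quota_us / cfs_period_us
quota=20000    # 20 ms of CPU time allowed...
period=100000  # ...per 100 ms scheduling window
echo "CPU share: $(( quota * 100 / period ))%"   # prints "CPU share: 20%"
```

A quota of -1 (the default, as we will see below) means "no limit"; a quota larger than the period would allow more than one full CPU on multi-core machines.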

root@centos:/sys/fs/cgroup/cpu$ mkdir container
root@centos:/sys/fs/cgroup/cpu$ ls container/
cgroup.clone_children cpu.cfs_period_us cpu.rt_period_us  cpu.shares notify_on_release
cgroup.procs      cpu.cfs_quota_us  cpu.rt_runtime_us cpu.stat  tasks

This directory is called a "control group". You will find that the operating system automatically generates the subsystem's resource-limit files inside the newly created container directory. Now let's run this one-liner in the background:

$ while : ; do : ; done &
[1] 226

We can then use the top command to confirm that the CPU is saturated:

$ top
%Cpu0 :100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st

The output shows CPU utilization at 100% (%Cpu0 :100.0 us). Meanwhile, looking at the files in the container directory, we can see that the CPU quota of the container control group has no limit yet (i.e. -1), and the CPU period is the default 100 ms (100000 us):

$ cat /sys/fs/cgroup/cpu/container/cpu.cfs_quota_us 
-1
$ cat /sys/fs/cgroup/cpu/container/cpu.cfs_period_us 
100000

Next, we can set a limit by modifying the contents of these files. For example, write 20 ms (20000 us) into the container group's cfs_quota file:

$ echo 20000 > /sys/fs/cgroup/cpu/container/cpu.cfs_quota_us

Given the earlier explanation, you can see what this operation means: within every 100 ms window, the processes limited by this control group may use only 20 ms of CPU time, i.e. only 20% of the CPU bandwidth. Next, write the PID of the process to be limited into the container group's tasks file, and the setting takes effect for that process:

$ echo 226 > /sys/fs/cgroup/cpu/container/tasks 

We can check again with top:

$ top
%Cpu0 : 20.3 us, 0.0 sy, 0.0 ni, 79.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st

As you can see, the machine's CPU utilization immediately dropped to about 20% (%Cpu0 : 20.3 us).

Besides the CPU subsystem, each Cgroups subsystem has its own resource-limiting capability, for example:

  • blkio, which sets I/O limits for block devices, typically disks;
  • cpuset, which pins processes to specific CPU cores and the corresponding memory (NUMA) nodes;
  • memory, which caps a process's memory usage.
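As an illustration of the memory subsystem, the pattern is exactly the same as before: create a directory, write a limit, add a PID. A hedged sketch (assumes a cgroup v1 host and root privileges; the "demo" group name is made up here, and the block skips itself gracefully where the hierarchy is not writable):

```shell
# Cap a process group at 100 MB using the memory subsystem (cgroup v1).
MEMCG=/sys/fs/cgroup/memory/demo      # "demo" is a hypothetical group name
if [ -w /sys/fs/cgroup/memory ]; then
    mkdir -p "$MEMCG"
    # Processes in the group that exceed this limit face the OOM killer.
    echo $((100 * 1024 * 1024)) > "$MEMCG/memory.limit_in_bytes"
    sleep 5 &                          # stand-in for the process to be limited
    echo $! > "$MEMCG/tasks"           # the limit now applies to that PID
else
    echo "memory cgroup v1 not writable; skipping"
fi
```

Note that on distributions that have moved to cgroup v2, the controller layout and file names differ (for example, memory.max instead of memory.limit_in_bytes).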

The design of Linux Cgroups is fairly easy to use: crudely put, it is just a subsystem directory plus a set of resource-limit files. For Docker and other Linux container projects, all they have to do is create a control group (i.e. a new directory) for each container under each subsystem, and then, after starting the container process, write its PID into the corresponding control group's tasks file.

As for what values go into the resource files under these control groups, that is specified by the flags the user passes to docker run, for example:

$ docker run -it --cpu-period=100000 --cpu-quota=20000 ubuntu /bin/bash

After starting this container, we can verify the settings by reading the resource-limit files of its control group, found under the "docker" directory of the CPU subsystem in the Cgroups filesystem:

$ cat /sys/fs/cgroup/cpu/docker/5d5c9f67d/cpu.cfs_period_us 
100000
$ cat /sys/fs/cgroup/cpu/docker/5d5c9f67d/cpu.cfs_quota_us 
20000

So as you can see, Docker's container technology is not a radical invention: it unifies existing Linux kernel features and integrates them into what we now call containers.
