March 27, 2019
An Introduction to Linux Containers
Start of the Linux Container Series
This is the introductory post of the Linux containers blog post series. The following list shows the topics of all scheduled posts and will be updated with the corresponding links as new posts are released.
With containerization steadily spreading now and in the future, it becomes increasingly important to understand its internals and the potential security threats resulting from kernel vulnerabilities, container misconfigurations and improper use. This also includes optimizing how containers and their environments are deployed and distributed in order to increase productivity and efficiency, which directly affects cost. Containerization can be implemented in many ways - this blog post series focuses on Linux namespaces and control groups, the features currently used by LXC and Docker.
A container is a set of processes that is isolated from the host environment: its processes, file hierarchy and network stack. Containers are often created from images - minimal root filesystems that include the binaries, dependencies and configuration files required for the container to run. There's no additional abstraction layer between the kernel and the application, and no devices are virtualized. Instead, the kernel of the host system is shared with the isolated processes, and isolation is implemented using primitives available in the Linux kernel.
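These kernel primitives are directly observable: on Linux, every process's namespace membership is exposed as symbolic links under `/proc/[pid]/ns`, and two processes share a namespace exactly when the linked IDs match. The following sketch (assuming a Linux host with procfs mounted) reads those links for the current process:

```python
import os

def process_namespaces(pid="self"):
    """Return the namespace IDs of a process as reported by the kernel.

    Each entry in /proc/[pid]/ns is a symlink like 'pid:[4026531836]'.
    A process inside a container would show different IDs than the host.
    """
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for ns_name, ns_id in process_namespaces().items():
        print(ns_name, "->", ns_id)
```

Comparing this output for a shell on the host and for a process inside a container makes the isolation visible: the container's `pid`, `mnt` and `net` entries point at different namespace IDs.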
These aspects make containers efficient:
Storage: Container images are often built from minimal base images of Linux distributions. For example, an Ubuntu base image is 188 megabytes in size - only a small fraction of the size of an Ubuntu virtual machine. There are even smaller base images, such as Alpine Linux at five megabytes and BusyBox at only two megabytes. On top of that, base images can be reused across multiple images: if several images share the same base image, it only has to be stored once, provided the container engine supports layered images.
Performance: Because the kernel is shared, running a process in a container adds minimal overhead compared to running it without any containerization. Containers can be created and removed in seconds, making them a handy tool for agile application deployment.
Complexity: Every primitive used to isolate processes is already included in the operating system kernel. Without the need for an additional hypervisor layer, the operating system is aware of all parts of the containerization process and can handle execution natively. For example, memory management for a container can be as simple as managing the memory of a single native process.
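The storage savings from layer reuse described above come down to simple arithmetic: in a layered store, each layer is kept once, keyed by its digest, no matter how many images reference it. A minimal sketch (the image names, layer digests and sizes are hypothetical; only the 188 MB Ubuntu base figure is taken from the text):

```python
def image_store_size(images):
    """Total storage for a layered image store.

    images: mapping of image name -> list of (layer_digest, size_in_mb).
    Each distinct layer digest is stored exactly once.
    """
    unique_layers = {}
    for layers in images.values():
        for digest, size in layers:
            unique_layers[digest] = size
    return sum(unique_layers.values())

# Two hypothetical images sharing the same 188 MB Ubuntu base layer:
images = {
    "web-app": [("sha256:ubuntu-base", 188), ("sha256:web-layer", 40)],
    "worker":  [("sha256:ubuntu-base", 188), ("sha256:worker-layer", 25)],
}
# A naive copy per image would need 188+40+188+25 = 441 MB;
# the layered store only needs 188+40+25 = 253 MB.
print(image_store_size(images))
```

The same deduplication principle is what allows container engines to pull only the layers that are not yet present on a host.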
The Linux Container blog series examines the kernel primitives responsible for containerization. By combining information on these mechanisms with details of the corresponding kernel code, their functionality is documented and illustrated. This explains how the operation of containers is possible and describes the internal processes of creating them. As the kernel source code is constantly subject to change, all provided information is to be understood in regard to a specific Linux version.