In this article, I will take you through the steps to install LXC (Linux Containers) on RHEL/CentOS/Rocky Linux, but before that, let's first understand LXD. LXD is a free and open-source, next-generation system container and virtual machine manager. It provides prebuilt images of almost all the major Linux distributions, and these images can be used to create Linux containers with the lxc utility, the CLI client that ships with LXD. When running a virtual machine, LXD uses the hardware of the host system, but the kernel is provided by the virtual machine itself, so virtual machines can be used to run, for example, a different operating system.

What is LXC

LXC is a simple yet powerful userspace interface that allows Linux users to easily create and manage system or application containers.

Features of LXD

  • It provides flexibility and scalability for various use cases.
  • It supports different storage backends and network types.
  • It can be installed on hardware ranging from an individual laptop or cloud instance to a full server rack.
  • It implements a single REST API for both local and remote access (see the sketch after this list).
  • It allows us to manage our instances using a single command line tool.
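As an example of the REST API point above: once LXD is installed (see Step 6 below), the lxc command-line tool talks to this same API, and you can query it directly with the lxc query subcommand. A minimal sketch; the JSON below is illustrative and truncated:

[root@localhost ~]# lxc query /1.0
{
    "api_status": "stable",
    "api_version": "1.0",
    ...
}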

How to Install LXC to Create Linux Containers on RHEL / CentOS / Rocky Linux

Step 1: Prerequisites

a) You should have a running RHEL/CentOS/Rocky Linux server.

b) You should have sudo or root access to run privileged commands.

c) You should have the yum or dnf utility available on your system.

Step 2: Update Your System

First, install the latest available updates from all the enabled repositories using the yum update command. Here we are updating a CentOS 7 system as shown below.

[root@localhost ~]# yum update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.nhanhoa.com
* epel: repo.extreme-ix.org
* extras: mirrors.nhanhoa.com
* updates: mirrors.nhanhoa.com
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:1.18.4-3.el7 will be updated
---> Package NetworkManager.x86_64 1:1.18.8-2.el7_9 will be an update
---> Package NetworkManager-libnm.x86_64 1:1.18.4-3.el7 will be updated
---> Package NetworkManager-libnm.x86_64 1:1.18.8-2.el7_9 will be an update
---> Package NetworkManager-team.x86_64 1:1.18.4-3.el7 will be updated
---> Package NetworkManager-team.x86_64 1:1.18.8-2.el7_9 will be an update
---> Package NetworkManager-tui.x86_64 1:1.18.4-3.el7 will be updated
---> Package NetworkManager-tui.x86_64 1:1.18.8-2.el7_9 will be an update
---> Package bash.x86_64 0:4.2.46-34.el7 will be updated
---> Package bash.x86_64 0:4.2.46-35.el7_9 will be an update
---> Package bind-export-libs.x86_64 32:9.11.4-16.P2.el7 will be updated
---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.9 will be an update
...........................................

Step 3: Change SELinux Mode

By default, SELinux on your system will be in enforcing mode. You can check this using the getenforce command as shown below.

[root@localhost ~]# getenforce
Enforcing

Before proceeding, we need to change this to permissive mode using the sed command below.

[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

Then restart your system using the init 6 or reboot command as shown below.

[root@localhost ~]# reboot

Once restarted, if you check the SELinux status again using the same getenforce command, it will report permissive mode.

[root@localhost ~]# getenforce
Permissive
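Note that the sed command above only edits /etc/selinux/config, which takes effect at the next boot. If you want to switch the running system to permissive mode immediately without a reboot, you can also use the setenforce command (this alone does not persist across reboots, so the config file change is still needed):

[root@localhost ~]# setenforce 0
[root@localhost ~]# getenforce
Permissive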

Step 4: Install EPEL Repo 

In the next step, you need to install the EPEL repository using the yum install epel-release command as shown below.

[root@localhost ~]# yum install epel-release
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirrors.nhanhoa.com
* extras: mirrors.nhanhoa.com
* updates: mirrors.nhanhoa.com
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/4): base/7/x86_64/group_gz | 153 kB 00:00:00
(2/4): extras/7/x86_64/primary_db | 246 kB 00:00:00
(3/4): updates/7/x86_64/primary_db | 15 MB 00:01:33
(4/4): base/7/x86_64/primary_db | 6.1 MB 00:01:37
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-11 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                     Arch                    Version                   Repository                 Size
=============================================================================================================================================================
Installing:
 epel-release                noarch                  7-11                      extras                     15 k

Transaction Summary
=============================================================================================================================================================
Install 1 Package

Total download size: 15 k
Installed size: 24 k
Is this ok [y/d/N]: y
.....................................

Step 5: Install Snapd

If you plan to install LXC from the Snap store, you first need to install the snapd utility on your system using the yum install snapd command as shown below.

[root@localhost ~]# yum install snapd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 7.7 kB 00:00:00
* base: mirrors.nhanhoa.com
* epel: repo.extreme-ix.org
* extras: mirrors.nhanhoa.com
* updates: mirrors.nhanhoa.com
epel | 4.7 kB 00:00:00
(1/3): epel/x86_64/group_gz | 96 kB 00:00:00
(2/3): epel/x86_64/updateinfo | 1.0 MB 00:00:01
(3/3): epel/x86_64/primary_db | 7.0 MB 00:00:08
Resolving Dependencies
--> Running transaction check
---> Package snapd.x86_64 0:2.55.3-1.el7 will be installed
--> Processing Dependency: snap-confine(x86-64) = 2.55.3-1.el7 for package: snapd-2.55.3-1.el7.x86_64
--> Processing Dependency: snapd-selinux = 2.55.3-1.el7 for package: snapd-2.55.3-1.el7.x86_64
--> Processing Dependency: bash-completion for package: snapd-2.55.3-1.el7.x86_64
--> Processing Dependency: fuse for package: snapd-2.55.3-1.el7.x86_64
--> Processing Dependency: squashfs-tools for package: snapd-2.55.3-1.el7.x86_64
--> Processing Dependency: squashfuse for package: snapd-2.55.3-1.el7.x86_64
--> Running transaction check
---> Package bash-completion.noarch 1:2.1-8.el7 will be installed
...............................

After successful installation, enable the snapd socket using the systemctl enable --now snapd.socket command as shown below.

[root@localhost ~]# systemctl enable --now snapd.socket
Created symlink from /etc/systemd/system/sockets.target.wants/snapd.socket to /usr/lib/systemd/system/snapd.socket.
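On a freshly installed snapd, the first snap command may fail while snapd is still seeding. If that happens, you can wait for seeding to complete, and optionally create the /snap symlink that many snap tutorials assume exists (LXD does not require classic confinement, but the link is commonly recommended on CentOS):

[root@localhost ~]# snap wait system seed.loaded
[root@localhost ~]# ln -s /var/lib/snapd/snap /snap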

Step 6: Install LXC

There are two ways to install LXC on RHEL/CentOS/Rocky Linux based systems. We will look at both methods.

a) Using YUM 

In the first method, you can use a package manager such as yum to download and install the LXC packages as shown below.

[root@localhost ~]# yum -y install lxc lxc-templates lxc-extra
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.nhanhoa.com
* epel: ftp.jaist.ac.jp
* extras: mirrors.nhanhoa.com
* updates: mirrors.nhanhoa.com
Resolving Dependencies
--> Running transaction check
---> Package lxc.x86_64 0:1.0.11-2.el7 will be installed
--> Processing Dependency: lua-lxc(x86-64) = 1.0.11-2.el7 for package: lxc-1.0.11-2.el7.x86_64
--> Processing Dependency: lua-alt-getopt for package: lxc-1.0.11-2.el7.x86_64
--> Processing Dependency: liblxc.so.1()(64bit) for package: lxc-1.0.11-2.el7.x86_64
---> Package lxc-extra.x86_64 0:1.0.11-2.el7 will be installed
--> Processing Dependency: python36-lxc(x86-64) = 1.0.11-2.el7 for package: lxc-extra-1.0.11-2.el7.x86_64
--> Processing Dependency: /usr/bin/python3.6 for package: lxc-extra-1.0.11-2.el7.x86_64
---> Package lxc-templates.x86_64 0:1.0.11-2.el7 will be installed
--> Running transaction check
---> Package lua-alt-getopt.noarch 0:0.7.0-4.el7 will be installed
---> Package lua-lxc.x86_64 0:1.0.11-2.el7 will be installed
--> Processing Dependency: lua-filesystem for package: lua-lxc-1.0.11-2.el7.x86_64
---> Package lxc-libs.x86_64 0:1.0.11-2.el7 will be installed
--> Processing Dependency: rsync for package: lxc-libs-1.0.11-2.el7.x86_64
..........................................

After successful installation, you can verify the kernel configuration required by LXC using the lxc-checkconfig command as shown below.

[root@localhost ~]# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-1127.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Warning: newuidmap is not setuid-root
Warning: newgidmap is not setuid-root
Network namespace: enabled
Multiple /dev/pts instances: enabled
..........................................

The next step is to start the lxc service using the systemctl start lxc.service command and then check its status using the systemctl status lxc.service command as shown below.

[root@localhost ~]# systemctl start lxc.service
[root@localhost ~]# systemctl status lxc.service
● lxc.service - LXC Container Initialization and Autoboot Code
Loaded: loaded (/usr/lib/systemd/system/lxc.service; disabled; vendor preset: disabled)
Active: active (exited) since Thu 2022-05-05 09:20:30 EDT; 5s ago
Process: 11303 ExecStart=/usr/libexec/lxc/lxc-autostart-helper start (code=exited, status=0/SUCCESS)
Process: 11296 ExecStartPre=/usr/libexec/lxc/lxc-devsetup (code=exited, status=0/SUCCESS)
Main PID: 11303 (code=exited, status=0/SUCCESS)

May 05 09:20:00 localhost.localdomain systemd[1]: Starting LXC Container Initialization and Autoboot Code...
May 05 09:20:00 localhost.localdomain lxc-devsetup[11296]: Creating /dev/.lxc
May 05 09:20:00 localhost.localdomain lxc-devsetup[11296]: /dev is devtmpfs
May 05 09:20:00 localhost.localdomain lxc-devsetup[11296]: Creating /dev/.lxc/user
May 05 09:20:30 localhost.localdomain lxc-autostart-helper[11303]: Starting LXC autoboot containers: [ OK ]
May 05 09:20:30 localhost.localdomain systemd[1]: Started LXC Container Initialization and Autoboot Code.
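Note the "disabled" flag in the status output above: the service will not start again after a reboot. If you also want LXC and its autostart containers to come up automatically at boot, enable the service as well:

[root@localhost ~]# systemctl enable lxc.service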

If you want to check the paths of all the LXC templates, you can query the package using the rpm command below.

[root@localhost ~]# rpm -ql lxc-templates-1.0.11-2.el7.x86_64
/usr/share/lxc/config/centos.common.conf
/usr/share/lxc/config/centos.userns.conf
/usr/share/lxc/config/common.seccomp
/usr/share/lxc/config/debian.common.conf
/usr/share/lxc/config/debian.userns.conf
/usr/share/lxc/config/fedora.common.conf
/usr/share/lxc/config/fedora.userns.conf
/usr/share/lxc/config/gentoo.common.conf
/usr/share/lxc/config/gentoo.moresecure.conf
/usr/share/lxc/config/gentoo.userns.conf
/usr/share/lxc/config/nesting.conf
/usr/share/lxc/config/oracle.common.conf
/usr/share/lxc/config/oracle.userns.conf
/usr/share/lxc/config/plamo.common.conf
/usr/share/lxc/config/plamo.userns.conf
/usr/share/lxc/config/ubuntu-cloud.common.conf
/usr/share/lxc/config/ubuntu-cloud.lucid.conf
/usr/share/lxc/config/ubuntu-cloud.userns.conf
/usr/share/lxc/config/ubuntu.common.conf
/usr/share/lxc/config/ubuntu.lucid.conf
/usr/share/lxc/config/ubuntu.userns.conf
/usr/share/lxc/templates/lxc-alpine
/usr/share/lxc/templates/lxc-altlinux
/usr/share/lxc/templates/lxc-archlinux
/usr/share/lxc/templates/lxc-busybox
/usr/share/lxc/templates/lxc-centos
/usr/share/lxc/templates/lxc-cirros
/usr/share/lxc/templates/lxc-debian
/usr/share/lxc/templates/lxc-download
/usr/share/lxc/templates/lxc-fedora
/usr/share/lxc/templates/lxc-gentoo
/usr/share/lxc/templates/lxc-openmandriva
/usr/share/lxc/templates/lxc-opensuse
/usr/share/lxc/templates/lxc-oracle
/usr/share/lxc/templates/lxc-plamo
/usr/share/lxc/templates/lxc-sshd
/usr/share/lxc/templates/lxc-ubuntu
/usr/share/lxc/templates/lxc-ubuntu-cloud

To create a container called test-container1 from the CentOS template, use the command below.

[root@localhost ~]# lxc-create -n test-container1 -t /usr/share/lxc/templates/lxc-centos
Host CPE ID from /etc/os-release: cpe:/o:centos:centos:7
Checking cache download in /var/cache/lxc/centos/x86_64/7/rootfs ...
Downloading CentOS minimal ...
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: centos.excellmedia.net
* updates: centos.excellmedia.net
base | 3.6 kB 00:00:00
updates | 2.9 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package chkconfig.x86_64 0:1.7.6-1.el7 will be installed
--> Processing Dependency: rtld(GNU_HASH) for package: chkconfig-1.7.6-1.el7.x86_64
--> Processing Dependency: libpopt.so.0(LIBPOPT_0)(64bit) for package: chkconfig-1.7.6-1.el7.x86_64
--> Processing Dependency: libc.so.6(GLIBC_2.14)(64bit) for package: chkconfig-1.7.6-1.el7.x86_64
--> Processing Dependency: /bin/sh for package: chkconfig-1.7.6-1.el7.x86_64
--> Processing Dependency: libsepol.so.1()(64bit) for package: chkconfig-1.7.6-1.el7.x86_64
--> Processing Dependency: libselinux.so.1()(64bit) for package: chkconfig-1.7.6-1.el7.x86_64
--> Processing Dependency: libpopt.so.0()(64bit) for package: chkconfig-1.7.6-1.el7.x86_64
---> Package cronie.x86_64 0:1.4.11-24.el7_9 will be installed
--> Processing Dependency: pam >= 1.0.1 for package: cronie-1.4.11-24.el7_9.x86_64
--> Processing Dependency: systemd for package: cronie-1.4.11-24.el7_9.x86_64
--> Processing Dependency: systemd for package: cronie-1.4.11-24.el7_9.x86_64
--> Processing Dependency: sed for package: cronie-1.4.11-24.el7_9.x86_64
--> Processing Dependency: libpam.so.0(LIBPAM_1.0)(64bit) for package: cronie-1.4.11-24.el7_9.x86_64

...............................................

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:

'/var/lib/lxc/test-container1/tmp_root_pass'

The root password is set up as expired and will require it to be changed
at first login, which you should do as soon as possible. If you lose the
root password or wish to change it without starting the container, you
can change it from the host by running the following command (which will
also reset the expired flag):

chroot /var/lib/lxc/test-container1/rootfs passwd

You can view the temporarily stored password using the cat /var/lib/lxc/test-container1/tmp_root_pass command as shown below.

[root@localhost ~]# cat /var/lib/lxc/test-container1/tmp_root_pass
Root-test-container1-EKloqE

If you want to change the root password without starting the container, you need to use the chroot command below.

[root@localhost ~]# chroot /var/lib/lxc/test-container1/rootfs passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

To start the container, use the lxc-start -n test-container1 command as shown below.

[root@localhost ~]# lxc-start -n test-container1
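While the container is running, you can check its state with the lxc-ls --fancy or lxc-info commands and get a shell inside it with lxc-attach. The output below is illustrative:

[root@localhost ~]# lxc-ls --fancy
NAME             STATE    IPV4  IPV6  AUTOSTART
-----------------------------------------------
test-container1  RUNNING  -     -     NO
[root@localhost ~]# lxc-attach -n test-container1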

Similarly, to stop the container, use the lxc-stop -n test-container1 command as shown below.

[root@localhost ~]# lxc-stop -n test-container1

b) Using Snapd

LXC can also be installed as a snap package from the Snap store using the snap install lxd command as shown below.

[root@localhost ~]# snap install lxd
2022-05-05T07:24:51-04:00 INFO Waiting for automatic snapd restart...
lxd 5.1-4ae3604 from Canonical✓ installed

After successful installation, you can check the lxc utility version using the lxc --version command as shown below.

[root@localhost ~]# lxc --version
5.1

Before creating a container, we need to initialize the LXD environment using the lxd init command as shown below. You will be asked a series of questions; you can answer them according to your requirements. Here we are providing the answers shown below.

[root@localhost ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: teststorage-pool
Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: lvm
Create a new LVM pool? (yes/no) [default=yes]: yes
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: no
Size in GB of the new loop device (1GB minimum) [default=9GB]: 11GB
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
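For unattended setups, the same answers can be supplied non-interactively as a preseed file. A minimal sketch, assuming you have saved your desired configuration in a hypothetical lxd-preseed.yaml:

[root@localhost ~]# cat lxd-preseed.yaml | lxd init --preseed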

To check the storage list, you can use the lxc storage list command as shown below.

[root@localhost ~]# lxc storage list
+------------------+--------+-----------------------------------------------------+-------------+---------+---------+
|       NAME       | DRIVER |                        SOURCE                        | DESCRIPTION | USED BY |  STATE  |
+------------------+--------+-----------------------------------------------------+-------------+---------+---------+
| teststorage-pool | lvm    | /var/snap/lxd/common/lxd/disks/teststorage-pool.img |             | 1       | CREATED |
+------------------+--------+-----------------------------------------------------+-------------+---------+---------+
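You can inspect the configuration of the new pool in more detail using the lxc storage show command:

[root@localhost ~]# lxc storage show teststorage-pool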

To launch a container, say test-container1, from the CentOS 7 image, you can use the lxc launch images:centos/7/amd64 test-container1 command as shown below.

[root@localhost ~]# lxc launch images:centos/7/amd64 test-container1
Creating test-container1
Starting test-container1

You can use the lxc list command to list all the running containers.

[root@localhost ~]# lxc list
+-----------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
|      NAME       |  STATE  |         IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+-----------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| test-container1 | RUNNING | 10.216.18.252 (eth0) | fd42:cb8b:790f:126b:216:3eff:fe4a:6009 (eth0) | CONTAINER | 0         |
+-----------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
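To get a shell inside the running container, you can use the lxc exec command; when you are done with the container, it can be stopped and deleted. The container prompt shown below is illustrative:

[root@localhost ~]# lxc exec test-container1 -- /bin/bash
[root@test-container1 ~]# exit
[root@localhost ~]# lxc stop test-container1
[root@localhost ~]# lxc delete test-container1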

Step 7: Uninstall LXC

Once you are done with LXC, you can uninstall it from the system using either of the methods below, depending on how you installed it.

a) Using YUM or DNF

If you installed from the repository using a package manager, you can remove the packages using the yum -y remove lxc lxc-templates lxc-extra command as shown below.

[root@localhost ~]# yum -y remove lxc lxc-templates lxc-extra
Loaded plugins: fastestmirror
Resolving Dependencies
--> Running transaction check
---> Package lxc.x86_64 0:1.0.11-2.el7 will be erased
---> Package lxc-extra.x86_64 0:1.0.11-2.el7 will be erased
---> Package lxc-templates.x86_64 0:1.0.11-2.el7 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                     Arch                    Version                   Repository                 Size
=============================================================================================================================================================
Removing:
 lxc                         x86_64                  1.0.11-2.el7              @epel                     318 k
 lxc-extra                   x86_64                  1.0.11-2.el7              @epel                      39 k
 lxc-templates               x86_64                  1.0.11-2.el7              @epel                     333 k

Transaction Summary
=============================================================================================================================================================
Remove 3 Packages

Installed size: 690 k
..............................................
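Note that removing the packages does not delete existing container data. If you no longer need them, you can also remove the container root filesystems under /var/lib/lxc and the download cache under /var/cache/lxc manually:

[root@localhost ~]# rm -rf /var/lib/lxc/test-container1
[root@localhost ~]# rm -rf /var/cache/lxc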

b) Using Snapd

If you installed from the Snap store, you can remove the snap package using the snap remove lxd command as shown below.

[root@localhost ~]# snap remove lxd
lxd removed