
How to Install Docker on Linux


The container platform Docker has completely changed the software development and deployment landscape. It is an open-source technology that ensures applications run consistently across a variety of environments by packaging them together with all of their dependencies. Unlike traditional virtual machines, Docker containers are lightweight, portable, and run in isolated environments. This allows several services to operate on the same hardware without interfering with one another, enabling fast and efficient deployment.

Docker is especially at home on Linux because the Linux kernel natively supports the core container technologies, cgroups and namespaces.

As a result, Docker runs more efficiently and consistently on Linux than on other platforms. In most major cloud infrastructures, including AWS, GCP, Azure, and DigitalOcean, Docker containers run on Linux-based virtual machines.

Docker offers secure, isolated environments for applications and eliminates the classic "works on my machine" problem in software development. It streamlines CI/CD pipelines and speeds up the deployment of microservices. Docker works with nearly every widely used Linux distribution, including Fedora, Ubuntu, Debian, and CentOS.

Typical Docker use cases are listed below.

  • Web applications (like Nginx and Node.js)

  • Databases (like MySQL and PostgreSQL)

  • Testing environments

  • CI/CD pipelines

  • Microservices-based architectures


How do you Install Docker on Ubuntu?

Ubuntu is the most popular distribution for running Docker, and installation is simple. You may follow the next steps to install Docker on Ubuntu.

  1. To make sure you're using the most recent repositories, start by updating the package list.

    sudo apt update
  2. Install the required dependencies that allow apt to handle HTTPS repositories and certificate management.

    sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
  3. Add Docker’s official GPG key to verify the authenticity of the packages.

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
  4. Add the official Docker repository to your system.

    sudo add-apt-repository \
    "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable"
  5. Update the package list again to include the newly added Docker repository.

    sudo apt update
  6. Now install Docker Engine, the CLI, and containerd.

    sudo apt install docker-ce docker-ce-cli containerd.io -y
  7. Once installed, check the status of the Docker service to ensure it is running.

    sudo systemctl status docker
  8. Finally, to avoid having to use sudo for every Docker command, add your user to the docker group. Log out and back in afterwards for the change to take effect.

    sudo usermod -aG docker $USER
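The group change in the last step only applies to new login sessions. As an illustrative sketch (the in_group helper below is not part of Docker; it just scans the output of id -nG), you can check whether the current shell already has the docker group:

```shell
# in_group "GROUP_LIST" NAME -> succeeds when NAME appears in the
# space-separated group list (as printed by `id -nG`).
in_group() {
  case " $1 " in
    *" $2 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

# `usermod -aG docker $USER` only affects new login sessions, so inspect
# the groups of the *current* session:
if in_group "$(id -nG)" docker; then
  echo "docker group is active in this session"
else
  echo "log out and back in (or run: newgrp docker)"
fi
```

If the group is not yet active, running newgrp docker starts a subshell with the new group applied, without logging out.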

How do you Install Docker on Debian?

Since Ubuntu is based on Debian, installing Docker on Debian is fairly similar. However, because Debian is often a more stripped-down system, it is crucial to ensure that all required dependencies are installed correctly. You may follow the next steps to install Docker on Debian.

  1. Update Package Lists: Start by updating the package lists to ensure you are working with the latest repositories and security patches:

    sudo apt update

    Skipping this step could cause conflicts with outdated libraries, which might lead to installation failures or security risks.

  2. Install Required Dependencies: Debian may not include all required packages by default. Install the following dependencies to make sure Docker can be set up smoothly:

    sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release -y

    Here’s what each package does.

    • apt-transport-https: Allows apt to download packages over HTTPS securely

    • ca-certificates: Ensures SSL certificates are validated correctly

    • curl: Used to download Docker’s GPG key

    • gnupg: Manages encryption keys securely

    • lsb-release: Fetches your Debian release codename (e.g., bullseye, bookworm)

    On minimal Debian installations, some of these might be missing. If the command fails, install the missing packages individually.

  3. Add Docker’s Official GPG Key: Docker packages are signed with a GPG key to guarantee their authenticity. Add the key by running the next command.

    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

    This step is critical. Without the GPG key, apt will refuse to download and install Docker packages for security reasons.

  4. Add the Docker Repository: Next, add the official Docker repository to your Debian system.

    echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
    https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
    | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    Explanation:

    • $(dpkg --print-architecture) : Detects CPU architecture (e.g., amd64, arm64)

    • $(lsb_release -cs): Returns your Debian release codename

    If your release codename is not supported, apt may reject the repository. In that case, manually replace it with a supported codename listed in Docker’s official documentation.

  5. Refresh Package Lists Again: After adding the new repository, refresh apt again.

    sudo apt update
  6. Install Docker Engine: Now install Docker Engine, the CLI, and containerd runtime.

    sudo apt install docker-ce docker-ce-cli containerd.io -y
    • docker-ce : The Docker Engine itself

    • docker-ce-cli : The Docker command-line client

    • containerd.io : The container runtime used under the hood

    With these three packages installed, Docker will be fully functional.

  7. Verify Docker Service: Docker should start automatically after installation. Check Docker's status by running the following command.

    sudo systemctl status docker

    If it’s not running, start and enable it.

    sudo systemctl start docker
    sudo systemctl enable docker

    Using enable ensures Docker starts automatically on every system boot.

    tip
    • Run Docker Without sudo: By default, every Docker command requires sudo. To avoid this, add your user to the docker group:
      sudo usermod -aG docker $USER
      Then log out and back in for the change to take effect.
    • Test the Installation: Run the following container to verify everything works:
      docker run hello-world
      If you see the success message, Docker is correctly installed.
    • Cleanup in Case of Failure: If something goes wrong and you want to start fresh:
      sudo apt purge docker-ce docker-ce-cli containerd.io -y
      sudo rm -rf /var/lib/docker

In summary, installing Docker on Debian is straightforward but requires attention to a few critical points.

  • Always add the GPG key

  • Configure the correct repository address

  • Make sure the Docker service is running
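Most installation failures on Debian come down to a malformed repository line in step 4. The sketch below rebuilds that line from its two variable parts; docker_apt_line is an illustrative helper name, and the fallback values (amd64, bookworm) are assumptions for systems where dpkg or lsb_release is unavailable:

```shell
# docker_apt_line ARCH CODENAME -> the one-line apt source entry from step 4.
docker_apt_line() {
  printf 'deb [arch=%s signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian %s stable\n' \
    "$1" "$2"
}

# Fall back to assumed defaults when dpkg/lsb_release are unavailable:
arch=$(dpkg --print-architecture 2>/dev/null || echo amd64)
codename=$(lsb_release -cs 2>/dev/null || echo bookworm)
docker_apt_line "$arch" "$codename"
```

Comparing this output against the contents of /etc/apt/sources.list.d/docker.list is a quick way to spot a typo in the architecture, keyring path, or codename.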

How do you Install Docker on CentOS 7?

CentOS 7 was for a long time one of the most popular distributions in enterprise settings. Docker is installed on CentOS 7 using the yum package manager. However, you must take extra care because CentOS 7 has officially reached its end of life (EOL). You may follow the next steps to install Docker on CentOS 7.

  1. Remove Old Versions of Docker: If you have older Docker versions installed, remove them to avoid conflicts.

    sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

    Don’t worry if you see “package not found” errors here. It just means those versions were never installed on your system.

  2. Install Required Dependencies: Now, install the additional packages Docker relies on:

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    • yum-utils : Provides tools for managing repositories

    • device-mapper-persistent-data & lvm2 : Required for Docker’s storage driver (especially overlay2)

  3. Add the Docker Repository: Add Docker’s official repository so you can install the latest Docker packages:

    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    This ensures that your system pulls Docker from the official source rather than outdated system repositories.

  4. Install Docker Engine: Now install Docker Engine along with its CLI and runtime:

    sudo yum install docker-ce docker-ce-cli containerd.io -y
    • docker-ce : The Docker Engine itself

    • docker-ce-cli : Command-line client for managing Docker

    • containerd.io : The container runtime that Docker relies on

  5. Start and Enable Docker Service: Finally, start the Docker service and enable it to launch at boot.

    sudo systemctl start docker
    sudo systemctl enable docker
    • start : Immediately starts Docker

    • enable : Ensures Docker automatically starts on system reboot

tip
  • Kernel Version Compatibility: CentOS 7 ships an outdated kernel (3.10), so certain modern Docker features may not function correctly. For better long-term support, consider migrating to Rocky Linux or AlmaLinux if at all possible.

  • Verify the Installation: Run the following test container to confirm Docker is working.

    docker run hello-world

    Your installation is finished if the container launches successfully and prints the "Hello from Docker!" message.

    In conclusion, Docker can still be installed on CentOS 7, but for complete compatibility and security, you should definitely think about switching to a newer distribution because of its end-of-life status and outdated kernel.
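Given the EOL caveat above, provisioning scripts often check the distribution before installing. A minimal sketch (warn_if_centos7 is an illustrative helper, not a standard tool) that inspects an os-release file:

```shell
# warn_if_centos7 FILE: print a warning when the given os-release file
# identifies the system as CentOS 7 (end-of-life since June 30, 2024).
# Runs in a subshell so the sourced variables do not leak.
warn_if_centos7() {
  (
    . "$1" 2>/dev/null || true
    if [ "${ID:-}" = "centos" ] && [ "${VERSION_ID:-}" = "7" ]; then
      echo "Warning: CentOS 7 is end-of-life; plan a migration."
    fi
  )
}

warn_if_centos7 /etc/os-release
```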

How do you Install Docker on Fedora?

Unlike CentOS, Fedora provides more up-to-date packages, and its default package manager is dnf. This makes installing Docker on Fedora cleaner and more straightforward.

  1. Install Required Packages: First, install the plugin package that enables repository management commands.

    sudo dnf -y install dnf-plugins-core

    dnf-plugins-core provides tools like config-manager, which are needed for adding repositories.

  2. Add the Docker Repository: Next, add Docker’s official repository to your Fedora system:

    sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo

    Since Fedora updates very frequently, it is best to use Docker’s official repository to ensure you always get the latest and most stable packages.

  3. Install Docker: Now, install Docker Engine, the CLI, and containerd:

    sudo dnf install docker-ce docker-ce-cli containerd.io -y
  4. Start and Enable the Docker Service: After installation, start Docker and enable it so that it launches automatically at boot:

    sudo systemctl start docker
    sudo systemctl enable docker
  5. Test Your Installation: To confirm that Docker is working correctly, run the following test container:

    docker run hello-world

    If you see the “Hello from Docker!” message, your installation is successful.

SELinux Considerations

Since Fedora comes with SELinux enabled by default, you may encounter permission issues with some containers. Common solutions are as follows.

  • Temporarily disable SELinux enforcement.

    sudo setenforce 0
  • When mounting volumes, add the :z or :Z option to set the correct SELinux labels. Example:

    docker run -v /host/data:/container/data:Z myimage

Podman: A Docker Alternative

Fedora ships with Podman by default, a container engine that provides Docker-compatible commands.

For example:

podman run hello-world

works almost identically to docker run hello-world.

If you decide to use Podman instead of Docker, most scenarios will work without modification.

On CentOS 7, Docker can still be installed, but due to its older kernel and end-of-life status, some modern features may not work. On Fedora, Docker runs more smoothly on its modern kernel, and you gain flexibility with alternatives like Podman.
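Because a Fedora host may have Docker, Podman, or both, scripts sometimes probe for whichever engine is available. A hedged sketch (the engine helper is illustrative, not a standard command):

```shell
# engine: print the first available container engine, preferring docker.
engine() {
  for e in docker podman; do
    if command -v "$e" >/dev/null 2>&1; then
      echo "$e"
      return 0
    fi
  done
  return 1
}

if eng=$(engine); then
  echo "using $eng"
  # "$eng" run --rm hello-world   # uncomment to smoke-test the engine
else
  echo "no container engine found"
fi
```

The hello-world smoke test is left commented out so the probe itself has no side effects.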

How do you Install Docker on Kali Linux?

Kali Linux is a Debian-based distribution widely favored by security professionals for penetration testing. Docker is particularly useful on Kali because it allows you to set up isolated testing environments quickly and securely.

  1. Update the Package Lists: Always begin by updating your repositories to avoid dependency issues caused by outdated package indexes.

    sudo apt update
  2. Install Required Dependencies: Next, install the essential tools Docker needs to be set up properly.

    sudo apt install apt-transport-https ca-certificates curl software-properties-common -y

    software-properties-common provides the add-apt-repository command.

  3. Add Docker’s GPG Key: To ensure package authenticity, add Docker’s official GPG key.

    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg

    Without this GPG key, apt will refuse to trust Docker’s packages.

  4. Add the Docker Repository: Now, add the official Docker repository to your Kali system.

    echo "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/debian bookworm stable" \
    | sudo tee /etc/apt/sources.list.d/docker.list

    Note that on Kali, lsb_release -cs returns kali-rolling, which Docker's Debian repository does not provide. Use the Debian codename that Kali currently tracks instead (bookworm at the time of writing).
  5. Install Docker: Update the package lists again and install Docker.

    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io -y
  6. Verify and Manage the Service: Check if Docker is running.

    sudo systemctl status docker

    If it’s not active, start and enable it to run on every reboot:

    sudo systemctl start docker
    sudo systemctl enable docker

    Running security tools in isolated environments is one benefit of using Docker on Kali. For instance, a Kali container can be interactively spun up using:

    docker run -it kalilinux/kali-rolling /bin/bash

    This prevents your host system from becoming cluttered or at risk while allowing you to test tools inside a controlled container environment.

In conclusion, installing Docker on Kali Linux is quite similar to installing it on Debian; however, it is particularly effective when used for sandboxing, penetration testing, and isolated security tool experimentation.

How do you Install Docker on Linux Mint?

Linux Mint is based on Ubuntu, which means installing Docker on Mint is almost identical to Ubuntu.

  1. Update Package Lists: First, update the package list to ensure your repositories are fresh:

    sudo apt update
  2. Install Dependencies: Next, install the required dependencies that Docker relies on:

    sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
  3. Add Docker’s GPG Key: Now add Docker’s official GPG key to verify package authenticity:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
  4. Add the Docker Repository: Add the Docker repository for Ubuntu (which Linux Mint follows):

    sudo add-apt-repository \
    "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable"

    Since Linux Mint tracks Ubuntu’s release numbers, you may sometimes get an error when adding the repository. If this happens, manually specify the Ubuntu codename that Mint is based on (e.g., focal for Mint 20, jammy for Mint 21).

  5. Install Docker: Update the package list again and install Docker Engine, the CLI, and containerd.

    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io -y
tip

For a more visual experience, Mint desktop users can also use Docker Desktop (GUI). For automation and scripting in particular, however, the command-line interface (CLI) remains the more versatile and powerful way of working with Docker.

Because Linux Mint closely resembles Ubuntu, installing Docker on it is simple. If you run into problems adding the repository, just make sure to check the underlying Ubuntu codename again.
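The codename workaround from step 4 can be scripted. The mapping below follows the Ubuntu bases Mint publishes for its releases (Mint 20 on focal, 21 on jammy, 22 on noble); mint_base_codename is an illustrative helper, and on a real Mint system the UBUNTU_CODENAME field of /etc/os-release is the more direct source:

```shell
# mint_base_codename MINT_VERSION -> the Ubuntu codename that Mint release
# is built on, for use in the Docker repository line.
mint_base_codename() {
  case "${1%%.*}" in
    20) echo focal ;;
    21) echo jammy ;;
    22) echo noble ;;
    *)  return 1 ;;
  esac
}

# On a real Mint system, /etc/os-release usually answers directly:
codename=$(. /etc/os-release 2>/dev/null || true; echo "${UBUNTU_CODENAME:-}")
# Illustrative fallback when UBUNTU_CODENAME is absent (here: Mint 21):
[ -n "$codename" ] || codename=$(mint_base_codename 21)
echo "using codename: $codename"
```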

How do you Install Docker on Rocky Linux?

Rocky Linux is the community-driven continuation of CentOS and is fully compatible with RHEL. Installing Docker on Rocky Linux is almost identical to CentOS 8/9. You may follow the next steps to install Docker on Rocky Linux.

  1. Install Required Packages: First, install the necessary plugins that will allow you to manage repositories.

    sudo dnf install -y dnf-plugins-core
  2. Add the Docker Repository: Next, add the official Docker repository.

    sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    Even though the URL includes “centos,” this repository is fully compatible with Rocky Linux.

  3. Install Docker: Now install Docker Engine, the CLI, and containerd.

    sudo dnf install docker-ce docker-ce-cli containerd.io -y
  4. Start and Enable the Docker Service: Start Docker immediately and enable it so that it launches automatically on boot.

    sudo systemctl start docker
    sudo systemctl enable docker

SELinux Considerations

You might run into permission errors when working with containers if SELinux is enabled, particularly when mounting volumes. When binding a volume, use the :Z or :z option to correct this. For instance:

docker run -v /host/data:/container/data:Z myimage

In conclusion, Docker installation on Rocky Linux is seamless and nearly the same as on CentOS. Simply adjust volume mounts in accordance with SELinux policies.

How do you Install Docker on Amazon Linux?

Amazon Linux 2 is a distribution commonly used in AWS environments, and installing Docker on it is very straightforward.

  1. Update Packages: First, make sure your system packages are up to date.

    sudo yum update -y
  2. Install Docker: Next, install Docker using the amazon-linux-extras command.

    sudo amazon-linux-extras install docker -y

    The amazon-linux-extras tool is unique to Amazon Linux and is used to manage additional repositories and packages maintained by AWS.

  3. Start the Docker Service: Once installed, start Docker and enable it to launch automatically on system boot.

    sudo systemctl start docker
    sudo systemctl enable docker
  4. Add Your User to the Docker Group: By default, Docker commands require sudo. To allow the default ec2-user to run Docker without elevated privileges, add it to the Docker group.

    sudo usermod -aG docker ec2-user

    Then log out and back in (or restart your session) for the group change to take effect.

Docker is frequently used in conjunction with Amazon ECS (Elastic Container Service) on Amazon Linux. Docker containers typically operate in this environment with very high performance because of AWS-specific optimizations.

In short, installing Docker on Amazon Linux 2 is quick, AWS-optimized, and ideal for cloud-native workloads.

How do you Install Docker on RHEL?

Red Hat Enterprise Linux (RHEL) follows a process very similar to CentOS when installing Docker.

  1. Remove Old Docker Packages: First, remove any outdated Docker versions to prevent conflicts.

    sudo yum remove docker docker-client docker-client-latest docker-common \
    docker-latest docker-latest-logrotate docker-logrotate docker-engine

    If these packages are not installed, you may see “package not found” warnings. This is normal and can be safely ignored.

  2. Install Required Packages: Next, install the dependencies Docker needs for storage and repository management.

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    • yum-utils : Provides tools for managing repositories

    • device-mapper-persistent-data & lvm2 : Required for Docker’s storage drivers (such as overlay2)

  3. Add the Docker Repository: Now, add Docker’s official repository that provides RHEL-compatible packages.

    sudo yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo

    This repository contains Docker CE packages specifically built for RHEL compatibility.

  4. Install Docker: Once the repository is added, install Docker Engine, CLI, and containerd:

    sudo yum install docker-ce docker-ce-cli containerd.io -y
  5. Start and Enable the Docker Service: Start Docker immediately and configure it to start automatically at system boot:

    sudo systemctl start docker
    sudo systemctl enable docker
note

On RHEL, Docker is sometimes overshadowed by Red Hat's own container solution, Podman. If you specifically require Docker CE, install the official packages from the repository added above.

What is Docker Desktop for Linux?

Docker Desktop is a GUI-based management tool that makes the container ecosystem more accessible to developers. Initially released for Windows and macOS, it was later extended to Linux in response to strong demand.

Although container management is entirely possible through the command line (CLI), Docker Desktop is very convenient, especially for beginners and for professionals who prefer a visual interface for container operations.

Key Features

  • GUI-Based Container Management: You can start, stop, and view logs for containers with just a few clicks. Additionally, it provides dedicated panels for managing images, volumes, and networks.

  • Docker Hub Integration: Link your Docker Hub account to search, pull, and run images directly from the GUI. Instead of typing docker pull in the terminal, you can perform the same action visually.

  • Easy Installation and Updates: Instead of manually adding repositories or updating via CLI, Docker Desktop simplifies the process by handling installation and upgrades in one go.

  • Kubernetes Integration: Docker Desktop allows you to enable Kubernetes optionally. This makes it easier to orchestrate multiple containers on a single host without complex setups.

How It Differs from CLI Docker

The differences between CLI Docker and Docker Desktop are listed below.

  • Flexibility: The CLI is more flexible and scriptable but comes with a steeper learning curve.

  • Ease of Use: Docker Desktop places a graphical layer on top of the CLI, lowering the entry barrier for newcomers to the container world.

  • Resource Management: With Docker Desktop, you can configure CPU, RAM, and disk limits directly through the GUI, making it easier to optimize system resource usage.

While professionals often stick to the CLI for its power and automation capabilities, Docker Desktop greatly shortens the learning curve and offers a more intuitive control environment. In short, Docker Desktop is essentially Docker Engine plus a GUI and integrated tools. It enhances the CLI rather than replacing it, making daily tasks quicker and more visible.

Why Use Docker on a Linux-Based System?

Docker’s popularity on Linux is no coincidence. The reason is that the foundations of container technology are built directly into the Linux kernel itself.

  1. Performance Advantage: The Linux kernel natively supports features like cgroups (control groups) and namespaces, which are essential for container isolation. On Windows, Docker relies on LinuxKit-based virtual machines to run containers. This extra layer of virtualization makes it slower and less efficient compared to Linux. On Linux, Docker starts faster, consumes fewer resources, and runs more reliably.

  2. Compatibility: Docker’s core is built on Linux. More than 90% of images on Docker Hub are Linux-based (such as nginx, mysql, redis). Some images may not work properly on Windows, but on Linux, almost all images run natively without modification.

  3. Security: The Linux kernel integrates security modules such as SELinux (in RHEL, Fedora) and AppArmor (in Ubuntu, Debian) to strengthen container isolation. Network policies (using iptables or nftables) and firewall integration are straightforward to configure. Rootless Docker, which allows containers to run without root privileges, is more stable and secure on Linux.

  4. Automation and CI/CD Integration: Popular DevOps tools such as Jenkins, GitLab CI, and GitHub Actions integrate seamlessly with Docker on Linux. In microservices architectures, CI/CD pipelines become faster and more reliable thanks to containerization. The build, test, and deploy cycle has become almost standard in Linux environments with Docker.

Linux is the natural habitat for Docker. With advantages in performance, compatibility, security, and automation, Linux is by far the best platform to run Docker.

What are the Minimum System Requirements to Run Docker on Linux?

Your system must fulfill specific hardware and software requirements in order to run Docker on Linux. Although Docker Engine can theoretically operate on machines with lower specifications, these configurations frequently experience performance problems. It is highly advised to use more powerful hardware for Docker Desktop or Kubernetes integration.

  • CPU Requirements: CPU requirements for Docker are as follows.

    • 64-bit Processor: Docker only supports 64-bit architectures (x86_64 and ARM64). Running on 32-bit systems is not possible.

    • Virtualization Support: Features like Intel VT-x or AMD-V are recommended. These must be enabled in your BIOS/UEFI settings.

    • ARM Devices (e.g., Raspberry Pi): Docker works on ARM64 systems. However, not all images are built for ARM. Make sure to select images with the arm64 or arm/v7 tags on Docker Hub.

  • Kernel Requirements: Kernel requirements for Docker are as follows.

    • Minimum: Linux kernel 3.10

    • Recommended: Newer kernel versions (5.x series) for better stability and feature support

    • systemd Support: Systems using systemd for service management are recommended, as installing Docker on older init.d-based systems can be more complex.

    To check your current kernel version, run:

    uname -r
  • RAM Requirements: RAM requirements for Docker are as follows.

    • Minimum: 2 GB (sufficient for basic Docker Engine usage)

    • Recommended: 4 GB or more (especially for Docker Desktop, Kubernetes, or running multiple containers simultaneously)

  • Disk Space Requirements: Disk requirements for Docker are as follows.

    • Minimum: 10 GB of free space

    • Recommended: 20 GB+ (especially if you frequently download new images)

Docker images use a layered filesystem, which means disk usage can grow rapidly if you often pull new images and create containers.

To clean up unused images and reclaim space, run:

docker system prune -a

In summary, Docker Engine can run on less powerful systems, but performance will be constrained. Aim for at least 4 GB of RAM, a modern kernel, and a capable CPU for workloads involving Docker Desktop, Kubernetes, or multiple containers. Remember to run cleanup commands regularly, because disk usage can grow quickly. If your system meets these minimum requirements, your Docker experience will be far more reliable and efficient.
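The kernel requirement above can be checked mechanically against the output of uname -r. A sketch, with kernel_at_least as an illustrative helper:

```shell
# kernel_at_least VERSION MIN_MAJOR MIN_MINOR -> succeeds when VERSION
# (e.g., "5.15.0-91-generic") is at least MIN_MAJOR.MIN_MINOR.
kernel_at_least() {
  v=${1%%-*}          # drop the "-91-generic" style suffix
  major=${v%%.*}
  rest=${v#*.}
  minor=${rest%%.*}
  [ "$major" -gt "$2" ] || { [ "$major" -eq "$2" ] && [ "$minor" -ge "$3" ]; }
}

if kernel_at_least "$(uname -r)" 3 10; then
  echo "kernel meets Docker's 3.10 minimum"
else
  echo "kernel is too old for Docker"
fi
```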

How is Docker Desktop Different from Docker Engine on Linux?

Docker Engine and Docker Desktop are often confused with one another. In reality, one serves as the core infrastructure, while the other is a more complete developer toolset built on top of it.

Key Differences

  • Type: Docker Engine is a background service (the dockerd daemon); Docker Desktop is a full application with GUI and CLI integration.

  • Usage: Engine is managed via the command line (CLI); Desktop combines a visual dashboard with CLI integration.

  • Resource Management: Engine is configured manually via the CLI (e.g., --cpus, --memory); Desktop lets you set CPU, RAM, and disk limits via the GUI.

  • Kubernetes: Engine requires a separate installation (e.g., Minikube, k3s); Desktop comes with built-in Kubernetes support.

  • Platform: Engine works on almost any Linux distribution; Desktop is limited to certain distros such as Ubuntu, Debian, and Fedora.

  • Target Users: Engine suits system administrators and DevOps engineers; Desktop suits developers and users who prefer a GUI environment.

Things to Keep in Mind

  • Higher Resource Consumption: Docker Desktop consumes more resources than Docker Engine alone, which may cause performance issues on older machines.

  • Professional DevOps Teams: Typically stick to Engine + CLI, as they’re better suited for automation and scripting.

  • Beginners and Learners: Docker Desktop significantly lowers the learning curve and provides a more accessible environment.

In short: Docker Engine provides the core infrastructure, while Docker Desktop delivers a more user-friendly development environment built on top of it.

Are There GUI Tools Available for Docker on Linux?

Yes, while Docker can be fully managed via the command line (CLI), there are several GUI tools available for Linux that make container management more visual and user-friendly.

  1. Docker Desktop: The official GUI solution for Docker. It allows you to manage containers, images, networks, and volumes directly through a graphical interface. Docker Desktop integrates with Docker Hub, making it easy to search, pull, and run images without using CLI commands. It provides GUI-based resource management for CPU, RAM, and disk allocation.

  2. Portainer: A lightweight, web-based container management tool. Portainer runs on top of Docker Engine and is accessible through a web browser. It is very easy to set up and use. It provides simple management for containers, volumes, and networks. Portainer is commonly used in small to medium-sized projects thanks to its simplicity and speed.

  3. Rancher: Designed for large-scale Kubernetes management. Rancher supports multi-cluster management, making it suitable for complex deployments. It works across both Docker and Kubernetes ecosystems. Rancher is primarily chosen by enterprise projects where advanced orchestration and scalability are required.

How do you Install Docker from a .deb or .rpm Package on Linux?

You may not always have access to an internet-connected server. In air-gapped (offline) environments, or in organizations where external repositories cannot be used for security reasons, Docker must be installed using the .deb or .rpm package files.

  1. Download the Official Docker Packages: Go to Docker’s official download page and download the package that matches your distribution. Be sure to choose the correct system architecture (e.g., x86_64, arm64) and distribution version (Debian, Ubuntu, RHEL, CentOS, Fedora, etc.). Selecting the wrong package can lead to dependency errors during installation.

  2. Install from a .deb Package (Ubuntu/Debian/Mint/Kali): If you are using a Debian-based distribution, install Docker from the .deb package as follows:

    sudo dpkg -i docker-ce_<version>_amd64.deb
    sudo apt -f install

    The first command installs the package. The second command (-f install) resolves and installs any missing dependencies. Without running the second command, you may end up with an incomplete installation.

  3. Install from an .rpm Package (RHEL/CentOS/Fedora/Rocky): For RHEL-based distributions, use the .rpm package:

    sudo rpm -ivh docker-ce-<version>.x86_64.rpm

    Explanation of flags:

    • -i: install

    • -v: verbose (provides detailed output)

    • -h: displays progress using hash marks

Alternatively, you can use yum localinstall or dnf install for better dependency handling, since rpm alone won’t automatically resolve missing packages.

This method is particularly useful in closed networks or for offline installations. However, managing updates manually can be challenging. If you have internet access, it’s recommended to install Docker through a package manager (apt, dnf, or yum) for easier updates and dependency resolution.

How do you Install Docker Using a Package Manager Like apt, dnf, or yum?

Using a package manager is the most practical and secure way to install Docker. It ensures a smoother setup and makes ongoing updates easier to manage.

  • Ubuntu / Debian / Linux Mint / Kali : apt: Start by updating your repositories, then install Docker Engine, the CLI, and containerd:

    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io -y
  • CentOS / RHEL : yum: On RHEL-based systems, install Docker with the following command:

    sudo yum install docker-ce docker-ce-cli containerd.io -y
  • Fedora / Rocky Linux / AlmaLinux : dnf: On newer distributions that use dnf, run:

    sudo dnf install docker-ce docker-ce-cli containerd.io -y

Reasons to prefer a package manager are listed below.

  • Automatic Dependency Resolution: All required dependencies are handled for you.

  • Easy Updates: Simply run commands like apt upgrade or dnf upgrade to keep Docker up to date.

  • Security Best Practice: Since package managers integrate with your system’s security policies, this method is the most reliable and recommended approach.

In short, if your system has internet access, always prefer installing Docker via a package manager over manual .deb or .rpm files.
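If Docker's repository is not yet configured, it must be added before the install commands above will find the docker-ce packages. A sketch for Ubuntu, following the steps in Docker's documentation (paths and the GPG key URL are Docker's official ones):

```shell
# Add Docker's GPG key to the system keyring
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Register the repository for your architecture and release codename
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
```

On RHEL-based systems, the equivalent step is adding Docker's yum/dnf repo file before running the install command.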

How do you Verify a Successful Docker Installation on Linux?

After completing the installation, it’s important to test Docker to make sure everything is working properly.

  1. Check the Docker Version: Run the following command to confirm that Docker is installed and the CLI is accessible:

    docker --version

    This should display the installed Docker version.

  2. Check the Service Status: Next, verify that the Docker service is active and running:

    systemctl status docker

    If the service is not active, you may need to start it with:

    sudo systemctl start docker
  3. Run a Test Container: Finally, run Docker’s built-in hello-world test container to ensure containers can be launched successfully:

    docker run hello-world

If the container runs and prints its success message, Docker Engine is installed correctly and operating as intended.

You can be sure that your Linux Docker setup is operating correctly by performing these quick checks.
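One common follow-up step: by default the Docker daemon socket is owned by root, so every docker command requires sudo. Adding your user to the docker group removes that requirement (log out and back in for the change to take effect):

```shell
# Create the docker group if it does not already exist
sudo groupadd -f docker

# Add the current user to the group
sudo usermod -aG docker "$USER"

# After logging out and back in, this should work without sudo:
docker run hello-world
```

Keep in mind that membership in the docker group is effectively root-equivalent on the host, so grant it only to trusted users.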

How do you Start and Manage Docker Services on Linux?

Docker Engine on Linux is managed using systemd, which allows you to start, stop, and control services easily.

  1. Start the Docker Service: To start Docker immediately, run:

    sudo systemctl start docker
  2. Enable Docker at Boot: To make sure Docker starts automatically every time your system reboots, enable it:

    sudo systemctl enable docker

    If you forget this step, Docker will not start automatically after a reboot, and you’ll need to start it manually.

  3. Stop the Docker Service: If you need to stop Docker temporarily, use:

    sudo systemctl stop docker
  4. Restart the Docker Service: To apply changes or refresh the service, restart Docker:

    sudo systemctl restart docker
  5. Check the Status of the Service: Finally, to verify whether Docker is running or troubleshoot issues, check the service status:

    sudo systemctl status docker

These commands give you full control over Docker's lifecycle on Linux, ensuring it starts when required and can be stopped, restarted, or inspected on demand.
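Steps 1 and 2 can also be combined, and the status check can be made script-friendly:

```shell
# Start the service immediately and enable it at boot in one command:
sudo systemctl enable --now docker

# Scriptable health check: exit code 0 means the service is active
systemctl is-active --quiet docker && echo "Docker is running"
```

The `is-active --quiet` form is handy in provisioning scripts, where parsing the full `systemctl status` output would be fragile.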

How do you Uninstall Docker from a Linux System?

Occasionally you may need to remove Docker entirely, for example to perform a clean reinstallation or to recover from a failed install.

To uninstall Docker and its related components on Debian-based systems, run the next command.

sudo apt purge docker-ce docker-ce-cli containerd.io -y
sudo rm -rf /var/lib/docker /var/lib/containerd

To uninstall Docker and its related components on RHEL-based systems, use the following commands.

sudo yum remove docker-ce docker-ce-cli containerd.io -y
sudo rm -rf /var/lib/docker /var/lib/containerd
note

The /var/lib/docker directory contains all containers, volumes, and images. If you do not remove this directory, your old container data will remain on disk and continue consuming space.

tip

Backup Before Removing: If you'd like to back up your volumes before uninstalling Docker, you can export them with a simple command:

docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/myvolume.tar.gz /data

This will store the contents of myvolume in your current directory as a compressed archive.
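To restore that backup later into a fresh volume, unpack the archive the same way. Because the backup command above stored files under the data/ path inside the archive, extracting at / puts them back in place:

```shell
# Recreate the volume, then unpack the archive into it
docker volume create myvolume
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine \
  tar xzf /backup/myvolume.tar.gz -C /
```

Any container that later mounts myvolume will see the restored data.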

What are the Best Practices for Running Containers on Linux?

Docker containers are quick and easy to use, but improper management can lead to security risks, performance issues, and maintainability problems. The following best practices should be followed when running containers on Linux.

  • Avoid Running as Root: Using the --user option, always attempt to run containers as a non-root user. There are serious security risks to the host system when containers are run as root.

    docker run --user 1000:1000 myapp
  • Limit Exposed Ports: Only expose the ports you actually need with the -p flag. For example, map only what is necessary:

    -p 8080:80
  • Use Trusted Images: Prefer official images (those under the library/ namespace) or images from verified publishers on Docker Hub.

  • Keep Images Updated: Outdated images may contain critical vulnerabilities. Regularly pull the latest versions:

    docker pull myimage:latest
  • Use Volumes for Persistent Data: For databases or any data you want to preserve, always use Docker volumes. This ensures your data survives even if the container is removed.

  • Isolate Networks: Create dedicated networks for each group of applications. This helps control and secure container-to-container traffic.

  • Clean Up Unused Resources: Remove unused containers, images, and volumes regularly to free up space and maintain performance:

    docker system prune -a
  • Use Orchestration Tools: For multi-service applications, don’t rely on single containers. Instead, use Docker Compose or Kubernetes to define and manage services.

  • Centralized Logging: Forward container logs to centralized logging systems such as ELK stack or Prometheus/Grafana for easier monitoring and debugging.

  • Set Resource Limits: Prevent containers from consuming unlimited system resources by setting CPU and memory limits:

    docker run --memory="512m" --cpus="1.0" myapp

Adhering to these best practices keeps your containerized environments secure, optimized, and maintainable, which helps guarantee stability and scalability in production.
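Several of these practices can be combined in a single invocation. A sketch, assuming a hypothetical image named myapp that serves on port 80 and a host UID/GID of 1000:

```shell
# Dedicated network for this application group
docker network create appnet

# Non-root user, resource limits, a single published port,
# and a named volume for data that must outlive the container
docker run -d --name myapp \
  --user 1000:1000 \
  --memory="512m" --cpus="1.0" \
  --network appnet \
  -p 8080:80 \
  -v appdata:/var/lib/app \
  myapp
```

The volume path /var/lib/app is illustrative; mount whichever directory your application writes persistent data to.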

How do you Use Docker Compose on Linux?

Docker Compose allows you to manage multi-container applications using a single YAML configuration file. This makes it ideal for microservices or any setup where multiple services need to run together.

  1. Install Docker Compose: On Ubuntu/Debian, install Compose with:

    sudo apt install docker-compose -y

    Alternatively, you can install it as a binary:

    sudo curl -L "https://github.com/docker/compose/releases/download/v2.21.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose

    The binary installation lets you pin a specific release (v2.21.0 in the command above), which is useful when your distro's repositories ship an outdated version. On recent Docker Engine installations, Compose is also available as the docker compose plugin (package docker-compose-plugin).

  2. Create a docker-compose.yml File: For example, let’s define a simple Nginx + MySQL setup:

    version: '3'
    services:
      web:
        image: nginx
        ports:
          - "8080:80"
      db:
        image: mysql
        environment:
          MYSQL_ROOT_PASSWORD: example

    In this file:

    • web runs an Nginx container, mapped to port 8080 on the host.

    • db runs a MySQL container with the root password set to example.

  3. Start the Services: Now, bring up all services defined in the YAML file:

    docker-compose up -d

    Check the status of the running services:

    docker-compose ps

Multiple containers can be launched simultaneously with a single command (docker-compose up), which is very practical for microservices architectures and development environments.

To put it briefly, Docker Compose makes it easier to define, execute, and manage multi-container applications by streamlining container orchestration on a single host.
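Two more commands round out the everyday Compose workflow (the service name web matches the example file above):

```shell
# Follow the logs of a single service
docker-compose logs -f web

# Stop and remove the containers and the default network
# (add -v to also remove named volumes declared in the file)
docker-compose down
```

Running `docker-compose down` leaves your images and named volumes intact by default, so bringing the stack back up later is fast.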

How do you Manage Docker Volumes and Networks on Linux?

Working with Docker requires managing networks and volumes. Networks facilitate the organization and security of container communication, while volumes enable data persistence.

  • List Volumes: To see all existing volumes on your system:

    docker volume ls
  • Inspect a Volume: To view detailed information about a specific volume:

    docker volume inspect myvolume
  • Remove a Volume: To delete a specific volume:

    docker volume rm myvolume
  • Prune Unused Volumes: To clean up volumes that are no longer in use:

    docker volume prune

    (docker system prune --volumes is broader: it also removes stopped containers, unused networks, and dangling images.)

    Volumes are the most reliable way to make container data persistent, ensuring your data isn’t lost when containers are removed.

  • List Networks: To display all Docker networks on your system:

    docker network ls
  • Create a New Network: To create a custom network for your containers:

    docker network create mynetwork
  • Inspect a Network: To get detailed information about a network:

    docker network inspect mynetwork
  • Remove a Network: To delete a custom network:

    docker network rm mynetwork

By creating isolated networks, you keep container traffic organized and secure, reducing the chance of conflicts and improving your overall system architecture.

Your Docker environment will stay persistent, secure, and well-structured if you manage volumes and networks well.
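The two features come together when a container uses both. A short sketch (volume, network, and container names are illustrative):

```shell
# Persistent storage plus an isolated network for one container
docker volume create mydata
docker network create mynetwork

docker run -d --name web \
  --network mynetwork \
  -v mydata:/usr/share/nginx/html \
  nginx

# Containers on the same user-defined network resolve each other by name
docker run --rm --network mynetwork alpine ping -c 1 web
```

Name-based resolution works on user-defined networks but not on the default bridge network, which is one more reason to create dedicated networks per application group.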

Can Docker be Integrated with SASE for Secure Container Access?

Yes. To enable secure access to container-based applications, Docker can be integrated with SASE (Secure Access Service Edge) architectures. Strong segmentation, identity-driven access, and adaptable connectivity options are all guaranteed by this integration. SASE integration methods are listed below.

  • Zero Trust & Identity-Based Access: Users must go through identity verification and authentication before they can access containerized services. By default, no user or device is trusted under this Zero Trust model.

  • Micro-Segmentation: Container traffic can be separated into small, discrete segments, so if one service is compromised, attackers are prevented from moving laterally between containers.

  • VPN / SD-WAN Integration: Container services can be protected by running them through SASE-compatible VPN or SD-WAN solutions, ensuring encrypted and policy-enforced connectivity.

This kind of integration is particularly beneficial in sectors like government, healthcare, and finance that have stringent compliance requirements. Organizations can take advantage of flexible access control and identity-based security by integrating Docker with SASE, which makes containerized workloads safe and effective.

Installing Docker on Linux gives DevOps teams, developers, and system administrators a strong and dependable method for creating, deploying, and executing applications in separate environments. Docker provides a consistent experience across platforms, regardless of the distribution you are using—Ubuntu, Debian, CentOS, Fedora, Rocky, Amazon Linux, RHEL, or any other supported one.

You can make sure that your containerized apps stay reliable, effective, and safe by adhering to the installation instructions, checking the service, and using best practices for security, performance, and resource management.

Docker is a key component of contemporary software development because of its versatility, which spans from single-container configurations to intricate multi-service environments controlled by Docker Compose or Kubernetes. Docker on Linux becomes more than just a tool with the right setup and upkeep; it becomes a solid basis for applications that are scalable and ready for production.
