You can enjoy the original version of this article on my website, protsenko.dev.
Hi! In this article, I'm sharing 29 collected Docker best practices to make your images smaller, more secure, and faster to build. These practices cover security, maintainability, and reproducibility. The guide is based on my experience creating the Docker Scanner IntelliJ IDEA plugin, and almost all of the practices are covered by the scanner. It also includes Kubernetes Security Scanner features.
If this project or article has been helpful to you, please consider giving it a ⭐ on GitHub to help others discover it.
Docker image size & maintainability
1 Always pin an image version
When you do not specify a version tag, Docker uses the latest tag by default. This makes builds unpredictable: you do not know which image version you will download. If an attacker compromises the image author, they can push a harmful image, and new image updates may also break your application. This is one of the most important Docker best practices, and a security best practice as well.
That was the first inspection that I implemented in the Docker Scanner plugin. You can try it and see how flawlessly it works.
# problem
ARG version=latest
FROM ubuntu as u1
FROM ubuntu:latest as u2
FROM ubuntu:$version as u3
FROM u3
USER nobody
# fix
FROM ubuntu:noble as u1
FROM ubuntu@sha256:72297848456d5d37d1262630108ab308d3e9ec7ed1c3286a32fe09856619a782 as u2
FROM u2
USER nobody
2 Avoid using dist-upgrade in package management
Using dist-upgrade can upgrade the system to a new major release. This behavior may break your Dockerfile by introducing unexpected changes. Dockerfiles should use controlled updates to maintain stability.
# problem
FROM ubuntu:20.04
RUN apt-get dist-upgrade
# fix: move to a newer base image instead of running dist-upgrade
3 Use multi-stage builds to reduce image size
Multi-stage builds allow you to use multiple FROM statements in your Dockerfile. You can copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. Reducing image size is a broad area for applying Docker best practices, and the next several practices focus on it.
This approach dramatically reduces the final image size by excluding build tools, source code, and intermediate files that are only needed during the build process. It also improves Docker security by not shipping development dependencies and build tools in production images.
# problem
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
# fix
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production && npm cache clean --force
COPY --from=builder /app/dist ./dist
USER nobody
EXPOSE 3000
CMD ["node", "dist/index.js"]
4 Consolidate multiple RUN instructions
Each RUN instruction creates a new image layer, which increases the final image size and build time. Consolidating these commands into a single RUN instruction improves efficiency and simplifies maintenance.
# problem
FROM ubuntu:20.04
RUN apt-get -y --no-install-recommends install netcat
RUN apt-get clean
# fix
FROM ubuntu:20.04
RUN apt-get -y --no-install-recommends install netcat && apt-get clean
5 Always clean the package manager’s cache
There are plenty of package managers: apk, dnf, yum, zypper, pip. All of them cache data, which helps keep them performant, but that cache is absolutely redundant in a container. Cached data stays in Docker image layers, increasing their size.
You could easily spot these issues with the Docker Scanner plugin.
# dnf
## problem
FROM fedora:40
RUN dnf install -y httpd
## fix
FROM fedora:40
RUN dnf install -y httpd && \
dnf clean all && \
rm -rf /var/cache/dnf
# yum
## problem
FROM centos:7
RUN yum install -y httpd
## fix
FROM centos:7
RUN yum install -y httpd && yum clean all
# zypper
## problem
FROM opensuse/leap:15.3
RUN zypper install -y httpd
## fix
FROM opensuse/leap:15.3
RUN zypper install -y httpd && zypper clean
# apk
## problem
FROM alpine:latest
RUN apk update && \
apk add curl
## fix
FROM alpine:latest
RUN apk add --no-cache curl
# pip
## problem
FROM python:3.9
RUN pip install django
RUN pip install -r requirements.txt
## fix
FROM python:3.9
RUN pip install --no-cache-dir django
RUN pip install --no-cache-dir -r requirements.txt
6 Always combine the package manager update command with the install
You may notice this is almost the same as consolidating RUN instructions, and you’re right, but it matters even when layer count is not a concern.
Running the package manager update command alone updates the package list in a separate layer. This updated list may not be used if the installation occurs in another RUN statement. Combining the update and install commands in one RUN ensures that the package installation uses the latest package data.
This practice applies to the following package managers: apt-get, apt, yum, apk, dnf, and zypper.
# problem
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y --no-install-recommends build-essential
# fix
FROM ubuntu:20.04
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
7 Always use --no-install-recommends with apt-get
When you run apt-get install without --no-install-recommends, apt-get installs recommended extra packages by default. This increases the image size and may add unwanted dependencies. Using --no-install-recommends installs only the essential packages, reducing the image size and potential security risks.
# problem
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y build-essential
# fix
FROM ubuntu:20.04
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
8 Always use -l with useradd to avoid high-UID bloat
When the UID is large (tens or hundreds of thousands), useradd without -l writes a record at a file offset derived from the UID in /var/log/lastlog and /var/log/faillog. Because those databases use UID-based indexing, a large UID leaves a large sparse hole and immediately inflates the files’ logical (apparent) size. In layered container builds, those sparse blocks often get materialized as real bytes, and you lose tens of MB for nothing. Skip those databases by using -l (--no-log-init).
# problem
FROM ubuntu:20.04
RUN useradd -u 198401 nordcoderd
USER nordcoderd
# fix
FROM ubuntu:20.04
RUN useradd -l -u 198401 nordcoderd
USER nordcoderd
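Outside Docker, the sparse-file effect is easy to reproduce. The sketch below (POSIX shell with GNU coreutils; the 292-byte record size is my assumption based on glibc's lastlog entry format, and the file path is illustrative) writes one record at the slot UID 198401 would occupy and compares apparent size with actual disk usage:

```shell
# Writing a single record at a UID-derived offset leaves a huge hole
# before it, so the apparent size dwarfs the real disk usage.
demo=/tmp/lastlog-demo
rm -f "$demo"
dd if=/dev/zero of="$demo" bs=292 seek=198401 count=1 conv=notrunc 2>/dev/null
echo "apparent size: $(stat -c %s "$demo") bytes"
echo "disk usage:    $(( $(stat -c %b "$demo") * 512 )) bytes"
```

On a filesystem that supports sparse files the hole costs nothing, but tools that copy image layers byte by byte may materialize it, which is exactly the bloat -l avoids.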
Best Practices to Keep Docker Files Clean
9 Always choose what to use: wget or curl
Both wget and curl fetch remote files. Using both tools adds unnecessary redundancy and may pull in extra packages. Standardize on one tool to keep the Dockerfile clear and efficient; settling on a single tool is yet another of Docker’s best practices.
# problem
FROM ubuntu:20.04
RUN wget http://example.com/script.sh -O script.sh && curl -sSL http://example.com/script.sh -o script2.sh
# fix
FROM ubuntu:20.04
RUN curl -sSL http://example.com/script.sh -o script.sh
10 Use absolute paths for WORKDIR
For clarity and reliability, you should always use absolute paths for your WORKDIR. Relative paths may cause unexpected behavior during builds.
# problem
FROM ubuntu:20.04
WORKDIR app
# fix
FROM ubuntu:20.04
WORKDIR /app
11 Exclude unnecessary files with .dockerignore
When building Docker images, everything in the build context (the directory you run docker build from) is sent to the Docker daemon. If you don’t exclude unnecessary files (like .git, logs, node_modules, or temporary build artifacts), the build context becomes bloated, slowing down image builds and potentially leaking sensitive data into the image. The .dockerignore file works like .gitignore, preventing unwanted files from being included in the context and final image layers.
Among the Docker security best practices, one key guideline is to ensure that no sensitive data, such as .env files, is copied into the image. Unfortunately, this rule isn’t bundled in the Docker Scanner plugin yet.
# problem
FROM node:18
WORKDIR /app
COPY . .
RUN npm install && npm run build
CMD ["node", "dist/index.js"]
## In this case, large local 'node_modules' or '.git' folder could be copied unnecessarily.
# fix
## Add a `.dockerignore` file in the project root:
.git
*.log
node_modules
Dockerfile
.dockerignore
## Then use your Dockerfile cleanly:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production
COPY . .
CMD ["node", "dist/index.js"]
12 Use WORKDIR instead of the cd command
Using multiple RUN cd ... && ... instructions creates clutter. These commands are difficult to troubleshoot and maintain. Instead, use the WORKDIR instruction to set the working directory consistently. This approach simplifies the Dockerfile and improves readability, which is fundamental for Docker best practices.
# problem
FROM ubuntu:20.04
RUN cd /app && make build && make test
# fix
FROM ubuntu:20.04
WORKDIR /app
RUN make build && make test
13 Use JSON notation for CMD and ENTRYPOINT
Docker supports two formats for CMD and ENTRYPOINT: shell form and JSON array (exec) form. Shell form can misinterpret arguments and cause errors. JSON notation parses each argument correctly and ensures consistent behavior.
# problem
FROM ubuntu:20.04
CMD echo "Hello World"
ENTRYPOINT /usr/local/bin/start-app
# fix
FROM ubuntu:20.04
CMD ["echo", "Hello World"]
ENTRYPOINT ["/usr/local/bin/start-app"]
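You can see why outside Docker: shell form is equivalent to wrapping the command in sh -c, which re-parses the string, while the exec (JSON) form passes each argument through verbatim. A minimal sketch of the difference:

```shell
# Shell-form analogue: sh re-parses the line, so the run of spaces
# becomes a single word separator and the argument is altered.
sh -c 'echo Hello   World'
# Exec-form analogue: the argument reaches the program exactly as written.
printf '%s\n' 'Hello   World'
```

The same sh -c wrapping is why a shell-form ENTRYPOINT leaves /bin/sh as PID 1, which does not forward signals such as SIGTERM to your application.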
14 Use apt-get or apt-cache instead of apt
apt is designed for interactive use and may prompt for input, which is not suitable for automated Docker builds. Using apt may lead to errors or unexpected output. Instead, use apt-get or apt-cache for non-interactive package management.
# problem
FROM ubuntu:20.04
RUN apt update && apt install -y build-essential
# fix
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
15 Use the package manager’s auto-confirm flag -y
Package managers like apt-get, yum, dnf, and zypper prompt for confirmation if the -y flag is missing. Without this flag, installations may pause for manual input and block automated builds. You should care about the reproducibility of the image, which is very valuable in Docker best practices.
# problem
FROM ubuntu:20.04
RUN apt-get update && apt-get install --no-install-recommends build-essential
# fix
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
Docker Security Best Practices
16 Avoid default, root, or dynamic user
Running containers with an undefined user or with the root
user increases the attack surface. Dynamic assignment can override the intended user and make the container vulnerable. Note: this check only considers the final user specified in the Dockerfile. It is acceptable to run build operations as root if you later switch to a dedicated non-root user.
These best practices apply to Docker security and Kubernetes Security. The Docker Scanner plugin fully covers the following examples.
# problem
## Example 1: Implicitly using the default user (can be overridden)
FROM ubuntu:20.04
RUN whoami
## Example 2: Explicitly setting the user to root
FROM ubuntu:20.04
USER root
RUN whoami
## Example 3: Dynamic user assignment via a build argument (risky if overridden)
FROM ubuntu:20.04
ARG APP_USER
USER $APP_USER
RUN whoami
# fix
FROM ubuntu:20.04
## Create a dedicated non-root user and group
RUN groupadd --system app && useradd --system --create-home --gid app app
## Switch to the non-root user
USER app
RUN whoami
17 Avoid exposing the SSH port
Exposing port 22 in your Dockerfile may allow unauthorized SSH access to a container that usually does not need SSH at all. Removing this exposure reduces the attack surface. This practice also applies to Kubernetes security: exposing something sensitive is a bad idea everywhere.
Note: EXPOSE doesn’t actually publish the port; it only documents which port should be exposed. However, the port may still be reachable by other containers on the internal Docker network.
# problem
FROM ubuntu:20.04
EXPOSE 22
# fix
FROM ubuntu:20.04
# EXPOSE 22 removed to prevent unauthorized SSH access.
18 Avoid overriding ARG variables in RUN commands
ARG variables are set at build time, and users can override them with the --build-arg flag. Critical commands may change if the ARG is altered. This behavior can introduce security risks or unexpected outcomes, which is why avoiding it is a Docker security best practice.
# problem
FROM ubuntu:20.04
ARG INSTALL_PACKAGE=build-essential
RUN apt-get update && apt-get install -y $INSTALL_PACKAGE
# fix
FROM ubuntu:20.04
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
19 Avoid pipe curl to bash (curl|bash)
Using curl or wget with a pipe (|) or redirection (>) to execute scripts directly poses a security risk. This pattern, often referred to as curl | bash, should be approached with caution to avoid potential vulnerabilities. Executing something downloaded from a remote site without validation is among the worst practices and leads to Docker security issues.
# problem
FROM ubuntu:20.04
RUN curl -sSL http://example.com/script.sh | sh
# fix
FROM ubuntu:20.04
RUN curl -sSL http://example.com/script.sh -o script.sh
# run only after the downloaded script has been verified
# or better, keep the file in the build folder to avoid retrieving the remote file each time
20 Avoid storing secrets in ENV keys
Storing secrets in ENV keys puts them directly into the image layers. Attackers can extract these secrets if they gain access to the image. Do not hardcode sensitive data in your Dockerfile. This best practice pairs with an earlier rule: use .dockerignore so sensitive files never end up in the Docker image.
# problem
FROM ubuntu:20.04
ENV PASSWORD=supersecret123
# fix
FROM ubuntu:20.04
# Remove sensitive data from the Dockerfile.
# Inject PASSWORD at runtime using secure methods.
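As one hedged example of such a secure method, BuildKit supports build-time secret mounts: the value is visible only during the RUN step and never lands in a layer. The secret id app_password and the file path below are illustrative, not from the original article:

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:20.04
# The secret is mounted at /run/secrets/app_password for this step only;
# it is not written to any image layer and does not show up in `docker history`.
RUN --mount=type=secret,id=app_password \
    PASSWORD="$(cat /run/secrets/app_password)" && \
    echo "secret used at build time only"
```

Such a build would be invoked with something like docker build --secret id=app_password,src=./password.txt . ; for runtime secrets, prefer injecting them via your orchestrator (Docker or Kubernetes secrets) rather than baking them into the image.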
21 Always prefer to use COPY instead of ADD
ADD has extra features, such as extracting tar files and handling remote URLs. These features may cause unintended effects if you only need to copy files. Use COPY to ensure simple and predictable behavior. Depending on remote scripts or archives makes ADD non-reproducible and, in the worst case, a source of security issues, which is why you should avoid it.
# problem
FROM ubuntu:20.04
ADD ./app /app
# fix
FROM ubuntu:20.04
COPY ./app /app
22 Always use ADD with checksum verification when downloading a file
I know, sometimes you have to use ADD. In that case, use --checksum to verify the remote source. It makes your Dockerfile more secure and the build more reproducible. With checksum verification you can be sure the image was built with a verified file, which reduces the attack surface: if the remote file changes, the build fails.
# problem
FROM ubuntu:20.04
ADD https://mirrors.edge.kernel.org/pub/linux/kernel/Historic/linux-0.01.tar.gz /
# fix
FROM ubuntu:20.04
ADD --checksum=sha256:24454f830cdb571e2c4ad15481119c43b3cafd48dd869a9b2945d1036d1dc68d https://mirrors.edge.kernel.org/pub/linux/kernel/Historic/linux-0.01.tar.gz /
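To get the digest to pin, hash the file once with sha256sum and paste the first field after --checksum=sha256:. The artifact below is a local stand-in I created for illustration, not the kernel tarball from the example:

```shell
# Create a stand-in artifact and compute the digest you would pin in ADD.
printf 'example artifact contents\n' > artifact.tar.gz
sha256sum artifact.tar.gz   # first field goes after --checksum=sha256:
```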
23 Avoid using RUN with sudo
Docker executes RUN commands as the user specified by the USER directive, often root by default. Including sudo in RUN commands is therefore redundant and may cause unexpected outcomes. Avoid sudo to keep builds clean and predictable.
# problem
FROM ubuntu:20.04
RUN sudo apt-get update && sudo apt-get install -y --no-install-recommends build-essential
# fix
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
General Docker Best Practices
24 Avoid the deprecated MAINTAINER instruction
MAINTAINER has been deprecated since Docker 1.13.0. Use LABEL to provide maintainer information. This change improves clarity and maintainability.
# problem
FROM ubuntu:20.04
MAINTAINER John Doe
# fix
FROM ubuntu:20.04
LABEL org.opencontainers.image.authors="John Doe"
25 Ensure trailing slash for COPY commands with multiple arguments
When you copy multiple files, the destination must be a directory. If the destination does not end with a slash, Docker may misinterpret it. This mistake leads to build errors or unexpected file placements.
# problem
FROM ubuntu:20.04
COPY file1.txt file2.txt dir
# fix
FROM ubuntu:20.04
COPY file1.txt file2.txt dir/
26 Avoid duplicate aliases in FROM instructions
Docker requires each FROM instruction alias to be unique. Duplicate aliases lead to conflicts that stop the build process. They also reduce the clarity and maintainability of the Dockerfile. You would only discover this problem at build time, but it’s better to catch it in the IDE via my open-source Docker Scanner plugin.
# problem
FROM ubuntu:20 as builder
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
FROM node:14 as builder
RUN npm install
# fix
FROM ubuntu:20 as builder-ubuntu
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
FROM node:14 as builder-node
RUN npm install
27 Avoid self-referencing COPY --from instructions
The COPY instruction with the --from flag must refer to a previous build stage. Referencing the current stage alias is invalid because you cannot copy from the image you are currently building.
# problem
FROM ubuntu:20 as builder
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
## Incorrect: referencing the current build stage "builder"
COPY --from=builder /app /app
# fix
FROM ubuntu:20 as builder
RUN apt-get update && apt-get install --no-install-recommends -y build-essential
FROM ubuntu:20
## Correct: referencing the previous build stage "builder"
COPY --from=builder /app /app
28 Avoid multiple CMD, ENTRYPOINT, or HEALTHCHECK instructions
Defining multiple CMD, ENTRYPOINT, or HEALTHCHECK instructions creates confusion: only the final instruction of each kind is used at container startup. This can lead to unintended commands running and makes the Dockerfile harder to maintain. Maintainability is another aspect of Docker best practices.
# problem
FROM ubuntu:20.04
## Multiple CMD instructions: only the last one is used.
CMD ["echo", "Hello"]
CMD ["echo", "World"]
## Multiple ENTRYPOINT instructions: only the last one is used.
ENTRYPOINT ["run-app"]
ENTRYPOINT ["start-app"]
# fix
FROM ubuntu:20.04
# Single CMD instruction ensures the correct command runs.
CMD ["echo", "Hello World"]
# Single ENTRYPOINT instruction ensures the correct entrypoint runs.
ENTRYPOINT ["start-app"]
## Multiple HEALTHCHECK instructions: only the last one is used.
# problem
FROM ubuntu:20.04
HEALTHCHECK --interval=30s CMD curl -f http://localhost/ || exit 1
HEALTHCHECK --interval=30s CMD wget -q -O- http://localhost/ || exit 1
# fix
FROM ubuntu:20.04
HEALTHCHECK --interval=30s CMD curl -f http://localhost/ || exit 1
29 Avoid exposing ports outside the allowed range
The Dockerfile declares an exposed port that falls outside the valid UNIX port range of 0-65535. Using an invalid port number is a misconfiguration that may cause errors during build or runtime.
# problem
FROM ubuntu:20.04
EXPOSE 70000
# fix
FROM ubuntu:20.04
EXPOSE 8080
Final Words after Docker Best Practices
It’s a good idea to integrate these checks into your development and CI/CD process. For the best IDE integration, give the Docker Scanner plugin for JetBrains IDEs a try. It bundles rules targeting these Docker best practices to find security and maintainability problems faster. It is written in pure Kotlin and utilizes the features of the IntelliJ Platform, so on-the-fly checks make shift-left happen. If you are interested in protecting a Kubernetes cluster, read my article “Kubernetes Security: Best Practices to Protect Your Cluster”.
Don’t miss my new articles—follow me on LinkedIn!