Docker
How it Works
Containers
They are processes that run on top of the host OS and are isolated by Namespaces, so each container believes it is the only set of processes running.
CGroups
limit and isolate the computational resources (CPU, memory, I/O) that each isolated process (Container) can use.
Lastly, the OverlayFS (Overlay File System)
allows containers to work in layers, so they don't need to carry entire chunks of the host OS, making them much lighter than Virtual Machines, for instance.
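As a quick illustration (the limits and image name are arbitrary), CGroup limits are what's behind run-time flags like these:
Ex.: docker run --cpus 1 --memory 256m "image-name"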
Advantages
Low impact on the host OS, very fast, and minimal disk space usage.
Very easy to share, rebuild and distribute.
Encapsulate apps/environments instead of "whole machines".
Images
They are not snapshots of a container. Images are built in layers: layers of dependencies.
Images are immutable. (This means that when a container dies, everything written inside it is lost)
This helps maintainability, since problems in a specific layer don't affect the lower layers and don't necessarily affect the upper layers.
After a change in one of the layers, a re-build only rebuilds the changed layer and the layers above it. The lower layers are cached, which speeds up the building process.
These images are stored in an Image Registry, from which you can pull them.
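For instance (the nginx image is just an illustration), you can pull an image from the default registry (Docker Hub) and list its layers:
docker pull nginx:latest
docker history nginx:latest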
Image
Usually an Image has a name and an optional tag, ex.: my-app-image:v1 (repository names must be lowercase).
You can generate an image either through a Dockerfile
or by committing changes made in a running container.
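A minimal sketch of both approaches (the file, image and script names are illustrative):
# Dockerfile: each instruction creates a layer
FROM alpine:3.19
COPY app.sh /app/app.sh
CMD ["sh", "/app/app.sh"]
# Build it into an image:
docker build -t my-app-image:v1 .
# Or capture the current state of a running container as a new image:
docker commit "container-id" my-app-image:v2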
Docker Host
Docker has a Host, which is where containers will run. The Host runs a Daemon in the background that provides the Docker API.
The Docker Host also has a Cache, which stores images pulled from the Docker Registry and locally built images.
It handles Volumes, which persist file changes made inside containers, since Images are immutable.
It also handles Networks, for communication between containers.
Docker Client
To talk to the Docker Host you need a Docker Client, which usually is the docker command you call in a terminal.
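To check that the Client can reach the Daemon (output varies by installation):
docker version
# The "Client" and "Server" sections confirm that both the CLI and the Daemon are reachable.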
Installing Docker
Docker Desktop
Docker Desktop is a program that also installs Docker Engine.
It can be used in Windows, Linux or Mac. (For Mac users it is the only way of using Docker)
Docker Engine
On Windows or Linux you may install only Docker Engine, which is much lighter.
On Windows you must first install WSL 2 (Windows Subsystem for Linux), since Docker Engine cannot be installed directly on Windows.
More info on WSL 2.
Installing Docker Engine on WSL Ubuntu distro
# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Give your current user permission to run Docker, so that you don't always have to use the root user.
sudo usermod -aG docker $USER
Then, from a Windows terminal, restart the distro so the group change takes effect.
wsl --terminate "name"
After it restarts you can start Docker with sudo service docker start.
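To verify the installation (hello-world is Docker's official test image):
docker run hello-world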
Auto starting Docker on WSL
In recent WSL versions
Recent versions of WSL support systemd, so you just need to activate it in /etc/wsl.conf.
[boot]
systemd=true
In older WSL versions
In older versions you will need to set service docker start as a boot command in /etc/wsl.conf, so it runs every time the distro starts.
[boot]
command = service docker start
Docker Commands
docker attach [OPTIONS] "container-id"
Attach local STDIN, STDOUT and STDERR to a running container.
--no-stdin
Don't attach STDIN.
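For example, to watch a running container's output without forwarding your keyboard input to it:
Ex.: docker attach --no-stdin "container-id"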
docker exec [OPTIONS] "container-id" COMMAND [ARG...]
Execute commands inside a running container.
Commands run in the default working directory of the container.
-d
To run commands in the background. (Without blocking the terminal)
-it
To attach the current terminal to the container. (As if you were opening a terminal inside the container)
-i keeps STDIN open; -t allocates a pseudo-TTY (Pseudo Terminal).
Ex.: docker exec -it "container-id" bash
--privileged
Give extended privileges to the command.
-w
To set the working dir inside the container.
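For example (the /app path and file name are illustrative), to run a command in the background from a specific directory inside the container:
Ex.: docker exec -d -w /app "container-id" touch ready.txt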
docker logs [OPTIONS] "container-id"
Batch-retrieves the logs present at the time of execution.
--details
Show extra details provided to logs.
--follow
Follow log output.
--since
Show logs since a timestamp (ex.: 2013-01-02T13:23:37Z) or relative time (ex.: 10m).
--tail
Number of lines to show from the end of the logs.
--timestamps
Show timestamps.
--until
Show logs before a timestamp (ex.: 2013-01-02T13:23:37Z) or relative time (ex.: 10m).
docker container prune [OPTIONS]
Removes all stopped containers.
docker ps [OPTIONS]
Shows the running containers and info about them.
-a
Shows all active and inactive containers.
-q
Only display container IDs.
-s
Display total file sizes.
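The -q flag is handy for composing commands; for example, to stop every running container:
Ex.: docker stop $(docker ps -q)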
docker restart [OPTIONS] ["container-id"...]
Restart one or more containers.
-s
Signal to send to the container.
-t
Seconds to wait before killing the container.
docker rm [OPTIONS] ["container-id"...]
Destroy one or more containers.
Networks and volumes used by the container are not removed automatically: use the -v flag for its anonymous volumes, and docker network prune / docker volume prune to clean up unused networks and volumes.
-f
Force the removal.
-l
Remove the specified link.
-v
Remove anonymous volumes associated with the container.
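For example, to force-remove a running container together with its anonymous volumes:
Ex.: docker rm -f -v "container-id"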
docker run [OPTIONS] "image-name:tag" [COMMAND] [ARG...]
Create and run a new container from an image.
It can also run a command in a new container, pulling the image if needed and starting the container.
If no tag is specified on the image-name, latest is used as default.
-a
Attach to STDIN, STDOUT or STDERR.
--cpus
Number of CPUs.
-d
To run the container in detached mode. (The terminal won't get blocked executing the container)
-e or --env
Set environment variables.
--env-file
Specify a file (ex.: .env) with environment variables to be used.
-i
Keep STDIN open.
--ip
IPv4 address.
--memory
Memory limit.
--mount
Attach a filesystem to the container.
For bind mounts, non-existent files or folders on the host generate an error.
Ex.: --mount type=bind,source=/host/path,target=/container/path
--name
Specify the name of the container.
--network
Connect a container to a network.
-p
Specify the port map of the container.
Ex.: -p hostPort:containerPort
--restart
Restart policy to apply when a container exits.
--rm
To auto destroy the container and its associated anonymous volumes after it stops.
-t
Allocate a pseudo-TTY (Pseudo Terminal)
-v
Mount volumes to the container. (Prefer using --mount)
Non-existent files or folders on the host are created by Docker.
Ex.: -v /host/path:/container/path
-w
Specify working directory inside the container.
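Putting some of these flags together (the container name, port, environment variable and nginx image are illustrative):
docker run -d --rm --name my-web -p 8080:80 -e MY_ENV=dev nginx:latest
# The web server is then reachable on the host at localhost:8080, and the container cleans itself up when stopped.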
docker start [OPTIONS] ["container-id"...]
To start or re-start one or more containers that are not running.
-a
Attach to STDOUT or STDERR, and forward signals.
-i
Attach the container's STDIN.
docker stop [OPTIONS] "container-id"
To stop the specified containers.
Containers are not destroyed automatically after they are stopped. Keep that in mind so you don't bloat your hard drive.
-s
Signal to send to the container.
-t
Seconds to wait before killing the container.
Container's Data Storage
By default all files created inside a container are stored in a writable container layer.
So:
Data doesn't persist when the container is destroyed, and it can be difficult to get data out of the container.
A container's writable layer is tightly coupled to the host machine where the container is running.
Writing into a container's writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
No matter which type of mount you choose to use, the data looks the same from within the container.
Bind Mounts
Are a way of mounting existing files or folders from the host into the container, to persist data.
Are more limited than Volumes.
They can be shared between containers.
They are managed by you. (Docker doesn't manage them)
They depend on the directory structure and OS of the host machine.
Bind mounts allow write access to files on the host by default.
One side effect of using bind mounts is that you can change the host filesystem via processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability which can have security implications, including impacting non-Docker processes on the host system.
Starting containers with bind mounts
This can be done using the -v or --mount parameters.
docker run ... -v "$HOME/dev":/root/dev
docker run ... --mount type=bind,source="$HOME/dev",target=/root/dev
Readonly bind mounts
docker run ... --mount type=bind,source="$HOME/dev",target=/root/dev,readonly
-v vs --mount
The major difference: with -v, if a specified host file or folder doesn't exist, Docker will create it.
With --mount, Docker will not create it; an error is raised instead.
Volumes
Are a mechanism for persisting data in containers that is completely managed by Docker.
Easier to backup and migrate.
More safely shared among multiple containers.
Lets you store volumes on remote hosts or in the cloud, and encrypt volume contents.
Volumes don't increase container size, because they don't persist data in the container's writable layer.
The container's life cycle doesn't interfere with volumes.
docker volume COMMAND [OPTIONS]
Creating a Named volume
Creating volumes with docker volume create allows you to use them in multiple projects.
Created volumes are placed under /var/lib/docker/volumes on the host machine. (Use docker volume inspect to check a volume's path)
Set labels for the volumes with:
docker volume create a-new-volume --label label-key=label-value --label ...
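For example, creating, listing and inspecting a named volume (reusing the a-new-volume name from above):
docker volume create a-new-volume
docker volume ls
docker volume inspect a-new-volume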
Starting containers with volumes
If you start a container with a volume that doesn't exist, Docker will create one for you.
docker run --name mycontainer --mount source=vol1,target=/app "image-name"
You can verify the volume was created by looking at the Mounts section in the output of:
docker inspect mycontainer
Using volumes as Readonly
docker run ... -v vol1:/app:ro
docker run ... --mount source=vol1,target=/app,readonly
-v vs --mount
For volumes there is no practical difference between them; both create the volume if it doesn't exist.
Anonymous vs Named volumes
Anonymous volumes
Unnamed volumes; they are given a random name that is guaranteed to be unique within a Docker host.
They persist unless the --rm flag was given when creating the container.
They are not reused or shared between containers automatically.
To share them between containers, you must mount them using their random volume name.
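A sketch of reusing an anonymous volume (the /app path and image name are illustrative):
# Anonymous volumes show up in docker volume ls with long random names:
docker volume ls
# Reuse one in another container by passing that random name explicitly:
docker run -v <random-volume-name>:/app "image-name"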
Named volumes
Are volumes to which the user gave a name.
Easily shared between containers.
tmpfs mounts
Are temporary, and only persisted in the host's memory. When the container stops, the files written there are not persisted.
Useful for temporary or sensitive files that you don't want to persist either on the host or in the container's writable layer.
Not shareable between containers.
Only available when running Docker on Linux.
Starting containers with tmpfs
docker run --name mycontainer --mount type=tmpfs,target=/app/tmp "image-name"
Container's Network
Each Docker Network (Default or Created) will have different IP ranges, so these networks are isolated.
Containers can communicate with the web out of the box.
Create Docker Networks with
docker network create "network-name"
You can use different drivers when creating a Network with the --driver flag.
docker network create "network-name" --driver bridge
Delete unused networks with
docker network prune
Connect running container to a network with
docker network connect "network-name" "container-id"
Container-Host communication
You can use a special domain, host.docker.internal, which Docker resolves to the Host's IP address.
This can be used in bridge or host modes.
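On Docker Desktop this name works out of the box; on plain Docker Engine on Linux you usually have to add it yourself (the port is illustrative, and curl is assumed to exist in the image):
docker run --add-host=host.docker.internal:host-gateway "image-name"
# From inside that container, a service listening on the host is then reachable, ex.:
curl http://host.docker.internal:3000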
bridge
Useful for Container-Container communication.
Containers created without an assigned network are placed in Docker's default bridge network.
Containers in this default network can communicate with each other ONLY by IP address.
Container-Container communication
The best way for Container-Container communication is to create a network for them, and set it with the --network "network-name" flag.
docker run --network "network-name" "image-name"
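A sketch of that workflow (the my-net, web, nginx and alpine names are illustrative); containers on the same user-defined network can reach each other by container name:
docker network create my-net
docker run -d --name web --network my-net nginx
# From another container on the same network, "web" resolves by name:
docker run --rm --network my-net alpine wget -qO- http://web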
host
Shares the Host's network stack with the Container, so the container's ports are opened directly on the Host.
Useful for communication between Host and Container.
This network type won't work as expected if Docker is running inside a Virtual Machine, since the Docker Host won't be your machine but the virtual machine.
(Ex.: it won't work as expected on Mac, or with Docker inside Hyper-V, etc.)
Host-Container communication
You may access the container's services at localhost:<port>, using the port the application inside the container listens on (no port mapping is needed).
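For example (nginx is illustrative), with host networking the web server becomes reachable on the host without any -p mapping:
docker run -d --network host nginx
curl http://localhost:80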
overlay
Useful when you need Docker daemons on different machines to communicate as if they were on the same network.
For example, when working in Swarm mode.
none
The container is totally isolated.