Docker
Containers are processes that run on top of the host OS and are isolated by Namespaces, so each one believes it is the only running process.
CGroups limit the computational resources that each isolated process (container) can use.
Finally, the OFS (Overlay File System) lets containers work in layers, which means they don't need to carry entire chunks of the host OS, making them much lighter than Virtual Machines, for instance.
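For a quick illustration of this isolation (a sketch, assuming the small alpine image is available), listing the processes inside a container shows only the container's own processes, not the host's:
Ex.: docker run --rm alpine ps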
Containers have a low impact on the host OS, are very fast and use minimal disk space.
They are very easy to share, re-build and distribute.
They encapsulate apps/environments instead of "whole machines".
Images are not snapshots of a container. They are made of layers, layers of dependencies.
Images are immutable. (This means that when a container dies, everything written inside it is lost.)
This helps with maintainability, since problems in a specific layer don't affect the lower layers and don't necessarily affect the upper layers.
After a change in one of the layers, a re-build only re-builds the changed layer and the layers above it. The lower layers are cached, which speeds up the build process.
These images stay in an Image Registry, from which you can pull them.
Usually an image has a name and an optional tag, e.g. MyAppImage:v1.
You can generate an image either through a Dockerfile or by committing a change in a running container.
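As an illustration of both points, a minimal Dockerfile sketch (assuming a Node.js app; the base image and file names are illustrative), where each instruction creates a layer and unchanged lower layers are reused from cache on re-builds:
FROM node:18-alpine
WORKDIR /app
# Dependency manifest copied first, so this layer and the install layer stay cached while package.json is unchanged
COPY package.json .
RUN npm install
# Source code changes only invalidate the layers from this point on
COPY . .
CMD ["node", "index.js"]
Build it with docker build -t myapp:v1 . or, alternatively, create an image from a running container with docker commit "container-id" myapp:v2.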
Docker has the Host, which is where it runs. The Host runs a Daemon in the background that provides the Docker API.
The Docker Host also has a Cache, which stores images pulled from the Docker Registry and locally built images.
It handles Volumes, to persist file changes made inside containers, since images are immutable.
It also handles Networks, for communication between containers.
To talk to this Docker Host, you need a Docker Client, which is usually what you use when you call docker in a terminal.
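You can see this client/daemon split in practice: docker version prints separate Client and Server (Engine) sections, the latter coming from the Daemon's API.
Ex.: docker version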
Docker Desktop is a program that also installs Docker Engine.
It can be used on Windows, Linux or Mac. (For Mac users it is the only way of using Docker)
On Windows or Linux you may install only the Docker Engine, which is much lighter.
For Windows you must first install WSL2 (Windows Subsystem for Linux), since you cannot install Docker Engine directly on Windows.
More info on WSL 2.
Give permissions to run Docker with your current user, so that you don't have to always use the root user.
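A common way to do this (a sketch; the docker group is created when Docker Engine is installed) is to add your user to the docker group and then log out and back in:
Ex.: sudo usermod -aG docker $USER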
After the distro restarts, you can start Docker with sudo service docker start.
In recent WSL versions
Recent versions of WSL have support for systemd, so you just need to activate it in wsl.conf.
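A minimal sketch of that setting (assuming the file is /etc/wsl.conf inside the distro):
[boot]
systemd=true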
In older WSL versions
In older versions you will need to include sudo service docker start as a command to run every time the distro starts.
After changing wsl.conf, always restart the distro.
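One way to restart it is from Windows PowerShell (note that this shuts down all running distros):
Ex.: wsl --shutdown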
CONTAINER_ID is an auto-generated ID for each container. In some commands you can specify this ID instead of the container name.
And if you use the ID, you don't have to write the entire ID, only a unique prefix of it.
The container NAME will also be auto-generated if not specified (with --name) when creating/running the container.
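For example, assuming a container whose ID starts with 3f2a9c... (the ID here is hypothetical), any unambiguous prefix is enough:
Ex.: docker stop 3f2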
docker attach [OPTIONS] "container-id"
Attach local STDIN, STDOUT and STDERR to a running container.
--no-stdin
Don't attach STDIN.
docker exec [OPTIONS] "container-id" COMMAND [ARG...]
Execute commands inside a running container.
Commands run in the default working directory of the container.
-d
To run commands in the background. (Without blocking the terminal)
-it
To attach the current terminal to the container. (As if you were opening a terminal inside the container)
-i
To keep STDIN open.
-t
To allocate a pseudo-TTY (pseudo terminal).
Ex.: docker exec -it "container-id" bash
--privileged
Give extended privileges to the command.
-w
To set the working dir inside the container.
docker logs [OPTIONS] "container-id"
Batch-retrieves the container's logs present at the time of execution.
--details
Show extra details provided to logs.
--follow
Follow log output.
--since
Show logs since a timestamp (ex.: 2013-01-02T13:23:37Z) or relative (ex.: 10m).
--tail
Number of lines to show from the end of the logs.
--timestamps
Show timestamps.
--until
Show logs before a timestamp (ex.: 2013-01-02T13:23:37Z) or relative (ex.: 10m).
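For instance, to keep following the last 100 log lines of a container:
Ex.: docker logs --follow --tail 100 "container-id"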
docker container prune [OPTIONS]
Removes all stopped containers.
docker ps [OPTIONS]
Shows the running containers and info about them.
-a
Shows all active and inactive containers.
-q
Only display container IDs.
-s
Display total file sizes.
docker restart [OPTIONS] ["container-id"...]
Restart one or more containers.
-s
Signal to send to the container.
-t
Seconds to wait before killing the container.
docker rm [OPTIONS] ["container-id"...]
To destroy one or more containers.
This will also prune the container's networks and volumes, IF they are not used by others.
-f
Force the removal.
-l
Remove the specified link.
-v
Remove anonymous volumes associated with the container.
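A common clean-up idiom combines it with docker ps (this forcefully removes every container, running or not, so use with care):
Ex.: docker rm -f $(docker ps -a -q)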
docker run [OPTIONS] "image-name:tag" [COMMAND] [ARG...]
Create and run a new container from an image.
It can also run a command in a new container, pulling the image if needed and starting the container.
If no tag was specified on the image-name, it will use latest as the default.
-a
Attach to STDIN, STDOUT or STDERR.
--cpus
Number of CPUs.
-d
To run the container in detached mode. (The terminal won't get blocked executing the container)
-e or --env
Set environment variables.
--env-file
Set the .env file with environment variables to be used.
-i
Keep STDIN open.
--ip
IPv4 address.
--memory
Memory limit.
--mount
Attach a filesystem to the container.
Non-existent files or folders on the host generate an error.
Ex.: --mount type=bind,source=~/host/path,target=~/container/path
--name
Specify the name of the container.
--network
Connect a container to a network.
-p
Specify the port map of the container.
Ex.: -p hostPort:containerPort
--restart
Restart policy to apply when a container exits.
--rm
To auto destroy the container and its associated anonymous volumes after it stops.
-t
Allocate a pseudo-TTY (Pseudo Terminal)
-v
Mount volumes to the container. (Prefer using --mount.)
Non-existent files or folders on the host are created by Docker.
Ex.: -v ~/host/path:~/container/path
-w
Specify working directory inside the container.
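Putting several of these flags together (a sketch assuming the public nginx image; the name and ports are illustrative):
Ex.: docker run -d --rm --name web -p 8080:80 nginx:latest
The web server then becomes reachable on the host at localhost:8080.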
docker start [OPTIONS] ["container-id"...]
To start or re-start one or more containers that are not running.
-a
Attach to STDOUT or STDERR, and forward signals.
-i
Attach the container's STDIN.
docker stop [OPTIONS] "container-id"
To stop specified containers.
Containers are not automatically destroyed after they are stopped. Keep that in mind so you don't bloat your hard drives.
-s
Signal to send to the container.
-t
Seconds to wait before killing the container.
By default all files created inside a container are stored in a writable container layer.
So:
Data doesn't persist when the container is destroyed, and it can become difficult to get data out of the container.
A container's writable layer is tightly coupled to the host machine where the container is running.
Writing into a container's writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
No matter which type of mount you choose to use, the data looks the same from within the container.
Bind mounts are a way of mounting existing files or folders from the host into the container, to persist data.
They are more limited than Volumes.
They can be shared between containers.
They are managed by you. (Docker doesn't manage them.)
They depend on the directory structure and OS of the host machine.
Bind mounts allow write access to files on the host by default.
One side effect of using bind mounts is that you can change the host filesystem via processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability which can have security implications, including impacting non-Docker processes on the host system.
This can be done using the -v or --mount parameters.
-v vs --mount
The major difference is that with -v, if a specified host file or folder doesn't exist on the host, Docker will create it.
With --mount, it will not be created, and instead an error will be raised.
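A typical bind-mount sketch (assuming the public nginx image, which serves files from /usr/share/nginx/html; paths and names are illustrative):
Ex.: docker run -d --name web -p 8080:80 --mount type=bind,source="$(pwd)",target=/usr/share/nginx/html nginx:latest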
Volumes are a mechanism, completely managed by Docker, for persisting container data.
They are easier to back up and migrate.
They can be more safely shared among multiple containers.
They let you store volumes on remote hosts or in the cloud, and encrypt volume contents.
Volumes don't increase a container's size, because they do not persist data in the container's writable layer.
A container's life cycle doesn't interfere with its volumes.
docker volume COMMAND
create
Creates volumes.
Ex.: docker volume create vol1
inspect
Display detailed information on one or more volumes.
Ex.: docker volume inspect vol1
ls
List volumes.
prune
Destroy all unused local volumes.
rm
Destroy one or more volumes. (Cannot be in use by containers)
Creating volumes with docker volume create allows you to use them in multiple projects.
Created volumes will be placed at /var/lib/docker/volumes on the host machine. (Use docker volume inspect to check the volume path.)
Set labels for the volumes with:
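For example, with the --label flag of docker volume create (the label key and value are illustrative):
Ex.: docker volume create --label project=myapp vol1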
If you start a container with a volume that doesn't exist, Docker will create one for you.
docker run --name mycontainer --mount source=vol1,target=/app "image-name:tag"
You can verify the volume was created by looking at the Mounts section with:
docker inspect mycontainer
-v vs --mount
For volumes, there is no difference between them.
Anonymous volumes
Unnamed volumes; they are given a random name that is guaranteed to be unique within a given Docker host.
They persist unless the --rm flag was given when creating the container.
They are not reused or shared between containers automatically.
To share them between containers, you must mount them using the random volume ID.
Useful for containers that don't need to or shouldn't share data, since it enforces that the container will create its own volume.
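An anonymous volume is created by passing only a container path to -v (the /app/data path is illustrative):
Ex.: docker run -d -v /app/data "image-name:tag"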
Named volumes
Are volumes that were given a name by the user.
Easily shared between containers.
tmpfs mounts
Are temporary, and only persisted in the host's memory. When the container stops, written files won't be persisted.
Useful for temporary or sensitive files that you don't want to persist either on the host or in the container's writable layer.
Not shareable between containers.
Only available if running Docker on Linux.
Ex.: docker run --name mycontainer --mount type=tmpfs,target=/app "image-name:tag"
Run docker container inspect "container-id" to see the IP address of the container.
Docker generates IPs in the 172.x.x.x range.
Each Docker Network (default or created) will have a different IP range, so these networks are isolated.
Containers can communicate with the web out of the box.
You can use different drivers when creating a Network, with the --driver flag.
You can use a special domain, host.docker.internal, that Docker understands and resolves to the Host's IP address.
This can be used in bridge or host modes.
bridge
Useful for Container-Container communication.
Containers created without an assigned network will be placed in Docker's default bridge network.
Containers in this default network can communicate with each other ONLY by IP address.
Technically, containers with no network specified (--network) that run in bridge mode can communicate with other containers using the "container-ip".
But hardcoding a "container-ip" is not great, since it could change.
The best way for Container-Container communication is to create a network for them, and set this network with the --network "network-name" flag.
Within a created Docker Network, all containers can communicate with each other by their "container-name".
(Each container in the same created Network is able to resolve the other containers' hostnames.)
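A minimal sketch of that (the network and container names are illustrative, and any images would behave the same):
Ex.: docker network create mynet
Ex.: docker run -d --name web1 --network mynet nginx:latest
Ex.: docker run -d --name web2 --network mynet nginx:latest
web2 can now reach web1 using the hostname web1 instead of its IP address.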
host
Mixes Docker's network with the Host's network, meaning the container's ports are exposed directly on Host ports.
Useful for communication between Host and Container.
This network type won't work as expected if Docker is running inside a Virtual Machine, since the Docker Host won't be your machine but the virtual machine.
(Ex.: it won't work correctly on Mac, or with Docker inside Hyper-V, etc.)
You may access the container at localhost:<assigned-port>.
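For instance, on a Linux host (assuming the public nginx image, which listens on port 80), no -p mapping is needed:
Ex.: docker run -d --network host nginx:latest
The server then answers directly at localhost:80.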
overlay
Useful when you need multiple Docker daemons on different machines to communicate as if they were on the same network.
For example, when working with Swarm mode.
none
The container is totally isolated.