In this article you will learn how to use Docker to start containerizing your applications. When I first started using Docker, all I did was use ready-made images from Docker Hub. Many of these are useful, but they introduce a lot of “magic” which can make it hard to understand how things actually work. I’ll be focusing on a lower-level view of Docker usage, which should make learning easier.
I will not focus on the installation process, as it’s well documented elsewhere. Note: if you’re using Linux, you should not install Docker from your distribution’s standard repositories. The tutorial for Ubuntu can be found here, and tutorials for other platforms can be found in the left-hand sidebar.
By default, Docker requires its commands to be run as root. You can avoid this restriction by adding yourself to the ‘docker’ group.
sudo usermod -aG docker your-username
Replace your-username with your actual username. After running the command, you need to log out and back in before the changes take effect.
Verify that Docker was successfully installed with:
docker --version
Basic Docker Usage
Now that you have Docker installed, let’s try some commands to familiarize ourselves with the application.
docker pull ubuntu
docker create -i -t --name cool ubuntu
docker start cool
docker attach cool
These commands do in order:
- Downloads the Ubuntu base image from Docker Hub.
- Creates a container named ‘cool’ from the ubuntu base image. The other flags will be explained later in the tutorial.
- Starts the container ‘cool’.
- Attaches your terminal to the container’s shell.
After running the commands, you should be inside the bash shell of the container; go ahead and explore the file system using commands like cd and ls. We can also install programs inside the container using apt.
apt update && apt install -y python3
python3 -c "print('Hello World')"
You should see the message “Hello World” printed using the Python installation.
Next we’ll look at how you can manage containers from your host system. Start up another terminal while the ubuntu container is still running. In this new terminal, enter
docker ps
This command works in a similar way to the ps UNIX utility, but instead of showing processes, it shows running Docker containers. Let’s try stopping the container from our host system.
docker stop cool
You will see that the container exits in your other terminal. If you enter
docker ps again, you will notice that the container is no longer in this list. This does not mean that the container has been destroyed, just that it’s not running. To see all containers — even the stopped ones — run
docker ps -a
We can start this container up again.
docker start cool
Check the output from docker ps again; it should now show that the container is running. We can reattach to the container with
docker attach cool
We can check that Python is still installed:
python3 --version
Next we’ll remove this container.
docker stop cool
docker rm cool
Now the container should be removed from your system. Enter
docker ps -a to verify this.
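The whole lifecycle we just walked through can be collected into one commented script. This is a sketch, assuming a running Docker daemon; here the container runs sleep instead of an interactive shell so that we don’t need to attach to it:

```shell
# Download the image and create a container (no -i/-t, since we
# won't attach to it here)
docker pull ubuntu
docker create --name cool ubuntu sleep 60

docker start cool   # start the container in the background
docker ps           # 'cool' shows up as running
docker stop cool    # stop it; the container still exists
docker ps -a        # ...which this confirms
docker rm cool      # destroy it for good
```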
The Docker Run Command
The way we created the ‘cool’ container above took four commands. This is pretty verbose, which is why there is a shortcut that performs all these steps in a single command. We can create the same container as before with
docker run -it --name cool ubuntu
We can also merge flags to make the command less verbose. In this case we merged -i and -t into -it. Try running ps inside the container. You’ll notice that bash is running with PID 1. This is an important fact: if this program exits, Docker knows to also stop the container. So to exit the container from within the container, we just need to exit bash. We can do this with the exit command, or by pressing CTRL+D.
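You can observe this link between PID 1 and the container’s lifetime from the host as well. A sketch, assuming the alpine image introduced later in this article:

```shell
# The container's PID 1 is 'sleep 2'; when it exits,
# the container stops on its own
docker run --name napper alpine sleep 2

# By the time run returns, the process has finished,
# so the container shows up as Exited
docker ps -a --filter name=napper --format '{{.Names}}: {{.Status}}'

docker rm napper
```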
Running Other Programs
Until now, we have just been interacting with the
/bin/bash binary in the Ubuntu container. However, we can pass names of other programs as arguments to run them directly.
docker run ubuntu echo "Hello World"
This will print out the message “Hello World” using /bin/echo inside the container, and then the container exits. Notice how we did not supply the -i and -t flags this time.
The -i flag
We did not use the -i flag here because it attaches the stdin stream of the host to the container. This is not needed, as we are not passing any data to the container through stdin. Let’s demonstrate a scenario where -i would be needed:
echo "Hello World" | docker run -i ubuntu rev
The rev command is a UNIX tool that reverses a string. The above command pipes the string “Hello World” from the host into a container, which reverses it and prints out “dlroW olleH”.
Note that you can always pipe out data from a container without any special flags.
docker run ubuntu echo "Hello World" | rev
Redirecting from containers is especially useful when redirecting into files on your host system.
docker run ubuntu echo "Hello World" > output.txt
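The three I/O patterns above can be summarized in one place. A sketch, assuming rev is available both inside the ubuntu image and on your host:

```shell
# stdin into the container: needs -i
echo "Hello World" | docker run -i --rm ubuntu rev

# stdout out of the container: no special flags needed
docker run --rm ubuntu echo "Hello World" | rev

# redirecting container output into a file on the host
docker run --rm ubuntu echo "Hello World" > output.txt
```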
The -t flag
The -t flag sets up a pseudo-terminal, so that you can interact with the container as you would with a terminal. You should only use this flag if you actually need to interact with the container through bash or another shell. As far as I know, the -t flag does not really make sense without the -i flag.
You may also have noticed that we left out the
--name parameter. This means that Docker automatically named these containers. You can see the names that Docker gave the containers with
docker ps -a. We don’t really want to remove these containers manually, so we can run
docker container prune
This will remove all stopped containers.
The Alpine Image
We have used the Ubuntu image this far in the tutorial. Another image that is very popular when working with Docker is Alpine. The main advantage of the Alpine image is that it’s very small compared to most other images. Let’s compare their sizes:
docker pull alpine
docker image ls
The Ubuntu image is 64.2MB, while the Alpine image is only 5.58MB. There are some other differences in the Alpine image compared to the Ubuntu image.
- The default shell in the alpine image is /bin/sh (BusyBox ash) instead of /bin/bash.
- Alpine does not use apt; it uses the apk tool for package management. The package index can be found here.
I tend to prefer an Alpine base image whenever possible. Let’s try running it:
docker run --rm -it alpine
Most tools that you’re used to should be available. Notice that I added a
--rm flag in the run command. This means that the container is automatically removed when it exits, so that we do not have to do prunes later on.
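A quick way to convince yourself that --rm works is to run a short-lived container with it and then look for the container afterwards. A sketch:

```shell
# The container removes itself as soon as echo exits
docker run --rm --name ephemeral alpine echo done

# Nothing is listed, not even among stopped containers
docker ps -a --filter name=ephemeral
```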
At this point we know how to run a shell in the container, and we know how to run programs as arguments to the run command. However, we do not really want to configure containers through the shell, since in that case we would lose the configuration every time we recreated the container. What we want is to bake the configuration into the image.
This is where Dockerfiles come into the picture. A Dockerfile functions as a build script that extends another image: we configure a base image in order to generate a new image containing our configuration.
A Minimal Example
Let’s take a look at a minimal example. Create a directory somewhere in your filesystem; I’ll use ~/Projects/cool as my project directory. In this directory, create a file named Dockerfile. Open this file, and add the following line to it:
FROM alpine
Save this file, and then run the command
docker build -t cool .
This builds a new image ‘cool’ that is identical to the alpine image. You can check that this image was created with
docker image ls
Let’s install Python in this image. We can do this by invoking the RUN instruction with the apk add command as its target. The RUN instruction can be used to call any executable that is available in the current state of the image being built.
Your Dockerfile should now look like
FROM alpine
RUN apk add python3
Rebuild the image using the above command. Let’s try running this image.
docker run --rm -it cool
Run python3 --version to check that the image now comes with Python installed.
Setting Default Arguments
We have seen that we can run arbitrary programs inside the container by appending the name of the binary to the docker run command. For some images it may make sense to have a default argument; if you omit the argument, the default command will be run instead. You can set the default argument with the
CMD instruction. Let’s change the default argument to launch the Python interactive shell. Your Dockerfile should look like
FROM alpine
RUN apk add python3
CMD [ "python3" ]
The arguments to CMD are given as a comma-separated array containing the executable and its arguments.
Let’s rebuild and try to run the image without an argument.
docker build -t cool .
docker run --rm -it cool
You’ll notice that we are now inside of the Python interactive shell. You can exit it with CTRL+D. What if we wanted to give the Python binary some arguments? Since
CMD only provides the default argument, we would have to write the entire command as the argument.
docker run --rm -it cool python3 -c "print('Hello World')"
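If a full command like this is what you usually want, the flags can instead be baked in as the default, with each argument as its own array element. A sketch of such a variant Dockerfile (not the one we continue building with):

```dockerfile
FROM alpine
RUN apk add python3
# Exec form: one array element per argument, no shell involved
CMD [ "python3", "-c", "print('Hello World')" ]
```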
It’s possible to make a change so that we can pass arguments directly to Python. This is done by setting the entrypoint. A container’s full command is its entrypoint followed by its arguments: the alpine image has no entrypoint, and its default command is the shell /bin/sh, which you may have seen in the COMMAND column of docker ps. Because of this, an argument that you provide in CMD, or explicitly during the run command, simply replaces the default shell rather than being passed to Python. We can set the entrypoint in the Dockerfile with the
ENTRYPOINT instruction. Let’s set this to the Python interactive shell. We should also remove the
CMD instruction. Your Dockerfile should now look like
FROM alpine
RUN apk add python3
ENTRYPOINT [ "python3" ]
Let’s rebuild, and try to run it again with some arguments.
docker build -t cool .
docker run --rm -it cool -c "print('Hello World')"
Now the arguments after ‘cool’ get passed to the Python executable. We can also set a default argument for Python, again with the CMD instruction. Let’s change it so the help flag is passed to Python by default.
FROM alpine
RUN apk add python3
ENTRYPOINT [ "python3" ]
CMD [ "--help" ]
Now rebuild and run it without arguments.
docker build -t cool .
docker run --rm -it cool
You should see the help screen for Python.
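At run time you can still override both halves of the command: arguments after the image name replace CMD, and the --entrypoint flag replaces the entrypoint. A sketch, assuming the ‘cool’ image from above has been built:

```shell
# Replace the default CMD (--help) with our own argument,
# which gets passed to the python3 entrypoint
docker run --rm cool -c "print('Hello World')"

# Replace the entrypoint entirely to get a shell in the image
docker run --rm -it --entrypoint /bin/sh cool
```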
I have shown you what I consider the most fundamental parts of understanding how to use Docker. Although we have only scratched the surface of what you can do with Docker, I feel confident that learning the rest will come much easier if you have understood everything in this article. There are some additional Dockerfile instructions that I did not cover, but that you should be aware of:
- ENV: Set environment variables
- WORKDIR: Set the working directory
- COPY: Copy files from the host into the image
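As a sketch of how these three could fit together in a Dockerfile (app.py here is a hypothetical script in your project directory):

```dockerfile
FROM alpine
RUN apk add python3
# ENV: set an environment variable, visible to later
# instructions and to the running container
ENV GREETING="Hello World"
# WORKDIR: create and switch to a working directory
WORKDIR /app
# COPY: copy a file from the build context on the host
# into the image
COPY app.py .
CMD [ "python3", "app.py" ]
```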
You can read about these in the Dockerfile reference.
Sometimes you may need to persist data even after a container has been destroyed. For this you should use volumes, which you can read more about here.
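As a minimal sketch of the idea (assuming a running Docker daemon; the volume name mydata is made up for illustration):

```shell
# Write a file into a named volume from one container...
docker run --rm -v mydata:/data alpine sh -c 'echo persisted > /data/note.txt'

# ...and read it back from a brand new container
docker run --rm -v mydata:/data alpine cat /data/note.txt

# The data lives in the volume, not in either container
docker volume rm mydata
```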
Thanks for reading!