Docker Getting Started - What You Need To Know
Straight To The Point
Docker is a pretty mysterious thing to a lot of people these days, and to be honest, it went right over my head the first time I heard about it. The idea of a container didn't seem special to me until I started to use it and understood the power this simple tool had to offer. My deployments got faster, my configurations were easier to manage, and using it was easier than expected. So what is a container, then? The best definition can be found right on the Docker website:

> Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), can handle more applications and require fewer VMs and Operating systems.
By the end of this tutorial, you will be able to do the following:
- Pull images from a repo and run containers
- Package an app into a container using a Dockerfile
- Publish an image to a registry
These three things were the building blocks that got me started on my path with Docker. There are a lot of Docker tutorials out there, so I'm going to skip the fluff and give you the info I think is most relevant to get you started using Docker as quickly as possible.
Our Environment
For this lab I will be using CentOS 7 installed on VirtualBox. Everything we do will be OS agnostic, with the possible exception of some commands that differ if you are not using CentOS. I will also be using a user with sudo privileges, not the root account. I think it is important to know which docker commands and actions you can execute with and without sudo.
All files for this lab can be found in my public git repo: https://github.com/RyterINC/docker-getting-started
Let’s make sure we have Docker installed first…
Docker already has documentation on the process, which can be found here, but I have included in the root of our repo a script called `install-docker.sh` that will install Docker and docker-compose for you. I encourage you to go through these commands and understand what is happening under the hood.
I will be installing the latest versions of Docker and docker-compose available at the time of this writing. Be sure to run the script with sudo: `sudo bash install-docker.sh`
Now that we have docker installed, we can start doing the fun stuff. The first thing I want to do is run our first container just to make sure everything is working ok. Run the following command:
sudo docker run hello-world
You should see output from the hello-world image: a short greeting confirming that your Docker installation appears to be working correctly.
We had to use `sudo` to run this docker command because the docker daemon runs as root and is bound to a Unix socket that is also owned by root. In order for our user to run docker commands without `sudo`, we have to add the user to the docker group. Execute the following command:
sudo usermod -aG docker $USER
From here, you will have to log out and then log back in to your terminal in order to pick up the new group membership. Let's now run `docker ps -a` to see if it worked.

*(output of `docker ps -a` listing our exited hello-world container)*
Great! Now let's talk about this command and what happened when we ran it. The `docker ps` command shows us a list of all the containers running on our host, and the `-a` flag shows all containers, including the ones that aren't running. From left to right we see:
- The container ID that has been assigned to our container
- The image the container was derived from, `hello-world`
- The command run within our container, `/hello`
- When the container was created
- The status of the container, in our case "Exited", because the hello-world container exits after it runs
- The ports exposed by the container
- A name that was randomly assigned, since we did not specify one
As the tutorial goes on, you will see more of this info and the commands and data will become more familiar to you. For now, since we have our docker client running and everything seems to be working fine, let’s get into the first topic of our lesson…
Running Containers From Images
So, with your newfound knowledge of docker, you may have noticed we have actually already done this. We ran `docker run hello-world`, which pulled the image and created an instance of it, known as a docker container, that runs as a process on your operating system. Let's try running an Nginx image from Docker Hub with a couple of options:
docker run -p 8080:80 -d nginx:stable-alpine
And here is the result:
*(output showing Docker pulling the nginx:stable-alpine image, plus a screenshot of the Nginx welcome page)*
Since this is the first time we have tried running a container from this image, docker has once again pulled the image for us from Docker Hub and stored it on disk. Dissecting the command, we have added the `-p` option, which publishes a container port to the host. This can be a bit confusing, as the layout is host port first, then container port. In our example we map port 8080 on the host to port 80 in the container. Since Nginx serves its content on port 80, the standard HTTP port, forwarding it to 8080 on the host makes our Nginx page reachable there.
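Since the host-then-container ordering trips a lot of people up, here is a tiny illustration in plain Python (my own example for explanation, not docker code) of how a simple `-p` spec reads:

```python
# Illustrative only: shows the order of a simple "-p HOST:CONTAINER" spec.
# Docker's real flag also supports IP addresses, port ranges, and protocols.
def parse_publish(spec):
    host_port, container_port = spec.split(":")
    return {"host_port": int(host_port), "container_port": int(container_port)}

print(parse_publish("8080:80"))
# {'host_port': 8080, 'container_port': 80}
```

Reading it this way, `-p 8080:80` means "traffic hitting the host on 8080 is forwarded to 80 inside the container."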
Since I am using a bridged adapter for my VirtualBox setup, I can view my container by connecting to my VM's IP address on the port we specified, and as you can see from the image above, we successfully reached our Nginx page running in docker. Cool stuff, huh?
As for the other option, `-d` runs the container in detached mode, which means the container runs in the background. If you didn't specify this option, all the container output would be written to the terminal. That can be useful for debugging or for better understanding what's happening inside your container.
The last new piece we introduced was the image tag. For this image, we told docker that we wanted the image with the tag `stable-alpine`. If that image exists in the Nginx Docker Hub repo, it will be pulled by our docker daemon. If you want to see what tags Nginx has available, you can visit their repo here. I encourage you to find some other docker images of software you might use and run them via docker instead of having to do a full install and configure. Instead, you could run a container and take ten extra minutes for lunch.
Packaging A Container
So now that we have pulled a pre-built image and gotten a small taste of docker, let's build our own image and see if we can host an app inside. In my humble opinion, packaging an app with a `Dockerfile` is the biggest barrier to entry for most people. Luckily for us, there are a ton of examples on the web in case this isn't clear to you. We will be using a simple flask app to accomplish our task.
In our docker-getting-started repo I have included a script called `hello.py` (our flask app) and a `Dockerfile`. Let's go ahead and create a virtualenv so we can run our app and see what it looks like. You can find instructions on how to install virtualenv here. Creating and activating the environment looks something like this (your environment name may differ):

```
virtualenv venv
source venv/bin/activate
```
After we create our environment and activate it, we can install our dependencies:

pip install -r requirements.txt

The only dependency we need is flask. From here, we can run our app with `python hello.py`
If you are having issues connecting to your app, ensure your firewall is either down or you have punched a hole for port 5000. Since this is for development purposes, I just turned mine off:

sudo systemctl stop firewalld
If everything has gone well, you should be able to see our app running by accessing your VM’s IP or hostname followed by port 5000.
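For reference, a minimal Flask app of this shape looks like the following (a sketch; the actual `hello.py` in the repo may differ in detail):

```python
from flask import Flask

# Create the Flask application object.
app = Flask(__name__)

# Register a handler for the root URL that returns our greeting.
@app.route("/")
def hello():
    return "Hello, World!"

# When run as a script (`python hello.py`), the real file would end with:
#   app.run(host="0.0.0.0", port=5000)
# Binding to 0.0.0.0 makes the app reachable from outside the VM.
```

The `host="0.0.0.0"` bit matters later: inside a container, binding only to localhost would make the app unreachable through the published port.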
Since the app works, let’s take a look at the Dockerfile:
```
FROM python:3.7-alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT [ "python" ]
CMD [ "hello.py" ]
```
This is about as simple as they come. Let’s go through this line by line so we really understand what’s going on…
FROM python:3.7-alpine
The `FROM` command sets the base image for the rest of the instructions you will be executing. I've decided to use the python image, version `3.7-alpine`. Alpine images are minimal OS images that include only the dependencies you need. This can be useful if you are trying to reduce your image size, or if you have a really small app and don't want all of the overhead that a full OS comes with.
COPY . /app
We then run a `COPY` command that takes everything in the current directory and its subfolders and puts it into our `/app` directory. This will create the directory for us if it doesn't exist (which it doesn't).
WORKDIR /app
This line will set the working directory for subsequent commands.
RUN pip install -r requirements.txt
Then, we run a pip install on our requirements file to install our dependencies. The file is found relative to the working directory, which we set to `/app` in the last instruction.
ENTRYPOINT [ "python" ]
With `ENTRYPOINT`, we set the container up as an executable: docker will start the application with the command you give it, in our case `python`.
CMD [ "hello.py" ]
We pass a default argument of `hello.py` to `python`. This can be overridden in the event you need to run another script; you can find more info about this on docker's site.
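A quick way to see how `ENTRYPOINT` and `CMD` interact: in exec form, docker concatenates them into the argument vector of the container's main process, so overriding `CMD` at run time only swaps the trailing arguments. A tiny illustration in plain Python (not docker code):

```python
# ENTRYPOINT and CMD in exec form are concatenated into the argv of
# the container's main process.
entrypoint = ["python"]
default_cmd = ["hello.py"]

print(entrypoint + default_cmd)   # ['python', 'hello.py']

# Overriding CMD at run time (e.g. `docker run <image> other.py`)
# replaces only the CMD part, keeping the ENTRYPOINT:
override_cmd = ["other.py"]
print(entrypoint + override_cmd)  # ['python', 'other.py']
```

This is why `ENTRYPOINT` is the right place for the interpreter and `CMD` for the default script.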
Time To Build
From the directory containing our Dockerfile, run:

docker build -t docker-getting-started:latest .
Here’s what it looks like:
*(output of the docker build, showing each step of the Dockerfile completing)*
That's great and all, but remember: just because an image builds doesn't mean the app inside is working. Let's run an instance of this image and see if we can see our app.
docker run -p 5000:5000 -d --name flaskapp docker-getting-started:latest
In a nutshell, we are executing a `docker run` to create a container from the image `docker-getting-started:latest`, which we name at the end of the line. The `-p 5000:5000` maps port 5000 on the host to port 5000 in the container, and as before we use `-d` to run in detached mode. Last but not least, we name our container `flaskapp` so we can reference it easily.
Here’s what it looks like:
*(output of the `docker run` command showing the new container's ID)*
And just like that, our app is up and running. If you go to the address you used before, you will see the same `Hello, World!` being served by your app.
Pushing The Image
Now that we have a working image that can run a working container, let's push it to a repo. Go to Docker Hub's website and create an account. Once you do that, run `docker login` in your terminal and log in with your Docker Hub creds. We then need to `docker tag` our image in order to associate it with our account.
docker tag docker-getting-started:latest ryterinc/docker-getting-started:latest
"ryterinc" in this case is my Docker Hub username; you would replace this with yours.
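To make the naming convention concrete, a Docker Hub image reference breaks down as `<username>/<repository>:<tag>`. Here is a small illustrative parser (my own helper for explanation, not docker's actual reference parser):

```python
# Illustrative only: splits a simple Docker Hub image reference of the
# form <user>/<repo>:<tag>. Docker's real reference grammar is richer
# (registry hosts, ports, digests), so don't rely on this in production.
def split_image_ref(ref):
    name, _, tag = ref.partition(":")
    user, _, repo = name.partition("/")
    # Docker defaults the tag to "latest" when none is given.
    return {"user": user, "repo": repo, "tag": tag or "latest"}

print(split_image_ref("ryterinc/docker-getting-started:latest"))
# {'user': 'ryterinc', 'repo': 'docker-getting-started', 'tag': 'latest'}
```

The `docker tag` command above is simply attaching this fully qualified name to the image we already built.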
Then, we finally push our image to dockerhub:
docker push ryterinc/docker-getting-started:latest
*(output of the docker push uploading the image layers to Docker Hub)*
If you navigate to your Docker Hub account and go to the "Repositories" link, you should be able to see your new image.
Let's run our container from the new image. After all, that was the whole point of all this work, right? Run this command to remove all images from your docker daemon:
docker rmi $(docker images -a -q)
Run a new instance of our image, this time pulled from Docker Hub:
docker run -p 5000:5000 -d ryterinc/docker-getting-started:latest
Here’s the result:
*(output showing Docker pulling the image from Docker Hub and the new container's ID)*
In Conclusion
In this lesson, we ran a docker container from Docker Hub, packaged our own app into an image, ran that app in a container, and published the image to Docker Hub so we can use it anywhere. Docker has allowed us to package an application and its dependencies into something we can run with a single command. This was a pretty quick lesson, and it only skims the surface of docker. I encourage you to try packaging more complex images, as well as researching any concepts you didn't quite understand. Go forth and package all the things!