
Docker tutorial: Get started with Docker

Docker has revolutionized how applications are deployed. Follow this step-by-step guide from installing Docker to building a Docker container for the Apache web server


The --name flag lets you specify a name for the running container. The name is optional (you can always refer to a container by its container ID, or by any unique prefix of the ID), but it must be unique among the containers on the host. If you don’t specify a name, Docker generates one at random, such as inspiring_hodgkin or loving_austin.
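For example, the following starts a named container and then refers to it by that name. (The image name apache-img here is a placeholder for whatever image you built earlier in this tutorial.)

```shell
# Start a detached container, publishing host port 8080 to container port 80,
# with an explicit name. "apache-img" stands in for the image built earlier.
sudo docker run -d -p 8080:80 --name apache apache-img

# The name can now be used anywhere a container ID would be:
sudo docker logs apache
```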

Once you run this command, you should be able to point a web browser at port 8080 on the host’s IP address and see the default Apache web server page.
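If you are working directly on the Docker host, you can also verify the mapping with curl. This sketch assumes the container’s port 80 was published to host port 8080, as in the port mapping shown below:

```shell
# Fetch just the response headers from the containerized Apache server
curl -I http://localhost:8080
```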

Again, you can see the status of the container and the TCP port mappings by using the docker ps command. And you can look up the network mappings by using the docker port command:

$ sudo docker port apache 80
0.0.0.0:8080

Note that you could use the -P option on the docker run command to publish all exposed ports on the container to the host, mapping each to a random high port such as 49153. This can be useful in scripting, although it is generally a bad idea in production, where you typically want predictable port assignments.
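A minimal sketch of the -P behavior, again using the hypothetical apache-img image (which is assumed to expose port 80):

```shell
# Publish all exposed container ports to random high ports on the host
sudo docker run -d -P --name apache-random apache-img

# Discover which host port was chosen for container port 80
sudo docker port apache-random 80
```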

At this point, you have a fully functional Docker container running your Apache process. When you stop the container, it remains on the system and can be brought back up at any time via the docker start command.
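The lifecycle commands look like this, using the apache container name from above:

```shell
# Stop the running container; it remains on the system
sudo docker stop apache

# A stopped container still appears when you add the -a flag
sudo docker ps -a

# Bring it back up with its original settings intact
sudo docker start apache
```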

Automate Docker image builds with Dockerfiles

As educational as it is to build Docker containers manually, it is pure tedium to do this again and again. To make the build process easy, consistent, and repeatable, Docker provides a form of automation for creating Docker images called Dockerfiles.

Dockerfiles are plain text files, typically stored in a source repository alongside the application they describe. Each one specifies how a particular image is built, letting Docker perform the build process for you automatically. Here is an example Dockerfile for a minimal container, much like the one I built in the first stages of this demo:

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y curl
ENTRYPOINT ["/bin/bash"]

If you save this file as dftest in your local directory, you can build an image named ubuntu:testing from dftest with the following command:

$ sudo docker build -t ubuntu:testing - < dftest

Docker will build a new image based on the ubuntu:latest image. Then, inside the container, it will perform an apt-get update and use apt-get to install curl. Finally, it will set the default command to run at container launch as /bin/bash. You could then run:

$ sudo docker run -i -t ubuntu:testing

Et voilà! You have a root shell on a new container built to those specifications. Note that docker run takes an image name (such as ubuntu:testing), not the name of a Dockerfile. If you want to build from a file with a nonstandard name without redirecting stdin, pass the file with the -f flag along with a build context:

$ sudo docker build -t ubuntu:testing -f dftest .

There are numerous instructions that can be used in a Dockerfile, such as copying files into the image, declaring volumes, setting environment variables, and even setting triggers to be used in future builds. A full list of Dockerfile instructions can be found on the Dockerfile reference page.
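For instance, a slightly richer Dockerfile might combine several of these instructions. This is an illustrative sketch, not part of the demo above; the ./app directory and /opt/app paths are hypothetical:

```dockerfile
FROM ubuntu:latest
# Set an environment variable available at build time and run time
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y curl
# Copy files from the build context into the image (paths are illustrative)
COPY ./app /opt/app
# Document the port the containerized service listens on
EXPOSE 80
# Declare a mount point for persistent data
VOLUME ["/opt/app/data"]
ENTRYPOINT ["/bin/bash"]
```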

Docker for Mac and Docker for Windows

Docker containers are a Linux-native technology. However, you can easily run Docker on macOS or Windows by way of virtualization. Docker Inc. has developed editions of Docker designed to run in the Mac and Windows desktop environments: Docker for Mac and Docker for Windows.

Docker for Mac and Docker for Windows work in roughly the same way. They install as conventional desktop applications, and they use the platform’s native hypervisor (xhyve on the Mac, Hyper-V on Windows) to run containers. They also provide the same command-line interface as Docker on Linux, so if you start your Docker adventures in one realm, you can move to the other without having to relearn commands.


The command-line interface for Docker on Windows is identical to the Linux version, barring the use of sudo.

If you launch a command-line session (PowerShell in Windows, Terminal on the Mac) and type docker run -i -t ubuntu /bin/bash as I did near the beginning of this tutorial, you should see Docker execute the same steps to pull a fresh copy of the Ubuntu image into the local repository and launch it. 

The desktop editions of Docker also provide a convenient GUI for managing how Docker interacts with the local system. For example, you can define which local drives are automatically made available to containers, so you don’t have to wrangle those permissions yourself. You can also control how much CPU or memory is made available to Docker from the host, how network connections and network proxies behave, and so on.


Desktop editions of Docker provide a GUI to control interactions between Docker and the host.

Some caveats about Docker for Windows are worth spelling out here:

  • You don’t use sudo to run Docker commands, because sudo doesn’t exist on Windows and administrative permissions are handled differently there anyway. Keep this in mind if you follow along with Linux-oriented tutorials using Docker for Windows.
  • If you use the Oracle VM VirtualBox hypervisor, note that installing Docker for Windows will disable it. Docker for Windows uses Hyper-V, and VirtualBox can’t run when Hyper-V has been enabled. To use VirtualBox, you’ll have to disable Hyper-V (which requires a reboot).
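A quick way to confirm a desktop installation is working is to open PowerShell (or Terminal on the Mac) and run the commands below; note the absence of sudo:

```shell
# Show client and server versions; a server section confirms the daemon is up
docker version

# Pull and run Docker's standard smoke-test image
docker run hello-world
```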

Of course there’s much more to Docker than I’ve covered in this guide, but this should give you a basic understanding of how Docker operates, a grasp of the key Docker concepts, and the know-how to build functional containers. You can find more information on the Docker website, including an online tutorial. A guide with more advanced examples can be found at PayneDigital.com.

Copyright © 2018 IDG Communications, Inc.
