An Introduction to Docker


What is Docker?

Docker is a tool that enables you to create, deploy, and manage lightweight, stand-alone packages that contain everything needed to run an application (code, libraries, runtime, system settings, and dependencies). These packages are called containers.

Each container is allotted its own CPU, memory, block I/O, and network resources, all without requiring its own kernel or full operating system: containers share the host's kernel. While it is tempting to compare Docker containers to virtual machines, the two differ in how they share or dedicate resources.

Containers help expand your Linode’s functionality in a number of ways. For example, you can deploy multiple instances of nginx across multiple environments (such as development and production). Unlike deploying multiple virtual machines, running multiple containers adds comparatively little overhead to your Linode’s resources.
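
As a minimal sketch of this behavior (assuming Docker is already installed), the commands below start a single nginx container and display its per-container CPU, memory, block I/O, and network usage; the container name and published port are arbitrary examples.

    # Start an nginx container in the background, publishing container port 80 on host port 8080
    docker run -d --name nginx-example -p 8080:80 nginx:latest

    # Show a one-time snapshot of the container's CPU, memory, block I/O, and network usage
    docker stats --no-stream nginx-example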

Docker Images

Each Docker container is created from an image. You pull images from a Docker registry (such as the official Docker Hub) and use them to create containers. A single image can be used to create any number of containers. For example, you could use the latest nginx image to deploy a web server container for:

  • Web dev ops
  • Testing
  • Production
  • Web applications
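
As a hedged sketch of that workflow, the commands below pull the nginx image once and then start separate containers from it; the container names and host ports are arbitrary examples, not values defined by this guide.

    # Pull the latest official nginx image from Docker Hub
    docker pull nginx:latest

    # Start independent containers from the same image
    docker run -d --name nginx-testing    -p 8081:80 nginx:latest
    docker run -d --name nginx-production -p 8082:80 nginx:latest

    # List the running containers created from the image
    docker ps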

Dockerfiles

A Dockerfile is a text file that contains the necessary commands to assemble an image. Once a Dockerfile is written, the administrator uses the docker build command to create an image based on the commands within the file. The commands and information within the Dockerfile can be configured to use specific software versions and dependencies to ensure consistent and stable deployments.

A Dockerfile uses the following commands for building the images:

  • ADD - copy files from a source on the host to the container’s own filesystem at the set destination.
  • CMD - set the default command to execute when a container starts.
  • ENTRYPOINT - set a default application to be used every time a container is created with the image.
  • ENV - set environment variables.
  • EXPOSE - declare the port(s) the container listens on, enabling networking between the container and the outside world.
  • FROM - define the base image used to start the build process.
  • MAINTAINER - define the full name and email address of the image creator.
  • RUN - execute a command and commit the result as a new image layer; the primary instruction for installing software during the build.
  • USER - set the UID (or username) that runs the container.
  • VOLUME - enable access from the container to a directory on the host machine.
  • WORKDIR - set the path where the command, defined with CMD, is to be executed.
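
As a hedged illustration of how several of these commands fit together, the sketch below assembles a simple image based on nginx. The index.html file, environment variable, and paths are illustrative assumptions, not files or values supplied by this guide.

    # Define the base image used to start the build process
    FROM nginx:latest

    # Record the image creator (NAME and EMAIL are placeholders)
    MAINTAINER NAME EMAIL

    # Set an environment variable available during the build and at run time
    ENV NGINX_PORT=80

    # Set the working directory for the instructions that follow
    WORKDIR /usr/share/nginx/html

    # Copy a local index.html (assumed to exist beside the Dockerfile) into the image
    ADD index.html .

    # Document the port the container listens on
    EXPOSE 80

    # Set the default command executed when a container starts from this image
    CMD ["nginx", "-g", "daemon off;"]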

Not every command must be used. Below is a working Dockerfile example, using only the FROM, MAINTAINER, and RUN commands:

File: Dockerfile

FROM ubuntu:latest
MAINTAINER NAME EMAIL
RUN apt-get -y update && apt-get -y upgrade && apt-get install -y build-essential
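
To build and test an image from this file, you might run something like the following from the directory containing the Dockerfile; the image tag is an arbitrary example.

    # Build an image from the Dockerfile in the current directory and tag it
    docker build -t example-image:latest .

    # Start an interactive shell in a container created from the new image
    docker run -it example-image:latest /bin/bash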

Docker Swarm

Docker makes it easy to join servers together to form a cluster, called a Docker Swarm. Once you’ve created a Swarm manager, or leader, and attached nodes to the leader, you can scale out container deployment. The leader will automatically adapt the cluster by adding or removing tasks to maintain a desired state.

A node is a single instance of the Docker engine that participates in the Swarm. You can run one or more nodes on a single Linode. The Swarm manager uses ingress load balancing to expose services that can be made available to the Swarm. Docker Swarm can also:

  • Check the health of your containers.
  • Launch a fixed set of containers from a single Docker image.
  • Scale the number of containers up or down (depending upon the current load).
  • Perform rolling updates across containers.
  • Provide redundancy and failover.
  • Add or remove container replicas as demand changes.
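
As a minimal sketch of these capabilities, the commands below initialize a Swarm, create a replicated service, and scale it. The service name, image, and replica counts are arbitrary examples, and the worker join command uses the token and address printed by docker swarm init.

    # On the manager (leader) node, initialize the Swarm
    docker swarm init

    # On each worker node, join the Swarm using the token and address
    # printed by "docker swarm init" (shown here as placeholders)
    docker swarm join --token <TOKEN> <MANAGER-IP>:2377

    # From the manager, launch a replicated nginx service
    docker service create --name web --replicas 3 -p 80:80 nginx:latest

    # Scale the service up or down as demand changes
    docker service scale web=5

    # Check the state and placement of the service's tasks
    docker service ps web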

Next Steps

To explore Docker further, visit our Docker Quick Reference, our guide on deploying a Node.js web server, or the Linode How to install Docker and deploy a LAMP Stack guide.

