How to Set Up and Run Docker Data Volumes

In this tutorial we are going to walk through the use of Docker volumes, a flexible mechanism the Docker ecosystem provides to manage data, both inside individual containers and shared between them.

Please note that for this demonstration I’ll be using Docker for Mac at the version shown below, but the instructions/commands should remain the same on Linux, Windows, or even newer Docker versions on cloud-based platforms. My working folder in the terminal will be DockerVolumes.

➜  ~ docker --version

Docker version 1.12.3, build 6b644ec

What we will learn:

  • Understand what is a volume in Docker ecosystem
  • Create and use data volumes
    • As folders or even files
    • As docker containers
  • Delete volumes


By way of introduction, the basic question the reader might ask is why we should use volumes at all: Docker containers provide lightweight OS-level virtualization, so with a container I can already run an entire OS to work with. Why do volumes matter? In fact, there are several reasons behind this mechanism; the most important are:

  • Separation of concerns: data storage is decoupled from containers, so volume data survives even after its containers are deleted, which is a very nice and frequent use case.
  • Volumes allow us to share data between the host machine and the Docker containers running on top of it.
  • Volumes allow us to share data between containers themselves.

Creating a docker volume

In fact, there are several ways to create docker volumes; mainly, either within the container creation lifecycle, or separately.

Adding a volume within docker creation

When we create and start a Docker container using the command docker run, we can attach a volume to the created container by adding the -v argument and specifying a folder path for the volume, as illustrated below.

Let's create a Docker image (a Node.js image) and set up a sample Node web app to work with throughout this article:

  • Set up a simple Node.js app

➜  DockerVolumes ls

Dockerfile   node_modules package.json server.js

// Dockerfile

➜  DockerVolumes cat Dockerfile

FROM node:latest

# Create app directory

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

# Install app dependencies

COPY package.json /usr/src/app/

RUN npm install

# Bundle app source

COPY . /usr/src/app


CMD [ "node", "server.js" ]

// server.js

➜  DockerVolumes cat server.js

var express = require('express');

var app = express();

app.get('/', function (req, res) {
  res.send("What's up docker volume!");
});

app.listen(3000, function () {
  console.log('Server listening on port 3000!');
});

  • Build our Docker image and tag it using the -t flag

➜  DockerVolumes docker build -t linoxide/nodewebapp .

Sending build context to Docker daemon  5.12 kB

Step 1 : FROM node:latest

---> f5eca816b45d

Step 2 : RUN mkdir -p /usr/src/app

---> Using cache

---> 26654b09fdc4

Step 3 : WORKDIR /usr/src/app

---> Using cache

---> 37845a9e9171

Step 4 : COPY package.json /usr/src/app/

---> 6c56590fbccb

Removing intermediate container bdb0b1c0881b

Step 5 : RUN npm install

---> Running in 47e7726adb89

Removing intermediate container bc82f32eef88

Successfully built 7d558c342dfb

  • Let’s run a container with a volume using our last built image; note we will bind the public port 3001 to the private port 3000 (inside our container) of our node server:

➜  DockerVolumes docker run -p 3001:3000 -d --name nodewebapp -v /nodewebapp linoxide/nodewebapp


  • We can test our node web app just using curl:

➜  ~ curl localhost:3001

What's up docker volume!

  • Inspect the container to check the created volume; to do so, run:

➜  DockerVolumes docker inspect nodewebapp



"Id": "602f318d57d5e03b353b15a4c644f90a9918e46590ddff0fb776fa09cf32c51a",

"Created": "2016-12-04T10:17:50.65018429Z",

"Path": "node",

"Args": [



"Mounts": [


"Name": "cadf05a38efc5c2445f5f7b848c16c3fa2c15e1a036a9b4cad40acc1a9e74371",

"Source": "/var/lib/docker/volumes/cadf05a38efc5c2445f5f7b848c16c3fa2c15e1a036a9b4cad40acc1a9e74371/_data",

"Destination": "/nodewebapp",

"Driver": "local",

"Mode": "",

"RW": true,

"Propagation": ""

In the Mounts array, the “Source” field is the folder on the host machine linked to the created volume inside the container, which is given by the “Destination” field.

  • We can also inspect the created docker volume using the volume Id:

➜  DockerVolumes docker volume inspect cadf05a38efc5c2445f5f7b848c16c3fa2c15e1a036a9b4cad40acc1a9e74371



"Name": "cadf05a38efc5c2445f5f7b848c16c3fa2c15e1a036a9b4cad40acc1a9e74371",

"Driver": "local",

"Mountpoint": "/var/lib/docker/volumes/cadf05a38efc5c2445f5f7b848c16c3fa2c15e1a036a9b4cad40acc1a9e74371/_data",

"Labels": null,

"Scope": "local"




Another way to create volumes within container creation is by adding a VOLUME instruction to the Dockerfile, like:

VOLUME ["/foo", "/var/bar", "/etc/baz"]

Using host folder as volume

One of the nicest developer-oriented features of Docker volumes is the possibility of using a host folder as a volume. By doing so, a developer can bind his/her workspace to a Docker volume and run a container using that volume, so any change made in the workspace is automatically reflected in the container, which is a cool feature from a developer's standpoint. So, let’s try this out: I’m going to use the folder of the node web app created above and link it to a Docker volume.

Please note that I have changed my Dockerfile a little bit, so that my working directory will be the Docker volume, and changed package.json to use nodemon so that the node server restarts after any change. My files now look like this:

// Dockerfile

➜  DockerVolumes cat Dockerfile

FROM node:latest

RUN npm i -g nodemon

WORKDIR /nodewebapp


CMD [ "npm", "run", "dev" ]

// package.json

➜  DockerVolumes cat package.json


"name": "DockerVolumes",

"version": "1.0.0",

"description": "",

"main": "index.js",

"scripts": {

"test": "echo \"Error: no test specified\" && exit 1",

"dev" : "npm i && nodemon server.js"


"keywords": [],

"author": "",

"license": "ISC",

"devDependencies": {

"express": "^4.14.0"



Rebuild the image linoxide/nodewebapp and run the container:

➜  DockerVolumes docker run -p 3002:3000 -d --name nodewebapp2 -v /Users/deep/Code/docker-volume/linoxide/DockerVolumes:/nodewebapp linoxide/nodewebapp


And try out the node app using curl; note that whenever a change is made in server.js, for example changing the message returned by the ‘/’ endpoint, it will be reflected automatically:

➜  ~ curl localhost:3002

What's up docker volume!

➜  ~ curl localhost:3002

What's up docker volume update server .js !


  • One note to be taken here is that we need to provide the absolute path of the host directory to be bound as a volume.
  • Another thing is that by default volumes are mounted in read-write mode; this behaviour can be changed by appending :mode after the volume path, for instance :ro for read-only mode, like below:

➜  DockerVolumes docker run -p 3002:3000 -d --name nodewebapp2 -v /Users/deep/Code/docker-volume/linoxide/DockerVolumes:/nodewebapp:ro linoxide/nodewebapp


  • A third thing to note is that we cannot mount a host folder from the Dockerfile, since any Docker image must be portable and therefore separated from any specific host.
  • Fourth, similarly to mounting a host folder as a Docker volume, we can mount a single file by providing the file’s path rather than a folder’s path.
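For example, mounting a single file uses the same -v syntax as a folder. The command below is a sketch reusing the server.js path from this article, with a hypothetical container name (nodewebapp3); it is only assembled as a string and printed, since actually running it needs the Docker daemon:

```shell
# Sketch: bind a single host file (server.js) into the container,
# rather than the whole DockerVolumes folder. Printed, not executed.
host_file=/Users/deep/Code/docker-volume/linoxide/DockerVolumes/server.js
cmd="docker run -p 3003:3000 -d --name nodewebapp3 -v $host_file:/nodewebapp/server.js linoxide/nodewebapp"
echo "$cmd"
```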

Docker volume as container

As a more sophisticated mechanism for sharing/persisting data between containers, Docker provides data volume containers.

I’m going to pick an image from Docker Hub to illustrate the volume-as-container pattern; for instance, let’s take the official redis image. To create a volume named redisDB under the folder /redisdbstore using this image, run the following command; redis at the end of the command is the name of the Docker image:

➜  ~ docker create -v /redisdbstore --name redisDB redis

Unable to find image 'redis:latest' locally

latest: Pulling from library/redis

386a066cd84a: Already exists

769149e3a45c: Pull complete

1f43b3c0854a: Pull complete

70e928127ad8: Pull complete

9ad9c0058d76: Pull complete

bc845722f255: Pull complete

105d1e8cd76a: Pull complete

Digest: sha256:c2ce5403bddabd407c0c63f67566fcab6facef90877de58f05587cdf244a28a8

Status: Downloaded newer image for redis:latest


Then we can use the --volumes-from argument to mount the created volume /redisdbstore in another container:

➜  ~ docker run -d --volumes-from redisDB --name nodeWebAppWithRedisDB1 linoxide/nodewebapp



  • It’s possible to use --volumes-from several times to mount volumes from different containers.
  • It’s possible to chain container creation by mounting volumes coming from a container that itself mounted them from a parent such as redisDB; for instance, we can use nodeWebAppWithRedisDB1 to create nodeWebAppWithRedisDB2, as illustrated below:

➜  ~ docker run -d --name nodeWebAppWithRedisDB2 --volumes-from nodeWebAppWithRedisDB1 linoxide/nodewebapp
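To convince ourselves the volume really is shared, we can write a file through one container and read it back through the other. The file name probe.txt is hypothetical; the commands are only assembled as strings and printed here, since they need the running containers from above:

```shell
# Sketch: write to /redisdbstore via one container, read it via the other.
# If the read prints "shared", both containers see the same volume.
write_cmd='docker exec nodeWebAppWithRedisDB1 sh -c "echo shared > /redisdbstore/probe.txt"'
read_cmd='docker exec nodeWebAppWithRedisDB2 cat /redisdbstore/probe.txt'
echo "$write_cmd"
echo "$read_cmd"
```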


Deleting volumes

As I have mentioned above, volumes are separated from containers; deleting a container will not delete its attached volumes.

Delete a volume with the container

The easiest way to delete a volume together with its container is to use the -v option:

➜  ~ docker rm -v container_id

Delete a volume by its name/id

➜  ~ docker volume rm volume_id

Delete dangling volumes

If we delete a container without specifying the -v option, the associated volumes end up as dangling volumes on the local disk; they can be deleted using:

➜  ~ docker volume rm `docker volume ls -q -f dangling=true`
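As a side note, Docker 1.13 and newer ships a built-in subcommand, docker volume prune, that removes all dangling volumes in one step; since the Docker 1.12.3 used in this article predates it, the command is only assembled as a string here rather than executed:

```shell
# Sketch: one-step cleanup of dangling volumes on Docker >= 1.13,
# replacing the ls/rm pipeline above. Printed, not executed.
prune_cmd='docker volume prune'
echo "$prune_cmd"
```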


To conclude this article, I would like to mention that volumes are a very nice data management mechanism. They can be used in different ways and at many levels of the devops ecosystem, for creating backups, restores, or even migrations. Volumes remain directly accessible from the host machine, so they interoperate nicely with classic Linux tools, but the experienced user should pay attention when manipulating volumes this way, to avoid corrupting data.

About Mohamed Ez Ez

Mohamed graduated with a Computer Science degree in Software Engineering from the National Graduate Engineering School of Computer Science and Systems Analysis ; ENSIAS (French abbreviation) - Rabat Morocco. After working more than two years as a full stack Java developer @ Accenture DC in Morocco; he decided to come back to school :smile: to pursue a research Master in Models and Algorithms for Decision Support at Blaise Pascal university - ISIMA in France, and now he is following his PhD :wink: He loves beautiful code, great design and great music.
