How to Set Up Snapshots and Clones in ZFS on Linux

A notable new feature in Ubuntu 16.04 is the addition of the ZFS file system. ZFS has long been dubbed Solaris' killer feature. It has been ported to Linux, but it will not be integrated into the mainline kernel because of Sun's deliberate choice of an incompatible license (the CDDL), made so they could claim a feature Linux lacked. That choice did them little good, but we won't discuss the fate of Sun here, nor the legality of loading a CDDL-licensed kernel module into a GPL-licensed kernel. The fact is that Canonical shipped ZFS in Ubuntu despite the legal warnings, so let's just use it.

Installing the ZFS file system and making an array of disks

The first thing to do is install the ZFS tools. On Ubuntu 16.04 the package is named zfsutils-linux:

sudo apt install zfsutils-linux

[Screenshot: installing ZFS on Ubuntu]

Next we need to see what disks we have to work with, so let's use the lsblk command:

lsblk

[Screenshot: lsblk listing all drives]

I have added four 8 GB disks to the VM, so we are going to use those for testing ZFS. First we will make an array spanning all four disks. It will be striped (RAID 0), for the best speed and capacity. These are virtual disks, so it's not as if any of them is going to fail; a striped array is fine for testing. For production you would want redundancy, such as RAID-Z (covered at the end of this article).
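If you are doing this on real hardware rather than a VM, it is worth verifying that the disks are really empty before adding them to a pool. The check below is a sketch using standard util-linux tools (an addition to the tutorial, not part of it); it is read-only and modifies nothing:

```shell
# Read-only sanity check before pooling disks: list block devices
# together with any filesystem signatures found on them.
if command -v lsblk >/dev/null 2>&1; then
    lsblk -f
    status=done
else
    echo "lsblk not available, skipping check"
    status=skipped
fi
# blkid shows similar signature information (root may be required):
#   sudo blkid /dev/sdb
```

A disk that shows no FSTYPE in this listing is a safe candidate for the pool.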

sudo zpool create -f linoxide /dev/sdb /dev/sdc /dev/sdd /dev/sde

This command spans a RAID 0 array across all the listed drives. They are all empty on my VM, but if you are doing this with real drives, make sure you don't have the only copy of some important data on any of them, because it is going to be lost. If you wonder why we have to use the -f option: ZFS by default expects EFI and GPT, while the VirtualBox VM that I created uses BIOS and MBR. Let's then see what we created:

sudo zpool status linoxide

[Screenshot: zpool status output]

We now need to create a ZFS filesystem on the linoxide zpool we just created:

sudo zfs create linoxide/test

This command creates the ZFS filesystem test in the pool linoxide and mounts it at /linoxide/test. Let's look at that filesystem:

sudo zfs list -r linoxide

[Screenshot: zfs list -r output]
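Datasets also carry tunable properties that zfs set can change; the tutorial does not cover this, so the following is only a sketch. It assumes the linoxide/test dataset from above exists and that you have root, and skips cleanly otherwise. The compression and mountpoint properties and the lz4 value are standard ZFS, but the /mnt/zfstest path is just an example:

```shell
# Sketch: tuning dataset properties on the tutorial's dataset.
# Skips cleanly when ZFS or the dataset is not present.
if command -v zfs >/dev/null 2>&1 && zfs list linoxide/test >/dev/null 2>&1; then
    # Transparently compress all newly written data with lz4.
    sudo zfs set compression=lz4 linoxide/test
    # Remount the dataset at a custom path (example path).
    sudo zfs set mountpoint=/mnt/zfstest linoxide/test
    # Confirm both properties took effect.
    zfs get compression,mountpoint linoxide/test
    status=done
else
    echo "linoxide/test not present, skipping sketch"
    status=skipped
fi
```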

How to take ZFS Snapshots

Since we have an empty zpool with an empty filesystem in it, let's make a snapshot of this state. Snapshots are named in the format zpoolname/fsname@snapshotname, so we are going to use the following command:

sudo zfs snapshot linoxide/test@snap1

We can list the existing snapshots with:

sudo zfs list -t snapshot

[Screenshot: snapshot listing]
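The snapshot name after the @ is free-form. One common convention (an assumption here, not something the tutorial prescribes) is to embed a timestamp, which keeps automated snapshots unique and sorted. A minimal sketch:

```shell
# Sketch: build a timestamped snapshot name for the tutorial's
# dataset. The "auto-" prefix is just an example convention.
dataset="linoxide/test"
stamp=$(date +%Y%m%d-%H%M%S)
snapname="${dataset}@auto-${stamp}"
echo "$snapname"
# Taking the snapshot itself requires ZFS and root:
#   sudo zfs snapshot "$snapname"
```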

Let's now create some data so we can roll back to a snapshot and see how things change.

sudo nano /linoxide/test/ourdata.txt

In nano, type 123 or any random data you want, and then save it. Check with the cat command that the data is there, and then let's make one more snapshot with the following command:

sudo zfs snapshot linoxide/test@snap2

Let's then reopen our data file and add more numbers:

sudo nano /linoxide/test/ourdata.txt

For example, make it 12345. Now let's roll back to our snapshots. Here are the commands along with their output:

miki@miki-VirtualBox:~$ cat /linoxide/test/ourdata.txt
12345
miki@miki-VirtualBox:~$ sudo zfs rollback linoxide/test@snap2
miki@miki-VirtualBox:~$ cat /linoxide/test/ourdata.txt
123
miki@miki-VirtualBox:~$ sudo zfs rollback linoxide/test@snap1
cannot rollback to 'linoxide/test@snap1': more recent snapshots or bookmarks exist
use '-r' to force deletion of the following snapshots and bookmarks:
linoxide/test@snap2
miki@miki-VirtualBox:~$ sudo zfs rollback -r linoxide/test@snap1
miki@miki-VirtualBox:~$ cat /linoxide/test/ourdata.txt
cat: /linoxide/test/ourdata.txt: No such file or directory

Here, sudo zfs rollback linoxide/test@snapname is the command that performs the rollback; the cat commands are there to illustrate how the data changes. After returning to the start there was no file at all, which is correct, since we took the first snapshot before creating the file. But to return to the first snapshot we had to use the -r flag to delete snap2, because ZFS only allows rolling back to the latest snapshot; deleting snap2 made snap1 the latest. We can delete the remaining snapshot with:

sudo zfs destroy linoxide/test@snap1

How to perform ZFS Cloning

A clone in ZFS is made from a snapshot: it turns the snapshot into a writable copy of the filesystem at a different mount point and under a different name. For all intents and purposes it becomes a separate filesystem, albeit one that starts with the same contents as the original. Because the clone initially shares its blocks with the snapshot (copy-on-write), it may seem as if the new filesystem takes no space, but as the filesystems diverge, space begins to be used. So let's first create some files and directories in the filesystem:

sudo touch /linoxide/test/file{1..9} && sudo mkdir /linoxide/test/dir{1..9}

miki@miki-VirtualBox:~$ ls /linoxide/test/
dir1 dir3 dir5 dir7 dir9 file2 file4 file6 file8
dir2 dir4 dir6 dir8 file1 file3 file5 file7 file9
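The file{1..9} and dir{1..9} patterns work because of bash brace expansion: the shell expands each pattern into nine names before touch or mkdir ever runs. A quick demonstration in a throwaway directory (no ZFS pool required):

```shell
# Demonstrate the brace expansion used above, in a temporary
# directory so no ZFS pool is needed (bash syntax).
tmp=$(mktemp -d)
touch "$tmp"/file{1..9}
mkdir "$tmp"/dir{1..9}
# 9 files + 9 directories = 18 entries.
count=$(ls "$tmp" | wc -l)
echo "created $count entries"
rm -rf "$tmp"
```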

Next we will make a snapshot and clone the filesystem from that snapshot into a new filesystem called clonedfs:

sudo zfs snapshot linoxide/test@snap3

sudo zfs clone linoxide/test@snap3 linoxide/clonedfs

sudo zfs list

[Screenshot: zfs list output]

Next, let's check the contents of clonedfs:

miki@miki-VirtualBox:~$ ls /linoxide/clonedfs/
dir1 dir3 dir5 dir7 dir9 file2 file4 file6 file8
dir2 dir4 dir6 dir8 file1 file3 file5 file7 file9

It is the same. So we are done here; let's clean up.

Deleting the snapshot without first deleting the clone is not possible:

miki@miki-VirtualBox:~$ sudo zfs destroy linoxide/test@snap3
cannot destroy 'linoxide/test@snap3': snapshot has dependent clones
use '-R' to destroy the following datasets:
linoxide/clonedfs

So we can either use the -R option to delete the clone along with the snapshot, or first delete the clone and then rerun the command. We will take the first option.

sudo zfs destroy -R linoxide/test@snap3
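There is also a third option the tutorial does not use: zfs promote reverses the parent/child relationship, so the snapshot comes to belong to the clone and the original filesystem can then be destroyed on its own. A guarded sketch (it assumes the snap3/clonedfs setup from above and skips cleanly otherwise):

```shell
# Sketch: keep the clone and discard the original instead.
# Skips cleanly when ZFS or the clone is not present.
if command -v zfs >/dev/null 2>&1 && zfs list linoxide/clonedfs >/dev/null 2>&1; then
    # After promotion, snap3 belongs to linoxide/clonedfs ...
    sudo zfs promote linoxide/clonedfs
    # ... so the original filesystem no longer has dependents.
    sudo zfs destroy linoxide/test
    status=done
else
    echo "linoxide/clonedfs not present, skipping sketch"
    status=skipped
fi
```

This is handy when the clone has become the version you actually want to keep.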

Then we destroy the test filesystem:

sudo zfs destroy -R linoxide/test

Then let's delete the zpool:

sudo zpool destroy linoxide

A word on RAIDZ

For this tutorial we used classic RAID 0, but for production you might want to try RAID-Z. Here is how to create zpools with the various RAID-Z levels:

sudo zpool create linoxide raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

To change the RAID-Z level, just write raidz2 or raidz3 instead of raidz. raidz uses single parity (comparable to RAID 5), raidz2 double parity (comparable to RAID 6), and raidz3 triple parity.


This was an introduction to the basic commands for using and administering the ZFS file system. For more information, you can refer to Oracle's official ZFS documentation; the Linux implementation differs somewhat, but the Oracle docs generally apply. If you are not comfortable having a tainted kernel due to a GPL-incompatible module, or you are simply scared of Oracle's lawyers, you might want to use Btrfs instead, which is approaching feature parity with ZFS. Thanks for reading.

About Mihajlo Milenovic

Miki is a long-time GNU/Linux user, Free Software advocate, and freelance system administrator from Serbia. He was introduced to GNU/Linux in 2003 on an old AMD Duron computer and has been eager to learn new things about the system ever since. Since 2016 he has written for Linoxide to share his experiences with a wider audience.
