S4 - Command Line Tool to Sync Local Files with Amazon S3

S4, short for Simple Storage Solution Syncer, is a free and open-source command-line tool for synchronizing local files with the Amazon S3 service on Linux. It works in tandem with S3, much like proprietary solutions such as OneDrive or Dropbox, and provides fast upload and download speeds at S3's affordable storage rates. Moreover, S4 versions your uploads so that you can easily roll back to a previous version if need be.

S4 keeps track of file changes using a .index file located at the root of every folder being synced. This JSON-formatted file holds a key for every synchronized file, together with its local and remote timestamps. The keys can be viewed with the ls subcommand. s3cmd, which we covered earlier in an article, is another tool that serves a similar purpose.
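To illustrate, a .index file for a folder with two synced files might look roughly like the following. This is an assumed layout based on the description above; the exact key names may differ between S4 versions:

```shell
# Create a sample .index file (illustrative layout only)
mkdir -p project1
cat > project1/.index <<'EOF'
{
    "file1.txt": {"local_timestamp": 1514764800, "remote_timestamp": 1514764800},
    "file2.txt": {"local_timestamp": 1514764800, "remote_timestamp": 1514764800}
}
EOF

# The file is plain JSON, so it can be inspected with any JSON tool
python3 -m json.tool project1/.index
```

Because the timestamps are stored per file, S4 can decide per file whether the local or the remote copy is newer.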

Installation on Centos 7

Install the EPEL repository. EPEL is short for Extra Packages for Enterprise Linux.

# yum -y install epel-release

Verify that the new repository is listed

# yum repolist

Install Python 3 from source. First install the build dependencies, then download the source tarball

# yum install yum-utils
# yum-builddep python
# curl -O https://www.python.org/ftp/python/3.5.0/Python-3.5.0.tgz

Extract the tarball

# tar xvf Python-3.5.0.tgz

Navigate to the folder Python-3.5.0

# cd Python-3.5.0

Build and install python3

# ./configure
# make
# make install

Install pip3

# yum install python34-setuptools
# easy_install-3.4 pip

Install s4

# pip3 install s4

Installation on Ubuntu 17.04 and Debian 9

Install python 3

# apt-get install python3

Install development tools and a few more packages for a stable and robust environment

# apt-get install build-essential libssl-dev libffi-dev python3-dev

Install pip3

# apt-get install python3-pip

Checking pip version

# pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.5)

Install s4

# pip3 install s4

Setting up S3 Bucket

Sign in to your AWS account, head to the 'S3' section under 'Storage', and create a bucket. In this example, I created a bucket called magnum2030.

s3 aws

Grant the bucket 'Public' access with read and write bucket permissions, and also enable 'List' and 'Write objects'.

aws bucket public access

Inside the new bucket, create a folder that we'll use for synchronizing files from the terminal. In this case, the folder is called project1.

bucket project

How to run S4 Commands

Make a local directory and add a few files

# mkdir project1
# cd project1
# touch file1.txt file2.txt

Run

# s4 add

Type the path of the local directory and provide your AWS credentials

local folder: /home/jamie/project1
s3 uri: s3://magnum2030/project1
AWS Access Key ID: AKIAJD53D9GCGKCD
AWS Secret Access Key:
region name: us-east-2
Provide a name for this entry [project1]:jamie

s4 add files

Synchronizing Files

# s4 sync project1
Syncing project1 [/home/jamie/project1/ <=> s3://magnum2030/project1/]

s4 syncing files

Head out to your remote folder project1 in the magnum2030 bucket and confirm the existence of the two files.

s4 synced files

Let's append a few lines of text to our files

# echo "Seasons greetings!" >> file1.txt
# echo "Happy holiday folks!" >> file2.txt

Run the sync command to propagate the changes to the remote files on AWS

# s4 sync project1

Output

Creating . (s3://magnum2030/project1/ => /home/jamie/project1/)
An error occurred while trying to update .: 'NoneType' object has no attribute 'total_size'
Updating file1.txt (/home/jamie/project1/ => s3://magnum2030/project1/)
Updating file2.txt (/home/jamie/project1/ => s3://magnum2030/project1/)

s4 syncing changes

Head over to the files in the project1 folder and confirm that the changes took effect.

Synchronizing Files continuously

To perform a perpetual synchronization of files, use the 'daemon' command

# s4 daemon project1
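If you want the daemon to keep running after a reboot, one option is to wrap it in a systemd unit. The unit file below is a sketch of my own, not something S4 ships with; the User and the ExecStart path are assumptions you should adjust for your system:

```ini
# /etc/systemd/system/s4-project1.service (hypothetical unit; adjust User and ExecStart path)
[Unit]
Description=S4 continuous sync for the project1 target
After=network-online.target

[Service]
User=jamie
ExecStart=/usr/local/bin/s4 daemon project1
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with systemctl enable --now s4-project1.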

How to print existing targets

# s4 targets

Output

Jamie: [/home/jamie/project1 <=> https://s3.console.aws.amazon.com/s3/buckets/magnum2030/project1/?region=us-east-2&tab=overview]
Project1: [/home/jamie/Downloads <=> s3://mybucket/Downloads]
james: [/home/jamie/project1 <=> s3://magnum2030/project1]
jamie: [/home/jamie/project1 <=> s3://magnum2030/project1]
jay: [/home/jamie/aws <=> s3://magnum2030/project1]
project1: [/home/jamie/project1 <=> s3://magnum2030/project1]
test1: [/home/jamie/project1 <=> s3://magnum2030/project1]

Listing your target's contents

# s4 ls project1

s4 list

Removing a target

# s4 rm target-name

Note

The target name is the one you specified at the 'Provide a name for this entry' prompt when running s4 add.

s4 add files

Ignoring files

Create a .syncignore file in the directory being synced (project1 in our case). This file lists the files and directories to be ignored during syncing, one entry per line. For instance, adding "TestExample" to the .syncignore file makes sync skip all files and folders called "TestExample".

We hope you enjoyed this article; please leave your comments below.

About Jamie Arthur

Hey, I'm James, a passionate Linux Systems administrator, and a tech enthusiast. I derive immense gratification in conducting research on Linux systems and keeping myself up to date with the latest in the technology world.
