How to Upload Files to S3 using Terraform


Terraform is an open-source Infrastructure as Code tool created by HashiCorp. It is used for building, changing, and versioning infrastructure safely and efficiently in the cloud. An Infrastructure as Code tool allows developers to codify infrastructure in a way that makes provisioning automated, faster, and repeatable.

Amazon S3 is an object storage service that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.

In this tutorial, I am going to show you how to upload files from a laptop/PC to an AWS S3 bucket using Terraform.

Requirements

  1. An AWS account and an Identity and Access Management (IAM) user with an access key and secret key pair.
  2. Terraform installed on your system.
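
If you are not sure whether Terraform is already installed, you can check from a terminal (the version printed will vary):

terraform version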

Step 1: Provide access key

Create a file named provider.tf and paste the following lines of code. The access key and secret key are generated when you add a user in IAM. Make sure that the user has at least the AmazonS3FullAccess privilege. Select the region that you are going to work in.

provider "aws" {

  access_key = "ACCESS_KEY_HERE"

  secret_key = "SECRET_KEY_HERE"

  region     = "us-east-1"
}
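
Hard-coding credentials in provider.tf is fine for a quick lab, but it is safer to keep them out of your configuration. As a minimal alternative sketch, the AWS provider can also read the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, so provider.tf only needs the region:

# Credentials are picked up from the environment
# (export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your shell)
provider "aws" {
  region = "us-east-1"
}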

Step 2: Create a bucket

Create another file in the same directory named 's3bucket.tf' and define our first bucket 'b1', named 's3-terraform-bucket-lab'. You might get an error if the bucket name is not globally unique across AWS. Another important setting is the ACL, which controls access to your bucket; make it either private or public. You can add tags of your choice.

Next, upload a file located in the 'myfiles' directory by defining an aws_s3_bucket_object resource. To refer to the bucket defined above, reference its id with aws_s3_bucket.b1.id. The key is the name given to the object, which you can choose freely. The etag, computed as the file's MD5 checksum, lets Terraform detect whether the file has changed since its last upload.

# Create a bucket
resource "aws_s3_bucket" "b1" {
  bucket = "s3-terraform-bucket-lab"
  acl    = "private"   # or "public-read"

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

# Upload an object
resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.b1.id
  key    = "profile"
  acl    = "private"   # or "public-read"
  source = "myfiles/yourfile.txt"
  etag   = filemd5("myfiles/yourfile.txt")
}
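
Note that in version 4 and later of the AWS provider, aws_s3_bucket_object has been superseded by aws_s3_object, and the acl argument moves out of aws_s3_bucket into a separate aws_s3_bucket_acl resource. A rough equivalent of the configuration above on a recent provider (assuming ACLs are still enabled on the bucket) looks like this:

# Bucket ACL managed as its own resource (AWS provider v4+)
resource "aws_s3_bucket_acl" "b1_acl" {
  bucket = aws_s3_bucket.b1.id
  acl    = "private"
}

# aws_s3_object replaces aws_s3_bucket_object
resource "aws_s3_object" "object" {
  bucket = aws_s3_bucket.b1.id
  key    = "profile"
  source = "myfiles/yourfile.txt"
  etag   = filemd5("myfiles/yourfile.txt")
}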

Step 2.1: To upload multiple files (optional)

If you want to upload all the files in a directory, use the 'for_each' meta-argument together with the fileset() function.

resource "aws_s3_bucket_object" "object1" {
for_each = fileset("myfiles/", "*")
bucket = aws_s3_bucket.b1.id
key = each.value
source = "myfiles/${each.value}"
etag = filemd5("myfiles/${each.value}")
}
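
The "*" pattern only matches files directly inside 'myfiles'. If the directory contains subfolders, a small variation (a sketch, assuming everything matched by the glob is a regular file; the resource name all_files is arbitrary) is to use the recursive "**" pattern:

resource "aws_s3_bucket_object" "all_files" {
  # "**" also matches files in nested subdirectories of myfiles/
  for_each = fileset("myfiles/", "**")

  bucket = aws_s3_bucket.b1.id
  key    = each.value                      # relative path becomes the object key
  source = "myfiles/${each.value}"
  etag   = filemd5("myfiles/${each.value}")
}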

Step 3: Execute

The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
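
If this is a fresh working directory, run terraform init first so Terraform can download the AWS provider plugin:

terraform init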

Finally, run terraform apply and review the output.

terraform plan
terraform apply

Log in to your AWS console and go to the S3 service. You can see the bucket s3-terraform-bucket-lab and the file you uploaded inside it.

AWS S3 Bucket - 's3-terraform-bucket-lab'
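
If you have the AWS CLI configured, you can also list the bucket contents from the terminal (using the bucket name from Step 2):

aws s3 ls s3://s3-terraform-bucket-lab/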

Conclusion

We have reached the end of this article. In this guide, we walked you through the steps required to create a bucket in AWS S3 and add single or multiple files to it using Terraform.

 

Comments

    • You can either directly make a bucket in S3 Glacier and upload your file, or just enable 'lifecycle management' so that your objects move from the standard tier to archive (Glacier) after N number of days.

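      A rough Terraform sketch of the lifecycle approach described above, extending the bucket from Step 2 (the rule id and the 30-day threshold are example values):

      resource "aws_s3_bucket" "b1" {
        bucket = "s3-terraform-bucket-lab"
        acl    = "private"

        # Transition objects to Glacier 30 days after upload
        lifecycle_rule {
          id      = "move-to-glacier"
          enabled = true

          transition {
            days          = 30
            storage_class = "GLACIER"
          }
        }
      }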
  1. Can we upload files which are placed in a git repo to an S3 bucket using Terraform? Or is it necessary that the files are present locally?

  2. I was searching for how to upload multiple files to S3 and found it here, simply explained. I like the author's way of simplifying the article. Hope to see more cloud tutorials.

  3. One should never have to provide aws/ado/gc creds.

    I get your approach, but please start thinking about how this could be "re-used" or rebuilt as a module that others could use.

  4. Can I have multiple S3 buckets for one CloudFront distribution?
    Bucket 1 for uploading files to store and bucket 2 for showing error files. Since I am supposed to delete all files in bucket 1, I cannot keep them in the same bucket.

    • It is possible. See the following example:
      1. Create a bucket, say 'my-storage-bucket': used for uploading files.
      2. Create another bucket, say 'my-error-bucket': used for storing error files.
      3. Create the CloudFront distribution, say 'sth.cloudfront.net', with origin 'my-storage-bucket'.
      4. Again, add another origin 'my-error-bucket' on the same distribution.
      5. Now, go to the distribution's behaviors and add a behavior so that errors open at sth.cloudfront.net/error while sth.cloudfront.net serves your uploaded files.

