
Long-term file storage with S3

File storage with Amazon S3 is ridiculously cheap - a terabyte of standard storage in the EU (Frankfurt) region costs about $24 a month. True, S3 storage isn’t exactly convenient compared to something like Dropbox, but for long-term archiving it’s hard to beat. It’s where I keep backups of things like photos and past projects.

There’s a way of reducing that cost even more if you’re very cost-sensitive, and that’s changing the storage class to Glacier.

Glacier is an S3 storage class that moves the data somewhere with a retrieval latency measured in hours - but for backups and long-term storage, who cares about latency? It drops the cost to a mere $4.00 per terabyte per month - or even $1.80 with the Deep Archive class, if you’re happy to push the retrieval time out to around 12 hours.
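
Bear in mind that getting a file back out of Glacier is a two-step process: you ask S3 to restore a temporary copy, wait for it to become available, and only then download it as normal. A rough sketch, with a placeholder bucket and key:

aws s3api restore-object --bucket <your bucket name> --key <path/to/file> \
   --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'

Days controls how long the restored copy hangs around, and Tier trades retrieval speed against cost (Expedited, Standard or Bulk). Once the restore finishes, a plain aws s3 cp will download the file.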

You can change the storage class on a file-by-file basis, either in the web UI or with the command line interface. That second option is way more convenient if you’re dealing with a lot of files - with the best will in the world, nobody could ever accuse the AWS interface of being user-friendly.

Assuming you’ve got the AWS CLI installed and configured, changing the storage class of everything in a bucket is straightforward:

aws s3 cp --recursive --storage-class GLACIER s3://<your bucket name>/ \
   s3://<your bucket name>/

The cp command copies each object over itself, replacing the original storage class with the one that you specify.
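
To check that the copy actually took effect, something like this (again with a placeholder bucket name) lists every object alongside its current storage class:

aws s3api list-objects-v2 --bucket <your bucket name> \
   --query 'Contents[].[Key,StorageClass]' --output table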

The same --storage-class flag lets you set the class when uploading, too:

aws s3 cp --recursive --storage-class GLACIER <local filepath> \
   s3://<your bucket name>/
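
If your backups are a directory you add to over time, the sync command accepts the same flag and only uploads files that have changed - something along these lines:

aws s3 sync --storage-class GLACIER <local directory> \
   s3://<your bucket name>/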

And if you want to go cheaper still, store the files in the us-east-1 region - that brings the Deep Archive price down to less than $1 per terabyte per month.