If you are using a PostgreSQL 9.4 database for your project, you probably want to back it up every day. In this post I will show you how to back up Postgres 9.4 to S3 using WAL-E.

Install Dependencies:

```
apt-get install lzop pv python-pip python daemontools
```
Use pip to install WAL-E, then upgrade the requests and six packages it depends on:

```
pip install wal-e
pip install --upgrade requests six
```
If you do not upgrade them, you may hit an error like this when you run a WAL-E backup:

```
$ /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-push /var/lib/postgresql/9.4/main
Traceback (most recent call last):
  File "/usr/local/bin/wal-e", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: six>=1.9.0
```
We should also change the permissions on the pip packages so that the postgres user can read them:

```
$ chmod a+xr -R /usr/local/lib/python2.7/dist-packages/requests*
$ chmod a+xr -R /usr/local/lib/python2.7/dist-packages/six*
```
Edit postgresql.conf so that WAL segments are archived with the wal-push command:

```
wal_level = archive             # hot_standby in 9.0+ is also acceptable
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env /usr/local/bin/wal-e wal-push %p'
archive_timeout = 60
```
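For context, Postgres expands %p to the path (relative to the data directory) of the WAL segment being archived, so each segment triggers a command roughly like the following. The segment name here is only an illustration:

```
envdir /etc/wal-e.d/env /usr/local/bin/wal-e wal-push pg_xlog/000000010000000000000003
```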
Now, restart Postgres to apply the changes:

```
/etc/init.d/postgresql restart
```
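A note on envdir, which the archive_command wraps around wal-e: envdir (part of the daemontools package we installed earlier) runs a child process with one environment variable per file in the given directory, where the variable name is the file name and the value is the file's first line. A rough pure-shell emulation of that behavior, using a throwaway directory:

```shell
# Rough sketch of what "envdir DIR cmd" does: one file = one env var.
# The directory and value below are throwaway examples, not real config.
d=$(mktemp -d)
echo "s3://yourbucketname/postgres" > "$d/WALE_S3_PREFIX"
for f in "$d"/*; do
    # variable name = file name, value = first line of the file
    export "$(basename "$f")=$(head -n 1 "$f")"
done
echo "$WALE_S3_PREFIX"
rm -r "$d"
```

This is why the configuration in the next section is just a directory of small files: envdir turns them into the environment WAL-E reads its credentials from.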
Back Up Every Day

Assuming you have already created an S3 bucket and have AWS credentials, write them into WAL-E's configuration directory:

```
umask u=rwx,g=rx,o=
mkdir -p /etc/wal-e.d/env
echo "your_aws_secret_key" > /etc/wal-e.d/env/AWS_SECRET_ACCESS_KEY
echo "your_aws_access_key_id" > /etc/wal-e.d/env/AWS_ACCESS_KEY_ID
echo 's3://yourbucketname/postgres' > /etc/wal-e.d/env/WALE_S3_PREFIX
chown -R root:postgres /etc/wal-e.d
```
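The umask line is not decoration: the env files hold AWS secrets, so files created after `umask u=rwx,g=rx,o=` get no permissions for "other" users. A quick demonstration in a throwaway directory (the file name is just an example):

```shell
# Files created after this umask are unreadable by "other" users,
# which is what you want for files containing AWS secrets.
umask u=rwx,g=rx,o=
d=$(mktemp -d)
echo "dummy_secret" > "$d/AWS_SECRET_ACCESS_KEY"
perms=$(stat -c '%A' "$d/AWS_SECRET_ACCESS_KEY")
echo "$perms"    # -rw-r-----: owner and group only
rm -r "$d"
```

Combined with the chown to root:postgres, only root and the postgres user can read the credentials.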
Now, let's run the first backup to S3. Switch to the postgres user, then run the backup command:

```
$ su - postgres
$ /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-push /var/lib/postgresql/9.4/main
```
The output should look like this:

```
wal_e.main   INFO     MSG: starting WAL-E
        DETAIL: The subcommand is "backup-push".
        STRUCTURED: time=2016-01-05T02:20:47.435213-00 pid=11384
wal_e.operator.backup INFO     MSG: start upload postgres version metadata
        DETAIL: Uploading to s3://xxxxxxxxxxxxxxxxxx/postgres/basebackups_005/base_000000010000000000000002_00000040/extended_version.txt.
        STRUCTURED: time=2016-01-05T02:20:47.715716-00 pid=11384
wal_e.operator.backup INFO     MSG: postgres version metadata upload complete
        STRUCTURED: time=2016-01-05T02:20:47.786793-00 pid=11384
wal_e.worker.upload INFO     MSG: beginning volume compression
        DETAIL: Building volume 0.
        STRUCTURED: time=2016-01-05T02:20:47.826936-00 pid=11384
wal_e.worker.upload INFO     MSG: begin uploading a base backup volume
        DETAIL: Uploading to "s3://xxxxxxxxxxxxxxx/postgres/basebackups_005/base_000000010000000000000002_00000040/tar_partitions/part_00000000.tar.lzo".
        STRUCTURED: time=2016-01-05T02:20:48.233511-00 pid=11384
wal_e.worker.upload INFO     MSG: finish uploading a base backup volume
        DETAIL: Uploading to "s3://xxxxxxxxxxxxxxxxxx/postgres/basebackups_005/base_000000010000000000000002_00000040/tar_partitions/part_00000000.tar.lzo" complete at 9791.88KiB/s.
        STRUCTURED: time=2016-01-05T02:20:48.846993-00 pid=11384
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
```
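To confirm the base backup actually landed in S3, WAL-E also provides a backup-list subcommand, run the same way through envdir as the postgres user:

```
$ /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-list
```

It prints one line per base backup stored under your WALE_S3_PREFIX.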
Finally, add the command to crontab so the backup runs at 5 AM every day:

```
0 5 * * * /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-push /var/lib/postgresql/9.4/main
```
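A backup is only useful if you can restore it. Restores are beyond the scope of this post, but for reference, WAL-E's counterpart subcommands are backup-fetch (pull a base backup) and wal-fetch (pull WAL segments during recovery). A hypothetical restore would look roughly like this; adjust the paths to your own setup:

```
# fetch the newest base backup into an empty data directory
envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-fetch /var/lib/postgresql/9.4/main LATEST

# then, in recovery.conf, let Postgres replay WAL from S3:
restore_command = 'envdir /etc/wal-e.d/env /usr/local/bin/wal-e wal-fetch "%f" "%p"'
```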
And that's it, you're done!