Hi,
I have decided to also create an incremental backup on my own and was wondering what you would recommend. I did some DDG'ing around and came up with rdiff-backup. Would you recommend this? (There are some more, but this one appears to have an RPM in the Fedora repositories.) Personally, I am a great fan of rsync, but I also wanted a setup that would be fast, because I would like to run it every hour (say).
OK, I know this is OT, but only slightly so, because the systems being backed up are updated F20 systems :-)
Any suggestions/personal experiences/suggested tweaks/references would be greatly appreciated.
Many thanks and best wishes, Ranjan
On Thu, Jun 19, 2014 at 07:59:01AM -0500, Ranjan Maitra wrote:
I'm no expert on these things, but I can suggest you take a look at rsnapshot, which uses rsync to do the deed. It is probably most suitable for backing up individual computers or small numbers of systems, though my own personal computer at home is all I've used it for so far. It's quick because it only copies the changes. See rsnapshot.org.
On 19.06.2014 15:10, Fred Smith wrote:
On Thu, Jun 19, 2014 at 07:59:01AM -0500, Ranjan Maitra wrote:
http://duplicity.nongnu.org/ https://wiki.gnome.org/Apps/DejaDup Not that I recommend them. :)
poma
On Thu, 2014-06-19 at 07:59 -0500, Ranjan Maitra wrote:
I wouldn't call it off-topic for this list.
Personally I use rsnapshot to a local NAS. Some notes:
* Rsnapshot backs up file-by-file. It saves bandwidth by using rsync, but if a huge file has a single byte change then the sync will be fast but there will be two copies of the huge file on the backup server. Consider carefully if you want this to happen with databases or VMs (but note that sparse files are handled sensibly). There are hooks to use your own scripts for these cases.
* Rsnapshot keeps a view of a complete snapshot (hence the name) of the backed-up directories by using hard links on the server. Since directories can't have arbitrary hard links to them, they have to be physically copied. This doesn't matter much in practice. (Compare Apple's Time Machine, which changed filesystem semantics to make directory hard links work.)
* My first experience of rsnapshot was to run it on my desktop and write to an NFS-mounted backup space. This works but is slow and inefficient in network bandwidth. I now run rsnapshot from the NAS and pull data over SSH. It turns out that this is how rsnapshot is supposed to be used, but the developers apparently didn't think it worth mentioning in the docs (I mean, it's so obvious ...) Doing it this way is about an order of magnitude faster in my case but did require a fair amount of fracking around on my completely undocumented Iomega NAS.
* Out of the box rsnapshot is focussed on backing up a single client to a single server. Handling multiple clients requires a little massaging of the config file. Not a big deal but worth knowing.
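To make the above concrete, here is a minimal sketch of what a pull-over-SSH rsnapshot.conf might look like (hostnames, paths, and retention counts are made up for illustration; note that rsnapshot insists on tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf fragment on the backup server/NAS.
# Fields MUST be separated by tabs.
snapshot_root	/backup/snapshots/

# How many snapshots of each interval to keep.
retain	hourly	24
retain	daily	7
retain	weekly	4

# Pull multiple clients over SSH (root or a dedicated backup user
# with key-based login); each gets its own subdirectory.
backup	root@desktop:/home/	desktop/
backup	root@laptop:/etc/	laptop/

# Then drive it from cron on the server, e.g. in /etc/cron.d/rsnapshot:
#   0 * * * *   root  /usr/bin/rsnapshot hourly
#   30 3 * * *  root  /usr/bin/rsnapshot daily
```

Older rsnapshot versions spell the retention keyword `interval` rather than `retain`; check `man rsnapshot` for the version in the Fedora repos.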
poc
Ranjan Maitra wrote:
Take a look at obnam
On 06/19/2014 07:59 AM, Ranjan Maitra wrote:
I'm using rdiff-backup and it's remarkably easy to use, but when backing up HUGE files (like a VM image) it can take a long time, because the receiving side compares the two files and stores only the difference. This is good for disk space but bad for CPU usage.
I'm backing up to a server in my basement that's used for little else, so this isn't an issue for me.
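For reference, the kind of setup described above might look something like this (the hostname and paths are hypothetical; check `man rdiff-backup` for the options available in your version):

```shell
# Push /home/ranjan to a server over SSH; the latest state lives as a
# plain mirror, and increments are stored as reverse diffs under
# rdiff-backup-data/ on the receiving side.
rdiff-backup /home/ranjan backupserver::/srv/backups/ranjan

# List available increments, then restore the state from 3 days ago.
rdiff-backup --list-increments backupserver::/srv/backups/ranjan
rdiff-backup -r 3D backupserver::/srv/backups/ranjan/Documents ~/Documents.restored

# Trim increments older than 8 weeks to cap disk usage.
rdiff-backup --remove-older-than 8W backupserver::/srv/backups/ranjan
```

Run from cron every hour, only the first command is needed; each run adds one increment.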
On 06/19/2014 11:16 AM, Steven Stern wrote:
I'm using Back In Time to back up my desktop to a flash drive. As the program uses hard links to represent files that haven't changed (a great saving in both time and space), provided that the target media supports them, I've reformatted the drive to ext4. Currently I have about 31GB of data backed up to a 16GB drive, with about 8GB still free. This may or may not be the right solution for you, but it's worked just fine for me.
And, in case anybody's wondering, I once deleted a folder I didn't need any more then restored it from the backup, just to make sure that the restore function worked as advertised.
On 06/19/2014 09:10 AM, Fred Smith wrote:
On Thu, Jun 19, 2014 at 07:59:01AM -0500, Ranjan Maitra wrote:
I fully concur with rsnapshot. While it is not a traditional backup program, it uses rsync's --link-dest option to create hard links. The end result is that you have essentially a snapshot of the directory tree you are backing up. Identical files between snapshots are hard-linked, so they do not take up additional space. I have used rsnapshot both at home and at work.