Hello,
I want to back up a few Fedora servers, and I'm wondering what Open Source software people have used and would recommend for taking a snapshot of the complete system and saving it to an external USB hard drive.
-Montana
Blog: http://montanaquiring.info
My Friend Feed: http://friendfeed.com/antikx
iPhone/Touch Apps I have bought: http://appshopper.com/feed/user/antikx/myapps
On Tue, 2009-10-27 at 14:00 -0500, Montana Quiring wrote:
Hello,
I want to back up a few Fedora servers, and I'm wondering what Open Source software people have used and would recommend for taking a snapshot of the complete system and saving it to an external USB hard drive.
Personally I like "dd" for images.
dd if=/dev/xxx | gzip > backup.img.gz
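Restoring is just the reverse (assuming the same device node; /dev/xxx is of course a placeholder):

  gunzip -c backup.img.gz | dd of=/dev/xxx

Adding bs=1M to the dd usually speeds both directions up considerably.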
On 2009-10-27 14:00, Montana Quiring wrote:
I want to back up a few Fedora servers, and I'm wondering what Open Source software people have used and would recommend for taking a snapshot of the complete system and saving it to an external USB hard drive.
I'm using the Amanda backup software, which is part of the Fedora distribution. It's mostly meant for doing backups of multiple clients (over the network) to tape drives/stackers/libraries, but you can also set up a virtual tape device consisting of a set of disk directories. I use that to back up to external disks over various types of interfaces (mostly Firewire).
You didn't specify whether there are any particular restrictions on the data format you'll be saving to the external drive, but with Amanda, you'd basically be limited to its archive format, where each file has a particular header, followed by the archive data, which can be in either native "dump" format or GNU tar format, with optional compression.
Note that Amanda is really optimized for doing regularly scheduled backups, with periodic full dumps and more frequent incremental dumps. That means that, on a restore, you might have to pull data from multiple archives just for one file system. If you're looking for something that gives you a complete system image to make "bare metal" restores or cloning easier, this is probably not the best solution.
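For anyone curious, a disk-based "virtual tape" setup in amanda.conf looks roughly like this -- a sketch only, since the changer directives vary between Amanda versions, and the paths here are made up:

  # amanda.conf excerpt (sketch; check your Amanda version's docs)
  tpchanger "chg-disk"                    # disk-based virtual tape changer
  tapedev   "file:/mnt/usbdisk/vtapes"    # slot directories live under here
  tapetype  HARD-DISK
  define tapetype HARD-DISK {
      comment "virtual tapes on the external drive"
      length 100 gbytes                   # cap per-"tape" usage
  }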
I've used PartImage to great success for all manner of imaging. It's an excellent replacement for Symantec/Norton Ghost. I think it's the tool used in CloneZilla, though I'm not positive. The SysRescCD bootable image includes this tool... If you either a) have enough RAM (~765 MB) or b) have another burner that you *didn't* boot from, it can burn bootable CDs or DVDs.
-Adam
-----Original Message-----
From: Montana Quiring montanaq@gmail.com
Date: Tue, 27 Oct 2009 14:00:28
To: MUUG Roundtable roundtable@muug.mb.ca
Subject: [RndTbl] Open Source Backup/cloning to USB HDD
This is from the Clonezilla page: "Based on Partimage (http://www.partimage.org/), ntfsclone (http://www.linux-ntfs.org/), partclone (http://partclone.org/), and dd to clone partition. However, clonezilla, containing some other programs, can save and restore not only partitions, but also a whole disk."
Thanks for the feedback everyone!
-Montana
On Tue, Oct 27, 2009 at 3:24 PM, Adam Thompson athompso@athompso.net wrote:
I've used PartImage to great success for all manner of imaging. Excellent replacement for Symantec/Norton Ghost. [...]
Just out of curiosity, what is it that these various programs that people have recommended do with regard to imaging that "dd" doesn't?
I've never looked at any of these in any great detail but so far as I can see they all add complexity while reducing functionality.
Image a disk? dd
Backup a file system? rsync
to tape? tar
to remote? ssh (or scp)
scheduled? put any of the above in cron
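For instance, gluing the last two together -- a minimal sketch, with made-up paths and an external drive already mounted at /mnt/usbdisk:

  #!/bin/sh
  # /usr/local/sbin/usb-backup.sh (hypothetical path)
  # Mirror the root fs to the USB drive, skipping pseudo-filesystems
  # and /mnt so we don't recurse into the backup itself.
  rsync -aH --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
      --exclude=/mnt \
      / /mnt/usbdisk/backup/

  # root crontab entry to run it nightly at 02:30:
  30 2 * * * /usr/local/sbin/usb-backup.sh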
That's all you need. What am I missing?
Regards,
John Lange
http://www.johnlange.ca
Probably nothing extra, except a GUI. I will probably be showing someone who isn't super comfortable with CLIs how to do a backup/restore, so I would rather they have a GUI.
-Montana
On Tue, Oct 27, 2009 at 3:36 PM, John Lange john@johnlange.ca wrote:
Just out of curiosity, what is it that these various programs that people have recommended do with regard to imaging that "dd" doesn't? [...] That's all you need. What am I missing?
I have used dd for copying drives or partitions in the past, in order to have a clone ready to swap in in the event of a failure. I found it extremely useful, much faster than some cheap commercial software.
One problem I have encountered is IO errors, especially when copying large volumes. The conv=noerror option allows the copy to complete, but of course does not correct the problem. I have had one or two errors on every second drive.
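For the record, the usual form is this (device names are placeholders):

  dd if=/dev/sdX of=/dev/sdY bs=64k conv=noerror,sync

The sync keeps offsets aligned by padding unreadable input blocks with zeros, though note that with a large bs, a single bad sector zeroes out the whole block.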
For my Mac, I have some commercial software that can find the error and what file (if any) it affects. It can report the error to the HFS+ file system, so that no other files use that sector.
I can't find any material that explains how this works -- what I surmise is that the drive firmware hides defective blocks. Obviously it does not hide them all, or at least not instantly - so the file system tracks bad blocks that the firmware has missed in order to avoid using them in files. Using dd circumvents the file system, so there is a risk of a corrupt file on the target drive/partition.
The drives seem OK in every other respect, and software that monitors the status of the drives does not indicate any impending failure. I am guessing, based on a small sample, that this is a 'normal' condition.
On Tue, Oct 27, 2009 at 3:36 PM, John Lange john@johnlange.ca wrote:
Just out of curiosity, what is it that these various programs that people have recommended do with regard to imaging that "dd" doesn't? [...]
Unfortunately, based on how the drives are *designed* to operate, seeing any bad sectors at all is NOT NORMAL and probably indicates upcoming failure of the drive.
All modern drives (IDE, SATA, SCSI & SAS, anyway) reserve an entire track (or more) for bad-sector remapping. The firmware automatically remaps any sector that requires more than X number of retries to read or write successfully. This is done transparently to the user, operating system, ATAPI controller, etc. - it's handled 100% internally.
IF the remapping operation fails, then - and only then - will the read or write command fail and the bad sector be reported. The main reason for a sector remapping operation to fail is that all of the reserved area is used up. Sometimes there are many bad sectors in the vendor-reserved area, which dramatically decreases the number of successful remapping operations that can take place; sometimes it's simply not possible to read the contents of the old, failed sector no matter how many retries are attempted; there are a few other reasons that vary from one manufacturer to the next.
But, basically, if the OS is reporting bad sectors, it's time to replace the drive because the drive can no longer perform its own defect management. If you use a tool to examine the drive's SMART response, you'll often see a fairly low number of allowable "bad sectors" - this, too, varies from one manufacturer to the next - because each manufacturer does allow (for warranty purposes) a larger number of sectors to fail than are guaranteed to be remapped successfully. Use a SMART-aware tool to invoke the drive's "Long" self-test, and if necessary, return the drive for repair or replace it.
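With smartmontools, for example (device name is a placeholder):

  smartctl -t long /dev/sda      # start the drive's Long self-test
  smartctl -l selftest /dev/sda  # check the results once it finishes
  smartctl -A /dev/sda           # attributes: watch Reallocated_Sector_Ct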
Some spurious read errors may be considered normal; sometimes a drive will attempt to remap a bad block, fail to do so and report the error, then try again and succeed the next time that sector is accessed.
It may be considered "normal" that if your drive comes with a one-year warranty, then after that one year you will gradually develop more and more bad sectors. So any system that isn't brand-new is likely to have some bad sectors. Outside the warranty period, there's no "rule of thumb" for how fast those sectors crop up - so even if you have a two-year-old system, you might already be seeing bad sectors beyond the drive's capability to remap. Note that many modern file systems no longer provide any reasonable way to dynamically grow the bad-sector list. I would nominate ext2 and ext3 in that list as well as HFS+ and NTFS... not that it's impossible, it's just not reasonably easy for a user to do so. For that matter, I can't think of a single modern FS other than FAT32 where it *is* reasonably easy!
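That said, on ext2/ext3 the not-so-easy way does exist: e2fsck can run badblocks(8) for you and fold the results into the file system's bad-block list - on an unmounted file system, of course (device name is a placeholder):

  e2fsck -c  /dev/sdX1   # read-only badblocks scan
  e2fsck -cc /dev/sdX1   # non-destructive read-write scan, much slower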
Bottom line: bad sectors are bad. You shouldn't see them on modern hard disks. But you probably will. And yes, they really do mean your drive is going bad - but it might still have many usable years left.
On Tue, Oct 27, 2009 at 18:37, Dan Martin ummar143@gmail.com wrote:
One problem I have encountered is IO errors, especially when copying large volumes. The conv=noerror option allows the copy to complete, but of course does not correct the problem. I have had one or two errors on every second drive. [...]
I can't find any material that explains how this works -- what I surmise is that the drive firmware hides defective blocks. Obviously it does not hide them all, or at least not instantly - so the file system tracks bad blocks that the firmware has missed in order to avoid using them in files. Using dd circumvents the file system, so there is a risk of a corrupt file on the target drive/partition. [...] The drives seem OK in every other respect, and software that monitors the status of the drives does not indicate any impending failure. I am guessing, based on a small sample, that this is a 'normal' condition.
--
-Adam Thompson
athompso@athompso.net
On Tue, Oct 27, 2009 at 19:18, Adam Thompson athompso@athompso.net wrote:
All modern drives (IDE, SATA, SCSI & SAS, anyway) reserve an entire track (or more) for bad-sector remapping. The firmware automatically remaps any sector that requires more than X number of retries to read or write successfully. This is done transparently to the user, operating system, ATAPI controller, etc. - it's handled 100% internally.
Updating my own information, it seems that many drives only attempt to reallocate bad sectors on WRITE, and *not* on read. The *destructive* SMART "Long" test causes every sector to be re-written, which will accomplish this goal, but at the cost of losing all your data.
Windows' ScanDisk program (or CHKDSK, under some versions of Windows) can do a non-destructive surface scan that only tests READS.
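On the Windows side that's roughly the following (the drive letter is just an example):

  chkdsk C: /R   # surface scan; locates bad sectors, recovers readable data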
Gibson Research (GRC)'s SpinRite (http://www.grc.com/spinrite.htm) is the only program I know of, and would recommend, that can do NON-destructive full-disk read-write testing. It has several limitations, including its US$90 price, but Steve Gibson "wrote the book" on HDD testing and it shows. FYI, if you think $90 is too much money, consider that Steve (the author) only writes in one programming language - x86 assembly!
Reportedly, some vendor diagnostics can do non-destructive write testing, but the current versions of Seagate SeaTools and WD DataLifeguard do not appear to have this functionality, and Fujitsu, HGST, Toshiba, and Samsung don't produce drive test utilities [any more].
It would be fairly simple to implement something like this on Linux with a shell script: loop over the disk X megabytes at a time, dd(1) 1 MB of data to a temporary file, then dd(1) it right back to the drive at the same offset. The implementation is left as an exercise for the reader :-).
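Something along these lines, maybe - an untested sketch, /dev/sdX is a placeholder, and you really want the device idle and unmounted before writing back to it (note that badblocks -n from e2fsprogs already does essentially this, more carefully):

  #!/bin/sh
  # Read each 1MB chunk, then write the same data straight back,
  # so the drive gets a chance to remap the sector on the write.
  DEV=/dev/sdX
  SIZE=$(blockdev --getsize64 "$DEV")
  CHUNKS=$(( SIZE / 1048576 ))
  i=0
  while [ "$i" -lt "$CHUNKS" ]; do
      dd if="$DEV" of=/tmp/chunk bs=1M skip="$i" count=1 2>/dev/null \
          || echo "read error in chunk $i"
      dd if=/tmp/chunk of="$DEV" bs=1M seek="$i" count=1 2>/dev/null \
          || echo "write error in chunk $i"
      i=$(( i + 1 ))
  done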
Gibson Research (GRC)'s SpinRite (http://www.grc.com/spinrite.htm) is the only program I know of, and would recommend, that can do NON-destructive full-disk read-write testing. [...]
Just don't mention SpinRite around Mr. Donovan. *wink*
-Montana
On 10/27/2009 09:38 PM, Adam Thompson wrote:
reallocate bad sectors on WRITE, and *not* on read. The *destructive* SMART "Long" test causes every sector to be re-written, which will
I don't believe the SMART long test is destructive, why do you think so?
Peter
On Wed, Oct 28, 2009 at 01:10, Peter O'Gorman peter@pogma.com wrote:
I don't believe the SMART long test is destructive, why do you think so?
Sorry, I wasn't clear enough in my original posting.
The regular SMART "Long" test is *not* destructive.
Some HDDs - there's no pattern that I can discern to tell which, other than reading the documentation - offer an additional, *non-standard* destructive test. Some of *those* HDDs appear to invoke the same function as "low-level format", since the two are essentially equivalent processes. Sector remapping takes place as usual during either command.
Generally speaking, SCSI, FC and SAS devices should always support a low-level format command, per ANSI T10. IDE and SATA devices may or may not support one. The SMART specification (per ANSI T13) is extensible, and specifies a *minimum* level of functionality; some vendors - at least in certain models - implement more than the minimum required functionality. It seems to me that models sold as OEM equipment to server and storage appliance vendors often implement extra functionality, probably at the request of vendors like NetApp, EMC, Dell, HP, etc. Since most storage vendors now offer cheap(er) SATA storage options for their arrays, some so-called "desktop" models also benefit from this.
The only way to be certain is to read the technical documentation for your particular drive, which will tell you precisely what commands the drive supports. I'm not talking about the user manual; note that some drives do not appear to have any such documentation - in which case you can probably assume they don't support anything beyond the minimum required feature set.
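On the SCSI side, sg3_utils can issue the actual FORMAT UNIT command if the drive supports it - destructive, obviously, and the device name is a placeholder:

  sg_format --format /dev/sdb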
To (sometimes) get around dd stopping on the first error, or using conv=noerror and getting nothing copied for a bad sector, I've been using a program called ddrescue (ref: http://directory.fsf.org/project/ddrescue/) that does several retries before giving up. It also has some interesting techniques to recover as much as possible from a damaged disk while minimizing further damage.
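Typical usage, for reference (device names are placeholders; the logfile is what lets you stop, resume, and retry only the areas that failed):

  ddrescue -r 3 /dev/sdX /dev/sdY rescue.log

Re-running the same command later reads the logfile and retries just the bad areas.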
-Daryl
On Tue, 27 Oct 2009, Dan Martin wrote:
One problem I have encountered is IO errors, especially when copying large volumes. The conv=noerror option allows the copy to complete, but of course does not correct the problem. [...]
It's fascinating (ok, at least to the übergeeks among us) to note that SpinRite and GNU ddrescue have diametrically opposite approaches to data recovery. Yet despite being completely opposite, both approaches are still valid, and each will work well in different scenarios.
I've been reminded by a few people that SpinRite in particular, while a very useful tool, can actually cause data LOSS in some scenarios - if you choose to purchase and use this (IMHO) extremely useful utility, PLEASE READ THE MANUAL FIRST. And beyond that, if you don't UNDERSTAND the manual, write Steve Gibson and ask (politely) for your money back - because if you don't understand what the manual is talking about, you are likely to use the tool incorrectly and lose more data than you save. Think of SpinRite as a lethal weapon - highly useful in the right situation, but if you don't know how to use it you'll just blow your foot off by accident.
Thank you for listening :-) -Adam
On Thu, Oct 29, 2009 at 22:47, Daryl F wyatt@prairieturtle.ca wrote:
To (sometimes) get around dd stopping on the first error, or using conv=noerror and getting nothing copied for a bad sector, I've been using a program called ddrescue [...]