
I have a half-dead 1TB WD mechanical hard drive from which I am trying to recover files using TestDisk. It used to host Windows 10 and was less than half full before it crashed, so I decided to make a dd image of its largest partition.

The problem I'm facing is the fear of running out of space on the healthy disk that stores the image. Having no disk-recovery experience, I assumed a healthy 1TB disk would be enough to hold the <500GB of data to be recovered. The healthy drive also hosts a fresh copy of Windows 10 and the TestDisk software.

Currently around 7% of the partition is imaged (which took about a day), so I'm wondering whether I should cancel the imaging process (which I don't want to do, because the half-dead disk is already in bad shape) and switch to a healthy 2TB disk, or wait until the dd image is 90% done and cancel then. Since the drive was never more than half full, would I risk losing data? How usable would a dd image be if its creation in TestDisk were aborted midway?

  • Do not use dd for this, use ddrescue. How usable will the image be? Read this answer. It applies to images created with dd as well, if dd is used properly. Using dd properly with a faulty disk and over multiple sessions is not as easy as you might wish. Use ddrescue. – Kamil Maciorowski May 01 '20 at 19:56

1 Answer


The tool to use is ddrescue:

GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.

Ddrescue does not write zeros to the output when it finds bad sectors in the input, and does not truncate the output file if not asked to. So, every time you run it on the same output file, it tries to fill in the gaps without wiping out the data already rescued.

You can repeat ddrescue runs until you are "successful enough".
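As a rough sketch (the device name /dev/sdX1 and the image and mapfile paths are placeholders for your own setup), a first run followed by a later retry run could look like:

    # First run: copy the easy areas; the mapfile records progress.
    ddrescue /dev/sdX1 /mnt/healthy/image.dd /mnt/healthy/image.map

    # Later run with the SAME image and mapfile: only the unread areas
    # are revisited, here retrying each bad area up to 3 times (-r3).
    ddrescue -r3 /dev/sdX1 /mnt/healthy/image.dd /mnt/healthy/image.map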

ddrescue has some options that can speed up at least the first pass, but this is done manually, on a trial-and-error basis.

The option -a, --min-read-rate=<bytes> specifies the minimum read rate of good areas in bytes/s. Specifying a size like 10M may skip, on the first pass, areas that are still readable but only extremely slowly, continuing with areas that can be read quickly enough.
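For example (the 10M threshold is only an illustration; tune it to your disk):

    # Skip good-but-very-slow areas reading below 10 MB/s; rerun later
    # without -a to pick up whatever was skipped.
    ddrescue -a 10M /dev/sdX1 /mnt/healthy/image.dd /mnt/healthy/image.map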

The --no-scrape and --no-trim options may speed up recovery of the easy parts by entirely skipping attempts on the damaged sectors. You could also try different values for the --skip-size option to see if larger or smaller values than the default speed things up.
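A sketch combining these, assuming a recent GNU ddrescue where -n is the short form of --no-scrape (the -K value is just a starting point to experiment with):

    # Grab only the easy parts first: -n (--no-scrape) and -N (--no-trim)
    # leave the damaged areas for later runs; -K sets the initial skip size.
    ddrescue -n -N -K 1M /dev/sdX1 /mnt/healthy/image.dd /mnt/healthy/image.map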

harrymc
  • Another trick with ddrescue is the -R switch to read the second pass backwards from the end of the disk. – davidgo May 02 '20 at 00:58