Yes, it's absolutely possible that it could die while cloning. However, think about this:
Same dying HDD in two different parallel universes
Universe 1: Tech decides to just directly scan the drive in R-Studio and it dies completely at the 80% mark.
Amount of data actually recovered: 0%. The scan only found out where the files were; it didn't save them anywhere.
Universe 2: Tech decides to clone with ddrescue first. Drive fails at the 80% mark, same as in Universe 1.
Amount of data recovered: 80% or more (depending on how full the drive was), because the sectors were being copied as they were read.
The principle is simple: if you scan and recover entirely from the original drive, you have to read all the sectors twice. Once to scan, and once to recover. If you clone first, you only read them once, and the subsequent reading/scanning is all done on the new healthy drive. And yes, all the jumping around involved in reading files is much more likely to damage a drive than the sequential reading from cloning.
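The arithmetic is quick to sketch (the sector count is made up, and a full drive is the worst case, since a recovery pass only re-reads sectors that actually hold files):

```python
# Back-of-envelope count of reads hitting the fragile original drive.
# N is a hypothetical sector count; a completely full drive is the
# worst case for the scan-then-recover approach.
N = 1_000_000

scan_then_recover = N + N   # pass 1 scans every sector, pass 2 re-reads the files
clone_first = N             # one sequential pass; later scans hit the healthy clone

print(scan_then_recover // clone_first)  # 2 -- twice the wear, plus all the seeking
```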
As to what we use to recover data without killing the drives, it's all hardware imaging tools, mostly PC-3000 and DeepSpar Disk Imager 4. The advantage of hardware imaging is that we can control the read timeouts to minimize the strain on the drive from trying over and over to read the same bad sector. On the first pass we'll use a really low timeout, where the drive will skip any sector that takes longer than, say, 300 ms to read (the exact value varies by brand). On the next pass we'll use 500 ms, and on the final pass we'll try 850 ms. That way we have the bulk of the sectors read already before we try reading the damaged areas more intensively.
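A toy simulation of that escalating-timeout schedule (the sector read times and the skip logic here are invented for illustration; real hardware imagers enforce the timeout in the drive itself, which plain software can't):

```python
# Multi-pass imaging sketch: skip slow sectors now, retry them later with
# progressively longer timeouts. Read times are made-up test data, in ms;
# None marks a sector the drive can never return.
read_time_ms = {0: 5, 1: 5, 2: 400, 3: 5, 4: 700, 5: 5, 6: None, 7: 5}

def image_pass(sectors, timeout_ms, recovered):
    """Copy every sector that reads within timeout_ms; return the rest."""
    skipped = []
    for s in sectors:
        t = read_time_ms[s]
        if t is not None and t <= timeout_ms:
            recovered.add(s)      # sector lands safely on the clone
        else:
            skipped.append(s)     # don't hammer it -- a later pass will retry
    return skipped

recovered = set()
remaining = list(read_time_ms)
for timeout in (300, 500, 850):   # the escalating timeouts described above
    remaining = image_pass(remaining, timeout, recovered)

print(sorted(recovered))  # healthy sectors first, then the slow ones
print(remaining)          # [6] -- the genuinely unreadable sector
```

The point of the ordering is the same as in the text: the easy bulk of the drive is safe on the clone before anything stressful is attempted.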
Ddrescue does something similar, though it can't control the drive's read timeouts. When it hits a bad sector, it'll jump ahead a certain block of sectors, then try reading backwards until it hits another bad sector. Then it leaves that section of "questionable" sectors alone and continues reading ahead (preventing head damage). Only after it finishes the whole drive this way does it go back and try reading the sectors in between the bad ones, splitting the blocks in half and reading both forward and backward between the known bad ones. It's a similar concept to what the hardware imagers do, minus the timeout control.
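That jump-ahead/read-backwards order can be sketched like this (a toy model of the idea, not ddrescue's actual algorithm; the bad-sector positions and skip distance are invented):

```python
# Toy first pass in the spirit of ddrescue: read forward, and on a bad
# sector jump ahead and sweep backwards, leaving the "questionable" gap
# for the later trimming/splitting phases.
BAD = {5, 7, 13}          # hypothetical unreadable sectors
N, SKIP = 20, 4           # drive size (sectors) and jump-ahead distance

def first_pass():
    done = set()
    pos = 0
    while pos < N:
        if pos in BAD:
            jump = min(pos + SKIP, N - 1)
            back = jump
            # sweep backwards from the jump point until we re-hit bad media
            while back > pos and back not in BAD:
                done.add(back)
                back -= 1
            pos = jump + 1    # skip the questionable gap, keep moving forward
        else:
            done.add(pos)
            pos += 1
    return done

done = first_pass()
questionable = sorted(set(range(N)) - done)
# Sector 6 is perfectly readable but sits between two bad sectors, so the
# first pass leaves it alone; the splitting phase would pick it up later.
print(questionable)
```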
Very interesting. Thanks for that.
I suppose I was assuming software like R-Studio would recover the files as it reads them, rather than requiring a second pass. Perhaps that should be an option. But yeah, given that using recovery software increases drive activity, even by a small amount, that could of course reduce the chances of recovering the data.
I can understand why professional hardware imaging tools are vastly superior to software recovery tools, and probably much more successful at recovering data, especially if they allow full control of the read process. But how do you know if it's even safe to spin up the disk in the first place?
Imagine this hypothetical (but somewhat typical) situation: Customer has an external drive containing thousands of files that are so important that losing even one of them would result in the end of the world as we know it. He has ensured that his files are very safe by writing the words "do not delete" in capital letters (with a few exclamation marks, just to be sure) on a pink Post-It note attached to the drive. No backups of course, but hey, he knows if the worst happens he can rely on his IT guys to work their witchcraft.
So the customer asks me to collect a new computer and the external drive containing his priceless data, which he wants me to transfer to his new computer. As I arrive at the customer's premises, he walks towards me carrying both the computer and the drive, and I witness him drop the external drive and watch helplessly as it bounces down a small flight of concrete stairs.
Now at this point, wanting to avert global annihilation, I suggest to him that we immediately send the drive off to a professional data recovery lab, since there's a good chance that even spinning the drive up to check it could result in data loss due to mechanical damage. Let's assume the platters survived the fall but the head arm is damaged or dislocated in such a way that it is likely to scrape across the platter surface the moment power is applied. What procedures would a professional lab use to prevent data loss in this scenario? Also, would that procedure always be used to avoid data loss whenever the cause of failure is unknown (just in case)?