Recovering data from a failing hard disk with HFS+

Recently, a customer asked me to have a look at an external hard disk he was using with his MacBook. It would show a file listing just fine, but when trying to open actual files, it would start failing. Of course there was no backup, but the files were very precious...

This started out as a small question, but ended up in an adventure that spanned a few days and took me deep into the ddrescue recovery tool, through the HFS+ filesystem and past USB power port control. I learned a lot, discovered some interesting things and produced a pile of scripts that might be helpful to others. Since the journey seems as interesting as the end result, I will describe the steps I took here, "ter leering ende vermaeck" (roughly: for instruction and amusement).


I started out by confirming the original problem. When I plugged the disk into my Linux laptop, it showed up as expected in dmesg. I could mount the disk without problems, see the directory listing and even open an image file stored on the disk. Opening other files, however, did not work.

SMART

As you do with bad disks, you try to get their SMART data. Since smartctl did not support this particular USB bridge (and I wasn't game to try random settings to see if it worked on a failing disk), I gave up on SMART initially. I later opened up the case to bypass the USB-to-SATA controller (in case the problem was there, and to make SMART work), but found that this particular hard drive had the converter built into the drive itself (so the USB part was directly attached to the drive). Even later, I found a page online (I have not saved the link) showing that the disk was indeed supported by smartctl, including the option to pass to smartctl -d to make it work. SMART confirmed that the disk was indeed failing, based on the number of reallocated sectors (2805).
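
For reference, querying SMART through a USB-to-SATA bridge generally means telling smartctl which bridge type to use. The exact value differs per bridge, so the -d sat below is only an illustration, not necessarily the option this particular disk needed:

sudo smartctl -d sat -a /dev/sdX   # '-d sat' is just an example; pick the type matching your USB bridge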

Fast-then-slow copying

Since opening up files didn't work so well, I prepared to make a sector-by-sector copy of the partition on the disk, using ddrescue. This tool has a good approach to salvaging data, where it tries to copy off as much data as possible quickly, skipping data when it comes to a bad area on disk. Since reading a bad sector on a disk often takes a lot of time (before returning failure), ddrescue tries to steer clear of these bad areas and focus on the good parts first. Later, it returns to these bad areas and, in a few passes, tries to get out as much data as possible.

At first, copying data seemed to work well, giving a decent read speed of some 70MB/s as well. But very quickly the speed dropped terribly and I suspected the disk ran into some bad sector and kept struggling with it. I reset the disk (by unplugging it) and made a few more attempts and quickly discovered something weird: the disk would work just fine after plugging it in, but after a while the speed would plummet to a whopping 64Kbyte/s or less. This happened every time. Even more, it happened pretty much exactly 30 seconds after I started copying data, regardless of what part of the disk I copied data from.

I quickly wrote a one-liner script that would start ddrescue, kill it after 45 seconds, wait for the USB device to disappear and reappear, and then start over again. That way, by replugging the USB cable about once every minute, I could at least back up some data while investigating other options.

Since the speed was originally 70MB/s, I could pull a few GB worth of data every time. With a 2000GB disk, that meant I "only" had to replug the USB connector around a thousand times. Not entirely infeasible, but not quite comfortable or efficient either.

So I investigated ways to further automate this process: using hdparm to spin down or shut down the disk, using USB power saving to let the disk reset itself, disabling the USB subsystem completely. But nothing brought the speed back other than completely powering down the disk by removing the USB plug.

While I was trying these things, the speed during those first 30 seconds dropped, even below 10MB/s at some point. At that point, I could salvage around 200MB with each power cycle and was looking at pulling the USB plug around 10,000 times: no way that would be happening manually.

Automatically pulling the plug

I resolved to further automate this unplugging and planned to use an Arduino (or perhaps the GPIO of a Raspberry Pi) and something like a relay or transistor to interrupt the power line to the hard disk, effectively "unplugging" it.

For that, I needed my Current measuring board to easily interrupt the USB power lines, but I had to bring that board from home. In the meanwhile, I found uhubctl, a small tool that uses low-level USB commands to individually control the port power on some hubs. Most hubs don't support this (or advertise support, but simply don't have the electronics to actually switch power, apparently), but I noticed that the newer Raspberry Pis supported this (for port 2 only, but that would be enough).
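
For illustration, controlling port power with uhubctl looks roughly like this (the hub location 1-1 is an assumption; running uhubctl without arguments lists the hubs and ports it can actually switch):

sudo uhubctl                        # list hubs with switchable port power
sudo uhubctl -l 1-1 -p 2 -a off     # cut power on port 2 of hub 1-1
sudo uhubctl -l 1-1 -p 2 -a on      # restore it
sudo uhubctl -l 1-1 -p 2 -a cycle   # or do off and on in one go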

Coming to the office the next day, I set up a Raspberry Pi and tried uhubctl. It did indeed toggle USB power, but the toggle would affect all USB ports at the same time, rather than just port 2. So I could switch power to the faulty drive, but that would also cut power to the good drive that I was storing the recovered data on, and I was not quite prepared to give the good drive 10,000 powercycles.

The next plan was to connect the recovery drive through the network, rather than directly to the Raspberry Pi. On Linux, setting up a network drive using SSHFS is easy, so that worked in a few minutes. However, somehow ddrescue insisted it could not write to the destination file and logfile, citing permission errors (even though the permissions seemed just fine). I suspect it might be trying to mmap or something else that does not work across SSHFS.
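
For reference, such an SSHFS mount is a single command; the hostname and paths here are made up, just to show the shape of it:

sshfs user@machine-with-recovery-drive:/mnt/recovery /mnt/recovery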

The next plan was to find a powered hub - so the recovery drive could stay powered while the failing drive was powercycled. I rummaged around the office looking for USB hubs, and eventually came up with some USB-based docking station that was externally powered. When connecting it, I tried the uhubctl tool on it, and found that one of its six ports actually supported powertoggling. So I connected the failing drive to that port, and prepared to start the backup.

When trying to mount the recovery drive, I discovered that a Raspberry Pi only supports filesystems up to 2TB (probably because it uses a 32-bit architecture). My recovery drive was 3TB, so that would not work on the Pi.

Time for a new plan: do the recovery from a regular PC. I already had one ready that I used the previous day, but now I needed to boot a proper Linux on it (previously I used a minimal Linux image from UBCD, but that didn't have a compiler installed to allow using uhubctl). So I downloaded a Debian live image (over a mobile connection - we were still waiting for fiber to be connected) and, 1.8GB and 40 minutes later, I finally had a working setup.

The run.sh script I used to run the backup basically does this:

  1. Run ddrescue to pull off data
  2. After 35 seconds, kill ddrescue
  3. Tell the disk to sleep, so it can spin down gracefully before the power is cut.
  4. Tell the disk to sleep again, since sometimes it doesn't work the first time.
  5. Cycle the USB power on the port
  6. Wait for the disk to re-appear
  7. Repeat from 1.
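
A minimal sketch of such a loop (this is not the actual run.sh; the device name, hub location, timings and exact commands are assumptions):

#!/bin/sh
# Sketch of the backup loop; adjust DISK and the uhubctl location/port to your setup.
DISK=/dev/sdd
while true; do
    # Steps 1-2: copy data, but give up after 35 seconds (ddrescue saves its mapfile on SIGTERM)
    sudo timeout 35 ddrescue -d ${DISK}2 backup.img backup.logfile
    # Steps 3-4: ask the disk to spin down and sleep, twice for good measure
    sudo hdparm -Y $DISK
    sleep 1
    sudo hdparm -Y $DISK
    # Step 5: cycle the power on the hub port the failing disk is attached to
    sudo uhubctl -l 1-1 -p 1 -a cycle
    # Step 6: wait for the disk (well, its partition) to re-appear
    while [ ! -b ${DISK}2 ]; do sleep 1; done
    sleep 2
done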

By now, the speed of recovery had been fluctuating a bit, but was between 10MB/s and 30MB/s. That meant I was looking at some thousands up to ten thousand powercycles and a few days up to a week to back up the complete disk (and more if the speed dropped further).

Selectively backing up

Realizing that there was a fair chance that the disk would indeed get slower, or even die completely due to all these power cycles, I had to assume I could not back up the complete disk.

Since I was making the backup sector by sector using ddrescue, this meant a risk of not getting any meaningful data at all. Files are typically fragmented, so they can be stored anywhere on the disk, possibly spread over multiple areas as well. If you just start copying at the start of the disk, but do not make it to the end, you will have backed up some data, but that data could belong to all kinds of different files. That means that you might have some files in a directory, but not others. Also, a lot of files might only be partially recovered, the missing parts being read as zeroes. Finally, you will also end up backing up all unused space on the disk, which is rather pointless.

To prevent this, I had to figure out where all kinds of stuff was stored on the disk.

The catalog file

The first step was to make sure the backup file could be mounted (using a loopback device). On my first attempt, I got an error about an invalid catalog.

I looked around for some documentation about the HFS+ filesystem, and found a nice introduction by infosecaddicts.com and a more detailed description at dubeiko.com. The catalog is apparently where the directory structure, filenames and other metadata are all stored in a single place.

This catalog is not at a fixed location (since its size can vary), but its location is noted in the so-called volume header, a fixed-size data structure located at 1024 bytes from the start of the partition. More details (including easier-to-read offsets within the volume header) are provided in this example.

Looking at the volume header inside the backup gives me:

root@debian:/mnt/recover/WD backup# dd if=backup.img bs=1024 skip=1 count=1 2> /dev/null | hd
00000000  48 2b 00 04 80 00 20 00  48 46 53 4a 00 00 3a 37  |H+.... .HFSJ..:7|
00000010  d4 49 7e 38 d8 05 f9 64  00 00 00 00 d4 49 1b c8  |.I~8...d.....I..|
00000020  00 01 24 7c 00 00 4a 36  00 00 10 00 1d 1a a8 f6  |..$|..J6........|
                                   ^^^^^^^^^^^ Block size: 4096 bytes
00000030  0e c6 f7 99 14 cd 63 da  00 01 00 00 00 01 00 00  |......c.........|
00000040  00 02 ed 79 00 6e 11 d4  00 00 00 00 00 00 00 01  |...y.n..........|
00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000060  00 00 00 00 00 00 00 00  a7 f6 0c 33 80 0e fa 67  |...........3...g|
00000070  00 00 00 00 03 a3 60 00  03 a3 60 00 00 00 3a 36  |......`...`...:6|
00000080  00 00 00 01 00 00 3a 36  00 00 00 00 00 00 00 00  |......:6........|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000000c0  00 00 00 00 00 e0 00 00  00 e0 00 00 00 00 0e 00  |................|
000000d0  00 00 d2 38 00 00 0e 00  00 00 00 00 00 00 00 00  |...8............|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000110  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
00000120  00 0d 82 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000160  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
00000170  00 00 e0 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
00000180  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000400

00000110  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
          ^^^^^^^^^^^^^^^^^^^^^^^ Catalog size, in bytes: 0x12600000

00000120  00 0d 82 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
                      ^^^^^^^^^^^ First extent size, in 4k blocks: 0x12600
          ^^^^^^^^^^^ First extent offset, in 4k blocks: 0xd8238
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

I have annotated the parts that refer to the catalog. The contents of the catalog (just like those of all other files) are stored in "extents". An extent is a single, contiguous block of storage that contains (a part of) the content of a file. Each file can consist of multiple extents, to prevent having to move file content around each time things change (e.g. to allow fragmentation).

In this case, the catalog is stored in only a single extent (since the subsequent extent descriptors contain only zeroes). All extent offsets and sizes are in 4k blocks, so this extent lives at 0xd8238 * 4k = byte 3626205184 (~3.4G) and is 0x12600 * 4k = 294MiB long. So I backed up the catalog by adding -i 3626205184 to ddrescue, making it skip ahead to the location of the catalog (and then power cycled a few times until it had copied the needed 294MiB).
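
In shell arithmetic, that computation and the corresponding ddrescue invocation look roughly like this (device and file names as used elsewhere in this post):

echo $((0xd8238 * 4096))   # 3626205184, start of the catalog extent in bytes
echo $((0x12600 * 4096))   # 308281344 bytes, i.e. 294MiB of catalog data
sudo ddrescue -d -i 3626205184 /dev/sdd2 backup.img backup.logfile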

After backing up the catalog, I could mount the image file just fine, and navigate the directory structure. Trying to open files would mostly fail, since most files would still only read zeroes at this point.

I did the same for the allocation file (which tracks free blocks), the extents file (which tracks the content of files that are more fragmented and whose extent list does not fit in the catalog) and the attributes file (not sure what that is, but for good measure).

Afterwards, I wanted to continue copying from where I previously left off, so I tried passing -i 0 to ddrescue, but it seems this option can only be used to skip ahead, not back. In the end, I just edited the logfile, which is just a text file, to set the current position to 0. ddrescue is smart enough to skip over blocks it already backed up (or marked as failed), so it then continued where it previously left off.
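
For reference, the top of such a ddrescue logfile (mapfile) looks roughly like this; the line to edit is the current position line below the comment header (the values here are made up, and newer ddrescue versions add an extra current_pass column):

# current_pos  current_status
0xD8238000     ?
#      pos        size  status
0x00000000  0xD8238000  ?
0xD8238000  0x12600000  +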

Where are my files?

With the catalog backed up, I needed to read it to figure out which files were stored where, so I could make sure the most important files were backed up first, followed by all other files, skipping any unused space on the disk.

I considered and tried some tools for reading the catalog directly, but none of them seemed workable. I looked at hfssh from hfsutils (which crashed), hfsdebug (which is discontinued and no longer available for download) and hfsinspect (which calls itself "quite buggy").

Instead, I found the filefrag commandline utility, which uses a Linux filesystem syscall to figure out where the contents of a particular file are stored on disk. To coax the output of that tool into a list of extents usable by ddrescue, I wrote a one-liner shell script called list-extents.sh:

sudo filefrag -e "$@"  | grep  '^   ' |sed 's/\.\./:/g' | awk -F: '{print $4, $6}'

Given any number of filenames, it produces a list of (start, size) pairs for each extent in the listed files (in 4k blocks, which is the Linux VFS native block size).

With the backup image loopback-mounted at /mnt/backup, I could then generate an extent list for a given subdirectory using:

sudo find /mnt/backup/SomeDir -type f -print0 | xargs -0 -n 100 ./list-extents.sh > SomeDir.list

To turn this plain list of extents into a logfile usable by ddrescue, I wrote another small script called post-process.sh, that adds the appropriate header, converts from 4k blocks to 512-byte sectors, converts to hexadecimal and sets the right device size (so if you want to use this script, edit it with the right size). It is called simply like this:

./post-process.sh SomeDir.list

This produces two new files: SomeDir.list.done, in which all of the selected files are marked as "finished" (and all other blocks as "non-tried") and SomeDir.list.notdone which is reversed (all selected files are marked as "non-tried" and all others are marked as "finished").

Backing up specific files

Edit: Elmo pointed out that all the mapfile manipulation with ddrescuelog was not actually needed if I had known about ddrescue's --domain-mapfile option, which passes a second mapfile to ddrescue and makes it only process blocks that are marked as finished in that mapfile, while presumably reading and updating the regular mapfile as normal.
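
With that option, the selective runs described below would presumably reduce to a single invocation like this (untested, going by the option's description):

sudo ddrescue -d -m SomeDir.list.done /dev/sdd2 backup.img backup.logfile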

Armed with a couple of these logfiles for the most important files on the disk and one for all files on the disk, I used the ddrescuelog tool to tell ddrescue what stuff to work on first. The basic idea is to mark everything that is not important as "finished", so ddrescue will skip over it and only work on the important files.

ddrescuelog backup.logfile --or-mapfile SomeDir.list.notdone | tee todo.original > todo

This uses the ddrescuelog --or-mapfile option, which takes my existing logfile (backup.logfile) and marks all bytes as finished that are marked as finished in the second file (SomeDir.list.notdone). In other words, it marks all bytes that are not part of SomeDir as done. This generates two copies (todo and todo.original) of the result; I'll explain why in a minute.

With the generated todo file, we can let ddrescue run (though I used the run.sh script instead):

# Then run on the todo file
sudo ddrescue -d /dev/sdd2 backup.img todo -v -v

Since the generation of the todo file effectively threw away information (we can no longer see from the todo file which parts of the non-important sectors were already copied, or had errors, etc.), we need to keep the original backup.logfile around too. Using the todo.original file, we can figure out what the last run did, and update backup.logfile accordingly:

ddrescuelog backup.logfile --or-mapfile <(ddrescuelog --xor-mapfile todo todo.original) > newbackup.logfile

Note that you could also use SomeDir.list.done here, but actually comparing todo and todo.original helps in case there were any errors in the last run (so the error sectors will not be marked as done and can be retried later).

With backup.logfile updated, I could move on to the next subdirectories, and once all of the important stuff was done, I did the same with a list of all file contents to make sure that all files were properly backed up.

But wait, there's more!

Now I had the contents of all files backed up, so the data was nearly safe. I did, however, find that the disk contained a number of hardlinks and/or symlinks, which did not work. I did not dive into the details, but it seems that some of the metadata and perhaps even file content is stored in a special "metadata directory", which is hidden by the Linux filesystem driver. So my filefrag-based "all files" method above did not back up sufficient data to actually read these link files from the backup.

I could have figured out where on disk these metadata files were stored and do a backup of that, but then I still might have missed some other special blocks that are not part of the regular structure. I could of course back up every block, but then I would be copying around 1000GB of mostly unused space, of which only a few MB or GB would actually be relevant.

Instead, I found that HFS+ keeps an "allocation file". This file contains a single bit for each block in the filesystem, storing whether the block is allocated (1) or free (0). Simply looking at this bitmap and backing up all blocks that are allocated should make sure I had all data, leaving only unused blocks behind.

The position of this allocation file is stored in the volume header, just like the catalog file. In my case, it was stored in a single extent, making it fairly easy to parse.

The volume header says:

00000070  00 00 00 00 03 a3 60 00  03 a3 60 00 00 00 3a 36  |......`...`...:6|
          ^^^^^^^^^^^^^^^^^^^^^^^ Allocation file size, in bytes: 0x3a36000

00000080  00 00 00 01 00 00 3a 36  00 00 00 00 00 00 00 00  |......:6........|
                      ^^^^^^^^^^^ First extent size, in 4k blocks: 0x3a36
          ^^^^^^^^^^^ First extent offset, in 4k blocks: 0x1

This means the allocation file takes up 0x3a36 blocks (of 4096 bytes, each byte holding 8 bits), so it can store the status of 0x3a36 * 4k * 8 = 0x1d1b0000 blocks, which is rounded up from the total size of 0x1d1aa8f6 blocks.
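
The same arithmetic in shell, as a quick sanity check:

echo $((0x3a36 * 4096 * 8))   # 488308736 (0x1d1b0000) blocks' worth of bits
echo $((0x1d1aa8f6))          # 488286454 blocks actually in the filesystem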

First, I got the allocation file off the disk image (this uses bash arithmetic expansion to convert hex to decimal, you can also do this manually):

dd if=/dev/backup of=allocation bs=4096 skip=1 count=$((0x3a36))

Then, I wrote a small Python script, parse-allocation-file.py, to parse the allocation file and output a ddrescue mapfile. I started out in bash, but that got tricky with bit manipulation, so I quickly converted to Python.

The first attempt at this script would just output a single line for each block, to let ddrescuelog merge adjacent blocks, but that would produce such a large file that I stopped it and improved the script to do the merging directly.

cat allocation | ./parse-allocation-file.py > Allocated.notdone

This produces an Allocated.notdone mapfile, in which all free blocks are marked as "finished", and all allocated blocks are marked as "non-tried".

As a sanity check, I verified that there was no overlap between the non-allocated areas and all files (i.e. the output of the following command showed no done/rescued blocks):

ddrescuelog AllFiles.list.done --and-mapfile Allocated.notdone | ddrescuelog --show-status -

Then, I looked at how much data was allocated, but not part of any file:

ddrescuelog AllFiles.list.done --or-mapfile Allocated.notdone | ddrescuelog --show-status -

This marked all non-allocated areas and all files as done, leaving a whopping 21GB of data that was somehow in use, but not part of any files. This size includes stuff like the volume header, catalog, the allocation file itself, but 21GB seemed a lot to me. It also includes the metadata file, so perhaps there's a bit of data in there for each file on disk, or perhaps the file content of hard linked data?

Nearing the end

Armed with my Allocated.notdone file, I used the same commands as before to let ddrescue back up all allocated sectors and made sure all data was safe.

For good measure, I then let ddrescue continue backing up the remainder of the disk (i.e. all unallocated sectors), but it seemed the disk was nearing its end now. The backup speed (even during the "fast" first 30 seconds) had dropped to under 300kB/s, so I was looking at a couple more weeks (and thousands of powercycles) for the rest of the data, assuming the speed did not drop further. Since the rest of the backup should only be unused space, I stopped the backup and focused on the recovered data instead.

What was interesting was that during all this time, the number of reallocated sectors (as reported by SMART) had not increased at all. So it seems unlikely that the slowness was caused by bad sectors (unless the disk firmware somehow tried to recover data from these reallocated sectors in the background and locked itself up in the process). The slowness also did not seem related to which sectors I had been reading. I'm happy that the data was recovered, but I honestly cannot tell why the disk was failing in this particular way...

In case you're in a similar position, the scripts I wrote are available for download.

So, with a few days of work, around a week of crunch time for the hard disk and about 4,000 powercycles, all 1000GB of files were safe again. Time to get back to some real work :-)

Comments
elmo wrote at 2023-01-22 23:03

You can use the ddrescue -m option to provide a 'domain' mapfile that tells ddrescue to only work on that part of the disk. This avoids all the mapfile manipulation you had to go through.

Matthijs Kooijman wrote at 2023-01-31 17:59

Thanks for the tip; had I realized that option existed, it would have saved quite some fiddling with mapfiles. I've added a remark to my post about this!
