Glider
"In het verleden behaalde resultaten bieden geen garanties voor de toekomst"
About this blog

These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life.

Most content on this site is licensed under the WTFPL, version 2 (details).

Questions? Praise? Blame? Feel free to contact me.

My old blog (pre-2006) is also still available.

Recovering data from a failing hard disk with HFS+

Recently, a customer asked me to have a look at an external hard disk he was using with his Macbook. It would show a file listing just fine, but when trying to open actual files, it would start failing. Of course there was no backup, but the files were very precious...

This started out as a small question, but ended up in an adventure that spanned a few days and took me deep into the ddrescue recovery tool, through the HFS+ filesystem and past USB power port control. I learned a lot, discovered some interesting things and produced a pile of scripts that might be helpful to others. Since the journey seems interesting as well as the end result, I will describe the steps I took here, "ter leering ende vermaeck" (for learning and amusement).

I started out confirming the original problem. Plugging the disk into my Linux laptop, it showed up as expected in dmesg. I could mount the disk without problems, see the directory listing and even open up an image file stored on the disk. Opening other files didn't seem to work.

SMART

As you do with bad disks, you try to get their SMART data. Since smartctl did not support this particular USB bridge (and I wasn't game to try random settings to see if it worked on a failing disk), I gave up on SMART initially. I later opened up the case to bypass the USB-to-SATA controller (in case the problem was there, and to make SMART work), but found that this particular hard drive had the converter built into the drive itself (so the USB part was directly attached to the drive). Even later, I found some page online (I have not saved the link) that showed the disk was indeed supported by smartctl and listed the option to pass to smartctl -d to make it work. SMART confirmed that the disk was indeed failing, based on the number of reallocated sectors (2805).
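
For reference, such an invocation looks roughly like this (the device name and the -d type are just examples; the right type depends on the USB bridge in use):

sudo smartctl -d sat -a /dev/sdd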

Fast-then-slow copying

Since opening up files didn't work so well, I prepared to make a sector-by-sector copy of the partition on the disk, using ddrescue. This tool has a good approach to salvaging data, where it tries to copy off as much data as possible quickly, skipping data when it comes to a bad area on disk. Since reading a bad sector on a disk often takes a lot of time (before returning failure), ddrescue tries to steer clear of these bad areas and focus on the good parts first. Later, it returns to these bad areas and, in a few passes, tries to get out as much data as possible.

At first, copying data seemed to work well, giving a decent read speed of some 70MB/s as well. But very quickly the speed dropped terribly and I suspected the disk ran into some bad sector and kept struggling with that. I reset the disk (by unplugging it) and did a few more attempts and quickly discovered something weird: The disk would work just fine after plugging it in, but after a while the speed would plummet to a whopping 64Kbyte/s or less. This happened every time. Even more, it happened pretty much exactly 30 seconds after I started copying data, regardless of what part of the disk I copied data from.

So I quickly wrote a one-liner script that would start ddrescue, kill it after 45 seconds, wait for the USB device to disappear and reappear, and then start over again. That way, by replugging the USB cable about once every minute, I could at least back up some data while I was investigating other things.
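
A minimal sketch of such a loop (the device name is an example; ddrescue keeps its progress in the logfile, so it can safely be interrupted and restarted):

while true; do
  # Copy for at most 45 seconds, then stop ddrescue (progress is saved in the logfile)
  timeout 45 ddrescue -d /dev/sdd2 backup.img backup.logfile
  # Wait for the device to disappear (when the cable is pulled) ...
  while [ -b /dev/sdd2 ]; do sleep 1; done
  # ... and to reappear (when it is plugged back in)
  while [ ! -b /dev/sdd2 ]; do sleep 1; done
done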

Since the speed was originally 70MB/s, I could pull a few GB worth of data every time. Since it was a 2000GB disk, I "only" had to plug the USB connector around a thousand times. Not entirely infeasible, but not quite comfortable or efficient either.

So I investigated ways to further automate this process: using hdparm to spin down or shut down the disk, using USB power saving to let the disk reset itself, and disabling the USB subsystem completely. However, nothing seemed to bring the speed back other than completely powering down the disk by removing the USB plug.
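
The attempts looked roughly like this (the device name and sysfs paths are examples and differ per system):

sudo hdparm -y /dev/sdd        # put the drive into standby (spin down)
sudo hdparm -Y /dev/sdd        # put the drive to sleep (lowest power mode)
echo auto | sudo tee /sys/bus/usb/devices/2-1/power/control      # allow USB autosuspend
echo 2-1:1.0 | sudo tee /sys/bus/usb/drivers/usb-storage/unbind  # detach the storage driver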

While I was trying these things, the speed during those first 30 seconds dropped, even below 10MB/s at some point. At that point, I could salvage around 200MB with each power cycle and was looking at pulling the USB plug around 10,000 times: no way that would be happening manually.

Automatically pulling the plug

I resolved to further automate this unplugging and planned to use an Arduino (or perhaps the GPIO of a Raspberry Pi) and something like a relay or transistor to interrupt the power line to the hard disk and "unplug" it.

For that, I needed my current measuring board to easily interrupt the USB power lines, which I had to bring from home. In the meanwhile, I found uhubctl, a small tool that uses low-level USB commands to individually control the port power on some hubs. Most hubs don't support this (or advertise support, but simply don't have the electronics to actually switch power, apparently), but I noticed that the newer Raspberry Pis support this (for port 2 only, but that would be enough).

Coming to the office the next day, I set up a Raspberry Pi and tried uhubctl. It did indeed toggle USB power, but the toggle would affect all USB ports at the same time, rather than just port 2. So I could switch power to the faulty drive, but that would also cut power to the good drive that I was storing the recovered data on, and I was not quite prepared to give the good drive 10,000 powercycles.

The next plan was to connect the recovery drive through the network, rather than directly to the Raspberry Pi. On Linux, setting up a network drive using SSHFS is easy, so that worked in a few minutes. However, somehow ddrescue insisted it could not write to the destination file and logfile, citing permission errors (but the permissions seemed just fine). I suspect it might be trying to mmap or something else that would not work across SSHFS....
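
Setting up such a mount is a single command (the hostname and paths are examples):

sshfs user@recovery-host:/mnt/recover /mnt/recover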

The next plan was to find a powered hub - so the recovery drive could stay powered while the failing drive was powercycled. I rummaged around the office looking for USB hubs, and eventually came up with some USB-based docking station that was externally powered. When connecting it, I tried the uhubctl tool on it, and found that one of its six ports actually supported powertoggling. So I connected the failing drive to that port, and prepared to start the backup.
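
With uhubctl, you can list the hubs it can control and then switch a specific port; roughly like this (the hub location and port number are examples, taken from the listing):

sudo uhubctl                     # list supported hubs and their port power state
sudo uhubctl -l 1-2 -p 1 -a off  # cut power to port 1 of the hub at location 1-2
sudo uhubctl -l 1-2 -p 1 -a on   # and restore it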

When trying to mount the recovery drive, I discovered that a Raspberry Pi only supports filesystems up to 2TB (probably because it uses a 32-bit architecture). My recovery drive was 3TB, so that would not work on the Pi.

Time for a new plan: do the recovery from a regular PC. I already had one ready that I used the previous day, but now I needed to boot a proper Linux on it (previously I used a minimal Linux image from UBCD, but that didn't have a compiler installed to allow using uhubctl). So I downloaded a Debian live image (over a mobile connection - we were still waiting for fiber to be connected) and 1.8GB and 40 minutes later, I finally had a working setup.

The run.sh script I used to run the backup basically does this (a sketch follows the list):

  1. Run ddrescue to pull off data
  2. After 35 seconds, kill ddrescue
  3. Tell the disk to sleep, so it can spin down gracefully before cutting the power.
  4. Tell the disk to sleep again, since sometimes it doesn't work the first time.
  5. Cycle the USB power on the port
  6. Wait for the disk to re-appear
  7. Repeat from 1.
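
Roughly, that looks like this (the device name, hub location and port number are examples; in practice a /dev/disk/by-id/ path is more robust, since the disk may re-appear under a different name):

#!/bin/sh
while true; do
  # 1 & 2: copy data for 35 seconds, then stop ddrescue (progress is saved in the logfile)
  timeout 35 ddrescue -d /dev/sdd2 backup.img backup.logfile
  # 3 & 4: ask the disk to spin down gracefully, twice for good measure
  hdparm -Y /dev/sdd
  sleep 1
  hdparm -Y /dev/sdd
  # 5: cycle the power on the USB port the failing disk is attached to
  uhubctl -l 1-2 -p 1 -a off
  sleep 2
  uhubctl -l 1-2 -p 1 -a on
  # 6: wait for the disk to re-appear before the next round
  while [ ! -b /dev/sdd2 ]; do sleep 1; done
  sleep 2
done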

By now, the speed of recovery had been fluctuating a bit, but was between 10MB/s and 30MB/s. That meant I was looking at somewhere between a few thousand and ten thousand powercycles and a few days up to a week to back up the complete disk (and more if the speed dropped further).

Selectively backing up

Realizing that there would be a fair chance that the disk would indeed get slower, or even die completely due to all these power cycles, I had to assume I could not back up the complete disk.

Since I was making the backup sector by sector using ddrescue, this meant a risk of not getting any meaningful data at all. Files are typically fragmented, so they can be stored anywhere on the disk, possibly spread over multiple areas as well. If you just start copying at the start of the disk, but do not make it to the end, you will have backed up some data, but that data could belong to all kinds of different files. That means that you might have some files in a directory, but not others. Also, a lot of files might only be partially recovered, the missing parts being read as zeroes. Finally, you will also end up backing up all unused space on the disk, which is rather pointless.

To prevent this, I had to figure out where all kinds of stuff was stored on the disk.

The catalog file

The first step was to make sure the backup file could be mounted (using a loopback device). On my first attempt, I got an error about an invalid catalog.

I looked around for some documentation about the HFS+ filesystem, and found a nice introduction by infosecaddicts.com and a more detailed description at dubeiko.com. The catalog is apparently where the directory structure, filenames and other metadata are all stored in a single place.

This catalog is not in a fixed location (since its size can vary), but its location is noted in the so-called volume header, a fixed-size datastructure located at 1024 bytes from the start of the partition. More details (including easier to read offsets within the volume header) are provided in this example.

Looking at the volume header inside the backup gives me:

root@debian:/mnt/recover/WD backup# dd if=backup.img bs=1024 skip=1 count=1 2> /dev/null | hd
00000000  48 2b 00 04 80 00 20 00  48 46 53 4a 00 00 3a 37  |H+.... .HFSJ..:7|
00000010  d4 49 7e 38 d8 05 f9 64  00 00 00 00 d4 49 1b c8  |.I~8...d.....I..|
00000020  00 01 24 7c 00 00 4a 36  00 00 10 00 1d 1a a8 f6  |..$|..J6........|
                                   ^^^^^^^^^^^ Block size: 4096 bytes
00000030  0e c6 f7 99 14 cd 63 da  00 01 00 00 00 01 00 00  |......c.........|
00000040  00 02 ed 79 00 6e 11 d4  00 00 00 00 00 00 00 01  |...y.n..........|
00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000060  00 00 00 00 00 00 00 00  a7 f6 0c 33 80 0e fa 67  |...........3...g|
00000070  00 00 00 00 03 a3 60 00  03 a3 60 00 00 00 3a 36  |......`...`...:6|
00000080  00 00 00 01 00 00 3a 36  00 00 00 00 00 00 00 00  |......:6........|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000000c0  00 00 00 00 00 e0 00 00  00 e0 00 00 00 00 0e 00  |................|
000000d0  00 00 d2 38 00 00 0e 00  00 00 00 00 00 00 00 00  |...8............|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000110  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
00000120  00 0d 82 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000160  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
00000170  00 00 e0 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
00000180  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000400

00000110  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
          ^^^^^^^^^^^^^^^^^^^^^^^ Catalog size, in bytes: 0x12600000

00000120  00 0d 82 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
                      ^^^^^^^^^^^ First extent size, in 4k blocks: 0x12600
          ^^^^^^^^^^^ First extent offset, in 4k blocks: 0xd8238
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

I have annotated the parts that refer to the catalog. The content of the catalog (just like that of all other files) is stored in "extents". An extent is a single, contiguous block of storage that contains (a part of) the content of a file. Each file can consist of multiple extents, which prevents having to move file content around each time things change (i.e. it allows fragmentation).

In this case, the catalog is stored in only a single extent (since the subsequent extent descriptors contain only zeroes). All extent offsets and sizes are in blocks of 4k bytes, so this extent lives at 0xd8238 * 4k = byte 3626205184 (~3.4G) and is 0x12600 * 4k = 294MiB long. So I backed up the catalog by adding -i 3626205184 to ddrescue, making it skip ahead to the location of the catalog (and then power cycled a few times until it copied the needed 294MiB).
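
In other words, an invocation roughly like this (the device name is an example; the optional -s limits the run to just the catalog extent):

sudo ddrescue -d -i 3626205184 -s 308281344 /dev/sdd2 backup.img backup.logfile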

After backing up the catalog, I could mount the image file just fine, and navigate the directory structure. Trying to open files would mostly fail, since most files would only read zeroes now.

I did the same for the allocation file (which tracks free blocks), the extents file (which tracks the content of files that are more fragmented and whose extent list does not fit in the catalog) and the attributes file (not sure what that is, but for good measure).

Afterwards, I wanted to continue copying from where I previously left off, so I tried passing -i 0 to ddrescue, but it seems this option can only be used to skip ahead, not back. In the end, I just edited the logfile, which is just a text file, to set the current position to 0. ddrescue is smart enough to skip over blocks it already backed up (or marked as failed), so it then continued where it previously left off.

Where are my files?

With the catalog backed up, I needed to read it to figure out what files were stored where, so I could make sure the most important files were backed up first, followed by all other files, skipping any unused space on the disk.

I considered and tried some tools for reading the catalog directly, but none of them seemed workable. I looked at hfssh from hfsutils (which crashed), hfsdebug (which is discontinued and no longer available for download) and hfsinspect (which calls itself "quite buggy").

Instead, I found the filefrag commandline utility, which uses a Linux filesystem syscall to figure out where the contents of a particular file are stored on disk. To coax the output of that tool into a list of extents usable by ddrescue, I wrote a one-liner shell script called list-extents.sh:

sudo filefrag -e "$@"  | grep  '^   ' |sed 's/\.\./:/g' | awk -F: '{print $4, $6}'

Given any number of filenames, it produces a list of (start, size) pairs for each extent in the listed files (in 4k blocks, which is the Linux VFS native block size).

With the backup image loopback-mounted at /mnt/backup, I could then generate an extent list for a given subdirectory using:

sudo find /mnt/backup/SomeDir -type f -print0 | xargs -0 -n 100 ./list-extents.sh > SomeDir.list

To turn this plain list of extents into a logfile usable by ddrescue, I wrote another small script called post-process.sh, that adds the appropriate header, converts from 4k blocks to 512-byte sectors, converts to hexadecimal and sets the right device size (so if you want to use this script, edit it with the right size). It is called simply like this:

./post-process.sh SomeDir.list

This produces two new files: SomeDir.list.done, in which all of the selected files are marked as "finished" (and all other blocks as "non-tried") and SomeDir.list.notdone which is reversed (all selected files are marked as "non-tried" and all others are marked as "finished").
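
For reference, such a ddrescue logfile (nowadays called a mapfile) is a plain text file that looks roughly like this, with byte offsets and sizes in hexadecimal, '+' meaning finished and '?' meaning non-tried (the values here are made up):

# current_pos  current_status
0x00000000     ?
#      pos        size  status
0x00000000  0x00200000  +
0x00200000  0x1D000000  ?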

Backing up specific files

Armed with a couple of these logfiles for the most important files on the disk and one for all files on the disk, I used the ddrescuelog tool to tell ddrescue what stuff to work on first. The basic idea is to mark everything that is not important as "finished", so ddrescue will skip over it and only work on the important files.

ddrescuelog backup.logfile --or-mapfile SomeDir.list.notdone | tee todo.original > todo

This uses the ddrescuelog --or-mapfile option, which takes my existing logfile (backup.logfile) and marks all bytes as finished that are marked as finished in the second file (SomeDir.list.notdone). IOW, it marks all bytes that are not part of SomeDir as done. This generates two copies (todo and todo.original) of the result; I'll explain why in a minute.

With the generated todo file, we can let ddrescue run (though I used the run.sh script instead):

# Then run on the todo file
sudo ddrescue -d /dev/sdd2 backup.img todo -v -v

Since the generation of the todo file effectively threw away information (we can no longer see from the todo file what parts of the non-important sectors were already copied, or had errors, etc.), we need to keep the original backup.logfile around too. Using the todo.original file, we can figure out what the last run did, and update backup.logfile accordingly:

ddrescuelog backup.logfile --or-mapfile <(ddrescuelog --xor-mapfile todo todo.original) > newbackup.logfile

Note that you could also use SomeDir.list.done here, but actually comparing todo and todo.original helps in case there were any errors in the last run (so the error sectors will not be marked as done and can be retried later).

With backup.logfile updated, I could move on to the next subdirectories, and once all of the important stuff was done, I did the same with a list of all file contents to make sure that all files were properly backed up.

But wait, there's more!

Now I had the contents of all files backed up, so the data was nearly safe. I did, however, find that the disk contained a number of hardlinks and/or symlinks, which did not work. I did not dive into the details, but it seems that some of the metadata and perhaps even file content is stored in a special "metadata directory", which is hidden by the Linux filesystem driver. So my filefrag-based "all files" method above did not back up sufficient data to actually read these link files from the backup.

I could have figured out where on disk these metadata files were stored and do a backup of that, but then I still might have missed some other special blocks that are not part of the regular structure. I could of course back up every block, but then I would be copying around 1000GB of mostly unused space, of which only a few MB or GB would actually be relevant.

Instead, I found that HFS+ keeps an "allocation file". This file contains a single bit for each block in the filesystem, storing whether the block is allocated (1) or free (0). Simply looking at this bitmap and backing up all blocks that are allocated should make sure I had all data, and only left unused blocks behind.

The position of this allocation file is stored in the volume header, just like the catalog file. In my case, it was stored in a single extent, making it fairly easy to parse.

The volume header says:

00000070  00 00 00 00 03 a3 60 00  03 a3 60 00 00 00 3a 36  |......`...`...:6|
          ^^^^^^^^^^^^^^^^^^^^^^^ Allocation file size, in bytes: 0x3a36000

00000080  00 00 00 01 00 00 3a 36  00 00 00 00 00 00 00 00  |......:6........|
                      ^^^^^^^^^^^ First extent size, in 4k blocks: 0x3a36
          ^^^^^^^^^^^ First extent offset, in 4k blocks: 0x1

This means the allocation file takes up 0x3a36 blocks of 4096 bytes each. With 8 bits per byte, it can store the status of 0x3a36 * 4k * 8 = 0x1d1b0000 blocks, which is the filesystem's total size of 0x1d1aa8f6 blocks, rounded up.

First, I got the allocation file off the disk image (this uses bash arithmetic expansion to convert hex to decimal; you can also do this manually):

dd if=/dev/backup of=allocation bs=4096 skip=1 count=$((0x3a36))

Then, I wrote a small python script parse-allocation-file.py to parse the allocation file and output a ddrescue mapfile. I started out in bash, but that got tricky with bit manipulation, so I quickly converted to Python.

The first attempt at this script would just output a single line for each block, to let ddrescuelog merge adjacent blocks, but that would produce such a large file that I stopped it and improved the script to do the merging directly.

cat allocation | ./parse-allocation-file.py > Allocated.notdone

This produces an Allocated.notdone mapfile, in which all free blocks are marked as "finished", and all allocated blocks are marked as "non-tried".

As a sanity check, I verified that there was no overlap between the non-allocated areas and all files (i.e. the output of the following command showed no done/rescued blocks):

ddrescuelog AllFiles.list.done --and-mapfile Allocated.notdone | ddrescuelog --show-status -

Then, I looked at how much data was allocated, but not part of any file:

ddrescuelog AllFiles.list.done --or-mapfile Allocated.notdone | ddrescuelog --show-status -

This marked all non-allocated areas and all files as done, leaving a whopping 21GB of data that was somehow in use, but not part of any files. This size includes stuff like the volume header, catalog, the allocation file itself, but 21GB seemed a lot to me. It also includes the metadata file, so perhaps there's a bit of data in there for each file on disk, or perhaps the file content of hard linked data?

Nearing the end

Armed with my Allocated.notdone file, I used the same commands as before to let ddrescue back up all allocated sectors and made sure all data was safe.

For good measure, I then let ddrescue continue backing up the remainder of the disk (i.e. all unallocated sectors), but it seemed the disk was nearing its end now. The backup speed (even during the "fast" first 30 seconds) had dropped to under 300kB/s, so I was looking at a couple more weeks (and thousands of powercycles) for the rest of the data, assuming the speed did not drop further. Since the rest of the backup should only be unused space, I shut down the backup and focused on the recovered data instead.

What was interesting was that during all this time, the number of reallocated sectors (as reported by SMART) had not increased at all. So it seems unlikely that the slowness was caused by bad sectors (unless the disk firmware somehow tried to recover data from these reallocated sectors in the background and locked itself up in the process). The slowness also did not seem related to which sectors I had been reading. I'm happy that the data was recovered, but I honestly cannot tell why the disk was failing in this particular way...

In case you're in a similar position, the scripts I wrote are available for download.

So, with a few days of work, around a week of crunch time for the hard disk and about 4,000 powercycles, all 1000GB of files were safe again. Time to get back to some real work :-)

 
0 comments -:- permalink -:- 14:53
Modifying a LED strip DMX dimmer for incandescent bulbs

DMX PWM dimmer module

For a theatre performance, I needed to make the tail lights of an old car controllable through the DMX protocol, which is the protocol most commonly used to control stage lighting. Since these are just small incandescent light bulbs running on 12V, I essentially needed a DMX-controllable 12V dimmer. I knew that ready-made modules exist for controlling LED strips, which also run at 12V, so I went ahead and tried using one of those for my tail lights instead.

I looked around ebay for a module to use, and found this one. It seems the same design is available from dozens of different vendors on ebay, so these are probably clones, or a single manufacturer supplying them all.

DMX module details

This module has a DMX input and output using XLR or a modular connector, and screw terminals for 12V power input, 4 output channels and one common connection. The common connection is 12V, so the output channels sink current (i.e. "common anode"), which is relevant for LEDs. For incandescent bulbs, current can flow either way, so this does not really matter.

Dimmer module PCB

Opening up the module, it seems fairly simple. There's a microcontroller (or a dedicated DMX decoder chip? I couldn't find a datasheet) inside, along with two RS-422 transceivers for DMX, four AP60T03GH MOSFETs for driving the channels, and one linear regulator to generate a logic supply voltage.

On the DMX side, this means that the module has separate input and output signals (instead of just connecting them together). It also means that the DMX signal is not isolated, which violates the recommendations of the DMX specification AFAIU (and might be problematic if there is more than a few volts of ground difference). On the output side, it seems there are just MOSFETs to toggle the output, without any additional protection.

See more ...

 
0 comments -:- permalink -:- 16:56
Running an existing Windows 7 partition under QEMU/KVM/virt-manager

I was previously running an ancient Windows XP install under Virtualbox for the occasional time I needed Windows for something. However, since Debian Stretch no longer supplies virtualbox due to security policy problems, I've been experimenting with QEMU, KVM and virt-manager. Migrating my existing VirtualBox XP installation to virt-manager didn't work (it simply wouldn't boot), and I do not have any spare Windows keys lying around, but I do have a Windows 7 install alongside my Linux on a different partition, so I decided to see if I could get that to boot inside QEMU/KVM.

An obvious problem is the huge change in hardware between the real and virtual environment. Apparently recent Windows versions don't really mind this in terms of drivers, but the activation process could be a problem, especially when booting both virtually and natively. So far I have not seen any complications with either drivers or activation, not even after switching to virtio drivers (see below). I am using an OEM (preactivated?) version of Windows, so that might help in this area.

Update: When booting Windows in the VM a few weeks later, it started bugging me that my Windows was not genuine, and it seems it is no longer activated. Clicking the "resolve now" link gives a broken webpage, and going through system properties suggests contacting Lenovo (my laptop provider) to resolve this (or buying a new license). I'm not yet sure if this is really problematic, though. This happened shortly after replacing my hard disk, though I'm not sure if that's actually related.

Rebooting into Windows natively shows it is activated (again or still), but booting it virtually directly after that still shows as not activated...

Creating the VM

Booting the installation was actually quite painless: I just used the wizard inside virt-manager, entered /dev/sda (my primary hard disk) as the storage device, pressed start, selected to boot Windows in my bootloader and it booted Windows just fine.

Booting is not really fast, but once it runs, things are just a bit sluggish but acceptable.

One caveat is that this adds the entire disk, not just the Windows partition. This also means the normal bootloader (grub in my case) will be used inside the VM, which will happily boot the normal default operating system. Protip: Don't boot your Linux installation inside a VM inside that same Linux installation; both instances will end up fighting in your filesystem. Thanks to fsck, which seems to have fixed the resulting garbage so far...

To prevent this, make sure to actually select your Windows installation in the bootloader. See below for a more permanent solution.

See more ...

 
0 comments -:- permalink -:- 18:13
Calculating a constant path basename at compiletime in C++

In some Arduino / C++ project, I was using a custom assert() macro that, if the assertion failed, would show an error message, along with the current filename and line number. The filename was automatically retrieved using the __FILE__ macro. However, this macro returns a full path, while we only had little room to show it, so we wanted to show the filename only.

Until now, we've been storing the full filename, and when an assert was triggered we would use the strrchr function to chop off all but the last part of the filename (commonly called the "basename") and display only that. This works just fine, but it is a waste of flash memory, storing all these (mostly identical) paths. Additionally, when an assertion fails, you want to get a message out ASAP, since who knows what state your program is in.

Neither of these is really a showstopper for this particular project, but I suspected there would be some way to use C++ constexpr functions and templates to force the compiler to handle this at compiletime, and only store the basename instead of the full path. This week, I took up the challenge and made something that works, though it is not completely pretty yet.

Working out where the path ends and the basename starts is fairly easy using something like strrchr. Of course, that's a runtime version, but it is easy to do a constexpr version by implementing it recursively, which allows the compiler to evaluate these functions at compiletime.

For example, here are constexpr versions of strrchrnul(), basename() and strlen():

/**
 * Return the last occurence of c in the given string, or a pointer to
 * the trailing '\0' if the character does not occur. This should behave
 * just like the regular strrchrnul function.
 */
constexpr const char *static_strrchrnul(const char *s, char c) {
  /* C++14 version
    if (*s == '\0')
      return s;
    const char *rest = static_strrchrnul(s + 1, c);
    if (*rest == '\0' && *s == c)
      return s;
    return rest;
  */

  // Note that we cannot implement this while returning nullptr when the
  // char is not found, since looking at (possibly offsetted) pointer
  // values is not allowed in constexpr (not even to check for
  // null/non-null).
  return *s == '\0'
      ? s
      : (*static_strrchrnul(s + 1, c) == '\0' && *s == c)
        ? s
        : static_strrchrnul(s + 1, c);
}

/**
 * Return one past the last separator in the given path, or the start of
 * the path if it contains no separator.
 * Unlike the regular basename, this does not handle trailing separators
 * specially (so it returns an empty string if the path ends in a
 * separator).
 */
constexpr const char *static_basename(const char *path) {
  return (*static_strrchrnul(path, '/') != '\0'
      ? static_strrchrnul(path, '/') + 1
      : path
     );
}

/** Return the length of the given string */
constexpr size_t static_strlen(const char *str) {
  return *str == '\0' ? 0 : static_strlen(str + 1) + 1;
}

So, to get the basename of the current filename, you can now write:

constexpr const char *b = static_basename(__FILE__);

However, that just gives us a pointer halfway into the full string literal. In practice, this means the full string literal will be included in the link, even though only a part of it is referenced, which voids the space savings we're hoping for (confirmed on avr-gcc 4.9.2, but I do not expect newer compiler version to be smarter about this, since the linker is involved).

To solve that, we need to create a new char array variable that contains just the part of the string that we really need. As happens more often when I look into complex C++ problems, I came across a post by Andrzej Krzemieński, which shows a technique to concatenate two constexpr strings at compiletime (his blog has a lot of great posts on similar advanced C++ topics, a recommended read!). For this, he has a similar problem: He needs to define a new variable that contains the concatenation of two constexpr strings.

For this, he uses some smart tricks using parameter packs (variadic template arguments), which allow declaring an array and setting its initial value using pointer references (e.g. char foo[] = {ptr[0], ptr[1], ...}). One caveat is that the length of the resulting string is part of its type, so it must be specified using a template argument. In the concatenation case, this can be easily derived from the types of the strings to concatenate, so that gives nice and clean code.

In my case, the length of the resulting string depends on the contents of the string itself, which is more tricky. There is no way (that I'm aware of, suggestions are welcome!) to deduce a template argument based on the value of a non-template argument automatically. What you can do is use constexpr functions to calculate the length of the resulting string, and explicitly pass that length as a template argument. Since you also need to pass the contents of the new string as a normal argument (since template parameters cannot be arbitrary pointer-to-strings, only addresses of variables with external linkage), this introduces a bit of duplication.

Applied to this example, this would look like this:

constexpr const char *basename_ptr = static_basename(__FILE__);
constexpr auto basename = array_string<static_strlen(basename_ptr)>(basename_ptr);

This uses the static_string library published along with the above blogpost. For this example to work, you will need some changes to the static_string class (to make it accept regular char* as well), see this pull request for the version I used.

The resulting basename variable is an array_string object, which contains just a char array with the resulting string. You can use array indexing on it directly to access individual characters, implicitly convert it to const char* or explicitly convert it using basename.c_str().

So, this solves my requirement pretty neatly (saving a lot of flash space!). It would be even nicer if I did not need to repeat the basename_ptr above, or could move the duplication into a helper class or function, but that does not seem to be possible.

 
0 comments -:- permalink -:- 21:33
Automatically remotely attaching tmux and forwarding things

GnuPG logo

I recently upgraded my systems to Debian Stretch, which caused GnuPG to stop working within Mutt. I'm not exactly sure what was wrong, but I discovered that GnuPG version 2 changed quite some things and relies more heavily on the gpg-agent, and I discovered that recent SSH versions can forward unix domain sockets instead of just TCP sockets, which allows forwarding a gpg-agent connection over SSH.

Until now, I had my GPG private keys stored on my server, Tika, where my Mutt mail client also runs. However, storing private keys, even with a passphrase, on a permanently connected multi-user system never felt quite right. So this seemed like a good opportunity to set up proper forwarding for my gpg-agent, and keep my private keys confined to my laptop.

I already had some small scripts in place to easily connect to my server through SSH, attach to the remote tmux session (or start it), set up some port forwards (in particular a reverse port forward for SSH so my mail client and IRC client could open links in my browser), and quickly reconnect when the connection fails. However, one annoyance was that when the connection fails, the server might not immediately notice, so reconnecting usually left me with failed port forwards (since the remote listening port was still taken by the old session). This seemed like a good occasion to fix that as well.

The end result is a reasonably complex script, that is probably worth sharing here. The script can be found in my scripts git repository. On the server, it calls an attach script, but that's not much more than attaching to tmux, or starting a new session with some windows if no session is running yet.

The script is reasonably well-commented, including an introduction on what it can do, so I will not repeat that here.

For the GPG forwarding, I based upon this blogpost. There, they suggest configuring an extra-socket in gpg-agent.conf, but I've found that gpg-agent already created an extra socket (whose path I could query with gpgconf --list-dirs), so I didn't use that extra-socket configuration line. They also talk about setting StreamLocalBindUnlink to clean up a lingering socket when creating a new one, but that is already handled by my script instead.
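
The heart of that forwarding is an SSH remote forward of a unix domain socket, roughly like this (the socket paths are examples; the local one should be the extra socket reported by gpgconf --list-dirs, the remote one the standard agent socket on the server; my script also removes a stale remote socket first, instead of relying on StreamLocalBindUnlink):

ssh -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent.extra user@server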

Furthermore, to prevent a gpg-agent from being autostarted by GnuPG server-side (in case the forwarding fails, or when I would connect without this script, etc.), I added no-autostart to ~/.gnupg/gpg.conf. I'm not running a systemd user session on my server, but if you are, you might need to disable or mask some gpg-agent sockets and/or services to prevent systemd from creating sockets for gpg-agent and starting it on-demand.

My next step is to let gpg-agent also be my ssh-agent (or perhaps just use plain ssh-agent) to enforce confirming each SSH authentication request. I'm currently using gnome-keyring / seahorse as my SSH agent, but that just silently approves everything, which doesn't really feel secure.

 
0 comments -:- permalink -:- 16:46
Running Ruby on Rails using Systemd socket activation

Ruby on Rails logo

On a small embedded system, I wanted to run a simple Rails application and have it automatically start up at system boot. The system is running systemd, so a systemd service file seemed appropriate to start the rails service.

Normally, when you run the ruby-on-rails standalone server, it binds on port 3000. Binding on port 80 normally requires root (or a special capability enabled for all of ruby), but I don't want to run the rails server as root. AFAIU, normal deployments use something like Nginx to open port 80 and let it forward requests to the rails server, but I wanted a minimal setup, with just the rails server.

An elegant way to bind port 80 without running as root is to use systemd's socket activation feature. Using socket activation, systemd (running as root) opens up a network port before starting the daemon. It then starts the daemon, which inherits the open network socket file descriptor, along with some environment variables to indicate this. Apart from allowing privileged ports without root, this has other advantages such as on-demand starting, easier parallel startup and seamless restarts and upgrades (none of which is really important for my usecase, but it is still nice :-p).

See more ...

 
1 comment -:- permalink -:- 10:35
Raspberry pi powerdown and powerup button

Raspberry Pi Zero

TL;DR: This post describes an easy way to add a power button to a Raspberry Pi that:

  • Only needs a button and wires, no other hardware components.
  • Allows graceful shutdown and powerup.
  • Only needs modification of config files and does not need a dedicated daemon to read GPIO pins.

There are two caveats:

  • This shuts down in the same way as shutdown -h now or halt does. It does not completely cut the power (like some hardware add-ons do).
  • To allow powerup, the I²C SCL pin (aka GPIO3) must be used, conflicting with externally added I²C devices.

If you use Raspbian stretch 2017.08.16 or newer, all that is required is to add a line to /boot/config.txt:

dtoverlay=gpio-shutdown,gpio_pin=3

Make sure to reboot after adding this line. If you need to use a different gpio, or different settings, look up gpio-shutdown in the docs.

Then, if you connect a pushbutton between GPIO3 and GND (pin 5 and 6 on the 40-pin header), you can shut down and start up your Raspberry Pi using this button.

If you use an original Pi 1 B (non-plus) revision 1.0 (without mounting holes), pin 5 will be GPIO1 instead of GPIO3 and you will need to specify gpio_pin=1 instead. The newer revision 2.0 (with 2 mounting holes in the board) and all other rpi models do have GPIO3 and work as above.

All this was tested on a Rpi Zero W, Rpi B (rev 1.0 and 2.0) and a Rpi B+.

If you have an older Raspbian version, or want to know how this works, read on below.

See more ...

 
25 comments -:- permalink -:- 10:51
Retraining the Spamassassin Bayes filter with recent messages

On my mailserver, I'm using Spamassassin with a Bayes filter to detect spam. Such a filter needs to be trained with samples of spam and ham (non-spam) messages to let it learn what spam and ham look like, but it also needs to be retrained when the spam or ham changes over time. I have some automatic training set up, but for a while now I had seen the Bayes filter being completely wrong (showing a confident ham score for something that is very clearly spam), so I decided to retrain the filter from scratch, using the spam and ham messages I collected over the last while (I don't really throw away any e-mail).

Since training with all my e-mail is not productive (more than 5,000 messages aren't really helpful AFAIU, and training with old messages is not representative of current messages), I decided to just take the last 2,000 spam and the last 2,000 ham messages and train with those. My spam is neatly collected in 2 mailboxes (Spam for obvious spam and ProbablySpam for messages that need an occasional review to find false positives), but my ham is sorted out in dozens of different mailboxes. Hence, I needed some find magic to get a list of the most recent spam and ham messages. So, I built these commands:

# find Spam ProbablySpam -type f \( -path '*/cur/*' -o -path '*/new/*' \) -printf "%T@ %p\n" \
  | sort -n | cut -d' ' -f 2 | tail -n 2000 > spam
# find . -type d \( -path ./Spam -o -path ./ProbablySpam -o -path ./Bulk -o -path ./Sent \) -prune -o \
         -type f \( -path '*/cur/*' -o -path '*/new/*' \) -printf "%T@ %p\n" \
  | sort -n | cut -d' ' -f 2 | tail -n 2000 > ham
# sa-learn --progress --spam -f spam
# sa-learn --progress --ham -f ham

After retraining with recent spam, the results were a lot better, so I'm no longer spending time every day deleting a couple dozen spam e-mails :-D


 
0 comments -:- permalink -:- 21:03
Using Xctu through an Arduino shield

XBee modules are a range of wireless modules built by Digi, and are typically used to add wireless connectivity to Arduino or other microcontroller based projects. To configure these modules and update their firmware, you can use the XCTU configuration utility. This utility uses a serial port to talk to the XBee module, so you will need some way to connect the XBee module to a serial port on your computer (using a USB "TTL" serial port; a "real" RS232 port has too high a voltage).

The easiest way is to use a dedicated board, like the SparkFun Explorer USB:

SparkFun Explorer USB

However, if you already have an Arduino and an XBee shield for it, you might want to use those to connect XCTU to your XBee module. In theory, this should be a matter of re-arranging some wires, but in practice I've run into some problems attempting this (depending on the hardware used).

In this post, I'll show a few ways to do this using an Arduino and a shield, and explain some of the problems you might run into.

See more ...

 
0 comments -:- permalink -:- 16:23
Interrupts, sleeping and race conditions on Arduino

Arduino Community Logo

My book about Arduino and XBee includes a chapter on battery power and sleeping. When I originally wrote it, it ended up over twice the number of pages originally planned for it, so I had to severely cut down the content. Among the content removed was a large section about interrupts, sleeping and race conditions. Since I am not aware of any other online sources that cover this subject as thoroughly, I decided to publish this content separately as a blogpost, which is what you're looking at now.

In this blogpost, I will first explain interrupts and race conditions using a number of examples. Then sleeping is added into the mix, which again results in some interesting race conditions. All these examples have been written for Arduino boards using the AVR architecture, but the general concepts apply equally well to other platforms.

The basics of interrupts and sleeping on AVR are not covered in detail here. If you have no experience with this, I recommend these excellent articles on interrupts and on sleeping by Nick Gammon, which cover interrupts, sleeping and other powersaving in a lot of detail.

See more ...

 
0 comments -:- permalink -:- 17:34
Showing 1 - 10 of 164 posts
Copyright by Matthijs Kooijman