Glider
"In het verleden behaalde resultaten bieden geen garanties voor de toekomst"
About this blog

These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life.

Most content on this site is licensed under the WTFPL, version 2 (details).

Questions? Praise? Blame? Feel free to contact me.

My old blog (pre-2006) is also still available.

See also my Mastodon page.

Forcing compiletime initialization of variables in C++ using constexpr

Every now and then I work on some complex C++ code (mostly stuff running on Arduino nowadays), which I try to write in a nice, concise and abstracted manner. This almost always involves classes, constructors and templates, which serve their purpose in the abstraction, but once you actually use them, the compiler should optimize all of them away as much as possible.

This usually works nicely, but there was one thing that kept bugging me. No matter how simple your constructors are, initializing using constructors always results in some code running at runtime.

In contrast, when you initialize a normal integer variable, or a struct variable using aggregate initialization, the compiler can do the initialization completely at compiletime. E.g., this code:

struct Foo {uint8_t a; bool b; uint16_t c;};
Foo x = {0x12, false, 0x3456};

Would result in four bytes (0x12, 0x00, 0x34, 0x56, assuming no padding and big-endian) in the data section of the resulting object file. This data section is loaded into memory using a simple loop, which is about as efficient as things get.

Now, if I write the above code using a constructor:

struct Foo {
    uint8_t a; bool b; uint16_t c;
    Foo(uint8_t a, bool b, uint16_t c) : a(a), b(b), c(c) {}
};
Foo x = Foo(0x12, false, 0x3456);

This will result in those four bytes being allocated in the bss section (which is zero-initialized), with the constructor code being executed at startup. The actual call to the constructor is inlined of course, but this still means there is code that loads every byte into a register, loads the address into a register, and stores the byte to memory (assuming an 8-bit architecture; other architectures will do more bytes at a time).

This doesn't matter much if it's just a few bytes, but for larger objects, or multiple small objects, having the loading code intermixed with the data like this easily requires 3 to 4 times as much code as having it loaded from the data section. I don't think CPU time will be much different (though first zeroing memory and then loading actual data is probably slower), but on embedded systems like Arduino, code size is often limited, so not having the compiler just resolve this at compiletime has always frustrated me.

Constant Initialization

Today I learned about a new feature in C++11: Constant initialization. This means that any global variables that are initialized to a constant expression will be resolved at compiletime and initialized before any (user) code (including constructors) starts to actually run.

A constant expression is essentially an expression that the compiler can guarantee can be evaluated at compiletime. They are required for e.g. array sizes and non-type template parameters. Originally, constant expressions included just simple (arithmetic) expressions, but since C++11 you can also use functions and even constructors as part of a constant expression. For this, you mark a function using the constexpr keyword, which essentially means that if all parameters to the function are compiletime constants, the result of the function will also be a compiletime constant (additionally, there are some limitations on what a constexpr function can do).
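As a quick illustration (a minimal example of my own):

// square() can be evaluated at compiletime when given a compiletime constant
constexpr int square(int x) { return x * x; }
int buffer[square(4)]; // OK: array sizes require a constant expression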

So essentially, this means that if you add constexpr to all constructors and functions involved in the initialization of a variable, the compiler will evaluate them all at compiletime.

(On a related note - I'm not sure why the compiler doesn't deduce constexpr automatically. If it can verify if it's allowed to use constexpr, why not add it? Might be too resource-intensive perhaps?)

Note that constant initialization does not mean the variable has to be declared const (i.e., immutable) - it's just that the initial value has to be a constant expression. These are really separate concepts - it's perfectly possible for a const variable to have a non-constant expression as its value. In that case, the value is set by normal constructor calls or whatnot at runtime, possibly with side effects, without allowing any further changes to the value after that.
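A minimal sketch of the difference (again my own example):

#include <ctime>

int counter = 0;                      // constant-initialized, but not const
const int seed = std::time(nullptr);  // const, but initialized at runtime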

Enforcing constant initialization?

Anyway, so much for the introduction of this post, which turned out longer than I planned :-). I learned about this feature from this great post by Andrzej Krzemieński. He also writes that it is not really possible to enforce that a variable is constant-initialized:

It is difficult to assert that the initialization of globals really took place at compile-time. You can inspect the binary, but it only gives you the guarantee for this binary and is not a guarantee for the program, in case you target for multiple platforms, or use various compilation modes (like debug and retail). The compiler may not help you with that. There is no way (no syntax) to require a verification by the compiler that a given global is const-initialized.

If you accidentally forget constexpr on one function involved, or some other requirement is not fulfilled, the compiler will happily fall back to less efficient runtime initialization instead of notifying you so you can fix this.

This smelled like a challenge, so I set out to investigate if I could figure out some way to implement this anyway. I thought of using a non-type template argument (which is required to be a constant expression by C++), but those only allow a limited set of types to be passed. I tried using __builtin_constant_p, a non-standard gcc construct, but that doesn't seem to recognize class-typed constant expressions.

Using static_assert

It seems that using the (also introduced in C++11) static_assert statement is a reasonable (though not perfect) option. The first argument to static_assert is a boolean that must be a constant expression. So, if we pass it an expression that is not a constant expression, it triggers an error. For testing, I'm using this code:

class Foo {
public:
  constexpr Foo(int x) { }
  Foo(long x) { }
};

Foo a = Foo(1);
Foo b = Foo(1L);

We define a Foo class, which has two constructors: one accepts an int and is constexpr and one accepts a long and is not constexpr. Above, this means that a will be const-initialized, while b is not.

To use static_assert, we cannot just pass a or b as the condition, since the condition must return a bool type. Using the comma operator helps here (the comma accepts two operands, evaluates both and then discards the first to return the second):

static_assert((a, true), "a not const-initialized"); // OK
static_assert((b, true), "b not const-initialized"); // OK :-(

However, this doesn't quite work, neither of these result in an error. I was actually surprised here - I would have expected them both to fail, since neither a nor b is a constant expression. In any case, this doesn't work. What we can do, is simply copy the initializer used for both into the static_assert:

static_assert((Foo(1), true), "a not const-initialized"); // OK
static_assert((Foo(1L), true), "b not const-initialized"); // Error

This works as expected: the int version is ok, the long version throws an error. It doesn't actually trigger the assertion itself (the condition is non-constant rather than false), but recent gcc versions show the line with the error, so it's good enough:

test.cpp:14:1: error: non-constant condition for static assertion
 static_assert((Foo(1L), true), "b not const-initialized"); // Error
 ^
test.cpp:14:1: error: call to non-constexpr function ‘Foo::Foo(long int)’

This isn't very pretty though - the comma operator doesn't make it very clear what we're doing here. Better is to use a simple inline function, to effectively do the same:

template <typename T>
constexpr bool ensure_const_init(T t) { return true; }

static_assert(ensure_const_init(Foo(1)), "a not const-initialized"); // OK
static_assert(ensure_const_init(Foo(1L)), "b not const-initialized"); // Error

This achieves the same result, but looks nicer (though the ensure_const_init function does not actually enforce anything by itself - it's the context in which it's used that does - but that's a matter of documentation).

Note that I'm not sure if this will actually catch all cases: I'm not entirely sure if the stuff involved with passing an expression to static_assert (optionally through the ensure_const_init function) is exactly the same stuff that's involved with initializing a variable with that expression (e.g. similar to the copy constructor issue below).

The function itself isn't perfect either - it doesn't handle (const or rvalue) references, so I believe it might not work in all cases and might need some fixing.
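For example, a const-reference variant (a hypothetical, untested sketch) would also accept non-copyable types, but note that it would then no longer exercise the copy constructor, which turns out to be a useful side effect below:

template <typename T>
constexpr bool ensure_const_init_ref(const T& t) { return true; }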

Also, having to duplicate the initializer in the assert statement is a big downside - If I now change the variable initializer, but forget to update the assert statement, all bets are off...

Using constexpr constant

As Andrzej pointed out in his post, you can mark variables with constexpr, which requires them to be constant initialized. However, this also makes the variable const, meaning it cannot be changed after initialization, which we do not want. However, we can still leverage this using a two-step initialization:

constexpr Foo c_init = Foo(1); // OK
Foo c = c_init;

constexpr Foo d_init = Foo(1L); // Error
Foo d = d_init;

This isn't very pretty either, but at least the initializer is only defined once. This does introduce an extra copy of the object. With the default (implicit) copy constructor this copy will be optimized out and constant initialization still happens as expected, so no problem there.

However, with user-defined copy constructors, things are different:

class Foo2 {
public:
  constexpr Foo2(int x) { }
  Foo2(long x) { }
  Foo2(const Foo2&) { }
};

constexpr Foo2 e_init = Foo2(1); // OK
Foo2 e = e_init; // Not constant initialized but no error!

Here, a user-defined copy constructor is present that is not declared with constexpr. This results in e being not constant-initialized, even though e_init is (this is actually slightly weird - I would expect the initialization syntax I used to also call the copy constructor when initializing e_init, but perhaps that one is optimized out by gcc in an even earlier stage).

We can use our earlier ensure_const_init function here:

constexpr Foo f_init = Foo(1);
Foo f = f_init;
static_assert(ensure_const_init(f_init), "f not const-initialized"); // OK

constexpr Foo2 g_init = Foo2(1);
Foo2 g = g_init;
static_assert(ensure_const_init(g_init), "g not const-initialized"); // Error

This code is actually a bit silly - of course f_init and g_init are const-initialized, they are declared constexpr. I initially tried this separate init variable approach before I realized I could (need to, actually) add constexpr to the init variables. However, this silly code does catch our problem with the copy constructor. This is just a side effect of the fact that the copy constructor is called when the init variables are passed to the ensure_const_init function.

Using two variables

One variant of the above would be to simply define two objects: the one you want, and an identical constexpr version:

Foo h = Foo(1);
constexpr Foo h_const = Foo(1);

It should be reasonable to assume that if h_const can be const-initialized, and h uses the same constructor and arguments, h will be const-initialized as well (though again, there is no real guarantee).

This assumes that the h_const object, being unused, will be optimized away. Since it is constexpr, we can also be sure that there are no constructor side effects that will linger, so at worst this wastes a bit of memory if the compiler does not optimize it.

Again, this requires duplication of the constructor arguments, which can be error-prone.

Summary

There's two significant problems left:

  1. None of these approaches actually guarantee that const-initialization happens. It seems they catch the most common problem - having a non-constexpr function or constructor involved - but inside the C++ minefield that is (copy) constructors, implicit conversions, half a dozen initialization methods, etc., I'm pretty confident that there are other caveats we're missing here.

  2. None of these approaches are very pretty. Ideally, you'd just write something like:

    constinit Foo f = Foo(1);
    

    or, slightly worse:

    Foo f = constinit(Foo(1));
    

Implementing the second syntax seems to be impossible using a function - function parameters cannot be used in a constant expression (since they could be passed a non-constant value). You can't mark parameters as constexpr either.

I considered using a preprocessor macro to implement this. A macro can easily take care of duplicating the initialization value (and since we're enforcing constant initialization, there are no side effects to worry about). It's tricky, though, since you can't just put a static_assert statement, or an additional constexpr variable declaration, inside a variable initialization. I considered using a C++11 lambda expression for that, but those can only contain a single return statement and nothing else (unless they return void) and cannot be declared constexpr...

Perhaps a macro that completely generates the variable declaration and initialization could work, but still, a single macro that generates multiple statements is messy (and the usual do {...} while(0) approach doesn't work in global scope). It's also not very nice...
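For what it's worth, here is a rough sketch of what such a macro could look like (hypothetical and untested against all the caveats above; it reuses the ensure_const_init function from before and expands to multiple declarations, so it only works at global scope):

#define CONST_INIT(type, name, ...) \
  constexpr type name##_init = __VA_ARGS__; \
  type name = name##_init; \
  static_assert(ensure_const_init(name##_init), #name " not const-initialized")

CONST_INIT(Foo, i, Foo(1));     // OK
// CONST_INIT(Foo, j, Foo(1L)); // would fail to compile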

Any other suggestions?

Update 2020-11-06: It seems that C++20 has introduced a new keyword, constinit, to do exactly this: require that a variable is constant-initialized, without also making it const like constexpr does. See https://en.cppreference.com/w/cpp/language/constinit
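A quick sketch of what that looks like (C++20, reusing the Foo class from above):

constinit Foo m = Foo(1);     // OK: constant-initialized, but still mutable
// constinit Foo n = Foo(1L); // error: no constant initialization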

 
0 comments -:- permalink -:- 21:25
Bouncing packets: Kernel bridge bug or corner case?

While setting up Tika, I stumbled upon a fairly unlikely corner case in the Linux kernel networking code, that prevented some of my packets from being delivered at the right place. After quite some digging through debug logs and kernel source code, I found the cause of this problem in the way the bridge module handles netfilter and iptables.

Just in case someone else finds himself in this situation and actually manages to find this blogpost, I'll detail my setup, the problem and its solution here.

See more ...

 
0 comments -:- permalink -:- 18:40
Debian Squeeze on an emulated MIPS machine

In my work as a Debian Maintainer for the OpenTTD and related packages, I occasionally come across platform-specific problems. That is, compiling and running OpenTTD works fine on my own x86 and amd64 systems, but when I upload my packages to Debian, it turns out there is some problem that only occurs on more obscure platforms like MIPS, S390 or GNU Hurd.

This morning, I saw that my new grfcodec package is not working on a bunch of architectures (it seems all of the failing architectures are big endian). To find out what's wrong, I'll need to have a machine running one of those architectures so I can debug.

In the past, I've requested access to Debian's "porter" machines, which are intended for these kinds of things. But that's always a hassle, which requires other people's time to set up, so I'm using QEMU to set up a virtual machine running the MIPS architecture now.

What follows is essentially an update of this excellent tutorial about running Debian Etch on QEMU/MIPS(EL) by Aurélien Jarno. It's probably best to read that tutorial as well; I'll only give the short version here, updated for Squeeze. I've also looked at this tutorial on running Squeeze on QEMU/PowerPC by Uwe Hermann.

Finally, note that Aurélien also has pre-built images available for download, for a whole bunch of platforms, including Squeeze on MIPS. I only noticed this after writing this tutorial - it might have saved me a bunch of work ;-p

Preparations

You'll need qemu. The version in Debian Squeeze is sufficient, so just install the qemu package:

$ aptitude install qemu

You'll need a virtual disk to install Debian Squeeze on:

$ qemu-img create -f qcow2 debian_mips.qcow2 2G

You'll need a debian-installer kernel and initrd to boot from:

$ wget http://ftp.de.debian.org/debian/dists/squeeze/main/installer-mips/current/images/malta/netboot/initrd.gz
$ wget http://ftp.de.debian.org/debian/dists/squeeze/main/installer-mips/current/images/malta/netboot/vmlinux-2.6.32-5-4kc-malta

Note that in Aurélien's tutorial, he used a "qemu" flavoured installer. It seems this is no longer available in Squeeze, just a few others (malta, r4k-ip22, r5k-ip32, sb1-bcm91250a). I just picked the first one and apparently it works on QEMU.

Also, note that Uwe's PowerPC tutorial suggests downloading an ISO CD image and booting from that. I tried that, but QEMU has no BIOS available for MIPS, so this approach didn't work. Instead, you should tell QEMU about the kernel and initrd and let it load them directly.

Booting the installer

You just run QEMU, pointing it at the installer kernel and initrd and passing some extra kernel options to keep it in text mode:

$ qemu-system-mips -hda debian_mips.qcow2 -kernel vmlinux-2.6.32-5-4kc-malta -initrd initrd.gz -append "root=/dev/ram console=ttyS0" -nographic

Now, you get a Debian installer, which you should complete normally.

As Aurélien also noted, you can ignore the error about a missing boot loader, since QEMU will be directly loading the kernel anyway.

After installation is completed and the virtual system is rebooting, terminate QEMU:

$ killall qemu-system-mips

(I haven't found another way of terminating a -nographic QEMU...)

Booting the system

Booting the system is very similar to booting the installer, but we leave out the initrd and point the kernel to the real root filesystem instead.

Note that this boots using the installer kernel. If you later upgrade the kernel inside the system, you'll need to copy the kernel out from /boot in the virtual system into the host system and use that to boot. QEMU will not look inside the virtual disk for a kernel to boot automagically.

$ qemu-system-mips -hda debian_mips.qcow2 -kernel vmlinux-2.6.32-5-4kc-malta -append "root=/dev/sda1 console=ttyS0" -nographic
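To copy an upgraded kernel out of the guest later (see the note above), one option is scp from inside the guest; with QEMU's default user-mode networking, the host is reachable from the guest as 10.0.2.2 (a hypothetical example, assuming the guest has openssh-client installed and you have ssh access to the host):

guest$ scp /boot/vmlinux-<new-version> user@10.0.2.2: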

More features

Be sure to check Aurélien's tutorial for some more features, options and details.

 
0 comments -:- permalink -:- 12:18
dconf-editor is the new gconf-editor

As I previously mentioned, Gnome3 is migrating away from the gconf settings storage to the new GSettings settings API (along with the default dconf settings storage backend).

So where you previously used the gconf-editor program to browse and edit Gnome settings, you can now use dconf-editor to browse and edit settings.

I do wonder if the name actually implies that dconf-editor is editing the dconf storage directly, instead of using the fancy new GSettings API? :-S

 
0 comments -:- permalink -:- 17:07
CrashPlan: Cheap cloud backup that runs on Linux

For some time, I've been looking for a decent backup solution. Such a solution should:

  • be completely unattended,
  • do off-site backups (and possibly onsite as well)
  • be affordable (say, €5 per month max)
  • run on Linux (both desktops and headless servers)
  • offer plenty of space (couple of hundred gigabytes)

Up until now I hadn't found anything that met my demands. Most backup solutions don't run on (headless) Linux and most generic cloud storage providers are way too expensive (because they offer high-availability, high-performance storage, which I don't really need).

Backblaze seemed interesting when they launched a few years ago. They just took enormous piles of COTS hard disks and crammed a couple dozen of them in a custom designed case, to get a lot of cheap storage. They offered an unlimited backup plan, for only a few euros per month. Ideal, but it only works with their own backup client (no normal FTP/DAV/whatever supported), which (still) does not run on Linux.

Crashplan

Recently, I had another look around and found CrashPlan, which offers an unlimited backup plan for only $5 per month (note that they advertise $3 per month, but that is only when you pay four years of subscription in advance, which is a bit much. Given that you will still get a refund of any remaining months if you cancel early, paying up front might still be a good idea, though). They also offer a family pack, which allows you to run CrashPlan on up to 10 computers for just over twice the price of a single license. I'll probably get one of these, to backup my laptop, Brenda's laptop and my colocated server.

The best part is that the CrashPlan software runs on Linux, and even on a headless Linux server (which is not officially supported, but CrashPlan does document the setup needed). The headless setup is possible because CrashPlan runs a daemon (as root) that takes care of all the actual work, while the GUI connects to the daemon through a TCP port. I still need to double-check what this means for security though (especially on a multi-user system, I don't want every user with localhost TCP access to be able to administer my backups), but it seems that CrashPlan can be configured to require the account password when the GUI connects to the daemon.

The CrashPlan software itself is free and allows you to do local backups and backups to other computers running CrashPlan (either running under your own account, or computers of friends running on separate accounts). Another cool feature is that it keeps multiple snapshots of each file in the backup, so you can even get back a previous version of a file you messed up. This part is entirely configurable, but by default it keeps up to one snapshot every 15 minutes for recent changes, and reduces that to one snapshot for every month for snapshots over a year old.

When you pay for a subscription, the software transforms into CrashPlan+ (no reinstall required) and you get extra features such as multiple backup sets, automatic software upgrades and most notably, access to the CrashPlan Central cloud storage.

I've been running the CrashPlan software for a few days now (it comes with a 30-day free trial of the unlimited subscription) and so far, I'm quite content with it. It's been backing up my homedir to a local USB disk and into the cloud automatically, without me needing to check up on it all the time.

CrashPlan runs on Java, which doesn't usually make me particularly enthusiastic. However, the software seems to run fast and reliably so far, so I'm not complaining. Regarding the software itself, it does seem that it's not intended for micromanaging. For example, when my external USB disk is not mounted, the interface shows "Destination unavailable". When I then power on and mount the external disk, it takes some time for CrashPlan to find out about this and in the meanwhile, there's no button in the interface to convince CrashPlan to recheck the disk. Also, I can add a list of filenames/path patterns to ignore, but there's not really any way to test these regexes.

Having said that, the software seems to do its job nicely if you just let it run in the background. One piece of micromanagement which I do like is that you can manually pause and resume the backups. If you pause the backups, they'll be automatically resumed after 24 hours, which is useful if the backups are somehow bothering you, without the risk that you forget to turn the backups back on.

Backing up only when docked

Of course, sending away backups is nice when I am at home and have 50Mbit fiber available, but when I'm on the road, running on some wifi or even 3G connection, I really don't want to load my connection with the sending of backup data.

Of course I can manually pause the backups, but I don't want to be doing that every time when I pick up my laptop and get moving. Since I'm using a docking station, it makes sense to simply pause backups whenever I undock and resume them when I dock again.

The obvious way to implement this would be to simply stop the CrashPlan daemon when undocking, but when I do that, the CrashPlanDesktop GUI becomes unresponsive (and does not recover when the daemon is started again).

So, I had a look at the "admin console", which offers "command line" commands, such as pause and resume. However, this command line seems to be available only inside the GUI, which is a bit hard to script (also note that not all of the commands seem to work for me: sleep and help seem to be unknown commands, which cause the console to close without an error message, just like when I type something random).

It seems that these console commands are really just sent verbatim to the CrashPlan daemon. Googling around a bit more, I found a small script for CrashPlan PRO (the business version of their software), which allows sending commands to the daemon through a shell script. I made some modifications to this script to make it useful for me:

  • don't depend on the current working dir, hardcode /usr/local/crashplan in the script instead
  • fixed a bashism (== vs =)
  • removed -XstartOnFirstThread argument from java (MacOS only?)
  • don't store the commands to send in a separate $command variable, but instead pass "$@" to java directly. The latter prevents bash from splitting arguments with spaces in them into multiple arguments, which caused the command "pause 9999" to be interpreted as two commands instead of one command with an argument.

I have this script under /usr/local/bin/CrashPlanCommand:

#!/bin/sh
BASE_DIR=/usr/local/crashplan

if [ "x$@" == "x" ] ; then
  echo "Usage: $0 <command> [<command>...]"
  exit
fi

hostPort=localhost:4243
echo "Connecting to $hostPort"

echo "Executing $@"

CP=.
for f in $BASE_DIR/lib/*.jar; do
    CP=${CP}:$f
done

java -classpath $CP com.backup42.service.ui.client.ConsoleApp $hostPort "$@"

Now I can run CrashPlanCommand 'pause 9999' and CrashPlanCommand resume to pause and resume the backups (9999 is the number of minutes to pause, which is about a week, since I might be undocked more than 24 hours, which is the default pause time).

To make this run automatically on undock, I created a simple udev rules file as /etc/udev/rules.d/10-local-crashplan-dock.rules:

ACTION=="change", ATTR{docked}=="0", ATTR{type}=="dock_station", RUN+="/usr/local/bin/CrashPlanCommand 'pause 9999'"
ACTION=="change", ATTR{docked}=="1", ATTR{type}=="dock_station", RUN+="/usr/local/bin/CrashPlanCommand resume"

And voilà! Automatic pausing and resuming when undocking/docking my laptop!

 
1 comment -:- permalink -:- 17:05
Changing the gdm3 (login screen) background in Gnome3

I upgraded to Gnome3 this week, and after half a day of debugging I got my (quite non-standard) setup working completely again. One of the things that got broken was my custom wallpaper on the gdm3 login screen. This used to be configured in /etc/gdm3/greeter.gconf.defaults, but apparently Gnome3 replaced gconf by this new "gsettings" thingy.

Anyway, to change the desktop background in gdm, add the following lines to /etc/gdm3/greeter.gsettings:

[org.gnome.desktop.background]
picture-uri='file:///etc/gdm3/thinkpad.jpg'

For reference, I also found some other method, which looks a lot more complicated. I suspect it also doesn't work in Debian, which runs gdm as root, not as a separate "gdm" user. Systems that do use such a user might need the more complicated method, I guess (which probably ends up storing the settings somewhere in the homedir of the gdm user...).

 
0 comments -:- permalink -:- 12:19
Debian Squeeze, Gnome, Pulseaudio and volume hotkeys

I've been configuring my new laptop (more on that later) and this time I've tried to get the volume hotkeys working properly with Pulseaudio. On a default Debian Squeeze installation, the volume hotkeys are processed by (the media-keys plugin of) gnome-settings-daemon (1). The good news is that Gnome has switched over to using pulseaudio by default (and even removed support for plain ALSA). However, Debian does not want to force users to use pulseaudio. So the bad news is that Debian has disabled this pulseaudio support in gnome-settings-daemon and has a patch to use the ALSA mixer (via GStreamer).

Normally, it shouldn't matter much which mixer you use, as long as it works. However, I'm using two different sound cards on my laptop: the builtin one for when I'm on the road, and an external USB sound card when I'm at home (to get a S/PDIF output). So I need Pulseaudio to route my audio to the right place, and I want my volume controls to control the same card as well. Note that gnome-volume-control, the GUI to control your volumes, is installed in two flavours by Debian (Pulseaudio and GStreamer), and the right one is started by a wrapper script depending on whether Pulse is running.

Fortunately, the Debian patch is somewhat configurable: you can select a different mixer device through gconf. To get at that configuration, use gconf-editor and browse to /desktop/gnome/sound/default_mixer_device. Set this value in the form of "element:device", where element selects the gstreamer plugin to use, and device sets its "device" property. I initially tried using the "pulsemixer" element (in the form "pulsemixer:alsa_output.usb-0ccd_USB_Audio-00-Aureon51MkII.analog-stereo"), but that only allowed me to specify a specific Pulseaudio sink, not "whatever is the default".

So, instead, I settled for using the "alsamixer" gstreamer plugin, together with the Pulseaudio ALSA plugin (the same one you use to redirect ALSA applications to Pulseaudio). For this to work, it's important that you redirect ALSA applications to pulse using the following in your /etc/asound.conf or your ~/.asoundrc:

pcm.!default.type pulse
ctl.!default.type pulse

This makes sure that not just audio streams (pcm) but also mixer controls (ctl) are redirected to Pulseaudio. Now, set the /desktop/gnome/sound/default_mixer_device gconf value to the following:

alsamixer:default

This should make sure that your volume keys work with the device selected as default in Pulseaudio (through pavucontrol or gnome-volume-control, for example). It seems this behaviour relies on the fact that gnome-settings-daemon only keeps the mixer controls open for a few seconds, allowing the Pulseaudio ALSA plugin to select the right Pulseaudio sink to control every time the mixer is reopened (so it needs a few seconds of not pressing the volume hotkeys after changing the default device).
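For reference, you should also be able to set this key from a shell using gconftool-2 instead of gconf-editor (untested sketch):

$ gconftool-2 --type string --set /desktop/gnome/sound/default_mixer_device "alsamixer:default"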

By the way, it seems that in the next version of Gnome (and/or Debian) this problem will probably be fixed out of the box, since the 2.93 packages in Debian experimental have Pulseaudio support enabled (I haven't tested them, though).

Hopefully this helps someone else out there struggling with the same problem...

(1): You might have noticed that I'm talking about Gnome here. In case you wondered, I've actually started to use parts of Gnome for daily use on my laptop. I'm still using Awesome as my primary window manager and I'm not using gnome-panel, so I haven't become a GUI addict all of a sudden ;-)

 
3 comments -:- permalink -:- 10:42
Getting Screen and X (and dbus and ssh-agent and ...) to play well

When you use Screen together with Xorg, you'll recognize this: You log in to an X session, start screen and use the terminals within screen to start programs every now and then. Everything works fine so far. Then, you logout and log in again (or X crashes, or whatever). You happily re-attach the still running screen, which allows you to continue whatever you were doing.

But now, whenever you want to start a GUI program, things get wonky. You'll get errors about not being able to find configuration data, connect to gconf or DBUS, or your programs will not start at all, with the ever-informative error message "No protocol specified". You'll also notice that your ssh-agent and gpg-agent stop working within the screen session...

What is happening here is that all those programs are using "environment variables" to communicate. In particular, when you log in, various daemons get started (like the DBUS daemon and your ssh-agent). To allow other programs to connect to these daemons, they put their contact info in an environment variable in the login process. Whenever a process starts another process, these environment variables get transferred from the parent process to the child process. Since these environment variables are set in the X session startup process, which starts everything else, all programs should have access to them.

However, you'll notice that, after logging in a second time, the screen you re-attach to was not started by the current X session. So that means its environment variables still point to the old (no longer running) daemons from the previous X session. This includes any shells already running in the screen as well as new shells started within the screen (since the latter inherit the environment variables from the screen process itself).

To fix this, we would like to somehow update the environment of all processes that are already running when we log in, with the addresses of the new daemons. Unfortunately, we can't change the environment of other processes (unless we resort to scary stuff like using gdb or poking around in /dev/mem...). So, we'll have to convince those shells to actually update their own environments.

So, this solution has two parts: First, after login, saving the relevant variables from the environment into a file. Then, we'll need to get our shell to load those variables.

The first part is fairly easy: Just run a script after login that writes out the values to a file. I have a script called ~/bin/save-env to do exactly that. It looks like this (full version here):

#!/bin/sh

# Save a bunch of environment variables. This script should be run just
# after login. The saved variables can then be sourced by every bash
# shell, so long running shells (e.g., in screen) or incoming SSH shells
# can also use these services.

# Save the DBUS sessions address on each login
if [ -n "$DBUS_SESSION_BUS_ADDRESS" ]; then
echo export DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" > ~/.env.d/dbus
fi

if [ -n "$SSH_AUTH_SOCK" ]; then
echo export SSH_AGENT_PID="$SSH_AGENT_PID" > ~/.env.d/ssh
echo export SSH_AUTH_SOCK="$SSH_AUTH_SOCK" >> ~/.env.d/ssh
fi

# Save other variables here

This script fills the directory ~/.env.d with files containing environment variables, separated by application. I could probably have thrown them all into a single file, but it seemed like a good idea to separate them. Anyway, these files are created in such a way that they can be sourced by a running shell to pick up the new values.

If you download and install this script, don't forget to make it executable and create the ~/.env.d directory. You'll need to make sure it gets run as late as possible after login. I'm running a (stripped down) Gnome session, so I used gnome-session-properties to add it to my list of startup applications. You might call this script from your .xsession, KDE's startup program list, or whatever.

For the second part, we need to set our saved variables in all of our shells. This sounds easy, just run for f in ~/.env.d/*; do source "$f"; done in every shell (Don't be tempted to do source ~/.env.d/*, since that sources just the first file with the other files as arguments!). But, of course we don't want to do this manually, but let every shell do it automatically.

For this, we'll use a tool completely unintended, but suitable enough for this job: $PROMPT_COMMAND. Whenever Bash is about to display a prompt, it evals whatever is in the variable $PROMPT_COMMAND. So it ends up evaluating that command all the time, which makes it a perfect place to load the saved variables. By setting the $PROMPT_COMMAND variable in your ~/.bashrc file, it will become enabled in every shell you start (except for login shells, so you might want to source ~/.bashrc from your ~/.bash_profile):

# Source some variables at every prompt. This is to make stuff like
# ssh agent, dbus, etc. working in long-running shells (e.g., inside
# screen).
PROMPT_COMMAND='for f in ~/.env.d/*; do source "$f"; done'

You might need to be careful where to place this line, in case PROMPT_COMMAND already has some other value, like is default on Debian for example. Here's my full .bashrc file, note the += and starting ; in the second assignment of $PROMPT_COMMAND.
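Concretely, that second assignment looks something like this (a sketch based on my .bashrc; += appends rather than overwrites, and the leading semicolon separates it from whatever command was already there):

PROMPT_COMMAND+='; for f in ~/.env.d/*; do source "$f"; done'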

The astute reader will have noticed that this will only work for existing shells when a prompt is displayed, meaning you might need to just press enter at an existing prompt (to force a new one) after logging in the second time to get the values loaded. But that's a small enough burden, right?

So, with these two components, you'll be able to optimally use your long-running screen sessions, even when your X sessions are not so stable ;-)

Additionally, this stuff also allows you to use your faithful daemons when you SSH into the machine. I use this so I can start GUI programs from another machine (in particular, to open up attachments from my email client which runs on a server somewhere). See my recent blogpost about setting that up. However, since running a command through SSH non-interactively never shows a prompt and thus never evaluates $PROMPT_COMMAND, you'll need to manually source the variables at once in your .bashrc directly. I do this at the top of my ~/.bashrc.

Man, I need to learn how to write shorter posts...

 
0 comments -:- permalink -:- 15:26
Opening attachments on another machine from within mutt

For a fair number of years now, I've been using Mutt as my primary email client. It's a very nice text-based email client that is permanently running on my server (named drsnuggles). This allows me to connect to my server from anywhere, connect to the running Screen and always get exactly the same, highly customized, mail interface (some people will say that their webmail interface allows for exactly the same, but in my experience webmail is always clumsy and slow compared to a decent, well-customized text-based client when processing a couple of hundred emails per day).

Attachment troubles

So I like my mutt / screen setup. However, there has been one particular thing that didn't work quite as efficiently: attachments. Whenever I wanted to open an email attachment, I needed to save the attachment within mutt to some place that was shared through HTTP, make the file world-readable (mutt insists on not making your saved attachments world-readable), browse to some URL on the local machine and open the attachment. Not exactly efficient.

Yesterday evening I was finally fed up with all this stuff and decided to hack up a fix. It took a bit of fiddling to get it right (and I had nearly started to spend the day coding a patch for mutt when the folks in #mutt pointed out an easier, albeit less elegant "solution"), but it works now: I can select an attachment in mutt, press "x" and it gets opened on my laptop. Coolness.

How does it work?

Just in case anyone else is interested in this solution, I'll document how it works. The big picture is as follows: When I press "x", a mutt macro is invoked that copies the attachment to my laptop and opens the attachment there. There's a bunch of different steps involved here, which I'll detail below.

See more ...

 
0 comments -:- permalink -:- 22:38
Adobe dropped 64 bit Linux support in Flash again

Only recently, Adobe (finally) started to support 64 bit Linux with its Flash plugin. I could finally watch YouTube movies (and more importantly, do some Flash development work for Brevidius).

However, this month Adobe announced that it is dropping support for 64 bit Linux again. Apparently they "are making significant architectural changes to the 64-bit Linux Flash Player and additional security enhancements", and they can't do that while keeping the old architecture around for stable releases.

This is particularly nasty, because the latest 10.0 version (which still has amd64 support) has a couple dozen (!) security vulnerabilities which are fixed only in the 10.1 version (which no longer has Linux amd64 support).

So Adobe is effectively encouraging people on amd64 Linux to either not use their product, or use a version with critical security flaws. Right.

 
0 comments -:- permalink -:- 09:51
Copyright by Matthijs Kooijman - most content WTFPL