These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life.
Most content on this site is licensed under the WTFPL, version 2 (details).
Questions? Praise? Blame? Feel free to contact me.
My old blog (pre-2006) is also still available.
As I previously mentioned, Gnome3 is migrating away from the gconf settings storage to the GSettings settings API (along with the default dconf settings storage backend).
So where you previously used the gconf-editor program to browse and edit Gnome settings, you can now use dconf-editor to browse and edit settings.
I do wonder if the name actually implies that dconf-editor is editing the dconf storage directly, instead of using the fancy new GSettings API? :-S
For some time, I've been looking for a decent backup solution. Such a solution should:
Up until now I haven't found anything that met my demands. Most backup solutions don't run on (headless) Linux and most generic cloud storage providers are way too expensive (because they offer high-availability, high-performance storage, which I don't really need).
Backblaze seemed interesting when they launched a few years ago. They just took enormous piles of COTS hard disks and crammed a couple dozen of them in a custom designed case, to get a lot of cheap storage. They offered an unlimited backup plan, for only a few euros per month. Ideal, but it only works with their own backup client (no normal FTP/DAV/whatever supported), which (still) does not run on Linux.
Recently, I had another look around and found CrashPlan, which offers an unlimited backup plan for only $5 per month. (They advertise $3 per month, but that is only when you pay in advance for four years of subscription, which is a bit much. Given that you still get a refund of any remaining months if you cancel early, paying up front might still be a good idea, though.) They also offer a family pack, which allows you to run CrashPlan on up to 10 computers for just over twice the price of a single license. I'll probably get one of these, to back up my laptop, Brenda's laptop and my colocated server.
The best part is that the CrashPlan software runs on Linux, and even on a headless Linux server (which is not officially supported, but CrashPlan does document the setup needed). The headless setup is possible because CrashPlan runs a daemon (as root) that takes care of all the actual work, while the GUI connects to the daemon through a TCP port. I still need to double-check what this means for security, though (especially on a multi-user system, I don't want every user with localhost TCP access to be able to administer my backups), but it seems that CrashPlan can be configured to require the account password when the GUI connects to the daemon.
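For the headless case, the usual trick is to forward the daemon's TCP port over SSH and run the GUI locally. A rough sketch, assuming the daemon listens on port 4243 (the same port the script further down talks to); depending on the CrashPlan version you may also need to point the GUI at the forwarded port in its configuration:

# Forward the CrashPlan service port from the headless server to this
# machine, then start CrashPlanDesktop locally. "myserver" is a
# placeholder hostname.
ssh -L 4243:localhost:4243 myserver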
The CrashPlan software itself is free and allows you to do local backups and backups to other computers running CrashPlan (either running under your own account, or computers of friends running on separate accounts). Another cool feature is that it keeps multiple snapshots of each file in the backup, so you can even get back a previous version of a file you messed up. This part is entirely configurable, but by default it keeps up to one snapshot every 15 minutes for recent changes, and reduces that to one snapshot for every month for snapshots over a year old.
When you pay for a subscription, the software transforms into CrashPlan+ (no reinstall required) and you get extra features such as multiple backup sets, automatic software upgrades and most notably, access to the CrashPlan Central cloud storage.
I've been running the CrashPlan software for a few days now (it comes with a 30-day free trial of the unlimited subscription) and so far, I'm quite content with it. It's been backing up my homedir to a local USB disk and into the cloud automatically; I don't need to check up on it all the time.
CrashPlan runs on Java, which doesn't usually make me particularly enthusiastic. However, the software seems to run fast and reliably so far, so I'm not complaining. Regarding the software itself, it does seem that it's not intended for micromanaging. For example, when my external USB disk is not mounted, the interface shows "Destination unavailable". When I then power on and mount the external disk, it takes some time for CrashPlan to find out about this, and in the meanwhile there's no button in the interface to convince CrashPlan to recheck the disk. Also, I can add a list of filenames/path patterns to ignore, but there's not really any way to test these regexes.
Having said that, the software seems to do its job nicely if you just let it run in the background. One piece of micromanagement which I do like is that you can manually pause and resume the backups. If you pause the backups, they'll automatically resume after 24 hours, which is useful if the backups are somehow bothering you, without the risk that you forget to turn them back on.
Of course, sending away backups is nice when I am at home and have 50Mbit fiber available, but when I'm on the road, running on some wifi or even 3G connection, I really don't want to load my connection with the sending of backup data.
Of course I can manually pause the backups, but I don't want to be doing that every time I pick up my laptop and get moving. Since I'm using a docking station, it makes sense to simply pause backups whenever I undock and resume them when I dock again.
The obvious way to implement this would be to simply stop the CrashPlan daemon when undocking, but when I do that, the CrashPlanDesktop GUI becomes unresponsive (and does not recover when the daemon is started again).
So, I had a look at the "admin console", which offers "command line" commands, such as pause and resume. However, this command line seems to be available only inside the GUI, which is a bit hard to script (also note that not all of the commands seem to work for me: sleep and help seem to be unknown commands, which cause the console to close without an error message, just like when I type something random).
It seems that these console commands are really just sent verbatim to the CrashPlan daemon. Googling around a bit more, I found a small script for CrashPlan PRO (the business version of their software), which allows sending commands to the daemon through a shell script. I made some modifications to this script to make it useful for me:
- Use the Linux install path, /usr/local/crashplan, in the script instead.
- Fix a comparison operator (== vs =).
- Remove the -XstartOnFirstThread argument from java (MacOS only?).
- Don't build up a $command string, but instead pass "$@" to java directly. The latter prevents bash from splitting arguments with spaces in them into multiple arguments, which causes the command "pause 9999" to be interpreted as two commands instead of one command with an argument.

I have this script under /usr/local/bin/CrashPlanCommand:
#!/bin/sh

BASE_DIR=/usr/local/crashplan

# Require at least one command argument (plain sh, so use = rather than ==)
if [ "x$*" = "x" ] ; then
    echo "Usage: $0 <command> [<command>...]"
    exit
fi

hostPort=localhost:4243
echo "Connecting to $hostPort"
echo "Executing $@"

# Build a classpath containing all of CrashPlan's jars
CP=.
for f in `ls $BASE_DIR/lib/*.jar`; do
    CP=${CP}:$f
done

java -classpath $CP com.backup42.service.ui.client.ConsoleApp $hostPort "$@"
Now I can run CrashPlanCommand 'pause 9999' and CrashPlanCommand resume to pause and resume the backups (9999 is the number of minutes to pause, which is about a week, since I might be undocked for more than 24 hours, which is the default pause time).
To make this run automatically on undock, I created a simple udev rules file as /etc/udev/rules.d/10-local-crashplan-dock.rules:
ACTION=="change", ATTR{docked}=="0", ATTR{type}=="dock_station", RUN+="/usr/local/bin/CrashPlanCommand 'pause 9999'"
ACTION=="change", ATTR{docked}=="1", ATTR{type}=="dock_station", RUN+="/usr/local/bin/CrashPlanCommand resume"
And voilà! Automatic pausing and resuming on undocking/docking of my laptop!
I upgraded to Gnome3 this week, and after half a day of debugging I got
my (quite non-standard) setup working completely again. One of the
things that got broken was my custom wallpaper on the gdm3 login screen.
This used to be configured in /etc/gdm3/greeter.gconf.defaults, but apparently Gnome3 replaced gconf by this new "gsettings" thingy.
Anyway, to change the desktop background in gdm, add the following lines to /etc/gdm3/greeter.gsettings:
[org.gnome.desktop.background]
picture-uri='file:///etc/gdm3/thinkpad.jpg'
For reference, I also found some other method, which looks a lot more complicated. I suspect it also doesn't work in Debian, which runs gdm as root, not as a separate "gdm" user. Systems that do use such a user might need the more complicated method, I guess (which probably ends up storing the settings somewhere in the homedir of the gdm user...).
I've been configuring my new laptop (more on that later) and this time I've tried to get the volume hotkeys working properly with Pulseaudio. On a default Debian Squeeze installation, the volume hotkeys are processed by (the media-keys plugin of) gnome-settings-daemon (1). The good news is that Gnome has switched over to using pulseaudio by default (and even removed support for plain ALSA). However, Debian does not want to force users to use pulseaudio. So the bad news is that Debian has disabled this pulseaudio support in gnome-settings-daemon and has a patch to use the ALSA mixer (via GStreamer).
Normally, it shouldn't matter much which mixer you use, as long as it works. However, I'm using two different sound cards on my laptop: the builtin one for on the road and an external USB sound card when I'm at home (to get a S/PDIF output). So I need pulseaudio to route my audio to the right place, and I want my volume controls to control the same card as well. Note that gnome-volume-control, the GUI to control your volume, is installed in two flavours by Debian (Pulseaudio and GStreamer), and the right one is started by a wrapper script depending on whether Pulse is running.
Fortunately, the Debian patch is somewhat configurable: you can select a different mixer device through gconf. To get at that configuration, use gconf-editor and browse to /desktop/gnome/sound/default_mixer_device. Set this value in the form of "element:device", where element selects the gstreamer plugin to use, and device sets its "device" property. I initially tried using the "pulsemixer" element (in the form "pulsemixer:alsa_output.usb-0ccd_USB_Audio-00-Aureon51MkII.analog-stereo"), but that only allowed me to specify a specific Pulseaudio sink, not "whatever is the default".
So, instead, I settled for using the "alsamixer" gstreamer plugin, together with the Pulseaudio ALSA plugin (the same one you use to redirect ALSA applications to Pulseaudio). For this to work, it's important that you redirect ALSA applications to pulse using the following in your /etc/asound.conf or your ~/.asoundrc:
pcm.!default.type pulse
ctl.!default.type pulse
This makes sure that not just audio streams (pcm) but also mixer controls (ctl) are redirected to Pulseaudio. Now, set the /desktop/gnome/sound/default_mixer_device gconf value to the following:
alsamixer:default
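If you'd rather not click through gconf-editor, the same key should also be settable from a shell with gconftool-2 (just a sketch of the equivalent command):

gconftool-2 --type string --set /desktop/gnome/sound/default_mixer_device "alsamixer:default"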
This should make sure that your volume keys work with the device selected as default in Pulseaudio (through pavucontrol or gnome-volume-control, for example). It seems this behaviour relies on the fact that gnome-settings-daemon only keeps the mixer controls open for a few seconds, allowing the Pulseaudio ALSA plugin to select the right pulseaudio sink to control every time the mixer is reopened (so it needs a few seconds of not pressing the volume hotkeys after changing the default device).
By the way, it seems that in the next version of Gnome (and/or Debian) this problem will probably be fixed out of the box, since the 2.93 packages in Debian experimental have Pulseaudio support enabled (I haven't tested them, though).
Hopefully this helps someone else out there struggling with the same problem...
(1): You might have noticed that I'm talking about Gnome here. In case you wondered, I've actually started to use parts of Gnome for daily use on my laptop. I'm still using Awesome as my primary window manager and I'm not using gnome-panel, so I haven't suddenly become a GUI addict ;-)
When you use Screen together with Xorg, you'll recognize this: You log in to an X session, start screen and use the terminals within screen to start programs every now and then. Everything works fine so far. Then, you logout and log in again (or X crashes, or whatever). You happily re-attach the still running screen, which allows you to continue whatever you were doing.
But now, whenever you want to start a GUI program, things get wonky. You'll get errors about not being able to find configuration data, connect to gconf or DBUS, or your programs will not start at all, with the ever-informative error message "No protocol specified". You'll also notice that your ssh-agent and gpg-agent stop working within the screen session...
What is happening here is that all those programs are using "environment variables" to communicate. In particular, when you log in, various daemons get started (like the DBUS daemon and your ssh-agent). To allow other programs to connect to these daemons, they put their contact info in an environment variable in the login process. Whenever a process starts another process, these environment variables get transferred from the parent process to the child process. Since these environment variables are set in the X session startup process, which starts everything else, all programs should have access to them.
However, you'll notice that, after logging in a second time, the screen you re-attach to was not started by the current X session. That means its environment variables still point to the old (no longer running) daemons from the previous X session. This includes any shells already running in the screen as well as new shells started within the screen (since the latter inherit the environment variables from the screen process itself).
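You can see this for yourself by looking at the environment a long-running shell inherited; after a re-login, the values in an old shell still point at the previous session's daemons. A small illustration (inspecting the current shell via /proc):

# Show the inherited DBUS and ssh-agent addresses of the current shell
tr '\0' '\n' < /proc/$$/environ | grep -E 'DBUS_SESSION_BUS_ADDRESS|SSH_AUTH_SOCK'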
To fix this, we would like to somehow update the environment of all processes that are already running when we log in, with the addresses of the new daemons. Unfortunately, we can't change the environment of other processes (unless we resort to scary stuff like using gdb or poking around in /dev/mem...). So, we'll have to convince those shells to update their own environments.
So, this solution has two parts: First, after login, saving the relevant variables from the environment into a file. Then, we'll need to get our shell to load those variables.
The first part is fairly easy: just run a script after login that writes out the values to a file. I have a script called ~/bin/save-env to do exactly that. It looks like this (full version here):
#!/bin/sh
# Save a bunch of environment variables. This script should be run just
# after login. The saved variables can then be sourced by every bash
# shell, so long running shells (e.g., in screen) or incoming SSH shells
# can also use these services.

# Save the DBUS sessions address on each login
if [ -n "$DBUS_SESSION_BUS_ADDRESS" ]; then
    echo export DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" > ~/.env.d/dbus
fi

if [ -n "$SSH_AUTH_SOCK" ]; then
    echo export SSH_AGENT_PID="$SSH_AGENT_PID" > ~/.env.d/ssh
    echo export SSH_AUTH_SOCK="$SSH_AUTH_SOCK" >> ~/.env.d/ssh
fi

# Save other variables here
This script fills the directory ~/.env.d with files containing environment variables, separated by application. I could probably have thrown them all into a single file, but it seemed like a good idea to separate them. Anyway, these files are created in such a way that they can be sourced by a running shell to pick up the new values.
If you download and install this script, don't forget to make it executable and create the ~/.env.d directory. You'll need to make sure it gets run as late as possible after login. I'm running a (stripped down) Gnome session, so I used gnome-session-properties to add it to my list of startup applications. You might call this script from your .xsession, KDE's startup program list, or whatever.
For the second part, we need to set our saved variables in all of our shells. This sounds easy: just run for f in ~/.env.d/*; do source "$f"; done in every shell (don't be tempted to do source ~/.env.d/*, since that sources just the first file, with the other files as arguments!). But of course we don't want to do this manually; we want every shell to do it automatically.
For this, we'll use a tool completely unintended, but suitable enough for this job: $PROMPT_COMMAND. Whenever Bash is about to display a prompt, it evals whatever is in the variable $PROMPT_COMMAND. So it ends up evaluating that command all the time, which makes it a perfect place to load the saved variables. By setting the $PROMPT_COMMAND variable in your ~/.bashrc, it will become enabled in every shell you start (except for login shells, so you might want to source ~/.bashrc from your ~/.bash_profile):
# Source some variables at every prompt. This is to make stuff like
# ssh agent, dbus, etc. working in long-running shells (e.g., inside
# screen).
PROMPT_COMMAND='for f in ~/.env.d/*; do source "$f"; done'
You might need to be careful where to place this line, in case PROMPT_COMMAND already has some other value, as is the default on Debian for example. Here's my full .bashrc file; note the += and the leading ; in the second assignment of $PROMPT_COMMAND.
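In other words, something along these lines (a sketch, not my literal .bashrc):

# Keep whatever PROMPT_COMMAND was already set to (e.g. by Debian's
# default bashrc) and append our sourcing loop to it. This assumes
# PROMPT_COMMAND is already non-empty, otherwise the leading ; would
# cause a syntax error when it is evaluated.
PROMPT_COMMAND+='; for f in ~/.env.d/*; do source "$f"; done'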
The astute reader will have noticed that this will only work for existing shells when a prompt is displayed, meaning you might need to just press enter at an existing prompt (to force a new one) after logging in the second time to get the values loaded. But that's a small enough burden, right?
So, with these two components, you'll be able to optimally use your long-running screen sessions, even when your X sessions are not so stable ;-)
Additionally, this stuff also allows you to use your faithful daemons when you SSH into the machine. I use this so I can start GUI programs from another machine (in particular, to open up attachments from my email client which runs on a server somewhere). See my recent blogpost about setting that up. However, since running a command through SSH non-interactively never shows a prompt and thus never evaluates $PROMPT_COMMAND, you'll need to source the variables directly in your .bashrc. I do this at the top of my ~/.bashrc.
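That is, something like this near the top of ~/.bashrc (the same loop as above, just run unconditionally):

# Load the saved daemon addresses even for non-interactive shells
# (e.g. commands run over SSH), which never evaluate PROMPT_COMMAND.
for f in ~/.env.d/*; do
    source "$f"
done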
Man, I need to learn how to write shorter posts...
For a fair number of years now, I've been using Mutt as my primary email client. It's a very nice text-based email client that is permanently running on my server (named drsnuggles). This allows me to connect to my server from anywhere, connect to the running Screen and always get exactly the same, highly customized, mail interface (some people will say that their webmail interfaces allow for exactly the same, but in my experience webmail is always clumsy and slow compared to a decent, well-customized text-based client when processing a couple of hundred emails per day).
So I like my mutt / screen setup. However, there has been one particular thing that didn't work quite as efficiently: attachments. Whenever I wanted to open an email attachment, I needed to save the attachment within mutt to some place that was shared through HTTP, make the file world-readable (mutt insists on not making your saved attachments world-readable), browse to some url on the local machine and open the attachment. Not quite efficient at all.
Yesterday evening I was finally fed up with all this stuff and decided to hack up a fix. It took a bit of fiddling to get it right (and I had nearly started to spend the day coding a patch for mutt when the folks in #mutt pointed out an easier, albeit less elegant "solution"), but it works now: I can select an attachment in mutt, press "x" and it gets opened on my laptop. Coolness.
Just in case anyone else is interested in this solution, I'll document how it works. The big picture is as follows: When I press "x", a mutt macro is invoked that copies the attachment to my laptop and opens the attachment there. There's a bunch of different steps involved here, which I'll detail below.
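To give an idea of what such a setup can look like, here is a minimal sketch (not my actual configuration; the script name and the "laptop" hostname are made up). A macro in the attachment menu pipes the selected attachment to a helper script, which copies it to the laptop and opens it there:

# In .muttrc: bind "x" in the attachment menu to pipe the attachment
# to the helper script
macro attach x "<pipe-entry>open-on-laptop<enter>" "open attachment on my laptop"

#!/bin/sh
# open-on-laptop: read an attachment from stdin, copy it to the laptop
# and open it there (assumes the laptop is reachable over SSH and that
# xdg-open on the laptop can find its display).
tmp=$(mktemp /tmp/attachment.XXXXXX)
cat > "$tmp"
scp -q "$tmp" laptop:/tmp/
ssh laptop xdg-open "/tmp/$(basename "$tmp")"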
Only recently, Adobe has started to (finally) support 64 bit Linux with its Flash plugin. I could finally watch Youtube movies (and more importantly, do some Flash development work for Brevidius).
However, this month Adobe has announced that it drops support for 64 bit Linux again. Apparently they "are making significant architectural changes to the 64-bit Linux Flash Player and additional security enhancements" and they can't do that while keeping the old architecture around for stable releases.
This is particularly nasty, because the latest 10.0 version (which still has amd64 support) has a couple dozen (!) security vulnerabilities which are only fixed in the 10.1 version (which no longer has Linux amd64 support).
So Adobe is effectively encouraging people on amd64 Linux to either not use their product, or use a version with critical security flaws. Right.
Recently, I've been setting up awstats, a webserver log analyzer, to parse my Lighttpd logs. When I'm done, I might post some details on my setup and the glue scripts used, but for now, I just want to comment on the right LogFormat configuration value to use for lighttpd.
When googling around, a lot of people either do not mention LogFormat at all, or suggest using LogFormat=1, which means the Combined Log Format (CLF). However, lighttpd uses a different log format! In fact, the CLF is very similar to Lighttpd's log format, but it differs in the second field. In CLF, the second field is the identd username, which is ignored by awstats. In Lighttpd's format, this is the virtual host of the current request, which is very relevant if you're logging multiple virtual hosts to the same logfile. This similarity is the reason that LogFormat=1 does work for most people, but it's better to use the proper configuration:
LogFormat="%host %virtualname %logname %time1 %methodurl %code %bytesd %refererquot %uaquot"
I've taken this format string from the only correct posting I found online, but the forum of that posting seems to interpret the %ua in the last field as a newline (probably u for unicode and a for 0x0a, which is the ASCII code for a newline...), so it took me a while to realize that it was correct.
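To make the difference concrete, here's what a Lighttpd access log line looks like (a made-up example); note that the second field is the virtual host, where CLF would have the identd username (usually just "-"):

192.0.2.1 www.example.org - [10/Oct/2011:13:55:36 +0200] "GET /index.html HTTP/1.1" 200 2326 "http://www.example.org/" "Mozilla/5.0 (X11; Linux x86_64)"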
Recently I have been doing some Flash debugging for my work at Brevidius.
In a video player we have been developing (based on work done by Jeroen Wijering) we needed to escape some url parameters, since our flash code could not be certain what would be in the value (and characters like & and = could cause problems). The obvious way to do this is of course the escape function in ActionScript. This function promises to escape all "non-alphanumeric characters", which would solve all our problems.
However, after implementing this, we found that there were spaces magically appearing in our GET parameters. Upon investigation, it turned out that there were plus signs in our actual values (it's Base64-encoded data, which uses the plus sign). However, the escape function apparently thinks a plus sign is alphanumeric, since it does not escape it (note that the Flash 10 documentation documents this fact). This shouldn't be a problem, since a plus sign isn't special in an url according to RFC1738:
Thus, only alphanumerics, the special characters "$-_.+!*'(),", and reserved characters used for their reserved purposes may be used unencoded within a URL.
(Note that RFC3986 does recommend escaping plus signs, since they might be used to separate variables, but that's not the case here).
However, the urls we generate in Flash point to PHP scripts and thus pass their variables to PHP. Unfortunately, PHP does not adhere to the RFCs strictly: it interprets plus signs in an url as spaces. Historically, spaces in an url were replaced by plus signs, while spaces should really be encoded as %20 nowadays. There is of course a simple way to get Flash (or any other url-generating piece of code) to work properly with PHP: simply encode plus signs in your data as %2B (which is the "official" way). This makes sure you get a real plus in your $_GET array in PHP, and the problem is resolved.
After some searching, and asking around in ##swf on Freenode, I found the encodeURIComponent function, which is similar to escape, but does encode the plus sign. If we use this function, we can again send data with spaces to PHP! And since encoding more than needed is still fine according to the specs, there are no downsides (except that you need Flash >= 9.0).
So, if you're developing in Flash, please stop using escape, and use encodeURIComponent instead.
I've recently been hacking a bit with Flash (or rather, with Adobe's Flex compiler). This is a freely available commandline compiler, which actually works on Linux as well.
However, out of the box the commandline compiler, mxmlc, is obnoxiously slow. It takes nearly 10 seconds to compile a simple Hello, world! example, and compiling the video player I'm working on takes over 30 seconds. Not quite productive.
This is a documented "flaw" in the mxmlc compiler, caused by the fact that it has to start up a big Java program every time, loading thousands of classes, and because it always recompiles the entire source.
The official solution to get faster compile times is to use the Flex Compiler Shell (fcsh), which is included with the Flex SDK. Basically, it's a caching version of the mxmlc compiler that keeps running in the background and caches compiled files.
fcsh is intended to be used by IDEs. The IDE starts fcsh and then communicates with it through its stdin/stdout. This means fcsh is really a simple thing, without any support for listening on sockets or properly daemonizing.
Using fcsh speeds up the build time from nearly 40 seconds to just a few seconds (depending on how many changes were made). The first compilation run is still slow, of course.
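For reference, using fcsh by hand looks roughly like this (a sketch from memory; the file names are just examples and the exact commands are listed by its help output). The first mxmlc invocation is the slow, full build and gets assigned a compile target id; subsequent compile runs for that id reuse the cached state and are much faster:

$ /usr/local/flex/bin/fcsh
(fcsh) mxmlc -o player.swf player.mxml
(fcsh) compile 1
(fcsh) quit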
I've been trying to use fcsh with the ant build system, which makes things a bit tricky. Since there is no long-running process (like an IDE) which can keep fcsh running, this needs some way for fcsh to be run in the background, connected to some fifo or network socket, so we can start it on the first compilation run and then connect to it in subsequent compilation runs.
A quick google shows that there are already a few fcsh wrappers to do this, some of which are intended to work with ant as well. At a quick glance, the flex-shell-script-daemonizer seems the most useful one. It is written in Java and runs as a daemon (unlike the alternatives I saw, which were Windows-only due to some useless GUI). It has two modes: in server mode it starts fcsh and connects it to a TCP network socket, and in client mode it takes a command to run and passes it over the network to the running server.
There is also some stuff to make ant use fcsh for compiling actionscript files. However, this requires changing the build methods and stuff in build.xml, which I don't want to do (I'm trying to minimize my changes to the sources, since I'm modifying someone else's project).
So, I created a small shell script that can be used as a drop-in replacement for mxmlc. It simply joins all its arguments together into a single command and passes that to the flex-shell-script-daemonizer client. If the fcsh daemon is not yet running, it automatically starts it (and does some half-baked daemonization, since neither fcsh nor flex-shell-script-daemonizer does that...).
While writing the script, I also found out that the fcsh "shell" doesn't have any way to quote spaces in arguments in the command (at least, I couldn't find any and there was no documentation about it). This means that, AFAICS, there is no way to support spaces in paths. How braindead is that... I guess FlexBuilder, the official IDE from Adobe, doesn't use fcsh after all and instead just includes the compiler classes directly...
Of course, just as I finished the script, I encountered flexcompile, which basically does the same thing (and includes the network features of flex-shell-script-daemonizer as well). It's written in Python. However, it does require the path to flex and the mxmlc part of the path to be passed in as arguments, so it's not a 100% drop-in replacement (which I needed due to the way the build system I was using was defined). Perhaps it might be useful to you.
Anyway, here's my script. Point the JAR variable at the jar generated by flex-shell-script-daemonizer and FCSH to the fcsh binary included with the Flex SDK.
#!/bin/sh

# (Another) wrapper around FlexShellScript.jar to provide a mxmlc-like
# interface (so we can just replace the call to mxmlc with this script).
# For this, we'll have to call FlexShellScript.jar with the "client"
# argument, and then the entire mxmlc command as a single argument
# (whereas we will be called with a bunch of separate arguments)

JAR=/home/matthijs/docs/src/upstream/fcsh-daemonizer/FlexShellScript.jar
FCSH=/usr/local/flex/bin/fcsh
PORT=53000

# Check if fcsh is running
if ! nc -z localhost "$PORT"; then
    echo "fcsh not running, trying to start..."
    # fcsh not running yet? Start it (and do some poor-man's
    # daemonizing, since the jar doesn't do that..)
    java -jar "$JAR" server "$FCSH" "$PORT" > /dev/null 2> /dev/null < /dev/null &

    # Wait for the server to have started
    while ! nc -z localhost "$PORT"; do
        echo "Waiting for fcsh to start..."
        sleep 1
    done
fi

# Run the client. Note that this does no quoting of parameters, since I
# can't find any documentation that fcsh actually supports that. Seems
# like spaces in path just won't work...
# We use $* instead of $@ for building the command, since that will
# expand to a single word, instead of multiple words.
java -jar "$JAR" client "mxmlc $*" "$PORT"