Utilite Pro as a server

I got a Utilite Pro to replace my aging Fit-PC2. It’s significantly more powerful than the Fit-PC2 and contains an ARM processor. The standard package contains a slightly customised version of Ubuntu (mainly, the kernel includes a few extra drivers for the onboard hardware). It turns out, however, that a number of useful kernel features have been disabled, among them several filesystems such as btrfs and xfs. The kernel cryptographic routines are also disabled, which means you’re out of luck if you want to use encryption software to protect your data. In the end, recompiling the kernel was pretty straightforward, and Utilite provide pretty comprehensive instructions on how to do so.

In case it’s useful (and as a reminder to myself), the link to the instructions is in the comments of my little script below, which automates the process:

#!/usr/bin/env bash

set -eu

# Build a new kernel for the Utilite Pro. See:
# http://www.utilite-computer.com/wiki/index.php/Utilite_Linux_Kernel

OLD="yes" # run 'make oldconfig' instead of 'make menuconfig'?

CWD=$(pwd)
TODAY=$(date +%Y-%m-%d)
ROOTFS=$CWD/../rootfs-$TODAY
TARBALL=$CWD/../linux-image-utilite-$TODAY.tar.gz
IMAGE=uImage-cm-fx6

# Refuse to overwrite a tarball from an earlier run today.
if [ -f "$TARBALL" ]; then
    echo "Tarball $TARBALL already exists; remove it and rerun." >&2
    exit 1
fi

mkdir -p "$ROOTFS/boot"

export ARCH=arm

# Are we using the existing .config?
if [ "$OLD" == "yes" ]; then
    make oldconfig
else
    make utilite_defconfig # start from the Utilite defaults...
    make menuconfig # ...this is interactive and you need to save the .config manually
fi
make -j4 # parallelise the build
make -j4 uImage
INSTALL_MOD_PATH=$ROOTFS make modules_install # stage the modules under $ROOTFS

cp -v $CWD/arch/arm/boot/uImage $ROOTFS/boot/$IMAGE
tar -C $ROOTFS -czvf $TARBALL .

# Display the commands to actually install the new kernel.
boot_partition=$(awk '{for (i=1; i<=NF; i++) {if($i~/root=/) {print substr($i,6,length($i)-6)"1"}}}' /proc/cmdline)
echo '# mount the /boot partition:'
echo "mount $boot_partition /boot"
if [ ! -f /boot/$IMAGE-$TODAY ]; then
    echo '# back up the existing image:'
    echo mv /boot/$IMAGE /boot/$IMAGE-$TODAY
else
    echo "WARNING: EXISTING KERNEL IMAGE BACKUP IN /boot: $IMAGE-$TODAY"
fi
echo '# extract the new kernel to the installation:'
echo tar -C / -xvf $TARBALL

To use the script, place it in the cloned git repository and execute it (probably best as root). It creates a tarball and prints the instructions for installing the new kernel after the compilation has completed. It refuses to overwrite an existing tarball, so if you’ve run the script multiple times in the same day, you’ll need to remove the old tarball before running it again.
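For example, assuming you’ve saved the script as build-kernel.sh (the name is arbitrary, as is the directory name of your clone):

cd linux-cm-fx6 # your clone of the Utilite kernel repository
cp ~/build-kernel.sh .
./build-kernel.sh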

I’ve attached an example of the kernel .config to this post which enables a lot more filesystems (including btrfs and xfs) as well as the kernel cryptographic functions.
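For reference, enabling those features boils down to .config symbols along these lines (names as in the mainline kernel sources; verify them via menuconfig for your tree):

CONFIG_BTRFS_FS=y
CONFIG_XFS_FS=y
CONFIG_CRYPTO=y
CONFIG_CRYPTO_AES=y
CONFIG_DM_CRYPT=m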

Hosting your own calendar server

I recently came across an interesting story by Cory Doctorow entitled ‘Scroogled’ in which Google becomes a malevolent force with its comprehensive archive for each user. With that in mind, I began to consider the information I put into Google’s servers (the irony of writing this post on Google-owned blogger.com is not lost on me). I use a lot of their services, and thought about which one I could most easily replace. Google’s calendar offering seemed a good place to start since I didn’t really interact with it through Gmail, but accessed it through my phone and through Thunderbird on my other computers.

Some brief searches found a list of potential calendar servers, but the one which stood out to me was radicale. This CalDAV server is a nice, simple Python server with no dependencies besides Python itself. The default configuration is pretty well set up, and only a few changes are needed before you can start accessing your server.
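One change you may need is the address and port radicale binds to, which is set in the [server] section of its config file. A minimal sketch (the path and option names are from the radicale documentation of the time, so check them against your version):

[server]
hosts = 0.0.0.0:5232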

The default port from which the CalDAV calendars are served is 5232, so I opened up that port on my router so that I could access the calendars from anywhere. I had to install a CalDAV app called CalDAV-sync-beta on my phone to be able to view my calendars on the native calendar widgets. The Lightning plugin for Thunderbird can load the radicale calendars by default. Adding them is a simple walk through the wizard, selecting the option to add a new calendar “On the network:”, then choosing CalDAV as the type of calendar. The syntax for the calendar location is

http://your-home-server.com:5232/username/calendarname

replacing your-home-server.com with either your server’s IP address or its URI. Likewise, username should be the user who launched the radicale daemon (I suggest this is not root). The calendarname value can be anything you like, but it’s probably best to make it something memorable, or at least descriptive. For the CalDAV-sync-beta app on Android, the process is similar (Settings > Accounts & sync > Add account, then select CalDAV). I found it easier to select “Manual mode” for configuring the calendar. The syntax for the calendar address is similar to the Lightning example above:

http://your-home-server.com:5232/username/

except you’ll notice I’ve omitted the calendarname value at the end. This is because CalDAV-sync-beta will search for all the calendars you have at that location and offer you the option of syncing them all or just certain ones. You can specify the full path as in the Lightning example if you know you will only want to connect to a single calendar on this server. The username field needs a value, but you can omit the password (we have not set up a password-protected calendar).

I have yet to manage to get radicale to accept a username and password, so the calendars are open to the public, which is something you should be aware of.
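If you want to try where I failed, access control lives in the [acl] section of radicale’s config; in principle, something like the following should point it at an htpasswd-style user file (untested by me, and option names may vary between radicale versions):

[acl]
type = htpasswd
filename = ~/.config/radicale/users
encryption = crypt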

Overall, it’s been working well, and disentangling myself from at least one Google service is a start.

Free space with freedup

A useful tool I came across recently is freedup. If you’re like me, you have many copies of the same file in a number of locations on your disk (this is particularly true of me because I have version-controlled backups of my most important files). Whilst multiple copies of the same file make restoring from older copies easy, they also make chewing through disk space easy. To solve this, it’s possible to link two identical files together so they share the same data on the disk. In essence, a file on a disk is really just a record of where to look for the data, so it’s easy to make two files point to the same location. That way you don’t need two copies of the same data; you just point the two files at it. These types of links are often called hard links.
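If you’ve not come across hard links before, here’s the idea in a few commands (file names arbitrary):

echo 'some data' > original.txt
ln original.txt linked.txt # hard link: both names now refer to the same data on disk
ls -i original.txt linked.txt # identical inode numbers confirm they share storage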

Whilst it’s possible to find all duplicates manually and link the two files together (through the ln command), it’s tedious if you have hundreds or thousands of duplicates. That’s where freedup comes in handy. It can search through specified paths finding all duplicates, and hard link them together for you, telling you how much space you’re saving in the process. A typical command might look like:

freedup -c -d -t -0 -n ./

where -c counts the disk space saved by each link, -d forces modification times to be identical, and -t and -0 disable the use of the external hash functions. Most importantly at this stage, -n forces freedup to perform a dummy run through. Piping the output into a file or a pager (like less) means you can verify it’s looking in the right places for files and that it’s linking what you expected. Remove the -n, rerun the command, and it’ll link those files identified in the dummy run. My experience was a saving of several gigabytes on my external disk, which is not something to be sniffed at.
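In other words, the workflow is a dummy run first, then the real thing (flags as described above):

freedup -c -d -t -0 -n ./ | less # dummy run: check it links what you expect
freedup -c -d -t -0 ./ # real run: create the hard links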

Unison vs. Dropbox

A mature project by the name of Unison purports to synchronise directories between a number of computers. It’s cross-platform (at least Windows, Mac and Linux) so it seemed suitable for a test run. Given my adventures with Dropbox, then SpiderOak, this looked promising.

Unison differs in some important respects from both SpiderOak and Dropbox. Firstly, there’s no remote backup (a.k.a. storage in the “cloud”): you can synchronise directories across many machines but you have to have at least some form of access to them (usually SSH). Secondly, Unison doesn’t run as a daemon like SpiderOak and Dropbox do. Those two launch transfers based on input/output (I/O) activity (i.e. you’ve added or removed a file in the synced directories); Unison doesn’t do this on its own. Thirdly, Unison doesn’t do versioning, so you can’t view the changes to a file over a given time. In SpiderOak’s case, this versioning goes back forever whilst Dropbox does so only for the last thirty days. These limitations can be overcome through the use of additional tools (see here for more info), notably file monitoring tools.

Instead, however, I decided a more straightforward approach would suffice. I have essentially three machines on which I would like to synchronise a directory (and its contents). I decided that a star topology would work best, which is to say one of the machines acts as the master copy and the two other clients connect to it to upload and download files. The advantage of this approach is that I need only run an SSH server on one machine; the clients need only have SSH clients on them. Since one of these machines is a laptop and the other is behind a corporate firewall, this made set-up a lot easier.

The first thing to note is that for this to behave as much like Dropbox as possible, key-based SSH logins are necessary. Once I’d successfully set up key-based logins from each client machine to the master, setting up Unison was pretty straightforward. Their documentation is actually pretty good, and I was able to set up the profiles on the laptop and the desktop machine with little hassle.
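For anyone who hasn’t done it before, key-based logins are the usual two commands from each client (the user, host, and port here are placeholders for your own):

ssh-keygen -t rsa # accept an empty passphrase so syncs can run unattended
ssh-copy-id -p 2222 user@remote-master.com # append your public key to the master's authorized_keys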

One point worth making is that the Unison versions on the clients in the star network must be close to (or exactly the same as) the one on the master. Apparently there are some differences in the way Unison checks for new files between versions. I’ve used versions 2.40.63 (Linux) and 2.40.61 (Mac and Windows) and haven’t received any error messages about conflicts. On the Windows machine, it was easiest to use the Cygwin version of Unison with Cygwin’s version of OpenSSH too. I didn’t have much luck with the GUI tool on any platform; in fact, it was much easier to use the command line and text files.

To set up a profile, Unison expects a .prf file in $HOME/.unison with the following format:

# Some comments, if you want
root = C:\Users\slackset\Unison
root = ssh://user@remote-master.com:2222//home/slackset/Unison
sshargs = -C

As you can see, the syntax is pretty simple. Note the :2222 after remote-master.com, which specifies the SSH port number (omit it if using the default, 22); the // which follows marks the start of an absolute path on the remote machine. This will synchronise the contents of C:\Users\slackset\Unison on a Windows machine with a target of /home/slackset/Unison on the master. The process is the same on a Mac, but the .prf files live in $HOME/Library/Application Data/Unison. You can create as many profiles as you want, something more akin to the functionality in SpiderOak, but missing in Dropbox (which can synchronise only one directory).

There are a number of options for the command-line invocation of Unison to run this in a daemon-like manner:

/usr/bin/unison ProfileName -contactquietly -batch -repeat 180 -times -ui text -logfile /tmp/unison-"$(date +%Y-%m-%d).log"

The important flags are -batch and -repeat n, which force the synchronisation to occur without prompts and repeat it every n seconds (in my case, 180, or three minutes). If you omit -logfile and its target, a unison.log will be left in your home directory (which is annoying). I put this in the crontab with the @reboot keyword (see man 5 crontab) on the Windows (through Cygwin) and Mac machines, so every three minutes my files are synchronised. That’s not quite instantaneous, so for when I’m feeling impatient, I created an alias to run that command without the -repeat 180:

alias syncme='/usr/bin/unison ProfileName -contactquietly -batch -times -ui text -logfile /tmp/unison-"$(date +%Y-%m-%d).log"'

It spits out a list of the files which will be updated (either uploaded or downloaded) to standard output. I could bind this to a keyboard shortcut (with AutoHotKey on Windows, for example) or as an icon somewhere, but since I have a terminal open all the time, it seems easier to just type syncme when I’m impatient.
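For reference, the crontab entry (added with crontab -e) is just the repeating command from above behind the @reboot keyword:

@reboot /usr/bin/unison ProfileName -contactquietly -batch -repeat 180 -times -ui text -logfile /tmp/unison-"$(date +%Y-%m-%d).log"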

So far, this is working pretty well over Dropbox, but I do miss the fallback of having versioning. I may eventually get around to setting up git on the master in the star network, which would give me good versioning, I think. Something for a rainy day, perhaps.

PPTPd installation and configuration

Here’s how to set up a PPTP server (a.k.a. a VPN in Microsoft Windows operating systems) on Slackware 13.1 with the aid of SlackBuilds.org (SBo) and sbopkg. Most of this is lifted from here, which was the most recent set of instructions I could find; everything else dated from a few years ago, and that makes those documents about as useful as a chocolate teapot.

Install pptpd from SBo. Use sbopkg if you like, otherwise follow the instructions here.
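With sbopkg installed, that amounts to:

sbopkg -i pptpd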

Once that’s complete, edit /etc/ppp/chap-secrets with your favourite editor. I like vim, so:

vim /etc/ppp/chap-secrets

Add a new username and password to log in:

someusername pptpd somestrongpassword *

Replace someusername and somestrongpassword with the username and password you wish to use to connect to your VPN. The four fields are the client name, the server name (pptpd here), the secret, and the IP addresses the client may connect from (* means any).

Now we need to tell pptpd how to handle the new connections’ IP addresses on the local network. Edit /etc/pptpd.conf with your favourite editor:

vim /etc/pptpd.conf

In /etc/pptpd.conf, add the following lines to give the remote machine an IP on the local network in the 192.168.111.0/24 subnet:

localip 192.168.111.1
remoteip 192.168.111.234-238,192.168.111.245

Moving on, edit /etc/ppp/options.pptpd:

vim /etc/ppp/options.pptpd

In that file, replace ms-dns 192.168.1.1 and ms-dns 192.168.1.2 with Google’s DNS servers:

ms-dns 8.8.8.8 
ms-dns 8.8.4.4

The final step is opening up TCP port 1723 on your router (PPTP also uses GRE, IP protocol 47, so your router needs to pass that too) and setting up dynamic DNS to provide a more easily remembered address to connect to from your remote host.

When all that’s done, launch pptpd as root and try connecting to your new PPTP/VPN server.
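Something like this (as root) starts the daemon and confirms it’s listening:

pptpd
netstat -lnt | grep 1723 # expect a LISTEN entry for the PPTP control port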

I tested this from a machine on a different network and was able to browse the web through my PPTP server without problems. Browsing to www.whatismyip.com gave me the PPTP server’s IP address, so everything was going through the tunnel. What I need now is more bandwidth at home!