07 November, 2021

Breaking recursion in Python and fix-ing it back up again

Trying out a GitHub repo for what started off as a blog post, on how fixpoint combinators relate to Python:

https://github.com/benclifford/python-fixpoint#readme
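
The repo linked above has the full write-up; as a minimal illustration of the idea (not code from the repo), here is a `fix` combinator that takes a step function with the self-reference abstracted out, and ties the recursive knot back up:

```python
def fix(f):
    """Fixpoint combinator: turn an open-recursive step function
    into an ordinary recursive function."""
    return lambda *args: f(fix(f))(*args)

# factorial with the recursive call abstracted out as `rec`
def fact_step(rec):
    return lambda n: 1 if n == 0 else n * rec(n - 1)

fact = fix(fact_step)
```

`fact_step` never mentions itself; all the recursion lives in `fix`.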

28 January, 2021

berlin strava openstreetmap mashup

I made a mashup (so last decade) of my strava rides since I've been in Berlin.

You can see it here: http://www.hawaga.org.uk/ben/berlinmap/.

It uses leaflet.js to draw the map, Open Street Map to provide the base street map, Strava on my phone to record my rides, and jpravetz' strava CLI to get the data out of Strava and into a file.

04 January, 2021

commandline editing for zoom meetups

In the last year, I've helped host a bunch of meetups and conferences on Zoom. We've recorded them, and I wanted to turn each talk into a separate YouTube video.

Zoom can give you a giant MP4 of the whole session, and I wanted to cut that video into pieces.

A few times, I tried using OpenShot, but: my laptop struggled to cope with the load; the UI was focused on more elaborate editing than I wanted; and that UI isn't natural for someone who, like me, spends 99% of their time not using editing software.

I figured out a workflow that worked better for me:

  • Use VLC on the giant MP4 to quickly identify the start and end times for each talk.
  • Make an ffmpeg command line for each talk, for example:
    ffmpeg -i day1.mp4 -ss 2:31:54 -to 2:47:07 day1-7-madany.mp4
    ffmpeg -i day1.mp4 -ss 2:47:16 -to 2:57:46 day1-8-foster.mp4
    ffmpeg -i day1.mp4 -ss 2:58:17 -to 3:13:10 day1-9-glanzman.mp4
    ffmpeg -i day1.mp4 -ss 3:13:34 -to 3:23:42 day1-10-ward.mp4
    ffmpeg -i day1.mp4 -ss 3:24:02 -to 3:33:22 day1-11-reynier.mp4
    ffmpeg -i day1.mp4 -ss 3:45:52 -to 4:13:18 day1-12-clifford.mp4
    ffmpeg -i day1.mp4 -ss 4:15:17 -to 4:55:14 day1-13-shaffer.mp4
      
    which will cut each time range out of `day1.mp4` into the named file. (Without `-c copy`, ffmpeg re-encodes each clip rather than copying the streams directly.)
  • Then, let this run. It's quite slow on my laptop but doesn't need any human interaction.
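
Keeping the cut points in one table makes those command lines easy to regenerate if a time needs adjusting; a sketch (the `ffmpeg_command` helper is my own invention, using the first two talks from the list above):

```python
# (start, end, output filename) for each talk, read off from VLC
talks = [
    ("2:31:54", "2:47:07", "day1-7-madany.mp4"),
    ("2:47:16", "2:57:46", "day1-8-foster.mp4"),
]

def ffmpeg_command(start, end, output, source="day1.mp4"):
    """Build an ffmpeg command line cutting [start, end] out of source."""
    return f"ffmpeg -i {source} -ss {start} -to {end} {output}"

for start, end, output in talks:
    print(ffmpeg_command(start, end, output))
```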

11 March, 2020

A1120EUA Hall Effect Switch vs Pi Zero W

I'm building a bike-wheel LED project that wants to know how fast the wheel is turning, and what position it is in as it spins.
I got an A1120EUA-T from hobbytronics.
It turns out you can't set pull-ups from sysfs (/sys/class/gpio), so I'll have to install wiringpi for the `gpio` command:
$ gpio export 0 in
$ gpio mode 0 up

And now I can wave a neodymium magnet nearby (about 1.5cm away) and see the input go low as it passes by:
$ while true; do gpio read 0 ; done
1
1
1
0
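
Reading the pin in a shell loop shows the pulses; turning them into wheel speed is then just the time between consecutive pulses. A hypothetical sketch (the wheel diameter and magnet count here are invented, not measured from my wheel):

```python
import math

WHEEL_DIAMETER_M = 0.622   # hypothetical: a 700c road wheel
MAGNETS_PER_WHEEL = 1      # one magnet passing the sensor per revolution

def speed_kmh(prev_pulse_s, this_pulse_s):
    """Speed implied by the gap between two consecutive sensor pulses,
    given timestamps in seconds."""
    revs_per_s = 1 / (MAGNETS_PER_WHEEL * (this_pulse_s - prev_pulse_s))
    circumference_m = math.pi * WHEEL_DIAMETER_M
    return revs_per_s * circumference_m * 3.6  # m/s -> km/h
```

More magnets per wheel would give more position resolution per revolution, at the cost of telling the pulses apart.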

07 October, 2018

`stg uncommit` the first commit

I wanted to edit the first commit of a repository using `stg`. (Actually I wanted to edit all of them, which was easy except for the first).

`stg` doesn't like this:

$ stg uncommit -n 1
Uncommitting 1 patches ... done
stg uncommit: Trying to uncommit 38a035a7667c9417e10a4f53e7c88a57711ca440,
              which does not have exactly one parent
... the root commit is not formed in the way that `stg` wants.

I worked around this by:

  1. making a new empty commit in a new repo:
    $ mkdir ~/empty-repo
    $ cd ~/empty-repo
    $ git init
    $ git commit -a --allow-empty -m "Empty initial commit"
    [master (root-commit) 38a035a] Empty initial commit
    
  2. getting that empty commit into my real repo ...
    $ cd ~/real-repo
    $ git remote add empty-repo ~/empty-repo
    $ git fetch empty-repo
    
  3. cherry picking the original root commit on top of the empty commit ...
    $ git tag tmp1
    $ stg pop -a
    $ git reset --hard empty-repo/master
    $ git cherry-pick tmp1
    
  4. and now this cherry-picked copy of the original root commit can be `stg uncommit`ted:
    $ stg uncommit -n 1
    

20 May, 2018

temporary merge tool

I've been an enthusiastic user of stgit for some time. This lets you work on a series of commits at the same time, so that you can edit them to look good before sending them out into the world (i.e. pushing upstream). You can move up and down a series of git commits making changes to them, rather than adding new commits onto the end of a branch.

One use I often make of this is preparing a bunch of orthogonal patches: patches which don't interact with each other or need to be applied in a strict order, but that I want all applied at once while I'm making day to day use of my software, so that I can test the changes in real use.

It's pretty awkward (i.e. impossible) to do this collaboratively though: all of git's support for collaborative work is by making new commits onto the end of branches, not editing earlier commits.

So I have been trying a new workflow: lots of feature branches at once, instead of lots of stg patches; with a script I wrote, tmt, which makes your checkout look like the merge of all of those feature branches, but lets you commit your changes onto one specific feature branch.

Here's the repo, with a README. Yes, it's Haskell. Of course. https://github.com/benclifford/tmt

05 March, 2018

sous vide

I built a sous vide cooker, driven by a PWM/PID controller written in Erlang.
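
The controller in the cooker is Erlang; as a language-neutral sketch of the idea (gains and clamping here are illustrative, not the values from my cooker), a PID step producing a PWM duty cycle for the heater looks like:

```python
class PID:
    """Minimal PID step: returns a heater duty cycle in [0, 1]."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint_c, measured_c, dt_s):
        error = setpoint_c - measured_c
        self.integral += error * dt_s
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt_s
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(1.0, max(0.0, out))  # clamp to a valid PWM duty cycle
```

Each control tick, the returned duty cycle sets what fraction of the next PWM period the heater relay is on.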

Then I gave a talk about it (and some other related temperature control that I've done) at a Raspberry Pi meetup.

Here's the video (login required) and here are the slides.

16 February, 2018

Build a Crap Web Form in Haskell in 28 days.

I've been writing an informal series of posts about a small scout camp registration system that I've been building:

Build a Crap Web Form in Haskell in 28 days

29 January, 2018

Yellow Pages for Modern Times

Early on in my career it was pretty common to use clusters made up of a pile of heterogeneous unix systems: some Sun boxes, some Linux machines, maybe IRIX and AIX in there too.

The thing that made them into a single cluster was that your user account existed on them all, and you had the same home directory on them all: make files on one machine, and they're visible on the other machines; but you still had access to the machine specific features of each host.

The technology of the time often used Network File System (NFS) and Network Information Service (NIS) (formerly known as Yellow Pages, with that name living on in the yp prefix of commands like yppasswd).

Fast-forward a decade or two and things look different: virtual machines are a thing now, and more recently, containers. It's now very common to custom-build a virtual machine or a container, both with something approximating an entire OS, specifically for running one application, or for running just a single piece of one application.

So maybe you'd connect these pieces - virtual machines or containers - with some kind of socket connection: a web front end exposing HTTP and talking to a PostgreSQL database in another container with no shared files between them.

I did a bunch of stuff this way, and it was great: you can install separate package stacks in isolation from each other. Want this weird version of a library or compiler? Or to run some curl | sudo script without messing up the rest of your system? Or stick with an old distribution of your OS just for one tool? Easy.

But it was a pain getting files between different places. Got my text editor and version control set up in one place, but need to compile in another? There are all sorts of different ways to get files between those places: for example, commit regularly to version control; or rsync.

Docker running on one host has options for mounting pieces of the host file system inside containers; but I lacked a good idea of what to mount where.

It was all so simple before: you had ~ everywhere, and nothing else.

So I started using the unix cluster model, described at the top of this post, to guide how I set up a lot of my containers and virtual machines.

The actual technology (NFS, docker volume mounts, YP, LDAP, HESIOD, ...) isn't massively relevant: I've used different mechanisms in different places.

What really matters is: all the (regular human) users get their home directory, mounted at the same place (under /home).

With most ways of sharing files, that means the unix user id for that user should be the same everywhere too.

I've implemented this basic model in a few different ways:

  • For a couple of VMs inside the same physical server, a traditional NFS and OpenLDAP setup (NFS for file sharing, LDAP for distributing account details), which is a more modern replacement for NFS/NIS.
  • On my laptop and some of my physical servers, a wrapper around docker called cue which creates exactly one user (the invoking user) inside the container, and mounts their home directory appropriately.
  • Some ad-hoc docker server containers (eg inbound SMTP) where the whole of /home is volume-mounted, and then OpenLDAP is used to share accounts.

There are plenty of downsides: for example, your whole home directory is accessible in more places than it needs to be and so is more vulnerable; you can't access files outside your home directory, so ~ is now a specially magic directory; posix filesystems work badly in distributed systems. For lots of what I want, these downsides are outweighed by the upsides.

One twist that doesn't happen so much with a cluster of physical machines: a server such as a mail server is now a container with a mail queue which I want to persist across rebuilds. Rebuilding would be unusual in the physical machine model, because you don't usually rebuild physical servers often. So where should that persistent data go? Inside a specific /home directory? In a /container-data directory that is mounted too, like an alternate version of /home? What user-id should own the queue? Different builds of a container might assign different user-ids to the mail server.

21 January, 2018

A string of DNS protocol bugs.

I went to turn on DNSSEC for cqx.ltd.uk today - the server that signed it broke right before my Christmas busy period, so I disabled DNSSEC on that zone until I got round to fixing it.

I've encountered three different apparent protocol implementation bugs in the space of a few hours:

  • Andrews and Arnold's web based control panel accepts DS records as generated by BIND's dnssec-keygen tool but then throws a complicated looking error when talking to Nominet, the UK domain registry, to put those records where they need to be. As far as I can tell, this is because the BIND output has whitespace in the middle of a hex string, something RFC 4034 s5.3 seems to think is acceptable. Why is installing crypto keys always so hard?
  • For a while, Hetzner's recursive resolvers were unable to verify (and therefore refused to answer) results for my zone. I have a suspicion (but I don't have much to go on other than a hunch) that this was something to do with DS records and the actual zone having some kind of mismatch - although Google Public DNS at 8.8.8.8, and Verisign's DNSSEC checker both worked ok.
  • I discovered an implementation quirk in the Haskell dns library, which I use inside a debugging tool I'm slowly building. This is to do with the mechanism which DNS uses to compress replies: where a domain name would be repeated in a response, it can be replaced by a pointer to another occurrence of that name in the reply. It looks like in this case that the dns library will only accept those pointers if they point to regions of the reply that have specifically already been parsed by the domain name parsing code, rather than pointers to arbitrary bytes in the reply. This is frustratingly similar to another bug I encountered (at Campus London) where their (not-so) transparent firewall was reordering DNS replies; giving a bug that only manifested when I was sitting in their cafe. (github issue #103)
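
The compression scheme itself is small; this is my own sketch of a name parser (not the Haskell library's code) that follows pointers to arbitrary earlier offsets in the message, which is exactly what the replies that tripped the library relied on:

```python
def parse_name(msg, offset):
    """Parse a possibly-compressed domain name from a raw DNS message.

    A length byte with its top two bits set is a 14-bit pointer to
    anywhere earlier in the message (RFC 1035 s4.1.4). Returns the
    dotted name and the offset just past the name in the original run.
    """
    labels = []
    jumped = False
    end = offset
    seen = set()  # guard against pointer loops in malformed replies
    while True:
        length = msg[offset]
        if length & 0xC0 == 0xC0:  # compression pointer
            pointer = ((length & 0x3F) << 8) | msg[offset + 1]
            if pointer in seen:
                raise ValueError("compression pointer loop")
            seen.add(pointer)
            if not jumped:
                end = offset + 2  # pointer ends the name in-place
            jumped = True
            offset = pointer
        elif length == 0:  # root label terminates the name
            if not jumped:
                end = offset + 1
            return ".".join(labels), end
        else:  # ordinary label: length byte then that many bytes
            labels.append(msg[offset + 1:offset + 1 + length].decode("ascii"))
            offset += 1 + length
```

A parser that only accepts pointers into already-parsed name regions handles most real replies, but rejects the perfectly legal ones that point elsewhere.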