Gelber Sack ICS Calendars Offline

Posted: 2014-01-11 17:42   |  More posts about code

It has come to my attention that SITA Deutschland has removed all data for Stuttgart Gelber Sack collections from their website. This is the data that I used to create the ICS Calendars distributed through this website.

Unfortunately, this means that these ICS calendars will remain empty until I find an alternative solution. I offer my apologies for this, but sadly it's something completely out of my control.

If you have any suggestions as to an alternative data source, please contact me.


Big Picture Cataloguer 1.0

Posted: 2013-11-14 21:02   |  More posts about code photography

Many years ago, I wrote the Big Picture Cataloguer - a little tool for downloading the fantastic Big Picture photojournalism series and tagging each image with its appropriate caption.

It's proved to be my most popular tool, with over 10,000 downloads so far. Unfortunately, it was also my buggiest, having been coded while hungover on a particularly drunken weekend. Plenty of users have contacted me in the meantime with support questions, and I'd always meant to clean up the code and add tests, but never quite got around to it.

This year, I finally managed to get around to it and I'm now happy to release Big Picture Cataloguer 1.0.

Click here for a Windows executable.

For OS X and Linux users, you can find the source code here. Be warned: the pyexiv2 module is unfortunately extremely difficult to get up and running (on OS X, at least). Feel free to fork this to use a friendlier image metadata library!



ICS Calendars for Stuttgart Gelber Sack Collections

Posted: 2013-10-27 13:22   |  More posts about code

Something I finished months ago but have only just gotten around to making public is a service that provides ICS calendars for plastic rubbish collections in Stuttgart (the so-called Gelber Sack Abholtermine).

The Problem

On the surface, the motivation is this: it's a pain to have to remember when your rubbish is going to be collected, an even greater pain to miss a collection and have sacks of rubbish accumulating on your balcony. Wouldn't it be great if your phone reminded you automatically, the night before?

"But," I hear you say, "you can already do this! Just download the Gelber Sack app, key in your information, and you're set!"

Unfortunately, it's not that simple. Without wanting to step on anyone's toes, the Gelber Sack App is, in a word, shit:

  • Here is a screenshot of it reminding me of a collection 4 days late:

  • The app also went through a period where it would pop up notifications for all collections in the past few months:

  • This would cause your phone to flip out every time you turned it on, vibrating and ringing for about 5 seconds at a time. Great.

  • It occurred to me that there's absolutely no need for a "Gelber Sack App". You've already got an app on your phone for this purpose - it's called Calendar. Why install another app, which may or may not work depending on how it's feeling, and which drains the battery when running in the background?

It was this last point that led me to create this service.

The Solution

All good Calendar applications (Google Calendar and iCal included) have support for importing external calendars in the ICS format. So, you can add your facebook events to Google Calendar and have your calendar automatically update when you accept an invitation to an event on facebook. Why not do the same with Gelber Sack collections?

To use this service, simply visit this page (available in both German and English), enter your post code, and select the street you live on. You'll be presented with a URL for an ICS calendar, which you can copy and paste into the calendar software of your choice. Then make sure this external calendar is also synced to your phone's calendar application and turn on notifications (I have mine set to 2 hours before, so I'm notified at 10PM the night before) - you'll never miss another rubbish collection!

For the nerds...

Source code for the application which scrapes the SITA Deutschland website and generates the ICS files is available on github here.

To avoid hammering SITA's servers, this program uses an extremely liberal (5-second) pause between each HTTP request.
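In sketch form, that scraper has two moving parts: a throttled fetch loop and an ICS generator. The snippet below is a hypothetical simplification (the function names and UID scheme are mine, not the repo's):

```python
import time
from datetime import date

def make_ics(dates, summary="Gelber Sack Abholung"):
    """Build a minimal ICS calendar with one all-day event per collection date."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//gelbersack//EN"]
    for d in dates:
        stamp = d.strftime("%Y%m%d")
        lines += [
            "BEGIN:VEVENT",
            f"UID:{stamp}-gelbersack@example.com",  # hypothetical UID scheme
            f"DTSTART;VALUE=DATE:{stamp}",
            f"SUMMARY:{summary}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)

def polite_fetch(urls, fetch, pause=5.0):
    """Call fetch(url) for each URL, sleeping between requests to spare the server."""
    results = []
    for i, url in enumerate(urls):
        if i:
            time.sleep(pause)
        results.append(fetch(url))
    return results
```

An all-day VEVENT (`DTSTART;VALUE=DATE`) is what lets the calendar app treat the collection as a date rather than a timed appointment.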


dyndns autologin

Posted: 2013-07-03 17:24   |  More posts about code

Got this email just now:

If you would like to maintain your free Dyn account, you must log into your account once a month. Note that using an update client will no longer suffice for this monthly login. You will still continue to get email alerts every 30 days if your email address is current. Failure to login once per month will result in expiration and loss of your hostname.

dyndns-autologin is a tiny script that will automatically log in to your DynDNS account to prevent this from happening.
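The script itself boils down to submitting the login form once. Here's a minimal sketch using Python's standard library - the endpoint and form field names are assumptions, and the real site likely also involves session cookies and a CSRF token, which the actual script has to handle:

```python
from urllib import parse, request

# Assumed login endpoint - check the real login form before relying on this.
LOGIN_URL = "https://account.dyn.com/entrance/"

def build_login_request(username, password):
    """Build the POST request that submits the login form (field names assumed)."""
    data = parse.urlencode({"username": username, "password": password}).encode()
    return request.Request(LOGIN_URL, data=data)

def login(username, password, opener=request.urlopen):
    """Submit the login; pass a custom opener to reuse a cookie-aware session."""
    return opener(build_login_request(username, password))
```

Wired up to cron once a month, a request like this is all it takes to keep the account marked as active.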



xltronic Vibes

Posted: 2013-05-17 00:15   |  More posts about music

Years ago, when xltronic was a thriving community of electronic music nerds, something like xltronic Vibes was possible: a series of albums of music produced by xltronic members, mixed together, often with an overarching theme.

Such collaborations are pretty much impossible nowadays. The website has been dying a slow and miserable death over the past few years, and the community is now as good as non-existent. To make matters worse, the old Vibes series has disappeared from the site.

So, here is a mirror of all of the Vibes albums I managed to collect over the years. If anyone has any that are missing, please get in touch, so they don't disappear from the internet forever.

Personal favourites are definitely #10 and #19.


Shout-out to hipsters

Posted: 2013-05-10 16:28   |  More posts about music live music

Had the chance to see Mac DeMarco live in Munich a few days ago. My expectations were so high I was almost certain I'd come away disappointed, but I left completely blown away.

DeMarco's studio albums are fairly mellow. When playing live, however, the band cranks up the tempo and throws plenty more distortion and general madness into the mix.

Here's a video of the band playing in the Pitchfork headquarters. An excellent performance all around: funny banter, awesome music and really tight playing.


Set list

  • The Stars Keep On Calling My Name
  • Cooking Up Something Good
  • Rock And Roll Night Club
  • Ode To Viceroy
  • Freaking Out The Neighborhood
  • My Kind of Woman
  • Baby's Wearing Blue Jeans
  • Jam:
    • She's Really All I Need
    • Take Five (Dave Brubeck Quartet)
    • Break Stuff (Limp Bizkit)
    • Blackbird (The Beatles)
    • She's Really All I Need (Reprise)
  • Still Together


Fiscal Cliff

Posted: 2012-12-31 13:15   |  More posts about america politics

It's almost the new year, which means that we're at the precipice of the so-called "Fiscal Cliff". A cliff that somehow manages to loom while simultaneously threatening to drag us under.

The first time I heard the phrase fiscal cliff was the morning after the 2012 American election, when it was already clear that Obama would win. All of a sudden, it was everywhere. However, according to Wikipedia:

In late February 2012, Ben Bernanke, chairman of the U.S. Federal Reserve, popularized the term "fiscal cliff" for the impending 2012 fiscal crisis.

Some analysts have argued that fiscal slope or fiscal hill would be more appropriate terminology because while the cumulative economic effect over all of 2013 would be substantial, it would not be felt immediately but rather gradually as the weeks and months went by.

With this in mind, here is a Google Trends graph of searches for "fiscal cliff" from February to today:

Doesn't look like there's much movement until after October. Analysing just October to the present day, we get the following graph:

Just as I'd thought - the first spike occurs on November 7th: the morning after Obama's re-election.

I am not suggesting anything conspiratorial, but I do find it curious that talk of such an apocalyptic term as the fiscal cliff would only emerge directly after the election. Why not in February? Why not during the painfully long election campaign when journalists would have been happy to have had something new to write about?


ahye - a screenshot and image server

Posted: 2012-12-10 22:47   |  More posts about code

The problem

You want to share a screenshot of a part of your screen with a friend or colleague. While this is a little less convoluted on OS X, on Windows or Linux you will typically:

  • Press the print screen key
  • Open an image editor
  • Paste the contents of the clipboard
  • Crop the image so that only the relevant section is visible
  • Save this image as a new file
  • Close the image editor
  • Send the image file to your friend
  • Delete this file from your computer

By any measure, a very long, drawn-out procedure - not to mention the fact that your friend also has to accept the file transfer, find the file and open it, then delete it so that their Downloads folder or Desktop doesn't get cluttered.

Gyazo is a service that sprang up to solve this problem. Just run the program, select the area of your screen to take a screenshot of, and suddenly you're taken to the image hosted on the gyazo servers. The URL is also automatically copied to the clipboard, allowing the user to simply paste this URL to whomever they're collaborating with.

Sounds great! However, there are two main problems with using gyazo:

  • It is absolutely plastered in ads
  • Your screenshot now resides on someone else's server

The solution

ahye is a compact image server written in Python to address this. It has two main features:

  • Compatibility with gyazo screenshooter clients (which are open source, and so can be used with any ahye server)
  • An additional web interface for uploading and mirroring images
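The first feature is simpler than it sounds: a gyazo-style client POSTs the screenshot and expects the resulting image URL back as the response body. Here's a hypothetical sketch of such an endpoint using only the standard library (the real gyazo protocol wraps the image in multipart/form-data; this sketch just reads the raw request body):

```python
import hashlib
import http.server
import os

STORE = "images"  # hypothetical storage directory

def save_upload(body):
    """Store raw image bytes under a content-derived name and return that name."""
    name = hashlib.sha1(body).hexdigest() + ".png"
    os.makedirs(STORE, exist_ok=True)
    with open(os.path.join(STORE, name), "wb") as f:
        f.write(body)
    return name

class AhyeHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        name = save_upload(self.rfile.read(length))
        # Gyazo-style clients copy whatever the server sends back to the
        # clipboard, so the response body is simply the uploaded image's URL.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"http://{self.headers['Host']}/{name}".encode())
```

Naming files after a hash of their content also means identical uploads deduplicate for free.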

Trying it out

If you would like to try ahye out without installing it, you can use a test server I've set up.

WARNING - This server is for experimental purposes only. I cannot guarantee any images saved to it will persist after a few minutes. If you like how ahye works, please set up and use your own server.

In order to use this test server, you can:

Setting up your own ahye server

Getting started with ahye is rather simple. In the next few days, I'll also be publishing a Chef cookbook that can be used to deploy ahye, but until then you can:

  • git clone git://
  • pip install -r requirements.txt
  • python
  • Download the gyazo client of your choice for your OS - there are a few listed in the repo's README
  • Make a small alteration to point the client to your server

Web interface

As mentioned above, ahye also provides a web interface for uploading and mirroring images. Simply visit your ahye server in a browser and drag your image(s) into the browser window to upload them.

In order to mirror an image, simply append the URL of the image to your ahye server's URL. So, let's say your ahye server is located at ahye.example.com and you want to mirror the file example.org/cat.jpg: just go to ahye.example.com/http://example.org/cat.jpg and your ahye server will automatically rehost the image and redirect you to its own copy.

If the URL to be mirrored has already been downloaded by the ahye server, it will be served without being re-downloaded and re-saved.
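That caching behaviour can be sketched as a store keyed on a hash of the source URL (names here are hypothetical; the real logic lives in the repo):

```python
import hashlib
import os

CACHE_DIR = "mirrored"  # hypothetical storage directory

def cache_path(url):
    """Deterministic local filename for a mirrored URL."""
    return os.path.join(CACHE_DIR, hashlib.sha1(url.encode()).hexdigest())

def mirror(url, download):
    """Return a local copy of url, calling download(url) only on a cache miss."""
    path = cache_path(url)
    if not os.path.exists(path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(path, "wb") as f:
            f.write(download(url))
    return path
```

Because the filename is derived deterministically from the URL, a repeat request maps to the same path and is served straight from disk.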

The future

ahye is far from perfect. It is designed - on purpose - as a literal 'black hole' for images: image histories or per-user images are both features beyond the scope of this project. The web interface is also a little clunky, with the jQuery fileupload plugin not being used to its fullest.

That being said, it is a tool that my colleagues and I use daily at work to make our lives easier. It does its job well, and it gives me peace of mind that I'm not going to accidentally save a segment of my credit card number somewhere I can't delete it. (True story - this was the impetus for ahye.)

So, give it a try, and if you're an open source developer yourself, please don't hesitate to fork and contribute!



facebewk

Posted: 2012-09-30 17:30   |  More posts about client code facebewk facebook graph api internet python

facebewk ['feɪsbuk] is a Python wrapper for the facebook Graph API. A while ago, there was quite a commotion online when official repositories for Python and C# API clients were removed by facebook without any forewarning. These clients had become neglected and were no longer accurate, and so facebook wanted to stop supporting them. Since then, the hole left in the Python ecosystem doesn't seem to have been plugged - most people seem to be using straight HTTP requests to the API.

I decided to write facebewk - a simple little API client that's designed to minimise HTTP requests and encapsulate data nicely.

Facebook have put an awful lot of work into making the Graph API as friendly and robust as it is. App developers have already spent their time familiarising themselves with it. So why mess with a good thing? Instead of coming up with a new API for developers to learn, facebewk provides a developer-friendly Node object, representative of exactly what information facebook returns when queried about a certain node on the social graph.

Some important points:

  • These Node objects are generated recursively. Therefore, a Node object representing my own facebook profile will, in turn, contain subnodes representing my hometown of Dublin and my current location, for example.
  • Information contained in subnodes may not be complete, but this is invisible to the developer: facebewk takes care of grabbing a subnode's full information if a key is accessed which doesn't currently exist in the subnode.

This is all very difficult to describe in prose, so instead, here is an example session from a Python interactive shell to show you exactly what facebewk is about:

>>> from facebewk import Client
>>> c = Client('ACCESS_TOKEN')
>>> me = c.get('me')
>>> type(me)
<class 'facebewk.Node'>
>>> me['name']
u'Aengus Walton'
>>> type(me['hometown'])
<class 'facebewk.Node'>
>>> me['hometown']
{'__fetched__': False, u'id': u'110769XXXXXXXXX', u'name': u'Dublin, Ireland'}
>>> me['hometown']['checkins']
16734
>>> me['hometown']
{u'category': u'City', u'likes': 146053, u'description': u'<p><b>Dublin</b> is the capital and most populous city of ........', '__fetched__': True, u'talking_about_count': 115999, u'name': u'Dublin, Ireland', u'link': u'', u'location': {u'latitude': 53.344037395687, u'longitude': -6.2632156999178}, u'is_community_page': True, u'checkins': 16734, u'id': u'110769888951990', u'is_published': True}
>>> newsfeed = c.get('/me/home')
>>> type(newsfeed)
<class 'facebewk.Node'>
>>> type(newsfeed['data'])
<type 'list'>
>>> me['significant_other']
{'__fetched__': False, u'name': u'Patricia Korcz', u'id': u'100000XXXXXXXXX'}
>>> me['significant_other']['hometown']['checkins']
>>> status_update = c.post(me, {'message': 'writing my blog post innit',
...     'privacy': {'value': 'CUSTOM', 'networks': 1, 'friends': 'NO_FRIENDS'}})
>>> status_update
{'__fetched__': False, u'id': u'37300126_632748066014'}
>>> status_update['message']
u'writing my blog post innit'
>>> my_comment = c.post(status_update, {'message': 'blablabla'})

The last few lines result in the following appearing on my profile:

Example status message made to facebook from the facebewk client

So, the Node is essentially a subclass of dict, with some magic thrown in to grab missing information on the fly. The only HTTP GET requests to facebook happen on the explicit c.get calls and whenever a key is accessed that is missing from a partially-fetched subnode. Information retrieved from the graph is cached, overwriting the previously scant Node objects with more detailed ones. Posting to the graph is possible too: facebewk automatically figures out whether the target node is a user profile or page, in which case it posts a normal wall post, or a pre-existing post of some kind, in which case it posts a comment.
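That lazy fetching can be sketched as a dict subclass. This is a hypothetical simplification, not facebewk's actual code - client.fetch stands in for the HTTP GET against the graph:

```python
class Node(dict):
    """A graph node that lazily fetches its full record on a key miss (sketch)."""

    def __init__(self, client, data):
        super().__init__(data)
        self.client = client
        self.setdefault("__fetched__", False)

    def __getitem__(self, key):
        if key not in self and not dict.__getitem__(self, "__fetched__"):
            # Unknown key on a partially-loaded node: one HTTP GET fills it in.
            self.update(self.client.fetch(dict.__getitem__(self, "id")))
            self["__fetched__"] = True
        return dict.__getitem__(self, key)
```

Keys already present are returned without any network traffic; only a miss on a partially-loaded node triggers the fetch, after which the node is marked complete.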

Authentication is unsupported and, at the moment, outside the scope of this project. As you can see above, I used an API key generated by the Graph API Explorer, which is probably the best way to get started with facebewk if you want to try it out.

There's a lot more that can be done here, for example, making the privacy controls above configurable with keyword arguments, adding wrappers for grabbing friendlists and newsfeeds, and so on, but I'm publishing this now to gauge interest and see if it's something which would really benefit developers in real-world scenarios.

If you have any thoughts on this project, if there are improvements or additional functionality you'd love to see, then please get in touch via email or the github issue tracker.

Source code available on github


The last few months at a glance (a summary)

Posted: 2012-09-10 20:31   |  More posts about drink

This blog has gone without updates for a good half year. Here is what you missed.

That will be all.