Gelber Sack ICS Calendars Back Online


Posted: 2014-06-17 22:38   |  More posts about code

A while ago, responsibility for rubbish collection in Stuttgart changed hands from SITA Deutschland to Schaal-Müller, breaking the ICS calendar service I'd recently launched.

The data required to provide automatically-updating ICS calendars is now back online, and so this evening I spent some time redesigning my scraper and getting the service back up and running. You'll need to unsubscribe from your previously subscribed ICS calendar URL and follow the instructions on my Gelber Sack page to add your new calendar.

Hopefully it'll last somewhat longer this time :)

Comments


zzzz.io: A free dynamic DNS service


Posted: 2014-06-17 22:35   |  More posts about code dyndns

I released zzzz.io last week. It's a free DynDNS alternative - sign up, get an easy-to-remember subdomain, and point it to the IP of your choice. If needs be, you can update the IP as it changes.

It's been a great success so far, with over 1000 people signing up within the first few hours. If you might find it useful, give it a try!

Comments


Gelber Sack ICS Calendars Offline


Posted: 2014-01-11 17:42   |  More posts about code

It has come to my attention that SITA Deutschland has removed all data for Stuttgart Gelber Sack collections from their website. This is the data that I used to create the ICS Calendars distributed through this website.

Unfortunately, this means that these ICS calendars will remain empty until I find an alternative solution. I offer my apologies for this, but sadly it's something completely out of my control.

If you have any suggestions as to an alternative data source, please contact me.

Comments


Big Picture Cataloguer 1.0


Posted: 2013-11-14 21:02   |  More posts about code photography

Many years ago, I wrote the Big Picture Cataloguer - a little tool for downloading Boston.com's fantastic Big Picture photojournalism series and tagging each image with its appropriate caption.

It's proved to be my most popular tool to date, with over 10,000 downloads so far. Unfortunately, it was also my buggiest, having been coded while hungover on a particularly drunken weekend. Plenty of users have contacted me in the meantime with support questions, and I'd always meant to clean up the code and add tests, but never quite got around to it.

This year, I finally managed to get around to it and I'm now happy to release Big Picture Cataloguer 1.0.

Click here for a Windows executable.

For OS X and Linux users, you can find the source code here. Be warned: the pyexiv2 module is unfortunately extremely difficult to get up and running (on OS X, at least). Feel free to fork this to use a friendlier image metadata library!
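For the curious, the caption-tagging itself boils down to very little pyexiv2 code. The snippet below is only a rough sketch of the idea, not the Cataloguer's actual source - in particular, writing the caption into the IPTC Caption tag is an assumption:

import pyexiv2

def tag_image(path, caption):
    # Rough sketch only: load the image's metadata, write the caption into
    # the IPTC Caption tag (an assumed choice of tag), and save it back.
    metadata = pyexiv2.ImageMetadata(path)
    metadata.read()
    metadata['Iptc.Application2.Caption'] = [caption]  # IPTC values are lists
    metadata.write()

tag_image('big_picture_example.jpg', 'Example caption text')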

Enjoy!

Comments


ICS Calendars for Stuttgart Gelber Sack Collections


Posted: 2013-10-27 13:22   |  More posts about code

Something I finished months ago but have only just gotten around to making public is a service that provides ICS calendars for plastic rubbish collections in Stuttgart (the so-called Gelber Sack Abholtermine).

The Problem

On the surface, the motivation is this: it's a pain to have to remember when your rubbish is going to be collected, an even greater pain to miss a collection and have sacks of rubbish accumulating on your balcony. Wouldn't it be great if your phone reminded you automatically, the night before?

"But," I hear you say, "you can already do this! Just download the Gelber Sack app, key in your information, and you're set!"

Unfortunately, it's not that simple. Without wanting to step on anyone's toes, the Gelber Sack App is, in a word, shit:

  • Here is a screenshot of it reminding me of a collection 4 days late:

  • The app also went through a period where it would pop up notifications for all collections from the past few months:

  • This would cause your phone to flip out every time you turned it on, vibrating and ringing for about 5 seconds at a time. Great.

  • It occurred to me that there's absolutely no need for a "Gelber Sack App". You've already got an app on your phone for this purpose - it's called Calendar. Why install another app, which may or may not work depending on how it's feeling, and which drains the battery when running in the background?

It was this last point that led me to create this service.

The Solution

All good Calendar applications (Google Calendar and iCal included) have support for importing external calendars in the ICS format. So, you can add your facebook events to Google Calendar and have your calendar automatically update when you accept an invitation to an event on facebook. Why not do the same with Gelber Sack collections?

To use this service, simply visit this page (available in both German and English), enter your post code, select the street you live on, and you'll be presented with a URL for an ICS calendar which you can just copy and paste into the calendar software of your choice. Then, make sure this external calendar is also synced to your phone's calendar application, turn on notifications (I have mine set to 2 hours before, so I'm notified at 10PM the night before) and you'll never miss another rubbish collection!

For the nerds...

Source code for the application which scrapes the SITA Deutschland website and generates the ICS files is available on github here.

To avoid hammering SITA's servers, this program uses an extremely liberal (5-second) pause between each HTTP request.
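For illustration, generating one of those collection events only takes a few lines with the icalendar library. This is a sketch of the general idea rather than an excerpt from the scraper (the library choice and the hard-coded date are assumptions):

from datetime import date
from icalendar import Calendar, Event

# Sketch: build a calendar containing a single all-day collection event.
cal = Calendar()
cal.add('prodid', '-//Gelber Sack Abholtermine//EN')
cal.add('version', '2.0')

event = Event()
event.add('summary', 'Gelber Sack Abholung')
event.add('dtstart', date(2013, 11, 5))  # the scraped collection date
cal.add_component(event)

with open('gelber_sack.ics', 'wb') as f:
    f.write(cal.to_ical())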

Comments


dyndns autologin


Posted: 2013-07-03 17:24   |  More posts about code

Got this email just now:

If you would like to maintain your free Dyn account, you must log into your account once a month. Note that using an update client will no longer suffice for this monthly login. You will still continue to get email alerts every 30 days if your email address is current. Failure to login once per month will result in expiration and loss of your hostname.

dyndns-autologin is a tiny script that will automatically login to your DynDNS account to prevent this happening.
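The idea is simply to perform the login on a schedule, e.g. from a monthly cron job. The sketch below illustrates that pattern with requests; the URL and form field names are placeholders rather than DynDNS's actual ones - see the repository for how the real script does it:

import requests

# Placeholder values - not DynDNS's real endpoint or field names.
LOGIN_URL = 'https://example.com/account/login'
USERNAME = 'my_username'
PASSWORD = 'my_password'

session = requests.Session()
response = session.post(LOGIN_URL, data={'username': USERNAME, 'password': PASSWORD})
response.raise_for_status()
print('Logged in - account kept alive for another month.')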

Enjoy.

Comments


ahye - a screenshot and image server


Posted: 2012-12-10 22:47   |  More posts about code

The problem

You want to share a screenshot of a part of your screen with a friend or colleague. While this is a little less convoluted on OS X, on Windows or Linux you will typically:

  • Press the print screen key
  • Open an image editor
  • Paste the contents of the clipboard
  • Crop the image so that only the relevant section is visible
  • Save this image as a new file
  • Close the image editor
  • Send the image file to your friend
  • Delete this file from your computer

By any measure, a very long, drawn-out procedure - not to mention the fact that your friend also has to accept the file transfer, find the file and open it, and then delete it so that their Downloads folder or Desktop doesn't get cluttered.

Gyazo is a service that sprang up to solve this problem. Just run the program, select the area of your screen to take a screenshot of, and suddenly you're taken to the image hosted on the gyazo servers. The URL is also automatically copied to the clipboard, allowing the user to simply paste this URL to whomever they're collaborating with.

Sounds great! However, there are two main problems with using gyazo:

  • It is absolutely plastered in ads
  • Your screenshot now resides on someone else's server

The solution

ahye is a compact image server written in python to address this. It has two main features:

  • Compatibility with gyazo screenshooter clients (which are open source, and so can be used with any ahye server)
  • An additional web interface for uploading and mirroring images

Trying it out

If you would like to try ahye out without installing it, you can use a test server I've set up.

WARNING - This server is for experimental purposes only. I cannot guarantee any images saved to it will persist after a few minutes. If you like how ahye works, please set up and use your own server.

In order to use this test server, you can:

Setting up your own ahye server

Getting started with ahye is rather simple. In the next few days, I'll also be publishing a Chef cookbook that can be used to deploy ahye, but until then you can:

  • git clone git://github.com/kopf/ahye.git
  • pip install -r requirements.txt
  • python runserver.py
  • Download the gyazo client of your choice for your OS - there are a few listed in the repo's README
  • Make a small alteration to point the client to your server (a rough sketch of the upload it performs follows below)
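To give a rough idea of what that alteration amounts to: gyazo-style clients just POST the screenshot as multipart form data and print the URL that comes back. The snippet below imitates that with requests; the /upload.cgi path and the imagedata field name are based on the classic gyazo client protocol and should be treated as assumptions - check the clients listed in the README for the exact values:

import requests

AHYE_SERVER = 'http://ahye.myweb.com'  # your own ahye instance

# Assumed gyazo-style upload: multipart POST of the image data.
with open('screenshot.png', 'rb') as f:
    response = requests.post(
        AHYE_SERVER + '/upload.cgi',
        files={'imagedata': ('screenshot.png', f, 'image/png')},
    )

print(response.text)  # URL of the hosted screenshot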

Web interface

As mentioned above, ahye also provides a web interface for uploading and mirroring images. Simply visit your ahye server in a browser and drag your image(s) into the browser window to upload them.

In order to mirror an image, simply append the URL of the image to your ahye server's URL. So, if your ahye server is located at http://ahye.myweb.com and you want to mirror the file http://i.imgur.com/EHWlL.jpg, you just go to http://ahye.myweb.com/http://i.imgur.com/EHWlL.jpg and your ahye server will automatically rehost the image and redirect you to its own copy.

If the URL to be mirrored has already been downloaded by the ahye server, it will be served without being re-downloaded and re-saved.
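Scripting a mirror request is just as simple. Here's a small sketch using the same example addresses as above; requests follows the redirect to ahye's copy automatically:

import requests

AHYE_SERVER = 'http://ahye.myweb.com'
SOURCE_IMAGE = 'http://i.imgur.com/EHWlL.jpg'

# Ask the ahye server to mirror the image and follow the redirect to its copy.
response = requests.get(AHYE_SERVER + '/' + SOURCE_IMAGE)
print(response.url)  # the URL of ahye's own copy of the image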

The future

ahye is far from perfect. It is designed - on purpose - as a literal 'black hole' for images: image histories or per-user images are both features beyond the scope of this project. The web interface is also a little clunky, with the jQuery fileupload plugin not being used to its fullest.

That being said, it is a tool that I and my colleagues at work use daily to make our lives easier. It does its job well, and it gives me peace of mind that I'm not going to accidentally save a segment of my credit card number somewhere I can't delete it. (True story - this was the impetus for ahye.)

So, give it a try, and if you're an open source developer yourself, please don't hesitate to fork and contribute!

Comments


facebewk


Posted: 2012-09-30 17:30   |  More posts about client code facebewk facebook graph api internet python

facebewk ['feɪsbuk] is a Python wrapper for the facebook Graph API. A while ago, there was quite a commotion online when official repositories for Python and C# API clients were removed by facebook without any forewarning. These clients had become neglected and were no longer accurate, and so facebook wanted to stop supporting them. Since then, the hole left in the Python ecosystem doesn't seem to have been plugged - most people seem to be using straight HTTP requests to the API.

I decided to write facebewk - a simple little API client that's designed to minimise HTTP requests and encapsulate data nicely.

Facebook have put an awful lot of work into making the Graph API as friendly and robust as it is. App developers have already spent their time familiarising themselves with it. So why mess with a good thing? Instead of coming up with a new API for developers to learn, facebewk provides a developer-friendly Node object, representative of exactly what information facebook returns when queried about a certain node on the social graph.

Some important points:

  • These Node objects are generated recursively. Therefore, a Node object representing my own facebook profile will, in turn, contain subnodes representing my hometown of Dublin and my current location, for example.
  • Information contained in subnodes may not be complete, but this is invisible to the developer: facebewk takes care of grabbing a subnode's full information if a key is accessed which doesn't currently exist in the subnode.

This is all very difficult to describe in prose, so instead, here is an example session from a Python interactive shell to show you exactly what facebewk is about:

>>> from facebewk import Client
>>> c = Client('FACEBOOK_ACCESS_TOKEN')
>>> me = c.get('me')
>>> type(me)
>>> me['name']
u'Aengus Walton'
>>> type(me['hometown'])
>>> me['hometown']
{'__fetched__': False, u'id': u'110769XXXXXXXXX', u'name': u'Dublin, Ireland'}
>>> me['hometown']['checkins']
16734
>>> me['hometown']
{u'category': u'City', u'likes': 146053, u'description': u'<p><b>Dublin</b> is the capital and most populous city of ........', '__fetched__': True, u'talking_about_count': 115999, u'name': u'Dublin, Ireland', u'link': u'http://www.facebook.com/pages/Dublin-Ireland/110769888951990', u'location': {u'latitude': 53.344037395687, u'longitude': -6.2632156999178}, u'is_community_page': True, u'checkins': 16734, u'id': u'110769888951990', u'is_published': True}
>>>
>>> newsfeed = c.get('/me/home')
>>> type(newsfeed)
>>> type(newsfeed['data'])
>>>
>>> me['significant_other']
{'__fetched__': False, u'name': u'Patricia Korcz', u'id': u'100000XXXXXXXXX'}
>>> me['significant_other']['hometown']['checkins']
222
>>>
>>> status_update = c.post(me, {'message': 'writing my blog post innit', 
...     'privacy': {'value': 'CUSTOM', 'networks': 1, 'friends': 'NO_FRIENDS'}})
>>> status_update
{'__fetched__': False, u'id': u'37300126_632748066014'}
>>> status_update['message']
u'writing my blog post innit'
>>> my_comment = c.post(status_update, {'message': 'blablabla'})
>>> c.like(my_comment)
True

The last few lines result in the following appearing on my profile:

Example status message made to facebook from the facebewk client

So, the Node is essentially a subclass of dict, with some magic thrown in to grab missing information on the fly. The only HTTP GET requests to facebook in the session above are the two explicit c.get() calls and the three accesses of keys that hadn't been fetched yet: me['hometown']['checkins'], me['significant_other']['hometown'] and status_update['message']. Information retrieved from the graph is stored, overwriting the previously scant Node objects with more detailed ones. Posting to the graph is possible with the same API as is defined here. facebewk also automatically figures out whether the target node to be posted to is a user profile or page, in which case it posts a normal wall post, or a pre-existing post of some kind, in which case it posts a comment.
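To make that "magic" a little more concrete, here is a minimal sketch of the lazy-fetching idea - a dict subclass that fetches the full node the first time a missing key is accessed. This is illustrative only, not facebewk's actual implementation; only the client get() call and the __fetched__ flag mirror the session above:

class Node(dict):
    # Minimal sketch, not facebewk's real Node class.
    def __init__(self, client, data):
        super(Node, self).__init__(data)
        self.client = client
        self.setdefault('__fetched__', False)

    def __getitem__(self, key):
        if key not in self and not dict.get(self, '__fetched__'):
            # Partially-loaded subnode: fetch the full node by its id,
            # merge the new data in, and mark it as fetched.
            self.update(self.client.get(dict.get(self, 'id')))
            self['__fetched__'] = True
        return dict.__getitem__(self, key)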

Authentication is unsupported and, at the moment, outside the scope of this project. As you can see above, I used an API key generated by the Graph API Explorer, which is probably the best way to get started with facebewk if you want to try it out.

There's a lot more that can be done here, for example, making the privacy controls above configurable with keyword arguments, adding wrappers for grabbing friendlists and newsfeeds, and so on, but I'm publishing this now to gauge interest and see if it's something which would really benefit developers in real-world scenarios.

If you have any thoughts on this project, if there are improvements or additional functionality you'd love to see, then please get in touch via email or the github issue tracker.

Source code available on github

Comments


Albumart Replacer


Posted: 2012-02-04 18:33   |  More posts about art code computers music

The Albumart Replacer is a little script I wrote to automatically find higher-resolution copies of album covers. For listening to music, I use foobar2000 and although the wonderful foo_discogs automatically adds album art to my music, sometimes the artwork on discogs.com isn't the best.

Enter Albumart Replacer. If I'm listening to music and I notice the album art isn't up to scratch, e.g.:

I simply run the script, it sends the image's data to tineye.com, and if any higher-resolution copies of the same image are found, it'll grab the best quality one:

This is all done without any user interaction. Using foo_run, it's easy to set up a keybinding in foobar2000 so that it just takes one key press (simply pass the script a song's %PATH% variable from foobar and it'll do the rest).

Source code, downloads and further details are available at the github repository.

Enjoy!

Comments


@grammer_man who the fuck is this nigga and why u comin at me like that #Hoeassnigga


Posted: 2012-01-09 20:06   |  More posts about code computers funny idiots internet oddities

Had a spare hour last Thursday and decided to write a little twitter bot. There he is above. His name is Grammer_Man and he corrects other twitter users' misspellings, using data scraped from these Wikipedia pages.

Responses have been pouring in already, some agitated, some confused, but most positive -- which was a pleasant surprise. In any event, the minimal amount of effort in coding has paid off many times over in entertainment.

You can see who's responding at the moment by searching for @grammer_man, and also by checking his list of favourites.

Here is the (somewhat slapdash) code that powers our fearless spelling Nazi:

grabber.py

This module grabs the spelling data from Wikipedia.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import pickle

import requests
from BeautifulSoup import BeautifulSoup

def grab(letter):
    '''
    Grabs spellings from wikipedia
    '''
    url = 'http://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/%s' % letter
    html = requests.get(url).content
    soup = BeautifulSoup(html)
    bullets = soup.findAll('li')
    retval = {}
    for bullet in bullets:
        if 'plainlinks' in repr(bullet):
            values = bullet.text.split('(')
            if len(values) == 2:
                retval[values[0]] = values[1][:-1] # shave off the ) at end
    return retval

def get_spellings():
    '''
    Returns a dictionary of {false: correct} spellings
    '''
    if not os.path.exists('words.pkl'):
        retval = {}
        for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
            print 'Getting typos - %s' % c
            retval.update(grab(c))
        print 'Dumping...'
        f = open('words.pkl', 'w')
        pickle.dump(retval, f)
        f.close()
        return retval
    else:
        f = open('words.pkl', 'r')
        retval = pickle.load(f)
        f.close()
        return retval

if __name__ == '__main__':
    get_spellings()

bot.py

The bot. Selects misspellings at random, searches for them, responds to them, while also taking breaks between tweets and longer breaks every few hours.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import random
import time
import pickle

import twitter

from grabber import get_spellings

API = twitter.Api()

MESSAGES = u'''
Hey $USERNAME, didn't you mean $CORRECT there?
#
# All messages stored in here, one per line.
#
'''.split('\n')

def compose_message(twitter_post, mistake, correct):
    '''
    Choose a message from MESSAGES at random, substitute fields to personalise it and 
    check if it exceeds the twitter message limit. Try this 100 times before failing.
    '''
    retries = 0
    while retries < 100:
        retries += 1  # count attempts so the loop can actually give up
        message = MESSAGES[random.randint(0, len(MESSAGES) - 1)]
        message = message.replace('$USERNAME', '@%s' % twitter_post.user.screen_name)
        message = message.replace('$MISTAKE', '"%s"' % mistake).replace('$CORRECT', '"%s"' % correct)
        if message and len(message) < 141:
            return message
    return None

def correct_spelling(twitter_post, mistake, correct):
    '''
    Correct someone's spelling in a twitter_post
    '''
    print u'Correcting @%s for using %s...' %(twitter_post.user.screen_name, 
                                            mistake)
    message = compose_message(twitter_post, mistake, correct)
    if not message:
        print u'All messages were too long... Aborting...'
        return False
    else:
        try:
            API.PostUpdate(message, in_reply_to_status_id=twitter_post.id)
        except Exception, e:
            print 'Failed to submit tweet (%s).' % e
            return False
        return True

def search(word):
    '''
    Search twitter for uses of a word, return one if it's been used recently.
    Otherwise return None.

    TODO: Add time awareness.
    '''
    print 'Searching for uses of %s...' % word
    results = API.GetSearch(word)
    if results:
        for result in results:
            if not check_if_done(result.id) and\
                not result.user.screen_name == 'grammer_man' and word in result.text:
                return result
    return None

def check_if_done(id):
    '''
    Checks if a tweet has already been responded to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
        if id in done:
            return True
    return False

def update_done(id):
    '''
    Updates a list of tweets that've been replied to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
    else:
        done = []

    done.append(id)

    f = open('done.pkl', 'w')
    pickle.dump(done, f)
    f.close()

def main():
    '''
    Main program flow
    '''
    words = get_spellings()
    counter = 0 
    while True:
        word = random.choice(words.keys())
        post = search(word)
        if counter > 100:
            rand_time = random.randint(120*60, 240*60)
            print 'Done %s tweets, sleeping for %s minutes' % (counter, rand_time/60)
            time.sleep(rand_time)
            counter = 0
        # TODO: PROPERLY PRUNE THE MISTAKES/CORRECTIONS FROM WIKIPEDIA AND REMOVE THIS:
        if not u',' in word + words[word] and not u';' in word + words[word]:
            if post:
                result = correct_spelling(post, word, words[word])
                if result:
                    counter += 1
                    print '#%s Done' % counter
                    update_done(post.id)
                    time.sleep(random.randint(300,500))

if __name__ == '__main__':
    main()

Grammer_Man uses the following libraries: python-twitter, requests and BeautifulSoup.

Comments