Albumart Replacer


Posted: 2012-02-04 18:33   |  More posts about art code computers music

The Albumart Replacer is a little script I wrote to automatically find higher-resolution copies of album covers. For listening to music I use foobar2000, and although the wonderful foo_discogs automatically adds album art to my music, the artwork on discogs.com sometimes isn't the best.

Enter Albumart Replacer. If I'm listening to music and I notice the album art isn't up to scratch, e.g.:

I simply run the script: it sends the image's data to tineye.com and, if any higher-resolution copies of the same image are found, grabs the best-quality one:

This is all done without any user interaction. Using foo_run, it's easy to set up a keybinding in foobar2000 so that it takes just one key press (simply pass the script a song's %PATH% variable from foobar and it'll do the rest).
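
The selection step at the heart of this is simple: among the reverse-image-search matches, take the candidate with the most pixels, provided it beats the current cover. A minimal Python sketch (the candidate data and the `pick_best` helper are hypothetical illustrations, not the script's actual TinEye handling):

```python
def pick_best(current_size, candidates):
    """Return the URL of the highest-resolution candidate that is larger
    than current_size (a (width, height) tuple), or None if nothing better
    was found."""
    best = None
    best_pixels = current_size[0] * current_size[1]
    for cand in candidates:
        pixels = cand['width'] * cand['height']
        if pixels > best_pixels:
            best, best_pixels = cand['url'], pixels
    return best

# Hypothetical matches for a 300x300 cover:
candidates = [
    {'url': 'http://example.com/cover-500.jpg', 'width': 500, 'height': 500},
    {'url': 'http://example.com/cover-1200.jpg', 'width': 1200, 'height': 1200},
]
print(pick_best((300, 300), candidates))  # the 1200x1200 copy wins
```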

Source code, downloads and further details are available at the github repository.

Enjoy!

Comments


@grammer_man who the fuck is this nigga and why u comin at me like that #Hoeassnigga


Posted: 2012-01-09 20:06   |  More posts about code computers funny idiots internet oddities

Had a spare hour last Thursday and decided to write a little twitter bot. There he is above. His name is Grammer_Man and he corrects other twitter users' misspellings, using data scraped from these Wikipedia pages.

Responses have been pouring in already, some agitated, some confused, but most positive -- which was a pleasant surprise. In any event, the minimal amount of effort in coding has paid off many times over in entertainment.

You can see who's responding at the moment by searching for @grammer_man, and also by checking his list of favourites.

Here is the (somewhat slapdash) code that powers our fearless spelling Nazi:

grabber.py

This module grabs the spelling data from Wikipedia.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import pickle

import requests
from BeautifulSoup import BeautifulSoup

def grab(letter):
    '''
    Grabs spellings from wikipedia
    '''
    url = 'http://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/%s' % letter
    html = requests.get(url).content
    soup = BeautifulSoup(html)
    bullets = soup.findAll('li')
    retval = {}
    for bullet in bullets:
        if 'plainlinks' in repr(bullet):
            values = bullet.text.split('(')
            if len(values) == 2:
                retval[values[0]] = values[1][:-1] # shave off the ) at end
    return retval

def get_spellings():
    '''
    Returns a dictionary of {false: correct} spellings
    '''
    if not os.path.exists('words.pkl'):
        retval = {}
        for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
            print 'Getting typos - %s' % c
            retval.update(grab(c))
        print 'Dumping...'
        f = open('words.pkl', 'w')
        pickle.dump(retval, f)
        f.close()
        return retval
    else:
        f = open('words.pkl', 'r')
        retval = pickle.load(f)
        f.close()
        return retval

if __name__ == '__main__':
    get_spellings()
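
The Wikipedia lists format each bullet as "misspelling (correction)", which is what the split-on-'(' trick in grab() relies on. Reduced to a standalone Python 3 sketch (parse_entry is my own name for it):

```python
def parse_entry(text):
    """Parse one "misspelling (correction)" bullet into a (wrong, right)
    pair, or return None if the line doesn't match the expected shape."""
    parts = text.split('(')
    if len(parts) != 2 or not parts[1].endswith(')'):
        return None
    # strip whitespace and shave off the closing paren
    return parts[0].strip(), parts[1][:-1].strip()

print(parse_entry('definately (definitely)'))  # ('definately', 'definitely')
```

Note that grab() above keeps the trailing space from the split (the dictionary key becomes 'definately '), which is worth stripping as done here.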

bot.py

The bot itself. It selects misspellings at random, searches twitter for them and responds, taking breaks between tweets and longer breaks every few hours.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import random
import time
import pickle

import twitter

from grabber import get_spellings

API = twitter.Api()

MESSAGES = u'''
Hey $USERNAME, didn't you mean $CORRECT there?
#
# All messages stored in here, one per line.
#
'''.split('\n')

def compose_message(twitter_post, mistake, correct):
    '''
    Choose a message from MESSAGES at random, substitute fields to personalise it and 
    check if it exceeds the twitter message limit. Try this 100 times before failing.
    '''
    retries = 0
    while retries < 100:
        message = random.choice(MESSAGES)
        message = message.replace('$USERNAME', '@%s' % twitter_post.user.screen_name)
        message = message.replace('$MISTAKE', '"%s"' % mistake).replace('$CORRECT', '"%s"' % correct)
        # skip blank/comment lines from the template block above
        if message and not message.startswith('#') and len(message) < 141:
            return message
        retries += 1  # count attempts so we eventually give up
    return None

def correct_spelling(twitter_post, mistake, correct):
    '''
    Correct someone's spelling in a twitter_post
    '''
    print u'Correcting @%s for using %s...' %(twitter_post.user.screen_name, 
                                            mistake)
    message = compose_message(twitter_post, mistake, correct)
    if not message:
        print u'All messages were too long... Aborting...'
        return False
    else:
        try:
            API.PostUpdate(message, in_reply_to_status_id=twitter_post.id)
        except Exception, e:
            print u'Failed to submit tweet (%s).' % e
            return False
        return True

def search(word):
    '''
    Search twitter for uses of a word, return one if it's been used recently.
    Otherwise return None.

    TODO: Add time awareness.
    '''
    print 'Searching for uses of %s...' % word
    results = API.GetSearch(word)
    if results:
        for result in results:
            if not check_if_done(result.id) and\
                not result.user.screen_name == 'grammer_man' and word in result.text:
                return result
    return None

def check_if_done(id):
    '''
    Checks if a tweet has already been responded to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
        if id in done:
            return True
    return False

def update_done(id):
    '''
    Updates a list of tweets that've been replied to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
    else:
        done = []

    done.append(id)

    f = open('done.pkl', 'w')
    pickle.dump(done, f)
    f.close()

def main():
    '''
    Main program flow
    '''
    words = get_spellings()
    counter = 0 
    while True:
        word = random.choice(words.keys())
        post = search(word)
        if counter > 100:
            rand_time = random.randint(120*60, 240*60)
            print 'Done %s tweets, sleeping for %s minutes' % (counter, rand_time/60)
            time.sleep(rand_time)
            counter = 0
        # TODO: PROPERLY PRUNE THE MISTAKES/CORRECTIONS FROM WIKIPEDIA AND REMOVE THIS:
        if u',' not in word + words[word] and u';' not in word + words[word]:
            if post:
                result = correct_spelling(post, word, words[word])
                if result:
                    counter += 1
                    print '#%s Done' % counter
                    update_done(post.id)
                    time.sleep(random.randint(300,500))

if __name__ == '__main__':
    main()
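
Incidentally, check_if_done() and update_done() re-read done.pkl on every call; the same bookkeeping can be wrapped up in one small class (a Python 3 sketch of my own devising -- the DoneList name and the use of a set aren't in the bot):

```python
import os
import pickle
import tempfile

class DoneList:
    """Pickle-backed record of tweet ids that have already been replied to."""

    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            with open(path, 'rb') as f:
                self.done = set(pickle.load(f))
        else:
            self.done = set()

    def __contains__(self, tweet_id):
        return tweet_id in self.done

    def add(self, tweet_id):
        self.done.add(tweet_id)
        with open(self.path, 'wb') as f:  # persist after every reply
            pickle.dump(self.done, f)

# Demonstration with a throwaway file:
path = os.path.join(tempfile.gettempdir(), 'done_demo.pkl')
if os.path.exists(path):
    os.remove(path)
done = DoneList(path)
done.add(12345)
print(12345 in done)  # True
```

A set gives constant-time membership checks, and reloading from the same path picks up where the last run left off.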

Grammer_Man uses the following libraries: python-twitter, requests and BeautifulSoup.

Comments


The Videogame Music Preservation Foundation


Posted: 2010-08-20 18:24   |  More posts about art computer games computers downloads electronic internet music

A friend came across this website a few weeks ago, and I was very excited about it - an archive of plenty of video game music (mainly for DOS, which is what I grew up with), all recorded properly in order to maximise the nostalgia, and made available in ogg format.

I contacted the guy who runs it about setting up a torrent of the entire archive, and he very kindly obliged. You can get the entire collection here (~4.4GB in total). Enjoy!

Comments


Why the future doesn't need us.


Posted: 2010-07-24 01:21   |  More posts about computer science computers internet robotics science

I finally managed to get around to reading Bill Joy's article Why the future doesn't need us the other day while waiting to board a plane. Bill Joy is a renowned computer scientist who co-founded Sun Microsystems and authored the popular UNIX text editor vi. The article is concerned with the ever increasing speed of "progress" in fields of new technology (primarily robotics, nanotechnology and genetic engineering) which Joy views with apprehension, arguing that the products of these fields will eventually render mankind obsolete and lead to our self-destruction.

There's no point trying to quote it, so instead you can read the article here, read more about Bill Joy here, or read responses and criticism of the article here.

Comments


Rise of the Robots


Posted: 2010-06-12 02:58   |  More posts about america computers politics robotics science war

A good, short blog post from the wonderful ginandtacos blog on the increasing prevalence of unmanned vehicles in war, ending with a very sobering thought:
Won't it be great when the military can send in the tanks without having to put crews in harm's way? Yes and no. The fewer casualties, the better. But what becomes of our reluctance to send the military galavanting around the sordid parts of the world once American casualties are taken out of the equation? We have almost no restraint as it is. I shudder to think of how easily Presidents and legislators will make the decision to go to war when the attitude of "We can just send robots to do it!" becomes entrenched. We saw what the advancements in design of cruise missiles in the 1980s did to the Executive Branch; if someone's acting up, just lob a dozen Tomahawks at them from a few hundred miles away. It became the easy way to intervene without actually making a commitment or putting Americans at risk. Collateral damage isn't much of a deterrent to our political class. UAVs are another step in that direction, a step toward a future with more remotely operated and even autonomous means of doing the dirty work. It's great that technology allows more American soldiers to come home alive and in one piece, but if we remove the U.S. body count from the decision-making process the only restraints on waging war will be common sense, morality, and logic. Yeah, let's start taking bets on how well that works.

Comments


Tricks with python and music


Posted: 2010-05-22 17:30   |  More posts about art code computer science computers experimental music oddities

From Music Machinery:

One of my favorite hacks at last weekend’s Music Hack Day is Tristan’s Swinger. The Swinger is a bit of python code that takes any song and makes it swing. It does this by taking each beat and time-stretching the first half of each beat while time-shrinking the second half. It has quite a magical effect. Some examples:

  • Every Breath You Take (swing version) by TeeJay
  • Sweet Child O' Mine (Swing Version) by plamere

You can find more examples in the original blog post. The results are truly impressive. I'm looking forward to playing with Tristan Jehan's code, and also having a look at his PhD thesis:
Machines have the power and potential to make expressive music on their own. This thesis aims to computationally model the process of creating music using experience from listening to examples. Our unbiased signal-based solution models the life cycle of listening, composing, and performing, turning the machine into an active musician, instead of simply an instrument. We accomplish this through an analysis-synthesis technique by combined perceptual and structural modeling of the musical surface, which leads to a minimal data representation.

Fascinating stuff!
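
The trick described above -- stretch the first half of each beat, squeeze the second -- can be sketched in a few lines (a toy nearest-neighbour resampler of my own; the real Swinger works from Echo Nest beat analysis with proper time-stretching, not this):

```python
def resample(samples, new_len):
    """Crude nearest-neighbour resample of a list of samples to new_len."""
    if new_len <= 0 or not samples:
        return []
    return [samples[int(i * len(samples) / new_len)] for i in range(new_len)]

def swing_beat(beat, ratio=2/3):
    """Stretch the first half of a beat to `ratio` of its total length and
    squeeze the second half into the remainder, keeping the length fixed."""
    half = len(beat) // 2
    first = resample(beat[:half], int(len(beat) * ratio))
    second = resample(beat[half:], len(beat) - len(first))
    return first + second

beat = list(range(100))   # a fake "beat" of 100 samples
swung = swing_beat(beat)
print(len(swung))         # length preserved: 100
```

With ratio=2/3 each beat becomes a long-short triplet pair, which is the essence of swing.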

Comments


Big Picture Cataloguer: An update


Posted: 2010-05-20 20:11   |  More posts about art code computers internet photography

In just over a week since I released the Big Picture Cataloguer, there's been a surprising amount of interest and enthusiasm about it. Since I still haven't gotten binary versions of the program for OS X and Linux up (I've no access to an OS X computer, and getting the required libraries installed on Linux has proved to be quite difficult), I've decided to relent and share the source code of the cataloguer under a Creative Commons license.

The script makes use of pyexiv2 - the 0.2 branch - for metadata editing, mechanize for grabbing pages and submitting error reports, the very handy unaccented_map() class (included) for unicode trickery and of course the wonderful HTML parser, BeautifulSoup.

Naturally, it's available from the Big Picture Cataloguer's page in the Code section of this site.

Given how much The Big Picture galleries' HTML format has subtly changed over time, and the fact that I wrote this in a rush, the code is quite messy, but it does the job.

Today's update is version 0.3, which adds an optional "quiet mode" so that users can schedule the program to run frequently. Enjoy!

Comments


Boston.com Big Picture Cataloguer


Posted: 2010-05-12 20:02   |  More posts about art code computers internet media photography politics

I'm a big fan of The Boston Globe's photojournalism series, The Big Picture. So much so, in fact, that I decided to dedicate a few hours this week to building a program that would not just download the entire series, but add caption metadata to each photo, since many are informative and look very nice in Picasa, for example.

Now, I'm happy that the application is stable enough to release to the world in the Code section of my website.

Since I don't want people hammering The Boston Globe's servers, I've made the script wait a fraction of a second between each request; and since I don't want people to be able to disable this delay, unfortunately only binaries will be available for the time being. Windows binaries are available already; OS X and Linux binaries are to come in a few days.
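
The politeness logic is nothing more than "wait before each request"; the pattern looks roughly like this (a Python 3 sketch -- the Throttle name and the half-second figure are illustrative, not the Cataloguer's actual values):

```python
import time

class Throttle:
    """Enforce a minimum delay between successive requests."""

    def __init__(self, delay=0.5, sleep=time.sleep, clock=time.monotonic):
        self.delay = delay
        self.sleep = sleep   # injectable so the throttle can be tested
        self.clock = clock   # without actually waiting
        self.last = None

    def wait(self):
        """Block until at least `delay` seconds since the last request."""
        now = self.clock()
        if self.last is not None:
            remaining = self.delay - (now - self.last)
            if remaining > 0:
                self.sleep(remaining)
        self.last = self.clock()

# Demonstration with a frozen clock and a recording sleep:
slept = []
t = Throttle(delay=0.5, sleep=slept.append, clock=lambda: 100.0)
t.wait()      # first request: no wait
t.wait()      # same instant: must sleep the full delay
print(slept)  # [0.5]
```

Injecting sleep and clock keeps the real code trivially simple (Throttle(0.5).wait() before each fetch) while making the timing logic verifiable.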

Indeed, if those at The Boston Globe have a problem with how the program operates, they can simply contact me and we can come to an agreement, but I've worked hard to make sure that the program contacts their servers as little as possible.

Bug reports will be automatically submitted through this website too, but if you have any unforeseen problems (e.g. a crash or a hang), email me with as much information as possible (the "Traceback" text printed before the crash, what album/photo the program was working on, etc.).

What can you do once you've got the entire 2GB collection of photos downloaded? Well, you can simply look through them at your own pace and comfort, or indeed choose to create a montage screensaver from them (although be warned - a screensaver that fades from a beautiful Antarctic landscape to a bloody photo of a victim of the war in Afghanistan might not be exactly what you had in mind.)

But in any event, hopefully it'll be of some use. Enjoy!

Comments


New discoveries


Posted: 2010-02-25 16:21   |  More posts about architecture computers design internet oddities

This was posted on reddit today. I agree entirely with the poster's sentiment: more often than not, interesting links on reddit aren't gateways to whole websites of interesting material. When they do link to a website's front page, it's generally a very narrow, single-purpose website that is quickly forgotten. Hopefully, the poster's subreddit -- apparently yet to be made -- will be a success.

In any event, having gone through the blog-post he had linked I decided to share some of my new discoveries here myself:

  • Building Maker: A Google app I was unaware of, which lets you add the 3D element to Google Maps. For all bored architects out there (since this is just what they want to be doing in their time off.)
  • Ikea Hacker: Neat stuff done with bog-standard Ikea furniture.
  • Strange Maps: A blog of, well, old and interesting maps. I don't know if I'd go as far as to say strange...
  • Newseum: The front pages of newspapers from 78 countries around the world.
  • Cooking For Engineers: This one reminded me of my father, a pragmatist who insists on weighing pasta before cooking it, in order to make sure he'll be doling out the correct amount. Nothing wrong with approaching cooking as a science, as opposed to an art!
  • GetHuman.com: An excellent idea for a website. This one tells you which keys you need to press in order to get an actual human operator on the line when calling a large company, saving you the time of listening to and trying to interact with a computerised system.
  • PDFGeni.com: Another great idea -- a repository of PDF documents such as old technical manuals, academic texts, and so on.

I feel I must write a disclaimer, saying I haven't used or read these sites extensively, having just discovered them a few hours ago, but from first impressions they do look like they deserve a bookmark.

Comments


Rette deine Freiheit


Posted: 2010-01-05 22:19   |  More posts about art censorship computers design digital rights germany internet politics words

At the moment in Germany, there is fierce opposition growing against plans by the CDU to implement internet censorship under the guise of attacking the spread of child pornography. A movement championed by the German Piratenpartei has dubbed ex-minister for family affairs Ursula von der Leyen "Zensursula", a portmanteau of Zensur (censorship) and Ursula, and refers to the CDU's plans as Stasi 2.0, a nod to the brutal secret police that operated in former East Germany.

Not only is there to be a secret list of blocked websites, such as exists in Australia, but the government is pushing for more data to be collected from citizens and retained for a long period of time.

A video which caught my attention a while back was entitled Du bist Terrorist (You are a terrorist). With soft ambient music playing, and deceptively pleasantly designed imagery, the two-minute video parodies the Du bist Deutschland ad-campaign with a soft, reassuring voice informing you of what the German government has in store for you, in terms of heavier and more invasive surveillance -- because You are a terrorist.

Earlier this week I found that the same people had created a new video in the same vein, entitled Rette deine Freiheit (Save your freedom). The video focuses much more on the coming internet censorship in Germany than just data retention and physical surveillance.

Since there was no English translation available, I decided to translate it and re-upload to Youtube. The result is below:

The translation is by no means perfect, but at least it's something. There were a few tricky problems with it:

  • Einfach wegschauen: Literally "simply look away", the video describes this as the method tried-and-tested by members of families with a history of domestic abuse. I was going to translate it as "simply look the other way" in its first instance, since this is the closest phrase in English that pertains to such a situation. However, this doesn't exactly capture the double-meaning employed in the video, since it implies wilful ignorance which isn't quite applicable to what the government is doing, so I decided to settle on "simply block it out". I'm not sure I'm happy with this, however. Suggestions?
  • In the sentence, "In Prävention, Therapie und Personal investiert hätte dies vielen Opfern helfen können: Reinste Verschwendung", the sarcastically implied meaning is that the money that could be invested in preventative measures, therapy and personnel (and could thereby have helped many victims) is instead spent on building an internet block. I don't think I captured this very well.

In any event, there's likely to be an official translation soon (I just saw an "Englisch (bald verfügbar)" notice at the top of the official page -- perhaps my email asking for a transcript of the video got them in a rush) and these issues will cease to be.

One last thing -- if you are interested in learning more about the situation in Germany regarding internet freedom and the child pornography scare, I'd urge you to visit not only the links above, but also this shocking, yet morbidly fascinating account of one techie's work in the murkiest of subcultures. Thankfully, he doesn't go into detail about actual child abuse, but instead details exactly how child pornography rings work, using the internet and computers.

Put simply, it proves what anyone with a clue already knows: current proposals for internet censorship will have absolutely no impact whatsoever on paedophiles and child pornographers and will only serve to infringe the rights of normal, law-abiding internet users.

Thanks to Áine and Patricia for help with one or two minor parts of the translation.

Comments