facebewk


Posted: 2012-09-30 17:30   |  More posts about client code facebewk facebook graph api internet python

facebewk ['feɪsbuk] is a Python wrapper for the facebook Graph API. A while ago, there was quite a commotion online when facebook removed the official repositories for its Python and C# API clients without any forewarning. The clients had become neglected and were no longer accurate, so facebook wanted to stop supporting them. Since then, the hole left in the Python ecosystem doesn't seem to have been plugged: most people appear to be making straight HTTP requests to the API.

I decided to write facebewk - a simple little API client that's designed to minimise HTTP requests and encapsulate data nicely.

Facebook have put an awful lot of work into making the Graph API as friendly and robust as it is. App developers have already spent their time familiarising themselves with it. So why mess with a good thing? Instead of coming up with a new API for developers to learn, facebewk provides a developer-friendly Node object, representative of exactly what information facebook returns when queried about a certain node on the social graph.

Some important points:

  • These Node objects are generated recursively. Therefore, a Node object representing my own facebook profile will, in turn, contain subnodes representing my hometown of Dublin and my current location, for example.
  • Information contained in subnodes may not be complete, but this is invisible to the developer: facebewk takes care of grabbing a subnode's full information if a key is accessed which doesn't currently exist in the subnode.

This is all very difficult to describe in prose, so instead, here is an example session from a Python interactive shell to show you exactly what facebewk is about:

>>> from facebewk import Client
>>> c = Client('FACEBOOK_ACCESS_TOKEN')
>>> me = c.get('me')
>>> type(me)
<class 'facebewk.Node'>
>>> me['name']
u'Aengus Walton'
>>> type(me['hometown'])
<class 'facebewk.Node'>
>>> me['hometown']
{'__fetched__': False, u'id': u'110769XXXXXXXXX', u'name': u'Dublin, Ireland'}
>>> me['hometown']['checkins']
16734
>>> me['hometown']
{u'category': u'City', u'likes': 146053, u'description': u'<p><b>Dublin</b> is the capital and most populous city of ........', '__fetched__': True, u'talking_about_count': 115999, u'name': u'Dublin, Ireland', u'link': u'http://www.facebook.com/pages/Dublin-Ireland/110769888951990', u'location': {u'latitude': 53.344037395687, u'longitude': -6.2632156999178}, u'is_community_page': True, u'checkins': 16734, u'id': u'110769888951990', u'is_published': True}
>>>
>>> newsfeed = c.get('/me/home')
>>> type(newsfeed)
<class 'facebewk.Node'>
>>> type(newsfeed['data'])
<type 'list'>
>>>
>>> me['significant_other']
{'__fetched__': False, u'name': u'Patricia Korcz', u'id': u'100000XXXXXXXXX'}
>>> me['significant_other']['hometown']['checkins']
222
>>>
>>> status_update = c.post(me, {'message': 'writing my blog post innit', 
...     'privacy': {'value': 'CUSTOM', 'networks': 1, 'friends': 'NO_FRIENDS'}})
>>> status_update
{'__fetched__': False, u'id': u'37300126_632748066014'}
>>> status_update['message']
u'writing my blog post innit'
>>> my_comment = c.post(status_update, {'message': 'blablabla'})
>>> c.like(my_comment)
True

The last few lines result in the following appearing on my profile:

Example status message made to facebook from the facebewk client

So, the Node is essentially a subclass of dict, with some magic thrown in to grab missing information on the fly. The only HTTP GET requests to facebook in the session above happen at the two c.get() calls and on the first access of a key missing from a partially-fetched node: me['hometown']['checkins'], me['significant_other']['hometown']['checkins'] and status_update['message']. Information retrieved from the graph is stored, overwriting the previously scant Node objects with more detailed ones. Posting to the graph works through the same interface: facebewk automatically figures out whether the target node is a user profile or page, in which case it makes a normal wall post, or a pre-existing post of some kind, in which case it posts a comment.
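For the curious, the lazy fetching boils down to something like the following - a minimal sketch of the idea rather than the actual facebewk source, assuming a Client.get() that returns a node's full data given its id:

class Node(dict):
    '''
    A dict that fetches its full graph data the first time a missing
    key is asked for.
    '''
    def __init__(self, client, data):
        super(Node, self).__init__(data)
        self.client = client
        self.setdefault('__fetched__', False)

    def __getitem__(self, key):
        # On a miss against a partially-fetched node, grab the whole
        # node from the graph, merge it in and mark it as fetched.
        if key not in self and not dict.__getitem__(self, '__fetched__'):
            self.update(self.client.get(dict.__getitem__(self, 'id')))
            self['__fetched__'] = True
        return dict.__getitem__(self, key)

A fully-fetched node behaves exactly like a dict, so a genuinely missing key still raises a KeyError as you'd expect.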

Authentication is unsupported and, at the moment, outside the scope of this project. As you can see above, I used an access token generated by the Graph API Explorer, which is probably the best way to get started with facebewk if you want to try it out.

There's a lot more that could be done here - making the privacy controls above configurable with keyword arguments, adding wrappers for grabbing friendlists and newsfeeds, and so on - but I'm publishing this now to gauge interest and see whether it's something that would really benefit developers in real-world scenarios.

If you have any thoughts on this project, if there are improvements or additional functionality you'd love to see, then please get in touch via email or the github issue tracker.

Source code available on github



The last few months at a glance (a summary)


Posted: 2012-09-10 20:31   |  More posts about drink

This blog has gone without updates for a good half year. Here is what you missed.

That will be all.



Due Process


Posted: 2012-02-28 17:51   |  More posts about film music

Another tune I've been searching for for years (this time since probably around 2000). I first heard it as the backing to Ronnie Creager's part in the Blind Skateboards section of Rodney Mullen VS Daewon Song Round II:

I happened to give this video a watch for a bit of a blast of nostalgia and thought I'd have another look for the tune. This site popped up in the last decade and gave me the full run-down of the tunes:

  • Blind #1 - instrumental mix by DJ Mino
  • Blind #2 - Gideon Choi - Afromaxx Music Production - Two Steps Beyond
  • Blind #3 - Josh Kasper - Visionaries - Say Where (instrumental)
  • Blind #4 - Corey Sheppard - Diamond D - When It Pours It Rains (instrumental)
  • Blind #5 - James Craig - Choclair - What It Takes (remix instrumental)
  • Blind #6 - Ronnie Creager - Lone Catalysts - Due Process (instrumental)

Incredibly happy to have finally tracked it down after all these years. Easily one of the best hip-hop productions I've ever heard.

Here's the normal version:

And, most importantly, the instrumental:



Albumart Replacer


Posted: 2012-02-04 18:33   |  More posts about art code computers music

The Albumart Replacer is a little script I wrote to automatically find higher-resolution copies of album covers. For listening to music, I use foobar2000 and although the wonderful foo_discogs automatically adds album art to my music, sometimes the artwork on discogs.com isn't the best.

Enter Albumart Replacer. If I'm listening to music and I notice the album art isn't up to scratch, e.g.:

I simply run the script: it sends the image's data to tineye.com and, if any higher-resolution copies of the same image are found, grabs the best quality one:

This is all done without any user interaction. Using foo_run, it's easy to set up a keybinding in foobar2000 so that it just takes one key press (simply pass the script a song's %PATH% variable from foobar and it'll do the rest.)
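In outline, the script does something like this - a rough sketch rather than the real thing, where the folder.jpg filename is an assumption and find_higher_res() is a hypothetical stand-in for the tineye.com lookup (the actual logic is in the repository):

import os
import urllib2
from StringIO import StringIO

from PIL import Image

def find_higher_res(image_data):
    '''
    Hypothetical stand-in for the tineye.com reverse image search:
    returns URLs of matching images, best quality first.
    '''
    raise NotImplementedError

def replace_cover(song_path):
    # foobar2000 hands us the song's %PATH%; the cover sits beside it
    cover_path = os.path.join(os.path.dirname(song_path), 'folder.jpg')
    current = Image.open(cover_path)
    for url in find_higher_res(open(cover_path, 'rb').read()):
        data = urllib2.urlopen(url).read()
        # only swap the file out if the copy found is actually larger
        if Image.open(StringIO(data)).size > current.size:
            with open(cover_path, 'wb') as f:
                f.write(data)
            break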

Source code, downloads and further details are available at the github repository.

Enjoy!



Is this the worst piece of music ever made?


Posted: 2012-01-29 12:56   |  More posts about experimental music

I'm fairly certain it is. I've nothing further to say about it.

The Sea of Providence by Richard Youngs

If Richard or Root Strata take exception to me hosting this music here, they need only contact me and it'll be taken down.



@grammer_man who the fuck is this nigga and why u comin at me like that #Hoeassnigga


Posted: 2012-01-09 20:06   |  More posts about code computers funny idiots internet oddities

Had a spare hour last Thursday and decided to write a little twitter bot. There he is above. His name is Grammer_Man and he corrects other twitter users' misspellings, using data scraped from these Wikipedia pages.

Responses have been pouring in already, some agitated, some confused, but most positive -- which was a pleasant surprise. In any event, the minimal amount of effort in coding has paid off many times over in entertainment.

You can see who's responding at the moment by searching for @grammer_man, and also by checking his list of favourites.

Here is the (somewhat slapdash) code that powers our fearless spelling Nazi:

grabber.py

This module grabs the spelling data from Wikipedia.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import pickle

import requests
from BeautifulSoup import BeautifulSoup

def grab(letter):
    '''
    Grabs spellings from wikipedia
    '''
    url = 'http://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/%s' % letter
    html = requests.get(url).content
    soup = BeautifulSoup(html)
    bullets = soup.findAll('li')
    retval = {}
    for bullet in bullets:
        if 'plainlinks' in repr(bullet):
            values = bullet.text.split('(')
            if len(values) == 2:
                retval[values[0]] = values[1][:-1] # shave off the ) at end
    return retval

def get_spellings():
    '''
    Returns a dictionary of {false: correct} spellings
    '''
    if not os.path.exists('words.pkl'):
        retval = {}
        for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
            print 'Getting typos - %s' % c
            retval.update(grab(c))
        print 'Dumping...'
        f = open('words.pkl', 'w')
        pickle.dump(retval, f)
        f.close()
        return retval
    else:
        f = open('words.pkl', 'r')
        retval = pickle.load(f)
        f.close()
        return retval

if __name__ == '__main__':
    get_spellings()

bot.py

The bot. It selects misspellings at random, searches twitter for them and responds, taking breaks between tweets and longer breaks every few hours.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import random
import time
import pickle

import twitter

from grabber import get_spellings

API = twitter.Api()
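# NOTE: constructed without credentials for brevity; search worked
# unauthenticated back then, but PostUpdate() only works once Api() is
# given OAuth details (consumer key/secret and access token key/secret).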

MESSAGES = u'''
Hey $USERNAME, didn't you mean $CORRECT there?
#
# All messages stored in here, one per line.
#
'''.split('\n')

def compose_message(twitter_post, mistake, correct):
    '''
    Choose a message from MESSAGES at random, substitute fields to personalise it and 
    check if it exceeds the twitter message limit. Try this 100 times before failing.
    '''
    retries = 0
    while retries < 100:
        retries += 1
        message = random.choice(MESSAGES)
        message = message.replace('$USERNAME', '@%s' % twitter_post.user.screen_name)
        message = message.replace('$MISTAKE', '"%s"' % mistake).replace('$CORRECT', '"%s"' % correct)
        # skip the blank lines and '#' placeholder lines in MESSAGES
        if message and not message.startswith('#') and len(message) < 141:
            return message
    return None

def correct_spelling(twitter_post, mistake, correct):
    '''
    Correct someone's spelling in a twitter_post
    '''
    print u'Correcting @%s for using %s...' %(twitter_post.user.screen_name, 
                                            mistake)
    message = compose_message(twitter_post, mistake, correct)
    if not message:
        print u'All messages were too long... Aborting...'
        return False
    else:
        try:
            API.PostUpdate(message, in_reply_to_status_id=twitter_post.id)
        except Exception, e:
            print 'Failed to submit tweet (%s).' % e
            return False
        return True

def search(word):
    '''
    Search twitter for uses of a word, return one if it's been used recently.
    Otherwise return None.

    TODO: Add time awareness.
    '''
    print 'Searching for uses of %s...' % word
    results = API.GetSearch(word)
    if results:
        for result in results:
            if not check_if_done(result.id) and\
                not result.user.screen_name == 'grammer_man' and word in result.text:
                return result
    return None

def check_if_done(id):
    '''
    Checks if a tweet has already been responded to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
        if id in done:
            return True
    return False

def update_done(id):
    '''
    Updates a list of tweets that've been replied to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
    else:
        done = []

    done.append(id)

    f = open('done.pkl', 'w')
    pickle.dump(done, f)
    f.close()

def main():
    '''
    Main program flow
    '''
    words = get_spellings()
    counter = 0 
    while True:
        word = random.choice(words.keys())
        post = search(word)
        if counter > 100:
            rand_time = random.randint(120*60, 240*60)
            print 'Done %s tweets, sleeping for %s minutes' % (counter, rand_time/60)
            time.sleep(rand_time)
            counter = 0
        # TODO: PROPERLY PRUNE THE MISTAKES/CORRECTIONS FROM WIKIPEDIA AND REMOVE THIS:
        if not u',' in word + words[word] and not u';' in word + words[word]:
            if post:
                result = correct_spelling(post, word, words[word])
                if result:
                    counter += 1
                    print '#%s Done' % counter
                    update_done(post.id)
                    time.sleep(random.randint(300,500))

if __name__ == '__main__':
    main()

Grammer_Man uses the following libraries: requests, BeautifulSoup and python-twitter.



The Chaos


Posted: 2012-01-06 15:02   |  More posts about art funny linguistics poetry words

A poem by Gerard Nolst Trenité demonstrating the abundant irregularities of English spelling and pronunciation. More info here. 

Read more...



Cowboys and Anthrax


Posted: 2011-12-12 21:56   |  More posts about film funny politics

Great stuff here from the Bad Lip Reading Youtube channel. Check it out for plenty more.

Thanks to Hugh for bringing this to my attention.



A Winged Victory For The Sullen Live


Posted: 2011-11-24 17:42   |  More posts about art music

From their Facebook page:

for those of you who could not make to one of our US shows...our videographer friend Joshua Smelser filmed the entire Los Angles show. Enjoy!

You can watch the entire concert in high-definition below.

EDIT: Although the sound quality is pretty dreadful...



Slow Walkers


Posted: 2011-11-23 00:12   |  More posts about art film music

"Wake" from "Slow Walkers", a Grouper / Lawrence English collaboration, to be released in 2012. The video is something else -- "meditations for the zombie as cultural phenomena".

Thanks to Dennis for bringing this to my attention. He also makes great ambient music himself -- if you like this kind of thing, you should check his music out.
