facebewk


Posted: 2012-09-30 17:30   |  More posts about client code facebewk facebook graph api internet python

facebewk ['feɪsbuk] is a Python wrapper for the facebook Graph API. A while ago, there was quite a commotion online when official repositories for Python and C# API clients were removed by facebook without any forewarning. These clients had become neglected and were no longer accurate, and so facebook wanted to stop supporting them. Since then, the hole left in the Python ecosystem doesn't seem to have been plugged - most people seem to be using straight HTTP requests to the API.
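For comparison, the "straight HTTP requests" approach typically looks something like the following. This is only a rough sketch using the requests library - the endpoint and parameters are the standard Graph API conventions, and none of it is facebewk code:

# Sketch of talking to the Graph API directly over HTTP, without a client library.
import json

import requests

ACCESS_TOKEN = 'FACEBOOK_ACCESS_TOKEN'

# One explicit request for the node you want...
me = json.loads(requests.get('https://graph.facebook.com/me',
                             params={'access_token': ACCESS_TOKEN}).content)

# ...and another explicit request for every connected node you follow up on.
hometown = json.loads(requests.get('https://graph.facebook.com/%s' % me['hometown']['id'],
                                   params={'access_token': ACCESS_TOKEN}).content)
print hometown['checkins']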

I decided to write facebewk - a simple little API client that's designed to minimise HTTP requests and encapsulate data nicely.

Facebook have put an awful lot of work into making the Graph API as friendly and robust as it is. App developers have already spent their time familiarising themselves with it. So why mess with a good thing? Instead of coming up with a new API for developers to learn, facebewk provides a developer-friendly Node object, representative of exactly what information facebook returns when queried about a certain node on the social graph.

Some important points:

  • These Node objects are generated recursively. Therefore, a Node object representing my own facebook profile will, in turn, contain subnodes representing my hometown of Dublin and my current location, for example.
  • Information contained in subnodes may not be complete, but this is invisible to the developer: facebewk takes care of grabbing a subnode's full information if a key is accessed which doesn't currently exist in the subnode.

This is all very difficult to describe in prose, so instead, here is an example session from a Python interactive shell to show you exactly what facebewk is about:

>>> from facebewk import Client
>>> c = Client('FACEBOOK_ACCESS_TOKEN')
>>> me = c.get('me')
>>> type(me)
<class 'facebewk.Node'>
>>> me['name']
u'Aengus Walton'
>>> type(me['hometown'])
<class 'facebewk.Node'>
>>> me['hometown']
{'__fetched__': False, u'id': u'110769XXXXXXXXX', u'name': u'Dublin, Ireland'}
>>> me['hometown']['checkins']
16734
>>> me['hometown']
{u'category': u'City', u'likes': 146053, u'description': u'<p><b>Dublin</b> is the capital and most populous city of ........', '__fetched__': True, u'talking_about_count': 115999, u'name': u'Dublin, Ireland', u'link': u'http://www.facebook.com/pages/Dublin-Ireland/110769888951990', u'location': {u'latitude': 53.344037395687, u'longitude': -6.2632156999178}, u'is_community_page': True, u'checkins': 16734, u'id': u'110769888951990', u'is_published': True}
>>>
>>> newsfeed = c.get('/me/home')
>>> type(newsfeed)
<class 'facebewk.Node'>
>>> type(newsfeed['data'])
<type 'list'>
>>>
>>> me['significant_other']
{'__fetched__': False, u'name': u'Patricia Korcz', u'id': u'100000XXXXXXXXX'}
>>> me['significant_other']['hometown']['checkins']
222
>>>
>>> status_update = c.post(me, {'message': 'writing my blog post innit', 
...     'privacy': {'value': 'CUSTOM', 'networks': 1, 'friends': 'NO_FRIENDS'}})
>>> status_update
{'__fetched__': False, u'id': u'37300126_632748066014'}
>>> status_update['message']
u'writing my blog post innit'
>>> my_comment = c.post(status_update, {'message': 'blablabla'})
>>> c.like(my_comment)
True

The last few lines result in the following appearing on my profile:

[Image: Example status message made to facebook from the facebewk client]

So, the Node is essentially a subclass of dict, with some magic thrown in to grab missing information on the fly. HTTP GET requests only go out to facebook when c.get() is called explicitly, or when a key is accessed that doesn't yet exist in a stub node - as with me['hometown']['checkins'], me['significant_other']['hometown']['checkins'] and status_update['message'] above. Information retrieved from the graph is kept, overwriting the previously scant Node objects with more detailed ones. Posting to the graph is possible with the same API as is defined here. facebewk also automatically figures out whether the target node to be posted to is a user profile or page, in which case it posts a normal wall post, or a pre-existing post of some kind, in which case it posts a comment.
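For the curious, here's a rough sketch of how that dict-subclass magic could look. It's an illustrative approximation rather than the actual facebewk source - the __fetched__ flag and Client.get() are as in the session above, but the rest (and the recursive wrapping of sub-dicts, which is omitted here) is assumed:

# Rough sketch only - not the real facebewk internals.
class Node(dict):

    def __init__(self, client, data):
        super(Node, self).__init__(data)
        self.client = client
        # Stub subnodes arrive with only a couple of keys, e.g. 'id' and 'name'.
        self.setdefault('__fetched__', False)

    def __getitem__(self, key):
        if key not in self and not dict.__getitem__(self, '__fetched__'):
            # A missing key on a stub node costs one GET to the graph, after which
            # the scant stub is overwritten with the full, detailed data.
            self.update(self.client.get(dict.__getitem__(self, 'id')))
            self['__fetched__'] = True
        return dict.__getitem__(self, key)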

Authentication is unsupported and, at the moment, outside the scope of this project. As you can see above, I used an API key generated by the Graph API Explorer, which is probably the best way to get started with facebewk if you want to try it out.

There's a lot more that could be done here - making the privacy controls above configurable with keyword arguments, adding wrappers for grabbing friend lists and newsfeeds, and so on - but I'm publishing this now to gauge interest and see whether it's something that would really benefit developers in real-world scenarios.

If you have any thoughts on this project, if there are improvements or additional functionality you'd love to see, then please get in touch via email or the github issue tracker.

Source code available on github



@grammer_man who the fuck is this nigga and why u comin at me like that #Hoeassnigga


Posted: 2012-01-09 20:06   |  More posts about code computers funny idiots internet oddities

Had a spare hour last Thursday and decided to write a little twitter bot. There he is above. His name is Grammer_Man and he corrects other twitter users' misspellings, using data scraped from these Wikipedia pages.

Responses have been pouring in already, some agitated, some confused, but most positive -- which was a pleasant surprise. In any event, the minimal amount of effort in coding has paid off many times over in entertainment.

You can see who's responding at the moment by searching for @grammer_man, and also by checking his list of favourites.

Here is the (somewhat slapdash) code that powers our fearless spelling Nazi:

grabber.py

This module grabs the spelling data from Wikipedia.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import pickle

import requests
from BeautifulSoup import BeautifulSoup

def grab(letter):
    '''
    Grabs spellings from wikipedia
    '''
    url = 'http://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/%s' % letter
    html = requests.get(url).content
    soup = BeautifulSoup(html)
    bullets = soup.findAll('li')
    retval = {}
    for bullet in bullets:
        if 'plainlinks' in repr(bullet):
            values = bullet.text.split('(')
            if len(values) == 2:
                retval[values[0]] = values[1][:-1] # shave off the ) at end
    return retval

def get_spellings():
    '''
    Returns a dictionary of {false: correct} spellings
    '''
    if not os.path.exists('words.pkl'):
        retval = {}
        for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
            print 'Getting typos - %s' % c
            retval.update(grab(c))
        print 'Dumping...'
        f = open('words.pkl', 'w')
        pickle.dump(retval, f)
        f.close()
        return retval
    else:
        f = open('words.pkl', 'r')
        retval = pickle.load(f)
        f.close()
        return retval

if __name__ == '__main__':
    get_spellings()

bot.py

The bot. Selects misspellings at random, searches for them, responds to them, while also taking breaks between tweets and longer breaks every few hours.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import random
import time
import pickle

import twitter

from grabber import get_spellings

API = twitter.Api()
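# NOTE: authentication credentials are omitted above; PostUpdate() further down
# needs an authenticated twitter.Api instance in order to actually submit tweets.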

MESSAGES = u'''
Hey $USERNAME, didn't you mean $CORRECT there?
#
# All messages stored in here, one per line.
#
'''.split('\n')

def compose_message(twitter_post, mistake, correct):
    '''
    Choose a message from MESSAGES at random, substitute fields to personalise it and 
    check if it exceeds the twitter message limit. Try this 100 times before failing.
    '''
    retries = 0
    while retries < 100:
        message = MESSAGES[random.randint(0, len(MESSAGES) - 1)]
        message = message.replace('$USERNAME', '@%s' % twitter_post.user.screen_name)
        message = message.replace('$MISTAKE', '"%s"' % mistake).replace('$CORRECT', '"%s"' % correct)
        if message and len(message) < 141:
            return message
        retries += 1
    return None

def correct_spelling(twitter_post, mistake, correct):
    '''
    Correct someone's spelling in a twitter_post
    '''
    print u'Correcting @%s for using %s...' %(twitter_post.user.screen_name, 
                                            mistake)
    message = compose_message(twitter_post, mistake, correct)
    if not message:
        print u'All messages were too long... Aborting...'
        return False
    else:
        try:
            API.PostUpdate(message, in_reply_to_status_id=twitter_post.id)
        except Exception, e:
            print 'Failed to submit tweet (%s).' % e
            return False
        return True

def search(word):
    '''
    Search twitter for uses of a word, return one if it's been used recently.
    Otherwise return None.

    TODO: Add time awareness.
    '''
    print 'Searching for uses of %s...' % word
    results = API.GetSearch(word)
    if results:
        for result in results:
            if not check_if_done(result.id) and\
                not result.user.screen_name == 'grammer_man' and word in result.text:
                return result
    return None

def check_if_done(id):
    '''
    Checks if a tweet has already been responded to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
        if id in done:
            return True
    return False

def update_done(id):
    '''
    Updates a list of tweets that've been replied to
    '''
    if os.path.exists('done.pkl'):
        f = open('done.pkl', 'r')
        done = pickle.load(f)
        f.close()
    else:
        done = []

    done.append(id)

    f = open('done.pkl', 'w')
    pickle.dump(done, f)
    f.close()

def main():
    '''
    Main program flow
    '''
    words = get_spellings()
    counter = 0 
    while True:
        word = random.choice(words.keys())
        post = search(word)
        if counter > 100:
            rand_time = random.randint(120*60, 240*60)
            print 'Done %s tweets, sleeping for %s minutes' % (counter, rand_time/60)
            time.sleep(rand_time)
            counter = 0
        # TODO: PROPERLY PRUNE THE MISTAKES/CORRECTIONS FROM WIKIPEDIA AND REMOVE THIS:
        if not u',' in word + words[word] and not u';' in word + words[word]:
            if post:
                result = correct_spelling(post, word, words[word])
                if result:
                    counter += 1
                    print '#%s Done' % counter
                    update_done(post.id)
                    time.sleep(random.randint(300,500))

if __name__ == '__main__':
    main()

Grammer_Man uses the following libraries:

  • python-twitter
  • requests
  • BeautifulSoup



But what does it mean?


Posted: 2010-10-22 15:15   |  More posts about art funny internet oddities photography

500 / 200.

From a collective of people including the man behind King Lud's Revenge.



The Videogame Music Preservation Foundation


Posted: 2010-08-20 18:24   |  More posts about art computer games computers downloads electronic internet music

A friend came across this website a few weeks ago, and I was very excited about it - an archive of plenty of video game music (mainly for DOS, which is what I grew up with), all recorded properly in order to maximise the nostalgia, and made available in ogg format.

I contacted the guy who runs it about setting up a torrent of the entire archive, and he very kindly obliged. You can get the entire collection here (~4.4GB in total). Enjoy!



Why the future doesn't need us.


Posted: 2010-07-24 01:21   |  More posts about computer science computers internet robotics science

I finally managed to get around to reading Bill Joy's article Why the future doesn't need us the other day while waiting to board a plane. Bill Joy is a renowned computer scientist who co-founded Sun Microsystems and authored the popular UNIX text editor vi. The article is concerned with the ever increasing speed of "progress" in fields of new technology (primarily robotics, nanotechnology and genetic engineering) which Joy views with apprehension, arguing that the products of these fields will eventually render mankind obsolete and lead to our self-destruction.

There's no point trying to quote it, so instead you can read the article here, read more about Bill Joy here, or read responses and criticism of the article here.



Big Picture Cataloguer: An update


Posted: 2010-05-20 20:11   |  More posts about art code computers internet photography

In just over a week since I released the Big Picture Cataloguer, there's been a surprising amount of interest and enthusiasm about it. Since I still haven't gotten binary versions of the program for OS X and Linux up (I've no access to an OS X computer, and getting the required libraries installed on Linux has proved to be quite difficult), I've decided to relent and share the source code of the cataloguer under a Creative Commons license.

The script makes use of pyexiv2 - the 0.2 branch - for metadata editing, mechanize for grabbing pages and submitting error reports, the very handy unaccented_map() class (included) for unicode trickery and of course the wonderful XML parser, BeautifulSoup.
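To give a rough idea of how those pieces fit together, here's a heavily simplified sketch - not the actual cataloguer source. The gallery URL and the 'bpImage'/'bpCaption' class names are placeholders, but the mechanize, BeautifulSoup and pyexiv2 (0.2) calls are the real APIs:

# Simplified illustration only - the real cataloguer handles pagination, retries,
# error reporting and The Big Picture's ever-changing markup.
import urllib

import mechanize
import pyexiv2
from BeautifulSoup import BeautifulSoup

br = mechanize.Browser()
html = br.open('http://www.boston.com/bigpicture/some_gallery.html').read()  # placeholder URL
soup = BeautifulSoup(html)

# 'bpImage' and 'bpCaption' are stand-ins for whatever classes the galleries actually use.
images = soup.findAll('img', {'class': 'bpImage'})
captions = soup.findAll('div', {'class': 'bpCaption'})

for img, caption in zip(images, captions):
    filename = img['src'].split('/')[-1]
    urllib.urlretrieve(img['src'], filename)

    # Write the caption into the photo's IPTC metadata with the pyexiv2 0.2 API.
    # (The real script folds captions to plain ASCII first, via unaccented_map().)
    metadata = pyexiv2.ImageMetadata(filename)
    metadata.read()
    metadata['Iptc.Application2.Caption'] = [caption.text]
    metadata.write()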

Naturally, it's available from the Big Picture Cataloguer's page in the Code section of this site.

Given how much The Big Picture galleries' HTML format has subtly changed over time, and the fact I wrote this in a rush, it's quite messy, but it does the job.

Today's update, version 0.3, adds an optional "quiet mode" so that users can schedule the program to run frequently. Enjoy!



Boston.com Big Picture Cataloguer


Posted: 2010-05-12 20:02   |  More posts about art code computers internet media photography politics

I'm a big fan of The Boston Globe's photojournalism series, The Big Picture. So much so, in fact, that I decided to dedicate a few hours this week to building a program that would not just download the entire series, but add caption metadata to each photo, since many are informative and look very nice in Picasa, for example.

Now, I'm happy that the application is stable enough to release to the world in the Code section of my website.

Since I don't want people to be hammering The Boston Globe's servers, I've made the script wait a fraction of a second between each request, and since I don't want people to be able to disable this functionality, unfortunately only binaries will be available for the time being. Windows binaries are available already, OS X and Linux binaries to come in a few days.

Indeed, if those at The Boston Globe have a problem with how the program operates, they need only contact me and we can come to an agreement, but I've worked hard to make sure that the program contacts their servers as little as possible.

Bug reports will be automatically submitted through this website too, but if you have any unforeseen problems (e.g. a crash or a hang), email me with as much information as possible (text describing the "Traceback" printed before the crash, what album/photo the program was working on, etc).

What can you do once you've got the entire 2GB collection of photos downloaded? Well, you can simply look through them at your own pace and comfort, or indeed choose to create a montage screensaver from them (although be warned - a screensaver that fades from a beautiful Antarctic landscape to a bloody photo of a victim of the war in Afghanistan might not be exactly what you had in mind.)

But in any event, hopefully it'll be of some use. Enjoy!



New discoveries


Posted: 2010-02-25 16:21   |  More posts about architecture computers design internet oddities

This was posted on reddit today. I agree entirely with the poster's sentiment: more often than not, interesting links on reddit lead to a single page rather than to the gateway of a whole website of interesting stuff. When they are links to a website's front page, it's generally a very narrow, single-purpose website that is quickly forgotten about. Hopefully, the poster's subreddit -- apparently yet to be made -- will be a success.

In any event, having gone through the blog-post he had linked I decided to share some of my new discoveries here myself:

  • Building Maker: A Google app I was unaware of, which lets you add the 3D element to Google Maps. For all bored architects out there (since this is just what they want to be doing in their time off.)
  • Ikea Hacker: Neat stuff done with bog-standard Ikea furniture.
  • Strange Maps: A blog of, well, old and interesting maps. I don't know if I'd go as far as to say strange...
  • Newseum: The front pages of newspapers from 78 countries around the world.
  • Cooking For Engineers: This one reminded me of my father, a pragmatist who insists on weighing pasta before cooking it, in order to make sure he'll be doling out the correct amount. Nothing wrong with approaching cooking as a science, as opposed to an art!
  • GetHuman.com: An excellent idea for a website. This one tells you which keys you need to press in order to get an actual human operator on the line when calling a large company, saving you the time of listening to and trying to interact with a computerised system.
  • PDFGeni.com: Another great idea -- a repository of PDF documents such as old technical manuals, academic texts, and so on.

I feel I must write a disclaimer, saying I haven't used or read these sites extensively, having just discovered them a few hours ago, but from first impressions they do look like they deserve a bookmark.



Goodbye Electronica


Posted: 2010-02-03 01:26   |  More posts about art downloads electronic internet music

Came across a link to a song, "Goodbye Electronica" by Dave Graham, on the electronic music board xltronic tonight. Really, really enjoyed it - lovely guitar work, atmosphere and lyrics. He's allowed me to share it with you here, saying it's a "freebie", so give it a listen and pass it on to anyone you think might enjoy it!

You can download it locally here:

http://ventolin.org/wp-content/uploads/2010/02/GoodbyeElectronica.mp3



Rette deine Freiheit


Posted: 2010-01-05 22:19   |  More posts about art censorship computers design digital rights germany internet politics words

At the moment in Germany, there is fierce opposition growing against plans by the CDU to implement internet censorship under the guise of attacking the spread of child pornography. A movement championed by the German Piratenpartei has dubbed ex-minister for family affairs Ursula von der Leyen "Zensursula", a portmanteau of Zensur (censorship) and Ursula, and is referring to the CDU's plans as Stasi 2.0, a nod to the brutal secret police which operated in former East Germany.

Not only is there to be a secret list of blocked websites, such as exists in Australia, but the government is pushing for more data to be collected from citizens and retained for a long period of time.

A video which caught my attention a while back was entitled Du bist Terrorist (You are a terrorist). With soft ambient music and deceptively pleasant imagery, the two-minute video parodies the Du bist Deutschland ad campaign: a soft, reassuring voice informs you of what the German government has in store for you in terms of heavier and more invasive surveillance -- because You are a terrorist.

Earlier this week I found that the same people had created a new video in the same vein, entitled Rette deine Freiheit (Save your freedom). The video focuses much more on the coming internet censorship in Germany than just data retention and physical surveillance.

Since there was no English translation available, I decided to translate it and re-upload it to YouTube. The result is below:

The translation is by no means perfect, but at least it's something. There were a few tricky problems with it:

  • Einfach wegschauen: Literally "simply look away", the video describes this as the method tried-and-tested by members of families with a history of domestic abuse. I was going to translate it as "simply look the other way" in its first instance, since this is the closest phrase in English that pertains to such a situation. However, this doesn't exactly capture the double-meaning employed in the video, since it implies wilful ignorance which isn't quite applicable to what the government is doing, so I decided to settle on "simply block it out". I'm not sure I'm happy with this, however. Suggestions?
  • In the sentence, "In Prävention, Therapie und Personal investiert hätte dies vielen Opfern helfen können: Reinste Verschwendung" (roughly: "Invested in prevention, therapy and personnel, this could have helped many victims: pure waste"), the meaning that is sarcastically implied is that the money that could be invested in preventative measures, therapy and personnel is much better spent on building an internet block. I don't think I captured this very well.

In any event, there's likely to be an official translation soon (I just saw an "Englisch (bald verfügbar)" -- "English (available soon)" -- notice at the top of the official page now; perhaps my email asking for a transcript of the video got them in a rush) and these issues will cease to be.

One last thing -- if you are interested in learning more about the situation in Germany regarding internet freedom and the child pornography scare, I'd urge you to visit not only the links above, but also this shocking but morbidly fascinating account of one techie's work in the murkiest of subcultures. Thankfully, he doesn't go into detail about actual child abuse, but instead details exactly how child pornography rings work, using the internet and computers.

Put simply, it proves what anyone with a clue already knows: current proposals for internet censorship will have absolutely no impact whatsoever on paedophiles and child pornographers and will only serve to infringe the rights of normal, law-abiding internet users.

Thanks to Áine and Patricia for help with one or two minor parts of the translation.
