Code talks! This is a simple web crawler that fetches a set of URLs concurrently:
urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        "https://wiki.secondlife.com/w/images/secondlife.jpg",
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

import evy
from evy.patched import urllib2

def fetch(url):
    return urllib2.urlopen(url).read()

pool = evy.GreenPool()
for body in pool.imap(fetch, urls):
    print "got body", len(body)
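For comparison only, the same fan-out pattern can be sketched with the standard library's thread-backed pool, which exposes a similar imap interface. This is not Evy (green threads are much lighter than OS threads), and fetch here is a stub returning a fake body rather than making a real HTTP call:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool with an imap method

def fetch(url):
    # Stand-in for a real HTTP fetch: returns a fake body, no network I/O.
    return "body-for-" + url

urls = ["http://example.com/a", "http://example.com/b"]

pool = Pool(2)
# imap yields results in input order as the concurrently running workers finish.
bodies = list(pool.imap(fetch, urls))
pool.close()
pool.join()

for body in bodies:
    print("got body %d" % len(body))
```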
Evy is made available under the terms of the open-source MIT license.