Evy Documentation

Code talks! This is a simple web crawler that fetches several URLs concurrently:

import evy
from evy.patched import urllib2

urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        "https://wiki.secondlife.com/w/images/secondlife.jpg",
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

def fetch(url):
    # Runs inside a green thread; the patched urllib2 yields to the hub
    # while waiting on the network instead of blocking the process.
    return urllib2.urlopen(url).read()

pool = evy.GreenPool()
for body in pool.imap(fetch, urls):
    print "got body", len(body)
