
Search Engines – Using the Bloom Filter Algorithm for URL Deduplication in a Scrapy Crawler

 

Host environment: Ubuntu 13.04

Python version: 2.7.4

Please credit the source when reposting: http://blog.yanming8.cn/archives/135

1. Installation

sudo pip install pybloomfiltermmap

Alternatively, grab the latest source code from GitHub and build and install it yourself:

sudo python setup.py install

2. Usage

class pybloomfilter.BloomFilter(capacity : int, error_rate : float, filename : string)

Create a new BloomFilter object with a given capacity and error_rate. Note that we do not check capacity. This is important, because I want to be able to support logical OR and AND (see below). The capacity and error_rate then together serve as a contract: if you add fewer than capacity items, the Bloom filter will have an error rate below error_rate.

NEW: If you specify None for the filename, then the Bloom filter will be backed by malloc'd memory, rather than by a file.
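
For concreteness, here is a minimal sketch of constructing both a file-backed and a memory-backed filter. The capacity, error rate, and file name below are arbitrary example values chosen for this post, not recommendations from the library:

from pybloomfilter import BloomFilter

# File-backed filter: expect up to 10 million URLs with a 0.1% false-positive
# rate (capacity and error_rate here are illustrative values only).
url_seen = BloomFilter(10000000, 0.001, 'url_seen.bloom')

# Memory-backed filter: pass None as the filename and the filter lives in
# malloc'd memory instead of an mmap'd file.
temp_seen = BloomFilter(1000000, 0.01, None)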

BloomFilter.add(item) → Boolean

Add the item to the bloom filter.

  • item – a hashable object
  • Returns: Boolean (True if the item was already in the filter; see the sketch below)
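
The Boolean returned by add() is exactly what a duplicate filter needs: one call both records the URL fingerprint and reports whether it had been seen before. As an illustration (not code from the original post), the sketch below wires the filter into Scrapy as a custom dupefilter. The module paths assume Scrapy 1.x (scrapy.dupefilters and scrapy.utils.request; older releases exposed scrapy.dupefilter instead), and the BLOOMFILTER_PATH setting name is hypothetical.

from pybloomfilter import BloomFilter
from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint

class BloomFilterDupeFilter(BaseDupeFilter):
    """Drop requests whose fingerprint is already in the Bloom filter."""

    def __init__(self, path):
        # capacity and error_rate are illustrative values; tune them to the
        # number of URLs you expect to crawl
        self.fingerprints = BloomFilter(10000000, 0.001, path)

    @classmethod
    def from_settings(cls, settings):
        # BLOOMFILTER_PATH is a hypothetical setting name used for this sketch
        return cls(settings.get('BLOOMFILTER_PATH', 'requests.bloom'))

    def request_seen(self, request):
        fp = request_fingerprint(request)
        # add() returns True when the fingerprint was already present, which
        # is exactly the value request_seen() must report
        return self.fingerprints.add(fp)

Enable it from settings.py with DUPEFILTER_CLASS = 'myproject.dupefilters.BloomFilterDupeFilter', adjusting the module path to wherever you save the class.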