Python crawling with Scrapy: the Crawler object, extensions (Extensions), and signals (Signals)

Copyright notice: this is an original article by the author. Reposting is welcome; please credit the source: https://blog.csdn.net/mouday/article/details/84024767

First, get to know Scrapy's Crawler object hierarchy; a short usage sketch follows the attribute list below.

The Crawler object

  • settings: the crawler's settings manager

    • set(name, value, priority='project')
    • setdict(values, priority='project')
    • setmodule(module, priority='project')
    • get(name, default=None)
    • getbool(name, default=False)
    • getint(name, default=0)
    • getfloat(name, default=0.0)
    • getlist(name, default=None)
    • getdict(name, default=None)
    • copy() # make a deep copy of the current settings
    • freeze()
    • frozencopy()
  • signals: the crawler's signal manager

    • connect(receiver, signal)
    • send_catch_log(signal, **kwargs)
    • send_catch_log_deferred(signal, **kwargs)
    • disconnect(receiver, signal)
    • disconnect_all(signal)
  • stats: the crawler's stats collector

    • get_value(key, default=None)
    • get_stats()
    • set_value(key, value)
    • set_stats(stats)
    • inc_value(key, count=1, start=0)
    • max_value(key, value)
    • min_value(key, value)
    • clear_stats()
    • open_spider(spider)
    • close_spider(spider)
  • extensions: the extension manager, which keeps track of all enabled extensions

  • engine: the execution engine, which coordinates the crawler's core logic across the scheduler, the downloader, and the spiders

  • spider: the spider currently being crawled; an instance of the spider class supplied when the crawler was created

  • crawl(*args, **kwargs): instantiates the spider class with the given arguments and starts the execution engine, setting the crawler running
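
To make these attributes concrete, here is a minimal sketch, assuming a hypothetical spider named demo_spider (not part of the original article), that reads a couple of settings through the settings manager and records a value in the stats collector:

from scrapy import Spider


class DemoSpider(Spider):
    # hypothetical spider, only to illustrate the Crawler attributes listed above
    name = "demo_spider"
    start_urls = ["https://example.com/"]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # settings: the crawler's settings manager
        delay = crawler.settings.getfloat("DOWNLOAD_DELAY", 0.0)
        obey_robots = crawler.settings.getbool("ROBOTSTXT_OBEY", True)
        spider.logger.info("DOWNLOAD_DELAY=%s, ROBOTSTXT_OBEY=%s", delay, obey_robots)
        # stats: the crawler's stats collector
        crawler.stats.set_value("demo/initialized", True)
        return spider

    def parse(self, response):
        # the running crawler is also reachable as self.crawler inside the spider
        self.crawler.stats.inc_value("demo/pages_seen")
        yield {"url": response.url}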

Scrapy's built-in signals

engine_started   # the engine has started
engine_stopped   # the engine has stopped
spider_opened    # a spider has been opened
spider_idle      # a spider has entered the idle state
spider_closed    # a spider has been closed
spider_error     # a spider callback raised an error
request_scheduled    # the engine scheduled a Request
request_dropped      # the engine dropped a Request
response_received    # the engine received a new Response from the downloader
response_downloaded  # an HTTPResponse has finished downloading
item_scraped     # an item passed through all Item Pipelines without being dropped
item_dropped     # an item was dropped with DropItem
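
Besides these built-in signals, the same signal manager can dispatch custom ones via send_catch_log. A minimal sketch, assuming a hypothetical item_exported signal and ExportNotifier extension (neither belongs to Scrapy or to the original article):

from scrapy import signals


# a custom signal is just a unique sentinel object
item_exported = object()


class ExportNotifier(object):

    def __init__(self, crawler):
        self.crawler = crawler
        # listen to the built-in item_scraped signal
        crawler.signals.connect(self.item_scraped, signal=signals.item_scraped)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def item_scraped(self, item, response, spider):
        # send_catch_log dispatches the custom signal and logs (rather than
        # raises) any exception thrown by a receiver
        self.crawler.signals.send_catch_log(item_exported, item=item, spider=spider)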

The following code implements a simple extension plus signal handlers; it counts how many items were scraped and how many were dropped.
myextension.py


from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenCloseLogging(object):

    def __init__(self):
        self.items_scraped = 0
        self.items_dropped = 0

    @classmethod
    def from_crawler(cls, crawler):
        # Read the settings to check whether the extension is enabled; if not,
        # raise NotConfigured so the extension stays disabled
        if not crawler.settings.getbool('MY_EXTENSION'):
            raise NotConfigured

        # instantiate the extension object
        ext = cls()

        # register the signal handlers
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(ext.item_dropped, signal=signals.item_dropped)

        return ext

    # the four custom signal handlers
    def spider_opened(self, spider):
        spider.log(">>> opened spider %s" % spider.name)

    def spider_closed(self, spider, reason):
        spider.log(">>> closed spider %s" % spider.name)
        spider.log(">>> scraped %d items" % self.items_scraped)
        spider.log(">>> dropped %d items" % self.items_dropped)
        # print the stats collector's data
        print(spider.crawler.stats.get_stats())

    def item_scraped(self, item, response, spider):
        # fired each time an item passes through all pipelines without being dropped
        self.items_scraped += 1

    def item_dropped(self, item, response, exception, spider):
        # fired each time a pipeline raises DropItem
        self.items_dropped += 1

Enable the extension in settings.py

EXTENSIONS = {
    "scrapys.myextension.SpiderOpenCloseLogging": 100
}

MY_EXTENSION = True

baidu_spider.py


from scrapy import Spider, cmdline
from scrapy_demo.scrapys.items import ScrapysItem


class BaiduSpider(Spider):
    name = "baidu_spider"

    start_urls = [
        "https://www.baidu.com/"
    ]

    def parse(self, response):
        self.crawler.stats.set_value("key", "value")
        return ScrapysItem()


if __name__ == '__main__':
    cmdline.execute("scrapy crawl baidu_spider".split())

Define the item class in items.py

import scrapy

class ScrapysItem(scrapy.Item):
    pass
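
Note that the stats dump below records one dropped item (item_dropped_count: 1), which implies that an item pipeline somewhere in the project raises DropItem. The original post does not show that pipeline; a minimal sketch of what it might look like (the class name and drop condition are assumptions) is:

from scrapy.exceptions import DropItem


class DropEmptyItemPipeline(object):
    # hypothetical pipeline, only to explain the item_dropped_count entry below

    def process_item(self, item, spider):
        # dropping an item here fires the item_dropped signal counted by the extension
        if not dict(item):
            raise DropItem("empty item")
        return item

It would also have to be registered under ITEM_PIPELINES in settings.py to take effect.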

Finally, calling spider.crawler.stats.get_stats() in the extension exposes the crawler's run statistics. The printed output carries the same information Scrapy logs at the end of every run, and since it is a plain dict the individual values can be read from it directly:

{
'log_count/INFO': 7, 
'start_time': datetime.datetime(2018, 11, 13, 2, 22, 15, 596577), 
'log_count/DEBUG': 6, 
'memusage/startup': 47587328, 
'memusage/max': 47587328, 
'scheduler/enqueued/memory': 1, 
'scheduler/enqueued': 1, 
'scheduler/dequeued/memory': 1, 
'scheduler/dequeued': 1, 
'downloader/request_count': 1, 
'downloader/request_method_count/GET': 1, 
'downloader/request_bytes': 212, 
'downloader/response_count': 1, 
'downloader/response_status_count/200': 1, 
'downloader/response_bytes': 1511, 
'response_received_count': 1, 
'key': 'value', 
'log_count/WARNING': 1, 
'item_dropped_count': 1, 
'item_dropped_reasons_count/DropItem': 1, 
'finish_time': datetime.datetime(2018, 11, 13, 2, 22, 16, 113701),
'finish_reason': 'finished'
 }
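
Because the result is an ordinary dict (and get_value() reads a single key with an optional default), individual entries can be pulled out programmatically, for example inside the spider_closed handler shown above:

stats = spider.crawler.stats
print(stats.get_value("downloader/request_count", 0))  # 1 in the run above
print(stats.get_value("key"))                          # 'value', set in parse()
print(stats.get_stats().get("item_dropped_count", 0))  # 1 in the run above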

