Python Crawler Series: A Detailed Guide to Scrapy's settings.py Configuration File

Let's first take a look at what's inside it:


# -*- coding: utf-8 -*-

# Scrapy settings for demo1 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# Name of the Scrapy project. It is used to construct the default User-Agent
# and for logging, and is filled in automatically when you create the project
# with the startproject command.
BOT_NAME = 'demo1'

SPIDER_MODULES = ['demo1.spiders']   # List of modules where Scrapy looks for spiders. Default: ['xxx.spiders']
NEWSPIDER_MODULE = 'demo1.spiders'   # Module where spiders created by the genspider command go. Default: 'xxx.spiders'

# Default User-Agent used when crawling, unless overridden
#USER_AGENT = 'demo1 (+http://www.yourdomain.com)'

# If enabled, Scrapy will respect robots.txt policies
ROBOTSTXT_OBEY = True

# Maximum number of concurrent requests performed by the Scrapy downloader (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests to the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# Time the downloader waits before fetching the next page from the same
# website. Use it to limit the crawl speed and ease the load on the server.
# Fractional values such as 0.25 are supported; the unit is seconds.
#DOWNLOAD_DELAY = 3

# The download delay setting will honor only one of the two limits below:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16   # Maximum number of concurrent requests to any single website.
# Maximum number of concurrent requests to any single IP. If non-zero,
# CONCURRENT_REQUESTS_PER_DOMAIN is ignored and this limit is used instead;
# in other words, concurrency is limited per IP rather than per website.
# It also affects DOWNLOAD_DELAY: if CONCURRENT_REQUESTS_PER_IP is non-zero,
# the download delay is enforced per IP rather than per website.
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable the Telnet console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'demo1.middlewares.Demo1SpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'demo1.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'demo1.pipelines.Demo1Pipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# Initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# Maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# Average number of requests Scrapy should be sending in parallel to each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
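As an aside, any of these settings can also be overridden per spider with Scrapy's standard custom_settings class attribute, which takes precedence over settings.py. A minimal sketch:

import scrapy

class DemoSpider(scrapy.Spider):
    name = 'demo'
    # per-spider overrides; these take precedence over settings.py
    custom_settings = {
        'DOWNLOAD_DELAY': 1,
        'ROBOTSTXT_OBEY': False,
    }

    def parse(self, response):
        pass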

Here is an explanation of a few of the parameters:

ROBOTSTXT_OBEY = True ----------- whether to obey robots.txt

CONCURRENT_REQUESTS = 16 ----------- maximum number of concurrent requests (default: 16); note that Scrapy is asynchronous, so this limits concurrent requests rather than threads

AUTOTHROTTLE_START_DELAY = 3 ----------- initial download delay when throttled crawling starts

AUTOTHROTTLE_MAX_DELAY = 60 ----------- maximum download delay to fall back to under high latency
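Put together, a polite crawl configuration combining these might look like the sketch below (the values are illustrative, not recommendations):

# Illustrative throttling setup in settings.py
CONCURRENT_REQUESTS = 8          # fewer simultaneous requests overall
DOWNLOAD_DELAY = 0.5             # half a second between requests to the same site
AUTOTHROTTLE_ENABLED = True      # let Scrapy adapt the delay dynamically
AUTOTHROTTLE_START_DELAY = 3     # starting delay
AUTOTHROTTLE_MAX_DELAY = 60      # upper bound when the server responds slowly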

The last few settings control the local HTTP cache. If enabled, Scrapy serves responses from the local cache first, which speeds up repeated crawls; whether to use it depends on your project. Note that HTTPCACHE_EXPIRATION_SECS = 0 means cached pages never expire:

HTTPCACHE_ENABLED = True

HTTPCACHE_EXPIRATION_SECS = 0

HTTPCACHE_DIR = 'httpcache'

HTTPCACHE_IGNORE_HTTP_CODES = []

HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The settings above can be enabled as your project requires, but there are two parameters that are best enabled every time. Switching them on by hand in every project file is tedious; ideally they would be enabled automatically as soon as the project is created.
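One way to get that (a sketch, assuming a default Scrapy 1.x install; verify the path on your machine) is to edit the template that scrapy startproject copies new projects from:

# Locate the settings.py template used by `scrapy startproject`.
# Editing a file inside an installed package is invasive, so treat
# this as a sketch rather than a recommended practice.
import os
import scrapy

tmpl = os.path.join(os.path.dirname(scrapy.__file__),
                    'templates', 'project', 'module', 'settings.py.tmpl')
print(tmpl)  # uncomment DEFAULT_REQUEST_HEADERS / USER_AGENT in this file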


#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

These are the browser request headers. Many websites check the client's headers; Douban, for example, checks the user_agent on every single request and returns nothing but 403 otherwise. It is worth enabling this setting.
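Uncommented, and with a browser User-Agent added to it (the UA string below is the same example used later in this article), it might look like this:

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    # a User-Agent set here is sent with every request unless the
    # request sets its own
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1',
}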

#USER_AGENT = 'Chirco (+http://www.yourdomain.com)'

This one is critical. When requests come in quickly, most servers check the User-Agent first, and Scrapy's default User-Agent identifies itself as Scrapy (e.g. something like Scrapy/1.1). You should enable this setting and change it to a browser User-Agent, such as: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1
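In settings.py that is just:

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1'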

Better still is for this USER-AGENT to be switched randomly and automatically.

The code below picks a user-agent at random from a predefined list, so different pages are fetched with different identities.

Add the following to settings.py:

DOWNLOADER_MIDDLEWARES = {
    # disable Scrapy's built-in UserAgentMiddleware (None removes it)
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    # enable the rotating replacement at priority 400
    'randoms.rotate_useragent.RotateUserAgentMiddleware': 400,
}

The code for rotate_useragent is:

# -*- coding: utf-8 -*-
import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):

    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # pick a random user-agent from the list for this request
        ua = random.choice(self.user_agent_list)
        if ua:
            print('User-Agent:' + ua)
            request.headers.setdefault('User-Agent', ua)

    # the default user_agent_list consists of Chrome user agents for
    # Windows, Linux, Chrome OS and macOS
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]


Running the spider, you can see output like this:

2017-04-16 00:07:40 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
User-Agent:Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3
2017-04-16 00:07:40 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.jianshu.com/robots.txt> from <GET http://jianshu.com/robots.txt>
User-Agent:Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1
2017-04-16 00:07:41 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.jianshu.com/robots.txt> (referer: None)
User-Agent:Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24
2017-04-16 00:07:41 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.jianshu.com/> from <GET http://jianshu.com/>
User-Agent:Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3

You can also move user_agent_list into the settings file.


Then add one line to the rotate_useragent file:

from randoms.settings import user_agent_list
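Note that the import alone is not enough: the hard-coded list inside the class would still shadow the imported one, so bind the imported list as the class attribute. A minimal sketch, assuming the project module is named randoms as above:

# randoms/settings.py -- define the list at module level
# (one entry shown; the rest are the same strings as above)
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
]

# randoms/rotate_useragent.py -- replace the hard-coded list
from randoms.settings import user_agent_list as settings_ua_list

class RotateUserAgentMiddleware(UserAgentMiddleware):
    # bind the imported list so self.user_agent_list still resolves
    user_agent_list = settings_ua_list
    # __init__ and process_request stay exactly as shown earlier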

Running it produces the same kind of rotating output as before.


Complete example: http://download.csdn.net/detail/u011781521/9815390


Reposted from blog.csdn.net/rookie_is_me/article/details/80939708