1. Explanation of the parameters in settings.py
The official documentation URL for each parameter is given in the file's comments.
```python
# -*- coding: utf-8 -*-

# Scrapy settings for tencent project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
# Where newly generated spiders are placed
NEWSPIDER_MODULE = 'tencent.spiders'

# LOG_LEVEL = "WARNING"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# The browser identification string
#USER_AGENT = 'tencent (+http://www.yourdomain.com)'

# Obey robots.txt rules
# Whether to obey the robots protocol
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# Download delay
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# Maximum concurrent requests per domain
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
# Maximum concurrent requests per IP
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tencent.middlewares.TencentSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'tencent.middlewares.TencentDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
# The settings below can be used to slow the crawler down
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
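One detail worth highlighting from the file above: `ITEM_PIPELINES` maps pipeline class paths to integer priorities, and items pass through the pipelines in ascending priority order. A minimal sketch, where `CleanupPipeline` is a hypothetical second pipeline added only for illustration:

```python
# Items flow through pipelines from lowest to highest number, so
# TencentPipeline (300) runs before the hypothetical CleanupPipeline (800).
ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
    'tencent.pipelines.CleanupPipeline': 800,  # hypothetical extra pipeline
}
```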
2. How do you use the settings or parameters defined in settings.py inside spiders, pipelines, and so on?
(1) In any other .py file, import them directly with from ... import ...
    Wherever you need a value, just import it and use it.
(2) You can also go through the .settings attribute. It behaves like a dict,
    so values can be read with .settings["SETTING_NAME"] or .settings.get("SETTING_NAME").
    For example, inside a spider's parse(self, response) method, use
    self.settings["SETTING_NAME"] or self.settings.get("SETTING_NAME").
    Inside a pipeline's process_item(self, item, spider) method, use
    spider.settings["SETTING_NAME"] or spider.settings.get("SETTING_NAME").
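A minimal sketch of access pattern (2), using a plain dict to stand in for Scrapy's Settings object (which supports the same ["key"] and .get("key") access); MY_CUSTOM_SETTING is a hypothetical setting name:

```python
# Stand-in for Scrapy's Settings object; in a real project these values
# come from settings.py. MY_CUSTOM_SETTING is a hypothetical example.
settings = {
    "BOT_NAME": "tencent",
    "MY_CUSTOM_SETTING": "some-value",
}

# In a spider this would be self.settings[...]; in a pipeline's
# process_item(), spider.settings[...]. Both styles of access work:
print(settings["BOT_NAME"])               # raises KeyError if missing
print(settings.get("MY_CUSTOM_SETTING"))  # returns None if missing
print(settings.get("NOT_DEFINED"))        # -> None rather than an error
```

The practical difference is error handling: `["key"]` fails loudly on a missing setting, while `.get("key")` quietly returns None, which is usually what you want for optional settings.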
Learning the Scrapy Framework (8: The settings in settings.py explained, how to define settings or parameters, and how to use them)
Reprinted from blog.csdn.net/wei18791957243/article/details/86303861