# Post 6: Python Scrapy Framework Crawlers - A New Era of Data Collection (2)

# Introduction to the Scrapy crawler framework
1. Why use a framework:
A framework makes developing larger projects much easier; for a small crawler I don't think you need one. Learning a framework, though, is genuinely not easy. To build bigger things you just have to push through: you chose this road, so finish it even if you have to crawl.

2. Installing Scrapy:
In cmd, run: pip install scrapy --default-timeout=1000. This installs not just Scrapy but all of its supporting modules, so pick a day when the network is on your side; honestly it may still fail. If the installer complains that some module is missing or cannot be found, there is no way around it: download and install that package manually, then rerun pip install scrapy.
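
A typical install-and-verify session looks like this (the timeout value is just an example):

pip install scrapy --default-timeout=1000
scrapy version    (prints the installed version if the install succeeded)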

3. Creating your first project:
a. How to create a Scrapy project:
Create a folder: pick any location and create a folder, e.g. Scrapy_Project;
In the new folder's address bar, type cmd and press Enter to open a command window (make sure the path is the new folder), then run: scrapy startproject mySpider (the project name is your choice);
b. How to create the spider's main program:
In cmd, change directory: cd D:\python\Scrapy\Scrapy_Project\mySpider\mySpider
Create the main program: scrapy genspider musicSpider www.baidu.com (musicSpider is the file/spider name; the URL can be anything for now and gets updated in the code later)
c. Running the spider:
Run: scrapy crawl musicSpider (from the directory the spider lives in); this works from the command line and from PyCharm alike.

Note: in the target folder, hold Shift and right-click, then choose "Open PowerShell window here". This replaces the cd dance in cmd and lets you create the spider's main program quickly.
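
Putting it together, the whole setup is just a handful of commands (the paths and names follow the example above):

cd D:\python\Scrapy\Scrapy_Project
scrapy startproject mySpider
cd mySpider\mySpider
scrapy genspider musicSpider www.baidu.com
scrapy crawl musicSpider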

========================================
4. A quick look at what each file does:
The diagram below is the classic Scrapy architecture diagram. Take a look now; if it does not click, keep reading and come back to it afterwards:
[Figure: classic Scrapy architecture diagram]
The role of each block in Scrapy:

1. items: defines the fields you want to scrape;
2. pipelines: receives the items handed over by the spider and saves them;
3. the main program, here named musicSpider: fetches and parses the data; once the crawl starts, its parse function runs automatically;
4. settings: configures the crawler's internal behavior (see the minimal sketch after this list).
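
Before the full example, here is a minimal sketch of how the pieces fit together (the names DemoSpider/DemoPipeline are made up for illustration):

import scrapy

class DemoSpider(scrapy.Spider):
    name = "demo"
    start_urls = ["https://example.com"]

    def parse(self, response):      # called automatically for every downloaded page
        # each yielded dict/item is handed to the enabled pipelines in priority order
        yield {"title": response.xpath("//title/text()").get()}

class DemoPipeline:                 # enabled in settings via ITEM_PIPELINES
    def process_item(self, item, spider):   # runs once per yielded item
        print(item)                          # save to a file or database here
        return item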

5. A worked example. Example 1:
a. Overall structure:
[Figure: project structure of example 1]
b. Configure the settings file: settings:

# -*- coding: utf-8 -*-

# Scrapy settings for sql_test project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sql_test'

SPIDER_MODULES = ['sql_test.spiders']
NEWSPIDER_MODULE = 'sql_test.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sql_test (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False      # 1. Disable Scrapy's own cookie handling so the Cookie header below goes through; ignore this if the site has no anti-crawling

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# 2. Request headers including a cookie, needed because lagou.com has anti-crawling measures; ignore this if your target site does not
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362',
  'Cookie': 'JSESSIONID=ABAAABAABAGABFA5E2A9982DD32C30995DFCF367761BEBE; WEBTJ-ID=20200903091843-174518b636b37a-0da60136b42b44-71415a3b-1227387-174518b636c7c0; TG-TRACK-CODE=index_navigation; RECOMMEND_TIP=true; hasDeliver=0; showExpriedCompanyHome=1; SEARCH_ID=e562294187854c77abb3cbcff526d70e; showExpriedMyPublish=1; showExpriedIndex=1; LGRID=20200903162742-681a73b9-053b-4dc5-86c7-31528a2af023; X_HTTP_TOKEN=1bb30f90edb33c7226612199513736c21633b95238; _putrc=DE433D2C9A7878A3123F89F2B170EADC; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1599121663; login=true; unick=%E8%B4%B9%E8%BF%9C%E8%88%AA; privacyPolicyPopup=false; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1598880815,1598949023,1599013207,1599095924; LGUID=20200831095201-fcf6b9f8-6c84-48bb-b670-dde7bd1e9528; user_trace_token=20200831095201-68297c46-8bb2-4c48-90c6-d5dda9390cf0; LG_LOGIN_USER_ID=2c18b1295c1ccffcc9b39c7e1b2b587f99e98fa6cb69fcb5eafb1b86b84344c7; index_location_city=%E5%8C%97%E4%BA%AC; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2218391023%22%2C%22first_id%22%3A%221744e0b45ca211-020f7802e09e57-71415a3b-1227387-1744e0b45cb811%22%2C%22props%22%3A%7B%22%24os%22%3A%22Windows%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2270.0.3538.102%22%2C%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%7D%2C%22%24device_id%22%3A%22174423a144f2ea-0bdcfdb5c25a4a-71415a3b-1227387-174423a1450415%22%7D; _gid=GA1.2.1687110760.1598838723; _ga=GA1.2.388102064.1598838723; gate_login_token=250d8611e626e58f3c738c10f02ce585e4e956bdd815b5b76498821e495ba739; LG_HAS_LOGIN=1',

}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sql_test.middlewares.SqlTestSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sql_test.middlewares.SqlTestDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# 3. Enable the pipeline and set its priority; once the pipeline class is written, this is all you need
ITEM_PIPELINES = {
   'sql_test.pipelines.SqlTestPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

c. Define what to scrape: items:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

# Scrape Python job postings from lagou.com
# job title: Title
# job description: content
# salary: money
# work location: jobsite

import scrapy


class SqlTestItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # job title
    Title = scrapy.Field()
    # salary
    money = scrapy.Field()
    # work location
    jobsite = scrapy.Field()
    # job description
    content = scrapy.Field()
    # job link: url
    url = scrapy.Field()

d. Fetch and parse the data: test (this spider also uses Selenium; feel free to read up on it now, a later post covers it), then pass the items to the pipeline:

# -*- coding: utf-8 -*-
import scrapy
from ..items import SqlTestItem
from selenium import webdriver     # browser-automation module; great for crawling but slow; install it before use
from lxml import etree

# Scrape Python job postings from lagou.com
# listing pages: https://www.lagou.com/beijing-zhaopin/Python/1/
#                https://www.lagou.com/beijing-zhaopin/Python/2/

# job detail pages: https://www.lagou.com/jobs/7594816.html
number = 1      # starting listing-page index (module-level so the class body can read it)
class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['www.lagou.com']   # domains the spider may visit (the scheme is stripped)
    url = 'https://www.lagou.com/beijing-zhaopin/Python/'
    start_urls = [url + str(number)]      # initial URL(s) to crawl, always a list

    def parse(self, response):      # runs automatically once the crawl starts, once per response, until the crawl ends
        num = response.xpath('//div[@class="p_top"]//a[@class="position_link"]/@href').extract()
        for link in num:
            # open Firefox through Selenium
            driver = webdriver.Firefox(executable_path=r"D:\python\PF\geckodriver")
            driver.get(link)
            html = driver.page_source  # page source as rendered by the browser
            driver.quit()              # close the browser so windows do not pile up
            page = etree.HTML(html)
            item = SqlTestItem()       # instantiate the item
            # job title
            item["Title"] = page.xpath('//title/text()')[0]        # note the fixed item["field"] syntax
            # salary
            item["money"] = page.xpath('//h3//span/text()')
            # work location
            item["jobsite"] = page.xpath('//div[@class="work_addr"]/a/text()')
            # job description
            item["content"] = page.xpath('//div[@class="job-detail"]/text()')
            # job link
            # item["url"] = link
            yield item            # one item per yield (generator usage)
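
The spider above only ever crawls the first listing page. A minimal pagination sketch (the five-page limit is an arbitrary example) would add a few lines at the end of parse:

        # schedule the next listing page, up to an example limit of 5 pages
        self.page = getattr(self, "page", 1)
        if self.page < 5:
            self.page += 1
            yield scrapy.Request(self.url + str(self.page), callback=self.parse)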

e. Save the data (both options shown: local file and database): pipelines

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql

class SqlTestPipeline(object):
    def __init__(self):  # initialize whatever the pipeline needs (optional)
        # 1. Create the database connection; the cursor argument makes query results come back as dicts
        # self.db = pymysql.connect(host='localhost',port=3306,user='root',password='f199506',db='scrapy_mysql',charset='utf8')
        # self.cursor = self.db.cursor(cursor=pymysql.cursors.DictCursor)

        # 2. Open (or create) a txt file used for testing
        self.file = open("url.txt", "a", encoding="utf-8")

    def process_item(self, item, spider):
        # 1. Save to the database:
        # a. create the table (run once)
        # self.sql = 'create table test (id int auto_increment primary key, Title varchar(150), money varchar(100), jobsite varchar(50), content varchar(500), url varchar(200))'
        # self.cursor.execute(self.sql)  # execute the SQL
        # self.db.commit()  # commit the transaction
        # b. insert data
        # self.sql = 'INSERT INTO test(Title,money,jobsite,content,url) VALUES("%s","%s","%s","%s","%s")'
        # self.cursor.execute(self.sql % (item["Title"], item["money"], item["jobsite"], item["content"], item["url"]))  # execute the SQL
        # self.db.commit()  # commit the transaction
        # c. delete data
        # self.sql = 'DELETE FROM test'
        # self.cursor.execute(self.sql)  # execute the SQL
        # self.db.commit()  # commit the transaction
        # d. query data:
        # self.sql = 'select * from test'
        # self.a = self.cursor.execute(self.sql)  # execute the SQL
        # print(self.a)
        # self.db.commit()  # commit the transaction

        # 2. Save to a local file
        content = str(item) + '\n'
        self.file.write(content)  # write to the local file
        return item

    # called when the crawl finishes (optional)
    def close_spider(self, spider):
        # 1. close the database connection
        # self.cursor.close()
        # self.db.close()
        # 2. close the output file
        self.file.close()

The above is a complete Scrapy project. The program is far from perfect and leaves plenty for you to discover and optimize. Beginners can copy the code and check whether it still runs; websites change all the time, so working today does not guarantee working tomorrow, no kidding.
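
By the way, for a quick test you do not even need the pipeline: Scrapy's built-in feed exports can dump every yielded item straight to a file from the command line:

scrapy crawl test -o jobs.json    (or jobs.csv; writes all yielded items to that file)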

Here is another example, scraping images; Scrapy has dedicated functions for this and they work very well. (Remember one thing: saving images, music, and video is a binary operation. You can write the bytes directly in the spider (test), but once the data is passed to an ordinary pipeline it has become a string, so without the dedicated functions the download will not succeed.)

Example 2: scraping images (mobile content from Toutiao; mobile-side crawling will be covered later, and you can swap in any other site with no trouble):
Folder structure:
[Figure: project structure of example 2]
settings:

# -*- coding: utf-8 -*-

# Scrapy settings for images project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'images'

SPIDER_MODULES = ['images.spiders']
NEWSPIDER_MODULE = 'images.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'images (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'images.middlewares.ImagesSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'images.middlewares.ImagesDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
# # The line above is just a request header, a bit of insurance against being rejected
ITEM_PIPELINES = {
   'images.pipelines.ImagesPipeline': 300,
}
IMAGES_STORE ='D:\\python\\Scrapy\\image\\test'


#IMAGES_EXPIRES = 90
#IMAGES_MIN_HEIGHT = 100
#IMAGES_MIN_WIDTH = 100
# IMAGES_STORE sets where downloaded images are saved. IMAGES_EXPIRES sets how many days
# an already-downloaded image stays valid before it is fetched again.
# IMAGES_MIN_HEIGHT and IMAGES_MIN_WIDTH set the minimum image dimensions to keep.

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

items:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class ImagesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()
# image_urls and images are fixed field names expected by the ImagesPipeline; do not rename them
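
A side note: if you need no custom behavior at all, you can skip subclassing and enable Scrapy's stock pipeline straight from settings (it requires the Pillow library); a minimal sketch:

ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = 'D:\\python\\Scrapy\\image\\test'    # any folder you like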

images_toutiao:

# -*- coding: utf-8 -*-
import scrapy
import re
from ..items import ImagesItem

class ImagesToutiaoSpider(scrapy.Spider):
    name = 'images_toutiao'
    allowed_domains = ['a3-ipv6.pstatp.com']
    start_urls = ['https://a3-ipv6.pstatp.com/article/content/25/1/6819945008301343243/6819945008301343243/1/0/0']  # the content URL to crawl

    # Article-content URLs that carry the image lists (three sample links, essentially the same address):
    # https://a3-ipv6.pstatp.com/article/content/25/1/6819945008301343243/6819945008301343243/1/0/0
    # https://a3-ipv6.pstatp.com/article/content/25/1/6848145029051974155/6848145029051974155/1/0/0
    # https://a6-ipv6.pstatp.com/article/content/25/1/6848145029051974155/6848145029051974155/1/0/0

    def parse(self, response):
        result = response.body.decode()  # decode the response fetched from start_urls
        contents = re.findall(r'},{"url":"(.*?)"}', result)

        for i in range(0, len(contents)):
            if len(contents[i]) <= len(contents[0]):
                item = ImagesItem()
                urls = [contents[i]]           # wrap the single URL in a list
                item['image_urls'] = urls      # the ImagesPipeline expects a list of URLs
                print(urls)
                yield item
            else:
                pass
        # Pagination - crawl images from several articles (commented sketch):
        # self.pages = ['6819945008301343243/6819945008301343243/1/0/0',
        #               '6848145029051974155/6848145029051974155/1/0/0']
        # for page in self.pages:
        #     url = "https://a3-ipv6.pstatp.com/article/content/25/1/" + page
        #     yield scrapy.Request(url, callback=self.parse)

pipelines:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy.http import Request

# get_media_requests and item_completed below are built-in ImagesPipeline hooks being overridden; rename things here if you want
# You can copy this code and use it as-is

class ImagesPipeline(ImagesPipeline):   # deliberately reuses the name that ITEM_PIPELINES points at ('images.pipelines.ImagesPipeline')
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield Request(image_url)

    def item_completed(self, results, item, info):
        image_path = [x['path'] for ok, x in results if ok]
        if not image_path:
            raise DropItem('Item contains no images')
        #item['image_paths'] = image_path
        return item

#     def file_path(self, request, response=None, info=None):
#         name = request.meta['name']    # image name passed in via meta (see the commented class below)
#         name = re.sub(r'[?\\*|“<>:/]', '', name)    # strip characters Windows forbids in filenames; skipping this causes garbled names or failed downloads
#         filename = name + '.jpg'          # append the image extension
#         return filename




# import scrapy
# import re
# from scrapy.pipelines.images import ImagesPipeline
# from scrapy.exceptions import DropItem
# class ImagePipeline(ImagesPipeline):
#     def get_media_requests(self, item, info):
#         image_url = item['image_urls']
#         yield scrapy.Request(image_url,meta={'name':item['image_name']})
#     def item_completed(self, results, item, info):
#         image_paths = [x['path'] for ok, x in results if ok]
#         if not image_paths:
#             raise DropItem("Item contains no images")
#         return item
#     def file_path(self, request, response=None, info=None):
#         name = request.meta['name']    # image name passed in via meta above
#         name = re.sub(r'[?\\*|“<>:/]', '', name)    # strip characters Windows forbids in filenames
#         filename = name + '.jpg'          # append the image extension
#         return filename

Example 3 (scraping movie trailers):
Folder structure:
[Figure: project structure of example 3]
settings:

# -*- coding: utf-8 -*-

# Scrapy settings for DYVideos project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'DYVideos'

SPIDER_MODULES = ['DYVideos.spiders']
NEWSPIDER_MODULE = 'DYVideos.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'DYVideos (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'DYVideos.middlewares.DyvideosSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'DYVideos.middlewares.DyvideosDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'DYVideos.pipelines.DyvideosPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

items:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class DyvideosItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    video_url = scrapy.Field()  # source URL of the video
    video_title = scrapy.Field()  # video title

shipin (this spider shows how to chain one callback into the next, layer by layer; a classic pattern worth bookmarking):

# -*- coding: utf-8 -*-
import scrapy
from ..items import DyvideosItem
import re
# Layer 1: the page listing all trailers
# https://www.yugaopian.cn/allmovies
# from it, extract fragments like movie/28388

# Layer 2: the movie detail page; no video yet, only jump links
# https://www.yugaopian.cn/movie/28388
# https://www.yugaopian.cn/movie/24897

# Layer 3: the video player page, where a video actually plays
# http://video.mtime.com/78149/?mid=234915

# Layer 4: the bare video API URL, which is what finally gets downloaded
# http://shortvideo.mtime.cn/play/getPlayUrl?videoId=76682&source=1

class ShipinSpider(scrapy.Spider):
    name = 'shipin'
    allowed_domains = ['www.yugaopian.cn']
    start_urls = ['https://www.yugaopian.cn/allmovies/1']

    def parse(self, response):
        self.number = 1     # counter used later to name the downloaded files
        # Crawl the page listing every trailer and extract IDs such as /movie/24897
        yurls = response.xpath('//li//a[@target="_blank"]/@href').extract()
        # item = DyvideosItem()
        # item["video_url"] = yurls      # (debug: yield the raw links to inspect this layer)
        # yield item

        # Follow each jump link to the video page, e.g. http://video.mtime.com/78149/?mid=234915
        for surl in yurls:
            yield scrapy.Request("https://www.yugaopian.cn" + surl, callback=self.G_item)


    def G_item(self, response):
        result = response.xpath('//td//a[@target="_blank"]/@href').extract()  # outbound links on the detail page
        # item = DyvideosItem()
        # item["video_url"] = result      # (debug: yield the raw links to inspect this layer)
        # yield item

        for i in range(0, len(result)):
            if len(result[0]) == len(result[i]):
                number1 = re.findall(r'movie.mtime.com%2F\d+%2Ftrailer%2F(\d+)', result[i])
                number2 = re.findall(r'movie.mtime.com%2F(\d+)%2Ftrailer%2F\d+', result[i])
                if number1 and number2:   # guard against links that do not match the pattern
                    V_url = "http://video.mtime.com/" + number1[0] + "/?mid=" + number2[0]
                    # item = DyvideosItem()
                    # item["video_url"] = V_url
                    # yield item
                    yield scrapy.Request(V_url, callback=self.V_item, dont_filter=True)  # meta={'item': item},
            else:
                pass

    def V_item(self, response):
        # item = response.meta['item']
        video_url = response.body.decode()  # decode the response
        video_num = re.findall(r'},{"id":(\d+),"mid":', video_url)
        for i in range(0, len(video_num)):
            video = "http://shortvideo.mtime.cn/play/getPlayUrl?videoId=" + video_num[i] + "&source=1"
            yield scrapy.Request(video, callback=self.D_item, dont_filter=True)
            # item = DyvideosItem()
            # item["video_url"] = video
            # yield item

    def D_item(self, response):
        video_mp4 = response.body.decode()  # decode the response
        video_mp4_url = re.findall(r'"url":"(.*?)"}', video_mp4)
        for i in range(0, len(video_mp4_url)):
            yield scrapy.Request(video_mp4_url[i], callback=self.O_item, dont_filter=True)
            # item = DyvideosItem()
            # item["video_url"] = video_mp4_url
            # yield item

    def O_item(self, response):
        # write the video bytes to disk; self.number keeps the file names unique
        self.file = open(str(self.number) + "video.mp4", "wb")
        self.file.write(response.body)
        self.file.close()
        self.number += 1

pipelines:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

class DyvideosPipeline(object):

    def __init__(self):
        # initialize whatever the pipeline needs (optional)
        self.file = open(r"video.txt", "a", encoding="utf-8")

    # called once for every item the pipeline receives (must not be omitted)
    # the final `return item` is mandatory

    def process_item(self, item, spider):
        content = str(item) + "\n"
        self.file.write(content)  # write to the local file

        return item

    # called when the crawl finishes (optional)
    def close_spider(self, spider):
        self.file.close()
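
Saving the mp4 files inside the spider works, but as an alternative sketch: Scrapy also ships a stock FilesPipeline that downloads binary files for you, much like the ImagesPipeline in example 2 (the store path below is only an example):

# settings.py
ITEM_PIPELINES = {'scrapy.pipelines.files.FilesPipeline': 1}
FILES_STORE = 'D:\\python\\Scrapy\\videos'

# items.py - the FilesPipeline expects exactly these two field names
class DyvideosItem(scrapy.Item):
    file_urls = scrapy.Field()   # list of URLs to download
    files = scrapy.Field()       # filled in by the pipeline with the download results

With that in place, D_item could simply yield DyvideosItem(file_urls=video_mp4_url) instead of scheduling the downloads itself.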

That wraps up the introduction to the Scrapy framework. The programs above are not guaranteed to keep working; the point is to give you the framework's way of thinking so you can adapt them to whatever site you want to crawl. Better to teach fishing than to hand over fish: whatever you study, learn the method rather than the solution to one problem, because the world changes constantly and you will not get far without your own thinking.
Honestly, the crawler framework is not that big, nor hard to understand. Don't rush: write one working example first instead of aiming for a grand project from day one. I don't enjoy starting from the basics either, but experience shows that without solid basics it is hard to build anything excellent.
Post 6 of the series; more to come.


Reprinted from blog.csdn.net/weixin_46008828/article/details/108666391