Pitfalls I hit with Scrapy

0. Installing Scrapy on Windows

1. Install wheel:
      In a console, run pip install wheel; the installation completes automatically.
2. Install lxml:
      Go to https://www.lfd.uci.edu/~gohlke/pythonlibs/, scroll down to lxml, and download the .whl file
      that matches your operating system and Python version: cp27, cp35, etc. mean Python 2.7, 3.5, win32
      means 32-bit Windows, and win_amd64 means 64-bit Windows.
      After downloading, right-click the file -> Properties -> Security -> Object name to copy the full file
      path. Back in the console, type "pip install <pasted path>" and press Enter to finish the install.
3. Install pyOpenSSL:
      Go to https://pypi.python.org/pypi/pyOpenSSL#downloads, scroll down to the .whl file and download it,
      then install it the same way as lxml.
4. Install Twisted:
      Go to https://www.lfd.uci.edu/~gohlke/pythonlibs/#Twisted, scroll down to Twisted, and download the
      .whl file that matches your operating system and Python version. Install it the same way as lxml.
5. Install pywin32:
      Go to https://sourceforge.net/projects/pywin32/files/pywin32/Build 220/, download the file that matches
      your operating system and Python version, then double-click it to start the installer. It locates your
      Python directory automatically, so just keep clicking Next.
6. Install Scrapy:
      After steps 1-5, installing Scrapy itself is simple: run pip install scrapy in the console. You can
      verify the result with the commands shown below.
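To confirm everything installed, open a new console and try, for example (the project name here is just a placeholder):

scrapy version
scrapy startproject myspider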

1. No module named win32api

pip install pypiwin32

2. Files in the project folder can't be found, or No module named 'scrapy.pipelines', or no module named ×××.items?

    

The Scrapy project sits in a subdirectory of the PyCharm project, so PyCharm cannot find items.
My fix: right-click the Scrapy project folder -> Mark Directory as -> Sources Root. When the folder
changes color (PyCharm highlights source roots), you're done. An alternative that avoids the IDE setting is sketched below.
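Another workaround (a sketch; the spider name jobbole is an assumption, taken from the insert statement further down) is to launch the crawl from a small main.py placed next to scrapy.cfg, which puts the project directory on sys.path regardless of how PyCharm is configured:

# main.py - run the spider from inside PyCharm (spider name is an assumption)
import os
import sys

from scrapy.cmdline import execute

sys.path.append(os.path.dirname(os.path.abspath(__file__)))  # make the project importable
execute(["scrapy", "crawl", "jobbole"])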

3. No module named PIL

pip install pillow

4. Downloading images locally and extracting the local save path

1) In settings.py, uncomment ITEM_PIPELINES and add the image pipeline to it:
ITEM_PIPELINES = {
   'spider_first.pipelines.SpiderFirstPipeline': 300,
   'scrapy.pipelines.images.ImagesPipeline': 5,   # the number is the priority: pipelines run from the smallest number to the largest
}
2) Also add to settings.py (it needs import os at the top of the file):
IMAGES_URLS_FIELD = "image_url"  # image_url is the item field (defined in items.py) holding the scraped image URLs
# build the local save path
project_dir = os.path.abspath(os.path.dirname(__file__))  # absolute path of the crawler project
IMAGES_STORE = os.path.join(project_dir, 'images')  # downloaded images go into this folder
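To get the local path back onto the item (the img_path field used in the insert statement further down), the usual trick is to subclass ImagesPipeline and override item_completed. A minimal sketch, assuming the item defines image_url and img_path fields; the class name here is made up:

# pipelines.py (ArticleImagePipeline is a hypothetical name)
from scrapy.pipelines.images import ImagesPipeline

class ArticleImagePipeline(ImagesPipeline):
    def item_completed(self, results, item, info):
        # results is a list of (ok, info_dict) tuples, one per downloaded image
        if "image_url" in item:
            for ok, value in results:
                if ok:
                    item["img_path"] = value["path"]  # path relative to IMAGES_STORE
        return item

If you use it, register this class in ITEM_PIPELINES in place of scrapy.pipelines.images.ImagesPipeline.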

5. Installing the MySQL module for Python

Windows

pip install mysqlclient

Ubuntu

sudo apt-get install libmysqlclient-dev

CentOS

sudo yum install python-devel mysql-devel
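On Ubuntu and CentOS those system packages are only the build dependencies; afterwards you still install the Python module the same way as on Windows:

pip install mysqlclient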

6. IndentationError: unindent does not match any outer indentation level

Don't mix spaces and tabs in the indentation at the start of a method body: this error means the indentation characters are inconsistent, so pick one style (tabs or spaces) and use it throughout the file.

 

7. Connection-pool code for asynchronous MySQL inserts

from twisted.enterprise import adbapi
import MySQLdb.cursors


class MysqlTwistedPipline(object):
    # insert items into MySQL asynchronously through a twisted connection pool
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        dbparms = dict(
            host=settings["MYSQL_HOST"],
            db=settings["MYSQL_DBNAME"],
            user=settings["MYSQL_USER"],
            passwd=settings["MYSQL_PASSWORD"],
            charset=settings["MYSQL_CHARSET"],
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=settings["MYSQL_USE_UNICODE"],
        )
        dbpool = adbapi.ConnectionPool("MySQLdb", **dbparms)
        return cls(dbpool)

    def process_item(self, item, spider):
        # use twisted to run the MySQL insert asynchronously
        query = self.dbpool.runInteraction(self.do_insert, item)
        query.addErrback(self.handle_error, item, spider)  # handle insert errors
        return item  # keep the item flowing to later pipelines

    def handle_error(self, failure, item, spider):
        # report exceptions raised by the async insert
        print(failure)

    def do_insert(self, cursor, item):
        insert_sql = """
            insert into jobbole(post_url_id,post_url,re_selector,img_url,img_path,zan,shoucang,pinglun,zhengwen,riqi,fenlei)
            VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
        """
        cursor.execute(insert_sql, (
            item["post_url_id"], item["post_url"], item["re_selector"], item["img_url"][0], item["img_path"],
            item["zan"], item["shoucang"], item["pinglun"], item["zhengwen"], item["riqi"], item["fenlei"]))

 

8. Dealing with captchas when crawling

1. If a site throws up a captcha and, like me, you haven't gone deep into machine learning yet, the only option is to solve it manually: download the captcha image, then type in the coordinates or the characters by hand. The whole point is just to get logged in so you can crawl the content behind the login.
2. For a captcha URL such as https://www.zhihu.com/captcha.gif?r=1514042860066&type=login&lang=cn
remember to fetch it with the same session you will use for the login, otherwise the captcha won't match your login attempt.
There are plenty of ready-made header dicts on GitHub, so I won't paste mine here.

session = requests.session()

response = session.get("https://www.zhihu.com/",headers=header)

The pitfall I hit when saving the image itself:
with open(file_name, 'wb') as f:
    f.write(response.content)  # the with block closes the file automatically
Remember: if opening the URL in a browser shows the image directly, write response.content to the file, not response.text.encode().
Some sites return JSON full of unicode escapes, which is painful to read. The fix:
 print(response.text.encode('latin-1').decode('unicode_escape'))
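Putting section 8 together, a minimal sketch of the manual-captcha login flow. The login URL, form fields, and User-Agent are assumptions based on the Zhihu login of that time; check the real request in your browser's devtools before using them:

import time

import requests

header = {"User-Agent": "Mozilla/5.0"}  # placeholder; use the full header dict from GitHub
session = requests.session()
session.get("https://www.zhihu.com/", headers=header)  # warm up cookies with the SAME session

# fetch the captcha with that session and save the raw bytes
captcha_url = "https://www.zhihu.com/captcha.gif?r={}&type=login&lang=cn".format(int(time.time() * 1000))
with open("captcha.gif", "wb") as f:
    f.write(session.get(captcha_url, headers=header).content)  # content, not text

captcha = input("open captcha.gif and type the characters: ")
# hypothetical form fields - verify them against the real login request
post_data = {"phone_num": "your_phone", "password": "your_password", "captcha": captcha}
response = session.post("https://www.zhihu.com/login/phone_num", data=post_data, headers=header)
print(response.text.encode('latin-1').decode('unicode_escape'))  # readable output if the JSON is unicode-escaped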

 

9. Resolving primary-key conflicts with ON DUPLICATE KEY UPDATE (MySQL only)

insert into zhihu_question
    (zhihu_id,topics,url,title,content,creat_time,update_time,answer_num,comments_num,watch_user_num,click_num,crawl_time,crawl_update_time)
VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
ON DUPLICATE KEY UPDATE comments_num=VALUES(comments_num),watch_user_num=VALUES(watch_user_num),click_num=VALUES(click_num)

 

10. Scrapy items are powerful: by letting each item build its own SQL (the pipeline calls back into the item), inserts can be controlled dynamically

items.py

import datetime

import scrapy

# SQL_DATETIME_FORMAT (a date-format string) and get_nums (a number-extraction helper)
# are defined elsewhere in this project and assumed to be imported here.


class ZhihuQuestionItem(scrapy.Item):
    zhihu_id = scrapy.Field()
    topics = scrapy.Field()
    url = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()
    creat_time = scrapy.Field()
    update_time = scrapy.Field()
    answer_num = scrapy.Field()
    comments_num = scrapy.Field()
    watch_user_num = scrapy.Field()
    click_num = scrapy.Field()

    def get_insert_sql(self):
        insert_sql = """
                      insert into zhihu_question
                      (zhihu_id,topics,url,title,content,creat_time,update_time,answer_num,comments_num,watch_user_num,click_num,crawl_time,crawl_update_time)
                       VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
                       ON DUPLICATE KEY UPDATE comments_num=VALUES(comments_num),watch_user_num=VALUES(watch_user_num),click_num=VALUES(click_num)
                  """
        zhihu_id = self["zhihu_id"][0]
        topics = ",".join(self["topics"])
        url = "".join(self["url"])
        title = "".join(self["title"])
        content = "".join(self["content"])
        creat_time =  datetime.datetime.now().strftime(SQL_DATETIME_FORMAT)
        update_time =  datetime.datetime.now().strftime(SQL_DATETIME_FORMAT)
        answer_num = self["answer_num"][0]
        comments_num = get_nums(self["comments_num"][0])
        watch_user_num =  self["watch_user_num"][0]
        click_num = self["watch_user_num"][1]  # click_num is scraped into the same field as watch_user_num, hence index [1]
        crawl_time = datetime.datetime.now().strftime(SQL_DATETIME_FORMAT)
        crawl_update_time =datetime.datetime.now().strftime(SQL_DATETIME_FORMAT)
        params = (zhihu_id,topics,url,title,content,creat_time,update_time,answer_num,comments_num,watch_user_num,click_num,crawl_time,crawl_update_time)
        return insert_sql,params

 pipelines.py 

class MysqlTwistedZhihuPipline(object):
    # same connection-pool pipeline, but the item supplies its own SQL
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        dbparms = dict(
            host=settings["MYSQL_HOST"],
            db=settings["MYSQL_DBNAME"],
            user=settings["MYSQL_USER"],
            passwd=settings["MYSQL_PASSWORD"],
            charset=settings["MYSQL_CHARSET"],
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=settings["MYSQL_USE_UNICODE"],
        )
        dbpool = adbapi.ConnectionPool("MySQLdb", **dbparms)
        return cls(dbpool)

    def process_item(self, item, spider):
        # use twisted to run the MySQL insert asynchronously
        query = self.dbpool.runInteraction(self.do_insert, item)
        query.addErrback(self.handle_error, item, spider)  # handle insert errors
        return item

    def handle_error(self, failure, item, spider):
        # report exceptions raised by the async insert
        print(failure)

    def do_insert(self, cursor, item):
        # ask the item for its own SQL and params, so this pipeline works for any item type
        insert_sql, params = item.get_insert_sql()
        cursor.execute(insert_sql, params)
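The nice thing about this pattern is that the pipeline never has to change: any item that implements get_insert_sql can flow through it. A minimal sketch with a second, hypothetical item (table and columns are made up):

class ZhihuAnswerItem(scrapy.Item):
    zhihu_id = scrapy.Field()
    content = scrapy.Field()

    def get_insert_sql(self):
        # same contract as ZhihuQuestionItem: return (sql, params)
        insert_sql = "insert into zhihu_answer(zhihu_id,content) VALUES(%s,%s)"
        params = (self["zhihu_id"], self["content"])
        return insert_sql, params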

11. 'dict' object has no attribute 'has_key' (Python 3 removed the has_key() method)

if adict.has_key(key1):  

Change it to:

if key1 in adict:  

 

12. Crawler error: Max retries exceeded with url

Don't fire every request on a page with a bare requests.post; route them all through one requests.session(), set s.keep_alive = False, and close the session when the run is finished.

s = requests.session()
s.keep_alive = False
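The same idea as a self-contained sketch (the URL is a placeholder):

import requests

with requests.Session() as s:  # the with block closes the session when the run is done
    s.keep_alive = False       # as above, drop keep-alive between requests
    resp = s.get("https://example.com/page")
    # ... do all further requests through s ...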

13. The cleanest way to install Python 3 on an Aliyun CentOS box (compiling with make is a deep pit; use yum instead)

sudo yum install epel-release
sudo yum install python34
wget --no-check-certificate https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
pip3 -V

 


Reposted from 394498036.iteye.com/blog/2404505