Scraping 36Kr news pages with Python 3

36Kr has anti-scraping measures in place, and the data it returns is messy to work with. This post simply documents the scraping process.

(I) Scraping environment

  • win10
  • python3
  • scrapy

(II) Scraping process

(1) Entry point: the site search


(2) The data is loaded dynamically via JS; watch the request that fires when you click through to the next page.


(3) The returned data.


(4) Request URL


http://36kr.com/api//search/entity-search?page=4&per_page=40&keyword=机器人&entity_type=post&ts=1532794031142&_=1532848230039

Analysis: the ts parameter and the _ parameter after it are timestamps and can be dropped; entity_type=post is required; the parameter that has to vary is page.
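For a quick sanity check of the trimmed-down URL outside Scrapy, something like the following works (a sketch only: it assumes the requests package is installed, and the endpoint dates from mid-2018 and may since have changed):

import requests

# Hit the search API with only the required parameters; ts and _ are dropped.
base_url = 'http://36kr.com/api//search/entity-search'
params = {
    'page': 1,                 # the only value that needs to change between requests
    'per_page': 40,
    'keyword': '机器人',
    'entity_type': 'post',     # required
}
resp = requests.get(base_url, params=params)
data = resp.json()
print(data['code'])            # status field present in the response body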

(5) The list page returns JSON; the id field is what you need to build the detail-page link.

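Put differently, each entry in data.items carries an id, and the detail page lives at http://36kr.com/p/<id>.html. A minimal sketch (list_page_body here is a placeholder for the raw JSON text of one list response):

import json

# Turn the article ids from the list-page JSON into detail-page URLs.
jdata = json.loads(list_page_body)   # list_page_body: raw JSON text (placeholder)
for item in jdata['data']['items']:
    detail_url = 'http://36kr.com/p/{}.html'.format(item['id'])
    print(detail_url)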

(6) Detail page data

Content to scrape:

Fields: title, author, date, summary, tags, content

Looking at the page source, all of the data sits inside a <script> tag that defines var props.

(7) Extract it with a regular expression and convert it into proper JSON (reason: JSON makes it easy to pull out any given field, whereas slicing everything with regex alone is awkward or outright impossible for some fields).

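The extraction step in isolation looks roughly like this (html is a placeholder for the downloaded detail-page source; the same logic appears in parse_detail in the full spider below):

import json
import re

# Grab the object that follows "detailArticle|post" inside the var props
# <script> block, strip the trailing comma, and parse it as JSON.
props = re.findall(r'<script>var props=(.*?)</script>', html)[0]
raw = re.findall(r'"detailArticle\|post":(.*?)"hotPostsOf30', props)[0]
article = json.loads(raw.rstrip(','))
print(article['published_at'], article['user']['name'])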

Source code

# -*- coding: utf-8 -*-
# @Time    : 2018/7/28 17:13
# @Author  : 蛇崽
# @Email   : [email protected]
# @File    : 36kespider.py
import json
import re
import time

import scrapy

class ke36Spider(scrapy.Spider):

    name = 'ke36'
    allowed_domains = ['36kr.com']
    start_urls = ['https://36kr.com/']

    def parse(self, response):
        # Build the search-API requests directly; the ts and _ timestamp
        # parameters can be left out, only page needs to vary.
        word = '机器人'
        for page in range(1, 200):
            burl = ('http://36kr.com/api//search/entity-search'
                    '?page={}&per_page=40&keyword={}&entity_type=post').format(page, word)
            yield scrapy.Request(burl, callback=self.parse_list, dont_filter=True)

    def parse_list(self, response):
        jdata = json.loads(response.body.decode('utf-8'))
        # Every entry in data.items carries the article id needed to build
        # the detail-page URL.
        for item in jdata['data']['items']:
            m_id = item['id']
            b_url = 'http://36kr.com/p/{}.html'.format(m_id)
            yield scrapy.Request(b_url, callback=self.parse_detail, dont_filter=True)

    def parse_detail(self, response):
        res = response.body.decode('utf-8')
        # All article data sits in a <script> tag of the form
        # <script>var props={...}</script>.
        temstr = re.findall(r'<script>var props=(.*?)</script>', res)[0]

        # Slice out the "detailArticle|post" object and strip the trailing
        # comma so it parses as valid JSON.
        minfo = re.findall(r'"detailArticle\|post":(.*?)"hotPostsOf30', temstr)[0]
        minfo = minfo.rstrip(',')
        jdata = json.loads(minfo)

        published_at = jdata['published_at']
        username = jdata['user']['name']
        title = jdata['user']['title']
        extraction_tags = jdata['extraction_tags']
        content = jdata['content']
        print(published_at, username, title, extraction_tags)
        print('*' * 50)
        print(content)
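Inside a Scrapy project the spider runs with scrapy crawl ke36. To try it without a project scaffold, a small runner script can drive it directly (a sketch, assuming the spider class above is importable):

from scrapy.crawler import CrawlerProcess

# Run the spider in-process; DOWNLOAD_DELAY slows the requests down a bit.
process = CrawlerProcess(settings={'DOWNLOAD_DELAY': 1})
process.crawl(ke36Spider)
process.start()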


Reposted from blog.csdn.net/xudailong_blog/article/details/81271675