Web Crawler Notes (Day 1)


The crawling process

1. First, understand the business requirements.

2. Based on those requirements, find a suitable website.

3. Fetch the site's data to the local machine (e.g., with the urllib or requests packages).

4. Locate the data you need (re, xpath, css, json, etc.).

5. Store the data (MySQL, Redis, or file formats); the sketch after this list walks through steps 3 to 5.
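As a quick taste of steps 3 through 5, here is a minimal sketch (the URL, regex, and output filename are illustrative assumptions, not from the original post) that fetches a page, locates its <title> with re, and stores it in a file:

import re
from urllib import request

url = 'http://www.example.com'   # hypothetical target site
html = request.urlopen(url).read().decode('utf-8')

# step 4: locate the data (here with a simple regex)
match = re.search(r'<title>(.*?)</title>', html, re.S)
title = match.group(1) if match else ''

# step 5: store the data (here a plain text file)
with open('title.txt', 'w', encoding='utf-8') as f:
    f.write(title)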

 

The simplest crawler structure

from urllib import request

url = 'http://www.baidu.com'
response = request.urlopen(url)   # send the GET request
info = response.read()            # read the response body as bytes
print(info)

With the code above, some pages return no usable data because the server rejects urllib's default User-Agent; in that case you need to add headers:

from urllib import request

url = 'http://www.xicidaili.com/'

user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
headers = {
    "User-Agent": user_agent
}
req = request.Request(url, headers=headers)   # attach the headers to the request
response = request.urlopen(req)
info = response.read()   # note: the response body can only be read once

with open('西刺代理.html', 'wb') as f:   # save the raw bytes to an HTML file
    f.write(info)
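The comment above matters in practice: read() drains the response stream, so a second call returns empty bytes. A quick self-contained check:

from urllib import request

response = request.urlopen('http://www.baidu.com')
first = response.read()    # full page as bytes
second = response.read()   # b'', the body was already consumed
print(len(first), len(second))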
    

Wrapping the code into functions

from urllib import request, parse
from urllib.error import HTTPError, URLError

def get(url, headers=None):
    '''GET request'''
    return urlrequests(url, headers=headers)

def post(url, form, headers=None):
    '''POST request'''
    return urlrequests(url, form, headers=headers)

def urlrequests(url, form=None, headers=None):
    '''
    1. take the url   2. set a user_agent   3. build headers   4. create the Request   5. urlopen   6. return the byte string
    '''
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    # use a default User-Agent unless the caller supplied their own headers
    if headers is None:
        headers = {
            'User-Agent': user_agent
        }
    html_bytes = b''
    try:
        if form:
            # POST request
            # 2.1 urlencode the form dict into a str
            form_str = parse.urlencode(form)
            # 2.2 encode the str to bytes
            form_bytes = form_str.encode('utf-8')
            req = request.Request(url, data=form_bytes, headers=headers)
        else:
            # GET
            req = request.Request(url, headers=headers)
        response = request.urlopen(req)
        html_bytes = response.read()
    except HTTPError as e:
        # HTTPError is a subclass of URLError, so it must be caught first
        print(e)
    except URLError as e:
        print(e)

    return html_bytes

if __name__ == '__main__':
    # POST example: Baidu Translate's sug endpoint returns JSON
    # url = 'http://fanyi.baidu.com/sug'
    # form = {
    #     'kw': '呵呵'
    # }
    # html_bytes = post(url, form=form)
    # print(html_bytes)

    url = 'http://www.baidu.com'
    html_byte = get(url)
    print(html_byte)
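Both helpers return raw bytes and leave decoding to the caller. A minimal usage sketch building on the get/post helpers above, assuming the page is UTF-8 encoded (check the page's charset in practice); the sug endpoint's bytes hold JSON, which json.loads can parse:

import json

html_str = get('http://www.baidu.com').decode('utf-8')   # assumes UTF-8
print(html_str[:200])

# for the sug POST example above, the response body is JSON:
# data = json.loads(post('http://fanyi.baidu.com/sug', form={'kw': '呵呵'}).decode('utf-8'))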


Reprinted from blog.csdn.net/Clany888/article/details/81635500