Web Scraping (1): Introduction to the requests Module

Basics

Sending Requests

1. A GET request fetches a page and returns a Response object.

pip install requests

import requests

response=requests.get('http://www.jianshu.com',
                headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'})

print(response.status_code)  # 200

2. Other request methods:

>>> r = requests.put('http://httpbin.org/put', data = {'key':'value'})
>>> r = requests.delete('http://httpbin.org/delete')
>>> r = requests.head('http://httpbin.org/get')
>>> r = requests.options('http://httpbin.org/get')

Passing URL Parameters

  • Write the parameters into the URL by hand: http://httpbin.org/get?key1=value1&key2=value2&key2=value3
  • Or let requests build the query string for you via the params argument:

    >>> payload = {'key1': 'value1', 'key2': ['value2', 'value3']}
    
    >>> r = requests.get('http://httpbin.org/get', params=payload)
    >>> print(r.url)
    http://httpbin.org/get?key1=value1&key2=value2&key2=value3
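requests builds this query string internally; you can inspect the result offline with PreparedRequest, without sending anything over the network:

```python
from requests.models import PreparedRequest

# prepare_url applies the same urlencode-style escaping that requests.get
# uses for the params argument; list values expand into repeated keys.
req = PreparedRequest()
req.prepare_url('http://httpbin.org/get',
                {'key1': 'value1', 'key2': ['value2', 'value3']})
print(req.url)
# http://httpbin.org/get?key1=value1&key2=value2&key2=value3
```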

Getting the Content

response.text

Text data. response.text is decoded using the encoding you set explicitly or, failing that, the encoding declared in the response headers.

# response.encoding = 'gbk'  # set the decoding explicitly
print(response.encoding)  # utf8
print(response.text)
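Picking the wrong codec is what produces garbled Chinese pages; when the headers declare no charset, setting response.encoding = response.apparent_encoding (detected from the body) usually fixes it. A minimal offline sketch of why the codec choice matters:

```python
# Pretend these bytes are response.content from a UTF-8 page.
raw = '上海'.encode('utf-8')

print(raw.decode('utf-8'))       # the correct text
print(raw.decode('iso-8859-1'))  # mojibake: each byte decoded as one character
```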
response.content

Binary data.

from PIL import Image
from io import BytesIO  # in-memory binary buffer
import requests

response = requests.get('https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1534138298110&di=ed2453e14d86f67dd38c37f5dc2c0d35&imgtype=0&src=http%3A%2F%2Fimgsrc.baidu.com%2Fimgad%2Fpic%2Fitem%2F314e251f95cad1c8db8bfa03753e6709c93d51e5.jpg')

# Image.open accepts a file path or a file-like object; it reads lazily
img = Image.open(BytesIO(response.content))  # buffer the binary data in a BytesIO
img.show()
response.json()
import requests

r = requests.get('https://api.github.com/events')
print(type(r.json()))  # <class 'list'>
print(type(r.text))  # <class 'str'>



import requests
import json

response=requests.get('http://httpbin.org/get')

# Method 1: deserialize the text yourself
res1=json.loads(response.text)
# Method 2: let requests do it
res2=response.json()

print(res1 == res2)  # True

In case the JSON decoding fails, r.json() raises an exception. For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting r.json() raises ValueError: No JSON object could be decoded.

It should be noted that the success of the call to r.json() does not indicate the success of the response. Some servers may return a JSON object in a failed response (e.g. error details with HTTP 500). Such JSON will be decoded and returned. To check that a request is successful, use r.raise_for_status() or check r.status_code is what you expect.
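The 204 failure mode can be demonstrated without a server. Building a bare Response by hand, as below, is only for illustration (normally requests returns this object for you); requests' own JSONDecodeError subclasses ValueError, so catching ValueError covers both old and new versions:

```python
import requests

# Hand-built empty response, standing in for a real 204 No Content reply.
resp = requests.models.Response()
resp.status_code = 204
resp._content = b''  # no body

try:
    resp.json()
except ValueError as e:  # requests' JSONDecodeError is a ValueError
    print('decoding failed:', e)
```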

Reading and Saving Binary Files

1. A small file can be read into memory via response.content and written out directly:

import requests
response=requests.get('https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1528990859791&di=e346b0d7c9c869d055260d50e972936d&imgtype=0&src=http%3A%2F%2Fold.bz55.com%2Fuploads%2Fallimg%2F150210%2F139-150210134411-50.jpg')

with open('img.jpg', 'wb') as f:
    f.write(response.content)

2. Saving a large file (stream it instead of loading it all into memory):

import requests

response=requests.get('https://gss3.baidu.com/6LZ0ej3k1Qd3ote6lo7D0j9wehsv/tieba-smallvideo-transcode/1767502_56ec685f9c7ec542eeaf6eac93a65dc7_6fe25cd1347c_3.mp4',
                      stream=True)

with open('b.mp4','wb') as f:
    for chunk in response.iter_content(chunk_size=8192):  # read in chunks, not byte by byte
        f.write(chunk)

See also: http://www.python-requests.org/en/master/user/quickstart/#response-content


Advanced

SSL Cert Verification

Certificate verification. Most sites use HTTPS (HTTP + SSL), so the certificate is checked first, and an invalid certificate raises an error.

import requests
response=requests.get('https://www.12306.cn')
# requests.exceptions.SSLError

Improvement 1: suppress the error, though a warning is still printed

import requests
response=requests.get('https://www.12306.cn',verify=False)
print(response.status_code)

Improvement 2: suppress both the error and the warning

import requests
import urllib3  # older code uses: from requests.packages import urllib3
urllib3.disable_warnings()
response=requests.get('https://www.12306.cn',verify=False)
print(response.status_code)

Improvement 3: supply a client certificate.

  • Many HTTPS sites can be visited without a client certificate; for most of them (Zhihu, Baidu, etc.) carrying one is optional.
  • Some sites make it mandatory: only designated users who have been issued a certificate are allowed in.

import requests
response=requests.get('https://www.12306.cn',cert=('/path/server.crt','/path/key'))
print(response.status_code)

Using Proxies

Proxy setup: the request goes to the proxy first, and the proxy forwards it on your behalf (sites frequently ban IPs, so this matters).

import requests
proxies={
    # proxy with a username and password; user:pass comes before the @
    'http':'http://egon:123@localhost:9743',
    # or, without authentication:
    # 'http':'http://localhost:9743',
    'https':'https://localhost:9743',
}

response=requests.get('https://www.12306.cn', proxies=proxies)
print(response.status_code)


# SOCKS proxies are also supported; install the extra with: pip install requests[socks]
import requests
proxies = {
    'http': 'socks5://user:pass@host:port',
    'https': 'socks5://user:pass@host:port'
}
response=requests.get('https://www.12306.cn',
                      proxies=proxies)

print(response.status_code)
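Besides the proxies argument, requests (when the session's trust_env is enabled, the default) also honors the standard HTTP_PROXY / HTTPS_PROXY / NO_PROXY environment variables, which urllib reads. An offline sketch, using localhost:9743 as a placeholder address:

```python
import os
import urllib.request

# These variables are picked up per request; nothing is sent here.
os.environ['HTTP_PROXY'] = 'http://localhost:9743'
print(urllib.request.getproxies().get('http'))
```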

Timeout Settings

# Two forms of timeout: a float or a tuple
timeout=0.1  # a single float is used as both the connect and the read timeout
timeout=(0.1, 0.2)  # 0.1 is the connect timeout, 0.2 the read timeout

import requests
response=requests.get('https://www.baidu.com', timeout=0.0001)  # so short it times out

Exception Handling

import requests
from requests.exceptions import *  # browse requests.exceptions for the available types

try:
    r=requests.get('http://www.baidu.com',timeout=0.00001)
except ReadTimeout:
    print('read timed out')
# except ConnectionError:  # network unreachable
#     print('-----')
# except Timeout:
#     print('aaaaa')

except RequestException:
    print('Error')
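The classes above form a hierarchy rooted at RequestException, which is why catching RequestException last works as a catch-all; ConnectTimeout even inherits from both ConnectionError and Timeout, so either handler would catch it. This can be checked offline:

```python
from requests.exceptions import (
    RequestException, ConnectionError, Timeout, ConnectTimeout, ReadTimeout)

# Every requests exception derives from RequestException.
print(issubclass(ReadTimeout, Timeout))             # True
print(issubclass(ConnectTimeout, ConnectionError))  # True
print(issubclass(ConnectTimeout, Timeout))          # True
print(issubclass(Timeout, RequestException))        # True
```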

Uploading Files

import requests
files={'file':open('a.jpg','rb')}
response=requests.post('http://httpbin.org/post',files=files)
print(response.status_code)
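A files value can also be a (filename, content, content_type) tuple instead of a bare file object; preparing the request offline shows the multipart encoding without hitting the server (the filename and content below are made up):

```python
import requests

# Explicit filename, bytes, and content type for the upload field.
files = {'file': ('report.txt', b'hello world', 'text/plain')}

req = requests.Request('POST', 'http://httpbin.org/post', files=files).prepare()
print(req.headers['Content-Type'])  # multipart/form-data; boundary=...
print(b'hello world' in req.body)   # True
```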

requests examples:

Accessing GitHub without logging in

Just carry the right cookies; you can find them in the browser:

import requests

my_cookies={'user_session':'asasas'}  # copy the cookies from the browser's Network panel

response=requests.get('https://github.com/settings/profile', cookies=my_cookies)

print(response.text)
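Instead of a plain dict, a RequestsCookieJar lets you scope each cookie to a domain and path; the jar is accepted anywhere a cookie dict is. A sketch with the same placeholder value as above:

```python
import requests

jar = requests.cookies.RequestsCookieJar()
# 'asasas' is a placeholder; scope the cookie to the github.com domain
jar.set('user_session', 'asasas', domain='github.com', path='/')

print(jar.get('user_session'))  # asasas
# use it like the dict: requests.get(url, cookies=jar)
```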

Simulating a Zhihu login

import requests
Cookies={
    '_xsrf':"....",
    '_zap':'...',
    'capsion_ticket':"...",
    "d_c0":"ADDnDI6Svw2PTi2CvKPEsYDLfzMbFDO8Qhg=|1528974510",
    'q_c1':'74bf780893864dfda5784fb064d1b52a|1528974510000|1528974510000',
    'tgw_l7_route' :'4902c7c12bebebe28366186aba4ffcde',
    'unlock_ticket':"AAACFZbHTgsmAAAAYAJVTf9TIlsO8Byt4UBclHqUBWexafUUvnkPSg==",
    'z_c0':"2|1:0|10:1528974583|4:z_c0|92:Mi4xREZJaEJBQUFBQUFBTU9jTWpwS19EU1lBQUFCZ0FsVk45NW9QWEFBMldYa2xZbExZQS0wWTRmcjlFM3lFalMxRlFR|7ccb79983cca77d3b6e66fc1b22cf4e1d16f6d8e2146356a6dd44eb61ef9a253"
    }
    
response=requests.get('https://www.zhihu.com/',cookies=Cookies,
            headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36'}
            )
print(response.status_code)
print(response.text)

Automating a GitHub login in code

Handling the cookies yourself
import requests
import re

# 1. First request
r1 = requests.get('https://github.com/login')

# 2. Grab the initial cookies the server sends back (they identify the session but are not yet authorized)
r1_cookie=r1.cookies.get_dict()

# 3. Extract the CSRF token from the page (it sits in a hidden input)
authenticity_token=re.findall(r'name="authenticity_token".*?value="(.*?)"',r1.text)[0] 


# 4. Second request: POST to the login endpoint with the initial cookies, the token, and the credentials
data={
    'commit':'Sign in',
    'utf8':'✓',
    'authenticity_token':authenticity_token,
    'login':'[email protected]',
    'password':'alex3714'
}

r2=requests.post('https://github.com/session',
             data=data,
             cookies=r1_cookie
             )

# 5. The server now returns the fully authorized cookies
login_cookie=r2.cookies.get_dict()

# 6. From here on, login_cookie is all you need, e.g. to visit personal settings pages
r3=requests.get('https://github.com/settings/emails',
                cookies=login_cookie)

print('[email protected]' in r3.text)
Letting a Session handle the cookies automatically
import requests
import re

# 1. Create a session object
session=requests.session()

# 2. Make the first request through the session
r1=session.get('https://github.com/login')

# 3. Extract the CSRF token from the page
authenticity_token=re.findall(r'name="authenticity_token".*?value="(.*?)"',r1.text)[0] 

# 4. Second request: POST the credentials
data={
    'commit':'Sign in',
    'utf8':'✓',
    'authenticity_token':authenticity_token,
    'login':'[email protected]',
    'password':'alex3714'
}
r2=session.post('https://github.com/session',data=data)

# 5. Make the third request through the same session
r3=session.get('https://github.com/settings/emails')

print('[email protected]' in r3.text) #True

Reprinted from www.cnblogs.com/fqh202/p/9470239.html