Python3 urllib scraping: this one article is all you need

Copyright notice: this is an original article by the author. Do not repost without permission; if you do repost, please credit the source: https://blog.csdn.net/ssjdoudou/article/details/83412751

Up front: all data below has been anonymized.

from urllib import request
import requests

if __name__ == "__main__":
    # Login request URL (anonymized)
    requrl = 'https://xxxxxx.com/xx/login?xxxxxxxxxxxxxxxxxxxxxxx'
    data = {'username': '11111111', 'password': '11111111'}
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0'}
    # Send the login request. data and headers must be keyword arguments:
    # passing headers as the third positional argument would bind it to
    # the json parameter and the User-Agent would never be sent.
    conn = requests.post(requrl, data=data, headers=headers)
    # cookies = conn.cookies.get_dict()
    print(conn.request.headers)
    # Reuse the headers requests actually sent (they include the session
    # cookie); convert to a plain dict so entries can be deleted
    newheaders = dict(conn.request.headers)
    # Remove Accept-Encoding so the server does not gzip the response
    del newheaders['Accept-Encoding']
    print(newheaders)
    url = "http://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.htm"  # page to scrape
    req = request.Request(url=url, headers=newheaders)
    rsp = request.urlopen(req)
    html = rsp.read().decode("utf-8", "ignore")
    print(html)

If you don't remove the Accept-Encoding header, the request either errors out or returns garbled bytes:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

That is why the headers are converted to a plain dict first: so that Accept-Encoding can be deleted.
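Incidentally, the 0x8b in that traceback is the second byte of the gzip magic number (1f 8b): the server honored Accept-Encoding and compressed the body, and urllib, unlike requests, does not decompress it for you. An alternative to dropping the header is to decompress the body yourself. A minimal sketch with stand-in data (the HTML string below replaces a real rsp.read()):

```python
import gzip

# Stand-in for rsp.read(): gzip-compressed HTML bytes, as a server
# would send when Accept-Encoding: gzip is honored
raw = gzip.compress("<html>demo page</html>".encode("utf-8"))

# The compressed bytes start with the gzip magic number 1f 8b, which is
# why decoding them directly as UTF-8 fails on byte 0x8b
assert raw[:2] == b"\x1f\x8b"

# Decompress first, then decode
html = gzip.decompress(raw).decode("utf-8", "ignore")
print(html)  # → <html>demo page</html>
```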

A brief explanation: first build the login request with the username and password; after a successful login, grab the cookie and use it to request the page you want to scrape. Without the cookie, the login page will still intercept you.
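The same flow can also be done entirely with the standard library: an opener built around a CookieJar stores the cookies from the login response and attaches them to every later request automatically. A minimal offline sketch (the domain and cookie values are made up, and the cookie is inserted by hand to stand in for a real login response):

```python
from urllib import request
import http.cookiejar

# An opener backed by a CookieJar: cookies set by responses opened
# through it are resent on every later opener.open(...) call
jar = http.cookiejar.CookieJar()
opener = request.build_opener(request.HTTPCookieProcessor(jar))

# Simulate the cookie a login response would have set (values made up)
jar.set_cookie(http.cookiejar.Cookie(
    0, 'JSESSIONID', 'abc123', None, False,
    'xxxxxx.com', False, False, '/', False,
    False, None, False, None, None, {}))

# Show the Cookie header the jar attaches to a scrape request
req = request.Request('https://xxxxxx.com/xx/page.htm')
jar.add_cookie_header(req)
print(req.get_header('Cookie'))  # → JSESSIONID=abc123
```

With this approach there is no need to copy headers out of a requests response at all; the jar does the bookkeeping.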

Once you can fetch the page you want, you can do whatever you like with it.

As for the cookie, you can also inspect it manually: press F12, open the Network tab, and look at Request Headers. The most important entry there is your cookie; it holds all the state for this login session and changes every time you log in again.
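Copying the cookie by hand from DevTools then looks like this; a sketch where the Cookie string is a made-up stand-in for the one in your own Request Headers:

```python
from urllib import request

# Headers for the scrape request. The Cookie value is a made-up
# stand-in for the string copied from DevTools -> Network -> Request Headers;
# remember it changes on every new login.
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0',
    'Cookie': 'JSESSIONID=abc123; token=xyz',
}

req = request.Request(url='https://xxxxxx.com/xx/page.htm', headers=headers)
print(req.get_header('Cookie'))  # → JSESSIONID=abc123; token=xyz
```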
 
