Python3 Web Scraping — (4) urllib.error Exception Handling

Exception Handling
1. Handling exceptions with URLError
# -*- coding: UTF-8 -*-
from urllib import request
from urllib import error

if __name__ == "__main__":
    url = 'https://blog.csdn.net/asialee_bir'  # link that returns an error
    try:
        response = request.urlopen(url)
        file = response.read().decode('utf-8')
        print(file)
    except error.URLError as e:
        print(e.code)    # code is only present when the error is actually an HTTPError
        print(e.reason)

Exception output: (a 403 error means access is forbidden)



2. Handling exceptions with HTTPError
# -*- coding: UTF-8 -*-
from urllib import request
from urllib import error

if __name__ == "__main__":
    url = 'https://blog.csdn.net/asialee_bir'  # link that returns an error
    try:
        response = request.urlopen(url)
        file = response.read().decode('utf-8')
        print(file)
    except error.HTTPError as e:
        print(e.code)    # print the HTTP status code
        print(e.reason)
Exception output: (a 403 error means access is forbidden)

Note: URLError is the parent class of HTTPError.
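Because of this inheritance relationship, except-clause order matters: if the URLError clause comes first, it will also swallow every HTTPError, and the HTTPError clause will never run. A minimal sketch that demonstrates this (the helper function `classify` and the manually constructed exceptions are illustrative, not part of the original tutorial):

```python
from urllib import error

def classify(exc):
    """Re-raise exc and report which handler caught it.

    HTTPError subclasses URLError, so the HTTPError clause
    must come first or it would never be reached."""
    try:
        raise exc
    except error.HTTPError as e:
        return ('HTTPError', e.code)
    except error.URLError as e:
        return ('URLError', e.reason)

# An HTTPError, e.g. the 403 from the examples above
print(classify(error.HTTPError('https://blog.csdn.net/asialee_bir',
                               403, 'Forbidden', None, None)))
# → ('HTTPError', 403)

# A plain URLError, e.g. a DNS failure
print(classify(error.URLError('name resolution failed')))
# → ('URLError', 'name resolution failed')
```

If the two except clauses were swapped, both calls would return a 'URLError' tuple, because the URLError clause matches HTTPError instances as well.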

3. Using URLError and HTTPError together
# -*- coding: UTF-8 -*-
from urllib import request
from urllib import error

if __name__ == "__main__":
    url = 'https://blog.baidusss.net'  # non-existent link
    try:
        response = request.urlopen(url)
        file = response.read().decode('utf-8')
        print(file)
    except error.URLError as e:
        if hasattr(e, 'code'):  # use hasattr() to check whether the attribute exists
            print('HTTPError')
            print(e.code)
        if hasattr(e, 'reason'):
            print('URLError')
            print(e.reason)
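An equivalent way to distinguish the two cases in a single except clause is isinstance() instead of hasattr(): since HTTPError is a subclass of URLError, test for the subclass first. The helper function `describe` below is an illustrative sketch, not part of the original tutorial:

```python
from urllib import error

def describe(e):
    # Test HTTPError before URLError: every HTTPError
    # is also a URLError, but not vice versa.
    if isinstance(e, error.HTTPError):
        return f'HTTPError {e.code}: {e.reason}'
    if isinstance(e, error.URLError):
        return f'URLError: {e.reason}'
    return repr(e)

print(describe(error.HTTPError('https://blog.csdn.net/asialee_bir',
                               403, 'Forbidden', None, None)))
# → HTTPError 403: Forbidden
print(describe(error.URLError('[Errno -2] Name or service not known')))
# → URLError: [Errno -2] Name or service not known
```

This reads the same information as the hasattr() checks above, but makes the class relationship explicit.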

Reprinted from blog.csdn.net/asialee_bird/article/details/79821227