Storing data scraped with Scrapy in a local MySQL database

Without further ado, here is the code:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql


class MysqlPipeline(object):
    """
    Synchronous pipeline: each item is written to MySQL as it arrives.
    """

    def __init__(self):
        # Open the connection; charset='utf8' is needed if Chinese text will be stored
        self.conn = pymysql.connect(host='localhost', user='root', passwd='',
                                    db='spider_test', charset='utf8', port=3306)
        # Create a cursor
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # Parameterized INSERT statement
        insert_sql = """
        insert into test_zxf(title, url, detail, answer) VALUES (%s, %s, %s, %s)
        """
        # Execute the insert
        self.cursor.execute(insert_sql, (item['title'], item['url'], item['detail'], item['answer']))
        # Commit; without a commit nothing is saved to the database
        self.conn.commit()
        # Return the item so any later pipelines can still process it
        return item

    def close_spider(self, spider):
        # Close the cursor and the connection when the spider finishes
        self.cursor.close()
        self.conn.close()
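The pipeline reads item['title'], item['url'], item['detail'] and item['answer'], so the spider's item class needs matching fields. A minimal sketch of what that items.py could look like (the class name ZxfItem is an assumed placeholder, not from the original post):

import scrapy

class ZxfItem(scrapy.Item):
    # Field names must match the keys the pipeline inserts into test_zxf
    title = scrapy.Field()
    url = scrapy.Field()
    detail = scrapy.Field()
    answer = scrapy.Field()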

Notes:

1. Create the database, table, and columns in MySQL beforehand (see the sketch after these notes).

2. Register the pipeline in the Scrapy settings file via ITEM_PIPELINES (also sketched below).
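
For note 1, here is a rough sketch of creating the database and table with pymysql. Only the database, table, and column names come from the pipeline above; the column types and the id primary key are assumptions, so adjust them to your data:

import pymysql

# Connect without selecting a database so we can create it first
conn = pymysql.connect(host='localhost', user='root', passwd='', charset='utf8', port=3306)
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS spider_test DEFAULT CHARACTER SET utf8")
cursor.execute("""
    CREATE TABLE IF NOT EXISTS spider_test.test_zxf (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),
        url VARCHAR(255),
        detail TEXT,
        answer TEXT
    ) DEFAULT CHARACTER SET utf8
""")
conn.commit()
cursor.close()
conn.close()

For note 2, the pipeline is enabled through ITEM_PIPELINES in settings.py, along these lines (here 'myproject' stands in for your actual Scrapy project name):

# settings.py
ITEM_PIPELINES = {
    'myproject.pipelines.MysqlPipeline': 300,  # lower numbers run earlier
}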

If this helped you, please give it a like. Thanks!

Reposted from www.cnblogs.com/ClarenceSun/p/13200216.html