Saving scraped data to MySQL with Python's requests library

When scraping with just the requests library, you can save the data to MySQL as follows.

Step 1:

In Navicat, create the database (articles) and the table (info).
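If you prefer to script this setup instead of clicking through Navicat, the same schema can be created with SQL. This is a sketch: the column types, lengths, and the id primary key are assumptions, so adjust them to your data.

```python
# Equivalent SQL to the Navicat steps above (column sizes are assumptions).
DDL_DATABASE = "CREATE DATABASE IF NOT EXISTS articles DEFAULT CHARACTER SET utf8"
DDL_TABLE = """
CREATE TABLE IF NOT EXISTS info (
    id INT AUTO_INCREMENT PRIMARY KEY,
    info_title VARCHAR(255),
    author VARCHAR(100)
)
"""

def create_schema():
    # imported inside the function so the module loads without pymysql installed
    import pymysql
    # connect without selecting a db, since the database may not exist yet
    conn = pymysql.connect(host='localhost', port=3306, user='root',
                           password='123456', charset='utf8')
    cursor = conn.cursor()
    cursor.execute(DDL_DATABASE)
    cursor.execute("USE articles")
    cursor.execute(DDL_TABLE)
    conn.commit()
    cursor.close()
    conn.close()
```

Running create_schema() once gives the same result as the Navicat steps.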

Step 2:

Define the save function.

import pymysql

def save_to_mysql(data):
    try:
        # connect to the local MySQL server and the articles database
        conn = pymysql.connect(host='localhost', port=3306, user='root',
                               password='123456', db='articles', charset='utf8')
        cursor = conn.cursor()
        # parameterized INSERT: pymysql fills in %s safely,
        # which guards against SQL injection
        insert_sql = """
                    insert into info(info_title, author)
                    values(%s, %s)
                """
        cursor.execute(insert_sql, (data["info_title"], data["author"]))

        print('save to mysql', data)
        conn.commit()
        cursor.close()
        conn.close()
    except Exception as e:
        # note: 'wrong' + e would raise TypeError (str + Exception),
        # so pass e as a separate print argument
        print('wrong:', e)
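A slightly more robust variant of the function above closes the connection even when the insert fails, using try/finally and a with block for the cursor. This is a sketch, not the original author's code; the name save_to_mysql_v2 is mine.

```python
def save_to_mysql_v2(data):
    # imported inside the function so this sketch stays self-contained
    import pymysql
    conn = pymysql.connect(host='localhost', port=3306, user='root',
                           password='123456', db='articles', charset='utf8')
    try:
        # the with block closes the cursor automatically
        with conn.cursor() as cursor:
            cursor.execute(
                "insert into info(info_title, author) values(%s, %s)",
                (data["info_title"], data["author"]),
            )
        conn.commit()
    finally:
        # runs whether or not the insert raised, so no leaked connections
        conn.close()
```

Compared with catching every exception and printing it, letting errors propagate here makes failures visible to the calling crawl loop.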

Step 3: After extracting a record into data = {xxxxxx}, call save_to_mysql(data).
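Concretely, the dict passed to save_to_mysql must carry the two keys the INSERT expects. The values below are placeholders; in a real crawler they would come from parsing the response of requests.get(url).

```python
# Hypothetical extraction result with the keys save_to_mysql expects.
data = {
    "info_title": "Example article title",
    "author": "Example author",
}
# save_to_mysql(data)  # writes one row into articles.info
```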


Reposted from blog.csdn.net/xiongzaiabc/article/details/81008094