How to do simple scraping with the requests library and store the data in MySQL

Using just the requests library for scraping, the data can be stored in MySQL as follows.

Step 1:

Create the database (articles) and the data table (info) in Navicat or MySQL-Front.
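
If you would rather do this step in code than through a GUI client, a minimal pymysql sketch might look like the following. The column types are my own assumptions; the original article only names the two columns used by the insert in step 2.

    import pymysql

    # Assumption: local MySQL server with the same credentials used later in this post
    conn = pymysql.connect(host='localhost', port=3306, user='root', password='123456', charset="utf8")
    cur = conn.cursor()
    cur.execute("create database if not exists articles default character set utf8")
    cur.execute("use articles")
    # Column types are assumed; only the column names appear in the original
    cur.execute("""
        create table if not exists info(
            id int auto_increment primary key,
            info_title varchar(200),
            author varchar(100))
        """)
    conn.commit()
    cur.close()
    conn.close()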

Step 2:

Define the save function:

import pymysql

def save_to_mysql(data):
    try:
        # Connect to the articles database on the local MySQL server
        conn = pymysql.connect(host='localhost', port=3306, user='root', password='123456', db='articles', charset="utf8")

        cursor = conn.cursor()
        # Parameterized insert: pymysql substitutes the %s placeholders safely
        insert_sql = """
                    insert into info(info_title, author)
                    values(%s, %s)
                """
        cursor.execute(insert_sql, (data["info_title"], data["author"]))

        print('save to mysql', data)
        conn.commit()
        cursor.close()
        conn.close()
    except Exception as e:
        print('wrong', e)  # report the error without crashing the crawl
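
A quick way to check the function and the table together (my own test snippet, not from the original) is to pass in a hand-written record:

    data = {"info_title": "test title", "author": "test author"}
    save_to_mysql(data)  # should print: save to mysql {'info_title': 'test title', 'author': 'test author'}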

Step 3: After extracting a concrete data = {xxxxxx}, call save_to_mysql(data) right after it.

# Reference: https://blog.csdn.net/xiongzaiabc/article/details/81008094
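
The original does not show the extraction code, but the calling side might look roughly like this sketch. The URL and the regular expressions are placeholders for whatever the target page actually requires:

    import re
    import requests

    def crawl(url):
        resp = requests.get(url, timeout=10)
        resp.encoding = 'utf-8'
        # Placeholder patterns -- adapt them to the real page structure
        titles = re.findall(r'<h2 class="title">(.*?)</h2>', resp.text)
        authors = re.findall(r'<span class="author">(.*?)</span>', resp.text)
        for info_title, author in zip(titles, authors):
            data = {"info_title": info_title, "author": author}
            save_to_mysql(data)  # one call per extracted record

    crawl('https://example.com/articles')  # placeholder URL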

A snippet I tested myself; it goes inside the for loop. If you put it at the same level as the for loop, only the last record of each page is saved (a schematic demo follows the code below):

        # Save the data to MySQL
        conn = pymysql.connect(host='localhost', port=3306, user='root', password='123456', db='xiaozhu', charset="utf8")
        cur = conn.cursor()
        # One-time table setup (run once, then leave commented out):
        # cur.execute('drop table if exists xiaozhuduanzu')
        # cur.execute('''create table xiaozhuduanzu(
        #         id int auto_increment primary key,
        #         title varchar(200),
        #         price varchar(200),
        #         infos varchar(200),
        #         dianping varchar(200),
        #         tubiao varchar(400),
        #         picture varchar(1000),
        #         picturelink varchar(1000))
        #         ''')
        insert_sql = """
                      insert into xiaozhuduanzu(title, price, infos, dianping, tubiao, picture, picturelink)
                      values(%s, %s, %s, %s, %s, %s, %s)
                     """
        cur.execute(insert_sql, (title, price, infos, dianping, tubiao, picture, picturelink))
        conn.commit()
        cur.close()
        conn.close()
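
To see why the placement matters, here is a schematic demo with no scraping or database involved: the field variables are overwritten on every loop iteration, so a save that runs after the loop only ever sees the final values.

    items = [('Room A', '328'), ('Room B', '458'), ('Room C', '519')]

    saved_inside = []
    for title, price in items:
        saved_inside.append((title, price))   # save inside the loop: every record kept
    print(saved_inside)   # [('Room A', '328'), ('Room B', '458'), ('Room C', '519')]

    # Same loop, but saving after it: title and price still hold
    # the values from the last iteration only
    saved_outside = [(title, price)]
    print(saved_outside)  # [('Room C', '519')]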

Written on 2019-04-27 21:52


Reposted from blog.csdn.net/hellenlee22/article/details/89608426