An example crawler that collects Chinese university ranking information

The university ranking website is as follows:

[ShanghaiRanking (软科)] The latest 2023 ShanghaiRanking Best Chinese Universities Ranking | Ranking of the best universities in China

Web page content:

(Screenshot: the ranking table as displayed on the page)

Building the university ranking crawler involves three key steps:

First, fetch the web page content from the Internet;

Second, parse the page content and extract the useful data into an appropriate data structure;

Third, use that data structure to display or further process the data.

Since a university ranking is typical two-dimensional (tabular) data, a two-dimensional list (a list of lists) is used to store the table rows involved in the ranking, as sketched below.
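A minimal sketch of that structure, with purely illustrative rows rather than crawled results:

# Each inner list is one table row:
# [rank, name, province, type, total score, education level]
data = [
    ['1', 'University A', 'Beijing', 'Comprehensive', '999.9', '1'],
    ['2', 'University B', 'Shanghai', 'Comprehensive', '888.8', '2'],
]
for rank, name, province, univ_type, total_score, level in data:
    print(rank, name, total_score)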

Find CSS selectors for targeting HTML elements

(Screenshot: browser developer tools showing the CSS selector path for the ranking table)
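Before writing the full parser, the selectors can be sanity-checked in isolation. The sketch below assumes the ranking table carries the class rk-table, as used in the full code that follows:

import bs4
import requests

html_text = requests.get(
    "https://www.shanghairanking.cn/rankings/bcur/2023",
    headers={'User-Agent': 'Mozilla/5.0'},
    timeout=10,
).text
soup = bs4.BeautifulSoup(html_text, "html.parser")
# One <tr> per university inside the ranking table
rows = soup.select("table.rk-table tbody tr")
print(len(rows))  # expect around 30 rows (the first page of the ranking)
print(rows[0].get_text(" ", strip=True))  # visible text of the first row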

Relevant Python code:

# @Time: 2023/6/7 17:11
# @Author:
# @File: 实验八网络爬虫.py
# @software: PyCharm
import requests
import bs4
import csv

# Fetch the web page content
def get_html(url):
    response = requests.get(
        url=url,
        headers={
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36'
        }
    )
    # Use the encoding detected from the response body so Chinese text is not garbled
    response.encoding = response.apparent_encoding
    html_text = response.text
    return html_text

# Parse the page and extract the useful data into a two-dimensional list
def parse_html(html_text):
    soup = bs4.BeautifulSoup(html_text, "html.parser")
    table = soup.select("table.rk-table")[0]
    tbody = table.select("tbody")[0]
    rows = tbody.find_all("tr")
    data = []
    for row in rows:
        cols = row.find_all("td")
        # Column order on the page: rank, name, province, type, total score, education level
        rank = cols[0].get_text().strip()
        name = cols[1].select_one("div.univname div:nth-child(1) a").get_text().strip()
        province = cols[2].get_text().strip()
        univ_type = cols[3].get_text().strip()
        total_score = cols[4].get_text().strip()
        education_level = cols[5].get_text().strip()
        data.append([rank, name, province, univ_type, total_score, education_level])
    return data

# Write the data to a CSV file and print each university's information
def write_to_csv(data):
    # 'w' mode creates the file (overwriting any previous run), so the header is written exactly once
    with open('中国大学排名.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['排名', '学校名称', '省市', '类型', '总分', '办学层次'])
        print("{:<5s}{:<13s}{:<10s}{:<10s}{:<10s}{:<10s}".format('排名', '学校名称', '省市', '类型', '总分', '办学层次'))
        for item in data:
            writer.writerow(item)
            print("{:<5s}{:<13s}{:<10s}{:<10s}{:<10s}{:<10s}".format(item[0], item[1], item[2], item[3], item[4], item[5]))

def main():
    # Fetch the web page content from the Internet
    url = "https://www.shanghairanking.cn/rankings/bcur/2023"
    html_text = get_html(url)

    # Parse the page content and extract the useful data into the data structure
    data = parse_html(html_text)

    # Write the data to the CSV file and print each university's information
    write_to_csv(data)

if __name__ == '__main__':
    main()
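As written, get_html has no timeout and treats HTTP error pages as success. A hardened variant is sketched below; the timeout, retry count, and sleep interval are illustrative choices, not part of the original code:

import time

def get_html_robust(url, retries=3, timeout=10):
    # Retry a few times, raising on HTTP error status codes
    for attempt in range(retries):
        try:
            response = requests.get(
                url=url,
                headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'},
                timeout=timeout,
            )
            response.raise_for_status()
            response.encoding = response.apparent_encoding
            return response.text
        except requests.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(2)  # brief pause before retrying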

The crawler captures the information of the top 30 universities (the first page of the ranking) and generates a CSV file.
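The first page of the ranking lists 30 universities, which is why 30 rows end up in the CSV. If the parsed list ever contained more rows, a one-line slice in main() would keep only the top 30; this is a suggested tweak, not part of the original code:

data = parse_html(html_text)[:30]  # keep only the top 30 rows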

Printed output:

(Screenshot: console printout of each university's information)

CSV file content:

(Screenshot: contents of the generated 中国大学排名.csv)

Source: blog.csdn.net/m0_74972727/article/details/131129445