
How should data storage be handled in a Python multithreaded crawler?

小樊
133
2024-11-29 19:37:00
Column: Cloud Computing

In Python, a multithreaded crawler with data storage can be implemented in the following steps:

  1. Import the required libraries:
import threading
import requests
from bs4 import BeautifulSoup
import json
import sqlite3
  2. Create a database connection:
def create_connection():
    # check_same_thread=False allows the connection to be used from worker
    # threads (combine this with a lock to serialize writes)
    conn = sqlite3.connect('data.db', check_same_thread=False)
    return conn
  3. Create a table for storing the data (if it does not already exist):
def create_table(conn):
    cursor = conn.cursor()
    cursor.execute('''CREATE TABLE IF NOT EXISTS web_data (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        url TEXT NOT NULL,
                        title TEXT NOT NULL,
                        content TEXT NOT NULL
                    )''')
    conn.commit()
  4. Define a function to process the crawled data:
def process_data(url, title, content):
    # Clean, parse, or otherwise transform the data here
    return {
        'url': url,
        'title': title,
        'content': content
    }
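For example, process_data could be extended to normalize whitespace and cap how much page text is stored. This is a hypothetical refinement, not part of the original example; max_len is an assumed parameter:

```python
def process_data(url, title, content, max_len=500):
    # Hypothetical cleaning step: collapse runs of whitespace
    # and truncate very long page text before storing it
    cleaned = ' '.join(content.split())
    return {
        'url': url,
        'title': title.strip(),
        'content': cleaned[:max_len],
    }

row = process_data('https://example.com', '  Demo  ', 'hello\n\n  world')
print(row['title'])    # Demo
print(row['content'])  # hello world
```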
  5. Define a function to save the data to the database:
# A sqlite3 connection is not safe for concurrent use, so guard writes with a lock
db_lock = threading.Lock()

def save_data(conn, data):
    with db_lock:
        cursor = conn.cursor()
        cursor.execute('''INSERT INTO web_data (url, title, content)
                            VALUES (?, ?, ?)''', (data['url'], data['title'], data['content']))
        conn.commit()
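When many pages are crawled, inserting rows one at a time can be slow. A batch variant can write several records in one transaction using executemany; the sketch below uses an in-memory database for the demo, and save_many is a hypothetical helper, not part of the original code:

```python
import sqlite3

def save_many(conn, rows):
    # Hypothetical helper: insert several records in a single transaction
    cursor = conn.cursor()
    cursor.executemany('''INSERT INTO web_data (url, title, content)
                          VALUES (?, ?, ?)''',
                       [(r['url'], r['title'], r['content']) for r in rows])
    conn.commit()

# Demo against an in-memory database with the same schema as above
conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE web_data (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    url TEXT NOT NULL,
                    title TEXT NOT NULL,
                    content TEXT NOT NULL)''')
save_many(conn, [
    {'url': 'https://example.com/a', 'title': 'A', 'content': 'aaa'},
    {'url': 'https://example.com/b', 'title': 'B', 'content': 'bbb'},
])
print(conn.execute('SELECT COUNT(*) FROM web_data').fetchone()[0])  # 2
```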
  6. Define the crawler function that will run in multiple threads:
def crawl(url, title, conn):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        # Prefer the page's own <title> when one is available
        if soup.title and soup.title.string:
            title = soup.title.string.strip()
        content = soup.get_text()
        data = process_data(url, title, content)
        save_data(conn, data)
    except Exception as e:
        print(f"Error while processing {url}: {e}")
  7. Define a function to start multiple threads:
def start_threads(urls, num_threads):
    conn = create_connection()
    create_table(conn)

    threads = []
    for url in urls:
        # Each URL gets its own thread; the title defaults to the URL
        thread = threading.Thread(target=crawl, args=(url, url, conn))
        threads.append(thread)
        thread.start()
        # Run at most num_threads threads at a time
        if len(threads) >= num_threads:
            for t in threads:
                t.join()
            threads = []

    for thread in threads:
        thread.join()

    conn.close()
  8. Prepare the list of URLs to crawl and set the number of threads:
urls = [
    'https://example.com/page1',
    'https://example.com/page2',
    # ...
]
num_threads = 10
  9. Start the multithreaded crawler:
start_threads(urls, num_threads)

This example stores the data in an SQLite database. You can swap in another database such as MySQL or PostgreSQL if needed, and adjust the data-processing and storage logic to suit your use case.
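As an alternative to managing threading.Thread objects by hand, the standard library's concurrent.futures.ThreadPoolExecutor manages the pool for you. Below is a minimal sketch of the same idea; fetch is a stand-in for the real HTTP request so the example runs offline, and the pages table is simplified from the schema above:

```python
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real HTTP fetch, so the sketch runs offline
def fetch(url):
    return f"<title>{url}</title>"

db_lock = threading.Lock()

def crawl_one(conn, url):
    content = fetch(url)
    with db_lock:  # serialize writes on the shared connection
        conn.execute('INSERT INTO pages (url, content) VALUES (?, ?)',
                     (url, content))
        conn.commit()

conn = sqlite3.connect(':memory:', check_same_thread=False)
conn.execute('CREATE TABLE pages (url TEXT, content TEXT)')

urls = [f'https://example.com/page{i}' for i in range(5)]
# The executor caps concurrency at max_workers and joins all
# threads when the with-block exits
with ThreadPoolExecutor(max_workers=3) as pool:
    for url in urls:
        pool.submit(crawl_one, conn, url)

print(conn.execute('SELECT COUNT(*) FROM pages').fetchone()[0])  # 5
```

Compared with the manual batching in start_threads, the executor reuses a fixed set of worker threads instead of creating one thread per URL.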
