In Python, you can build a multithreaded crawler with `ThreadPoolExecutor` or `ProcessPoolExecutor` from the `concurrent.futures` module. To assign task priorities, use `queue.PriorityQueue` to store and order the tasks. Here is a simple example:
import queue

import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor

def process_url(url):
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Process the data here, e.g. extract information or store it in a database
    print(f"Processed {url}")

def worker(work_queue):
    # empty() is racy when producers add items concurrently; it is safe here
    # because the queue is fully populated before the workers start.
    while not work_queue.empty():
        priority, url = work_queue.get()
        try:
            process_url(url)
        finally:
            work_queue.task_done()

url_queue = queue.PriorityQueue()
# PriorityQueue pops the smallest value first, so a lower number means a
# higher priority; negating the priority makes larger "importance" values
# run first.
url_queue.put((-3, 'https://example.com/high-priority-url'))
url_queue.put((-1, 'https://example.com/low-priority-url'))
url_queue.put((-2, 'https://example.com/medium-priority-url'))

num_threads = 3
with ThreadPoolExecutor(max_workers=num_threads) as executor:
    # Submit the workers to the pool rather than spawning raw threads,
    # so the executor actually manages them.
    for _ in range(num_threads):
        executor.submit(worker, url_queue)
    url_queue.join()
In this example, tasks are stored in a priority queue; the entry with the smallest priority value is dequeued first, so a smaller number means a higher priority. Worker threads pull tasks from the queue and process them, which lets each task's priority determine its execution order.
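One subtlety worth knowing: `PriorityQueue` compares whole tuples, so two entries with the same priority fall back to comparing the URL strings. A minimal sketch (using hypothetical example URLs) shows how adding a monotonically increasing counter as a tie-breaker preserves insertion order among equal priorities:

```python
import itertools
import queue

counter = itertools.count()
pq = queue.PriorityQueue()

# (priority, tie-breaker, url): the counter decides ties so that
# equal-priority items come out in the order they were inserted.
pq.put((1, next(counter), 'https://example.com/b'))
pq.put((0, next(counter), 'https://example.com/a'))
pq.put((1, next(counter), 'https://example.com/c'))

order = [pq.get()[2] for _ in range(3)]
print(order)
# → ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']
```

Without the counter, a tie on priority would compare the URLs themselves, which may not be orderable (or may order arbitrarily) for richer task objects.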