In Python, you can use the ThreadPoolExecutor class from the concurrent.futures module to manage the thread pool of a multi-threaded crawler. Here is a simple example:
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor, as_completed
def fetch_and_parse(url):
    try:
        # A timeout keeps a slow server from blocking a worker thread indefinitely
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        # Extract the desired data here; guard against pages with no <title> tag
        data = soup.title.string if soup.title else None
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
        return None
def fetch_and_parse_urls(urls):
    results = []
    with ThreadPoolExecutor(max_workers=10) as executor:
        future_to_url = {executor.submit(fetch_and_parse, url): url for url in urls}
        for future in as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
                if data:
                    results.append((url, data))
            except Exception as e:
                print(f"Error processing {url}: {e}")
    return results
urls = [
    "https://www.example.com",
    "https://www.example2.com",
    "https://www.example3.com",
    # More URLs...
]
Then call the fetch_and_parse_urls function to process these URLs:

results = fetch_and_parse_urls(urls)
for url, data in results:
    print(f"URL: {url}, Data: {data}")
In this example, we use ThreadPoolExecutor to create a thread pool with a maximum of 10 worker threads. The fetch_and_parse_urls function takes a list of URLs and uses the pool to fetch and parse them in parallel. The as_completed function yields each future as soon as its task finishes, so results are collected in completion order rather than submission order. Finally, we print the results.
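
If you want the results in the same order as the input URLs instead of completion order, executor.map is a simpler alternative to submit/as_completed. The sketch below is a minimal variant that reuses the fetch_and_parse function and imports defined above; the name fetch_and_parse_urls_ordered is just an illustrative choice.

def fetch_and_parse_urls_ordered(urls):
    with ThreadPoolExecutor(max_workers=10) as executor:
        # map applies fetch_and_parse to each URL and yields the results
        # in input order, blocking until each one is ready
        parsed = executor.map(fetch_and_parse, urls)
        # Pair each URL with its result, dropping failed fetches (None)
        return [(url, data) for url, data in zip(urls, parsed) if data]

Note that executor.map re-raises any exception a worker thread raised when you iterate over its results, so this variant relies on fetch_and_parse catching its own errors, as it does above.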