In Python, you can use the `requests` and `random` libraries to rotate proxy IPs dynamically while crawling. Here is a simple example:
1. Install the `requests` library: `pip install requests`
2. Create a file named `proxies.txt` with one proxy IP per line:

```
http://proxy1.example.com:8080
http://proxy2.example.com:8080
http://proxy3.example.com:8080
```
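
Free proxy lists go stale quickly, so you may want to filter out dead entries before crawling. The following is a minimal sketch, not part of the original example; the `check_proxy` helper and the choice of https://httpbin.org/ip as a test endpoint are assumptions made for illustration:

```python
import requests

def check_proxy(proxy, test_url="https://httpbin.org/ip", timeout=5):
    # Hypothetical helper: returns True if the proxy can fetch the
    # test URL within the timeout, False otherwise.
    try:
        response = requests.get(
            test_url,
            proxies={"http": proxy, "https": proxy},
            timeout=timeout,
        )
        response.raise_for_status()
        return True
    except requests.exceptions.RequestException:
        return False

# Keep only the proxies that actually respond.
with open('proxies.txt', 'r') as f:
    candidates = [line.strip() for line in f if line.strip()]
working = [p for p in candidates if check_proxy(p)]
print(f"{len(working)} of {len(candidates)} proxies are usable")
```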
3. Create a Python file named `dynamic_crawler.py` that uses the `requests` and `random` libraries to pick a random proxy IP from the list and send the HTTP request through it:

```python
import requests
import random

# Read the proxy IP list from the file, skipping blank lines
with open('proxies.txt', 'r') as f:
    proxies = [line.strip() for line in f if line.strip()]

def get_proxy():
    # Randomly choose one proxy IP from the list
    return random.choice(proxies)

def fetch_url(url):
    proxy = get_proxy()
    try:
        # Route both HTTP and HTTPS traffic through the chosen proxy
        response = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=5)
        response.raise_for_status()
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url} using proxy {proxy}: {e}")
        return None

if __name__ == "__main__":
    url = "https://example.com"
    content = fetch_url(url)
    if content:
        print(f"Content of {url}:")
        print(content)
```
In this example, `get_proxy` randomly selects one proxy IP from the list loaded from `proxies.txt`, and `fetch_url` sends the HTTP request through that proxy. If the request succeeds, the response body is returned; otherwise an error message is printed and `None` is returned.
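
Because any single proxy can fail, one natural extension is to retry the request with a different randomly chosen proxy before giving up. The sketch below builds on the `get_proxy` helper and imports from `dynamic_crawler.py` above; the `fetch_url_with_retries` name and the `retries` parameter are hypothetical additions, not part of the original example:

```python
def fetch_url_with_retries(url, retries=3):
    # Try up to `retries` randomly chosen proxies before giving up.
    for attempt in range(1, retries + 1):
        proxy = get_proxy()
        try:
            response = requests.get(
                url,
                proxies={"http": proxy, "https": proxy},
                timeout=5,
            )
            response.raise_for_status()
            return response.text
        except requests.exceptions.RequestException as e:
            print(f"Attempt {attempt}/{retries} via proxy {proxy} failed: {e}")
    return None
```

With a reasonably long proxy list, this keeps one bad proxy from failing the whole crawl.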