When writing a crawler in Python, there are several common ways to store the scraped data:

1. Save the data to a file (for example, JSON):
import json
data = {"title": "example", "content": "This is an example."}
# Save the data to a JSON file; ensure_ascii=False keeps non-ASCII text readable
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False)
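If the crawler yields many records, a common variant is to collect them in a list and write the whole list as one JSON array. A minimal sketch (the records list here is illustrative, not from the original example):

import json

# Hypothetical records collected by a crawler
records = [
    {"title": "example 1", "content": "First page."},
    {"title": "example 2", "content": "Second page."},
]

# Write all records as a single JSON array
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)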
2. Save the data to a database (MySQL or MongoDB). First, install the database client libraries:
pip install pymysql pymongo
Then connect to the database and save the data:
import pymysql
import pymongo
# Connect to the MySQL database
conn = pymysql.connect(host="localhost", user="username", password="password", db="database_name")
cursor = conn.cursor()

# Insert one row with a parameterized query (guards against SQL injection)
sql = "INSERT INTO table_name (title, content) VALUES (%s, %s)"
data = ("example", "This is an example.")
cursor.execute(sql, data)
conn.commit()
# Connect to the MongoDB database and insert one document
client = pymongo.MongoClient("mongodb://localhost:27017/")
db = client["database_name"]
collection = db["collection_name"]
document = {"title": "example", "content": "This is an example."}
collection.insert_one(document)
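When the crawler collects many records at once, both drivers support bulk inserts, which are much faster than one call per record. A minimal sketch, assuming the conn, cursor, and collection objects from the snippet above are still open (the rows list is illustrative):

# Hypothetical batch of scraped (title, content) pairs
rows = [("example 1", "First page."), ("example 2", "Second page.")]

# MySQL: executemany runs the parameterized INSERT once per tuple
cursor.executemany("INSERT INTO table_name (title, content) VALUES (%s, %s)", rows)
conn.commit()

# MongoDB: insert_many writes a list of documents in a single call
collection.insert_many([{"title": t, "content": c} for t, c in rows])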
3. Send the data to a web API. First, install the requests library:
pip install requests
Then POST the data to the API:
import requests
url = "https://api.example.com/data"
data = {"title": "example", "content": "This is an example."}
response = requests.post(url, json=data)
if response.status_code == 200:
    print("Data saved successfully.")
else:
    print("Error saving data:", response.status_code)
Choose the storage method that fits your needs and data type: a file is simplest for small one-off jobs, a database suits structured data you need to query, and an API fits when another service owns the storage.
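For completeness, a minimal end-to-end sketch that fetches a page and stores the result with the file method above (the URL and the naive title extraction are illustrative assumptions, not part of the original examples; a real crawler would use an HTML parser such as BeautifulSoup):

import json
import requests

# Hypothetical target page; replace with the site you are scraping
url = "https://example.com"
response = requests.get(url, timeout=10)
response.raise_for_status()

# Naive title extraction, for illustration only
html = response.text
title = html.split("<title>")[1].split("</title>")[0] if "<title>" in html else ""

record = {"url": url, "title": title, "length": len(html)}

# Store the scraped record as JSON
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(record, f, ensure_ascii=False)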