Today I'd like to talk about how to use a Python crawler to scrape the Baidu Index. Many readers may not be familiar with this, so to help you understand it better I have put together the notes below; I hope you get something out of this article.
The method is as follows:
import requests
import sys
import time

word_url = 'http://index.baidu.com/api/SearchApi/thumbnail?area=0&word={}'
COOKIES = ''  # paste the Cookie value of a logged-in Baidu session here
headers = {
    'Accept': 'application/json, text/plain, */*',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cache-Control': 'no-cache',
    'Cookie': COOKIES,
    'DNT': '1',
    'Host': 'index.baidu.com',
    'Pragma': 'no-cache',
    'Proxy-Connection': 'keep-alive',
    'Referer': 'http://index.baidu.com/v2/main/index.html',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.90 Safari/537.36',
    'X-Requested-With': 'XMLHttpRequest',
}
def decrypt(t, e):
    # t is the decryption key (ptbk) returned by Baidu; e is the encrypted index data.
    # The second half of the key gives the replacement character for the matching
    # position in the first half, and each character of e is translated via that mapping.
    n = list(t)
    a = {}
    result = []
    ln = int(len(n) / 2)
    start = n[ln:]
    end = n[:ln]
    for j, k in zip(start, end):
        a.update({k: j})
    for j in e:
        result.append(a.get(j))
    return ''.join(result)
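# For illustration only (the real key is much longer): if t were 'abc123' and
# e were 'cab', the mapping built above would be {'a': '1', 'b': '2', 'c': '3'},
# so decrypt('abc123', 'cab') would return '312'.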
def get_ptbk(uniqid):
    # Fetch the decryption key (ptbk) that matches the uniqid returned alongside the index data.
    url = 'http://index.baidu.com/Interface/ptbk?uniqid={}'
    resp = requests.get(url.format(uniqid), headers=headers)
    if resp.status_code != 200:
        print('Failed to get ptbk')
        sys.exit(1)
    return resp.json().get('data')
def get_index_data(keyword, start='2011-01-03', end='2019-08-05'):
    # keyword is a nested list such as [[{"name": "...", "wordType": 1}]];
    # the API expects it serialized with double quotes, hence the replace below.
    keyword = str(keyword).replace("'", '"')
    url = f'http://index.baidu.com/api/SearchApi/index?area=0&word={keyword}&startDate={start}&endDate={end}'
    resp = requests.get(url, headers=headers)
    if resp.status_code != 200:
        print('Failed to get index data')
        sys.exit(1)
    content = resp.json()
    data = content.get('data')
    user_indexes = data.get('userIndexes')[0]
    uniqid = data.get('uniqid')
    ptbk = get_ptbk(uniqid)
    while ptbk is None or ptbk == '':
        ptbk = get_ptbk(uniqid)
    all_data = user_indexes.get('all').get('data')
    result = decrypt(ptbk, all_data)
    result = result.split(',')
    print(result)
if __name__ == '__main__':
    words = [[{"name": "酷安", "wordType": 1}]]
    get_index_data(words)
Output: the script prints the decrypted daily index values for the keyword as a Python list.
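A note on the Cookie: index.baidu.com generally only returns data for a logged-in session, so COOKIES needs to hold the Cookie header copied from your browser's developer tools after logging in. As a minimal sketch (assuming the cookie is exported in an environment variable named BAIDU_INDEX_COOKIE, a name chosen here purely for illustration), you could avoid hardcoding it:

import os

COOKIES = os.environ.get('BAIDU_INDEX_COOKIE', '')  # hypothetical variable name; empty string if unset
headers['Cookie'] = COOKIES  # refresh the header after loading the cookie

The rest of the script stays the same; without a valid cookie the API calls above are likely to fail.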
Having read the above, do you now have a better understanding of how to use a Python crawler to scrape the index? If you would like to learn more about this or related topics, please follow the 億速云 industry news channel. Thank you for your support.