1. Application scenario
In some cases, monitoring MongoDB's port and status through Zabbix alone is not enough; monitoring the MongoDB logs is also important — for example, to catch a mongos reporting a SocketException when connecting to a backend shard.
2. Analyzing MongoDB logs with Logstash
To record slow queries, first enable the slow-query profiler:
use jd05;
db.setProfilingLevel(1,50)
{ "was" : 1, "slowms" : 50, "ok" : 1 }
A level of 1 records only slow operations; any operation slower than 50 ms will be recorded.
A level of 2 records every operation. This is not recommended in production, but can be useful in development:
db.setProfilingLevel(2)
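The effect of the profiling level on which operations get recorded can be sketched as follows — a minimal model of the levels described above, not MongoDB's actual implementation:

```python
def is_profiled(duration_ms, level, slowms=100):
    """Model of MongoDB profiling levels:
    0 = profiler off, 1 = record ops slower than slowms, 2 = record everything."""
    if level == 2:
        return True
    if level == 1:
        return duration_ms > slowms
    return False  # level 0: profiler disabled


# With setProfilingLevel(1, 50): only ops slower than 50 ms are recorded.
print(is_profiled(340, 1, slowms=50))  # → True
print(is_profiled(20, 1, slowms=50))   # → False
```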
The MongoDB log file then records operation information such as:
Mon Apr 27 16:45:01.853 [conn282854698] command jd01.$cmd command: { count: "player", query: { request_time: { $gte: 1430123701 } } } ntoreturn:1 keyUpdates:0 numYields: 7 locks(micros) r:640822 reslen:48 340ms
The Logstash configuration file shipper_mongodb.conf is as follows:
input {
  file {
    path => "/data/app_data/mongodb/log/*.log"
    type => "mongodb"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "mongodb" {
    grok {
      match => ["message", "(?m)%{GREEDYDATA} \[conn%{NUMBER:mongoConnection}\] %{WORD:mongoCommand} %{WORD:mongoDatabase}.%{NOTSPACE:mongoCollection} %{WORD}: \{ %{GREEDYDATA:mongoStatement} \} %{GREEDYDATA} %{NUMBER:mongoElapsedTime:int}ms"]
      add_tag => "mongodb"
    }
    grok {
      match => ["message", " cursorid:%{NUMBER:mongoCursorId}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " ntoreturn:%{NUMBER:mongoNumberToReturn:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " ntoskip:%{NUMBER:mongoNumberToSkip:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nscanned:%{NUMBER:mongoNumberScanned:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " scanAndOrder:%{NUMBER:mongoScanAndOrder:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " idhack:%{NUMBER:mongoIdHack:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nmoved:%{NUMBER:mongoNumberMoved:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nupdated:%{NUMBER:mongoNumberUpdated:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " keyUpdates:%{NUMBER:mongoKeyUpdates:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " numYields: %{NUMBER:mongoNumYields:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " locks\(micros\) r:%{NUMBER:mongoReadLocks:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " locks\(micros\) w:%{NUMBER:mongoWriteLocks:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nreturned:%{NUMBER:mongoNumberReturned:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " reslen:%{NUMBER:mongoResultLength:int}"]
      add_tag => "mongo_profiling_data"
    }
    if "mongo_profiling_data" in [tags] {
      mutate {
        remove_tag => "_grokparsefailure"
      }
    }
    if "_grokparsefailure" in [tags] {
      grep {
        match => ["message", "(Failed|error|SOCKET)"]
        add_tag => ["zabbix-sender"]
        add_field => [
          "zabbix_host", "%{host}",
          "zabbix_item", "mongo.error"
          # "send_field","%{message}"
        ]
      }
      mutate {
        remove_tag => "_grokparsefailure"
      }
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
  zabbix {
    tags => "zabbix-sender"
    host => "zabbixserver"
    port => "10051"
    zabbix_sender => "/usr/local/zabbix/bin/zabbix_sender"
  }
  redis {
    host => "10.4.29.162"
    data_type => "list"
    key => "logstash"
  }
}
The configuration file works in several steps:
The Logstash file input plugin reads the MongoDB log files from the /data/app_data/mongodb/log/ directory, and the filters then parse the log content.
If a log line contains profiling keywords such as cursorid or nreturned, the corresponding values are extracted and the mongo_profiling_data tag is added for later statistics.
All other log lines are matched against error keywords; if one matches, an alert is sent through Zabbix.
Note that when sending alerts with the zabbix output plugin, you must filter on a keyword first, and the event needs the zabbix_host, zabbix_item, and zabbix_field fields. The zabbix_item value must match the item configured in the Zabbix frontend. If zabbix_field is not specified, the message field is sent by default.
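The main grok pattern above can be approximated in plain Python. This sketch (with capture names chosen to mirror the grok fields, plus a simplified per-metric pattern standing in for the smaller grok blocks) parses the sample log line shown earlier:

```python
import re

# Rough Python equivalent of the main grok pattern in the config above.
LOG_RE = re.compile(
    r"\[conn(?P<mongoConnection>\d+)\] "
    r"(?P<mongoCommand>\w+) "
    r"(?P<mongoDatabase>\w+)\.(?P<mongoCollection>\S+) "
    r"\w+: \{ (?P<mongoStatement>.*) \} "
    r".* (?P<mongoElapsedTime>\d+)ms"
)

# A few of the per-metric patterns (cursorid, nreturned, ... in the config).
METRIC_RE = re.compile(r" (ntoreturn|nscanned|nreturned|keyUpdates|reslen):(\d+)")

line = ('Mon Apr 27 16:45:01.853 [conn282854698] command jd01.$cmd '
        'command: { count: "player", query: { request_time: { $gte: 1430123701 } } } '
        'ntoreturn:1 keyUpdates:0 numYields: 7 locks(micros) r:640822 reslen:48 340ms')

fields = LOG_RE.search(line).groupdict()
fields["mongoElapsedTime"] = int(fields["mongoElapsedTime"])  # like :int in grok
fields.update({k: int(v) for k, v in METRIC_RE.findall(line)})
print(fields["mongoDatabase"], fields["mongoElapsedTime"], fields["ntoreturn"])
# → jd01 340 1
```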
Add the Zabbix template:


In the same way, Zabbix alerts can be sent for PHP-FPM, Nginx, Redis, MySQL, and so on.
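Under the hood, the zabbix_sender binary that the output plugin shells out to speaks Zabbix's trapper ("sender data") protocol. A minimal sketch of that packet format, assuming the standard ZBXD header — the host name and message value here are hypothetical, while the item key matches the config above:

```python
import json
import struct

def build_sender_packet(host, key, value):
    """Build one Zabbix trapper packet, as zabbix_sender does: a 'ZBXD'
    signature, protocol version 0x01, a 64-bit little-endian body length,
    then a JSON body with host/key/value triples."""
    body = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": value}],
    }).encode("utf-8")
    return b"ZBXD\x01" + struct.pack("<Q", len(body)) + body

# Hypothetical host and message; "mongo.error" is the item key from the config.
packet = build_sender_packet("db01", "mongo.error", "SocketException on mongos")
print(packet[:5])  # → b'ZBXD\x01'
```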
然后要做的就是根據不同的字段定義不同的圖表
參考文檔:
http://techblog.holidaycheck.com/profiling-mongodb-with-logstash-and-kibana/
http://tech.rhealitycheck.com/visualizing-mongodb-profiling-data-using-logstash-and-kibana/
http://www.logstash.net/docs/1.4.2/outputs/zabbix