Lu Chunli's work notes: who says programmers can't have a literary side?
1. Safe mode
When the system starts it automatically enters safe mode; if the DataNodes are healthy, safe mode is exited automatically after about 30 seconds.
(1) Parameter definitions
dfs.replication: the number of replicas each data block should have;
dfs.replication.min: the minimum number of replicas required for a data block;
dfs.safemode.threshold.pct: the fraction of blocks that must satisfy the minimum-replication requirement.
a. Below this fraction, the system switches into safe mode and re-replicates blocks;
b. Above this fraction, the system leaves safe mode: there are enough block replicas to serve clients.
c. A value less than or equal to 0 means safe mode is never entered; a value of 1 means the system stays in safe mode permanently.
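For reference, these parameters would be set in hdfs-site.xml roughly as follows. This is an illustrative fragment with example values, not a recommended configuration; note that in Hadoop 2.x the latter two properties are usually spelled dfs.namenode.replication.min and dfs.namenode.safemode.threshold-pct (the older names are deprecated aliases).

```xml
<!-- Illustrative hdfs-site.xml fragment; values are examples only. -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
</property>
```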
(2) Why dfs.replication.min exists
The replica count follows dfs.replication. If a node fails, a block's replica count drops; once it falls below dfs.replication.min, the system creates new replicas on other nodes. If a block's replicas are lost repeatedly, so that replicas end up being created on too many nodes, replication stops once the count exceeds dfs.replication.max (default 512).
(3) Safe mode behavior
On startup the NameNode first enters safe mode. If the proportion of blocks missing from the DataNodes reaches (1 - dfs.safemode.threshold.pct), the system stays in safe mode, i.e. read-only.
dfs.safemode.threshold.pct (default 0.999f) means that at HDFS startup, the NameNode can leave safe mode only once the number of blocks reported by the DataNodes reaches 0.999 of the block count recorded in the metadata; until then it remains read-only. If set to 1, HDFS stays in safe mode forever.
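The exit condition above can be sketched as a toy check. This is illustrative Python, not Hadoop's actual NameNode code; the function name and the block counts are hypothetical.

```python
# Toy sketch of the safe-mode exit condition described above.
# Illustrative only; the real check lives inside the NameNode.

def leaves_safe_mode(blocks_reported: int, blocks_total: int,
                     threshold_pct: float = 0.999) -> bool:
    """True once the reported fraction of blocks meets the threshold."""
    if threshold_pct <= 0:      # per (c) above: never enter safe mode
        return True
    if blocks_total == 0:       # nothing recorded, nothing to wait for
        return True
    return blocks_reported / blocks_total >= threshold_pct

# With the default 0.999 threshold, 998 of 1000 blocks is not enough,
# but 999 of 1000 is:
print(leaves_safe_mode(998, 1000))  # False
print(leaves_safe_mode(999, 1000))  # True
```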
(4) Operating safe mode from the command line
[hadoop@nnode hadoop2.6.0]$ hdfs dfsadmin -help safemode
-safemode <enter|leave|get|wait>:  Safe mode maintenance command.
Safe mode is a Namenode state in which it
        1.  does not accept changes to the name space (read-only)
        2.  does not replicate or delete blocks.
Safe mode is entered automatically at Namenode startup, and leaves safe mode
automatically when the configured minimum percentage of blocks satisfies the
minimum replication condition.  Safe mode can also be entered manually, but
then it can only be turned off manually as well.
enter: enter safe mode;
leave: force the NameNode to leave safe mode;
get: report whether safe mode is on;
wait: block until safe mode is exited.
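A typical use of these subcommands in a startup script might look as follows. This is a sketch based on the help text above; it requires a running HDFS cluster.

```
# Sketch: blocking a job script until the NameNode is writable.
hdfs dfsadmin -safemode get     # prints e.g. "Safe mode is ON"
hdfs dfsadmin -safemode wait    # blocks until safe mode is exited
# Manual entry must also be undone manually:
hdfs dfsadmin -safemode enter
hdfs dfsadmin -safemode leave
```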
2. Disk quotas
hdfs dfsadmin [-setQuota <quota> <dirname>...<dirname>]
              [-clrQuota <dirname>...<dirname>]
              [-setSpaceQuota <quota> <dirname>...<dirname>]
              [-clrSpaceQuota <dirname>...<dirname>]
hdfs dfs [-count [-q] [-h] <path> ...]
Example:
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -ls /data
Found 2 items
-rw-r--r--   2 hadoop hadoop         47 2015-06-09 17:59 /data/file1.txt
-rw-r--r--   2 hadoop hadoop         36 2015-06-09 17:59 /data/file2.txt
[hadoop@nnode hadoop2.6.0]$ hdfs dfsadmin -setQuota 4 /data
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -count /data
           1            2                 83 /data    # 47 + 36 = 83
View the help for count:
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -help count
-count [-q] [-h] <path> ... :
  Count the number of directories, files and bytes under the paths
  that match the specified file pattern.  The output columns are:
  DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
  QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA
        DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
  The -h option shows file sizes in human readable format.
Upload files:
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -put NOTICE.txt /data/
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -put README.txt /data/
put: The NameSpace quota (directories and files) of directory /data is exceeded: quota=4 file count=5
# the /data directory itself counts against the namespace quota
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -ls /data
Found 3 items
-rw-r--r--   2 hadoop hadoop        101 2015-11-28 21:02 /data/NOTICE.txt
-rw-r--r--   2 hadoop hadoop         47 2015-06-09 17:59 /data/file1.txt
-rw-r--r--   2 hadoop hadoop         36 2015-06-09 17:59 /data/file2.txt
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -count /data
           1            3                184 /data
[hadoop@nnode hadoop2.6.0]$ hdfs dfsadmin -clrQuota /data
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -rm /data/NOTICE.txt
15/11/28 21:21:24 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /data/NOTICE.txt
[hadoop@nnode hadoop2.6.0]$ hdfs dfsadmin -setSpaceQuota 200 /data
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -ls /data
Found 2 items
-rw-r--r--   2 hadoop hadoop         47 2015-06-09 17:59 /data/file1.txt
-rw-r--r--   2 hadoop hadoop         36 2015-06-09 17:59 /data/file2.txt
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -put README.txt /data
15/11/28 21:31:08 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /data is exceeded: quota = 200 B = 200 B but diskspace consumed = 268435622 B = 256.00 MB
[hadoop@nnode hadoop2.6.0]$ hdfs dfsadmin -clrSpaceQuota /data
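The "256.00 MB" in the error above is not a bug: the space quota is charged against the worst-case replicated size of the block being allocated, i.e. one full 128 MB block times 2 replicas (268435456 B) plus the 166 B (83 B x 2 replicas) already consumed by the two existing files, giving 268435622 B. Quotas can be inspected with -count -q, whose columns match the help text above. A sketch (requires a running cluster; the 1g value is an example):

```
# Sketch: setting a space quota large enough for replicated blocks,
# then inspecting it.  Space quotas count bytes after replication.
hdfs dfsadmin -setSpaceQuota 1g /data
hdfs dfs -count -q -h /data
# QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
hdfs dfsadmin -clrSpaceQuota /data
```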
3. Changing the replication factor dynamically
hdfs dfs [-setrep [-R] [-w] <rep> <path> ...]
Check the replica count; it is currently 2:
-rw-r--r--   2 hadoop hadoop         47 2015-06-09 17:59 /data/file1.txt
-rw-r--r--   2 hadoop hadoop         36 2015-06-09 17:59 /data/file2.txt
Set the replica count of file1.txt to 1:
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -setrep 1 /data/file1.txt
Replication 1 set: /data/file1.txt
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -ls /data
Found 2 items
-rw-r--r--   1 hadoop hadoop         47 2015-06-09 17:59 /data/file1.txt
-rw-r--r--   2 hadoop hadoop         36 2015-06-09 17:59 /data/file2.txt
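Per the synopsis above, setrep also accepts -R and -w. A sketch of how they might be used (requires a running cluster; /data is the example path from above):

```
# Sketch: -R applies the new factor recursively; -w blocks until the
# NameNode has finished adjusting the replicas (can be slow to return).
hdfs dfs -setrep -w 1 /data/file2.txt
hdfs dfs -setrep -R 2 /data
```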