This article explains in detail how to configure and start a Hadoop environment. It should be a useful reference; if you are interested, read it through to the end!
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://slave2.hadoop:8020</value>
<final>true</final>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/hadoop-root/tmp</value>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>300</value>
<description>The number of seconds between two periodic checkpoints.</description>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
<description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired. </description>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>${hadoop.tmp.dir}/dfs/namesecondary</value>
<description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/hadoop-root/dfs/name</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/hadoop-root/dfs/data</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>slave1:50090</value>
</property>
</configuration>
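A small helper can pull a single value out of these site files from the shell. This is a hypothetical sketch, not part of the article's setup; it assumes the `<name>`/`<value>` pairs sit on adjacent lines, as in the listings above (`xmllint` would be more robust where available):

```shell
# get_prop: print the <value> for a given property <name>.
# Assumes <name> and <value> appear on adjacent lines, as in the files above.
get_prop() {  # usage: get_prop <property-name> <site-file.xml>
  grep -A1 "<name>$1</name>" "$2" |
    sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
}
```

For example, `get_prop dfs.replication hdfs-site.xml` would print `3` for the file above.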
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/hadoop/hadoop-root/mapred/system</value>
<final>true</final>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/hadoop/hadoop-root/mapred/local</value>
<final>true</final>
</property>
<property>
<name>mapreduce.tasktracker.map.tasks.maximum</name>
<value>2</value>
</property>
<property>
<name>mapreduce.tasktracker.reduce.tasks.maximum</name>
<value>1</value>
</property>
<property>
<name>mapreduce.job.maps</name>
<value>2</value>
</property>
<property>
<name>mapreduce.job.reduces</name>
<value>1</value>
</property>
<property>
<name>mapreduce.tasktracker.http.threads</name>
<value>50</value>
</property>
<property>
<name>io.sort.factor</name>
<value>20</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx400m</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>200</value>
</property>
<property>
<name>mapreduce.map.sort.spill.percent</name>
<value>0.8</value>
</property>
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>10</value>
</property>
</configuration>
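Two of the values above interact: `mapreduce.task.io.sort.mb` sizes the map-side sort buffer, and `mapreduce.map.sort.spill.percent` is the fill level at which a background spill to disk begins. A quick arithmetic check of the threshold this configuration implies:

```shell
# Values copied from mapred-site.xml above:
# spill begins at io.sort.mb * spill.percent = 200 MB * 0.8 = 160 MB.
sort_mb=200      # mapreduce.task.io.sort.mb
spill_pct=0.8    # mapreduce.map.sort.spill.percent
threshold=$(awk -v mb="$sort_mb" -v p="$spill_pct" 'BEGIN { print mb * p }')
echo "map sort buffer spills at ${threshold} MB"
```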
I. Recovering Hadoop
1. Stop all services.
2. Delete the data and name directories under /home/hadoop/hadoop-root/dfs, then recreate them.
3. Delete the files under /home/hadoop/hadoop-root/tmp.
4. On the namenode, run hadoop namenode -format.
5. Start the Hadoop services.
-----Hadoop is now recovered-----
6. Stop the HBase services; if they will not stop cleanly, kill the processes.
7. On each node, go into /tmp/hbase-root/zookeeper and delete all files.
8. Start the HBase services.
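The HDFS half of the procedure above (steps 1-5) can be sketched as one shell function. The paths and commands come from the article; the environment-variable overrides (HADOOP, STOP_CMD, and so on) are hypothetical additions so the sketch can be exercised without a live cluster:

```shell
recover_hdfs() {
  # Overridable parameters (hypothetical); defaults match the article's paths.
  hadoop_cmd="${HADOOP:-hadoop}"
  stop_cmd="${STOP_CMD:-stop-all.sh}"
  start_cmd="${START_CMD:-start-all.sh}"
  dfs_dir="${DFS_DIR:-/home/hadoop/hadoop-root/dfs}"
  tmp_dir="${TMP_DIR:-/home/hadoop/hadoop-root/tmp}"

  "$stop_cmd"                                # 1. stop all services
  rm -rf "$dfs_dir/data" "$dfs_dir/name"     # 2. delete data and name...
  mkdir -p "$dfs_dir/data" "$dfs_dir/name"   #    ...then recreate them
  rm -rf "$tmp_dir"/*                        # 3. clear the tmp directory
  "$hadoop_cmd" namenode -format             # 4. run on the namenode only
  "$start_cmd"                               # 5. start the Hadoop services
}
```

On a real cluster you would run `recover_hdfs` on the namenode with the defaults; note that `hadoop namenode -format` asks for confirmation when the name directory already contains data.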