# How to Set Up a Hadoop 2.7.1 Environment
## 1. Prerequisites
### 1.1 Hardware Requirements
- At least 4 GB of RAM (8 GB or more recommended)
- 50 GB of free disk space
- A multi-core CPU (4 cores or more recommended)
### 1.2 Software Requirements
- A Linux operating system (Ubuntu 16.04 or CentOS 7 recommended)
- Java JDK 1.8
- An SSH service
- The Hadoop 2.7.1 installation package
## 2. Base Environment Configuration
### 2.1 Install the Java Environment
```bash
# Download JDK 1.8 (the Oracle URL requires accepting the license cookie)
wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/8u201-b09/42970487e3af4f5aa5bca3f542482c60/jdk-8u201-linux-x64.tar.gz
# Extract and install
tar -zxvf jdk-8u201-linux-x64.tar.gz -C /usr/local/
# Configure environment variables
echo 'export JAVA_HOME=/usr/local/jdk1.8.0_201' >> ~/.bashrc
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
# Verify the installation
java -version
```
### 2.2 Configure Passwordless SSH
```bash
# Install the SSH service
sudo apt-get install openssh-server   # Ubuntu
sudo yum install openssh-server       # CentOS
# Generate a key pair
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize the key
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Test the connection (it should not prompt for a password)
ssh localhost
```
## 3. Install and Configure Hadoop
### 3.1 Download and Install Hadoop
```bash
# Download and extract Hadoop 2.7.1
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.1/hadoop-2.7.1.tar.gz
tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local/
mv /usr/local/hadoop-2.7.1 /usr/local/hadoop
# Configure environment variables
echo 'export HADOOP_HOME=/usr/local/hadoop' >> ~/.bashrc
echo 'export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> ~/.bashrc
source ~/.bashrc
# Set JAVA_HOME in hadoop-env.sh
echo "export JAVA_HOME=/usr/local/jdk1.8.0_201" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
```
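Before moving on to the configuration files, it is worth confirming that both variables are actually exported in the current shell. A minimal sketch, assuming `printenv` from GNU coreutils is available (the function name `check_env` is just for illustration):

```shell
# Report whether each required environment variable is exported.
check_env() {
  local var status=0
  for var in JAVA_HOME HADOOP_HOME; do
    if printenv "$var" > /dev/null; then
      echo "$var is set to $(printenv "$var")"
    else
      echo "$var is NOT set"
      status=1
    fi
  done
  return $status
}
```

Typical usage: `check_env || echo "fix ~/.bashrc before continuing"`.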
### 3.2 Configure core-site.xml
Edit `$HADOOP_HOME/etc/hadoop/core-site.xml`:
```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
```
### 3.3 Configure hdfs-site.xml
Edit `$HADOOP_HOME/etc/hadoop/hdfs-site.xml` (replication is 1 because this is a single-node setup):
```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data</value>
  </property>
</configuration>
```
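The NameNode and DataNode directories configured above must exist and be writable by the user running Hadoop. A small helper, sketched so the base path defaults to the one used in this guide (`create_hdfs_dirs` is an illustrative name, not a Hadoop command):

```shell
# Create the NameNode and DataNode storage directories.
# The default base path matches the values in hdfs-site.xml above.
create_hdfs_dirs() {
  local base="${1:-/usr/local/hadoop/hdfs}"
  mkdir -p "$base/name" "$base/data" || return 1
  echo "created $base/name and $base/data"
}
```

Run `create_hdfs_dirs` as-is, or pass a different base path as the first argument.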
### 3.4 Configure mapred-site.xml
This file does not exist by default; copy it from the bundled template first (`cp mapred-site.xml.template mapred-site.xml` inside `$HADOOP_HOME/etc/hadoop`), then add:
```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```
### 3.5 Configure yarn-site.xml
Edit `$HADOOP_HOME/etc/hadoop/yarn-site.xml`:
```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```
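A malformed XML file is a common cause of startup failures after hand-editing the four files above. One way to catch this early, assuming `xmllint` (from libxml2) is installed (`validate_configs` is an illustrative helper name):

```shell
# Check that each edited Hadoop config file is well-formed XML.
validate_configs() {
  local dir="$1" f status=0
  for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    if xmllint --noout "$dir/$f" 2>/dev/null; then
      echo "$f: OK"
    else
      echo "$f: missing or malformed"
      status=1
    fi
  done
  return $status
}
```

Typical usage: `validate_configs "$HADOOP_HOME/etc/hadoop"`.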
## 4. Format and Start Hadoop
```bash
# Format the NameNode (only needed on the first setup)
hdfs namenode -format
# Start HDFS and YARN
start-dfs.sh
start-yarn.sh
# List the running Java processes
jps
```
You should see the following processes:
- NameNode
- DataNode
- ResourceManager
- NodeManager
- SecondaryNameNode
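The `jps` check can also be scripted. The helper below reads `jps`-style output on stdin so it works both live and on captured output; the daemon names assume the default pseudo-distributed setup described here, and `check_daemons` is an illustrative name:

```shell
# Read `jps` output on stdin and report each expected Hadoop daemon.
check_daemons() {
  local input proc status=0
  input=$(cat)
  for proc in NameNode DataNode ResourceManager NodeManager SecondaryNameNode; do
    if printf '%s\n' "$input" | grep -qw "$proc"; then
      echo "$proc: running"
    else
      echo "$proc: MISSING"
      status=1
    fi
  done
  return $status
}
```

Typical usage: `jps | check_daemons`.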
## 5. Run an Example MapReduce Job
```bash
# Create an input directory in HDFS and upload the Hadoop config files
hdfs dfs -mkdir /input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /input
# Run the bundled grep example against the uploaded files
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep /input /output 'dfs[a-z.]+'
# View the results
hdfs dfs -cat /output/*
```
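Besides the command line, the daemons expose web UIs on Hadoop 2.x's default ports: the NameNode on 50070 and the ResourceManager on 8088. A quick reachability sketch, assuming `curl` is installed (it prints the HTTP status code, or 000 when the port is unreachable):

```shell
# Print the HTTP status code for each Hadoop web UI (000 = unreachable).
for url in http://localhost:50070/ http://localhost:8088/; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
  echo "$url -> $code"
done
```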
## 6. Common Issues and Tips
- Permission errors: use `chmod` and `chown` to adjust ownership and permissions on the Hadoop directories.
- Out-of-memory errors: tune the memory configuration parameters in `yarn-site.xml`.
- Startup failures: confirm the `JAVA_HOME` environment variable is configured correctly.

Note: this document covers a single-node pseudo-distributed setup; production environments require adjusting these parameters to the actual hardware.