
Hadoop Fully Distributed Setup

Posted 2022/4/20 13:15

System Overview

    OS: CentOS 7 (minimal install)

    Node information:

Node    IP
emo1    192.168.2.7
emo2    192.168.2.8
emo3    192.168.2.9

Setup steps in detail (by default, emo1 is the master node)

I. Basic node configuration

1. Configure each node's network

Set the node's IP address (the example below is emo1; use 192.168.2.8 and 192.168.2.9 on emo2 and emo3):

vi /etc/sysconfig/network-scripts/ifcfg-ens33
    BOOTPROTO="static"
    IPADDR=192.168.2.7
    NETMASK=255.255.255.0
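
After editing the file, restart the network service so the static address takes effect. A quick check, assuming the interface is named ens33 as above:

systemctl restart network
ip addr show ens33    # should now list 192.168.2.7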

Set each node's hostname
vi /etc/hostname
   emo1
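
Editing /etc/hostname only takes effect after a reboot; hostnamectl applies the name immediately (run the matching command on each node):

hostnamectl set-hostname emo1    # on emo1; use emo2/emo3 on the other nodes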

Add host mappings

vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.7 emo1
192.168.2.8 emo2
192.168.2.9 emo3

scp /etc/hosts emo2:/etc/    (copying via scp assumes passwordless login, set up below)
scp /etc/hosts emo3:/etc/

Disable SELinux and the firewall
vi /etc/selinux/config

   SELINUX=disabled

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
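
SELINUX=disabled only takes effect after a reboot; to stop enforcement immediately:

setenforce 0
getenforce    # should print Permissive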




Passwordless SSH login

Generate a key pair and push the public key to every node (at minimum, the master emo1 must be able to reach all three hosts without a password):

ssh-keygen -t rsa
ssh-copy-id emo1
ssh-copy-id emo2
ssh-copy-id emo3

(log in: ssh emo1       log out: logout)
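
A quick check from emo1 that every hop works without prompting for a password:

for h in emo1 emo2 emo3; do ssh $h hostname; done    # should print emo1, emo2, emo3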



2. Install Java and Hadoop

Unpack the JDK under /usr/local/src/java and Hadoop under /usr/local/src/hadoop so the paths below match:

tar -zxvf jdk-8u191-linux-x64.tar.gz
tar -zxvf hadoop-2.7.7.tar.gz

vi /etc/profile
export JAVA_HOME=/usr/local/src/java/jdk1.8.0_191
export HADOOP_HOME=/usr/local/src/hadoop/hadoop-2.7.7
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

source /etc/profile

java -version
hadoop version
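
If the environment variables are correct, the two commands report the unpacked releases; abridged output:

java version "1.8.0_191"
Hadoop 2.7.7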


3. Edit the Hadoop configuration files
cd /usr/local/src/hadoop/hadoop-2.7.7/etc/hadoop

vi hadoop-env.sh
export JAVA_HOME=/usr/local/src/java/jdk1.8.0_191

vi core-site.xml    (every <property> below goes inside the file's <configuration> element; the tmp directory referenced here must be created first, see the mkdir after this snippet)

<property>
<name>fs.defaultFS</name>
<value>hdfs://emo1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/src/hadoop/hadoop-2.7.7/tmp</value>
</property>
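
hadoop.tmp.dir points at a directory that a fresh unpack does not contain; create it on emo1 before formatting:

mkdir -p /usr/local/src/hadoop/hadoop-2.7.7/tmp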

vi hdfs-site.xml
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/src/hadoop/hadoop-2.7.7/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/src/hadoop/hadoop-2.7.7/hdfs/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
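
With dfs.replication set to 2 and two datanodes (emo2, emo3), every HDFS block is stored on both workers. The name and data directories are normally created by the NameNode format and DataNode startup, but creating them up front does no harm:

mkdir -p /usr/local/src/hadoop/hadoop-2.7.7/hdfs/{name,data}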

vi yarn-site.xml
<property>
<name>yarn.resourcemanager.address</name>
<value>emo1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>emo1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>emo1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>emo1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>emo1:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>emo1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>emo1:19888</value>
</property>
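
The two jobhistory addresses are only served while the JobHistory server is running, and start-all.sh does not launch it; start it separately on emo1 if you want the history UI at emo1:19888:

mr-jobhistory-daemon.sh start historyserver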



vi masters    (the host that runs the SecondaryNameNode)
emo1

vi slaves    (the hosts that run DataNode and NodeManager)
emo2
emo3

Sync the environment file and the JDK and Hadoop installation directories to the worker nodes.
Sync the environment file:
scp /etc/profile emo2:/etc/
scp /etc/profile emo3:/etc/
Sync the JDK installation (run these from /usr/local/src/hadoop, so ../java resolves to /usr/local/src/java):
scp -r ../java emo2:/usr/local/src/
scp -r ../java emo3:/usr/local/src/
Sync the Hadoop installation:
scp -r ../hadoop emo2:/usr/local/src/
scp -r ../hadoop emo3:/usr/local/src/
Reload the environment variables on the workers:
on emo2:
source /etc/profile
on emo3:
source /etc/profile
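
A quick sanity check that both workers now see the same tools:

ssh emo2 'source /etc/profile && hadoop version'
ssh emo3 'source /etc/profile && hadoop version'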

Format the NameNode

Run once, on emo1 only (reformatting an existing cluster destroys the HDFS metadata):

hadoop namenode -format
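
The output is long; the line that matters appears near the end and looks something like:

Storage directory /usr/local/src/hadoop/hadoop-2.7.7/hdfs/name has been successfully formatted.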

Start the cluster

start-all.sh

Stop the cluster

stop-all.sh
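
To confirm the daemons came up where expected, run jps on each node; with the configuration above you should see roughly:

jps    # on emo1: NameNode, SecondaryNameNode, ResourceManager
jps    # on emo2 and emo3: DataNode, NodeManager

The HDFS web UI is then at http://emo1:50070 and the YARN UI at http://emo1:8088 (the address set in yarn-site.xml).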
