Big Data Operations — Installing Hadoop on Linux: Hadoop HA Cluster Deployment

1. Extract the downloaded Hadoop tarball to the target directory:

        For easier management, rename the directory to hadoop with mv.

[root@master ~]# tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/
[root@master src]# ls
hadoop-2.7.1  java  zookeeper
[root@master src]# mv hadoop-2.7.1/ hadoop
[root@master src]# ls
hadoop  java  zookeeper

2. Configure the Hadoop environment variables

[root@master ~]# vi /etc/profile


#hadoop
export HADOOP_HOME=/usr/local/src/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin


//Apply the environment variables
[root@master ~]# source /etc/profile
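To confirm the variables actually took effect in the current shell, a minimal self-check sketch; it repeats the two key exports from the profile above so it can be run standalone:

```shell
# Sketch: re-apply the key exports and verify PATH picked up the hadoop bin dirs.
export HADOOP_HOME=/usr/local/src/hadoop
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

case ":$PATH:" in
    *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
    *)                      echo "hadoop bin missing from PATH" ;;
esac
```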

3. Configure hadoop-env.sh

Change into hadoop/etc/hadoop:

[root@master ~]# cd /usr/local/src/
[root@master src]# cd hadoop/etc/hadoop/
[root@master hadoop]# ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-server.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  yarn-env.cmd
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml.template    yarn-env.sh
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      slaves                      yarn-site.xml
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        ssl-client.xml.example
[root@master hadoop]# vi hadoop-env.sh 
//Change JAVA_HOME to the absolute path of your own Java installation

# The java implementation to use.
export JAVA_HOME=/usr/local/src/java
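The path /usr/local/src/java is this tutorial's install location; whatever path you use, it must contain bin/java. A small check sketch (check_java_home is our helper function, not part of Hadoop):

```shell
# Sketch: confirm a candidate JAVA_HOME really contains an executable java binary.
check_java_home() {
    if [ -x "$1/bin/java" ]; then
        echo "valid JAVA_HOME: $1"
    else
        echo "no java binary under $1"
    fi
}

# Example: check_java_home /usr/local/src/java
```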

4. Create the data directories for the NameNode, DataNode, and JournalNode

[root@master hadoop]# pwd
/usr/local/src/hadoop
[root@master hadoop]# mkdir -p tmp/hdfs/nn
[root@master hadoop]# mkdir -p tmp/hdfs/dn
[root@master hadoop]# mkdir -p tmp/hdfs/jn
[root@master hadoop]# mkdir -p tmp/logs
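The four mkdir calls can be collapsed into one using bash brace expansion. A sketch, demonstrated in a temporary directory so it is safe to try anywhere; in the tutorial itself you would run the mkdir line from /usr/local/src/hadoop:

```shell
# Sketch: same directory layout, one command (bash brace expansion).
base=$(mktemp -d)
cd "$base"
mkdir -p tmp/hdfs/{nn,dn,jn} tmp/logs

# List what was created:
find tmp -type d | sort
```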

5. Configure core-site.xml

core-site.xml holds Hadoop's core configuration, such as the common I/O settings used by HDFS, MapReduce, and YARN.

[root@master hadoop]# pwd
/usr/local/src/hadoop/etc/hadoop
[root@master hadoop]# vi core-site.xml 

//The configuration of core-site.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <!-- Set the HDFS nameservice to mycluster -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/src/hadoop/tmp</value>
        </property>
        <!-- ZooKeeper quorum addresses -->
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>master:2181,slave1:2181,slave2:2181</value>
        </property>
        <!-- Timeout for Hadoop's ZooKeeper sessions -->
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>30000</value>
                <description>ms</description>
        </property>
        <property>
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>
</configuration>

6. Configure hdfs-site.xml

hdfs-site.xml configures the HDFS daemons: the NameNode, the secondary NameNode, and the DataNodes.

[root@master hadoop]# vi hdfs-site.xml


<configuration>
        <!-- Timeout for starting a log segment on the JournalNode quorum -->
        <property>
                <name>dfs.qjournal.start-segment.timeout.ms</name>
                <value>60000</value>
        </property>
        <property>
                <name>dfs.nameservices</name>
                <value>mycluster</value>
        </property>
        <!-- mycluster has two NameNodes: master and slave1 -->
        <property>
                <name>dfs.ha.namenodes.mycluster</name>
                <value>master,slave1</value>
        </property>
        <!-- RPC address of master -->
        <property>
                <name>dfs.namenode.rpc-address.mycluster.master</name>
                <value>master:8020</value>
        </property>
        <!-- RPC address of slave1 -->
        <property>
                <name>dfs.namenode.rpc-address.mycluster.slave1</name>
                <value>slave1:8020</value>
        </property>
        <!-- HTTP address of master -->
        <property>
                <name>dfs.namenode.http-address.mycluster.master</name>
                <value>master:50070</value>
        </property>
        <!-- HTTP address of slave1 -->
        <property>
                <name>dfs.namenode.http-address.mycluster.slave1</name>
                <value>slave1:50070</value>
        </property>
        <!-- Shared storage for the NameNode edit log, i.e. the JournalNode list. URL format: qjournal://host1:port1;host2:port2;host3:port3/journalId. Using the nameservice as the journalId is recommended; the default port is 8485. -->
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value>
        </property>
        <!-- Failover proxy provider used by HDFS clients -->
        <property>
                <name>dfs.client.failover.proxy.provider.mycluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- Fencing methods; separate multiple methods with newlines, one per line -->
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>
                        sshfence
                        shell(/bin/true)
                </value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.support.append</name>
                <value>true</value>
        </property>
        <!-- sshfence requires passwordless SSH -->
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/root/.ssh/id_rsa</value>
        </property>
        <!-- Replication factor -->
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/nn</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/dn</value>
        </property>
        <!-- Local directory where the JournalNodes store their data -->
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/jn</value>
        </property>
        <!-- Enable automatic NameNode failover -->
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <!-- Enable WebHDFS -->
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <!-- Timeout for the sshfence fencing method -->
        <property>
                <name>dfs.ha.fencing.ssh.connect-timeout</name>
                <value>30000</value>
        </property>
        <property>
                <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
                <value>60000</value>
        </property>
</configuration>

7. Configure mapred-site.xml (configuration for the MapReduce daemons, including the job history server)

[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@master hadoop]# vi mapred-site.xml


<configuration>
        <!-- Run MapReduce on YARN -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <!-- JobHistory server address -->
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master:10020</value>
        </property>
        <!-- JobHistory server web UI address -->
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master:19888</value>
        </property>
</configuration>

8. Configure yarn-site.xml (configuration for the YARN daemons: the ResourceManager, the web app proxy server, and the NodeManagers)

[root@master hadoop]# vi yarn-site.xml 


<configuration>
<!-- Site specific YARN configuration properties -->
        <!-- Enable ResourceManager HA -->
        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <!-- Cluster id of the ResourceManagers -->
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>yrc</value>
        </property>
        <!-- Logical ids of the ResourceManagers -->
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>
        <!-- Hostname of each ResourceManager -->
        <property>
                <name>yarn.resourcemanager.hostname.rm1</name>
                <value>master</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm2</name>
                <value>slave1</value>
        </property>
        <!-- ZooKeeper cluster addresses -->
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>master:2181,slave1:2181,slave2:2181</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>86400</value>
        </property>
        <!-- Enable automatic recovery -->
        <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
        </property>
        <!-- Store the ResourceManager state in the ZooKeeper cluster -->
        <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
</configuration>
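With four site files edited by hand, it is easy to leave a tag unclosed. A quick sanity-check sketch that counts opening versus closing &lt;property&gt; tags (check_properties is our helper, not a Hadoop tool; `xmllint --noout FILE` is a stricter alternative when libxml2 is installed):

```shell
# Sketch: report whether a *-site.xml file has balanced <property> tags.
check_properties() {
    open=$(grep -c '<property>' "$1")
    close=$(grep -c '</property>' "$1")
    if [ "$open" -eq "$close" ]; then
        echo "$1: OK ($open properties)"
    else
        echo "$1: MISMATCH ($open opening / $close closing)"
    fi
}

# Example usage from /usr/local/src/hadoop/etc/hadoop:
#   for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
#       check_properties "$f"
#   done
```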

9. Configure the slaves file (it determines which machines run the worker daemons, i.e. the DataNodes and NodeManagers)

[root@master hadoop]# vi slaves 


master
slave1
slave2
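Before distributing anything, it is worth confirming that passwordless SSH works for every host in slaves. A dry-run sketch that only prints one check command per host (print_ssh_checks is our helper; run the printed lines by hand, or pipe them through sh):

```shell
# Sketch: emit an SSH connectivity check for each non-empty line of a slaves file.
print_ssh_checks() {
    while IFS= read -r host; do
        if [ -n "$host" ]; then
            echo "ssh -o BatchMode=yes $host hostname"
        fi
    done < "$1"
}

# Demonstrated against a temporary copy of the slaves contents above:
slaves_file=$(mktemp)
printf 'master\nslave1\nslave2\n' > "$slaves_file"
print_ssh_checks "$slaves_file"
```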

10. Distribute the files to the worker nodes

(1) Distribute the Hadoop directory

//Copy to slave1
[root@master ~]# scp -r /usr/local/src/hadoop/ root@slave1:/usr/local/src/

//Copy to slave2
[root@master ~]# scp -r /usr/local/src/hadoop/ root@slave2:/usr/local/src/

(2) Distribute the environment variables

//Copy to slave1
[root@master ~]# scp -r /etc/profile root@slave1:/etc/

//Copy to slave2
[root@master ~]# scp -r /etc/profile root@slave2:/etc/

11. Change the owner and group

Note that without -R, chown only changes the top-level directory; use -R to change ownership of the whole tree:

[root@master ~]# chown -R hadoop:hadoop /usr/local/src/hadoop/

[root@slave1 ~]# chown -R hadoop:hadoop /usr/local/src/hadoop/

[root@slave2 ~]# chown -R hadoop:hadoop /usr/local/src/hadoop/

12. Apply the environment variables on all nodes

[root@master ~]# su hadoop
[hadoop@master root]$ cd
[hadoop@master ~]$ source /etc/profile

[root@slave1 ~]# su hadoop
[hadoop@slave1 root]$ cd
[hadoop@slave1 ~]$ source /etc/profile

[root@slave2 ~]# su hadoop
[hadoop@slave2 root]$ cd
[hadoop@slave2 ~]$ source /etc/profile

That completes the Hadoop HA cluster configuration. Verify the installation with hadoop version:

[hadoop@master ~]$ hadoop version
Hadoop 2.7.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
Compiled by jenkins on 2015-06-29T06:04Z
Compiled with protoc 2.5.0
From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
This command was run using /usr/local/src/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar
 

 The next chapter covers starting and testing the Hadoop HA cluster.
