Machine layout
hadoop1 192.168.56.121
hadoop2 192.168.56.122
hadoop3 192.168.56.123
Prepare the installation packages
jdk-7u71-linux-x64.tar.gz
zookeeper-3.4.9.tar.gz
hadoop-2.9.2.tar.gz
Upload the packages to the /usr/local directory on all three machines and extract them, for example:
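A minimal sketch of the upload and extraction, assuming the three packages sit in the current directory of the machine you are copying from; repeat the same steps for hadoop2 and hadoop3:
scp jdk-7u71-linux-x64.tar.gz zookeeper-3.4.9.tar.gz hadoop-2.9.2.tar.gz root@hadoop1:/usr/local/
# then, on hadoop1
cd /usr/local
tar -zxf jdk-7u71-linux-x64.tar.gz
tar -zxf zookeeper-3.4.9.tar.gz
tar -zxf hadoop-2.9.2.tar.gz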
Configure hosts
echo "192.168.56.121 hadoop1" >> /etc/hosts
echo "192.168.56.122 hadoop2" >> /etc/hosts
echo "192.168.56.123 hadoop3" >> /etc/hosts
Configure environment variables
/etc/profile
export HADOOP_PREFIX=/usr/local/hadoop-2.9.2
export JAVA_HOME=/usr/local/jdk1.7.0_71
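These variables only take effect in a new login shell; to apply them immediately in the current shell (run on each of the three machines):
source /etc/profile
echo $HADOOP_PREFIX   # should print /usr/local/hadoop-2.9.2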
Deploy ZooKeeper
Create the zoo user
useradd zoo
passwd zoo
Change the owner of the ZooKeeper directory to zoo
chown zoo:zoo -R /usr/local/zookeeper-3.4.9
Edit the ZooKeeper configuration file
Go to the /usr/local/zookeeper-3.4.9/conf directory
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.9
clientPort=2181
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
Create a myid file in the /usr/local/zookeeper-3.4.9 directory. The file contains only a single number between 1 and 255, which must match the id in the corresponding server.id line of zoo.cfg; see the commands after this list.
myid on hadoop1 is 1
myid on hadoop2 is 2
myid on hadoop3 is 3
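For example, one command per machine:
echo 1 > /usr/local/zookeeper-3.4.9/myid   # on hadoop1
echo 2 > /usr/local/zookeeper-3.4.9/myid   # on hadoop2
echo 3 > /usr/local/zookeeper-3.4.9/myid   # on hadoop3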
Start the ZooKeeper service on all three machines
[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh start
Verify ZooKeeper
[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
One of the three nodes should report Mode: leader and the other two Mode: follower.
Configure Hadoop
Create the hadoop user
useradd hadoop
passwd hadoop
Change the owner of the Hadoop directory to hadoop
chown hadoop:hadoop -R /usr/local/hadoop-2.9.2
Create the data directories
mkdir /hadoop1 /hadoop2 /hadoop3
chown hadoop:hadoop /hadoop1
chown hadoop:hadoop /hadoop2
chown hadoop:hadoop /hadoop3
Set up passwordless SSH between the nodes (as the hadoop user)
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop3
# Test the passwordless login with the following commands
ssh hadoop1 date
ssh hadoop2 date
ssh hadoop3 date
Configure environment variables
/home/hadoop/.bash_profile
export PATH=$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$PATH
Configure Hadoop parameters
etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_71
etc/hadoop/core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://ns</value></property>
  <property><name>hadoop.tmp.dir</name><value>/usr/local/hadoop-2.9.2/temp</value></property>
  <property><name>io.file.buffer.size</name><value>4096</value></property>
  <property><name>ha.zookeeper.quorum</name><value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value></property>
</configuration>
etc/hadoop/hdfs-site.xml
<configuration>
  <property><name>dfs.nameservices</name><value>ns</value></property>
  <property><name>dfs.ha.namenodes.ns</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.ns.nn1</name><value>hadoop1:9000</value></property>
  <property><name>dfs.namenode.http-address.ns.nn1</name><value>hadoop1:50070</value></property>
  <property><name>dfs.namenode.rpc-address.ns.nn2</name><value>hadoop2:9000</value></property>
  <property><name>dfs.namenode.http-address.ns.nn2</name><value>hadoop2:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/hadoop1/hdfs/journal</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.client.failover.proxy.provider.ns</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
  <property><name>dfs.namenode.name.dir</name><value>file:/hadoop1/hdfs/name,file:/hadoop2/hdfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:/hadoop1/hdfs/data,file:/hadoop2/hdfs/data,file:/hadoop3/hdfs/data</value></property>
  <property><name>dfs.replication</name><value>2</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.hosts.exclude</name><value>/usr/local/hadoop-2.9.2/etc/hadoop/excludes</value></property>
</configuration>
etc/hadoop/mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>
etc/hadoop/yarn-site.xml
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.resourcemanager.hostname</name><value>hadoop1</value></property>
</configuration>
etc/hadoop/slaves
hadoop1
hadoop2
hadoop3
First-time startup commands
1. Start ZooKeeper on every node:
bin/zkServer.sh start
2. On one of the NameNode nodes, create the namespace in ZooKeeper:
hdfs zkfc -formatZK
3. On every JournalNode node, start the JournalNode daemon:
sbin/hadoop-daemon.sh start journalnode
4. On the primary NameNode node, format the NameNode and JournalNode directories:
hdfs namenode -format ns
5. On the primary NameNode node, start the NameNode process:
sbin/hadoop-daemon.sh start namenode
6. On the standby NameNode node, run the first command below; it formats the standby's directory and copies the metadata over from the primary NameNode without reformatting the JournalNode directory. Then start the standby NameNode with the second command:
hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
7. On both NameNode nodes, start the ZKFC daemon:
sbin/hadoop-daemon.sh start zkfc
8. On every DataNode node, start the DataNode daemon:
sbin/hadoop-daemon.sh start datanode
Routine start and stop commands
# Start the services on all nodes
sbin/start-dfs.sh
# Stop the services on all nodes
sbin/stop-dfs.sh
Verification
Check the running processes on each node with jps, for example:
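A rough sketch of the expected output on hadoop1 (PIDs are illustrative; hadoop3 shows only DataNode and JournalNode, and ZooKeeper's QuorumPeerMain only appears when jps is run as the zoo user or root):
[hadoop@hadoop1 ~]$ jps
2875 JournalNode
3012 NameNode
3190 DFSZKFailoverController
3301 DataNode
3455 Jps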
http://192.168.56.122:50070
http://192.168.56.121:50070
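The active/standby state can also be checked from the command line with the HA admin tool; nn1 and nn2 are the NameNode IDs defined in hdfs-site.xml, and which one is active depends on startup order (output below is illustrative):
[hadoop@hadoop1 ~]$ hdfs haadmin -getServiceState nn1
active
[hadoop@hadoop1 ~]$ hdfs haadmin -getServiceState nn2
standby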
Test file upload and download
# Create a directory
[hadoop@hadoop1 ~]$ hadoop fs -mkdir /test
# Verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-04-12 12:16 /test
# Upload a file
[hadoop@hadoop1 ~]$ hadoop fs -put /usr/local/hadoop-2.9.2/LICENSE.txt /test
# Verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /test
Found 1 items
-rw-r--r--   2 hadoop supergroup     106210 2019-04-12 12:17 /test/LICENSE.txt
# Download the file to /tmp
[hadoop@hadoop1 ~]$ hadoop fs -get /test/LICENSE.txt /tmp
# Verify
[hadoop@hadoop1 ~]$ ls -l /tmp/LICENSE.txt
-rw-r--r--. 1 hadoop hadoop 106210 Apr 12 12:19 /tmp/LICENSE.txt
Reference: https://blog.csdn.net/Trigl/article/details/55101826