Distributed installation (at least three hosts):
Required software:
CentOS 7
hadoop-2.7.3.tar.gz
jdk-8u102-linux-x64.tar.gz
Pre-installation preparation:
Configure passwordless SSH login
cd
ssh-keygen -t rsa
Press Enter at every prompt until the key pair is generated, then copy the public key to each node:
ssh-copy-id -i .ssh/id_rsa.pub bigdata1
ssh-copy-id -i .ssh/id_rsa.pub bigdata2
ssh-copy-id -i .ssh/id_rsa.pub bigdata3
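With more nodes, the per-host `ssh-copy-id` calls can be folded into a loop. A dry-run sketch (the host list is this tutorial's; drop the leading `echo` to actually distribute the key):

```shell
# Dry-run sketch: print the key-distribution command for each node.
# Remove the `echo` to run it for real; adjust the host list for your cluster.
for host in bigdata1 bigdata2 bigdata3; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "root@$host"
done
```

Afterwards, `ssh bigdata2` from the master should log in without a password prompt.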
Synchronize the clocks
Keep the hosts' clocks in sync with a scheduled task:
vim /etc/crontab
0 0 * * * root ntpdate -s time.windows.com
Alternatively, deploy a dedicated time server for the cluster to sync against; that is beyond the scope of this article.
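For reference, an entry in /etc/crontab takes five time fields plus a user column before the command; a sketch of the syntax (this example fires daily at midnight):

```
# minute  hour  day-of-month  month  day-of-week  user  command
0         0     *             *      *            root  ntpdate -s time.windows.com
```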
Edit the configuration files (in hadoop-2.7.3/etc/hadoop):
(*) hdfs-site.xml
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
(*) core-site.xml
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/training/hadoop-2.7.3/tmp</value>
    </property>
(*) mapred-site.xml
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
(*) yarn-site.xml
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bigdata1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
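Each of these property snippets belongs inside the `<configuration>` root element of its file. As an example, a complete core-site.xml for this cluster would look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/training/hadoop-2.7.3/tmp</value>
    </property>
</configuration>
```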
Format the NameNode: hdfs namenode -format
A successful format logs: Storage directory /root/training/hadoop-2.7.3/tmp/dfs/name has been successfully formatted.
Copy the configured installation to the other two nodes:
scp -r /root/training/hadoop-2.7.3 bigdata2:/root/training/hadoop-2.7.3
scp -r /root/training/hadoop-2.7.3 bigdata3:/root/training/hadoop-2.7.3
Start the cluster: start-all.sh (equivalent to start-dfs.sh + start-yarn.sh)
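A quick sanity check after startup is `jps` on each node. This hypothetical helper flags missing daemons on the master; the daemon list assumes bigdata1 hosts the NameNode and ResourceManager as configured above (add DataNode/NodeManager if it is also a worker):

```shell
# Hypothetical health check: verify the expected daemons appear in `jps`.
# Daemon list assumes bigdata1 is the master; adjust per node.
expected="NameNode SecondaryNameNode ResourceManager"
if command -v jps >/dev/null 2>&1; then
  for d in $expected; do
    jps | grep -qw "$d" && echo "$d: running" || echo "$d: NOT running"
  done
else
  echo "jps not found; is the JDK on the PATH?"
fi
```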
Verification
(*) Command line: hdfs dfsadmin -report
(*) Web UIs:
    HDFS: http://192.168.157.12:50070/
    YARN: http://192.168.157.12:8088
(*) Demo: run a MapReduce example program
example: /root/training/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input/data.txt /output/wc1204
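The wordcount job counts occurrences of each word in the input. What it computes can be previewed offline with standard shell tools (the sample line here is made up; the real job reads /input/data.txt from HDFS):

```shell
# Local preview of the wordcount logic on a made-up sample line.
# Prints one count per distinct word, e.g. "2 hello".
echo "hello hadoop hello world" | tr ' ' '\n' | sort | uniq -c
```

Note that the HDFS output directory (/output/wc1204 here) must not already exist, or the job will fail; once it finishes, the result can be read with `hdfs dfs -cat /output/wc1204/part-r-00000`.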