This article explains "Hadoop running example analysis". The walkthrough is simple and clear and easy to follow; work through the steps below to see how a built-in Hadoop example job is run.
1. Locate the examples jar package
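The jar ships with the Hadoop distribution. Under the installation directory used in this example it can be located with a command along these lines (a minimal sketch; the path is the one used in step 5):
find /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce/ -name 'hadoop-mapreduce-examples-*.jar'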
2. Create the input and output directories
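For example, using the /wc_input path from step 5 (note that the wordcount job creates the output directory itself and will fail if it already exists, so normally only the input directory needs to be created in advance):
hdfs dfs -mkdir -p /wc_input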
3. Upload the files to be word-counted to the wc_input directory
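For example, assuming two local text files named file1.txt and file2.txt (hypothetical names; the job log below reports two input paths being processed):
hdfs dfs -put file1.txt file2.txt /wc_input/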
4. View the uploaded files
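A listing such as the following should show the uploaded files:
hdfs dfs -ls /wc_input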
5. Run the wordcount example job, giving the input and output HDFS paths:
[root@hadoop input]# hadoop jar /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /wc_input/* /wc_output/
17/08/15 10:25:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/15 10:25:25 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.120:18040
17/08/15 10:25:27 INFO input.FileInputFormat: Total input paths to process : 2
17/08/15 10:25:27 INFO mapreduce.JobSubmitter: number of splits:2
17/08/15 10:25:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1502762082449_0001
17/08/15 10:25:28 INFO impl.YarnClientImpl: Submitted application application_1502762082449_0001
17/08/15 10:25:29 INFO mapreduce.Job: The url to track the job: http://hadoop:18088/proxy/application_1502762082449_0001/
17/08/15 10:25:29 INFO mapreduce.Job: Running job: job_1502762082449_0001
17/08/15 10:25:48 INFO mapreduce.Job: Job job_1502762082449_0001 running in uber mode : true
17/08/15 10:25:48 INFO mapreduce.Job: map 0% reduce 0%
17/08/15 10:25:50 INFO mapreduce.Job: map 100% reduce 0%
17/08/15 10:25:51 INFO mapreduce.Job: map 100% reduce 100%
17/08/15 10:25:51 INFO mapreduce.Job: Job job_1502762082449_0001 completed successfully
17/08/15 10:25:52 INFO mapreduce.Job: Counters: 52
File System Counters
FILE: Number of bytes read=276
FILE: Number of bytes written=545
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=798
HDFS: Number of bytes written=398613
HDFS: Number of read operations=66
HDFS: Number of large read operations=0
HDFS: Number of write operations=23
Job Counters
Launched map tasks=2
Launched reduce tasks=1
Other local map tasks=2
Total time spent by all maps in occupied slots (ms)=1972
Total time spent by all reduces in occupied slots (ms)=803
TOTAL_LAUNCHED_UBERTASKS=3
NUM_UBER_SUBMAPS=2
NUM_UBER_SUBREDUCES=1
Total time spent by all map tasks (ms)=1972
Total time spent by all reduce tasks (ms)=803
Total vcore-milliseconds taken by all map tasks=1972
Total vcore-milliseconds taken by all reduce tasks=803
Total megabyte-milliseconds taken by all map tasks=2019328
Total megabyte-milliseconds taken by all reduce tasks=822272
Map-Reduce Framework
Map input records=5
Map output records=11
Map output bytes=111
Map output materialized bytes=109
Input split bytes=210
Combine input records=11
Combine output records=8
Reduce input groups=7
Reduce shuffle bytes=109
Reduce input records=8
Reduce output records=7
Spilled Records=16
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=637
CPU time spent (ms)=1820
Physical memory (bytes) snapshot=830070784
Virtual memory (bytes) snapshot=8998096896
Total committed heap usage (bytes)=500510720
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=70
File Output Format Counters
Bytes Written=57
6. View the run results
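For example, list the output directory; with the single reducer used here the result normally lands in a part-r-00000 file:
hdfs dfs -ls /wc_output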
7. Check the result data
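For example, print the word counts (the part-r-00000 name assumes the default output file naming of a single-reducer job):
hdfs dfs -cat /wc_output/part-r-00000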
Thank you all for reading. That concludes "Hadoop running example analysis". After working through this article, you should have a better grasp of running a Hadoop example job, though the specifics still need to be verified through your own practice. This is 创新互联; we will keep publishing articles on related topics, so feel free to follow us!