
How to Run wordcount on Hadoop-1.2.1

Published: 2025-02-07  Author: 千家信息网 editor

This article introduces how to run wordcount on Hadoop-1.2.1. Many people have questions about this in everyday practice, so we consulted a variety of sources and put together a simple, practical procedure. We hope it helps answer your questions about running wordcount on Hadoop-1.2.1. Now, follow along and let's learn together!

1. Create two text files in the home directory

[wukong@bd01 ~]$ mkdir test
[wukong@bd01 ~]$ cd test
[wukong@bd01 test]$ ls
[wukong@bd01 test]$ echo "hello world" >text1
[wukong@bd01 test]$ echo "hello hadoop" >text2
[wukong@bd01 test]$ cat text1
hello world
[wukong@bd01 test]$ cat text2
hello hadoop
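
Before touching Hadoop at all, you can compute the expected per-word counts locally with standard shell tools. This is only a sanity check added here for illustration, to compare against the MapReduce output in step 5:

[wukong@bd01 test]$ cat text1 text2 | tr ' ' '\n' | sort | uniq -c
      1 hadoop
      2 hello
      1 world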

2. Start Hadoop

[wukong@bd01 bin]$ ./start-all.sh
starting namenode, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-namenode-bd01.out
bd02: starting datanode, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-datanode-bd02.out
bd01: starting secondarynamenode, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-secondarynamenode-bd01.out
starting jobtracker, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-jobtracker-bd01.out
bd02: starting tasktracker, logging to /home/wukong/a_usr/hadoop-1.2.1/libexec/../logs/hadoop-wukong-tasktracker-bd02.out
[wukong@bd01 bin]$ jps
1440 Jps
1132 NameNode
1280 SecondaryNameNode
1364 JobTracker
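
Note that jps on bd01 shows only the master-side daemons (NameNode, SecondaryNameNode, JobTracker); the DataNode and TaskTracker were started on bd02. Assuming passwordless SSH to bd02 is already configured (start-all.sh itself relies on it), a quick remote check should list DataNode and TaskTracker processes:

[wukong@bd01 bin]$ ssh bd02 jps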

3. Put the new directory onto HDFS

[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -put ./test test_in
[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -ls ./test_in
Found 2 items
-rw-r--r--   1 wukong supergroup         12 2014-07-31 15:38 /user/wukong/test_in/text1
-rw-r--r--   1 wukong supergroup         13 2014-07-31 15:38 /user/wukong/test_in/text2
[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -ls
Found 1 items
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:38 /user/wukong/test_in
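
To double-check what actually landed on HDFS, hadoop fs -cat prints a file's contents, just like the local cat in step 1 (an optional check added for illustration):

[wukong@bd01 ~]$ a_usr/hadoop-1.2.1/bin/hadoop fs -cat test_in/text1
hello world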

4. Run the wordcount program

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar wordcount test_in test_out
14/07/31 15:43:44 INFO input.FileInputFormat: Total input paths to process : 2
14/07/31 15:43:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/07/31 15:43:44 WARN snappy.LoadSnappy: Snappy native library not loaded
14/07/31 15:43:46 INFO mapred.JobClient: Running job: job_201407311530_0001
14/07/31 15:43:47 INFO mapred.JobClient:  map 0% reduce 0%
14/07/31 15:44:11 INFO mapred.JobClient:  map 100% reduce 0%
14/07/31 15:44:27 INFO mapred.JobClient:  map 100% reduce 100%
14/07/31 15:44:29 INFO mapred.JobClient: Job complete: job_201407311530_0001
14/07/31 15:44:29 INFO mapred.JobClient: Counters: 29
14/07/31 15:44:29 INFO mapred.JobClient:   Job Counters
14/07/31 15:44:29 INFO mapred.JobClient:     Launched reduce tasks=1
14/07/31 15:44:29 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=43406
14/07/31 15:44:29 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/31 15:44:29 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/07/31 15:44:29 INFO mapred.JobClient:     Launched map tasks=2
14/07/31 15:44:29 INFO mapred.JobClient:     Data-local map tasks=2
14/07/31 15:44:29 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=14688
14/07/31 15:44:29 INFO mapred.JobClient:   File Output Format Counters
14/07/31 15:44:29 INFO mapred.JobClient:     Bytes Written=25
14/07/31 15:44:29 INFO mapred.JobClient:   FileSystemCounters
14/07/31 15:44:29 INFO mapred.JobClient:     FILE_BYTES_READ=55
14/07/31 15:44:29 INFO mapred.JobClient:     HDFS_BYTES_READ=239
14/07/31 15:44:29 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=176694
14/07/31 15:44:29 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=25
14/07/31 15:44:29 INFO mapred.JobClient:   File Input Format Counters
14/07/31 15:44:29 INFO mapred.JobClient:     Bytes Read=25
14/07/31 15:44:29 INFO mapred.JobClient:   Map-Reduce Framework
14/07/31 15:44:29 INFO mapred.JobClient:     Map output materialized bytes=61
14/07/31 15:44:29 INFO mapred.JobClient:     Map input records=2
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce shuffle bytes=61
14/07/31 15:44:29 INFO mapred.JobClient:     Spilled Records=8
14/07/31 15:44:29 INFO mapred.JobClient:     Map output bytes=41
14/07/31 15:44:29 INFO mapred.JobClient:     Total committed heap usage (bytes)=417439744
14/07/31 15:44:29 INFO mapred.JobClient:     CPU time spent (ms)=2880
14/07/31 15:44:29 INFO mapred.JobClient:     Combine input records=4
14/07/31 15:44:29 INFO mapred.JobClient:     SPLIT_RAW_BYTES=214
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce input records=4
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce input groups=3
14/07/31 15:44:29 INFO mapred.JobClient:     Combine output records=4
14/07/31 15:44:29 INFO mapred.JobClient:     Physical memory (bytes) snapshot=418050048
14/07/31 15:44:29 INFO mapred.JobClient:     Reduce output records=3
14/07/31 15:44:29 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2174017536
14/07/31 15:44:29 INFO mapred.JobClient:     Map output records=4
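
A few of these counters are worth reading: Map input records=2 matches one line per input file, Map output records=4 matches the four words emitted by the mappers, and Reduce output records=3 matches the three distinct words. If you want to look a job up again later, Hadoop 1.x also provides hadoop job -status, shown here with the job id from the run above:

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop job -status job_201407311530_0001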

After the run finishes, you can take a look:

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -ls
Found 2 items
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:38 /user/wukong/test_in
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:44 /user/wukong/test_out
[wukong@bd01 hadoop-1.2.1]$ a_usr/hadoop-1.2.1/bin/hadoop fs -ls ./test_out
-bash: a_usr/hadoop-1.2.1/bin/hadoop: No such file or directory
[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -ls ./test_out
Found 3 items
-rw-r--r--   1 wukong supergroup          0 2014-07-31 15:44 /user/wukong/test_out/_SUCCESS
drwxr-xr-x   - wukong supergroup          0 2014-07-31 15:43 /user/wukong/test_out/_logs
-rw-r--r--   1 wukong supergroup         25 2014-07-31 15:44 /user/wukong/test_out/part-r-00000
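
Here _SUCCESS is an empty marker file and _logs holds job history; the actual data lives in part-r-00000. To copy the result down to the local filesystem, hadoop fs -get works like -put in reverse (wordcount_result.txt is just an illustrative local name):

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -get test_out/part-r-00000 ./wordcount_result.txt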

5. The final result is in part-r-00000!

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -cat ./test_out/part-r-00000
hadoop  1
hello   2
world   1
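
The counts match the local sanity check from step 1. One caveat if you rerun the job: MapReduce refuses to start when the output directory already exists, so delete test_out first (hadoop fs -rmr is the recursive delete in Hadoop 1.x). And when you are done experimenting, stop the cluster:

[wukong@bd01 hadoop-1.2.1]$ bin/hadoop fs -rmr test_out
[wukong@bd01 hadoop-1.2.1]$ bin/stop-all.sh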

This concludes the walkthrough of how to run wordcount on Hadoop-1.2.1; we hope it has cleared up your questions. Combining theory with practice is the best way to learn, so go try it yourself! To keep learning more, stay tuned to this site; we will keep working to bring you more practical articles!
