$ hive
# Create the table
hive> create external table lines(line string);
# Load the data
hive> load data inpath '/data/wordcount' overwrite into table lines;
# Query and count
hive> select word,count(*) as wc from lines lateral view explode(split(line,' ')) t1 as word group by word;
(3) The execution results are as follows.
hive> create external table lines(line string);
OK
Time taken: 4.742 seconds
hive> load data inpath '/data/wordcount' overwrite into table lines;
Loading data to table default.lines
Table default.lines stats: [numFiles=1, totalSize=36]
OK
Time taken: 0.514 seconds
hive> select word,count(*) as wc from lines lateral view explode(split(line,' ')) t1 as word group by word;
Query ID = root_20181129142424_2948e218-ddd7-4e7c-803e-bcd3d21db21f
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
14:24:46,642 Stage-1 map = 0%, reduce = 0%
14:24:55,246 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.41 sec
14:25:04,715 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 9.13 sec
MapReduce Total cumulative CPU time: 9 seconds 130 msec
Ended Job = job_1543310100051_0033
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 9.13 sec HDFS Read: 9637 HDFS Write: 32 SUCCESS
Total MapReduce CPU Time Spent: 9 seconds 130 msec
OK
Hadoop 1
Hello 3
Hive 1
World 1
Time taken: 47.78 seconds, Fetched: 4 row(s)
hive>
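To make the query's logic concrete, here is a minimal Python sketch of what `lateral view explode(split(line,' '))` followed by `group by word` does: each line is split on spaces, the resulting words are "exploded" into individual rows, and the rows are grouped and counted. The sample lines are an assumption chosen to reproduce the four result rows shown above.

```python
from collections import Counter

# Assumed sample data matching the query output above (3 lines, 36 bytes with newlines)
lines = ["Hello World", "Hello Hadoop", "Hello Hive"]

# split(line, ' ') + explode: one row per word
words = [w for line in lines for w in line.split(" ")]

# group by word, count(*)
wc = Counter(words)
for word, count in sorted(wc.items()):
    print(word, count)
# Hadoop 1
# Hello 3
# Hive 1
# World 1
```

Unlike this in-memory sketch, Hive compiles the query into a MapReduce job: the map stage emits one record per word and the reduce stage performs the grouped count, which is why the log above reports one mapper and one reducer.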