千家信息网

How to fix the out-of-memory error when a Hive join query reads HBase-backed external tables

Published: 2024-11-20 · Author: 千家信息网 editor

This article walks through the out-of-memory error that occurs when a Hive join query reads HBase-backed external tables, explains its cause, and shows how to fix it.

The plan was to store all data in HBase and run queries and statistics through Hive. After loading the data into HBase, the two external tables held roughly 2 million and 3 million rows. A join query between them failed with an out-of-memory error (the full log is shown below).
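The original query is not reproduced in the article; a representative join over two HBase-backed external tables (table and column names here are illustrative, not taken from the source) would look something like:

```sql
-- Hypothetical join between two HBase-backed external tables.
-- Table and column names are illustrative only.
SELECT a.rowkey, a.info_name, b.stat_value
FROM hbase_table_a a
JOIN hbase_table_b b
  ON a.rowkey = b.rowkey;
```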

That seemed wrong: for Hadoop, this is a trivial amount of data. After much fruitless searching, I finally found a credible explanation here: http://itindex.net/detail/51977-hive-hbase-%E5%A4%96%E9%83%A8

"By default, Hive automatically loads small tables into the DistributedCache; then, while map tasks scan the large table, they join against the small table held in the DistributedCache. This is called a MapJoin."

In other words: because the queried tables are HBase-backed external tables, their directories in Hive's warehouse are empty and their data file size is 0. Hive therefore treats them as small tables, loads them into the DistributedCache, and attempts a MapJoin. But the actual data volume is large, so loading it into memory overflows the heap.
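For context, an HBase-backed external table is declared through Hive's HBaseStorageHandler; the following is a minimal sketch (table and column names are illustrative):

```sql
-- Sketch of an HBase-backed external table definition.
-- Only metadata lives in Hive's warehouse directory, so the table's
-- on-disk size there is 0 bytes regardless of the actual row count --
-- which is exactly why Hive misjudges it as a "small" table.
CREATE EXTERNAL TABLE hbase_table_a (
  rowkey    STRING,
  info_name STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name")
TBLPROPERTIES ("hbase.table.name" = "table_a");
```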

Solution:

Run the following commands in Hive:

SET mapred.job.queue.name=queue3;
SET hbase.client.scanner.caching=10000;
SET hbase.zookeeper.quorum=node1,node2,node3,node4,node5;
SET zookeeper.znode.parent=/hbase;
SET hbase.regionserver.lease.period=180000;
SET hive.auto.convert.join=false;
SET hive.ignore.mapjoin.hint=false;

SET mapred.job.queue.name=queue3; selects a job queue. My Hadoop cluster has no queues configured, so I changed this to SET mapred.job.queue.name=default; (if you don't set it, jobs go to the default queue anyway).

SET hbase.client.scanner.caching=10000; sets how many rows HBase returns per scanner RPC. The old default was 1, so raising it gives a substantial performance boost for full scans.

SET hbase.zookeeper.quorum=node1,node2,node3,node4,node5; sets the ZooKeeper quorum that the HBase client connects through.

SET zookeeper.znode.parent=/hbase; sets the parent znode under which HBase keeps its coordination data in ZooKeeper (/hbase is HBase's default).

set hbase.regionserver.lease.period=180000; extends the region server's scanner lease timeout (in milliseconds), so that long-running scans are not expired mid-query.

SET hive.auto.convert.join=false; disables the automatic conversion of common joins into MapJoins (default: true).

SET hive.ignore.mapjoin.hint=false; stops Hive from ignoring /*+ MAPJOIN */ hints, so explicit hints remain effective (default: true, i.e. hints are ignored).
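Putting the settings together, a session that forces the join to run as a reduce-side (common) join rather than a MapJoin might look like this (queue, node, and table names are illustrative; adapt them to your cluster):

```sql
SET mapred.job.queue.name=default;       -- job queue (use your own)
SET hbase.client.scanner.caching=10000;  -- rows fetched per scanner RPC
SET hbase.zookeeper.quorum=node1,node2,node3,node4,node5;
SET zookeeper.znode.parent=/hbase;       -- HBase root znode (default)
SET hbase.regionserver.lease.period=180000;  -- scanner lease timeout, ms
SET hive.auto.convert.join=false;        -- no automatic MapJoin
SET hive.ignore.mapjoin.hint=false;      -- explicit MAPJOIN hints still honored

-- With auto-conversion off, this now runs as a common join
-- and no longer tries to load an HBase table into memory:
SELECT a.rowkey, b.stat_value
FROM hbase_table_a a
JOIN hbase_table_b b
  ON a.rowkey = b.rowkey;
```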

For reference, the out-of-memory log produced by the failed Hive query was:

Query ID = hadoop_20150826162508_4c06d5ea-0f33-485d-8e0d-904dc67f51b1
Total jobs = 1
Execution log at: /tmp/hadoop/hadoop_20150826162508_4c06d5ea-0f33-485d-8e0d-904dc67f51b1.log
2015-08-26 16:25:17     Starting to launch local task to process map join;      maximum memory = 477102080
2015-08-26 16:27:09     Dump the side-table for tag: 1 with group count: 60765 into file: file:/tmp/c84b84b9-452b-46cd-b90d-45a628385803/hive_2015-08-26_16-25-08_004_3456412479615750499-1/-local-10003/HashTable-Stage-2/MapJoin-mapfile01--.hashtable
2015-08-26 16:27:11     Uploaded 1 File to: file:/tmp/c84b84b9-452b-46cd-b90d-45a628385803/hive_2015-08-26_16-25-08_004_3456412479615750499-1/-local-10003/HashTable-Stage-2/MapJoin-mapfile01--.hashtable (19315960 bytes)
2015-08-26 16:27:11     End of local task; Time Taken: 114.62 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Job = job_1440161019277_0326, Tracking URL = http://db2:8088/proxy/application_1440161019277_0326/
Kill Command = /data/hadoop/bin/hadoop job  -kill job_1440161019277_0326
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2015-08-26 16:27:32,336 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:28:32,735 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:29:32,789 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:30:33,082 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:31:33,823 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:32:34,188 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:33:34,868 Stage-2 map = 0%,  reduce = 0%
2015-08-26 16:33:54,390 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_1440161019277_0326 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://db2:8088/proxy/application_1440161019277_0326/
Examining task ID: task_1440161019277_0326_m_000000 (and more) from job job_1440161019277_0326
Task with the most failures(4):
-----
Task ID:  task_1440161019277_0326_m_000000
URL:  http://db2:8088/taskdetails.jsp?jobid=job_1440161019277_0326&tipid=task_1440161019277_0326_m_000000
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        ... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        ... 17 more
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.copyObject(WritableIntObjectInspector.java:41)
        at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:311)
        at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:346)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinEagerRowContainer.read(MapJoinEagerRowContainer.java:138)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinEagerRowContainer.read(MapJoinEagerRowContainer.java:131)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:85)
        at org.apache.hadoop.hive.ql.exec.mr.HashTableLoader.load(HashTableLoader.java:98)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:288)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:173)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:169)
        at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:55)
        at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieveAsync(ObjectCache.java:63)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.initializeOp(MapJoinOperator.java:166)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:481)
        at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:438)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:131)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-2: Map: 2  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

That concludes "how to fix the out-of-memory error when a Hive join query reads HBase-backed external tables". Thanks for reading; I hope it helps.
