org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found

Published: 2025-01-31  Author: 千家信息网 editor

The exception is as follows:

java.lang.RuntimeException: MetaException(message:java.lang.ClassNotFoundException Class org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found)
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:290)
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:281)
        at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:631)
        at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:189)
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1017)
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:201)
        at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:262)
        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:161)
        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:161)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:161)
        at org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:262)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:174)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:186)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:181)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:188)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:188)
        ...
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



Background:

CREATE TABLE apachelog (
  host STRING,
  identity STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE;


The table was created with 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' rather than the built-in 'org.apache.hadoop.hive.serde2.RegexSerDe'. The contrib class lives in a separate hive-contrib jar that is not on Spark's classpath, so every access to the table from spark-sql or spark-shell kept failing with the exception above (class not found), and even adding the relevant jar did not immediately fix it.
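If modifying the table is acceptable, one way to avoid the contrib dependency entirely is to switch the table to the built-in SerDe. This is a sketch under two assumptions: the Hive version is 0.10 or later (where RegexSerDe was promoted into hive-serde, which is always on Spark's classpath), and all columns are STRING, which the built-in RegexSerDe requires and which is true for the apachelog table above:

ALTER TABLE apachelog
  SET SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe';

After this, the existing input.regex SERDEPROPERTIES continue to apply, and spark-sql can read the table without any extra jars.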


Solution:

When starting spark-shell or spark-sql, pass the required jar with --jars xxxxx.jar. (Note: adding the jar is the obvious first step, and normally it is enough. In my case, however, if the path given to --jars was a symbolic link, the same exception persisted and the class was still not found; it only worked once I passed the jar's actual path. Possibly Spark has a bug in handling symlinked paths; I'm not certain.)
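The workaround above can be sketched as follows. The snippet builds its own symlinked jar path purely for illustration (the file names are made up); the key step is resolving the symlink with readlink -f before handing the path to --jars:

```shell
# Demo setup: a jar reachable only through a symlink, as in a
# distro-managed Hive install (paths here are hypothetical).
tmp=$(mktemp -d)
touch "$tmp/hive-contrib-1.1.0.jar"
ln -s "$tmp/hive-contrib-1.1.0.jar" "$tmp/hive-contrib.jar"

# Resolve every symlink to the jar's real path; passing the symlinked
# path itself to --jars reportedly still fails with ClassNotFoundException.
JAR_REAL=$(readlink -f "$tmp/hive-contrib.jar")
echo "$JAR_REAL"

# Then launch with the resolved path (commented out; needs a Spark install):
# spark-sql --jars "$JAR_REAL"
```

readlink -f canonicalizes the whole path, so it also handles chains of symlinks, not just a single link.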
