
How to fix java.lang.ClassNotFoundException when Spark reads LZO-compressed files

In this post I will walk through what to do when Spark throws java.lang.ClassNotFoundException while reading LZO-compressed files. The analysis goes step by step; hopefully you will get something out of it.

Honestly, this is a problem I had never paid attention to before, but it is worth writing up today.

Configuration

Hadoop core-site.xml:


<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.LzmaCodec</value>
</property>

<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

The io.compression.codecs list registers the LZO codecs, and io.compression.codec.lzo.class points at com.hadoop.compression.lzo.LzoCodec.
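As a quick sanity check, a minimal sketch runnable from spark-shell, using Hadoop's public CompressionCodecFactory API:

import org.apache.hadoop.io.compress.CompressionCodecFactory

// Constructing the factory eagerly loads every class listed in
// io.compression.codecs, so it fails fast with the same
// "Compression codec ... not found" error seen in the trace below
// if hadoop-lzo is missing from the classpath.
val factory = new CompressionCodecFactory(sc.hadoopConfiguration)
val lzo = factory.getCodecByClassName("com.hadoop.compression.lzo.LzoCodec")
println(if (lzo == null) "LZO codec not registered" else "Loaded: " + lzo.getClass.getName)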

spark-env.sh:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/:/home/cluster/apps/hadoop/share/hadoop/common/:/home/cluster/apps/hadoop/share/hadoop/common/lib/:/home/cluster/apps/hadoop/share/hadoop/hdfs/:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/:/home/cluster/apps/hadoop/share/hadoop/mapreduce/:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/:/home/cluster/apps/hadoop/share/hadoop/tools/lib/:/home/cluster/apps/spark/spark-1.4.1/lib/

Reproducing the error

Start spark-shell and run the following code:

val lzoFile = sc.textFile("/tmp/data/lzo/part-m-00000.lzo")
lzoFile.count
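A side note: sc.textFile decodes the .lzo file through the configured codec but treats it as a single split. If the file has an LZO index, hadoop-lzo's LzoTextInputFormat can read it splittably. A minimal sketch, assuming the hadoop-lzo jar is on the classpath (which is the whole point of this post):

import com.hadoop.mapreduce.LzoTextInputFormat
import org.apache.hadoop.io.{LongWritable, Text}

// Read the LZO file through hadoop-lzo's input format; values carry the lines.
val lzoRdd = sc.newAPIHadoopFile(
  "/tmp/data/lzo/part-m-00000.lzo",
  classOf[LzoTextInputFormat],
  classOf[LongWritable],
  classOf[Text]
).map(_._2.toString)
lzoRdd.count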

The full stack trace:

java.lang.RuntimeException: Error in configuring object 
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) 
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75) 
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) 
        at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:190) 
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
        at scala.Option.getOrElse(Option.scala:120) 
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781) 
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885) 
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) 
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) 
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:286) 
        at org.apache.spark.rdd.RDD.collect(RDD.scala:884) 
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:105) 
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:503) 
        at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:58) 
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283) 
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423) 
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218) 
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala) 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:606) 
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665) 
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170) 
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193) 
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112) 
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: java.lang.reflect.InvocationTargetException 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:606) 
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106) 
        ... 45 more 
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found. 
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135) 
        at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175) 
        at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45) 
        ... 50 more 
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found 
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803) 
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128) 
        ... 52 more 

So how do we fix it?

At first I suspected a formatting problem in the Hadoop core-site.xml, so I had a colleague walk through the Hadoop source with me; we confirmed it was not a Hadoop problem.
Then I remembered hitting a similar issue before, which I had worked around with this spark-env.sh configuration:

export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh6.1.0/lib/native/Linux-amd64-64/*:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh6.1.0/share/hadoop/common/hadoop-lzo-0.4.15-cdh6.1.0.jar:/home/stark_summer/opt/spark/spark-1.3.1-bin-hadoop2.3/lib/*
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh6.1.0/share/hadoop/common/hadoop-lzo-0.4.15-cdh6.1.0.jar:/home/stark_summer/opt/spark/spark-1.3.1-bin-hadoop2.3/lib/*

That configuration had fixed this very problem once before, but it was long ago and I had completely forgotten it, which is exactly why it pays to write up every problem you hit.
So: if I point at a concrete .jar file, everything works. But does Spark really require * to pick up all the jars under a directory? That is indeed different from Hadoop: in Hadoop, listing /xxx/yyy/lib/ is enough to load the jars in that directory, whereas Spark needs /xxx/yyy/lib/* on SPARK_CLASSPATH, or you get the error above. This is in fact standard JVM classpath behavior: a bare directory entry only exposes .class files, and only the dir/* wildcard expands to the jars inside it; Hadoop's launcher scripts expand lib directories into explicit jar lists for you, which is why bare directory paths work there.
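After editing SPARK_CLASSPATH, a one-liner in spark-shell (a minimal sketch) confirms whether the hadoop-lzo jar actually made it onto the driver classpath:

val loaded =
  try { Class.forName("com.hadoop.compression.lzo.LzoCodec"); true }
  catch { case _: ClassNotFoundException => false }
// Prints true only when the hadoop-lzo jar is reachable from the driver.
println("LzoCodec on classpath: " + loaded)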

The corrected spark-env.sh:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/*:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/*:/home/cluster/apps/hadoop/share/hadoop/common/*:/home/cluster/apps/hadoop/share/hadoop/common/lib/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/*:/home/cluster/apps/hadoop/share/hadoop/tools/lib/*:/home/cluster/apps/spark/spark-1.4.1/lib/*

Running the code above again now works fine.

But, but, but...

If I change /home/cluster/apps/hadoop/lib/native to /home/cluster/apps/hadoop/lib/native/*:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native/*
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native/*
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/*:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/*:/home/cluster/apps/hadoop/share/hadoop/common/*:/home/cluster/apps/hadoop/share/hadoop/common/lib/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/*:/home/cluster/apps/hadoop/share/hadoop/tools/lib/*:/home/cluster/apps/spark/spark-1.4.1/lib/*

then spark-shell blows up with the following error:

2015-09-11 17:52:02,357 ERROR [main] spark.SparkContext (Logging.scala:logError(96)) - Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:69)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:513)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
    at $line3.$read$$iwC$$iwC.<init>(<console>:9)
    at $line3.$read$$iwC.<init>(<console>:18)
    at $line3.$read.<init>(<console>:20)
    at $line3.$read$.<init>(<console>:24)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.<init>(<console>:7)
    at $line3.$eval$.<clinit>(<console>)
    at $line3.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
	at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
	at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
	at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
	at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
	at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
	at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
	at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
	at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
	at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
	at org.apache.spark.repl.Main$.main(Main.scala:31)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException
    at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:155)
    ... 56 more
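The last frame points at the cause: in Spark 1.4, SnappyCompressionCodec's constructor probes the native snappy library and wraps any failure in an IllegalArgumentException. Unlike SPARK_CLASSPATH, LD_LIBRARY_PATH and SPARK_LIBRARY_PATH list directories to be searched for native .so libraries, not jars, so a /* wildcard makes the entry invalid and, on this setup, the native library can no longer be found. You can probe this yourself; a minimal sketch from spark-shell, assuming snappy-java is on the classpath (it ships with the stock Spark distribution):

import org.xerial.snappy.Snappy

// Fails with an Error from snappy-java when the native library cannot be
// loaded; prints the version when the library paths are set up correctly.
println("native snappy version: " + Snappy.getNativeLibraryVersion)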

That is what to do when Spark fails to read LZO-compressed files with java.lang.ClassNotFoundException. The takeaway: jar directories on SPARK_CLASSPATH must be listed as /xxx/yyy/lib/*, while LD_LIBRARY_PATH and SPARK_LIBRARY_PATH must stay plain directory paths with no wildcard. If you run into a similar problem, hopefully the analysis above helps.

