Spark Operators: RDD Action Operations (6) – saveAsHadoopFile, saveAsHadoopDataset

saveAsHadoopFile

def saveAsHadoopFile(path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], codec: Class[_ <: CompressionCodec]): Unit

def saveAsHadoopFile(path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: JobConf = …, codec: Option[Class[_ <: CompressionCodec]] = None): Unit

saveAsHadoopFile saves an RDD to files on HDFS, using the old-style (mapred) Hadoop API.

You can specify the output key class, the output value class, and the compression codec.

Each partition is written out as one file.

 
```scala
var rdd1 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))

import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable

rdd1.saveAsHadoopFile("/tmp/lxw1234.com/",classOf[Text],classOf[IntWritable],classOf[TextOutputFormat[Text,IntWritable]])

rdd1.saveAsHadoopFile("/tmp/lxw1234.com/",classOf[Text],classOf[IntWritable],classOf[TextOutputFormat[Text,IntWritable]],
  classOf[com.hadoop.compression.lzo.LzopCodec])
```
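LzopCodec comes from the external hadoop-lzo package. If that library is not available on the cluster, a codec bundled with Hadoop such as GzipCodec can be used instead; the sketch below reuses rdd1 and the imports from above, and the output path is only illustrative:

```scala
import org.apache.hadoop.io.compress.GzipCodec

// Same call as above, but compressing the output with the built-in Gzip codec
rdd1.saveAsHadoopFile("/tmp/lxw1234.com.gz/", classOf[Text], classOf[IntWritable],
  classOf[TextOutputFormat[Text, IntWritable]], classOf[GzipCodec])
```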

saveAsHadoopDataset

def saveAsHadoopDataset(conf: JobConf): Unit

saveAsHadoopDataset is used to save an RDD to storage systems other than HDFS, such as HBase; it works with any storage reachable through a Hadoop OutputFormat, including HDFS itself, as the first example below shows.

In the JobConf, you usually need to pay attention to or set five things: the output path, the key class, the value class, the RDD's output format (OutputFormat), and the compression-related parameters.

## Using saveAsHadoopDataset to save an RDD to HDFS

 
```scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable
import org.apache.hadoop.mapred.JobConf

var rdd1 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))
var jobConf = new JobConf()
jobConf.setOutputFormat(classOf[TextOutputFormat[Text,IntWritable]])
jobConf.setOutputKeyClass(classOf[Text])
jobConf.setOutputValueClass(classOf[IntWritable])
// The output path goes into the JobConf instead of being passed as an argument
jobConf.set("mapred.output.dir","/tmp/lxw1234/")
rdd1.saveAsHadoopDataset(jobConf)
```

Result:

```
hadoop fs -cat /tmp/lxw1234/part-00000
A 2
A 1
hadoop fs -cat /tmp/lxw1234/part-00001
B 6
B 3
B 7
```
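The example above covers four of the five JobConf settings (OutputFormat, key class, value class, output path). The fifth, compression, can be enabled on the same JobConf through the old mapred FileOutputFormat helpers; the sketch below assumes the Gzip codec that ships with Hadoop:

```scala
import org.apache.hadoop.mapred.FileOutputFormat
import org.apache.hadoop.io.compress.GzipCodec

// Turn on output compression for the job and pick a codec
FileOutputFormat.setCompressOutput(jobConf, true)
FileOutputFormat.setOutputCompressorClass(jobConf, classOf[GzipCodec])

rdd1.saveAsHadoopDataset(jobConf)
```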


## Saving data to HBase

Create the HBase table first:

```
create 'lxw1234',{NAME => 'f1',VERSIONS => 1},{NAME => 'f2',VERSIONS => 1},{NAME => 'f3',VERSIONS => 1}
```

 
```scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.io.ImmutableBytesWritable

var conf = HBaseConfiguration.create()
var jobConf = new JobConf(conf)
jobConf.set("hbase.zookeeper.quorum","zkNode1,zkNode2,zkNode3")
jobConf.set("zookeeper.znode.parent","/hbase")
jobConf.set(TableOutputFormat.OUTPUT_TABLE,"lxw1234")
jobConf.setOutputFormat(classOf[TableOutputFormat])

var rdd1 = sc.makeRDD(Array(("A",2),("B",6),("C",7)))
rdd1.map(x => {
  // The row key is the first element of the tuple; the Int goes into column f1:c1
  var put = new Put(Bytes.toBytes(x._1))
  put.add(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
  (new ImmutableBytesWritable, put)
}).saveAsHadoopDataset(jobConf)
```

Result:

```
hbase(main):005:0> scan 'lxw1234'
ROW COLUMN+CELL
A column=f1:c1, timestamp=1436504941187, value=\x00\x00\x00\x02
B column=f1:c1, timestamp=1436504941187, value=\x00\x00\x00\x06
C column=f1:c1, timestamp=1436504941187, value=\x00\x00\x00\x07
3 row(s) in 0.0550 seconds
```
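The cell values appear as raw bytes (for example \x00\x00\x00\x02) because each Int was written with Bytes.toBytes; when reading them back, Bytes.toInt decodes them, as in this small illustrative check:

```scala
import org.apache.hadoop.hbase.util.Bytes

// \x00\x00\x00\x02 is the 4-byte big-endian encoding of the Int 2
val decoded: Int = Bytes.toInt(Bytes.toBytes(2)) // decoded == 2
```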


Reposted from blog.csdn.net/qq_36932624/article/details/82965351