Starting from the source code, this article walks through how Spark invokes Hadoop's OutputFormat implementations to write output files, focusing on operators commonly used in practice such as saveAsTextFile(path) and saveAsHadoopFile(path).
saveAsTextFile(path) ultimately calls saveAsHadoopFile(path) under the hood, so this article focuses on the source of the latter; along the way you will also see which parts can be customized.
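Roughly speaking (simplifying the real implementation, which also works around an implicit-resolution quirk, see SPARK-2075), saveAsTextFile does something like the following sketch: every element is wrapped as a (NullWritable, Text) pair and handed to saveAsHadoopFile with TextOutputFormat, which is why the "key" never shows up in the output file. The helper name below is my own, not Spark's.

import org.apache.hadoop.io.{NullWritable, Text}
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.spark.rdd.RDD

// simplified sketch of what RDD.saveAsTextFile boils down to
def saveAsTextFileSketch[T](rdd: RDD[T], path: String): Unit = {
  rdd
    .map(x => (NullWritable.get(), new Text(x.toString)))   // key is a NullWritable, so it is never written
    .saveAsHadoopFile[TextOutputFormat[NullWritable, Text]](path)
}

With that in mind, here is a small driver program that calls saveAsHadoopFile explicitly: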
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]): Unit = {
  val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("")
  val sc = new SparkContext(conf)
  // disable the _SUCCESS marker file
  sc.hadoopConfiguration.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
  val value: RDD[(String, Int)] = sc.parallelize(List(
    ("1", 1), ("1", 1), ("2", 1), ("2", 1), ("2", 1)
  ))
  value
    .saveAsHadoopFile("C:\\Users\\Desktop\\learn\\spark_program_test\\definedFileName"
      , classOf[String]
      , classOf[String]   // TextOutputFormat ignores the declared key/value classes; it just calls toString on each record
      , classOf[TextOutputFormat[String, String]])
  sc.stop()
}
def saveAsHadoopFile[F <: OutputFormat[K, V]](
    path: String)(implicit fm: ClassTag[F]): Unit = self.withScope {
  saveAsHadoopFile(path, keyClass, valueClass, fm.runtimeClass.asInstanceOf[Class[F]])
}
def saveAsHadoopFile(
    path: String,
    keyClass: Class[_],
    valueClass: Class[_],
    outputFormatClass: Class[_ <: OutputFormat[_, _]],
    conf: JobConf = new JobConf(self.context.hadoopConfiguration),
    codec: Option[Class[_ <: CompressionCodec]] = None): Unit = self.withScope {
  // Rename this as hadoopConf internally to avoid shadowing (see SPARK-2038).
  val hadoopConf = conf
  hadoopConf.setOutputKeyClass(keyClass)
  hadoopConf.setOutputValueClass(valueClass)
  conf.setOutputFormat(outputFormatClass)
  for (c <- codec) {
    hadoopConf.setCompressMapOutput(true)
    hadoopConf.set("mapreduce.output.fileoutputformat.compress", "true")
    hadoopConf.setMapOutputCompressorClass(c)
    hadoopConf.set("mapreduce.output.fileoutputformat.compress.codec", c.getCanonicalName)
    hadoopConf.set("mapreduce.output.fileoutputformat.compress.type",
      CompressionType.BLOCK.toString)
  }

  // Use configured output committer if already set
  if (conf.getOutputCommitter == null) {
    hadoopConf.setOutputCommitter(classOf[FileOutputCommitter])
  }

  // When speculation is on and output committer class name contains "Direct", we should warn
  // users that they may loss data if they are using a direct output committer.
  val speculationEnabled = self.conf.getBoolean("spark.speculation", false)
  val outputCommitterClass = hadoopConf.get("mapred.output.committer.class", "")
  if (speculationEnabled && outputCommitterClass.contains("Direct")) {
    val warningMessage =
      s"$outputCommitterClass may be an output committer that writes data directly to " +
        "the final location. Because speculation is enabled, this output committer may " +
        "cause data loss (see the case in SPARK-10063). If possible, please use an output " +
        "committer that does not have this behavior (e.g. FileOutputCommitter)."
    logWarning(warningMessage)
  }

  FileOutputFormat.setOutputPath(hadoopConf,
    SparkHadoopWriterUtils.createPathFromString(path, hadoopConf))
  saveAsHadoopDataset(hadoopConf)
}
Here the OutputFormat is explicitly set to TextOutputFormat; if you don't pick one yourself (e.g. when using saveAsTextFile), TextOutputFormat is the default anyway. The call enters the second method of PairRDDFunctions shown above, which ends in saveAsHadoopDataset(hadoopConf); step into that next:
def saveAsHadoopDataset(conf: JobConf): Unit = self.withScope {
  // Rename this as hadoopConf internally to avoid shadowing (see SPARK-2038).
  val hadoopConf = conf
  val outputFormatInstance = hadoopConf.getOutputFormat
  val keyClass = hadoopConf.getOutputKeyClass
  val valueClass = hadoopConf.getOutputValueClass
  if (outputFormatInstance == null) {
    throw new SparkException("Output format class not set")
  }
  if (keyClass == null) {
    throw new SparkException("Output key class not set")
  }
  if (valueClass == null) {
    throw new SparkException("Output value class not set")
  }
  SparkHadoopUtil.get.addCredentials(hadoopConf)

  logDebug("Saving as hadoop file of type (" + keyClass.getSimpleName + ", " +
    valueClass.getSimpleName + ")")

  if (SparkHadoopWriterUtils.isOutputSpecValidationEnabled(self.conf)) {
    // FileOutputFormat ignores the filesystem parameter
    val ignoredFs = FileSystem.get(hadoopConf)
    hadoopConf.getOutputFormat.checkOutputSpecs(ignoredFs, hadoopConf)
  }

  val writer = new SparkHadoopWriter(hadoopConf)
  writer.preSetup()

  val writeToFile = (context: TaskContext, iter: Iterator[(K, V)]) => {
    // Hadoop wants a 32-bit task attempt ID, so if ours is bigger than Int.MaxValue, roll it
    // around by taking a mod. We expect that no task will be attempted 2 billion times.
    val taskAttemptId = (context.taskAttemptId % Int.MaxValue).toInt

    val (outputMetrics, callback) = SparkHadoopWriterUtils.initHadoopOutputMetrics(context)

    writer.setup(context.stageId, context.partitionId, taskAttemptId)
    writer.open()
    var recordsWritten = 0L

    Utils.tryWithSafeFinallyAndFailureCallbacks {
      while (iter.hasNext) {
        val record = iter.next()
        writer.write(record._1.asInstanceOf[AnyRef], record._2.asInstanceOf[AnyRef])

        // Update bytes written metric every few records
        SparkHadoopWriterUtils.maybeUpdateOutputMetrics(outputMetrics, callback, recordsWritten)
        recordsWritten += 1
      }
    }(finallyBlock = writer.close())
    writer.commit()
    outputMetrics.setBytesWritten(callback())
    outputMetrics.setRecordsWritten(recordsWritten)
  }

  self.context.runJob(self, writeToFile)
  writer.commitJob()
}
This is where the main write logic lives:
① writer.open(): a method of SparkHadoopWriter. It first builds the output file name (something like part-00000) and then hands it to the getRecordWriter method of the OutputFormat class you configured, which returns the RecordWriter. So, judging from this, a custom file name can be achieved by overriding getRecordWriter; later on we will look at how TextOutputFormat and MultipleTextOutputFormat implement getRecordWriter and how to override it;
def open() {
  val numfmt = NumberFormat.getInstance(Locale.US)
  numfmt.setMinimumIntegerDigits(5)
  numfmt.setGroupingUsed(false)

  val outputName = "part-" + numfmt.format(splitID)
  val path = FileOutputFormat.getOutputPath(conf.value)
  val fs: FileSystem = {
    if (path != null) {
      path.getFileSystem(conf.value)
    } else {
      FileSystem.get(conf.value)
    }
  }

  getOutputCommitter().setupTask(getTaskContext())
  writer = getOutputFormat().getRecordWriter(fs, conf.value, outputName, Reporter.NULL)
}
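As a quick check of the naming above: the NumberFormat call pads the split ID to five digits, which is where the part-00000 style names come from.

import java.text.NumberFormat
import java.util.Locale

val numfmt = NumberFormat.getInstance(Locale.US)
numfmt.setMinimumIntegerDigits(5)
numfmt.setGroupingUsed(false)
numfmt.format(3)   // "00003" -> partition 3 writes to "part-00003"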
② The writeToFile function: this is the part that actually writes the records. First, you can see that each partition produces exactly one file; second, the records are written by calling the write method of the RecordWriter used by the OutputFormat you configured. So to customize what gets written, you also need to customize the RecordWriter class.
You can simply copy that class and tweak a few lines to match your own needs; next, let's go through the requirements one by one and see how to override it;
① An encoding other than UTF-8, or a line separator other than '\n': both are hard-coded in TextOutputFormat's LineRecordWriter, so you need to override the TextOutputFormat class. Just copy the whole class and change the parts you need, for example:
public class MyOutput<K, V> extends FileOutputFormat<K, V> {
  protected static class LineRecordWriter<K, V>
      implements RecordWriter<K, V> {
    // originally "UTF-8"; the field name is kept, only the charset changes
    private static final String utf8 = "GBK";
    private static final byte[] newline;
    static {
      try {
        // originally "\n".getBytes(utf8)
        newline = "\r\n".getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }
    ...
This changes just the character encoding and the line separator.
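For reference, here is a minimal self-contained Scala sketch of the same idea (the class name GbkTextOutputFormat and the details are my own, not Hadoop's; it omits the NullWritable handling and the compression support that the real LineRecordWriter has): a FileOutputFormat whose RecordWriter writes GBK-encoded lines terminated by "\r\n", reading the same separator key configured in section ② below.

import java.io.DataOutputStream

import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.mapred.{FileOutputFormat, JobConf, RecordWriter, Reporter}
import org.apache.hadoop.util.Progressable

// hypothetical sketch, not the Hadoop implementation
class GbkTextOutputFormat[K, V] extends FileOutputFormat[K, V] {

  override def getRecordWriter(ignored: FileSystem,
                               job: JobConf,
                               name: String,
                               progress: Progressable): RecordWriter[K, V] = {
    val file = FileOutputFormat.getTaskOutputPath(job, name)
    val fs = file.getFileSystem(job)
    val out: DataOutputStream = fs.create(file, progress)
    val separator = job.get("mapreduce.output.textoutputformat.separator", "\t")

    new RecordWriter[K, V] {
      private val encoding = "GBK"                      // instead of the hard-coded UTF-8
      private val newline = "\r\n".getBytes(encoding)   // instead of the hard-coded "\n"

      override def write(key: K, value: V): Unit = {
        out.write(key.toString.getBytes(encoding))
        out.write(separator.getBytes(encoding))
        out.write(value.toString.getBytes(encoding))
        out.write(newline)
      }

      override def close(reporter: Reporter): Unit = out.close()
    }
  }
}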
② The key/value separator: this one can either be hard-coded while overriding, or changed through the Hadoop configuration in main:
def main(args: Array[String]): Unit = {
  val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("")
  val sc = new SparkContext(conf)
  // change the key/value separator of the output files (the default is "\t")
  sc.hadoopConfiguration.set("mapreduce.output.textoutputformat.separator", ",")
  ...
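With the sample RDD from the opening example, each record is then written as key + separator + value, so the part files simply contain lines such as:
1,1
2,1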
③ Changing the file name:
@Override
public RecordWriter<K, V> getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) throws IOException {
  // added in the overridden class; the name can be whatever you like
  // (this turns "part-00000" into "0")
  name = Integer.parseInt(name.split("-")[1]) + "";
  ...
}
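If renaming is the only thing you need, you don't even have to copy the whole class. A minimal Scala sketch (the class name is my own) can extend TextOutputFormat, rewrite the name and delegate everything else to the parent:

import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.mapred.{JobConf, RecordWriter, TextOutputFormat}
import org.apache.hadoop.util.Progressable

class RenamingTextOutputFormat[K, V] extends TextOutputFormat[K, V] {
  override def getRecordWriter(ignored: FileSystem, job: JobConf,
                               name: String, progress: Progressable): RecordWriter[K, V] = {
    // "part-00000" -> "0"; any scheme works as long as names stay unique across tasks
    val customName = name.split("-")(1).toInt.toString
    super.getRecordWriter(ignored, job, customName, progress)
  }
}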
④ Changing the key/value write logic: I did not change anything here, but it can be adapted to your own business logic;
public synchronized void write(K key, V value)
    throws IOException {
  boolean nullKey = key == null || key instanceof NullWritable;
  boolean nullValue = value == null || value instanceof NullWritable;
  if (nullKey && nullValue) {
    return;
  }
  if (!nullKey) {
    writeObject(key);
  }
  if (!(nullKey || nullValue)) {
    out.write(keyValueSeparator);
  }
  if (!nullValue) {
    writeObject(value);
  }
  out.write(newline);
}
// When PairRDDFunctions passes the records in, key and value were cast to AnyRef
// (they are the original Scala objects, e.g. String and Int here, not Text),
// so this example always takes the else branch.
private void writeObject(Object o) throws IOException {
  if (o instanceof Text) {
    Text to = (Text) o;
    out.write(to.getBytes(), 0, to.getLength());
  } else {
    out.write(o.toString().getBytes(utf8));
  }
}
That covers TextOutputFormat. MultipleTextOutputFormat is the simple facility Hadoop provides for customizing the file name and the key/value that actually get written, but the final write is still performed by TextOutputFormat's LineRecordWriter, which means the encoding and line separator still cannot be customized. Its main hooks (mostly defined in its parent class MultipleOutputFormat) are:
// Changes the per-partition leaf file name; each partition passes in a different name,
// such as part-00001. It takes lower priority than generateFileNameForKeyValue.
protected String generateLeafFileName(String name) {
  return name;
}

// key and value need no explanation; name here is whatever generateLeafFileName returned
// (or part-00001 etc. if generateLeafFileName is not overridden). Note that since several
// partitions write files concurrently, if two partitions generate the same file name, one
// file will overwrite the other. Using only the key is safe only if identical keys are
// guaranteed to land in the same partition; key + name guarantees nothing is overwritten,
// at the price of possibly producing a very large number of files.
protected String generateFileNameForKeyValue(K key, V value, String name) {
  return name;
}

// the key that is actually written
protected K generateActualKey(K key, V value) {
  return key;
}

// the value that is actually written
protected V generateActualValue(K key, V value) {
  return value;
}

// This method determines the RecordWriter that ultimately writes the file; it is invoked
// from getRecordWriter. In other words, the RecordWriter that MultipleOutputFormat builds
// (as an inner class) only makes name, key and value customizable.
abstract protected RecordWriter<K, V> getBaseRecordWriter(FileSystem fs,
    JobConf job, String name, Progressable arg3) throws IOException;
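As an illustration, here is a hedged Scala sketch of the classic use case, splitting the output into one file per key (the class name KeyBasedOutputFormat is my own):

import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

class KeyBasedOutputFormat extends MultipleTextOutputFormat[Any, Any] {

  // route every record into a file named after its key, keeping "name"
  // (part-00000, part-00001, ...) in the result so two partitions that see
  // the same key never collide on the same file
  override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
    key.toString + "-" + name

  // drop the key from the file content, so only the value gets written
  override def generateActualKey(key: Any, value: Any): Any =
    NullWritable.get()
}

// driver side, mirroring the opening example:
// value.saveAsHadoopFile(path, classOf[String], classOf[Int], classOf[KeyBasedOutputFormat])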
Notes: 1. If the business logic requires splitting the output by a given file size or record count, then apart from using the foreachPartition operator (a sketch follows below), the only remaining option is to modify the source of the saveAsHadoopDataset method in PairRDDFunctions;
2. When Spark writes to the local filesystem it produces .crc checksum files; writing to HDFS does not.
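A rough sketch of the foreachPartition route mentioned in note 1, rolling over to a new file every maxRecords records within a partition (the helper name, path layout and the plain new Configuration() are my own assumptions for illustration):

import java.nio.charset.StandardCharsets

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.TaskContext

def writePartitionInChunks(iter: Iterator[(String, Int)],
                           outputDir: String,
                           maxRecords: Int): Unit = {
  val partitionId = TaskContext.getPartitionId()
  val fs = FileSystem.get(new Configuration())
  var fileIndex = 0
  var count = 0
  var out = fs.create(new Path(s"$outputDir/part-$partitionId-$fileIndex"))
  try {
    for ((k, v) <- iter) {
      if (count == maxRecords) {          // roll over to the next file
        out.close()
        fileIndex += 1
        count = 0
        out = fs.create(new Path(s"$outputDir/part-$partitionId-$fileIndex"))
      }
      out.write(s"$k\t$v\n".getBytes(StandardCharsets.UTF_8))
      count += 1
    }
  } finally {
    out.close()
  }
}

// usage:
// value.foreachPartition(iter => writePartitionInChunks(iter, "/tmp/split_output", 100000))

Unlike saveAsHadoopFile, this bypasses the OutputFormat/OutputCommitter machinery entirely, so there is no temporary-directory commit step.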