1. Java's built-in serialization
Required jars: none
Example code:
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

object JavaSerialize {

  def serialize(obj: Object): Array[Byte] = {
    var oos: ObjectOutputStream = null
    var baos: ByteArrayOutputStream = null
    try {
      baos = new ByteArrayOutputStream()
      oos = new ObjectOutputStream(baos)
      oos.writeObject(obj)
      baos.toByteArray()
    } catch {
      case e: Exception =>
        println(e.getLocalizedMessage + e.getStackTraceString)
        null
    }
  }

  def deserialize(bytes: Array[Byte]): Object = {
    var bais: ByteArrayInputStream = null
    try {
      bais = new ByteArrayInputStream(bytes)
      val ois = new ObjectInputStream(bais)
      ois.readObject()
    } catch {
      case e: Exception =>
        println(e.getLocalizedMessage + e.getStackTraceString)
        null
    }
  }
}
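A quick round-trip sketch (the Person case class below is a made-up example, not from the original post; any class implementing java.io.Serializable will do):

// Hypothetical example type; Scala case classes are Serializable by default
case class Person(name: String, age: Int)

object JavaSerializeDemo {
  def main(args: Array[String]): Unit = {
    val bytes = JavaSerialize.serialize(Person("Tom", 30))
    val restored = JavaSerialize.deserialize(bytes).asInstanceOf[Person]
    println(restored) // Person(Tom,30)
  }
}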
2. Jackson (json4s) serialization
Required jars: json4s-jackson_2.10-3.2.11.jar, jackson-annotations-2.3.0.jar, jackson-core-2.3.1.jar, jackson-databind-2.3.1.jar (all available from Maven)
Example code:
import org.json4s.NoTypeHints
import org.json4s.jackson.Serialization
import org.json4s.jackson.Serialization._

object JacksonSerialize {

  // Serialize an object to a JSON string
  def serialize[T <: Serializable with AnyRef : Manifest](obj: T): String = {
    implicit val formats = Serialization.formats(NoTypeHints)
    write(obj)
  }

  // Deserialize a JSON string back into an object of type T
  def deserialize[T: Manifest](objStr: String): T = {
    implicit val formats = Serialization.formats(NoTypeHints)
    read[T](objStr)
  }
}
The code is also very simple. The advantage is that the serialized result is JSON, so it can be read directly and is more human-friendly; the drawbacks are that serialization takes noticeably longer and the serialized output is not small either.
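For instance, reusing the hypothetical Person case class from the Java example above, a round trip might look like this (a sketch, not from the original post):

object JacksonSerializeDemo {
  def main(args: Array[String]): Unit = {
    val json = JacksonSerialize.serialize(Person("Tom", 30))
    println(json) // e.g. {"name":"Tom","age":30}
    val restored = JacksonSerialize.deserialize[Person](json)
    println(restored) // Person(Tom,30)
  }
}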
3. Avro serialization
Required jars: avro-tools-1.7.7.jar (used to generate the class from the schema), avro-1.7.7.jar
Step 1: define the schema file user.avsc describing the data structure, as follows:
{"namespace": "example.avro",
"type": "record",
"name": "User",
"fields": [
{"name": "name", "type": "string"},
{"name": "favorite_number", "type": ["int", "null"]},
{"name": "favorite_color", "type": ["string", "null"]}
]
}
Step 2: generate the class with the Avro tools
(1) Place avro-tools-1.7.7.jar and user.avsc in the same directory.
(2) Run: java -jar avro-tools-1.7.7.jar compile schema user.avsc . (the last argument is the output directory).
(3) User.java is generated automatically under the output directory (in the example/avro subdirectory matching the namespace); reference this class in your code.
Step 3: example code
import java.io.ByteArrayOutputStream
import example.avro.User
import org.apache.avro.file.{DataFileReader, DataFileWriter}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}
import org.apache.avro.specific.{SpecificDatumReader, SpecificDatumWriter}

object AvroSerialize {

  // Serialize a User and return the result as a byte array
  def serialize(user: User): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val writer = new SpecificDatumWriter[User](User.getClassSchema)
    val encoder = EncoderFactory.get().binaryEncoder(bos, null)
    writer.write(user, encoder)
    encoder.flush()
    bos.close()
    bos.toByteArray
  }

  // Deserialize the byte array back into a User object
  def deserialize(bytes: Array[Byte]): Any = {
    val reader = new SpecificDatumReader[User](User.getClassSchema)
    val decoder = DecoderFactory.get().binaryDecoder(bytes, null)
    var user: User = null
    user = reader.read(null, decoder)
    user
  }

  // Write the serialized result to an Avro data file
  def serialize(user: User, path: String): Unit = {
    val userDatumWriter = new SpecificDatumWriter[User](User.getClassSchema)
    val dataFileWriter = new DataFileWriter[User](userDatumWriter)
    dataFileWriter.create(user.getSchema(), new java.io.File(path))
    dataFileWriter.append(user)
    dataFileWriter.close()
  }

  // Deserialize the objects stored in an Avro data file
  def deserialize(path: String): List[User] = {
    val reader = new SpecificDatumReader[User](User.getClassSchema)
    val dataFileReader = new DataFileReader[User](new java.io.File(path), reader)
    var users: List[User] = List[User]()
    while (dataFileReader.hasNext()) {
      users :+= dataFileReader.next()
    }
    users
  }
}
Two variants are shown here: one serializes to an in-memory byte array, the other writes to a data file. This approach is somewhat more involved than the previous two; Hadoop RPC uses this serialization format.
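As a usage sketch (assuming the example.avro.User class generated above, which exposes the builder API shown in the Avro getting-started guide; the file path is arbitrary):

import example.avro.User

object AvroSerializeDemo {
  def main(args: Array[String]): Unit = {
    // Build a record with the generated builder
    val user = User.newBuilder()
      .setName("Alyssa")
      .setFavoriteNumber(256)
      .setFavoriteColor("blue")
      .build()

    // Binary round trip
    val bytes = AvroSerialize.serialize(user)
    val restored = AvroSerialize.deserialize(bytes).asInstanceOf[User]
    println(restored.getName)

    // File round trip
    AvroSerialize.serialize(user, "users.avro")
    val users = AvroSerialize.deserialize("users.avro")
    users.foreach(u => println(u.getName))
  }
}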
4. Kryo serialization
Required jars: kryo-4.0.0.jar, minlog-1.2.jar, objenesis-2.6.jar, commons-codec-1.8.jar
Example code:
import java.io.ByteArrayOutputStream
import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.{Input, Output}
import com.esotericsoftware.kryo.serializers.JavaSerializer
import org.objenesis.strategy.StdInstantiatorStrategy

object KryoSerialize {

  // Kryo instances are not thread-safe, so keep one per thread
  val kryo = new ThreadLocal[Kryo]() {
    override def initialValue(): Kryo = {
      val kryoInstance = new Kryo()
      kryoInstance.setReferences(false)
      kryoInstance.setRegistrationRequired(false)
      kryoInstance.setInstantiatorStrategy(new StdInstantiatorStrategy())
      kryoInstance.register(classOf[Serializable], new JavaSerializer())
      kryoInstance
    }
  }

  // Serialize an object to a byte array
  def serialize[T <: Serializable with AnyRef : Manifest](t: T): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val output = new Output(baos)
    output.clear()
    try {
      kryo.get().writeClassAndObject(output, t)
    } catch {
      case e: Exception => e.printStackTrace()
    }
    output.toBytes
  }

  // Deserialize a byte array back into an object of type T
  def deserialize[T <: Serializable with AnyRef : Manifest](bytes: Array[Byte]): T = {
    val input = new Input()
    input.setBuffer(bytes)
    kryo.get().readClassAndObject(input).asInstanceOf[T]
  }
}
In my local tests this approach was the fastest. The key is to reuse Kryo instances, because creating them in large numbers is very expensive, and to handle Kryo correctly under multi-threading (hence the ThreadLocal above). Spark also uses Kryo.
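A minimal round-trip sketch, again reusing the hypothetical Person case class from the Java example (it satisfies the Serializable with AnyRef bound):

object KryoSerializeDemo {
  def main(args: Array[String]): Unit = {
    val bytes = KryoSerialize.serialize(Person("Tom", 30))
    val restored = KryoSerialize.deserialize[Person](bytes)
    println(restored) // Person(Tom,30)
  }
}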
There are other serialization options as well, such as Protobuf and Thrift. They involve a certain amount of setup complexity, and due to environment issues I have not got them working yet; I will write them up once I do.
Source: https://blog.csdn.net/u013597009/article/details/78538018