Thrift Java Usage

Reposted from: http://itindex.net/detail/46937-thrift-%E5%8E%9F%E7%90%86-java
          http://www.micmiu.com/soa/rpc/thrift-sample/
          http://blog.csdn.net/m13321169565/article/details/7835957
          http://blog.csdn.net/m13321169565/article/details/7836006
          http://blog.csdn.net/jun55xiu/article/details/8988429

I. Introduction

      Apache Thrift is a cross-language service framework that is, at its core, an RPC framework. The cross-language call problem appears as soon as a service has to be exposed to outside consumers: suppose a UserService that returns user information is written in Java, while the consumers are written in PHP, Python, C++ and so on. It is impractical to hand-build a calling mechanism for every language, so HTTP is sometimes used as the access protocol; but if the consumers cannot use HTTP and would rather call the service as if it were a local API, Thrift provides that support.

II. Basic Concepts

1. Data Types

Base Types:
     bool: a boolean value, true or false; maps to Java boolean
     byte: an 8-bit signed integer; maps to Java byte
     i16: a 16-bit signed integer; maps to Java short
     i32: a 32-bit signed integer; maps to Java int
     i64: a 64-bit signed integer; maps to Java long
     double: a 64-bit floating-point number; maps to Java double
     string: a UTF-8 encoded string; maps to Java String
Struct (structure type):
     struct: defines a plain data object, equivalent to a JavaBean in Java
Container (container types):
     list: maps to Java ArrayList
     set: maps to Java HashSet
     map: maps to Java HashMap
Exception (exception type):
     exception: maps to Java Exception
Service (defines an object's interface and a set of methods):
     service: maps to the service class
(A short Java sketch of these type mappings follows this list.)
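To make the mappings concrete, here is a small hand-written Java sketch (illustrative only, not Thrift-generated code; the field names are made up) showing the Java-side type used for each Thrift type:

import java.util.List;
import java.util.Map;
import java.util.Set;

public class TypeMappingSketch {
	boolean vip;              // bool
	byte flag;                // byte
	short age;                // i16
	int count;                // i32
	long id;                  // i64
	double score;             // double
	String name;              // string (UTF-8)
	List<Long> ids;           // list<i64>  (an ArrayList at runtime)
	Set<String> tags;         // set<string> (a HashSet at runtime)
	Map<String, Long> scores; // map<string, i64> (a HashMap at runtime)
}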

2. Data Transmission Protocols

      Thrift lets you choose the protocol used for communication between client and server. Protocols fall broadly into text and binary categories; to save bandwidth and improve transfer efficiency, a binary protocol is used in most cases, but a text-based protocol is sometimes still the right choice, depending on actual requirements:
      TBinaryProtocol: transmits data in a binary encoding.
      TCompactProtocol: very efficient; compacts data using Variable-Length Quantity (VLQ) encoding.
      TJSONProtocol: transmits data using a JSON encoding.
      TSimpleJSONProtocol: a write-only JSON protocol, suitable for output that is parsed by scripting languages.
      TDebugProtocol: helps developers debug during development by presenting data as human-readable text.
      Note: the client and the server must use the same protocol (a short sketch of this pairing follows).
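As a minimal client-side sketch of this pairing (assuming the IUserService example and port 7911 used later in this article): if the server registers a TCompactProtocol.Factory, the client must construct a TCompactProtocol on its transport as well.

import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ProtocolChoiceSketch {
	public static void main(String[] args) throws Exception {
		// Server side (elsewhere): tArgs.protocolFactory(new TCompactProtocol.Factory());
		// The client must then use the same protocol class on its transport:
		TTransport transport = new TSocket("localhost", 7911, 5000);
		TProtocol protocol = new TCompactProtocol(transport);
		transport.open();
		// ... build the generated IUserService.Client with this protocol and make calls ...
		transport.close();
	}
}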

3. Transport Layer

     TSocket: blocking I/O; the most common transport.
     TFramedTransport: non-blocking style; transmits data in frames (blocks of a given size), similar to NIO in Java.
     TFileTransport: transmits data in the form of files; a Java implementation is not shipped for this transport, but it is straightforward to implement.
     TMemoryTransport: in-memory I/O, comparable to Java's ByteArrayOutputStream.
     TZlibTransport: applies zlib compression; no Java implementation is provided.
     (A short client-side sketch of layering TFramedTransport over TSocket follows.)
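Transports can be layered. As a minimal client-side sketch (host and port taken from the later examples), a client talking to one of the non-blocking servers below wraps its TSocket in a TFramedTransport so that every message is sent as a length-prefixed frame:

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class FramedTransportSketch {
	public static void main(String[] args) throws Exception {
		// TFramedTransport layered on top of a plain blocking TSocket.
		TTransport transport = new TFramedTransport(new TSocket("localhost", 7911, 5000));
		transport.open();
		TProtocol protocol = new TBinaryProtocol(transport);
		// ... build the generated IUserService.Client with this protocol and make calls ...
		transport.close();
	}
}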

4. Server Types

     TSimpleServer: single-threaded server using standard blocking I/O.
     TThreadPoolServer: multi-threaded server using standard blocking I/O.
     TNonblockingServer: multi-threaded server using non-blocking I/O, built on Java NIO channels.

III. Basic Coding Steps

Server side:
      Implement the service handler (the Iface implementation)
      Create the TProcessor
      Create the TServerTransport
      Create the TProtocol
      Create the TServer
      Start the server

Client side:
      Create the TTransport
      Create the TProtocol
      Create the Client based on the TTransport and TProtocol
      Call the Client's methods

IV. Example

1. Prepare the jar packages and the exe tool

slf4j-api-1.5.6.jar
slf4j-log4j12-1.5.6.jar
log4j-1.2.14.jar
libthrift-0.9.1.jar
thrift-0.9.1.exe

2. service.thrift

namespace java com.zero.thrift   # namespace for the generated code, corresponding to the Java package

struct User{
	1:i64 id,
	2:string name,
	3:i64 timestamp,
	4:bool vip	
}

service IUserService{
	User getById(1:i64 id)
}

3. Generate the Java code

On Windows, use the thrift tool to generate Java code from the service.thrift file:
   > thrift.exe --gen java service.thrift
Two Java files, User.java and IUserService.java, are generated under the gen-java\com\zero\thrift directory.
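The exact contents of the generated file depend on the Thrift version; as a rough, simplified outline (not the actual generated source, with details and method bodies elided), IUserService.java nests the types that the server and client code below rely on:

package com.zero.thrift;

public class IUserService {

	// Synchronous service interface; UserServiceImpl below implements this.
	public interface Iface {
		User getById(long id) throws org.apache.thrift.TException;
	}

	// Asynchronous counterpart of Iface, implemented by AsyncClient.
	public interface AsyncIface { /* getById(long id, callback) */ }

	// Synchronous RPC client: serializes calls over a TProtocol to the server.
	public static class Client { /* ... */ }

	// Non-blocking client driven by a TAsyncClientManager (see section 5.5).
	public static class AsyncClient { /* ... */ }

	// Dispatches incoming requests to an Iface implementation; registered with the TServer.
	public static class Processor<I extends Iface> { /* ... */ }
}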

4. Implement the IUserService.Iface interface

package com.zero.thrift;

import java.util.Date;

import org.apache.thrift.TException;

public class UserServiceImpl implements IUserService.Iface{

	@Override
	public User getById(long id) throws TException {
		System.out.println("getById()...");
		// For demo purposes the id argument is ignored and a fixed User is returned.
		return new User(10000L, "zero", new Date().getTime(), true);
	}
}

5. Server and Client

5.1 TSimpleServer: a simple single-threaded server model, generally used for testing
TSimpleServerDemo.java
package com.zero.thrift;

import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TBinaryProtocol.Factory;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;

public class TSimpleServerDemo {

	public static void main(String[] args) {
		new TSimpleServerDemo().startServer();
	}

	public void startServer() {
		try {
			System.out.println("TSimpleServer start ....");

			// Listen on server port 7911
			TServerSocket serverTransport = new TServerSocket(7911);

			// Use TBinaryProtocol.Factory as the protocol factory
			Factory protocolFactory = new TBinaryProtocol.Factory();

			// Bind the IUserService.Processor to the service implementation UserServiceImpl
			TProcessor processor = new IUserService.Processor<IUserService.Iface>(
					new UserServiceImpl());

			TServer.Args tArgs = new TServer.Args(serverTransport);
			tArgs.processor(processor);
			tArgs.protocolFactory(protocolFactory);

			TServer server = new TSimpleServer(tArgs);
			server.serve();
		} catch (Exception e) {
			System.out.println("Server start error!!!");
			e.printStackTrace();
		}
	}
}
TSimpleClientDemo.java
package com.zero.thrift;

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class TSimpleClientDemo {

	public static void main(String[] args) {
		new TSimpleClientDemo().startClient();
	}
	public void startClient() {
		TTransport transport = null;
		try {
			transport = new TSocket("localhost", 7911, 5000);
			// The protocol must match the server side
			TProtocol protocol = new TBinaryProtocol(transport);
			// TProtocol protocol = new TCompactProtocol(transport);
			// TProtocol protocol = new TJSONProtocol(transport);
			IUserService.Client client = new IUserService.Client(
					protocol);
			transport.open();
			User user = client.getById(1000L);
			user.setName(user.getName()+"007");
			System.out.println("TSimpleClientDemo client, result = " + user);
		} catch (TTransportException e) {
			e.printStackTrace();
		} catch (TException e) {
			e.printStackTrace();
		} finally {
			if (null != transport) {
				transport.close();
			}
		}
	}
}
Client output:
TSimpleClientDemo client, result = User(id:10000, name:zero007, timestamp:1442977728903, vip:true)

5.2 TThreadPoolServer: multi-threaded server using standard blocking I/O

TThreadPoolServerDemo.java
package com.zero.thrift;

import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TBinaryProtocol.Factory;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TThreadPoolServer;
import org.apache.thrift.transport.TServerSocket;

public class TThreadPoolServerDemo {

	public static void main(String[] args) {
		new TThreadPoolServerDemo().startServer();
	}

	public void startServer() {
		try {
			System.out.println("TThreadPoolServer start ....");

			// Listen on server port 7911
			TServerSocket serverTransport = new TServerSocket(7911);

			// Use TBinaryProtocol.Factory as the protocol factory
			Factory protocolFactory = new TBinaryProtocol.Factory();

			// Bind the IUserService.Processor to the service implementation UserServiceImpl
			TProcessor processor = new IUserService.Processor<IUserService.Iface>(
					new UserServiceImpl());

			TThreadPoolServer.Args tArgs = new TThreadPoolServer.Args(
					serverTransport);
			tArgs.processor(processor);
			tArgs.protocolFactory(protocolFactory);

			// Thread-pool server model: standard blocking I/O, with a pool of threads created up front to handle requests.
			TServer server = new TThreadPoolServer(tArgs);
			server.serve();
		} catch (Exception e) {
			System.out.println("Server start error!!!");
			e.printStackTrace();
		}
	}
}
TThreadPoolClientDemo.java
package com.zero.thrift;

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class TThreadPoolClientDemo {

	public static void main(String[] args) {
		new TThreadPoolClientDemo().startClient();
	}

	public void startClient() {
		TTransport transport = null;
		try {
			// Set up the transport
			transport = new TSocket("localhost", 7911, 5000);
			// The protocol must match the server side
			TProtocol protocol = new TBinaryProtocol(transport);
			// TProtocol protocol = new TCompactProtocol(transport);
			// TProtocol protocol = new TJSONProtocol(transport);
			IUserService.Client client = new IUserService.Client(protocol);
			transport.open();
			long start = System.currentTimeMillis();
			for (int i = 0; i < 100; i++) {
				User user = client.getById(1000L);
				user.setName(user.getName() + "007");
				System.out
						.println("TThreadPoolClientDemo client, result = " + user);
			}
			System.out.println("Elapsed: " + (System.currentTimeMillis() - start));
		} catch (TTransportException e) {
			e.printStackTrace();
		} catch (TException e) {
			e.printStackTrace();
		} finally {
			if (null != transport) {
				transport.close();
			}
		}
	}
}

5.3 TNonblockingServer: multi-threaded server using non-blocking I/O. With non-blocking I/O, both the server and the client must use TFramedTransport as the data transport.

TNonblockingServerDemo.java
package com.zero.thrift;

import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TBinaryProtocol.Factory;
import org.apache.thrift.server.TNonblockingServer;
import org.apache.thrift.server.TServer;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TNonblockingServerSocket;

public class TNonblockingServerDemo {

	public static void main(String[] args) {
		new TNonblockingServerDemo().startServer();
	}

	public void startServer() {
		try {
			System.out.println("TNonblockingServer start ....");

			// Listen on server port 7911
			TNonblockingServerSocket tnbSocketTransport = new TNonblockingServerSocket(
					7911);

			// Use TBinaryProtocol.Factory as the protocol factory
			Factory protocolFactory = new TBinaryProtocol.Factory();

			// Bind the IUserService.Processor to the service implementation UserServiceImpl
			TProcessor processor = new IUserService.Processor<IUserService.Iface>(
					new UserServiceImpl());

			TNonblockingServer.Args tArgs = new TNonblockingServer.Args(
					tnbSocketTransport);
			tArgs.processor(processor);
			tArgs.protocolFactory(protocolFactory);
			tArgs.transportFactory(new TFramedTransport.Factory());

			// Non-blocking I/O: both server and client must use TFramedTransport for data transport
			TServer server = new TNonblockingServer(tArgs);
			server.serve();
		} catch (Exception e) {
			System.out.println("Server start error!!!");
			e.printStackTrace();
		}
	}
}
TNonblockingClientDemo.java
package com.zero.thrift;

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class TNonblockingClientDemo {

	public static void main(String[] args) {
		new TNonblockingClientDemo().startClient();
	}

	public void startClient() {
		TTransport transport = null;
		try {
			transport = new TFramedTransport(new TSocket("localhost", 7911,
					5000));
			transport.open();
			
			// The protocol must match the server side
			TProtocol protocol = new TBinaryProtocol(transport);
			IUserService.Client client = new IUserService.Client(protocol);
			long start = System.currentTimeMillis();
			for (int i = 0; i < 100; i++) {
				User user = client.getById(1000L);
				user.setName(user.getName() + "007");
				System.out.println("TNonblockingClientDemo client, result = "
						+ user);
			}
			System.out.println("Elapsed: " + (System.currentTimeMillis() - start));
		} catch (TTransportException e) {
			e.printStackTrace();
		} catch (TException e) {
			e.printStackTrace();
		} finally {
			if (null != transport) {
				transport.close();
			}
		}
	}
}

5.4 THsHaServer: half-sync/half-async server model; TFramedTransport must be specified as the data transport

package com.zero.thrift;

import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.THsHaServer;
import org.apache.thrift.server.TServer;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TNonblockingServerSocket;

public class THsHaServerDemo {

	public static void main(String[] args) {
		new THsHaServerDemo().startServer();
	}

	public void startServer() {
		try {
			System.out.println("THsHaServer start ....");

			TProcessor tprocessor = new IUserService.Processor<IUserService.Iface>(
					new UserServiceImpl());

			TNonblockingServerSocket tnbSocketTransport = new TNonblockingServerSocket(
					7911);
			THsHaServer.Args thhsArgs = new THsHaServer.Args(tnbSocketTransport);
			thhsArgs.processor(tprocessor);
			thhsArgs.transportFactory(new TFramedTransport.Factory());
			thhsArgs.protocolFactory(new TBinaryProtocol.Factory());

			// Half-sync/half-async server model
			TServer server = new THsHaServer(thhsArgs);
			server.serve();

		} catch (Exception e) {
			System.out.println("Server start error!!!");
			e.printStackTrace();
		}
	}
}
The client is the same as TNonblockingClientDemo; just make sure the protocol matches and that TFramedTransport is used as the transport.

5.5 AsyncClient: asynchronous client

package com.zero.thrift;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.apache.thrift.TException;
import org.apache.thrift.async.AsyncMethodCallback;
import org.apache.thrift.async.TAsyncClientManager;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocolFactory;
import org.apache.thrift.transport.TNonblockingSocket;
import org.apache.thrift.transport.TNonblockingTransport;

import com.zero.thrift.IUserService.AsyncClient.getById_call;

public class AsynClientDemo {

	public static void main(String[] args) {
		new AsynClientDemo().startClient();
	}

	public void startClient() {
		try {
			// Async call manager
			TAsyncClientManager clientManager = new TAsyncClientManager();
			// Set up the transport using non-blocking I/O
			TNonblockingTransport transport = new TNonblockingSocket(
					"localhost", 7911, 5000);
			// Set the protocol factory
			TProtocolFactory tprotocol = new TBinaryProtocol.Factory();
			// Create the async client
			IUserService.AsyncClient asyncClient = new IUserService.AsyncClient(
					tprotocol, clientManager, transport);
			System.out.println("Client start .....");

			CountDownLatch latch = new CountDownLatch(1);
			AsynCallback callBack = new AsynCallback(latch);
			System.out.println("call method getById start ...");
			asyncClient.getById(1000L, callBack);
			System.out.println("call method getById .... end");
			boolean wait = latch.await(5, TimeUnit.SECONDS);
			System.out.println("latch.await = " + wait);
		} catch (Exception e) {
			e.printStackTrace();
		}
		System.out.println("startClient end.");
	}

	public class AsynCallback implements
			AsyncMethodCallback<IUserService.AsyncClient.getById_call> {
		private CountDownLatch latch;

		public AsynCallback(CountDownLatch latch) {
			this.latch = latch;
		}

		@Override
		public void onComplete(getById_call response) {
			System.out.println("onComplete");
			try {
				System.out.println("AsynCall result =:"
						+ response.getResult().toString());
			} catch (TException e) {
				e.printStackTrace();
			} catch (Exception e) {
				e.printStackTrace();
			} finally {
				latch.countDown();
			}
		}

		@Override
		public void onError(Exception exception) {
			System.out.println("onError :" + exception.getMessage());
			latch.countDown();
		}
	}

}
Output:
Client start .....
call method getById start ...
call method getById .... end
onComplete
AsynCall result =:User(id:10000, name:zero, timestamp:1442989991561, vip:true)
latch.await = true
startClient end.



