Previously I implemented a simple RPC framework using sockets. If you are not familiar with how RPC works under the hood, see that earlier article:
Implementing a basic RPC framework by hand
In that version the server's IP and port were hard-coded into the client. This post improves on that by using ZooKeeper as the registry: the server registers its own IP and port with zk, the client looks the address up from the registry, and then connects to the server over the wire protocol to make the remote call.
ZooKeeper (zk from here on) is an open-source distributed coordination service for distributed applications.
ZooKeeper download mirror: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/
After unpacking, rename conf/zoo_sample.cfg to zoo.cfg and add the data and log locations to the config file:
dataDir=D:\opt\zookeeper\data
dataLogDir=D:\software\zookeeper\log
Go into the bin directory and run zkServer.cmd and zkCli.cmd to start the zk server and client respectively.
zk's structure resembles a Linux file system: it is a tree. zk lets distributed applications coordinate with each other through a shared namespace, similar to a standard file system, made up of registered data nodes (each node is called a ZNode; besides its name, every node also holds a value, much like a key-value pair).
Every node has a unique path. Clients create and modify node data by that unique path, and when data changes zk notifies, in real time, every client that has registered a watch on that path.
Serialization protocol: Netty's built-in encoders/decoders
IoC framework: Spring, used here so that annotated services are automatically registered with ZooKeeper
Transport layer: Netty; for readers not familiar with Netty, a plain Socket version is also provided
Registry: ZooKeeper, providing service registration and discovery (zk can also be used for load balancing, which is not implemented here)
Server startup and service publishing
Client invocation
The finer-grained steps are documented in the code comments.
public interface IServiceRegister {
/**
 * Register a service
 * @param serviceName service name
 * @param serviceIp   service IP
 * @param port        port number
 */
void register(String serviceName, String serviceIp, int port);
}
Define the registration interface, then implement it: register the given IP and port under serviceName in zk (the registry here is built on Curator, which wraps a lot of the low-level zk client operations for us).
/**
* @Description
* @Author: chenpp
* @Date: 2020/3/10 18:52
*/
@Component
public class ServiceRegisterCenter implements IServiceRegister {
private CuratorFramework curatorFramework;
{ // connect to zk through Curator
curatorFramework = CuratorFrameworkFactory.builder().
//connection string
connectString(ZKConfig.ZK_CONNECTION).
//session timeout
sessionTimeoutMs(ZKConfig.SESSION_TIMEOUT).
//retry policy
retryPolicy(new ExponentialBackoffRetry(1000, 10)).build();
//start the client
curatorFramework.start();
}
//implement register: publish the service name and address (serviceAddress = ip:port) to zk
public void register(String serviceName, String serviceIp, int port) {
//note: zk node paths must start with /
String servicePath = ZKConfig.REGISTER_NAMESPACE + "/" + serviceName;
try {
//create the persistent node /${registerPath}/${serviceName} if it does not exist yet
if (curatorFramework.checkExists().forPath(servicePath) == null) {
curatorFramework.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath(servicePath, "0".getBytes());
}
//create an ephemeral child node named ip:port under the service path
String serviceAddress = servicePath + "/" + serviceIp + ":" + port;
String zkNode = curatorFramework.create().withMode(CreateMode.EPHEMERAL).forPath(serviceAddress, "0".getBytes());
System.out.println(serviceName + " service, address " + serviceAddress + " registered: " + zkNode);
} catch (Exception e) {
e.printStackTrace();
}
}
}
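The ZKConfig constants class referenced above is not shown in the article; a minimal sketch, with illustrative values (the connection string, namespace, timeout, and port here are assumptions, not taken from the source), might look like this:

```java
// Hypothetical ZKConfig: the article uses these constants but never shows the class.
public final class ZKConfig {
    // zk connection string (assumed: a local standalone server on the default port)
    public static final String ZK_CONNECTION = "127.0.0.1:2181";
    // session timeout in milliseconds (assumed value)
    public static final int SESSION_TIMEOUT = 5000;
    // root path under which all services are registered; zk paths must start with "/"
    public static final String REGISTER_NAMESPACE = "/registry";
    // port the netty/socket server listens on (assumed value)
    public static final int SERVER_PORT = 8888;
    // default reply sent when an invoked method returns null (assumed value)
    public static final String DEFAULT_MSG = "OK";

    private ZKConfig() {
    }
}
```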
The core class: when Spring starts, it registers every bean annotated with @RpcService to zk, and once initialization completes it starts the Netty server to listen for requests.
/**
 * Implementing InitializingBean means afterPropertiesSet is called after the
 * ZkRpcServer bean has been instantiated and its properties set.
 * Implementing ApplicationContextAware means Spring injects the
 * ApplicationContext when the container initializes, giving this bean access
 * to the application context.
 */
@Component
public class ZkRpcServer implements ApplicationContextAware, InitializingBean {
private static final ExecutorService executor = Executors.newCachedThreadPool();
@Autowired
private IServiceRegister registerCenter;
//key: the interface class name, value: the concrete bean instance
private Map<String, Object> beanMappings = new HashMap<String, Object>();
//once the rpc server finishes initializing, start listening; this could also use the Socket version
public void afterPropertiesSet() throws Exception {
nettyRpc();
//socketRpc();
}
private void nettyRpc() throws InterruptedException {
//boss (acceptor) event loop group
EventLoopGroup bossGroup = new NioEventLoopGroup();
//worker event loop group
EventLoopGroup workerGroup = new NioEventLoopGroup();
//the server-side bootstrap, analogous to a ServerSocket
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
//pipeline for the child (worker) channels
.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
//add codecs to convert the transported data: bytes -> object is decoding, the reverse is encoding
ChannelPipeline pipeline = socketChannel.pipeline();
//frame decoder for the wire protocol
/**
 * The five constructor arguments are:
 * maxFrameLength: maximum frame length; a TooLongFrameException is thrown if a frame exceeds it
 * lengthFieldOffset: offset of the length field within the message
 * lengthFieldLength: size of the length field (4 if the length is an int, 8 for a long)
 * lengthAdjustment: compensation value added to the value of the length field
 * initialBytesToStrip: number of leading bytes stripped from the decoded frame
 */
pipeline.addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
//frame encoder: prepends the 4-byte length field
.addLast(new LengthFieldPrepender(4))
//object encoder for the payload
.addLast(new ObjectEncoder())
//object decoder for the payload
.addLast(new ObjectDecoder(Integer.MAX_VALUE, ClassResolvers.cacheDisabled(null)))
.addLast(new RpcServerHandler(beanMappings));
}
})
//length of the accept (backlog) queue for pending connections
.option(ChannelOption.SO_BACKLOG, 128)
//keep worker connections alive
.childOption(ChannelOption.SO_KEEPALIVE, true);
//bind the port and start the netty server
ChannelFuture future = serverBootstrap.bind(ZKConfig.SERVER_PORT).sync();
System.out.println("netty server started on port " + ZKConfig.SERVER_PORT + "....");
future.channel().closeFuture().sync();
}
private void socketRpc(){
ServerSocket serverSocket = null;
try {
//create the server socket
serverSocket = new ServerSocket(ZKConfig.SERVER_PORT);
while(true){
//accept() blocks until a connection arrives
Socket socket = serverSocket.accept();
//hand the rpc request off to the thread pool
executor.submit(new SpringHandleThread(beanMappings,socket));
}
} catch (Exception e) {
e.printStackTrace();
}finally {
if(serverSocket != null){
try {
serverSocket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
public void setApplicationContext(ApplicationContext context) throws BeansException {
//fetch the beans annotated with @RpcService from the spring context
String[] beanNames = context.getBeanNamesForAnnotation(RpcService.class);
for (String beanName : beanNames) {
Object bean = context.getBean(beanName);
RpcService annotation = bean.getClass().getAnnotation(RpcService.class);
Class<?> interfaceClass = annotation.interfaceClass();
String serviceName = annotation.serviceName();
//remember the mapping from interface class name to the bean instance
beanMappings.put(interfaceClass.getName(), bean);
//register the instance with zk
registerCenter.register(serviceName, IpUtils.getLocalHost(), ZKConfig.SERVER_PORT);
}
}
}
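The @RpcService annotation scanned in setApplicationContext is never shown in the article. A plausible definition, inferred from the interfaceClass() and serviceName() attributes used above (in the real project it is presumably also meta-annotated with Spring's @Component so annotated services become beans), is:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical definition: the attributes are inferred from how the article uses the annotation.
// In the actual project this is likely also meta-annotated with @Component.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME) // must be RUNTIME so getAnnotation() can see it
public @interface RpcService {
    // the service interface the implementation exposes
    Class<?> interfaceClass();

    // the name under which the service is registered in zk
    String serviceName();
}
```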
The handler that processes messages received by the Netty server: it invokes the target method via reflection and writes the return value back to the client.
/**
* @Description
* @Author: chenpp
* @Date: 2020/3/10 20:21
*/
public class RpcServerHandler extends ChannelInboundHandlerAdapter {
private Map<String, Object> serviceMap;
public RpcServerHandler(Map<String, Object> serviceMap){
this.serviceMap = serviceMap;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
RpcRequest rpcRequest = (RpcRequest)msg;
String className = rpcRequest.getClassName();
Object result = null;
ChannelFuture future = null;
try {
Class<?> clazz = Class.forName(className);
//the request carries an interface type, which cannot be instantiated,
//so the actual instance is looked up in the registered serviceMap
Object[] parameters = rpcRequest.getArgs();
Object serviceInstance = serviceMap.get(clazz.getName());
if (parameters == null) {
Method method = clazz.getMethod(rpcRequest.getMethodName());
result = method.invoke(serviceInstance);
} else {
Class<?>[] types = new Class<?>[parameters.length];
for (int i = 0; i < parameters.length; i++) {
types[i] = parameters[i].getClass();
}
Method method = clazz.getMethod(rpcRequest.getMethodName(), types);
result = method.invoke(serviceInstance, parameters);
}
if (result == null) {
// if the method returned nothing, send a default OK result to the client
future = ctx.writeAndFlush(ZKConfig.DEFAULT_MSG);
} else {
// write the return value back to the client
future = ctx.writeAndFlush(result);
}
// the listener in finally closes the channel once the write completes
} catch (Exception e) {
e.printStackTrace();
}finally {
if( future != null) {
future.addListener(ChannelFutureListener.CLOSE);
}
}
}
}
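The RpcRequest DTO exchanged between client and server is also not shown. Given the getters and setters used in the article, and the fact that Netty's ObjectEncoder/ObjectDecoder (and the plain-socket ObjectOutputStream path) require Serializable payloads, a minimal version might be:

```java
import java.io.Serializable;

// Hypothetical RpcRequest: fields inferred from the getters/setters the article calls.
public class RpcRequest implements Serializable {
    private static final long serialVersionUID = 1L;
    // fully qualified name of the service interface
    private String className;
    // name of the method to invoke
    private String methodName;
    // actual arguments of the call (null for no-arg methods)
    private Object[] args;

    public String getClassName() { return className; }
    public void setClassName(String className) { this.className = className; }
    public String getMethodName() { return methodName; }
    public void setMethodName(String methodName) { this.methodName = methodName; }
    public Object[] getArgs() { return args; }
    public void setArgs(Object[] args) { this.args = args; }
}
```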
Start the server; you should see the zk registration log line and the corresponding netty server startup log.
Next, implement the service discovery interface: look up the service address by serviceName. Since this is a single-instance setup, only one address is returned.
@Component
public class ServiceDiscoveryImpl implements IServiceDiscovery {
private Map<String, String> serviceMap = new HashMap<String, String>();
private List<String> serviceAddresses;
private CuratorFramework curatorFramework;
{ // connect to zk through Curator
curatorFramework = CuratorFrameworkFactory.builder()
.connectString(ZKConfig.ZK_CONNECTION)
.sessionTimeoutMs(ZKConfig.SESSION_TIMEOUT)
.retryPolicy(new ExponentialBackoffRetry(1000, 10)).build();
//start the client
curatorFramework.start();
}
public String discover(String serviceName) {
//derive the node path from serviceName
String nodePath = ZKConfig.REGISTER_NAMESPACE + "/" + serviceName;
try {
//no clustering here: each service publishes a single instance
serviceAddresses = curatorFramework.getChildren().forPath(nodePath);
addServiceAddress(serviceAddresses,serviceName);
//register a watch so node changes are picked up dynamically
registerWatcher(nodePath,serviceName);
System.out.println("service " + serviceName + " resolved to address: " + serviceMap.get(serviceName));
} catch (Exception e) {
throw new RuntimeException("failed to fetch child nodes during service discovery!", e);
}
return serviceMap.get(serviceName);
}
/**
 * Watch for node changes and refresh serviceAddresses
 *
 * @param path        the service node path
 * @param serviceName the service name
 */
private void registerWatcher(final String path,final String serviceName) {
PathChildrenCache pathChildrenCache = new PathChildrenCache(curatorFramework, path, true);
pathChildrenCache.getListenable().addListener(new PathChildrenCacheListener() {
public void childEvent(CuratorFramework curatorFramework, PathChildrenCacheEvent pathChildrenCacheEvent) throws Exception {
serviceAddresses = curatorFramework.getChildren().forPath(path);
addServiceAddress(serviceAddresses,serviceName);
System.out.println("node " + path + " changed, addresses now: " + serviceAddresses + "....");
}
});
try {
pathChildrenCache.start();
} catch (Exception e) {
throw new RuntimeException("failed to watch for node changes!", e);
}
}
private void addServiceAddress(List<String> serviceAddresses,String serviceName){
if (!CollectionUtils.isEmpty(serviceAddresses)) {
String serviceAddress = serviceAddresses.get(0);
serviceMap.put(serviceName,serviceAddress);
}
}
}
The client's core class implements InvocationHandler and is used to build the proxy. Its invoke method creates a Netty client and sends the request to the server address obtained from the registry.
public class RpcInvocationHandler implements InvocationHandler {
private String serviceName;
private IServiceDiscovery serviceDiscovery;
public RpcInvocationHandler(String serviceName, IServiceDiscovery serviceDiscovery) {
this.serviceName = serviceName;
this.serviceDiscovery = serviceDiscovery;
}
/**
 * The InvocationHandler behind the proxy: calling a method on the interface
 * actually sends the call over the network
 */
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
//package the interface class, method name and arguments into an RpcRequest
RpcRequest rpcRequest = new RpcRequest();
rpcRequest.setArgs(args);
rpcRequest.setClassName(method.getDeclaringClass().getName());
rpcRequest.setMethodName(method.getName());
return handleNetty(rpcRequest);
//return handleSocket(rpcRequest);
}
private Object handleNetty(RpcRequest rpcRequest){
//client event loop group
EventLoopGroup group = null;
final RpcClientHandler handler = new RpcClientHandler();
try{
group = new NioEventLoopGroup();
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(group).channel(NioSocketChannel.class);
//add the client-side handlers
bootstrap.option(ChannelOption.TCP_NODELAY, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline()
/** The five arguments, as on the server side:
maxFrameLength: maximum frame length; a TooLongFrameException is thrown if a frame exceeds it
lengthFieldOffset: offset of the length field within the message
lengthFieldLength: size of the length field (4 if the length is an int, 8 for a long)
lengthAdjustment: compensation value added to the value of the length field
initialBytesToStrip: number of leading bytes stripped from the decoded frame
*/
//frame decoder for the wire protocol
.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
//frame encoder: prepends the 4-byte length field
.addLast("frameEncoder", new LengthFieldPrepender(4))
//object encoder for the payload
.addLast("encoder", new ObjectEncoder())
//object decoder for the payload
.addLast("decoder", new ObjectDecoder(Integer.MAX_VALUE, ClassResolvers.cacheDisabled(null)))
.addLast(handler);
}
});
//look up the server address from zk via the discovery service
String address = serviceDiscovery.discover(serviceName);
//connect and start the netty client
String[] add = address.split(":");
ChannelFuture future = bootstrap.connect(add[0], Integer.parseInt(add[1])).sync();
//send the RpcRequest to the server through Netty
future.channel().writeAndFlush(rpcRequest).sync();
future.channel().closeFuture().sync();
}catch (Exception e){
e.printStackTrace();
}finally {
if (group != null) {
group.shutdownGracefully();
}
}
//return the server's response captured by the handler
return handler.getResponse();
}
private Object handleSocket(RpcRequest rpcRequest) throws IOException, ClassNotFoundException {
String address = serviceDiscovery.discover(serviceName);
String[] add = address.split(":");
//send the RpcRequest to the server over a plain socket and read the result back;
//try-with-resources ensures the socket and streams are closed
try (Socket socket = new Socket(add[0], Integer.parseInt(add[1]));
ObjectOutputStream oos = new ObjectOutputStream(socket.getOutputStream());
ObjectInputStream ois = new ObjectInputStream(socket.getInputStream())) {
oos.writeObject(rpcRequest);
return ois.readObject();
}
}
}
The handler on the client side that receives the server's response:
public class RpcClientHandler extends ChannelInboundHandlerAdapter {
private Object response;
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
//cache the server's response so the caller can read it after the channel closes
response = msg;
}
public Object getResponse(){
return response;
}
}
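The article never shows how a client actually obtains the proxy that routes calls into RpcInvocationHandler.invoke. A JDK dynamic-proxy factory along these lines demonstrates the mechanism (the RpcProxyFactory name is illustrative, not from the source):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical client-side factory: wraps an InvocationHandler (in the real
// project, RpcInvocationHandler) in a proxy implementing the service interface.
public class RpcProxyFactory {
    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> interfaceClass, InvocationHandler handler) {
        // every method call on the returned proxy is dispatched to handler.invoke(...)
        return (T) Proxy.newProxyInstance(
                interfaceClass.getClassLoader(),
                new Class<?>[]{interfaceClass},
                handler);
    }
}
```

A call such as `IHelloService hello = RpcProxyFactory.create(IHelloService.class, new RpcInvocationHandler("helloService", serviceDiscovery));` (assuming an IHelloService interface) then sends every method invocation through the remote call path shown above.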
Client execution result:
With that, a simple RPC framework with a registry is complete.
Source code:
https://github.com/dearfulan/cp-rpc/tree/master/register-rpc-client
https://github.com/dearfulan/cp-rpc/tree/master/register-rpc-server
Reference: https://blog.csdn.net/dongguabai/article/details/83625362