ZooKeeper stores and manages the data that applications care about, and accepts registrations from observers.
Once that data changes, ZooKeeper notifies the observers registered with it so they can react accordingly.
As long as more than half of the nodes are alive, ZooKeeper can serve normally, which is why it is best installed on an odd number of servers.
The services it provides include a unified naming service, unified configuration management, unified cluster management, dynamic join and leave of server nodes, soft load balancing, and more.
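The "more than half" rule above can be sketched as a tiny helper; the class and method names are illustrative, not part of ZooKeeper's API:

```java
public class QuorumCheck {
    // ZooKeeper can serve requests only while a strict majority of servers is alive
    static boolean canServe(int alive, int total) {
        return alive > total / 2;
    }

    public static void main(String[] args) {
        // a 3-server and a 4-server ensemble both tolerate exactly 1 failure,
        // which is why odd cluster sizes give the same fault tolerance with less hardware
        System.out.println(canServe(2, 3)); // true
        System.out.println(canServe(2, 4)); // false
    }
}
```

A 5-server ensemble tolerates 2 failures while a 6-server ensemble still only tolerates 2, so the odd size is preferred.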
- In the bin directory:
- ./zkServer.sh start  (start the server)
./zkServer.sh status  (check the server status)
./zkCli.sh  (open the client shell; type quit to exit)
./zkServer.sh stop  (stop the server)
The parameters in ZooKeeper's configuration file zoo.cfg are interpreted as follows:
1) tickTime = 2000: the basic heartbeat interval in milliseconds; the other time limits are expressed as multiples of it.
2) initLimit: the maximum number of heartbeats (tickTime intervals) that the Leader and a Follower can tolerate while establishing the initial connection.
3) syncLimit = 5: the synchronization time limit. If communication between the Leader and a Follower takes longer than syncLimit * tickTime, the Leader considers the Follower dead and removes it from the server list.
4) dataDir: where ZooKeeper stores its data.
Note: the default tmp directory is periodically cleaned by Linux, so the default is generally not used.
5) clientPort = 2181: the client connection port, usually left unchanged.
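Put together, a minimal zoo.cfg consistent with the notes above might look like this (the dataDir path is from this setup; values such as initLimit=10 are typical defaults, not mandatory):

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/module/apache-zookeeper-3.5.7-bin/ZkData
clientPort=2181
```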
Configure the server id
First, in the /opt/module/apache-zookeeper-3.5.7-bin/ZkData directory, create a file named myid. My cluster is built from the three machines 102, 103, and 104.
This file is a unique identifier, like an ID card: it marks which server this machine is.
Configure the zoo.cfg file
server.2=192.168.116.132:2888:3888
server.3=192.168.116.133:2888:3888
server.4=192.168.116.134:2888:3888
On startup, ZooKeeper reads the myid file and compares its content with the server.X entries in zoo.cfg to determine which server it is.
Execute in the bin directory:
./zkServer.sh start
Then check the status:
./zkServer.sh status
Exchange election information
At this point server 1 finds that server 2's myid is larger than that of its current vote (server 1 itself), so it changes its vote to server 2.
Server 1 now has 0 votes and server 2 has 2 votes. That is not more than half, so the election cannot complete yet; both servers stay in the LOOKING state, meaning they are still looking for a Leader.
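The vote-switching rule can be sketched in plain Java. In the real election a larger ZXID wins before myid is compared; this is an illustration of that comparison order, not ZooKeeper's actual election code:

```java
public class VoteCompare {
    // Returns true if the candidate (zxid2, myid2) should replace the current
    // vote (zxid1, myid1): a larger zxid wins outright; on a tie, the larger myid wins.
    static boolean shouldSwitchVote(long zxid1, long myid1, long zxid2, long myid2) {
        if (zxid2 != zxid1) {
            return zxid2 > zxid1;
        }
        return myid2 > myid1;
    }

    public static void main(String[] args) {
        // fresh cluster: all zxids are 0, so server 1 switches its vote to server 2
        System.out.println(shouldSwitchVote(0, 1, 0, 2)); // true
    }
}
```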
At any given moment the ZXID may differ across the machines in the cluster; it tracks how far each server has processed the state-changing requests of the ZooKeeper service. Every modification of ZooKeeper's state produces a new transaction id (ZXID), so the ZXID counts all state-changing transactions; each modification gets a unique ZXID, and if zxid1 is smaller than zxid2, the change with zxid1 happened before the one with zxid2.
Nodes are either persistent or ephemeral; an ephemeral node is deleted automatically when the session that created it disconnects. Each kind further splits into a sequentially numbered and an unnumbered variant, giving four node types in total.
Create a beijing node whose value is tiananmen:
create /beijing "tiananmen"
Create a chaoyang node under the beijing node; its value is shuangjing:
create /beijing/chaoyang "shuangjing"
- This retrieves the value of /beijing (tiananmen) together with the node's basic metadata:
get -s /beijing
- A chaoyang node with value shuangjing was already created above; creating the same node again reports an error.
- Creating it with the sequence flag instead yields a numbered node; the counter only increases, so names never repeat:
create -s /beijing/chaoyang "tianlemen"
- Create an ephemeral fengtai node under beijing with value shoujingmao:
create -e /beijing/fengtai "shoujingmao"
6. Create an ephemeral sequential node: create -e -s /beijing/tongzhou "dd"
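The -s flag makes the server append a zero-padded, monotonically increasing counter to the node name, which is why sequential nodes never collide. A sketch of the naming scheme (illustrative, not server code):

```java
public class SeqName {
    // ZooKeeper pads the per-parent counter to 10 digits, e.g. chaoyang0000000003
    static String sequentialName(String prefix, int counter) {
        return String.format("%s%010d", prefix, counter);
    }

    public static void main(String[] args) {
        System.out.println(sequentialName("chaoyang", 3)); // chaoyang0000000003
    }
}
```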
7. Modify a value
- fengtai's value was shoujingmao; change it to jijiamiao:
set /beijing/fengtai "jijiamiao"
- Server 103 monitors whether the content of /beijing changes via get -w /beijing
- When 102 then modifies the data, 103 receives the notification. This watch only fires once, though: to keep listening you must register it again.
Watching for child nodes being added or removed works the same way.
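The one-shot behavior of a watch can be mimicked in a few lines of plain Java (the class and method names here are made up for illustration; this is not the ZooKeeper API):

```java
import java.util.function.Consumer;

public class OneShotWatch {
    private Consumer<String> watcher;           // at most one pending watcher

    // register a watcher; it fires once and is then cleared, like a ZK watch
    void watch(Consumer<String> w) { this.watcher = w; }

    void setData(String value) {
        if (watcher != null) {
            Consumer<String> w = watcher;
            watcher = null;                     // one-shot: cleared before firing
            w.accept(value);
        }
    }
}
```

To keep receiving notifications, the callback itself must call watch() again, which is exactly the re-registration pattern the notes describe.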
<modelVersion>4.0.0</modelVersion>
<groupId>com.bj.sh.zk</groupId>
<artifactId>zk</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.5.7</version>
    </dependency>
</dependencies>
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c]- %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c]- %m%n
package com.bj.sh.zk;
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.util.List;
/**
* @author LXY
* @desc
* @time 2023--01--03--20:29
*/
public class zkClient {
String connectString="192.168.116.132:2181,192.168.116.133:2181,192.168.116.134:2181";
int sessionTimeout=2000;
private ZooKeeper zkClient = null;
@Before
public void init() throws IOException {
zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
public void process(WatchedEvent watchedEvent) {
}
});
}
@Test
public void createNode() throws InterruptedException, KeeperException {
//Create a node under the root: value "xiongan", open ACL (everyone may read and write; stricter ACLs are possible), persistent and without a sequence number
zkClient.create("/hebei","xiongan".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
}
@Test
public void getChildren() throws Exception {
List<String> children = zkClient.getChildren("/", true);
for (String child : children) {
System.out.println(child);
}
// block forever so watch notifications can keep arriving
Thread.sleep(Long.MAX_VALUE);
}
A revised init: the watcher calls getChildren again inside the callback, re-registering the watch so that subsequent changes are also reported.
@Before
public void init() throws IOException {
zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
public void process(WatchedEvent watchedEvent) {
try {
System.out.println("----------------------------");
final List<String> children = zkClient.getChildren("/", true);
for (String child : children) {
System.out.println(child);
}
// Thread.sleep(Long.MAX_VALUE);
System.out.println("-------------");
} catch (KeeperException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});
}
@Test
public void exist() throws InterruptedException, KeeperException {
final Stat exists = zkClient.exists("/hebei", false);
System.out.println(exists);
}
package com.bj.sh.zk;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
/**
* @author LXY
* @desc the client-side listener
* @time 2023--01--03--21:35
*/
public class DistributeClient {
String connectString="192.168.116.132:2181,192.168.116.133:2181,192.168.116.134:2181";
int sessionTimeout=2000000;
ZooKeeper zkClient=null;
public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
DistributeClient client=new DistributeClient();
//connect to ZooKeeper
client.getconntion();
client.getChildren();
client.sleep();
}
private void sleep() throws InterruptedException {
Thread.sleep(Long.MAX_VALUE);
}
//the client-side listener
private void getChildren() throws InterruptedException, KeeperException {
List<String> str = new ArrayList<String>();
List<String> children = zkClient.getChildren("/server", true);
for (String child : children) {
str.add(child);
}
System.out.println(str.toString());
}
//connect to ZooKeeper
private void getconntion() throws IOException {
zkClient=new ZooKeeper(connectString, sessionTimeout, new Watcher() {
public void process(WatchedEvent watchedEvent) {
try {
getChildren();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (KeeperException e) {
e.printStackTrace();
}
}
});
}
}
Simulated it with the shell; it works.
DistributeServer
Its job: while running it automatically registers itself as an ephemeral node under /server, and when it goes offline the node is deleted automatically.
package com.bj.sh.zk;
import org.apache.zookeeper.*;
import java.io.IOException;
/**
* @author LXY
* @desc
* @time 2023--01--03--21:45
*/
public class DistributeServer {
String connectString="192.168.116.132:2181,192.168.116.133:2181,192.168.116.134:2181";
int sessionTimeout=20000000;
ZooKeeper zkserver=null;
public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
DistributeServer distributeServer=new DistributeServer();
distributeServer.getconntion();
distributeServer.creatserver("hadoop001");
distributeServer.sleep();
}
private void sleep() throws InterruptedException {
Thread.sleep(Long.MAX_VALUE);
}
private void creatserver(String hostname) throws InterruptedException, KeeperException {
// register this host under the persistent /server parent (which must already exist) as an ephemeral sequential node
zkserver.create("/server/"+hostname,hostname.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
System.out.println("hostname:"+hostname);
}
//connect to ZooKeeper
private void getconntion() throws IOException {
zkserver=new ZooKeeper(connectString, sessionTimeout, new Watcher() {
public void process(WatchedEvent watchedEvent) {
}
});
}
}
package com.bj.sh.zk.lock;
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
/**
* @author LXY
* @desc distributed pessimistic lock
* @time 2023--01--04--20:31
*/
public class DistributedLock {
String connectString="192.168.116.132:2181,192.168.116.133:2181,192.168.116.134:2181";
int sessionTimeout=2000;
ZooKeeper zk=null;
private String rootNode = "locks";
private String subNode = "seq-";
// latch used to wait until the connection is established
private CountDownLatch connectLatch = new CountDownLatch(1);
// latch used to wait until the predecessor node is deleted
private CountDownLatch waitLatch = new CountDownLatch(1);
// the child node this client is currently waiting on
private String waitPath;
// the child node this client created
private String currentNode;
public DistributedLock() throws IOException, InterruptedException, KeeperException {
//connect
zk=new ZooKeeper(connectString, sessionTimeout, new Watcher() {
public void process(WatchedEvent event) {
if (event.getState() == Event.KeeperState.SyncConnected) {
connectLatch.countDown();
}
if (event.getType() == Event.EventType.NodeDeleted && event.getPath().equals(waitPath))
{
waitLatch.countDown();
}
}
});
//wait until the connection callback has fired
connectLatch.await();
Stat exists = zk.exists("/" + rootNode, false);
if (exists == null){
System.out.println("root node does not exist; creating it as a persistent node");
zk.create("/" + rootNode, rootNode.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
}
//acquire the lock
void zkLock() throws InterruptedException, KeeperException {
currentNode = zk.create("/" + rootNode + "/" + subNode,null, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
// sleep briefly so the output is easier to follow
Thread.sleep(10);
// note: there is no need to watch changes to the children of "/locks" here
List<String> childrenNodes = zk.getChildren("/" + rootNode, false);
// if the list contains only one child it must be currentNode, so this client holds the lock
if (childrenNodes.size() == 1) {
return;
} else {
// sort the ephemeral sequential children in ascending order
Collections.sort(childrenNodes);
// this client's own node name
String thisNode = currentNode.substring(("/" + rootNode + "/").length());
// position of the current node in the sorted list
int index = childrenNodes.indexOf(thisNode);
if (index == -1) {
System.out.println("unexpected state: own node not found in children list");
} else if (index == 0) {
// index == 0 means thisNode is the smallest in the list, so this client gets the lock
return;
} else {
// the node ranked immediately before currentNode
this.waitPath = "/" + rootNode + "/" +
childrenNodes.get(index - 1);
// register a watch on waitPath; when waitPath is deleted, zookeeper calls the watcher's process method
zk.getData(waitPath, true, new Stat());
//wait until the lock becomes available
waitLatch.await();
return;
}
}
}
public void zkUnlock() throws InterruptedException, KeeperException {
zk.delete(this.currentNode, -1);
}
}
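The heart of zkLock above is plain string ordering: sort the sequential children and wait on the immediate predecessor. That logic can be isolated and exercised without a ZooKeeper server (the node names below are made up):

```java
import java.util.*;

public class LockOrder {
    // Given all children of the lock root and our own node name, return the name
    // of the node to watch (the immediate predecessor), or null if we hold the lock.
    static String predecessor(List<String> children, String self) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);                 // zero-padded suffixes sort correctly
        int index = sorted.indexOf(self);
        if (index <= 0) {
            return null;                          // smallest node: lock acquired
        }
        return sorted.get(index - 1);             // wait for this node to be deleted
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("seq-0000000002", "seq-0000000000", "seq-0000000001");
        System.out.println(predecessor(children, "seq-0000000000")); // null -> holds the lock
        System.out.println(predecessor(children, "seq-0000000002")); // seq-0000000001
    }
}
```

Watching only the predecessor (rather than the whole parent) avoids the herd effect: each waiter is woken by exactly one deletion.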
package com.bj.sh.zk.lock;
import org.apache.zookeeper.KeeperException;
import java.io.IOException;
/**
* @author LXY
* @desc
* @time 2023--01--04--20:36
*/
public class test {
public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
// create distributed lock 1
final DistributedLock lock1 = new DistributedLock();
// create distributed lock 2
final DistributedLock lock2 = new DistributedLock();
new Thread(new Runnable() {
public void run() {
// acquire the lock
try {
lock1.zkLock();
System.out.println("thread 1 acquired the lock");
Thread.sleep(5 * 1000);
lock1.zkUnlock();
System.out.println("thread 1 released the lock");
} catch (Exception e) {
e.printStackTrace();
}
}
}).start();
new Thread(new Runnable() {
public void run() {
// acquire the lock
try {
lock2.zkLock();
System.out.println("thread 2 acquired the lock");
Thread.sleep(5 * 1000);
lock2.zkUnlock();
System.out.println("thread 2 released the lock");
} catch (Exception e) {
e.printStackTrace();
}
}
}).start();
}
}
1) Problems with developing against the native Java API
(1) The session connection is asynchronous and must be handled manually, e.g. with a CountDownLatch.
(2) A watch must be re-registered after each trigger, or it stops firing.
(3) Development complexity is still fairly high.
(4) Multi-node delete and create are not supported; you have to recurse yourself.
Curator is a high-level ZooKeeper client framework that, among other things, provides a ready-made solution for distributed locks.
See the official documentation for details: https://curator.apache.org/index.html
Code implementation
pom
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>4.3.0</version>
</dependency>
package com.bj.sh.zk.lock2;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;
/**
* @author LXY
* @desc
* @time 2023--01--04--21:45
*/
public class test {
public static void main(String[] args) {
final InterProcessMutex lock1 = new InterProcessMutex(getCuratorFramework(), "/lock");
for (int i = 0; i < 5; i++) {
new Thread(new Runnable() {
public void run() {
try {
//acquire the lock
lock1.acquire();
System.out.println("thread " + Thread.currentThread().getName() + " acquired the lock");
Thread.sleep(5000);
lock1.release();
System.out.println("thread " + Thread.currentThread().getName() + " released the lock");
} catch (Exception e) {
e.printStackTrace();
}
}
}).start();
}
}
private static CuratorFramework getCuratorFramework() {
//retry policy: initial wait 3 seconds, retry up to 3 times
ExponentialBackoffRetry policy = new ExponentialBackoffRetry(3000, 3);
// build the client via the factory
CuratorFramework build = CuratorFrameworkFactory.builder().connectString("192.168.116.132:2181,192.168.116.133:2181,192.168.116.134:2181")
.connectionTimeoutMs(2000)
.sessionTimeoutMs(2000)
.retryPolicy(policy)
.build();
build.start();
return build;
}
}