
How to implement a distributed exclusive lock in ZooKeeper


Many newcomers are unclear about how to implement a distributed exclusive lock in ZooKeeper. To help with that, this article walks through the idea step by step, with a complete code example; readers who need this should be able to take something away from it.

1. Distributed exclusive locks
Distributed locks are used to synchronize access to shared resources in a distributed system. ZooKeeper has no built-in locking construct like Java's synchronized keyword or ReentrantLock, but building a lock on top of it is simple: let a single data node (znode) represent the lock. When multiple clients call create() for that node at the same time, ZooKeeper guarantees that exactly one of them succeeds. That client holds the lock, while the other clients register a Watcher on the node. When the holder releases the lock, the watching clients receive the Watcher notification and try to acquire the lock again, and so on.
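The following is a minimal sketch of this single-node scheme, assuming a ZooKeeper server reachable at the address shown and an already-existing parent node /exclusive_lock (both illustrative, not from the original article); the full example in section 3 uses a more refined layout:

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class SimpleLockSketch {

    public static void main(String[] args) throws Exception {
        // the connect string and the parent node /exclusive_lock are illustrative;
        // the parent node must already exist
        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("192.168.8.88:2181", 5000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getState() == KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();

        String lockPath = "/exclusive_lock/lock";
        while (true) {
            try {
                // only one of the competing clients can create this node;
                // EPHEMERAL means the lock is released automatically if we crash
                zk.create(lockPath, null, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                System.out.println("lock acquired");
                // ... access the shared resource here ...
                zk.delete(lockPath, -1); // release the lock
                break;
            } catch (KeeperException.NodeExistsException e) {
                // another client holds the lock: watch the node and block until
                // its deletion is reported, then retry
                final CountDownLatch released = new CountDownLatch(1);
                Watcher watcher = new Watcher() {
                    @Override
                    public void process(WatchedEvent event) {
                        if (event.getType() == EventType.NodeDeleted) {
                            released.countDown();
                        }
                    }
                };
                if (zk.exists(lockPath, watcher) != null) {
                    released.await(); // woken when the holder deletes the node
                }
            }
        }
        zk.close();
    }
}

The drawback of this simple scheme is that every waiting client watches the same node, so each release wakes all of them at once; the example below refines it with sequential child nodes.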

2. Overall flow

1. Each client connects to ZooKeeper and creates an EPHEMERAL_SEQUENTIAL child node (for example sub0000000053) under the lock node /locks.
2. The client lists the children of /locks and sorts them by sequence number. If its own node is the smallest, it holds the lock and accesses the shared resource.
3. Otherwise, it registers a Watcher on the child node immediately ahead of its own and waits.
4. When the holder finishes, it deletes its node (the node also disappears automatically if the holder's session dies); the waiting client behind it is notified, confirms that its own node is now the smallest, and takes the lock.
5. This repeats until every client has held and released the lock in turn.
3. Code example

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

/**
 * A distributed exclusive lock built on ZooKeeper.
 */
public class DistributedClient {

    private static final int SESSION_TIMEOUT = 5000;

    private String hosts = "192.168.8.88:2181,192.168.8.88:2182,192.168.8.88:2183";
    private String groupNode = "locks";
    private String subNode = "sub";

    private ZooKeeper zk;
    // the child node created by this client
    private volatile String thisPath;
    // the child node this client is waiting on
    private volatile String waitPath;

    private CountDownLatch latch = new CountDownLatch(1);

    /**
     * Connect to ZooKeeper and compete for the lock.
     *
     * @param countDownLatch counted down once this client has finished its work
     */
    public void connectZookeeper(final CountDownLatch countDownLatch)
            throws Exception {
        zk = new ZooKeeper(hosts, SESSION_TIMEOUT, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                try {
                    if (event.getState() == KeeperState.SyncConnected) {
                        latch.countDown();
                    }
                    // a delete event fired for waitPath
                    /*
                     * Suppose a client dies before it acquires the lock. Because the
                     * node it created is ephemeral, that node is deleted as well, so
                     * the client queued behind it would acquire the lock early and
                     * several clients could access the shared resource at once. To
                     * prevent this, on every delete notification we confirm that
                     * thisPath really is the smallest node in the list.
                     */
                    if (event.getType() == EventType.NodeDeleted
                            && event.getPath().equals(waitPath)) {
                        // confirm that thisPath really is the smallest node in the list
                        List<String> childrenNodes = zk.getChildren("/" + groupNode, false);
                        String thisNode = thisPath.substring(("/" + groupNode + "/").length());
                        Collections.sort(childrenNodes);
                        int index = childrenNodes.indexOf(thisNode);
                        if (index == 0) {
                            // our node is the smallest: the lock is ours
                            doSomething(countDownLatch);
                        } else {
                            // otherwise watch the node now immediately ahead of ours
                            waitPath = "/" + groupNode + "/" + childrenNodes.get(index - 1);
                            zk.exists(waitPath, true);
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });

        // block until the connection is established
        latch.await();

        // create an ephemeral sequential child node under /locks
        thisPath = zk.create("/" + groupNode + "/" + subNode, null,
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        List<String> childrenNodes = zk.getChildren("/" + groupNode, false);
        String thisNode = thisPath.substring(("/" + groupNode + "/").length());
        Collections.sort(childrenNodes);
        int index = childrenNodes.indexOf(thisNode);
        if (index == 0) {
            // our node has the smallest sequence number: we hold the lock
            doSomething(countDownLatch);
        } else {
            // watch the node immediately ahead of ours; when it is deleted,
            // the Watcher registered above is notified
            waitPath = "/" + groupNode + "/" + childrenNodes.get(index - 1);
            zk.exists(waitPath, true);
        }
    }

    /**
     * Access the shared resource, then release the lock.
     */
    private void doSomething(CountDownLatch countDownLatch) throws Exception {
        try {
            System.out.println("Current thread: " + Thread.currentThread().getName()
                    + " acquired the lock: " + thisPath);
            Thread.sleep(1000); // simulate work on the shared resource
        } finally {
            System.out.println("Current thread: " + Thread.currentThread().getName()
                    + " released the lock so other clients can acquire it: " + thisPath);
            // deleting our ephemeral node releases the lock and fires the delete
            // notification for the client watching it
            zk.delete(thisPath, -1);
            countDownLatch.countDown();
        }
    }

    public static void main(String[] args) throws Exception {
        // 20 concurrent clients compete for the same lock
        ExecutorService service = Executors.newFixedThreadPool(20);
        final CountDownLatch countDownLatch = new CountDownLatch(20);
        for (int i = 0; i < 20; i++) {
            service.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        new DistributedClient().connectZookeeper(countDownLatch);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        // wait until every client has held and released the lock
        countDownLatch.await();
        service.shutdown();
    }
}
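A design choice worth noting: each client watches only the node immediately ahead of its own instead of the lock node itself, so releasing the lock wakes exactly one waiter. If all clients watched a single shared node, every release would notify every waiting client at once (the herd effect), which scales poorly as the number of clients grows.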

A possible run produces output like the following:

Current thread: pool-1-thread-16 acquired the lock: /locks/sub0000000053
Current thread: pool-1-thread-16 released the lock so other clients can acquire it: /locks/sub0000000053
Current thread: pool-1-thread-20-EventThread acquired the lock: /locks/sub0000000054
Current thread: pool-1-thread-20-EventThread released the lock so other clients can acquire it: /locks/sub0000000054
Current thread: pool-1-thread-5-EventThread acquired the lock: /locks/sub0000000055
Current thread: pool-1-thread-5-EventThread released the lock so other clients can acquire it: /locks/sub0000000055
Current thread: pool-1-thread-2-EventThread acquired the lock: /locks/sub0000000056
Current thread: pool-1-thread-2-EventThread released the lock so other clients can acquire it: /locks/sub0000000056
Current thread: pool-1-thread-6-EventThread acquired the lock: /locks/sub0000000057
Current thread: pool-1-thread-6-EventThread released the lock so other clients can acquire it: /locks/sub0000000057
Current thread: pool-1-thread-10-EventThread acquired the lock: /locks/sub0000000058
Current thread: pool-1-thread-10-EventThread released the lock so other clients can acquire it: /locks/sub0000000058
Current thread: pool-1-thread-3-EventThread acquired the lock: /locks/sub0000000059
Current thread: pool-1-thread-3-EventThread released the lock so other clients can acquire it: /locks/sub0000000059
Current thread: pool-1-thread-11-EventThread acquired the lock: /locks/sub0000000060
Current thread: pool-1-thread-11-EventThread released the lock so other clients can acquire it: /locks/sub0000000060
Current thread: pool-1-thread-7-EventThread acquired the lock: /locks/sub0000000061
Current thread: pool-1-thread-7-EventThread released the lock so other clients can acquire it: /locks/sub0000000061
Current thread: pool-1-thread-13-EventThread acquired the lock: /locks/sub0000000062
Current thread: pool-1-thread-13-EventThread released the lock so other clients can acquire it: /locks/sub0000000062
Current thread: pool-1-thread-15-EventThread acquired the lock: /locks/sub0000000063
Current thread: pool-1-thread-15-EventThread released the lock so other clients can acquire it: /locks/sub0000000063
Current thread: pool-1-thread-1-EventThread acquired the lock: /locks/sub0000000064
Current thread: pool-1-thread-1-EventThread released the lock so other clients can acquire it: /locks/sub0000000064
Current thread: pool-1-thread-18-EventThread acquired the lock: /locks/sub0000000065
Current thread: pool-1-thread-18-EventThread released the lock so other clients can acquire it: /locks/sub0000000065
Current thread: pool-1-thread-4-EventThread acquired the lock: /locks/sub0000000066
Current thread: pool-1-thread-4-EventThread released the lock so other clients can acquire it: /locks/sub0000000066
Current thread: pool-1-thread-19-EventThread acquired the lock: /locks/sub0000000067
Current thread: pool-1-thread-19-EventThread released the lock so other clients can acquire it: /locks/sub0000000067
Current thread: pool-1-thread-14-EventThread acquired the lock: /locks/sub0000000068
Current thread: pool-1-thread-14-EventThread released the lock so other clients can acquire it: /locks/sub0000000068
Current thread: pool-1-thread-9-EventThread acquired the lock: /locks/sub0000000069
Current thread: pool-1-thread-9-EventThread released the lock so other clients can acquire it: /locks/sub0000000069
Current thread: pool-1-thread-8-EventThread acquired the lock: /locks/sub0000000070
Current thread: pool-1-thread-8-EventThread released the lock so other clients can acquire it: /locks/sub0000000070
Current thread: pool-1-thread-12-EventThread acquired the lock: /locks/sub0000000071
Current thread: pool-1-thread-12-EventThread released the lock so other clients can acquire it: /locks/sub0000000071
Current thread: pool-1-thread-17-EventThread acquired the lock: /locks/sub0000000072
Current thread: pool-1-thread-17-EventThread released the lock so other clients can acquire it: /locks/sub0000000072
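The sequence numbers are acquired strictly in order and no two clients ever hold the lock at the same time. The -EventThread suffixes also show that every client except the first acquired the lock inside its Watcher callback, after being notified that its predecessor's node had been deleted.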

