How to Learn the Raft Distributed Consensus Algorithm
Published: 2025-01-23  Author: Qianjia Information Network editor
This article explains "how to learn the Raft distributed consensus algorithm". The material is presented simply and clearly and is easy to follow; let's work through it step by step.
The Raft distributed consensus algorithm
Distributed storage systems usually tolerate faults, and so improve availability, by maintaining multiple replicas. That raises the core question of any distributed storage system: how do we keep those replicas consistent? Raft decomposes the problem into four subproblems: 1. leader election, 2. log replication, 3. safety, and 4. membership changes.
Source code (gitee): https://gitee.com/ioly/learning.gooop
Original post: https://my.oschina.net/ioly/blog/5011356
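As a rough illustration of the first subproblem, a Raft node cycles between three roles. The sketch below models the legal transitions as a tiny state machine; all names here are hypothetical and are not taken from the repository:

```go
package main

import "fmt"

// Role models the three states a Raft node moves between.
type Role int

const (
	Follower Role = iota
	Candidate
	Leader
)

// nextRole sketches the legal transitions: a follower whose leader times out
// becomes a candidate; a candidate that wins a majority becomes leader; any
// node that observes a higher term falls back to follower.
func nextRole(r Role, leaderTimedOut, wonMajority, sawHigherTerm bool) Role {
	switch {
	case sawHigherTerm:
		return Follower
	case r == Follower && leaderTimedOut:
		return Candidate
	case r == Candidate && wonMajority:
		return Leader
	}
	return r
}

func main() {
	r := Follower
	r = nextRole(r, true, false, false) // leader heartbeat timed out
	r = nextRole(r, false, true, false) // won the election
	fmt.Println(r == Leader)            // true
}
```

This is exactly the follower → candidate → leader sequence that shows up in the test log below as `whenLeaderHeartbeatTimeoutThenSwitchToCandidateState` and `whenWinningTheVoteThenSwitchToLeader`.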
Goal
Implement a highly available, strongly consistent distributed KV store based on the raft protocol.
Subgoals (Day 12)
We can finally "light the fire"; it has not been easy getting to this point.
Add extensive diagnostic logging
Fix several detail-level issues
Write unit test code:
Start multiple raft nodes
Check that leader election succeeds
Write some data to node 1
Write some data to node 2
Read the data back from node 3
Kill the current leader node and check that reelection succeeds
Unit test
tRaftKVServer_test.go starts four raft nodes locally and runs a functional test:
package server

import (
	"testing"
	"time"

	"learning/gooop/etcd/raft/debug"
	"learning/gooop/etcd/raft/logger"
	"learning/gooop/etcd/raft/rpc"

	nrpc "net/rpc"
)

func Test_RaftKVServer(t *testing.T) {
	fnAssertTrue := func(b bool, msg string) {
		if !b {
			t.Fatal(msg)
		}
	}

	logger.Exclude("RaftRPCServer.Ping")
	logger.Exclude("RaftRPCServer.Heartbeat")
	logger.Exclude("feLeaderHeartbeat")
	logger.Exclude(").Heartbeat")

	// start nodes 1 to 4
	_ = new(tRaftKVServer).BeginServeTCP("./node-01")
	_ = new(tRaftKVServer).BeginServeTCP("./node-02")
	_ = new(tRaftKVServer).BeginServeTCP("./node-03")
	_ = new(tRaftKVServer).BeginServeTCP("./node-04")

	// wait for startup
	time.Sleep(1 * time.Second)

	// tRaftLSMImplement(node-01,1).HandleStateChanged, state=2
	fnAssertTrue(logger.Count("HandleStateChanged, state=3") == 1, "expecting leader node")
	t.Logf("passing electing, leader=%v", debug.LeaderNodeID)

	// put into node-1
	c1, _ := nrpc.Dial("tcp", "localhost:3331")
	defer c1.Close()
	kcmd := new(rpc.KVCmd)
	kcmd.OPCode = rpc.KVPut
	kcmd.Key = []byte("key-01")
	kcmd.Content = []byte("content 01")
	kret := new(rpc.KVRet)
	err := c1.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
	fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
	t.Log("passing put into node-01")

	// put into node-2
	c2, _ := nrpc.Dial("tcp", "localhost:3332")
	defer c2.Close()
	kcmd.Key = []byte("key-02")
	kcmd.Content = []byte("content 02")
	err = c2.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
	fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
	t.Log("passing put into node-02")

	// get from node-3
	c3, _ := nrpc.Dial("tcp", "localhost:3333")
	defer c3.Close()
	kcmd.OPCode = rpc.KVGet
	kcmd.Key = []byte("key-02")
	kcmd.Content = nil
	kret.Content = nil
	kret.Key = nil
	err = c3.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
	fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
	fnAssertTrue(kret.Content != nil && string(kret.Content) == "content 02", "expecting content 02")
	t.Log("passing get from node-04")

	// kill leader node
	debug.KilledNodeID = debug.LeaderNodeID
	time.Sleep(2 * time.Second)
	fnAssertTrue(logger.Count("HandleStateChanged, state=3") == 2, "expecting reelecting leader node")
	t.Logf("passing reelecting, leader=%v", debug.LeaderNodeID)
	time.Sleep(2 * time.Second)
}
Test output
Five "passing" messages can be observed, so the test is OK. The reelection latency is also within the expected range, at about 700ms.
API server listening at: [::]:46709
=== RUN   Test_RaftKVServer
16:51:09.329792609 tRaftKVServer.BeginServeTCP, starting node-01, port=3331
16:51:09.329864584 tBrokenState(from=node-01, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.329888978 tBrokenState(from=node-01, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.329903778 tBrokenState(from=node-01, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.329912231 tBrokenState(from=node-01, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.329920585 tFollowerState(node-01).init
16:51:09.329926372 tFollowerState(node-01).initEventHandlers
16:51:09.329941794 tFollowerState(node-01).Start
16:51:09.330218761 tRaftKVServer.BeginServeTCP, service ready at port=3331
16:51:09.330549519 tFollowerState(node-01).whenStartThenBeginWatchLeaderTimeout, begin
(analogous startup logs for node-02, node-03 and node-04 on ports 3332 to 3334 omitted)
16:51:09.481747744 tBrokenState(from=node-01, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.481771692 tBrokenState(from=node-01, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.481791046 tBrokenState(from=node-01, to=node-04@localhost:3334).beDisposing
(analogous whenDialOKThenSetConn / whenDialOKThenSwitchToConnectedState / beDisposing logs for all other node pairs omitted)
16:51:10.238765817 tFollowerState(node-01).whenLeaderHeartbeatTimeoutThenSwitchToCandidateState, term=0
16:51:10.238808459 tFollowerState(node-01).feDisposing, disposed=true
16:51:10.238885964 tRaftLSMImplement(node-01,1).HandleStateChanged, state=2
16:51:10.238892892 tRaftLSMImplement(node-01,1).meStateChanged, 2
16:51:10.238897706 tCandidateState(node-01).whenStartThenAskForVote
16:51:10.238902038 tCandidateState(node-01).ceAskingForVote, term=1
16:51:10.238907133 tCandidateState(node-01).ceAskingForVote, vote to myself
16:51:10.2389139 tCandidateState(node-01).ceAskingForVote, ticketCount=1
16:51:10.238920737 tCandidateState(node-01).whenAskingForVoteThenWatchElectionTimeout
16:51:10.239208777 tFollowerState(node-04).feCandidateRequestVote, reset last vote
16:51:10.239233375 tFollowerState(node-04).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239261011 tFollowerState(node-02).feCandidateRequestVote, reset last vote
16:51:10.239273156 tFollowerState(node-02).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239288823 tRaftLSMImplement(node-04,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=
16:51:10.239343533 tRaftLSMImplement(node-02,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=
16:51:10.239390716 tFollowerState(node-03).feCandidateRequestVote, reset last vote
16:51:10.239431327 tFollowerState(node-03).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239442927 tCandidateState(node-01).handleRequestVoteOK, peer=node-04, term=1
16:51:10.239455262 tCandidateState(node-01).ceReceiveTicket, mTicketCount=2
16:51:10.239463079 tCandidateState(node-01).whenReceiveTicketThenCheckTicketCount
16:51:10.239473836 tRaftLSMImplement(node-03,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=
16:51:10.239578689 tCandidateState(node-01).handleRequestVoteOK, peer=node-03, term=1
16:51:10.239593183 tCandidateState(node-01).ceReceiveTicket, mTicketCount=3
16:51:10.239601334 tCandidateState(node-01).whenReceiveTicketThenCheckTicketCount
16:51:10.239629478 tCandidateState(node-01).whenWinningTheVoteThenSwitchToLeader
16:51:10.239639823 tCandidateState(node-01).ceDisposing, mTicketCount=0
16:51:10.239696198 tCandidateState(node-01).ceDisposing, mDisposedFlag=true
16:51:10.239752502 tRaftLSMImplement(node-01,2).HandleStateChanged, state=3
16:51:10.239764172 tRaftLSMImplement(node-01,2).meStateChanged, 3
    tRaftKVServer_test.go:34: passing electing, leader=node-01
16:51:10.366875446 tRaftLSMImplement(node-02,1).AppendLog, cmd=&{node-01 1 0xc0004961c0}, ret=&{0 1 0 0}, err=
16:51:10.375163435 tRaftLSMImplement(node-02,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=
(analogous AppendLog / CommitLog round trips for node-03 and node-04 omitted)
16:51:10.379551174 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 49] [99 111 110 116 101 110 116 32 48 49]}, ret=&{0 [] []}, err=
    tRaftKVServer_test.go:46: passing put into node-01
(analogous AppendLog / CommitLog / ExecuteKVCmd logs for key-02 omitted)
    tRaftKVServer_test.go:55: passing put into node-02
16:51:10.400256236 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, err=
16:51:10.400639059 tRaftLSMImplement(node-03,1).ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, err=
    tRaftKVServer_test.go:68: passing get from node-04
16:51:10.431051964 tRaftKVServer.whenStartThenWatchDebugKill, killing node-01
2021/04/07 16:51:10 rpc.Serve: accept:accept tcp [::]:3331: use of closed network connection
16:51:11.19072568 tFollowerState(node-02).whenLeaderHeartbeatTimeoutThenSwitchToCandidateState, term=1
16:51:11.190755031 tFollowerState(node-02).feDisposing, disposed=true
16:51:11.190856259 tRaftLSMImplement(node-02,1).HandleStateChanged, state=2
16:51:11.190898966 tCandidateState(node-02).whenStartThenAskForVote
16:51:11.190908485 tCandidateState(node-02).ceAskingForVote, term=2
16:51:11.1909172 tCandidateState(node-02).ceAskingForVote, vote to myself
16:51:11.19093098 tCandidateState(node-02).ceAskingForVote, ticketCount=1
(analogous term=2 RequestVote / feVoteToCandidate logs omitted; the killed node-01 rejects with ret=&{0 0}, while node-03 and node-04 grant their votes)
16:51:11.192806525 tCandidateState(node-02).whenWinningTheVoteThenSwitchToLeader
16:51:11.192853274 tRaftLSMImplement(node-02,2).HandleStateChanged, state=3
16:51:11.192863098 tRaftLSMImplement(node-02,2).meStateChanged, 3
16:51:11.193007386 tFollowerState(node-01).init
16:51:11.193037127 tRaftLSMImplement(node-01,3).HandleStateChanged, state=1
16:51:11.193053504 tFollowerState(node-01).Start
16:51:11.19313721 tFollowerState(node-01).whenStartThenBeginWatchLeaderTimeout, begin
    tRaftKVServer_test.go:74: passing reelecting, leader=node-02
--- PASS: Test_RaftKVServer (5.09s)
PASS
Debugger finished with exit code 0
debug.go
Context variables that support the unit test:
package debug

// KilledNodeID is used to detect whether a node should stop working; written by the unit test code.
var KilledNodeID = ""

// LeaderNodeID holds the current leader node's ID; written by lsm/tLeaderState.
var LeaderNodeID = ""
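The server side presumably polls these variables, which is how `whenStartThenWatchDebugKill` in the test log shuts down node-01. A minimal sketch of such a watcher; the polling interval, the helper names, and the mutex guarding the variable are all assumptions, not the repository's actual code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// killedNodeID mirrors debug.KilledNodeID; guarded by a mutex because the
// unit test goroutine writes it while each server's watcher goroutine reads it.
var (
	mu           sync.Mutex
	killedNodeID string
)

// killNode marks a node as killed, as the unit test does with the leader.
func killNode(id string) {
	mu.Lock()
	killedNodeID = id
	mu.Unlock()
}

// isKilled reports whether the given node has been marked as killed.
func isKilled(id string) bool {
	mu.Lock()
	defer mu.Unlock()
	return killedNodeID == id
}

// watchDebugKill polls the kill switch and invokes stop() once this node is
// marked as killed; it returns after the node has been stopped.
func watchDebugKill(nodeID string, interval time.Duration, stop func()) {
	for !isKilled(nodeID) {
		time.Sleep(interval)
	}
	stop()
}

func main() {
	done := make(chan struct{})
	go watchDebugKill("node-01", time.Millisecond, func() { close(done) })
	killNode("node-01") // the unit test kills the current leader
	<-done
	fmt.Println("node-01 stopped")
}
```

Using package-level variables as a test-only kill switch keeps the production code path untouched while letting the test simulate a leader crash without real process management.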