
Cassandra Cluster Management: Decommissioning a Healthy Node

Published 2024-11-12, by the 千家信息网 editorial team

Test prerequisites:

The test Cassandra cluster uses vnodes. How can you tell whether vnodes are in use? Check the cassandra.yaml configuration file: by default (3.x) initial_token is left empty and the token list is generated automatically, which means virtual nodes are enabled. With vnodes enabled, the cluster rebalances data by itself after a node is removed, so no manual intervention is required.
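As a quick way to apply the check above, the two token settings can be grepped out of the YAML. The snippet below writes a tiny sample file so the command is self-contained; in practice, point CONF at your real cassandra.yaml (the path varies by install, e.g. /etc/cassandra/cassandra.yaml):

```shell
# Illustrative sample of the two token settings in a 3.x cassandra.yaml;
# point CONF at the real file in practice.
CONF=./cassandra.yaml.sample
cat > "$CONF" <<'EOF'
num_tokens: 256
# initial_token:
EOF

# vnodes are in use when num_tokens is set and initial_token is empty/commented out
grep -E '^(num_tokens|# ?initial_token)' "$CONF"
```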

Generating test data

Create a KeySpace named kevin_test

Create a keyspace named kevin_test using the SimpleStrategy replication strategy with 3 replicas across the cluster, and enable durable writes (the commit log).

cassandra@cqlsh> CREATE KEYSPACE kevin_test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3} AND durable_writes = true;
Create the table
CREATE TABLE t_users (
  user_id text PRIMARY KEY,
  first_name text,
  last_name text,
  emails set<text>
);
Batch-insert test data
BEGIN BATCH
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('0', 'kevin0', 'kang', {'k0@pt.com', 'k0-0@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('1', 'kevin1', 'kang', {'k1@pt.com', 'k1-1@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('2', 'kevin2', 'kang', {'k2@pt.com', 'k2-2@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('3', 'kevin3', 'kang', {'k3@pt.com', 'k3-3@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('4', 'kevin4', 'kang', {'k4@pt.com', 'k4-4@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('5', 'kevin5', 'kang', {'k5@pt.com', 'k5-5@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('6', 'kevin6', 'kang', {'k6@pt.com', 'k6-6@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('7', 'kevin7', 'kang', {'k7@pt.com', 'k7-7@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('8', 'kevin8', 'kang', {'k8@pt.com', 'k8-8@gmail.com'});
INSERT INTO t_users (user_id, first_name, last_name, emails) VALUES ('9', 'kevin9', 'kang', {'k9@pt.com', 'k9-9@gmail.com'});
APPLY BATCH;
Verify:
cassandra@cqlsh:kevin_test> SELECT * FROM t_users;

 user_id | emails                          | first_name | last_name
---------+---------------------------------+------------+-----------
       6 | {'k6-6@gmail.com', 'k6@pt.com'} |     kevin6 |      kang
       7 | {'k7-7@gmail.com', 'k7@pt.com'} |     kevin7 |      kang
       9 | {'k9-9@gmail.com', 'k9@pt.com'} |     kevin9 |      kang
       4 | {'k4-4@gmail.com', 'k4@pt.com'} |     kevin4 |      kang
       3 | {'k3-3@gmail.com', 'k3@pt.com'} |     kevin3 |      kang
       5 | {'k5-5@gmail.com', 'k5@pt.com'} |     kevin5 |      kang
       0 | {'k0-0@gmail.com', 'k0@pt.com'} |     kevin0 |      kang
       8 | {'k8-8@gmail.com', 'k8@pt.com'} |     kevin8 |      kang
       2 | {'k2-2@gmail.com', 'k2@pt.com'} |     kevin2 |      kang
       1 | {'k1-1@gmail.com', 'k1@pt.com'} |     kevin1 |      kang
Check the t_users table statistics:
[root@kubm-03 ~]# nodetool cfstats kevin_test.t_users
Total number of tables: 41
----------------
Keyspace : kevin_test
        Read Count: 0
        Read Latency: NaN ms
        Write Count: 6
        Write Latency: 0.116 ms
        Pending Flushes: 0
                Table: t_users
                Number of partitions (estimate): 5
                Memtable cell count: 6
                Memtable data size: 828

The table statistics above make it possible to confirm, later in the test, whether any data was lost.
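Since those statistics will be compared again after the node is removed, it can help to script the row count as well. A minimal sketch, assuming cqlsh is on the PATH with default authentication; count_users is a hypothetical helper name:

```shell
# count_users: row count of the test table, for before/after comparison.
# CQLSH is overridable so the query can be sent to any live coordinator.
CQLSH=${CQLSH:-cqlsh}
count_users() {
  $CQLSH -e "SELECT count(*) FROM kevin_test.t_users;"
}

# usage: count_users   # the count should stay at 10 before and after decommission
```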

Cluster node information

[root@kubm-03 ~]# nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens       Owns    Host ID                               Rack
UN  172.20.101.164  56.64 MiB  256          ?       dcbbad83-fe7c-4580-ade7-aa763b8d2c40  rack1
UN  172.20.101.165  55.44 MiB  256          ?       cefe8a3b-918f-463b-8c7d-faab0b9351f9  rack1
UN  172.20.101.166  73.96 MiB  256          ?       88e16e35-50dd-4ee3-aa1a-f10a8c61a3eb  rack1
UN  172.20.101.167  55.43 MiB  256          ?       8808aaf7-690c-4f0c-be9b-ce655c1464d4  rack1
UN  172.20.101.160  54.4 MiB   256          ?       57cc39fc-e47b-4c96-b9b0-b004f2b79242  rack1
UN  172.20.101.157  56.05 MiB  256          ?       091ff0dc-415b-48a7-b4ce-e70c84bbfafc  rack1

Decommissioning a healthy cluster node

The node is running normally; the goal is simply to shrink the cluster. This test decommissions 172.20.101.165.

On the machine to be removed (172.20.101.165), run:

nodetool decommission

(nodetool removenode <Host ID> also removes a node, but it is run from a surviving node and is intended for nodes that are already down, not for a live node like this one.)

You can monitor the cluster with nodetool status; once the remaining nodes finish receiving the streamed data, the decommissioned node disappears from the cluster list.
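To script that wait instead of polling nodetool status by hand, a small helper can test whether the leaving node has disappeared from the ring. This is a sketch; node_gone is a hypothetical helper, and NODETOOL is a variable (assumed to default to the nodetool on PATH) so it can be pointed at a wrapper:

```shell
# node_gone <ip>: succeeds once <ip> no longer appears in `nodetool status`.
NODETOOL=${NODETOOL:-nodetool}
node_gone() {
  ! $NODETOOL status | grep -q "$1"
}

# usage (on any surviving node):
#   until node_gone 172.20.101.165; do sleep 10; done
```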

Check the service status:

[root@kubm-03 ~]# /etc/init.d/cassandra status

● cassandra.service - LSB: distributed storage system for structured data
   Loaded: loaded (/etc/rc.d/init.d/cassandra; bad; vendor preset: disabled)
   Active: active (running) since Tue 2019-07-09 11:29:25 CST; 2 days ago
Jul 09 11:29:25 kubm-03 cassandra[8495]: Starting Cassandra: OK
Jul 09 11:29:25 kubm-03 systemd[1]: Started LSB: distributed storage system for structured data.

Test whether the node rejoins the cluster automatically after a service restart:

/etc/init.d/cassandra restart

INFO  [main] 2019-07-11 16:44:49,765 StorageService.java:639 - CQL supported versions: 3.4.4 (default: 3.4.4)
INFO  [main] 2019-07-11 16:44:49,765 StorageService.java:641 - Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4)
INFO  [main] 2019-07-11 16:44:49,816 IndexSummaryManager.java:80 - Initializing index summary manager with a memory pool size of 198 MB and a resize interval of 60 minutes
ERROR [main] 2019-07-11 16:44:49,823 CassandraDaemon.java:749 - Fatal configuration error
# the node has already been decommissioned
org.apache.cassandra.exceptions.ConfigurationException: This node was decommissioned and will not rejoin the ring unless cassandra.override_decommission=true has been set, or all existing data is removed and the node is bootstrapped again
Fatal configuration error; unable to start server.  See log for stacktrace.

The log shows the node has been decommissioned and cannot rejoin the cluster. To bring it back, either set cassandra.override_decommission=true in the cluster configuration, or delete all data on the node and restart the service so it bootstraps as a brand-new node.
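The two recovery paths named in that error can be sketched as shell commands. These assume a typical package install (jvm.options under /etc/cassandra, data directories under /var/lib/cassandra); verify your own layout before running anything destructive:

```shell
# Path 1: let the decommissioned node rejoin with its old data by setting the
# JVM flag the error message names, then restarting (assumed jvm.options path).
echo '-Dcassandra.override_decommission=true' >> /etc/cassandra/jvm.options
/etc/init.d/cassandra restart

# Path 2: wipe all local state so the node bootstraps as a brand-new member
# (assumed default data_file_directories; destructive, double-check paths!).
/etc/init.d/cassandra stop
rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches
/etc/init.d/cassandra start
```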
