
An Example Analysis of MyCat Database and Table Sharding


This article walks through a worked example of MyCat database and table sharding: a hash-partitioned table is expanded from 9 shards to 18 with MyCat's data migration tool, then shrunk back to 9, with the data verified at every step.

1. Current sharding configuration

schema.xml

                    
Only the dataHost heartbeat statements (select user(), one per MySQL host) survive from the original listing. The file defines the logical schema mycatdb and the sharded table user04 spread over nine dataNodes (dn1-dn3, dn13-dn15, dn25-dn27), backed by the physical databases testdb01-testdb03, testdb13-testdb15 and testdb25-testdb27 on three MySQL hosts (node1, node2, node3).
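A minimal sketch of what this schema.xml plausibly contains, reconstructed from the dataNode names in the migration log and the physical database names queried below; the dataHost names, host addresses, connection attributes and the rule name are illustrative assumptions:

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="mycatdb" checkSQLschema="false" sqlMaxLimit="100">
        <!-- user04 is spread over 9 dataNodes by the murmur hash rule -->
        <table name="user04" primaryKey="id"
               dataNode="dn1,dn2,dn3,dn13,dn14,dn15,dn25,dn26,dn27"
               rule="sharding-by-murmur-id"/>
    </schema>

    <!-- three physical databases per MySQL host -->
    <dataNode name="dn1"  dataHost="node1" database="testdb01"/>
    <dataNode name="dn2"  dataHost="node1" database="testdb02"/>
    <dataNode name="dn3"  dataHost="node1" database="testdb03"/>
    <dataNode name="dn13" dataHost="node2" database="testdb13"/>
    <dataNode name="dn14" dataHost="node2" database="testdb14"/>
    <dataNode name="dn15" dataHost="node2" database="testdb15"/>
    <dataNode name="dn25" dataHost="node3" database="testdb25"/>
    <dataNode name="dn26" dataHost="node3" database="testdb26"/>
    <dataNode name="dn27" dataHost="node3" database="testdb27"/>

    <!-- one dataHost per MySQL server; "select user()" is the heartbeat that
         survives from the original listing (one per dataHost) -->
    <dataHost name="node1" maxCon="1000" minCon="10" balance="0" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
        <writeHost host="node1M1" url="node1:3306" user="root" password="mysql"/>
    </dataHost>
    <dataHost name="node2" maxCon="1000" minCon="10" balance="0" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
        <writeHost host="node2M1" url="node2:3306" user="root" password="mysql"/>
    </dataHost>
    <dataHost name="node3" maxCon="1000" minCon="10" balance="0" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
        <writeHost host="node3M1" url="node3:3306" user="root" password="mysql"/>
    </dataHost>
</mycat:schema>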

rule.xml

The rule shards on column id using the murmur-id algorithm (PartitionByMurmurHash, per the migration log) with seed 0, a shard count of 9 and virtualBucketTimes 160.
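A sketch of the matching rule.xml entries, assuming MyCat's standard tableRule/function layout; the tableRule name is an assumption, while the column, algorithm name, class and property values follow from the article (the count of 9 matches the 9 dataNodes):

<tableRule name="sharding-by-murmur-id">
    <rule>
        <columns>id</columns>
        <algorithm>murmur-id</algorithm>
    </rule>
</tableRule>

<function name="murmur-id" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property>                 <!-- hash seed -->
    <property name="count">9</property>                <!-- number of shards (dataNodes) -->
    <property name="virtualBucketTimes">160</property> <!-- virtual buckets per physical shard -->
</function>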

user04 is currently hash-partitioned into 9 shards, stored in 9 physical databases.
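For reference, the 50 test rows that appear below (id 1 to 50, name 'steven') can be created through the MyCat server port (8066 by default) roughly as follows; the exact table definition is an assumption based on the two columns visible in the output:

-- connect to MyCat, not to a backend MySQL server, so inserts are routed by the murmur rule
-- mysql -h<mycat-host> -P8066 -uroot -p mycatdb
create table user04 (id int primary key, name varchar(20));
insert into user04 (id, name) values (1, 'steven');
insert into user04 (id, name) values (2, 'steven');
-- ... and so on up to id = 50; the sharding column id must appear in every insert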

2. Inspecting the data

node1

mysql> select * from testdb01.user04;
+----+--------+
| id | name   |
+----+--------+
|  8 | steven |
| 14 | steven |
| 16 | steven |
| 17 | steven |
| 34 | steven |
| 49 | steven |
+----+--------+
6 rows in set (0.00 sec)

mysql> select * from testdb02.user04;
+----+--------+
| id | name   |
+----+--------+
|  9 | steven |
| 10 | steven |
| 44 | steven |
| 45 | steven |
| 46 | steven |
| 48 | steven |
+----+--------+
6 rows in set (0.06 sec)

mysql> select * from testdb03.user04;
+----+--------+
| id | name   |
+----+--------+
| 11 | steven |
| 24 | steven |
| 33 | steven |
| 35 | steven |
| 40 | steven |
+----+--------+
5 rows in set (0.07 sec)

node2

mysql> select * from testdb13.user04;
+----+--------+
| id | name   |
+----+--------+
| 20 | steven |
| 25 | steven |
| 38 | steven |
| 39 | steven |
+----+--------+
4 rows in set (0.07 sec)

mysql> select * from testdb14.user04;
+----+--------+
| id | name   |
+----+--------+
|  1 | steven |
| 41 | steven |
| 50 | steven |
+----+--------+
3 rows in set (0.03 sec)

mysql> select * from testdb15.user04;
+----+--------+
| id | name   |
+----+--------+
| 12 | steven |
| 18 | steven |
| 32 | steven |
| 36 | steven |
+----+--------+
4 rows in set (0.12 sec)

node3

mysql> select * from testdb25.user04;
+----+--------+
| id | name   |
+----+--------+
|  6 | steven |
| 13 | steven |
| 19 | steven |
| 23 | steven |
| 27 | steven |
| 28 | steven |
| 29 | steven |
| 31 | steven |
| 37 | steven |
+----+--------+
9 rows in set (0.05 sec)

mysql> select * from testdb26.user04;
+----+--------+
| id | name   |
+----+--------+
|  4 | steven |
|  5 | steven |
| 15 | steven |
| 22 | steven |
| 42 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb27.user04;
+----+--------+
| id | name   |
+----+--------+
|  2 | steven |
|  3 | steven |
|  7 | steven |
| 21 | steven |
| 26 | steven |
| 30 | steven |
| 43 | steven |
| 47 | steven |
+----+--------+
8 rows in set (0.06 sec)

Next, add 9 more shards and reconfigure accordingly.

3. Configure schema.xml and rule.xml

Copy schema.xml and rule.xml, rename the copies to newSchema.xml and newRule.xml, and place them in the conf directory. Then edit newSchema.xml and newRule.xml so that they describe the post-expansion MyCat configuration (the table's node count, the data sources and the routing rule); the copy step is sketched below.
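A minimal sketch of the copy step, assuming MyCat is installed under /usr/local/mycat as in the command output later in the article:

cd /usr/local/mycat/conf
# leave the running schema.xml / rule.xml untouched; the migrator reads the new* files
cp schema.xml newSchema.xml
cp rule.xml newRule.xml
# now edit newSchema.xml (dataNode list of user04) and newRule.xml (count property)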

newSchema.xml

                    
The only change relative to schema.xml is the table's dataNode list: user04 now spans 18 dataNodes, dn1-dn6, dn13-dn18 and dn25-dn30 (six databases per MySQL host), and matching dataNode definitions (with the same select user() heartbeats) are added for the new testdb04-testdb06, testdb16-testdb18 and testdb28-testdb30 databases.
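A sketch of the parts of newSchema.xml that change, under the same assumptions as the schema.xml sketch above:

<table name="user04" primaryKey="id"
       dataNode="dn1,dn2,dn3,dn4,dn5,dn6,dn13,dn14,dn15,dn16,dn17,dn18,dn25,dn26,dn27,dn28,dn29,dn30"
       rule="sharding-by-murmur-id"/>

<!-- additional dataNodes, three per host -->
<dataNode name="dn4"  dataHost="node1" database="testdb04"/>
<dataNode name="dn5"  dataHost="node1" database="testdb05"/>
<dataNode name="dn6"  dataHost="node1" database="testdb06"/>
<dataNode name="dn16" dataHost="node2" database="testdb16"/>
<dataNode name="dn17" dataHost="node2" database="testdb17"/>
<dataNode name="dn18" dataHost="node2" database="testdb18"/>
<dataNode name="dn28" dataHost="node3" database="testdb28"/>
<dataNode name="dn29" dataHost="node3" database="testdb29"/>
<dataNode name="dn30" dataHost="node3" database="testdb30"/>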

newRule.xml

Identical to rule.xml except that the murmur hash function's count property is raised from 9 to 18; the sharding column id, seed 0 and virtualBucketTimes 160 stay the same.

That is, the number of shards is changed to 18, as in the fragment below.
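The corresponding fragment of newRule.xml, under the same assumptions as the rule.xml sketch above:

<function name="murmur-id" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property>
    <property name="count">18</property>                <!-- 9 -> 18 shards -->
    <property name="virtualBucketTimes">160</property>
</function>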

4. Edit migrateTables.properties

#schema1=tb1,tb2,...
#schema2=all      (use "all", or omit the table list, to re-route every sharded table in this schema whose node list changed)
#...
#sample
#TESTDB=travelrecord,company,goods
mycatdb=user04

5. Re-shard the data

Adjust the dataMigrate.sh script in the bin directory as needed, then run it from the conf directory:

../bin/dataMigrate.sh
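For reference, these are the parameters that dataMigrate.sh passes to io.mycat.util.dataMigrator.DataMigrator; the names are taken from the command line captured below, while the example values and comments are assumptions (in particular, -mysqlBin should point at the directory that holds the local mysql and mysqldump binaries, and empty values fall back to the tool's defaults):

-tempFileDir=/usr/local/mycat/temp   # intermediate dump files (matches the path seen in the error at the end)
-isAwaysUseMaster=true               # always read/write through the writeHost
-mysqlBin=/usr/bin/                  # directory containing the mysql / mysqldump binaries
-cmdLength=110*1024                  # maximum length of a generated command
-charset=utf8
-deleteTempFileDir=true              # remove temp files after a successful run
-threadCount=                        # migration threads (empty = default)
-delThreadCount=                     # cleanup threads (empty = default)
-queryPageSize=                      # rows fetched per page when scanning (empty = default)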

[root@mycat conf]# ../bin/dataMigrate.sh
"/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre/bin/java" -DMYCAT_HOME="/usr/local/mycat" -classpath "/usr/local/mycat/conf:/usr/local/mycat/lib/classes:/usr/local/mycat/lib/asm-4.0.jar:/usr/local/mycat/lib/commons-collections-3.2.1.jar:/usr/local/mycat/lib/commons-lang-2.6.jar:/usr/local/mycat/lib/curator-client-2.11.0.jar:/usr/local/mycat/lib/curator-framework-2.11.0.jar:/usr/local/mycat/lib/curator-recipes-2.11.0.jar:/usr/local/mycat/lib/disruptor-3.3.4.jar:/usr/local/mycat/lib/dom4j-1.6.1.jar:/usr/local/mycat/lib/druid-1.0.26.jar:/usr/local/mycat/lib/ehcache-core-2.6.11.jar:/usr/local/mycat/lib/fastjson-1.2.12.jar:/usr/local/mycat/lib/guava-19.0.jar:/usr/local/mycat/lib/hamcrest-core-1.3.jar:/usr/local/mycat/lib/hamcrest-library-1.3.jar:/usr/local/mycat/lib/jline-0.9.94.jar:/usr/local/mycat/lib/joda-time-2.9.3.jar:/usr/local/mycat/lib/jsr305-2.0.3.jar:/usr/local/mycat/lib/kryo-2.10.jar:/usr/local/mycat/lib/leveldb-0.7.jar:/usr/local/mycat/lib/leveldb-api-0.7.jar:/usr/local/mycat/lib/log4j-1.2.17.jar:/usr/local/mycat/lib/log4j-1.2-api-2.5.jar:/usr/local/mycat/lib/log4j-api-2.5.jar:/usr/local/mycat/lib/log4j-core-2.5.jar:/usr/local/mycat/lib/log4j-slf4j-impl-2.5.jar:/usr/local/mycat/lib/mapdb-1.0.7.jar:/usr/local/mycat/lib/minlog-1.2.jar:/usr/local/mycat/lib/mongo-java-driver-2.11.4.jar:/usr/local/mycat/lib/Mycat-server-1.6.6.1-release.jar:/usr/local/mycat/lib/mysql-binlog-connector-java-0.16.1.jar:/usr/local/mycat/lib/mysql-connector-java-5.1.35.jar:/usr/local/mycat/lib/netty-3.7.0.Final.jar:/usr/local/mycat/lib/netty-buffer-4.1.9.Final.jar:/usr/local/mycat/lib/netty-common-4.1.9.Final.jar:/usr/local/mycat/lib/objenesis-1.2.jar:/usr/local/mycat/lib/reflectasm-1.03.jar:/usr/local/mycat/lib/sequoiadb-driver-1.12.jar:/usr/local/mycat/lib/slf4j-api-1.6.1.jar:/usr/local/mycat/lib/univocity-parsers-2.2.1.jar:/usr/local/mycat/lib/velocity-1.7.jar:/usr/local/mycat/lib/wrapper.jar:/usr/local/mycat/lib/zookeeper-3.4.6.jar" -server -Xms2G -Xmx2G -XX:MaxPermSize=64M -XX:+AggressiveOpts -XX:MaxDirectMemorySize=2G io.mycat.util.dataMigrator.DataMigrator -tempFileDir= -isAwaysUseMaster=true -mysqlBin= -cmdLength=110*1024 -charset=utf8 -deleteTempFileDir=true -threadCount= -delThreadCount= -queryPageSize=
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64M; support was removed in 8.0
2018-11-23 17:25:57:664 [1]-> creating migrator schedule and temp files for migrate...
+---------------------------------------------[mycatdb:user04] migrate info---------------------------------------------+
|tableSize      = 50                                                                                                    |
|migrate before = [dn1, dn2, dn3, dn13, dn14, dn15, dn25, dn26, dn27]                                                   |
|migrate after  = [dn1, dn2, dn3, dn4, dn5, dn6, dn13, dn14, dn15, dn16, dn17, dn18, dn25, dn26, dn27, dn28, dn29, dn30]|
|rule function  = PartitionByMurmurHash                                                                                 |
+------------------------------------------------------------------------------------------------------------------------+
+----------------[mycatdb:user04] migrate schedule----------------+
|dn13[4] -> [0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]|
|dn14[3] -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0]|
|dn15[4] -> [0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]|
|dn1[6]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0, 0, 0, 1]|
|dn25[9] -> [0, 0, 0, 0, 0, 0, 5, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]|
|dn26[5] -> [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0]|
|dn27[8] -> [0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 1, 1, 0, 0, 0, 0]|
|dn2[6]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0]|
|dn3[5]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]|
+-----------------------------------------------------------------+
2018-11-23 17:26:10:896 [2]-> start migrate data...
[mycatdb:user04] dn1->dn26 completed in 740ms
[mycatdb:user04] dn1->dn17 completed in 792ms
[mycatdb:user04] dn1->dn30 completed in 472ms
[mycatdb:user04] dn2->dn17 completed in 474ms
[mycatdb:user04] dn2->dn18 completed in 400ms
[mycatdb:user04] dn2->dn25 completed in 458ms
[mycatdb:user04] dn2->dn28 completed in 388ms
[mycatdb:user04] dn3->dn16 completed in 477ms
[mycatdb:user04] dn3->dn25 completed in 410ms
[mycatdb:user04] dn3->dn29 completed in 423ms
[mycatdb:user04] dn13->dn4 completed in 455ms
[mycatdb:user04] dn13->dn17 completed in 483ms
[mycatdb:user04] dn14->dn27 completed in 496ms
[mycatdb:user04] dn14->dn29 completed in 449ms
[mycatdb:user04] dn15->dn6 completed in 399ms
[mycatdb:user04] dn15->dn17 completed in 395ms
[mycatdb:user04] dn15->dn25 completed in 512ms
[mycatdb:user04] dn25->dn13 completed in 486ms
[mycatdb:user04] dn25->dn16 completed in 473ms
[mycatdb:user04] dn25->dn18 completed in 375ms
[mycatdb:user04] dn25->dn30 completed in 395ms
[mycatdb:user04] dn25->dn27 completed in 482ms
[mycatdb:user04] dn26->dn17 completed in 422ms
[mycatdb:user04] dn26->dn14 completed in 495ms
[mycatdb:user04] dn26->dn25 completed in 440ms
[mycatdb:user04] dn26->dn28 completed in 438ms
[mycatdb:user04] dn26->dn29 completed in 424ms
[mycatdb:user04] dn27->dn15 completed in 438ms
[mycatdb:user04] dn27->dn25 completed in 404ms
[mycatdb:user04] dn27->dn26 completed in 396ms
2018-11-23 17:26:18:106 [3]-> cleaning redundant data...
[mycatdb:user04] clean dataNode dn2 completed in 97ms
[mycatdb:user04] clean dataNode dn3 completed in 130ms
[mycatdb:user04] clean dataNode dn2 completed in 201ms
[mycatdb:user04] clean dataNode dn3 completed in 279ms
[mycatdb:user04] clean dataNode dn1 completed in 243ms
[mycatdb:user04] clean dataNode dn1 completed in 274ms
[mycatdb:user04] clean dataNode dn3 completed in 200ms
[mycatdb:user04] clean dataNode dn1 completed in 187ms
[mycatdb:user04] clean dataNode dn2 completed in 199ms
[mycatdb:user04] clean dataNode dn2 completed in 183ms
[mycatdb:user04] clean dataNode dn15 completed in 120ms
[mycatdb:user04] clean dataNode dn15 completed in 146ms
[mycatdb:user04] clean dataNode dn13 completed in 155ms
[mycatdb:user04] clean dataNode dn13 completed in 223ms
[mycatdb:user04] clean dataNode dn14 completed in 166ms
[mycatdb:user04] clean dataNode dn14 completed in 234ms
[mycatdb:user04] clean dataNode dn15 completed in 221ms
[mycatdb:user04] clean dataNode dn25 completed in 111ms
[mycatdb:user04] clean dataNode dn27 completed in 118ms
[mycatdb:user04] clean dataNode dn25 completed in 152ms
[mycatdb:user04] clean dataNode dn26 completed in 186ms
[mycatdb:user04] clean dataNode dn27 completed in 149ms
[mycatdb:user04] clean dataNode dn26 completed in 182ms
[mycatdb:user04] clean dataNode dn25 completed in 183ms
[mycatdb:user04] clean dataNode dn26 completed in 208ms
[mycatdb:user04] clean dataNode dn25 completed in 164ms
[mycatdb:user04] clean dataNode dn27 completed in 207ms
[mycatdb:user04] clean dataNode dn25 completed in 242ms
[mycatdb:user04] clean dataNode dn26 completed in 179ms
[mycatdb:user04] clean dataNode dn26 completed in 129ms
2018-11-23 17:26:21:423 [4]-> validating tables migrate result...
+------migrate result-------+
|[mycatdb:user04] -> success|
+---------------------------+
2018-11-23 17:26:22:385 migrate data complete in 24736ms

6. Rename newSchema.xml and newRule.xml

After the expansion succeeds, rename newSchema.xml and newRule.xml to schema.xml and rule.xml (replacing the original files) and restart the mycat service; the whole expansion is then complete. The rename and restart are sketched below.
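A minimal sketch of this step, assuming MyCat lives under /usr/local/mycat and is controlled through its bin/mycat wrapper script; keeping backups of the old files is optional:

cd /usr/local/mycat/conf
mv schema.xml schema.xml.bak
mv rule.xml rule.xml.bak
mv newSchema.xml schema.xml
mv newRule.xml rule.xml
/usr/local/mycat/bin/mycat restart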

7. Verify the data

node1

mysql> select * from testdb01.user04;
+----+--------+
| id | name   |
+----+--------+
| 16 | steven |
| 49 | steven |
+----+--------+
2 rows in set (0.01 sec)

mysql> select * from testdb02.user04;
+----+--------+
| id | name   |
+----+--------+
| 46 | steven |
| 48 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb03.user04;
+----+--------+
| id | name   |
+----+--------+
| 24 | steven |
| 40 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb04.user04;
+----+--------+
| id | name   |
+----+--------+
| 20 | steven |
| 25 | steven |
| 39 | steven |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from testdb05.user04;
Empty set (0.01 sec)

mysql> select * from testdb06.user04;
+----+--------+
| id | name   |
+----+--------+
| 32 | steven |
| 36 | steven |
+----+--------+
2 rows in set (0.00 sec)

node2

mysql> select * from testdb13.user04;
+----+--------+
| id | name   |
+----+--------+
|  6 | steven |
| 19 | steven |
| 23 | steven |
| 28 | steven |
| 29 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb14.user04;
+----+--------+
| id | name   |
+----+--------+
|  5 | steven |
+----+--------+
1 row in set (0.00 sec)

mysql> select * from testdb15.user04;
+----+--------+
| id | name   |
+----+--------+
|  7 | steven |
| 26 | steven |
| 30 | steven |
| 43 | steven |
| 47 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb16.user04;
+----+--------+
| id | name   |
+----+--------+
| 33 | steven |
| 37 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb17.user04;
+----+--------+
| id | name   |
+----+--------+
| 10 | steven |
| 12 | steven |
| 14 | steven |
| 15 | steven |
| 34 | steven |
| 38 | steven |
+----+--------+
6 rows in set (0.01 sec)

mysql> select * from testdb18.user04;
+----+--------+
| id | name   |
+----+--------+
| 31 | steven |
| 45 | steven |
+----+--------+
2 rows in set (0.00 sec)

node3

mysql> select * from testdb25.user04;
+----+--------+
| id | name   |
+----+--------+
|  2 | steven |
| 18 | steven |
| 22 | steven |
| 35 | steven |
| 44 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb26.user04;
+----+--------+
| id | name   |
+----+--------+
|  3 | steven |
| 17 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb27.user04;
+----+--------+
| id | name   |
+----+--------+
| 21 | steven |
| 27 | steven |
| 41 | steven |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from testdb28.user04;
+----+--------+
| id | name   |
+----+--------+
|  9 | steven |
| 42 | steven |
+----+--------+
2 rows in set (0.00 sec)

mysql> select * from testdb29.user04;
+----+--------+
| id | name   |
+----+--------+
|  1 | steven |
|  4 | steven |
| 11 | steven |
| 50 | steven |
+----+--------+
4 rows in set (0.00 sec)

mysql> select * from testdb30.user04;
+----+--------+
| id | name   |
+----+--------+
|  8 | steven |
| 13 | steven |
+----+--------+
2 rows in set (0.00 sec)

As the output shows, user04 has gone from the original 9 shards to 18, which completes the verification.

Next, we shrink the table back down to 9 shards.

8. Configure schema.xml and rule.xml

As in step 3, copy schema.xml and rule.xml, rename the copies to newSchema.xml and newRule.xml in the conf directory, and edit them to describe the post-shrink MyCat configuration (the table's node count, the data sources and the routing rule).

newSchema.xml

                    
The dataNode list of user04 is cut back to the original nine dataNodes (dn1-dn3, dn13-dn15, dn25-dn27); otherwise the file matches the schema.xml of section 1.

newRule.xml

The murmur hash function's count property is set back to 9; column id, seed 0 and virtualBucketTimes 160 are unchanged.

9. Re-shard the data

Run the dataMigrate.sh script from the conf directory again, adjusting it in the bin directory first if necessary:

../bin/dataMigrate.sh

[root@mycat conf]# ../bin/dataMigrate.sh
"/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre/bin/java" -DMYCAT_HOME="/usr/local/mycat" -classpath "/usr/local/mycat/conf:/usr/local/mycat/lib/classes:/usr/local/mycat/lib/asm-4.0.jar:/usr/local/mycat/lib/commons-collections-3.2.1.jar:/usr/local/mycat/lib/commons-lang-2.6.jar:/usr/local/mycat/lib/curator-client-2.11.0.jar:/usr/local/mycat/lib/curator-framework-2.11.0.jar:/usr/local/mycat/lib/curator-recipes-2.11.0.jar:/usr/local/mycat/lib/disruptor-3.3.4.jar:/usr/local/mycat/lib/dom4j-1.6.1.jar:/usr/local/mycat/lib/druid-1.0.26.jar:/usr/local/mycat/lib/ehcache-core-2.6.11.jar:/usr/local/mycat/lib/fastjson-1.2.12.jar:/usr/local/mycat/lib/guava-19.0.jar:/usr/local/mycat/lib/hamcrest-core-1.3.jar:/usr/local/mycat/lib/hamcrest-library-1.3.jar:/usr/local/mycat/lib/jline-0.9.94.jar:/usr/local/mycat/lib/joda-time-2.9.3.jar:/usr/local/mycat/lib/jsr305-2.0.3.jar:/usr/local/mycat/lib/kryo-2.10.jar:/usr/local/mycat/lib/leveldb-0.7.jar:/usr/local/mycat/lib/leveldb-api-0.7.jar:/usr/local/mycat/lib/log4j-1.2.17.jar:/usr/local/mycat/lib/log4j-1.2-api-2.5.jar:/usr/local/mycat/lib/log4j-api-2.5.jar:/usr/local/mycat/lib/log4j-core-2.5.jar:/usr/local/mycat/lib/log4j-slf4j-impl-2.5.jar:/usr/local/mycat/lib/mapdb-1.0.7.jar:/usr/local/mycat/lib/minlog-1.2.jar:/usr/local/mycat/lib/mongo-java-driver-2.11.4.jar:/usr/local/mycat/lib/Mycat-server-1.6.6.1-release.jar:/usr/local/mycat/lib/mysql-binlog-connector-java-0.16.1.jar:/usr/local/mycat/lib/mysql-connector-java-5.1.35.jar:/usr/local/mycat/lib/netty-3.7.0.Final.jar:/usr/local/mycat/lib/netty-buffer-4.1.9.Final.jar:/usr/local/mycat/lib/netty-common-4.1.9.Final.jar:/usr/local/mycat/lib/objenesis-1.2.jar:/usr/local/mycat/lib/reflectasm-1.03.jar:/usr/local/mycat/lib/sequoiadb-driver-1.12.jar:/usr/local/mycat/lib/slf4j-api-1.6.1.jar:/usr/local/mycat/lib/univocity-parsers-2.2.1.jar:/usr/local/mycat/lib/velocity-1.7.jar:/usr/local/mycat/lib/wrapper.jar:/usr/local/mycat/lib/zookeeper-3.4.6.jar" -server -Xms2G -Xmx2G -XX:MaxPermSize=64M -XX:+AggressiveOpts -XX:MaxDirectMemorySize=2G io.mycat.util.dataMigrator.DataMigrator -tempFileDir= -isAwaysUseMaster=true -mysqlBin= -cmdLength=110*1024 -charset=utf8 -deleteTempFileDir=true -threadCount= -delThreadCount= -queryPageSize=
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64M; support was removed in 8.0
2018-11-23 17:47:02:047 [1]-> creating migrator schedule and temp files for migrate...
+---------------------------------------------[mycatdb:user04] migrate info---------------------------------------------+
|tableSize      = 50                                                                                                    |
|migrate before = [dn1, dn2, dn3, dn4, dn5, dn6, dn13, dn14, dn15, dn16, dn17, dn18, dn25, dn26, dn27, dn28, dn29, dn30]|
|migrate after  = [dn1, dn2, dn3, dn13, dn14, dn15, dn25, dn26, dn27]                                                   |
|rule function  = PartitionByMurmurHash                                                                                 |
+------------------------------------------------------------------------------------------------------------------------+
+--[mycatdb:user04] migrate schedule---+
|dn13[5] -> [0, 0, 0, 0, 0, 0, 5, 0, 0]|
|dn14[1] -> [0, 0, 0, 0, 0, 0, 0, 1, 0]|
|dn15[5] -> [0, 0, 0, 0, 0, 0, 0, 0, 5]|
|dn16[2] -> [0, 0, 1, 0, 0, 0, 1, 0, 0]|
|dn17[6] -> [2, 1, 0, 1, 0, 1, 0, 1, 0]|
|dn18[2] -> [0, 1, 0, 0, 0, 0, 1, 0, 0]|
|dn1[2]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn25[5] -> [0, 1, 1, 0, 0, 1, 0, 1, 1]|
|dn26[2] -> [1, 0, 0, 0, 0, 0, 0, 0, 1]|
|dn27[3] -> [0, 0, 0, 0, 1, 0, 1, 0, 0]|
|dn28[2] -> [0, 1, 0, 0, 0, 0, 0, 1, 0]|
|dn29[4] -> [0, 0, 1, 0, 2, 0, 0, 1, 0]|
|dn2[2]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn30[2] -> [1, 0, 0, 0, 0, 0, 1, 0, 0]|
|dn3[2]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn4[3]  -> [0, 0, 0, 3, 0, 0, 0, 0, 0]|
|dn5[0]  -> [0, 0, 0, 0, 0, 0, 0, 0, 0]|
|dn6[2]  -> [0, 0, 0, 0, 0, 2, 0, 0, 0]|
+--------------------------------------+
2018-11-23 17:47:06:683 [2]-> start migrate data...
[mycatdb:user04] dn6->dn15 completed in 763ms
[mycatdb:user04] dn4->dn13 completed in 835ms
[mycatdb:user04] dn13->dn25 completed in 473ms
[mycatdb:user04] dn14->dn26 completed in 468ms
[mycatdb:user04] dn15->dn27 completed in 414ms
[mycatdb:user04] dn16->dn3 completed in 406ms
[mycatdb:user04] dn16->dn25 completed in 439ms
[mycatdb:user04] dn17->dn1 completed in 474ms
[mycatdb:user04] dn17->dn2 completed in 426ms
[mycatdb:user04] dn17->dn13 completed in 396ms
[mycatdb:user04] dn17->dn15 completed in 408ms
[mycatdb:user04] dn17->dn26 completed in 444ms
[mycatdb:user04] dn18->dn2 completed in 420ms
[mycatdb:user04] dn18->dn25 completed in 508ms
[mycatdb:user04] dn25->dn2 completed in 439ms
[mycatdb:user04] dn25->dn3 completed in 459ms
[mycatdb:user04] dn25->dn15 completed in 422ms
[mycatdb:user04] dn25->dn26 completed in 442ms
[mycatdb:user04] dn25->dn27 completed in 448ms
[mycatdb:user04] dn26->dn1 completed in 412ms
[mycatdb:user04] dn26->dn27 completed in 434ms
[mycatdb:user04] dn27->dn14 completed in 436ms
[mycatdb:user04] dn27->dn25 completed in 442ms
[mycatdb:user04] dn28->dn2 completed in 453ms
[mycatdb:user04] dn28->dn26 completed in 397ms
[mycatdb:user04] dn29->dn3 completed in 381ms
[mycatdb:user04] dn29->dn14 completed in 405ms
[mycatdb:user04] dn29->dn26 completed in 440ms
[mycatdb:user04] dn30->dn1 completed in 437ms
[mycatdb:user04] dn30->dn25 completed in 358ms
2018-11-23 17:47:13:659 [3]-> cleaning redundant data...
[mycatdb:user04] clean dataNode dn6 completed in 267ms
[mycatdb:user04] clean dataNode dn4 completed in 285ms
[mycatdb:user04] clean dataNode dn17 completed in 154ms
[mycatdb:user04] clean dataNode dn17 completed in 228ms
[mycatdb:user04] clean dataNode dn15 completed in 141ms
[mycatdb:user04] clean dataNode dn16 completed in 248ms
[mycatdb:user04] clean dataNode dn18 completed in 241ms
[mycatdb:user04] clean dataNode dn14 completed in 192ms
[mycatdb:user04] clean dataNode dn16 completed in 316ms
[mycatdb:user04] clean dataNode dn17 completed in 254ms
[mycatdb:user04] clean dataNode dn17 completed in 325ms
[mycatdb:user04] clean dataNode dn17 completed in 222ms
[mycatdb:user04] clean dataNode dn13 completed in 170ms
[mycatdb:user04] clean dataNode dn18 completed in 198ms
[mycatdb:user04] clean dataNode dn25 completed in 101ms
[mycatdb:user04] clean dataNode dn29 completed in 195ms
[mycatdb:user04] clean dataNode dn30 completed in 240ms
[mycatdb:user04] clean dataNode dn29 completed in 279ms
[mycatdb:user04] clean dataNode dn27 completed in 172ms
[mycatdb:user04] clean dataNode dn25 completed in 176ms
[mycatdb:user04] clean dataNode dn28 completed in 258ms
[mycatdb:user04] clean dataNode dn25 completed in 162ms
[mycatdb:user04] clean dataNode dn25 completed in 202ms
[mycatdb:user04] clean dataNode dn28 completed in 313ms
[mycatdb:user04] clean dataNode dn26 completed in 185ms
[mycatdb:user04] clean dataNode dn29 completed in 243ms
[mycatdb:user04] clean dataNode dn30 completed in 258ms
[mycatdb:user04] clean dataNode dn27 completed in 161ms
[mycatdb:user04] clean dataNode dn25 completed in 168ms
[mycatdb:user04] clean dataNode dn26 completed in 160ms
2018-11-23 17:47:17:586 [4]-> validating tables migrate result...
+------migrate result-------+
|[mycatdb:user04] -> success|
+---------------------------+
2018-11-23 17:47:18:102 migrate data complete in 16057ms

10. Rename newSchema.xml and newRule.xml

After the shrink succeeds, rename newSchema.xml and newRule.xml to schema.xml and rule.xml (replacing the original files) and restart the mycat service, as in step 6; the whole shrink is then complete.

11. Verify the data

node1

mysql> select * from testdb01.user04;
+----+--------+
| id | name   |
+----+--------+
|  8 | steven |
| 14 | steven |
| 16 | steven |
| 17 | steven |
| 34 | steven |
| 49 | steven |
+----+--------+
6 rows in set (0.00 sec)

mysql> select * from testdb02.user04;
+----+--------+
| id | name   |
+----+--------+
|  9 | steven |
| 10 | steven |
| 44 | steven |
| 45 | steven |
| 46 | steven |
| 48 | steven |
+----+--------+
6 rows in set (0.00 sec)

mysql> select * from testdb03.user04;
+----+--------+
| id | name   |
+----+--------+
| 11 | steven |
| 24 | steven |
| 33 | steven |
| 35 | steven |
| 40 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb04.user04;
Empty set (0.00 sec)

mysql> select * from testdb05.user04;
Empty set (0.01 sec)

mysql> select * from testdb06.user04;
Empty set (0.00 sec)

node2

mysql> select * from testdb13.user04;
+----+--------+
| id | name   |
+----+--------+
| 20 | steven |
| 25 | steven |
| 38 | steven |
| 39 | steven |
+----+--------+
4 rows in set (0.00 sec)

mysql> select * from testdb14.user04;
+----+--------+
| id | name   |
+----+--------+
|  1 | steven |
| 41 | steven |
| 50 | steven |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from testdb15.user04;
+----+--------+
| id | name   |
+----+--------+
| 12 | steven |
| 18 | steven |
| 32 | steven |
| 36 | steven |
+----+--------+
4 rows in set (0.00 sec)

mysql> select * from testdb16.user04;
Empty set (0.00 sec)

mysql> select * from testdb17.user04;
Empty set (0.00 sec)

mysql> select * from testdb18.user04;
Empty set (0.00 sec)

node3

mysql> select * from testdb25.user04;
+----+--------+
| id | name   |
+----+--------+
|  6 | steven |
| 13 | steven |
| 19 | steven |
| 23 | steven |
| 27 | steven |
| 28 | steven |
| 29 | steven |
| 31 | steven |
| 37 | steven |
+----+--------+
9 rows in set (0.01 sec)

mysql> select * from testdb26.user04;
+----+--------+
| id | name   |
+----+--------+
|  4 | steven |
|  5 | steven |
| 15 | steven |
| 22 | steven |
| 42 | steven |
+----+--------+
5 rows in set (0.01 sec)

mysql> select * from testdb27.user04;
+----+--------+
| id | name   |
+----+--------+
|  2 | steven |
|  3 | steven |
|  7 | steven |
| 21 | steven |
| 26 | steven |
| 30 | steven |
| 43 | steven |
| 47 | steven |
+----+--------+
8 rows in set (0.01 sec)

mysql> select * from testdb28.user04;
Empty set (0.00 sec)

mysql> select * from testdb29.user04;
Empty set (0.00 sec)

mysql> select * from testdb30.user04;
Empty set (0.00 sec)

After shrinking, all of user04's data sits in 9 shards again and the remaining shards hold no rows, which completes the verification.

Problems encountered:

Because this test environment had previously been used for a one-master/two-slave replication test with gtid_mode enabled, the following error appeared during migration:

ERROR 1840 (HY000) at line 3 in file: '/usr/local/mycat/temp/mycatdb-user04/dn1(old)-dn17(new)/user04.sql': @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.
 -> mysql -h292.168.8.32 -P3306 -uroot -pmysql -Dtestdb17 -f --default-character-set=utf8 -e "source /usr/local/mycat/temp/mycatdb-user04/dn1(old)-dn17(new)/user04.sql"

Solution: disable gtid_mode on the backend MySQL servers; a sketch of this is shown below.
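A minimal sketch of disabling GTID, assuming MySQL 5.7 backends and that a brief restart (or an online step-down) is acceptable; adapt it to your own replication setup:

# permanent: in my.cnf on each backend server, then restart mysqld
gtid_mode = OFF
enforce_gtid_consistency = OFF

-- or step it down online (MySQL 5.7.6+), one level at a time:
SET GLOBAL gtid_mode = ON_PERMISSIVE;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
-- wait for all replicas to apply every GTID transaction, then:
SET GLOBAL gtid_mode = OFF;
SET GLOBAL enforce_gtid_consistency = OFF;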

Alternatively, the expansion can be done with downtime: back up all of the logical table's data, rebuild the sharding with the new configuration, and then re-import the backup, which achieves the same result. A rough outline of that approach follows.
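A rough outline of the stop-the-world approach, assuming writes are stopped and the dump and reload both go through the MyCat server port (8066 by default); hosts, credentials and file names are illustrative, and whether mysqldump can be pointed directly at MyCat depends on the version, so dumping each backend shard and concatenating the files works just as well:

# 1. dump the logical table through MyCat (data only, complete inserts, no table locks)
mysqldump -h127.0.0.1 -P8066 -uroot -p -t -c --skip-lock-tables mycatdb user04 > user04_backup.sql
# 2. switch schema.xml / rule.xml to the new shard layout and restart MyCat
# 3. empty the table and reload through MyCat so every row is re-routed by the new rule
mysql -h127.0.0.1 -P8066 -uroot -p -Dmycatdb -e "truncate table user04"
mysql -h127.0.0.1 -P8066 -uroot -p --default-character-set=utf8 -Dmycatdb < user04_backup.sql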

That concludes this example of MyCat database and table sharding. Thanks for reading, and I hope the walkthrough helps.
