Sharding in MongoDB
Published 2024-11-22 · Qianjia Information Network editor
1. Environment

OS information:

IP | OS | MongoDB |
10.163.97.15 | RHEL6.5_x64 | mongodb-linux-x86_64-rhel62-3.4.7.tgz |
10.163.97.16 | RHEL6.5_x64 | mongodb-linux-x86_64-rhel62-3.4.7.tgz |
10.163.97.17 | RHEL6.5_x64 | mongodb-linux-x86_64-rhel62-3.4.7.tgz |

Server layout:

10.163.97.15 | 10.163.97.16 | 10.163.97.17 | Port |
mongos | mongos | mongos | 20000 |
config server | config server | config server | 21000 |
shard server1 primary | shard server1 secondary | shard server1 arbiter | 27001 |
shard server2 arbiter | shard server2 primary | shard server2 secondary | 27002 |
shard server3 secondary | shard server3 arbiter | shard server3 primary | 27003 |
The table above shows four components: mongos, config server, shard, and replica set.

mongos: the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application needs no routing layer of its own; mongos is the request dispatcher and forwards each operation to the shard that owns the data. Production deployments usually run multiple mongos instances so that losing one does not make the whole cluster unreachable.

config server: as the name suggests, the configuration server stores all cluster metadata (routing and sharding configuration). mongos does not persist the shard and routing information itself; it only caches it in memory, while the config servers hold the authoritative copy. A mongos loads this configuration from the config servers on first start (or after a restart), and later metadata changes are pushed to every mongos so routing stays accurate. Production deployments run multiple config servers (the count must be 1 or 3), because losing this sharding metadata would be catastrophic.

shard: sharding is the process of splitting a database and spreading it across machines, so that more data can be stored and more load handled without one powerful server. The basic idea is to cut each collection into small chunks, distribute the chunks over several shards so each shard holds only part of the total data, and let a balancer even out the chunks across shards (by migrating data).

replica set: effectively a backup of a shard, protecting against data loss when a shard member goes down. Replication provides redundancy by keeping copies of the data on multiple servers, improving both availability and data safety.

Arbiter: a MongoDB instance in a replica set that holds no data. An arbiter needs minimal resources; it should not run on the same node as a data-bearing member, and can be placed on an application server, a monitoring host, or a separate VM. Its purpose is to keep the number of voting members (including the primary) odd; without it, the set may be unable to elect a new primary automatically when the primary fails.
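The voting requirement can be illustrated with a little arithmetic: electing a primary needs a strict majority of voting members, so an even-sized set that loses one node can deadlock. A minimal sketch (plain Python, purely illustrative, not MongoDB code):

```python
def has_majority(votes_present: int, voting_members: int) -> bool:
    """A primary can only be elected when a strict majority of voting members is reachable."""
    return votes_present > voting_members // 2

# Two data-bearing members, no arbiter: one node down leaves 1 of 2 votes -> no election.
assert not has_majority(1, 2)

# Add an arbiter (3 voting members): one data node down still leaves 2 of 3 -> failover works.
assert has_majority(2, 3)
```

This is why each shard below runs one primary, one secondary, and one arbiter: three voters tolerate the loss of any single node.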
To sum up: the application sends its CRUD operations to mongos; the config servers store the cluster metadata and keep the mongos instances in sync; the data itself ends up on the shards, with a copy maintained inside each shard's replica set to guard against data loss; and the arbiter stores no data at all, only casting a vote when a shard's replica set needs to elect a new primary.
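The flow just described can be modelled as a toy router: mongos keeps a cached chunk map (shard-key range → shard) loaded from the config servers and forwards each request to the owning shard. All names and ranges below are illustrative (plain Python, not the real routing code):

```python
# Toy chunk map: each entry is (inclusive lower bound, exclusive upper bound, shard).
# In a real cluster this metadata lives on the config servers and is cached by mongos.
chunk_map = [
    (float("-inf"), 100, "shard1"),
    (100, 200, "shard2"),
    (200, float("inf"), "shard3"),
]

def route(shard_key_value):
    """Return the shard owning the chunk that covers the given shard-key value."""
    for low, high, shard in chunk_map:
        if low <= shard_key_value < high:
            return shard
    raise ValueError("no chunk covers this key")

print(route(42))      # shard1
print(route(150))     # shard2
print(route(99999))   # shard3
```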
In total the plan is 3 mongos, 3 config servers, and 3 shard servers for the 3 data shards; each shard additionally has one secondary and one arbiter, i.e. 3 × 2 = 6 more instances, for 15 instances overall. These can run on dedicated machines or share machines; since test resources here are limited to 3 machines, instances on the same machine simply use different ports.
2. Installing MongoDB
Install MongoDB on all three machines:
[root@D2-POMS15 ~]# tar -xvzf mongodb-linux-x86_64-rhel62-3.4.7.tgz -C /usr/local/
[root@D2-POMS15 ~]# mv /usr/local/mongodb-linux-x86_64-rhel62-3.4.7/ /usr/local/mongodb
Configure the environment variables:
[root@D2-POMS15 ~]# vim .bash_profile
export PATH=$PATH:/usr/local/mongodb/bin/
[root@D2-POMS15 ~]# source .bash_profile
On each machine create six directories: conf, mongos, config, shard1, shard2, shard3. Since mongos stores no data, it only needs a log directory.
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/conf
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/mongos/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/log
3. Config servers
Since MongoDB 3.4 the config servers must themselves form a replica set, otherwise the cluster cannot be assembled.
Add the configuration file on all three servers:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/config.conf
## configuration file contents
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 0.0.0.0
port = 21000
fork = true
#declare this is a config db of a cluster
configsvr = true
#replica set name
replSet=configs
#maximum number of connections
maxConns=20000
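MongoDB also accepts YAML configuration files (the format current documentation prefers). As a sketch, the same config-server settings as above would look roughly like this:

```yaml
systemLog:
  destination: file
  path: /usr/local/mongodb/config/log/configsrv.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /usr/local/mongodb/config/log/configsrv.pid
net:
  bindIp: 0.0.0.0
  port: 21000
  maxIncomingConnections: 20000
storage:
  dbPath: /usr/local/mongodb/config/data
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr
```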
Start the config server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15368
child process started successfully, parent exiting
Log in to any one of the config servers and initialise the config replica set:
[root@D2-POMS15 ~]# mongo --port 21000
> config = {
... _id : "configs",
... members : [
... {_id : 0, host : "10.163.97.15:21000" },
... {_id : 1, host : "10.163.97.16:21000" },
... {_id : 2, host : "10.163.97.17:21000" }
... ]
... }
{
"_id" : "configs",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:21000"
},
{
"_id" : 1,
"host" : "10.163.97.16:21000"
},
{
"_id" : 2,
"host" : "10.163.97.17:21000"
}
]
}
> rs.initiate(config)
{ "ok" : 1 }
Here "_id" : "configs" must match the replSet value in the configuration file, and each "host" in "members" is the ip:port of one of the three nodes.
4. Configuring the shard replica sets (all three machines)
Set up the first shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard1.conf
#configuration file contents
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 0.0.0.0
port = 27001
fork = true
#enable the HTTP status interface and REST interface
httpinterface=true
rest=true
#replica set name
replSet=shard1
#declare this is a shard db of a cluster;
shardsvr = true
#maximum number of connections
maxConns=20000
Start the shard1 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15497
child process started successfully, parent exiting
Log in to one of the servers (not the arbiter node) and initialise the replica set:
[root@D2-POMS15 ~]# mongo --port 27001
#switch to the admin database
> use admin
switched to db admin
#define the replica set configuration; "arbiterOnly" : true on the third node marks it as the arbiter
> config = {
... _id : "shard1",
... members : [
... {_id : 0, host : "10.163.97.15:27001" },
... {_id : 1, host : "10.163.97.16:27001" },
... {_id : 2, host : "10.163.97.17:27001" , arbiterOnly: true }
... ]
... }
{
"_id" : "shard1",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:27001"
},
{
"_id" : 1,
"host" : "10.163.97.16:27001"
},
{
"_id" : 2,
"host" : "10.163.97.17:27001",
"arbiterOnly" : true
}
]
}
#initialise the replica set
> rs.initiate(config);
{ "ok" : 1 }
Set up the second shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard2.conf
#configuration file contents
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 0.0.0.0
port = 27002
fork = true
#enable the HTTP status interface and REST interface
httpinterface=true
rest=true
#replica set name
replSet=shard2
#declare this is a shard db of a cluster;
shardsvr = true
#maximum number of connections
maxConns=20000
Start the shard2 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15622
child process started successfully, parent exiting
Log in to one of the servers (not the arbiter node) and initialise the replica set:
[root@D2-POMS15 ~]# mongo --port 27002
> use admin
switched to db admin
> config = {
... _id : "shard2",
... members : [
... {_id : 0, host : "10.163.97.15:27002" , arbiterOnly: true },
... {_id : 1, host : "10.163.97.16:27002" },
... {_id : 2, host : "10.163.97.17:27002" }
... ]
... }
{
"_id" : "shard2",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:27002",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "10.163.97.16:27002"
},
{
"_id" : 2,
"host" : "10.163.97.17:27002"
}
]
}
> rs.initiate(config);
{ "ok" : 1 }
Set up the third shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard3.conf
#configuration file contents
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 0.0.0.0
port = 27003
fork = true
#enable the HTTP status interface and REST interface
httpinterface=true
rest=true
#replica set name
replSet=shard3
#declare this is a shard db of a cluster;
shardsvr = true
#maximum number of connections
maxConns=20000
Start the shard3 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15742
child process started successfully, parent exiting
Log in to one of the servers (not the arbiter node) and initialise the replica set:
[root@D2-POMS15 ~]# mongo --port 27003
> use admin
switched to db admin
> config = {
... _id : "shard3",
... members : [
... {_id : 0, host : "10.163.97.15:27003" },
... {_id : 1, host : "10.163.97.16:27003" , arbiterOnly: true},
... {_id : 2, host : "10.163.97.17:27003" }
... ]
... }
{
"_id" : "shard3",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:27003"
},
{
"_id" : 1,
"host" : "10.163.97.16:27003",
"arbiterOnly" : true
},
{
"_id" : 2,
"host" : "10.163.97.17:27003"
}
]
}
> rs.initiate(config);
{ "ok" : 1 }
Both the config servers and all the shard servers are now running:
[root@D2-POMS15 ~]# ps -ef | grep mongo | grep -v grep
root 15368 1 0 15:52 ? 00:00:07 mongod -f /usr/local/mongodb/conf/config.conf
root 15497 1 0 16:00 ? 00:00:04 mongod -f /usr/local/mongodb/conf/shard1.conf
root 15622 1 0 16:06 ? 00:00:02 mongod -f /usr/local/mongodb/conf/shard2.conf
root 15742 1 0 16:21 ? 00:00:00 mongod -f /usr/local/mongodb/conf/shard3.conf
5. Configuring the mongos routers
Add the configuration file on all three servers:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/mongos.conf
#configuration file contents
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0
port = 20000
fork = true
#config servers to connect to (must be 1 or 3); configs is the config server replica set name
configdb = configs/10.163.97.15:21000,10.163.97.16:21000,10.163.97.17:21000
#maximum number of connections
maxConns=20000
Start the mongos server on all three machines:
[root@D2-POMS15 ~]# mongos -f /usr/local/mongodb/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 20563
child process started successfully, parent exiting
[root@D2-POMS15 ~]# mongo --port 20000
mongos> db.stats()
{
"raw" : {
"shard1/10.163.97.15:27001,10.163.97.16:27001" : {
"db" : "admin",
"collections" : 1,
"views" : 0,
"objects" : 3,
"avgObjSize" : 146.66666666666666,
"dataSize" : 440,
"storageSize" : 36864,
"numExtents" : 0,
"indexes" : 2,
"indexSize" : 65536,
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
}
},
"shard2/10.163.97.16:27002,10.163.97.17:27002" : {
"db" : "admin",
"collections" : 1,
"views" : 0,
"objects" : 2,
"avgObjSize" : 114,
"dataSize" : 228,
"storageSize" : 16384,
"numExtents" : 0,
"indexes" : 2,
"indexSize" : 32768,
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
}
},
"shard3/10.163.97.15:27003,10.163.97.17:27003" : {
"db" : "admin",
"collections" : 1,
"views" : 0,
"objects" : 2,
"avgObjSize" : 114,
"dataSize" : 228,
"storageSize" : 16384,
"numExtents" : 0,
"indexes" : 2,
"indexSize" : 32768,
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000002")
}
}
},
"objects" : 7,
"avgObjSize" : 127.71428571428571,
"dataSize" : 896,
"storageSize" : 69632,
"numExtents" : 0,
"indexes" : 6,
"indexSize" : 131072,
"fileSize" : 0,
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"ok" : 1
}
6. Enabling sharding
The config servers, routers, and shard servers are all up, but an application connecting to a mongos cannot use sharding yet: the sharding configuration still has to be set on the router before it takes effect.
Log in to any mongos:
[root@D2-POMS15 ~]# mongo --port 20000
#switch to the admin database
mongos> use admin
switched to db admin
#register the shard replica sets with the router
mongos> sh.addShard("shard1/10.163.97.15:27001,10.163.97.16:27001,10.163.97.17:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/10.163.97.15:27002,10.163.97.16:27002,10.163.97.17:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/10.163.97.15:27003,10.163.97.16:27003,10.163.97.17:27003")
{ "shardAdded" : "shard3", "ok" : 1 }
#check the cluster status
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
7. Testing
The configuration, routing, shard, and replica set services are now wired together, but the goal is for inserted data to be sharded automatically. Connect to a mongos and enable sharding for the target database and collection.
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use admin
switched to db admin
#enable sharding for the testdb database
mongos> db.runCommand({enablesharding :"testdb"});
{ "ok" : 1 }
#specify the collection to shard and its shard key
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
4 : Success
databases:
{ "_id" : "testdb", "primary" : "shard1", "partitioned" : true }
testdb.table1
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
This configures testdb's table1 collection for sharding, distributing documents across shard1, shard2, and shard3 by id. The explicit opt-in is required because not every MongoDB database or collection needs to be sharded.
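Note that a monotonically increasing shard key such as id has a well-known downside: every new document has a key greater than all existing ones, so inserts always target the chunk containing $maxKey, which lives on exactly one shard at any moment. A toy illustration of the single-chunk state shown in the sh.status() output above (plain Python, illustrative only):

```python
# Right after sh.shardCollection there is a single chunk covering (-inf, +inf) on shard1,
# exactly as the sh.status() output above shows.
chunks = [(float("-inf"), float("inf"), "shard1")]

def owning_shard(key):
    return next(shard for low, high, shard in chunks if low <= key < high)

# With an ever-increasing id, every insert targets the chunk containing $maxKey --
# always a single shard at any given moment, even after later splits and migrations.
assert {owning_shard(i) for i in range(1, 100001)} == {"shard1"}
```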
Testing the sharding:
#connect to a mongos server
[root@D2-POMS15 ~]# mongo --port 20000
#switch to testdb
mongos> use testdb
switched to db testdb
#insert test data
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
#check the sharding state
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
6 : Success
databases:
{ "_id" : "testdb", "primary" : "shard1", "partitioned" : true }
testdb.table1
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
shard2 1
shard3 1
{ "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(2, 0)
{ "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(3, 0)
{ "id" : 20 } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 1)
The distribution here is very uneven because the default chunkSize is 64 MB and this data set is nowhere near that size. To make testing easier, the chunk size can be lowered:
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> db.settings.find();
{ "_id" : "balancer", "stopped" : false, "mode" : "full" }
{ "_id" : "chunksize", "value" : 1 }
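A back-of-the-envelope check explains both results: 100,000 documents at an average of 54 bytes each (the avgObjSize later reported by db.table1.stats()) is only about 5 MB, so with 64 MB chunks nothing ever splits, while 1 MB chunks should yield on the order of ten chunks:

```python
doc_count = 100_000
avg_obj_bytes = 54                      # avgObjSize from db.table1.stats()
data_mb = doc_count * avg_obj_bytes / (1024 * 1024)

assert 5 < data_mb < 6                  # roughly 5.15 MB of document data

# Default 64 MB chunk size: the whole collection fits in one chunk -> nothing to balance.
assert data_mb / 64 < 1

# 1 MB chunk size: on the order of ten chunks -> enough for the balancer to spread them.
assert 5 <= data_mb / 1 <= 15
```

This matches the 11 chunks observed after the chunk size is set to 1 MB.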
Test again after the change:
mongos> use testdb
switched to db testdb
mongos> db.table1.drop();
true
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
14 : Success
databases:
{ "_id" : "testdb", "primary" : "shard1", "partitioned" : true }
testdb.table1
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 4
shard2 4
shard3 3
{ "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(5, 1)
{ "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(6, 1)
{ "id" : 20 } -->> { "id" : 9729 } on : shard1 Timestamp(7, 1)
{ "id" : 9729 } -->> { "id" : 21643 } on : shard1 Timestamp(3, 3)
{ "id" : 21643 } -->> { "id" : 31352 } on : shard2 Timestamp(4, 2)
{ "id" : 31352 } -->> { "id" : 43021 } on : shard2 Timestamp(4, 3)
{ "id" : 43021 } -->> { "id" : 52730 } on : shard3 Timestamp(5, 2)
{ "id" : 52730 } -->> { "id" : 64695 } on : shard3 Timestamp(5, 3)
{ "id" : 64695 } -->> { "id" : 74404 } on : shard1 Timestamp(6, 2)
{ "id" : 74404 } -->> { "id" : 87088 } on : shard1 Timestamp(6, 3)
{ "id" : 87088 } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(7, 0)
mongos> db.table1.stats()
{
"sharded" : true,
"capped" : false,
"ns" : "testdb.table1",
"count" : 100000,
"size" : 5400000,
"storageSize" : 1736704,
"totalIndexSize" : 2191360,
"indexSizes" : {
"_id_" : 946176,
"id_1" : 1245184
},
"avgObjSize" : 54,
"nindexes" : 2,
"nchunks" : 11,
"shards" : {
"shard1" : {
"ns" : "testdb.table1",
"size" : 2376864,
"count" : 44016,
"avgObjSize" : 54,
"storageSize" : 753664,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 933888,
"indexSizes" : {
"_id_" : 405504,
"id_1" : 528384
},
"ok" : 1
},
"shard2" : {
"ns" : "testdb.table1",
"size" : 1851768,
"count" : 34292,
"avgObjSize" : 54,
"storageSize" : 606208,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 774144,
"indexSizes" : {
"_id_" : 335872,
"id_1" : 438272
},
"ok" : 1
},
"shard3" : {
"ns" : "testdb.table1",
"size" : 1171368,
"count" : 21692,
"avgObjSize" : 54,
"storageSize" : 376832,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 483328,
"indexSizes" : {
"_id_" : 204800,
"id_1" : 278528
},
"ok" : 1
}
},
"ok" : 1
}
The data is now distributed far more evenly.
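The per-shard document counts reported by db.table1.stats() can be cross-checked directly: they add up to the 100,000 inserted documents, and no shard holds more than about 45% of them.

```python
# "count" values per shard, copied from the db.table1.stats() output above
counts = {"shard1": 44016, "shard2": 34292, "shard3": 21692}

total = sum(counts.values())
assert total == 100_000

# Each shard holds between roughly 20% and 45% of the documents.
for shard, n in counts.items():
    assert 0.20 <= n / total <= 0.45
```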
8. Day-to-day operations
Startup and shutdown
Start the components in order: config servers first, then the shards, and finally mongos.
mongod -f /usr/local/mongodb/conf/config.conf
mongod -f /usr/local/mongodb/conf/shard1.conf
mongod -f /usr/local/mongodb/conf/shard2.conf
mongod -f /usr/local/mongodb/conf/shard3.conf
mongos -f /usr/local/mongodb/conf/mongos.conf
To shut down, simply kill all the processes with killall; the default SIGTERM lets mongod and mongos exit cleanly:
killall mongod
killall mongos
References:
https://docs.mongodb.com/manual/sharding/
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.cnblogs.com/ityouknow/p/7344005.html
从上表可以看到有四个组件:mongos、config server、shard、replica set。
mongos:数据库集群请求的入口,所有的请求都通过mongos进行协调,不需要在应用程序添加一个路由选择器,mongos自己就是一个请求分发中心,它负责把对应的数据请求转发到对应的shard服务器上。在生产环境通常有多mongos作为请求的入口,防止其中一个挂掉所有的mongodb请求都没有办法操作。
config server:顾名思义为配置服务器,存储所有数据库元信息(路由、分片)的配置。mongos本身没有物理存储分片服务器和数据路由信息,只是缓存在内存里,配置服务器则实际存储这些数据。mongos第一次启动或者关掉重启就会从 config server 加载配置信息,以后如果配置服务器信息变化会通知到所有的 mongos 更新自己的状态,这样 mongos 就能继续准确路由。在生产环境通常有多个 config server 配置服务器(必须配置为1个或者3个),因为它存储了分片路由的元数据,防止数据丢失!
shard:分片(sharding)是指将数据库拆分,将其分散在不同的机器上的过程。将数据分散到不同的机器上,不需要功能强大的服务器就可以存储更多的数据和处理更大的负载。基本思想就是将集合切成小块,这些块分散到若干片里,每个片只负责总数据的一部分,最后通过一个均衡器来对各个分片进行均衡(数据迁移)。
replica set:中文翻译副本集,其实就是shard的备份,防止shard挂掉之后数据丢失。复制提供了数据的冗余备份,并在多个服务器上存储数据副本,提高了数据的可用性, 并可以保证数据的安全性。
仲裁者(Arbiter):是复制集中的一个MongoDB实例,它并不保存数据。仲裁节点使用最小的资源,不能将Arbiter部署在同一个数据集节点中,可以部署在其他应用服务器或者监视服务器中,也可部署在单独的虚拟机中。为了确保复制集中有奇数的投票成员(包括primary),需要添加仲裁节点做为投票,否则primary不能运行时不会自动切换primary。
简单了解之后,可以这样总结一下,应用请求mongos来操作mongodb的增删改查,配置服务器存储数据库元信息,并且和mongos做同步,数据最终存入在shard(分片)上,为了防止数据丢失同步在副本集中存储了一份,仲裁在数据存储到分片的时候决定存储到哪个节点。
总共规划了mongos 3个, config server 3个,数据分3片 shard server 3个,每个shard 有一个副本一个仲裁也就是 3 * 2 = 6 个,总共需要部署15个实例。这些实例可以部署在独立机器也可以部署在一台机器,我们这里测试资源有限,只准备了 3台机器,在同一台机器只要端口不同就可以了。
2、安装mongodb
分别在3台机器上面安装mongodb
[root@D2-POMS15 ~]# tar -xvzf mongodb-linux-x86_64-rhel62-3.4.7.tgz -C /usr/local/
[root@D2-POMS15 ~]# mv /usr/local/mongodb-linux-x86_64-rhel62-3.4.7/ /usr/local/mongodb
配置环境变量
[root@D2-POMS15 ~]# vim .bash_profile
export PATH=$PATH:/usr/local/mongodb/bin/
[root@D2-POMS15 ~]# source .bash_profile
分别在每台机器建立conf、mongos、config、shard1、shard2、shard3六个目录,因为mongos不存储数据,只需要建立日志文件目录即可。
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/conf
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/mongos/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/log
3、config server配置服务器
mongodb3.4以后要求配置服务器也创建副本集,不然集群搭建不成功。
在三台服务器上面添加配置文件:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/config.conf
## 配置文件内容
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/congigsrv.log
logappend = true
bind_ip = 0.0.0.0
port = 21000
fork = true
#declare this is a config db of a cluster;
configsvr = true
#副本集名称
replSet=configs
#设置最大连接数
maxConns=20000
分别启动三台服务器的config server
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15368
child process started successfully, parent exiting
登录任意一台配置服务器,初始化配置副本集
[root@D2-POMS15 ~]# mongo --port 21000
> config = {
... _id : "configs",
... members : [
... {_id : 0, host : "10.163.97.15:21000" },
... {_id : 1, host : "10.163.97.16:21000" },
... {_id : 2, host : "10.163.97.17:21000" }
... ]
... }
{
"_id" : "configs",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:21000"
},
{
"_id" : 1,
"host" : "10.163.97.16:21000"
},
{
"_id" : 2,
"host" : "10.163.97.17:21000"
}
]
}
> rs.initiate(config)
{ "ok" : 1 }
其中,"_id" : "configs"应与配置文件中配置的replSet一致,"members" 中的 "host" 为三个节点的 ip 和 port。
4、配置分片副本集(三台机器)
设置第一个分片副本集
添加配置文件:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard1.conf
#配置文件内容
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 0.0.0.0
port = 27001
fork = true
#打开web监控
httpinterface=true
rest=true
#副本集名称
replSet=shard1
#declare this is a shard db of a cluster;
shardsvr = true
#设置最大连接数
maxConns=20000
启动三台服务器的shard1 server
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15497
child process started successfully, parent exiting
登陆一台服务器(不要在仲裁节点),初始化副本集
[root@D2-POMS15 ~]# mongo --port 27001
#使用admin数据库
> use admin
switched to db admin
#定义副本集配置,第三个节点的 "arbiterOnly":true 代表其为仲裁节点。
> config = {
... _id : "shard1",
... members : [
... {_id : 0, host : "10.163.97.15:27001" },
... {_id : 1, host : "10.163.97.16:27001" },
... {_id : 2, host : "10.163.97.17:27001" , arbiterOnly: true }
... ]
... }
{
"_id" : "shard1",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:27001"
},
{
"_id" : 1,
"host" : "10.163.97.16:27001"
},
{
"_id" : 2,
"host" : "10.163.97.17:27001",
"arbiterOnly" : true
}
]
}
#初始化副本集配置
> rs.initiate(config);
{ "ok" : 1 }
设置第二个分片副本集
添加配置文件:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard2.conf
#配置文件内容
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 0.0.0.0
port = 27002
fork = true
#打开web监控
httpinterface=true
rest=true
#副本集名称
replSet=shard2
#declare this is a shard db of a cluster;
shardsvr = true
#设置最大连接数
maxConns=20000
启动三台服务器的shard2 server:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15622
child process started successfully, parent exiting
登陆一台服务器(不要在仲裁节点),初始化副本集
[root@D2-POMS15 ~]# mongo --port 27002
> use admin
switched to db admin
> config = {
... _id : "shard2",
... members : [
... {_id : 0, host : "10.163.97.15:27002" , arbiterOnly: true },
... {_id : 1, host : "10.163.97.16:27002" },
... {_id : 2, host : "10.163.97.17:27002" }
... ]
... }
{
"_id" : "shard2",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:27002",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "10.163.97.16:27002"
},
{
"_id" : 2,
"host" : "10.163.97.17:27002"
}
]
}
> rs.initiate(config);
{ "ok" : 1 }
设置第三个分片副本集
添加配置文件:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard3.conf
#配置文件内容
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 0.0.0.0
port = 27003
fork = true
#打开web监控
httpinterface=true
rest=true
#副本集名称
replSet=shard3
#declare this is a shard db of a cluster;
shardsvr = true
#设置最大连接数
maxConns=20000
启动三台服务器的shard3 server
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15742
child process started successfully, parent exiting
登陆一台服务器(不要在仲裁节点),初始化副本集
> use admin
switched to db admin
> config = {
... _id : "shard3",
... members : [
... {_id : 0, host : "10.163.97.15:27003" },
... {_id : 1, host : "10.163.97.16:27003" , arbiterOnly: true},
... {_id : 2, host : "10.163.97.17:27003" }
... ]
... }
{
"_id" : "shard3",
"members" : [
{
"_id" : 0,
"host" : "10.163.97.15:27003"
},
{
"_id" : 1,
"host" : "10.163.97.16:27003",
"arbiterOnly" : true
},
{
"_id" : 2,
"host" : "10.163.97.17:27003"
}
]
}
> rs.initiate(config);
{ "ok" : 1 }
可以看到目前已经启动了配置服务器和分片服务器。
[root@D2-POMS15 ~]# ps -ef | grep mongo | grep -v grep
root 15368 1 0 15:52 ? 00:00:07 mongod -f /usr/local/mongodb/conf/config.conf
root 15497 1 0 16:00 ? 00:00:04 mongod -f /usr/local/mongodb/conf/shard1.conf
root 15622 1 0 16:06 ? 00:00:02 mongod -f /usr/local/mongodb/conf/shard2.conf
root 15742 1 0 16:21 ? 00:00:00 mongod -f /usr/local/mongodb/conf/shard3.conf
5、配置路由服务器 mongos
在三台服务器上面添加配置文件:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/mongos.conf
#内容
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0
port = 20000
fork = true
#监听的配置服务器,只能有1个或者3个 configs为配置服务器的副本集名字
configdb = configs/10.163.97.15:21000,10.163.97.16:21000,10.163.97.17:21000
#设置最大连接数
maxConns=20000
启动三台服务器的mongos server
[root@D2-POMS15 ~]# mongos -f /usr/local/mongodb/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 20563
child process started successfully, parent exiting
[root@D2-POMS15 ~]# mongo --port 20000
mongos> db.stats()
{
"raw" : {
"shard1/10.163.97.15:27001,10.163.97.16:27001" : {
"db" : "admin",
"collections" : 1,
"views" : 0,
"objects" : 3,
"avgObjSize" : 146.66666666666666,
"dataSize" : 440,
"storageSize" : 36864,
"numExtents" : 0,
"indexes" : 2,
"indexSize" : 65536,
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
}
},
"shard2/10.163.97.16:27002,10.163.97.17:27002" : {
"db" : "admin",
"collections" : 1,
"views" : 0,
"objects" : 2,
"avgObjSize" : 114,
"dataSize" : 228,
"storageSize" : 16384,
"numExtents" : 0,
"indexes" : 2,
"indexSize" : 32768,
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
}
},
"shard3/10.163.97.15:27003,10.163.97.17:27003" : {
"db" : "admin",
"collections" : 1,
"views" : 0,
"objects" : 2,
"avgObjSize" : 114,
"dataSize" : 228,
"storageSize" : 16384,
"numExtents" : 0,
"indexes" : 2,
"indexSize" : 32768,
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000002")
}
}
},
"objects" : 7,
"avgObjSize" : 127.71428571428571,
"dataSize" : 896,
"storageSize" : 69632,
"numExtents" : 0,
"indexes" : 6,
"indexSize" : 131072,
"fileSize" : 0,
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"ok" : 1
}
6、启用分片
目前搭建了mongodb配置服务器、路由服务器,各个分片服务器,不过应用程序连接到mongos路由服务器并不能使用分片机制,还需要在路由服务器里设置分片配置,让分片生效。
登陆任意一台mongos
[root@D2-POMS15 ~]# mongo --port 20000
#使用admin数据库
mongos> use admin
switched to db admin
#串联路由服务器与分配副本集
mongos> sh.addShard("shard1/10.163.97.15:27001,10.163.97.16:27001,10.163.97.17:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/10.163.97.15:27002,10.163.97.16:27002,10.163.97.17:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/10.163.97.15:27003,10.163.97.16:27003,10.163.97.17:27003")
{ "shardAdded" : "shard3", "ok" : 1 }
#查看集群状态
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
7、测试
目前配置服务、路由服务、分片服务、副本集服务都已经串联起来了,但我们的目的是希望插入数据,数据能够自动分片。连接在mongos上,准备让指定的数据库、指定的集合分片生效。
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use admin
switched to db admin
#指定testdb分片生效
mongos> db.runCommand({enablesharding :"testdb"});
{ "ok" : 1 }
#指定数据库里需要分片的集合和片键
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
4 : Success
databases:
{ "_id" : "testdb", "primary" : "shard1", "partitioned" : true }
testdb.table1
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
以上就是设置testdb的 table1 表需要分片,根据 id 自动分片到 shard1 ,shard2,shard3 上面去。要这样设置是因为不是所有mongodb 的数据库和表 都需要分片!
测试分片:
#连接mongos服务器
[root@D2-POMS15 ~]# mongo --port 20000
#使用testdb
mongos> use testdb
switched to db testdb
#插入测试数据
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
#查看分片情况
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
6 : Success
databases:
{ "_id" : "testdb", "primary" : "shard1", "partitioned" : true }
testdb.table1
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
shard2 1
shard3 1
{ "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(2, 0)
{ "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(3, 0)
{ "id" : 20 } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 1)
The distribution above is clearly unbalanced. The reason is that the default chunk size is 64 MB and the test data is far smaller than that, so nearly all of it stays in a single chunk. To make testing easier, reduce the chunk size to 1 MB:
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> db.settings.find();
{ "_id" : "balancer", "stopped" : false, "mode" : "full" }
{ "_id" : "chunksize", "value" : 1 }
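Why the chunk size matters here can be sketched with a rough calculation. This is illustrative only (real split and balance logic in MongoDB is more involved); the 54-byte average document size is taken from the db.table1.stats() output shown later in this article:

```python
# Illustrative sketch: a rough lower bound on how many chunks a
# range-sharded collection needs, given the configured chunk size.
# Real MongoDB splitting is more involved (splits are triggered by
# insert activity per chunk, not by a single global division).

def estimate_chunks(doc_count, avg_obj_size, chunk_size_mb):
    """Total data size divided by chunk size, rounded up."""
    total_bytes = doc_count * avg_obj_size
    chunk_bytes = chunk_size_mb * 1024 * 1024
    return max(1, -(-total_bytes // chunk_bytes))  # ceiling division

# With the default 64 MB chunks, 100,000 docs * 54 bytes (~5.4 MB) fits in one chunk:
print(estimate_chunks(100_000, 54, 64))  # 1 -> everything stays on one shard
# With 1 MB chunks the same data splits into several chunks the balancer can move:
print(estimate_chunks(100_000, 54, 1))   # 6
```

This matches what we observe: with 64 MB chunks the whole collection sat in one chunk on shard1, while after lowering the chunk size the collection splits into multiple chunks (the actual count of 11 exceeds this size-based lower bound because splits also happen during insert activity).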
Test again after the change:
mongos> use testdb
switched to db testdb
mongos> db.table1.drop();
true
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003", "state" : 1 }
active mongoses:
"3.4.7" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
14 : Success
databases:
{ "_id" : "testdb", "primary" : "shard1", "partitioned" : true }
testdb.table1
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 4
shard2 4
shard3 3
{ "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(5, 1)
{ "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(6, 1)
{ "id" : 20 } -->> { "id" : 9729 } on : shard1 Timestamp(7, 1)
{ "id" : 9729 } -->> { "id" : 21643 } on : shard1 Timestamp(3, 3)
{ "id" : 21643 } -->> { "id" : 31352 } on : shard2 Timestamp(4, 2)
{ "id" : 31352 } -->> { "id" : 43021 } on : shard2 Timestamp(4, 3)
{ "id" : 43021 } -->> { "id" : 52730 } on : shard3 Timestamp(5, 2)
{ "id" : 52730 } -->> { "id" : 64695 } on : shard3 Timestamp(5, 3)
{ "id" : 64695 } -->> { "id" : 74404 } on : shard1 Timestamp(6, 2)
{ "id" : 74404 } -->> { "id" : 87088 } on : shard1 Timestamp(6, 3)
{ "id" : 87088 } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(7, 0)
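The chunk ranges printed above partition the entire shard-key space: every id value falls into exactly one half-open range. A quick sketch verifying that property against the boundaries shown, with MinKey/MaxKey modeled as negative/positive infinity:

```python
# Chunk boundaries copied from the sh.status() output above.
# Each chunk owns the half-open range [lower, upper).
INF = float("inf")
chunks = [
    (-INF, 2, "shard2"),
    (2, 20, "shard3"),
    (20, 9729, "shard1"),
    (9729, 21643, "shard1"),
    (21643, 31352, "shard2"),
    (31352, 43021, "shard2"),
    (43021, 52730, "shard3"),
    (52730, 64695, "shard3"),
    (64695, 74404, "shard1"),
    (74404, 87088, "shard1"),
    (87088, INF, "shard2"),
]

def owning_shard(doc_id):
    """Return the shard whose chunk range contains doc_id."""
    matches = [s for lo, hi, s in chunks if lo <= doc_id < hi]
    assert len(matches) == 1, "every id must fall in exactly one chunk"
    return matches[0]

for i in (1, 2, 20, 50_000, 100_000):
    print(i, "->", owning_shard(i))
```

This is essentially the routing lookup mongos performs (using the chunk metadata cached from the config servers) to decide which shard serves a query on the shard key.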
mongos> db.table1.stats()
{
"sharded" : true,
"capped" : false,
"ns" : "testdb.table1",
"count" : 100000,
"size" : 5400000,
"storageSize" : 1736704,
"totalIndexSize" : 2191360,
"indexSizes" : {
"_id_" : 946176,
"id_1" : 1245184
},
"avgObjSize" : 54,
"nindexes" : 2,
"nchunks" : 11,
"shards" : {
"shard1" : {
"ns" : "testdb.table1",
"size" : 2376864,
"count" : 44016,
"avgObjSize" : 54,
"storageSize" : 753664,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 933888,
"indexSizes" : {
"_id_" : 405504,
"id_1" : 528384
},
"ok" : 1
},
"shard2" : {
"ns" : "testdb.table1",
"size" : 1851768,
"count" : 34292,
"avgObjSize" : 54,
"storageSize" : 606208,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 774144,
"indexSizes" : {
"_id_" : 335872,
"id_1" : 438272
},
"ok" : 1
},
"shard3" : {
"ns" : "testdb.table1",
"size" : 1171368,
"count" : 21692,
"avgObjSize" : 54,
"storageSize" : 376832,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 483328,
"indexSizes" : {
"_id_" : 204800,
"id_1" : 278528
},
"ok" : 1
}
},
"ok" : 1
}
As you can see, the data is now distributed much more evenly.
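The per-shard counts from db.table1.stats() can be turned into percentages to quantify the remaining skew:

```python
# Document counts per shard, taken from the db.table1.stats() output above.
counts = {"shard1": 44016, "shard2": 34292, "shard3": 21692}
total = sum(counts.values())

for shard, n in sorted(counts.items()):
    print(f"{shard}: {n} docs ({n / total:.1%})")
print("total:", total)  # total: 100000
```

Some imbalance is expected with a monotonically increasing shard key like this one: every new insert lands in the chunk holding MaxKey, so the distribution only evens out as the balancer migrates chunks after the fact. A hashed shard key spreads inserts more evenly at the cost of range-query locality.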
8. Ongoing operations and maintenance
Startup and shutdown
The cluster must be started in order: config servers first, then the shard servers, and finally mongos.
mongod -f /usr/local/mongodb/conf/config.conf
mongod -f /usr/local/mongodb/conf/shard1.conf
mongod -f /usr/local/mongodb/conf/shard2.conf
mongod -f /usr/local/mongodb/conf/shard3.conf
mongos -f /usr/local/mongodb/conf/mongos.conf
To shut everything down, kill all the processes directly with killall (killall sends SIGTERM by default, which mongod handles as a clean shutdown):
killall mongod
killall mongos
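The startup order above can be wrapped in a small helper. This is an illustrative sketch: the config-file paths and names assume the layout used in this article, and the command list is built separately from launching so the ordering itself can be checked:

```python
import subprocess

CONF_DIR = "/usr/local/mongodb/conf"  # layout used in this article

def startup_commands(conf_dir=CONF_DIR):
    """Build the launch commands in the required order:
    config server first, then the shards, and mongos last."""
    ordered = ["config", "shard1", "shard2", "shard3"]
    cmds = [["mongod", "-f", f"{conf_dir}/{name}.conf"] for name in ordered]
    # mongos is its own binary, not mongod
    cmds.append(["mongos", "-f", f"{conf_dir}/mongos.conf"])
    return cmds

def start_all():
    """Actually launch each process (requires mongod/mongos on PATH)."""
    for cmd in startup_commands():
        subprocess.run(cmd, check=True)

for cmd in startup_commands():
    print(" ".join(cmd))
```

Keeping the order in one place avoids the classic mistake of bringing up mongos before the config servers it must load its routing metadata from.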
References:
https://docs.mongodb.com/manual/sharding/
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.cnblogs.com/ityouknow/p/7344005.html