
MongoDB Sharding: Manually Maintaining Chunks


Notes from last year.

For instance, if a chunk represents a single shard key value, then MongoDB cannot split the chunk even when the chunk exceeds the size at which splits occur.

If a chunk contains only a single shard key value, MongoDB cannot split that chunk even when it grows beyond the split threshold. This makes the choice of shard key very important.

For example, suppose we use a date (with day-level precision) as the shard key. If one day has a very large amount of data, the chunk for that shard key value (say 2015/12/12) grows far beyond 64MB, yet it cannot be split. Data then becomes unbalanced across the shards and performance suffers.

So we should choose a highly selective field as the shard key. If a field (such as log level) has low selectivity, we can add a second, more selective field and use the two together as a compound shard key.

If we do shard on a date, we can avoid oversized chunks by storing the date with finer precision (down to hour, minute, and second) and sharding on that instead.
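A minimal sketch of both ideas, assuming a hypothetical logs.events collection with day, userId, and ts fields (the namespace and field names are invented here purely for illustration):

// hypothetical namespace and fields, only to illustrate the shard-key choices above
sh.enableSharding("logs")

// risky: one chunk per day; a hot day becomes an unsplittable jumbo chunk
// sh.shardCollection("logs.events", { day: 1 })

// option 1: compound key, coarse field first, high-selectivity field second
sh.shardCollection("logs.events", { day: 1, userId: 1 })

// option 2: a single timestamp field with second-level precision instead of day-level
// sh.shardCollection("logs.events", { ts: 1 })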


If your chunk ranges get down to a single key value then no further splits are possible and you get "jumbo" chunks.

Here is an example of such an oversized chunk:

http://dba.stackexchange.com/questions/72626/mongo-large-chunks-will-not-split


A common error:

Mongos version 3.0.1 Split Chunk Error with Sharding

http://dba.stackexchange.com/questions/96732/mongos-version-3-0-1-split-chunk-error-with-sharding?rq=1


Manually splitting chunks:

http://www.cnblogs.com/xuegang/archive/2012/12/27/2836209.html


1. Use splitFind to manually split a splittable chunk

splitFind(namespace, query): the query must include the shard key. It splits the chunk containing the matching document into two chunks of roughly equal size. The state of the test collection before any manual split:

mongos> db.users003.getShardDistribution()
Shard shard1 at shard1/192.168.137.111:27017,192.168.137.75:27017
 data : 212KiB docs : 3359 chunks : 2
 estimated data per chunk : 106KiB
 estimated docs per chunk : 1679
Shard shard2 at shard2/192.168.137.138:27018,192.168.137.75:27018
 data : 211KiB docs : 3337 chunks : 2
 estimated data per chunk : 105KiB
 estimated docs per chunk : 1668
Shard shard3 at shard3/192.168.137.111:27019,192.168.137.138:27019
 data : 209KiB docs : 3304 chunks : 2
 estimated data per chunk : 104KiB
 estimated docs per chunk : 1652
Totals
 data : 633KiB docs : 10000 chunks : 6
 Shard shard1 contains 33.58% data, 33.58% docs in cluster, avg obj size on shard : 64B
 Shard shard2 contains 33.37% data, 33.37% docs in cluster, avg obj size on shard : 64B
 Shard shard3 contains 33.03% data, 33.04% docs in cluster, avg obj size on shard : 64B
mongos> AllChunkInfo("test1.users003", true);
ChunkID,Shard,ChunkSize,ObjectsInChunk
test1.users003-_id_MinKey,shard1,106368,1662
test1.users003-_id_-6148914691236517204,shard1,108608,1697
test1.users003-_id_-3074457345618258602,shard3,107072,1673
test1.users003-_id_0,shard3,104384,1631
test1.users003-_id_3074457345618258602,shard2,110592,1728
test1.users003-_id_6148914691236517204,shard2,102976,1609
***********Summary Chunk Information***********
Total Chunks: 6
Average Chunk Size (bytes): 106666.66666666667
Empty Chunks: 0
Average Chunk Size (non-empty): 106666.66666666667
mongos> db.users003.count()
10000

After running splitFind (note that the first attempt below fails because the query does not contain the shard key), the chunk is split into two chunks of roughly equal size:

mongos> sh.splitFind("test1.users003",{"name" : "u_100"})
{
        "ok" : 0,
        "errmsg" : "no shard key found in chunk query { name: \"u_100\" }"
}
mongos> sh.splitFind("test1.users003",{"_id" : ObjectId("568bdf16e05cf980cec8c455")})
{ "ok" : 1 }
mongos> AllChunkInfo("test1.users003", true);
ChunkID,Shard,ChunkSize,ObjectsInChunk
test1.users003-_id_MinKey,shard1,106368,1662
test1.users003-_id_-6148914691236517204,shard1,54272,848
test1.users003-_id_-4665891797978533183,shard1,54336,849
test1.users003-_id_-3074457345618258602,shard3,107072,1673
test1.users003-_id_0,shard3,104384,1631
test1.users003-_id_3074457345618258602,shard2,110592,1728
test1.users003-_id_6148914691236517204,shard2,102976,1609
***********Summary Chunk Information***********
Total Chunks: 7
Average Chunk Size (bytes): 91428.57142857143
Empty Chunks: 0
Average Chunk Size (non-empty): 91428.57142857143
mongos> db.users003.getShardDistribution()
Shard shard1 at shard1/192.168.137.111:27017,192.168.137.75:27017
 data : 212KiB docs : 3359 chunks : 3
 estimated data per chunk : 70KiB
 estimated docs per chunk : 1119
Shard shard2 at shard2/192.168.137.138:27018,192.168.137.75:27018
 data : 211KiB docs : 3337 chunks : 2
 estimated data per chunk : 105KiB
 estimated docs per chunk : 1668
Shard shard3 at shard3/192.168.137.111:27019,192.168.137.138:27019
 data : 209KiB docs : 3304 chunks : 2
 estimated data per chunk : 104KiB
 estimated docs per chunk : 1652
Totals
 data : 633KiB docs : 10000 chunks : 7
 Shard shard1 contains 33.58% data, 33.58% docs in cluster, avg obj size on shard : 64B
 Shard shard2 contains 33.37% data, 33.37% docs in cluster, avg obj size on shard : 64B
 Shard shard3 contains 33.03% data, 33.04% docs in cluster, avg obj size on shard : 64B

2. Use splitAt to manually split a splittable chunk

splitAt(namespace, query), as the official documentation explains:

sh.splitAt() splits the original chunk into two chunks. One chunk has a shard key range that starts with the original lower bound (inclusive) and ends at the specified shard key value (exclusive). The other chunk has a shard key range that starts with the specified shard key value (inclusive) as the lower bound and ends at the original upper bound (exclusive).
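In other words, unlike splitFind, which splits the matched chunk near its midpoint, splitAt makes the given shard key value itself the new chunk boundary. A minimal sketch against the same test collection (the ObjectId below is illustrative, not taken from the session above):

mongos> // documents with _id >= the given value fall into the new (upper) chunk
mongos> sh.splitAt("test1.users003", { "_id" : ObjectId("568be000e05cf980cec8c999") })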

3. Manually migrating a chunk

db.runCommand( { moveChunk : "myapp.users" ,
                 find : {username : "smith"} ,
                 to : "mongodb-shard3.example.net" } )

Notes:

moveChunk: the collection name prefixed with its database, e.g. test.yql.

find: a query that identifies a document in the collection, and therefore the chunk containing it; the source shard is determined automatically.

to: the destination shard for the chunk.

The command returns once the destination shard and the source shard agree that the specified chunk has been taken over by the destination shard. Migrating a chunk is a fairly complex process that involves two internal communication protocols:

1. Copy the data, including any changes made to it while the copy is in progress.

2. Ensure that every participant in the migration (the destination shard, the source shard, and the config servers) confirms that the migration has completed.

The command will block until the migration is complete.
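Once it returns, one way to confirm where the chunk ended up is to re-check the chunk metadata and the distribution. A minimal sketch using the same myapp.users namespace as above; the config.chunks query assumes the older schema where chunks are keyed by ns, which is the same schema the script in the next section relies on:

mongos> db.getSiblingDB("config").chunks.find({ ns : "myapp.users" }, { shard : 1, min : 1, max : 1 })
mongos> use myapp
mongos> db.users.getShardDistribution()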

4. Related scripts

-- Show the chunk distribution for a collection:
db.collection.getShardDistribution()

Script to display per-chunk information:

AllChunkInfo = function(ns, est){
    // return all chunks for the namespace, ordered by min
    var chunks = db.getSiblingDB("config").chunks.find({"ns" : ns}).sort({min:1});
    // some counters for overall stats at the end
    var totalChunks = 0;
    var totalSize = 0;
    var totalEmpty = 0;
    print("ChunkID,Shard,ChunkSize,ObjectsInChunk"); // header row
    // iterate over all the chunks and print info for each
    chunks.forEach(
        function printChunkInfo(chunk) {
            var db1 = db.getSiblingDB(chunk.ns.split(".")[0]); // the database we will run the command against later
            var key = db.getSiblingDB("config").collections.findOne({_id:chunk.ns}).key; // shard key pattern, needed for the dataSize call
            // dataSize returns the info we need on the data; the estimate option uses counts and is less intensive
            var dataSizeResult = db1.runCommand({datasize:chunk.ns, keyPattern:key, min:chunk.min, max:chunk.max, estimate:est});
            // printjson(dataSizeResult); // uncomment to see how long it takes to run and its status
            print(chunk._id+","+chunk.shard+","+dataSizeResult.size+","+dataSizeResult.numObjects);
            totalSize += dataSizeResult.size;
            totalChunks++;
            if (dataSizeResult.size == 0) { totalEmpty++ }; // count empty chunks for the summary
        }
    )
    print("***********Summary Chunk Information***********");
    print("Total Chunks: "+totalChunks);
    print("Average Chunk Size (bytes): "+(totalSize/totalChunks));
    print("Empty Chunks: "+totalEmpty);
    print("Average Chunk Size (non-empty): "+(totalSize/(totalChunks-totalEmpty)));
}

Usage example:

mongos> AllChunkInfo("test1.users001", true);
ChunkID,Shard,ChunkSize,ObjectsInChunk
test1.users001-_id_MinKey,shard3,11347710,171935
test1.users001-_id_-6148914691236517204,shard1,11293458,171113
test1.users001-_id_-3074457345618258602,shard1,11320716,171526
test1.users001-_id_0,shard3,11349096,171956
test1.users001-_id_3074457345618258602,shard2,11340054,171819
test1.users001-_id_6148914691236517204,shard2,11328966,171651
***********Summary Chunk Information***********
Total Chunks: 6
Average Chunk Size (bytes): 11330000
Empty Chunks: 0
Average Chunk Size (non-empty): 11330000
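A related one-liner that can complement the script above (a sketch assuming the same config.chunks schema; chunks the balancer has given up on moving are flagged with jumbo: true):

mongos> db.getSiblingDB("config").chunks.find({ jumbo : true }).forEach(function(c) { printjson({ ns : c.ns, shard : c.shard, min : c.min, max : c.max }); })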

