Configuring Sharding
http://www.mongodb.org/display/DOCS/A+Sample+Configuration+Session
Theory: http://www.mongodb.org/display/DOCS/Configuring+Sharding
A Sample Configuration Session
The following example uses two shards (each with a single mongod process), one config db, and one mongos process, all running on a single test server. In addition to the script below, a python script for starting and configuring shard components on a single machine is available.
Creating the Shards
First, start up a couple mongods to be your shards.
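A minimal sketch of that step, assuming data directories under /data/db and ports 10000 and 10001 (the paths, ports, and log files are illustrative choices, not requirements):

$ mkdir -p /data/db/a /data/db/b
$ ./mongod --shardsvr --dbpath /data/db/a --port 10000 > /tmp/sharda.log &
$ ./mongod --shardsvr --dbpath /data/db/b --port 10001 > /tmp/shardb.log &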
Now you need a configuration server and mongos:
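For example (again, the directory and port choices are illustrative):

$ mkdir -p /data/db/config
$ ./mongod --configsvr --dbpath /data/db/config --port 20000 > /tmp/configdb.log &
$ ./mongos --configdb localhost:20000 > /tmp/mongos.log &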
mongos does not require a data directory; it gets its information from the config server.
You can toy with sharding by using a small --chunkSize, e.g. 1 MB. This is more satisfying when you're playing around, as you won't have to insert 64 MB of documents before you start seeing them move around. It should not be used in production.
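For instance, while experimenting you might start mongos as follows (the 1 MB value is only for testing):

$ ./mongos --configdb localhost:20000 --chunkSize 1 > /tmp/mongos.log &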
Setting up the Cluster
We need to run a few commands on the shell to hook everything up. Start the shell, connecting to the mongos process (at localhost:27017 if you followed the steps above). To set up our cluster, we'll add the two shards (a and b).
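A minimal sketch of those commands, assuming the two shard mongods were started on ports 10000 and 10001 as above (run these from the admin database):

db.runCommand( { addshard : "localhost:10000" } )
db.runCommand( { addshard : "localhost:10001" } )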
Now you need to tell the database that you want to spread out your data at a database and collection level. You have to give the collection a key (or keys) to partition by.
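For example, assuming an illustrative database mydb with a people collection partitioned on name (run from the admin database):

db.runCommand( { enablesharding : "mydb" } )
db.runCommand( { shardcollection : "mydb.people", key : { name : 1 } } )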
Administration
To see what's going on in the cluster, use the config database.
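For example, from the mongos shell:

use config
db.shards.find()
db.databases.find()
db.chunks.find()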
These collections contain all of the sharding configuration information.
Theory: http://www.mongodb.org/display/DOCS/Configuring+Sharding
Configuring Sharding
A shard cluster contains one to 1000 shards. Shards are partitions of data. Each shard consists of one or more mongod processes which store the data for that shard. When multiple mongod's are in a single shard, they are each storing the same data – that is, they are replicating to each other. For testing purposes, it's possible to start all the required processes on a single server, whereas in a production situation, a number of server configurations are possible. Once the shards (mongod's), config servers, and mongos processes are running, configuration is simply a matter of issuing a series of commands to establish the various shards as being part of the cluster. Once the cluster has been established, you can begin sharding individual collections. This document is fairly detailed; for a terse, code-only explanation, see the sample shard configuration. If you'd like a quick script to set up a test cluster on a single machine, we have a python sharding script that can do the trick.
Sharding Components
First, start the individual shards (mongod's), config servers, and mongos processes.
Shard Servers
A shard server consists of a mongod process or a replica set of mongod processes. For production, use a replica set for each shard for data safety and automatic failover. To get started with a simple test, we can run a single mongod process per shard, as a test configuration doesn't demand automated failover.
Config Servers
Run a mongod --configsvr process for each config server. If you're only testing, you can use only one config server. For production, use three.
Note: Replicating data to each config server is managed by the router (mongos); the routers use a synchronous replication protocol optimized for three machines, if you were wondering why that number. Do not run any of the config servers with --replSet; replication between them is automatic.
Note: As the metadata of a MongoDB cluster is fairly small, it is possible to run the config server processes on boxes also used for other purposes.
mongos Router
Run mongos on the servers of your choice. Specify the --configdb parameter to indicate the location of the config database(s). Note: use DNS names, not IP addresses, for the --configdb parameter's value; otherwise, moving config servers later is difficult. Note that each mongos will read from the first config server in the list provided. If you're running config servers across more than one data center, you should put the closest config servers early in the list.
Configuring the Shard Cluster
Once the shard components are running, issue the sharding commands. You may want to automate or record your steps below in a .js file for replay in the shell when needed. Start by connecting to one of the mongos processes, and then switch to the admin database before issuing any commands. The mongos will route commands to the right machine(s) in the cluster and, if commands change metadata, the mongos will update that on the config servers. So, regardless of the number of mongos processes you've launched, you'll only need to run these commands on one of those processes. You can connect to the admin database via mongos like so:
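For example, if a mongos is listening on localhost on the default port 27017 (adjust the host and port to match your deployment):

$ ./mongo localhost:27017/admin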
Adding shards
Each shard can consist of more than one server (see replica sets); however, for testing, only a single server with one mongod instance need be used. You must explicitly add each shard to the cluster's configuration using the addshard command:
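The general form is sketched below; the hostname and port are placeholders for your own shard server:

db.runCommand( { addshard : "<serverhostname>[:<port>]" } )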
Run this command once for each shard in the cluster. If the individual shards consist of replica sets, they can be added by specifying replicaSetName/
Any databases and collections that existed already in the mongod/replica set will be incorporated into the cluster. Those databases will have that mongod/replica set as their "primary" host, and the collections will not be sharded (you can shard them later by issuing a shardCollection command).
Optional Parameters: name, maxSize. As an example:
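A sketch, assuming a replica set named shard1 with a member at 10.0.0.1:27017; the name and maxSize (in megabytes) values shown are illustrative:

db.runCommand( { addshard : "shard1/10.0.0.1:27017", name : "shard1", maxSize : 100000 } )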
Listing shards
To see the current set of configured shards, run the listshards command:
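For example, from the admin database:

db.runCommand( { listshards : 1 } )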
This way, you can verify that all the shards have been committed to the system.
Removing a shard
See the removeshard command.
Enabling Sharding on a Database
Once you've added one or more shards, you can enable sharding on a database. Unless enabled, all data in the database will be stored on the same shard. After enabling, you then need to run shardCollection on the relevant collections (i.e., the big ones).
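A sketch of the command, with <dbname> standing in for your database name (run from the admin database):

db.runCommand( { enablesharding : "<dbname>" } )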
Once enabled, mongos will place new collections on the primary shard for that database. Existing collections within the database will stay on the original shard. To enable partitioning of data, we have to shard an individual collection.
Sharding a Collection
Use the shardcollection command to shard a collection. When you shard a collection, you must specify the shard key. If there is data in the collection, MongoDB will require an index on the shard key to be created up front (it speeds up the chunking process); otherwise, an index will be automatically created for you.
For example, let's assume we want to shard a GridFS chunks collection stored in the test database. We'd want to shard on the files_id key, so we'd invoke the shardcollection command like so:
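A sketch of that invocation, assuming GridFS's default fs prefix so the chunks collection is test.fs.chunks:

db.runCommand( { shardcollection : "test.fs.chunks", key : { files_id : 1 } } )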
You can use the {unique: true} option to ensure that the underlying index enforces uniqueness so long as the unique index is a prefix of the shard key. (Note: prior to version 2.0, this worked only if the collection was empty.)
If the "unique: true" option isnotused, the shard key does not have to be unique.
You can shard on multiple fields if you are using a compound index. In the end, picking the right shard key for your needs is extremely important for successful sharding; see Choosing a Shard Key.
Examples
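As a minimal sketch of a compound shard key, using a hypothetical logs.events collection partitioned on userId and ts (run from the admin database):

db.runCommand( { enablesharding : "logs" } )
db.runCommand( { shardcollection : "logs.events", key : { userId : 1, ts : 1 } } )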
See Also
Procedure
Complete this procedure by connecting to any mongos in the cluster using the mongo shell.
You can only remove a shard by its shard name. To discover or confirm the name of a shard, use the listshards or printShardingStatus commands or the sh.status() shell helper.
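For example, either of the following, run from the admin database of a mongos, will show the configured shard names:

db.runCommand( { listshards : 1 } )
sh.status()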
The following example will remove the shard named mongodb0.
Note
To successfully migrate data from a shard, the balancer process must be active. Check the balancer state using the sh.getBalancerState() helper in the mongo shell. See this section on balancer operations for more information.
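For example, the helper returns true when the balancer is enabled:

sh.getBalancerState()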
Remove Chunks from the Shard
Start by running the removeShard command. This begins "draining" chunks from the shard you are removing.
db.runCommand( { removeshard: "mongodb0" } )
This operation will return a response immediately. For example:
{ msg : "draining started successfully" , state: "started" , shard :"mongodb0" , ok : 1 }
Depending on your network capacity and the amount of data in your cluster, this operation can take anywhere from a few minutes to several days to complete.
Check the Status of the Migration
You can run the removeShard command again at any stage of the process to check the progress of the migration, as follows:
db.runCommand( { removeshard: "mongodb0" } )
The output will resemble the following document:
{ msg: "draining ongoing" , state: "ongoing" , remaining: { chunks: 42, dbs : 1 }, ok: 1 }
In the remaining sub-document, a counter displays the remaining number of chunks that MongoDB must migrate to other shards, and the number of MongoDB databases that have "primary" status on this shard.
Continue checking the status of the removeshard command until the number of chunks remaining is 0; then you can proceed to the next step.
Move Unsharded Databases
Databases with non-sharded collections store these collections on a single shard, known as the “primary” shard for that database. The following step is necessary only when the shard you want to remove is also the “primary” shard for one or more databases.
Issue the following command at the mongo shell:
db.runCommand( { movePrimary: "myapp", to: "mongodb1" })
This command will migrate all remaining non-sharded data in the database named myapp to the shard named mongodb1.
Warning
Do not run movePrimary until you have finished draining the shard.
This command will not return until MongoDB completes moving all data, which may take a long time. The response from this command will resemble the following:
{ "primary" : "mongodb1", "ok" : 1 }
Finalize the Migration
Run removeShard again to clean up all metadata information and finalize the removal, as follows:
db.runCommand( { removeshard: "mongodb0" } )
When successful, the response will be the following:
{ msg: "remove shard completed succesfully" , stage: "completed", host: "mongodb0", ok : 1 }
When the value of “state” is “completed”, you may safely stop themongodb0shard.