Changed in version 3.0.
MongoDB allows multiple clients to read and write the same data. In order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously. Together, these mechanisms guarantee that all writes to a single document occur either in full or not at all and that clients never see an inconsistent view of the data.
MongoDB uses multi-granularity locking [1] that allows operations to lock at the global, database or collection level, and allows for individual storage engines to implement their own concurrency control below the collection level (e.g., at the document-level in WiredTiger).
MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection. In MMAPv1, these locks give exclusive access to a single write operation.
In addition to a shared (S) locking mode for reads and an exclusive (X) locking mode for write operations, intent shared (IS) and intent exclusive (IX) modes indicate an intent to read or write a resource using a finer granularity lock. When locking at a certain granularity, all higher levels are locked using an intent lock.
For example, when locking a collection for writing (using mode X), both the corresponding database lock and the global lock must be locked in intent exclusive (IX) mode. A single database can simultaneously be locked in IS and IX mode, but an exclusive (X) lock cannot coexist with any other modes, and a shared (S) lock can only coexist with intent shared (IS) locks.
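To see these modes in practice, a brief, illustrative check of in-progress operations can help. This is a sketch only, assuming the mongo shell, which reports lock modes as r, w, R, and W for IS, IX, S, and X respectively:

```javascript
// Illustrative only: db.currentOp() reports the lock modes held per resource,
// where "r", "w", "R", and "W" denote IS, IX, S, and X respectively.
// An in-progress MMAPv1 write to a collection might report something like:
//   "locks" : { "Global" : "w", "Database" : "w", "Collection" : "W" }
// i.e. an exclusive (X) lock on the collection plus the intent exclusive (IX)
// locks it implies on the database and on the global (instance-wide) resource.
db.currentOp().inprog.forEach(function (op) {
  printjson({ opid: op.opid, ns: op.ns, locks: op.locks });
});
```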
Locks are fair, with reads and writes being queued in order. However, to optimize throughput, when one request is granted, all other compatible requests will be granted at the same time, potentially releasing them before a conflicting request. For example, consider a case in which an X lock was just released, and in which the conflict queue contains the following items:
IS → IS → X → X → S → IS
Under strict first-in, first-out (FIFO) ordering, only the first two IS modes would be granted. Instead, MongoDB grants all of the IS and S modes, and once they all drain, it grants X, even if new IS or S requests were queued in the meantime. Because a grant always moves all other requests ahead in the queue, no starvation of any request is possible.
[1] See the Wikipedia page on Multiple granularity locking for more information.
Changed in version 3.0.
Beginning with version 3.0, MongoDB ships with the WiredTiger storage engine.
For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
Some global operations, typically short-lived operations involving multiple databases, still require a global “instance-wide” lock. Some other operations, such as dropping a collection, still require an exclusive database lock.
The MMAPv1 storage engine uses collection-level locking as of the 3.0 release series, an improvement on earlier versions in which the database lock was the finest-grain lock. Third-party storage engines may either use collection-level locking or implement their own finer-grained concurrency control.
For example, if you have six collections in a database using the MMAPv1 storage engine and an operation takes a collection-level write lock, the other five collections are still available for read and write operations. An exclusive database lock makes all six collections unavailable for the duration of the operation holding the lock.
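As a brief sketch of this behavior (the orders and products collection names are hypothetical, and this assumes an MMAPv1 mongod), a slow multi-document write on one collection does not block access to another collection in the same database:

```javascript
// Shell 1: a multi-document write that holds a collection-level write lock
// on the hypothetical "orders" collection.
db.orders.update({}, { $set: { audited: true } }, { multi: true })

// Shell 2: because the lock is scoped to the "orders" collection, reads and
// writes against "products" in the same database still proceed.
db.products.find({ sku: "abc123" })
db.products.insert({ sku: "xyz789", qty: 5 })
```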
To report on lock utilization, use any of the following methods:

- db.serverStatus()
- db.currentOp()

Specifically, the locks document in the output of serverStatus, or the locks field in the current operation reporting, provides insight into the types of locks and the amount of lock contention in your mongod instance.

To terminate an operation, use db.killOp().
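For example, a minimal sketch (the opid shown is a placeholder) that combines these methods to find, inspect, and, if necessary, terminate an operation that is waiting on a lock:

```javascript
// Summary of lock acquisition counts and waits since startup:
printjson(db.serverStatus().locks)

// List in-progress operations that are currently waiting for a lock:
db.currentOp().inprog.forEach(function (op) {
  if (op.waitingForLock) {
    printjson({ opid: op.opid, ns: op.ns, locks: op.locks, secs: op.secs_running });
  }
});

// Terminate a specific operation by its opid (placeholder value; use with care):
// db.killOp(12345)
```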
In some situations, read and write operations can yield their locks.
Long-running read and write operations, such as queries, updates, and deletes, yield under many conditions. MongoDB operations can also yield locks between individual document modifications in write operations that affect multiple documents, such as update() with the multi parameter.
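As an illustration (a sketch only; the test.inventory namespace is hypothetical), the numYields counter in the db.currentOp() output shows how often a long-running multi-document update has yielded so far:

```javascript
// Shell 1: a multi-document update that can yield between document modifications.
db.inventory.update({ qty: { $lt: 10 } }, { $set: { reorder: true } }, { multi: true })

// Shell 2: inspect how often the in-progress update has yielded its locks.
db.currentOp().inprog.forEach(function (op) {
  if (op.op === "update" && op.ns === "test.inventory") {
    print("opid " + op.opid + " has yielded " + op.numYields + " times");
  }
});
```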
MongoDB’s MMAPv1 storage engine uses heuristics based on its access pattern to predict whether data is likely in physical memory before performing a read. If MongoDB predicts that the data is not in physical memory, an operation will yield its lock while MongoDB loads the data into memory. Once data is available in memory, the operation will reacquire the lock to complete the operation.
For storage engines supporting document level concurrency control, such as WiredTiger, yielding is not necessary when accessing storage as the intent locks, held at the global, database and collection level, do not block other readers and writers.
Changed in version 2.6: MongoDB does not yield locks when scanning an index even if it predicts that the index is not in memory.
The following table lists common database operations and the types of locks they use.
Operation | Lock Type |
---|---|
Issue a query | Read lock |
Get more data from a cursor | Read lock |
Insert data | Write lock |
Remove data | Write lock |
Update data | Write lock |
Map-reduce | Read lock and write lock, unless operations are specified as non-atomic. Portions of map-reduce jobs can run concurrently. |
Create an index | Building an index in the foreground, which is the default, locks the database for extended periods of time. |
db.eval() (deprecated since version 3.0) | Write lock. The db.eval() method takes a global write lock while evaluating the JavaScript function. To avoid taking this global write lock, you can use the eval command with nolock: true. |
eval (deprecated since version 3.0) | Write lock. By default, the eval command takes a global write lock while evaluating the JavaScript function. If used with nolock: true, the eval command does not take a global write lock while evaluating the JavaScript function. However, the logic within the JavaScript function may take write locks for write operations. |
aggregate() | Read lock |
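To illustrate the nolock option referenced in the table (a sketch only, since eval is deprecated; the collection and argument are hypothetical):

```javascript
// Deprecated: shown only to illustrate the nolock option mentioned above.
// Without nolock, the eval command takes a global write lock while the
// JavaScript function runs; with nolock: true it does not, although any
// writes performed inside the function still take their own locks.
db.runCommand({
  eval: function (name) { return db.products.count({ category: name }); },
  args: ["widgets"],   // hypothetical collection and argument
  nolock: true
})
```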
Certain administrative commands can exclusively lock the database for extended periods of time. In some deployments, for large databases, you may consider taking the mongod
instance offline so that clients are not affected. For example, if a mongod
is part of a replica set, take the mongod
offline and let other members of the set service load while maintenance is in progress.
The following administrative operations require an exclusive lock at the database level for extended periods:

- db.collection.createIndex(), when issued without setting background to true (see the sketch after this list),
- reIndex,
- compact,
- db.repairDatabase(),
- db.createCollection(), when creating a very large (i.e. many gigabytes) capped collection,
- db.collection.validate(), and
- db.copyDatabase(). This operation may lock all databases. See Does a MongoDB operation ever lock more than one database?.
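For example, a minimal sketch (with a hypothetical collection and field name) of the difference between a foreground and a background index build:

```javascript
// Foreground build (the default): holds an exclusive database-level lock
// for the duration of the build. "records" and "customerId" are hypothetical.
db.records.createIndex({ customerId: 1 })

// Background build: trades a longer build time for not holding the exclusive
// lock for an extended period, so other operations on the database can proceed.
db.records.createIndex({ customerId: 1 }, { background: true })
```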
The following administrative commands lock the database but only hold the lock for a very short time:

- db.collection.dropIndex(),
- db.getLastError(),
- db.isMaster(),
- rs.status() (i.e. replSetGetStatus),
- db.serverStatus(),
- db.auth(), and
- db.addUser().
The following MongoDB operations lock multiple databases:

- db.copyDatabase() must lock the entire mongod instance at once.
- db.repairDatabase() obtains a global write lock and will block other operations until it finishes.
- User authentication requires a read lock on the admin database for deployments using 2.6 user credentials. For deployments using the 2.4 schema for user credentials, authentication locks the admin database as well as the database the user is accessing.
- All writes to a replica set's primary lock both the database receiving the writes and the local database for a short time. The lock for the local database allows the mongod to write to the primary's oplog and accounts for a small portion of the total time of the operation.

Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances.
In a sharded cluster, locks apply to each individual shard, not to the whole cluster; i.e. each mongod
instance is independent of the others in the sharded cluster and uses its own locks. The operations on one mongod
instance do not block the operations on any others.
With replica sets, when MongoDB writes to a collection on the primary, MongoDB also writes to the primary’s oplog, which is a special collection in the local
database. Therefore, MongoDB must lock both the collection’s database and the local
database. The mongod
must lock both databases at the same time to keep the database consistent and ensure that write operations, even with replication, are “all-or-nothing” operations.
When writing to a replica set, the lock’s scope applies to the primary.
In replication, MongoDB does not apply writes serially to secondaries. Secondaries collect oplog entries in batches and then apply those batches in parallel. Secondaries do not allow reads while applying the write operations, and apply write operations in the order that they appear in the oplog.
MongoDB does not support multi-document transactions.
However, MongoDB does provide atomic operations on a single document. Often these document-level atomic operations are sufficient to solve problems that would require ACID transactions in a relational database.
For example, in MongoDB, you can embed related data in nested arrays or nested documents within a single document and update the entire document in a single atomic operation. Relational databases might represent the same kind of data with multiple tables and rows, which would require transaction support to update the data atomically.
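A minimal sketch (the orders collection and its fields are hypothetical): because the line items are embedded in the order document, adding an item and adjusting the total happen in one atomic single-document update.

```javascript
// Hypothetical schema: an order embeds its line items as a nested array.
db.orders.insert({
  _id: 1,
  status: "open",
  items: [ { sku: "abc", qty: 2 } ],
  total: 20
})

// Adding an item and adjusting the total happen in a single atomic
// single-document update; a concurrent reader never sees the new item
// without the updated total (or vice versa).
db.orders.update(
  { _id: 1 },
  { $push: { items: { sku: "xyz", qty: 1 } }, $inc: { total: 10 } }
)
```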
SEE ALSO
Atomicity and Transactions
MongoDB provides the following guarantees in the presence of concurrent read and write operations. These guarantees hold on systems configured with either the MMAPv1 or WiredTiger storage engines.
- Write operations are atomic with respect to a single document; i.e. if a write is updating multiple fields in the document, a reader will never see the document with only some of the fields updated.
- With a standalone mongod instance, a set of read and write operations to a single document is serializable. With a replica set, a set of read and write operations to a single document is serializable only in the absence of a rollback.
- Correctness with respect to query predicates, e.g. db.collection.find() will only return documents that match, and db.collection.update() will only write to matching documents.
- Correctness with respect to sort. For read operations that request a sort order (e.g. db.collection.find() or db.collection.aggregate()), the sort order will not be violated due to concurrent writes.
Although MongoDB provides these strong guarantees for single-document operations, read and write operations may access an arbitrary number of documents during execution. Multi-document operations do not occur transactionally and are not isolated from concurrent writes. This means that the following behaviors are expected under the normal operation of the system, for both the MMAPv1 and WiredTiger storage engines:
SEE ALSO
Atomicity and Transactions
Changed in version 3.2: MongoDB 3.2 introduced the readConcern option. Clients using "majority" readConcern cannot see the results of writes before they are made durable.

Readers using "local" readConcern can see the results of writes before they are made durable, regardless of write concern level or journaling configuration. As a result, applications may observe the following behaviors:
Other systems refer to these semantics as read uncommitted.
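As a sketch (the orders collection is hypothetical, and this assumes a replica set running MongoDB 3.2 or later with majority read concern support enabled), the read concern level can be passed to the find command:

```javascript
// Default ("local") read concern: may return data that has not yet been
// acknowledged by a majority of replica set members. "orders" is hypothetical.
db.runCommand({ find: "orders", filter: { status: "open" } })

// "majority" read concern: returns only data acknowledged by a majority of
// the replica set members, so the returned documents cannot be rolled back.
db.runCommand({
  find: "orders",
  filter: { status: "open" },
  readConcern: { level: "majority" }
})
```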