General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--id CLIENT_ID] [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [--admin-socket ADMIN_SOCKET_NOPE]
            [-s] [-w] [--watch-debug] [--watch-info] [--watch-sec]
            [--watch-warn] [--watch-error] [--version] [--verbose] [--concise]
            [-f {json,json-pretty,xml,xml-pretty,plain}]
            [--connect-timeout CLUSTER_TIMEOUT]

Ceph administration tool

optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help)
  --admin-socket ADMIN_SOCKET_NOPE
                        you probably mean --admin-daemon
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster
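
For example, a few common invocations; the timeout value and output format
shown are just illustrations, and every flag used appears in the listing
above:

    ceph -s                                # one-shot cluster status
    ceph -w                                # watch live cluster changes
    ceph --connect-timeout 10 -f json-pretty df
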
Monitor commands:
=================
[Contacting monitor, timeout after 5 seconds]
auth add <entity> {<caps> [<caps>...]}    add auth info for <entity> from
                                          input file, or random key if no
                                          input given, and/or any caps
                                          specified in the command
auth caps <entity> <caps> [<caps>...]     update caps for <entity> from caps
                                          specified in the command
auth del <entity>                         delete all caps for <entity>
auth export {<entity>}                    write keyring for requested entity,
                                          or master keyring if none given
auth get <entity>                         write keyring file with requested
                                          key
auth get-key <entity>                     display requested key
auth get-or-create <entity> {<caps>       add auth info for <entity> from
 [<caps>...]}                             input file, or random key if no
                                          input given, and/or any caps
                                          specified in the command
auth get-or-create-key <entity> {<caps>   get, or add, key for <entity> from
 [<caps>...]}                             system/caps pairs specified in the
                                          command. If key already exists, any
                                          given caps must match the existing
                                          caps for that key.
auth import                               auth import: read keyring file from
                                          -i
auth list                                 list authentication state
auth print-key <entity>                   display requested key
auth print_key <entity>                   display requested key
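
For example, a minimal sketch that creates a keyed client; the client name,
caps, and pool here are hypothetical:

    ceph auth get-or-create client.app mon 'allow r' osd 'allow rw pool=data' \
        -o /etc/ceph/ceph.client.app.keyring
    ceph auth list                         # inspect the resulting auth state
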
compact                                   cause compaction of monitor's
                                          leveldb storage
config-key del <key>                      delete <key>
config-key exists <key>                   check for <key>'s existence
config-key get <key>                      get <key>
config-key list                           list keys
config-key put <key> {<val>}              put <key>, value <val>
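
A minimal round trip through the config-key store (the key and value are
hypothetical):

    ceph config-key put mykey myvalue
    ceph config-key exists mykey
    ceph config-key get mykey
    ceph config-key del mykey
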
df {detail}                               show cluster free space stats
fsid                                      show cluster FSID/UUID
health {detail}                           show cluster health
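
Both df and health accept the optional detail argument for a longer report:

    ceph health detail
    ceph df detail
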
heap dump|start_profiler|stop_profiler|   show heap usage info (available only
 release|stats                            if compiled with tcmalloc)
injectargs <injected_args>                inject config arguments into monitor
 [<injected_args>...]
log <logtext> [<logtext>...]              log supplied text to the monitor log
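
For example, to raise the monitor's debug level at runtime and leave a note
in the monitor log (the debug setting shown is illustrative):

    ceph injectargs '--debug-mon 10'
    ceph log 'maintenance window starting'
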
mds add_data_pool <pool>                  add data pool <pool>
mds cluster_down                          take MDS cluster down
mds cluster_up                            bring MDS cluster up
mds compat rm_compat <int[0-]>            remove compatible feature
mds compat rm_incompat <int[0-]>          remove incompatible feature
mds compat show                           show mds compatibility settings
mds deactivate <who>                      stop mds
mds dump {<int[0-]>}                      dump info, optionally from epoch
mds fail <who>                            force mds to status failed
mds getmap {<int[0-]>}                    get MDS map, optionally from epoch
mds newfs <int[0-]> <int[0-]> {--yes-i-   make new filesystem using pools
 really-mean-it}                          <metadata> and <data>
mds remove_data_pool <pool>               remove data pool <pool>
mds rm <int[0-]> <name (type.id)>         remove nonactive mds
mds rmfailed <int[0-]>                    remove failed mds
mds set max_mds|max_file_size|allow_new_  set mds parameter <var> to <val>
 snaps|inline_data <val> {<confirm>}
mds set_max_mds <int[0-]>                 set max MDS index
mds set_state <int[0-]> <int[0-20]>       set mds state of <gid> to
                                          <numeric-state>
mds setmap <int[0-]>                      set mds map; must supply correct
                                          epoch number
mds stat                                  show MDS status
mds stop <who>                            stop mds
mds tell <who> <args> [<args>...]         send command to particular mds
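
For instance (the mds rank passed to fail is illustrative):

    ceph mds stat                          # brief MDS status
    ceph mds dump                          # dump the current MDS map
    ceph mds fail 0                        # mark mds rank 0 failed
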
mon add <name> <IPaddr[:port]>            add new monitor named <name> at
                                          <addr>
mon dump {<int[0-]>}                      dump formatted monmap (optionally
                                          from epoch)
mon getmap {<int[0-]>}                    get monmap
mon remove <name>                         remove monitor named <name>
mon stat                                  summarize monitor status
mon_status                                report status of monitors
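
For example, adding and then removing a monitor (the name and address are
hypothetical):

    ceph mon add c 10.0.0.3:6789
    ceph mon remove c
    ceph mon stat
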
osd blacklist add|rm <EntityAddr>         add (optionally until <expire>
 {<float[0.0-]>}                          seconds from now) or remove <addr>
                                          from blacklist
osd blacklist ls                          show blacklisted clients
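
For example, to blacklist a (hypothetical) client address for 600 seconds and
then clear it:

    ceph osd blacklist add 192.168.0.10:0/3214 600
    ceph osd blacklist ls
    ceph osd blacklist rm 192.168.0.10:0/3214
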
osd create {<uuid>}                       create new osd (with optional UUID)
osd crush add <osdname (id|osd.id)>       add or update crushmap position and
 <float[0.0-]> <args> [<args>...]         weight for <name> with <weight> and
                                          location <args>
osd crush add-bucket <name> <type>        add no-parent (probably root) crush
                                          bucket <name> of type <type>
osd crush create-or-move <osdname (id|    create entry or move existing entry
 osd.id)> <float[0.0-]> <args>            for <name> <weight> at/to location
 [<args>...]                              <args>
osd crush dump                            dump crush map
osd crush link <name> <args> [<args>...]  link existing entry for <name> under
                                          location <args>
osd crush move <name> <args> [<args>...]  move existing entry for <name> to
                                          location <args>
osd crush remove <name> {<ancestor>}      remove <name> from crush map
                                          (everywhere, or just at <ancestor>)
osd crush reweight <name> <float[0.0-]>   change <name>'s weight to <weight>
                                          in crush map
osd crush rm <name> {<ancestor>}          remove <name> from crush map
                                          (everywhere, or just at <ancestor>)
osd crush rule create-erasure <name>      create crush rule <name> for erasure
 {<profile>}                              coded pool created with <profile>
                                          (default default)
osd crush rule create-simple <name>       create crush rule <name> to start
 <root> <type> {firstn|indep}             from <root>, replicate across
                                          buckets of type <type>, using a
                                          choose mode of <firstn|indep>
                                          (default firstn; indep best for
                                          erasure pools)
osd crush rule dump {<name>}              dump crush rule <name> (default all)
osd crush rule list                       list crush rules
osd crush rule ls                         list crush rules
osd crush rule rm <name>                  remove crush rule <name>
osd crush set                             set crush map from input file
osd crush set <osdname (id|osd.id)>       update crushmap position and weight
 <float[0.0-]> <args> [<args>...]         for <name> to <weight> with location
                                          <args>
osd crush show-tunables                   show current crush tunables
osd crush tunables legacy|argonaut|       set crush tunables values to
 bobtail|firefly|optimal|default          <profile>
osd crush unlink <name> {<ancestor>}      unlink <name> from crush map
                                          (everywhere, or just at <ancestor>)
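
A sketch of common crush edits; the osd id, weight, rule name, and bucket
names are all hypothetical:

    ceph osd crush add osd.5 1.0 root=default host=node2
    ceph osd crush reweight osd.5 2.0
    ceph osd crush rule create-simple myrule default host firstn
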
osd deep-scrub <who>                      initiate deep scrub on osd <who>
osd down <ids> [<ids>...]                 set osd(s) <id> [<id>...] down
osd dump {<int[0-]>}                      print summary of OSD map
osd erasure-code-profile get <name>       get erasure code profile <name>
osd erasure-code-profile ls               list all erasure code profiles
osd erasure-code-profile rm <name>        remove erasure code profile <name>
osd erasure-code-profile set <name>       create erasure code profile <name>
 {<profile> [<profile>...]}               with [<key[=value]> ...] pairs. Add
                                          a --force at the end to override an
                                          existing profile (VERY DANGEROUS)
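
For example, a hypothetical profile with k=3 data and m=2 coding chunks (the
failure-domain key name here assumes the firefly-era default plugin):

    ceph osd erasure-code-profile set myprofile k=3 m=2 ruleset-failure-domain=host
    ceph osd erasure-code-profile get myprofile
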
osd find <int[0-]>                        find osd <id> in the CRUSH map and
                                          show its location
osd getcrushmap {<int[0-]>}               get CRUSH map
osd getmap {<int[0-]>}                    get OSD map
osd getmaxosd                             show largest OSD id
osd in <ids> [<ids>...]                   set osd(s) <id> [<id>...] in
osd lost <int[0-]> {--yes-i-really-mean-  mark osd as permanently lost. THIS
 it}                                      DESTROYS DATA IF NO MORE REPLICAS
                                          EXIST, BE CAREFUL
osd ls {<int[0-]>}                        show all OSD ids
osd lspools {<int>}                       list pools
osd map <poolname> <objectname>           find pg for <object> in <pool>
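
For example, to map a (hypothetical) object to its placement group and locate
an osd in the CRUSH map:

    ceph osd map rbd myobject
    ceph osd find 3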