General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--id CLIENT_ID] [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [--admin-socket ADMIN_SOCKET_NOPE]
            [-s] [-w] [--watch-debug] [--watch-info] [--watch-sec]
            [--watch-warn] [--watch-error] [--version] [--verbose] [--concise]
            [-f {json,json-pretty,xml,xml-pretty,plain}]
            [--connect-timeout CLUSTER_TIMEOUT]

Ceph administration tool

optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help
  --admin-socket ADMIN_SOCKET_NOPE
                        you probably mean --admin-daemon
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster
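
The global options above can be combined with any of the monitor commands listed below. As a quick sketch (the cluster name, config path and client id shown here are just the common defaults, not something this help output guarantees):

    ceph -s                                             # short cluster status
    ceph --cluster ceph -c /etc/ceph/ceph.conf --id admin health detail
    ceph -f json-pretty --connect-timeout 10 df detail  # machine-readable output, 10s connect timeout
    ceph -w                                             # watch live cluster changes
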
Monitor commands:
=================
[Contacting monitor, timeout after 5 seconds]
auth add <entity> {<caps> [<caps>...]}  add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth caps <entity> <caps> [<caps>...]  update caps for <name> from caps specified in the command
auth del <entity>  delete all caps for <name>
auth export {<entity>}  write keyring for requested entity, or master keyring if none given
auth get <entity>  write keyring file with requested key
auth get-key <entity>  display requested key
auth get-or-create <entity> {<caps> [<caps>...]}  add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth get-or-create-key <entity> {<caps> [<caps>...]}  get, or add, key for <name>; caps must be specified in the command. If key already exists, any given caps must match the existing caps for that key.
auth import  auth import: read keyring file from -i <file>
auth list  list authentication state
auth print-key <entity>  display requested key
auth print_key <entity>  display requested key
compact  cause compaction of monitor's leveldb storage
config-key del <key>  delete <key>
config-key exists <key>  check for <key>'s existence
config-key get <key>  get <key>
config-key list  list keys
config-key put <key> {<val>}  put <key>, value <val>
df {detail}  show cluster free space stats
fsid  show cluster FSID/UUID
health {detail}  show cluster health
heap dump|start_profiler|stop_profiler|release|stats  show heap usage info (available only if compiled with tcmalloc)
injectargs <injected_args> [<injected_args>...]  inject config arguments into monitor
log <logtext> [<logtext>...]  log supplied text to the monitor log
mds add_data_pool <pool>  add data pool <pool>
mds cluster_down  take MDS cluster down
mds cluster_up  bring MDS cluster up
mds compat rm_compat <int[0-]>  remove compatible feature
mds compat rm_incompat <int[0-]>  remove incompatible feature
mds compat show  show mds compatibility settings
mds deactivate <who>  stop mds
mds dump {<int[0-]>}  dump info, optionally from epoch
mds fail <who>  force mds to status failed
mds getmap {<int[0-]>}  get MDS map, optionally from epoch
mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it}  make new filesystem using pools <metadata> and <data>
mds remove_data_pool <pool>  remove data pool <pool>
mds rm <int[0-]> <name (type.id)>  remove nonactive mds
mds rmfailed <int[0-]>  remove failed mds
mds set max_mds|max_file_size|allow_new_snaps|inline_data <val> {<confirm>}  set mds parameter to <val>
mds set_max_mds <int[0-]>  set max MDS index
mds set_state <int[0-]> <int[0-20]>  set mds state of <gid> to <numeric-state>
mds setmap <int[0-]>  set mds map; must supply correct epoch number
mds stat  show MDS status
mds stop <who>  stop mds
mds tell <who> <args> [<args>...]  send command to particular mds
mon add <name> <IPaddr[:port]>  add new monitor named <name> at <addr>
mon dump {<int[0-]>}  dump formatted monmap (optionally from epoch)
mon getmap {<int[0-]>}  get monmap
mon remove <name>  remove monitor named <name>
mon stat  summarize monitor status
mon_status  report status of monitors
osd blacklist add|rm <EntityAddr> {<float[0.0-]>}  add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd blacklist ls  show blacklisted clients
osd create {<uuid>}  create new osd (with optional UUID)
osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]  add or update crushmap position and weight for <name> with <weight> and location <args>
osd crush add-bucket <name> <type>  add no-parent (probably root) crush bucket <name> of type <type>
osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]  create entry or move existing entry for <name> <weight> at/to location <args>
osd crush dump  dump crush map
osd crush link <name> <args> [<args>...]  link existing entry for <name> under location <args>
osd crush move <name> <args> [<args>...]  move existing entry for <name> to location <args>
osd crush remove <name> {<ancestor>}  remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush reweight <name> <float[0.0-]>  change <name>'s weight to <weight> in crush map
osd crush rm <name> {<ancestor>}  remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush rule create-erasure <name> {<profile>}  create crush rule <name> for erasure coded pool created with <profile> (default default)
osd crush rule create-simple <name> <root> <type> {firstn|indep}  create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
osd crush rule dump {<name>}  dump crush rule <name> (default all)
osd crush rule list  list crush rules
osd crush rule ls  list crush rules
osd crush rule rm <name>  remove crush rule <name>
osd crush set  set crush map from input file
osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]  update crushmap position and weight for <name> to <weight> with location <args>
osd crush show-tunables  show current crush tunables
osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default  set crush tunables values to <profile>
osd crush unlink <name> {<ancestor>}  unlink <name> from crush map (everywhere, or just at <ancestor>)
osd deep-scrub <who>  initiate deep scrub on osd <who>
osd down <ids> [<ids>...]  set osd(s) <id> [<id>...] down
osd dump {<int[0-]>}  print summary of OSD map
osd erasure-code-profile get <name>  get erasure code profile <name>
osd erasure-code-profile ls  list all erasure code profiles
osd erasure-code-profile rm <name>  remove erasure code profile <name>
osd erasure-code-profile set <name> {<profile> [<profile>...]}  create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
osd find <int[0-]>  find osd <id> in the CRUSH map and show its location
osd getcrushmap {<int[0-]>}  get CRUSH map
osd getmap {<int[0-]>}  get OSD map
osd getmaxosd  show largest OSD id
osd in <ids> [<ids>...]  set osd(s) <id> [<id>...] in
osd lost <int[0-]> {--yes-i-really-mean-it}  mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
osd ls {<int[0-]>}  show all OSD ids
osd lspools {<int>}  list pools
osd map <poolname> <objectname>  find pg for <object> in <pool>
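
To make the auth entries above concrete, here is a minimal sketch of creating, inspecting and removing a client key; the client name client.foo, the pool name rbd and the keyring path are assumptions for illustration, not part of the help text:

    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rwx pool=rbd' \
        -o /etc/ceph/ceph.client.foo.keyring      # hypothetical client, pool and path
    ceph auth list                                # show all keys and caps
    ceph auth caps client.foo mon 'allow r' osd 'allow rw pool=rbd'
    ceph auth del client.foo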
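
Likewise, a sketch of how the osd crush commands compose when placing a new OSD; the id, weight and bucket names (osd.12, rack1, node1) are made up for the example:

    ceph osd create                               # allocate the next free OSD id
    ceph osd crush add-bucket rack1 rack          # hypothetical rack bucket
    ceph osd crush move rack1 root=default
    ceph osd crush add osd.12 1.0 host=node1 rack=rack1
    ceph osd crush reweight osd.12 0.8
    ceph osd find 12                              # confirm its CRUSH location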
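
Finally, the erasure-code-profile and crush rule entries are typically used together; the profile name, rule name and k/m values below are illustrative only:

    ceph osd erasure-code-profile set myprofile k=3 m=1
    ceph osd erasure-code-profile ls
    ceph osd erasure-code-profile get myprofile
    ceph osd crush rule create-erasure ecrule myprofile
    ceph osd crush rule ls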