Record: 337
Scenario: on CentOS 7.9, in a Ceph cluster, use the ceph command to view cluster information as well as information about the mon, mgr, mds, osd, and rgw components.
Versions:
OS: CentOS 7.9
Ceph: ceph-13.2.10
Terms:
ceph: the command-line administration tool for a Ceph cluster.
1. Basic environment
The ceph-deploy, ceph, and ceph-radosgw packages need to be installed on the planned cluster hosts.
(1) Install packages on the primary (admin) node
Install command: yum install -y ceph-deploy ceph-13.2.10
Install command: yum install -y ceph-radosgw-13.2.10
Explanation: the cluster's primary node installs the ceph-deploy, ceph, and ceph-radosgw packages.
(2) Install packages on the secondary nodes
Install command: yum install -y ceph-13.2.10
Install command: yum install -y ceph-radosgw-13.2.10
Explanation: the cluster's secondary nodes install the ceph and ceph-radosgw packages. A quick verification sketch follows.
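To confirm that the expected release actually landed, the installed packages can be checked on each node (illustrative; the list differs by node role, and typical entries look like ceph-13.2.10-0.el7.x86_64):
Command: rpm -qa | grep ceph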
2. Command usage
The ceph command is run on the cluster's primary node, under the /etc/ceph directory (where ceph.conf and the admin keyring reside).
(1) View the Ceph version
Command: ceph --version
Explanation: shows the Ceph version installed on the current host.
(2) View cluster status
Command: ceph -s
Explanation: shows the overall cluster status; this is the most frequently used command here. A sample output sketch follows.
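An output sketch for reference, assuming a hypothetical three-node lab cluster (the fsid, host names app161/app162, and all numbers are placeholders; real output will differ):
  cluster:
    id:     <fsid>
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum app161
    mgr: app162(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   2 pools, 32 pgs
    objects: 22 objects, 2.2 KiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     32 active+clean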
(3) Watch cluster status in real time
Command: ceph -w
Explanation: watches the cluster in real time; while the command runs, the console continuously prints cluster changes.
(4) View mgr services
Command: ceph mgr services
Explanation: prints the service endpoints exposed by mgr modules, e.g. "dashboard": "https://app162:18443/"; the dashboard can then be opened in a browser at that URL.
(5) View the mon status summary
Command: ceph mon stat
Explanation: summarizes monitor status; see the illustrative output below.
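Illustrative output for a single-monitor cluster, roughly of the form (host name and address are placeholders):
e1: 1 mons at {app161=192.168.0.161:6789/0}, election epoch 5, leader 0 app161, quorum 0 app161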
(6) View the mds status summary
Command: ceph mds stat
Explanation: summarizes MDS status.
(7) View the osd status summary
Command: ceph osd stat
Explanation: summarizes OSD status.
(8) Create a pool
Command: ceph osd pool create hz_data 16
Explanation: creates a storage pool named hz_data with 16 placement groups (PGs).
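The pgp_num can also be passed explicitly as a second count; when omitted it defaults to pg_num. A sketch with the same pool name:
Command: ceph osd pool create hz_data 16 16
Explanation: creates hz_data with pg_num 16 and pgp_num 16.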
(9) List pools
Command: ceph osd pool ls
Explanation: lists the existing storage pools.
(10) View a pool's PG count
Command: ceph osd pool get hz_data pg_num
Explanation: shows the pool's pg_num value.
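Sample output for the pool created above:
pg_num: 16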
(11) Set a pool's PG count
Command: ceph osd pool set hz_data pg_num 18
Explanation: sets the pool's pg_num value.
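In this release pg_num can only be increased, never decreased, and pgp_num normally has to be raised to the same value before data actually rebalances; a companion command under that assumption:
Command: ceph osd pool set hz_data pgp_num 18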
(12) Delete a pool
Command: ceph osd pool delete hz_data hz_data --yes-i-really-really-mean-it
Explanation: when deleting a pool, the pool name has to be given twice.
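If the monitors refuse the deletion, the mon_allow_pool_delete option usually has to be enabled first; one way, using the centralized config store available in this release:
Command: ceph config set mon mon_allow_pool_delete true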
(13) Create a Ceph filesystem
Command: ceph fs new hangzhoufs xihu_metadata xihu_data
Explanation: ceph fs new creates a CephFS filesystem named hangzhoufs, with xihu_metadata as the metadata pool and xihu_data as the data pool (metadata pool first, data pool second). Both pools must already exist; see the sketch below.
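For example, following the pool-creation pattern above (the PG counts are illustrative):
Command: ceph osd pool create xihu_metadata 16
Command: ceph osd pool create xihu_data 16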
(14) List Ceph filesystems
Command: ceph fs ls
Explanation: lists the CephFS filesystems, printing each filesystem's name and its pools.
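Sample output for the filesystem created above:
name: hangzhoufs, metadata pool: xihu_metadata, data pools: [xihu_data ]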
(15) View Ceph filesystem status
Command: ceph fs status
Explanation: shows the status of the CephFS filesystem, printing its pools, their types, usage, and so on.
(16) Remove a Ceph filesystem
Command: ceph fs rm hangzhoufs --yes-i-really-mean-it
Explanation: hangzhoufs is the name of a previously created CephFS filesystem.
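The removal is refused while MDS daemons are still active for the filesystem, so they need to be stopped or marked down first. Two possible ways (the systemd unit is the standard Ceph one; the down flag appears among the fs set options in the reference below):
Command: systemctl stop ceph-mds.target    (run on the MDS host)
Command: ceph fs set hangzhoufs down true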
(17) View service status
Command: ceph service status
Explanation: shows the status of registered services, including the last time each service reported in.
(18) View quorum status
Command: ceph quorum_status
Explanation: shows the monitor quorum status.
(19) View PG status
Command: ceph pg stat
Explanation: shows placement group (PG) status; see the illustrative output below.
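Illustrative output (all numbers are placeholders):
32 pgs: 32 active+clean; 2.2 KiB data, 3.0 GiB used, 27 GiB / 30 GiB avail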
(20) List PGs
Command: ceph pg ls
Explanation: lists all PGs.
(21) View OSD disk usage
Command: ceph osd df
Explanation: prints per-OSD disk information, including capacity, available space, and used space.
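A related form that groups the same per-OSD statistics by the CRUSH hierarchy:
Command: ceph osd df tree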
3. Command reference
(1) ceph help
Command: ceph --help
Explanation: lists every command and option that ceph supports; in day-to-day work this built-in reference is pretty much indispensable. The full output is reproduced below.
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
[--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
[--name CLIENT_NAME] [--cluster CLUSTER]
[--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
[--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
[--watch-channel {cluster,audit,*}] [--version] [--verbose]
[--concise] [-f {json,json-pretty,xml,xml-pretty,plain}]
[--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]
Ceph administration tool
optional arguments:
-h, --help request mon help
-c CEPHCONF, --conf CEPHCONF
ceph configuration file
-i INPUT_FILE, --in-file INPUT_FILE
input file, or "-" for stdin
-o OUTPUT_FILE, --out-file OUTPUT_FILE
output file, or "-" for stdout
--setuser SETUSER set user file permission
--setgroup SETGROUP set group file permission
--id CLIENT_ID, --user CLIENT_ID
client id for authentication
--name CLIENT_NAME, -n CLIENT_NAME
client name for authentication
--cluster CLUSTER cluster name
--admin-daemon ADMIN_SOCKET
submit admin-socket commands ("help" for help
-s, --status show cluster status
-w, --watch watch live cluster changes
--watch-debug watch debug events
--watch-info watch info events
--watch-sec watch security events
--watch-warn watch warn events
--watch-error watch error events
--watch-channel {cluster,audit,*}
which log channel to follow when using -w/--watch. One
of ['cluster', 'audit', '*']
--version, -v display version
--verbose make verbose
--concise make less verbose
-f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
--connect-timeout CLUSTER_TIMEOUT
set a timeout for connecting to the cluster
--block block until completion (scrub and deep-scrub only)
--period PERIOD, -p PERIOD
polling period, default 1.0 second (for polling
commands only)
Local commands:
===============
ping Send simple presence/life test to a mon
may be 'mon.*' for all mons
daemon {type.id|path}
Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [] []
daemonperf {type.id | path} list|ls [stat-pats] [priority]
Get selected perf stats from daemon/admin socket
Optional shell-glob comma-delim match string stat-pats
Optional selection priority (can abbreviate name):
critical, interesting, useful, noninteresting, debug
List shows a table of all available stats
Run times (default forever),
once per seconds (default 1)
Monitor commands:
=================
auth add { [...]} add auth info for from input file, or random key
if no input is given, and/or any caps specified in the
command
auth caps [...] update caps for from caps specified in the command
auth export {} write keyring for requested entity, or master keyring if
none given
auth get write keyring file with requested key
auth get-key display requested key
auth get-or-create { [...]} add auth info for from input file, or random key
if no input given, and/or any caps specified in the command
auth get-or-create-key { [...]} get, or add, key for from system/caps pairs
specified in the command. If key already exists, any
given caps must match the existing caps for that key.
auth import auth import: read keyring file from -i
auth ls list authentication state
auth print-key display requested key
auth print_key display requested key
auth rm remove all caps for
balancer dump Show an optimization plan
balancer eval {} Evaluate data distribution for the current cluster or
specific pool or specific plan
balancer eval-verbose { } Evaluate data distribution for the current cluster or
specific pool or specific plan (verbosely)
balancer execute Execute an optimization plan
balancer ls List all plans
balancer mode none|crush-compat|upmap Set balancer mode
balancer off Disable automatic balancing
balancer on Enable automatic balancing
balancer optimize { [...]} Run optimizer to create a new plan
balancer reset Discard all optimization plans
balancer rm Discard an optimization plan
balancer show Show details of an optimization plan
balancer sleep Set balancer sleep interval
balancer status Show balancer status
config assimilate-conf Assimilate options from a conf, and return a new, minimal
conf file
config dump Show all configuration option(s)
config get {} Show configuration option(s) for an entity
config help Describe a configuration option
config log {} Show recent history of config changes
config reset Revert configuration to previous state
config rm Clear a configuration option for one or more entities
config set Set a configuration option for one or more entities
config show {} Show running configuration
config show-with-defaults Show running configuration (including compiled-in defaults)
config-key dump {} dump keys and values (with optional prefix)
config-key exists check for 's existence
config-key get get
config-key ls list keys
config-key rm rm
config-key set {} set to value
crash info show crash dump metadata
crash json_report Crashes in the last hours
crash ls Show saved crash dumps
crash post Add a crash dump (use -i )
crash prune Remove crashes older than days
crash rm Remove a saved crash
crash self-test Run a self test of the crash module
crash stat Summarize recorded crashes
dashboard create-self-signed-cert Create self signed certificate
dashboard get-enable-browsable-api Get the ENABLE_BROWSABLE_API option value
dashboard get-rest-requests-timeout Get the REST_REQUESTS_TIMEOUT option value
dashboard get-rgw-api-access-key Get the RGW_API_ACCESS_KEY option value
dashboard get-rgw-api-admin-resource Get the RGW_API_ADMIN_RESOURCE option value
dashboard get-rgw-api-host Get the RGW_API_HOST option value
dashboard get-rgw-api-port Get the RGW_API_PORT option value
dashboard get-rgw-api-scheme Get the RGW_API_SCHEME option value
dashboard get-rgw-api-secret-key Get the RGW_API_SECRET_KEY option value
dashboard get-rgw-api-ssl-verify Get the RGW_API_SSL_VERIFY option value
dashboard get-rgw-api-user-id Get the RGW_API_USER_ID option value
dashboard set-enable-browsable-api Set the ENABLE_BROWSABLE_API option value
dashboard set-login-credentials Set the login credentials
dashboard set-rest-requests-timeout Set the REST_REQUESTS_TIMEOUT option value
dashboard set-rgw-api-access-key Set the RGW_API_ACCESS_KEY option value
dashboard set-rgw-api-admin-resource Set the RGW_API_ADMIN_RESOURCE option value
dashboard set-rgw-api-host Set the RGW_API_HOST option value
dashboard set-rgw-api-port Set the RGW_API_PORT option value
dashboard set-rgw-api-scheme Set the RGW_API_SCHEME option value
dashboard set-rgw-api-secret-key Set the RGW_API_SECRET_KEY option value
dashboard set-rgw-api-ssl-verify Set the RGW_API_SSL_VERIFY option value
dashboard set-rgw-api-user-id Set the RGW_API_USER_ID option value
dashboard set-session-expire Set the session expire timeout
df {detail} show cluster free space stats
features report of connected features
fs add_data_pool add data pool
fs authorize [...] add auth for to access file system
based on following directory and permissions pairs
fs dump {} dump all CephFS status, optionally from epoch
fs flag set enable_multiple {--yes-i-really-mean-it} Set a global CephFS flag
fs get get info about one filesystem
fs ls list filesystems
fs new {--force} {--allow- make new filesystem using named pools and
dangerous-metadata-overlay}
fs reset {--yes-i-really-mean-it} disaster recovery only: reset to a single-MDS map
fs rm {--yes-i-really-mean-it} disable the named filesystem
fs rm_data_pool remove data pool
fs set max_mds|max_file_size|allow_new_snaps| set fs parameter to
inline_data|cluster_down|allow_dirfrags|balancer|standby_
count_wanted|session_timeout|session_autoclose|down|
joinable|min_compat_client {}
fs set-default set the default to the named filesystem
fs status {} Show the status of a CephFS filesystem
fsid show cluster FSID/UUID
health {detail} show cluster health
heap dump|start_profiler|stop_profiler|release|stats show heap usage info (available only if compiled with
tcmalloc)
hello {} Prints hello world to mgr.x.log
influx config-set Set a configuration value
influx config-show Show current configuration
influx self-test debug the module
influx send Force sending data to Influx
injectargs [...] inject config arguments into monitor
iostat Get IO rates
iostat self-test Run a self test the iostat module
log [...] log supplied text to the monitor log
log last {} {debug|info|sec|warn|error} {*| print last few lines of the cluster log
cluster|audit}
mds compat rm_compat remove compatible feature
mds compat rm_incompat remove incompatible feature
mds compat show show mds compatibility settings
mds count-metadata count MDSs by metadata field property
mds fail Mark MDS failed: trigger a failover if a standby is
available
mds metadata {} fetch metadata for mds
mds repaired mark a damaged MDS rank as no longer damaged
mds rm remove nonactive mds
mds rmfailed {} remove failed mds
mds set_state set mds state of to
mds stat show MDS status
mds versions check running versions of MDSs
mgr count-metadata count ceph-mgr daemons by metadata field property
mgr dump {} dump the latest MgrMap
mgr fail treat the named manager daemon as failed
mgr metadata {} dump metadata for all daemons or a specific daemon
mgr module disable disable mgr module
mgr module enable {--force} enable mgr module
mgr module ls list active mgr modules
mgr self-test background start Activate a background workload (one of command_spam, throw_
exception)
mgr self-test background stop Stop background workload if any is running
mgr self-test config get Peek at a configuration value
mgr self-test config get_localized Peek at a configuration value (localized variant)
mgr self-test remote Test inter-module calls
mgr self-test run Run mgr python interface tests
mgr services list service endpoints provided by mgr modules
mgr versions check running versions of ceph-mgr daemons
mon add add new monitor named at
mon compact cause compaction of monitor's leveldb/rocksdb storage
mon count-metadata count mons by metadata field property
mon dump {} dump formatted monmap (optionally from epoch)
mon feature ls {--with-value} list available mon map features to be set/unset
mon feature set {--yes-i-really-mean-it} set provided feature on mon map
mon getmap {} get monmap
mon metadata {} fetch metadata for mon
mon rm remove monitor named
mon scrub scrub the monitor stores
mon stat summarize monitor status
mon sync force {--yes-i-really-mean-it} {--i-know-what-i- force sync of and clear monitor store
am-doing}
mon versions check running versions of monitors
mon_status report status of monitors
node ls {all|osd|mon|mds|mgr} list all nodes in cluster [type]
osd add-nodown [...] mark osd(s) [...] as nodown, or use to
mark all osds as nodown
osd add-noin [...] mark osd(s) [...] as noin, or use to
mark all osds as noin
osd add-noout [...] mark osd(s) [...] as noout, or use to
mark all osds as noout
osd add-noup [...] mark osd(s) [...] as noup, or use to
mark all osds as noup
osd blacklist add|rm {} add (optionally until seconds from now) or remove
from blacklist
osd blacklist clear clear all blacklisted clients
osd blacklist ls show blacklisted clients
osd blocked-by print histogram of which OSDs are blocking their peers
osd count-metadata count OSDs by metadata field property
osd crush add add or update crushmap position and weight for with
[...] and location
osd crush add-bucket { [...]} add no-parent (probably root) crush bucket of type
to location
osd crush class ls list all crush device classes
osd crush class ls-osd list all osds belonging to the specific
osd crush class rename rename crush device class to
osd crush create-or-move at/
]> [...] to location
osd crush dump dump crush map
osd crush get-tunable straw_calc_version get crush tunable
osd crush link [...] link existing entry for under location
osd crush ls list items beneath a node in the CRUSH tree
osd crush move [...] move existing entry for to location
osd crush rename-bucket rename bucket to
osd crush reweight change 's weight to in crush map
osd crush reweight-all recalculate the weights for the tree to ensure they sum
correctly
osd crush reweight-subtree change all leaf items beneath to in crush
map
osd crush rm {} remove from crush map (everywhere, or just at
)
osd crush rm-device-class [...] remove class of the osd(s) [...],or use
to remove all.
osd crush rule create-erasure {} create crush rule for erasure coded pool created
with