Ceph REST API Documentation

The Ceph REST API is a set of HTTP endpoints that ships with Ceph. The official documentation for it is still sparse, but once ceph-rest-api is configured, its default landing page lists every available endpoint. That list is reproduced below for reference. In each entry, parameters are shown as query-string arguments, {...} marks an optional parameter, and [...] means the preceding argument may be repeated.
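Since every entry in the list below is just a path plus query parameters under a common API root, building request URLs is mechanical. A minimal sketch in Python; the base URL assumes ceph-rest-api's usual defaults (port 5000, prefix /api/v0.1) and should be adjusted to your deployment:

```python
from urllib.parse import urlencode

# Assumed default bind address of ceph-rest-api; configurable via the
# [client.restapi] section of ceph.conf in a real deployment.
BASE = "http://localhost:5000/api/v0.1"

def endpoint(path, **params):
    """Compose the URL for one entry of the API list below."""
    url = f"{BASE}/{path}"
    if params:
        url += "?" + urlencode(params)
    return url

# GET endpoints only read cluster state; PUT endpoints change it.
print(endpoint("health"))
print(endpoint("osd/pool/get", pool="rbd", var="pg_num"))
```

The same helper covers optional parameters: simply omit the keyword arguments that the listing shows in braces.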

Possible commands (each entry lists the endpoint, the HTTP method, and a description):
auth/add?entity=entity(<name>)&caps={caps(<string>) [...]} PUT add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
auth/caps?entity=entity(<name>)&caps=caps(<string>) [...] PUT update caps for <name> from caps specified in the command
auth/del?entity=entity(<name>) PUT delete all caps for <name>
auth/export?entity={entity(<name>)} GET write keyring for requested entity, or master keyring if none given
auth/get?entity=entity(<name>) GET write keyring file with requested key
auth/get-key?entity=entity(<name>) GET display requested key
auth/get-or-create?entity=entity(<name>)&caps={caps(<string>) [...]} PUT add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth/get-or-create-key?entity=entity(<name>)&caps={caps(<string>) [...]} PUT get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
auth/import PUT auth import: read keyring file from -i <file>
auth/list GET list authentication state
auth/print-key?entity=entity(<name>) GET display requested key
auth/print_key?entity=entity(<name>) GET display requested key
tell/<osdid-or-pgid>/bench?count={count(<int>)}&size={size(<int>)} PUT OSD benchmark: write <count> <size>-byte objects (default 1G size 4MB). Results in log.
tell/<osdid-or-pgid>/cluster_log?level=level(error|warning|info|debug)&message=message(<string>) [...] PUT log a message to the cluster log
compact PUT cause compaction of monitor's leveldb storage
config-key/del?key=key(<string>) PUT delete <key>
config-key/exists?key=key(<string>) GET check for <key>'s existence
config-key/get?key=key(<string>) GET get <key>
config-key/list GET list keys
config-key/put?key=key(<string>)&val={val(<string>)} PUT put <key>, value <val>
tell/<osdid-or-pgid>/cpu_profiler?arg=arg(status|flush) PUT run cpu profiling on daemon
tell/<osdid-or-pgid>/debug/kick_recovery_wq?delay=delay(<int[0-]>) PUT set osd_recovery_delay_start to <delay>
tell/<osdid-or-pgid>/debug_dump_missing?filename=filename(<outfilename>) GET dump missing objects to a named file
df?detail={detail} GET show cluster free space stats
tell/<osdid-or-pgid>/dump_pg_recovery_stats GET dump pg recovery statistics
tell/<osdid-or-pgid>/flush_pg_stats PUT flush pg stats
fs/ls GET list filesystems
fs/new?fs_name=fs_name(<fs_name>)&metadata=metadata(<poolname>)&data=data(<poolname>) PUT make new filesystem using named pools <metadata> and <data>
fs/reset?fs_name=fs_name(<fs_name>)&sure={--yes-i-really-mean-it} PUT disaster recovery only: reset to a single-MDS map
fs/rm?fs_name=fs_name(<fs_name>)&sure={--yes-i-really-mean-it} PUT disable the named filesystem
fsid GET show cluster FSID/UUID
health?detail={detail} GET show cluster health
heap?heapcmd=heapcmd(dump|start_profiler|stop_profiler|release|stats) PUT show heap usage info (available only if compiled with tcmalloc)
tell/<osdid-or-pgid>/heap?heapcmd=heapcmd(dump|start_profiler|stop_profiler|release|stats) PUT show heap usage info (available only if compiled with tcmalloc)
injectargs?injected_args=injected_args(<string>) [...] PUT inject config arguments into monitor
tell/<osdid-or-pgid>/injectargs?injected_args=injected_args(<string>) [...] PUT inject configuration arguments into running OSD
tell/<osdid-or-pgid>/list_missing?offset={offset(<json>)} GET list missing objects on this pg, perhaps starting at an offset given in JSON
log?logtext=logtext(<string>) [...] PUT log supplied text to the monitor log
tell/<osdid-or-pgid>/mark_unfound_lost?mulcmd=mulcmd(revert|delete) PUT mark all unfound objects in this pg as lost, either removing or reverting to a prior version if one is available
mds/add_data_pool?pool=pool(<poolname>) PUT add data pool <pool>
mds/cluster_down PUT take MDS cluster down
mds/cluster_up PUT bring MDS cluster up
mds/compat/rm_compat?feature=feature(<int[0-]>) PUT remove compatible feature
mds/compat/rm_incompat?feature=feature(<int[0-]>) PUT remove incompatible feature
mds/compat/show GET show mds compatibility settings
mds/deactivate?who=who(<string>) PUT stop mds
mds/dump?epoch={epoch(<int[0-]>)} GET dump info, optionally from epoch
mds/fail?who=who(<string>) PUT force mds to status failed
mds/getmap?epoch={epoch(<int[0-]>)} GET get MDS map, optionally from epoch
mds/newfs?metadata=metadata(<int[0-]>)&data=data(<int[0-]>)&sure={--yes-i-really-mean-it} PUT make new filesystem using pools <metadata> and <data>
mds/remove_data_pool?pool=pool(<poolname>) PUT remove data pool <pool>
mds/rm?gid=gid(<int[0-]>)&who=who(<name>) PUT remove nonactive mds
mds/rmfailed?who=who(<int[0-]>) PUT remove failed mds
mds/set?var=var(max_mds|max_file_size|allow_new_snaps|inline_data)&val=val(<string>)&confirm={confirm(<string>)} PUT set mds parameter <var> to <val>
mds/set_max_mds?maxmds=maxmds(<int[0-]>) PUT set max MDS index
mds/set_state?gid=gid(<int[0-]>)&state=state(<int[0-20]>) PUT set mds state of <gid> to <state>
mds/setmap?epoch=epoch(<int[0-]>) PUT set mds map; must supply correct epoch number
mds/stat GET show MDS status
mds/stop?who=who(<string>) PUT stop mds
mds/tell?who=who(<string>)&args=args(<string>) [...] PUT send command to particular mds
mon/add?name=name(<string>)&addr=addr(<IPaddr[:port]>) PUT add new monitor named <name> at <addr>
mon/dump?epoch={epoch(<int[0-]>)} GET dump formatted monmap (optionally from epoch)
mon/getmap?epoch={epoch(<int[0-]>)} GET get monmap
mon/remove?name=name(<string>) PUT remove monitor named <name>
mon/stat GET summarize monitor status
mon_status GET report status of monitors
osd/blacklist?blacklistop=blacklistop(add|rm)&addr=addr(<EntityAddr>)&expire={expire(<float[0.0-]>)} PUT add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd/blacklist/ls GET show blacklisted clients
osd/blocked-by GET print histogram of which OSDs are blocking their peers
osd/create?uuid={uuid(<uuid>)} PUT create new osd (with optional UUID)
osd/crush/add?id=id(<osdname>)&weight=weight(<float[0.0-]>)&args=args(<string>) [...] PUT add or update crushmap position and weight for <name> with <weight> and location <args>
osd/crush/add-bucket?name=name(<string>)&type=type(<string>) PUT add no-parent (probably root) crush bucket <name> of type <type>
osd/crush/create-or-move?id=id(<osdname>)&weight=weight(<float[0.0-]>)&args=args(<string>) [...] PUT create entry or move existing entry for <name> <weight> at/to location <args>
osd/crush/dump GET dump crush map
osd/crush/get-tunable?tunable=straw_calc_version PUT get crush tunable <tunable>
osd/crush/link?name=name(<string>)&args=args(<string>) [...] PUT link existing entry for <name> under location <args>
osd/crush/move?name=name(<string>)&args=args(<string>) [...] PUT move existing entry for <name> to location <args>
osd/crush/remove?name=name(<string>)&ancestor={ancestor(<string>)} PUT remove <name> from crush map (everywhere, or just at <ancestor>)
osd/crush/rename-bucket?srcname=srcname(<string>)&dstname=dstname(<string>) PUT rename bucket <srcname> to <dstname>
osd/crush/reweight?name=name(<string>)&weight=weight(<float[0.0-]>) PUT change <name>'s weight to <weight> in crush map
osd/crush/reweight-all PUT recalculate the weights for the tree to ensure they sum correctly
osd/crush/reweight-subtree?name=name(<string>)&weight=weight(<float[0.0-]>) PUT change all leaf items beneath <name> to <weight> in crush map
osd/crush/rm?name=name(<string>)&ancestor={ancestor(<string>)} PUT remove <name> from crush map (everywhere, or just at <ancestor>)
osd/crush/rule/create-erasure?name=name(<string>)&profile={profile(<string>)} PUT create crush rule <name> for erasure coded pool created with <profile> (default default)
osd/crush/rule/create-simple?name=name(<string>)&root=root(<string>)&type=type(<string>)&mode={mode(firstn|indep)} PUT create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
osd/crush/rule/dump?name={name(<string>)} GET dump crush rule <name> (default all)
osd/crush/rule/list GET list crush rules
osd/crush/rule/ls GET list crush rules
osd/crush/rule/rm?name=name(<string>) PUT remove crush rule <name>
osd/crush/set PUT set crush map from input file
osd/crush/set?id=id(<osdname>)&weight=weight(<float[0.0-]>)&args=args(<string>) [...] PUT update crushmap position and weight for <name> to <weight> with location <args>
osd/crush/set-tunable?tunable=straw_calc_version&value=value(<int>) PUT set crush tunable <tunable> to <value>
osd/crush/show-tunables GET show current crush tunables
osd/crush/tree GET dump crush buckets and items in a tree view
osd/crush/tunables?profile=profile(legacy|argonaut|bobtail|firefly|hammer|optimal|default) PUT set crush tunables values to <profile>
osd/crush/unlink?name=name(<string>)&ancestor={ancestor(<string>)} PUT unlink <name> from crush map (everywhere, or just at <ancestor>)
osd/deep-scrub?who=who(<string>) PUT initiate deep scrub on osd <who>
osd/df?output_method={output_method(plain|tree)} GET show OSD utilization
osd/down?ids=ids(<string>) [...] PUT set osd(s) <id> [<id>...] down
osd/dump?epoch={epoch(<int[0-]>)} GET print summary of OSD map
osd/erasure-code-profile/get?name=name(<string>) GET get erasure code profile <name>
osd/erasure-code-profile/ls GET list all erasure code profiles
osd/erasure-code-profile/rm?name=name(<string>) PUT remove erasure code profile <name>
osd/erasure-code-profile/set?name=name(<string>)&profile={profile(<string>) [...]} PUT create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
osd/find?id=id(<int[0-]>) GET find osd <id> in the CRUSH map and show its location
osd/getcrushmap?epoch={epoch(<int[0-]>)} GET get CRUSH map
osd/getmap?epoch={epoch(<int[0-]>)} GET get OSD map
osd/getmaxosd GET show largest OSD id
osd/in?ids=ids(<string>) [...] PUT set osd(s) <id> [<id>...] in
osd/lost?id=id(<int[0-]>)&sure={--yes-i-really-mean-it} PUT mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
osd/ls?epoch={epoch(<int[0-]>)} GET show all OSD ids
osd/lspools?auid={auid(<int>)} GET list pools
osd/map?pool=pool(<poolname>)&object=object(<objectname>) GET find pg for <object> in <pool>
osd/metadata?id=id(<int[0-]>) GET fetch metadata for osd <id>
osd/out?ids=ids(<string>) [...] PUT set osd(s) <id> [<id>...] out
osd/pause PUT pause osd
osd/perf GET print dump of OSD perf summary stats
osd/pg-temp?pgid=pgid(<pgid>)&id={id(<string>) [...]} PUT set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
osd/pool/create?pool=pool(<poolname>)&pg_num=pg_num(<int[0-]>)&pgp_num={pgp_num(<int[0-]>)}&pool_type={pool_type(replicated|erasure)}&erasure_code_profile={erasure_code_profile(<string>)}&ruleset={ruleset(<string>)}&expected_num_objects={expected_num_objects(<int>)} PUT create pool
osd/pool/delete?pool=pool(<poolname>)&pool2={pool2(<poolname>)}&sure={--yes-i-really-really-mean-it} PUT delete pool
osd/pool/get?pool=pool(<poolname>)&var=var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|write_fadvise_dontneed) GET get pool parameter <var>
osd/pool/get-quota?pool=pool(<poolname>) GET obtain object or byte limits for pool
osd/pool/ls?detail={detail} GET list pools
osd/pool/mksnap?pool=pool(<poolname>)&snap=snap(<snap>) PUT make snapshot <snap> in <pool>
osd/pool/rename?srcpool=srcpool(<poolname>)&destpool=destpool(<poolname>) PUT rename <srcpool> to <destpool>
osd/pool/rmsnap?pool=pool(<poolname>)&snap=snap(<snap>) PUT remove snapshot <snap> from <pool>
osd/pool/set?pool=pool(<poolname>)&var=var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote|write_fadvise_dontneed)&val=val(<string>)&force={--yes-i-really-mean-it} PUT set pool parameter <var> to <val>
osd/pool/set-quota?pool=pool(<poolname>)&field=field(max_objects|max_bytes)&val=val(<string>) PUT set object or byte limit on pool
osd/pool/stats?name={name(<string>)} GET obtain stats from all pools, or from specified pool
osd/primary-affinity?id=id(<osdname>)&weight=weight(<float[0.0-1.0]>) PUT adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
osd/primary-temp?pgid=pgid(<pgid>)&id=id(<string>) PUT set primary_temp mapping pgid:<id>|-1 (developers only)
osd/repair?who=who(<string>) PUT initiate repair on osd <who>
osd/reweight?id=id(<int[0-]>)&weight=weight(<float[0.0-]>) PUT reweight osd to 0.0 < <weight> < 1.0
osd/reweight-by-pg?oload=oload(<int[100-]>)&pools={pools(<poolname>) [...]} PUT reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd/reweight-by-utilization?oload={oload(<int[100-]>)} PUT reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd/rm?ids=ids(<string>) [...] PUT remove osd(s) <id> [<id>...] in
osd/scrub?who=who(<string>) PUT initiate scrub on osd <who>
osd/set?key=key(full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent) PUT set <key>
osd/setcrushmap PUT set crush map from input file
osd/setmaxosd?newmax=newmax(<int[0-]>) PUT set new maximum osd value
osd/stat GET print summary of OSD map
osd/thrash?num_epochs=num_epochs(<int[0-]>) PUT thrash OSDs for <num_epochs>
osd/tier/add?pool=pool(<poolname>)&tierpool=tierpool(<poolname>)&force_nonempty={--force-nonempty} PUT add the tier <tierpool> (the second one) to base pool <pool> (the first one)
osd/tier/add-cache?pool=pool(<poolname>)&tierpool=tierpool(<poolname>)&size=size(<int[0-]>) PUT add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
osd/tier/cache-mode?pool=pool(<poolname>)&mode=mode(none|writeback|forward|readonly|readforward|readproxy) PUT specify the caching mode for cache tier <pool>
osd/tier/remove?pool=pool(<poolname>)&tierpool=tierpool(<poolname>) PUT remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
osd/tier/remove-overlay?pool=pool(<poolname>) PUT remove the overlay pool for base pool <pool>
osd/tier/set-overlay?pool=pool(<poolname>)&overlaypool=overlaypool(<poolname>) PUT set the overlay pool for base pool <pool> to be <overlaypool>
osd/tree?epoch={epoch(<int[0-]>)} GET print OSD tree
osd/unpause PUT unpause osd
osd/unset?key=key(full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent) PUT unset <key>
pg/debug?debugop=debugop(unfound_objects_exist|degraded_pgs_exist) GET show debug info about pgs
pg/deep-scrub?pgid=pgid(<pgid>) PUT start deep-scrub on <pgid>
pg/dump?dumpcontents={dumpcontents(all|summary|sum|delta|pools|osds|pgs|pgs_brief) [...]} GET show human-readable versions of pg map (only 'all' valid with plain)
pg/dump_json?dumpcontents={dumpcontents(all|summary|sum|pools|osds|pgs) [...]} GET show human-readable version of pg map in json only
pg/dump_pools_json GET show pg pools info in json only
pg/dump_stuck?stuckops={stuckops(inactive|unclean|stale|undersized|degraded) [...]}&threshold={threshold(<int>)} GET show information about stuck pgs
pg/force_create_pg?pgid=pgid(<pgid>) PUT force creation of pg <pgid>
pg/getmap GET get binary pg map to -o/stdout
pg/ls?pool={pool(<int>)}&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [...]} GET list pg with specific pool, osd, state
pg/ls-by-osd?osd=osd(<osdname>)&pool={pool(<int>)}&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [...]} GET list pg on osd <osd>
pg/ls-by-pool?poolstr=poolstr(<string>)&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [...]} GET list pg with pool = [poolname | poolid]
pg/ls-by-primary?osd=osd(<osdname>)&pool={pool(<int>)}&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [...]} GET list pg with primary = <osd>
pg/map?pgid=pgid(<pgid>) GET show mapping of pg to osds
pg/repair?pgid=pgid(<pgid>) PUT start repair on <pgid>
pg/scrub?pgid=pgid(<pgid>) PUT start scrub on <pgid>
pg/send_pg_creates PUT trigger pg creates to be issued
pg/set_full_ratio?ratio=ratio(<float[0.0-1.0]>) PUT set ratio at which pgs are considered full
pg/set_nearfull_ratio?ratio=ratio(<float[0.0-1.0]>) PUT set ratio at which pgs are considered nearly full
pg/stat GET show placement group status
tell/<osdid-or-pgid>/query GET show details of a specific pg
quorum?quorumcmd=quorumcmd(enter|exit) PUT enter or exit quorum
quorum_status GET report status of monitor quorum
report?tags={tags(<string>) [...]} GET report full status of cluster, optional title tag strings
tell/<osdid-or-pgid>/reset_pg_recovery_stats PUT reset pg recovery statistics
scrub PUT scrub the monitor stores
status GET show cluster status
sync/force?validate1={--yes-i-really-mean-it}&validate2={--i-know-what-i-am-doing} PUT force sync of and clear monitor store
tell?target=target(<name>)&args=args(<string>) [...] PUT send a command to a specific daemon
tell/<osdid-or-pgid>/version GET report version of OSD
version GET show mon daemon version
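By default the server answers these endpoints in plain text; sending an Accept: application/json header asks for structured output instead. A minimal stdlib-only sketch of issuing one of the GET calls above (the address again assumes the daemon's defaults and is only an illustration):

```python
import json
from urllib.request import Request, urlopen

BASE = "http://localhost:5000/api/v0.1"  # assumed default ceph-rest-api address

def make_request(path, method="GET"):
    """Build a request that asks ceph-rest-api for JSON instead of plain text."""
    return Request(f"{BASE}/{path}",
                   headers={"Accept": "application/json"},
                   method=method)

def cluster_health():
    # Needs a reachable ceph-rest-api; PUT endpoints from the list above
    # would be issued the same way with method="PUT".
    with urlopen(make_request("health")) as resp:
        return json.load(resp)
```

Treat the GET endpoints as safe to poll from monitoring scripts; the PUT endpoints mutate cluster state and deserve the same caution as the corresponding `ceph` CLI commands.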
