Client.admin authentication error

This morning, running ceph -s on the Ceph jump host suddenly threw an error:

[root@dev-yum ~]# ceph -s
2016-08-29 10:05:31.853233 7fa5cf941700 0 librados: client.admin authentication error (1) Operation not permitted
Error connecting to cluster: PermissionError

Looks familiar: authentication is failing.
Let's check the client.admin keyring registered in the cluster (from a monitor node, cloud-11 here).

[root@cloud-11 ~]# ceph auth list
installed auth entries:

osd.0
key: AQBkBLxXaMvICBAAeGSzBd4YIDbldNNBHDFW+Q==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQBsBLxXgNHdIBAApF4kZ6OAmnW+t1fzuadyLA==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQBwBLxXGD4RJRAAp/gwhYgd+taL2wdE6YNivQ==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.3
key: AQB4BLxXkIyVLxAA4GQwamIrtOJTFHHCmsIqvA==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.4
key: AQCCBLxXML7FLRAAhlGP5H9jKOp2/0OVhC4kXA==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.5
key: AQCKBLxXkEtPORAAIGIbKOB5KBUFeQxzihl1uw==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQDXA7xX6CVqHhAATCGZw9UcKVriXMfRUSp5SA==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQDYA7xXMKMEBRAANKYab6iOByPDE1c82Y+pHg==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
key: AQDXA7xXmPhnLxAApnks3XUe/PNEPc6K65n88g==
caps: [mon] allow profile bootstrap-osd
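
ceph auth list dumps every entry in the cluster. To look at just the admin entry, a more targeted query can be run on any node that still authenticates against the cluster (cloud-11 here); a minimal sketch:

# print only the key and caps registered for client.admin
ceph auth get client.admin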

So the client.admin keyring is indeed registered in the cluster. If the local copy matches what Ceph has, things should work, so next check the keyring on the jump host.

[root@dev-yum ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQCJoPRW0N0tOBAA59XPMetKwQ5MizV0ZXot9Q==
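
If comparing long base64 keys by eye feels error-prone, ceph-authtool (shipped with the Ceph packages) can dump what the local keyring file holds; a small sketch:

# list the entities and keys stored in the local keyring file
ceph-authtool -l /etc/ceph/ceph.client.admin.keyring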

Now the problem is clear: the local key does not match the one in the cluster. Replacing the local keyring with the key stored in Ceph fixes it. My guess is that I reinstalled Ceph a couple of days ago and it regenerated the keys automatically; but the reinstall was done from this very jump host, so you would think it could have updated the local keyring as well.
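
One way to apply the fix, sketched under the assumption that root SSH access exists from a monitor node (cloud-11) to the jump host (dev-yum) and that the default keyring path is used:

# on the monitor node, export the cluster's current client.admin keyring
ceph auth get client.admin -o /tmp/ceph.client.admin.keyring
# copy it to the jump host, overwriting the stale local copy
scp /tmp/ceph.client.admin.keyring root@dev-yum:/etc/ceph/ceph.client.admin.keyring

With the keyring replaced, ceph -s on the jump host works again: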

[root@dev-yum ~]# ceph -s
    cluster f6d631fa-d640-487b-a2aa-dde57ef579b2
     health HEALTH_WARN noscrub,nodeep-scrub flag(s) set; mon.cloud-11 low disk space; clock skew detected on mon.cloud-12, mon.cloud-13
     monmap e1: 3 mons at {cloud-11=10.10.16.11:6789/0,cloud-12=10.10.16.12:6789/0,cloud-13=10.10.16.13:6789/0}, election epoch 6, quorum 0,1,2 cloud-11,cloud-12,cloud-13
     osdmap e5802: 6 osds: 6 up, 6 in
            flags noscrub,nodeep-scrub
      pgmap v23325: 192 pgs, 3 pools, 33 bytes data, 2 objects
            250 GB used, 3154 GB / 3418 GB avail
                 192 active+clean
You have new mail in /var/spool/mail/root
