A reference for deploying Ceph with ceph-ansible

Ansible is an open-source automation and configuration management tool built on OpenSSH. It can be used to configure systems, deploy software, and orchestrate more advanced IT tasks. This article explains how to deploy a Ceph cluster quickly and simply with ceph-ansible.

1. Edit the hosts file and set up passwordless SSH login

vim /etc/hosts
----------------------------------------------
192.168.48.132	A1
192.168.48.133	A2
192.168.48.134	A3

ssh-keygen -t rsa
ssh-copy-id root@A1
ssh-copy-id root@A2
ssh-copy-id root@A3
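
Once the keys are copied, it is worth confirming that passwordless login really works before going further. A minimal check, assuming the three hosts defined above:

for h in A1 A2 A3; do
    ssh root@$h hostname		# should print each hostname without asking for a password
done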

2. Get the ceph-ansible code

1) Clone from GitHub.

git clone https://github.com/ceph/ceph-ansible.git 

2) Download an archive of the chosen branch.

wget -q -O ceph-ansible-stable-4.0.zip https://codeload.github.com/ceph/ceph-ansible/zip/stable-4.0
unzip ceph-ansible-stable-4.0.zip
mv ceph-ansible-stable-4.0 ceph-ansible

3. Install ceph-ansible

Note: Ansible must be version 2.8, otherwise the Ceph installation fails with the following error:

TASK [ceph-validate : fail on unsupported ansible version] ************************************************************************************************************************************
Wednesday 19 February 2020  16:27:46 +0800 (0:00:00.113)       0:00:39.545 **** 
fatal: [ceph1]: FAILED! => changed=false 
  msg: Ansible version must be 2.8!

cd ceph-ansible
git branch -r							#list remote branches; stable-4.0 supports the Ceph Nautilus release
git fetch origin stable-4.0				#fetch the stable-4.0 branch
git checkout stable-4.0					#switch to the stable-4.0 branch
#The three git steps above only apply to a clone from git; if you downloaded the branch archive, go straight to the next step
pip install -r requirements.txt
Note: if "-bash: pip: command not found" appears, it can be fixed as follows:
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install python-pip
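
After installing the requirements, it is worth confirming that the installed Ansible version matches what stable-4.0 expects. A quick check (the exact 2.8.x minor version may vary):

ansible --version | head -n1			#should report ansible 2.8.x
pip show ansible | grep Version			#same check via pip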

4. Install Ceph

1) Create a file named nodes in the ceph-ansible directory and add the following content (a connectivity check follows the inventory):

[mons]				#nodes where the mon daemons will be installed
A1
A2
A3
[mgrs]				#nodes where the mgr daemons will be installed
A1
A2
A3
[clients]			#nodes where the client packages will be installed
A1
A2
A3
[osds]				#nodes where the OSDs will be installed
A1
A2
A3
[grafana-server]	#nodes where grafana-server will be installed
A1
A2
A3
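
Before editing any configuration, it helps to verify that Ansible can reach every host in the inventory. A minimal connectivity check, assuming the nodes file above:

ansible -i nodes all -m ping			#every host should answer with "pong"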

2) Copy the sample configuration files and edit them

cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
cp site.yml.sample site.yml
vim group_vars/all.yml
---------------------------------------------
---
ceph_origin: repository
ceph_repository: community
ceph_mirror: https://mirrors.163.com/ceph/
ceph_stable_release: nautilus
public_network: "192.168.48.0/24"
cluster_network: "192.168.48.0/24"
mon_host: 192.168.48.132,192.168.48.133,192.168.48.134
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"
ceph_stable_redhat_distro: el7
journal_size: 1024
fsid: 8cc6f4ef-0f46-4459-ae64-ce14ad743a42
monitor_interface: ens33
devices:
 - '/dev/sdb'
 - '/dev/sdc'
osd_scenario: collocated

rbd_cache: "true"
rbd_cache_writethrough_until_flush: "true"
rbd_concurrent_management_ops: 20
rbd_client_directories: true
osd_objectstore: bluestore
osd_auto_discovery: true
osd_auto_discovery_exclude: "dm-*|loop*|md*|rbd*"
mds_max_mds: 1
radosgw_frontend_type: beast
radosgw_thread_pool_size: 512
radosgw_interface: "{{ monitor_interface }}"
dashboard_enabled: True
dashboard_protocol: http
dashboard_port: 8443
dashboard_admin_user: admin
dashboard_admin_password: admin
grafana_admin_user: admin
grafana_admin_password: admin
grafana_uid: 472
grafana_datasource: Dashboard
grafana_dashboard_version: nautilus
grafana_port: 3000
grafana_allow_embedding: True
grafana_crt: ''
grafana_key: ''
grafana_container_image: "grafana/grafana:5.2.4"
grafana_container_cpu_period: 100000
grafana_container_cpu_cores: 2
grafana_container_memory: 4
grafana_dashboards_path: "/etc/grafana/dashboards/ceph-dashboard"
grafana_dashboard_files:
  - ceph-cluster.json
  - cephfs-overview.json
  - host-details.json
  - hosts-overview.json
  - osd-device-details.json
  - osds-overview.json
  - pool-detail.json
  - pool-overview.json
  - radosgw-detail.json
  - radosgw-overview.json
  - rbd-overview.json
grafana_plugins:
  - vonage-status-panel
  - grafana-piechart-panel
prometheus_container_image: "prom/prometheus:v2.7.2"
prometheus_container_cpu_period: 100000
prometheus_container_cpu_cores: 2
prometheus_container_memory: 4
prometheus_data_dir: /var/lib/prometheus
prometheus_conf_dir: /etc/prometheus
prometheus_user_id: '65534'  
prometheus_port: 9092

#The settings below are written into ceph.conf; adjust them as needed
ceph_conf_overrides:
    global:
      rbd_default_features: 7
      auth cluster required: cephx
      auth service required: cephx
      auth client required: cephx
      osd journal size: 2048
      osd pool default size: 3
      osd pool default min size: 1
      mon_pg_warn_max_per_osd: 1024
      osd pool default pg num: 128
      osd pool default pgp num: 128
      max open files: 131072
      osd_deep_scrub_randomize_ratio: 0.01

    mgr:
      mgr modules: dashboard

    mon:
      mon_allow_pool_delete: true
      mon_data_avail_warn: 10
    client:
      rbd_cache: true
      rbd_cache_size: 335544320
      rbd_cache_max_dirty: 134217728
      rbd_cache_max_dirty_age: 10

    osd:
      osd mkfs type: xfs
      ms_bind_port_max: 7100
      osd_client_message_size_cap: 2147483648
      osd_crush_update_on_start: true
      osd_deep_scrub_stride: 131072
      osd_disk_threads: 4
      osd_map_cache_bl_size: 128
      osd_max_object_name_len: 256
      osd_max_object_namespace_len: 64
      osd_max_write_size: 1024
      osd_op_threads: 8

      osd_recovery_op_priority: 1
      osd_recovery_max_active: 1
      osd_recovery_max_single_start: 1
      osd_recovery_max_chunk: 1048576
      osd_recovery_threads: 1
      osd_max_backfills: 4
      osd_scrub_begin_hour: 23
      osd_scrub_end_hour: 7

      bluestore block create: true
      bluestore block db size: 73014444032
      bluestore block db create: true
      bluestore block wal size: 107374182400
      bluestore block wal create: true
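
The fsid and monitor_interface values above are only examples and must match your own environment. One way to generate and verify them, assuming the 192.168.48.0/24 network used in this article:

uuidgen									#generate a fresh fsid for the cluster
ip -o -4 addr show | grep 192.168.48	#confirm which interface (e.g. ens33) carries the public network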

      
vim group_vars/osds.yml
----------------------------------------------------------
---
devices:
  - /dev/sdb						#data disks attached to the VM; edit to match your environment
  - /dev/sdc
osd_scenario: collocated
osd_objectstore: bluestore
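
The device paths must point at empty, unpartitioned disks on every OSD node, and names can differ between machines, so check them first. A quick way to list candidate disks (run on each OSD node):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT		#sdb/sdc should show no partitions and no mountpoints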

site.yml

Note: in this file only the host groups need adjusting; keep them consistent with the roles defined in the nodes file above and comment out everything else.
vim site.yml
-----------------------------------------------------------
---
# Defines deployment design and assigns role to server groups

- hosts:
  - mons
  - osds
#  - mdss
#  - rgws
#  - nfss
#  - rbdmirrors
  - clients
  - mgrs
#  - iscsigws
#  - iscsi-gws # for backward compatibility only!
  - grafana-server
#  - rgwloadbalancers
........
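
Before the real run, the playbook and inventory can be sanity-checked without changing anything on the nodes. Two harmless checks using standard ansible-playbook options:

ansible-playbook -i nodes site.yml --syntax-check	#validate the playbook syntax
ansible-playbook -i nodes site.yml --list-hosts		#show which hosts each play will touch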

3) Run the installation

ansible-playbook -i nodes site.yml
Note: if errors occur, adjust the configuration files according to the error messages.
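
If the playbook finishes without fatal errors, the cluster state can be checked from any mon node; the dashboard should also be reachable on the port configured in all.yml. A minimal verification (exact output depends on your environment):

ceph -s				#overall cluster health, should eventually report HEALTH_OK
ceph osd tree		#the OSDs on A1/A2/A3 should be up and in
#dashboard (per the all.yml settings above): http://A1:8443, login admin/admin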
