The OS is CentOS 7.6; the OpenShift version is 3.11.
1. First, create the following virtual machines:
10.2.3.12 master1
10.2.3.13 master2
10.2.3.14 master3
10.2.3.15 node1
10.2.3.16 node2
10.2.3.17 nfs
10.2.3.18 lb
Three masters, two nodes, one nfs host (shared storage), and one lb host (an HAProxy load balancer).
In this example, the OpenShift 3.11 cluster is deployed from master1 using ansible.
2. Run the following on all machines:
yum install -y epel-release && \
yum install -y centos-release-openshift-origin.noarch && \
yum install -y docker && \
yum install -y openshift-ansible origin-node origin-clients conntrack-tools ntp httpd-tools \
    cockpit-ws cockpit-system cockpit-bridge wget git net-tools bind-utils yum-utils \
    iptables-services bridge-utils bash-completion kexec-tools sos psacct deltarpm atomic \
    skopeo nfs-utils glusterfs* java patch ceph-common ansible cifs-utils samba-common \
    samba-client atomic-openshift origin-3.11* python-ipaddress iproute dbus-python PyYAML \
    libsemanage-python python-docker origin-docker-excluder-3.11.0 iscsi-initiator-utils \
    device-mapper-multipath dnsmasq device-mapper-multipath-libs iscsi-initiator-utils-iscsiuio \
    libtomcrypt libtommath origin-excluder-3.11* python-keyczar-0.71c python2-crypto \
    etcd-3.3.11-2
3. On master1, edit the ansible inventory file:
pc# vim /etc/ansible/hosts
#Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
nfs
[OSEv3:vars]
ansible_ssh_user=root
#ansible_become=true
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_disable_check=docker_image_availability,docker_storage,memory_availability,disk_availability,package_availability
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb
openshift_master_cluster_public_hostname=lb
#os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
# Enable service catalog
openshift_enable_service_catalog=false
# Enable template service broker (requires service catalog to be enabled, above)
template_service_broker_install=false
#apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}
#
## enable ntp on masters to ensure proper failover
openshift_clock_enabled=true
# Configure one or more namespaces whose templates will be served by the TSB
openshift_template_service_broker_namespaces=['openshift']
openshift_docker_options="--selinux-enabled --insecure-registry 172.30.0.0/16 --log-driver json-file --log-opt max-size=10M --log-opt max-file=3"
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/var/export
openshift_hosted_etcd_storage_volume_name=asb
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
# host group for masters
[masters]
master1
master2
master3
[lb]
lb
# host group for etcd
[etcd]
master1
master2
master3
# host group for nodes, includes region info
[nodes]
master[1:3] openshift_node_group_name='node-config-master-infra'
node1 openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_node_group_name='node-config-compute'
node2 openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_node_group_name='node-config-compute'
[nfs]
nfs
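Before running anything against the cluster, it is worth confirming that ansible parses this inventory the way you expect; these are standard ansible commands:

```shell
# List the hosts ansible resolves for each group in /etc/ansible/hosts.
ansible masters --list-hosts   # expect: master1 master2 master3
ansible nodes --list-hosts     # expect: master1-3 plus node1 and node2
ansible all --list-hosts       # expect: all seven hosts

# Dump the parsed inventory as JSON for a closer look (ansible >= 2.4).
ansible-inventory --list
```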
4. From master1, test connectivity with ansible ping:
pc# ansible all -m ping
[WARNING]: Found both group and host with same name: nfs
[WARNING]: Found both group and host with same name: lb
lb | SUCCESS => {
"changed": false,
"ping": "pong"
}
master2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
master3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
master1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
nfs | SUCCESS => {
"changed": false,
"ping": "pong"
}
node1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
5. Restart docker on all hosts:
pc# ansible all -m shell -a "systemctl restart docker"
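A restart alone does not survive a reboot; the ansible systemd module can restart and enable docker in one pass. A sketch using the stock module:

```shell
# Restart docker everywhere and enable it at boot.
ansible all -m systemd -a "name=docker state=restarted enabled=yes"

# Confirm the daemon is active on every host.
ansible all -m shell -a "systemctl is-active docker"
```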
6. From master1, pull the required docker images on all nodes in bulk via ansible:
pc# DOCKER_IMAGES_ARRAY=( "docker.io/openshift/origin-node:v3.11" "docker.io/openshift/origin-control-plane:v3.11" "docker.io/openshift/origin-deployer:v3.11.0" "docker.io/openshift/origin-haproxy-router:v3.11" "docker.io/openshift/origin-pod:v3.11.0" "docker.io/openshift/origin-web-console:v3.11" "docker.io/openshift/origin-docker-registry:v3.11" "docker.io/openshift/origin-metrics-server:v3.11" "docker.io/openshift/origin-console:v3.11" "docker.io/openshift/origin-metrics-heapster:v3.11" "docker.io/openshift/origin-metrics-hawkular-metrics:v3.11" "docker.io/openshift/origin-metrics-schema-installer:v3.11" "docker.io/openshift/origin-metrics-cassandra:v3.11" "docker.io/cockpit/kubernetes" "quay.io/coreos/cluster-monitoring-operator:v0.1.1" "quay.io/coreos/prometheus-config-reloader:v0.23.2" "quay.io/coreos/prometheus-operator:v0.23.2" "docker.io/openshift/prometheus-alertmanager:v0.15.2" "docker.io/openshift/prometheus-node-exporter:v0.16.0" "docker.io/openshift/prometheus:v2.3.2" "docker.io/grafana/grafana:5.2.1" "quay.io/coreos/kube-rbac-proxy:v0.3.1" "quay.io/coreos/etcd:v3.2.22" "quay.io/coreos/kube-state-metrics:v1.3.1" "docker.io/openshift/oauth-proxy:v1.1.0" "quay.io/coreos/configmap-reload:v0.0.1" "quay.io/coreos/flannel:v0.10.0-amd64" )
pc# for image in ${DOCKER_IMAGES_ARRAY[@]}; do ansible all -m shell -a "docker pull ${image}"; done
If the docker images are not pulled in advance, the later ansible deployment of the OpenShift cluster will fail with many errors. Almost all of these errors come from missing dependencies: rpm packages (installed via yum) or docker images that the installation needs. It is therefore best to download everything beforehand.
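A rough way to confirm the pulls worked is to count images on each host; DOCKER_IMAGES_ARRAY above lists 27 images, so each host should report at least roughly that many:

```shell
# Count pulled images per host (expect roughly 27, the length of DOCKER_IMAGES_ARRAY).
ansible all -m shell -a "docker images -q | wc -l"

# Spot-check the most critical image on every host.
ansible all -m shell -a "docker images | grep origin-node"
```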
7. Run the ansible installation playbooks:
1 ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
2 ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
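If deploy_cluster.yml fails partway through, it can simply be rerun after fixing the reported problem; the playbooks are largely idempotent. openshift-ansible also ships an adhoc uninstall playbook for starting over from scratch (paths as installed by the openshift-ansible rpm):

```shell
# Rerun the deploy after fixing whatever the failed task reported.
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

# Or wipe the installation from all hosts and start clean.
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
```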
8. After the installation completes:
# oc get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready infra,master 3d v1.11.0+d4cacc0
master2 Ready infra,master 3d v1.11.0+d4cacc0
master3 Ready infra,master 3d v1.11.0+d4cacc0
node1 Ready compute 3d v1.11.0+d4cacc0
node2 Ready compute 3d v1.11.0+d4cacc0
Create a login user on the masters (the htpasswd file exists only there, so target the masters group rather than all hosts):
pc# ansible masters -m shell -a 'htpasswd -b /etc/origin/master/htpasswd admin admin'
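The htpasswd user created above can log in but has no cluster-wide rights; to manage the whole cluster from the console it can be granted the cluster-admin role (run as system:admin on any master; "admin" is the user just created):

```shell
# Grant the htpasswd user full cluster rights.
oc adm policy add-cluster-role-to-user cluster-admin admin

# Verify: the user should now appear in the cluster-admin bindings.
oc get clusterrolebindings | grep cluster-admin
```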
9. Log in to the OpenShift web console
1 On the machine used to log in to the OpenShift cluster, edit its hosts file; Windows example below
- Run the following in PowerShell as administrator
C:\WINDOWS\system32> cd C:\Windows\System32\drivers\etc
C:\Windows\System32\drivers\etc> notepad.exe .\hosts
- Add the following entry
10.2.3.18 lb
2 In a browser, open https://lb:8443. The OpenShift console appears; the username and password are both admin (created in step 8).
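The same credentials work from the command line, which is a convenient way to verify the lb endpoint independently of the browser (the cluster uses self-signed certificates, hence the TLS-skip flag):

```shell
# Log in through the load balancer with the admin user created in step 8.
oc login https://lb:8443 -u admin -p admin --insecure-skip-tls-verify=true

# Confirm the session and cluster access.
oc whoami      # prints the logged-in user name (admin)
oc get nodes
```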