OpenShift Origin (OKD) v3.11 Installation

I. Environment Preparation

I will install OKD v3.11 on Azure.

Hostname   IP address   Role      Public IP      User
master-1   10.0.1.4     master    23.98.41.24x   okd
master-2   10.0.1.5     master    -              okd
node-1     10.0.1.6     compute   -              okd
  1. Create DNS A records pointing *.okd.goxx.top and okd.goxx.top to 23.98.41.24x, so that subdomains can later be bound to the router to expose services (example records below).
  2. In the Azure console, place all three hosts in the same network security group [master-1-nsg] and temporarily add a rule allowing access on all ports from any source IP.
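  • As a minimal sketch, the records could look like this in BIND zone-file syntax (the zone name and the redacted IP are simply the placeholders from the table above; any DNS provider's console works just as well):
okd.goxx.top.    IN A    23.98.41.24x
*.okd.goxx.top.  IN A    23.98.41.24x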

II. Host Preparation

  1. The hosts run CentOS 7.4:
[okd@master-1 ~]$ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
  2. Copy the local public key to master-1; this host has a public IP, which makes it convenient to log in to the servers:
ssh-copy-id -i .ssh/id_rsa.pub [email protected]
  3. Generate a key pair on master-1 and distribute the public key to the other hosts; this node will act as the ansible control node:
ssh-keygen -t rsa # leave the passphrase empty
ssh-copy-id -i .ssh/id_rsa.pub [email protected]
ssh-copy-id -i .ssh/id_rsa.pub [email protected]
  4. Configure /etc/hosts on every node:
# append the following to /etc/hosts
10.0.1.4 master-1
10.0.1.5 master-2
10.0.1.6 node-1
  5. Configure passwordless sudo on every host; the installation later runs as the okd user and needs sudo privileges (a quick verification is shown after this list):
# add to /etc/sudoers
okd  ALL=(ALL) NOPASSWD:ALL
  6. Distribute the hosts file (done with ansible later, once it is configured).
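  • To confirm that passwordless sudo works for the okd user before running the installer, a small check along these lines can be run on each host (sudo -n fails instead of prompting when a password would be required):
[okd@master-1 ~]$ sudo -n true && echo 'passwordless sudo OK'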

III. Installation Source Preparation

  • Since 3.11 was released only recently, official rpm packages are not yet available, so the packages from http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/ are used instead.

1. Install ansible

  • An earlier installation attempt failed; the official repository documents a required ansible version, so the officially recommended 2.6 release is installed:
rpm -ivh https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/ansible-2.6.5-1.el7.noarch.rpm
  • Other dependencies such as docker will be installed automatically by the openshift-ansible installer.
[okd@master-1 ~]$ ansible --version
ansible 2.6.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/okd/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug  4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

2. Write the ansible inventory file: /home/okd/okd.hosts

[okd@master-1 ~]$ cat okd.hosts
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=okd
openshift_deployment_type=origin
openshift_image_tag=v3.11
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true


# default selectors for router and registry services
# openshift_router_selector='node-role.kubernetes.io/infra=true'
# openshift_registry_selector='node-role.kubernetes.io/infra=true'

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_disable_check=memory_availability,disk_availability,docker_image_availability

os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant

# added 2018-11-05 14:40:00
# for easier access, set the web console/API port to 443 and configure the domain names
openshift_master_api_port=443
openshift_master_console_port=443
openshift_hosted_router_replicas=1
openshift_hosted_registry_replicas=1
openshift_master_cluster_hostname=master.okd.gois.top
openshift_master_cluster_public_hostname=master.okd.gois.top
openshift_master_default_subdomain=okd.gois.top

openshift_master_cluster_method=native
openshift_public_ip=23.98.41.245
# false
ansible_service_broker_install=false
openshift_enable_service_catalog=false
template_service_broker_install=false
openshift_logging_install_logging=false

# registry passwd
#oreg_url=172.16.37.12:5000/openshift3/ose-${component}:${version}
#openshift_examples_modify_imagestreams=true

# docker config
#openshift_docker_additional_registries=172.16.37.12:5000,172.30.0.0/16
#openshift_docker_insecure_registries=172.16.37.12:5000,172.30.0.0/16
#openshift_docker_blocked_registries
openshift_docker_options="--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"

# openshift_cluster_monitoring_operator_install=false
# openshift_metrics_install_metrics=true
# openshift_enable_unsupported_configurations=True
#openshift_logging_es_nodeselector='node-role.kubernetes.io/infra: "true"'
#openshift_logging_kibana_nodeselector='node-role.kubernetes.io/infra: "true"'
# host group for masters

[masters]
master-[1:2]

[etcd]
master-[1:2]

[nodes]
master-[1:2] openshift_node_group_name='node-config-master'
node-1 openshift_node_group_name='node-config-compute'
node-1 openshift_node_group_name='node-config-infra'

3. Distribute the hosts file

  • This also serves as a check that ansible is working and that sudo privileges are set up correctly:
ansible -i okd.hosts  nodes --user=root -m copy -a 'src=/etc/hosts dest=/etc/hosts'
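  • Two optional ad-hoc checks make the verification more explicit: the ping module tests SSH connectivity with the inventory's okd user, and running whoami with --become confirms that passwordless sudo escalates to root:
ansible -i okd.hosts nodes -m ping
ansible -i okd.hosts nodes --become -m command -a 'whoami'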

4. Distribute the CentOS-OpenShift yum repository:

[okd@master-1 ~]$ cat /etc/yum.repos.d/CentOS-OpenShift-Origin311.repo
[centos-openshift-origin311]
name=CentOS OpenShift Origin
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-debuginfo]
name=CentOS OpenShift Origin DebugInfo
baseurl=http://debuginfo.centos.org/centos/7/paas/x86_64/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-source]
name=CentOS OpenShift Origin Source
baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
[okd@master-1 ~]$ ansible -i okd.hosts  nodes --user=root -m copy -a \
'src=/etc/yum.repos.d/CentOS-OpenShift-Origin311.repo  \
dest=/etc/yum.repos.d/CentOS-OpenShift-Origin311.repo'
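  • Optionally, confirm that the new repository is visible on every node; an ad-hoc command along these lines should list centos-openshift-origin311 without errors:
ansible -i okd.hosts nodes --user=root -m command -a 'yum -q repolist centos-openshift-origin311'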

5. Download the openshift-ansible project (git must be installed):

[okd@master-1 ~]$ sudo yum install -y git
[okd@master-1 ~]$ cd ~ && git clone https://github.com/openshift/openshift-ansible
# switch to the release-3.11 branch
[okd@master-1 ~]$ cd ~/openshift-ansible && git checkout release-3.11
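  • A quick way to double-check that the clone is on the intended branch (rev-parse works even with the older git shipped in CentOS 7):
[okd@master-1 openshift-ansible]$ git rev-parse --abbrev-ref HEAD
release-3.11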

IV. Install OKD

1. Run the pre-install checks:

[okd@master-1 ~]$ ls
okd.hosts  openshift-ansible
[okd@master-1 ~]$ ansible-playbook -i okd.hosts openshift-ansible/playbooks/prerequisites.yml
  • Once the checks pass without errors, run the installer.

2. Run the installer

[okd@master-1 ~]$ ansible-playbook -i okd.hosts openshift-ansible/playbooks/deploy_cluster.yml
  • When the installation finishes, output similar to the following is shown.

  • The result of each phase is clearly listed; if any step failed, find the cause and run the installation again:

PLAY RECAP ***************************************************************************************************************************************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0
master-1                   : ok=599  changed=198  unreachable=0    failed=0
master-2                   : ok=263  changed=72   unreachable=0    failed=0
node-1                     : ok=110  changed=16   unreachable=0    failed=0


INSTALLER STATUS *********************************************************************************************************************************************************************************
Initialization               : Complete (0:01:55)
Health Check                 : Complete (0:00:14)
Node Bootstrap Preparation   : Complete (0:04:18)
etcd Install                 : Complete (0:01:56)
Master Install               : Complete (0:13:01)
Master Additional Install    : Complete (0:02:18)
Node Join                    : Complete (0:01:16)
Hosted Install               : Complete (0:02:50)
Cluster Monitoring Operator  : Complete (0:01:21)
Web Console Install          : Complete (0:01:08)
Console Install              : Complete (0:00:52)
metrics-server Install       : Complete (0:00:03)

3. Verify the installation

[okd@master-1 ~]$ oc get nodes
NAME       STATUS    ROLES     AGE       VERSION
master-1   Ready     master    19m       v1.11.0+d4cacc0
master-2   Ready     master    19m       v1.11.0+d4cacc0
node-1     Ready     infra     10m       v1.11.0+d4cacc0
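  • A quick look at the pods and routes in the default project is a further sanity check that the router and registry came up (standard oc commands; exact output will vary):
[okd@master-1 ~]$ oc get pods -n default
[okd@master-1 ~]$ oc get routes -n default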

4. Create an admin account; a simple htpasswd file is used for logging in to the web console

[okd@master-1 ~]$ sudo htpasswd -cb /etc/origin/master/htpasswd admin 123456
Adding password for user admin
[okd@master-1 ~]$ oc adm policy add-cluster-role-to-user cluster-admin admin
Warning: User 'admin' not found
cluster role "cluster-admin" added: "admin"

This article is for reference only. The installation is complete at this point; corrections and improvements will be added later.

Error log:

TASK [openshift_control_plane : Wait for control plane pods to appear] failed:

Root cause analysis: the master-api container failed to start because the etcd service could not be reached. In etcd cluster mode, the installed etcd servers were not reachable via the expected IP;
add the following to the ansible hosts file:
[etcd]
etcd.xx.xx openshift_ip=x.x.x.x

Then rerun the deployment playbook.
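  • A rough troubleshooting sketch for this class of failure (generic commands, not taken from the installer output): inspect the api container on the master and test raw TCP connectivity to the etcd client port 2379 from each master:
[okd@master-1 ~]$ sudo docker ps -a | grep master-api
[okd@master-1 ~]$ sudo docker logs <api-container-id>
[okd@master-1 ~]$ timeout 3 bash -c '</dev/tcp/10.0.1.5/2379' && echo 'etcd port reachable'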
