Single master, 3 etcd members, 3 compute nodes, and 1 infra (base services) node:
10.10.11.10 openshift1 CentOS Linux release 7.2
10.10.11.11 openshift2 CentOS Linux release 7.2
10.10.11.12 openshift3 CentOS Linux release 7.2
10.10.11.13 openshift4 CentOS Linux release 7.2
10.10.11.14 openshift5 CentOS Linux release 7.2
1. The operating system language must not be Chinese; use an English locale.
2. The infra node automatically runs the router, so do not place the lb on an infra node, or port 80 will conflict.
3. If the web console port is changed to 443, the lb cannot share a node with it, or the ports will conflict.
4. The disk must be formatted as XFS to support the overlay2 storage driver.
5. Enable SELinux.
6. Make sure every host has Internet access.
7. If the lb and a master share a node, port 8443 is already in use; do not place the lb on a master node.
8. If etcd runs on a master node it starts as a static pod; on a regular node it starts as a system service. During my install, one etcd member was on the master and another on a node, and etcd failed to start. Place all etcd members either on the masters or on the nodes, never mixed.
9. I installed monitoring with NFS persistent storage in the same run; this requires java-1.8.0-openjdk-headless and python-passlib to be installed beforehand. The official docs do not mention this, and the install fails without them.
10. Docker is started with --selinux-enabled=false, but SELinux must still be enabled at the OS level or the install fails.
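A few of these notes (locale, XFS, SELinux) can be spot-checked up front. A minimal sketch; the expect helper is an illustrative assumption, not part of any OpenShift tooling:

```shell
#!/bin/bash
# Spot-check notes 1, 4 and 5 on the current host.

# expect LABEL ACTUAL WANT -- report whether ACTUAL exactly matches WANT
expect() {
  if [ "$2" = "$3" ]; then
    echo "OK $1"
  else
    echo "FAIL $1 (got '$2', want '$3')"
  fi
}

expect "SELinux enforcing" "$(getenforce 2>/dev/null)" "Enforcing"                  # note 5
expect "/var is XFS"       "$(df --output=fstype /var | tail -1 | tr -d ' ')" "xfs" # note 4
expect "English locale"    "${LANG%%.*}" "en_US"                                    # note 1
```

Run it with ansible on every host before starting the playbooks.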
Master system requirements
Minimum OS: Fedora 21, CentOS 7.4, RHEL 7.4, or RHEL Atomic Host 7.4.5.
At least 4 vCPUs.
At least 16 GB RAM.
At least 40 GB of disk space on the filesystem containing /var/.
At least 1 GB of disk space on the filesystem containing /usr/local/bin/.
At least 1 GB of disk space on the filesystem containing the system temporary directory.
If etcd and a master share a node, at least 4 cores are required; 2 cores will not work.
Node system requirements
Minimum OS: Fedora 21, CentOS 7.4, RHEL 7.4, or RHEL Atomic Host 7.4.5.
NetworkManager 1.0 or newer.
At least 1 vCPU.
At least 8 GB RAM.
At least 15 GB of disk space on the filesystem containing /var/.
At least 1 GB of disk space on the filesystem containing /usr/local/bin/.
At least 1 GB of disk space on the filesystem containing the system temporary directory.
At least 15 GB of additional unallocated space for Docker storage.
etcd system requirements
At least 20 GB of disk space for etcd data.
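The minimums above can be checked quickly per host; a sketch using the master thresholds (MemTotal reads slightly below nominal, so 15 GB is used as the 16 GB bound):

```shell
#!/bin/bash
# Compare this host against the master minimums listed above.

min() { [ "$1" -ge "$2" ]; }   # numeric "at least" check

cpus=$(nproc)
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
var_gb=$(df -BG --output=avail /var | tail -1 | tr -dc '0-9')

min "$cpus"   4  || echo "need >= 4 vCPU, have $cpus"
min "$mem_gb" 15 || echo "need >= 16 GB RAM, have about $mem_gb GB"
min "$var_gb" 40 || echo "need >= 40 GB free on /var, have $var_gb GB"
```

Adjust the thresholds for the node and etcd profiles accordingly.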
yum install wget -y
cd /etc/yum.repos.d/ && mkdir repo_bak && mv *.repo repo_bak/
wget http://mirrors.aliyun.com/repo/Centos-7.repo
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
yum clean all && yum makecache
yum -y install epel-release
yum install python-pip -y
pip install --upgrade setuptools
wget https://releases.ansible.com/ansible/ansible-2.6.5.tar.gz
tar fxz ansible-2.6.5.tar.gz && cd ansible-2.6.5
python setup.py install
Configure /etc/hosts:
10.10.11.10 openshift1
10.10.11.11 openshift2
10.10.11.12 openshift3
10.10.11.13 openshift4
10.10.11.14 openshift5
#ssh-keygen -t rsa
#for h in openshift{1..5}; do ssh-copy-id -i .ssh/id_rsa.pub "$h"; done
#cat okd.hosts
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
#new_nodes
#lb
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_image_tag=v3.11
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true
# default selectors for router and registry services
# openshift_router_selector='node-role.kubernetes.io/infra=true'
# openshift_registry_selector='node-role.kubernetes.io/infra=true'
# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_disable_check=memory_availability,disk_availability,docker_image_availability
# use the multi-tenant network plugin
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
# for easier access, set the web console port to 443 and a hostname; the default port is 8443
#openshift_master_api_port=443
#openshift_master_console_port=443
openshift_hosted_router_replicas=1
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
openshift_master_cluster_hostname=paas.xxxxxx.test
openshift_master_cluster_public_hostname=paas.xxxxxx.test
openshift_master_default_subdomain=xxxxxx.test
# openshift_uninstall_docker=true
ansible_service_broker_install=false
openshift_enable_service_catalog=false
template_service_broker_install=false
openshift_logging_install_logging=false # do not install EFK
# registry passwd
#oreg_url=172.16.37.12:5000/openshift3/ose-${component}:${version}
#openshift_examples_modify_imagestreams=true
# docker config
#openshift_docker_additional_registries=172.16.37.12:5000,172.30.0.0/16
#openshift_docker_insecure_registries=172.16.37.12:5000,172.30.0.0/16
#openshift_docker_blocked_registries
openshift_docker_options="--selinux-enabled=false --insecure-registry reg.iqianjin.com --log-driver json-file --log-opt max-size=10M --log-opt max-file=3 --graph=/data/docker/dockerimg"
# host group for masters
[masters]
openshift1
[etcd]
openshift2
openshift3
openshift4
[nodes]
# the router is deployed to the infra node
openshift1 openshift_node_group_name='node-config-master'
openshift2 openshift_node_group_name='node-config-compute'
openshift3 openshift_node_group_name='node-config-compute'
openshift4 openshift_node_group_name='node-config-infra'
openshift5 openshift_node_group_name='node-config-compute'
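Before running any playbook against okd.hosts, it is worth confirming that ansible can reach every host. The list_hosts helper below is an illustrative assumption, not part of openshift-ansible:

```shell
# Print the unique hostnames in an ini-style inventory, skipping
# comments, blank lines and [section] headers.
list_hosts() {
  grep -Ev '^[[:space:]]*(#|\[|$)' "$1" | awk '{print $1}' | sort -u
}

# usage (on the admin host):
#   list_hosts okd.hosts                # the five openshift hosts above
#   ansible -i okd.hosts OSEv3 -m ping  # every host should answer "pong"
```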
#git clone https://github.com/openshift/openshift-ansible && cd ~/openshift-ansible && git checkout release-3.11
#cat /etc/yum.repos.d/CentOS-OpenShift-Origin311.repo
[centos-openshift-origin311]
name=CentOS OpenShift Origin
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
[centos-openshift-origin311-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
[centos-openshift-origin311-debuginfo]
name=CentOS OpenShift Origin DebugInfo
baseurl=http://debuginfo.centos.org/centos/7/paas/x86_64/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
[centos-openshift-origin311-source]
name=CentOS OpenShift Origin Source
baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
#ansible openshift -m copy -a "src=CentOS-OpenShift-Origin311.repo dest=/etc/yum.repos.d/"
#ansible openshift -a "yum install NetworkManager -y && systemctl start NetworkManager && systemctl enable NetworkManager "
#sed -i 's/SELINUX=disabled/SELINUX=enforcing/' /etc/selinux/config
#ansible-playbook -i okd.hosts openshift-ansible/playbooks/prerequisites.yml
docker.io/openshift/origin-node
docker.io/openshift/origin-control-plane
docker.io/openshift/origin-pod
docker.io/openshift/origin-web-console
docker.io/cockpit/kubernetes
docker.io/openshift/cluster-monitoring-operator
docker.io/openshift/prometheus-config-reloader
docker.io/openshift/prometheus-operator
docker.io/openshift/prometheus-alertmanager
docker.io/openshift/prometheus
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/etcd
docker.io/openshift/configmap-reload
docker.io/openshift/origin-metrics-cassandra
docker.io/openshift/origin-metrics-hawkular-metrics
docker.io/openshift/origin-metrics-heapster
docker.io/openshift/origin-logging-fluentd
docker.io/openshift/origin-logging-kibana5
docker.io/openshift/origin-logging-elasticsearch5
docker.io/openshift/origin-logging-fluentd
docker.io/openshift/origin-logging-kibana
docker.io/openshift/origin-logging-auth-proxy
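Pulling these images on every node ahead of time avoids pull timeouts during deploy_cluster.yml. A sketch, assuming the list above has been saved one image per line to images.txt (the file name is an assumption):

```shell
# pull_images FILE [CMD] -- run CMD (default: docker pull) against every
# non-empty line of FILE.
pull_images() {
  local file=$1; shift
  local cmd=${*:-docker pull}
  while IFS= read -r img; do
    [ -n "$img" ] && $cmd "$img"
  done < "$file"
}

# on each node:
# pull_images images.txt
```

For tagged entries such as prometheus-node-exporter:v0.16.0, the tag is kept as written; untagged entries pull :latest.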
CNI network plugin configuration file: on CentOS 7.2 it was not generated, leaving the nodes NotReady; copying the file over manually fixes it.
cat resolv.j2
#nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
#Generated by NetworkManager
search cluster.local
nameserver {{ inventory_hostname }}
Set each host's nameserver to its own IP so that the host resolves through the local dnsmasq service. Once generated, the file is automatically copied to /etc/origin/node/resolv.conf, which is injected as /etc/resolv.conf into every pod at startup.
Removing the 80-openshift-network.conf file breaks networking, so comment the removal out for now.
Edit roles/openshift_sdn/files/sdn.yaml
and comment out the line: # rm -Rf /etc/cni/net.d/80-openshift-network.conf
#cat /etc/cni/net.d/80-openshift-network.conf
{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}
cat origin-dns.conf
no-resolv
domain-needed
no-negcache
max-cache-ttl=1
enable-dbus
dns-forward-max=10000
cache-size=10000
bind-dynamic
min-port=1024
except-interface=lo
#End of config
cat origin-upstream-dns.conf
# internal self-hosted DNS server
server=10.10.100.100
Copy the files that were not generated:
ansible openshift -m template -a "src=resolv.j2 dest=/etc/resolv.conf"
ansible openshift -a "/bin/cp /etc/resolv.conf /etc/origin/node/resolv.conf"
ansible openshift -m copy -a "src=origin-upstream-dns.conf dest=/etc/dnsmasq.d/"
ansible openshift -m copy -a "src=origin-dns.conf dest=/etc/dnsmasq.d/"
ansible openshift -m copy -a "src=80-openshift-network.conf dest=/etc/cni/net.d/80-openshift-network.conf"
#ansible-playbook -i okd.hosts openshift-ansible/playbooks/deploy_cluster.yml
The cluster install has succeeded when every pod's STATUS is Running:
#oc get pod --all-namespaces
#ansible-playbook -i okd.hosts openshift-ansible/playbooks/adhoc/uninstall.yml
Create the first user and password:
#htpasswd -cb /etc/origin/master/htpasswd admin admin
Add another user and password:
#htpasswd -b /etc/origin/master/htpasswd dev dev
Log in as the cluster administrator:
#oc login -u system:admin
Grant the user the cluster-admin role:
#oc adm policy add-cluster-role-to-user cluster-admin admin
Then open the console and log in:
https://paas.xxxxxx.test:8443
References:
https://docs.okd.io/3.11/install/configuring_inventory_file.html
https://docs.okd.io/3.11/install/example_inventories.html
https://blog.51cto.com/7308310/2171091