Prepare the virtual machines in ESXi. For deployment, refer to the official documentation: https://kubesphere.io/zh/
| OS | IP address | Role |
|---|---|---|
| CentOS 7.5 | 192.168.31.21 | master, etcd |
| CentOS 7.5 | 192.168.31.22 | master, etcd |
| CentOS 7.5 | 192.168.31.23 | master, etcd |
| CentOS 7.5 | 192.168.31.24 | worker |
| CentOS 7.5 | 192.168.31.25 | worker |
| CentOS 7.5 | 192.168.31.26 | worker |
sudo yum install open-vm-tools
This installs the open-vm-tools package via yum.
sudo reboot
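After the reboot, you can optionally confirm that the guest tools daemon is running (vmtoolsd is the service provided by open-vm-tools):
systemctl status vmtoolsd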
Open the interface configuration file with vim (the addresses below are example values; use each node's planned static IP from the table above):
sudo vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static" # changed from dhcp to static
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="74ca9b68-1475-4b02-9750-f48b871504df"
DEVICE="ens33"
ONBOOT="yes" # bring this interface up at boot
IPADDR=192.168.0.180 # static IP address
GATEWAY=192.168.0.1 # default gateway
NETMASK=255.255.255.0 # subnet mask
DNS1=192.168.0.1 # primary DNS server
DNS2=223.6.6.6 # secondary DNS server
Restart the network service for the changes to take effect:
sudo service network restart
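To confirm the static address is in effect (interface name and addresses follow the example above; substitute your own):
ip addr show ens33
ping -c 3 192.168.0.1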
# Enable firewalld at boot
systemctl enable firewalld.service
# Disable firewalld at boot
systemctl disable firewalld.service
# Start firewalld
systemctl start firewalld
# Stop firewalld
systemctl stop firewalld
# Check firewalld status
systemctl status firewalld
SELinux can be disabled by editing /etc/selinux/config and setting the SELINUX parameter to disabled, as follows:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
SELinux is a security module that provides mandatory access control, restricting which system resources processes and users may touch, and thereby improves system security and reliability. In some scenarios, however, disabling it is necessary, for example when the software being deployed does not work correctly with it enforced.
Note that disabling SELinux lowers the system's protection, so weigh the decision carefully. If it must be disabled, assess the security risks beforehand and compensate with other measures, such as the firewall and restricted user privileges.
In most cases, disable SELinux only when necessary, and back up the system first so it can be restored later if needed. It can be disabled either by editing /etc/selinux/config and setting SELINUX to disabled, or temporarily with setenforce 0.
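For convenience, the same change can be made non-interactively; a minimal sketch, assuming SELINUX is currently set to enforcing:
# Switch to permissive mode for the current boot (takes effect immediately)
setenforce 0
# Persist the setting so it survives reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config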
Install chrony on every node for time synchronization and set the timezone to Asia/Shanghai:
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
timedatectl set-ntp true
timedatectl set-timezone Asia/Shanghai
chronyc activity -v
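Optionally verify that time sources are reachable and the clock is synchronized:
chronyc sources -v
timedatectl status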
Set the hostname of each node (run the matching command on the corresponding machine, or apply them remotely as sketched below):
sudo hostnamectl set-hostname ksmaster21
sudo hostnamectl set-hostname ksmaster22
sudo hostnamectl set-hostname ksmaster23
sudo hostnamectl set-hostname ksnode24
sudo hostnamectl set-hostname ksnode25
sudo hostnamectl set-hostname ksnode26
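If passwordless root SSH to every node is already configured, the hostnames can be applied from one machine; a rough sketch (the IP-to-name pairs mirror the table at the top of this page):
for entry in 192.168.31.21:ksmaster21 192.168.31.22:ksmaster22 192.168.31.23:ksmaster23 192.168.31.24:ksnode24 192.168.31.25:ksnode25 192.168.31.26:ksnode26; do
  ip=${entry%%:*}; name=${entry##*:}
  ssh root@"$ip" "hostnamectl set-hostname $name"
done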
Make sure /etc/hosts on every node contains the following entries:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.21 ksmaster21
192.168.31.22 ksmaster22
192.168.31.23 ksmaster23
192.168.31.24 ksnode24
192.168.31.25 ksnode25
192.168.31.26 ksnode26
Verify that every node can be reached by hostname:
ping ksmaster21
ping ksmaster22
ping ksmaster23
ping ksnode24
ping ksnode25
ping ksnode26
KubeKey installs Kubernetes and KubeSphere together. The dependencies that must be present differ between Kubernetes versions. Use the list below to check whether you need to install them on the nodes in advance.
| Dependency | Kubernetes ≥ 1.18 | Kubernetes < 1.18 |
|---|---|---|
| socat | Required | Optional but recommended |
| conntrack | Required | Optional but recommended |
| ebtables | Optional but recommended | Optional but recommended |
| ipset | Optional but recommended | Optional but recommended |
Install all of them with a single command:
yum -y install socat conntrack ebtables ipset
Since a Synology NAS provides NFS storage, install the NFS client packages:
yum install -y nfs-utils
The values file for the nfs-client addon (referenced below as /opt/ks/v3.3/nfs-client.yaml):
nfs:
  server: "nas.yxym.com"   # the Synology NFS server address; replace it with your own
  path: "/volume5/ks"      # replace with the directory exported on your NFS server
storageClass:
  defaultClass: true
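Before installing, it is worth checking from one of the nodes that the export is actually visible (showmount ships with nfs-utils; the hostname and path are the example values above):
showmount -e nas.yxym.com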
# Environment setting: use the CN download region
export KKZONE=cn
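If the kk binary is not present yet, it can be fetched with the official download script; the version below is only an example, pick the KubeKey release you need (with KKZONE=cn exported, the download uses the CN mirror):
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk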
# Generate the configuration file (written to config-sample.yaml by default)
./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.4.1
Edit the generated config-sample.yaml; the full configuration used for this cluster follows.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ksmaster21, address: 192.168.31.21, internalAddress: 192.168.31.21, user: root, password: "<password>"}
  - {name: ksmaster22, address: 192.168.31.22, internalAddress: 192.168.31.22, user: root, password: "<password>"}
  - {name: ksmaster23, address: 192.168.31.23, internalAddress: 192.168.31.23, user: root, password: "<password>"}
  - {name: ksnode24, address: 192.168.31.24, internalAddress: 192.168.31.24, user: root, password: "<password>"}
  - {name: ksnode25, address: 192.168.31.25, internalAddress: 192.168.31.25, user: root, password: "<password>"}
  - {name: ksnode26, address: 192.168.31.26, internalAddress: 192.168.31.26, user: root, password: "<password>"}
  roleGroups:
    etcd:
    - ksmaster21
    - ksmaster22
    - ksmaster23
    control-plane:
    - ksmaster21
    - ksmaster22
    - ksmaster23
    worker:
    - ksnode24
    - ksnode25
    - ksnode26
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.10.0.0/18
    kubeServiceCIDR: 10.20.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: ["https://0j62md6t.mirror.aliyuncs.com","http://hub-mirror.c.163.com"]
    insecureRegistries: []
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /opt/ks/v3.3/nfs-client.yaml
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
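If any of the components above need to be switched on after the cluster is already running, the standard KubeSphere approach is to edit the same ClusterConfiguration object in place and let ks-installer reconcile it:
kubectl -n kubesphere-system edit clusterconfiguration ks-installer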
# Environment setting (re-export if this is a new shell session)
export KKZONE=cn
# Install the cluster
./kk create cluster -f config-sample.yaml
# Uninstall the cluster
./kk delete cluster -f config-sample.yaml
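After ./kk create cluster completes, installation progress and cluster health can be checked on a control-plane node with standard kubectl commands:
# Follow the ks-installer logs until the final "Welcome to KubeSphere" summary appears
kubectl logs -n kubesphere-system deploy/ks-installer -f
# Check that all nodes and pods are healthy
kubectl get nodes -o wide
kubectl get pods -A
The web console is then reachable on NodePort 30880 (as configured above) with the default admin account documented by KubeSphere.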