Install Docker, kubeadm, and the other required tools first. The process is covered in the companion article: Setting Up a Machine from Scratch as a K8S Cluster Worker Node.
kubeadm can be driven entirely by command-line flags, but once the options pile up, a configuration file is easier to manage.
Export the default configuration and save it as kubeadm.yaml:
kubeadm config print init-defaults > kubeadm.yaml
Given my network environment, I chose registry.aliyuncs.com/google_containers as the image repository.
Change the corresponding line in kubeadm.yaml to the chosen repository:
...
imageRepository: registry.aliyuncs.com/google_containers
...
See the official kubeadm documentation for details.
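To confirm the chosen repository serves everything this configuration needs, kubeadm can list and pre-pull the required images (the pull step is the same one the init output below suggests):
kubeadm config images list --config kubeadm.yaml
sudo kubeadm config images pull --config kubeadm.yaml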
To run kube-proxy in IPVS mode, append the following to the end of kubeadm.yaml:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
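IPVS mode only takes effect if the kernel's IPVS modules can be loaded; otherwise kube-proxy falls back to iptables. A quick pre-check, assuming a Debian/Ubuntu-style host (package names and module list may differ on your distro and kernel):
# install the userspace tools (assumption: apt-based distro)
sudo apt install -y ipset ipvsadm
# load the IPVS modules kube-proxy relies on; -a loads several at once
sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
# confirm the modules are present
lsmod | grep ip_vs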
The complete configuration file, for reference:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: hyper-sia
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    imageRepository: ""
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.240.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
Now initialize the control plane with this file:
sudo kubeadm init --config kubeadm.yaml
Output:
W0317 09:37:19.620034 7827 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0317 09:37:19.620074 7827 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hyper-sia kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hyper-sia localhost] and IPs [192.168.3.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hyper-sia localhost] and IPs [192.168.3.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0317 09:37:22.004796 7827 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0317 09:37:22.005387 7827 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.001805 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node hyper-sia as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hyper-sia as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.3.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:9260f901a1702edbca5de31f8d19e4986a753827e12871a4529cc7ee6bb08c13
This isn't the end yet; you still need to run the following commands (as printed in the output above):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
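A quick sanity check that kubectl can now reach the API server:
kubectl cluster-info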
Check the node status:
kubectl get nodes
Output:
NAME        STATUS     ROLES    AGE     VERSION
hyper-sia   NotReady   master   2m45s   v1.17.3
The node is NotReady because no network plugin has been installed yet. I use Flannel here.
GitHub: https://github.com/coreos/flannel
The officially provided command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If your network access is restricted, this may fail, and you will need to switch the image source.
First, download the manifest locally:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Change every amd64 image reference (choose the variant matching your system architecture) from quay.io to quay-mirror.qiniu.com:
quay.io/coreos/flannel:v0.12.0-amd64
After the change:
quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
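If you prefer to script the substitution, a one-liner does it; note this rewrites every quay.io reference, not just the amd64 ones, which is harmless here (assuming GNU sed):
sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml
# verify the rewrite before applying
grep -n 'image:' kube-flannel.yml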
Then apply the manifest:
kubectl apply -f kube-flannel.yml
Finally, kubectl get nodes and kubectl get pods --all-namespaces show that the node is Ready and all the pods are up:
NAME        STATUS   ROLES    AGE   VERSION
hyper-sia   Ready    master   22m   v1.17.3

NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-4tdc2             1/1     Running   0          21m
kube-system   coredns-9d85f5447-gvtml             1/1     Running   0          21m
kube-system   etcd-hyper-sia                      1/1     Running   0          21m
kube-system   kube-apiserver-hyper-sia            1/1     Running   0          21m
kube-system   kube-controller-manager-hyper-sia   1/1     Running   0          21m
kube-system   kube-flannel-ds-amd64-bjnhn         1/1     Running   0          5m57s
kube-system   kube-proxy-l8r8j                    1/1     Running   0          21m
kube-system   kube-scheduler-hyper-sia            1/1     Running   0          21m
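Since kube-proxy was configured for IPVS, it is worth confirming the mode actually took effect; one way is to grep the kube-proxy pod's log (pod name taken from the listing above) or list the IPVS virtual servers:
kubectl logs -n kube-system kube-proxy-l8r8j | grep -i ipvs
sudo ipvsadm -Ln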
If you are building a single-node cluster, or want to schedule pods on the master node as well, remove the master taint:
kubectl taint nodes --all node-role.kubernetes.io/master-
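To restore the default behavior later, the taint can be re-added (node name taken from this cluster; substitute your own):
kubectl taint nodes hyper-sia node-role.kubernetes.io/master=:NoSchedule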
At this point, the cluster is ready to run applications.
Deploy nginx with 3 replicas:
kubectl run my-nginx --image=nginx --port=80 --expose -r3
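Note: the --replicas (-r) flag of kubectl run is deprecated in this kubectl version and was removed in kubectl 1.18; the same result can be had with separate, still-supported subcommands:
kubectl create deployment my-nginx --image=nginx
kubectl scale deployment my-nginx --replicas=3
kubectl expose deployment my-nginx --port=80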
Get the IP of the nginx Service:
kubectl get svc/my-nginx
Output:
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-nginx   ClusterIP   10.102.31.20   <none>        80/TCP    81s
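You can also check that the Service has pod endpoints behind it:
kubectl get endpoints my-nginx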
Connect with curl:
sia@hyper-sia:~$ curl 10.102.31.20
(curl returns the raw HTML of the default nginx welcome page; rendered, it reads:)
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
Verification passed.
To add worker nodes to this cluster, see the article: Setting Up a Machine from Scratch as a K8S Cluster Worker Node.