Kubernetes (k8s) consists of a Master and a set of Nodes; clients normally interact with the API Server process on the Master. Container images are served from a Registry; the most widely used public registries are Docker Hub, gcr.io, and quay.io.
This walkthrough uses four nodes: three as worker Nodes and one as the Master.
Preparation on every node:
Use the NTP service to keep the clocks of all nodes precisely synchronized.
Resolve each node's hostname, either through DNS or through the hosts file (this walkthrough uses the hosts file).
Stop the iptables or firewalld service on every node and make sure it is not started at boot.
Disable SELinux on every node.
Disable all swap devices on every node.
To use the ipvs proxy mode, also load the ipvs-related kernel modules on every node.
Start the chronyd system service and enable it to start at boot:
systemctl start chronyd.service
systemctl enable chronyd.service
// synchronizes against time servers on the Internet
Alternatively, configure a local time server by editing the /etc/chrony.conf configuration file.
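A minimal sketch of the relevant /etc/chrony.conf lines when pointing at a nearby public NTP source instead of the distribution defaults (the server address below is an example choice, not part of the original setup):

```
# /etc/chrony.conf -- replace the default "server" lines with a nearby
# NTP source; "iburst" speeds up the initial synchronization
server ntp.aliyun.com iburst
```

After editing, restart the service with systemctl restart chronyd.service.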
Hostname-resolution entries for the four nodes:
***.***.*** k8s-master
***.***.*** k8s-node01
***.***.*** k8s-node02
***.***.*** k8s-node03
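These entries live in /etc/hosts on every node. One way to append them, sketched below; 192.168.47.139 is the master address that appears later in the kubeadm join output, while the node addresses are hypothetical placeholders:

```shell
# Append the cluster name-resolution entries to /etc/hosts on each node.
# The node addresses are placeholders -- substitute your own.
cat >> /etc/hosts <<'EOF'
192.168.47.139 k8s-master
192.168.47.140 k8s-node01
192.168.47.141 k8s-node02
192.168.47.142 k8s-node03
EOF
```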
systemctl stop firewalld.service
systemctl stop iptables.service
systemctl disable firewalld.service
systemctl disable iptables.service
If SELinux is currently enabled, edit the /etc/sysconfig/selinux file to disable it, and temporarily set the running state to permissive:
vim /etc/sysconfig/selinux, setting: SELINUX=disabled
setenforce 0
[root@VM_0_2_centos ~]# free -m
total used free shared buff/cache available
Mem: 1872 454 206 0 1212 1234
Swap: 0 0 0
[root@VM_0_2_centos ~]# swap // check the swap devices
-bash: swap: command not found
[root@VM_0_2_centos ~]# swapoff -a // disable all swap devices
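Note that swapoff -a only disables swap until the next reboot. A sketch of making the change permanent as well, by commenting out the swap entries in /etc/fstab (the standard CentOS 7 location):

```shell
# Keep swap disabled across reboots: comment out any swap lines in
# /etc/fstab.  A .bak copy of the original file is kept next to it.
sed -r -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```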
This part can be skipped for now; ipvs is not used in this walkthrough.
# /etc/sysconfig/modules/ipvs.modules
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls "$ipvs_mods_dir" | grep -o "^[^.]*"); do
    if /sbin/modinfo -F filename "$mod" &> /dev/null; then
        /sbin/modprobe "$mod"
    fi
done
Make the file executable and manually load the kernel modules on the current system:
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
First fetch the docker-ce repository configuration file:
cd /etc/yum.repos.d/
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
Modify the /usr/lib/systemd/system/docker.service file as follows:
#sky
Environment="HTTPS_PROXY=http://www.ik8s.io:10070"
Environment="NO_PROXY=127.0.0.0/8,172.18.0.0/16"
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
#sky
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
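Editing the packaged unit file directly works, but the change is lost when the docker-ce package is upgraded. An alternative (not used in this walkthrough) keeps the same settings in a systemd drop-in instead:

```
# /etc/systemd/system/docker.service.d/proxy.conf
# (apply with: systemctl daemon-reload && systemctl restart docker)
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10070"
Environment="NO_PROXY=127.0.0.0/8,172.18.0.0/16"
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
```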
Then start the docker service (on every node):
systemctl daemon-reload
systemctl start docker
Adjust the kernel network parameters:
[root@VM_0_2_centos /]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Load the modified settings:
[root@VM_0_2_centos /]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
The notes below explain the steps above and can be skipped.
To fetch the kubernetes system-component images from the default k8s.gcr.io registry, the docker unit file (/usr/lib/systemd/system/docker.service) needs Environment variables defining a usable HTTPS_PROXY, in this format:
Environment="HTTPS_PROXY=PROTOCOL://HOST:PORT"
Environment="NO_PROXY=172.20.0.0/16,127.0.0.0/8"
In addition, since version 1.13 docker automatically sets the default policy of the iptables FORWARD chain to DROP, which can break the packet forwarding a Kubernetes cluster depends on. The FORWARD chain therefore needs to be reset to ACCEPT after the docker service starts: edit /usr/lib/systemd/system/docker.service and add the following line right after the "ExecStart=/usr/bin/dockerd" line:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
Check docker's configuration:
docker info
HTTPS Proxy: http://www.ik8s.io:10070
No Proxy: 127.0.0.0/8,172.18.0.0/16
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Next, check the bridge-related kernel parameters:
[root@VM_0_2_centos /]# sysctl -a | grep bridge
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0 # set to 1
net.bridge.bridge-nf-call-iptables = 0 # set to 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
[root@VM_0_2_centos /]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@VM_0_2_centos /]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Configure the Kubernetes yum repository (e.g. /etc/yum.repos.d/kubernetes.repo):
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@VM_0_2_centos /]# yum repolist
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Loading mirror speeds from cached hostfile
kubernetes | 1.4 kB 00:00:00
kubernetes/primary | 66 kB 00:00:00
kubernetes 481/481
repo id repo name status
docker-ce-stable/x86_64 Docker CE Stable - x86_64 70
epel/7/x86_64 EPEL for redhat/centos 7 - x86_64 13,228
extras/7/x86_64 Qcloud centos extras - x86_64 341
kubernetes Kubernetes Repository 481
os/7/x86_64 Qcloud centos os - x86_64 10,097
updates/7/x86_64 Qcloud centos updates - x86_64 1,787
repolist: 26,004
This shows the kubernetes yum repository is now available.
List the kubernetes-related packages:
[root@VM_0_2_centos /]# yum list all | grep "^kube"
Repository epel is listed more than once in the configuration
kubeadm.x86_64 1.18.0-0 kubernetes # install explicitly
kubectl.x86_64 1.18.0-0 kubernetes # installed automatically
kubelet.x86_64 1.18.0-0 kubernetes # install explicitly
kubernetes.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-ansible.noarch 0.6.0-0.1.gitd65ebd5.el7 epel
kubernetes-client.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-cni.x86_64 0.7.5-0 kubernetes
kubernetes-master.x86_64 1.5.2-0.7.git269f928.el7 extras
kubernetes-node.x86_64 1.5.2-0.7.git269f928.el7 extras
Only the first three packages are needed here:
yum install kubeadm kubectl kubelet
Afterwards, enable the kubelet service (systemctl enable kubelet.service) so that it starts on boot; kubeadm starts it during init/join.
Inspect the installed packages:
[root@VM_0_2_centos /]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service
[root@VM_0_2_centos /]# rpm -ql kubeadm
/usr/bin/kubeadm
/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Installation succeeded!
Edit /etc/sysconfig/kubelet so that kubelet tolerates swap being enabled:
[root@VM_0_2_centos ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
To verify the kubeadm settings first, run a dry run:
kubeadm init --kubernetes-version="v1.18.0" --pod-network-cidr="10.244.0.0/16" --dry-run
kubeadm config images pull # usually unreachable from mainland China, for well-known reasons
Edit the /usr/lib/systemd/system/docker.service file again, commenting out the lines added earlier so that docker no longer goes through the proxy:
#sky
#Environment="HTTPS_PROXY=http://www.ik8s.io:10070"
#Environment="NO_PROXY=127.0.0.0/8,172.18.0.0/16"
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
#sky
#ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
Restart the docker service:
systemctl daemon-reload
systemctl restart docker
Register an Alibaba Cloud account and log in to its container Registry:
[root@node02 ~]# sudo docker login --username=张乐sky registry.cn-hangzhou.aliyuncs.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Generate the default kubeadm.conf configuration file, then modify it:
kubeadm config print init-defaults > kubeadm.conf # generate the default configuration
# change the image download location
#imageRepository: k8s.gcr.io
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
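For orientation, a sketch of the relevant part of the generated kubeadm.conf (a v1beta2 ClusterConfiguration in kubeadm 1.18); only the imageRepository line is changed from the generated defaults:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
# imageRepository: k8s.gcr.io            # generated default
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
```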
[root@master yum.repos.d]# kubeadm config images pull --config kubeadm.conf
W0407 17:03:27.925434 31013 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
Check docker's local images:
[root@master yum.repos.d]# docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 12 days ago 117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 12 days ago 173MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 12 days ago 95.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 12 days ago 162MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 7 weeks ago 683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 2 months ago 43.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 5 months ago 288MB
The images were obtained successfully without reaching the blocked external registry!
Retag the images to the k8s.gcr.io naming style:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
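The seven tag commands above follow a single pattern, so they can also be generated with a short loop. The sketch below only prints the commands (pipe its output to sh to execute them); the image list must match what kubeadm config images list reports for the target version:

```shell
# Print the docker tag commands that map the Aliyun-mirror images back
# to the k8s.gcr.io names kubeadm expects (v1.18.0 component versions).
mirror=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-proxy:v1.18.0 kube-apiserver:v1.18.0 \
           kube-scheduler:v1.18.0 kube-controller-manager:v1.18.0 \
           pause:3.2 coredns:1.6.7 etcd:3.4.3-0; do
    echo "docker tag $mirror/$img k8s.gcr.io/$img"
done
```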
Remove the images tagged with the registry.cn-hangzhou.aliyuncs.com name:
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns@sha256:695a5e109604331f843d2c435f488bf3f239a88aec49112d452c1cbf87e88405
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:b832454a96a848ad5c51ad8a499ef2173b627ded2c225e3a6be5aad9446cb211
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver@sha256:39b3d5a305ec4e340204ecfc81e8cfce87aada5832eb8ee51ef2165b8b31abe3
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler@sha256:7c242783ca2bbd9f85fbef785ed7c492d4aaa96e3808740a6fb9fb14babfa700
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager@sha256:8e9f80f5de8a78e84b0f61325b00628276c56aaee281e5f58c6300ef12dbf3a8
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108
[root@node02 yum.repos.d]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216
Now initialize the master node:
kubeadm init --kubernetes-version="v1.18.0" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap
Output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.47.139:6443 --token nkxsyy.9xnltnf2a4y7sr0p \
--discovery-token-ca-cert-hash sha256:9644189c6aa9a7e2e4bf8435f772b03f08eebdd91a16f17844b9a626a3644f12
Success! (The bootstrap token printed above expires after 24 hours by default; a fresh join command can be generated later with kubeadm token create --print-join-command.)
1. As instructed, copy the admin config into the home directory:
mkdir -p .kube
cp -i /etc/kubernetes/admin.conf .kube/config
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com NotReady master 6m39s v1.18.0
2. Open the flannel project page and install the flannel add-on as described in its README; this deploys the network resources that `Pod`s depend on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Expected output:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
3. Check the status of the `pods`:
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-25p8s 1/1 Running 0 16m
coredns-66bff467f8-kgd2z 1/1 Running 0 16m
etcd-master.example.com 1/1 Running 1 16m
kube-apiserver-master.example.com 1/1 Running 1 16m
kube-controller-manager-master.example.com 1/1 Running 1 16m
kube-flannel-ds-amd64-4mb6p 1/1 Running 0 7m41s
kube-proxy-gw964 1/1 Running 1 16m
kube-scheduler-master.example.com 1/1 Running 1 16m
4. Check the status of the `node`:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 16m v1.18.0
5. Run the following on each of the other nodes:
# install kubeadm and kubelet on each node
yum install kubeadm kubelet -y
# join the kubernetes cluster
kubeadm join 192.168.47.139:6443 --token nkxsyy.9xnltnf2a4y7sr0p \
--discovery-token-ca-cert-hash sha256:9644189c6aa9a7e2e4bf8435f772b03f08eebdd91a16f17844b9a626a3644f12
# check the kubernetes cluster (on the master)
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 24m v1.18.0
k8s-node01 Ready <none> 4m19s v1.18.1
k8s-node02 Ready <none> 4m25s v1.18.1
k8s-node03 Ready <none> 4m17s v1.18.1
Related concepts; this section can be skipped.
If the swap devices have not been disabled, kubelet's configuration file /etc/sysconfig/kubelet must be edited so that kubelet ignores the swap-enabled state error, as follows:
[root@VM_0_2_centos ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
After these changes, initialize the master node. kubeadm init supports two initialization styles: (1) passing the key deployment settings as command-line options; (2) a dedicated YAML-format configuration file, which also lets the user customize every deployment parameter.
kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
--kubernetes-version specifies the kubernetes version to deploy; it must be a version the current kubeadm release supports.
--pod-network-cidr specifies the network range Pods are allocated addresses from; it should normally match the default of the network add-on to be deployed (flannel, calico). 10.244.0.0/16 is flannel's default network.
--service-cidr specifies the network range Services are allocated addresses from; it is managed by kubernetes and defaults to 10.96.0.0/12.
--ignore-preflight-errors should be used only when the swap devices have not been disabled.
What is iptables -vnL for?
It lists the rules in every chain (-L), verbosely with packet/byte counters (-v) and with numeric addresses and ports (-n); here it can confirm that the FORWARD chain's default policy is ACCEPT.
What is the change to /etc/sysconfig/kubelet for?
When the kubelet.service service starts, it reads its extra parameters from the /etc/sysconfig/kubelet file.
What is kubeadm config images pull for?
It downloads the component images in advance.
What can be done when docker pull is extremely slow?
A fix is described at https://blog.csdn.net/u012720518/article/details/105350978
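The usual fix is to configure a registry mirror (accelerator) for the docker daemon. A sketch; the mirror URL is a placeholder to be replaced with your own accelerator address (Alibaba Cloud issues a personal one in its container-registry console):

```shell
# Point dockerd at a registry mirror via daemon.json.
# <your-accelerator> is a placeholder -- substitute the accelerator
# address issued to your account.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-accelerator>.mirror.aliyuncs.com"]
}
EOF
```

After writing the file, apply it with systemctl daemon-reload && systemctl restart docker.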