Installing k8s on Ubuntu (Docker runtime)

(1) Introduction


I won't explain what k8s is here; see the official Kubernetes documentation for that. This post only walks through the installation process, using a one-master, one-worker setup.

Link: Kubernetes official documentation

PS: For a first installation, it is recommended to set up the environment on fresh cloud servers, since the system is clean; otherwise you never know what problems you might run into.

(2) Environment initialization

Set up Bash command completion on Ubuntu. This has nothing to do with the k8s installation itself, but typing commands without it is painful.

apt update
apt install bash-completion

#To make the setting take effect immediately, load the new Bash configuration:
source /etc/profile.d/bash_completion.sh
#Alternatively, restart the terminal to apply the change

1. Pin the server's private IP

First use ip addr to check the IP address, network interface, and related information.

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:cf:f4:9f brd ff:ff:ff:ff:ff:ff
    inet 10.206.16.15/20 brd 10.206.31.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fecf:f49f/64 scope link 
       valid_lft forever preferred_lft forever

Note the host's interface name, current IP, subnet mask, and gateway address.

As shown in the output above, the interface name is eth0 and the current IP is 10.206.16.15/20; the gateway here is 10.206.16.1 (ip addr does not show the gateway, but you can confirm it with ip route).
Then edit the /etc/netplan/00-installer-config.yaml configuration file based on this information:

cd /etc/netplan
#Back up the original file
cp 00-installer-config.yaml 00-installer-config.yaml.bak
#Edit 00-installer-config.yaml
vim 00-installer-config.yaml

The updated 00-installer-config.yaml looks like this:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    eth0:   # interface name obtained above
      dhcp4: no     # disable DHCP
      dhcp6: no
      addresses: [10.206.16.15/20]  # IP obtained above
      gateway4: 10.206.16.1     # gateway obtained above
      nameservers:
        addresses: [8.8.8.8, 114.114.114.114] # DNS servers

Just replace the interface name, IP, gateway, and so on with the values you collected above.

Apply the configuration:

netplan apply

2. Disable swap

swapoff -a
#To keep swap disabled after a reboot, edit /etc/fstab and comment out the line containing swap.img; it is usually the last line, though it may not exist at all
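
If you prefer to do that non-interactively, here is a hedged one-liner that comments out any active swap entry (the exact line in your /etc/fstab may differ, so double-check the file afterwards):

#comment out every uncommented line that mounts a swap device
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab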

3. Set up time synchronization

Unify the time zone

#Check the current time zone
cat /etc/timezone
#Change the time zone
timedatectl set-timezone Asia/Shanghai

Synchronize the time

#Install ntpdate
apt install ntpdate
#One-shot time synchronization
ntpdate ntp1.aliyun.com
#Set up a crontab job, e.g. sync every 3 minutes
#Edit the crontab
crontab -e
#Add the following line
*/3 * * * * ntpdate ntp1.aliyun.com

If you get the error "the NTP socket is in use, exiting", it means an NTP daemon is already syncing the time on this server (Tencent Cloud servers have one, for example). You can find it with ps and kill it, or simply skip the manual sync, since the time is already being kept in sync.
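
If you do want to stop that conflicting daemon, a hedged sketch (the unit name varies by image; it is commonly ntp, ntpd, or chrony):

#find the process holding the NTP socket
ps -ef | grep -E 'ntp|chrony'
#stop it, e.g. for the ntp package
systemctl stop ntp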

4. Install Docker

#Update the package index first
apt-get update
#Install packages that allow apt to use a repository over HTTPS
apt-get install ca-certificates curl gnupg lsb-release
#Add the GPG key for the Tencent Cloud Docker mirror
curl -fsSL https://mirrors.tencentyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
#Add the Tencent Cloud Docker repository
add-apt-repository "deb [arch=amd64] https://mirrors.tencentyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
#Refresh the apt index and install the Docker engine, CLI, and containerd runtime
apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io


#Check the Docker version and status
docker version
systemctl status docker

#Configure the Docker registry mirror
vim /etc/docker/daemon.json

The modified daemon.json looks like this:

{
  "registry-mirrors": [
    "https://mirror.ccs.tencentyun.com"
  ]
}
#Restart Docker
systemctl restart docker
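
To confirm the mirror configuration took effect, docker info lists the configured registry mirrors:

docker info | grep -A 1 "Registry Mirrors"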

5. Install cri-dockerd

Kubernetes removed dockershim in v1.24, so Docker is no longer the default container runtime. To keep using the Docker runtime, you need to install cri-dockerd.

#Download the matching cri-dockerd release from GitHub; use lsb_release -a to check your Ubuntu release codename
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd_0.3.4.3-0.ubuntu-focal_amd64.deb
#Install cri-dockerd
dpkg -i cri-dockerd_0.3.4.3-0.ubuntu-focal_amd64.deb
#Modify the /usr/lib/systemd/system/cri-docker.service configuration
sed -i -e 's#ExecStart=.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9#g' /usr/lib/systemd/system/cri-docker.service
#Restart cri-docker
systemctl restart cri-docker
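
The package usually enables the service already, but it does not hurt to make sure it starts on boot and that the socket kubeadm will talk to exists (a quick, optional check):

systemctl enable --now cri-docker
ls -l /var/run/cri-dockerd.sock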

6. Enable IPv4 forwarding

#Enable IPv4 forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
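
A quick verification; if the net.bridge.* keys are not applied, the br_netfilter kernel module is probably not loaded yet (a hedged fix):

#verify the forwarding setting took effect
sysctl net.ipv4.ip_forward
#if the bridge keys were skipped, load the module and re-apply
modprobe br_netfilter
sysctl --system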

7. Configure the k8s apt source and install kubeadm/kubectl/kubelet

Configure the k8s apt source

apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg  https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
cat /usr/share/keyrings/kubernetes-archive-keyring.gpg |  sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt update

List the available k8s versions

apt-cache madison kubeadm

I am installing version 1.26 here; the latest version at the time of writing is 1.28.2.

#Install the latest version
#apt-get install -y kubelet kubeadm kubectl
#Install a specific version
apt install -y kubelet=1.26.0-00 kubeadm=1.26.0-00 kubectl=1.26.0-00
#Hold the versions to prevent automatic upgrades
apt-mark hold kubelet kubeadm kubectl
#Check the versions
kubelet --version
kubeadm version
kubectl version

Everything above must be executed on all nodes.

(3) Initialize the cluster with kubeadm and join the other nodes

The steps below are split between the master node and the worker (Node) nodes.

Cluster initialization on the master node

#Initialize the cluster
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
             --service-cidr=192.168.200.0/21 \
             --pod-network-cidr=10.0.0.0/16 \
             --cri-socket unix:///var/run/cri-dockerd.sock

If initialization succeeds, you will see output similar to the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.206.16.10:6443 --token koul8l.8lusuzprjb8vqfak \
        --discovery-token-ca-cert-hash sha256:c43c82d4d20e071a52e6f846030bb25264ebd7238f67ffe17fbbce154397b183

If initialization fails, you must reset the cluster; otherwise it cannot be initialized again.

#Reset the cluster (use this after a failed init)
# kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock

If it still fails, try restarting kubelet. When I installed, initialization kept failing; the usual suggestions found online, such as systemctl enable --now kubelet, did not help, and kubelet's status looked perfectly normal. Only restarting kubelet fixed it.

#Restart kubelet and check its status
systemctl restart kubelet && systemctl status kubelet

If that still does not work and you cannot find the cause, a reboot is worth a try.

Of course, there are many possible reasons for initialization to fail; this is just the one I ran into. In any case, after a failed kubeadm init you must run kubeadm reset before initializing again.
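
When the cause is not obvious, the kubelet and cri-docker logs are usually the fastest way to find it, for example:

#show the most recent kubelet log entries
journalctl -u kubelet --no-pager | tail -n 50
#and the container runtime shim
journalctl -u cri-docker --no-pager | tail -n 50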

After a successful initialization, run the commands below as printed in the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

At this point, if you run kubectl get nodes, you will see that the node status is NotReady:

kubectl get nodes
NAME              STATUS     ROLES           AGE    VERSION
vm-16-4-ubuntu    NotReady   control-plane   3m8s   v1.26.0

This is because a network plugin still needs to be installed. Here we install the Calico network plugin, as follows.
Link: official Calico installation documentation

#First download the tigera-operator.yaml and custom-resources.yaml files
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

kubectl create -f tigera-operator.yaml

For custom-resources.yaml, the spec.calicoNetwork.ipPools.cidr value must be changed to the network that was passed to --pod-network-cidr when the cluster was initialized.
The modified custom-resources.yaml looks like this:

# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.0.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

kubectl create -f custom-resources.yaml
kubectl get pods -A

NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-5498df66f9-7htvp          1/1     Running   0          15s
calico-apiserver   calico-apiserver-5498df66f9-n8pz6          1/1     Running   0          15s
calico-system      calico-kube-controllers-7d86749f8f-gg458   1/1     Running   0          61s
calico-system      calico-node-4jxsx                          1/1     Running   0          61s
calico-system      calico-node-dfzg2                          1/1     Running   0          61s
calico-system      calico-typha-7f644c66fc-9gwfr              1/1     Running   0          61s
calico-system      csi-node-driver-ff42h                      2/2     Running   0          61s
calico-system      csi-node-driver-qbqm4                      2/2     Running   0          61s
kube-system        coredns-5bbd96d687-d2xmb                   1/1     Running   0          18m
kube-system        coredns-5bbd96d687-xpjb9                   1/1     Running   0          18m
kube-system        etcd-vm-16-4-ubuntu                        1/1     Running   0          18m
kube-system        kube-apiserver-vm-16-4-ubuntu              1/1     Running   0          18m
kube-system        kube-controller-manager-vm-16-4-ubuntu     1/1     Running   0          18m
kube-system        kube-proxy-mw87t                           1/1     Running   0          15m
kube-system        kube-proxy-n9mpb                           1/1     Running   0          18m
kube-system        kube-scheduler-vm-16-4-ubuntu              1/1     Running   0          18m
tigera-operator    tigera-operator-78d7857c44-z77nw           1/1     Running   0          4m5s

Once kubectl get pods -A shows that all containers are running, check the node status again with kubectl get nodes:

NAME              STATUS   ROLES           AGE   VERSION
vm-16-4-ubuntu    Ready    control-plane   20m   v1.26.0

The commands above are executed only on the master node.

Joining worker nodes to the cluster

#Run this command only on the worker node. kubeadm prints it after a successful init; you just need to append --cri-socket unix:///var/run/cri-dockerd.sock
kubeadm join 10.206.16.4:6443 --token zt7v60.0o7dkkwvonlp3p4g \
        --discovery-token-ca-cert-hash sha256:eda942207a39cbfb5a95ee9ac6171609c4a9c5a8669be216ae0c71567833b265 \
        --cri-socket unix:///var/run/cri-dockerd.sock

After the command above finishes, go to the master node and run kubectl get nodes:

kubectl get nodes 
NAME              STATUS   ROLES           AGE   VERSION
vm-16-16-ubuntu   Ready    <none>          20m   v1.26.0
vm-16-4-ubuntu    Ready    control-plane   23m   v1.26.0

vm-16-4-ubuntu is the master node, with ROLES control-plane; vm-16-16-ubuntu is the worker node, with ROLES <none>. If you want to set a role label for it, use the following command:

kubectl label no vm-16-16-ubuntu kubernetes.io/role=node
#Check the nodes
kubectl get nodes 
NAME              STATUS   ROLES           AGE   VERSION
vm-16-16-ubuntu   Ready    node            32m   v1.26.0
vm-16-4-ubuntu    Ready    control-plane   35m   v1.26.0

PS: NAME is the host's hostname, which can be changed with hostnamectl set-hostname <new-hostname>.

(4) Test

Deploy an nginx to test the cluster.

Create nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
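
The manifest then needs to be applied; assuming it was saved as nginx.yaml as above:

kubectl apply -f nginx.yaml

Then check which nodes the Pods landed on:
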
kubectl get pods -owide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE   READINESS GATES
nginx-deployment-6b7f675859-9gz6n   1/1     Running   0          2m10s   10.0.240.69   vm-16-16-ubuntu   <none>           <none>
nginx-deployment-6b7f675859-kbvw9   1/1     Running   0          2m10s   10.0.240.67   vm-16-16-ubuntu   <none>           <none>
nginx-deployment-6b7f675859-s28f4   1/1     Running   0          2m10s   10.0.240.68   vm-16-16-ubuntu   <none>           <none>

PS: The containerPort field is a declarative way to document which port the container exposes; it is part of the container port spec and is not a port mapped to the host. In many cases you can reach the Pod directly even without setting containerPort.
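
For example, using the Pod IPs from the kubectl get pods -owide output above, the nginx Pods can be reached directly from a cluster node without any host port mapping (a quick sanity check, assuming you run it on one of the nodes):

#should return the default nginx welcome page
curl 10.0.240.69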

Also, by default the master node is not used to schedule Pods. If you want Pods to be scheduled on the master as well, run the following command:

#Allow the control-plane node to be scheduled (optional)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

(5) Closing remarks

I wrote this simply to record my own process of learning Kubernetes. If anything here is wrong, corrections are welcome. Finally, I hope everyone gets something out of it.

