Single-node k8s and Docker deployment

Set the hostnames:

hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on node1

Configure /etc/hosts (on both machines):

192.168.0.103 k8s-master
192.168.0.107 k8s-node1
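
The entries above can be appended in one step on both machines; a minimal sketch using a heredoc (the IP addresses are the ones used throughout this guide):

```shell
# append the cluster hosts on both master and node1
cat >> /etc/hosts <<'EOF'
192.168.0.103 k8s-master
192.168.0.107 k8s-node1
EOF
```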

Remove podman, which may conflict with Docker:

sudo yum remove podman

Disable swap:

sudo swapoff -a
sudo sed -i 's/.*swap.*/#&/' /etc/fstab

Disable SELinux:

setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Stop and disable the firewall:

sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service

Adjust kernel parameters

sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
# note: set this way, the values do not survive a reboot
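
These settings are lost on reboot, and the keys require the br_netfilter kernel module. A persistent variant, as a sketch (the file names under /etc/sysctl.d and /etc/modules-load.d are conventional choices, not from the original guide):

```shell
# load the bridge netfilter module now, and have it loaded at boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# persist the bridge sysctls in a drop-in file
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# apply all sysctl configuration files
sysctl --system
```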

Configure the Kubernetes yum repository (written to /etc/yum.repos.d/kubernetes.repo):

cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache

Install Docker

Perform this step on both master and node1.

yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the Docker yum repository:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

List the available Docker versions:

yum list docker-ce --showduplicates | sort -r

Install version 19.03.13:

yum install docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io -y --allowerasing

Verify:

docker --version

Start Docker and enable it at boot:

systemctl start docker
systemctl enable docker

Command completion
With bash-completion installed, pressing Tab after a docker command auto-completes its subcommands.

yum -y install bash-completion
source /etc/profile.d/bash_completion.sh

Configure a Docker registry mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors":["https://xbr1wq16.mirror.aliyuncs.com"]
}
EOF

Reload the Docker daemon configuration:

systemctl daemon-reload

Restart Docker:

systemctl restart docker

Verify the mirror:

docker --version
docker run hello-world

Change the Docker cgroup driver to match the kubelet's (cgroupfs here; note that "cgroups" is not a valid value, only cgroupfs or systemd). Keep the registry mirror entry when editing, since this file replaces the previous contents:

vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://xbr1wq16.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
systemctl daemon-reload
systemctl restart docker

Check the cgroup driver (case-insensitive match, since the field is printed as "Cgroup Driver"):

docker info | grep -i cgroup

Install Kubernetes

Perform this step on node1 as well.

List the available versions:

yum list kubelet --showduplicates | sort -r

Install version 1.19.7 of kubelet, kubeadm, and kubectl:

sudo yum install -y kubelet-1.19.7 kubeadm-1.19.7 kubectl-1.19.7

Enable and start kubelet:

systemctl enable kubelet
systemctl start kubelet

kubectl command completion:

echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile

List the images kubeadm will pull:

kubeadm config images list

Initialize the master

Run this only on the master; worker nodes do not need it.

kubeadm init --apiserver-advertise-address 192.168.0.103 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.7
# 192.168.0.103 is the master's address

On success you should see:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.103:6443 --token qkf60g.uojauhqfzl1hrexo \
    --discovery-token-ca-cert-hash sha256:b9908586f0e3f8475874564a0cf44b6bd5b033e033a799d41ee6352d98975374

Run the commands from the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the pod network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# restart docker
systemctl restart docker
# restart kubelet
systemctl restart kubelet
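
Before joining the worker node, it can help to confirm that the flannel pods come up and the master turns Ready; plain kubectl queries, assuming the manifest above was applied:

```shell
# flannel runs as a DaemonSet in kube-system
kubectl get pods -n kube-system -o wide | grep flannel
# the master should report Ready once the CNI is initialized
kubectl get nodes
```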

Join the cluster

# copied from the kubeadm init success message; run on node1
kubeadm join 192.168.0.103:6443 --token qkf60g.uojauhqfzl1hrexo \
    --discovery-token-ca-cert-hash sha256:b9908586f0e3f8475874564a0cf44b6bd5b033e033a799d41ee6352d98975374

Check node status from the master:

kubectl get nodes

Dashboard installation
The Dashboard is a web UI component for viewing node and pod status.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Check that the namespace was created:

kubectl get namespace

Check the Service:

kubectl get service -n kubernetes-dashboard

Check the Deployment:

kubectl get deployment -n kubernetes-dashboard -o wide

Check pod status:

kubectl get pods -A -o wide

Sample output:

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master ~]# kubectl get namespace
NAME                   STATUS   AGE
default                Active   3h2m
kube-node-lease        Active   3h2m
kube-public            Active   3h2m
kube-system            Active   3h2m
kubernetes-dashboard   Active   3m18s
[root@k8s-master ~]# kubectl get service -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.102.31.138   <none>        8000/TCP   3m31s
kubernetes-dashboard        ClusterIP   10.97.139.94    <none>        443/TCP    3m31s
[root@k8s-master ~]# kubectl get deployment -n kubernetes-dashboard -o wide
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                  IMAGES                                SELECTOR
dashboard-metrics-scraper   1/1     1            1           3m38s   dashboard-metrics-scraper   kubernetesui/metrics-scraper:v1.0.1   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        1/1     1            1           3m39s   kubernetes-dashboard        kubernetesui/dashboard:v2.0.0-beta4   k8s-app=kubernetes-dashboard
[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
kube-system            coredns-6d56c8448f-h9bsp                     1/1     Running   1          3h2m    10.244.0.4      k8s-master   <none>           <none>
kube-system            coredns-6d56c8448f-nkqtj                     1/1     Running   1          3h2m    10.244.0.5      k8s-master   <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running   2          3h3m    192.168.0.103   k8s-master   <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running   1          3h3m    192.168.0.103   k8s-master   <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running   1          3h3m    192.168.0.103   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-b2xrr                        1/1     Running   2          178m    192.168.0.103   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-bctv7                        1/1     Running   0          10m     192.168.0.107   k8s-node1    <none>           <none>
kube-system            kube-proxy-5d5lx                             1/1     Running   2          3h2m    192.168.0.103   k8s-master   <none>           <none>
kube-system            kube-proxy-fqn2v                             1/1     Running   0          10m     192.168.0.107   k8s-node1    <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running   1          3h3m    192.168.0.103   k8s-master   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-7b9b99d599-pfg6g   1/1     Running   0          3m45s   10.244.2.3      k8s-node1    <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-6d4799d74-fps86         1/1     Running   0          3m46s   10.244.2.2      k8s-node1    <none>           <none>

The Dashboard is now running, but its Service type is ClusterIP, which is reachable only from inside the cluster. The sections below change it to NodePort by re-deploying the Dashboard from a modified recommended.yaml.
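
An existing Service can also be switched to NodePort in place, without re-deploying; a sketch using a strategic merge patch (the nodePort value 30009 matches the one used in the modified manifest below):

```shell
# change the dashboard Service type and pin the node port
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30009}]}}'
```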

Delete the Dashboard pods

kubectl -n kubernetes-dashboard delete $(kubectl -n kubernetes-dashboard get pod -o name | grep dashboard)
# force-delete a stuck pod (look up its name first)
kubectl -n kubernetes-dashboard get pod -o name | grep dashboard
kubectl delete pod dashboard-metrics-scraper-7b9b99d599-lghsw -n kubernetes-dashboard --force --grace-period=0

Uninstall the remaining Dashboard resources:

#!/bin/bash
kubectl delete deployment kubernetes-dashboard --namespace=kubernetes-dashboard 
kubectl delete service kubernetes-dashboard  --namespace=kubernetes-dashboard
kubectl delete role kubernetes-dashboard-minimal --namespace=kubernetes-dashboard 
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kubernetes-dashboard
kubectl delete sa kubernetes-dashboard --namespace=kubernetes-dashboard 
kubectl delete secret kubernetes-dashboard-certs --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-csrf --namespace=kubernetes-dashboard
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kubernetes-dashboard

Download and modify the manifest

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Two additions (marked "added" below):

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30009  # added
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Reinstall the Dashboard

kubectl create -f recommended.yaml
# if this fails, the previous deployment may not have been fully cleaned up; clean again and retry

Verify:

[root@k8s-master ~]# kubectl get deployment -n kubernetes-dashboard -o wide
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                  IMAGES                                SELECTOR
dashboard-metrics-scraper   1/1     1            1           38m     dashboard-metrics-scraper   kubernetesui/metrics-scraper:v1.0.1   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        1/1     1            1           7m34s   kubernetes-dashboard        kubernetesui/dashboard:v2.0.0         k8s-app=kubernetes-dashboard

Open in a browser:

https://192.168.0.103:30009/#/login

Get a login token

# create an admin service account
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# bind it to the cluster-admin ClusterRole
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# list the service accounts and secrets
[root@k8s-master ~]# kubectl get sa,secrets -n kubernetes-dashboard
NAME                                  SECRETS   AGE
serviceaccount/dashboard-admin        1         20s
serviceaccount/default                1         41m
serviceaccount/kubernetes-dashboard   1         10m

NAME                                      TYPE                                  DATA   AGE
secret/dashboard-admin-token-q9vg6        kubernetes.io/service-account-token   3      20s
secret/default-token-rnxhx                kubernetes.io/service-account-token   3      41m
secret/kubernetes-dashboard-certs         Opaque                                0      10m
secret/kubernetes-dashboard-csrf          Opaque                                1      10m
secret/kubernetes-dashboard-key-holder    Opaque                                2      10m
secret/kubernetes-dashboard-token-dl7vw   kubernetes.io/service-account-token   3      10m
# the admin token is stored in the secret dashboard-admin-token-q9vg6
# show the token
kubectl describe secrets dashboard-admin-token-q9vg6 -n kubernetes-dashboard

Paste the token into the login page to access the Dashboard.
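
The lookup can also be collapsed into one pipeline; a sketch that assumes the dashboard-admin ServiceAccount created above (on a 1.19 cluster the token lives in the auto-created secret):

```shell
# print the dashboard-admin token
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d; echo
```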

Troubleshooting

swapoff: /dev/dm-1: swapoff failed: cannot allocate memory

Increase the VM's memory to more than 2 GB, give it more than 2 CPU cores, and reboot.

yum install fails with timeouts

Switch to a different mirror address in the yum repo file.

problem with installed package buildah-1.22.3-2.module_el8.5.0+911+f19012f9.x86_64

yum remove docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io
yum install docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io -y --allowerasing

Unable to find image 'hello-world:latest' locally

Retry a few times, re-add the registry mirror, and restart Docker.

kubelet fails to start

Check the kubelet logs:

journalctl -xefu kubelet 

The logs show kubelet restarting in a loop. This can be ignored; continue to the next step, as the problem resolves itself once kubeadm init or join runs.

Re-initialize the cluster

# remove all worker nodes
kubectl delete node izwz9ac58lkokssyf8owagz
# on every worker node, delete the working directory and reset kubeadm
rm -rf /etc/kubernetes/*
kubeadm reset
# on the master, delete the working directories and reset kubeadm
sudo rm -rf /etc/kubernetes/*;sudo rm -rf ~/.kube/*;sudo rm -rf /var/lib/etcd/*;sudo rm -rf /var/lib/cni/;sudo rm -fr /etc/cni/net.d;sudo kubeadm reset -f
# re-run kubeadm init
kubeadm init --apiserver-advertise-address 192.168.0.103 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.7

kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

Change the Docker cgroup driver to match the kubelet's (cgroupfs).

Node is in NotReady state

Error: network plugin is not ready: cni config uninitialized
Flannel is not installed; install it:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory

Copy the file over from the master node.

error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

swapoff -a
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

