Installing Kubernetes (K8S)

Install Docker

Install the yum utilities:

$ yum install -y yum-utils device-mapper-persistent-data lvm2

Then add the Aliyun Docker repository:

$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available versions:

$ yum list docker-ce.x86_64 --showduplicates | sort -r

Install Docker:

$ yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io

Start Docker:

$ service docker start
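
To have Docker start on boot as well (a standard systemd step, recommended on cluster nodes):

$ systemctl enable docker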

Configure a Docker registry mirror:

$ vi /etc/docker/daemon.json

{
  "registry-mirrors": ["http://hub.c.163.com/"]
}

Save the file, then restart Docker:

$ service docker restart

Uninstall Docker

$ yum list installed | grep docker

containerd.io.x86_64                 1.4.3-3.1.el7                  @docker-ce-stable
docker-ce-cli.x86_64                 1:20.10.3-3.el7                @docker-ce-stable

Remove each listed package, for example:

$ yum -y remove docker.x86_64

Delete the data directory:

$ rm -rf /var/lib/docker

Configure a proxy for Docker

$ mkdir -p /etc/systemd/system/docker.service.d
$ vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=socks5://192.168.137.1:1080" "HTTPS_PROXY=socks5://192.168.137.1:1080" "NO_PROXY=localhost,127.0.0.1"

$ systemctl daemon-reload
$ systemctl restart docker
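
To confirm the proxy was picked up (docker info prints the proxy settings when they are active):

$ docker info | grep -i proxy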

Install docker-compose (optional)

$ sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.28.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

$ sudo chmod +x /usr/local/bin/docker-compose
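
Verify the binary works:

$ docker-compose --version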

Install Harbor (optional)

Download Harbor from GitHub: https://github.com/goharbor/harbor/releases

Extract the archive:

$ tar zxvf harbor-offline-installer-v2.2.0.tgz

$ cd harbor

Generate the config file from the template:

$ cp harbor.yml.tmpl harbor.yml

Edit the configuration:

$ vi harbor.yml

In harbor.yml, change hostname to your host's IP and comment out the https section.
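
The relevant lines look roughly like this (values are the template defaults; treat this as an illustrative sketch of a v2.x harbor.yml, not an exact copy):

hostname: 192.168.56.100          # change to your host IP

# comment out the whole https section
# https:
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path

Then run the installer: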

$ ./prepare

$ ./install.sh

Because HTTPS is disabled, Docker clients must be told to trust the registry. Add insecure-registries to /etc/docker/daemon.json (alongside the mirror configured earlier) and restart Docker:

{
  "registry-mirrors": ["http://hub.c.163.com/"],
  "insecure-registries": ["192.168.56.100"]
}

$ service docker restart

Push the local dc-gateway image to Harbor:

$ docker login -u admin -p Harbor12345 http://127.0.0.1

$ docker tag dc-gateway:1.0 127.0.0.1/library/dc-gateway:1.0   # tag the image

$ docker push 127.0.0.1/library/dc-gateway:1.0   # push it to the registry

Harbor itself is managed with docker-compose from the harbor directory:

$ docker-compose up -d       # start
$ docker-compose stop        # stop
$ docker-compose restart     # restart

Install the Kubernetes tools

$ vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

$ yum install -y kubectl.x86_64
$ yum install -y kubeadm

Synchronize the system time:

$ yum install ntpdate

$ ntpdate cn.pool.ntp.org

$ hwclock --systohc

Disable swap

$ swapoff -a

To disable it permanently, comment out the swap line in /etc/fstab:

$ vi /etc/fstab
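
Or as a one-liner (a sketch using GNU sed; it comments out every /etc/fstab line that mentions swap):

$ sed -ri 's/.*swap.*/#&/' /etc/fstab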

Disable the firewall (note that iptables -F only flushes the current rules):

$ iptables -F
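
On CentOS 7 the firewall service is normally firewalld, so stopping and disabling it is the persistent alternative (assuming firewalld is what is running):

$ systemctl stop firewalld
$ systemctl disable firewalld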

Configure the IP address

If you are running in a VirtualBox VM, set the first network adapter to NAT and the second to Bridged in the VM settings, then skip the configuration below.

First check the current IP address:

$ ip addr

Here enp0s3 is the interface to configure; edit the matching config file:

$ vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

TYPE="Ethernet"
PROXY_METHOD="static"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="6a104c04-1109-4b64-81e1-3e7f9ba045f2"
DEVICE="enp0s3"
ONBOOT="yes"
IPADDR="192.168.141.110"
NETMASK="255.255.255.0"
GATEWAY="192.168.141.254"
DNS1="202.100.199.8"
DNS2="8.8.4.4"

Set the hostname

$ vi /etc/hostname

k8s-master

$ vi /etc/sysconfig/network

HOSTNAME=k8s-master

On the other hosts use k8s-node1 and k8s-node2.
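
Alternatively, hostnamectl sets the name in one step, and adding all nodes to /etc/hosts on every machine keeps them resolvable by name (the IPs are examples from this guide's 192.168.56.x network; adjust to yours):

$ hostnamectl set-hostname k8s-master

$ cat >> /etc/hosts << EOF
192.168.56.101 k8s-master
192.168.56.102 k8s-node1
192.168.56.103 k8s-node2
EOF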

Install the Kubernetes cluster

Initialize the cluster:

$ kubeadm reset

$ rm -f

$ cd /usr/local/

$ mkdir kubernetes

$ mkdir cluster

$ kubeadm config print init-defaults --kubeconfig ClusterConfiguration > kubeadm.yml

Edit the configuration:

$ vi kubeadm.yml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.101 # change to the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.1 # match the Kubernetes version you installed
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # add the pod network CIDR (matches flannel's default)
  serviceSubnet: 10.96.0.0/12
scheduler: {}

List the images that will be pulled:

$ kubeadm config images list --config kubeadm.yml

Pull the images:

$ kubeadm config images pull --config kubeadm.yml

Initialize the master node:

$ kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log

On success it prints:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.21.226:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1a30805571c5d7de92e9b51bbc0a265968969590cb5fcf9dcd7d334a5b6cff9a

Then run the commands from that output: as a regular user, run the three kubeconfig lines (mkdir / cp / chown); as root you can simply export KUBECONFIG=/etc/kubernetes/admin.conf.

Join the worker nodes

Copy the join command printed at the end of the init output and run it on each worker node:

$ kubeadm join 192.168.21.226:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1a30805571c5d7de92e9b51bbc0a265968969590cb5fcf9dcd7d334a5b6cff9a

If the following error is returned:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

run:

$ echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

Likewise, if you see:

[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

run:

$ echo "1" > /proc/sys/net/ipv4/ip_forward

Once the joins succeed, run this on the master:

$ kubectl get node

which returns:

NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   8m46s   v1.20.4
k8s-node1    NotReady   <none>                 71s     v1.20.4
k8s-node2    NotReady   <none>                 79s     v1.20.4

If a join instead hangs and fails with:

[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

reset that node, restart the kubelet, and run the join command again:

$ kubeadm reset

$ systemctl daemon-reload

$ systemctl restart kubelet

Configure the pod network

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
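
The flannel pods should come up within a minute or two; depending on the manifest version they land in the kube-system or kube-flannel namespace, so a quick check is:

$ kubectl get pods -A | grep flannel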

When the network is up, check again; a node has joined successfully once its STATUS is Ready:

$ kubectl get nodes

NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   35m   v1.20.4
k8s-node1    Ready      <none>                 25m   v1.20.4
k8s-node2    Ready      <none>                 27m   v1.20.4

Check the control-plane component status:

$ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

If a kubeadm-installed cluster reports the controller-manager and scheduler as Unhealthy, check whether their manifests disable the insecure port:

$ vi /etc/kubernetes/manifests/kube-scheduler.yaml
$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml



Comment out or delete the --port=0 line in both files, then restart the kubelet:

$ systemctl restart kubelet

The component status should now report Healthy.
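
For orientation, the offending flag sits in the static pod's command list and looks roughly like this (an illustrative fragment of kube-scheduler.yaml, not the complete file):

spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1
    # - --port=0          # remove or comment out this line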


Check the master status:

$ kubectl cluster-info

Check the node status:

$ kubectl get nodes -o wide

NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   8h    v1.20.4   10.0.2.15        <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://18.9.9
k8s-node1    Ready    <none>                 8h    v1.20.4   10.0.2.15        <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://18.9.9
k8s-node2    Ready    <none>                 8h    v1.20.4   192.168.56.103   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://18.9.9

Note: if INTERNAL-IP is identical on every node (all 10.0.2.15, which happens under VirtualBox NAT), set the node IP manually in /etc/sysconfig/kubelet on each node and restart the kubelet:

KUBELET_EXTRA_ARGS=--node-ip=192.168.56.103 # use the node's real IP

$ systemctl restart kubelet

To inspect a service's endpoints:

$ kubectl get endpoints <service-name>

Connect to Harbor (optional)

$ kubectl create secret docker-registry docker-harbor-secret --namespace=default \
  --docker-server=192.168.21.229 \
  --docker-username=admin \
  --docker-password=Harbor12345
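
Pods that pull from Harbor then reference the secret through imagePullSecrets; a minimal pod-spec sketch (the image path assumes the library project used earlier):

spec:
  containers:
  - name: dc-gateway
    image: 192.168.21.229/library/dc-gateway:1.0
  imagePullSecrets:
  - name: docker-harbor-secret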

Run an Nginx container

# API version
apiVersion: apps/v1
# kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
metadata:
  # name of this object
  name: nginx
spec:
  selector:
    matchLabels:
      # pod label; the Service selector below must match this
      app: nginx
  # number of replicas to deploy
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # container list; more than one container can be configured
      containers:
      # container name
      - name: nginx
        # container image
        image: nginx:1.17
        # pull the image only when it is not present locally
        imagePullPolicy: IfNotPresent
        ports:
        # container port
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31234

Create it:

$ kubectl apply -f nginx.yml
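
Because the Service pins nodePort 31234, it should answer on that port on any node once the pods are running (the node IP below is an example from this setup):

$ curl http://192.168.56.103:31234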

Check the pod status:

$ kubectl get pods

If the status is stuck at ContainerCreating, inspect the pod's events and logs with:

$ kubectl describe pod nginx

Delete and recreate from the manifest:

$ kubectl delete -f pod_nginx.yml

$ kubectl create -f pod_nginx.yml

List the deployments:

$ kubectl get deployment

Expose the service (skip this step if the YAML already defines a Service):

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer

Inspect the services:

$ kubectl get services

$ kubectl describe service nginx

Delete the deployment:

$ kubectl delete deployment nginx

Delete the exposed service:

$ kubectl delete service nginx

Troubleshooting: service unreachable

See: https://www.cnblogs.com/cheyunhua/p/13343313.html

If you hit:

Error from server (NotFound): the server could not find the requested resource

see:

https://www.jianshu.com/p/fd9941c21e55
https://cookcode.blog.csdn.net/article/details/109424100
http://www.bubuko.com/infodetail-3499239.html

Pods cannot ping a service name or ClusterIP

Switch kube-proxy from iptables to IPVS on all nodes (requires a Linux kernel of 4.4 or newer):

1. Enable kernel support:

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl -p

2. Enable IPVS support:

$ yum -y install ipvsadm ipset

# load the modules now (lost on reboot)
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# load them permanently at boot
cat > /etc/sysconfig/modules/ipvs.modules << EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

3. Configure kube-proxy on the master. Because the cluster was installed with kubeadm, edit the ConfigMap:

[root@master] # kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited

# change the mode in the KubeProxyConfiguration section:
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"                  # change this from "" to "ipvs"

4. On the master, restart kube-proxy by deleting its pods:

$ kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pod -n kube-system

5. Verify that IPVS is in use:

[root@k8s-m mytest]# kubectl logs kube-proxy-cvzb4 -n kube-system
I0409 03:37:29.194391       1 server_others.go:170] Using ipvs Proxier.
W0409 03:37:29.194779       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0409 03:37:29.194981       1 server.go:534] Version: v1.15.3
I0409 03:37:29.214255       1 conntrack.go:52] Setting nf_conntrack_max to 524288
I0409 03:37:29.216744       1 config.go:96] Starting endpoints config controller
I0409 03:37:29.216812       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0409 03:37:29.217445       1 config.go:187] Starting service config controller
I0409 03:37:29.218320       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0409 03:37:29.318218       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0409 03:37:29.318564       1 controller_utils.go:1036] Caches are synced for service config controller

6. Inside a pod, the service name can now be pinged. In iptables mode, ping failed with the error below; after the steps above everything works:

root@xxxxxx-cb4c9cb8c-hpzdl:/opt# ping xxxxxx
PING xxxxxx.xxxxx.svc.cluster.local (172.16.140.78) 56(84) bytes of data.
From 172.16.8.1 (172.16.8.1) icmp_seq=1 Time to live exceeded
From 172.16.8.1 (172.16.8.1) icmp_seq=2 Time to live exceeded

http://www.mydlq.club/article/78/#wow11

Test from busybox

$ kubectl delete pod busybox   # remove any previous busybox pod

$ kubectl run -it busybox --image=busybox -- /bin/sh

Once inside busybox:

wget -q -O- http://mysql
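
A quick DNS sanity check from the same shell (kubernetes.default exists in every cluster; note that nslookup is flaky in some recent busybox images):

nslookup kubernetes.default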
