k8s Installation

Version: 1.10.4

Official references:
https://kubernetes.io/docs/tasks/tools/install-kubectl/

https://www.kubernetes.org.cn/3808.html

Use the Aliyun yum mirror

Create the repo file kubernetes.repo (under /etc/yum.repos.d/):

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
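
Before installing, you can confirm that yum picks up the new repo (a quick sketch):

yum clean all
yum repolist enabled | grep -i kubernetes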

Install kubelet, kubeadm and kubectl

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet 
systemctl start kubelet
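
This guide targets 1.10.4. If the mirror has already moved on to a newer release, yum can pin the versions explicitly (a sketch, assuming the 1.10.4 RPMs are still present in the repo):

yum install -y kubelet-1.10.4 kubeadm-1.10.4 kubectl-1.10.4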

Configure iptables

Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
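
If sysctl reports that net.bridge.bridge-nf-call-iptables does not exist, the br_netfilter kernel module is probably not loaded. A sketch to load it now and on every boot:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system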

Configure the cgroup driver on the master node

kubelet and Docker must use the same cgroup driver.
Check Docker's: docker info | grep -i cgroup


[root@node205 sysctl.d]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

Check kubelet's configuration, then switch its cgroup driver to cgroupfs to match Docker:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload the configuration and restart kubelet:

systemctl daemon-reload
systemctl restart kubelet
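
A quick check that the two drivers now agree (the grep patterns are assumptions based on the file contents shown above):

docker info 2>/dev/null | grep -i 'cgroup driver'
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf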

Shell completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Initialize the master

Since we plan to use the Flannel network later, pass the pod-network CIDR at init time.
Command:
kubeadm init --pod-network-cidr=10.244.0.0/16

The init run got stuck at:

[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Cause: the images are pulled from k8s.gcr.io, which is unreachable from inside China.
Searching online turns up two workarounds:

  1. Use a proxy outside the firewall
  2. Pull the required images from a domestic Docker registry mirror

We take the second approach here. Special thanks to QQ user 雨夜的大神, whose script below pulls the matching image versions to the local machine:

#!/bin/bash
# Pull each image from the Aliyun mirror, then re-tag it as k8s.gcr.io/...
# so that kubeadm finds it in the local image cache.
REGISTRY_NAME=registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd

IMAGES_NAME="kube-proxy-amd64:v1.10.4
kube-controller-manager-amd64:v1.10.4
kube-scheduler-amd64:v1.10.4
kube-apiserver-amd64:v1.10.4
etcd-amd64:3.1.12
k8s-dns-dnsmasq-nanny-amd64:1.14.8
k8s-dns-sidecar-amd64:1.14.8
k8s-dns-kube-dns-amd64:1.14.8
pause-amd64:3.1
heapster-amd64:v1.5.3
kubernetes-dashboard-amd64:v1.8.3
heapster-influxdb-amd64:v1.3.3"

for i in $IMAGES_NAME
do
  docker pull $REGISTRY_NAME/$i
  docker tag $REGISTRY_NAME/$i k8s.gcr.io/$i
done

# Flannel is hosted on quay.io upstream, so it is tagged separately.
docker pull $REGISTRY_NAME/flannel:v0.10.0-amd64
docker tag $REGISTRY_NAME/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
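
To use it, save the script, e.g. as pull-images.sh (a name chosen here for illustration), and run it:

bash pull-images.sh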

Once the script has run, Docker has pulled the required images locally and re-tagged them.
docker images then shows:

REPOSITORY                                                                     TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy-amd64                                                    v1.10.4             3f9ff47d0fca        2 weeks ago         97.1MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-proxy-amd64                v1.10.4             3f9ff47d0fca        2 weeks ago         97.1MB
k8s.gcr.io/kube-controller-manager-amd64                                       v1.10.4             1a24f5586598        2 weeks ago         148MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-controller-manager-amd64   v1.10.4             1a24f5586598        2 weeks ago         148MB
k8s.gcr.io/kube-apiserver-amd64                                                v1.10.4             afdd56622af3        2 weeks ago         225MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-apiserver-amd64            v1.10.4             afdd56622af3        2 weeks ago         225MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kube-scheduler-amd64            v1.10.4             6fffbea311f0        2 weeks ago         50.4MB
k8s.gcr.io/kube-scheduler-amd64                                                v1.10.4             6fffbea311f0        2 weeks ago         50.4MB
k8s.gcr.io/heapster-amd64                                                      v1.5.3              f57c75cd7b0a        7 weeks ago         75.3MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/heapster-amd64                  v1.5.3              f57c75cd7b0a        7 weeks ago         75.3MB
hello-world                                                                    latest              e38bc07ac18e        2 months ago        1.85kB
k8s.gcr.io/etcd-amd64                                                          3.1.12              52920ad46f5b        3 months ago        193MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/etcd-amd64                      3.1.12              52920ad46f5b        3 months ago        193MB
k8s.gcr.io/kubernetes-dashboard-amd64                                          v1.8.3              0c60bcf89900        4 months ago        102MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/kubernetes-dashboard-amd64      v1.8.3              0c60bcf89900        4 months ago        102MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/flannel                         v0.10.0-amd64       f0fad859c909        4 months ago        44.6MB
quay.io/coreos/flannel                                                         v0.10.0-amd64       f0fad859c909        4 months ago        44.6MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/k8s-dns-dnsmasq-nanny-amd64     1.14.8              c2ce1ffb51ed        5 months ago        40.9MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                                         1.14.8              c2ce1ffb51ed        5 months ago        40.9MB
k8s.gcr.io/k8s-dns-sidecar-amd64                                               1.14.8              6f7f2dc7fab5        5 months ago        42.2MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/k8s-dns-sidecar-amd64           1.14.8              6f7f2dc7fab5        5 months ago        42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64                                              1.14.8              80cc5ea4b547        5 months ago        50.5MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/k8s-dns-kube-dns-amd64          1.14.8              80cc5ea4b547        5 months ago        50.5MB
k8s.gcr.io/pause-amd64                                                         3.1                 da86e6ba6ca1        6 months ago        742kB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/pause-amd64                     3.1                 da86e6ba6ca1        6 months ago        742kB
k8s.gcr.io/heapster-influxdb-amd64                                             v1.3.3              577260d221db        9 months ago        12.5MB
registry.cn-shenzhen.aliyuncs.com/k8s-opswolrd/heapster-influxdb-amd64         v1.3.3              577260d221db        9 months ago        12.5MB

Re-run the init command:
kubeadm init --pod-network-cidr=10.244.0.0/16

Note: run kubeadm reset before re-initializing, otherwise init will fail.
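
A minimal re-run sequence (sketch):

kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16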

Success output:


Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.30.16.205:6443 --token ngaatd.59490lbqvjl68dul --discovery-token-ca-cert-hash sha256:3ad459ffdd0e92008304864a56f3ed19938a4ce2603cfbecac060f60d0358d0b

Install and configure the Flannel network

Command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

This errors with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubectl get node fails with the same error.

Cause: kubectl does not know where the API server is.
Fix: point it at /etc/kubernetes/admin.conf, as hinted in the master init output above:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

or
export  KUBECONFIG=/etc/kubernetes/admin.conf
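
To make this survive new shell sessions, the export can be appended to the shell profile (a sketch):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc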

Re-run the network installation:

[root@node205 ~]# kubectl apply -f kube-flannel-v0.10.0.yml 
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created

Verify the master node

The master node deployment is now complete.
Check the cluster state with kubectl:


[root@node205 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node205   Ready     master    1h        v1.10.4

[root@node205 ~]# kubectl cluster-info
Kubernetes master is running at https://10.30.16.205:6443
KubeDNS is running at https://10.30.16.205:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@node205 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-node205                      1/1       Running   0          1h
kube-system   kube-apiserver-node205            1/1       Running   0          1h
kube-system   kube-controller-manager-node205   1/1       Running   0          1h
kube-system   kube-dns-86f4d74b45-lzds4         3/3       Running   0          1h
kube-system   kube-flannel-ds-2k866             1/1       Running   0          2m
kube-system   kube-proxy-7flzg                  1/1       Running   0          1h
kube-system   kube-scheduler-node205            1/1       Running   0          1h

Install and configure worker nodes

  1. Install the required packages with yum:
    yum install -y kubelet kubeadm kubectl
  2. Enable kubelet at boot:
    systemctl enable kubelet
  3. Add the node to the cluster with kubeadm join:
    kubeadm join 10.30.16.205:6443 --token ngaatd.59490lbqvjl68dul --discovery-token-ca-cert-hash sha256:3ad459ffdd0e92008304864a56f3ed19938a4ce2603cfbecac060f60d0358d0b

The join command comes from the master init output. Running it on the node prints:

[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "10.30.16.205:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.30.16.205:6443"
[discovery] Requesting info from "https://10.30.16.205:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.30.16.205:6443"
[discovery] Successfully established connection with API Server "10.30.16.205:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
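
Note that bootstrap tokens from kubeadm init expire (after 24 hours by default). If a node joins later, a fresh token can be created on the master and the CA cert hash recomputed (a sketch using standard kubeadm and openssl commands):

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'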

Running kubectl get nodes on the node fails with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix: copy admin.conf over from the master node.
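
A sketch of copying it over (the master IP comes from the join command above; the paths are the standard ones):

mkdir -p $HOME/.kube
scp root@10.30.16.205:/etc/kubernetes/admin.conf $HOME/.kube/config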

The cgroup driver change on the node is the same as on the master.

kubectl get nodes now shows both nodes:

[root@node206 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node205   Ready     master    2h        v1.10.4
node206   Ready     <none>    2m        v1.10.4

Install and configure the dashboard

  1. Download the config file: https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml
  2. Change the Service type to NodePort so it is reachable from outside the cluster:
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
  3. Create it: kubectl create -f kubernetes-dashboard.yaml

  4. Check status:
    kubectl get pods --all-namespaces

  5. View details and logs:
    kubectl describe po kubernetes-dashboard --namespace=kube-system
    kubectl logs -f kubernetes-dashboard-latest-3243398-thc7k -n kube-system

  6. Delete:
    kubectl delete -f kubernetes-dashboard.yaml
    In this version the single command above is enough to delete it.
    If the pod keeps getting recreated, also run:
    kubectl delete deployment kubernetes-dashboard --namespace=kube-system

  7. Give the dashboard admin rights over the cluster:
    Create an RBAC file:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Create it with kubectl create -f .
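
For example, assuming the RBAC manifest above was saved as dashboard-admin-rbac.yaml (a file name chosen here for illustration):

kubectl create -f dashboard-admin-rbac.yaml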

Check:

[root@node205 dashboard]# kubectl get secret -o wide --all-namespaces | grep dash
kube-system   kubernetes-dashboard-admin-token-k68zs           kubernetes.io/service-account-token   3         20s
kube-system   kubernetes-dashboard-certs                       Opaque                                0         10m
kube-system   kubernetes-dashboard-key-holder                  Opaque                                2         9d
kube-system   kubernetes-dashboard-token-fs5q4                 kubernetes.io/service-account-token   3         10m

kubernetes-dashboard-admin-token-k68zs is the secret carrying the admin token; inspect it to read the token value:

kubectl describe secret kubernetes-dashboard-admin-token-k68zs -n kube-system
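
A one-liner sketch that prints only the token value (the -k68zs suffix is random per cluster, so substitute your own secret name):

kubectl describe secret kubernetes-dashboard-admin-token-k68zs -n kube-system | grep '^token' | awk '{print $2}'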

Copy the token, choose Token on the dashboard login page, and paste the value to log in. With the NodePort configured above, the dashboard is reachable at, e.g., https://10.30.16.205:30443.
