Setting up Kubernetes 1.11.3 with kubeadm on CentOS 7

Successfully set up on an Alibaba Cloud ECS instance and on a Lightweight Application Server.

Note in particular: on a Lightweight Application Server with the BT (宝塔) panel installed, the same procedure repeatedly failed.

 

References:

https://blog.csdn.net/u013355826/article/details/82801482

https://www.datayang.com/article/45

 

  • Preparation

System environment: CentOS 7.3

Version: Kubernetes 1.11.3

 

1. Disable the firewall

(to be studied in more depth later)

systemctl stop firewalld
systemctl disable firewalld
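If you would rather not disable the firewall entirely, a commonly used alternative (not part of the original procedure; the exact list depends on your setup) is to open only the ports Kubernetes needs:

firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer traffic
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, controller-manager, scheduler
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services
firewall-cmd --reload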

2. Disable SELinux

Run the following command (to be studied in more depth later):

vim /etc/sysconfig/selinux

Change SELINUX in the file to:

SELINUX=disabled
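To apply the change without a reboot and to edit the file non-interactively, the following shortcut is commonly used (a sketch; note that setenforce 0 only switches the running system to permissive mode, while the file edit takes effect after the next reboot):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux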

 

3. Turn off swap

Run:

swapoff -a
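swapoff -a only lasts until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab can also be commented out, for example:

sed -i '/ swap / s/^/#/' /etc/fstab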

4. Adjust kernel parameters

  • Run:
vi /etc/sysctl.d/k8s.conf

Add the following to the newly created k8s.conf file:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

 

  • Run the following command to apply the settings:
sudo sysctl --system
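If the two settings are not picked up, the br_netfilter kernel module may need to be loaded first; the values can then be read back to confirm (a quick check, assuming the default module name):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables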

 

5. Configure the Alibaba yum repositories for Kubernetes and Docker

  • Add the Alibaba Kubernetes yum repository
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1

  • Download docker-ce.repo directly
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

After downloading, clean the cache and refresh all repositories:

yum clean all
yum repolist

 

  • Installation

 

1. Install docker, kubeadm, kubelet, and kubectl

yum install -y docker-ce-17.06.0.ce kubelet-1.11.3 kubeadm-1.11.3  kubectl-1.11.3 kubernetes-cni

 

2. Enable the services at boot and start them

systemctl enable docker
systemctl enable kubelet.service
systemctl start docker
systemctl start kubelet
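A quick sanity check that the expected versions were installed and that the services are up (the kubelet may keep restarting until kubeadm init has run; that is expected at this point):

docker version
kubeadm version
kubelet --version
systemctl status docker kubelet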

 

3. Pull the Docker images

For network reasons in mainland China, the Kubernetes images hosted on Google's registry (gcr.io) cannot be downloaded directly, so a community contributor syncs the gcr.io images daily to https://github.com/anjia0532/gcr.io_mirror

Therefore, when gcr.io images are needed, the following script can be used: it pulls each mirrored image in a loop and re-tags it as k8s.gcr.io.

vim pullimages.sh

#!/bin/bash
images=(kube-proxy-amd64:v1.11.3 kube-scheduler-amd64:v1.11.3 kube-controller-manager-amd64:v1.11.3
kube-apiserver-amd64:v1.11.3 etcd-amd64:3.2.18 coredns:1.1.3 pause:3.1)
for imageName in ${images[@]} ; do
  # pull from the mirror, re-tag as k8s.gcr.io, then remove the mirror tag
  docker pull anjia0532/google-containers.$imageName
  docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
  docker rmi anjia0532/google-containers.$imageName
done

 

Run the script:

sh pullimages.sh
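To confirm that all images were pulled and re-tagged correctly:

docker images | grep k8s.gcr.io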

 

4. A Kubernetes cluster does not allow swap to be enabled, so we need to tell the kubelet to ignore this error

vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

 

5. Write kubeadm.yaml

vim kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.11.3"

 

6. Run kubeadm

kubeadm init --config kubeadm.yaml
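If swap was not fully turned off in step 3 of the preparation, the preflight check will still fail even with the kubelet flag set above; in that case the init is commonly run with an extra flag (only needed in that situation):

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap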

This completes the deployment of the Kubernetes Master. The process only takes a few minutes; when it finishes, kubeadm prints a join command like the following:

kubeadm join 10.168.0.2:6443 --token 00bwbx.uvnaa2ewjflwu1ry --discovery-token-ca-cert-hash sha256:00eb62a2a6020f94132e3fe1ab721349bbcd3e9b94da9654cfe15f2985ebd711
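The token and hash shown above are examples; yours will differ. Keep the join command for adding worker nodes later. If it is lost, it can be regenerated on the master at any time:

kubeadm token create --print-join-command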

 

7. Configure kubectl authentication with the apiserver

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check component health
kubectl get cs
  • Check node status; at this point kubectl get shows the single node currently in the cluster:
kubectl get nodes
  • Deploy the Weave network plugin (yml not found):
kubectl apply -f https://git.io/weave-kube-1.6
  • Or deploy the flannel network plugin

Write kube-flannel.yml with the contents shown below.

Apply the yml file:

kubectl apply -f kube-flannel.yml

Reference: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-aliyun.yml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.24.0.0/16",
      "Backend": {
        "Type": "ali-vpc"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/flannel:v0.9.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/flannel:v0.9.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

 

  • Check
kubectl get pods -n kube-system

NAME                                            READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-csxpw                        1/1     Running   0          27m
coredns-78fcdf6894-td848                        1/1     Running   0          27m
etcd-localhost.localdomain                      1/1     Running   0          26m
kube-apiserver-localhost.localdomain            1/1     Running   0          26m
kube-controller-manager-localhost.localdomain   1/1     Running   0          26m
kube-proxy-v78j8                                1/1     Running   0          27m
kube-scheduler-localhost.localdomain            1/1     Running   0          26m
weave-net-vcnb6                                 2/2     Running   0          44s

  • Remove the master taint (so that Pods can be scheduled onto the master node):
kubectl taint nodes --all node-role.kubernetes.io/master-
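To confirm the taint has been removed (the node name below is taken from the output above; replace it with the name shown by kubectl get nodes):

kubectl describe node localhost.localdomain | grep Taints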

 

8. Visualization plugin (Dashboard): download the manifest and image

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0

docker tag  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0   k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

docker rmi  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 
  • Modify kubernetes-dashboard.yaml so that you can log in directly with token authentication

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # Expose the Service as a NodePort
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # Port exposed on the node; the default NodePort range is 30000-32767
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Apply it with the declarative API:

kubectl apply -f kubernetes-dashboard.yaml

Check the status of the Dashboard Pod:

kubectl get pods -n kube-system
  • Deploy the container storage plugin (Rook) and pull its images
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml

 

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
  • Check the installation

kubectl get pods -n rook-ceph-system

kubectl get pods -n rook-ceph

  • Start the kubectl proxy

nohup kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --disable-filter=true &

  • Get the login token
kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
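The command above reads a token from the namespace-controller service account. If you would rather use the dashboard's own service-account token (assuming the dashboard manifest created it, as the recommended manifest normally does; its permissions may be limited), something like this should also work:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')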
  • Access the dashboard

It can be reached via the node's IP plus the nodePort we configured above:

https://<NodeIP>:<NodePort>

[Screenshot: successful dashboard access]

  • Notes
  • List all Pods in all namespaces
kubectl get pods --all-namespaces
  • Describe the Pods in the kube-system namespace
kubectl describe pod -n kube-system
  • Inspect a specific Pod
kubectl describe pod kubernetes-dashboard-767dc7d4d-mg5gw -n kube-system

 
