Setting up a 3-node Kubernetes cluster on Ubuntu

1. Prepare three machines

ubuntu 18.04 master  192.168.0.10
ubuntu 18.04 worker1 192.168.0.11
ubuntu 18.04 worker2 192.168.0.12

2. All machines have Docker installed. For the installation steps, see the official Docker documentation: installing docker-ce on Ubuntu.

# docker version

Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:25:03 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:27 2018
  OS/Arch:          linux/amd64
  Experimental:     false

3. On all machines, disable the firewall and turn off swap

# systemctl stop ufw
# systemctl disable ufw
# systemctl status ufw
# swapoff -a

Note that `swapoff -a` only disables swap until the next reboot.
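To keep swap off across reboots as well, the swap entries in /etc/fstab need to be commented out. A minimal sketch (the sed pattern assumes a conventional fstab layout with whitespace-separated fields; a backup is written first, and you should check the file afterwards):

```shell
# turn swap off for the current boot
swapoff -a

# keep it off across reboots: comment out any swap lines in /etc/fstab
# (a backup copy is saved as /etc/fstab.bak)
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab

# verify: the Swap line of `free -h` should now read 0B
free -h
```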

4. Configure /etc/hosts on each machine (using the names and addresses from step 1)

# vi /etc/hosts
127.0.0.1 localhost
192.168.0.10 master
192.168.0.11 worker1
192.168.0.12 worker2

5. Use a mirror site inside China to install kubectl, kubelet and kubeadm on all nodes

# Download the GPG key

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

Or download apt-key.gpg separately and then run

apt-key add apt-key.gpg

# Add the Kubernetes package source

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

# Update the package index
apt-get update

Install kubectl, kubelet and kubeadm; you can also pin a specific version.
 
apt-get install -y kubectl kubelet kubeadm

List the available versions

apt-cache madison  kubeadm kubelet kubectl

Install a specific version

apt-get install -y kubelet=1.15.1-00 kubeadm=1.15.1-00 kubectl=1.15.1-00
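When pinning a specific version it is also worth holding the packages, so a routine `apt-get upgrade` cannot bump the cluster components out of sync with each other. A sketch of that precaution:

```shell
# prevent apt-get upgrade from silently replacing the pinned versions
apt-mark hold kubelet kubeadm kubectl

# later, when you deliberately upgrade the cluster, release the hold first:
# apt-mark unhold kubelet kubeadm kubectl
```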

6. Initialize the master node. The k8s.gcr.io registry cannot be reached from mainland China, so the required images must be pulled from a registry that is reachable, then re-tagged with the names Kubernetes expects, before the installation can proceed. The following command shows which images a given version needs:

# kubeadm config images list --kubernetes-version=v1.15.1

k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# Pull the images above from docker.io

docker pull mirrorgooglecontainers/kube-apiserver:v1.15.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.1
docker pull mirrorgooglecontainers/kube-proxy:v1.15.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

# Re-tag the pulled images with the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.15.1  k8s.gcr.io/kube-apiserver:v1.15.1

docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1

docker tag mirrorgooglecontainers/kube-scheduler:v1.15.1  k8s.gcr.io/kube-scheduler:v1.15.1

docker tag mirrorgooglecontainers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1

docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10

docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
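The pull/tag pairs above can also be generated with a short loop instead of being typed one by one. The sketch below only prints the docker commands (pipe its output through `sh` to actually run them), which also makes it easy to review before executing; the image list is the one reported by `kubeadm config images list` above:

```shell
# images required by kubeadm for v1.15.1; coredns is handled separately
# because it lives under its own Docker Hub organisation
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10; do
  echo "docker pull mirrorgooglecontainers/${img}"
  echo "docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img}"
done

echo "docker pull coredns/coredns:1.3.1"
echo "docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1"
```

For example, save it as pull-images.sh, inspect the output with `sh pull-images.sh`, then execute with `sh pull-images.sh | sh`.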

7. With the images in place, run the init operation on the master node

Some posts suggest that the pod network must not overlap with the host network: if the hosts use the 192.168.x.x range, allocate 10.0.0.0/16 for pod-network-cidr, and if the hosts use 10.0.x.x, allocate 192.168.0.0/16. My hosts are on the 192.168 range. In my testing, when Calico is the network plugin, pod-network-cidr should be set to 10.0.0.0/16; when flannel is the network plugin, it should be set to 10.244.0.0/16 (flannel's default). Otherwise coredns ends up in CrashLoopBackOff.

7.1 The init command when flannel is used as the network plugin:

# kubeadm init  --kubernetes-version v1.15.1 --pod-network-cidr=10.244.0.0/16     

Below is the output of that command:

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [server01 localhost] and IPs [192.168.0.181 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [server01 localhost] and IPs [192.168.0.181 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [server01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.181]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing 
... (the rest of the kubeadm init output is truncated here)
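The tail of the init output (truncated above) normally explains how to start using the cluster. A sketch of the usual next steps, assuming the flannel pod CIDR from step 7.1 (the flannel manifest URL reflects the repository layout at the time of writing; the exact join token and hash come from your own init output):

```shell
# on the master: point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install the flannel network add-on (matches --pod-network-cidr=10.244.0.0/16)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# on each worker: run the `kubeadm join` command printed at the end of
# `kubeadm init` (it carries a token and CA cert hash specific to your cluster)

# back on the master: nodes should reach Ready once the network add-on is up
kubectl get nodes
```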
