All of the operations below are performed as the root user.
vim /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
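If you prefer to keep the Kubernetes repository out of the main sources.list, a minimal sketch (same Aliyun mirror line as above, just in its own file):
# Optional: keep the Kubernetes repo in its own file under sources.list.d
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF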
apt-get update
apt-get install apt-transport-https -y
# If the download fails, download the gpg file with a browser and then run: cat gpg | apt-key add -
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
apt-get install docker.io -y
Because of network conditions in mainland China, pulling Docker images later will be very slow, so we need to configure a registry mirror (accelerator). I use NetEase's mirror: http://hub-mirror.c.163.com.
Newer versions of Docker use /etc/docker/daemon.json (Linux) or %programdata%\docker\config\daemon.json (Windows) to configure the daemon.
Add the following to that configuration file (create the file first if it does not exist):
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
sudo service docker start
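If Docker was already running when you edited daemon.json, restart it so the mirror takes effect and confirm it was picked up; a minimal sketch, assuming a systemd-based Ubuntu:
# Restart the daemon so it rereads /etc/docker/daemon.json
systemctl restart docker
# The configured mirror should appear under "Registry Mirrors"
docker info | grep -A 1 "Registry Mirrors"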
To run docker directly as a non-root user, run sudo usermod -aG docker suke (suke is my username; use your own), then log out and log back in.
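A minimal sketch of that step (again, substitute your own username for suke):
# Add the user to the docker group (replace "suke" with your username)
sudo usermod -aG docker suke
# Start a shell with the new group active instead of logging out and back in
newgrp docker
# Should now work without sudo
docker ps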
Docker Hello World
Docker lets you run applications inside containers; use the docker run command to run an application in a container.
Print Hello world:
$ docker run ubuntu:15.10 /bin/echo "Hello world"
Hello world
Breakdown of the parameters:
docker run: create a container and run a command inside it
ubuntu:15.10: the image to run; Docker pulls it from the registry if it is not present locally
/bin/echo "Hello world": the command executed inside the container, whose output is shown above
# Run on all nodes: swapoff -a
swapoff -a
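swapoff -a only disables swap until the next reboot. Kubernetes requires swap to stay off, so it is worth also commenting out the swap entry in /etc/fstab; a minimal sketch:
# Comment out any swap lines so swap stays disabled after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab
# Verify: the Swap row should show 0
free -m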
# If the file cannot be downloaded, fetch it with a browser and then run: cat apt-key.gpg | sudo apt-key add -
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-get update
# If there are authentication problems, add --allow-unauthenticated
apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
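You can quickly verify the installation and, optionally, hold the packages so a routine apt upgrade does not change their versions (holding is optional, not something the steps above require):
# Check the installed versions
kubeadm version
kubectl version --client
kubelet --version
# Optional: pin the packages at the current version
apt-mark hold kubelet kubeadm kubectl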
These are the original images Kubernetes needs:
sudo docker pull k8s.gcr.io/kube-apiserver:v1.15.0
sudo docker pull k8s.gcr.io/kube-controller-manager:v1.15.0
sudo docker pull k8s.gcr.io/kube-scheduler:v1.15.0
sudo docker pull k8s.gcr.io/kube-proxy:v1.15.0
sudo docker pull k8s.gcr.io/pause:3.1
sudo docker pull k8s.gcr.io/etcd:3.3.10
sudo docker pull k8s.gcr.io/coredns:1.3.1
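To confirm exactly which image names and tags your kubeadm build expects (they vary between releases), kubeadm can print the list itself:
# Print the images kubeadm init will want to pull
kubeadm config images list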
Tag them and push them to your own image registry:
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.15.0 registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-apiserver:v1.15.0 && \
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.15.0 registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-controller-manager:v1.15.0 && \
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.15.0 registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-scheduler:v1.15.0 && \
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.15.0 registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-proxy:v1.15.0 && \
docker tag docker.io/mirrorgooglecontainers/pause:3.1 registry.cn-hangzhou.aliyuncs.com/mirror-suke/pause:3.1 && \
docker tag docker.io/mirrorgooglecontainers/etcd:3.3.10 registry.cn-hangzhou.aliyuncs.com/mirror-suke/etcd:3.3.10 && \
docker tag docker.io/coredns/coredns:1.3.1 registry.cn-hangzhou.aliyuncs.com/mirror-suke/coredns:1.3.1 && \
docker tag quay.io/coreos/flannel:v0.11.0-amd64 registry.cn-hangzhou.aliyuncs.com/mirror-suke/flannel:v0.11.0-amd64
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-apiserver:v1.15.0 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-controller-manager:v1.15.0 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-scheduler:v1.15.0 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-proxy:v1.15.0 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/pause:3.1 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/etcd:3.3.10 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/coredns:1.3.1 && \
sudo docker push registry.cn-hangzhou.aliyuncs.com/mirror-suke/flannel:v0.11.0-amd64
Because gcr.io is blocked in mainland China, these images cannot be pulled directly. Docker Hub hosts mirrored copies (mirrorgooglecontainers), but the download speed is painful:
docker pull mirrorgooglecontainers/kube-apiserver:v1.15.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.0
docker pull mirrorgooglecontainers/kube-proxy:v1.15.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
I have already pulled them and pushed them to my own Aliyun registry, and pulling from there inside China is fast. Just pull them, then tag them with the image names the system expects:
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-apiserver:v1.15.0 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-controller-manager:v1.15.0 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-scheduler:v1.15.0 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-proxy:v1.15.0 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/pause:3.1 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/etcd:3.3.10 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/coredns:1.3.1 && \
sudo docker pull registry.cn-hangzhou.aliyuncs.com/mirror-suke/flannel:v0.11.0-amd64
After pulling, tag them with the required image names:
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/pause:3.1 k8s.gcr.io/pause:3.1 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1 && \
sudo docker tag registry.cn-hangzhou.aliyuncs.com/mirror-suke/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
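The pull-and-retag steps above can also be written as a short loop. This is only a convenience sketch equivalent to the commands listed, using the same registry.cn-hangzhou.aliyuncs.com/mirror-suke repository:
# Pull each image from the Aliyun mirror and retag it as the k8s.gcr.io name kubeadm expects
MIRROR=registry.cn-hangzhou.aliyuncs.com/mirror-suke
for img in kube-apiserver:v1.15.0 kube-controller-manager:v1.15.0 kube-scheduler:v1.15.0 \
           kube-proxy:v1.15.0 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  sudo docker pull $MIRROR/$img
  sudo docker tag  $MIRROR/$img k8s.gcr.io/$img
done
# flannel keeps its quay.io name
sudo docker pull $MIRROR/flannel:v0.11.0-amd64
sudo docker tag  $MIRROR/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# Confirm the retagged images are present
sudo docker images | grep -E 'k8s.gcr.io|flannel'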
sudo kubeadm init --pod-network-cidr 10.244.0.0/16
--pod-network-cidr sets the internal IP address range that pods on the nodes can use; 10.244.0.0/16 is the default network used by the flannel manifests.
After kubeadm init succeeds, it prints output like the following:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.188.129:6443 --token ixekmt.f5hv1zu3693gtuib \
--discovery-token-ca-cert-hash sha256:54c8326e32a39fd861eb9a93dcc266c2815cfabe6fabc6c21770817830f66288
On the master, as a regular user, set up kubectl access as instructed above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On each worker node, run the join command printed by kubeadm init:
kubeadm join 192.168.188.129:6443 --token ixekmt.f5hv1zu3693gtuib \
--discovery-token-ca-cert-hash sha256:54c8326e32a39fd861eb9a93dcc266c2815cfabe6fabc6c21770817830f66288
Deploy the flannel pod network on the master:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
You can also download these two yml files locally and then run kubectl apply -f on them.
Worker nodes must also have docker, kubeadm, kubectl, and kubelet installed, and must have those same images available.
kubeadm join 192.168.188.129:6443 --token ixekmt.f5hv1zu3693gtuib \
    --discovery-token-ca-cert-hash sha256:54c8326e32a39fd861eb9a93dcc266c2815cfabe6fabc6c21770817830f66288
######
[discovery] Trying to connect to API Server "A-IP:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://B-IP:6443"
[discovery] Requesting info from "https://B-IP:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "B-IP:6443"
[discovery] Successfully established connection with API Server "A-IP:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
######
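If you join a node later and the original token has expired (tokens from kubeadm init are only valid for about 24 hours), generate a fresh, complete join command on the master:
# Prints a new "kubeadm join ..." line including token and CA cert hash
kubeadm token create --print-join-command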
kubectl get node
NAME    STATUS     ROLES    AGE   VERSION
kube1   Ready      master   16h   v1.15.0
kube2   NotReady   <none>   16h   v1.15.0
kube3   Ready      <none>   16h   v1.15.0
If a node is not Ready, it is usually because the flannel add-on is not installed correctly. You can verify this by looking at the pods in the kube-system namespace.
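The listing below was produced on the master with:
kubectl get pods -n kube-system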
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4df8r        1/1     Running   2          16h
coredns-5c98db65d4-z8md8        1/1     Running   2          16h
etcd-kube1                      1/1     Running   5          16h
kube-apiserver-kube1            1/1     Running   4          16h
kube-controller-manager-kube1   1/1     Running   4          16h
kube-flannel-ds-amd64-8dgv2     1/1     Running   3          16h
kube-flannel-ds-amd64-ktpdc     1/1     Running   2          16h
kube-flannel-ds-amd64-zzdzq     1/1     Running   0          16h
kube-proxy-dcnqn                1/1     Running   0          16h
kube-proxy-jmh4x                1/1     Running   4          16h
kube-proxy-lrchv                1/1     Running   2          16h
kube-scheduler-kube1            1/1     Running   4          16h
If the kube-flannel pods are not in the Running state, describe them and you will usually find that their image could not be pulled; once the image is available, they return to normal.
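A quick way to inspect a stuck flannel pod (the pod name here is taken from the listing above; use whatever name appears on your cluster):
# The Events section at the bottom will show image pull failures
kubectl -n kube-system describe pod kube-flannel-ds-amd64-8dgv2
# After the image is available the pod should return to Running
kubectl -n kube-system get pods -o wide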
References