This article builds on the work at http://www.cnblogs.com/LinuxGo/p/5729788.html, with fixes for a number of issues that show up with the newer component versions.
| Component | Version |
| --- | --- |
| etcd | 3.1.0 |
| Flannel | 0.5.5 |
| Kubernetes | 1.6.0-alpha |
| Host | IP | OS |
| --- | --- | --- |
| k8s-master | 172.16.203.133 | Ubuntu 16.04 |
| k8s-node01 | 172.16.203.133 | Ubuntu 16.04 |
Install the latest Docker Engine on every host: https://docs.docker.com/engine/installation/linux/ubuntu/
We will install and run the etcd cluster on a single host.
Download etcd on the deployment machine:
ETCD_VERSION=${ETCD_VERSION:-"3.1.0"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
sudo mkdir -p /opt/bin && sudo mv * /opt/bin
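An optional sanity check to confirm the binaries landed in /opt/bin and run on this host:

/opt/bin/etcd --version      # should report etcd Version: 3.1.0
/opt/bin/etcdctl --version   # the matching etcdctl client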
On the host running etcd, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service with the contents below (be sure to replace the IP addresses with your own).
/opt/config/etcd.conf
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
sudo cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd.etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=master=http://172.16.203.133:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
GOMAXPROCS=$(nproc)
EOF
/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
Then run the following on that host:
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
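To verify that etcd came up correctly, a minimal check against the client URL configured above:

systemctl status etcd --no-pager
/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" cluster-health
/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" member list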
Next, download Flannel on the deployment machine:

FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
Build Kubernetes on the deployment machine:
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-tars/kubernetes-server-linux-amd64.tar.gz -C /tmp
cd /tmp
mkdir -p ~/kube
cp kubernetes/server/bin/kube-apiserver \
   kubernetes/server/bin/kube-controller-manager \
   kubernetes/server/bin/kube-scheduler \
   kubernetes/server/bin/kubelet \
   kubernetes/server/bin/kube-proxy ~/kube
cp flannel-${FLANNEL_VERSION}/flanneld ~/kube
sudo mv ~/kube/* /opt/bin/
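Optionally confirm the freshly built binaries report their versions (the exact version strings depend on the git revision you built; the flanneld version flag is assumed to be supported by 0.5.5):

/opt/bin/kube-apiserver --version
/opt/bin/kubelet --version
/opt/bin/flanneld --version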
On the master host, run the following commands to create the certificates:
mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=172.16.203.133
echo subjectAltName = IP:${MASTER_IP} > extfile.cnf
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000 -extfile extfile.cnf
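To double-check that the SAN extension actually made it into the serving certificate (clients connecting to the API server by IP rely on it):

openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"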
We use the following Service cluster IP range and Flannel network:
SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16
FLANNEL_NET=192.168.0.0/16
On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://172.16.203.133:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=172.16.203.133 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
On the master host, create /lib/systemd/system/flanneld.service with the following content:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379" \
  --iface=172.16.203.133 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Write the Flannel network configuration into etcd, then enable and start the master services:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" mk /coreos.com/network/config \
  '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
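At this point the control plane should be answering on the insecure port; a few quick smoke tests (adjust the IP if yours differs):

curl http://127.0.0.1:8080/healthz    # should print "ok"
curl http://127.0.0.1:8080/version    # API server build info
cat /run/flannel/subnet.env           # written by flanneld once it obtains a subnet lease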
Point Docker at the Flannel subnet and restart it:

source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
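After the restart, docker0 should sit inside the Flannel subnet; something like the following confirms it (flannel.1 is the interface the vxlan backend is expected to create):

ip addr show docker0      # inet address should fall within ${FLANNEL_SUBNET}
ip addr show flannel.1    # vxlan device managed by flanneld
ps -ef | grep dockerd     # the command line should now carry --bip and --mtu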
On the node host, copy the kubelet, kube-proxy and flanneld binaries into /opt/bin:

cd /tmp
mkdir -p ~/kube
cp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy ~/kube
cp flannel-${FLANNEL_VERSION}/flanneld ~/kube
sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/
See the corresponding steps in the Master section: configure the flanneld service, start flanneld, and modify the Docker service. Remember to change the --iface address to the node's own IP, as sketched below.
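For reference, the only line that changes in the node's /lib/systemd/system/flanneld.service is the --iface value; a sketch with a placeholder for the node's own address (replace <NODE_IP>, which is not spelled out in the original article):

ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379" \
  --iface=<NODE_IP> \
  --ip-masq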
Create /lib/systemd/system/kubelet.service (remember to change the IP addresses):
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.133 \
  --api-servers=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Start the service:
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
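If the kubelet registered successfully, the master should now list the node; a quick check against the insecure API port configured earlier:

curl http://172.16.203.133:8080/api/v1/nodes   # the node should appear in the items list
journalctl -u kubelet -n 20 --no-pager         # recent kubelet logs, useful if it does not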
Create /lib/systemd/system/kube-proxy.service (remember to change the IP addresses):
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
  --hostname-override=172.16.203.133 \
  --master=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service:
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
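kube-proxy programs iptables rules for Services; a rough way to confirm it is running and doing so:

systemctl status kube-proxy --no-pager
sudo iptables-save | grep KUBE-    # KUBE-SERVICES and related chains should be present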
Configure kubectl:
cd /tmp
sudo mv kubernetes/server/bin/kubectl /usr/bin/kubectl
mkdir -p ~/.kube
vi ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: crts/ca.crt
    server: https://172.16.203.133:6443
  name: minikube
- cluster:
    insecure-skip-tls-verify: true
    server: http://172.16.203.133:8080
  name: ubuntu
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: crts/server.crt
    client-key: crts/server.key
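With kubectl configured, a short end-to-end smoke test (a sketch; it assumes the current context points at the cluster defined above and that the node can pull images from Docker Hub):

kubectl get nodes
kubectl get componentstatuses
kubectl run nginx --image=nginx --replicas=1 --port=80
kubectl get pods -o wide                              # the pod IP should come from the Flannel range 192.168.0.0/16
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                                 # note the NodePort, then curl http://<node-ip>:<nodeport>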
PS: Thanks to Linux&GO for the detailed original write-up!