Contents
1: Mutual name resolution, disable the firewall, disable swap, keep the time consistent on all three servers (do these steps on all three)
2: Install Docker (all three)
3: Install etcd
4: Install flannel
5: Install CNI
6: K8S cluster CA certificates
7: Install K8S
8: Install the K8S WEB UI (dashboard)
Let me lay out the environment:

| Host | Spec | IP |
|---|---|---|
| k8s (master) | 2 cores, 4G RAM, 40G disk | 192.168.3.121 |
| node1 | 2 cores, 4G RAM, 40G disk | 192.168.3.193 |
| node2 | 2 cores, 4G RAM, 40G disk | 192.168.3.219 |

| Component | Version / package |
|---|---|
| kubernetes | 1.10.7 |
| flannel | flannel-v0.10.0-linux-amd64.tar.gz |
| etcd | etcd-v3.3.8-linux-amd64.tar.gz |
| CNI | cni-plugins-amd64-v0.7.1.tgz |
| docker | 18.03.1-ce |
Download link: pan.baidu.com/s/1vhlUkQjI8hMSBM7EJbuPbA
A few headings got mangled by Google Translate, but the content itself is fine....
1.1: Mutual name resolution
[root@k8s ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.121 k8s
192.168.3.193 node1
192.168.3.219 node2
[root@k8s ~]# scp /etc/hosts node1:/etc/hosts
[root@k8s ~]# scp /etc/hosts node2:/etc/hosts
1.2: Disable firewalld and SELinux
[root@k8s ~]# systemctl stop firewalld
[root@k8s ~]# setenforce 0
1.3: Disable the swap partition
[root@k8s ~]# swapoff -a
[root@k8s ~]# vim /etc/fstab    // comment out the /dev/mapper/centos-swap swap line
1.4: The time zones differed, so I used tzselect
[root@k8s ~]# tzselect
[root@k8s ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cp: overwrite ‘/etc/localtime’? y
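To keep the clocks on the three servers consistent, one simple approach (a sketch; it assumes the ntpdate package is available and the machines can reach ntp.aliyun.com) is to sync them all against the same NTP server:
[root@k8s ~]# yum install -y ntpdate
[root@k8s ~]# ntpdate ntp.aliyun.com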
It is also a good idea to distribute public keys with ssh-copy-id.
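A minimal sketch of that key distribution, run from the master (it assumes password login to the nodes still works):
[root@k8s ~]# ssh-keygen -t rsa        # accept the defaults
[root@k8s ~]# ssh-copy-id root@node1
[root@k8s ~]# ssh-copy-id root@node2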
2.1: Remove any existing Docker packages
[root@k8s ~]# yum remove docker docker-common docker-selinux docker-engine
2.2: Install the dependencies Docker needs
[root@k8s ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
2.3: Add the yum repo
[root@node1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
2.4: List the available Docker versions
[root@k8s ~]# yum list docker-ce --showduplicates | sort -r
2.5: Install 18.03.1.ce
[root@k8s ~]# yum -y install docker-ce-18.03.1.ce
2.6: Start Docker
[root@k8s ~]# systemctl start docker
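Optionally, enable Docker to start on boot as well (the same goes for the other services configured later, if you want them to survive a reboot):
[root@k8s ~]# systemctl enable docker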
3.1: Install etcd
[root@k8s ~]# tar -zxvf etcd-v3.3.8-linux-amd64.tar.gz
[root@k8s ~]# cd etcd-v3.3.8-linux-amd64
[root@k8s etcd-v3.3.8-linux-amd64]# cp etcd etcdctl /usr/bin
[root@k8s etcd-v3.3.8-linux-amd64]# mkdir -p /var/lib/etcd /etc/etcd
Do all of the above on every node.
etcd configuration files
All three machines need them, with small differences. Two files are involved:
/usr/lib/systemd/system/etcd.service
and /etc/etcd/etcd.conf
The leader/follower relationship inside the etcd cluster is not the same thing as the master/node relationship of the Kubernetes cluster.
The etcd config files simply describe the three etcd members; the etcd cluster elects its own leader while starting up and running.
So the config files only reflect the three members etcd-i, etcd-ii and etcd-iii.
Once the config files on all three nodes are in place, the etcd cluster can be started.
On k8s (master):
[root@k8s ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
[root@k8s ~]# cat /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-i
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# peer URLs the other etcd members connect to
ETCD_LISTEN_PEER_URLS="http://192.168.3.121:2380"
# client listen URLs
ETCD_LISTEN_CLIENT_URLS="http://192.168.3.121:2379,http://127.0.0.1:2379"
#[cluster]
# peer URLs advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.3.121:2380"
# initial addresses of all cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.3.121:2380,etcd-ii=http://192.168.3.193:2380,etcd-iii=http://192.168.3.219:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.3.121:2379,http://127.0.0.1:2379"
On node1:
[root@node1 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
[root@node1 ~]# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd-ii
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.3.193:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.3.193:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.3.193:2380"
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.3.121:2380,etcd-ii=http://192.168.3.193:2380,etcd-iii=http://192.168.3.219:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.3.193:2379,http://127.0.0.1:2379"
On node2:
[root@node2 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
[root@node2 ~]# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd-iii
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.3.219:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.3.219:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.3.219:2380"
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.3.121:2380,etcd-ii=http://192.168.3.193:2380,etcd-iii=http://192.168.3.219:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.3.219:2379,http://127.0.0.1:2379"
3.2: Start and check the etcd cluster (start etcd on all three nodes)
[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# systemctl start etcd.service
[root@k8s ~]# etcdctl member list
1f6e47d3e5c09902: name=etcd-i peerURLs=http://192.168.3.121:2380 clientURLs=http://127.0.0.1:2379,http://192.168.3.121:2379 isLeader=true
8059b18c36b2ba6b: name=etcd-ii peerURLs=http://192.168.3.193:2380 clientURLs=http://127.0.0.1:2379,http://192.168.3.193:2379 isLeader=false
ad715b003d53f3e6: name=etcd-iii peerURLs=http://192.168.3.219:2380 clientURLs=http://127.0.0.1:2379,http://192.168.3.219:2379 isLeader=false
[root@k8s ~]# etcdctl cluster-health
member 1f6e47d3e5c09902 is healthy: got healthy result from http://127.0.0.1:2379
member 8059b18c36b2ba6b is healthy: got healthy result from http://127.0.0.1:2379
member ad715b003d53f3e6 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
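As an optional extra sanity check, you can write and read a throw-away key through the same v2 API etcdctl uses above:
[root@k8s ~]# etcdctl set /test "hello"
hello
[root@k8s ~]# etcdctl get /test
hello
[root@k8s ~]# etcdctl rm /test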
Do this on every machine in the cluster.
The flannel service depends on etcd, so etcd must be installed and working first. -etcd-endpoints points flannel at the etcd cluster, and -etcd-prefix is the key prefix under which flannel's network configuration is stored in etcd.
4.1: Install flannel
[root@k8s ~]# mkdir -p /opt/flannel/bin/
[root@k8s ~]# tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/flannel/bin/
[root@k8s ~]# cat /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/flannel/bin/flanneld -etcd-endpoints=http://192.168.3.121:2379,http://192.168.3.193:2379,http://192.168.3.219:2379 -etcd-prefix=coreos.com/network
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Run the following command to write the flannel network configuration (the IP ranges and so on can be adjusted):
[root@k8s ~]# etcdctl mk /coreos.com/network/config '{"Network":"172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0", "Backend": {"Type": "vxlan"}}'
To delete it: etcdctl rm /coreos.com/network/config (but if you delete it, the later steps will fail).
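You can confirm the configuration really landed in etcd with a quick read-back:
[root@k8s ~]# etcdctl get /coreos.com/network/config
{"Network":"172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0", "Backend": {"Type": "vxlan"}}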
4.2: Pull the flannel image
The flannel service depends on the flannel image, so pull it first. Run the following to pull it from Aliyun and re-tag it:
[root@node2 ~]# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64
[root@node2 ~]# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0
Configure Docker
The flannel unit file contains this line:
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
After flanneld starts it runs mk-docker-opts.sh, which generates /etc/docker/flannel_net.env.
flannel changes the Docker network: flannel_net.env holds the Docker startup options generated by flannel, so the Docker unit has to be modified as well:
/usr/lib/systemd/system/docker.service
[root@k8s ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
# After=network-online.target firewalld.service
After=network-online.target flannel.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/etc/docker/flannel_net.env
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
#ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
After: start Docker only after flannel has started.
EnvironmentFile: the Docker startup options generated by flannel.
ExecStart: pass the flannel-generated $DOCKER_OPTS to dockerd.
ExecStartPost: runs after Docker starts and sets the host's iptables FORWARD policy to ACCEPT.
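For reference, the generated /etc/docker/flannel_net.env usually looks roughly like this (the subnet and MTU below are only examples and will differ on each host):
DOCKER_OPTS=" --bip=172.18.19.1/24 --ip-masq=true --mtu=1450"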
4.3: Start flannel
[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# systemctl start flannel.service
[root@k8s ~]# systemctl restart docker.service
Do this on every machine in the cluster.
CNI (Container Network Interface) is a set of standards and libraries for configuring networking in Linux containers; container network plugins are developed against these standards and libraries.
[root@k8s ~]# mkdir -p /opt/cni/bin /etc/cni/net.d
[root@k8s ~]# tar -xzvf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
[root@k8s ~]# cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cni0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "forceAddress": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
CA certificates

| Certificate purpose | Files |
|---|---|
| Root certificate and key | ca.crt, ca.key |
| kube-apiserver certificate and key | apiserver-key.pem, apiserver.pem |
| kube-controller-manager / kube-scheduler client certificate and key | cs_client.crt, cs_client.key |
| kubelet / kube-proxy client certificate and key | kubelet_client.crt, kubelet_client.key |

On the master
Create the certificate directory (use your own hostname in /CN=)
6.1: Generate the root certificate and key
[root@k8s ~]# mkdir -p /etc/kubernetes/ca
[root@k8s ~]# cd /etc/kubernetes/ca/
[root@k8s ca]# openssl genrsa -out ca.key 2048
[root@k8s ca]# openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s" -days 5000 -out ca.crt
6.2: Generate the kube-apiserver certificate and key
[root@k8s ca]# cat master_ssl.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s
IP.1 = 172.18.0.1
IP.2 = 192.168.3.121
[root@k8s ca]# openssl genrsa -out apiserver-key.pem 2048
[root@k8s ca]# openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=k8s" -config master_ssl.conf
[root@k8s ca]# openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile master_ssl.conf
6.3: Generate the kube-controller-manager / kube-scheduler certificate and key
[root@k8s ca]# openssl genrsa -out cs_client.key 2048
[root@k8s ca]# openssl req -new -key cs_client.key -subj "/CN=k8s" -out cs_client.csr
[root@k8s ca]# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000
6.4: Copy the CA to the nodes (make sure /etc/kubernetes/ca exists on node1 and node2 first)
[root@k8s ca]# scp ca.crt ca.key node1:/etc/kubernetes/ca/
[root@k8s ca]# scp ca.crt ca.key node2:/etc/kubernetes/ca/
----------------------------------------------------------------
At this point, if any of the files below is missing, go face the wall and think about why.
[root@k8s ca]# ls
apiserver.csr apiserver.pem ca.key cs_client.crt cs_client.key
apiserver-key.pem ca.crt ca.srl cs_client.csr master_ssl.conf
----------------------------------------------------------------
6.5: node1 certificate setup
[root@node1 ca]# openssl genrsa -out kubelet_client.key 2048
[root@node1 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.3.193" -out kubelet_client.csr
[root@node1 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
6.6: node2 certificate setup
[root@node2 ca]# openssl genrsa -out kubelet_client.key 2048
[root@node2 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.3.219" -out kubelet_client.csr
[root@node2 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
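On each node you can quickly sanity-check that the client certificate verifies against the CA that was copied over:
[root@node2 ca]# openssl verify -CAfile ca.crt kubelet_client.crt
kubelet_client.crt: OK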
7.1: Install
[root@k8s ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@k8s ~]# cd /opt/kubernetes/server/bin
[root@k8s bin]# cp -a `ls | egrep -v "\.tar|_tag"` /usr/bin
[root@k8s bin]# mkdir -p /var/log/kubernetes
7.2: Configure kube-apiserver
[root@k8s bin]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver.conf
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
7.3: Configure apiserver.conf
[root@k8s bin]# cat /etc/kubernetes/apiserver.conf
KUBE_API_ARGS="\
--storage-backend=etcd3 \
--etcd-servers=http://192.168.3.121:2379,http://192.168.3.193:2379,http://192.168.3.219:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--service-cluster-ip-range=172.18.0.0/16 \
--service-node-port-range=1-65535 \
--kubelet-port=10250 \
--advertise-address=192.168.3.121 \
--allow-privileged=false \
--anonymous-auth=false \
--client-ca-file=/etc/kubernetes/ca/ca.crt \
--tls-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
--tls-cert-file=/etc/kubernetes/ca/apiserver.pem \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,NamespaceExists,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
#############################
# Explanation
--etcd-servers       # etcd cluster endpoints to connect to
--secure-port        # enables the secure port, 6443
--client-ca-file, --tls-private-key-file, --tls-cert-file   # CA certificate configuration
--enable-admission-plugins   # admission control plugins to enable
--anonymous-auth=false       # reject anonymous requests; true would accept them. Set to false here with dashboard access in mind.
7.4: Configure kube-controller-manager (the service unit references the .conf file, which references the kubeconfig YAML)
[root@k8s bin]# cat /etc/kubernetes/kube-controller-config.yaml
apiVersion: v1
kind: Config
users:
- name: controller
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: controller
  name: default-context
current-context: default-context
7.5: Configure kube-controller-manager.service
[root@k8s bin]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
7.6: Configure controller-manager.conf
[root@k8s bin]# cat /etc/kubernetes/controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="\
--master=https://192.168.3.121:6443 \
--service-account-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
--root-ca-file=/etc/kubernetes/ca/ca.crt \
--cluster-signing-cert-file=/etc/kubernetes/ca/ca.crt \
--cluster-signing-key-file=/etc/kubernetes/ca/ca.key \
--kubeconfig=/etc/kubernetes/kube-controller-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
#######################
--master connects to the master (kube-apiserver)
--service-account-private-key-file, --root-ca-file, --cluster-signing-cert-file, --cluster-signing-key-file configure the CA certificates
--kubeconfig is the kubeconfig file
7.7: Configure kube-scheduler
[root@k8s bin]# cat /etc/kubernetes/kube-scheduler-config.yaml
apiVersion: v1
kind: Config
users:
- name: scheduler
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: scheduler
  name: default-context
current-context: default-context
[root@k8s bin]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler.conf
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@k8s bin]# cat /etc/kubernetes/scheduler.conf
KUBE_SCHEDULER_ARGS="\
--master=https://192.168.3.121:6443 \
--kubeconfig=/etc/kubernetes/kube-scheduler-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
7.8: Start the master components
[root@k8s bin]# systemctl daemon-reload
[root@k8s bin]# systemctl start kube-apiserver.service    // if this fails to start, the problem is in the config files above
[root@k8s bin]# systemctl start kube-controller-manager.service
[root@k8s bin]# systemctl start kube-scheduler.service
7.9: View the logs
[root@k8s bin]# journalctl -xeu kube-apiserver --no-pager
[root@k8s bin]# journalctl -xeu kube-controller-manager --no-pager
[root@k8s bin]# journalctl -xeu kube-scheduler --no-pager
# add -f to follow the logs in real time
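Once all three master components are up, a quick way to confirm they can reach each other and etcd (kubectl on the master talks to the local apiserver's insecure port by default) is:
[root@k8s bin]# kubectl get componentstatuses
# scheduler, controller-manager and etcd-0/1/2 should all report Healthy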
Install K8S on the nodes (do everything below on both nodes)
[root@node1 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@node1 ~]# cd /opt/kubernetes/server/bin
[root@node1 bin]# cp -a kubectl kubelet kube-proxy /usr/bin/
[root@node1 bin]# mkdir -p /var/log/kubernetes
[root@node1 bin]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# kernel parameters so that iptables filter rules apply to bridged traffic; can be skipped if not needed
[root@node1 bin]# sysctl -p /etc/sysctl.d/k8s.conf    # apply the settings
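If those two bridge sysctls complain with "No such file or directory", the br_netfilter kernel module is probably not loaded yet; load it first and re-apply (a sketch):
[root@node1 bin]# modprobe br_netfilter
[root@node1 bin]# sysctl -p /etc/sysctl.d/k8s.conf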
Node1: configure kubelet
[root@node1 bin]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.3.121:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@node1 bin]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@node1 bin]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
--kubeconfig=/etc/kubernetes/kubelet-config.yaml \
--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
--hostname-override=192.168.3.193 \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
---------------------------------------------------------------------
###################
--hostname-override           # sets the node name; using the node's IP is recommended
#--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--pod-infra-container-image   # the pod infrastructure (pause) image; the default is on Google's registry, so use a domestic mirror or a proxy,
or pull it locally and re-tag it:
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
--kubeconfig                  # the kubeconfig file
Configure kube-proxy
[root@node1 bin]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.3.121:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@node1 bin]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@node1 bin]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
--master=https://192.168.3.121:6443 \
--hostname-override=192.168.3.193 \
--kubeconfig=/etc/kubernetes/proxy-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
Node2: configure kubelet
[root@node2 bin]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.3.121:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@node2 bin]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@node2 bin]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
--kubeconfig=/etc/kubernetes/kubelet-config.yaml \
--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
--hostname-override=192.168.3.219 \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
---------------------------------------------------------------------------------
###################
--hostname-override           # sets the node name; using the node's IP is recommended
#--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--pod-infra-container-image   # the pod infrastructure (pause) image; the default is on Google's registry, so use a domestic mirror or a proxy,
or pull it locally and re-tag it:
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
--kubeconfig                  # the kubeconfig file
Configure kube-proxy
[root@node2 bin]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.3.121:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@node2 bin]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@node2 bin]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
--master=https://192.168.3.121:6443 \
--hostname-override=192.168.3.219 \
--kubeconfig=/etc/kubernetes/proxy-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"
---------------------------------------------------
--hostname-override   # node name; must match the kubelet: if kubelet sets it, kube-proxy must set it too
--master              # connects to the master (kube-apiserver)
--kubeconfig          # the kubeconfig file
Start the node services and check the logs (do on both nodes)
[root@node2 bin]# systemctl daemon-reload
[root@node2 bin]# systemctl start kubelet.service
[root@node2 bin]# systemctl start kube-proxy.service
[root@node2 bin]# journalctl -xeu kubelet --no-pager
[root@node2 bin]# journalctl -xeu kube-proxy --no-pager
# add -f to follow the logs in real time
Check the nodes from the master
[root@k8s ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.3.193 Ready 19m v1.10.7
192.168.3.219 Ready 19m v1.10.7
Cluster test (create the nginx test manifests on the master)
[root@k8s bin]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx-pod
  template:
    metadata:
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@k8s bin]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30081
  selector:
    name: nginx-pod
Apply the YAML files
[root@k8s bin]# kubectl create -f nginx-rc.yaml
[root@k8s bin]# kubectl create -f nginx-svc.yaml
# check pod creation:
[root@k8s bin]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-rc-9qv8g 0/1 ContainerCreating 0 6s 192.168.3.193
nginx-rc-gksh9 0/1 ContainerCreating 0 6s 192.168.3.219
Open http://node-ip:30081/ in a browser.
If the NGINX welcome page appears, everything is working.
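You can also test from the command line instead of a browser; a plain HEAD request against either node's port 30081 should return HTTP/1.1 200 OK from nginx:
[root@k8s bin]# curl -I http://192.168.3.193:30081/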
You can remove the service and the nginx deployment with:
[root@k8s bin]# kubectl delete -f nginx-svc.yaml
[root@k8s bin]# kubectl delete -f nginx-rc.yaml
Download the dashboard YAML
[root@k8s ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml
The image line needs to be changed; the default registry is blocked.
#image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
---------------------------------------------------------------
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
############## after the change #############
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
Add type: NodePort and expose the service on nodePort 30000.
Create the RBAC YAML
[root@k8s ~]# cat dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Create and check
[root@k8s ~]# kubectl create -f kubernetes-dashboard.yaml
[root@k8s ~]# kubectl create -f dashboard-admin.yaml
[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default nginx-rc-9qv8g 1/1 Running 0 26m 172.18.19.2 192.168.3.193
default nginx-rc-gksh9 1/1 Running 0 26m 172.18.58.2 192.168.3.219
kube-system kubernetes-dashboard-66c9d98865-k84kb 1/1 Running 0 23m 172.18.58.3 192.168.3.219
Access
Once it is running you can browse directly to https://NODE_IP:<nodePort>.
In my case that is https://192.168.3.193:30000
You will be prompted to log in; we use token login.
[root@k8s ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}') | grep token
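To double-check which NodePort the dashboard actually ended up on (in case 30000 was changed), look at the service:
[root@k8s ~]# kubectl -n kube-system get service kubernetes-dashboard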
Reference blogs: https://blog.dl1548.site/2018/09/18/%E4%BA%8C%E8%BF%9B%E5%88%B6%E5%AE%89%E8%A3%85kubernetes%E9%9B%86%E7%BE%A4/
https://juejin.im/post/5b46100de51d4519105d37e3