Environment:
- Server configuration
hostname | role | OS | IP |
---|---|---|---|
test10 | DNS+docker_hub+master | centos7 | 10.10.10.10 |
test11 | node | centos7 | 10.10.10.11 |
test12 | node | centos7 | 10.10.10.12 |
- Component distribution
hostname | OS | IP | component | description |
---|---|---|---|---|
test10 | centos7 | 10.10.10.10 | dnsmasq | DNS resolution |
test10 | centos7 | 10.10.10.10 | harbor | docker image registry |
test10 | centos7 | 10.10.10.10 | etcd | stores cluster configuration and resource state |
test10 | centos7 | 10.10.10.10 | flannel | network tool for docker and the hosts |
test10 | centos7 | 10.10.10.10 | kube-apiserver | exposes the RESTful API; client tools and other components call it to operate on resources |
test10 | centos7 | 10.10.10.10 | kube-scheduler | scheduling service; decides which Node a container is created on |
test10 | centos7 | 10.10.10.10 | kube-controller-manager | manages cluster resources and keeps them in the desired state |
test10 | centos7 | 10.10.10.10 | kubectl | the main command-line management tool for the cluster |
test11 | centos7 | 10.10.10.11 | flannel | network tool for docker and the hosts |
test11 | centos7 | 10.10.10.11 | kubectl | the main command-line management tool for the cluster |
test11 | centos7 | 10.10.10.11 | kubelet | receives creation requests from the Master and reports runtime status back to it |
test11 | centos7 | 10.10.10.11 | kube-proxy | access control (service proxying) |
test12 | centos7 | 10.10.10.12 | flannel | network tool for docker and the hosts |
test12 | centos7 | 10.10.10.12 | kubectl | the main command-line management tool for the cluster |
test12 | centos7 | 10.10.10.12 | kubelet | receives creation requests from the Master and reports runtime status back to it |
test12 | centos7 | 10.10.10.12 | kube-proxy | access control (service proxying) |
- Installation order
order | component | server(s) |
---|---|---|
1 | dnsmasq | test10 |
2 | etcd | test10 |
3 | flannel | test10,test11,test12 |
4 | docker_hub-harbor | test10 |
5 | kube-apiserver | test10 |
6 | kube-scheduler | test10 |
7 | kube-controller-manager | test10 |
8 | kubectl | test10,test11,test12 |
9 | kubelet | test11,test12 |
10 | kube-proxy | test11,test12 |
Kubernetes version: 1.15.1
Installation method: binary files
Authentication: certificate keys
Single-master deployment
Pre-configuration
Configure the servers (all servers)
- Disable SELinux on all servers
setenforce 0
sed -i "s/^SELINUX\=.*/SELINUX=disabled/g" /etc/selinux/config
- Disable the firewall on all servers
systemctl stop firewalld.service
systemctl disable firewalld.service
- Disable swap on all servers
swapoff -a
##Comment out the swap entry in /etc/fstab
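One way to comment out the swap entry non-interactively (a sketch; verify /etc/fstab afterwards):
sed -ri '/\sswap\s/ s/^([^#].*)$/#\1/' /etc/fstab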
- Install common utilities and point the yum repos at the Aliyun mirrors
yum -y install epel-release wget yum-utils device-mapper-persistent-data lvm2
mkdir -p /etc/yum.repos.d/bak
mv /etc/yum.repos.d/* /etc/yum.repos.d/bak
wget http://mirrors.aliyun.com/repo/Centos-7.repo -P /etc/yum.repos.d/
wget http://mirrors.aliyun.com/repo/epel-7.repo -P /etc/yum.repos.d/
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install wget vim ntp unzip zip net-snmp* telnet lrzsz bash-completion net-tools ntpdate supervisor ipvsadm
- Install docker-ce
##Install
yum -y install docker-ce docker-ce-cli containerd.io
##Configure the Aliyun docker registry mirror
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://hh3tvdpc.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
- Sync the time
ntpdate ntp1.aliyun.com
echo '*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1' >> /var/spool/cron/root
- Set the hostname and hosts (run on each server with its own name)
hostname test10
hostnamectl set-hostname test10
- Configure kernel parameters
modprobe br_netfilter
echo 'modprobe br_netfilter' >> /etc/rc.local
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
- Passwordless SSH between servers
##On the master
ssh-keygen
##Copy the key to the other servers
ssh-copy-id 10.10.10.10
ssh-copy-id 10.10.10.11
ssh-copy-id 10.10.10.12
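A quick check that key-based login works (each command should print the hostname without a password prompt):
for ip in 10.10.10.10 10.10.10.11 10.10.10.12; do ssh $ip hostname; done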
Configure certificates (test10)
- Download and install the certificate generation tools
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
mkdir -p /data/k8s_key
cd /data/k8s_key
The following certificates need to be created:
CA certificate
etcd certificate
apiserver certificate
proxy certificate
kubectl certificate
- Generate the CA certificate
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "JS",
"L": "NJ",
"O": "k8s",
"OU": "system"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Adjust the 'C', 'ST', 'L', 'O', 'OU' values as needed.
'CN' (Common Name): kube-apiserver extracts this field from a certificate as the requesting user name; browsers also check it to validate a site.
'O' (Organization): kube-apiserver extracts this field as the group the requesting user belongs to.
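The fields of a generated certificate can be inspected to confirm these values (a sketch using cfssl's certinfo subcommand):
cfssl certinfo -cert ca.pem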
- Generate the etcd certificate
cat > etcd-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"10.10.10.10",
"etcd.k8s.test"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "JS",
"L": "NJ",
"O": "k8s",
"OU": "system"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
'hosts' is required here: list the actual IP addresses and domain names where etcd is deployed, otherwise certificate validation will fail.
- Generate the apiserver certificate
cat > apiserver-csr.json << EOF
{
"CN": "apiserver",
"hosts": [
"127.0.0.1",
"10.10.10.10",
"apiserver.k8s.test"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "JS",
"L": "NJ",
"O": "k8s",
"OU": "system"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
'hosts' is required here: list the actual IP addresses and domain names where the apiserver is deployed, otherwise certificate validation will fail.
- Generate the proxy certificate
cat > proxy-csr.json << EOF
{
"CN": "proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "JS",
"L": "NJ",
"O": "k8s",
"OU": "system"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-csr.json | cfssljson -bare proxy
'hosts' may be left empty; new nodes can then join the cluster without regenerating the certificate.
- Generate the kubectl certificate
cat > kubectl-csr.json << EOF
{
"CN": "kubectl",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "JS",
"L": "NJ",
"O": "k8s",
"OU": "system"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubectl-csr.json | cfssljson -bare kubectl
- Sync the certificates to the other servers
ssh 10.10.10.11 "mkdir -p /data/k8s_key"
scp -r /data/k8s_key 10.10.10.11:/data/
ssh 10.10.10.12 "mkdir -p /data/k8s_key"
scp -r /data/k8s_key 10.10.10.12:/data/
Deploy dnsmasq
test10
- Install
yum -y install dnsmasq
- Modify the config file (the directives below wire up the files created in the following steps)
cp /etc/dnsmasq.conf /etc/dnsmasq.conf.bak
cat > /etc/dnsmasq.conf << EOF
resolv-file=/data/dnsmasq/resolv.dnsmasq
addn-hosts=/data/dnsmasq/dnsmasq.hosts
conf-dir=/data/dnsmasq/dnsmasq.d
log-facility=/data/dnsmasq/log/dnsmasq.log
EOF
- Create the related directories and files
mkdir -p /data/dnsmasq/{dnsmasq.d,log}
touch /data/dnsmasq/{dnsmasq.hosts,resolv.dnsmasq}
- Configure upstream DNS forwarders (resolves non-custom domains)
cat > /data/dnsmasq/resolv.dnsmasq << EOF
nameserver 223.5.5.5
nameserver 1.2.4.8
EOF
- Add hosts records (centralized host-record lookup)
cat > /data/dnsmasq/dnsmasq.hosts << EOF
10.10.10.10 test10
10.10.10.11 test11
10.10.10.12 test12
EOF
Changes to the file named by addn-hosts require a dnsmasq restart; alternatively, hostsdir can point at a directory of record files that are re-read automatically.
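If restart-free updates are wanted, a hostsdir entry can be added (a sketch; the directory name is an assumption, not part of the original config):
mkdir -p /data/dnsmasq/hosts.d
echo 'hostsdir=/data/dnsmasq/hosts.d' >> /etc/dnsmasq.conf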
- Add custom domains (resolves internal custom domains; these three names are used throughout this guide)
cat > /data/dnsmasq/dnsmasq.d/k8s.test << EOF
address=/etcd.k8s.test/10.10.10.10
address=/apiserver.k8s.test/10.10.10.10
address=/harbor.k8s.test/10.10.10.10
EOF
- Start the service and enable it at boot
systemctl start dnsmasq.service
systemctl enable dnsmasq.service
- Point DNS on all servers at 10.10.10.10
#Edit /etc/sysconfig/network-scripts/ifcfg-eth0
PEERDNS=no #do not accept DNS servers pushed by DHCP
DNS1=10.10.10.10 #custom DNS server address
#Restart networking
systemctl restart network.service
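A quick resolution check from any server (a sketch; assumes bind-utils is installed):
yum -y install bind-utils
dig @10.10.10.10 harbor.k8s.test +short
nslookup test11 10.10.10.10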
Deploy etcd
test10
- Download and install
mkdir -p /setup/ /opt/etcd/{bin,conf} /data/etcd/
wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz -P /setup/
cd /setup/
tar zxvf etcd-v3.3.13-linux-amd64.tar.gz
mv etcd-v3.3.13-linux-amd64/etcd* /opt/etcd/bin/
chmod +x /opt/etcd/bin/etcd*
ln -s /opt/etcd/bin/etcd /usr/bin/
ln -s /opt/etcd/bin/etcdctl /usr/bin/
- Create the config file /opt/etcd/conf/etcd.conf
ETCD_CONF='--name test10 \
--data-dir /data/etcd \
--listen-peer-urls https://0.0.0.0:2380 \
--listen-client-urls https://0.0.0.0:2379 \
--advertise-client-urls https://10.10.10.10:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster-state new \
--initial-advertise-peer-urls https://10.10.10.10:2380 \
--initial-cluster test10=https://10.10.10.10:2380 \
--client-cert-auth \
--trusted-ca-file /data/k8s_key/ca.pem \
--cert-file /data/k8s_key/etcd.pem \
--key-file /data/k8s_key/etcd-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file /data/k8s_key/ca.pem \
--peer-cert-file /data/k8s_key/etcd.pem \
--peer-key-file /data/k8s_key/etcd-key.pem'
- Create the unit file /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/etcd/conf/etcd.conf
ExecStart=/opt/etcd/bin/etcd $ETCD_CONF
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Start etcd and enable it at boot
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service
- Verify the deployment
#Member status
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 member list
#Cluster health
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 cluster-health
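A simple write/read smoke test against the v2 API (a sketch; the key name is arbitrary):
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 set /smoke-test ok
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 get /smoke-test
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 rm /smoke-test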
Deploy flannel (on all servers) and reconfigure docker-ce to use the flannel network
test10, test11, test12
- Download and install
mkdir -p /setup/ /opt/flannel/{bin,conf} /data/flannel
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz -P /setup/
cd /setup/
tar zxvf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel/bin/
chmod +x /opt/flannel/bin/*
ln -s /opt/flannel/bin/flanneld /usr/bin/
- Register the network range in etcd (run once, on the master)
/opt/etcd/bin/etcdctl \
--ca-file=/data/k8s_key/ca.pem \
--cert-file=/data/k8s_key/etcd.pem \
--key-file=/data/k8s_key/etcd-key.pem \
--endpoints=https://127.0.0.1:2379 \
set /test_1/network/config '{"Network": "172.30.0.0/16","SubnetLen": 24, "SubnetMin": "172.30.1.0","SubnetMax": "172.30.20.0", "Backend": {"Type": "vxlan"}}'
This registers the 172.30.0.0/16 network: each host gets a /24 subnet, allocated from 172.30.1.0/24 through 172.30.20.0/24, with the vxlan backend.
- Create the config file /opt/flannel/conf/flannel.conf
FLANNEL_CONF="-etcd-cafile=/data/k8s_key/ca.pem \
-etcd-certfile=/data/k8s_key/etcd.pem \
-etcd-keyfile=/data/k8s_key/etcd-key.pem \
-etcd-endpoints=https://etcd.k8s.test:2379 \
-etcd-prefix=/test_1/network"
- Create the unit file /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=-/opt/flannel/conf/flannel.conf
ExecStart=/opt/flannel/bin/flanneld $FLANNEL_CONF
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Start flannel and enable it at boot
systemctl daemon-reload
systemctl start flannel.service
systemctl enable flannel.service
- Check the deployment
#Subnets registered in etcd (in the get command, substitute a subnet from the ls output)
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://etcd.k8s.test:2379 ls -r /test_1/network/subnets/
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://etcd.k8s.test:2379 get /test_1/network/subnets/172.30.5.0-24
#Local network interfaces
ip addr
#Docker network options file generated by flannel
cat /run/flannel/docker
- Modify the docker unit file to use the flannel network
/usr/lib/systemd/system/docker.service
[Service]
#Change
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
#Add
EnvironmentFile=/run/flannel/docker
- Start docker and enable it at boot
systemctl daemon-reload
systemctl start docker.service
systemctl enable docker.service
- Check the docker network
##Local interface state
ip a show docker0
##Docker bridge configuration
docker inspect bridge
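To confirm the overlay works across hosts, start a container on one node and ping its IP from another (a sketch; the busybox image and the placeholder IP are illustrative):
##on test11
docker run -d --name pingtest busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' pingtest
##on test12, using the 172.30.x.x address printed above
docker run --rm busybox ping -c 3 <IP-from-test11>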
Deploy harbor (docker image registry: hosts project docker images and manages versions)
test10
- Download
yum -y install docker-compose.noarch
mkdir -p /setup /opt/harbor/
wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.1.tgz -P /setup
tar xvf /setup/harbor-offline-installer-v1.8.1.tgz -C /opt/
- Modify the config file
##Edit /opt/harbor/harbor.yml
hostname: harbor.k8s.test #domain harbor listens on
harbor_admin_password: admin123 #admin password for the management UI
- Install Harbor
cd /opt/harbor/
./install.sh
If errors are reported, fix them as prompted (usually missing dependencies or mismatched dependency versions).
- Open the harbor management UI
#URL http://harbor.k8s.test
#username/password admin / admin123
- Configure docker for HTTP access to harbor (on every machine that pushes images)
##Edit /etc/docker/daemon.json
{
"registry-mirrors": ["https://hh3tvdpc.mirror.aliyuncs.com"],
"insecure-registries": ["harbor.k8s.test"]
}
##Restart docker
systemctl restart docker.service
##Log in to the registry
docker login harbor.k8s.test
admin
admin123
- Steps to upload an image (needed later by kubelet)
##Create the project k8s in the management UI
##Pull the image
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
##Re-tag the prepared image for the private registry
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 harbor.k8s.test/k8s/pause-amd64:3.0
##Push the image(s)
docker push harbor.k8s.test/k8s/pause-amd64:3.0
##Verify in the management UI that the k8s project now contains the uploaded image
Deploy kube-apiserver
test10
- Download and install
mkdir /setup && cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kube-apiserver /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kube-apiserver
- Create the token.csv file (standard layout: token, user, uid, group; the kubelet-bootstrap user is referenced again below)
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/conf/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
- Create the advanced audit policy file /opt/kubernetes/conf/audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
# Resource "pods" doesn't match requests to any subresource of pods,
# which is consistent with the RBAC policy.
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
# Don't log requests to a configmap called "controller-leader"
- level: None
resources:
- group: ""
resources: ["configmaps"]
resourceNames: ["controller-leader"]
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the Request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata
# Long-running requests like watches that fall under this rule will not
# generate an audit event in RequestReceived.
omitStages:
- "RequestReceived"
- Create the config file /opt/kubernetes/conf/kube-apiserver.conf
KUBE_APISERVER="--apiserver-count=1 \
--logtostderr=false \
--audit-log-path=/data/kubernetes/logs/kube-apiserver.log \
--audit-policy-file=/opt/kubernetes/conf/audit-policy.yaml \
--v=4 \
--bind-address=10.10.10.10 \
--secure-port=6443 \
--advertise-address=10.10.10.10 \
--allow-privileged=true \
--authorization-mode=Node,RBAC \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/conf/token.csv \
--client-ca-file=/data/k8s_key/ca.pem \
--requestheader-client-ca-file=/data/k8s_key/ca.pem \
--etcd-cafile=/data/k8s_key/ca.pem \
--etcd-certfile=/data/k8s_key/apiserver.pem \
--etcd-keyfile=/data/k8s_key/apiserver-key.pem \
--etcd-servers=https://etcd.k8s.test:2379 \
--service-account-key-file=/data/k8s_key/ca-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=30000-50000 \
--kubelet-client-certificate=/data/k8s_key/apiserver.pem \
--kubelet-client-key=/data/k8s_key/apiserver-key.pem \
--tls-cert-file=/data/k8s_key/apiserver.pem \
--tls-private-key-file=/data/k8s_key/apiserver-key.pem"
- Create the unit file /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
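A quick sanity check that the apiserver is running and listening on the secure port (a sketch):
systemctl status kube-apiserver.service
netstat -lnpt | grep 6443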
Deploy kube-scheduler
test10
- Download and install
cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kube-scheduler /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kube-scheduler
- Create the config file /opt/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER="--leader-elect \
--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kube-scheduler.log \
--v=4 \
--master=http://127.0.0.1:8080"
- Create the unit file /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service
[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Start kube-scheduler and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
Deploy kube-controller-manager
test10
- Download and install
cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kube-controller-manager /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kube-controller-manager
- Create the config file /opt/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER="--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kube-controller-manager.log \
--v=4 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--address=0.0.0.0 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/data/k8s_key/ca.pem \
--cluster-signing-key-file=/data/k8s_key/ca-key.pem \
--root-ca-file=/data/k8s_key/ca.pem \
--service-account-private-key-file=/data/k8s_key/ca-key.pem"
- Create the unit file /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service
[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Start kube-controller-manager and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
Configure kubectl on the master
test10
- Download and install
cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kubectl /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kubectl
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
- Check the master component status
kubectl get cs,nodes
NAME STATUS MESSAGE ERROR
componentstatus/scheduler Healthy ok
componentstatus/controller-manager Healthy ok
componentstatus/etcd-0 Healthy {"health":"true"}
Using kubectl on the node servers
test11, test12
kubectl connects to the apiserver via localhost:8080 by default, so on the nodes it must be configured to reach the API with certificates.
- Download and install
cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-node-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/node/bin/kubectl /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kubectl
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
- Add the kubectl user on the master
##Create the resource file (grants the kubectl user access) kubectl.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubectl
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubectl
##Apply the resource file
kubectl create -f kubectl.yaml
- Add the config file on the node servers
> Create the /root/.kube/config file
# Set cluster parameters; --server points at the master
kubectl config set-cluster kubernetes \
--certificate-authority=/data/k8s_key/ca.pem \
--server=https://apiserver.k8s.test:6443
# Set client authentication parameters
kubectl config set-credentials kubectl \
--certificate-authority=/data/k8s_key/ca.pem \
--client-certificate=/data/k8s_key/kubectl.pem \
--client-key=/data/k8s_key/kubectl-key.pem
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubectl
# Use the default context
kubectl config use-context default
- Verify on the node
##Inspect the config file
cat /root/.kube/config
##Query with kubectl
kubectl get node
Deploy kubelet
test11, test12
- Download and install
cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-node-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/node/bin/{kubelet,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
- Create bootstrap.kubeconfig (run on the master)
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/data/k8s_key/ca.pem \
--embed-certs=true \
--server="https://apiserver.k8s.test:6443" \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters (the token must match the one in the apiserver's token.csv)
kubectl config set-credentials kubelet-bootstrap \
--token=9ec6ae9648b97907d24aa33202f00c92 \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Use the default context and write out the file
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
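Rather than pasting the literal token, it can be read out of the token.csv created earlier (a sketch, assuming the single-line file layout above):
BOOTSTRAP_TOKEN=$(cut -d',' -f1 /opt/kubernetes/conf/token.csv)
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig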
- Bind the kubelet-bootstrap user to the system cluster role (run on the master)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
- Copy bootstrap.kubeconfig into the conf directory and scp it to all node servers
cp bootstrap.kubeconfig /opt/kubernetes/conf
scp /opt/kubernetes/conf/bootstrap.kubeconfig test11:/opt/kubernetes/conf/
scp /opt/kubernetes/conf/bootstrap.kubeconfig test12:/opt/kubernetes/conf/
- Create the kubelet config template /opt/kubernetes/conf/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.10.10.11
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.10.10.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
address: use the local node IP (10.10.10.11 above; 10.10.10.12 on test12)
- Create the kubelet config file /opt/kubernetes/conf/kubelet.conf
KUBELET="--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kubelet.log \
--v=4 \
--hostname-override=10.10.10.11 \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \
--config=/opt/kubernetes/conf/kubelet.config \
--cert-dir=/data/k8s_key/ \
--pod-infra-container-image=harbor.k8s.test/k8s/pause-amd64:3.0"
hostname-override: use the local node IP
- Create the unit file /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
- Approve the CSR requests on the master
#List pending CSR requests
kubectl get csr
#Approve a CSR request
kubectl certificate approve node-csr-OJXR5HcB9oYb8cZCdWeg6iQZgcLiSaORWwsImOBZVS8
#Check cluster state (nodes register once their CSR is approved)
kubectl get nodes
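When several nodes join at once, all pending CSRs can be approved in one pass (a sketch):
kubectl get csr -o name | xargs -r kubectl certificate approve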
- Delete an expired CSR
kubectl delete csr node-csr-XiPfkW9BD3t3XmpcxbQ3m7CA5NPjNp_2OQSY2Tl94gA
- Delete a stale node
kubectl delete node 10.10.10.12
- Label the nodes
kubectl label node 10.10.10.11 node-role.kubernetes.io/node='node'
kubectl label node 10.10.10.12 node-role.kubernetes.io/node='node'
- View cs, service, node, and csr info
kubectl get cs,service,node,csr
- View details (server side)
kubectl describe service
Deploy kube-proxy
test11, test12
- Download and install
cd /setup
wget https://dl.k8s.io/v1.15.1/kubernetes-node-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/node/bin/{kube-proxy,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
- Create kube-proxy.kubeconfig (run on the master)
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/data/k8s_key/ca.pem \
--embed-certs=true \
--server="https://apiserver.k8s.test:6443" \
--kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/data/k8s_key/proxy.pem \
--client-key=/data/k8s_key/proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- Copy kube-proxy.kubeconfig into the conf directory and scp it to all node servers
cp kube-proxy.kubeconfig /opt/kubernetes/conf
scp /opt/kubernetes/conf/kube-proxy.kubeconfig test11:/opt/kubernetes/conf
scp /opt/kubernetes/conf/kube-proxy.kubeconfig test12:/opt/kubernetes/conf
- Create the config file /opt/kubernetes/conf/kube-proxy.conf
KUBE_PROXY="--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kube-proxy.log \
--v=4 \
--hostname-override=10.10.10.12 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig \
--masquerade-all \
--feature-gates=SupportIPVSProxyMode=true \
--proxy-mode=ipvs \
--ipvs-min-sync-period=5s \
--ipvs-sync-period=5s \
--ipvs-scheduler=rr"
--hostname-override: must match the value kubelet uses, otherwise kube-proxy will not find this Node after starting and will not create any ipvs rules.
- Create the unit file /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
- Grant permissions to the proxy user (run on the master)
cat > kube-proxy.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: proxy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: proxy
EOF
kubectl create -f kube-proxy.yaml
Publishing an image with kubectl commands
- Start an image from the command line (two replicas, mapping port 80)
kubectl run nginx --image=nginx --replicas=2 --port=80
- Check pod status
kubectl get pods -o wide
- Check the deployment
kubectl get deployment
- Scale the replica count up or down
kubectl scale --replicas=4 deployment/nginx
##or
kubectl scale --replicas=1 deployment/nginx
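To reach the pods from outside the cluster, the deployment can also be exposed as a NodePort service from the command line (a sketch; the YAML approach in the next section achieves the same end):
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get service nginx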
- Delete the deployment (deleting pods directly just recreates them)
kubectl delete deployment nginx
Publishing an image with a YAML manifest (run on the master)
- Write the YAML manifest
mkdir -p /job/k8s_service/
cat > /job/k8s_service/nginx.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ds
namespace: default
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: harbor.k8s.test/service/nginx:latest
ports:
- containerPort: 80
hostPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-ds
labels:
app: nginx
spec:
type: NodePort
selector:
app: nginx
ports:
- name: http
port: 80
targetPort: 80
nodePort: 38000
EOF
The service listens on cluster port 80, maps to host port 80 via hostPort, and is exposed through kube-proxy on nodePort 38000.
- Run the publish command
kubectl create -f /job/k8s_service/nginx.yaml
- Check the publish status
###Full resource overview
kubectl get all -o wide
###Check the proxy ports on a node
ipvsadm -l -n
##OR
netstat -lnpt |grep kube-proxy
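An end-to-end check of the published service (a sketch; any node IP works, and 38000 is the nodePort from the manifest above):
curl -I http://10.10.10.11:38000
curl -I http://10.10.10.12:38000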
Original post: https://www.cnblogs.com/taoyuxuan/p/11205430.html