CKA - Deploying Kubernetes the Hard Way (1.1-1.3)

Table of Contents

  • Preface
  • 1. Hard Way
    • 1.1 Pulling the Virtual Machines
    • 1.2 Installing the Client Tools
    • 1.3 Provisioning a CA and Generating TLS Certificates
  • Summary


Preface

This diary entry records deploying Kubernetes for a production environment; for a learning-environment deployment, see k8-install-learning environments. Once the deployment is done, stress and security tests will follow, iterating step by step to get familiar with the considerations behind deploying a production cluster.


1. Hard Way

This approach deploys the required binaries and sets up the nodes on the servers/hosts without any deployment tool, which deepens understanding of the Kubernetes architecture. The overall flow follows Kubernetes the Hard Way; the k8s cluster recorded in this post likewise has 2 master nodes, 2 worker nodes, and 1 loadbalancer node.
Required tools:

  • Vagrant
  • VirtualBox

Memory and CPU:

  • 8 GB of memory

Components to install:

  • kube-apiserver
  • etcd
  • kubelet
  • kube-controller-manager
  • kube-proxy
  • kubectl

1.1 Pulling the Virtual Machines

Because different VirtualBox versions allow different ranges for host-only networks, the master and worker node IP range from the original GitHub repo is changed here (see "Cannot create a private network from Vagrant in VirtualBox after updating it").

git clone https://github.com/mmumshad/kubernetes-the-hard-way.git
cd kubernetes-the-hard-way/vagrant

# Edit the Vagrantfile to change the node IP range; the original range was 192.168.5.*
#IP_NW = "192.168.5."
# New IP range
IP_NW = "192.168.56."
vagrant up
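
After vagrant up finishes, vagrant status (a standard Vagrant command, not specific to this repo) can confirm the machines came up:

vagrant status
# all five VMs defined in the Vagrantfile should report "running"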

SSH into the individual worker and master nodes to find each one's IP.

vagrant ssh worker-1
## After logging in to worker-1, the login banner shows its IP
## IP address for enp0s8:  192.168.56.21
## Repeat for the other nodes to collect all the IPs

Collecting the IPs one by one yields the table below, listing each VM Vagrant deployed in VirtualBox with its IP and the corresponding host port.

VM            VM name                   Role          IP             Host port
master-1      kubernetes-ha-master-1    Master        192.168.56.11  2711
master-2      kubernetes-ha-master-2    Master        192.168.56.12  2712
worker-1      kubernetes-ha-worker-1    Worker        192.168.56.21  2721
worker-2      kubernetes-ha-worker-2    Worker        192.168.56.22  2722
loadbalancer  kubernetes-ha-lb          Loadbalancer  192.168.56.30  2730

With the commands above, Vagrant deployed 5 VMs and completed the following setup on each of them:

  • Deploys 5 VMs - 2 masters, 2 workers and 1 loadbalancer named 'kubernetes-ha-*' (see the table above)
  • Sets IP addresses in the range 192.168.56.* (see the table above)
  • Adds a DNS entry (8.8.8.8) to each node so it can reach the internet
# to see details about the uplink DNS servers
vagrant@worker-2:/etc$ systemd-resolve --status
# Output
Global
         DNS Servers: 8.8.8.8
          DNSSEC NTA: 10.in-addr.arpa

  • Installs Docker on the worker nodes
# switch to the root user
vagrant@worker-2:/etc$ sudo -i
root@worker-2:~# docker --version
Docker version 20.10.17, build 100c701
  • Runs the command below on all nodes to allow network forwarding in iptables.
    (Is this just the port forwarding where a request to localhost:2730 gets forwarded to the loadbalancer node? No: that host-port mapping is handled by Vagrant itself. This sysctl makes traffic crossing Linux bridges pass through iptables, which Kubernetes networking relies on; a quick check follows the command.)
sysctl net.bridge.bridge-nf-call-iptables=1
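
To confirm the setting took effect on a node (a quick check, assuming the br_netfilter kernel module is loaded):

sysctl net.bridge.bridge-nf-call-iptables
# Expected output: net.bridge.bridge-nf-call-iptables = 1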
  • Make sure these VMs can ping one another, e.g. as below (a sketch checking all five nodes follows).
# For example:
root@worker-2:~# ping 192.168.56.11
PING 192.168.56.11 (192.168.56.11) 56(84) bytes of data.
64 bytes from 192.168.56.11: icmp_seq=1 ttl=64 time=1.12 ms
64 bytes from 192.168.56.11: icmp_seq=2 ttl=64 time=0.491 ms
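
A minimal sketch to check all five nodes in one go (IPs taken from the table above):

for ip in 192.168.56.11 192.168.56.12 192.168.56.21 192.168.56.22 192.168.56.30; do
  # one ping per node, 1-second timeout
  ping -c 1 -W 1 ${ip} > /dev/null && echo "${ip} reachable" || echo "${ip} UNREACHABLE"
done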

1.2 Installing the Client Tools

  • Pick one node to act as administrator of the cluster, e.g. master-1, and copy the public key generated on master-1 to all the nodes.
vagrant ssh master-1
ssh-keygen
# This produces the private and public key pair, as shown:
vagrant@master-1:~/.ssh$ ls
authorized_keys  id_rsa  id_rsa.pub

## add public key to authorized_keys
vagrant@worker-1:~/.ssh$ cat >> ~/.ssh/authorized_keys <<EOF
> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDuNtNIV6fvTWHaqYDgnSSgA/HcchVXxbEwAT26UDw7frksiMI806xgN+9/M5EtRuzUQTvUG1uFoKldNo5IjIDZnyoCBZMtNSEFntzxQ3UwPUYaHJ7+F4pGmY2wn9kHNxe6JdKnhJ4IllhjeRlZGiv8v5LDZGDYofnakcUJo7bnKQdxJXBTK8mDJ8Eu/vD5jHkxEMY+3uwSPwmJcjD+JJ2HPQVPmthdfkXGgSeFJZGWCThK91iXrLJMcGoujHV1WFz23HauGLYO+mhL7HNXEAhIOB0j+8j1caT94jd7ulrticlGP5G8T8/0GI4GGEB1qXtiL/1I0TwHEir7wOenRdb7 vagrant@master-1
> EOF
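
Alternatively, ssh-copy-id performs the same append in one step; this is this post's own shortcut, and it assumes password authentication is enabled for the vagrant user on the target node:

# run on master-1; appends ~/.ssh/id_rsa.pub to worker-1's authorized_keys
ssh-copy-id vagrant@192.168.56.21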

  • Repeat the step above on each of the other nodes, appending the public key generated on master-1 to that node's authorized_keys file.

Now master-1 can SSH into the other nodes. For example, the output below shows a successful SSH from master-1 into worker-1 (192.168.56.21).

vagrant@master-1:~/.ssh$ ssh 192.168.56.21
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-191-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
  • Install kubectl on the master node
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client   # check the client version

1.3 Provisioning a CA and Generating TLS Certificates

This section uses openssl to generate the certificates that the various Kubernetes components need in order to access one another. Pick one node to generate the files on, then copy them to the other nodes, following Kubernetes the Hard Way; this post also generates the certificates on master-1. For background on TLS certificates, this TLS introduction is all you need.

  • Provision a Certificate Authority that can sign Certificate Signing Requests: create its private key, generate a CSR, and self-sign it
openssl genrsa -out ca.key 2048
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
openssl x509 -req -in ca.csr -signkey ca.key -CAcreateserial -out ca.crt -days 1000
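
To sanity-check the CA certificate just created:

openssl x509 -in ca.crt -noout -subject -dates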
  • Generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user.
# Generate private key for admin user
openssl genrsa -out admin.key 2048
# Generate CSR for admin user. Note the O field: membership of the
# system:masters group is what grants this user admin rights.
openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr
# Sign the certificate for the admin user using the CA's private key
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out admin.crt -days 1000
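
To confirm the group membership is encoded in the certificate subject:

openssl x509 -in admin.crt -noout -subject
# the subject should contain O = system:masters and CN = admin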
  • Generate the kube-controller-manager client certificate and private key
openssl genrsa -out kube-controller-manager.key 2048
openssl req -new -key kube-controller-manager.key -subj "/CN=system:kube-controller-manager" -out kube-controller-manager.csr
openssl x509 -req -in kube-controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-controller-manager.crt -days 1000
  • Generate the kube-proxy client certificate and private key
openssl genrsa -out kube-proxy.key 2048
openssl req -new -key kube-proxy.key -subj "/CN=system:kube-proxy" -out kube-proxy.csr
openssl x509 -req -in kube-proxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kube-proxy.crt -days 1000
  • Generate the kube-scheduler client certificate and private key
openssl genrsa -out kube-scheduler.key 2048
openssl req -new -key kube-scheduler.key -subj "/CN=system:kube-scheduler" -out kube-scheduler.csr
openssl x509 -req -in kube-scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kube-scheduler.crt -days 1000
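
The client certificates above all follow the same three-step pattern, so a small shell helper avoids the repetition; gen_client_cert is a hypothetical name of this post's own, not part of the guide:

# Hypothetical helper: generate a key, create a CSR with the given subject,
# and sign it with the cluster CA.
gen_client_cert() {
  local name=$1 subj=$2
  openssl genrsa -out "${name}.key" 2048
  openssl req -new -key "${name}.key" -subj "${subj}" -out "${name}.csr"
  openssl x509 -req -in "${name}.csr" -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out "${name}.crt" -days 1000
}
# Usage: gen_client_cert kube-proxy "/CN=system:kube-proxy"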
  • Kubernetes API Server Certificate
    First generate a config file: the apiserver acts as the server for the various Kubernetes components (etcd and the kubelets excepted, since they run their own servers), so its certificate must carry every DNS name and IP it is reached by as subject alternative names.
cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
# Remember to adjust these IPs to match your setup
IP.1 = 10.96.0.1
IP.2 = 192.168.56.11
IP.3 = 192.168.56.12
IP.4 = 192.168.56.30
IP.5 = 127.0.0.1
EOF
openssl genrsa -out kube-apiserver.key 2048
openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" -out kube-apiserver.csr -config openssl.cnf
openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kube-apiserver.crt -extensions v3_req -extfile openssl.cnf -days 1000
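
To verify that all the alternate names made it into the signed apiserver certificate:

openssl x509 -in kube-apiserver.crt -noout -text | grep -A 2 "Subject Alternative Name"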
  • ETCD Server Certificate
cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.56.11
IP.2 = 192.168.56.12
IP.3 = 127.0.0.1
EOF
openssl genrsa -out etcd-server.key 2048
openssl req -new -key etcd-server.key -subj "/CN=etcd-server" -out etcd-server.csr -config openssl-etcd.cnf
openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out etcd-server.crt -extensions v3_req -extfile openssl-etcd.cnf -days 1000
  • Generate the service-account key pair used to sign service account tokens
openssl genrsa -out service-account.key 2048
openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr
openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out service-account.crt -days 1000
  • Copy the appropriate certificates and private keys to each controller instance
for instance in master-1 master-2; do
  scp ca.crt ca.key kube-apiserver.key kube-apiserver.crt \
    service-account.key service-account.crt \
    etcd-server.key etcd-server.crt \
    ${instance}:~/
done
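
A quick check that the files landed on both controllers (this post's own verification, run from the directory on master-1 where the certificates were generated):

for instance in master-1 master-2; do
  echo "--- ${instance} ---"
  ssh ${instance} "ls ~/*.crt ~/*.key"
done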

Summary

So far, 5 VMs have been brought up: 2 master nodes, 2 worker nodes, and 1 loadbalancer. SSH connections were established from master-1 to every node, kubectl was installed on master-1, and the crt and key files for the various components were generated, signed with the CA, and distributed to master-1 and master-2. Next up: Generating Kubernetes Configuration Files for Authentication.
