III. Kubernetes in practice: installing k8s with RKE for Rancher HA

Reference:
https://www.cnrancher.com/docs/rancher/v2.x/cn/installation/ha-install/

1. General node configuration

option            required  description
address           yes       public DNS name or IP address
user              yes       a user that can run docker commands
role              yes       list of Kubernetes roles assigned to the node
internal_address  no        private DNS name or IP address for intra-cluster communication
ssh_key_path      no        path to the SSH private key used to authenticate to the node (defaults to ~/.ssh/id_rsa)

What the table above means:

When installing with RKE, you must know each server's IP address, use a non-root user that can run docker commands, and configure key-based (passwordless) SSH access between the servers.

Here, the key pair is generated on the test-kube-master-01 node and the public key is distributed to the other master nodes to carry out the installation.

2. Preparing the nodes for the Kubernetes cluster

2.1 Configure DNS resolution

No DNS server is available in this environment, so the host entries must be added to /etc/hosts on every node:

cat >> /etc/hosts << EOF
172.18.1.4 test-kube-master-01
172.18.1.5 test-kube-master-02
172.18.1.9 test-kube-master-03
172.18.1.6 test-kube-node-01
172.18.1.7 test-kube-node-02
172.18.1.8 test-kube-node-03
EOF
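
After adding the entries, it is worth confirming that every name resolves on the node; a small check sketch (getent consults /etc/hosts as well as DNS):

```shell
# Print the resolution for each cluster node; flag any name that fails.
for h in test-kube-master-01 test-kube-master-02 test-kube-master-03 \
         test-kube-node-01 test-kube-node-02 test-kube-node-03; do
  getent hosts "$h" || echo "MISSING: $h"
done
```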

2.2 Allow a regular user to run docker on the masters

For the detailed steps, see chapter 1, preparing the base environment.
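
That chapter-1 step boils down to adding the install user to the docker group; a sketch (the echo is a dry-run guard — remove it to actually apply the change):

```shell
# Let the non-root install user run docker without sudo.
# "wangpeng" is the user used throughout this guide.
user=wangpeng
echo sudo usermod -aG docker "$user"   # dry run; remove "echo" to apply
# After logging out and back in, "docker ps" should work without sudo.
```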

2.3 Configure passwordless SSH login

wangpeng@test-kube-master-01:~$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/wangpeng/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/wangpeng/.ssh/id_rsa.
Your public key has been saved in /home/wangpeng/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:66NgD0FtJuv8ASIEQcdDoCyOklo+S5TJMgkTC8pDKwE wangpeng@test-kube-master-01
The key's randomart image is:
+---[RSA 2048]----+
|E=+o             |
|O+oo .           |
|O*  + +          |
|B=.+ =           |
|O.B +   S        |
|oB + o   .       |
|. + * . .        |
| . + = o.        |
|  .   +...       |
+----[SHA256]-----+

The usual way to distribute the key is:

ssh-copy-id wangpeng@test-kube-masterxx (including the local machine)
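
For all three masters this can be looped; a sketch using the hostnames from the hosts file above (the echo makes it a dry run — remove it to actually push the key):

```shell
# Push the public key to each master, including the local machine.
for host in test-kube-master-01 test-kube-master-02 test-kube-master-03; do
  echo ssh-copy-id "wangpeng@${host}"   # dry run; remove "echo" to run
done
```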

3. Download RKE and create the cluster.yml configuration

3.1 Install the RKE binary

In a browser, open the RKE Releases page and download the latest RKE binary for your operating system:

  • MacOS: rke_darwin-amd64
  • Linux (Intel/AMD): rke_linux-amd64
  • Linux (ARM 32-bit): rke_linux-arm
  • Linux (ARM 64-bit): rke_linux-arm64
  • Windows (32-bit): rke_windows-386.exe
  • Windows (64-bit): rke_windows-amd64.exe

Linux (Intel/AMD) is used here: rke_linux-amd64

wget https://github.com/rancher/rke/releases/download/v0.2.4/rke_linux-amd64

Run the following command to make the binary executable:

chmod +x rke_linux-amd64

3.2 Create the RKE configuration file

There are two simple ways to create cluster.yml:

  • start from the minimal RKE example cluster.yml and update it for the nodes you will use;
  • generate the configuration interactively with rke config.
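
For reference, the minimal-cluster.yml route from the first bullet can be as short as this (node addresses and user taken from this guide; every omitted field falls back to an RKE default):

```yaml
nodes:
  - address: 172.18.1.4
    user: wangpeng
    role: [controlplane, worker, etcd]
  - address: 172.18.1.5
    user: wangpeng
    role: [controlplane, worker, etcd]
  - address: 172.18.1.9
    user: wangpeng
    role: [controlplane, worker, etcd]
```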

3.2.1 Run the rke config wizard

Only the three masters need to be added here; Rancher will be installed into the k8s cluster formed by those masters.

 ./rke_linux-amd64 config --name cluster.yml
cat cluster.yml 

# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 172.18.1.4
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: 172.18.1.5
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: 172.18.1.9
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.2.24-rancher1
  alpine: rancher/rke-tools:v0.1.28
  nginx_proxy: rancher/rke-tools:v0.1.28
  cert_downloader: rancher/rke-tools:v0.1.28
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.28
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.0.0
  coredns: rancher/coredns-coredns:1.2.6
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.0.0
  kubernetes: rancher/hyperkube:v1.14.1-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4-rancher1
  metrics_server: rancher/metrics-server:v0.3.1
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
restore:
  restore: false
  snapshot_name: ""
dns: null

4. Install the Kubernetes cluster

Run the RKE command to create the Kubernetes cluster:

./rke_linux-amd64 up --config cluster.yml 

When it completes, it should display: Finished building Kubernetes cluster successfully.

5. Add the kubectl command

Copy the hyperkube binary out of one of the containers running the rancher/hyperkube:v1.14.1-rancher1 image, rename it to kubectl, and move it to /usr/bin/kubectl:

wangpeng@test-kube-master-01:~$ docker ps
CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS              PORTS               NAMES
af4de55e791f        rancher/nginx-ingress-controller       "/entrypoint.sh /ngi…"   14 minutes ago      Up 14 minutes                           k8s_nginx-ingress-controller_nginx-ingress-controller-jw7w5_ingress-nginx_da8868d3-972c-11e9-bfa4-0017fa0337ea_0
a572977ce68b        rancher/pause:3.1                      "/pause"                 15 minutes ago      Up 15 minutes                           k8s_POD_nginx-ingress-controller-jw7w5_ingress-nginx_da8868d3-972c-11e9-bfa4-0017fa0337ea_0
086c3aad92fe        rancher/coreos-flannel                 "/opt/bin/flanneld -…"   15 minutes ago      Up 15 minutes                           k8s_kube-flannel_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
8d48b7e7c492        rancher/calico-node                    "start_runit"            15 minutes ago      Up 15 minutes                           k8s_calico-node_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
8babca986d3a        rancher/pause:3.1                      "/pause"                 16 minutes ago      Up 16 minutes                           k8s_POD_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
e9cb76b6ee95        rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago      Up 16 minutes                           kube-proxy
0ed5730300bc        rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago      Up 16 minutes                           kubelet
2b75e5aa7802        rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago      Up 14 minutes                           kube-scheduler
46a1002715bd        rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   17 minutes ago      Up 14 minutes                           kube-controller-manager
50c736c7b389        rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   17 minutes ago      Up 17 minutes                           kube-apiserver
33387faa5469        rancher/rke-tools:v0.1.28              "/opt/rke-tools/rke-…"   18 minutes ago      Up 18 minutes                           etcd-rolling-snapshots
3d348dea1e88        rancher/coreos-etcd:v3.2.24-rancher1   "/usr/local/bin/etcd…"   18 minutes ago      Up 18 minutes                           etcd
wangpeng@test-kube-master-01:~$ docker cp  e9cb76b6ee95:/hyperkube ./ 

wangpeng@test-kube-master-01:~$ ls
cluster.rkestate  cluster.yml  hyperkube  kube_config_cluster.yml  rke_linux-amd64

wangpeng@test-kube-master-01:~$ sudo mv hyperkube /usr/bin/kubectl

wangpeng@test-kube-master-01:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

RKE generates kube_config_cluster.yml when it finishes; either set the KUBECONFIG environment variable to the path of that file, or copy it to ~/.kube/config:

wangpeng@test-kube-master-01:~$ mkdir -pv ~/.kube
mkdir: created directory '/home/wangpeng/.kube'

wangpeng@test-kube-master-01:~$ cp kube_config_cluster.yml ~/.kube/config
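
Equivalently, instead of copying the file you can point KUBECONFIG at it (add the export to ~/.bashrc to make it permanent):

```shell
# Use the RKE-generated kubeconfig directly via the environment.
export KUBECONFIG="$HOME/kube_config_cluster.yml"
echo "$KUBECONFIG"
```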

6. Check the cluster state

wangpeng@test-kube-master-01:~$ kubectl get nodes
NAME         STATUS   ROLES                      AGE   VERSION
172.18.1.4   Ready    controlplane,etcd,worker   17m   v1.14.1
172.18.1.5   Ready    controlplane,etcd,worker   17m   v1.14.1
172.18.1.9   Ready    controlplane,etcd,worker   17m   v1.14.1

7. Check the health of the cluster Pods

  • Pods are in the Running or Completed state.
  • The READY column shows all containers running (e.g. 3/3) and STATUS shows the pod is Running.
  • Pods with STATUS Completed are run-once Jobs; their READY column should read 0/1.
wangpeng@test-kube-master-01:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-775b55c884-wfdgm     1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-jw7w5            1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-sg8gs            1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-tstp5            1/1     Running     0          23m
kube-system     canal-hncc5                               2/2     Running     0          24m
kube-system     canal-qnxx4                               2/2     Running     0          24m
kube-system     canal-sjpbv                               2/2     Running     0          24m
kube-system     kube-dns-869c7b8d96-rtpmn                 3/3     Running     0          23m
kube-system     kube-dns-autoscaler-78dbfd75b7-twpbm      1/1     Running     0          23m
kube-system     metrics-server-7f6bd4c888-gjwz8           1/1     Running     0          23m
kube-system     rke-ingress-controller-deploy-job-srz6h   0/1     Completed   0          23m
kube-system     rke-kube-dns-addon-deploy-job-lsnm4       0/1     Completed   0          24m
kube-system     rke-metrics-addon-deploy-job-k74sn        0/1     Completed   0          23m
kube-system     rke-network-plugin-deploy-job-t5btw       0/1     Completed   0          24m
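
A quick way to apply the checklist above is to filter out the healthy states; a sketch (empty output means every pod is Running or Completed):

```shell
# List pods whose STATUS (4th column of the --no-headers output)
# is neither Running nor Completed.
kubectl get pods --all-namespaces --no-headers 2>/dev/null \
  | awk '$4 != "Running" && $4 != "Completed"'
```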

8. Save the configuration files

Keep copies of the kube_config_cluster.yml and cluster.yml files; you will need them to maintain and upgrade your Rancher installation.
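
One simple approach is a dated backup directory; a sketch (the backup location is an assumption, and the echo is a dry-run guard — remove it to actually copy):

```shell
# Copy the files RKE needs for future maintenance into a dated directory.
# cluster.rkestate is the state file RKE left next to cluster.yml.
backup_dir="$HOME/rke-backup-$(date +%Y%m%d)"
mkdir -p "$backup_dir"
echo cp cluster.yml kube_config_cluster.yml cluster.rkestate "$backup_dir"/   # remove "echo" to run
```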
