First, clone the "smobronze" VM into a new VM named "ricbronze".
Change the hostname to 'ricpltbronze'.
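A minimal sketch of the rename, assuming the clone still carries the old "smobronze" hostname (Ubuntu 18.04). Note that /etc/hosts has to be updated along with the hostname, or sudo starts complaining about an unresolvable host:

```shell
# Hostname rename sketch; the privileged commands are shown commented out.
OLD=smobronze
NEW=ricpltbronze
# sudo hostnamectl set-hostname "$NEW"
# Preview the /etc/hosts edit on a scratch copy before applying it:
cp /etc/hosts /tmp/hosts.new
sed -i "s/\b${OLD}\b/${NEW}/g" /tmp/hosts.new
# Review /tmp/hosts.new, then:  sudo cp /tmp/hosts.new /etc/hosts
```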
(09:54 dabs@ricpltbronze bin) > uname -a
Linux ricpltbronze 5.3.0-62-generic #56~18.04.1-Ubuntu SMP Wed Jun 24 16:17:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Revert the changes made to 'dep/tools/k8s/etc/infra.rc' during the SMO installation:
(09:55 dabs@ricpltbronze etc) > pwd
/home/dabs/oran/dep/tools/k8s/etc
(09:58 dabs@ricpltbronze etc) > cat infra.rc
# modify below for RIC infrastructure (docker-k8s-helm) component versions
# RIC tested
INFRA_DOCKER_VERSION=""
INFRA_HELM_VERSION="2.12.3"
INFRA_K8S_VERSION="1.16.0"
INFRA_CNI_VERSION="0.7.5"
# older RIC tested
#INFRA_DOCKER_VERSION=""
#INFRA_HELM_VERSION="2.12.3"
#INFRA_K8S_VERSION="1.13.3"
#INFRA_CNI_VERSION="0.6.0"
# ONAP Frankfurt
#INFRA_DOCKER_VERSION="18.09.7"
#INFRA_K8S_VERSION="1.15.9"
#INFRA_CNI_VERSION="0.7.5"
#INFRA_HELM_VERSION="2.16.6"
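The same revert can be scripted with sed instead of edited by hand. The sketch below runs against a scratch copy seeded with the ONAP Frankfurt values (the ones the SMO installation would have set); on the VM the real file is dep/tools/k8s/etc/infra.rc:

```shell
# Scratch copy with the SMO/Frankfurt values to be reverted:
cat > /tmp/infra.rc <<'EOF'
INFRA_DOCKER_VERSION="18.09.7"
INFRA_HELM_VERSION="2.16.6"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
EOF
# Revert to the "RIC tested" versions:
sed -i \
  -e 's/^INFRA_DOCKER_VERSION=.*/INFRA_DOCKER_VERSION=""/' \
  -e 's/^INFRA_HELM_VERSION=.*/INFRA_HELM_VERSION="2.12.3"/' \
  -e 's/^INFRA_K8S_VERSION=.*/INFRA_K8S_VERSION="1.16.0"/' \
  /tmp/infra.rc
cat /tmp/infra.rc
```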
Generate the k8s one-node cluster installation script: k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
(10:00 dabs@ricpltbronze bin) > pwd
/home/dabs/oran/dep/tools/k8s/bin
(10:01 dabs@ricpltbronze bin) > ./gen-cloud-init.sh
(09:59 dabs@ricpltbronze bin) > ls -al | grep k8s-1node
-rwxrwxr-x 1 dabs dabs 9.7K Jul 2 13:12 k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh
-rwxrwxr-x 1 dabs dabs 9.7K Jul 16 09:09 k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
Update the script k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh, applying the same edits as in the SMO installation procedure:
(09:05 dabs@ricpltbronze bin) > diff k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
46,47c46,47
< echo "18.09.7" > /opt/config/docker_version.txt
< echo "1.15.9" > /opt/config/k8s_version.txt
---
> echo "" > /opt/config/docker_version.txt
> echo "1.16.0" > /opt/config/k8s_version.txt
49c49
< echo "2.16.6" > /opt/config/helm_version.txt
---
> echo "2.12.3" > /opt/config/helm_version.txt
117,121c117,118
< #curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
< #echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list
< curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
< echo 'deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list
<
---
> curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
> echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list
166,167c163
< "storage-driver": "overlay2",
< "registry-mirrors":["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
---
> "storage-driver": "overlay2"
360c356
< #if [ "$(uname -r)" != "4.15.0-45-lowlatency" ]; then reboot; fi
---
> if [ "$(uname -r)" != "4.15.0-45-lowlatency" ]; then reboot; fi
Make further preparations:
(1) manually pull the k8s.gcr.io/* images, as in the SMO installation
(2) manually pull the tiller image and retag it:
$ docker pull sapcc/tiller:v2.12.3
$ docker tag sapcc/tiller:v2.12.3 gcr.io/kubernetes-helm/tiller:v2.12.3
$ docker rmi sapcc/tiller:v2.12.3
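For step (1), a sketch that generates the pull/retag commands for the control-plane images kubeadm 1.16.0 expects under k8s.gcr.io, fetched via a mirror. The mirror registry and the pause/etcd/coredns tags below are assumptions; `kubeadm config images list` on the VM prints the authoritative list:

```shell
# Emit docker pull/tag/rmi commands for the kubeadm 1.16.0 image set.
# Mirror registry is an assumption; adjust to whichever mirror is reachable.
K8S_VERSION="v1.16.0"
MIRROR="registry.aliyuncs.com/google_containers"
for img in kube-apiserver:${K8S_VERSION} kube-controller-manager:${K8S_VERSION} \
           kube-scheduler:${K8S_VERSION} kube-proxy:${K8S_VERSION} \
           pause:3.1 etcd:3.3.15-0 coredns:1.6.2; do
  echo "docker pull ${MIRROR}/${img}"
  echo "docker tag ${MIRROR}/${img} k8s.gcr.io/${img}"
  echo "docker rmi ${MIRROR}/${img}"
done
# Review the generated commands, then pipe them to a shell:  ... | sudo sh
```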
Now, we can run the k8s-1node script:
(09:30 dabs@ricpltbronze bin) > sudo ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
Eventually, you will get a one-node k8s cluster with 9 running pods in the kube-system namespace:
(09:54 dabs@ricpltbronze etc) > sudo kubectl get pods --all-namespaces
[sudo] password for dabs:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-clvtf 1/1 Running 2 30m
kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 2 30m
kube-system etcd-ricpltbronze 1/1 Running 2 30m
kube-system kube-apiserver-ricpltbronze 1/1 Running 4 29m
kube-system kube-controller-manager-ricpltbronze 1/1 Running 3 30m
kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 3 30m
kube-system kube-proxy-zrtl8 1/1 Running 2 30m
kube-system kube-scheduler-ricpltbronze 1/1 Running 3 30m
kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 0 29m
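Before moving on, it can help to confirm nothing is stuck. A small filter over the same listing (column order NAME READY STATUS RESTARTS AGE, as produced by `--no-headers`) counts pods that are not yet 1/1 Running; the sample rows below are taken from the output above:

```shell
# Count pods that are not yet fully Ready/Running; feed it live data with:
#   sudo kubectl get pods -n kube-system --no-headers | count_not_running
count_not_running() { awk '$2 != "1/1" || $3 != "Running"' | wc -l; }
# Dry run on two sample rows from the listing above:
printf '%s\n' \
  'etcd-ricpltbronze 1/1 Running 2 30m' \
  'tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 0 29m' | count_not_running
```

A count of 0 means the cluster is ready for the next step.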
(10:08 dabs@ricpltbronze etc) > sudo kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 31m
kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 31m
kube-system tiller-deploy ClusterIP 10.98.55.170 &lt;none&gt; 44134/TCP 30m
(to be continued)