Kata Containers in a cluster

Kata Containers in Minikube

Install Kata Containers in the cluster

Set up a CRI-O and kvm2-based Minikube

$ minikube start --vm-driver kvm2 --memory 6144 --network-plugin=cni --enable-default-cni --container-runtime=cri-o --bootstrapper=kubeadm

  minikube v1.26.1 on Ubuntu 20.04
❗  Both driver=kvm2 and vm-driver=kvm2 have been set.

    Since vm-driver is deprecated, minikube will default to driver=kvm2.

    If vm-driver is set in the global config, please run "minikube config unset vm-driver" to resolve this warning.
			
✨  Using the kvm2 driver based on user configuration
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
E0812 04:49:23.216330   34297 start_flags.go:448] Found deprecated --enable-default-cni flag, setting --cni=bridge
  Starting control plane node minikube in cluster minikube
  Creating kvm2 VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
  Preparing Kubernetes v1.24.3 on CRI-O 1.24.1 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
  Configuring bridge CNI (Container Networking Interface) ...
  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
  Enabled addons: storage-provisioner, default-storageclass
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

The control-plane node will be ready:

$ kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   6m12s   v1.24.3

Check that virtualization is enabled inside the Minikube VM. The following should return a number greater than 0:

$ minikube ssh "egrep -c 'vmx|svm' /proc/cpuinfo"
2
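If the count is 0, nested virtualization is probably disabled on the host. On an Intel host you can check it as follows (it prints Y or 1 when enabled); the modprobe step is host-dependent and shown only as a sketch:

$ cat /sys/module/kvm_intel/parameters/nested
Y
$ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1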

Install the Kata Containers runtime components

Apply the RBAC definitions before the kata-deploy daemonset, so that the daemonset runs with the service account it needs:

$ git clone https://github.com/kata-containers/packaging.git
$ cd packaging/kata-deploy
$ kubectl apply -f kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f kata-deploy/base/kata-deploy.yaml

Check the status of the kata-deploy pod; it will be executing sleep infinity once it has successfully completed its work:

$ podname=$(kubectl -n kube-system get pods -o=name | fgrep kata-deploy | sed 's?pod/??')

$ echo podname=$podname
podname=kata-deploy-xg7f9

$ kubectl -n kube-system exec ${podname} -- ps -ef | fgrep infinity
root          47       1  0 12:03 ?        00:00:00 sleep infinity
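As a further check, kata-deploy labels the nodes it has prepared. The label shown here is the one used by the packaging-repo version of kata-deploy, so verify it against your release:

$ kubectl get nodes --show-labels | fgrep katacontainers.io/kata-runtime=true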

Configure a Kubernetes RuntimeClass so the cluster knows when to use Kata Containers to run a pod. As background on how custom resources are defined, the manifest below is the generic versioned CustomResourceDefinition example from the Kubernetes documentation; the actual kata-qemu RuntimeClass registration follows afterwards.

$ kubectl apply -f my-versioned-crontab.yaml

 my-versioned-crontab.yaml

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1beta1
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
    # A schema is required
    schema:
      openAPIV3Schema:
        type: object
        properties:
          host:
            type: string
          port:
            type: string
  - name: v1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          host:
            type: string
          port:
            type: string
  # The conversion section is introduced in Kubernetes 1.13+ with a default value of
  # None conversion (strategy sub-field set to None).
  conversion:
    # None conversion assumes the same schema for all versions and only sets the apiVersion
    # field of custom resources to the proper value
    strategy: None
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct
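After the apply, you can confirm the CRD was registered:

$ kubectl get crd crontabs.example.com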

Now register the kata-qemu runtime class:

$ cd packaging/kata-deploy/k8s-1.14
$ kubectl apply -f kata-qemu-runtimeClass.yaml
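For reference, the applied manifest is essentially a RuntimeClass that maps the name kata-qemu to the kata-qemu CRI handler. A minimal sketch (the file in the repo may differ in apiVersion and extra fields):

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu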

Test Kata Containers

Launch a container that has been defined to run on Kata Containers. Enablement is configured by the following lines in the pod spec of the YAML file:

spec:
  runtimeClassName: kata-qemu

Launch a Kata Containers workload:

$ cd packaging/kata-deploy/examples
$ kubectl apply -f kata-test.yaml 
deployment.apps/kata-test-deployment created
service/kata-test-service created

 kata-test.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: run-in-kata
  name: kata-test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: run-in-kata
  template:
    metadata:
      labels:
        app: run-in-kata
    spec:
      runtimeClassName: kata-qemu
      containers:
      - name: kata-container-1
        image: k8s.gcr.io/hpa-example
        ports:
        - containerPort: 8080
          protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: kata-test-service
spec:
  selector:
    app: run-in-kata
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Check that the container image has been pulled into the cluster and the deployment has rolled out:

$ kubectl rollout status deployment kata-test-deployment
deployment "kata-test-deployment" successfully rolled out

Since Kata Containers runs the pod inside a VM with its own guest kernel, the node and the pod report different kernel versions. (A normal software container runs on the same kernel as the node.)

First, examine which kernel is running inside the Minikube node itself:

$ minikube ssh -- uname -a
Linux minikube 5.10.57 #1 SMP Sat Jul 16 03:51:15 UTC 2022 x86_64 GNU/Linux

And then compare that against the kernel that is running inside the container:

$ podname=$(kubectl get pods -o=name | fgrep kata-test-deployment | sed 's?pod/??')

$ echo podname=$podname
podname=kata-test-deployment-67c95867c7-8bkj4

$ kubectl exec ${podname} -- uname -a
Linux kata-test-deployment-67c95867c7-8bkj4 5.15.48 #2 SMP Wed Jul 6 06:41:59 UTC 2022 x86_64 GNU/Linux
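You can also exercise the service from a throwaway client pod; the hpa-example image is known to answer with a short "OK!" page, so treat the exact output as indicative:

$ kubectl run -it --rm test-client --image=busybox --restart=Never -- wget -qO- http://kata-test-service
OK!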

Access the pod

Create a pod with two containers: kata-test.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: run-in-kata
  name: kata-test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: run-in-kata
  template:
    metadata:
      labels:
        app: run-in-kata
    spec:
      runtimeClassName: kata-qemu
      containers:
      - name: kata-container-1
        image: k8s.gcr.io/hpa-example
        ports:
        - containerPort: 8080
          protocol: TCP
      - name: kata-container-2
        image: nginx
        ports:
        - containerPort: 8081
          protocol: TCP

and run it:

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS      AGE
kata-test-deployment-566ff7d56b-kbzmb   2/2     Running   4 (55s ago)   2m20s

$ kubectl exec kata-test-deployment-566ff7d56b-kbzmb -- date
Defaulted container "kata-container-1" out of: kata-container-1, kata-container-2
Sun Aug 14 17:25:46 UTC 2022
$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS        AGE
kata-test-deployment-566ff7d56b-67v9t   2/2     Running   32 (5m6s ago)   141m

$ kubectl exec kata-test-deployment-566ff7d56b-67v9t -c kata-container-2 date
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("kata-container-2")
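The correct invocation separates the command with -- and names the container with -c; note that the "container not found" error above is a symptom of the crash loop described next, not of the syntax:

$ kubectl exec kata-test-deployment-566ff7d56b-67v9t -c kata-container-2 -- date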

Problem: the second container ends up in CrashLoopBackOff

$ kubectl get pods
NAME                                    READY   STATUS             RESTARTS       AGE
kata-test-deployment-566ff7d56b-67v9t   1/2     CrashLoopBackOff   52 (17s ago)   4h6m
$ kubectl describe pod kata-test-deployment-566ff7d56b-67v9t
Name:         kata-test-deployment-566ff7d56b-67v9t
Namespace:    default
Priority:     0
Node:         minikube/192.168.39.127
Start Time:   Sun, 14 Aug 2022 10:41:59 -0700
Labels:       app=run-in-kata
              pod-template-hash=566ff7d56b
Annotations:  <none>
Status:       Running
IP:           10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kata-test-deployment-566ff7d56b
Containers:
  kata-container-1:
    Container ID:   cri-o://35e4ba8dd97b2810b28e39e15fc2e27c162d935b66efa5c3bd74d270e52853e6
    Image:          k8s.gcr.io/hpa-example
    Image ID:       k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 14 Aug 2022 10:42:11 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxlfg (ro)
  kata-container-2:
    Container ID:   cri-o://1c9802e52a581f481b824b81911d99bfd4cf2a656fcc9421cf264737c898489f
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79
    Port:           8081/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 14 Aug 2022 13:55:46 -0700
      Finished:     Sun, 14 Aug 2022 13:55:50 -0700
    Ready:          False
    Restart Count:  42
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxlfg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-vxlfg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Warning  BackOff  36s (x884 over 3h15m)  kubelet  Back-off restarting failed container

The back-off restarting failed container message only means the container keeps exiting and the kubelet is backing off between restart attempts; it does not by itself identify the cause. When the cause is a temporary resource overload from an activity spike, adjusting periodSeconds or timeoutSeconds on the probes gives the application a longer window to respond (see the Komodor article below). In this pod, though, the likely culprit is a port clash: containerPort is purely informational, nginx listens on port 80 by default, and the hpa-example image also serves on port 80, so the two containers collide inside the pod's shared network namespace and nginx exits.

How to fix CrashLoopBackOff Kubernetes error | Komodor
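To find the actual cause, inspect the logs of the failing container from its previous run:

$ kubectl logs kata-test-deployment-566ff7d56b-67v9t -c kata-container-2 --previous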

Create a pod with secret data

Use kubectl to create a secret:

kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'
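You can verify the secret was created (kubectl describe shows only the byte counts, not the values):

$ kubectl describe secret test-secret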

secret-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: secret-test-container
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret

$ kubectl apply -f secret-pod.yaml 
pod/secret-test-pod created

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
kata-test-deployment-67c95867c7-hqglt   1/1     Running   0          7h34m
secret-test-pod                         1/1     Running   0          2m56s

Access the container in the pod:

$ kubectl exec -i -t secret-test-pod -- /bin/bash
root@secret-test-pod:/# ls /etc/secret-volume
password  username
root@secret-test-pod:/# echo "$( cat /etc/secret-volume/username )"
my-app
root@secret-test-pod:/# echo "$( cat /etc/secret-volume/password )"
39528$vdg7Jb

Install nano and the OpenSSH server in the container

The nginx container runs as root, so sudo is not needed:

apt-get update
apt-get install -y vim nano openssh-server

Open the SSH daemon configuration file with:

nano /etc/ssh/sshd_config

In that file, uncomment the line

#PermitRootLogin prohibit-password

(set it to PermitRootLogin yes if you want password-based root logins).

Start the SSH daemon in the foreground with:

/usr/sbin/sshd -D

However, sshd refuses to start at first:

Missing privilege separation directory: /run/sshd

Create the directory, validate the configuration, and start the daemon:

/# mkdir /run/sshd
/# sshd -t
/# service ssh status
sshd is not running ... failed!
/# service ssh start
/# /usr/sbin/sshd -D

 

Use case: Pod with SSH keys

Install the OpenSSH server on the local host

$ sudo apt-get update
$ sudo apt install openssh-server

Add the public key to authorized_keys on the server (this uses the key pair generated in the ssh-keygen step below):

~$ cd .ssh
~/.ssh$ cat id_rsa.pub >> authorized_keys
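SSH is strict about file permissions, so tighten them as a precaution:

~/.ssh$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys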

Configure /etc/ssh/ssh_config before the pod creation:

PasswordAuthentication yes

Configure /etc/ssh/sshd_config before the pod creation:

PermitRootLogin yes 

# PubkeyAuthentication yes

Restart sshd (the Debian/Ubuntu service is named ssh):

service ssh restart

Create an SSH key

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ziyi/.ssh/id_rsa): /home/ziyi/.ssh/id_rsa
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ziyi/.ssh/id_rsa
Your public key has been saved in /home/ziyi/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:Fwq/K78IARGXZmBfOigyt374IgNBhgZKBTv5c9QtPmc ziyi@ubuntu
The key's randomart image is:
+---[RSA 3072]----+
|+oB+...          |
|+=o=+o. .        |
|O++o+..o ..      |
|o+oo...o.. .     |
| ..o.. oSE.      |
|.. .o.  +o       |
|. o o   .        |
|o .o ... .       |
| o .. .o+.       |
+----[SHA256]-----+

Create a secret that includes the SSH key

$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/home/ziyi/.ssh/id_rsa --from-file=ssh-publickey=/home/ziyi/.ssh/id_rsa.pub
secret/ssh-key-secret created

Create a pod that mounts the secret

kind: Pod
apiVersion: v1
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  runtimeClassName: kata-qemu
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret
  containers:
  - name: ssh-test-container
    image: nginx
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
