CKA Question Notes

Q1: Create a Job that runs 10 times with 2 pods running in parallel
Key parameters:

# kubectl explain  job.spec.parallelism
KIND:     Job
VERSION:  batch/v1

FIELD:    parallelism <integer>

DESCRIPTION:
     Specifies the maximum desired number of pods the job should run at any
     given time. The actual number of pods running in steady state will be less
     than this number when ((.spec.completions - .status.successful) <
     .spec.parallelism), i.e. when the work left to do is less than max
     parallelism. More info:
     https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
# kubectl explain  job.spec.completions
KIND:     Job
VERSION:  batch/v1

FIELD:    completions <integer>

DESCRIPTION:
     Specifies the desired number of successfully finished pods the job should
     be run with. Setting to nil means that the success of any pod signals the
     success of all pods, and allows parallelism to have any positive value.
     Setting to 1 means that parallelism is limited to 1 and the success of that
     pod signals the success of the job. More info:
     https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

Example Job YAML:

apiVersion: batch/v1
kind: Job
metadata:
  name: date
spec:
  parallelism: 2
  completions: 10
  template:
    spec:
      containers:
      - name: date
        image: busybox
        command: ["/bin/sh","-c","date && sleep 15"]
      restartPolicy: Never
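
After applying the manifest, the Job keeps at most 2 pods running until 10 completions are reached. A quick verification (assuming the manifest above is saved as job.yaml):

# kubectl apply -f job.yaml
# kubectl get pods -l job-name=date -w
# kubectl get job date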

Q2: Find which Pod is using the most CPU

# kubectl top pods -A
NAMESPACE       NAME                                       CPU(cores)   MEMORY(bytes)   
default         busybox-httpd-1-7d9fcbc69-kh6jm            0m           0Mi             
default         busybox-httpd-1-7d9fcbc69-lqzk8            0m           0Mi             
default         busybox-httpd-2-79bc8f7fcd-gwz6l           0m           0Mi             
default         busybox-httpd-3-64594f984b-r7nhd           0m           0Mi             
default         task-pv-pod-2                              0m           1Mi             
default         testpod-h9v4r                              0m           0Mi             
default         testpod-hsfrp                              0m           0Mi             
default         testpod-m9nbn                              0m           0Mi             
default         testpod-nfb6h                              0m           0Mi             
default         testpod-qzf7q                              0m           0Mi             
default         testpod-ttq9k                              0m           0Mi             
kube-system     calico-kube-controllers-8646dd497f-wqzwj   3m           11Mi            
kube-system     calico-node-5ckj5                          13m          86Mi            
kube-system     calico-node-hklwn                          18m          67Mi            
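
The output above is unsorted; newer kubectl versions can sort it directly, which makes the busiest pod obvious:

# kubectl top pods -A --sort-by=cpu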

Q3: List all PersistentVolumes sorted by name

PersistentVolumes are cluster-scoped, so no namespace flag is needed:

# kubectl get pv --sort-by=.metadata.name
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
ceph-pool1-lun1   10Gi       RWO            Retain           Bound    default/pvc1   manual                  16h
ceph-pool1-lun2   15Gi       RWO            Retain           Bound    default/pvc2   manual                  16h
ceph-pool1-lun3   10Gi       RWO            Retain           Bound    default/pvc5   manual                  16h
ceph-pool1-lun4   15Gi       RWO            Retain           Bound    default/pvc3   manual                  16h
ceph-pool1-lun5   10Gi       RWO            Retain           Bound    default/pvc4   manual                  16h

Q4: Create a NetworkPolicy that allows connections to port 8080 from the busybox pod only
1. A network plugin that supports NetworkPolicy must be installed.
2. Annotate the default namespace to turn on default-deny isolation (a legacy mechanism; see the note after the policy below):

kubectl annotate ns default "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"

3. Configure the NetworkPolicy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: busybox-access-8080
spec:
  podSelector:  # selects the target pods
    matchLabels:
      version: test1
  ingress:      # selects the allowed sources
  - from:
    - podSelector:
        matchLabels:
          run: busybox
    ports:
      - protocol: TCP
        port: 8080
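
Note: the DefaultDeny annotation used in step 2 is the pre-GA mechanism and has been removed from current Kubernetes; the same isolation is now expressed as a default-deny NetworkPolicy. A minimal sketch:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress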

Q5: Fixing broken nodes, see https://kubernetes.io/docs/concepts/architecture/nodes/
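
A typical triage sequence for a broken or NotReady node (a sketch; the kubelet unit name assumes a systemd-managed install):

# kubectl get nodes
# kubectl describe node <node-name>
# ssh <node-name>
# systemctl status kubelet
# journalctl -u kubelet --no-pager | tail
# systemctl restart kubelet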

Q6: etcd backup
Background:
All Kubernetes configuration data is stored in etcd, so after an unrecoverable disaster an etcd backup can be used to restore everything.
etcd takes snapshots periodically, but a separate backup is still recommended; backing up the /etc/etcd/etcd.conf configuration file is also advised.
Backup methods:
A v2 backup captures both v2 and v3 data.
A v3 backup captures only v3 data, not v2 data.

V2 backup (runs online, without interrupting etcd cluster operation):

export ETCDCTL_API=2
alias etcdctl2=' etcdctl  \
--cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--key-file=/etc/kubernetes/pki/etcd/peer.key \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--endpoints=https://etcd01:2379,https://etcd02:2379,https://etcd03:2379'
etcdctl2 backup --data-dir /var/lib/etcd/default.etcd/ --backup-dir /backupdir

The backup is a directory tree containing multiple files:

# tree /backupdir/
/backupdir/
└── member
    ├── snap
    │   ├── 000000000000005c-0000000000356803.snap
    │   └── db
    └── wal
        └── 0000000000000000-0000000000000000.wal

Restore procedure:
1. Stop etcd on all hosts.
2. Purge /var/lib/etcd/member on all hosts.
3. Copy the backup to /var/lib/etcd/member on the first etcd host.
4. Start etcd on the first host with --force-new-cluster.
5. On the first etcd host, set the correct peer URL to the node's IP instead of 127.0.0.1.
6. Add the next host to the cluster.
7. Start etcd on the next host with --initial-cluster set to "first host + second host".
8. Repeat steps 6 and 7 until all etcd nodes are connected (a sketch of steps 4 and 6 follows this list).
9. Restart etcd normally (using its existing settings).
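
A sketch of steps 4 and 6 (the hostname etcd02 and peer port 2380 are assumptions, and the remaining etcd flags are elided):

# step 4, on the first host, with the restored data in place:
etcd --force-new-cluster --data-dir /var/lib/etcd
# step 6, register the next member before starting it:
etcdctl2 member add etcd02 https://etcd02:2380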

V3 backup: produces a single file, /etcdbackup.20190612

export ETCDCTL_API=3
alias etcdctl3=' etcdctl  \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--endpoints=https://etcd01:2379,https://etcd02:2379,https://etcd03:2379'
etcdctl3 snapshot save /etcdbackup.20190612

1. Stop etcd on all hosts.
2. Purge /var/lib/etcd/member on all hosts.
3. Copy the backup file to every etcd host.
4. On every etcd host, run 'source /etc/default/etcd' and execute the following command:

etcdctl3 snapshot restore /etcdbackup.20190612 \
--name $ETCD_NAME --initial-cluster "$ETCD_INITIAL_CLUSTER" \
--initial-cluster-token "$ETCD_INITIAL_CLUSTER_TOKEN" \
--initial-advertise-peer-urls $ETCD_INITIAL_ADVERTISE_PEER_URLS \
--data-dir $ETCD_DATA_DIR

Backup walkthrough:
Check whether etcd contains any v2 data; here the cluster has none:

#export ETCDCTL_API=2
#alias etcdctl2=' etcdctl  \
--cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--key-file=/etc/kubernetes/pki/etcd/peer.key \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--endpoints=https://etcd01:2379,https://etcd02:2379,https://etcd03:2379'
#etcdctl2 ls /

Check whether etcd contains v3 data; all the data is in v3:

#export ETCDCTL_API=3
#alias etcdctl3=' etcdctl  \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--endpoints=https://etcd01:2379,https://etcd02:2379,https://etcd03:2379'
#etcdctl3  get / --prefix --keys-only

Back up the etcd v3 data:

#etcdctl3 snapshot save /etcdbackup.20190612
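
The snapshot file can be verified afterwards:

#etcdctl3 snapshot status /etcdbackup.20190612 -w table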

Q7: You have a container with a volume mount. Add an init container that creates an empty file in the volume. (The only trick is to mount the volume in the init container as well.)
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: init-myapp
    image: busybox
    command: ['sh', '-c', 'touch /var/www/html/index.html']
    volumeMounts:
      -  mountPath: /var/www/html
         name: volume1
  containers:
  - name: myapp
    image: busybox
    command: ['sh', '-c', '/bin/httpd -f -h /var/www/html']
    volumeMounts:
      -  mountPath: /var/www/html
         name: volume1
  volumes:
    - name: volume1
      emptyDir: {}
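
To confirm the init container created the file (assuming the manifest above is saved as myapp-pod.yaml):

# kubectl apply -f myapp-pod.yaml
# kubectl exec myapp-pod -- ls /var/www/html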

Q8: When running a redis key-value store in your pre-production environments, many deployments incoming from CI leave behind a lot of stale cache data in redis, which is causing test failures. The CI admin has requested that each time a redis key-value store is deployed in staging, it not persist its data.
Create a pod named non-persistent-redis that specifies a named volume with name app-cache and mount path /data/redis. It should launch in the staging namespace and the volume MUST NOT be persistent.
Answer: create a Pod with an emptyDir volume and set namespace: staging in the YAML.

kubectl create ns staging

apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  labels:
    app: myapp
  namespace: staging
spec:
  nodeName: node01
  containers:
  - name: redis
    image: redis
    volumeMounts:
      -  mountPath: /data/redis
         name: app-cache
    workingDir: /data/redis
    env:
      - name: data
        value: /data/redis/
  volumes:
    - name: app-cache
      emptyDir: {}
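
To verify the volume is non-persistent, check that kubectl reports it as an EmptyDir:

# kubectl describe pod non-persistent-redis -n staging | grep -A2 app-cache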

Q9: Find the error message with the string "Some-error message here".
https://kubernetes.io/docs/concepts/cluster-administration/logging/
see kubectl logs and /var/log for system services

 #kubectl logs pod-name
 #cat /var/log/messages
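
To search for the specific string (pod-name is a placeholder):

 #kubectl logs pod-name | grep "Some-error message here"
 #grep -r "Some-error message here" /var/log/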

Q10: Run a Jenkins Pod on a specified node only.
https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Create the Pod manifest at the specified location and then edit the systemd service file for kubelet(/etc/systemd/system/kubelet.service) to include --pod-manifest-path=/specified/path. Once done restart the service.
Alternatively, pod.spec.nodeName can be set to pin the Pod to a specific node, as in the sketch below.
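
A minimal sketch using nodeName (the node name node01 is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  nodeName: node01
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts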

Q11: Create an Ingress resource, an Ingress controller, and a Service that resolves to cs.rocks.ch.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: cs.rocks.ch
    http:
      paths:
        - path: /
          backend:
            serviceName: test1
            servicePort: 80
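
The Ingress above assumes a backing Service named test1 exposing port 80; a minimal sketch (the app: test1 selector is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: test1
spec:
  selector:
    app: test1
  ports:
  - port: 80
    targetPort: 80

On Kubernetes 1.19+ the Ingress itself would instead use apiVersion: networking.k8s.io/v1, where the backend is written with service.name and service.port.number.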
