A First Look at the k8s Job Mechanism

This post is a set of study notes; if anything is misunderstood or misstated, corrections are welcome.

 

This post covers the k8s Job mechanism; see the official Kubernetes documentation for reference.

 

A k8s Job is a kind of resource for running one-off tasks. A related resource is the CronJob, which runs tasks on a periodic schedule.
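As a quick illustration of the CronJob variant, the sketch below runs a small pi computation once a minute. The name and schedule here are illustrative; batch/v1beta1 was the CronJob API version around k8s 1.12:

```yaml
apiVersion: batch/v1beta1        # CronJob API group/version in this k8s era
kind: CronJob
metadata:
  name: pi-cron                  # illustrative name
spec:
  schedule: "*/1 * * * *"        # standard cron syntax: every minute
  jobTemplate:                   # each run creates a Job from this template
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
          restartPolicy: OnFailure
```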

After a Job is deployed, k8s starts the corresponding pod(s). Once a pod is finished, the Job's status is updated to Complete, meaning the task has run to completion and the pod no longer keeps running in the system.

Compared with controllers such as ReplicaSet and ReplicationController, a k8s Job lets its pods exit once the task is done, while ReplicaSet and ReplicationController guarantee that a given number of pods is always running in the cluster. In that sense, Jobs complement controllers like ReplicaSet and ReplicationController.

 

Let's look at how the k8s Job mechanism is used through a few examples:

1. Create a simple Job that computes pi to 2000 decimal places. The corresponding YAML:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

k8s then starts a pod on a node; once the pod finishes the computation, its status becomes Completed, and the Job's status becomes Complete as well.

 

In the example above, the Job is considered complete as soon as one pod finishes, i.e. when completions reaches 1. In fact, we can control the condition under which the Job is considered complete by setting its completions value.

2. Create a Job that only completes when completions reaches 3, with a parallelism of 2, i.e. at any given moment two pods of this Job are working, e.g.:

[root@calico learn]# cat job-parallel.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 2
  completions: 3
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Now check the pod status:

#First, 2 pods are created to run the task
[root@calico learn]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pi-2xlxl 0/1 ContainerCreating 0 23s  calico-node1  
pi-7wcbj 0/1 ContainerCreating 0 23s  calico-node2  
...

#After a while, the 2 pods complete, and a 3rd pod is started to reach the third completion
[root@calico learn]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pi-2xlxl 0/1 Completed 0 7m14s 192.168.63.150 calico-node1  
pi-7wcbj 0/1 Completed 0 7m14s 192.168.186.85 calico-node2  
pi-996x8 1/1 Running 0 3m8s 192.168.63.151 calico-node1  

#Check the Job's status at this point; completions is 2/3
[root@calico learn]# kubectl get job -o wide
NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
pi 2/3 7m26s 7m27s pi perl controller-uid=9736793a-5075-11e9-a970-5254000ebe60
...

#Now all 3 pods have finished the task, each with status Completed
[root@calico learn]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pi-2xlxl 0/1 Completed 0 8m37s 192.168.63.150 calico-node1  
pi-7wcbj 0/1 Completed 0 8m37s 192.168.186.85 calico-node2  
pi-996x8 0/1 Completed 0 4m31s 192.168.63.151 calico-node1   

#Check the Job's status again; completions is 3/3
[root@calico learn]# kubectl get job -o wide
NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
pi 3/3 8m38s 9m27s pi perl controller-uid=9736793a-5075-11e9-a970-5254000ebe60

The Job YAML contains a backoffLimit parameter. It specifies how many times a failing pod is retried while the Job runs; if the task still has not succeeded once backoffLimit is reached, the Job's status is updated to Failed.
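To see backoffLimit in action, a minimal sketch (the Job name and image are illustrative) is a Job whose container always exits non-zero; after backoffLimit retries are used up, the Job is marked Failed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: always-fails             # illustrative name
spec:
  backoffLimit: 2                # give up after 2 retries
  template:
    spec:
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always exits non-zero
      restartPolicy: Never
```

With restartPolicy: Never, each retry creates a new pod, so the failed pods remain visible with kubectl get pods.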

Besides backoffLimit, which bounds the number of retries, a Job's timeout mechanism bounds how long it may run: once the Job exceeds the timeout, its status is likewise updated to Failed. Note that when a timeout (activeDeadlineSeconds) is specified, it takes precedence over backoffLimit: once the deadline passes, the Job is terminated even if retries remain.

3. Create a Job with a timeout by specifying activeDeadlineSeconds in the YAML:

[root@calico learn]# cat job-timeout.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-timeout
spec:
  parallelism: 2
  completions: 3
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

This Job will clearly take more than 100s, so it times out and fails.

#Check the Job status
[root@calico learn]# kubectl get job -o wide
NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
pi-timeout 0/3 109s 109s pi perl controller-uid=e34174a9-5077-11e9-a970-5254000ebe60

#Check the pod status
[root@calico learn]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pi-timeout-5rkvs 1/1 Terminating 0 113s 192.168.186.86 calico-node2  
pi-timeout-6cj5k 1/1 Terminating 0 113s 192.168.63.152 calico-node1  

As we can see, the Job has not completed after 100s, so its pods are deleted. Note that Job deletion is cascading, so both the pods and the Job itself are removed.

Describing the Job at this point shows the pod statuses as 0 Running / 0 Succeeded / 2 Failed:

[root@calico learn]# kubectl describe job pi-timeout
Name:                     pi-timeout
Namespace:                default
Selector:                 controller-uid=e34174a9-5077-11e9-a970-5254000ebe60
Labels:                   controller-uid=e34174a9-5077-11e9-a970-5254000ebe60
                          job-name=pi-timeout
Annotations:
Parallelism:              2
Completions:              3
Start Time:               Wed, 27 Mar 2019 06:05:45 -0400
Active Deadline Seconds:  100s
Pods Statuses:            0 Running / 0 Succeeded / 2 Failed
Pod Template:
  Labels:  controller-uid=e34174a9-5077-11e9-a970-5254000ebe60
           job-name=pi-timeout
  Containers:
   pi:
    Image:  perl
    Port:
    Host Port:
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:
    Mounts:
  Volumes:
Events:
  Type     Reason            Age   From            Message
  ----     ------            ----  ----            -------
  Normal   SuccessfulCreate  2m5s  job-controller  Created pod: pi-timeout-5rkvs
  Normal   SuccessfulCreate  2m5s  job-controller  Created pod: pi-timeout-6cj5k
  Normal   SuccessfulDelete  25s   job-controller  Deleted pod: pi-timeout-5rkvs
  Normal   SuccessfulDelete  25s   job-controller  Deleted pod: pi-timeout-6cj5k
  Warning  DeadlineExceeded  25s   job-controller  Job was active longer than specified deadline

In the examples above, after a pod is finished (finished meaning the pod's status is Complete or Failed), the pod object still remains on the system, which takes up resources and puts some load on the cluster. For this reason, k8s 1.12 introduced a TTL controller to clean up finished resources. (At the moment the TTL controller only cleans up Jobs; support for other resources may be added later.)

Once a TTL is set in a Job's YAML, the TTL controller deletes the pod ttl seconds after the pod's status becomes finished. Note that the deletion removes both the pod and the Job.

  • ttl = 0: deleted immediately after finishing
  • ttl = n: the finished pod is deleted n seconds later
  • ttl unset: finished pods and Jobs are not deleted

Example:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

 

4. As mentioned earlier, k8s Job deletion is cascading: deleting a Job also deletes its pods. But what if we sometimes don't want the pods deleted and only want to operate on the Job? For example, the pods are running a live production task while we need to change the Job template, say the Job's name. In that case we can use this command:

kubectl delete jobs/old --cascade=false

This command does not delete the pods; it deletes only the corresponding Job.

 

5. The Job's RestartPolicy

A Job supports two RestartPolicy values: OnFailure or Never. In k8s, a pod's RestartPolicy defaults to Always, which is not valid for a Job, so be sure to set the RestartPolicy explicitly when deploying a Job; otherwise the Job spec will be rejected.
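As a sketch of the OnFailure case (the Job name is illustrative): with OnFailure, a failing container is restarted in place inside the same pod, whereas with Never the Job controller creates a new pod for each retry.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-onfailure             # illustrative name
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure   # restart the failed container in the same pod
```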
