Paper Notes: Chimera: Collaborative Preemption for Multitasking on a Shared GPU

## Introduction

[Chimera: Collaborative Preemption for Multitasking on a Shared GPU](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=ieArtTcAAAAJ&citation_for_view=ieArtTcAAAAJ:2osOgNQ5qMEC)

![image][1]

*    Keywords:

*    Graphics Processing Unit;

*    Preemptive Multitasking;

*    Context Switch;

*    Idempotence

----------

## Problem Statement

> Preemptive multitasking on CPUs has been primarily supported through context switching. However, the same preemption strategy incurs substantial overhead due to the large context in GPUs.

Because the GPU context is large, the context-switching strategy used on CPUs is a poor fit for GPU preemption.

>    overhead comes in two dimensions: a preempting kernel suffers from a long preemption latency, and the system throughput is wasted during the switch

The overhead comes in two dimensions: the preempting kernel suffers a long preemption latency, and system throughput is wasted during the switch.

## Proposed Solution

>    we propose Chimera, a collaborative preemption approach that can precisely control the overhead for multitasking on GPUs

The paper proposes Chimera, a collaborative preemption approach for multitasking GPUs that can precisely control the overhead described above.

>    Chimera can achieve a specified preemption latency while minimizing throughput overhead

Chimera can achieve a specified preemption latency while minimizing the throughput overhead.

>    Chimera achieves the goal by intelligently selecting which SMs to preempt and how each thread block will be preempted.

Chimera achieves this by intelligently selecting which SMs to preempt and how each thread block will be preempted.

>    Chimera first introduces streaming multiprocessor (SM) flushing, which can instantly preempt an SM by detecting and exploiting idempotent execution

Technique 1: SM flushing.

>    Chimera utilizes flushing collaboratively with two previously proposed preemption techniques for GPUs, namely context switching and draining to minimize throughput overhead while achieving a required preemption latency.

Technique 2: context switching; Technique 3: draining.

#### Technique Explanations

1.    Context switching

>    Context switching [17, 29] stores the context of currently running thread blocks, and preempts an SM with a new kernel.

Context switching stores the context of the currently running thread blocks and then preempts the SM with the new kernel.

2.    Draining

>    Draining [12, 29] stops issuing new thread blocks to the SM and waits until the SM finishes its currently running thread blocks.

Draining stops issuing new thread blocks to the SM and waits until the SM finishes its currently running thread blocks before preempting it (the polite approach).

3.    Flushing

>    Flushing drops the execution of running thread blocks and preempts the SM almost instantly.

Flushing drops the execution of the running thread blocks and preempts the SM almost instantly (the brute-force approach).

## Contributions

1.  Analyzing the conditions under which an SM can be flushed on a GPU, by relaxing the semantic definition of idempotence.

2.  Quantitatively analyzing the relationship between the preemption techniques (context switching, draining, and flushing) and thread-block execution progress.

3.  Implementing Chimera, which intelligently selects which SMs to preempt and how each thread block is preempted, based on the cost of the different preemption techniques.

## Evaluation

> Evaluations show that Chimera violates the deadline for only 0.2% of preemption requests when a 15µs preemption latency constraint is used. For multi-programmed workloads, Chimera can improve the average normalized turnaround time by 5.5x, and system throughput by 12.2%

Chimera rarely violates the latency deadline, and it improves the average normalized turnaround time (by 5.5x) and system throughput (by 12.2%) for multi-programmed workloads.

#### 3. Architecture

##### 3.1 GPU Scheduler with Preemptive Multitasking

An SM partitioning policy in the kernel scheduler tells how many SMs each kernel will run on. Chimera consists of two parts: estimating the cost of preemption for each technique, and selecting SMs to preempt with the corresponding preemption techniques. With these estimates, Chimera can directly compare the estimated cost of each preemption technique.
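The notes above don't spell out the partitioning policy itself. As a minimal sketch of what an SM partitioning step could look like (Python; the proportional-share rule and all names here are my own assumption for illustration, not the policy from the paper):

```python
# Hypothetical sketch: split the GPU's SMs between concurrently running kernels.
# The proportional-share rule is an illustrative assumption, not Chimera's policy.

def partition_sms(num_sms: int, shares: dict[str, int]) -> dict[str, int]:
    """Assign each kernel a whole number of SMs, roughly proportional to its share."""
    total = sum(shares.values())
    alloc = {k: (num_sms * s) // total for k, s in shares.items()}
    # Hand out any leftover SMs, largest shares first.
    leftover = num_sms - sum(alloc.values())
    for k in sorted(shares, key=shares.get, reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# Example: 15 SMs split 2:1 between two kernels.
print(partition_sms(15, {"kernelA": 2, "kernelB": 1}))  # {'kernelA': 10, 'kernelB': 5}
```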

##### 3.2 Cost Estimation

Chimera estimates the cost of each preemption technique precisely for each SM. First, it measures the total number of instructions executed by each thread block to determine the block's progress. Second, it also measures each thread block's progress in cycles, which yields an instructions-per-cycle (IPC) or cycles-per-instruction (CPI) figure for projecting the remaining execution time. The preemption latency of context switching is estimated using the same method.
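A rough sketch of this kind of per-block cost estimation (Python; the counters, the assumption that the block's total instruction count is known, and the context-size and bandwidth figures are illustrative assumptions, not numbers from the paper):

```python
# Illustrative per-thread-block cost estimation from progress counters.
from dataclasses import dataclass

@dataclass
class BlockProgress:
    executed_insts: int   # instructions executed so far
    elapsed_cycles: int   # cycles spent so far
    total_insts: int      # assumed-known total instructions for this block

    @property
    def cpi(self) -> float:
        return self.elapsed_cycles / max(self.executed_insts, 1)

def drain_latency(block: BlockProgress) -> float:
    """Estimated cycles until the block finishes on its own (draining latency)."""
    return (block.total_insts - block.executed_insts) * block.cpi

def flush_overhead(block: BlockProgress) -> float:
    """Cycles of work thrown away if the block is flushed (its latency is near zero)."""
    return block.elapsed_cycles

def switch_latency(context_bytes: int, bytes_per_cycle: float) -> float:
    """Estimated cycles to save the SM context (context-switching latency)."""
    return context_bytes / bytes_per_cycle

blk = BlockProgress(executed_insts=40_000, elapsed_cycles=100_000, total_insts=50_000)
print(drain_latency(blk), flush_overhead(blk), switch_latency(256 * 1024, 32.0))
```

With numbers like these per thread block and per SM, the three techniques become directly comparable, which is what the selection step below relies on.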

##### 3.3 Preemption Selection

This subsection describes how Chimera selects a subset of SMs, and the technique to use for each, when a preemption request arrives. The time complexity of Algorithm 1 is O(NT log T + N log N), so the impact of the selection algorithm on the preemption latency is negligible.
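The paper's Algorithm 1 isn't reproduced in these notes; as a simplified stand-in, a greedy selection that respects a latency budget while minimizing throughput cost could look like this (Python; the data layout and the greedy rule are my assumptions, not the paper's algorithm):

```python
# Greedy sketch: for each SM keep only techniques whose estimated latency fits the
# budget, take the cheapest (in throughput cost), then preempt the cheapest SMs
# first. A simplification for illustration, not the paper's Algorithm 1.

def select_preemptions(per_sm_costs, needed_sms, latency_budget):
    """per_sm_costs: {sm_id: {technique: (latency, throughput_cost)}}"""
    candidates = []
    for sm, techs in per_sm_costs.items():
        feasible = [(cost, tech) for tech, (lat, cost) in techs.items()
                    if lat <= latency_budget]
        if feasible:
            cost, tech = min(feasible)
            candidates.append((cost, sm, tech))
    candidates.sort()
    return [(sm, tech) for _, sm, tech in candidates[:needed_sms]]

costs = {
    0: {"flush": (1, 900), "drain": (400, 0), "switch": (50, 120)},
    1: {"flush": (1, 100), "drain": (20, 0),  "switch": (50, 120)},
    2: {"flush": (1, 700), "drain": (900, 0), "switch": (50, 120)},
}
print(select_preemptions(costs, needed_sms=2, latency_budget=60))
# -> [(1, 'drain'), (0, 'switch')]
```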

##### 3.4 SM Flushing

The idempotence condition is relaxed by looking at thread blocks individually, with a notion of time: even if a kernel is not idempotent over its whole lifetime, each thread block can still be flushed up to the point where it first executes an operation that breaks idempotence.
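A toy model of that relaxed, per-block check (Python; treating atomics and stores that overwrite previously read data as the points where flushing stops being safe is my simplification of the idea, not the paper's detection mechanism):

```python
# Toy relaxed-idempotence check: a thread block stays flushable (safe to drop and
# re-execute from its start) until it performs an atomic or overwrites data it has
# already read. Simplified for illustration.

def first_non_idempotent_point(trace):
    """Index of the first instruction after which flushing is no longer safe,
    or None if the whole trace stays idempotent."""
    loaded, stored = set(), set()
    for i, (op, addr) in enumerate(trace):
        if op == "atomic":
            return i                      # atomics are not safely re-executable
        if op == "load":
            loaded.add(addr)
        elif op == "store":
            if addr in loaded and addr not in stored:
                return i                  # overwrites its own input
            stored.add(addr)
    return None

# The block below can be flushed at any point before index 3, where it
# overwrites address 0x10 that it read at index 0.
trace = [("load", 0x10), ("store", 0x20), ("load", 0x20), ("store", 0x10)]
print(first_non_idempotent_point(trace))  # -> 3
```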

#### 4. Results

[1]: https://coding.net/api/project/178029/files/403854/imagePreview
