dmaengine driver for linux

This is just a translation of the Linux kernel documentation, with a few notes:

3.4 When does the operation execute?

The operation is not executed immediately after the async_ function returns. Drivers hold submitted operations on a pending queue until a driver-specific threshold is reached; calling async_tx_issue_pending_all() forces all pending operations to be issued.
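A minimal sketch of this batching behaviour, assuming an async_tx client that copies a single page (queue_copy_and_flush() is an illustrative name, not a kernel API):

    #include <linux/async_tx.h>

    /* illustrative helper, not a kernel API */
    static void queue_copy_and_flush(struct page *dst, struct page *src, size_t len)
    {
        struct async_submit_ctl submit;

        /* no dependency, no callback: just queue the copy */
        init_async_submit(&submit, ASYNC_TX_ACK, NULL, NULL, NULL, NULL);
        async_memcpy(dst, src, 0, 0, len, &submit);

        /* the copy may still sit on a driver's pending queue;
         * force every registered channel to start its pending work */
        async_tx_issue_pending_all();
    }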

3.5 When does the operation complete?
1. Call dma_wait_for_async_tx(), which spins/polls until the operation has completed
2. Specify a completion callback when calling the async_ routine (see the sketch below)
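A hedged sketch combining both options, again using async_memcpy() as the example client; copy_and_wait() and copy_done() are illustrative names, and error handling is omitted:

    #include <linux/async_tx.h>
    #include <linux/completion.h>

    /* option 2: the callback runs from the DMA engine's tasklet */
    static void copy_done(void *arg)
    {
        complete(arg);
    }

    /* illustrative helper, not a kernel API */
    static void copy_and_wait(struct page *dst, struct page *src, size_t len)
    {
        DECLARE_COMPLETION_ONSTACK(done);
        struct async_submit_ctl submit;
        struct dma_async_tx_descriptor *tx;

        init_async_submit(&submit, ASYNC_TX_ACK, NULL, copy_done, &done, NULL);
        tx = async_memcpy(dst, src, 0, 0, len, &submit);

        async_tx_issue_pending_all();

        /* option 1: spin/poll until this descriptor (and its parents) complete */
        if (tx)
            dma_wait_for_async_tx(tx);

        /* make sure the callback has run before 'done' goes out of scope */
        wait_for_completion(&done);
    }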


3.6 Constraints:
1/ Calls to async_ are not permitted in IRQ context.  Other
   contexts are permitted provided constraint #2 is not violated.
2/ Completion callback routines cannot submit new operations.  This
   results in recursion in the synchronous case and spin_locks being
   acquired twice in the asynchronous case.

4.1 Conformance points (for driver developers):
1. Completion callbacks are expected to happen from tasklet context
2. dma_async_tx_descriptor fields are never manipulated in IRQ context
3. Use async_tx_run_dependencies() in the descriptor clean up path to handle submission of dependent operations

4.2 dma_request_channel() can be used to allocate a channel for device-to-memory operations; such channels are exclusive and cannot be shared.
struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                     dma_filter_fn filter_fn,
                     void *filter_param);
                    
Where dma_filter_fn is defined as:
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
dma_filter_fn is called for each free channel that satisfies the capability mask; it should return true when the passed channel is the desired one, and false otherwise.

The DMA_PRIVATE capability flag marks a channel as private, so that it cannot be used by the general-purpose allocator. It can be set at initialization time if the channel is known to always be private; otherwise it is set when dma_request_channel() grabs an otherwise public channel. A request sketch follows the notes below.

1/ Once a channel has been privately allocated it will no longer be
   considered by the general-purpose allocator even after a call to
   dma_release_channel().
2/ Since capabilities are specified at the device level a dma_device
   with multiple channels will either have all channels public, or all
   channels private.
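A sketch of such a private channel request, assuming the client wants a DMA_SLAVE-capable channel served by a particular controller device; my_filter(), grab_slave_chan() and the matching criterion are only examples:

    #include <linux/dmaengine.h>

    /* example filter: accept only channels that belong to 'param' (a struct device *) */
    static bool my_filter(struct dma_chan *chan, void *param)
    {
        return chan->device->dev == param;
    }

    static struct dma_chan *grab_slave_chan(struct device *dmac_dev)
    {
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);

        /* returns NULL when no free channel satisfies the mask and the filter */
        return dma_request_channel(mask, my_filter, dmac_dev);
    }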

The slave DMA usage consists of following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel
   Interface:
    struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
            dma_filter_fn filter_fn,
            void *filter_param);
   where dma_filter_fn is defined as:
    typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
    
2. Set slave and controller specific parameters
struct dma_slave_config is used to pass the DMA direction, the DMA addresses, the bus widths, the DMA burst lengths and other peripheral-specific parameters.

   Interface:
    int dmaengine_slave_config(struct dma_chan *chan,
                  struct dma_slave_config *config)    
    
Please note that the 'direction' member will be going away, as it duplicates the
   direction given in the prepare call.
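A minimal sketch of this step, assuming a memory-to-device channel feeding a 32-bit peripheral FIFO; setup_tx_channel() and fifo_dma_addr are hypothetical names:

    #include <linux/dmaengine.h>

    /* configure 'chan' for memory-to-device transfers into a 32-bit peripheral FIFO */
    static int setup_tx_channel(struct dma_chan *chan, dma_addr_t fifo_dma_addr)
    {
        struct dma_slave_config cfg = {
            /* DMA_MEM_TO_DEV on recent kernels; older 'direction' fields
             * use enum dma_data_direction values such as DMA_TO_DEVICE */
            .direction      = DMA_MEM_TO_DEV,
            .dst_addr       = fifo_dma_addr,
            .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
            .dst_maxburst   = 4,
        };

        return dmaengine_slave_config(chan, &cfg);
    }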
    
    
3. Get a descriptor for transaction
The DMA-engine supports the following modes of slave transfers:
   slave_sg    - DMA a list of scatter gather buffers from/to a peripheral
   dma_cyclic    - Perform a cyclic DMA operation from/to a peripheral till the
          operation is explicitly stopped.
   interleaved_dma - This is common to Slave as well as M2M clients. For slave,
         the address of the device's fifo may already be known to the driver.
         Various types of operations could be expressed by setting
         appropriate values to the 'dma_interleaved_template' members.

   Interface:
    struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
        struct dma_chan *chan, struct scatterlist *sgl,
        unsigned int sg_len, enum dma_data_direction direction,
        unsigned long flags);

    struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
        struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
        size_t period_len, enum dma_data_direction direction);

    struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
        struct dma_chan *chan, struct dma_interleaved_template *xt,
        unsigned long flags);
On success, each of these prep routines returns a non-NULL pointer to a dma_async_tx_descriptor for the prepared transaction.

The peripheral driver is expected to have mapped the scatterlist for the DMA operation prior to calling device_prep_slave_sg, and it must keep the scatterlist mapped until the DMA operation has completed. The scatterlist must be mapped using the DMA struct device. The normal setup should look like this:
    nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
    if (nr_sg == 0)
        /* error */

    desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
            direction, flags);

Once a descriptor has been obtained, the callback information can be added and the descriptor must then be submitted. Some DMA engine drivers may hold a spinlock between a successful preparation and submission, so these two operations must be closely paired.

   Note:
    Although the async_tx API specifies that completion callback
    routines cannot submit any new operations, this is not the
    case for slave/cyclic DMA.

    For slave DMA, the subsequent transaction may not be available
    for submission prior to callback function being invoked, so
    slave DMA callbacks are permitted to prepare and submit a new
    transaction.

    For cyclic DMA, a callback function may wish to terminate the
    DMA via dmaengine_terminate_all().

    Therefore, it is important that DMA engine drivers drop any
    locks before calling the callback function which may cause a
    deadlock.

    Note that callbacks will always be invoked from the DMA
    engines tasklet, never from interrupt context.

4. Submit the transaction
Once the descriptor has been prepared and the callback information added, it must be placed on the pending queue of the DMA engine driver.

   Interface:
    dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)
dmaengine_submit() calls the tx_submit hook of the dma_async_tx_descriptor, which is implemented by the DMA engine driver.

   This returns a cookie that can be used to check the progress of DMA engine
   activity via other DMA engine calls not covered in this document.

dmaengine_submit() only places the transaction on the pending queue; it does not start
the transfer. The transfer is started by dma_async_issue_pending() in step 5.
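A hedged sketch of this step, pairing the callback setup with the submission; submit_with_callback() and xfer_done() are made-up helpers, and desc is assumed to come from one of the prep calls in step 3:

    #include <linux/completion.h>
    #include <linux/dmaengine.h>
    #include <linux/errno.h>

    /* trampoline dma_async_tx_callback: signal a struct completion from the tasklet */
    static void xfer_done(void *arg)
    {
        complete(arg);
    }

    /* illustrative helper: pair the callback setup with the submission */
    static int submit_with_callback(struct dma_async_tx_descriptor *desc,
                                    struct completion *done)
    {
        dma_cookie_t cookie;

        desc->callback = xfer_done;
        desc->callback_param = done;

        cookie = dmaengine_submit(desc);    /* queued, but the hardware is not started yet */
        if (dma_submit_error(cookie))
            return -EIO;

        return 0;
    }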

5. Issue pending DMA requests and wait for callback notification

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver
   completion callback routine for notification, if set.
    
    Interface:
    void dma_async_issue_pending(struct dma_chan *chan);
This routine calls the device_issue_pending hook, which is implemented by the DMA engine driver.
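Continuing the step-4 sketch above for this step (issue_and_wait() is again a made-up helper; timeout handling is omitted):

    #include <linux/completion.h>
    #include <linux/dmaengine.h>

    /* 'done' is the completion that submit_with_callback() registered on the descriptor */
    static void issue_and_wait(struct dma_chan *chan, struct completion *done)
    {
        dma_async_issue_pending(chan);  /* start work on the channel's pending queue */
        wait_for_completion(done);      /* signalled by xfer_done() from the tasklet */
    }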

    
(Note: the PL330 DMA driver only supports the DMA_TERMINATE_ALL and DMA_SLAVE_CONFIG control commands.)

Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel.  It is invalid to resume a
   channel which is not currently paused.
    
4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
        dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)    
   This can be used to check the status of the channel (note that this is
   distinct from dma_wait_for_async_tx() described in section 3.5 above).

 This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from 'descriptor->submit()' to check for
   completion of a specific DMA transaction.
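A small polling sketch under these assumptions; transfer_finished() is a hypothetical helper and cookie is the value returned by dmaengine_submit():

    #include <linux/dmaengine.h>

    /* illustrative poll: has the transaction identified by 'cookie' finished? */
    static bool transfer_finished(struct dma_chan *chan, dma_cookie_t cookie)
    {
        dma_cookie_t last, used;

        if (dma_async_is_tx_complete(chan, cookie, &last, &used) == DMA_ERROR)
            return true;    /* the channel reported an error; stop polling */

        /* DMA_SUCCESS was renamed DMA_COMPLETE in later kernels */
        return dma_async_is_complete(cookie, last, used) == DMA_SUCCESS;
    }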


