CUDA Programming: The Cache Configuration Function

Function for setting the L1 / shared memory split

Sets the preferred cache configuration for a device function.

__host__ cudaError_t cudaFuncSetCacheConfig ( const void* func, cudaFuncCache cacheConfig )
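A minimal sketch of calling this API (the kernel name scaleKernel and its body are illustrative placeholders, not from the docs). It passes the address of a __global__ function together with one of the cudaFuncCache values listed in the next section, and checks the returned cudaError_t:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel used only to show the call; any __global__ function works.
__global__ void scaleKernel(float* data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= 2.0f;
}

int main() {
    // Request a larger L1 cache for this kernel; the runtime treats it as a hint,
    // not a guarantee.
    cudaError_t err = cudaFuncSetCacheConfig((const void*)scaleKernel,
                                             cudaFuncCachePreferL1);
    if (err != cudaSuccess) {
        printf("cudaFuncSetCacheConfig failed: %s\n", cudaGetErrorString(err));
    }
    return 0;
}
```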

Enumeration parameter

enum cudaFuncCache
CUDA function cache configurations

Values

cudaFuncCachePreferNone = 0
Default function cache configuration, no preference
cudaFuncCachePreferShared = 1
Prefer larger shared memory and smaller L1 cache
cudaFuncCachePreferL1 = 2
Prefer larger L1 cache and smaller shared memory
cudaFuncCachePreferEqual = 3
Prefer equal size L1 cache and shared memory
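To illustrate how a kernel's memory behavior might guide the choice of preference, here is a hedged sketch (the kernel names, tile size, and access patterns are assumptions made for the example). A kernel that stages data in shared memory can hint cudaFuncCachePreferShared, while a kernel dominated by irregular global loads may benefit from cudaFuncCachePreferL1; on devices whose L1/shared split is fixed in hardware, the request is simply ignored.

```cuda
#include <cuda_runtime.h>

#define TILE 256

// Shared-memory-heavy kernel: hint a larger shared-memory carveout.
__global__ void tiledReverse(const float* in, float* out) {
    __shared__ float tile[TILE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[i];
    __syncthreads();
    out[i] = tile[TILE - 1 - threadIdx.x];
}

// Kernel with scattered global reads and no shared memory: hint a larger L1 cache.
__global__ void gather(const float* in, const int* idx, float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = in[idx[i]];
}

void configureCaches() {
    cudaFuncSetCacheConfig((const void*)tiledReverse, cudaFuncCachePreferShared);
    cudaFuncSetCacheConfig((const void*)gather,       cudaFuncCachePreferL1);
}
```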
