PyCUDA Matrix Multiplication

import numpy as np
import pycuda.autoinit                      # initializes the CUDA driver and creates a context
from pycuda import gpuarray
from pycuda.compiler import SourceModule

MATRIX_SIZE = 3   # square matrices of size MATRIX_SIZE x MATRIX_SIZE

matrix_mul_kernel = """
__global__ void Matrix_Mul_Kernel(float *d_a, float *d_b, float *d_c)
{
    // each thread computes one element of the result;
    // ty selects the row and tx selects the column
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    float value = 0;

    for (int i = 0; i < %(MATRIX_SIZE)s; ++i)
    {
        float d_a_element = d_a[ty * %(MATRIX_SIZE)s + i];
        float d_b_element = d_b[i * %(MATRIX_SIZE)s + tx];
        value += d_a_element * d_b_element;
    }

    d_c[ty * %(MATRIX_SIZE)s + tx] = value;
}
"""
  
# substitute the matrix size into the kernel source and compile it
matrix_mul = matrix_mul_kernel % {'MATRIX_SIZE': MATRIX_SIZE}
mod = SourceModule(matrix_mul)

# host matrices filled with small random integers (randint's upper bound is exclusive)
h_a = np.random.randint(1, 5, (MATRIX_SIZE, MATRIX_SIZE)).astype(np.float32)
h_b = np.random.randint(1, 5, (MATRIX_SIZE, MATRIX_SIZE)).astype(np.float32)

# compute the reference product on the CPU to verify the GPU computation
h_c_cpu = np.dot(h_a, h_b)
    
# upload the input matrices to device memory
d_a = gpuarray.to_gpu(h_a)
d_b = gpuarray.to_gpu(h_b)

# empty device array to hold the result
d_c_gpu = gpuarray.empty((MATRIX_SIZE, MATRIX_SIZE), np.float32)

matrixmul = mod.get_function("Matrix_Mul_Kernel")

# one thread per output element; a single MATRIX_SIZE x MATRIX_SIZE block
matrixmul(d_a, d_b, d_c_gpu, block=(MATRIX_SIZE, MATRIX_SIZE, 1))

# download the result, print inputs and output, and verify against the CPU reference
h_c_gpu = d_c_gpu.get()
print("Matrix A:\n", h_a)
print("Matrix B:\n", h_b)
print("GPU result (A x B):\n", h_c_gpu)

if np.allclose(h_c_cpu, h_c_gpu):
    print("\n\nThe computed matrix multiplication is correct")
else:
    print("\n\nThe GPU result does not match the CPU result")

The two 3×3 matrices are initialized with small random integers and uploaded to device memory with the to_gpu method of the gpuarray class. An empty GPU array is created to hold the result on the device, and these three variables are passed as arguments to the kernel function, which is launched with the matrix size as the block dimensions in the x and y directions. The result is downloaded back to the host with the get() method, and the two input matrices together with the GPU result are printed on the console. The matrix multiplication is also computed on the CPU with numpy's dot method and compared against the GPU result with np.allclose to verify the kernel's computation.
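
Because every thread in this kernel is indexed with threadIdx alone, the whole product must fit in a single block, which caps the matrix size at roughly 32×32 (1024 threads per block on current GPUs). Below is a minimal sketch of how the same indexing idea could be scaled to larger matrices by adding blockIdx-based offsets and a grid of blocks; the size N, the tile width TILE, and the bounds check are illustrative assumptions rather than part of the original example.

import numpy as np
import pycuda.autoinit
from pycuda import gpuarray
from pycuda.compiler import SourceModule

N = 128      # assumed matrix size, larger than one block can cover
TILE = 16    # assumed block edge length (TILE * TILE threads per block)

kernel_src = """
__global__ void Matrix_Mul_Kernel(float *d_a, float *d_b, float *d_c, int n)
{
    // global row/column of the output element this thread computes
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;

    if (row < n && col < n)
    {
        float value = 0;
        for (int i = 0; i < n; ++i)
            value += d_a[row * n + i] * d_b[i * n + col];
        d_c[row * n + col] = value;
    }
}
"""

mod = SourceModule(kernel_src)
matrixmul = mod.get_function("Matrix_Mul_Kernel")

h_a = np.random.randint(1, 5, (N, N)).astype(np.float32)
h_b = np.random.randint(1, 5, (N, N)).astype(np.float32)
d_a = gpuarray.to_gpu(h_a)
d_b = gpuarray.to_gpu(h_b)
d_c = gpuarray.empty((N, N), np.float32)

# enough blocks to cover the whole matrix, rounding up
blocks_per_side = (N + TILE - 1) // TILE
matrixmul(d_a, d_b, d_c, np.int32(N),
          block=(TILE, TILE, 1), grid=(blocks_per_side, blocks_per_side))

print(np.allclose(np.dot(h_a, h_b), d_c.get()))

The grid is sized by rounding N / TILE up so that every output element is covered, and the bounds check keeps the surplus threads in partially filled blocks from writing outside the matrix.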
