How to Call NVIDIA GPU Acceleration from Python

For a better reading experience, read this article on my homepage (DearXuan的主页): https://blog.dearxuan.com/2022/02/20/python%E8%B0%83%E7%94%A8%E8%8B%B1%E4%BC%9F%E8%BE%BEGPU%E5%8A%A0%E9%80%9F%E6%96%B9%E6%B3%95/

Calling the GPU essentially comes down to calling a CUDA DLL.

If you are not familiar with CUDA programming, see this earlier article, "CUDA并行编程概述" (An Overview of CUDA Parallel Programming): https://blog.dearxuan.com/2021/11/15/CUDA%E5%B9%B6%E8%A1%8C%E7%BC%96%E7%A8%8B%E6%A6%82%E8%BF%B0/

Generating the CUDA DLL

Using the graphics card means calling into a CUDA DLL, so the first step is to build that DLL with CUDA.

Here is the sample CUDA code:

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <iostream>
#include <stdio.h>

using namespace std;

// Kernel: each thread computes one element of the sum
__global__ void DoInKernel(int* a, int* b, int* c, int num) {
	// Global index of this thread (1024 threads per block)
	int i = blockIdx.x * 1024 + threadIdx.x;
	if (i >= num) return;
	c[i] = a[i] + b[i];
}

extern "C" _declspec(dllexport) void Add(int* ori_a, int* ori_b, int* ori_c, int num) {
	int* dev_a;
	int* dev_b;
	int* dev_c;
	int size = num * sizeof(int);
	// Allocate device buffers
	cudaMalloc((void**)&dev_a, size);
	cudaMalloc((void**)&dev_b, size);
	cudaMalloc((void**)&dev_c, size);
	// Copy the input arrays from host to device
	cudaMemcpy(dev_a, ori_a, size, cudaMemcpyHostToDevice);
	cudaMemcpy(dev_b, ori_b, size, cudaMemcpyHostToDevice);
	// Launch enough 1024-thread blocks to cover num elements
	DoInKernel<<<(num + 1023) / 1024, 1024>>>(dev_a, dev_b, dev_c, num);
	// Copy the result back to the host
	cudaMemcpy(ori_c, dev_c, size, cudaMemcpyDeviceToHost);
	cudaFree(dev_a);
	cudaFree(dev_b);
	cudaFree(dev_c);
}

It adds the corresponding elements of the ori_a and ori_b arrays and stores the results in ori_c.

After compiling, you get a CUDA.dll file.
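If you build from the command line rather than from Visual Studio, an nvcc invocation along these lines should produce the DLL (the source filename kernel.cu is only an assumption here):

nvcc --shared -o CUDA.dll kernel.cu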

Calling the DLL

import ctypes
import random

length = 4096

# Path to the DLL
dll_path = r"C:\Users\Administrator\桌面\CUDA.dll"

# Load the DLL with ctypes
dll = ctypes.cdll.LoadLibrary(dll_path)

# Create C-style arrays (int[length]) to pass to the DLL
c_int = ctypes.c_int * length
array_a = c_int()
array_b = c_int()
array_c = c_int()

# Fill the input arrays with random numbers
for i in range(length):
    array_a[i] = random.randint(0, 1000)
    array_b[i] = random.randint(0, 1000)

# Call the Add() function exported by the DLL
dll.Add(array_a, array_b, array_c, length)

# Print the results
for i in range(length):
    print(array_a[i], "+", array_b[i], "=", array_c[i])
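By default, ctypes passes the arrays as raw pointers and assumes an int return value. As an optional hardening step that is not part of the script above, you can declare the signature of Add() so that ctypes checks the argument types for you; something like this should work:

# Declare the argument and return types of Add() for safer calls
dll.Add.argtypes = [ctypes.POINTER(ctypes.c_int),
                    ctypes.POINTER(ctypes.c_int),
                    ctypes.POINTER(ctypes.c_int),
                    ctypes.c_int]
dll.Add.restype = None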

Output

(Screenshot: output of the script above, one "a + b = c" line per element.)
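If you do not want to check thousands of printed lines by eye, a quick assertion (not part of the original script) can verify the result:

# Verify that every element of array_c is the sum of the corresponding inputs
assert all(array_c[i] == array_a[i] + array_b[i] for i in range(length))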
