cuML installation guidance

My computer's GPU is an RTX 3060 Ti, and my CUDA version is 11.4.

First

Check your gcc version by typing “gcc -v” and make sure it is 9 or higher; my machine has gcc 9.4.
Also, remember to update your conda version; just follow conda's warning.

Second

Type one of the following commands:

## create a new environment
conda create -n rapids-22.08 -c rapidsai -c nvidia -c conda-forge  \
    cuml=22.08 python=3.9 cudatoolkit=11.4
## or install into the base environment
conda install -c rapidsai -c nvidia -c conda-forge \
    cuml=22.08 cudatoolkit=11.4

and type “y” to confirm downloading the required packages (this can take 30+ minutes or longer, depending on your internet speed).
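
Once the installation finishes, a quick sanity check (a minimal sketch; it assumes you run it inside the rapids-22.08 environment) is to import cuml and let numba list the visible GPUs:

import cuml
from numba import cuda        # numba is pulled in as a cuml dependency

print(cuml.__version__)       # should print something like 22.08.xx
cuda.detect()                 # should find and list your GPU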

Third

Type:

conda activate rapids-22.08

Open PyCharm's “Settings”, go to “Project: <your project>” → “Python Interpreter”, then click the gear icon next to “Python Interpreter” (run “conda activate rapids-22.08” in your terminal before this step), so that you can select the rapids environment as the project interpreter.
Finally, click Apply and OK, and enjoy cuml.
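
To confirm that PyCharm really picked up the new interpreter, you can run a tiny script (a sketch; the exact path depends on where conda created the environment):

import sys
import cuml

print(sys.executable)     # expect a path containing "envs/rapids-22.08"
print(cuml.__version__)   # should import and print 22.08.xx without error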

Test

from sklearn.svm import SVC
from cuml.svm import SVC as SVC_gpu
import time
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# x and y were not shown in the original post; any classification dataset works.
# Here a synthetic one is generated as an example.
x, y = make_classification(n_samples=5000, n_features=20, random_state=0)

cpu_time = []
def cpu(x, y):
    # time 10 repeated 5-fold cross-validations of sklearn's SVC on the CPU
    for i in range(10):
        start_svc = time.time()
        clf = SVC(gamma='auto', C=1)
        cross_val_score(clf, x, y, scoring='accuracy', cv=5, n_jobs=1)
        end_svc = time.time()
        run_time = end_svc - start_svc
        cpu_time.append(run_time)

gpu_time = []
def gpu(x, y):
    # the same timing loop, but with cuml's GPU-accelerated SVC
    for i in range(10):
        start_svc = time.time()
        clf = SVC_gpu(gamma='auto', C=1)
        cross_val_score(clf, x, y, scoring='accuracy', cv=5, n_jobs=1)
        end_svc = time.time()
        run_time = end_svc - start_svc
        gpu_time.append(run_time)

cpu(x, y)
gpu(x, y)
print('Ratio between sklearn and cuml is', np.mean(cpu_time)/np.mean(gpu_time))

The output is “Ratio between sklearn and cuml is 2.062013132176805”, which is very exciting!
Let's take a look at the individual run times:

cpu_time
[1.8058998584747314, 1.8110923767089844, 1.8042247295379639, 1.8101730346679688, 1.8119633197784424, 1.8518891334533691, 1.808666467666626, 1.8549416065216064, 1.8006622791290283, 1.8383514881134033]
gpu_time
[2.466395139694214, 0.7334909439086914, 0.6987638473510742, 0.7025899887084961, 0.6982870101928711, 0.7201166152954102, 0.6984083652496338, 0.7005207538604736, 0.7072882652282715, 0.6994292736053467]

Well, it shows that the CPU is faster than the GPU on the very first run (the GPU pays a one-time initialization cost), but on every subsequent run the GPU takes roughly 0.7 s versus about 1.8 s on the CPU. AMAZING!
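
That slow first GPU run is the one-time CUDA initialization. If you want the average to reflect steady-state performance, a simple tweak (a sketch reusing x, y and the functions above) is to do one untimed warm-up call before the timed loop:

# warm up the GPU once, outside the timing loop, so the one-time
# CUDA/context initialization does not inflate the GPU average
warm_up = SVC_gpu(gamma='auto', C=1)
cross_val_score(warm_up, x, y, scoring='accuracy', cv=5, n_jobs=1)

gpu_time.clear()          # discard the earlier measurements
gpu(x, y)                 # re-run the timed loop after warming up
print(np.mean(gpu_time))  # now close to ~0.7 s per run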

Machine learning is more enjoyable than ever when you combine cuml with automatic hyper-parameter optimization.
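
For example, since cuml's SVC follows the sklearn estimator API (the timing code above already passes it to cross_val_score), it can also be dropped into sklearn's GridSearchCV for automatic hyper-parameter search; this is just a sketch, not code from the original post:

from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.1, 1, 10], 'gamma': ['scale', 'auto']}
search = GridSearchCV(SVC_gpu(), param_grid, scoring='accuracy', cv=5, n_jobs=1)
search.fit(x, y)
print(search.best_params_, search.best_score_)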

Highly recommended to everyone. Well, now I really want to buy a 4090 for my computer (uhh). From now on, maybe it is my thin wallet that limits my paper count.

More detailed comparisons

Single-core CPU vs. GPU, with the dataset in host memory

Result: single-core CPU < GPU (the GPU wins);
cpu_time ≈ 1.8 s, gpu_time ≈ 0.7 s

Multi-core CPU (eight cores) vs. multiple parallel GPU workers, with the dataset in host memory

Result: the GPU side errors out.

Multi-core CPU (eight cores) vs. GPU, with the dataset in host memory

Result: multi-core CPU (eight cores) > GPU (the CPU wins);
cpu_time ≈ 0.36 s, gpu_time ≈ 0.7 s
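
The original post does not show how the eight-core CPU numbers were measured; a plausible sketch is simply letting cross_val_score use all CPU cores instead of one (n_jobs=-1 means “use all cores”):

cpu_time_parallel = []
def cpu_all_cores(x, y):
    # same timing loop as cpu(), but letting sklearn fan the five folds
    # out over all available CPU cores
    for i in range(10):
        start = time.time()
        clf = SVC(gamma='auto', C=1)
        cross_val_score(clf, x, y, scoring='accuracy', cv=5, n_jobs=-1)
        cpu_time_parallel.append(time.time() - start)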

Multi-core CPU (eight cores) vs. GPU, with the dataset in GPU memory

Result: error.
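
The code for this case is not shown either; one way to place the dataset in GPU memory is via cupy (a hedged sketch). cuml itself accepts device arrays, but sklearn's cross_val_score expects host (NumPy) arrays, which is a likely source of the error:

import cupy as cp

x_gpu = cp.asarray(x)     # copy the features into GPU memory
y_gpu = cp.asarray(y)     # copy the labels into GPU memory

clf = SVC_gpu(gamma='auto', C=1)
clf.fit(x_gpu, y_gpu)     # cuml estimators accept cupy arrays directly
# sklearn's cross_val_score, however, validates inputs with NumPy,
# so passing device arrays through it can fail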

Well, after careful comparison, I think that if you have many CPU cores, you'd better use the CPU. If all your CPU cores are already busy, then you may consider using the GPU. In other words, for this workload the GPU behaves like roughly 4 or 5 extra CPU cores waiting to be used.

Uh,
End
