Try llama2 on a NUC (by quqi99)

Author: Zhang Hua  Published: 2023-09-06

Copyright notice: This article may be freely reposted, but when reposting please indicate the original source, the author information, and this copyright notice in the form of a hyperlink
( http://blog.csdn.net/quqi99 )

It is said that the llama2 large model can now be run on an ordinary PC,

Way 1

so I gave it a try by following this document [1]:

#https://medium.com/@karankakwani/build-and-run-llama2-llm-locally-a3b393c1570e
git clone https://github.com/facebookresearch/llama.git
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
#request access - https://ai.meta.com/resources/models-and-libraries/llama-downloads/
#then run 'cd llama && ./download.sh', and select 7B-chat to download
du -h ./llama-2-7b-chat/consolidated.00.pth
cd ../llama.cpp/
python3 -m venv llama2
source llama2/bin/activate
python3 -m pip install -r requirements.txt
mkdir -p models/7B
python3 convert.py --outfile models/7B/ggml-model-f16.bin --outtype f16 ../../llama2/llama/llama-2-7b-chat --vocab-dir ../../llama2/llama

However, the f16 conversion step above failed with a Python error that looks like this bug (https://github.com/huggingface/transformers/issues/20011), which I could not resolve.
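
For reference, had the conversion succeeded, the usual remaining llama.cpp steps would be to quantize the f16 file and then run inference with the main binary. A minimal sketch (the q4_0 quantization type and the prompt are only examples):

#quantize the f16 model to 4-bit to cut memory usage
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin q4_0
#run a one-shot prompt against the quantized model
./main -m models/7B/ggml-model-q4_0.bin -n 128 -p "Hello, what is a NUC?"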

Way 2

So I switched to trying the xinference framework locally instead, and it worked, although it is quite slow on the NUC (i7-13700H).

python3 -m venv xinference
source xinference/bin/activate
pip install "xinference[all]"
xinference -H 192.168.99.179
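
Once the endpoint is up (port 9997 by default), a model can be launched and queried from the web UI or from Python. The sketch below uses the xinference Python client (already pulled in by "xinference[all]"); the model and parameter names follow the docs of that period and may differ in newer versions, so treat it as an assumption rather than a verified recipe:

#launch a 4-bit ggml build of llama-2-chat 7B and chat with it
from xinference.client import Client
client = Client("http://192.168.99.179:9997")
model_uid = client.launch_model(model_name="llama-2-chat",
                                model_format="ggmlv3",
                                model_size_in_billions=7,
                                quantization="q4_0")
model = client.get_model(model_uid)
print(model.chat("What is a NUC?", chat_history=[],
                 generate_config={"max_tokens": 256}))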

Way 3

#https://github.com/PromtEngineer/localGPT
wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh 
./Miniconda3-latest-Linux-x86_64.sh -b
export PATH="/home/hua/miniconda3/bin:$PATH"
conda
conda create -n localGPT
conda env list
conda init bash
conda activate localGPT
cd /bak/work/ai
git clone https://github.com/PromtEngineer/localGPT.git
cd localGPT
pip install -r requirements.txt
cp /nas/book/计算机网络.pdf /bak/work/ai/localGPT/SOURCE_DOCUMENTS/
#need proxychains to reach huggingface.co:443 and cdn-lfs.huggingface.co:443 so that pytorch_model.bin etc can be downloaded (a sample proxychains config is sketched after this listing)
#proxychains python ingest.py
proxychains python ingest.py --device_type cpu
proxychains python run_localGPT.py
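
The proxychains calls above assume a working proxy is already configured in /etc/proxychains.conf (or proxychains4.conf). A minimal sketch, assuming a local socks5 proxy on 127.0.0.1:1080 (replace with your real proxy address and port):

#/etc/proxychains.conf - only the relevant lines
strict_chain
proxy_dns
[ProxyList]
socks5  127.0.0.1 1080

Once ingest.py has built the local vector store from SOURCE_DOCUMENTS, run_localGPT.py starts an interactive prompt where questions are answered against the ingested PDF.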

[1] https://medium.com/@karankakwani/build-and-run-llama2-llm-locally-a3b393c1570e
