Calling a Local DeepSeek Model Programmatically

This article shows how to call a locally deployed DeepSeek model from your own programs, with separate walkthroughs for Python, Node.js, and curl.

Part of the tutorial series: DeepSeek R1 local deployment, DeepSeek API calls, and DeepSeek RAG knowledge-base workflows.

 

Calling from Python

https://pypi.org/project/ollama/

 

pip install ollama

from ollama import Client

# Point the client at the machine running Ollama (port 11434 is the default)
client = Client(
    host='http://192.168.1.4:11434',
    headers={'x-some-header': 'some-value'}
)

# stream=True yields the reply chunk by chunk as tokens are generated
stream = client.chat(model='deepseek-r1', stream=True, messages=[
    {
        'role': 'user',
        'content': 'What is your name?',
    }
])
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
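The loop above prints each chunk as it arrives. If you instead want the whole reply as one string (to store or post-process), you can join the `content` field of every chunk. The `collect_stream` helper and the `fake` chunk list below are illustrative, not part of the ollama library; with a live server you would pass the result of `client.chat(..., stream=True)` instead.

```python
# Hypothetical helper: join streamed chunks into the complete reply.
def collect_stream(chunks):
    """Each chunk has the shape {'message': {'content': '...'}}."""
    return ''.join(c['message']['content'] for c in chunks)

# Fake chunks standing in for a live stream, to show the chunk shape.
fake = [{'message': {'content': 'Hel'}}, {'message': {'content': 'lo'}}]
print(collect_stream(fake))  # -> Hello
```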
Calling from Node.js

 

https://www.npmjs.com/package/ollama

 

npm i ollama --save

import { Ollama } from 'ollama'

// Connect to the local Ollama instance
const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })

let prompt = 'What is your name?'
const response = await ollama.chat({
  model: 'deepseek-r1:7b',
  messages: [{ role: 'user', content: prompt }],
})
console.log(response.message.content)

Calling from curl

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1:7b",
    "messages": [
      {
        "role": "user",
        "content": "Hello"
      }
    ]
  }'
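This endpoint is Ollama's OpenAI-compatible chat API, so the same request can be sent from any HTTP client. As a sketch, here is the identical JSON body built with the Python standard library; the `urllib` lines are commented out because they assume an Ollama server is actually running on localhost:11434.

```python
import json

# The same request body the curl command above sends.
payload = {
    "model": "deepseek-r1:7b",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload).encode("utf-8")

# With a live Ollama server, uncomment to send the request:
# from urllib.request import Request, urlopen
# req = Request("http://localhost:11434/v1/chat/completions", data=body,
#               headers={"Content-Type": "application/json"})
# reply = json.loads(urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])

print(body.decode("utf-8"))
```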
