```
Traceback (most recent call last):
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 555, in generate
    raise e
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 545, in generate
    self._generate_with_cache(
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 770, in _generate_with_cache
    result = self._generate(
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_community\chat_models\ollama.py", line 276, in _generate
    final_chunk = self._chat_stream_with_aggregation(
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_community\chat_models\ollama.py", line 207, in _chat_stream_with_aggregation
    for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_community\chat_models\ollama.py", line 179, in _create_chat_stream
    yield from self._create_stream(
  File "D:\GitProjects\REQManagement\REQ_Compare_Analysis\.venv\lib\site-packages\langchain_community\llms\ollama.py", line 244, in _create_stream
    raise OllamaEndpointNotFoundError(
langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llam70B:latest`.
python-BaseException
```
---
## ❌ Error Analysis
The error you are seeing is:
```
OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llam70B:latest`.
```
### Causes:
1. **The Ollama service is not running**
   Ollama is a local large language model runtime; the Ollama service must be started locally before any call can succeed.
2. **The specified model is not installed, or its name is misspelled**
   The error message suggests running:
   ```bash
   ollama pull llam70B:latest
   ```
   which means your code is trying to call a model named `llam70B:latest`, but no model with that name exists or has been pulled.
3. **Ollama listens on `http://localhost:11434` by default**
   If Ollama is not running, or the port is occupied or misconfigured, the call also fails with a 404.
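A quick way to tell these cases apart is to probe the endpoint first: if nothing answers on port 11434 the service is down, while a server that answers but still returns 404 on a chat call points to a missing or misspelled model. Below is a minimal sketch using only the standard library, assuming the default address `http://localhost:11434` (adjust it if you pass a custom `base_url`):
```python
# Reachability probe for the local Ollama endpoint.
# Assumption: the default address http://localhost:11434 is used; change it
# if your ChatOllama is configured with a different base_url.
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"

try:
    with urllib.request.urlopen(OLLAMA_URL, timeout=3) as resp:
        # The server answered, so the service itself is running; a 404 from a
        # chat call then means the requested model is missing or misspelled.
        print(f"Ollama responded with HTTP {resp.status} - service is up.")
except urllib.error.URLError as exc:
    # Connection refused or timeout: the Ollama service is likely not running.
    print(f"Could not reach Ollama at {OLLAMA_URL}: {exc.reason}")
```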
---
## ✅ Solutions
### ✅ 1. Make sure the Ollama service is running
#### Windows:
- Open a terminal and run:
```bash
ollama serve
```
#### macOS/Linux:
- Start Ollama:
```bash
ollama run <model_name>
```
Or start the service on its own:
```bash
ollama serve
```
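If you would rather have your Python script make sure the service is up before it calls the model, the sketch below starts `ollama serve` as a child process and waits for the API port to open. It assumes the `ollama` binary is on PATH and the default port 11434; in most setups simply running `ollama serve` in a separate terminal is enough.
```python
# Sketch: launch `ollama serve` from Python and wait until port 11434 accepts
# connections. Assumes the `ollama` binary is on PATH and the default port.
import shutil
import socket
import subprocess
import time

if shutil.which("ollama") is None:
    raise RuntimeError("The `ollama` CLI is not on PATH - install Ollama first.")

server = subprocess.Popen(["ollama", "serve"])  # runs in the background

# Poll until the port is open, or give up after roughly 30 seconds.
for _ in range(30):
    try:
        with socket.create_connection(("localhost", 11434), timeout=1):
            print("Ollama service is up on port 11434.")
            break
    except OSError:
        time.sleep(1)
else:
    server.terminate()
    raise RuntimeError("Ollama did not start listening on port 11434 in time.")
```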
---
### ✅ 2. Pull the correct model
Your current model name `llam70B` is most likely a typo; the correct name should be something like:
```bash
ollama pull llama3:latest
```
Or use another model, such as:
```bash
ollama pull mistral
ollama pull qwen2
ollama pull phi3
```
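If you want to make sure the model is present as part of a setup script, the pull can also be triggered from Python; a minimal sketch that simply shells out to the same CLI command (the model name here is only an example):
```python
# Pull a model by invoking the Ollama CLI from Python.
# Assumption: the `ollama` binary is on PATH; "llama3:latest" is an example
# name - use whichever model your code actually references.
import subprocess

MODEL = "llama3:latest"

# Equivalent to running `ollama pull llama3:latest` in a terminal;
# check=True raises CalledProcessError if the pull fails.
subprocess.run(["ollama", "pull", MODEL], check=True)
print(f"Model {MODEL} is available locally.")
```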
---
### ✅ 3. List the installed models
```bash
ollama list
```
Example output:
```
NAME              SIZE     MODIFIED
llama3:latest     4.7GB    2 hours ago
mistral:latest    3.2GB    1 day ago
```
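The same information can be queried from Python before the LangChain model is created. The sketch below assumes Ollama's REST endpoint `GET /api/tags` (the data behind `ollama list`) on the default address; if that does not match your setup, parsing the output of `ollama list` works as well.
```python
# List locally installed Ollama models and warn if the one we need is absent.
# Assumption: GET http://localhost:11434/api/tags returns a JSON object with a
# "models" list, as the standard Ollama REST API does.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    data = json.load(resp)

installed = [m["name"] for m in data.get("models", [])]
print("Installed models:", installed)

if "llama3:latest" not in installed:
    print("llama3:latest is missing - run `ollama pull llama3:latest` first.")
```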
---
### ✅ 4. Call the Ollama model correctly from Python
Make sure the model name in your Python code matches the one you pulled:
```python
from langchain_community.chat_models import ChatOllama

# Use exactly the name shown by `ollama list`, e.g. "llama3:latest".
llm = ChatOllama(model="llama3:latest", temperature=0.7)
response = llm.invoke("Hello, please introduce yourself.")
print(response.content)
```
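To turn the raw stack trace into an actionable message, you can also catch the exact exception class from the traceback (`OllamaEndpointNotFoundError` lives in `langchain_community.llms.ollama`); a minimal sketch:
```python
# Catch the 404 from Ollama and print guidance instead of a stack trace.
# The model name and prompt below are only examples.
from langchain_community.chat_models import ChatOllama
from langchain_community.llms.ollama import OllamaEndpointNotFoundError

llm = ChatOllama(model="llama3:latest", temperature=0.7)

try:
    response = llm.invoke("Hello, please introduce yourself.")
    print(response.content)
except OllamaEndpointNotFoundError:
    # Ollama answered with 404: the service is up but the model is not pulled.
    print("Model not found - check the name against `ollama list` "
          "or pull it with `ollama pull llama3:latest`.")
```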
---
## ✅ Summary
| Problem | Solution |
|------|----------|
| Ollama service not started | Run `ollama serve` |
| Model not installed | Install it with `ollama pull` |
| Wrong model name | Make sure the name matches the output of `ollama list` |
| Network or port issue | Check whether local port 11434 is occupied |
---