Deepseek Server Deployment Guide
- Docker
- CUDA
- Ollama
- Deepseek
- Dify
Note: this guide is not aimed at readers with zero Linux experience.
Docker
Skip this section if Docker is already installed.
1. Download the installation package
Download a release from the Docker static binary index page (linux/static/stable/x86_64/) on download.docker.com; this guide uses 26.1.3 as the example version.
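If the server has direct internet access, the archive can also be fetched on the server itself instead of uploading it; a minimal sketch, assuming a 26.1.3 archive is still listed on the index page (check the exact filename there, since newer releases drop the -ce suffix):
wget https://download.docker.com/linux/static/stable/x86_64/docker-26.1.3.tgz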
After uploading it to the server, cd to the file's location and extract it.
Note: make sure the filename in the command matches the archive you actually downloaded.
tar -zxvf docker-26.1.3-ce.tgz
After extracting, copy the binaries to /usr/bin
cp ./docker/* /usr/bin
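As a quick sanity check, the client binary should now be on the PATH (the daemon itself is configured and started in the next step):
docker --version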
2. Configure Docker
First, create the docker.service file
cd /etc/systemd/system/
touch docker.service
Edit the file
vim docker.service
The content is as follows:
Note: the value of --insecure-registry here should be your server's IP address.
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry=192.168.205.230
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
Add execute permission (note: cd into the directory first)
chmod +x docker.service
Reload systemd
systemctl daemon-reload
Start Docker
systemctl start docker
Check the running status
systemctl status docker
Enable Docker to start on boot
systemctl enable docker.service
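To verify the whole setup, two optional checks confirm that autostart is registered and that containers can actually run (the hello-world test assumes the server can reach Docker Hub):
systemctl is-enabled docker
docker run hello-world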
CUDA
First, install conda
wget -c https://repo.anaconda.com/archive/Anaconda3-2023.03-1-Linux-x86_64.sh
Run the installer, entering yes and pressing Enter at the prompts
bash Anaconda3-2023.03-1-Linux-x86_64.sh
Set the environment variable: add the line export PATH=~/anaconda3/bin:$PATH at the end of the profile
vim /etc/profile
Reload the environment
source /etc/profile
source ~/.bashrc
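A quick check that conda is now picked up from the new PATH:
conda --version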
Check which CUDA version to install: it is shown as "CUDA Version" in the top-right corner of the nvidia-smi output table
nvidia-smi
Install CUDA with conda, replacing XX.X below with that version
conda install -c conda-forge cupy cudnn cutensor nccl cuda-version=XX.X
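To confirm the Python side can see the GPU after the install, one small check (assuming the cupy package from the command above installed cleanly) is:
python -c 'import cupy; print(cupy.cuda.runtime.getDeviceCount())'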
Ollama
Install directly with the one-line install script
curl -fsSL https://ollama.com/install.sh | bash
Start Ollama
ollama serve
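A quick way to confirm the service is listening on its default port 11434:
curl http://localhost:11434
It should reply with "Ollama is running".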
List installed models
ollama list
Deepseek
Install the Deepseek-r1 model; once the download finishes, running the same command again starts an interactive chat. Press Ctrl+d to exit.
ollama run deepseek-r1:32b
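Besides the interactive CLI, the model can also be called through Ollama's HTTP API, which is what Dify will use later; a minimal sketch:
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:32b", "prompt": "Hello", "stream": false}'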
Dify
First, clone the Dify source code
git clone --depth 1 https://github.com/langgenius/dify.git
Then change into dify/docker
cd dify/docker
Create the configuration file from the example template
cp .env.example .env
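If port 80 on the server is already taken, the exposed port can be changed in .env before starting the stack; a sketch, assuming the variable is named EXPOSE_NGINX_PORT as in recent Dify releases:
vim .env
EXPOSE_NGINX_PORT=8080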
Pull the images and start the stack with Docker
docker compose up -d
If that reports the command is not found, use docker-compose up -d instead.
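Once it finishes, you can confirm that all containers came up:
docker compose ps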
Dify is served through port 80.
After opening Dify in the browser, set up the administrator account. In the top-right corner there is a user menu; click Settings, add a model provider, and choose Ollama. For the base URL below, enter the server's IP; if that gives an error, use http://host.docker.internal:11434 instead (when Dify is deployed and accessed on your own machine, use the latter).
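If Dify's containers cannot reach Ollama at the server IP, a common cause is that Ollama only listens on 127.0.0.1 by default. A sketch of making it listen on all interfaces, assuming it runs as the ollama.service unit created by the install script:
systemctl edit ollama
Add the following in the editor that opens:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Then restart:
systemctl restart ollama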