
Deploying a model with the tensorflow/serving Docker image

Posted: 2024-01-09 20:03:15 | Views: 193
Answer: To deploy with the tensorflow/serving Docker image, follow these steps. First, make sure Docker is installed and Docker Desktop is running. Then open a terminal and type powershell to enter a PowerShell environment. Next, pull the tensorflow/serving image:

docker pull tensorflow/serving

Once you have confirmed Docker is working, start the serving container:

docker run -t -p 8501:8501 -p 8500:8500 -v C:/Users/65410/Desktop/imbd_bert:/models/imbd_bert -e MODEL_NAME=imbd_bert tensorflow/serving

Here port 8501 exposes the REST API and port 8500 the gRPC API; the -v flag mounts the local model directory into the container at /models/imbd_bert, and the MODEL_NAME environment variable tells Serving which model to load. With that, the model is deployed via tensorflow/serving in Docker. [2][3]

References:
1. Deep learning model deployment with Docker + TensorFlow Serving — https://blog.csdn.net/qq_39056596/article/details/116271044
2, 3. Deploying a BERT text classification model (tf.keras) on Windows 10 with TensorFlow Serving (Docker install) — https://blog.csdn.net/wang_rui_j_ie/article/details/122680368
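Note that TensorFlow Serving expects the mounted model directory to contain numbered version subdirectories, each holding a SavedModel; it serves the highest version it finds. A minimal sketch of the expected layout (the directory is created locally here only for illustration, and the empty saved_model.pb is a placeholder, not a real export):

```shell
# Hypothetical layout matching the docker run command above:
# the mounted path /models/imbd_bert must contain version subdirectories.
mkdir -p imbd_bert/1
# A real export would place saved_model.pb plus a variables/ directory here;
# this empty marker file only illustrates where the SavedModel goes.
touch imbd_bert/1/saved_model.pb
ls -R imbd_bert
```

If the mounted directory contains the SavedModel files directly, with no version subdirectory, Serving will fail to find any servable version of the model.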
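Once the container is running, the model can be queried over the REST port published above. A hedged sketch of such a request follows; /v1/models/<MODEL_NAME>:predict is TensorFlow Serving's standard REST route, but the shape of each element under "instances" depends on the exported model's serving signature, so the string input below is only an assumption for a text-classification model:

```shell
# Standard TensorFlow Serving REST predict route for the model named above.
PREDICT_URL="http://localhost:8501/v1/models/imbd_bert:predict"
# "instances" is the documented request-body key for the REST predict API;
# the element format must match the model's signature (assumption: raw text).
PAYLOAD='{"instances": ["this movie was great"]}'
# Print the command rather than executing it, since the serving container
# may not be running in this environment:
echo curl -X POST -d "$PAYLOAD" "$PREDICT_URL"
```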

Related posts

D:\ProgramData\Anaconda3\envs\demo100\python.exe D:\xxq\flower\flower\app.py
2025-07-03 11:09:01.090002: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-07-03 11:09:01.097158: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x23f0b03c2f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2025-07-03 11:09:01.097359: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
 * Serving Flask app 'app'
 * Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://10.218.82.107:5000
Press CTRL+C to quit
 * Restarting with stat
2025-07-03 11:09:04.303042: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-07-03 11:09:04.309719: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x234b781a570 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2025-07-03 11:09:04.309889: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
 * Debugger is active!
 * Debugger PIN: 464-119-866