A Quick and Easy Way to Deploy an ADB Environment with Docker

The title "docker-adb: Docker for ADB" indicates that this section covers a Docker image built specifically for the Android Debug Bridge (ADB). Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable container. ADB is part of the Android SDK and is used to communicate with connected Android devices for tasks such as installing and debugging apps.

The description supplies the key facts about the image: it bundles a recent Debian base system, OpenJDK 7, and Android SDK 24.3.4 — a common toolset for Android development and testing. It also gives a simple usage example: a `docker run` command, with a few extra flags, that executes `adb devices` to list all currently connected Android devices.

From a technical standpoint, the image has the following characteristics:

1. **Debian base system**: Debian is a widely used Linux distribution valued for its stability and reliability, which makes it a solid foundation for a reproducible container environment.
2. **OpenJDK 7**: OpenJDK (Open Java Development Kit) is the open-source implementation of the Java SE platform, originally released by Sun. Its inclusion in the image provides the runtime that Java-based tooling — including parts of the Android SDK — requires.
3. **Android SDK 24.3.4**: The Android SDK (Software Development Kit) is the essential toolset for building Android apps and games; version 24.3.4 is a release of the SDK tools, which include the libraries, emulator, and documentation needed for development. Baking a specific SDK version into the image spares developers the work of installing and configuring the SDK on each local machine.
4. **Integrated ADB**: ADB itself is preinstalled, so developers can run ADB commands directly inside the container without setting up a separate ADB environment — a real convenience for anyone who frequently tests against physical Android devices.
5. **The `adb devices` example**: This command demonstrates how to use the image. `docker run` starts a container; `--privileged` grants the container elevated permissions, which ADB typically needs in order to access resources such as USB devices; `-v /dev/bus/usb:/dev/bus/usb` is a volume mount that maps the host's `/dev/bus/usb` directory to the same path inside the container, so processes in the container can reach the USB bus; `softsam/adb` names the Docker image to use; and the trailing `adb devices` is the command executed inside the container, listing all connected Android devices.

Putting these points together: with this image, developers can quickly spin up a lightweight, ready-to-use Android tooling environment without worrying about configuration differences between machines, which noticeably improves development speed and consistency.

Finally, the archive's file list contains a single entry, "docker-adb-master", which suggests it is the source repository (or project name) for this Docker image. The source code and any further documentation should be obtainable from that repository.
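The run command described above can be written out in full. This is a sketch based on the description — the `softsam/adb` image name and the flags come from the text, and it assumes a Linux host with a running Docker daemon and an Android device attached over USB:

```shell
# List Android devices from inside the softsam/adb container.
# --privileged              : grant the container elevated permissions (ADB needs raw USB access)
# -v /dev/bus/usb:/dev/bus/usb : expose the host's USB bus to the container
docker run --privileged \
  -v /dev/bus/usb:/dev/bus/usb \
  softsam/adb \
  adb devices
```

If a device shows up as `unauthorized`, accept the USB-debugging prompt on the device's screen and re-run the command.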

error ERROR: for 53b2433d3f44_alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (9f74c9d6ee9219aa21f3a075ee643a8da9a3ee0a7cad5d8fc5e7497c5784c400): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (9f74c9d6ee9219aa21f3a075ee643a8da9a3ee0a7cad5d8fc5e7497c5784c400): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# sudo lsof -i :9093 || echo "端口 9093 已释放" COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME alertmana 17952 prometheus 7u IPv6 177546 0t0 TCP *:copycat (LISTEN) [root@localhost docker-prometheus]# sudo lsof -i :9100 || echo "端口 9100 已释放" COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME prometheu 17933 prometheus 30u IPv6 179703 0t0 TCP localhost:54442->localhost:jetdirect (ESTABLISHED) node_expo 17972 prometheus 3u IPv6 177682 0t0 TCP *:jetdirect (LISTEN) node_expo 17972 prometheus 6u IPv6 177755 0t0 TCP localhost:jetdirect->localhost:54442 (ESTABLISHED) [root@localhost docker-prometheus]# sudo iptables -t nat -L -n | grep -E "(9093|9100)" MASQUERADE tcp -- 172.18.0.3 172.18.0.3 tcp dpt:9100 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9101 to:172.18.0.3:9100 [root@localhost docker-prometheus]# docker-compose down Stopping node-exporter ... done Stopping cadvisor ... done Removing alertmanager ... done Removing node-exporter ... done Removing cadvisor ... done Removing 53b2433d3f44_alertmanager ... done Removing network docker-prometheus_monitoring [root@localhost docker-prometheus]# docker-compose up -d Creating network "docker-prometheus_monitoring" with driver "bridge" Creating node-exporter ... Creating cadvisor ... Creating alertmanager ... Creating alertmanager ... 
error Creating node-exporter ... done Creating cadvisor ... done proxy: listen tcp4 0.0.0.0:9094: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (fc63ea1fc78609d94171482fa5d1a2fb2c3ba3ed83c2d8088806b8a5613cb2ac): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" | grep 9094 [root@localhost docker-prometheus]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ede1cff2fba0 google/cadvisor:latest "/usr/bin/cadvisor -…" 56 seconds ago Up 55 seconds 8080/tcp cadvisor 1a149cda0ce5 prom/node-exporter:v1.5.0 "/bin/node_exporter …" 56 seconds ago Up 55 seconds 0.0.0.0:9101->9100/tcp, :::9101->9100/tcp node-exporter [root@localhost docker-prometheus]# vim docker-compose.yaml [root@localhost docker-prometheus]# docker-compose up -d Recreating alertmanager ... node-exporter is up-to-date Recreating alertmanager ... done Creating prometheus ... Creating prometheus ... error ERROR: for prometheus Cannot start service prometheus: driver failed programming external connectivity on endpoint prometheus (ddd41fce73c70167f23ff37ff3001e101134785f9423893a5aae147768420d6a): Error starting userland proxy: listen tcp4 0.0.0.0:9090: bind: address already in use ERROR: for prometheus Cannot start service prometheus: driver failed programming external connectivity on endpoint prometheus (ddd41fce73c70167f23ff37ff3001e101134785f9423893a5aae147768420d6a): Error starting userland proxy: listen tcp4 0.0.0.0:9090: bind: address already in use ERROR: Encountered errors while bringing up the project. 
[root@localhost docker-prometheus]# vim docker-compose.yaml version: '3.3' volumes: prometheus_data: {} grafana_data: {} networks: monitoring: driver: bridge services: prometheus: image: prom/prometheus:v2.37.6 container_name: prometheus restart: always volumes: - /etc/localtime:/etc/localtime:ro - ./prometheus/:/etc/prometheus/ - prometheus_data:/prometheus command: - '--config.file=/etc/prometheus/prometheus.yml' - '--storage.tsdb.path=/prometheus' - '--web.console.libraries=/usr/share/prometheus/console_libraries' - '--web.console.templates=/usr/share/prometheus/consoles' #热加载配置 - '--web.enable-lifecycle' #api配置 #- '--web.enable-admin-api' #历史数据最大保留时间,默认15天 - '--storage.tsdb.retention.time=30d' networks: - monitoring links: - alertmanager 34,6 顶端 怎么该
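The `lsof` output earlier in this session is the key clue: the processes holding 9090/9093/9100 run as the `prometheus` user, i.e. a host-level Prometheus/Alertmanager/node_exporter installation, most likely managed by systemd (which would also explain why node_exporter reappeared after `kill -9`). Either stop and disable those host services (`sudo systemctl disable --now prometheus alertmanager node_exporter`, assuming those unit names exist) or keep them and move the containers to free host ports. A sketch of the second option; the host-side ports 9091/9095 are arbitrary picks (verify each is free with `ss -tulnp` first, since 9094 was also busy in this session), and the `alertmanager`/`node_exporter` service keys must match the names actually used in your compose file:

```yaml
services:
  prometheus:
    ports:
      - "9091:9090"   # host 9091 -> container 9090 (9090 is held by the host Prometheus)
  alertmanager:
    ports:
      - "9095:9093"   # 9093 and 9094 were both busy on this host
  node_exporter:
    ports:
      - "9101:9100"   # docker ps shows the file already maps 9101
```

After editing, `docker-compose up -d` recreates only the changed services; any scrape targets or alerting URLs that point at the old host ports must be updated to match.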


root@hi3798mv300:~# adb version
-bash: adb: command not found
root@hi3798mv300:~# # update the package sources
root@hi3798mv300:~# sudo apt update
Hit:1 http://repo.huaweicloud.com/ubuntu-ports focal InRelease
Hit:2 http://repo.huaweicloud.com/ubuntu-ports focal-updates InRelease
Hit:3 http://repo.huaweicloud.com/ubuntu-ports focal-backports InRelease
Hit:4 http://repo.huaweicloud.com/ubuntu-ports focal-security InRelease
Hit:5 https://repo.huaweicloud.com/docker-ce/linux/ubuntu focal InRelease
Hit:6 https://www.ecoo.top/update/repo/arm64 histb InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
root@hi3798mv300:~#
root@hi3798mv300:~# # install ADB, fastboot, and the USB library
root@hi3798mv300:~# sudo apt install android-tools-adb android-tools-fastboot libusb-1.0-0
Reading package lists... Done
Building dependency tree
Reading state information... Done
libusb-1.0-0 is already the newest version (2:1.0.23-2build1).
libusb-1.0-0 set to manually installed.
The following additional packages will be installed:
  adb android-libadb android-libbacktrace android-libbase android-libboringssl android-libcrypto-utils android-libcutils
  android-libetc1 android-libf2fs-utils android-liblog android-libsparse android-libunwind android-libutils
  android-libziparchive android-sdk-platform-tools android-sdk-platform-tools-common dmtracedump etc1tool f2fs-tools
  fastboot fontconfig fonts-liberation graphviz hprof-conv libann0 libcairo2 libcdt5 libcgraph6 libdatrie1
  libf2fs-format4 libf2fs5 libgraphite2-3 libgts-0.7-5 libgts-bin libgvc6 libgvpr2 libharfbuzz0b libice6 liblab-gamut1
  libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libpathplan4 libpixman-1-0 libsm6 libthai-data libthai0 libxaw7
  libxcb-render0 libxcb-shm0 libxmu6 libxrender1 libxt6 p7zip p7zip-full sqlite3 x11-common
Suggested packages:
  gsfonts graphviz-doc p7zip-rar sqlite3-doc
The following NEW packages will be installed:
  adb android-libadb android-libbacktrace android-libbase android-libboringssl android-libcrypto-utils android-libcutils
  android-libetc1 android-libf2fs-utils android-liblog android-libsparse android-libunwind android-libutils
  android-libziparchive android-sdk-platform-tools android-sdk-platform-tools-common android-tools-adb
  android-tools-fastboot dmtracedump etc1tool f2fs-tools fastboot fontconfig fonts-liberation graphviz hprof-conv
  libann0 libcairo2 libcdt5 libcgraph6 libdatrie1 libf2fs-format4 libf2fs5 libgraphite2-3 libgts-0.7-5 libgts-bin
  libgvc6 libgvpr2 libharfbuzz0b libice6 liblab-gamut1 libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0
  libpathplan4 libpixman-1-0 libsm6 libthai-data libthai0 libxaw7 libxcb-render0 libxcb-shm0 libxmu6 libxrender1 libxt6
  p7zip p7zip-full sqlite3 x11-common
0 upgraded, 59 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,012 kB of archives.
After this operation, 29.5 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Abort.
root@hi3798mv300:~# adb version
-bash: adb: command not found
root@hi3798mv300:~#

It's not there. Did I install it wrong?
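No, nothing was installed at all: apt printed `Abort.` right after the confirmation prompt, so none of the 59 packages were ever downloaded. That typically happens when the answer is not read as plain "y" (a stray character, locale issue, or non-interactive input). Re-run non-interactively with `sudo apt install -y android-tools-adb android-tools-fastboot`, then confirm the binary actually landed on `PATH` before retrying `adb version`. A small sketch of that check; the helper name `check_cmd` is made up for illustration:

```shell
#!/bin/sh
# Report whether a command is on PATH after an install attempt,
# instead of assuming the install succeeded.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found at $(command -v "$1")"
  else
    echo "$1 missing: re-run the install, e.g. sudo apt install -y $1"
  fi
}

check_cmd adb
```

If `adb` is reported found but a plugged-in device does not show up in `adb devices`, the remaining suspects are USB permissions (udev rules) rather than the package itself.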


Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows

PS C:\Users\Czh20> docker run -d --name=ms --restart=unless-stopped --network=host -p 5354:5354/udp -p 1901:1901/udp -v E:\WorkSpace\Tool\matter-data:/data -e DEBUG=1 -e TZ=Asia/Shanghai ghcr.io/home-assistant-libs/python-matter-server:stable
WARNING: Published ports are discarded when using host network mode
c0e8729b1fd30244684ee054d882f52d0fcd7b4db80c11ed907adb39f14debbb
PS C:\Users\Czh20> ^C
PS C:\Users\Czh20> ^C
PS C:\Users\Czh20> docker run -d --name=ms --restart=unless-stopped --network=host -v E:\WorkSpace\Tool\matter-data:/data -e DEBUG=1 -e TZ=Asia/Shanghai ghcr.io/home-assistant-libs/python-matter-server:stable
9fde2407e813aaae656b7042376f87aee09e302fe2fd5fbaa870f64f083f832c
PS C:\Users\Czh20> Get-Process -Id (Get-NetUDPEndpoint -LocalPort 5354).OwningProcess
Get-NetUDPEndpoint : No MSFT_NetUDPEndpoint objects found with the 'LocalPort' property equal to '5354'. Verify the property value and try again.
At line:1 char:18
+ Get-Process -Id (Get-NetUDPEndpoint -LocalPort 5354).OwningProcess
+                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (5354:UInt16) [Get-NetUDPEndpoint], CimJobException
    + FullyQualifiedErrorId : CmdletizationQuery_NotFound_LocalPort,Get-NetUDPEndpoint
Get-Process : Cannot bind argument to parameter 'Id' because it is null.
At line:1 char:17
+ Get-Process -Id (Get-NetUDPEndpoint -LocalPort 5354).OwningProcess
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [Get-Process], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.GetProcessCommand
PS C:\Users\Czh20> Get-NetUDPEndpoint -LocalPort 5354,1901 | Format-Table LocalPort, OwningProcess
Get-NetUDPEndpoint : No MSFT_NetUDPEndpoint objects found with the 'LocalPort' property equal to '5354'. Verify the property value and try again.
At line:1 char:1
+ Get-NetUDPEndpoint -LocalPort 5354,1901 | Format-Table LocalPort, Own ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (5354:UInt16) [Get-NetUDPEndpoint], CimJobException
    + FullyQualifiedErrorId : CmdletizationQuery_NotFound_LocalPort,Get-NetUDPEndpoint
Get-NetUDPEndpoint : No MSFT_NetUDPEndpoint objects found with the 'LocalPort' property equal to '1901'. Verify the property value and try again.
At line:1 char:1
+ Get-NetUDPEndpoint -LocalPort 5354,1901 | Format-Table LocalPort, Own ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (1901:UInt16) [Get-NetUDPEndpoint], CimJobException
    + FullyQualifiedErrorId : CmdletizationQuery_NotFound_LocalPort,Get-NetUDPEndpoint
PS C:\Users\Czh20> docker logs ms | Select-String "mDNS|SSDP|1901"
2025-07-01 11:58:25.416 (MainThread) INFO [matter_server.server.stack] Initializing CHIP/Matter Logging...
2025-07-01 11:58:25.417 (MainThread) INFO [matter_server.server.stack] Initializing CHIP/Matter Controller Stack...
2025-07-01 11:58:25.620 (MainThread) INFO [chip.storage] Initializing persistent storage from file: /data/chip.json
2025-07-01 11:58:25.620 (MainThread) ERROR [chip.storage] [Errno 2] No such file or directory: '/data/chip.json'
2025-07-01 11:58:25.620 (MainThread) CRITICAL [chip.storage] Could not load configuration from /data/chip.json - resetting configuration...
2025-07-01 11:58:25.620 (MainThread) WARNING [chip.storage] No valid SDK configuration present - clearing out configuration
2025-07-01 11:58:25.620 (MainThread) WARNING [chip.storage] No valid REPL configuration present - clearing out configuration
2025-07-01 11:58:25.685 (MainThread) INFO [chip.CertificateAuthority] Loading certificate authorities from storage...
2025-07-01 11:58:25.686 (MainThread) INFO [chip.CertificateAuthority] New CertificateAuthority at index 1
2025-07-01 11:58:25.691 (MainThread) INFO [chip.FabricAdmin] New FabricAdmin: FabricId: 0x0000000000000001, VendorId = 0xFFF1
2025-07-01 11:58:25.693 (MainThread) INFO [matter_server.server.stack] CHIP Controller Stack initialized.
2025-07-01 11:58:25.693 (MainThread) INFO [matter_server.server.server] Starting the Matter Server...
2025-07-01 11:58:25.698 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetching the latest PAA root certificates from DCL.
2025-07-01 11:58:54.475 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetched 69 PAA root certificates from DCL.
2025-07-01 11:58:54.476 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetching the latest PAA root certificates from Git.
2025-07-01 11:58:56.206 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetched 2 PAA root certificates from Git.
2025-07-01 11:58:56.211 (MainThread) INFO [chip.FabricAdmin] Allocating new controller with CaIndex: 1, FabricId: 0x0000000000000001, NodeId: 0x000000000001B669, CatTags: []
2025-07-01 11:58:56.454 (Dummy-2) CHIP_ERROR [chip.native.DL] Long dispatch time: 241 ms, for event type 2
2025-07-01 11:58:56.460 (MainThread) INFO [matter_server.server.vendor_info] Loading vendor info from storage.
2025-07-01 11:58:56.460 (MainThread) INFO [matter_server.server.vendor_info] Loaded 0 vendors from storage.
2025-07-01 11:58:56.460 (MainThread) INFO [matter_server.server.vendor_info] Fetching the latest vendor info from DCL.
2025-07-01 11:59:00.307 (MainThread) INFO [matter_server.server.vendor_info] Fetched 326 vendors from DCL.
2025-07-01 11:59:00.307 (MainThread) INFO [matter_server.server.vendor_info] Saving vendor info to storage.
2025-07-01 11:59:00.309 (MainThread) INFO [matter_server.server.device_controller] Loaded 0 nodes from stored configuration
2025-07-01 11:59:00.312 (MainThread) INFO [matter_server.server.server] Matter Server successfully initialized.
PS C:\Users\Czh20> New-NetFirewallRule -DisplayName "Matter_HostMode" -Direction Inbound -Protocol UDP -LocalPort 5354,1901 -Action Allow
New-NetFirewallRule : Access is denied.
At line:1 char:1
+ New-NetFirewallRule -DisplayName "Matter_HostMode" -Direction Inbound ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : PermissionDenied: (MSFT_NetFirewallRule:root/standardcimv2/MSFT_NetFirewallRule) [New-NetFirewallRule], CimException
    + FullyQualifiedErrorId : Windows System Error 5,New-NetFirewallRule
PS C:\Users\Czh20>
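The underlying problem in this session is that `--network=host` is only supported by the Linux Docker engine; on Docker Desktop for Windows the container joins the network namespace of Docker's Linux VM, not of Windows itself, so no UDP listener on 5354/1901 ever appears in `Get-NetUDPEndpoint`, and, as the first run warned, `-p` mappings are silently discarded in host mode. Matter commissioning relies on mDNS, which effectively requires host networking, so running python-matter-server on a Linux host is the commonly recommended setup. If only the WebSocket API needs to be reachable from Windows, a sketch using explicit port publishing instead (5580 is the server's default WebSocket port; the exact compose layout here is an assumption to adapt):

```yaml
services:
  matter-server:
    image: ghcr.io/home-assistant-libs/python-matter-server:stable
    container_name: ms
    restart: unless-stopped
    ports:
      - "5580:5580"              # WebSocket API; mDNS still will not work without host networking
    volumes:
      - E:\WorkSpace\Tool\matter-data:/data
    environment:
      - DEBUG=1
      - TZ=Asia/Shanghai
```

The `New-NetFirewallRule` failure is a separate issue: "Access is denied" simply means the PowerShell session was not elevated; re-run the command from an Administrator prompt.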
