Opening an `.npz` file fails with `zipfile.BadZipFile: File is not a zip file`

Date: 2025-03-29 14:14:20 · AIGC · Views: 107
### Handling `zipfile.BadZipFile: File is not a zip file` when loading `.npz` files

When loading an `.npz` file raises `zipfile.BadZipFile: File is not a zip file`, the file is usually either corrupt or not a genuine `.npz` archive. The `.npz` format is a ZIP archive designed to bundle multiple NumPy arrays, so any file that fails the ZIP structural checks triggers this error. The likely causes and their remedies are listed below.

#### Possible causes

1. **Corrupt file**
   If the `.npz` file was truncated in transit or saved incompletely, it no longer conforms to the ZIP file structure, which raises the error above.

2. **The file is not actually in `.npz` format**
   A file of some other type may have been renamed to `.npz`, but its internal layout does not follow NumPy's `.npz` conventions.

3. **Version compatibility**
   Creating and reading the `.npz` file with different Python or NumPy versions can, in rare cases, introduce subtle differences that lead to similar parsing errors.

#### Solutions

One of the following measures usually resolves the problem:

1. **Verify file integrity**
   First confirm that the target `.npz` file itself is intact. Try to extract it manually with any tool that supports the ZIP standard; if those general-purpose programs also report that it cannot be opened, the original file is almost certainly damaged.

2. **Regenerate the `.npz` file**
   If you have access to the original data source, export a fresh copy. When packing with NumPy, make sure all arguments are set correctly:

   ```python
   import numpy as np

   data_array_1 = np.array([1, 2, 3])
   data_array_2 = np.array([4, 5, 6])

   # Correct way to save multiple arrays into one .npz file
   np.savez('data.npz', array_one=data_array_1, array_two=data_array_2)
   ```

3. **Repair the existing file (only feasible in limited scenarios)**
   Directly repairing a damaged `.npz` file is complex and has a low success rate. Unless you have deep knowledge of the ZIP format, prefer the first two approaches over attempting a manual fix.

4. **Update dependencies to the latest stable versions**
   Confirm that your Python and NumPy versions are reasonably current, so you benefit from upstream fixes that reduce the risk of such conflicts:

   ```python
   import sys

   print(sys.version)

   # If NumPy is missing or outdated, upgrade it from the command line:
   #     pip install --upgrade numpy
   import numpy as np
   print(f"NumPy version installed: {np.__version__}")
   ```

---

### Example: loading an `.npz` file safely while catching exceptions

A try/except structure handles the relevant I/O exceptions gracefully, including `BadZipFile` itself:

```python
import zipfile

import numpy as np


def load_npz_safely(filepath):
    try:
        with np.load(filepath) as loaded_data:
            # Copy every array out of the archive before the file handle closes
            return {key: loaded_data[key] for key in loaded_data.files}
    except FileNotFoundError:
        print("The specified filepath does not exist.")
    except zipfile.BadZipFile:
        print("The file is not a valid ZIP archive, so it cannot be read as .npz.")
    except ValueError:
        print("There was an issue interpreting the content of your .npz file.")
    return None


result_dict = load_npz_safely('./test_data.npz')
if result_dict is None:
    raise SystemExit(-1)
print(result_dict['array_one'])
```
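The integrity check from solution 1 can also be automated with the standard library's `zipfile` module instead of an external archiver. This is a minimal sketch; the `check_npz_integrity` helper name is my own, not part of NumPy:

```python
import zipfile


def check_npz_integrity(filepath):
    """Return True if the file is a structurally valid ZIP archive (a candidate .npz)."""
    # Quick check of the ZIP magic bytes and end-of-archive record
    if not zipfile.is_zipfile(filepath):
        print(f"{filepath} is not a valid ZIP archive; it is likely corrupt or mislabeled.")
        return False
    with zipfile.ZipFile(filepath) as zf:
        # testzip() reads every member and returns the first corrupt one, or None
        bad_member = zf.testzip()
        if bad_member is not None:
            print(f"Corrupt member inside the archive: {bad_member}")
            return False
        print("Archive members:", zf.namelist())
    return True
```

If this check fails, no amount of retrying `np.load` will help; regenerate the file from the original data source instead.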

ImageGoalSensor INSTRUCTION_SENSOR: TYPE: InstructionSensor INSTRUCTION_SENSOR_UUID: instruction MEASUREMENTS: ['DISTANCE_TO_GOAL', 'SUCCESS', 'SPL', 'NDTW', 'PATH_LENGTH', 'ORACLE_SUCCESS', 'STEPS_TAKEN'] NDTW: FDTW: True GT_PATH: data/datasets/R2R_VLNCE_v1-3_preprocessed/{split}/{split}_gt.json.gz SPLIT: train SUCCESS_DISTANCE: 3.0 TYPE: NDTW OBJECTGOAL_SENSOR: GOAL_SPEC: TASK_CATEGORY_ID GOAL_SPEC_MAX_VAL: 50 TYPE: ObjectGoalSensor ORACLE_ACTION_SENSOR: GOAL_RADIUS: 0.5 TYPE: OracleActionSensor ORACLE_NAVIGATION_ERROR: TYPE: OracleNavigationError ORACLE_SPL: TYPE: OracleSPL ORACLE_SUCCESS: SUCCESS_DISTANCE: 3.0 TYPE: OracleSuccess PANO_ANGLE_FEATURE_SENSOR: CAMERA_NUM: 12 TYPE: AngleFeaturesSensor PANO_ROTATIONS: 12 PATH_LENGTH: TYPE: PathLength POINTGOAL_SENSOR: DIMENSIONALITY: 2 GOAL_FORMAT: POLAR TYPE: PointGoalSensor POINTGOAL_WITH_GPS_COMPASS_SENSOR: DIMENSIONALITY: 2 GOAL_FORMAT: POLAR TYPE: PointGoalWithGPSCompassSensor POSSIBLE_ACTIONS: ['STOP', 'MOVE_FORWARD', 'TURN_LEFT', 'TURN_RIGHT'] PROXIMITY_SENSOR: MAX_DETECTION_RADIUS: 2.0 TYPE: ProximitySensor QUESTION_SENSOR: TYPE: QuestionSensor RXR_INSTRUCTION_SENSOR: TYPE: RxRInstructionSensor features_path: data/datasets/RxR_VLNCE_v0/text_features/rxr_{split}/{id:06}_{lang}_text_features.npz SDTW: TYPE: SDTW SENSORS: ['INSTRUCTION_SENSOR', 'SHORTEST_PATH_SENSOR', 'VLN_ORACLE_PROGRESS_SENSOR'] SHORTEST_PATH_SENSOR: GOAL_RADIUS: 0.5 TYPE: ShortestPathSensor USE_ORIGINAL_FOLLOWER: False SOFT_SPL: TYPE: SoftSPL SPL: SUCCESS_DISTANCE: 3.0 TYPE: SPL STEPS_TAKEN: TYPE: StepsTaken SUCCESS: SUCCESS_DISTANCE: 3.0 TYPE: Success SUCCESS_DISTANCE: 3.0 TOP_DOWN_MAP: DRAW_BORDER: True DRAW_GOAL_AABBS: True DRAW_GOAL_POSITIONS: True DRAW_SHORTEST_PATH: True DRAW_SOURCE: True DRAW_VIEW_POINTS: True FOG_OF_WAR: DRAW: True FOV: 90 VISIBILITY_DIST: 5.0 MAP_PADDING: 3 MAP_RESOLUTION: 1024 MAX_EPISODE_STEPS: 1000 TYPE: TopDownMap TOP_DOWN_MAP_VLNCE: DRAW_BORDER: True DRAW_FIXED_WAYPOINTS: True DRAW_MP3D_AGENT_PATH: True 
DRAW_REFERENCE_PATH: True DRAW_SHORTEST_PATH: True DRAW_SOURCE_AND_TARGET: True FOG_OF_WAR: DRAW: True FOV: 90 VISIBILITY_DIST: 5.0 GRAPHS_FILE: data/connectivity_graphs.pkl MAP_RESOLUTION: 1024 MAX_EPISODE_STEPS: 1000 TYPE: TopDownMapVLNCE TYPE: VLN-v0 VLN_ORACLE_PROGRESS_SENSOR: TYPE: VLNOracleProgressSensor WAYPOINT_REWARD_MEASURE: TYPE: WaypointRewardMeasure distance_scalar: 1.0 scale_slack_on_prediction: True slack_reward: -0.05 success_reward: 2.5 use_distance_scaled_slack_reward: True TENSORBOARD_DIR: data/tensorboard_dirs/debug TORCH_GPU_ID: 0 TOTAL_NUM_STEPS: -1.0 TRAINER_NAME: dagger VERBOSE: True VIDEO_DIR: data/videos/debug VIDEO_OPTION: [] 2025-08-23 22:32:05,231 Initializing dataset VLN-CE-v1 2025-08-23 22:32:05,266 initializing sim Sim-v0 WARNING: Logging before InitGoogleLogging() is written to STDERR I0823 22:32:05.266547 21439 ManagedContainerBase.cpp:19] ManagedContainerBase::convertFilenameToJSON : Filename : default changed to proposed JSON configuration filename : default.scene_dataset_config.json I0823 22:32:05.266567 21439 AttributesManagerBase.h:283] AttributesManager<T>::createFromJsonOrDefaultInternal (Dataset) : Proposing JSON name : default.scene_dataset_config.json from original name : default | This file does not exist. I0823 22:32:05.266610 21439 AssetAttributesManager.cpp:117] Asset attributes (capsule3DSolid : capsule3DSolid_hemiRings_4_cylRings_1_segments_12_halfLen_0.75_useTexCoords_false_useTangents_false) created and registered. I0823 22:32:05.266628 21439 AssetAttributesManager.cpp:117] Asset attributes (capsule3DWireframe : capsule3DWireframe_hemiRings_8_cylRings_1_segments_16_halfLen_1) created and registered. I0823 22:32:05.266644 21439 AssetAttributesManager.cpp:117] Asset attributes (coneSolid : coneSolid_segments_12_halfLen_1.25_rings_1_useTexCoords_false_useTangents_false_capEnd_true) created and registered. 
I0823 22:32:05.266654 21439 AssetAttributesManager.cpp:117] Asset attributes (coneWireframe : coneWireframe_segments_32_halfLen_1.25) created and registered. I0823 22:32:05.266659 21439 AssetAttributesManager.cpp:117] Asset attributes (cubeSolid : cubeSolid) created and registered. I0823 22:32:05.266664 21439 AssetAttributesManager.cpp:117] Asset attributes (cubeWireframe : cubeWireframe) created and registered. I0823 22:32:05.266676 21439 AssetAttributesManager.cpp:117] Asset attributes (cylinderSolid : cylinderSolid_rings_1_segments_12_halfLen_1_useTexCoords_false_useTangents_false_capEnds_true) created and registered. I0823 22:32:05.266686 21439 AssetAttributesManager.cpp:117] Asset attributes (cylinderWireframe : cylinderWireframe_rings_1_segments_32_halfLen_1) created and registered. I0823 22:32:05.266692 21439 AssetAttributesManager.cpp:117] Asset attributes (icosphereSolid : icosphereSolid_subdivs_1) created and registered. I0823 22:32:05.266697 21439 AssetAttributesManager.cpp:117] Asset attributes (icosphereWireframe : icosphereWireframe_subdivs_1) created and registered. I0823 22:32:05.266705 21439 AssetAttributesManager.cpp:117] Asset attributes (uvSphereSolid : uvSphereSolid_rings_8_segments_16_useTexCoords_false_useTangents_false) created and registered. I0823 22:32:05.266713 21439 AssetAttributesManager.cpp:117] Asset attributes (uvSphereWireframe : uvSphereWireframe_rings_16_segments_32) created and registered. I0823 22:32:05.266716 21439 AssetAttributesManager.cpp:105] AssetAttributesManager::buildCtorFuncPtrMaps : Built default primitive asset templates : 12 I0823 22:32:05.266949 21439 SceneDatasetAttributesManager.cpp:23] File (default) not found, so new default dataset attributes created and registered. I0823 22:32:05.266952 21439 MetadataMediator.cpp:47] MetadataMediator::createDataset : Dataset default successfully created. 
I0823 22:32:05.268803 21439 AttributesManagerBase.h:283] AttributesManager<T>::createFromJsonOrDefaultInternal (Physics Manager) : Proposing JSON name : ./data/default.physics_config.json from original name : ./data/default.physics_config.json | This file does not exist. I0823 22:32:05.268810 21439 PhysicsAttributesManager.cpp:27] File (./data/default.physics_config.json) not found, so new default physics manager attributes created and registered. I0823 22:32:05.268812 21439 AbstractObjectAttributesManagerBase.h:175] AbstractObjectAttributesManager<T>::createObject (Stage) : Making attributes with handle : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.glb I0823 22:32:05.268814 21439 ManagedContainerBase.cpp:19] ManagedContainerBase::convertFilenameToJSON : Filename : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.glb changed to proposed JSON configuration filename : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.stage_config.json I0823 22:32:05.268816 21439 AttributesManagerBase.h:283] AttributesManager<T>::createFromJsonOrDefaultInternal (Stage) : Proposing JSON name : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.stage_config.json from original name : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.glb | This file does not exist. E0823 22:32:05.268834 21439 StageAttributesManager.cpp:90] StageAttributesManager::registerObjectFinalize : Render asset template handle : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.glb specified in stage template with handle : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.glb does not correspond to any existing file or primitive render asset. Aborting. I0823 22:32:05.268836 21439 AbstractObjectAttributesManagerBase.h:181] AbstractObjectAttributesManager<T>::createObject (Stage) : Done making attributes with handle : data/scene_datasets/mp3d/zsNo4HB9uLZ/zsNo4HB9uLZ.glb 进程已结束,退出代码为 139 (interrupted by signal 11:SIGSEGV)
