
```yaml
# Prediction settings --------------------------------------------------------
source: # source directory for images or videos
show: False # show results if possible
save_txt: False # save results as .txt file
save_conf: False # save results with confidence scores
save_crop: False # save cropped images with results
hide_labels: False # hide labels
hide_conf: False # hide confidence scores
vid_stride: 1 # video frame-rate stride
line_thickness: 3 # bounding box thickness (pixels)
visualize: False # visualize model features
augment: False # apply image augmentation to prediction sources
agnostic_nms: False # class-agnostic NMS
classes: # filter results by class, i.e. class=0, or class=[0,2,3]
retina_masks: False # use high-resolution segmentation masks
boxes: True # show boxes in segmentation predictions
```

Please explain these settings.

Posted: 2023-08-15 20:47:48 · Views: 205
These parameters configure model prediction for an object-detection task:

- `source`: the input source for the detection task; a directory of images or videos.
- `show`: whether to display results on screen.
- `save_txt`: whether to save results as a .txt file.
- `save_conf`: whether to include confidence scores in the saved results.
- `save_crop`: whether to save cropped images together with their results.
- `hide_labels`: whether to hide class labels in the output.
- `hide_conf`: whether to hide confidence scores in the output.
- `vid_stride`: the frame stride used when predicting on video.
- `line_thickness`: bounding-box line thickness in pixels.
- `visualize`: whether to visualize the model's features.
- `augment`: whether to apply image augmentation to prediction sources.
- `agnostic_nms`: whether to use class-agnostic NMS.
- `classes`: filter results to specific classes, e.g. `class=0` or `class=[0,2,3]`.
- `retina_masks`: whether to use high-resolution segmentation masks.
- `boxes`: whether to show bounding boxes in segmentation predictions.
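Two of these settings are easiest to grasp with a toy sketch (plain Python, not Ultralytics internals; the frame indices and detections below are made up): `vid_stride` subsamples video frames, and `classes` filters detections by class id.

```python
# vid_stride = 2 means every second frame is sent to the model.
vid_stride = 2
frames = list(range(10))        # pretend frame indices of a video
sampled = frames[::vid_stride]  # frames actually processed
print(sampled)                  # [0, 2, 4, 6, 8]

# classes = [0, 2] keeps only detections whose class id is 0 or 2.
classes = [0, 2]
detections = [(0, 0.9), (1, 0.8), (2, 0.7)]  # (class_id, confidence) pairs
kept = [d for d in detections if d[0] in classes]
print(kept)                     # [(0, 0.9), (2, 0.7)]
```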

### Input/output length settings

```python
# ------- input/output length settings --------------
parser.add_argument('--seq_len', type=int, default=96,
                    help='input sequence length of LightTS, look back window')
parser.add_argument('--pred_len', type=int, default=48,
                    help='prediction sequence length, horizon')
parser.add_argument('--lastWeight', type=float, default=1.0)
```

This code defines command-line arguments that control input and output sequence lengths. It uses `add_argument()` to add three parameters:

1. `--seq_len`: the input sequence length, i.e. the LightTS look-back window. Integer, optional, default `96`.
2. `--pred_len`: the prediction sequence length, i.e. the forecasting horizon in time steps. Integer, optional, default `48`.
3. `--lastWeight`: the weight of the last time step. Float, optional, default `1.0`.

In practice:

- `seq_len` determines how many time steps of input the model receives at once.
- `pred_len` determines how many time steps the model must predict.
- `lastWeight` multiplies the last time step's prediction in the loss function, letting you tune accuracy on the final step.

Users can set these values to match their own requirements.
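A minimal, self-contained sketch of how these definitions behave (the `ArgumentParser` construction is assumed, since only the `add_argument` calls appear in the snippet above):

```python
import argparse

parser = argparse.ArgumentParser(description='LightTS length settings')
parser.add_argument('--seq_len', type=int, default=96,
                    help='input sequence length, look-back window')
parser.add_argument('--pred_len', type=int, default=48,
                    help='prediction sequence length, horizon')
parser.add_argument('--lastWeight', type=float, default=1.0)

# Defaults apply when a flag is omitted; an explicit flag overrides them.
args = parser.parse_args(['--seq_len', '192'])
print(args.seq_len, args.pred_len, args.lastWeight)  # 192 48 1.0
```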

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect # (str) YOLO task, i.e. detect, segment, classify, pose
mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------
model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
data: # (str, optional) path to data file, i.e. coco128.yaml
epochs: 100 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
patience: 20 # (int) epochs to wait for no observable improvement for early stopping of training
batch: 20 # (int) number of images per batch (-1 for AutoBatch) !!!!!!!
imgsz: 640 # (int | list) input images size as int for train and val modes, or list[w,h] for predict and export modes
save: True # (bool) save train checkpoints and predict results
save_period: -1 # (int) save checkpoint every x epochs (disabled if < 1)
cache: False # (bool) True/ram, disk or False. Use cache for data loading
device: 0 # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 8 # (int) number of worker threads for data loading (per RANK if DDP) !!!!!!!
project: # (str, optional) project name
name: # (str, optional) experiment name, results saved to 'project/name' directory
exist_ok: False # (bool) whether to overwrite existing experiment
pretrained: True # (bool | str) whether to use a pretrained model (bool) or a model to load weights from (str)
optimizer: auto # (str) optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose: True # (bool) whether to print verbose output
seed: 0 # (int) random seed for reproducibility
deterministic: True # (bool) whether to enable deterministic mode
single_cls: False # (bool) train multi-class data as single-class
rect: False # (bool) rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: False # (bool) use cosine learning rate scheduler
close_mosaic: 10 # (int) disable mosaic augmentation for final epochs (0 to disable)
resume: False # (bool) resume training from last checkpoint
amp: True # (bool) Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
fraction: 1.0 # (float) dataset fraction to train on (default is 1.0, all images in train set)
profile: False # (bool) profile ONNX and TensorRT speeds during training for loggers
freeze: None # (int | list, optional) freeze first n layers, or freeze list of layer indices during training
# Segmentation
overlap_mask: True # (bool) masks should overlap during training (segment train only)
mask_ratio: 2 # (int) mask downsample ratio (segment train only)
# Classification
dropout: 0.0 # (float) use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------
val: True # (bool) validate/test during training
split: val # (str) dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False # (bool) save results to JSON file
save_hybrid: False # (bool) save hybrid version of labels (labels + additional predictions)
conf: 0.3 # (float, optional) object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.8 # (float) intersection over union (IoU) threshold for NMS
max_det: 300 # (int) maximum number of detections per image
half: False # (bool) use half precision (FP16)
dnn: False # (bool) use OpenCV DNN for ONNX inference
plots: True # (bool) save plots and images during train/val

# Predict settings -----------------------------------------------------------
source: # (str, optional) source directory for images or videos
vid_stride: 1 # (int) video frame-rate stride
stream_buffer: False # (bool) buffer all streaming frames (True) or return the most recent frame (False)
visualize: False # (bool) visualize model features
augment: False # (bool) apply image augmentation to prediction sources
agnostic_nms: False # (bool) class-agnostic NMS
classes: # (int | list[int], optional) filter results by class, i.e. classes=0, or classes=[0,2,3]
retina_masks: True # (bool) use high-resolution segmentation masks

# Visualize settings ---------------------------------------------------------
show: False # (bool) show predicted images and videos if environment allows
save_frames: False # (bool) save predicted individual video frames
save_txt: False # (bool) save results as .txt file
save_conf: False # (bool) save results with confidence scores
save_crop: False # (bool) save cropped images with results
show_labels: True # (bool) show prediction labels, i.e. 'person'
show_conf: True # (bool) show prediction confidence, i.e. '0.99'
show_boxes: True # (bool) show prediction boxes
line_width: # (int, optional) line width of the bounding boxes. Scaled to image size if None.

# Export settings ------------------------------------------------------------
format: torchscript # (str) format to export to, choices at https://docs.ultralytics.com/modes/export/#export-formats
keras: False # (bool) use Keras
optimize: False # (bool) TorchScript: optimize for mobile
int8: False # (bool) CoreML/TF INT8 quantization
dynamic: False # (bool) ONNX/TF/TensorRT: dynamic axes
simplify: False # (bool) ONNX: simplify model
opset: # (int, optional) ONNX: opset version
workspace: 4 # (int) TensorRT: workspace size (GB)
nms: False # (bool) CoreML: add NMS

# Hyperparameters ------------------------------------------------------------
lr0: 0.01 # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01 # (float) final learning rate (lr0 * lrf)
momentum: 0.937 # (float) SGD momentum/Adam beta1
weight_decay: 0.0005 # (float) optimizer weight decay 5e-4
warmup_epochs: 3.0 # (float) warmup epochs (fractions ok)
warmup_momentum: 0.8 # (float) warmup initial momentum
warmup_bias_lr: 0.1 # (float) warmup initial bias lr
box: 7.5 # (float) box loss gain
cls: 0.5 # (float) cls loss gain (scale with pixels)
dfl: 1.5 # (float) dfl loss gain
pose: 12.0 # (float) pose loss gain
kobj: 1.0 # (float) keypoint obj loss gain
label_smoothing: 0.0 # (float) label smoothing (fraction)
nbs: 64 # (int) nominal batch size
hsv_h: 0 # (float) image HSV-Hue augmentation (fraction), default 0.015
hsv_s: 0 # (float) image HSV-Saturation augmentation (fraction), default 0.7
hsv_v: 0 # (float) image HSV-Value augmentation (fraction), default 0.4
degrees: 0.0 # (float) image rotation (+/- deg)
translate: 0.1 # (float) image translation (+/- fraction)
scale: 0.5 # (float) image scale (+/- gain)
shear: 0.0 # (float) image shear (+/- deg)
perspective: 0.0 # (float) image perspective (+/- fraction), range 0-0.001
flipud: 0 # (float) image flip up-down (probability), default 0.5
fliplr: 0 # (float) image flip left-right (probability), default 0.5
mosaic: 1 # (float) image mosaic (probability), default 1.0
mixup: 0.0 # (float) image mixup (probability)
copy_paste: 0.0 # (float) segment copy-paste (probability)

# Custom config.yaml ---------------------------------------------------------
cfg: # (str, optional) for overriding defaults.yaml

# Tracker settings ------------------------------------------------------------
tracker: botsort.yaml # (str) tracker type, choices=[botsort.yaml, bytetrack.yaml]
```

Could you walk me through these parameters?

### Ultralytics YOLO configuration parameters explained

Ultralytics YOLO configuration files (such as `yolov8s-seg.yaml`) define the model architecture, training parameters, and data processing. Understanding what these parameters mean and what they are for is essential when optimizing model performance.

#### Model architecture parameters

The architecture parameters define the structure of the network's backbone and detection head. They directly affect the model's feature-extraction capacity and inference speed.

- **backbone**: defines the backbone network, typically a CSPDarknet structure. The backbone extracts multi-scale features from the input image.
- **head**: defines the detection head, typically including a PANet (Path Aggregation Network) module. The head produces bounding boxes, class predictions, and mask predictions.
- **depth_multiple**: depth multiplier for the network. For example, `depth_multiple=0.33` shrinks the standard network depth to one third.
- **width_multiple**: width multiplier for the network. For example, `width_multiple=0.5` shrinks the standard network width to one half.

#### Data processing parameters

These parameters define input preprocessing and data-augmentation strategy, which strongly influence generalization and robustness.

- **imgsz**: input image size. For example, `imgsz=640` means the input is 640×640 pixels.
- **augment**: whether to enable data augmentation, which improves generalization.
- **hsv_h**, **hsv_s**, **hsv_v**: random augmentation ranges in HSV color space, adjusting the image's hue, saturation, and value (brightness).
- **degrees**, **translate**, **scale**, **shear**: rotation, translation, scaling, and shearing augmentation ranges.
- **flipud**, **fliplr**: probabilities of vertical and horizontal flips.

#### Training parameters

These control the training process: optimizer, learning rate, and loss function.

- **optimizer**: optimizer type, e.g. SGD or Adam.
- **lr0**: initial learning rate.
- **lrf**: final learning-rate factor, used for learning-rate decay.
- **momentum**: momentum term for the optimizer.
- **weight_decay**: weight-decay coefficient, used to prevent overfitting.
- **warmup_epochs**: number of learning-rate warmup epochs.
- **cos_lr**: whether to use a cosine learning-rate decay schedule.
- **loss**: loss-function type, e.g. CIoU or DIoU loss.

#### Inference parameters

These control the model's behavior at inference time, including the confidence threshold and the non-maximum suppression (NMS) threshold.

- **conf**: confidence threshold, used to filter out low-confidence predicted boxes.
- **iou**: IoU threshold for NMS, used to remove overlapping predicted boxes.

#### Example configuration file

A typical YOLOv8 configuration file looks like this:

```yaml
# YOLOv8 configuration for segmentation tasks

# Model parameters
backbone:
  name: CSPDarknet
  depth_multiple: 0.33
  width_multiple: 0.5
  out_indices: [2, 3, 4]
head:
  name: SegmentationHead
  in_channels: [128, 256, 512]
  num_classes: 80
  mask_resolution: 160

# Data parameters
imgsz: 640
augment: True
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
flipud: 0.5
fliplr: 0.5

# Training parameters
optimizer: SGD
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3
cos_lr: True
loss: CIoULoss

# Inference parameters
conf: 0.25
iou: 0.45
```

#### Related questions

1. How do you adjust the data-augmentation strategy in YOLOv8?
2. Does the YOLOv8 configuration file support custom model architectures?
3. How do you set a different loss function in YOLOv8?
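The interplay of `lr0`, `lrf`, and `cos_lr` described above can be made concrete with a small sketch. This is a generic cosine-decay formulation (a common way such schedulers are written, not Ultralytics' exact scheduler code); `cosine_lr` is a hypothetical helper:

```python
import math

def cosine_lr(epoch, epochs, lr0=0.01, lrf=0.01):
    # Generic cosine decay: starts at lr0 and ends at lr0 * lrf.
    return lr0 * (lrf + (1 - lrf) * 0.5 * (1 + math.cos(math.pi * epoch / epochs)))

print(cosine_lr(0, 100))    # start of training: lr0
print(cosine_lr(50, 100))   # halfway: between lr0 and lr0 * lrf
print(cosine_lr(100, 100))  # end of training: lr0 * lrf
```

With the example values above (`lr0=0.01`, `lrf=0.01`), the schedule decays smoothly from 0.01 down to 0.0001.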

Related posts

This is my current parameter configuration file; please give me suggestions.

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect # (str) YOLO task, i.e. detect, segment, classify, pose, obb
mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------
model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
data: # (str, optional) path to data file, i.e. coco8.yaml
epochs: 300 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
patience: 15 # (int) epochs to wait for no observable improvement for early stopping of training
batch: 8 # (int) number of images per batch (-1 for AutoBatch)
imgsz: 640 # (int | list) input images size as int for train and val modes, or list[h,w] for predict and export modes
save: True # (bool) save train checkpoints and predict results
save_period: -1 # (int) save checkpoint every x epochs (disabled if < 1)
cache: False # (bool) True/ram, disk or False. Use cache for data loading
device: # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 4 # (int) number of worker threads for data loading (per RANK if DDP)
project: # (str, optional) project name
name: # (str, optional) experiment name, results saved to 'project/name' directory
exist_ok: False # (bool) whether to overwrite existing experiment
pretrained: True # (bool | str) whether to use a pretrained model (bool) or a model to load weights from (str)
optimizer: auto # (str) optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose: True # (bool) whether to print verbose output
seed: 0 # (int) random seed for reproducibility
deterministic: True # (bool) whether to enable deterministic mode
single_cls: False # (bool) train multi-class data as single-class
rect: False # (bool) rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: True # (bool) use cosine learning rate scheduler
close_mosaic: 10 # (int) disable mosaic augmentation for final epochs (0 to disable)
resume: False # (bool) resume training from last checkpoint
amp: False # (bool) Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
fraction: 1.0 # (float) dataset fraction to train on (default is 1.0, all images in train set)
profile: False # (bool) profile ONNX and TensorRT speeds during training for loggers
freeze: None # (int | list, optional) freeze first n layers, or freeze list of layer indices during training
multi_scale: False # (bool) whether to use multiscale during training
# Segmentation
overlap_mask: True # (bool) masks should overlap during training (segment train only)
mask_ratio: 2 # (int) mask downsample ratio (segment train only)
# Classification
dropout: 0.0 # (float) use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------
val: True # (bool) validate/test during training
split: val # (str) dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False # (bool) save results to JSON file
save_hybrid: False # (bool) save hybrid version of labels (labels + additional predictions)
conf: 0.39 # (float, optional) object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.65 # (float) intersection over union (IoU) threshold for NMS
max_det: 300 # (int) maximum number of detections per image
half: False # (bool) use half precision (FP16)
dnn: False # (bool) use OpenCV DNN for ONNX inference
plots: True # (bool) save plots and images during train/val

# Predict settings -----------------------------------------------------------
source: # (str, optional) source directory for images or videos
vid_stride: 1 # (int) video frame-rate stride
stream_buffer: False # (bool) buffer all streaming frames (True) or return the most recent frame (False)
visualize: False # (bool) visualize model features
augment: False # (bool) apply image augmentation to prediction sources
agnostic_nms: False # (bool) class-agnostic NMS
classes: # (int | list[int], optional) filter results by class, i.e. classes=0, or classes=[0,2,3]
retina_masks: False # (bool) use high-resolution segmentation masks
embed: # (list[int], optional) return feature vectors/embeddings from given layers

# Visualize settings ---------------------------------------------------------
show: False # (bool) show predicted images and videos if environment allows
save_frames: False # (bool) save predicted individual video frames
save_txt: False # (bool) save results as .txt file
save_conf: False # (bool) save results with confidence scores
save_crop: False # (bool) save cropped images with results
show_labels: True # (bool) show prediction labels, i.e. 'person'
show_conf: True # (bool) show prediction confidence, i.e. '0.99'
show_boxes: True # (bool) show prediction boxes
line_width: # (int, optional) line width of the bounding boxes. Scaled to image size if None.

# Export settings ------------------------------------------------------------
format: torchscript # (str) format to export to, choices at https://docs.ultralytics.com/modes/export/#export-formats
keras: False # (bool) use Keras
optimize: False # (bool) TorchScript: optimize for mobile
int8: False # (bool) CoreML/TF INT8 quantization
dynamic: False # (bool) ONNX/TF/TensorRT: dynamic axes
simplify: True # (bool) ONNX: simplify model using onnxslim
opset: # (int, optional) ONNX: opset version
workspace: # (int) TensorRT: workspace size (GB)
nms: False # (bool) CoreML: add NMS

# Hyperparameters ------------------------------------------------------------
lr0: 0.005 # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.00005 # (float) final learning rate (lr0 * lrf)
momentum: 0.937 # (float) SGD momentum/Adam beta1
weight_decay: 0.0005 # (float) optimizer weight decay 5e-4
warmup_epochs: 5.0 # (float) warmup epochs (fractions ok)
warmup_momentum: 0.8 # (float) warmup initial momentum
warmup_bias_lr: 0.1 # (float) warmup initial bias lr
box: 7.5 # (float) box loss gain
cls: 0.5 # (float) cls loss gain (scale with pixels)
dfl: 1.5 # (float) dfl loss gain
pose: 12.0 # (float) pose loss gain
kobj: 1.0 # (float) keypoint obj loss gain
label_smoothing: 0.05 # (float) label smoothing (fraction)
nbs: 64 # (int) nominal batch size
hsv_h: 0.01 # (float) image HSV-Hue augmentation (fraction)
hsv_s: 0.4 # (float) image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # (float) image HSV-Value augmentation (fraction)
degrees: 0.0 # (float) image rotation (+/- deg)
translate: 0.0 # (float) image translation (+/- fraction)
scale: 0.25 # (float) image scale (+/- gain)
shear: 0.0 # (float) image shear (+/- deg)
perspective: 0.0 # (float) image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # (float) image flip up-down (probability)
fliplr: 0.5 # (float) image flip left-right (probability)
bgr: 0.0 # (float) image channel BGR (probability)
mosaic: 0.3 # (float) image mosaic (probability)
mixup: 0.1 # (float) image mixup (probability)
copy_paste: 0.0 # (float) segment copy-paste (probability)
copy_paste_mode: "flip" # (str) the method to do copy_paste augmentation (flip, mixup)
auto_augment: randaugment # (str) auto augmentation policy for classification (randaugment, autoaugment, augmix)
erasing: 0.4 # (float) probability of random erasing during classification training (0-0.9), 0 means no erasing, must be less than 1.0
crop_fraction: 1.0 # (float) image crop fraction for classification (0.1-1), 1.0 means no crop, must be greater than 0

# Custom config.yaml ---------------------------------------------------------
cfg: # (str, optional) for overriding defaults.yaml

# Tracker settings ------------------------------------------------------------
tracker: botsort.yaml # (str) tracker type, choices=[botsort.yaml, bytetrack.yaml]
```
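One quick arithmetic check worth running on a file like this: per the config comment `final learning rate (lr0 * lrf)`, the schedule above ends at an unusually small learning rate, which is worth double-checking before a 300-epoch run.

```python
lr0 = 0.005
lrf = 0.00005
# The config comment defines the final learning rate as lr0 * lrf.
final_lr = lr0 * lrf
print(final_lr)  # ≈ 2.5e-07, several orders of magnitude below typical final LRs
```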

```python
from bertopic import BERTopic
import numpy as np
import pandas as pd
from umap import UMAP
from hdbscan import HDBSCAN
from sklearn.feature_extraction.text import CountVectorizer
from bertopic.vectorizers import ClassTfidfTransformer
from sentence_transformers import SentenceTransformer

# Load the raw text data (still needed for topic representation)
df = pd.read_csv('hydrogen_storage_patents_preprocessed.csv', encoding='utf-8')
sentences = df['AB'].tolist()
print('Number of documents:', len(sentences))
print('Preview of first document:', sentences[0])

# Check for missing values
print('Missing values:', df['AB'].isna().sum())
# Check for non-string entries
non_str_mask = df['AB'].apply(lambda x: not isinstance(x, str))
print('Non-string samples:\n', df[non_str_mask]['AB'].head())

# 1. Load the date data
df['AD'] = pd.to_datetime(df['AD'])
# Check for missing dates
missing_dates = df['AD'].isna().sum()
if missing_dates > 0:
    print(f'The date column contains {missing_dates} missing values; please check the data!')
else:
    print('No missing values in the date column.')
# Extract the year from the date column
years = df['AD'].dt.year
print(years)

# Step 1 - Extract embeddings
embedding_model = SentenceTransformer("C:\\Users\\18267\\.cache\\huggingface\\hub\\models--sentence-transformers--all-mpnet-base-v2\\snapshots\\9a3225965996d404b775526de6dbfe85d3368642")
embeddings = np.load('emb_last_326.npy')
print(f'Embedding shape: {embeddings.shape}')

# Step 2 - Reduce dimensionality
umap_model = UMAP(n_neighbors=7, n_components=10, min_dist=0.0,
                  metric='cosine', random_state=42)

# Step 3 - Cluster reduced embeddings
hdbscan_model = HDBSCAN(min_samples=7, min_cluster_size=60, metric='euclidean',
                        cluster_selection_method='eom', prediction_data=True)

# Step 4 - Tokenize topics
vectorizer_model = CountVectorizer()

# Step 5 - Create topic representation
ctfidf_model = ClassTfidfTransformer()

# All steps together
topic_model = BERTopic(
    embedding_model=embedding_model,    # Step 1 - Extract embeddings
    umap_model=umap_model,              # Step 2 - Reduce dimensionality
    hdbscan_model=hdbscan_model,        # Step 3 - Cluster reduced embeddings
    vectorizer_model=vectorizer_model,  # Step 4 - Tokenize topics
    ctfidf_model=ctfidf_model,          # Step 5 - Extract topic words
    top_n_words=50
)

# Fit the model
topics, probs = topic_model.fit_transform(
    documents=sentences,   # documents are still needed for topic-word generation
    embeddings=embeddings  # inject precomputed embeddings
)

# Inspect the topic clusters
topic_info = topic_model.get_topic_info()
print(topic_info)

# Visualize topic words
topic_model.visualize_barchart()
# Visualize the topic distribution
topic_model.visualize_topics()

# Inspect the hierarchy
hierarchical_topics = topic_model.hierarchical_topics(sentences)
fig = topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics)
# Hide the small black markers
for trace in fig.data:
    if trace.mode == 'markers':
        trace.marker.opacity = 0
fig.show()

# Reassign outliers and refresh the topic representation
new_topics = topic_model.reduce_outliers(sentences, topics,
                                         strategy="c-tf-idf", threshold=0.00)
print(new_topics.count(-1), new_topics)
topic_model.update_topics(docs=sentences, topics=new_topics,
                          vectorizer_model=vectorizer_model, top_n_words=50)
topic_info = topic_model.get_topic_info()
print(topic_info)
```

How do I reinstall the bertopic library?

```r
# Random forest multi-class classification model

# Install packages if needed: install.packages("")
setwd("D:/Rpackages")

# Load packages
library(randomForest)
library(tidyverse)
library(skimr)
library(DataExplorer)
library(caret)
library(pROC)
library(ggplot2)
library(splines)
library(reshape2)
library(scales)
library(ggprism)
library(ggpubr)

# Load the data
boston <- read.csv(file.choose())
# Convert to data frame
boston <- as.data.frame(boston)
# boston <- na.pass(boston)
# boston <- na.omit(boston)
# na.action = na.omit drops every row containing an NA
# na.action = na.pass passes NAs through, which may propagate into the results

# Data overview
skim(boston)
# Missing-value overview
plot_missing(boston)
boston[is.na(boston)] <- 0  # replace NAs in boston with 0 (run only if appropriate)

# Fix data types: convert column i of boston to factor, for each i in c()
for (i in c(1:2)) {
  boston[, i] <- factor(boston[, i])
}

# Distribution of the response variable
table(boston$Class)

# Split the data
set.seed(1)
trains <- createDataPartition(
  y = boston$Class,
  p = 0.7,
  list = F
)
traindata <- boston[trains, ]
testdata <- boston[-trains, ]

# # Optional: supply your own train/test split (skip if not needed)
# traindata <- read.csv(file.choose())
# traindata <- as.data.frame(traindata)
# for (i in c(1:5)) {
#   traindata[, i] <- factor(traindata[, i])
# }
# testdata <- read.csv(file.choose())
# testdata <- as.data.frame(testdata)
# for (i in c(1:5)) {
#   testdata[, i] <- factor(testdata[, i])
# }

# Class distribution after the split
table(traindata$Class)
table(testdata$Class)
dim(boston)

# # Build the formula: response ~ columns a to b (colnames[a:b])
# # colnames(boston) returns the column names of the data frame
# form_clsm <- as.formula(
#   paste0(
#     "Class~",
#     paste(colnames(traindata)[3:112], collapse = "+")
#   )
# )
# form_clsm
#
# # Build the model
# set.seed(100001)
# fit_rf_clsm <- randomForest(
#   form_clsm,
#   data = traindata,
#   ntree = 500,   # default 500; increase if the error-vs-trees curve has not stabilized
#   mtry = 6,      # number of candidate variables at each split
#                  # (defaults: sqrt(p) for classification, p/3 for regression)
#   importance = T # enable variable-importance computation: the forest scores
#                  # each feature by its effect on model performance
# )
# print(fit_rf_clsm)

# Define training features and target
X_train <- traindata[, -(1:2)]        # all columns except 1-2 are predictors
y_train <- as.factor(traindata[, 2])  # column 2 is the response

# # Baseline random forest
# model <- randomForest(x = X_train, y = y_train, ntree = 500)
#
# # Training control: repeated k-fold CV (5-fold, 10 repeats for small data)
# ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 10)
#
# # Parameter tuning: mtry
# grid1 <- expand.grid(mtry = c(48:52))  # mtry range
# # grid1 <- expand.grid(mtry = c(10, 20, 30, 40, 50, 60, 70, 80, 90, 100))
# # More advanced:
# # num_features <- ncol(X_train)
# # mtry_range <- floor(sqrt(num_features))  # square root, rounded down
# # grid_mtry <- expand.grid(mtry = seq(max(2, mtry_range - 5),
# #                                     min(mtry_range + 5, num_features)))
#
# # Tune with caret
# rf_model <- train(x = X_train, y = y_train,
#                   method = "rf",       # random forest
#                   trControl = ctrl,
#                   tuneGrid = grid1)
# print(rf_model)
#
# # Fix the best mtry, then tune ntree
# grid2 <- expand.grid(mtry = c(30))
# modellist <- list()  # store each model's evaluation result
# for (ntree in seq(100, 2000, by = 100)) {  # seq builds the candidate set, step 100
#   set.seed(101)
#   fit <- train(x = X_train, y = y_train, method = "rf",
#                metric = "Accuracy", tuneGrid = grid2,
#                trControl = ctrl, ntree = ntree)
#   modellist[[toString(ntree)]] <- fit
# }
# results <- resamples(modellist)
# summary(results)  # best model and parameters
# # Visual comparison: accuracy and kappa are better the closer they are to 1
# bwplot(results)       # box plot
# dotplot(results)      # dot plot
# densityplot(results)  # density plot

# Train with the best parameters
set.seed(1001)
fit_rf_clsm <- randomForest(x = X_train,
                            y = y_train,
                            mtry = 50,
                            ntree = 500,
                            importance = T)
print(fit_rf_clsm)

# Error vs. number of trees
plot(fit_rf_clsm, main = "ERROR & TREES")
legend("top",
       legend = colnames(fit_rf_clsm$err.rate),
       lty = 1:6, col = 1:6, horiz = T, cex = 0.9)
plot(randomForest::margin(fit_rf_clsm),
     main = 'Probability of each observation being classified correctly')

# Variable importance
varImpPlot(fit_rf_clsm, main = "varImpPlot")
varImpPlot(fit_rf_clsm, main = "varImpPlot", type = 1)
varImpPlot(fit_rf_clsm, main = "varImpPlot", type = 2)
importance_genus <- data.frame(importance(fit_rf_clsm))
importance_genus <- importance_genus[order(importance_genus$MeanDecreaseGini,
                                           decreasing = TRUE), ]
importance_genus <- importance_genus[order(importance_genus$MeanDecreaseAccuracy,
                                           decreasing = TRUE), ]
head(importance_genus)
write.table(importance_genus,
            "importance_genus_DaChuang_SiteClass_unexposed20250209.txt",
            sep = '\t', col.names = NA, quote = FALSE)  # save important features to a file

# # Predictions
# # Training-set predicted probabilities
# trainpredprob <- predict(fit_rf_clsm, newdata = traindata, type = "prob")
# # Training-set ROC
# multiclass.roc(response = traindata$Class, predictor = trainpredprob)
# # Training-set predicted classes ("response" returns class labels)
# trainpredlab <- predict(fit_rf_clsm, newdata = traindata, type = "response")
# # Training-set confusion matrix
# confusionMatrix_train <- confusionMatrix(data = trainpredlab,
#                                          reference = traindata$Class,
#                                          mode = "everything")
# # Training-set summary metrics
# multiClassSummary(
#   data.frame(obs = traindata$Class, pred = trainpredlab),
#   lev = levels(traindata$Class)
# )
#
# # Plot the top N important OTUs
# varImpPlot(fit_rf_clsm, n.var = min(20, nrow(fit_rf_clsm$importance)),
#            main = 'Top 20 - variable importance')
#
# # Test-set predicted probabilities
# # X_test <- testdata[, -(1:2)]        # predictors
# # y_test <- as.factor(testdata[, 2])  # response
# testpredprob <- predict(fit_rf_clsm, newdata = testdata, type = "prob")
# multiclass.roc(response = testdata$Class, predictor = testpredprob)
# testpredlab <- predict(fit_rf_clsm, newdata = testdata, type = "response")
# confusionMatrix_test <- confusionMatrix(data = testpredlab,
#                                         reference = testdata$Class,
#                                         mode = "everything")
# confusionMatrix_test
# multiClassSummary(
#   data.frame(obs = testdata$Class, pred = testpredlab),
#   lev = levels(testdata$Class)
# )

# Cross-validation to help choose the number of features
# 10-fold cross-validation repeated 5 times
set.seed(10001)
otu_train.cv <- replicate(5, rfcv(traindata[, -(1:2)],  # drop columns a to b (a:b)
                                  traindata$Class,
                                  cv.fold = 10, step = 1.5),
                          simplify = FALSE)
otu_train.cv <- data.frame(sapply(otu_train.cv, '[[', 'error.cv'))
otu_train.cv$otus <- rownames(otu_train.cv)
otu_train.cv <- reshape2::melt(otu_train.cv, id = 'otus')
otu_train.cv$otus <- as.numeric(as.character(otu_train.cv$otus))
otu_train.cv.mean <- aggregate(otu_train.cv$value,
                               by = list(otu_train.cv$otus), FUN = mean)
head(otu_train.cv.mean, 18)

# Plot to find the elbow
p <- ggplot(otu_train.cv, aes(otus, value)) +
  geom_smooth(se = FALSE, method = 'glm', formula = y ~ ns(x, 6)) +
  theme(panel.grid = element_blank(),
        panel.background = element_rect(color = 'black', fill = 'transparent')) +
  labs(title = '', x = 'Number of genus', y = 'Cross-validation error')
p
# Draw a vertical line at x = xintercept
p + geom_vline(xintercept = 160)

# # Alternative plot
# p2 <- ggplot(otu_train.cv, aes(otus, value)) +
#   geom_line() +
#   theme(panel.grid = element_blank(),
#         panel.background = element_rect(color = 'black', fill = 'transparent')) +
#   labs(title = '', x = 'Number of OTUs', y = 'Cross-validation error')
# p2
# p2 + geom_vline(xintercept = 30)

# Extract roughly the top N important features
importance_genus[1:160, ]
# importance_Ngenus <- importance_genus[1:160, ]
# # Save the table
# write.table(importance_genus[1:70, ],
#             'importance_genus_top70_of_zhangxiaofeng_human_drowning.txt',
#             sep = '\t', col.names = NA, quote = FALSE)
# # Variable importance
# varImpPlot(fit_rf_clsm, main = "varImpPlot")
# varImpPlot(fit_rf_clsm, main = "varImpPlot", type = 1)
# varImpPlot(fit_rf_clsm, main = "varImpPlot", type = 2)
# varImpPlot(fit_rf_clsm, n.var = min(160, nrow(fit_rf_clsm$importance)),
#            main = 'Top 19 - variable importance', type = 1)

# Reduced classifier (keep only the high-impact predictors)
# Select the top N OTUs, ranked above by Mean Decrease Accuracy
genus_select <- rownames(importance_genus)[1:160]
# Train and test subsets
genus_train_top <- traindata[, c(genus_select, 'Class')]
genus_test_top <- testdata[, c(genus_select, 'Class')]

# set.seed(10001)
# form_clsm1 <- as.formula(
#   paste0(
#     "Class~",
#     paste(colnames(genus_train_top)[1:10], collapse = "+")
#   )
# )
# # Build the model
# fit_rf_clsm1 <- randomForest(
#   form_clsm1,
#   data = genus_train_top,
#   ntree = 500,
#   mtry = 6,
#   importance = T
# )

x_train1 <- genus_train_top[, -(160 + 1)]          # predictors
y_train1 <- as.factor(genus_train_top[, 160 + 1])  # response

# # Baseline reduced model
# fit_rf_clsm_test1 <- randomForest(x = x_train1,
#                                   y = y_train1,
#                                   ntree = 500,       # more trees for stability
#                                   importance = TRUE) # enable importance
# fit_rf_clsm_test1
#
# # 5-fold CV repeated 10 times
# ctrl1 <- trainControl(method = "repeatedcv", number = 5, repeats = 10)
#
# # mtry range for the reduced model
# grid3 <- expand.grid(mtry = c(2:15))
# # More advanced:
# # num_features <- ncol(X_train)
# # mtry_range <- floor(sqrt(num_features))
# # grid_mtry <- expand.grid(mtry = seq(max(2, mtry_range - 5),
# #                                     min(mtry_range + 5, num_features)))
#
# # Tune with caret
# fit_rf_clsm_test2 <- train(x = x_train1, y = y_train1,
#                            method = "rf",
#                            trControl = ctrl1,
#                            tuneGrid = grid3)
# print(fit_rf_clsm_test2)
#
# # Fix the best mtry, then tune ntree
# grid4 <- expand.grid(mtry = c(16))
# modellist1 <- list()
# for (ntree in seq(100, 2000, by = 100)) {
#   set.seed(100003)
#   fit1 <- train(x = x_train1, y = y_train1, method = "rf",
#                 metric = "Accuracy", tuneGrid = grid4,
#                 trControl = ctrl1, ntree = ntree)
#   modellist1[[toString(ntree)]] <- fit1
# }
# results1 <- resamples(modellist1)
# summary(results1)
# # Visual comparison: accuracy and kappa are better the closer they are to 1
# bwplot(results1)       # box plot
# dotplot(results1)      # dot plot
# densityplot(results1)  # density plot

# Train the reduced model with the best parameters
set.seed(1)
fit_rf_clsm1 <- randomForest(x = x_train1,
                             y = y_train1,
                             mtry = 12,
                             ntree = 500,
                             importance = T)
print(fit_rf_clsm1)

# # "ERROR & TREES"
# plot(fit_rf_clsm1, main = "ERROR & TREES")
# plot(randomForest::margin(fit_rf_clsm1),
#      main = 'Probability of each observation being classified correctly')
# # Training-set predicted probabilities
# trainpredprob <- predict(fit_rf_clsm1, newdata = genus_train_top, type = "prob")
# # Training-set ROC
# multiclass.roc(response = genus_train_top$Class, predictor = trainpredprob)
# # Training-set predicted classes
# trainpredlab <- predict(fit_rf_clsm1, newdata = genus_train_top, type = "response")
# # Training-set confusion matrix
# confusionMatrix(data = trainpredlab,
#                 reference = genus_train_top$Class,
#                 mode = "everything")

# Predictions
# Test-set predicted probabilities
# x_test1 <- genus_test_top[, -11]              # predictors
# y_test1 <- as.factor(genus_test_top[, 11])    # response
testpredprob <- predict(fit_rf_clsm1, newdata = genus_test_top, type = "prob")
write.table(testpredprob, file = "D:/Rpackages/testpredprob.txt",
            sep = "\t", row.names = FALSE, col.names = TRUE)
# Test-set ROC
multiclass.roc(response = genus_test_top$Class, predictor = testpredprob)
# Test-set predicted classes ("response" returns class labels)
testpredlab <- predict(fit_rf_clsm1, newdata = genus_test_top, type = "response")
# Test-set confusion matrix
confusion_matrix <- confusionMatrix(data = testpredlab,
                                    reference = genus_test_top$Class,
                                    mode = "everything")
# Test-set summary metrics
multiClassSummary(
  data.frame(obs = genus_test_top$Class, pred = testpredlab),
  lev = levels(genus_test_top$Class)
)
# Inspect per-sample predictions
results <- data.frame(Actual = genus_test_top$Class, Predicted = testpredlab)

# # Test-set accuracy by hand
# testpredlab <- predict(fit_rf_clsm1, newdata = testdata, type = "response")
# t <- table(testpredlab, testdata$Class)
# acc <- sum(diag(t)) / nrow(testdata) * 100
# print(paste("Model accuracy:", round(acc, 4)))

# Plot a confusion-matrix heatmap (involved; proceed with caution)
# confusion_matrix is the confusion-matrix object; convert it to a data frame first

# One-vs-rest ROC curves per class
roc1 <- multiclass.roc(response = genus_test_top$Class, predictor = testpredprob[, 1])
plot(roc1$rocs[[1]], col = "#1f77b4", print.auc = TRUE, print.auc.x = 0.8, print.auc.y = 0.8)
roc2 <- multiclass.roc(response = genus_test_top$Class, predictor = testpredprob[, 2])
plot(roc2$rocs[[1]], add = TRUE, col = "#ff7f0e", print.auc = TRUE, print.auc.x = 0.6, print.auc.y = 0.6)
roc3 <- multiclass.roc(response = genus_test_top$Class, predictor = testpredprob[, 3])
plot(roc3$rocs[[1]], add = TRUE, col = "#2ca02c", print.auc = TRUE, print.auc.x = 0.5, print.auc.y = 0.5)
roc4 <- multiclass.roc(response = genus_test_top$Class, predictor = testpredprob[, 4])
plot(roc4$rocs[[1]], add = TRUE, col = "#d62728", print.auc = TRUE, print.auc.x = 0.4, print.auc.y = 0.4)
roc5 <- multiclass.roc(response = genus_test_top$Class, predictor = testpredprob[, 5])
plot(roc5$rocs[[1]], add = TRUE, col = "#9467bd", print.auc = TRUE, print.auc.x = 0.3, print.auc.y = 0.3)
roc6 <- multiclass.roc(response = genus_test_top$Class, predictor = testpredprob[, 6])
plot(roc6$rocs[[1]], add = TRUE, col = "#8c564b", print.auc = TRUE, print.auc.x = 0.2, print.auc.y = 0.2)
```
genus_test_top$Class,predictor =testpredprob[,6]) plot(roc1$rocs[[6]], add = TRUE, col = "#8c564b", print.auc = TRUE, print.auc.x = 0.2, print.auc.y = 0.2) # confusion_matrix_df <- as.data.frame.matrix(confusion_matrix$table) # colnames(confusion_matrix_df) <- c("F","H") # rownames(confusion_matrix_df) <- c("F","H") # # #c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15") # colnames(confusion_matrix_df) <- c("01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12", "13", "14", "15") # rownames(confusion_matrix_df) <- c("01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12", "13", "14", "15") # # 计算归一化值 # draw_data <- round(confusion_matrix_df / colSums(confusion_matrix_df),2) # draw_data <- as.data.frame(t(draw_data)) #使用t()函数转换draw_data的行列,原本列为真实类别,现转为行 # draw_data$real <- rownames(draw_data) #提取行名作为新生成列real的类别 # draw_data <- melt(draw_data) #将宽格式的数据框转换为长格式 # # # 绘制矩阵热图 # confusion1 <- ggplot(draw_data, aes(real, variable, fill = value)) + # geom_tile() + # geom_text(aes(label = scales::percent(value))) + # scale_fill_gradient(low = "#F0F0F0", high = "#3575b5") + # labs(x = "Prediction", y = "Reference", title = "Confusion matrix") + # theme_prism(border = T) + # theme(panel.border = element_blank(), # axis.ticks.y = element_blank(), # axis.ticks.x = element_blank(), # legend.position="right", # plot.title = element_text(hjust = 0.5)) # # confusion1 # # 二分类ROC曲线绘制 # # 计算ROC曲线的参数 # roc_obj <- roc(response = genus_test_top$class, predictor = testpredprob[, 2]) # roc_auc <- auc(roc_obj) # # 将ROC对象转换为数据框 # roc_data <- data.frame(1 - roc_obj$specificities, roc_obj$sensitivities) # # 绘制ROC曲线 # ROC1 <- ggplot(roc_data, aes(x = 1 - roc_obj$specificities, y = roc_obj$sensitivities)) + # geom_line(color = "#0073C2FF", size = 1) + # annotate("segment", x = 0, y = 0, xend = 1, yend = 1, linetype = "dashed", color = "gray") + # annotate("text", x = 0.8, y = 0.2, label = paste("AUC =", round(roc_auc, 3)), size = 4, 
color = "black") + # coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) + # theme_pubr() + # labs(x = "1 - Specificity", y = "Sensitivity") + # ggtitle("ROC Curve") + # theme(plot.title = element_text(size = 14, face = "bold")) + # theme_prism(border = T) # ROC1 # # # 计算 ROC 和 AUC # roc_obj <- roc(response = genus_test_top$class, predictor = testpredprob[, 2]) # roc_auc <- auc(roc_obj) # # # 将 ROC 对象转换为数据框 # roc_data <- data.frame( # FPR = 1 - roc_obj$specificities, # TPR = roc_obj$sensitivities # ) # # # 平滑处理 # smooth_roc <- data.frame( # FPR = spline(roc_data$FPR, n = 500)$x, # TPR = spline(roc_data$TPR, n = 500)$y # ) # # # 绘制平滑后的 ROC 曲线 # ROC1 <- ggplot(smooth_roc, aes(x = FPR, y = TPR)) + # geom_line(color = "#0073C2FF", size = 1) + # annotate("segment", x = 0, y = 0, xend = 1, yend = 1, linetype = "dashed", color = "gray") + # annotate("text", x = 0.8, y = 0.2, label = paste("AUC =", round(roc_auc, 2)), size = 4, color = "black") + # coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) + # theme_pubr() + # labs(x = "1 - Specificity", y = "Sensitivity") + # ggtitle("Smoothed ROC Curve") + # theme(plot.title = element_text(size = 14, face = "bold")) + # theme_prism(border = T) # # ROC1 # # geom_smooth(se = FALSE, method = 'glm',formula = y~ns(x,6)) # # # 保存模型 # saveRDS(fit_rf_clsm, # file = "D:/Documents/R.data/fit_rf_clsm1_UnSimplifiedSiteClass_ConcernGender_UnexposedTop19_240209.rds") # # # 读取模型 # fit_rf_clsm1 <- readRDS("D:/Documents/R.data/fit_rf_clsm1_hand_Simplified_240102.rds") # # # 读取待分类数据 # testdata1 <- read.csv(file.choose()) # testdata1 <- as.data.frame(testdata) # for (i in c(1:2)) { # testdata1[,i] <- factor(testdata1[,i]) # } # # # 应用 # # 待分类数据集预测概率 # testpredprob <- predict(fit_rf_clsm1, newdata = testdata1, type = "prob") # # 测试集ROC # multiclass.roc(response = testdata1$class, predictor = testpredprob) # # 待分类数据集预测分类 # testpredlab <- predict(fit_rf_clsm1, newdata = testdata1,type = "class") # # 待分类数据集混淆矩阵 # confusion_matrix <- confusionMatrix(data = 
testpredlab, # reference = testdata1$class, # mode = "everything") # # 待分类数据集综合结果 # multiClassSummary( # data.frame(obs = testdata1$class,pred=testpredlab), # lev = levels(testdata1$class) # ) # # # 查看样本分类结果 # results <- data.frame(Actual = testdata1$class, Predicted = testpredlab) (这是整个的代码,testpredprob <- predict(fit_rf_clsm1,newdata = genus_test_top[1:5, ],type = "prob"),跑不下来)
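The six `roc1`..`roc6` blocks above each compute a one-vs-rest ROC for one class column of `testpredprob`. The same per-class AUC idea can be sketched in plain Python with a rank-based AUC (equivalent to the Mann-Whitney U statistic); the labels and probabilities below are made-up illustration data, not values from the question:

```python
# One-vs-rest AUC per class, mirroring the roc1..roc6 blocks above.
# Rank-based AUC: fraction of (positive, negative) pairs where the
# positive sample gets the higher score (ties count half).

def auc_ovr(labels, scores, positive):
    """AUC for `positive` class vs. the rest, from predicted scores."""
    pos = [s for l, s in zip(labels, scores) if l == positive]
    neg = [s for l, s in zip(labels, scores) if l != positive]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = ["A", "A", "B", "B", "C", "C"]
# hypothetical predicted probability of class "A" for each sample
scores_A = [0.9, 0.7, 0.3, 0.4, 0.2, 0.1]
print(auc_ovr(labels, scores_A, "A"))  # 1.0: every "A" outranks every non-"A"
```

In the R code, `multiclass.roc` does this pairwise for every class combination; the sketch shows only the one-vs-rest view that each plotted curve corresponds to.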

"C:\Program Files\Java\jdk1.8.0_281\bin\java.exe" "-javaagent:D:\新建文件夹 (2)\IDEA\idea\IntelliJ IDEA 2019.3.3\lib\idea_rt.jar=59342" -Dfile.encoding=UTF-8 -classpath "[JDK 1.8 runtime jars; D:\carspark\out\production\carspark; Scala 2.12.10 jars; D:\spark\spark-3.1.1-bin-hadoop3.2\jars\... — full list omitted]" car.LoadModelRideHailing
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
25/06/08 17:05:07 INFO SparkContext: Running Spark version 3.1.1
[... Spark startup INFO logging omitted; the driver, SparkUI, and BlockManager all start normally ...]
25/06/08 17:05:08 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, windows10.microdone.cn, 59392, None)
Exception in thread "main" java.lang.IllegalArgumentException: 测试数据中不包含 features 列,请检查数据!
	at car.LoadModelRideHailing$.main(LoadModelRideHailing.scala:23)
	at car.LoadModelRideHailing.main(LoadModelRideHailing.scala)

Process finished with exit code 1

package car

import org.apache.spark.ml.classification.{LogisticRegressionModel, RandomForestClassificationModel}
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.sql.{SparkSession, functions => F}

object LoadModelRideHailing {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[3]")
      .appName("LoadModelRideHailing")
      .getOrCreate()
    spark.sparkContext.setLogLevel("Error")

    // Test data that has been through feature engineering
    val TestData = spark.read.option("header", "true").csv("C:\\Users\\wyatt\\Documents\\ride_hailing_test_data.csv")

    // Cast the label column to a numeric type
    val testDataWithNumericLabel = TestData.withColumn("label", F.col("label").cast("double"))

    // Check that the features column exists
    if (!testDataWithNumericLabel.columns.contains("features")) {
      throw new IllegalArgumentException("测试数据中不包含 features 列,请检查数据!")
    }

    // Corrected model path (the folder must exist and contain metadata)
    val LogisticModel = LogisticRegressionModel.load("C:\\Users\\wyatt\\Documents\\ride_hailing_logistic_model") // example path
    val LogisticPre = LogisticModel.transform(testDataWithNumericLabel)
    val LogisticAcc = new MulticlassClassificationEvaluator()
      .setLabelCol("label")
      .setPredictionCol("prediction")
      .setMetricName("accuracy")
      .evaluate(LogisticPre)
    println("逻辑回归模型后期数据准确率:" + LogisticAcc)

    // Random forest model path fixed the same way
    val RandomForest = RandomForestClassificationModel.load("C:\\Users\\wyatt\\Documents\\ride_hailing_random_forest_model") // example path
    val RandomForestPre = RandomForest.transform(testDataWithNumericLabel)
    val RandomForestAcc = new MulticlassClassificationEvaluator()
      .setLabelCol("label")
      .setPredictionCol("prediction")
      .setMetricName("accuracy")
      .evaluate(RandomForestPre)
    println("随机森林模型后期数据准确率:" + RandomForestAcc)

    spark.stop()
  }
}
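The exception fires because Spark ML models score a single vector column (conventionally named `features`), while a CSV read back from disk contains only raw string columns. In Spark that assembly step is what `VectorAssembler` does before `transform`. The sketch below mimics the same step in plain Python on hypothetical rows and column names, just to show the shape of the fix, not the actual Spark API:

```python
# Spark ML models expect one vector column ("features"); a CSV saved from
# raw columns does not have it. In Spark, VectorAssembler rebuilds it; this
# plain-Python sketch mimics that step on hypothetical ride-hailing rows.

def assemble_features(rows, input_cols, output_col="features"):
    """Collect the listed columns of each row into a single vector column."""
    out = []
    for row in rows:
        missing = [c for c in input_cols if c not in row]
        if missing:
            raise ValueError(f"missing columns: {missing}")
        new_row = dict(row)
        new_row[output_col] = [float(row[c]) for c in input_cols]
        out.append(new_row)
    return out

rows = [{"distance": "3.2", "duration": "15", "label": "1"}]  # made-up row
assembled = assemble_features(rows, ["distance", "duration"])
print(assembled[0]["features"])  # [3.2, 15.0]
```

In the real pipeline, the equivalent is to run the same feature-engineering stages (or a saved `PipelineModel`) over the test CSV so a `features` column exists before `LogisticRegressionModel.transform` is called.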

# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os
import os.path as osp

from mmdet.engine.hooks.utils import trigger_visualization_hook
from mmengine.config import Config, ConfigDict, DictAction
from mmengine.evaluator import DumpResults
from mmengine.runner import Runner

from mmyolo.registry import RUNNERS
from mmyolo.utils import is_metainfo_lower


# TODO: support fuse_conv_bn
def parse_args():
    parser = argparse.ArgumentParser(
        description='MMYOLO test (and eval) a model')
    parser.add_argument('config', help='test config file path')
    parser.add_argument('checkpoint', help='checkpoint file')
    parser.add_argument(
        '--work-dir',
        help='the directory to save the file containing evaluation metrics')
    parser.add_argument(
        '--out',
        type=str,
        help='output result file (must be a .pkl file) in pickle format')
    parser.add_argument(
        '--json-prefix',
        type=str,
        help='the prefix of the output json file without perform evaluation, '
        'which is useful when you want to format the result to a specific '
        'format and submit it to the test server')
    parser.add_argument(
        '--tta',
        action='store_true',
        help='Whether to use test time augmentation')
    parser.add_argument(
        '--show', action='store_true', help='show prediction results')
    parser.add_argument(
        '--deploy',
        action='store_true',
        help='Switch model to deployment mode')
    parser.add_argument(
        '--show-dir',
        help='directory where painted images will be saved. '
        'If specified, it will be automatically saved '
        'to the work_dir/timestamp/show_dir')
    parser.add_argument(
        '--wait-time', type=float, default=2, help='the interval of show (s)')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-valu
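The `--cfg-options` flag above uses mmengine's `DictAction` to turn repeated `key=value` tokens into nested config overrides. A simplified sketch of that parsing idea (not the real mmengine implementation, which also evaluates value types and list syntax):

```python
# Simplified sketch of what --cfg-options key=value pairs become:
# dotted keys like "model.bbox_head.num_classes=3" turn into nested
# dict updates that are later merged into the loaded config.

def parse_cfg_options(pairs):
    cfg = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:
            node = node.setdefault(p, {})  # walk/create nested dicts
        node[leaf] = value                 # real DictAction also parses types
    return cfg

print(parse_cfg_options(["model.bbox_head.num_classes=3", "tta=True"]))
# {'model': {'bbox_head': {'num_classes': '3'}}, 'tta': 'True'}
```

The real `DictAction` additionally converts values to int/float/bool/lists where possible; this sketch keeps everything as strings.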

""" mlperf inference benchmarking tool """ from __future__ import division from __future__ import print_function from __future__ import unicode_literals # from memory_profiler import profile import argparse import array import collections import json import logging import os import sys import threading import time from multiprocessing import JoinableQueue import sklearn import star_loadgen as lg import numpy as np from mindspore import context from mindspore.train.model import Model from mindspore.train.serialization import load_checkpoint, load_param_into_net from src.model_utils.device_adapter import get_device_num, get_rank_id,get_device_id import os from mindspore import Model, context from mindspore.train.serialization import load_checkpoint, load_param_into_net,\ build_searched_strategy, merge_sliced_parameter from src.wide_and_deep import PredictWithSigmoid, TrainStepWrap, NetWithLossClass, WideDeepModel from src.callbacks import LossCallBack from src.datasets import create_dataset, DataType from src.metrics import AUCMetric from src.model_utils.moxing_adapter import moxing_wrapper from src.metrics import AUCMetric from src.callbacks import EvalCallBack import src.wide_and_deep as wide_deep import src.datasets as datasets from src.model_utils.config import config # from pygcbs_client.task import Task # task = Task() # config = task.config context.set_context(mode=context.GRAPH_MODE, device_target=config.device_target , device_id = 1) print(config.device_id) batch_size = config.batch_size def add_write(file_path, print_str): with open(file_path, 'a+', encoding='utf-8') as file_out: file_out.write(print_str + '\n') def get_WideDeep_net(config): """ Get network of wide&deep model. 
""" WideDeep_net = WideDeepModel(config) loss_net = NetWithLossClass(WideDeep_net, config) train_net = TrainStepWrap(loss_net) eval_net = PredictWithSigmoid(WideDeep_net) return train_net, eval_net class ModelBuilder(): """ Wide and deep model builder """ def __init__(self): pass def get_hook(self): pass def get_train_hook(self): hooks = [] callback = LossCallBack() hooks.append(callback) if int(os.getenv('DEVICE_ID')) == 0: pass return hooks def get_net(self, config): return get_WideDeep_net(config) logging.basicConfig(level=logging.INFO) log = logging.getLogger("main") NANO_SEC = 1e9 MILLI_SEC = 1000 # pylint: disable=missing-docstring # the datasets we support SUPPORTED_DATASETS = { "debug": (datasets.Dataset, wide_deep.pre_process_criteo_wide_deep, wide_deep.WideDeepPostProcess(), {"randomize": 'total', "memory_map": True}), "multihot-criteo-sample": (wide_deep.WideDeepModel, wide_deep.pre_process_criteo_wide_deep, wide_deep.WideDeepPostProcess(), {"randomize": 'total', "memory_map": True}), "kaggle-criteo": (wide_deep.WideDeepModel, wide_deep.pre_process_criteo_wide_deep, wide_deep.WideDeepPostProcess(), {"randomize": 'total', "memory_map": True}), } # pre-defined command line options so simplify things. 
They are used as defaults and can be # overwritten from command line SUPPORTED_PROFILES = { "defaults": { "dataset": "multihot-criteo", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "mindspore-native", "model": "wide_deep", "max-batchsize": 2048, }, "dlrm-debug-mindspore": { "dataset": "debug", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "pytorch-native", "model": "dlrm", "max-batchsize": 128, }, "dlrm-multihot-sample-mindspore": { "dataset": "multihot-criteo-sample", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "pytorch-native", "model": "dlrm", "max-batchsize": 2048, }, "dlrm-multihot-mindspore": { "dataset": "multihot-criteo", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "pytorch-native", "model": "dlrm", "max-batchsize": 2048, } } SCENARIO_MAP = { "SingleStream": lg.TestScenario.SingleStream, "MultiStream": lg.TestScenario.MultiStream, "Server": lg.TestScenario.Server, "Offline": lg.TestScenario.Offline, } last_timeing = [] import copy class Item: """An item that we queue for processing by the thread pool.""" def __init__(self, query_id, content_id, features, batch_T=None, idx_offsets=None): self.query_id = query_id self.content_id = content_id self.data = features self.batch_T = batch_T self.idx_offsets = idx_offsets self.start = time.time() import mindspore.dataset as mds lock = threading.Lock() class RunnerBase: def __init__(self, model, ds, threads, post_proc=None): self.take_accuracy = False self.ds = ds self.model = model self.post_process = post_proc self.threads = threads self.result_timing = [] def handle_tasks(self, tasks_queue): pass def start_run(self, result_dict, take_accuracy): self.result_dict = result_dict self.result_timing = [] self.take_accuracy = take_accuracy self.post_process.start() def run_one_item(self, qitem): # run the prediction processed_results = [] try: lock.acquire() 
data = datasets._get_mindrecord_dataset(*qitem.data) t1 = time.time() results = self.model.eval(data) self.result_timing.append(time.time() - t1) print('##################',time.time()) lock.release() processed_results = self.post_process(results, qitem.batch_T, self.result_dict) # self.post_process.add_results(, ) # g_lables.extend(self.model.auc_metric.true_labels) # g_predicts.extend(self.model.auc_metric.pred_probs) g_lables.extend(self.model.auc_metric.true_labels) g_predicts.extend(self.model.auc_metric.pred_probs) except Exception as ex: # pylint: disable=broad-except log.error("thread: failed, %s", ex) # since post_process will not run, fake empty responses processed_results = [[]] * len(qitem.query_id) finally: response_array_refs = [] response = [] for idx, query_id in enumerate(qitem.query_id): # NOTE: processed_results returned by DlrmPostProcess store both # result = processed_results[idx][0] and target = processed_results[idx][1] # also each idx might be a query of samples, rather than a single sample # depending on the --samples-to-aggregate* arguments. 
# debug prints # print("s,e:",s_idx,e_idx, len(processed_results)) response_array = array.array("B", np.array(processed_results[0:1], np.float32).tobytes()) response_array_refs.append(response_array) bi = response_array.buffer_info() response.append(lg.QuerySampleResponse(query_id, bi[0], bi[1])) lg.QuerySamplesComplete(response) def enqueue(self, query_samples): idx = [q.index for q in query_samples] query_id = [q.id for q in query_samples] # print(idx) query_len = len(query_samples) # if query_len < self.max_batchsize: # samples = self.ds.get_samples(idx) # # batch_T = [self.ds.get_labels(sample) for sample in samples] # self.run_one_item(Item(query_id, idx, samples)) # else: bs = 1 for i in range(0, query_len, bs): ie = min(i + bs, query_len) samples = self.ds.get_samples(idx[i:ie]) # batch_T = [self.ds.get_labels(sample) for sample in samples] self.run_one_item(Item(query_id[i:ie], idx[i:ie], samples)) def finish(self): pass import threading class MyQueue: def __init__(self, *args, **kwargs): self.lock = threading.Lock() self._data = [] self.status = True def put(self, value): # self.lock.acquire() self._data.append(value) # self.lock.release() def get(self): if self.status and self._data: return self._data.pop(0) if self.status: while self._data: time.sleep(0.1) return self._data.pop(0) return None # self.lock.acquire() # return self._data.pop(0) # self.lock.release() def task_done(self, *args, **kwargs): self.status = False class QueueRunner(RunnerBase): def __init__(self, model, ds, threads, post_proc=None): super().__init__(model, ds, threads, post_proc) queue_size_multiplier = 4 # (args.samples_per_query_offline + max_batchsize - 1) // max_batchsize) self.tasks = JoinableQueue(maxsize=threads * queue_size_multiplier) self.workers = [] self.result_dict = {} for _ in range(self.threads): worker = threading.Thread(target=self.handle_tasks, args=(self.tasks,)) worker.daemon = True self.workers.append(worker) worker.start() def handle_tasks(self, tasks_queue): 
"""Worker thread.""" while True: qitem = tasks_queue.get() if qitem is None: # None in the queue indicates the parent want us to exit tasks_queue.task_done() break self.run_one_item(qitem) tasks_queue.task_done() def enqueue(self, query_samples): idx = [q.index for q in query_samples] query_id = [q.id for q in query_samples] query_len = len(query_samples) # print(idx) # if query_len < self.max_batchsize: # samples = self.ds.get_samples(idx) # # batch_T = [self.ds.get_labels(sample) for sample in samples] # data = Item(query_id, idx, samples) # self.tasks.put(data) # else: bs = 1 for i in range(0, query_len, bs): ie = min(i + bs, query_len) samples = self.ds.get_samples(idx) # batch_T = [self.ds.get_labels(sample) for sample in samples] self.tasks.put(Item(query_id[i:ie], idx[i:ie], samples)) def finish(self): # exit all threads for _ in self.workers: self.tasks.put(None) for worker in self.workers: worker.join() def add_results(final_results, name, result_dict, result_list, took, show_accuracy=False): percentiles = [50., 80., 90., 95., 99., 99.9] buckets = np.percentile(result_list, percentiles).tolist() buckets_str = ",".join(["{}:{:.4f}".format(p, b) for p, b in zip(percentiles, buckets)]) if result_dict["total"] == 0: result_dict["total"] = len(result_list) # this is what we record for each run result = { "took": took, "mean": np.mean(result_list), "percentiles": {str(k): v for k, v in zip(percentiles, buckets)}, "qps": len(result_list) / took, "count": len(result_list), "good_items": result_dict["good"], "total_items": result_dict["total"], } acc_str = "" if show_accuracy: result["accuracy"] = 100. * result_dict["good"] / result_dict["total"] acc_str = ", acc={:.3f}%".format(result["accuracy"]) if "roc_auc" in result_dict: result["roc_auc"] = 100. 
* result_dict["roc_auc"] acc_str += ", auc={:.3f}%".format(result["roc_auc"]) # add the result to the result dict final_results[name] = result # to stdout print("{} qps={:.2f}, mean={:.4f}, time={:.3f}{}, queries={}, tiles={}".format( name, result["qps"], result["mean"], took, acc_str, len(result_list), buckets_str)) lock = threading.Lock() def append_file(file_path, data): with lock: my_string = ','.join(str(f) for f in data) with open(file_path, 'a+') as file: file.write(my_string + '\n') def read_file(file_path): lines_as_lists = [] with open(file_path, 'r') as file: for line in file: # 去除行尾的换行符,并将行分割成列表 lines_as_lists.extend([float(num) for num in line.strip().split(',')]) return lines_as_lists def get_score(model, quality: float, performance: float): print(model, quality,performance) try: score =0 if model["scenario"] == 'SingleStream'or model["scenario"]== 'MultiStream': if quality >= model["accuracy"]: score = model["baseline_performance"] / (performance + 1e-9) * model["base_score"] else: score ==0 elif model["scenario"]== 'Server' or model["scenario"] == 'Offline': if quality >= model["accuracy"]: score = performance * model["base_score"] / model["baseline_performance"] print(model["baseline_performance"]) else: score ==0 except Exception as e: score ==0 finally: return score def main(): global last_timeing # 初始化时清空文件 global g_lables global g_predicts # args = get_args() g_lables=[] g_predicts=[] # # dataset to use wanted_dataset, pre_proc, post_proc, kwargs = SUPPORTED_DATASETS["debug"] # # --count-samples can be used to limit the number of samples used for testing ds = wanted_dataset(directory=config.dataset_path, train_mode=False, epochs=15, line_per_sample=1000, batch_size=config.test_batch_size, data_type=DataType.MINDRECORD,total_size = config.total_size) # # load model to backend # model = backend.load(args.model_path, inputs=args.inputs, outputs=args.outputs) net_builder = ModelBuilder() train_net, eval_net = net_builder.get_net(config) # ckpt_path 
= config.ckpt_path param_dict = load_checkpoint(config.ckpt_path) load_param_into_net(eval_net, param_dict) train_net.set_train() eval_net.set_train(False) # acc_metric = AccMetric() # model = Model(train_net, eval_network=eval_net, metrics={"acc": acc_metric}) auc_metric1 = AUCMetric() model = Model(train_net, eval_network=eval_net, metrics={"auc": auc_metric1}) model.auc_metric = auc_metric1 # res = model.eval(ds_eval) final_results = { "runtime": "wide_deep_mindspore", "version": "v2", "time": int(time.time()), "cmdline": str(config), } mlperf_conf = os.path.abspath(config.mlperf_conf) if not os.path.exists(mlperf_conf): log.error("{} not found".format(mlperf_conf)) sys.exit(1) user_conf = os.path.abspath(config.user_conf) if not os.path.exists(user_conf): log.error("{} not found".format(user_conf)) sys.exit(1) if config.output: output_dir = os.path.abspath(config.output) os.makedirs(output_dir, exist_ok=True) os.chdir(output_dir) # # make one pass over the dataset to validate accuracy # count = ds.get_item_count() count = 5 base_score = config.config.base_score #task. accuracy = config.config.accuracy #task. baseline_performance = config.baseline_performance scenario_str = config["scenario"] #task. 
scenario = SCENARIO_MAP[scenario_str] runner_map = { lg.TestScenario.SingleStream: RunnerBase, lg.TestScenario.MultiStream: QueueRunner, lg.TestScenario.Server: QueueRunner, lg.TestScenario.Offline: QueueRunner } runner = runner_map[scenario](model, ds, config.threads_count, post_proc=post_proc) def issue_queries(query_samples): runner.enqueue(query_samples) def flush_queries(): pass settings = lg.TestSettings() settings.FromConfig(mlperf_conf, config.model_path, config.scenario) settings.FromConfig(user_conf, config.model_path, config.scenario) settings.scenario = scenario settings.mode = lg.TestMode.AccuracyOnly sut = lg.ConstructSUT(issue_queries, flush_queries) qsl = lg.ConstructQSL(count, config.performance_count, ds.load_query_samples, ds.unload_query_samples) log.info("starting {}".format(scenario)) result_dict = {"good": 0, "total": 0, "roc_auc": 0, "scenario": str(scenario)} runner.start_run(result_dict, config.accuracy) lg.StartTest(sut, qsl, settings) result_dict["good"] = runner.post_process.good result_dict["total"] = runner.post_process.total last_timeing = runner.result_timing post_proc.finalize(result_dict) add_results(final_results, "{}".format(scenario), result_dict, last_timeing, time.time() - ds.last_loaded, config.accuracy) runner.finish() lg.DestroyQSL(qsl) lg.DestroySUT(sut) # If multiple subprocesses are running the model send a signal to stop them if (int(os.environ.get("WORLD_SIZE", 1)) > 1): model.eval(None) from sklearn.metrics import roc_auc_score # labels = read_file(labels_path) # predicts = read_file(predicts_path) final_results['auc'] = sklearn.metrics.roc_auc_score(g_lables, g_predicts) print("auc+++++", final_results['auc']) NormMetric= { 'scenario': scenario_str, 'accuracy': accuracy, 'baseline_performance': baseline_performance, 'performance_unit': 's', 'base_score': base_score } # 打开文件 reprot_array=[] test_suit_array=[] test_suit_obj={} test_cases_array=[] test_cases_obj={} test_cases_obj["Name"] = config["task_name"] 
test_cases_obj["Performance Unit"] = config["performance_unit"] test_cases_obj["Total Duration"] = time.time() - ds.last_loaded test_cases_obj["Train Duration"] = None test_cases_obj["Training Info"] = { "Real Quality" :None, "Learning Rate" :None, "Base Quality" :None, "Epochs" :None, "Optimizer" :None } test_cases_obj["Software Versions"] = { "Python" :3.8, "Framework" :"Mindspore 2.2.14" } percentiles = [50., 80., 90., 95., 99., 99.9] buckets = np.percentile(last_timeing, percentiles).tolist() took = time.time() - ds.last_loaded qps = len(last_timeing) / took print(buckets) if scenario_str=="SingleStream": test_cases_obj["Performance Metric"] = buckets[2] if scenario_str=="MultiStream": test_cases_obj["Performance Metric"] = buckets[4] if scenario_str=="Server": test_cases_obj["Performance Metric"] = qps if scenario_str=="Offline": test_cases_obj["Performance Metric"] = qps score = get_score(NormMetric, final_results["auc"], test_cases_obj["Performance Metric"]) test_cases_obj["Score"] = score test_cases_array.append(test_cases_obj) test_suit_obj["Test Cases"] = test_cases_array test_suit_obj["Name"] = "Inference Suite" test_suit_array.append(test_suit_obj) test_obj = {"Test Suites": test_suit_array} reprot_array.append(test_obj) test_suit_result_obj = {"Name":"Inference Suite","Description":"inference model","Score":score } test_suit_result_array = [] test_suit_result_array.append(test_suit_result_obj) test_suit_result_obj1 = {"Test Suites Results":test_suit_result_array} reprot_array.append(test_suit_result_obj1) epoch_obj={ "epoch":None, "epoch_time":None, "train_loss":None, "metric":None, "metric_name":None } reprot_array.append(epoch_obj) result_final = {"Report Info": reprot_array } print("result_final", result_final) # task.save(result_final) if __name__ == "__main__": # try: # if task.config["is_run_infer"] == True: # main() # task.close() # else: # task.close() # raise ValueError # except Exception as e: # task.logger.error(e) # task.close(e) # 
task.logger.info("Finish ") # if task.config["is_run_infer"] == True: # print(config) main() # task.close()介绍一下这段代码
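The dispatch machinery in the code above (QueueRunner's handle_tasks / enqueue / finish) is a standard joinable-queue producer/consumer: workers block on the queue, a None item is the shutdown signal, and finish() puts one None per worker before joining. A stripped-down, self-contained sketch of the same pattern (using queue.Queue in place of multiprocessing.JoinableQueue, and a simple doubling function as a stand-in for run_one_item):

```python
import threading
from queue import Queue  # stand-in for multiprocessing.JoinableQueue

results = []

def handle_tasks(tasks):
    # Worker loop: a None item tells the worker to exit (same convention as QueueRunner).
    while True:
        qitem = tasks.get()
        if qitem is None:
            tasks.task_done()
            break
        results.append(qitem * 2)  # stand-in for run_one_item(qitem)
        tasks.task_done()

tasks = Queue(maxsize=8)
workers = [threading.Thread(target=handle_tasks, args=(tasks,), daemon=True)
           for _ in range(2)]
for w in workers:
    w.start()

for sample in range(5):   # enqueue() puts one Item per sample (bs = 1)
    tasks.put(sample)
for _ in workers:         # finish(): one None per worker, then join
    tasks.put(None)
for w in workers:
    w.join()

print(sorted(results))  # → [0, 2, 4, 6, 8]
```

The important detail is the one-None-per-worker shutdown: each worker consumes exactly one sentinel and exits, so join() is guaranteed to return.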

I have the source of a web page. How can I rebuild it in IDEA with Vue, using Element-UI components? view-source:https://rloopbase.nju.edu.cn/ — the source is as follows:

<html xmlns="http://www.w3.org/1999/xhtml " xml:lang="en" lang="en">
<head>
<title>R-loopBase</title>
</head>
<body>
Introduction The R-loop, a three-stranded nucleic acid structure composed of an RNA:DNA hybrid and a displaced single-stranded DNA, plays versatile roles in many physiological and pathological processes. However, its controversial genomic localization and incomplete understanding of its regulatory network raise great challenges for R-loop research. Here, we present R-loopBase to tackle these pressing issues by systematic integration of a huge amount of genomics and literature data. A reference set of human R-loop zones is computed with confidence scores and detailed annotations (Search, Help and Download), all of which can be visualized in well-annotated genomic context in a local genome browser (Browser). A list of 1,185 R-loop regulators is manually curated from PubMed literatures and annotated with most up-to-date multi-omics data (Regulator). We have also manually curated 24 R-loop regulators in mouse, 63 in yeast (saccharomyces cerevisiae) and 21 in Escherichia coli to facilicate R-loop research in these model organisms (Regulator). We further deduce the functional relationship between individual R-loops and their putative regulators (Regulator and Download). In total, we have generated billions of entries of functional annotations, which can be easily accessed in our user-friendly interfaces (Help). The regulator summary and genomic annotation of R-loop regions will be constantly updated along the research progress of field. You are welcome to contribute to updating R-loop regulators and related literatures (Regulator Updating System on Regulator page). Any suggestions and feedbacks from you are also welcome (Contact). Show more
R-loopBase <input id="searchText" type="text" name="" placeholder="e.g. BRCA2 or chr1:154572702-154628593" onkeypress="handle(event)"> search Browser Regulator Tool Download Help Contact News 2023.10.01 Online of DeepER, an R-loop prediction tool. 2022.02.19 A new preprint by R-loopBase team. 2021.11.18 Published on Nucleic Acids Res. 2021.06.20 Official release of R-loopBase v1.0. 2021.06.15 Internal evaluation before public release. 2021.05.10 Build-up of R-loopBase genome browser. 2021.03.31 Data freeze for R-loopBase v1.0. 2021.01.18 Launching R-loopBase project. 2020.10.10 Systematic evalation of all R-loop mapping technologies. Show more
Latest Publications LUC7L3 is a downstream factor of SRSF1 and prevents genomic instability. Cell Insight. 2024. Smartphone-based device for rapid and single-step detection of piRNA-651 using anti-DNA:RNA hybrid antibody and enzymatic signal amplification. Anal Chim Acta. 2024. Split activator of CRISPR/Cas12a for direct and sensitive detection of microRNA. Anal Chim Acta. 2024. More publications in PubMed
© 2020-2025 NANJING UNIVERSITY. ALL RIGHTS RESERVED. CONDITIONS OF USE </body> </html> <script type="text/javascript" language="JavaScript"> function handle(e){ if(e.keyCode === 13){ e.preventDefault(); // Ensure it is only this code that runs toGeneList(); } } function toGeneList(){ //var type = document.getElementById('searchSelect').value; var type; var value = document.getElementById('searchText').value; if(value==''){ alert("Please input keyword!"); }else if(value.match(/[^a-zA-Z0-9:-]/)){ alert("Only numbers, letters, \":\" and \"-\" are allowed!"); }else{ //top.location = '/geneList.jsp?searchFor=' + type + '&keyword=' + value; if(value.match(/^chr.+:\d+-\d+$/)){ type="location"; }else{ type="symbols"; } top.location = '/rloopList.jsp?searchFor=' + type + '&keyword=' + value; } } function changeLiSize(){ var selectId = document.getElementById("myNavigator"); } function showMoreOrLess(){ var selectId = document.getElementById("showIntroduction"); var selectText = document.getElementById("introductionText"); if(selectId.innerText == "Show more"){ selectText.style.height="auto"; selectId.innerText="Show less"; }else{ selectText.style.height="120px"; selectId.innerText="Show more"; } } function showMoreOrLess2(){ var selectId2 = document.getElementById("showNews"); var selectText2 = document.getElementById("newsDiv"); if(selectId2.innerText == "Show more"){ selectText2.style.height="auto"; selectId2.innerText="Show less"; }else{ selectText2.style.height="220px"; selectId2.innerText="Show more"; } } var publicationDivHeight = document.getElementById('publicationDiv').clientHeight; </script>

mean_dice = []
mean_hd95 = []
mean_MIOU = []
with torch.no_grad():
    for i, batch in 

Running this raises the following error:

ValueError                                Traceback (most recent call last)
Cell In[7], line 25
     23 # print(dice_metric)
     24 mean_dice.append(dice_metric.item())
---> 25 mean_hd95.append(monai.metrics.HausdorffDistanceMetric(include_background=False, percentile=95)(val_outputs_2, val_labels).mean().item())
     26 mean_MIOU.append(monai.metrics.MeanIoU(include_background=False)(val_outputs_2, val_labels).mean().item())
     28 for i in range(7):

File ~/anaconda3/envs/micformer/lib/python3.8/site-packages/monai/metrics/metric.py:347, in CumulativeIterationMetric.__call__(self, y_pred, y, **kwargs)
    327 def __call__(
    328     self, y_pred: TensorOrList, y: TensorOrList | None = None, **kwargs: Any
    329 ) -> torch.Tensor | Sequence[torch.Tensor | Sequence[torch.Tensor]]:
    330     """
    331     Execute basic computation for model prediction and ground truth.
    332     It can support both list of channel-first Tensor and batch-first Tensor.
    (...)
    345     a batch-first tensor (BC[HWD]) or a list of batch-first tensors.
    346     """
--> 347     ret = super().__call__(y_pred=y_pred, y=y, **kwargs)
    348     if isinstance(ret, (tuple, list)):
    349         self.extend(*ret)

File ~/anaconda3/envs/micformer/lib/python3.8/site-packages/monai/metrics/metric.py:80, in IterationMetric.__call__(self, y_pred, y, **kwargs)
     78 if isinstance(y_pred, torch.Tensor):
...
--> 179     raise ValueError(f"y_pred and y should have same shapes, got {y_pred.shape} and {y.shape}.")
    181 batch_size, n_class = y_pred.shape[:2]
    182 hd = torch.empty((batch_size, n_class), dtype=torch.float, device=y_pred.device)

ValueError: y_pred and y should have same shapes, got torch.Size([1, 1, 128, 128, 128]) and torch.Size([1, 7, 128, 128, 128]).
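The traceback says it directly: y_pred has a single channel (class indices, presumably after an argmax) while y is one-hot with 7 channels, and these MONAI metrics expect both tensors in the same one-hot BC[HWD] layout. The usual fix is to one-hot-encode the predictions before calling the metric; in MONAI this is typically done with an AsDiscrete transform such as `AsDiscrete(argmax=True, to_onehot=7)` (the exact keyword varies across MONAI versions, so check your installed version). A minimal NumPy sketch of the conversion itself, with hypothetical stand-in shapes for `val_outputs_2`:

```python
import numpy as np

def to_one_hot(pred, num_classes):
    """Expand class-index predictions (B, 1, D, H, W) to one-hot (B, C, D, H, W)."""
    b, _, *spatial = pred.shape
    one_hot = np.zeros((b, num_classes, *spatial), dtype=np.float32)
    # put_along_axis scatters a 1 into the channel given by each voxel's class index
    np.put_along_axis(one_hot, pred.astype(np.int64), 1.0, axis=1)
    return one_hot

# stand-in for val_outputs_2 after argmax: (1, 1, D, H, W) class indices in [0, 7)
pred = np.random.randint(0, 7, size=(1, 1, 8, 8, 8))
one_hot = to_one_hot(pred, num_classes=7)
print(one_hot.shape)  # (1, 7, 8, 8, 8) -- now matches val_labels' layout
```

With both tensors in (1, 7, 128, 128, 128) one-hot form, HausdorffDistanceMetric and MeanIoU should accept them.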
