Unsupervised Aspect Extraction from English Text: An ExtRA Algorithm Quick-Start Guide

Download: 50 points | 79.52 MB | updated 2025-08-15 | 8 views | 1 download
### Title knowledge points

The title, "Code for running the ExtRA algorithm to extract aspects from English text without supervision," points to a specific natural language processing (NLP) technique. ExtRA names an algorithm for extracting aspects (topics) from English text. It is designed as an unsupervised method: it needs no labeled training data and discovers latent topics or aspects directly from raw text.

### Description knowledge points

The description contains several key points:

#### Docker container resource management

Running the Extra code requires giving the Docker process enough resources — at least 8 GB of RAM on Mac or Windows. Docker containers running machine learning or big-data workloads can be resource-hungry, so users need sufficient hardware for the program to run properly.

#### Importance of the embedding files

The GitHub repository does not ship the GloVe embeddings (a pretrained word-vector model), meaning the project depends on an external pretrained model for text vectorization. Users must download the corresponding embedding files before the program can work. This underlines the central role of word vectors in NLP, especially in unsupervised tasks, because the model relies on these pretrained vectors to understand and process language.

#### Using docker-compose

docker-compose is the preferred way to run extra-model: build the image first, then run the test command to confirm that extra-model installed correctly. docker-compose is a tool for defining and running multi-container Docker applications; it simplifies container management by building the environment and starting containers in one step, which makes complex software easy to deploy and run.

#### Downloading the embeddings

The description also covers downloading the embedding files, a prerequisite for extracting aspects from English text with extra-model. The mention of "Stanfor" is presumably short for the Stanford NLP Group, suggesting the embeddings come from Stanford's pretrained models. The appropriate word-vector model must be loaded as input before the algorithm runs.

### Tag knowledge points

The tags list technologies and tools that point to the project's likely stack:

- **Python**: the implementation language, one of the most widely used in NLP.
- **NLP**: natural language processing, an interdisciplinary field of computer science, AI, and linguistics that enables computers to understand human language.
- **Python Library**: Python extensions that support building various applications.
- **Machine Learning Algorithms**: algorithms that learn from data to make decisions or predictions.
- **Python3**: the third major version of Python, the current mainstream implementation.
- **NLP Library**: Python libraries dedicated to natural language tasks, such as NLTK and spaCy.
- **NLP Keywords Extraction**: identifying the key information in a text.
- **Aspect-Based Sentiment Analysis**: identifying opinions and sentiment about specific aspects of a product or service.
- **Aspect Extraction**: identifying the topics or aspects discussed in a text.

### File list of the archive

From the file name "extra-model-main" one can infer this is a repository containing the program's main entry point. In many Git repositories, "main" is the default branch holding the primary or latest code, so this folder likely contains the main code and dependencies for building and running extra-model.

In summary, the page gives detailed guidance for a specific operational scenario, emphasizing resource management, container technology, and the dependence on pretrained models. The tags point to a dedicated Python project focused on NLP, particularly topic identification and sentiment analysis.
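The description above stresses that the pretrained GloVe embeddings must be downloaded separately before extra-model can run. As a rough, hypothetical illustration of what those files contain, the standard GloVe text format (one word per line followed by its vector components, space-separated) can be parsed as below — the helper name and sample vectors are invented for illustration, not part of the repository:

```python
import numpy as np

def load_glove_embeddings(lines):
    """Parse GloVe-format text lines ("word v1 v2 ... vd") into a dict of vectors."""
    embeddings = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        if len(parts) < 2:
            continue  # skip malformed lines
        word, values = parts[0], parts[1:]
        embeddings[word] = np.asarray(values, dtype=np.float32)
    return embeddings

# Tiny in-memory sample standing in for a real glove.*.txt file
sample = [
    "laptop 0.1 0.2 0.3",
    "screen 0.2 0.1 0.4",
]
vectors = load_glove_embeddings(sample)
print(sorted(vectors), vectors["laptop"].shape)  # ['laptop', 'screen'] (3,)
```

In a real run the file would be the multi-gigabyte Stanford download, which is why the repository leaves it out of version control.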

Related recommendations


```python
class Algorithm:
    def __init__(self, model, optimizer, device=None, logger=None, monitor=None):
        self.device = device or torch.device("cpu")
        self.model = model
        self.optim = optimizer
        self.logger = logger
        self.monitor = monitor
        # Algorithm parameters (consistent with Config)
        self.num_head = Config.NUMB_HEAD                      # 4
        self._gamma = Config.GAMMA                            # 0.99
        self.target_update_freq = Config.TARGET_UPDATE_FREQ   # every 500 steps
        # Target network
        self.target_model = deepcopy(self.model)
        self.target_model.eval()
        # Training state
        self.train_step = 0
        self.last_report_time = time.time()

    def learn(self, list_sample_data):
        """Learning logic: handle legal_action per head to avoid stacking."""
        if len(list_sample_data) == 0:
            self.logger.warning("Empty sample batch, skipping learning")
            return {"loss": 0.0}

        # ---- 1. Extract sample data (legal_action handled per head) ----
        obs_list = []
        action_list = []
        rew_list = []
        next_obs_list = []
        not_done_list = []
        # legal_action stored per head: head_idx -> [mask of sample 1, mask of sample 2, ...]
        legal_action_per_head = [[] for _ in range(self.num_head)]
        for sample in list_sample_data:
            # Basic data
            obs_list.append(torch.tensor(sample.obs, dtype=torch.float32, device=self.device))
            action_list.append(torch.tensor(sample.act, dtype=torch.float32, device=self.device))
            rew_list.append(torch.tensor(sample.rew, dtype=torch.float32, device=self.device))
            next_obs_list.append(torch.tensor(sample._obs, dtype=torch.float32, device=self.device))
            not_done_list.append(torch.tensor([sample.done], dtype=torch.float32, device=self.device))
            # A sample's legal_action is a list of masks for the 4 heads
            legal = sample.legal_action
            for head_idx in range(self.num_head):
                if head_idx < len(legal):
                    legal_tensor = torch.tensor(legal[head_idx], dtype=torch.float32, device=self.device)
                    legal_action_per_head[head_idx].append(legal_tensor)

        # Stack the basic data
        obs = torch.stack(obs_list)            # (batch, obs_dim)
        action = torch.stack(action_list)      # (batch, num_head)
        rew = torch.stack(rew_list)            # (batch, num_head)
        next_obs = torch.stack(next_obs_list)  # (batch, obs_dim)
        not_done = torch.stack(not_done_list).squeeze(1)  # (batch,)
        # Stack legal_action per head: (batch, action_dim_per_head)
        for head_idx in range(self.num_head):
            legal_action_per_head[head_idx] = torch.stack(legal_action_per_head[head_idx])

        # ---- 2. Compute target Q values (DDQN logic) ----
        self.target_model.eval()
        self.model.eval()
        q_targets = []
        with torch.no_grad():
            for head_idx in range(self.num_head):
                # 2.1 The current network picks the best next-state action (respecting legal actions)
                current_q_next = self.model(next_obs)[head_idx]   # (batch, action_dim_per_head)
                legal_mask = legal_action_per_head[head_idx]      # (batch, action_dim_per_head)
                # Set illegal actions' Q values to -1e10 so they are never selected
                current_q_next_masked = current_q_next + (1 - legal_mask) * (-1e10)
                best_actions = torch.argmax(current_q_next_masked, dim=1, keepdim=True)  # (batch, 1)
                # 2.2 The target network evaluates the best action's value
                target_q_next = self.target_model(next_obs)[head_idx]  # (batch, action_dim_per_head)
                target_q_best = target_q_next.gather(1, best_actions)  # (batch, 1)
                # 2.3 Target Q: rew + gamma * target_q_best * not_done
                rew_head = rew[:, head_idx].unsqueeze(1)  # (batch, 1)
                q_target_head = rew_head + self._gamma * target_q_best * not_done.unsqueeze(1)
                q_targets.append(q_target_head)
        # Concatenate the heads' target Q values: (batch, num_head)
        q_targets = torch.cat(q_targets, dim=1)

        # ---- 3. Compute current Q values and the loss ----
        self.model.train()
        q_values = []
        for head_idx in range(self.num_head):
            # Q values for the taken actions
            current_q = self.model(obs)[head_idx]  # (batch, action_dim_per_head)
            # Action indices: (batch, 1) (must be long)
            action_idx = action[:, head_idx].long().unsqueeze(1)
            q_value_head = current_q.gather(1, action_idx)  # (batch, 1)
            q_values.append(q_value_head)
        q_values = torch.cat(q_values, dim=1)  # (batch, num_head)
        # MSE loss accumulated over heads
        loss = 0.0
        for head_idx in range(self.num_head):
            loss += F.mse_loss(q_values[:, head_idx], q_targets[:, head_idx])

        # ---- 4. Optimization and target-network update ----
        self.optim.zero_grad()
        loss.backward()
        # Gradient clipping (prevents exploding gradients)
        grad_norm = torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0).item()
        self.optim.step()
        # Periodically update the target network
        self.train_step += 1
        if self.train_step % self.target_update_freq == 0:
            self.target_model.load_state_dict(self.model.state_dict())
            self.logger.info(f"Step {self.train_step}: target network updated")

        # ---- 5. Logging and monitoring ----
        # Report monitoring data every 30 seconds
        now = time.time()
        if now - self.last_report_time >= 30:
            avg_q = q_values.mean().item()
            avg_target_q = q_targets.mean().item()
            monitor_data = {
                "value_loss": loss.item(),
                "avg_q_value": avg_q,
                "avg_target_q_value": avg_target_q,
                "grad_norm": grad_norm,
                "train_step": self.train_step,
            }
            if self.monitor:
                self.monitor.put_data({os.getpid(): monitor_data})
            self.logger.info(
                f"loss: {loss.item():.4f}, avg Q: {avg_q:.4f}, "
                f"avg target Q: {avg_target_q:.4f}, grad norm: {grad_norm:.4f}"
            )
            self.last_report_time = now
        return {"loss": loss.item(), "grad_norm": grad_norm}


class Agent(BaseAgent):
    def __init__(self, agent_type="player", device=None, logger=None, monitor=None):
        super().__init__(agent_type, device, logger, monitor)
        self.device = device or torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model = Model(device=self.device)
        # Optimizer and exploration parameters
        self.optim = torch.optim.RMSprop(self.model.parameters(), lr=Config.LR)
        self._eps = Config.START_EPSILON_GREEDY
        self.end_eps = Config.END_EPSILON_GREEDY
        self.eps_decay = Config.EPSILON_DECAY
        # Action dimensions (consistent with Config)
        self.head_dim = [
            Config.DIM_OF_ACTION_PHASE_1,
            Config.DIM_OF_ACTION_DURATION_1,
            Config.DIM_OF_ACTION_PHASE_2,
            Config.DIM_OF_ACTION_DURATION_2,
        ]
        self.num_head = Config.NUMB_HEAD  # 4
        # Preprocessing module (core dependency)
        self.preprocess = FeatureProcess(logger, usr_conf={})
        self.road_info_initialized = False  # whether road info has been initialized
        self.last_action = None
        # Algorithm module
        from your_algorithm import Algorithm  # replace with the actual Algorithm import
        self.algorithm = Algorithm(
            self.model, self.optim, self.device, self.logger, self.monitor
        )

    def reset(self):
        self.preprocess.reset()
        self._eps = Config.START_EPSILON_GREEDY

    def init_road_info(self, extra_info):
        """Initialize road info (load static configuration from extra_info)."""
        if self.road_info_initialized:
            return
        game_info = extra_info.get("gameinfo", {})
        self.preprocess.init_road_info(game_info)
        self.road_info_initialized = True
        self.logger.info("Agent road info initialized")

    # ---- Prediction and action handling ----
    def __predict_detail(self, list_obs_data, exploit_flag=False):
        """Internal prediction logic: generate actions for the 4 heads."""
        feature = [torch.tensor(obs.feature, dtype=torch.float32, device=self.device)
                   for obs in list_obs_data]
        feature = torch.stack(feature)
        self.model.eval()
        self._eps = max(self.end_eps, self._eps * self.eps_decay)
        with torch.no_grad():
            if np.random.rand() >= self._eps or exploit_flag:
                # Greedy: pick the action with the highest Q value
                res = self.model(feature)  # model output assumed to be a tuple of the 4 heads' Q values
                list_phase_1 = torch.argmax(res[0], dim=1).cpu().tolist()
                list_duration_1 = torch.argmax(res[1], dim=1).cpu().tolist()
                list_phase_2 = torch.argmax(res[2], dim=1).cpu().tolist()
                list_duration_2 = torch.argmax(res[3], dim=1).cpu().tolist()
            else:
                # Random exploration: sample uniformly per action dimension
                list_phase_1 = np.random.choice(self.head_dim[0], len(list_obs_data)).tolist()
                list_duration_1 = np.random.choice(self.head_dim[1], len(list_obs_data)).tolist()
                list_phase_2 = np.random.choice(self.head_dim[2], len(list_obs_data)).tolist()
                list_duration_2 = np.random.choice(self.head_dim[3], len(list_obs_data)).tolist()
        # Build the list of ActData
        return [
            ActData(
                phase_index_1=list_phase_1[i],
                duration_1=list_duration_1[i],
                phase_index_2=list_phase_2[i],
                duration_2=list_duration_2[i],
            )
            for i in range(len(list_obs_data))
        ]

    @predict_wrapper
    def predict(self, list_obs_data):
        return self.__predict_detail(list_obs_data, exploit_flag=False)

    @exploit_wrapper
    def exploit(self, observation):
        """Inference mode: generate actions executable by the environment."""
        # Initialize road info on the first call
        if not self.road_info_initialized:
            self.init_road_info(observation["extra_info"])
        # Process the observation
        obs_data = self.observation_process(observation["obs"], observation["extra_info"])
        if not obs_data:
            self.logger.warning("Observation processing failed, returning default action")
            # Default: junction 1 phase 0 duration 1, junction 2 phase 0 duration 1
            return [[1, 0, 1], [2, 0, 1]]
        # Predict the action
        act_data = self.__predict_detail([obs_data], exploit_flag=True)[0]
        return self.action_process(act_data)

    # ---- Observation processing (core fix) ----
    def observation_process(self, obs, extra_info):
        """Process the observation: build the feature and the legal-action masks."""
        # 1. Initialize road info (first call)
        if not self.road_info_initialized:
            self.init_road_info(extra_info)
        # 2. Update the preprocessing module's dynamic data
        self.preprocess.update_traffic_info(obs, extra_info)
        frame_state = obs.get("framestate", {})
        vehicles = frame_state.get("vehicles", [])
        phases = frame_state.get("phases", [])
        # 3. Rasterization (position + speed)
        grid_w, grid_num = Config.GRID_WIDTH, Config.GRID_NUM
        speed_dict = np.zeros((grid_w, grid_num), dtype=np.float32)
        position_dict = np.zeros((grid_w, grid_num), dtype=np.int32)
        # 4. Fill the grid from vehicles (with error tolerance)
        for vehicle in vehicles:
            if not self.preprocess.on_enter_lane(vehicle):
                continue  # only vehicles on approach lanes
            try:
                # x_pos (lane code)
                x_pos = get_lane_code(vehicle)  # assumed to return a 0-25 code
                if x_pos < 0 or x_pos >= grid_w:
                    continue
                # y_pos (grid index)
                pos_in_lane = vehicle.get("position_in_lane", {})
                y_coord = pos_in_lane.get("y", 0)
                y_pos = int(y_coord // Config.GRID_LENGTH)
                if y_pos < 0 or y_pos >= grid_num:
                    continue
                # Normalized speed (via FeatureProcess's fault-tolerant interface)
                v_config_id = vehicle.get("v_config_id", 1)
                max_speed = self.preprocess.get_vehicle_max_speed(v_config_id)
                vehicle_speed = vehicle.get("speed", 0)
                normalized_speed = vehicle_speed / max_speed if max_speed != 0 else 0.0
                # Fill the grid
                speed_dict[x_pos, y_pos] = normalized_speed
                position_dict[x_pos, y_pos] = 1
            except Exception as e:
                v_id = vehicle.get("v_id", "unknown")
                self.logger.warning(f"Rasterizing vehicle {v_id} failed: {str(e)}")
                continue
        # 5. Parse the current phases (matched by signal ID, not hard-coded)
        # 5.1 Junction 1 phase (Config.DIM_OF_ACTION_PHASE_1 = 4)
        j1 = self.preprocess.get_junction_by_signal(0)  # signal 0 -> junction 1
        current_phase1 = 0
        for phase in phases:
            if phase.get("s_id") == 0:
                current_phase1 = phase.get("phase_id", 0)
        jct_phase_1 = [0] * Config.DIM_OF_ACTION_PHASE_1
        if 0 <= current_phase1 < Config.DIM_OF_ACTION_PHASE_1:
            jct_phase_1[current_phase1] = 1
        # 5.2 Junction 2 phase (Config.DIM_OF_ACTION_PHASE_2 = 3)
        j2 = self.preprocess.get_junction_by_signal(1)  # signal 1 -> junction 2
        current_phase2 = 0
        for phase in phases:
            if phase.get("s_id") == 1:
                current_phase2 = phase.get("phase_id", 0)
        jct_phase_2 = [0] * Config.DIM_OF_ACTION_PHASE_2
        if 0 <= current_phase2 < Config.DIM_OF_ACTION_PHASE_2:
            jct_phase_2[current_phase2] = 1
        # 6. Build the legal-action masks (4 heads, plain lists, not tensors)
        legal_action = self._generate_legal_action(current_phase1, current_phase2)
        # 7. Build the feature (must match Config.DIM_OF_OBSERVATION = 1056)
        position_flat = position_dict.flatten().tolist()  # 26*20 = 520
        speed_flat = speed_dict.flatten().tolist()        # 26*20 = 520
        weather_feature = [self.preprocess.get_weather()]            # 1 (weather code)
        peak_feature = [1 if self.preprocess.is_peak_hour() else 0]  # 1 (peak-hour flag)
        # Concatenation: 520+520+4+3+1+1 = 1049 -> pad with 7 zeros to reach 1056 (could be other features)
        padding = [0] * (Config.DIM_OF_OBSERVATION - len(
            position_flat + speed_flat + jct_phase_1 + jct_phase_2 + weather_feature + peak_feature))
        feature = (position_flat + speed_flat + jct_phase_1 + jct_phase_2 +
                   weather_feature + peak_feature + padding)
        # 8. Build ObsData (carries legal_action; not part of the feature)
        obs_data = ObsData(feature=feature)
        obs_data.legal_action = legal_action  # key: pass the 4 heads' legal masks
        return obs_data

    def _generate_legal_action(self, current_phase1, current_phase2):
        """Build the 4 heads' legal-action masks (lists of length 4, 30, 3, 30)."""
        legal = []
        # Head 0: junction 1 phase (current phase forbidden)
        phase1_dim = Config.DIM_OF_ACTION_PHASE_1
        legal_phase1 = [1] * phase1_dim
        if 0 <= current_phase1 < phase1_dim:
            legal_phase1[current_phase1] = 0  # forbid re-selecting the current phase
        legal.append(legal_phase1)
        # Head 1: junction 1 duration (1-30 s, all legal)
        duration1_dim = Config.DIM_OF_ACTION_DURATION_1
        legal_duration1 = [1] * duration1_dim
        # Optional: cap the maximum duration (e.g. <= 20 s)
        # for i in range(20, duration1_dim):
        #     legal_duration1[i] = 0
        legal.append(legal_duration1)
        # Head 2: junction 2 phase (current phase forbidden)
        phase2_dim = Config.DIM_OF_ACTION_PHASE_2
        legal_phase2 = [1] * phase2_dim
        if 0 <= current_phase2 < phase2_dim:
            legal_phase2[current_phase2] = 0
        legal.append(legal_phase2)
        # Head 3: junction 2 duration (all legal)
        duration2_dim = Config.DIM_OF_ACTION_DURATION_2
        legal_duration2 = [1] * duration2_dim
        legal.append(legal_duration2)
        return legal

    def action_process(self, act_data):
        """Convert ActData into the environment's action format: [[j_id, phase, duration], ...]."""
        # Duration + 1 (actions start at 0, the environment expects 1-30 s)
        duration1 = act_data.duration_1 + 1
        duration2 = act_data.duration_2 + 1
        # Clamp durations to a sane range (1 .. Config.MAX_GREEN_DURATION)
        duration1 = max(1, min(duration1, Config.MAX_GREEN_DURATION))
        duration2 = max(1, min(duration2, Config.MAX_GREEN_DURATION))
        # Junction IDs: from FeatureProcess (or defaults j1 -> 1, j2 -> 2)
        junction_ids = self.preprocess.get_sorted_junction_ids()
        j1_id = junction_ids[0] if len(junction_ids) >= 1 else 1
        j2_id = junction_ids[1] if len(junction_ids) >= 2 else 2
        return [[j1_id, act_data.phase_index_1, duration1],
                [j2_id, act_data.phase_index_2, duration2]]

    # ---- Learning and model saving ----
    @learn_wrapper
    def learn(self, list_sample_data):
        """Delegate learning to the algorithm."""
        return self.algorithm.learn(list_sample_data)

    @save_model_wrapper
    def save_model(self, path=None, id="1"):
        """Save the model (ensuring CPU compatibility)."""
        model_path = f"{path}/model.ckpt-{id}.pkl"
        state_dict = {k: v.clone().cpu() for k, v in self.model.state_dict().items()}
        torch.save(state_dict, model_path)
        self.logger.info(f"Model saved to {model_path}")

    @load_model_wrapper
    def load_model(self, path=None, id="1"):
        """Load the model."""
        model_path = f"{path}/model.ckpt-{id}.pkl"
        state_dict = torch.load(model_path, map_location=self.device)
        self.model.load_state_dict(state_dict)
        self.logger.info(f"Model loaded from {model_path}")


class FeatureProcess:
    def __init__(self, logger, usr_conf=None):
        self.logger = logger
        self.usr_conf = usr_conf or {}
        self.reset()

    def reset(self):
        # 1. Static road info (junctions stored uniformly in self.junction_dict)
        self.junction_dict = {}  # core: junction info (j_id -> junction data)
        self.edge_dict = {}
        self.lane_dict = {}
        self.vehicle_configs = {}
        self.l_id_to_index = {}
        self.lane_to_junction = {}
        self.phase_lane_mapping = {}
        # 2. Dynamic frame info
        self.vehicle_status = {}
        self.lane_volume = {}      # lane ID -> vehicle count (integer, protocol v_count)
        self.lane_congestion = {}
        self.current_phases = {}   # s_id -> {remaining_duration, phase_id}
        self.lane_demand = {}
        # 3. Vehicle history/trajectory data
        self.vehicle_prev_junction = {}
        self.vehicle_prev_position = {}
        self.vehicle_distance_store = {}
        self.last_waiting_moment = {}
        self.waiting_time_store = {}
        self.enter_lane_time = {}
        self.vehicle_enter_time = {}
        self.vehicle_ideal_time = {}
        self.current_vehicles = {}  # vehicle ID -> full info
        self.vehicle_trajectory = {}
        # 4. Scenario state
        self.peak_hour = False
        self.weather = 0  # 0=sunny, 1=rain, 2=snow, 3=fog
        self.accident_lanes = set()
        self.control_lanes = set()
        self.accident_configs = []
        self.control_configs = []
        # 5. Accumulated junction-level metrics
        self.junction_metrics = {}  # j_id -> metrics (delay, waiting, ...)
        self.enter_lane_ids = set()
        # Region info
        self.region_dict = {}          # region_id -> [j_id1, j_id2, ...]
        self.region_capacity = {}      # region_id -> total capacity
        self.junction_region_map = {}  # j_id -> region_id

    def init_road_info(self, start_info):
        """Protocol-compatible initialization: store all data in the correct fields."""
        junctions = start_info.get("junctions", [])
        signals = start_info.get("signals", [])
        edges = start_info.get("edges", [])
        lane_configs = start_info.get("lane_configs", [])
        vehicle_configs = start_info.get("vehicle_configs", [])
        # 1. Junction mapping (key: populate self.junction_dict)
        self._init_junction_mapping(junctions)
        # 2. Phase-to-lane mapping (depends on self.junction_dict)
        self._init_phase_mapping(signals)
        # 3. Lane configs
        self._init_lane_configs(lane_configs)
        # 4. Vehicle configs (ensuring defaults exist)
        self._init_vehicle_configs(vehicle_configs)
        # 5. Regions (depends on self.junction_dict)
        self._init_protocol_regions()
        # 6. Scenario config
        self._init_scene_config()
        self.logger.info(
            f"Road info initialized: {len(self.junction_dict)} junctions, "
            f"{len(self.vehicle_configs)} vehicle configs")

    def _init_junction_mapping(self, junctions):
        """Fix: store junction data in self.junction_dict, handle missing j_id."""
        if not junctions:
            self.logger.warning("Junction list is empty, initializing defaults (j1, j2)")
            # Fallback: default junctions (avoids null references later)
            self.junction_dict = {
                "j1": {"j_id": "j1", "signal": 0, "cached_enter_lanes": [], "enter_lanes_on_directions": []},
                "j2": {"j_id": "j2", "signal": 1, "cached_enter_lanes": [], "enter_lanes_on_directions": []},
            }
            return
        for idx, junction in enumerate(junctions):
            # Missing j_id: generate a default (j_<original id> or j_<idx>)
            if "j_id" not in junction:
                if "id" in junction:
                    junction["j_id"] = f"j_{junction['id']}"
                else:
                    junction["j_id"] = f"j_{idx}"
                self.logger.debug(f"Generated default j_id for junction {idx}: {junction['j_id']}")
            j_id = junction["j_id"]
            # Extract and cache approach lanes (protocol field: enter_lanes_on_directions)
            all_enter_lanes = []
            for dir_info in junction.get("enter_lanes_on_directions", []):
                all_enter_lanes.extend(dir_info.get("lanes", []))
            junction["cached_enter_lanes"] = all_enter_lanes  # cache the approach-lane list
            # Initialize junction metrics
            self.junction_metrics[j_id] = {
                "total_delay": 0.0,
                "total_vehicles": 0,
                "total_waiting": 0.0,
                "total_queue": 0,
                "queue_count": 0,
                "counted_vehicles": set(),
                "completed_vehicles": set(),
            }
            # Store into the core dict (used by the other methods)
            self.junction_dict[j_id] = junction
            self.logger.debug(
                f"Loaded junction {j_id}: {len(all_enter_lanes)} approach lanes, "
                f"signal {junction.get('signal', -1)}")

    def _init_phase_mapping(self, signals):
        """Fix: build the signal-to-junction map from self.junction_dict, parse phase-to-lane."""
        self.phase_lane_mapping = {}
        # Build signal ID -> junction ID from the initialized junction dict
        signal_junction_map = {}
        for j_id, junction in self.junction_dict.items():
            s_id = junction.get("signal", -1)
            if s_id != -1:
                signal_junction_map[s_id] = j_id
        for signal in signals:
            s_id = signal.get("s_id", -1)
            if s_id == -1:
                self.logger.warning("Signal missing s_id, skipping")
                continue
            # Associate with a junction (via signal_junction_map, not the raw junctions)
            j_id = signal_junction_map.get(s_id)
            if not j_id or j_id not in self.junction_dict:
                self.logger.warning(f"Signal {s_id} has no valid junction, skipping")
                continue
            junction = self.junction_dict[j_id]
            all_enter_lanes = junction["cached_enter_lanes"]
            self.phase_lane_mapping[s_id] = {}  # s_id -> phase_idx -> [lane_id1, ...]
            for phase_idx, phase in enumerate(signal.get("phases", [])):
                controlled_lanes = []
                for light_cfg in phase.get("lights_on_configs", []):
                    green_mask = light_cfg.get("green_mask", 0)
                    turns = self._mask_to_turns(green_mask)  # decode turns (straight/left/right/u-turn)
                    for turn in turns:
                        # Match lanes by turn type
                        controlled_lanes.extend(self._get_turn_lanes(junction, turn))
                # Filter: keep approach lanes only
                valid_lanes = list(set(controlled_lanes) & set(all_enter_lanes))
                self.phase_lane_mapping[s_id][phase_idx] = valid_lanes
                self.logger.debug(f"Signal {s_id} phase {phase_idx}: controlled lanes {valid_lanes}")

    def _init_lane_configs(self, lane_configs):
        """Parse lane turn types from the protocol's DirectionMask."""
        self.lane_dict = {}
        for lane in lane_configs:
            l_id = lane.get("l_id", -1)
            if l_id == -1:
                self.logger.warning("Lane missing l_id, skipping")
                continue
            # Parse the turn type (protocol: dir_mask)
            dir_mask = lane.get("dir_mask", 0)
            turn_type = 0  # 0=straight, 1=left, 2=right, 3=u-turn
            if dir_mask & 2:
                turn_type = 1
            elif dir_mask & 4:
                turn_type = 2
            elif dir_mask & 8:
                turn_type = 3
            # Store the lane info
            self.lane_dict[l_id] = {
                "l_id": l_id,
                "edge_id": lane.get("edge_id", 0),
                "length": lane.get("length", 100),
                "width": lane.get("width", 3),
                "turn_type": turn_type,
            }

    def _init_vehicle_configs(self, vehicle_configs):
        """Initialize vehicle configs with defaults (avoids KeyError)."""
        self.vehicle_configs = {}
        # Protocol VehicleType defaults (covering common v_config_id values)
        default_configs = {
            1: {"v_type": 1, "v_type_name": "CAR", "max_speed": 60, "length": 5},
            2: {"v_type": 2, "v_type_name": "BUS", "max_speed": 40, "length": 12},
            3: {"v_type": 3, "v_type_name": "TRUCK", "max_speed": 50, "length": 10},
            4: {"v_type": 4, "v_type_name": "MOTORCYCLE", "max_speed": 55, "length": 2},
            5: {"v_type": 5, "v_type_name": "BICYCLE", "max_speed": 15, "length": 1},
        }
        # Load the supplied configs, overriding the defaults
        for cfg in vehicle_configs:
            cfg_id = cfg.get("v_config_id")
            if cfg_id is None:
                self.logger.warning("Vehicle config missing v_config_id, skipping")
                continue
            self.vehicle_configs[cfg_id] = {
                "v_type": cfg.get("v_type", default_configs.get(cfg_id, {}).get("v_type", 0)),
                "v_type_name": cfg.get("v_type_name", default_configs.get(cfg_id, {}).get("v_type_name", "Unknown")),
                "max_speed": cfg.get("max_speed", default_configs.get(cfg_id, {}).get("max_speed", 60)),
                "length": cfg.get("length", default_configs.get(cfg_id, {}).get("length", 5)),
            }
        # Fill in any missing defaults (ensure key IDs exist)
        for cfg_id, cfg in default_configs.items():
            if cfg_id not in self.vehicle_configs:
                self.vehicle_configs[cfg_id] = cfg
                self.logger.debug(f"Added default vehicle config: v_config_id={cfg_id}")

    def _init_protocol_regions(self):
        """Initialize regions from self.junction_dict (one region per junction)."""
        self.region_dict = {}
        self.region_capacity = {}
        self.junction_region_map = {}
        for j_id, junction in self.junction_dict.items():
            region_id = f"region_{j_id}"
            self.region_dict[region_id] = [j_id]
            self.junction_region_map[j_id] = region_id
            # Region capacity = total capacity of the approach lanes
            total_cap = sum(self.get_lane_capacity(lane_id)
                            for lane_id in junction["cached_enter_lanes"])
            self.region_capacity[region_id] = total_cap if total_cap > 0 else 20

    def _init_scene_config(self):
        """Initialize scenario parameters (weather, peak hour, accidents/controls)."""
        self.weather = self.usr_conf.get("weather", 0)
        self.peak_hour = self.usr_conf.get("rush_hour", 0) == 1
        # Accident configs
        self.accident_configs = [
            cfg for cfg in self.usr_conf.get("traffic_accidents", {}).get("custom_configuration", [])
            if all(k in cfg for k in ["lane_index", "start_time", "end_time"])
        ]
        # Traffic-control configs
        self.control_configs = [
            cfg for cfg in self.usr_conf.get("traffic_control", {}).get("custom_configuration", [])
            if all(k in cfg for k in ["lane_index", "start_time", "end_time"])
        ]

    def _mask_to_turns(self, green_mask):
        """Decode green_mask into turn types."""
        turns = []
        if green_mask & 1:
            turns.append("straight")
        if green_mask & 2:
            turns.append("left")
        if green_mask & 4:
            turns.append("right")
        if green_mask & 8:
            turns.append("uturn")
        return turns

    def _get_turn_lanes(self, junction, turn):
        """Get lanes by turn type (depends on the lanes' turn_type)."""
        turn_map = {"straight": 0, "left": 1, "right": 2, "uturn": 3}
        target_turn = turn_map.get(turn)
        if target_turn is None:
            return []
        # Match turn types among the approach lanes
        return [
            lane_id for lane_id in junction["cached_enter_lanes"]
            if self.lane_dict.get(lane_id, {}).get("turn_type") == target_turn
        ]

    # ---- Dynamic data updates ----
    def update_traffic_info(self, obs, extra_info):
        if "framestate" not in obs:
            self.logger.error("Observation missing framestate, skipping update")
            return
        frame_state = obs["framestate"]
        frame_no = frame_state.get("frame_no", 0)
        frame_time = frame_state.get("frame_time", 0) / 1000.0  # to seconds
        # 1. Update phase info (s_id -> phase state)
        self.current_phases.clear()
        for phase_info in frame_state.get("phases", []):
            s_id = phase_info.get("s_id", -1)
            if s_id == -1:
                continue
            self.current_phases[s_id] = {
                "remaining_duration": phase_info.get("remaining_duration", 0),
                "phase_id": phase_info.get("phase_id", 0),
            }
        # 2. Update lane volumes (protocol v_count)
        self.lane_volume.clear()
        for lane in frame_state.get("lanes", []):
            l_id = lane.get("lane_id", -1)
            if l_id == -1:
                continue
            self.lane_volume[l_id] = lane.get("v_count", 0)
        # 3. Update vehicle info
        vehicles = frame_state.get("vehicles", [])
        current_v_ids = set()
        self.current_vehicles.clear()
        self.vehicle_status.clear()
        for vehicle in vehicles:
            v_id = vehicle.get("v_id")
            if not v_id:
                continue
            current_v_ids.add(v_id)
            self.current_vehicles[v_id] = vehicle
            self.vehicle_status[v_id] = vehicle.get("v_status", 0)  # 0=normal, 1=accident, 2=rule-breaking
        # 4. Clean up stale vehicle data
        self._clean_expired_vehicle_data(current_v_ids)
        # 5. Update vehicle positions, waiting times, travel distances
        for vehicle in vehicles:
            v_id = vehicle.get("v_id")
            if not v_id:
                continue
            if "position_in_lane" in vehicle:
                self._update_vehicle_position(vehicle, frame_time)
            self.cal_waiting_time(frame_time, vehicle)
            self.cal_travel_distance(vehicle)
        # 6. Update accident/control lanes
        self._update_active_invalid_lanes(frame_no)
        # 7. Compute lane demand
        self.calculate_lane_demand()

    def _update_vehicle_position(self, vehicle, frame_time):
        v_id = vehicle["v_id"]
        current_pos = vehicle["position_in_lane"]
        if v_id in self.vehicle_prev_position:
            # Accumulate travel distance
            prev_pos = self.vehicle_prev_position[v_id]
            dx = current_pos["x"] - prev_pos["x"]
            dy = current_pos["y"] - prev_pos["y"]
            self.vehicle_distance_store[v_id] = self.vehicle_distance_store.get(v_id, 0.0) + math.hypot(dx, dy)
        self.vehicle_prev_position[v_id] = current_pos

    def _clean_expired_vehicle_data(self, current_v_ids):
        """Drop data for vehicles absent from the current frame."""
        expired_v_ids = set(self.vehicle_prev_position.keys()) - current_v_ids
        for v_id in expired_v_ids:
            self.vehicle_prev_position.pop(v_id, None)
            self.vehicle_distance_store.pop(v_id, None)
            self.waiting_time_store.pop(v_id, None)
            self.last_waiting_moment.pop(v_id, None)

    def _update_active_invalid_lanes(self, frame_no):
        """Update the currently active accident/control lanes."""
        self.accident_lanes = set()
        for cfg in self.accident_configs:
            if cfg["start_time"] <= frame_no <= cfg["end_time"]:
                self.accident_lanes.add(cfg["lane_index"])
        self.control_lanes = set()
        for cfg in self.control_configs:
            if cfg["start_time"] <= frame_no <= cfg["end_time"]:
                self.control_lanes.add(cfg["lane_index"])

    # ---- Metric computation ----
    def calculate_lane_demand(self):
        """Lane demand = vehicle count + vehicles approaching the stop line."""
        self.lane_demand = {}
        for lane_id, count in self.lane_volume.items():
            demand = count
            # Add vehicles near the stop line (y < 100)
            for vehicle in self.current_vehicles.values():
                if vehicle.get("lane") == lane_id:
                    y_pos = vehicle.get("position_in_lane", {}).get("y", 200)
                    if y_pos < 100:
                        demand += 0.5
            self.lane_demand[lane_id] = demand

    def calculate_junction_queue(self, j_id):
        """Queue length at a junction (approach-lane vehicles with speed <= 1 m/s)."""
        junction = self.junction_dict.get(j_id)
        if not junction:
            return 0
        valid_lanes = set(junction["cached_enter_lanes"]) - self.get_invalid_lanes()
        queue = 0
        for vehicle in self.current_vehicles.values():
            if (vehicle.get("lane") in valid_lanes
                    and self.vehicle_status.get(vehicle["v_id"], 0) == 0
                    and vehicle.get("speed", 0) <= 1.0):
                queue += 1
        return queue

    def cal_waiting_time(self, frame_time, vehicle):
        """Vehicle waiting time (on an approach lane, speed <= 0.1 m/s, near the stop line)."""
        v_id = vehicle["v_id"]
        if not self.on_enter_lane(vehicle):
            self.waiting_time_store.pop(v_id, None)
            self.last_waiting_moment.pop(v_id, None)
            return
        # Waiting condition: low speed + near the stop line
        if vehicle.get("speed", 0) <= 0.1 and vehicle["position_in_lane"]["y"] < 50:
            if v_id not in self.last_waiting_moment:
                self.last_waiting_moment[v_id] = frame_time
            else:
                duration = frame_time - self.last_waiting_moment[v_id]
                self.waiting_time_store[v_id] = self.waiting_time_store.get(v_id, 0.0) + duration
                self.last_waiting_moment[v_id] = frame_time
        else:
            self.last_waiting_moment.pop(v_id, None)

    def cal_travel_distance(self, vehicle):
        """Per-vehicle travel distance."""
        v_id = vehicle["v_id"]
        if not self.on_enter_lane(vehicle):
            self.vehicle_prev_position.pop(v_id, None)
            self.vehicle_distance_store.pop(v_id, None)
            return
        # Initialize the previous position
        if v_id not in self.vehicle_prev_position:
            current_pos = vehicle.get("position_in_lane", {})
            if "x" in current_pos and "y" in current_pos:
                self.vehicle_prev_position[v_id] = {
                    "x": current_pos["x"],
                    "y": current_pos["y"],
                    "distance_to_stop": current_pos["y"],
                }
                self.vehicle_distance_store[v_id] = 0.0
            return
        # Compute the distance
        prev_pos = self.vehicle_prev_position[v_id]
        current_pos = vehicle["position_in_lane"]
        try:
            dx = current_pos["x"] - prev_pos["x"]
            dy = current_pos["y"] - prev_pos["y"]
            euclid_dist = math.hypot(dx, dy)
            stop_dist_reduce = prev_pos["distance_to_stop"] - current_pos["y"]
            self.vehicle_distance_store[v_id] += max(euclid_dist, stop_dist_reduce)
            self.vehicle_prev_position[v_id]["distance_to_stop"] = current_pos["y"]
        except Exception as e:
            self.logger.error(f"Computing distance for vehicle {v_id} failed: {str(e)}")

    def on_enter_lane(self, vehicle):
        """Whether the vehicle is on an approach lane (uses cached_enter_lanes)."""
        lane_id = vehicle.get("lane")
        if lane_id is None:
            return False
        # Check all junctions' approach lanes
        for junction in self.junction_dict.values():
            if lane_id in junction["cached_enter_lanes"]:
                return True
        return False

    # ---- External interfaces ----
    def get_sorted_junction_ids(self):
        """Sorted junction IDs (used by the Agent)."""
        return sorted(self.junction_dict.keys())

    def get_junction_by_signal(self, s_id):
        """Get the junction for a signal ID (used by the Agent to parse phases)."""
        for j_id, junction in self.junction_dict.items():
            if junction.get("signal") == s_id:
                return junction
        return None

    def get_invalid_lanes(self):
        """Currently invalid lanes (accidents + controls)."""
        return self.accident_lanes.union(self.control_lanes)

    def get_lane_capacity(self, lane_id):
        """Lane capacity by turn type."""
        turn_type = self.lane_dict.get(lane_id, {}).get("turn_type", 0)
        return 15 if turn_type in [0, 1] else 10  # straight/left: 15 vehicles, right/u-turn: 10

    def get_weather(self):
        return self.weather

    def is_peak_hour(self):
        return self.peak_hour

    def get_vehicle_max_speed(self, v_config_id):
        """Vehicle max speed (fault-tolerant)."""
        return self.vehicle_configs.get(v_config_id, {}).get("max_speed", 60)
```

Check where this code is inconsistent and whether anything is unreasonable, then fix it.
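One pattern in `Algorithm.learn` above worth understanding before auditing the code is the legal-action masking: adding `(1 - mask) * (-1e10)` to the Q values drives every illegal action far below any legal one, so the subsequent `argmax` can never pick it. A minimal standalone NumPy sketch of the same idea (the Q values and masks are made up for illustration):

```python
import numpy as np

# Q values for one head: batch of 2 states, 4 actions each (made-up numbers)
q = np.array([[0.5, 2.0, -1.0, 0.3],
              [1.2, 0.1, 2.9, 3.0]])
# Legal-action masks: 1 = legal, 0 = illegal (e.g. the current phase is forbidden)
mask = np.array([[1, 0, 1, 1],
                 [1, 1, 1, 0]])
# Push illegal actions far below any legal Q value before argmax
masked_q = q + (1 - mask) * (-1e10)
best = masked_q.argmax(axis=1)
print(best)  # [0 2] -- the unmasked argmax would have been [1 3], both illegal
```

The same trick is applied per head in the DDQN target computation, which is why the masks must be stacked per head rather than as one tensor: the four heads have different action dimensions (4, 30, 3, 30).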


```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import (roc_auc_score, f1_score, accuracy_score, roc_curve,
                             confusion_matrix, auc)  # added auc import
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              ExtraTreesClassifier, GradientBoostingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.utils import resample
from sklearn.calibration import calibration_curve
import xgboost as xgb
import lightgbm as lgb
import joblib
import os
import time
import matplotlib.pyplot as plt  # plotting

# Track total runtime
start_time = time.time()

# Create the results directory
results_dir = "results_step1"
os.makedirs(results_dir, exist_ok=True)

# Check the data path
data_path = "D:/桌面/mimic_knn.csv"
if not os.path.exists(data_path):
    raise FileNotFoundError(f"{data_path} does not exist")
your_data = pd.read_csv(data_path)

# Restrict the dataset to aki = 0
# your_data = your_data[your_data['aki_stage_smoothed'] == 0]
# Restrict the dataset to aki > 0
your_data = your_data[your_data['aki_stage'] > 0]
# your_data = your_data[(your_data['aki_stage'] > 0) & (your_data['crrt'] == 1)]

features = ['creatinine_max', 'lactate_max', 'aki_stage', 'sofa',
            'spo2_min', 'urineoutput', 'rdw_max']
# target = "died_in_icu"
target = "crrt"
X = your_data[features]
y = your_data[target]

# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Fix: correct standardization workflow (fit on train only, avoiding data leakage)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Count CRRT cases in the training and test sets
train_crrt_count = np.sum(y_train == 1)
test_crrt_count = np.sum(y_test == 1)

# Print shapes and CRRT statistics
print(f"Training data shape: X_train: {X_train.shape}, y_train: {y_train.shape}")
print(f"Number of patients with crrt in training set: {train_crrt_count}")
print(f"Testing data shape: X_test: {X_test.shape}, y_test: {y_test.shape}")
print(f"Number of patients with crrt in testing set: {test_crrt_count}")

# Define the models
models = {
    "AdaBoost": AdaBoostClassifier(algorithm='SAMME', random_state=42),
    "ANN": MLPClassifier(random_state=42, max_iter=300),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    # "Extra Trees": ExtraTreesClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "LightGBM": lgb.LGBMClassifier(random_state=42, verbosity=-1),
    "Logistic Regression": LogisticRegression(random_state=42, max_iter=10000),
    "Random Forest": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "XGBoost": xgb.XGBClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "LDA": LDA(),
}

# Train the models and record their performance
model_performance = {}
roc_data = []  # ROC curve data and AUC per model
plt.figure(figsize=(12, 8))
for name, model in models.items():
    try:
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        y_proba = model.predict_proba(X_test)[:, 1]
        # Compute the metrics
        roc_auc = roc_auc_score(y_test, y_proba)
        f1 = f1_score(y_test, y_pred)
        accuracy = accuracy_score(y_test, y_pred)
        # Confusion matrix -> TP, FP, TN, FN
        tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
        sensitivity = tp / (tp + fn)  # sensitivity (recall)
        specificity = tn / (tn + fp)  # specificity
        ppv = tp / (tp + fp)          # positive predictive value
        npv = tn / (tn + fn)          # negative predictive value

        # Bootstrap 95% confidence interval for the AUC
        n_bootstraps = 1000
        rng_seed = 42  # reproducible results
        bootstrapped_scores = []
        for i in range(n_bootstraps):
            indices = resample(np.arange(len(y_test)), replace=True, random_state=rng_seed + i)
            y_test_bootstrap = y_test.iloc[indices]
            y_proba_bootstrap = y_proba[indices]
            auc_bootstrap = roc_auc_score(y_test_bootstrap, y_proba_bootstrap)
            bootstrapped_scores.append(auc_bootstrap)
        lower_ci, upper_ci = np.percentile(bootstrapped_scores, [2.5, 97.5])

        # Store the metrics
        model_performance[name] = {
            "ROC AUC": round(roc_auc, 4),
            "95% CI Lower": round(lower_ci, 4),
            "95% CI Upper": round(upper_ci, 4),
            "F1 Score": round(f1, 4),
            "Accuracy": round(accuracy, 4),
            "Sensitivity": round(sensitivity, 4),
            "Specificity": round(specificity, 4),
            "PPV": round(ppv, 4),
            "NPV": round(npv, 4),
        }
        joblib.dump(model, os.path.join(results_dir, f'{name.replace(" ", "_")}_model.pkl'))
        print(f"{name} Model ROC AUC: {roc_auc:.4f} (95% CI: {lower_ci:.4f} - {upper_ci:.4f}), "
              f"F1 Score: {f1:.4f}, Accuracy: {accuracy:.4f}, "
              f"Sensitivity: {sensitivity:.4f}, Specificity: {specificity:.4f}, "
              f"PPV: {ppv:.4f}, NPV: {npv:.4f}")
        # Optionally print the first 5 predicted probabilities
        # print(f"\n{name} first 5 predicted probabilities:")
        # print(y_proba[:5])

        # ROC curve data
        fpr, tpr, _ = roc_curve(y_test, y_proba)
        roc_data.append((name, fpr, tpr, roc_auc))
        # Plot the ROC curve with the AUC in the legend
        plt.plot(fpr, tpr, linestyle='-', label=f'{name} (AUC = {roc_auc:.4f})')
    except Exception as e:
        print(f"Error training {name}: {e}")

# Sort models by AUC, descending
roc_data.sort(key=lambda x: x[3], reverse=True)
# Keep only the top models (note: despite the name, this slice keeps 12, not 6)
top_6_roc_data = roc_data[:12]

# Build the ROC figure
plt.figure(figsize=(12, 8))
for name, fpr, tpr, roc_auc in top_6_roc_data:
    # Plot each ROC curve with its AUC in the legend
    plt.plot(fpr, tpr, linestyle='-', label=f'{name} (AUC = {roc_auc:.4f})')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve Comparison of Different Models')
plt.legend(loc='best')
plt.grid(True)
plt.tight_layout()
# Save the ROC figure
plt.savefig(os.path.join(results_dir, 'roc_curve_comparison.png'))
plt.close()

# Save the performance data
performance_df = pd.DataFrame(model_performance).T
performance_df.to_csv(os.path.join(results_dir, 'model_performance.csv'))

# Select the best 5 algorithms
sorted_performance = sorted(model_performance.items(), key=lambda x: x[1]["ROC AUC"], reverse=True)
best_five_models = [model[0] for model in sorted_performance[:5]]
best_five_performance = {model[0]: model[1] for model in sorted_performance[:5]}
# Save the names and performance of the best 5 algorithms
best_five_df = pd.DataFrame(best_five_performance).T
best_five_df.to_csv(os.path.join(results_dir, 'best_five_models_performance.csv'))
joblib.dump(model_performance, os.path.join(results_dir, 'model_performance.pkl'))
np.savez(os.path.join(results_dir, 'preprocessed_data.npz'),
         X_train=X_train, X_test=X_test, y_train=y_train, y_test=y_test, features=features)

# Print the total runtime
end_time = time.time()
print(f"Total Time: {end_time - start_time:.2f} seconds")

# Corrected DCA curve
def decision_curve_analysis(y_true, y_pred_proba, thresholds):
    net_benefits = []
    n = len(y_true)
    for threshold in thresholds:
        y_pred = (y_pred_proba >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        # Corrected net-benefit formula
        net_benefit = (tp / n) - (fp / n) * (threshold / (1 - threshold))
        net_benefits.append(net_benefit)
    return net_benefits

dca_results = {}
thresholds = np.linspace(0.01, 0.99, 99)
plt.figure(figsize=(12, 8))
# Baselines
prevalence = np.mean(y_test)
# Fix: the treat-all baseline must decline with the threshold; the original
# multiplied both correction terms by 0, leaving a constant line at prevalence
net_benefit_all = [prevalence - (1 - prevalence) * (t / (1 - t)) for t in thresholds]
net_benefit_none = [0] * len(thresholds)
plt.plot(thresholds, net_benefit_all, label='Treat all', linestyle='--', color='black')
plt.plot(thresholds, net_benefit_none, label='Treat none', linestyle='--', color='gray')
# Model DCA curves
for name, model in models.items():
    y_proba = model.predict_proba(X_test)[:, 1]
    net_benefits = decision_curve_analysis(y_test, y_proba, thresholds)
    max_nb = max(net_benefits)
    best_threshold = thresholds[np.argmax(net_benefits)]
    avg_nb = np.mean(net_benefits)
    dca_results[name] = {
        'max_net_benefit': round(max_nb, 4),
        'best_threshold': round(best_threshold, 4),
        'average_net_benefit': round(avg_nb, 4),
    }
    plt.plot(thresholds, net_benefits, label=f'{name} (Max: {max_nb:.4f})')

# Save the DCA results
dca_df = pd.DataFrame(dca_results).T
```
dca_df.to_csv(os.path.join(results_dir, 'dca_performance.csv')) plt.xlabel('Threshold probability') plt.ylabel('Net benefit') plt.title('Decision Curve Analysis (Corrected)') plt.legend(loc='best') plt.grid(True) plt.tight_layout() plt.savefig(os.path.join(results_dir, 'decision_curve_analysis_corrected.png')) plt.close() # 修正的CIC曲线(临床影响曲线) plt.figure(figsize=(12, 8)) for name in best_five_models: # 只展示前5名模型 model = models[name] y_proba = model.predict_proba(X_test)[:, 1] # 按预测概率降序排序 sorted_indices = np.argsort(y_proba)[::-1] sorted_y_test = y_test.iloc[sorted_indices].values # 计算累积TP和FP cum_tp = np.cumsum(sorted_y_test) cum_fp = np.cumsum(1 - sorted_y_test) # 绘制曲线 plt.plot(np.arange(len(y_test)), cum_tp, label=f'{name} - True Positives') plt.plot(np.arange(len(y_test)), cum_fp, label=f'{name} - False Positives', linestyle='--') plt.xlabel('Number of patients classified as positive') plt.ylabel('Cumulative count') plt.title('Clinical Impact Curve (CIC)') plt.legend() plt.grid(True) plt.tight_layout() plt.savefig(os.path.join(results_dir, 'clinical_impact_curve.png')) plt.close() # 校准曲线(移除AUC计算) plt.figure(figsize=(12, 8)) for name, model in models.items(): y_proba = model.predict_proba(X_test)[:, 1] fraction_of_positives, mean_predicted_value = calibration_curve(y_test, y_proba, n_bins=10) plt.plot(mean_predicted_value, fraction_of_positives, 's-', label=name) plt.plot([0, 1], [0, 1], 'k:', label='Perfect calibration') plt.xlabel('Mean predicted value') plt.ylabel('Fraction of positives') plt.title('Calibration Curve') plt.legend(loc='best') plt.grid(True) plt.tight_layout() plt.savefig(os.path.join(results_dir, 'calibration_curve.png')) plt.close() 这个代码中的features = ['creatinine_max','lactate_max','aki_stage','sofa', 'spo2_min','urineoutput','rdw_max'] 更换为features = [#"age", "gender", "height", "weight", "bmi", "sofa","aki_stage", #"nequivalent_max", # "sapsii", "sirs", # "lods", "bun_max","pt_max", "sbp_min", "dbp_min",#特征共线性 "heart_rate_max", "mbp_min", 
"resp_rate_max", "spo2_min","temperature_min", # "ph_min", "ph_max", "po2_min", "pco2_max", # "rbc_min", "mcv_min", "rdw_max", "mchc_min", "wbc_max", "platelets_min", 'ph_min', 'ph_max', 'lactate_max', 'po2_min', 'pco2_max', 'baseexcess_min', 'pao2fio2ratio_min', 'rbc_min', 'mcv_min', 'rdw_max', 'mchc_min', 'wbc_max', 'platelets_min', 'hemoglobin_min',#'hematocrit_min', "creatinine_max", "potassium_max", "sodium_max", # "inr_max", "ptt_max", "urineoutput"]时为什么对模型的性能没有影响?


```python
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
import json
import re
import hashlib
import time
import torch
import numpy as np
import pdfplumber
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import spacy
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# initialize models and tools
nlp = spacy.load("zh_core_web_sm")
embedding_model = SentenceTransformer('all-MiniLM-L6-v2')


class PDFProcessor:
    def __init__(self, pdf_path, output_dir="output"):
        self.pdf_path = pdf_path
        self.output_dir = output_dir
        self.document_id = os.path.splitext(os.path.basename(pdf_path))[0]
        os.makedirs(output_dir, exist_ok=True)

        # initialize the generation model
        model_name = "uer/gpt2-distil-chinese-cluecorpussmall"
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.summarizer = pipeline(
            "text-generation",
            model=self.model,
            tokenizer=self.tokenizer,
            device=0 if torch.cuda.is_available() else -1
        )

        # document structure storage
        self.full_text = ""
        self.structure = []
        self.heading_tree = self.HeadingTree()
        self.font_stats = defaultdict(int)

    class HeadingTree:
        """Tree structure for managing heading hierarchy."""
        def __init__(self):
            self.root = {"level": 0, "children": [], "text": "ROOT"}
            self.current = self.root
            self.level_path = "0"

        def add_heading(self, level, text, page):
            while self.current["level"] >= level:
                self.current = self.current["parent"]
            new_node = {
                "level": level,
                "text": text,
                "page": page,
                "parent": self.current,
                "children": [],
                "local_index": len(self.current["children"]) + 1
            }
            self.current["children"].append(new_node)
            self.current = new_node
            if self.current["parent"] == self.root:
                self.level_path = str(new_node["local_index"])
            else:
                self.level_path = f"{self.current['parent']['path']}.{new_node['local_index']}"
            new_node["path"] = self.level_path
            return self.level_path

    def parse_pdf(self):
        """Parse the PDF document."""
        with pdfplumber.open(self.pdf_path) as pdf:
            for page_num, page in enumerate(pdf.pages, 1):
                # extract text elements
                words = page.extract_words(extra_attrs=["fontname", "size"])
                self._analyze_font_features(words)
                # extract structured text
                text_blocks = page.extract_text_lines()
                for block in text_blocks:
                    self._process_text_block(block, page_num)
                # process tables
                tables = page.extract_tables()
                for table in tables:
                    self._process_table(table, page_num)
                # accumulate raw text
                self.full_text += page.extract_text() + "\n"
        # save raw text
        with open(os.path.join(self.output_dir, f"{self.document_id}_full.txt"), "w", encoding="utf-8") as f:
            f.write(self.full_text)

    def _analyze_font_features(self, words):
        """Collect font statistics to build a heading-detection model."""
        for word in words:
            font_key = (word["fontname"], round(word["size"], 1))
            self.font_stats[font_key] += 1

    def _process_text_block(self, block, page_num):
        """Process a text block and detect headings."""
        font_key = (block["fontname"], round(block["size"], 1))
        font_freq = self.font_stats[font_key]
        # heading detection heuristic
        is_heading = (
            block["size"] > 12 and
            font_freq < 100 and
            len(block["text"].strip()) < 50
        )
        if is_heading:
            # infer heading level automatically
            heading_level = min(int(block["size"] // 2), 6)
            self.structure.append({
                "type": "heading",
                "level": heading_level,
                "text": block["text"].strip(),
                "page": page_num,
                "start_pos": len(self.full_text)
            })
            self.heading_tree.add_heading(heading_level, block["text"].strip(), page_num)
            self.full_text += block["text"].strip() + "\n"
        else:
            self.structure.append({
                "type": "paragraph",
                "text": block["text"].strip(),
                "page": page_num,
                "start_pos": len(self.full_text)
            })
            self.full_text += block["text"].strip() + "\n"

    def _process_table(self, table, page_num):
        """Convert a table to Markdown format and record it."""
        markdown_table = []
        for row in table:
            markdown_row = "| " + " | ".join(str(cell).strip() for cell in row) + " |"
            markdown_table.append(markdown_row)
            if len(markdown_table) == 1:
                # add the header separator line
                markdown_table.append("| " + " | ".join(["---"] * len(row)) + " |")
        table_text = "\n".join(markdown_table)
        self.structure.append({
            "type": "table",
            "text": table_text,
            "page": page_num,
            "start_pos": len(self.full_text)
        })
        self.full_text += table_text + "\n"

    def dynamic_chunking(self, max_chunk_length=1500, min_chunk_length=200):
        """Dynamic semantic chunking algorithm."""
        chunks = []
        current_chunk = ""
        current_start = 0
        chunk_id = 0
        # initial chunking based on document structure
        for i, item in enumerate(self.structure):
            # a heading starts a new chunk
            if item["type"] == "heading" and current_chunk:
                chunks.append({
                    "start": current_start,
                    "end": len(self.full_text[:current_start]) + len(current_chunk),
                    "text": current_chunk,
                    "page": item["page"],
                    "id": f"{self.document_id}_chunk{chunk_id:03d}"
                })
                chunk_id += 1
                current_chunk = ""
                current_start = item["start_pos"]
            current_chunk += item["text"] + "\n"
            # length guard: avoid over-long chunks
            if len(current_chunk) > max_chunk_length:
                chunks.append({
                    "start": current_start,
                    "end": current_start + len(current_chunk),
                    "text": current_chunk,
                    "page": item["page"],
                    "id": f"{self.document_id}_chunk{chunk_id:03d}"
                })
                chunk_id += 1
                current_chunk = ""
                current_start = self.structure[i+1]["start_pos"] if i+1 < len(self.structure) else current_start + len(current_chunk)
        # append the final chunk
        if current_chunk:
            chunks.append({
                "start": current_start,
                "end": current_start + len(current_chunk),
                "text": current_chunk,
                "page": self.structure[-1]["page"],
                "id": f"{self.document_id}_chunk{chunk_id:03d}"
            })
        # refine chunk boundaries semantically
        refined_chunks = []
        for chunk in chunks:
            sentences = [sent.text for sent in nlp(chunk["text"]).sents]
            if len(sentences) < 2:  # nothing to split
                refined_chunks.append(chunk)
                continue
            # compute sentence embeddings
            sentence_embeddings = embedding_model.encode(sentences)
            # find the best split points
            split_points = []
            for i in range(1, len(sentences)):
                sim = cosine_similarity(
                    [sentence_embeddings[i-1]],
                    [sentence_embeddings[i]]
                )[0][0]
                if sim < 0.65:  # semantic similarity threshold
                    split_points.append(i)
            # keep the original chunk if there are no split points
            if not split_points:
                refined_chunks.append(chunk)
                continue
            # create new chunks
            start_idx = 0
            for split_idx in split_points:
                new_chunk_text = " ".join(sentences[start_idx:split_idx])
                if len(new_chunk_text) > min_chunk_length:  # minimum length guard
                    refined_chunks.append({
                        "start": chunk["start"] + sum(len(s) for s in sentences[:start_idx]),
                        "end": chunk["start"] + sum(len(s) for s in sentences[:split_idx]),
                        "text": new_chunk_text,
                        "page": chunk["page"],
                        "id": f"{self.document_id}_chunk{chunk_id:03d}"
                    })
                    chunk_id += 1
                start_idx = split_idx
            # append the trailing segment
            if start_idx < len(sentences):
                new_chunk_text = " ".join(sentences[start_idx:])
                if len(new_chunk_text) > min_chunk_length:
                    refined_chunks.append({
                        "start": chunk["start"] + sum(len(s) for s in sentences[:start_idx]),
                        "end": chunk["start"] + len(chunk["text"]),
                        "text": new_chunk_text,
                        "page": chunk["page"],
                        "id": f"{self.document_id}_chunk{chunk_id:03d}"
                    })
                    chunk_id += 1
        return refined_chunks

    def extract_metadata(self, chunk):
        """Extract metadata for a chunk."""
        metadata = {
            "hierarchy": "0.0",
            "keywords": [],
            "entities": [],
            "has_table": False,
            "has_formula": False
        }
        # 1. hierarchy information
        for item in reversed(self.structure):
            if item["start_pos"] <= chunk["start"] and item["type"] == "heading":
                metadata["hierarchy"] = self._find_heading_path(item)
                break
        # 2. keywords (TF-IDF)
        vectorizer = TfidfVectorizer(stop_words="english", max_features=10)
        try:
            tfidf_matrix = vectorizer.fit_transform([chunk["text"]])
            feature_names = vectorizer.get_feature_names_out()
            tfidf_scores = tfidf_matrix.toarray()[0]
            top_indices = np.argsort(tfidf_scores)[-5:]  # take the top 5 keywords
            metadata["keywords"] = [feature_names[i] for i in top_indices if tfidf_scores[i] > 0.1]
        except:
            metadata["keywords"] = []
        # 3. named entity recognition
        doc = nlp(chunk["text"])
        for ent in doc.ents:
            if ent.label_ in ["PERSON", "ORG", "GPE", "PRODUCT", "DATE"]:  # filter entity types
                metadata["entities"].append({
                    "text": ent.text,
                    "type": ent.label_,
                    "start_pos": ent.start_char,
                    "end_pos": ent.end_char
                })
        # 4. detect tables and formulas
        metadata["has_table"] = bool(re.search(r'\+[-]+\+', chunk["text"]))  # naive table detection
        metadata["has_formula"] = bool(re.search(r'\$(.*?)\$|\\[a-zA-Z]+{', chunk["text"]))  # LaTeX or math formula
        return metadata

    def _find_heading_path(self, heading_item):
        """Find the full hierarchy path for a heading item."""
        for node in self.heading_tree.root["children"]:
            path = self._find_node_path(node, heading_item["text"])
            if path:
                return path
        return "0.0"

    def _find_node_path(self, node, text):
        """Recursively find a heading node's path."""
        if node["text"] == text:
            return node["path"]
        for child in node["children"]:
            path = self._find_node_path(child, text)
            if path:
                return path
        return None

    def generate_summary(self, text):
        """Generate a lightweight summary."""
        prompt = f"请为以下文本生成一句简洁的摘要(20-30字),严格基于内容不要添加新信息:\n{text[:2000]}"
        try:
            summary = self.summarizer(
                prompt,
                max_new_tokens=50,
                temperature=0.3,
                do_sample=True,
                pad_token_id=self.tokenizer.pad_token_id,  # use the pad_token_id set in __init__
                eos_token_id=self.tokenizer.eos_token_id  # set the eos_token explicitly
            )[0]['generated_text']
            # strip the prompt, keep the generated summary
            return summary.replace(prompt, "").strip()
        except Exception as e:
            print(f"摘要生成失败: {str(e)}")
            # on failure, fall back to the first three sentences
            sents = [sent.text for sent in nlp(text).sents][:3]
            return " ".join(sents)

    def process_to_json(self, chunks):
        """Assemble the final JSON output."""
        results = []
        summary_cache = {}
        for chunk in chunks:
            # generate summary (cache identical text)
            text_hash = hashlib.md5(chunk["text"].encode()).hexdigest()
            if text_hash in summary_cache:
                summary = summary_cache[text_hash]
            else:
                summary = self.generate_summary(chunk["text"])
                summary_cache[text_hash] = summary
            # extract metadata
            metadata = self.extract_metadata(chunk)
            # build the final JSON object
            result = {
                "chunk_id": chunk["id"],
                "text": chunk["text"],
                "summary": summary,
                "metadata": metadata
            }
            results.append(result)
        return results

    def process_document(self):
        """Full document-processing pipeline."""
        print(f"开始处理文档: {self.docx_path}")
        total_start = time.time()
        try:
            # time each stage
            parse_start = time.time()
            self.parse_docx()
            parse_time = time.time() - parse_start
            chunk_start = time.time()
            chunks = self.dynamic_chunking()
            chunk_time = time.time() - chunk_start
            json_start = time.time()
            json_data = self.process_to_json(chunks)
            json_time = time.time() - json_start
            # save results
            output_path = os.path.join(self.output_dir, f"{self.document_id}_chunks.json")
            with open(output_path, "w", encoding="utf-8") as f:
                json.dump(json_data, f, ensure_ascii=False, indent=2)
            total_time = time.time() - total_start
            print(f"\n处理完成! 结果已保存至: {output_path}")
            print("="*40)
            print(f"总耗时: {total_time:.2f}秒")
            print(f"文档解析: {parse_time:.2f}秒")
            print(f"语义分块: {chunk_time:.2f}秒")
            print(f"元数据处理: {json_time:.2f}秒")
            print("="*40)
            return json_data
        except Exception as e:
            print(f"处理过程中发生错误: {str(e)}")
            return None


if __name__ == "__main__":
    processor = PDFProcessor(
        pdf_path="test1.pdf",
        output_dir="processed_pdfs"
    )
    processor.process_document()
```

Running the code above fails with the error below; how should it be fixed?

```
(venv) C:\Users\Semi-YuLJ\Desktop\learning>python C:\Users\Semi-YuLJ\Desktop\learning\chunk_pdf.py
Traceback (most recent call last):
  File "C:\Users\Semi-YuLJ\Desktop\learning\chunk_pdf.py", line 419, in <module>
    processor.process_document()
  File "C:\Users\Semi-YuLJ\Desktop\learning\chunk_pdf.py", line 376, in process_document
    print(f"开始处理文档: {self.docx_path}")
AttributeError: 'PDFProcessor' object has no attribute 'docx_path'
```
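The traceback points at a copy-paste slip rather than anything deep: `process_document` refers to `self.docx_path` and `self.parse_docx()`, but the class only ever defines `self.pdf_path` (in `__init__`) and `parse_pdf()`. Renaming those two references fixes the `AttributeError`. A stripped-down sketch of the same bug and its fix (the real class has many more members; this toy version keeps only what the traceback touches):

```python
class PDFProcessor:
    def __init__(self, pdf_path, output_dir="output"):
        self.pdf_path = pdf_path      # the attribute is pdf_path, not docx_path
        self.output_dir = output_dir

    def parse_pdf(self):
        # stand-in for the real pdfplumber-based parser
        return f"parsed {self.pdf_path}"

    def process_document(self):
        # Before the fix this read self.docx_path / self.parse_docx(),
        # neither of which exists, hence the AttributeError.
        print(f"processing: {self.pdf_path}")
        return self.parse_pdf()


result = PDFProcessor("test1.pdf").process_document()
print(result)
```

Applied to the original script, that means changing line 376 to `print(f"开始处理文档: {self.pdf_path}")` and the call a few lines below to `self.parse_pdf()`.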


```python
import torch.optim as optim

from run_utils.callbacks.base import (
    AccumulateRawOutput,
    PeriodicSaver,
    ProcessAccumulatedRawOutput,
    ScalarMovingAverage,
    ScheduleLr,
    TrackLr,
    VisualizeOutput,
    TriggerEngine,
)
from run_utils.callbacks.logging import LoggingEpochOutput, LoggingGradient
from run_utils.engine import Events

from .targets import gen_targets, prep_sample
from .net_desc import create_model
from .run_desc import proc_valid_step_output, train_step, valid_step, viz_step_output

# TODO: training config only ?
# TODO: switch all to function name String for all option


def get_config(nr_type, mode):
    return {
        # ------------------------------------------------------------------
        # ! All phases have the same number of run engine
        # phases are run sequentially from index 0 to N
        "phase_list": [
            {
                "run_info": {
                    # may need more dynamic for each network
                    "net": {
                        "desc": lambda: create_model(
                            input_ch=3, nr_types=nr_type, freeze=True, mode=mode
                        ),
                        "optimizer": [
                            optim.Adam,
                            {  # should match keyword for parameters within the optimizer
                                "lr": 1.0e-4,  # initial learning rate,
                                "betas": (0.9, 0.999),
                            },
                        ],
                        # learning rate scheduler
                        "lr_scheduler": lambda x: optim.lr_scheduler.StepLR(x, 25),
                        "extra_info": {
                            "loss": {
                                "np": {"bce": 1, "dice": 1},
                                "hv": {"mse": 1, "msge": 1},
                                "tp": {"bce": 1, "dice": 1},
                            },
                        },
                        # path to load, -1 to auto load checkpoint from previous phase,
                        # None to start from scratch
                        "pretrained": "../pretrained/ImageNet-ResNet50-Preact_pytorch.tar",
                        # 'pretrained': None,
                    },
                },
                "target_info": {"gen": (gen_targets, {}), "viz": (prep_sample, {})},
                "batch_size": {"train": 16, "valid": 16},  # engine name : value
                "nr_epochs": 50,
            },
            {
                "run_info": {
                    # may need more dynamic for each network
                    "net": {
                        "desc": lambda: create_model(
                            input_ch=3, nr_types=nr_type, freeze=False, mode=mode
                        ),
                        "optimizer": [
                            optim.Adam,
                            {  # should match keyword for parameters within the optimizer
                                "lr": 1.0e-4,  # initial learning rate,
                                "betas": (0.9, 0.999),
                            },
                        ],
                        # learning rate scheduler
                        "lr_scheduler": lambda x: optim.lr_scheduler.StepLR(x, 25),
                        "extra_info": {
                            "loss": {
                                "np": {"bce": 1, "dice": 1},
                                "hv": {"mse": 1, "msge": 1},
                                "tp": {"bce": 1, "dice": 1},
                            },
                        },
                        # path to load, -1 to auto load checkpoint from previous phase,
                        # None to start from scratch
                        "pretrained": -1,
                    },
                },
                "target_info": {"gen": (gen_targets, {}), "viz": (prep_sample, {})},
                "batch_size": {"train": 4, "valid": 8},  # batch size per gpu
                "nr_epochs": 50,
            },
        ],
        # ------------------------------------------------------------------
        # TODO: dynamically for dataset plugin selection and processing also?
        # all enclosed engine shares the same neural networks
        # as the on at the outer calling it
        "run_engine": {
            "train": {
                # TODO: align here, file path or what? what about CV?
                "dataset": "",  # whats about compound dataset ?
                "nr_procs": 16,  # number of threads for dataloader
                "run_step": train_step,  # TODO: function name or function variable ?
                "reset_per_run": False,
                # callbacks are run according to the list order of the event
                "callbacks": {
                    Events.STEP_COMPLETED: [
                        # LoggingGradient(),  # TODO: very slow, may be due to back forth of tensor/numpy ?
                        ScalarMovingAverage(),
                    ],
                    Events.EPOCH_COMPLETED: [
                        TrackLr(),
                        PeriodicSaver(),
                        VisualizeOutput(viz_step_output),
                        LoggingEpochOutput(),
                        TriggerEngine("valid"),
                        ScheduleLr(),
                    ],
                },
            },
            "valid": {
                "dataset": "",  # whats about compound dataset ?
                "nr_procs": 8,  # number of threads for dataloader
                "run_step": valid_step,
                "reset_per_run": True,  # * to stop aggregating output etc. from last run
                # callbacks are run according to the list order of the event
                "callbacks": {
                    Events.STEP_COMPLETED: [AccumulateRawOutput()],
                    Events.EPOCH_COMPLETED: [
                        # TODO: is there way to preload these ?
                        ProcessAccumulatedRawOutput(
                            lambda a: proc_valid_step_output(a, nr_types=nr_type)
                        ),
                        LoggingEpochOutput(),
                    ],
                },
            },
        },
    }
```

Explain what this code means, line by line.


AssertionError在comfyui里的解决办法 ## Additional Context (Please add any additional context or steps to reproduce the error here)## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. ``` {"id":"592bffdc-1875-4a09-9fb6-6bd83518adb4","revision":0,"last_node_id":26,"last_link_id":33,"nodes":[{"id":12,"type":"VAEDecode","pos":[1600,450],"size":[200,100],"flags":{},"order":13,"mode":0,"inputs":[{"localized_name":"Latent","name":"samples","type":"LATENT","link":17},{"localized_name":"vae","name":"vae","type":"VAE","link":10}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[18]}],"title":"🎨 VAE解码","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":8,"type":"CLIPTextEncode","pos":[396.6999816894531,692.300048828125],"size":[400,150],"flags":{},"order":7,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":8},{"localized_name":"文本","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","links":[12]}],"title":"❌ 负向提示词","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"CLIPTextEncode"},"widgets_values":["blurry, low quality, distorted"]},{"id":7,"type":"CLIPTextEncode","pos":[397.79986572265625,456.6000061035156],"size":[400,200],"flags":{},"order":6,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":7},{"localized_name":"文本","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","links":[11]}],"title":"✨ 正向提示词","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"CLIPTextEncode"},"widgets_values":["professional modern office, clean minimalist design, soft natural lighting, high quality commercial photography, neutral colors, elegant atmosphere, depth of 
field, bokeh effect"]},{"id":9,"type":"FluxGuidance","pos":[852.199951171875,456.60003662109375],"size":[300,100],"flags":{},"order":9,"mode":0,"inputs":[{"localized_name":"条件","name":"conditioning","type":"CONDITIONING","link":11},{"localized_name":"引导","name":"guidance","type":"FLOAT","widget":{"name":"guidance"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","links":[13]}],"title":"🎯 Flux引导","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"FluxGuidance"},"widgets_values":[3.5]},{"id":6,"type":"VAELoader","pos":[41.13795852661133,579.2371826171875],"size":[300,100],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"vae名称","name":"vae_name","type":"COMBO","widget":{"name":"vae_name"},"link":null}],"outputs":[{"localized_name":"VAE","name":"VAE","type":"VAE","links":[10]}],"title":"🎨 VAE加载器","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"VAELoader"},"widgets_values":["Flux\\ae.sft"]},{"id":1,"type":"LoadImage","pos":[50,50],"size":[320,314],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"图像","name":"image","type":"COMBO","widget":{"name":"image"},"link":null},{"localized_name":"选择文件上传","name":"upload","type":"IMAGEUPLOAD","widget":{"name":"upload"},"link":null}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[1,27]},{"localized_name":"遮罩","name":"MASK","type":"MASK","links":null}],"title":"📷 输入图片","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for 
S&R":"LoadImage"},"widgets_values":["小米.jpg","image"]},{"id":21,"type":"ImageScaleToTotalPixels","pos":[430.2999267578125,-0.8984887599945068],"size":[315,82],"flags":{"collapsed":false},"order":5,"mode":0,"inputs":[{"localized_name":"图像","name":"image","type":"IMAGE","link":27},{"localized_name":"缩放算法","name":"upscale_method","type":"COMBO","widget":{"name":"upscale_method"},"link":null},{"localized_name":"像素数量","name":"megapixels","type":"FLOAT","widget":{"name":"megapixels"},"link":null}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[28]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"ImageScaleToTotalPixels"},"widgets_values":["lanczos",1]},{"id":4,"type":"DualCLIPLoader","pos":[50,400],"size":[300,130],"flags":{},"order":2,"mode":0,"inputs":[{"localized_name":"CLIP名称1","name":"clip_name1","type":"COMBO","widget":{"name":"clip_name1"},"link":null},{"localized_name":"CLIP名称2","name":"clip_name2","type":"COMBO","widget":{"name":"clip_name2"},"link":null},{"localized_name":"类型","name":"type","type":"COMBO","widget":{"name":"type"},"link":null},{"localized_name":"设备","name":"device","shape":7,"type":"COMBO","widget":{"name":"device"},"link":null}],"outputs":[{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":[7,8]}],"title":"🧠 CLIP加载器","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"DualCLIPLoader"},"widgets_values":["t5xxl_fp8_e4m3fn.safetensors","clip_l.safetensors","flux","default"]},{"id":3,"type":"GetImageSize","pos":[282.9517517089844,985.5597534179688],"size":[200,100],"flags":{},"order":10,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":3}],"outputs":[{"localized_name":"width","name":"width","type":"INT","links":[15,29]},{"localized_name":"height","name":"height","type":"INT","links":[16,30]}],"title":"📏 获取尺寸","properties":{"cnr_id":"stability-ComfyUI-nodes","ver":"001154622564b17223ce0191803c5fff7b87146c","Node name for 
S&R":"GetImageSize"},"widgets_values":[]},{"id":11,"type":"KSampler","pos":[1202.199951171875,456.60003662109375],"size":[350,262],"flags":{},"order":12,"mode":0,"inputs":[{"localized_name":"模型","name":"model","type":"MODEL","link":33},{"localized_name":"正面条件","name":"positive","type":"CONDITIONING","link":13},{"localized_name":"负面条件","name":"negative","type":"CONDITIONING","link":12},{"localized_name":"Latent图像","name":"latent_image","type":"LATENT","link":31},{"localized_name":"种子","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"步数","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"cfg","name":"cfg","type":"FLOAT","widget":{"name":"cfg"},"link":null},{"localized_name":"采样器名称","name":"sampler_name","type":"COMBO","widget":{"name":"sampler_name"},"link":null},{"localized_name":"调度器","name":"scheduler","type":"COMBO","widget":{"name":"scheduler"},"link":null},{"localized_name":"降噪","name":"denoise","type":"FLOAT","widget":{"name":"denoise"},"link":null}],"outputs":[{"localized_name":"Latent","name":"LATENT","type":"LATENT","links":[17]}],"title":"🎲 采样器","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"KSampler"},"widgets_values":[993131313097047,"randomize",10,3.5,"euler","normal",1]},{"id":10,"type":"EmptySD3LatentImage","pos":[1105.3441162109375,946.8762817382812],"size":[300,150],"flags":{},"order":11,"mode":0,"inputs":[{"localized_name":"宽度","name":"width","type":"INT","widget":{"name":"width"},"link":29},{"localized_name":"高度","name":"height","type":"INT","widget":{"name":"height"},"link":30},{"localized_name":"批量大小","name":"batch_size","type":"INT","widget":{"name":"batch_size"},"link":null}],"outputs":[{"localized_name":"Latent","name":"LATENT","type":"LATENT","links":[31]}],"title":"🌌 空白潜在图像","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for 
S&R":"EmptySD3LatentImage"},"widgets_values":[512,512,1]},{"id":13,"type":"ImageScale","pos":[1225.1563720703125,93.6550064086914],"size":[300,150],"flags":{},"order":14,"mode":0,"inputs":[{"localized_name":"图像","name":"image","type":"IMAGE","link":18},{"localized_name":"缩放算法","name":"upscale_method","type":"COMBO","widget":{"name":"upscale_method"},"link":null},{"localized_name":"宽度","name":"width","type":"INT","widget":{"name":"width"},"link":15},{"localized_name":"高度","name":"height","type":"INT","widget":{"name":"height"},"link":16},{"localized_name":"裁剪","name":"crop","type":"COMBO","widget":{"name":"crop"},"link":null}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[19]}],"title":"📐 背景尺寸调整","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"ImageScale"},"widgets_values":["lanczos",1024,1024,"disabled"]},{"id":2,"type":"RMBG","pos":[460.11370849609375,136.773681640625],"size":[300,266],"flags":{},"order":4,"mode":0,"inputs":[{"localized_name":"图像","name":"image","type":"IMAGE","link":1},{"localized_name":"背景颜色","name":"background_color","shape":7,"type":"COLOR","link":null},{"localized_name":"模型","name":"model","type":"COMBO","widget":{"name":"model"},"link":null},{"localized_name":"灵敏度","name":"sensitivity","shape":7,"type":"FLOAT","widget":{"name":"sensitivity"},"link":null},{"localized_name":"处理分辨率","name":"process_res","shape":7,"type":"INT","widget":{"name":"process_res"},"link":null},{"localized_name":"遮罩模糊","name":"mask_blur","shape":7,"type":"INT","widget":{"name":"mask_blur"},"link":null},{"localized_name":"遮罩偏移","name":"mask_offset","shape":7,"type":"INT","widget":{"name":"mask_offset"},"link":null},{"localized_name":"反转输出","name":"invert_output","shape":7,"type":"BOOLEAN","widget":{"name":"invert_output"},"link":null},{"localized_name":"精细前景优化","name":"refine_foreground","shape":7,"type":"BOOLEAN","widget":{"name":"refine_foreground"},"link":null},{"localized_name":"背景类型","name":"background","shape":7,"t
ype":"COMBO","widget":{"name":"background"},"link":null}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[3]},{"localized_name":"遮罩","name":"MASK","type":"MASK","links":[4]},{"localized_name":"遮罩图像","name":"MASK_IMAGE","type":"IMAGE","links":null}],"title":"🎭 背景移除","properties":{"cnr_id":"comfyui-rmbg","ver":"8577848f31ed7dcf19b920451e7e90fad17cbc3b","Node name for S&R":"RMBG"},"widgets_values":["RMBG-2.0",0.5,1024,2,0,"Alpha",false,"Alpha"],"color":"#222e40","bgcolor":"#364254"},{"id":16,"type":"PreviewImage","pos":[815.4578247070312,4.8791584968566895],"size":[300,250],"flags":{},"order":8,"mode":0,"inputs":[{"localized_name":"图像","name":"images","type":"IMAGE","link":3}],"outputs":[],"title":"👁️ 预览分离结果","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":15,"type":"SaveImage","pos":[1914.37451171875,-12.836181640625],"size":[300,350],"flags":{},"order":16,"mode":0,"inputs":[{"localized_name":"图片","name":"images","type":"IMAGE","link":21},{"localized_name":"文件名前缀","name":"filename_prefix","type":"STRING","widget":{"name":"filename_prefix"},"link":null}],"outputs":[],"title":"💾 保存结果","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for 
S&R":"SaveImage"},"widgets_values":["flux_background_replaced"]},{"id":14,"type":"ImageCompositeMasked","pos":[1589.7611083984375,-118.13032531738281],"size":[300,200],"flags":{},"order":15,"mode":0,"inputs":[{"localized_name":"目标图像","name":"destination","type":"IMAGE","link":19},{"localized_name":"来源图像","name":"source","type":"IMAGE","link":28},{"localized_name":"遮罩","name":"mask","shape":7,"type":"MASK","link":4},{"localized_name":"x","name":"x","type":"INT","widget":{"name":"x"},"link":null},{"localized_name":"y","name":"y","type":"INT","widget":{"name":"y"},"link":null},{"localized_name":"缩放来源图像","name":"resize_source","type":"BOOLEAN","widget":{"name":"resize_source"},"link":null}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[21]}],"title":"🔄 图像合成","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"ImageCompositeMasked"},"widgets_values":[0,0,false]},{"id":17,"type":"PreviewImage","pos":[1839.8709716796875,604.6951293945312],"size":[300,250],"flags":{},"order":17,"mode":0,"inputs":[{"localized_name":"图像","name":"images","type":"IMAGE","link":18}],"outputs":[],"title":"👁️ 预览生成背景","properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for 
S&R":"PreviewImage"},"widgets_values":[]},{"id":26,"type":"NunchakuFluxDiTLoader","pos":[40.674591064453125,729.7367553710938],"size":[315,202],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"model_path","name":"model_path","type":"COMBO","widget":{"name":"model_path"},"link":null},{"localized_name":"cache_threshold","name":"cache_threshold","type":"FLOAT","widget":{"name":"cache_threshold"},"link":null},{"localized_name":"attention","name":"attention","type":"COMBO","widget":{"name":"attention"},"link":null},{"localized_name":"cpu_offload","name":"cpu_offload","type":"COMBO","widget":{"name":"cpu_offload"},"link":null},{"localized_name":"device_id","name":"device_id","type":"INT","widget":{"name":"device_id"},"link":null},{"localized_name":"data_type","name":"data_type","type":"COMBO","widget":{"name":"data_type"},"link":null},{"localized_name":"i2f_mode","name":"i2f_mode","shape":7,"type":"COMBO","widget":{"name":"i2f_mode"},"link":null}],"outputs":[{"localized_name":"模型","name":"MODEL","type":"MODEL","links":[33]}],"properties":{"cnr_id":"ComfyUI-nunchaku","ver":"73dc0ad21765045136948309011cffac94d32ad9","Node name for S&R":"NunchakuFluxDiTLoader"},"widgets_values":["svdq-int4-flux.1-depth-dev",0.5000000000000001,"nunchaku-fp16","disable",0,"float16","enabled"]}],"links":[[1,1,0,2,0,"IMAGE"],[3,2,0,16,0,"IMAGE"],[4,2,1,14,2,"MASK"],[7,4,0,7,0,"CLIP"],[8,4,0,8,0,"CLIP"],[10,6,0,12,1,"VAE"],[11,7,0,9,0,"CONDITIONING"],[12,8,0,11,2,"CONDITIONING"],[13,9,0,11,1,"CONDITIONING"],[15,3,0,13,2,"INT"],[16,3,1,13,3,"INT"],[17,11,0,12,0,"LATENT"],[18,12,0,17,0,"IMAGE"],[19,13,0,14,0,"IMAGE"],[21,14,0,15,0,"IMAGE"],[27,1,0,21,0,"IMAGE"],[28,21,0,14,1,"IMAGE"],[29,3,0,10,0,"INT"],[30,3,1,10,1,"INT"],[31,10,0,11,3,"LATENT"],[33,26,0,11,0,"MODEL"]],"groups":[{"id":1,"title":"📷 图像输入与预处理","bounding":[30,10,1100,350],"color":"#3f789e","font_size":24,"flags":{}},{"id":2,"title":"🧠 
模型与提示词","bounding":[30,360,800,500],"color":"#88A96B","font_size":24,"flags":{}},{"id":3,"title":"🎯 Flux生成流程","bounding":[832.199951171875,366.60003662109375,750,400],"color":"#B06634","font_size":24,"flags":{}},{"id":4,"title":"🔄 合成与输出","bounding":[1290.34033203125,-109.76000213623047,750,450],"color":"#A1678D","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.9090909090909098,"offset":[-521.868340703553,79.93805687674202]},"frontendVersion":"1.17.11","VHS_latentpreview":false,"VHS_latentpreviewrate":0,"VHS_MetadataImage":true,"VHS_KeepIntermediate":true},"version":0.4} ```怎么解决这个问题?
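这段提问只贴出了工作流 JSON 和一句"怎么解决这个问题",没有附具体报错信息。此类工作流最常见的故障是节点 `widgets_values` 中引用的本地模型文件缺失(如上文出现的 "RMBG-2.0"、"svdq-int4-flux.1-depth-dev")。下面是一个粗略的排查示意:从导出的工作流 JSON 中收集各节点引用的字符串参数,便于逐一核对本地文件是否存在。其中 `nodes`、`type`、`widgets_values` 等字段名取自上文 JSON,`collect_widget_strings` 为本文假设的辅助函数,并非 ComfyUI 官方 API:

```python
import json

def collect_widget_strings(workflow: dict):
    """遍历 ComfyUI 工作流 JSON 的 nodes 列表,
    收集每个节点 widgets_values 中的字符串参数(常见为模型名/模式名)。"""
    found = []
    for node in workflow.get("nodes", []):
        for value in node.get("widgets_values", []):
            if isinstance(value, str) and value:
                found.append((node.get("type"), value))
    return found

# 用上文 JSON 中出现过的字段构造一个最小示例
demo = {
    "nodes": [
        {"type": "RMBG",
         "widgets_values": ["RMBG-2.0", 0.5, 1024, "Alpha"]},
        {"type": "NunchakuFluxDiTLoader",
         "widgets_values": ["svdq-int4-flux.1-depth-dev", 0.5, "nunchaku-fp16"]},
    ]
}
for node_type, name in collect_widget_strings(json.loads(json.dumps(demo))):
    print(node_type, "->", name)
```

对照输出的模型名检查 ComfyUI 的 models 目录,通常就能定位缺失文件;若模型齐全,再去看控制台的具体报错。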


PowerShell 7 环境已加载 (版本: 7.5.2)
PS C:\Users\Administrator\Desktop> cd E:\AI_System
PS E:\AI_System> python -m venv venv
PS E:\AI_System> source venv/bin/activate  # Linux/Mac
source: The term 'source' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
PS E:\AI_System> venv\Scripts\activate  # Windows
(venv) PS E:\AI_System> pip install -r requirements.txt
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: accelerate==0.27.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 1)) (0.27.2)
(其余 120 个依赖同样显示 Requirement already satisfied,从略)
WARNING: typer 0.16.1 does not provide the extra 'all'
[notice] A new release of pip available: 22.3.1 -> 25.2
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS E:\AI_System> python diagnose_modules.py
============================================================
模块文件诊断报告
============================================================
🔍 检查 CognitiveSystem 模块:
  预期路径: E:\AI_System\agent\cognitive_architecture.py
  ✅ 文件存在
  ✅ 找到类定义: class CognitiveSystem
  ✅ 类继承CognitiveModule
  ✅ 找到__init__方法
  📋 初始化方法: def __init__(self, name: str, model_manager, config: dict = None):
🔍 检查 EnvironmentInterface 模块:
  预期路径: E:\AI_System\agent\environment_interface.py
  ✅ 文件存在
  ✅ 找到类定义: class EnvironmentInterface
  ✅ 类继承CognitiveModule
  ✅ 找到__init__方法
  📋 初始化方法: def __init__(
🔍 检查 AffectiveSystem 模块:
  预期路径: E:\AI_System\agent\affective_system.py
  ✅ 文件存在
  ✅ 找到类定义: class AffectiveSystem
  ✅ 类继承CognitiveModule
  ✅ 找到__init__方法
  📋 初始化方法: def __init__(self, coordinator=None, config=None):
============================================================
建议解决方案:
============================================================
1. 检查每个模块文件中的相对导入语句
2. 确保每个模块类都正确继承CognitiveModule
3. 检查初始化方法的参数是否正确
4. 确保模块内部的导入使用绝对路径或正确处理相对导入
5. 考虑使用try-catch包装模块内部的导入语句
(venv) PS E:\AI_System> python test_core_import.py
E:\Python310\python.exe: can't open file 'E:\\AI_System\\test_core_import.py': [Errno 2] No such file or directory
(venv) PS E:\AI_System> python diagnose_architecture.py
❌ 导入Agent模块失败: No module named 'agent.model_manager'
============================================================
AI系统架构诊断报告
============================================================
ERROR:root:无法导入 CognitiveModule 基类,使用占位实现
1. 模块文件检查:
----------------------------------------
✅ CognitiveSystem: E:\AI_System\agent\cognitive_architecture.py
✅ EnvironmentInterface: E:\AI_System\agent\environment_interface.py
✅ AffectiveSystem: E:\AI_System\agent\affective_system.py
2. Agent目录结构 (E:\AI_System\agent):
----------------------------------------
📄 action_executor.py  📁 affective_modules/  📄 affective_system.py  📄 agent_core.log  📄 agent_core.py
📄 autonomous_agent.py  📄 auto_backup.bat  📄 base_module.py  📄 cognitive_architecture.py  📁 cognitive_system/
📄 communication_system.py  📁 concrete_modules/  📄 conscious_framework.py  📁 conscious_system/  📁 decision_system/
📄 diagnostic_system.py  📄 enhanced_cognitive.py  📄 environment.py  📄 environment_interface.py  📄 env_loader.py
📁 generated_images/  📄 health_monitor.py  📄 health_system.py  📄 knowledge graph.db  📁 knowledge_system/
📄 main.py  📄 maintain_workspace.py  📄 memory_manager.py  📁 memory_system/  📄 meta_cognition.py
📄 minimal_model.py  📁 models/  📄 model_learning.py  📄 notepad  📄 performance_monitor.py
📄 pip  📄 security_manager.py  📄 self_growth.bat  📄 shortcut_resolver.py  📄 system_maintain.bat
📁 tests/  📄 test_my_models.py  📁 text_results/  📄 unified_learning.py  📁 utils/
📄 world_view.py  📄 __init__.py  📁 __pycache__/
3. 建议下一步:
----------------------------------------
📍 所有模块文件都存在,需要检查模块实现内容
诊断完成
(venv) PS E:\AI_System> python main.py
❌ 导入Agent模块失败: No module named 'agent.model_manager'
2025-08-30 14:28:25,977 - CoreConfig - INFO - 🌐 从 E:\AI_System\.env 加载环境变量
2025-08-30 14:28:25,977 - CoreConfig - INFO - 📄 加载配置文件: E:\AI_System\config\config.json
2025-08-30 14:28:25,977 - CoreConfig - INFO - ✅ 配置系统初始化完成
Traceback (most recent call last):
  File "E:\AI_System\main.py", line 11, in <module>
    from agent.model_manager import ModelManager
ModuleNotFoundError: No module named 'agent.model_manager'
(venv) PS E:\AI_System>
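上面的回溯信息已经很明确:main.py 第 11 行执行 `from agent.model_manager import ModelManager`,而 agent 目录清单中并没有 model_manager.py(只有 memory_manager.py、models/ 等相近名字)。排查这类问题时,可以先机械地确认"导入名与文件是否一一对应",再决定是补文件还是改导入。下面是一个通用的小工具示意(`module_file_exists`、`try_import` 为本文虚构的辅助函数名):

```python
import importlib
from pathlib import Path

def module_file_exists(package_dir, module_name):
    """检查 package_dir 下是否存在 module_name.py,或同名子包(含 __init__.py)。"""
    base = Path(package_dir)
    return (base / f"{module_name}.py").is_file() or \
           (base / module_name / "__init__.py").is_file()

def try_import(qualname):
    """尝试导入模块,失败时返回 None,避免启动脚本直接崩溃。"""
    try:
        return importlib.import_module(qualname)
    except ModuleNotFoundError:
        return None

# 示例:标准库模块可以正常导入,不存在的模块返回 None
print(try_import("json") is not None)
print(try_import("agent.model_manager"))
```

若确认文件确实缺失,要么在 agent/ 下补一个 model_manager.py(至少提供 ModelManager 的占位实现),要么把 main.py 的导入改成实际存在的模块;仅把异常 try 掉只能推迟崩溃,解决不了依赖本身。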


请对下面的代码进行理解并排查错误,特别关注强化学习部分;根据你的推理逻辑给出正确且合理的方案步骤(文字描述),并给出优化后、逻辑正确的完整代码。

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import random
import argparse
from collections import deque
from torch.distributions import Normal, Categorical
from torch.nn.parallel import DistributedDataParallel as DDP
import matplotlib.pyplot as plt
from tqdm import tqdm
import time

from mmengine.registry import MODELS, DATASETS
from mmengine.config import Config
from rl_seg.datasets.build_dataloader import init_dist_pytorch, build_dataloader
from rl_seg.datasets import load_data_to_gpu

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# print(f"Using device: {device}")


# PPO 代理(Actor-Critic 网络)
class PPOAgent(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim=256):
        super(PPOAgent, self).__init__()
        self.state_dim = state_dim
        self.action_dim = action_dim

        # 共享特征提取层
        self.shared_layers = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            # nn.ReLU(),
            nn.LayerNorm(hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
            # nn.ReLU()
            nn.LayerNorm(hidden_dim),
            nn.GELU(),
        )

        # Actor 网络 (策略)
        self.actor = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            # nn.ReLU(),
            nn.GELU(),
            nn.Linear(hidden_dim, action_dim),
            nn.Tanh()  # 输出在[-1,1]范围内
        )

        # Critic 网络 (值函数)
        self.critic = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            # nn.ReLU(),
            nn.GELU(),
            nn.Linear(hidden_dim, 1)
        )

        # 动作标准差 (可学习参数)
        self.log_std = nn.Parameter(torch.zeros(1, action_dim))

        # 初始化权重
        self.apply(self._init_weights)

    def _init_weights(self, module):
        """初始化网络权重"""
        if isinstance(module, nn.Linear):
            nn.init.orthogonal_(module.weight, gain=0.01)
            nn.init.constant_(module.bias, 0.0)

    def forward(self, state):
        features = self.shared_layers(state)
        action_mean = self.actor(features)
        value = self.critic(features)
        return action_mean, value

    def act(self, state):
        """与环境交互时选择动作"""
        state = torch.FloatTensor(state).unsqueeze(0).to(device)  # 确保是 [1, state_dim]
        print(state.shape)
        with torch.no_grad():
            action_mean, value = self.forward(state)
        # 创建动作分布 (添加最小标准差确保稳定性)
        action_std = torch.clamp(self.log_std.exp(), min=0.01, max=1.0)
        dist = Normal(action_mean, action_std)
        # 采样动作
        action = dist.sample()
        log_prob = dist.log_prob(action).sum(-1)
        return action, log_prob, value

    def evaluate(self, state, action):
        """评估动作的概率和值"""
        # 添加维度检查
        if len(state.shape) == 1:
            state = state.unsqueeze(0)
        if len(action.shape) == 1:
            action = action.unsqueeze(0)
        action_mean, value = self.forward(state)
        # 创建动作分布
        action_std = torch.clamp(self.log_std.exp(), min=0.01, max=1.0)
        dist = Normal(action_mean, action_std)
        # 计算对数概率和熵
        log_prob = dist.log_prob(action).sum(-1)
        entropy = dist.entropy().sum(-1)
        return log_prob, entropy, value


# 强化学习优化器
class PPOTrainer:
    """PPO训练器,整合了策略优化和模型微调"""

    def __init__(self, seg_net, agent, cfg):
        """
        Args:
            seg_net: 预训练的分割网络
            agent: PPO智能体
            cfg: 配置对象,包含以下属性:
                - lr: 学习率
                - clip_param: PPO裁剪参数
                - ppo_epochs: PPO更新轮数
                - gamma: 折扣因子
                - tau: GAE参数
                - value_coef: 值函数损失权重
                - entropy_coef: 熵正则化权重
                - max_grad_norm: 梯度裁剪阈值
        """
        self.seg_net = seg_net
        self._base_seg_net = seg_net.module if isinstance(seg_net, DDP) else seg_net
        self._base_seg_net.device = self.seg_net.device
        self.agent = agent
        self.cfg = cfg
        self.writer = SummaryWriter(log_dir='runs/ppo_trainer')

        # 使用分离的优化器
        self.optimizer_seg = optim.AdamW(
            self.seg_net.parameters(), lr=cfg.lr, weight_decay=1e-4
        )
        self.optimizer_agent = optim.AdamW(
            self.agent.parameters(), lr=cfg.lr, weight_decay=1e-4
        )

        # 训练记录
        self.best_miou = 0.0
        self.metrics = {
            'loss': [], 'reward': [], 'miou': [],
            'class_ious': [], 'lr': []
        }

    def compute_state(self, features, pred, gt_seg):
        """
        计算强化学习状态向量
        Args:
            features: 从extract_features获取的字典包含:
                - spatial_features: [B, C1, H, W]
                - bev_features: [B, C2, H, W]
                - neck_features: [B, C3, H, W]
            pred: 网络预测的分割结果 [B, num_classes, H, W]
            gt_seg: 真实分割标签 [B, H, W]
        Returns:
            state: 状态向量 [state_dim]
        """
        # 主要使用neck_features作为代表特征 torch.Size([4, 64, 496, 432])
        feats = features["neck_features"]  # [B, C, H, W]
        print(feats.shape)
        B, C, H, W = feats.shape
        # 初始化状态列表
        states = []
        # 为批次中每个样本单独计算状态
        for i in range(B):
            # 特征统计
            feat_mean = feats[i].mean(dim=(1, 2))  # [C]
            feat_std = feats[i].std(dim=(1, 2))    # [C]
            # 预测类别分布
            pred_classes = pred[i].argmax(dim=0)   # [H, W]
            class_dist = torch.bincount(
                pred_classes.flatten(), minlength=21
            ).float() / (H * W)
            # 各类IoU (需实现单样本IoU计算)
            sample_miou, sample_cls_iou = self.compute_sample_iou(
                pred[i:i+1], {k: v[i:i+1] for k, v in gt_seg.items()}
            )
            sample_cls_iou = torch.FloatTensor(sample_cls_iou).to(feats.device)
            # 组合状态
            state = torch.cat([
                feat_mean, feat_std, class_dist, sample_cls_iou
            ])
            states.append(state)
        return torch.stack(states)

        # 特征统计 (均值、标准差)
        feat_mean = feats.mean(dim=(2, 3)).flatten()  # [B*C]
        feat_std = feats.std(dim=(2, 3)).flatten()    # [B*C]
        # 预测类别分布
        pred_classes = pred.argmax(dim=1)
        # class_dist = torch.bincount(pred_classes.flatten(), minlength=21).float() / pred_classes.numel()
        class_dist = torch.bincount(
            pred_classes.flatten(), minlength=21
        ).float() / (B * H * W)
        # 各类IoU
        batch_miou, cls_iou = get_miou(pred, gt_seg, classes=range(21))
        cls_iou = torch.FloatTensor(cls_iou).to(feats.device)
        # 组合状态
        state = torch.cat([feat_mean, feat_std, class_dist, cls_iou])
        print(feat_mean.shape, feat_std.shape, class_dist.shape, cls_iou.shape)
        print(state.shape)
        # 必须与PPOAgent的state_dim完全一致
        assert len(state) == self.agent.state_dim, \
            f"State dim mismatch: {len(state)} != {self.agent.state_dim}"
        return state

    def compute_reward(self, miou, prev_miou, class_ious, prev_class_ious):
        """
        计算复合奖励函数
        Args:
            miou: 当前mIoU
            prev_miou: 前一次mIoU
            class_ious: 当前各类IoU [num_classes]
            prev_class_ious: 前一次各类IoU [num_classes]
        Returns:
            reward: 综合奖励值
        """
        # 基础奖励: mIoU提升
        miou_reward = 10.0 * (miou - prev_miou)
        # 类别平衡奖励: 鼓励所有类别均衡提升
        class_reward = 0.0
        for cls, (iou, prev_iou) in enumerate(zip(class_ious, prev_class_ious)):
            if iou > prev_iou:
                # 对稀有类别给予更高奖励
                weight = 1.0 + (1.0 - prev_iou)  # 性能越差的类权重越高
                class_reward += weight * (iou - prev_iou)
        # 惩罚项: 防止某些类别性能严重下降
        penalty = 0.0
        # for cls in range(21):
        #     if class_ious[cls] < prev_class_ious[cls] * 0.8:
        #         penalty += 5.0 * (prev_class_ious[cls] - class_ious[cls])
        for cls, (iou, prev_iou) in enumerate(zip(class_ious, prev_class_ious)):
            if iou < prev_iou * 0.9:  # 性能下降超过10%
                penalty += 5.0 * (prev_iou - iou)
        total_reward = miou_reward + class_reward - penalty
        return np.clip(total_reward, -5.0, 10.0)  # 限制奖励范围

    def apply_action(self, action):
        """
        应用智能体动作调整模型参数
        Args:
            action: [6] 连续动作向量,范围[-1, 1]
        """
        # 动作0-1: 调整学习率
        lr_scale = 0.1 + 0.9 * (action[0] + 1) / 2  # 映射到[0.1, 1.0]
        for param_group in self.optimizer.param_groups:
            param_group['lr'] *= lr_scale
        # 动作2-3: 调整特征提取层权重 (范围[0.8, 1.2])
        backbone_scale = 0.8 + 0.2 * (action[2] + 1) / 2
        with torch.no_grad():
            for param in self.seg_net.module.backbone_2d.parameters():
                param.data *= backbone_scale  # (0.9 + 0.1 * action[2]) 调整范围[0.9,1.1]
        # 动作4-5: 调整分类头权重
        head_scale = 0.8 + 0.2 * (action[4] + 1) / 2
        with torch.no_grad():
            for param in self.seg_net.module.at_seg_head.parameters():
                param.data *= head_scale  # (0.9 + 0.1 * action[4]) 调整范围[0.9,1.1]

    def train_epoch(self, train_loader, epoch):
        """执行一个训练周期"""
        epoch_metrics = {
            'seg_loss': 0.0, 'reward': 0.0, 'miou': 0.0,
            'class_ious': np.zeros(21),
            'policy_loss': 0.0, 'value_loss': 0.0, 'entropy_loss': 0.0,
            'batch_count': 0
        }
        self.seg_net.train()
        self.agent.train()
        for data_dicts in tqdm(train_loader, desc=f"RL Epoch {epoch+1}/{self.cfg.num_epochs_rl}"):
            load_data_to_gpu(data_dicts)
            # 初始预测和特征
            with torch.no_grad():
                initial_pred = self.seg_net(data_dicts)
                initial_miou, initial_class_ious = get_miou(
                    initial_pred, data_dicts, classes=range(21)
                )
                features = self.seg_net.module.extract_features(data_dicts)  # DDP包装了
                # features = self._base_seg_net.extract_features(data_dicts)
            # 计算初始状态
            states = self.compute_state(features, initial_pred, data_dicts)
            # 为批次中每个样本选择动作
            actions, log_probs, values = [], [], []
            for state in states:
                action, log_prob, value = self.agent.act(state.cpu().numpy())
                actions.append(action)
                log_probs.append(log_prob)
                values.append(value)
            # 应用第一个样本的动作 (简化处理)
            self.apply_action(actions[0])
            # 调整后的预测
            adjusted_pred = self.seg_net(data_dicts)
            adjusted_miou, adjusted_class_ious = get_miou(
                adjusted_pred, data_dicts, classes=range(21)
            )
            # 计算奖励 (使用整个批次的平均改进)
            reward = self.compute_reward(
                adjusted_miou, initial_miou,
                adjusted_class_ious, initial_class_ious
            )
            # 计算优势 (修正为单步优势)
            advantages = [reward - v for v in values]
            # 存储经验
            experience = {
                'states': states.cpu().numpy(),
                'actions': actions,
                'rewards': [reward] * len(actions),
                'old_log_probs': log_probs,
                'old_values': values,
                'advantages': advantages,
            }
            # PPO优化
            policy_loss, value_loss, entropy_loss = self.ppo_update(experience)
            # 分割网络损失
            seg_loss = self.seg_net.module.at_seg_head.get_loss(
                adjusted_pred, data_dicts
            )
            # 分割网络更新 (使用单独优化器)
            self.optimizer_seg.zero_grad()
            seg_loss.backward()
            torch.nn.utils.clip_grad_norm_(
                self.seg_net.parameters(), self.cfg.max_grad_norm
            )
            self.optimizer_seg.step()
            # 记录指标
            epoch_metrics['seg_loss'] += seg_loss.item()
            epoch_metrics['reward'] += reward
            epoch_metrics['miou'] += adjusted_miou
            epoch_metrics['class_ious'] += adjusted_class_ious
            epoch_metrics['policy_loss'] += policy_loss
            epoch_metrics['value_loss'] += value_loss
            epoch_metrics['entropy_loss'] += entropy_loss
            epoch_metrics['batch_count'] += 1
        # 计算平均指标
        avg_metrics = {}
        for k in epoch_metrics:
            if k != 'batch_count':
                avg_metrics[k] = epoch_metrics[k] / epoch_metrics['batch_count']
        # 记录到TensorBoard
        self.writer.add_scalar('Loss/seg_loss', avg_metrics['seg_loss'], epoch)
        self.writer.add_scalar('Reward/total', avg_metrics['reward'], epoch)
        self.writer.add_scalar('mIoU/train', avg_metrics['miou'], epoch)
        self.writer.add_scalar('Loss/policy', avg_metrics['policy_loss'], epoch)
        self.writer.add_scalar('Loss/value', avg_metrics['value_loss'], epoch)
        self.writer.add_scalar('Loss/entropy', avg_metrics['entropy_loss'], epoch)
        return avg_metrics

    def ppo_update(self, experience):
        """
        PPO策略优化步骤
        Args:
            batch: 包含以下键的字典:
                - states: [batch_size, state_dim]
                - actions: [batch_size, action_dim]
                - old_log_probs: [batch_size]
                - old_values: [batch_size]
                - rewards: [batch_size]
                - advantages: [batch_size]
        Returns:
            policy_loss: 策略损失值
            value_loss: 值函数损失值
            entropy_loss: 熵损失值
        """
        states = torch.FloatTensor(experience['states']).unsqueeze(0).to(device)
        actions = torch.FloatTensor(experience['actions']).unsqueeze(0).to(device)
        old_log_probs = torch.FloatTensor([experience['old_log_probs']]).to(device)
        old_values = torch.FloatTensor([experience['old_values']]).to(device)
        rewards = torch.FloatTensor([experience['rewards']]).to(device)
        advantages = torch.FloatTensor(experience['advantages']).to(device)
        # GAE优势 优势估计使用GAE(广义优势估计)
        policy_losses, value_losses, entropy_losses = [], [], []
        for _ in range(self.cfg.ppo_epochs):
            # 评估当前策略
            log_probs, entropy, values = self.agent.evaluate(states, actions)
            # 比率
            ratios = torch.exp(log_probs - old_log_probs)
            # 裁剪目标
            surr1 = ratios * advantages
            surr2 = torch.clamp(ratios, 1.0 - self.cfg.clip_param,
                                1.0 + self.cfg.clip_param) * advantages
            # 策略损失
            policy_loss = -torch.min(surr1, surr2).mean()
            # 值函数损失
            value_loss = 0.5 * (values - rewards).pow(2).mean()
            # 熵损失
            entropy_loss = -entropy.mean()
            # 总损失
            loss = policy_loss + self.cfg.value_coef * value_loss + self.cfg.entropy_coef * entropy_loss
            # 智能体参数更新
            self.optimizer_agent.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(
                self.agent.parameters(), self.cfg.max_grad_norm
            )
            self.optimizer_agent.step()
            policy_losses.append(policy_loss.item())
            value_losses.append(value_loss.item())
            entropy_losses.append(entropy_loss.item())
        return (
            np.mean(policy_losses),
            np.mean(value_losses),
            np.mean(entropy_losses)
        )

    def close(self):
        """关闭资源"""
        self.writer.close()


# 监督学习预训练
def supervised_pretrain(cfg):
    seg_net = MODELS.build(cfg.model).to('cuda')
    seg_head = MODELS.build(cfg.model.at_seg_head).to('cuda')
    if cfg.pretrained_path:
        ckpt = torch.load(cfg.pretrained_path)
        print(ckpt.keys())
        seg_net.load_state_dict(ckpt['state_dict'])
        print(f'Load pretrained ckpt: {cfg.pretrained_path}')
        seg_net = DDP(seg_net, device_ids=[cfg.local_rank])
        print(seg_net)
        return seg_net
    optimizer = optim.Adam(seg_net.parameters(), lr=cfg.lr)
    writer = SummaryWriter(log_dir='runs/pretrain')
    train_losses = []
    train_mious = []
    train_class_ious = []  # 存储每个epoch的各类IoU
    for epoch in range(cfg.num_epochs):
        cfg.sampler.set_epoch(epoch)
        epoch_loss = 0.0
        epoch_miou = 0.0
        epoch_class_ious = np.zeros(21)  # 初始化各类IoU累加器
        batch_count = 0
        seg_net.train()
        for data_dicts in tqdm(cfg.train_loader, desc=f"Pretrain Epoch {epoch+1}/{cfg.num_epochs}"):
            optimizer.zero_grad()
            pred = seg_net(data_dicts)
            device = pred.device
            seg_head = seg_head.to(device)
            loss = seg_head.get_loss(pred, data_dicts["gt_seg"].to(device))
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
            # import pdb;pdb.set_trace()
            # 计算mIoU
            class_ious = []
            batch_miou, cls_iou = get_miou(pred, data_dicts, classes=[i for i in range(21)])
            # for cls in range(5):
            #     pred_mask = (pred.argmax(dim=1) == cls)
            #     true_mask = (labels == cls)
            #     intersection = (pred_mask & true_mask).sum().float()
            #     union = (pred_mask | true_mask).sum().float()
            #     iou = intersection / (union + 1e-8)
            #     class_ious.append(iou.item())
            epoch_miou += batch_miou
            epoch_class_ious += np.array(cls_iou)  # 累加各类IoU
            batch_count += 1
        # avg_loss = epoch_loss / len(cfg.dataloader)
        # avg_miou = epoch_miou / len(cfg.dataloader)
        # 计算epoch平均指标
        avg_loss = epoch_loss / batch_count if batch_count > 0 else 0.0
        avg_miou = epoch_miou / batch_count if batch_count > 0 else 0.0
        avg_class_ious = epoch_class_ious / batch_count if batch_count > 0 else np.zeros(21)
        train_losses.append(avg_loss)
        train_mious.append(avg_miou)
        train_class_ious.append(avg_class_ious)  # 存储各类IoU
        # 记录到TensorBoard
        writer.add_scalar('Loss/train', avg_loss, epoch)
        writer.add_scalar('mIoU/train', avg_miou, epoch)
        for cls, iou in enumerate(avg_class_ious):
            writer.add_scalar(f'IoU/class_{cls}', iou,
epoch) print(f"Epoch {epoch+1}/{cfg.num_epochs} - Loss: {avg_loss:.3f}, mIoU: {avg_miou*100:.3f}") print("Class IoUs:") for cls, iou in enumerate(avg_class_ious): print(f" {cfg.class_names[cls]}: {iou*100:.3f}") # # 保存预训练模型 torch.save(seg_net.state_dict(), "polarnet_pretrained.pth") writer.close() # 绘制训练曲线 plt.figure(figsize=(12, 5)) plt.subplot(1, 2, 1) plt.plot(train_losses) plt.title("Supervised Training Loss") plt.xlabel("Epoch") plt.ylabel("Loss") plt.subplot(1, 2, 2) plt.plot(train_mious) plt.title("Supervised Training mIoU") plt.xlabel("Epoch") plt.ylabel("mIoU") plt.tight_layout() plt.savefig("supervised_training.png") return seg_net # 强化学习微调 def rl_finetune(cfg): # 状态维度 = 特征统计(1024*2) + 类别分布(5) + 各类IoU(5) state_dim = 256*2 + 21 + 21 action_dim = 6 # 6个连续动作;动作0调整学习率,动作1调整特征提取层权重,动作2调整分类头权重 # 初始化PPO智能体 agent = PPOAgent(state_dim, action_dim).to(device) if cfg.agent_path: agent.load_state_dict(torch.load(cfg.agent_path)) trainer = PPOTrainer(cfg.seg_net, agent, cfg) train_losses = [] train_rewards = [] train_mious = [] # 训练循环 for epoch in range(cfg.num_epochs_rl): avg_metrics = trainer.train_epoch(cfg.train_loader, epoch) # 记录指标 train_losses.append(avg_metrics['seg_loss']) train_rewards.append(avg_metrics['reward']) train_mious.append(avg_metrics['miou']) # trainer.metrics['loss'].append(avg_metrics['seg_loss']) # trainer.metrics['reward'].append(avg_metrics['reward']) # trainer.metrics['miou'].append(avg_metrics['miou']) # trainer.metrics['class_ious'].append(avg_metrics['class_ious']) # trainer.metrics['lr'].append(trainer.optimizer.param_groups[0]['lr']) # 保存最佳模型 if avg_metrics['miou'] > trainer.best_miou: trainer.best_miou = avg_metrics['miou'] torch.save(cfg.seg_net.state_dict(), "polarnet_rl_best.pth") torch.save(agent.state_dict(), "ppo_agent_best.pth") np.savetxt("best_class_ious.txt", avg_metrics['class_ious']) # 打印日志 print(f"\nRL Epoch {epoch+1}/{cfg.num_epochs_rl} Results:") print(f" Seg Loss: {avg_metrics['seg_loss']:.4f}") print(f" Reward: 
{avg_metrics['reward']:.4f}") print(f" mIoU: {avg_metrics['miou']*100:.3f} (Best: {trainer.best_miou*100:.3f})") print(f" Policy Loss: {avg_metrics['policy_loss']:.4f}") print(f" Value Loss: {avg_metrics['value_loss']:.4f}") print(f" Entropy Loss: {avg_metrics['entropy_loss']:.4f}") print(f" Learning Rate: {trainer.optimizer.param_groups[0]['lr']:.2e}") print(" Class IoUs:") for cls, iou in enumerate(avg_metrics['class_ious']): print(f" {cfg.class_names[cls]}: {iou:.4f}") # 保存最终模型和训练记录 torch.save(cfg.seg_net.state_dict(), "polarnet_rl_final.pth") torch.save(agent.state_dict(), "ppo_agent_final.pth") np.savetxt("training_metrics.txt", **trainer.metrics) print(f"\nTraining completed. Best mIoU: {trainer.best_miou:.4f}") trainer.close() # 绘制训练曲线 plt.figure(figsize=(15, 10)) plt.subplot(2, 2, 1) plt.plot(train_losses) plt.title("RL Training Loss") plt.xlabel("Epoch") plt.ylabel("Loss") plt.subplot(2, 2, 2) plt.plot(train_rewards) plt.title("Average Reward") plt.xlabel("Epoch") plt.ylabel("Reward") plt.subplot(2, 2, 3) plt.plot(train_mious) plt.title("RL Training mIoU") plt.xlabel("Epoch") plt.ylabel("mIoU") plt.subplot(2, 2, 4) plt.plot(train_losses, label='Loss') plt.plot(train_mious, label='mIoU') plt.title("Loss vs mIoU") plt.xlabel("Epoch") plt.legend() plt.tight_layout() plt.savefig("rl_training.png") return cfg.seg_net, agent # 模型评估 def evaluate_model(cfg): cfg.seg_net.eval() avg_miou = 0.0 total_miou = 0.0 class_ious = np.zeros(21) batch_count = 0 # 记录实际处理的batch数量 return avg_miou, class_ious with torch.no_grad(): for data_dicts in tqdm(cfg.val_loader, desc="Evaluating"): pred = cfg.seg_net(data_dicts) batch_miou, cls_iou = get_miou(pred, data_dicts, classes=[i for i in range(21)]) total_miou += batch_miou class_ious += cls_iou batch_count += 1 # avg_miou = total_miou / len(cfg.dataloader) # class_ious /= len(cfg.dataloader) # 计算平均值 avg_miou = total_miou / batch_count if batch_count > 0 else 0.0 class_ious = class_ious / batch_count if batch_count > 0 else 
np.zeros(21) print("\nEvaluation Results:") print(f"Overall mIoU: {avg_miou*100:.3f}") for cls, iou in enumerate(class_ious): print(f" {cfg.class_names[cls]}: {iou*100:.3f}") return avg_miou, class_ious def fast_hist(pred, label, n): k = (label >= 0) & (label < n) bin_count = np.bincount(n * label[k].astype(int) + pred[k], minlength=n**2) return bin_count[: n**2].reshape(n, n) def fast_hist_crop(output, target, unique_label): hist = fast_hist( output.flatten(), target.flatten(), np.max(unique_label) + 1 ) hist = hist[unique_label, :] hist = hist[:, unique_label] return hist def compute_miou_test(y_true, y_pred): from sklearn.metrics import confusion_matrix current = confusion_matrix(y_true, y_pred) intersection = np.diag(current) gt = current.sum(axis=1) pred = current.sum(axis=0) union = gt + pred - intersection iou_list = intersection / union.astype(np.float32) + 1e-8 return np.mean(iou_list), iou_list def get_miou(pred, target, classes=[i for i in range(21)]): # import pdb;pdb.set_trace() gt_val_grid_ind = target["grid_ind"] gt_val_pt_labs = target["labels_ori"] pred_labels = torch.argmax(pred, dim=1).cpu().detach().numpy() metric_data = [] miou_list = [] for bs, i_val_grid in enumerate(gt_val_grid_ind): val_grid_idx = pred_labels[ bs, i_val_grid[:, 1], i_val_grid[:, 0], i_val_grid[:, 2] ] # (N,) gt_val_pt_lab_idx = gt_val_pt_labs[bs] #(N,1) hist = fast_hist_crop( val_grid_idx, gt_val_pt_lab_idx, classes ) # (21, 21) hist_tensor = torch.from_numpy(hist).to(pred.device) metric_data.append(hist_tensor) # miou, iou_dict = compute_miou_test(gt_val_pt_lab_idx, val_grid_idx) # miou_list.append(miou) hist = sum(metric_data).cpu().numpy() iou_overall = np.diag(hist) / ((hist.sum(1) + hist.sum(0) - np.diag(hist)) + 1e-6) miou = np.nanmean(iou_overall) # print(metric_data) # print(iou_overall) # print(miou) # print(miou_list, np.nanmean(miou_list)) # import pdb;pdb.set_trace() return miou, iou_overall # 主函数 def main(args): # 第一阶段:监督学习预训练 print("="*50) print("Starting 
Supervised Pretraining...") print("="*50) cfg_file = "rl_seg/configs/rl_seg_leap.py" cfg = Config.fromfile(cfg_file) print('aaaaaaaa ',cfg.keys()) total_gpus, LOCAL_RANK = init_dist_pytorch( tcp_port=18888, local_rank=0, backend='nccl' ) cfg.local_rank = LOCAL_RANK dist_train = True train_dataset, train_dataloader, sampler = build_dataloader(dataset_cfg=cfg, data_path=cfg.train_data_path, workers=cfg.num_workers, samples_per_gpu=cfg.batch_size, num_gpus=cfg.num_gpus, dist=dist_train, pipeline=cfg.train_pipeline, training=True) cfg.train_loader = train_dataloader cfg.sampler = sampler seg_net = supervised_pretrain(cfg) val_dataset, val_dataloader, sampler = build_dataloader(dataset_cfg=cfg, data_path=cfg.val_data_path, workers=cfg.num_workers, samples_per_gpu=cfg.batch_size, num_gpus=cfg.num_gpus, dist=True, pipeline=cfg.val_pipeline, training=False) cfg.val_loader = val_dataloader cfg.sampler = sampler cfg.seg_net = seg_net # 评估预训练模型 print("\nEvaluating Pretrained Model...") pretrain_miou, pretrain_class_ious = evaluate_model(cfg) # 第二阶段:强化学习微调 print("\n" + "="*50) print("Starting RL Finetuning...") print("="*50) seg_net, ppo_agent = rl_finetune(cfg) # 评估强化学习优化后的模型 print("\nEvaluating RL Optimized Model...") rl_miou, rl_class_ious = evaluate_model(cfg) # 结果对比 print("\nPerformance Comparison:") print(f"Pretrained mIoU: {pretrain_miou*100:.3f}") print(f"RL Optimized mIoU: {rl_miou*100:.3f}") print(f"Improvement: {(rl_miou - pretrain_miou)*100:.3f} ({((rl_miou - pretrain_miou)/pretrain_miou)*100:.2f}%)") # 绘制各类别IoU对比 plt.figure(figsize=(10, 6)) x = np.arange(5) width = 0.35 plt.bar(x - width/2, pretrain_class_ious, width, label='Pretrained') plt.bar(x + width/2, rl_class_ious, width, label='RL Optimized') plt.xticks(x, cfg.class_names) plt.ylabel("IoU") plt.title("Per-Class IoU Comparison") plt.legend() plt.tight_layout() plt.savefig("class_iou_comparison.png") print("\nTraining completed successfully!") if __name__ == "__main__": def args_config(): parser = 
argparse.ArgumentParser(description='arg parser') parser.add_argument('--cfg_file', type=str, default="rl_seg/configs/rl_seg_leap.py", help='specify the config for training') parser.add_argument('--batch_size', type=int, default=16, required=False, help='batch size for training') parser.add_argument('--epochs', type=int, default=20, required=False, help='number of epochs to train for') parser.add_argument('--workers', type=int, default=10, help='number of workers for dataloader') parser.add_argument('--extra_tag', type=str, default='default', help='extra tag for this experiment') parser.add_argument('--ckpt', type=str, default=None, help='checkpoint to start from') parser.add_argument('--pretrained_model', type=str, default=None, help='pretrained_model') return parser.parse_args() args = args_config() main(args)
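As a sanity check on the confusion-matrix IoU computation used by fast_hist and get_miou above, here is a standalone NumPy-only sketch (the 3-class labels are made up for illustration):

```python
import numpy as np

def fast_hist(pred, label, n):
    # Same trick as in the script: encode each (label, pred) pair as the
    # single index n * label + pred, then count all pairs with one bincount.
    k = (label >= 0) & (label < n)
    bin_count = np.bincount(n * label[k].astype(int) + pred[k], minlength=n**2)
    return bin_count[: n**2].reshape(n, n)

label = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
hist = fast_hist(pred, label, 3)

# Per-class IoU = diagonal / (row sum + column sum - diagonal)
iou = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist) + 1e-6)
print(iou)  # class 0: 1/3, class 1: 2/3, class 2: 1/2
print(np.nanmean(iou))
```

Row `i`, column `j` of `hist` counts points with ground-truth class `i` predicted as class `j`, so the diagonal is the per-class intersection and row/column sums give the union terms, matching the `iou_overall` line in get_miou.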
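The clipped surrogate objective inside ppo_update can also be checked in isolation. This sketch uses plain NumPy scalars instead of torch tensors, with a hypothetical clip parameter of 0.2:

```python
import numpy as np

def clipped_surrogate(ratio, advantage, clip=0.2):
    # PPO takes the pessimistic minimum of the unclipped and clipped objectives,
    # so the policy cannot profit from moving the ratio far outside [1-clip, 1+clip].
    surr1 = ratio * advantage
    surr2 = np.clip(ratio, 1 - clip, 1 + clip) * advantage
    return np.minimum(surr1, surr2)

# Positive advantage: gains from pushing the ratio above 1+clip are cut off
print(clipped_surrogate(1.5, 2.0))   # 1.2 * 2.0 = 2.4 rather than 3.0
# Negative advantage: a ratio below 1-clip gives no extra benefit either
print(clipped_surrogate(0.5, -1.0))  # min(-0.5, -0.8) = -0.8
```

This mirrors the `surr1`/`surr2`/`torch.min` lines in ppo_update; the script then negates the mean of this quantity to obtain `policy_loss`.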
